Embedded AI - Intelligence at the Deep Edge

Why Humans and Robots Must Dream

David Such Season 5 Episode 27

Put a blindfold on a sighted adult and the visual cortex starts being colonised by touch and hearing within forty-five minutes. Not weeks. Not days. Forty-five minutes. This is not a quirk of extreme cases. It is how the cortex works all the time. Every region of the brain is in continuous low-grade negotiation with its neighbours over territory, and the currency of that negotiation is activity. Stop using a subsystem and the neighbours move in, fast. This is the empirical foundation of a hypothesis from neuroscientist David Eagleman called the defensive activation theory: that REM sleep exists specifically to keep the visual cortex active during the eight hours each night when external input is unavailable, defending its territory against takeover by senses that never go offline.

The theory itself is plausible but not yet directly proven. What is proven, and what matters more for engineers, is the underlying principle. A complex system with reconfigurable resources will silently lose capability in any subsystem that is not regularly exercised, even when nothing is actively trying to take that capability away. This is not catastrophic forgetting in the usual machine learning sense, where new training overwrites old parameters. This is something subtler and arguably more dangerous: passive territorial loss in any system that supports continuous adaptation. It shows up wherever capabilities are not being exercised in long-running adaptive AI: rarely-routed experts in mixture-of-experts models, underused sensor pipelines in multi-modal robotics, capabilities that drift out of online reinforcement learning agents over months of deployment. Most current architectures treat their structure as fixed by design. Biology treats its structure as continuously contested.
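The territorial-loss idea above can be made concrete with telemetry. A minimal, hypothetical sketch (the class and names are illustrative, not from any existing framework): track when each expert in a mixture-of-experts router was last routed to, and flag the ones that have gone unexercised past a threshold.

```python
class ExpertUsageTracker:
    """Hypothetical usage telemetry for a mixture-of-experts router.

    Records how recently each expert was routed to, and flags experts
    that have gone unused for longer than `stale_after` routing steps --
    the silent capability loss described above made observable.
    """

    def __init__(self, num_experts, stale_after=1000):
        self.stale_after = stale_after
        self.step = 0
        self.last_used = {e: 0 for e in range(num_experts)}

    def record(self, routed_experts):
        """Call once per routing decision with the chosen expert indices."""
        self.step += 1
        for e in routed_experts:
            self.last_used[e] = self.step

    def stale_experts(self):
        """Experts whose territory is at risk: not exercised recently."""
        return [e for e, last in self.last_used.items()
                if self.step - last >= self.stale_after]
```

In deployment the stale list would feed whatever maintenance policy you choose; the point here is only that "not being exercised" is cheap to measure.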

This episode looks at what defensive activation reveals about a missing primitive in modern AI architecture. Current systems have two fundamental modes, training and inference. Brains have at least three, and the third one, the maintenance mode that operates during REM sleep, has no clean equivalent in the systems we build. We examine what this mode is doing structurally, why generative replay in continual learning is mechanistically closer to dreaming than the field usually acknowledges, and what a telemetry-driven maintenance subsystem might look like for embedded and edge AI. The closing argument is straightforward: if biology has been running this experiment for a few hundred million years and converged on internally-driven activation as the way to maintain a plastic computational substrate, the absence of an equivalent mechanism in our architectures is not a neutral design choice. It is a gap.
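What might that third mode look like in code? The sketch below is a toy, assumed design, not an established API: during idle periods, a maintenance loop replays internally generated inputs at any capability that real traffic has not exercised recently, the software analogue of REM driving the visual cortex when no light is coming in.

```python
class MaintenanceLoop:
    """Hypothetical third mode alongside training and inference.

    `generators` maps a capability name to a callable that synthesizes
    an input for it (the "dream" source). Real traffic refreshes a
    capability's timestamp; idle-time maintenance replays synthetic
    inputs at anything that has gone stale.
    """

    def __init__(self, generators, stale_after=100):
        self.generators = generators
        self.stale_after = stale_after
        self.step = 0
        self.last_exercised = {name: 0 for name in generators}

    def on_real_input(self, capability):
        """Called on the normal inference path."""
        self.step += 1
        self.last_exercised[capability] = self.step

    def run_maintenance(self, exercise):
        """Call during idle time. `exercise(name, x)` runs the model's
        `name` pathway on synthetic input `x`, keeping it active."""
        replayed = []
        for name, last in self.last_exercised.items():
            if self.step - last >= self.stale_after:
                x = self.generators[name]()            # dream up an input
                exercise(name, x)                      # drive the pathway
                self.last_exercised[name] = self.step  # territory defended
                replayed.append(name)
        return replayed
```

On an embedded target, `run_maintenance` would be scheduled into genuinely idle windows, charging, overnight, between duty cycles, which is exactly the slot REM occupies in the biological version of the experiment.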

If you are interested in learning more then please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is lots more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or just provide feedback. We love feedback!