Heliox: Where Evidence Meets Empathy

🧠 The AI Limitation We're Not Talking About: Why Current Machine Learning Could Be Hitting a Wall

• by SC Zoomers • Season 3 • Episode 57


Please see the Substack resources for this episode.

While today's AI dazzles us with its ability to generate text and images, a fundamental limitation lurks beneath the surface: the inability to truly adapt to novel situations without massive pre-training.

We explore an audacious claim by a company called IntuiCell that they've revolutionized machine learning by abandoning traditional backpropagation in favor of decentralized sensory learning. Their robot "Luna" seemingly demonstrates autonomous learning without pre-programming, adapting to completely new environments in real time, something current AI systems struggle with.

More fascinating is how this practical demonstration aligns with a groundbreaking academic paper suggesting that intelligence itself might emerge from a simple principle: minimizing unexpected sensory input. This perspective draws from evolutionary biology, proposing that the same mechanism that helped single-celled organisms maintain stability has scaled up through billions of years to produce complex intelligence.

If these converging ideas hold true, we may be witnessing not just an incremental improvement in AI but a fundamental paradigm shift in how we understand intelligence itself—both artificial and biological. The implications stretch beyond technology into philosophy, challenging our very understanding of consciousness, adaptation, and learning.

A Foundational Theory for Decentralized Sensory Learning

Revolution in Robotics - Robot Dog That Can Learn on Its Own Like a Human | Introduci

This is Heliox: Where Evidence Meets Empathy

Independent, moderated, timely, deep, gentle, clinical, global, and community conversations about things that matter. Breathe easy: we go deep and lightly surface the big ideas.

Thanks for listening today!

Four recurring narratives underlie every episode: boundary dissolution, adaptive complexity, embodied knowledge, and quantum-like uncertainty. These aren’t just philosophical musings but frameworks for understanding our modern world. 

We hope you continue exploring our other episodes, responding to the content, and checking out our related articles on the Heliox Podcast on Substack.


About SCZoomers:

https://www.facebook.com/groups/1632045180447285
https://x.com/SCZoomers
https://mstdn.ca/@SCZoomers
https://bsky.app/profile/safety.bsky.app


Spoken word, short and sweet, with rhythm and a catchy beat.
http://tinyurl.com/stonefolksongs

Curated, independent, moderated, timely, deep, gentle, evidence-based, clinical and community information regarding COVID-19. Running since 2017, and focused on COVID-19 since February 2020, with multiple stories per day, it offers a large searchable base of stories to date: more than 4,000 on COVID-19 alone, and hundreds on climate change.

Zoomers of the Sunshine Coast is a news organization with the advantages of deeply rooted connections within our local community, combined with a provincial, national, and global following and exposure. In written, audio, and video form, we provide evidence-based, referenced stories interspersed with curated commentary, satire, and humour. We cite where our stories come from and who wrote, published, and even inspired them. Operating on a social media platform gives us a much higher degree of interaction with our readers than conventional media, along with a significant positive amplification effect. We expect the same courtesy from other media referencing our stories.


Okay, so you know all the buzz around AI right now. Like, it can write amazing stuff, create crazy images, even beat us at super complex games. But lately, I've been thinking, have we kind of hit a wall? Yeah, I see what you mean. It's impressive for sure, but... I was reading something the other day, and it really got me thinking. It basically said current AI is shackled to this thing called backpropagation. Ah, yeah. That's the core of how a lot of these systems learn. Right. And they're constantly needing to be fed more and more pre-trained data. It just feels like something's missing. Like they're not really learning on the fly the way we do, you know, adapting to totally new situations. Yeah, that's a good point. It's like they're amazing at recognizing patterns they've seen before. But what about when they encounter something truly novel, a real curveball? Exactly. And then, boom, I came across this announcement. IntuiCell, this company, they put out a video on their YouTube channel, and they're claiming they've fixed all this. Whoa, pretty bold statement. Right. Like they're really throwing down the gauntlet. And what's interesting is how they frame the whole problem. They basically say the way we're doing machine learning now is like just shoveling tons of data into these statistical models, but there's no real reasoning there. Like a brute-force approach. Just throw enough data at it and something will stick. And their argument is that this lack of real understanding, this lack of reasoning, is why these models are going to fail when they hit the real world. Because reality is messy and unpredictable. Definitely. You're seeing more and more people in the AI community voicing similar concerns. We've had incredible successes in specific areas, but it's becoming clear that these data-driven approaches are kind of fragile. They lack a deeper understanding of what they're doing. And that's where this paper you sent comes in, right? It's on the arXiv. And the title is, let me get it right, 2503.15130, A Foundational Theory for Decentralized Sensory Learning. Yeah, that one. This paper isn't just tweaking things around the edges. They're proposing a whole new foundation for how learning actually works. And they're looking at it through the lens of evolution and how biological systems have solved these problems for billions of years. It's like they're saying, forget everything you thought you knew about learning. Let's go back to basics and look at how nature does it. And they're focusing on this idea of decentralized sensory learning. I know, right? It's mind-blowing. It's like we have this company, IntuiCell, saying they've cracked the code with this practical demonstration, and then we have this paper from the academic world presenting this radical new theory. And somehow they both seem to be pointing in a similar direction. Both are questioning the fundamental assumptions of how we're building AI today. It's like this perfect storm of theory and practice, right? So this is why we're doing this deep dive. We need to unpack this potential paradigm shift. Let's explore IntuiCell's claims through their demonstration, and then we'll see how this wild theory from the research paper might actually explain why their approach is working. Sounds like a plan. So let's start with IntuiCell. They say there's this fundamental problem with current AI. Like, what exactly are they saying is holding it back? They zero in on this thing called backpropagation.
That's essentially the algorithm that allows AI to learn by adjusting its internal connections based on errors. Oh yeah, I've heard of that. Like it tries something, sees how wrong it is, and then tweaks itself. Exactly. And IntuiCell's argument is that this method is fundamentally limited. It needs this global error signal to tell the whole system what to change. So it's hard for individual parts of the system to learn on their own just from their own local interactions with the environment. I see. It's like they need a manager to tell everyone what they did wrong instead of figuring it out themselves. It's a good analogy. And the other big issue they highlight is this dependence on pre-trained information. These models need to be fed massive data sets before they can do anything. So they can't really adapt in real time. It's like they've got all this pre-programmed knowledge, but they hit a wall if they encounter something totally new. Right. Like a chess AI that's amazing at chess but has no idea what to do if you suddenly change the rules of the game. It really shows how different this is from how we learn. We're constantly adapting and learning new things without needing to be pre-programmed. And that's what IntuiCell seems to be going for. So what makes this research contrarian? I mean, how is it different from the stuff that has shaped AI up until now? It seems like this contrarian research is exploring different ways of thinking about how neurons work and how learning happens. Maybe focusing less on these hierarchical, top-down models that have been popular in AI. Like the idea that there's this central control center in the brain that's calling all the shots. Exactly. Maybe they're looking more at how individual neurons solve problems locally, without needing a global blueprint. So more of a bottom-up, decentralized approach. Exactly. And this is where it starts to get really interesting, because this idea of decentralized learning is at the heart of the arXiv paper too. They even specifically critique extrinsic error measurements and global learning algorithms. So they're both kind of saying the same thing, just coming at it from different angles. Right, exactly. The paper argues that biological systems, even down to individual cells, don't learn by some central authority telling every part what to do. It's more like each part is figuring things out based on its own local interactions. Like instead of needing a manager, each team member can just see what's working and what's not, and adjust accordingly. A perfect analogy. And to replace this idea of global error correction, they introduce this concept of reinterpreting sensory signals as part of a negative feedback control system. Okay, that sounds kind of complicated. Can you break that down for us? Sure. Imagine your car's cruise control. You set it to, say, 60 miles per hour. The speedometer is like your sensory input. If the car slows down, the system automatically increases the engine power to get back to 60. If it goes too fast, it reduces the power. So it's constantly adjusting to stay at the target speed. Right. That's negative feedback. The system is acting to minimize the difference between the desired state and the actual state based on sensory information. The arXiv paper suggests that our senses might be doing something similar. So instead of this global error signal telling everything what to do, each part of the system is just trying to minimize its own local sensory errors.
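To make the cruise-control analogy concrete, here is a minimal Python sketch of a negative feedback loop. The dynamics are idealized for illustration (the engine response is folded into a single update step), so nothing below comes from IntuiCell's system or the arXiv paper.

```python
# A minimal sketch of the cruise-control analogy: a negative feedback
# loop acting to shrink the gap between the desired speed and the
# sensed speed. Gains and dynamics are invented for this illustration.

def simulate_cruise_control(target_mph=60.0, steps=30, gain=0.2):
    speed = 40.0                        # sensed state: current speed (mph)
    for t in range(steps):
        error = target_mph - speed      # sensory signal: deviation from the set point
        speed += gain * error           # act against the deviation (idealized plant)
        if t % 5 == 0:
            print(f"t={t:2d}  speed={speed:5.2f} mph  error={error:+6.2f}")

simulate_cruise_control()
```

The point is structural: the controller never receives a global report about the whole system. It only acts to shrink the one deviation it can sense, which is the reinterpretation of sensory signals the paper is proposing.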
And that leads to what they call local learning algorithms. Each part is adjusting based on its own sensory input, without needing instructions from some central command. So it's like each part of the system is learning independently, but they're all contributing to this overall goal. Precisely. And here's where it gets really fascinating. The paper suggests that minimizing this sensory activity can be the only reward signal needed for learning. It's like your body is constantly adjusting to keep you balanced without you consciously thinking about it. So it's not about seeking pleasure or avoiding pain. It's about finding this state of minimal sensory noise or surprise. Exactly. And that brings us to IntuiCell's robot, Luna. They built this actual physical robot to demonstrate these principles in action. It's not just simulations or theory. They've got this robot out there in the real world, bumping into things and figuring things out. And they're very clear that Luna is learning autonomously and continuously. Yeah, this isn't like pre-trained AI that's essentially fixed after its training phase. So Luna is constantly learning and adapting as she goes. Exactly. And they emphasize that she doesn't rely on massive data sets or super finely tuned control algorithms. They even use a generic network architecture, which suggests that their approach isn't limited to just robots. It could potentially work on all sorts of intelligent systems. They mention drones, other mobile robots, even digital entities. It's like they found this underlying principle of learning that could be applied really broadly. Okay, so in their initial demonstration, Luna was just this basic off-the-shelf lab robot, and they basically just let her loose to explore and learn her own mechanics. They let her figure out how to balance and stand all on her own, without programming in any specific instructions. And they even mentioned that the leash was just for safety. They wanted her to fall and learn from her mistakes. Okay. It's a very different approach. Like instead of trying to protect her from failure, they're embracing it as part of the learning process. And they specifically designed her senses to propagate the problem. Instead of pre-defining solutions, they built the system so that she would have to figure out how to achieve stability on her own. It's like they're setting up the environment to encourage learning, to force the system to discover solutions through interaction. Right. And they contrasted this with traditional machine learning, where you would have to program in all the robot's movements and then retrain the model every time something changed. So a very manual and time-consuming process. Their approach is all about adaptability. Yeah. And this links back to the arXiv paper. Luna is learning through interaction, and the reward is finding that state of minimal sensory input. When she falls, it creates all this unexpected sensory information, so learning to stand is essentially about minimizing that noise. It's not about someone telling her good job or giving her a point. It's about finding that internal state of balance. Then they took Luna to a whole new challenge, an icy surface. And they presented this as an example of how their system can generalize to totally new situations. It's a key test for any intelligent system. Can you adapt to something you've never encountered before?
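Before the ice test, here is a toy sketch of the "minimal sensory activity as the only reward" idea from the exchange above: a handful of nodes, each keeping only the local changes that make its own sensed input quieter. The sensory function and the trial-and-error update are our assumptions for illustration, not Luna's actual architecture.

```python
import random

# Decentralized learning where a drop in a node's own sensory activity
# is the only "reward". No global error broadcast: each node perturbs
# its parameter locally and keeps the change only if its input quiets.

def sensory_activity(param, disturbance):
    # Hypothetical local sense: how far this node is from "quiet".
    return abs(param - disturbance)

def local_update(param, disturbance, step=0.1):
    trial = param + random.uniform(-step, step)  # local trial-and-error move
    if sensory_activity(trial, disturbance) < sensory_activity(param, disturbance):
        return trial   # keep changes that reduce local sensory activity
    return param       # otherwise stay put; no manager hands down an error

nodes = [0.0, 0.0, 0.0]          # each node's adjustable parameter
disturbances = [1.5, -0.7, 0.3]  # each node's private slice of the world
for _ in range(300):
    nodes = [local_update(p, d) for p, d in zip(nodes, disturbances)]

print([round(p, 2) for p in nodes])  # each node drifts toward its own quiet state
```

Note that no node ever sees another node's error, yet all of them settle. That is the decentralized picture both sources are gesturing at.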
They were saying that with traditional ML, you'd have to take the robot offline, retrain it with data from the icy surface, and then redeploy it. It's a rigid process that doesn't allow for real-time adaptation. And IntuiCell is claiming they've moved beyond that. They even said this isn't just next-generation AI. It's the first generation of genuine intelligence. That's a bold statement. But their demonstration on the ice does suggest that Luna's doing something different. She's adapting in real time to a novel situation. And they think this approach, making intelligence the starting point, is the key to building truly autonomous systems, systems that can function in the real world with all its unpredictability. It's a fascinating concept. Systems that can learn and evolve on their own without needing constant human intervention. And they even laid out Luna's future development. They're calling this her newborn era, where she's just figuring out how to control her body. Right, like a baby learning to crawl. Next, they expect her to start taking her first steps, the toddler phase. And then comes the child era, where they think they'll be able to teach her new skills just by giving her instructions, like you would with a human child or a pet. It's like they're mirroring the stages of development we see in biological organisms. And that ties back into the arXiv paper, which suggests that this principle of minimizing sensory input, which might have started in single-celled organisms, has been scaled up throughout evolution. They talk about how this leads to a division of labor between cells in multicellular organisms. So Luna, trying to stay balanced, could be seen as a very basic version of this local learning. It's a really intriguing connection. The paper argues that this simple principle of minimizing local sensory errors was so effective that evolution built upon it. As organisms became more complex, different cells specialized in different tasks. Like some cells became really good at sensing light, others at contracting muscles, and so on. Right. But they were all still operating under this basic principle of minimizing their local sensory input. And the nervous system evolved as a way to connect these specialized cells and coordinate their actions. So it's like each cell is still solving its own local problem, but they're all working together as part of a larger system. And that's how you get complex behavior and intelligence emerging from these simple, decentralized interactions. And IntuiCell's approach, with their generic network and each node solving its own problem, could be seen as a technological implementation of this same principle. So it's like they're taking inspiration from billions of years of evolution and applying it to AI. Precisely. And if you go back to the arXiv paper's core argument, they're proposing that this whole idea of sensory learning as a decentralized process driven by negative feedback might be fundamental to how intelligence works, even at its most basic level. So they're suggesting this might have been present in the very first single-celled organisms. Like, even single cells need to maintain their internal stability in a changing environment. They need to sense things like temperature and chemical gradients and adjust accordingly. So maybe this basic ability to minimize disruptive sensory input, to find a stable state, is the foundation for all learning and intelligence. It's a really powerful idea.
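The offline-retraining versus real-time-adaptation contrast at the top of this exchange can be shown with a deliberately simple toy: two scalar "learners" that just track a signal that jumps mid-stream (grass to ice, figuratively). This framing is ours; it is a schematic of the workflow difference, not either system's algorithm.

```python
# Contrast between retraining offline on a frozen snapshot and
# adapting online, sample by sample, as the environment shifts.

def offline_learner(batches):
    estimate = 0.0
    for batch in batches:                   # each retraining pass sees a frozen snapshot
        estimate = sum(batch) / len(batch)  # refit from scratch on that batch
    return estimate

def online_learner(stream, rate=0.2):
    estimate = 0.0
    for sample in stream:                   # updates continuously, never taken offline
        estimate += rate * (sample - estimate)
    return estimate

# The environment shifts halfway through the run.
stream = [1.0] * 20 + [5.0] * 20
print("offline (trained before the shift):", offline_learner([stream[:20]]))
print("online  (kept adapting):           ", round(online_learner(stream), 2))
```

The offline learner is exactly right until the world changes; the online learner is never retrained, yet it ends near the new value. That gap is what the icy-surface demonstration is meant to dramatize.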
It suggests that intelligence isn't necessarily about super complex algorithms. It's about this fundamental drive to adapt and minimize surprise. And the paper argues that the same principle scaled up to multicellular organisms, which led to the division of labor we see in complex life forms. Right. Different cells specialize, but they're all still working under the same basic principle of minimizing their local sensory activity. Yeah. And that's how you get from single cells to brains and complex behavior. And this idea of local learning algorithms maps really well onto IntuiCell's description of how their system works. They talk about neurons, or maybe nodes in their network, solving their own local problems based on their sensory inputs. It's like each part of the system is responsible for its own little piece of the puzzle. And they're all working together, driven by this shared goal of minimizing sensory noise. And this idea of sufficiently good minima in sensory activity as the reward signal could explain why Luna can balance and adapt to new terrains. She's not getting points or praise. She's just finding that state of reduced sensory input related to being off balance. It's like the reduction of that feeling of instability becomes its own reward. It drives the learning process. So to bring it all together, we have this really cool convergence happening. IntuiCell is showing us a system that seems to learn and adapt in a way that's more like how we do it. And the arXiv paper provides a potential explanation for why this might be working, drawing on this deep evolutionary perspective on learning. It's not just about tweaking the algorithms. It's a whole different way of thinking about intelligence. It really is. It's like we're moving from this idea of intelligence as something programmed from the top down to something that emerges from the bottom up, through local interactions and adaptation. And it makes you wonder, if this idea of minimizing sensory noise is really at the core of intelligence, how might it change not only our technology, but also how we understand ourselves? And if we're on the verge of creating systems that can truly learn autonomously, what does that mean for the future? Yeah. It's exciting and a bit daunting at the same time, right? It's definitely something to think about. I agree. It raises all sorts of questions about the nature of intelligence, consciousness, even free will. It's a whole new frontier. And on that note, we'll leave you to ponder those questions. Thanks for joining us on this deep dive. We'll catch you next time. See you then.
