Heliox: Where Evidence Meets Empathy 🇨🇦‬

The Geometry of Forgetting: What Mathematics Reveals About the Limits of Human Perception

• by SC Zoomers • Season 5 • Episode 64


📖 Read the companion essay 

What if your memories aren't stored files, but evolving geometric shapes in constant motion? This week, we're diving into a revolutionary kinetic model that treats memory as a dynamic system governed by mathematical forces—and reveals a stunning limit to human perception.

We explore how every memory exists in tension between two opposing forces: focusing (which sharpens concepts through learning) and forgetting (which generalizes them through absence). This isn't just neuroscience—it's geometry.

The breakthrough? Memory capacity doesn't increase indefinitely with complexity. It peaks at exactly seven dimensions, then collapses due to a phenomenon called the concentration of measure. This might explain why we have the senses we do, why working memory clusters around seven items, and why true expertise always costs flexibility.

IN THIS EPISODE:

  • Why engrams are shapes, not files
  • The unavoidable trade-off between receptivity and precision
  • How high-dimensional spaces destroy the brain's ability to distinguish concepts
  • The magic number seven—and what it means for human perception
  • Why forgetting is a feature, not a bug
  • The speed-complexity trade-off in learning systems

LISTEN if you're curious about: ✓ The mathematics of memory ✓ Why "more information" isn't always better ✓ The geometric limits of human perception ✓ How expertise and flexibility oppose each other ✓ What makes seven so special

Join us at Heliox, where evidence meets empathy. Independent and moderated.

This is Heliox: Where Evidence Meets Empathy

Independent, moderated, timely, deep, gentle, clinical, global, and community conversations about things that matter.  Breathe Easy, we go deep and lightly surface the big ideas.

Thanks for listening today!

Four recurring narratives underlie every episode: boundary dissolution, adaptive complexity, embodied knowledge, and quantum-like uncertainty. These aren’t just philosophical musings but frameworks for understanding our modern world. 

We hope you continue exploring our other podcasts, responding to the content, and checking out our related articles on the Heliox Podcast Substack.


About SCZoomers:

https://www.facebook.com/groups/1632045180447285
https://x.com/SCZoomers
https://mstdn.ca/@SCZoomers
https://bsky.app/profile/safety.bsky.app


Spoken word, short and sweet, with rhythm and a catchy beat.
http://tinyurl.com/stonefolksongs

Curated, independent, moderated, timely, deep, gentle, evidence-based, clinical and community information regarding COVID-19. Running since 2017 and focused on COVID since February 2020, with multiple stories per day, it has built a large searchable base of stories: more than 4,000 on COVID-19 alone and hundreds on climate change.

Zoomers of the Sunshine Coast is a news organization with the advantages of deeply rooted connections within our local community, combined with a provincial, national and global following and exposure. In written form, audio, and video, we provide evidence-based and referenced stories interspersed with curated commentary, satire and humour. We reference where our stories come from and who wrote, published, and even inspired them. Using a social media platform means we have a much higher degree of interaction with our readers than conventional media, and it provides a significant positive amplification effect. We expect the same courtesy of other media referencing our stories.


Speaker 1:

This is Heliox, where evidence meets empathy. Independent, moderated, timely, deep, gentle, clinical, global, and community conversations about things that matter. Breathe easy. We go deep and lightly surface the big ideas.

Speaker 2:

Welcome back to The Deep Dive. Today we are tackling the enigma of memory, but we're going to skip the usual talk about neurons and synapses.

Speaker 1:

We are. We're looking at memory through a totally different lens today, through geometry.

Speaker 2:

Exactly. We're diving deep into this really novel kinetic model. It completely throws out the old idea of memory being like, you know, static files in a cabinet.

Speaker 1:

And instead it treats them as dynamic, evolving shapes, things that are constantly changing, defined by these mathematical forces.

Speaker 2:

And so our mission today is to pull out the core insights from a major scientific paper on the long-term dynamics of memory engrams.

Speaker 1:

And just to set the stage, an engram is the physical trace of a memory in the brain.

Speaker 2:

Right. We want to understand how those traces sharpen or blur over years, even decades, and follow the math to this surprisingly specific conclusion about how we perceive the world.

Speaker 1:

To really get started, we should probably define that term engram a little more formally. It was coined by Richard Semon way back in 1904.

Speaker 2:

Wow, okay. So it's an old idea.

Speaker 1:

A very old idea. He described it as the neural substrate for storing memories. Today, we think of them as these sparse groups of neurons spread out across different brain regions.

Speaker 2:

They're the physical container for a concept, like bicycle or even that feeling of cold rain.

Speaker 1:

Precisely. And the source material really highlights that while we've gotten good at finding where engrams are, the big open question is how they evolve over time.

Speaker 2:

Okay, let's unpack that. What are the big questions this model is trying to answer?

Speaker 1:

Well, do our memories just stay sharp forever? Do they just dissolve into noise?

Speaker 2:

Or maybe they merge together, you know, and form these huge, generalized super concepts.

Speaker 1:

Exactly. This model gives us a quantitative way to predict that evolution. And the big payoff is this incredible discovery. The existence of a critical dimension in conceptual space.

Speaker 2:

And that doesn't just explain memory. It suggests something really fundamental about our capacity for perception itself.

Speaker 1:

It really does.

Speaker 2:

Okay, so let's get into the model itself. The researchers knew, obviously, that modeling billions of neurons was impossible.

Speaker 1:

Just not feasible.

Speaker 2:

So they went with a geometric simplification. They model concepts as these little spatial sets, like areas or spherical caps, in a multidimensional conceptual space.

Speaker 1:

Right. And you might be asking, why a sphere? Why put them on the surface of a sphere?

Speaker 2:

Yeah, that was my first question.

Speaker 1:

It's a really smart geometric intuition. Think about any feature of a concept. Color, size, texture, these things are usually bounded.

Speaker 2:

Okay, like on a scale from zero to one.

Speaker 1:

Exactly. If you normalize all the features of a stimulus that way, you naturally force all possible concepts to lie on the surface of a hypersphere. It just makes the math of similarity so much cleaner.

Speaker 2:

So the distance between two concepts is just the angle between them on this sphere.

Speaker 1:

You got it. We're not modeling raw features, but their normalized relationships.
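
To make that concrete, here's a minimal Python sketch of that idea, with invented feature values purely for illustration (the function names and numbers are ours, not the paper's):

```python
import numpy as np

def to_concept(features):
    """Normalize a raw feature vector so the concept lies on the unit hypersphere."""
    v = np.asarray(features, dtype=float)
    return v / np.linalg.norm(v)

def angular_distance(a, b):
    """Similarity between two concepts is just the angle between them on the sphere."""
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

# Made-up, bounded features: say size, number of wheels, engine noise.
bicycle = to_concept([0.3, 0.6, 0.1])
car = to_concept([0.6, 0.7, 0.8])
print(angular_distance(bicycle, car))  # a small angle means closely related concepts
```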

Speaker 2:

And the outside world interacts with this space through stimuli.

Speaker 1:

Yes, which they call catching shots or CS. I kind of like the sports analogy there.

Speaker 2:

It's like an event trying to land inside your existing concept to trigger a memory.

Speaker 1:

And this whole system is run by a constant tug of war. Two opposing kinetic forces that are always determining an engram's size.

Speaker 2:

Let's talk about the first force, the one that sharpens a concept.

Speaker 1:

That's focusing or, you know, learning. When an engram gets hit by one of these catching shots, two things happen. First, its center immediately jumps to the exact point of that hit. The memory recalibrates to match the newest information perfectly.

Speaker 2:

And second, its size shrinks.

Speaker 1:

Its size shrinks. This is the act of sharpening the concept. But the key is that the shrinkage isn't a fixed amount. The model treats it probabilistically.

Speaker 2:

Wait, why is it probabilistic? Why not just shrink it by, say, 10% every time?

Speaker 1:

Because learning isn't a fixed mechanical process. The model uses what's called an exponential density, which basically just means that the effect of new information depends on the concept's current state.

Speaker 2:

So it's not about making it smaller by a set amount. It's about shifting its focus to that very last stimulus. The concept gets more specialized.

Speaker 1:

More specialized and more focused on the latest evidence. Yes.
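
As a rough sketch of that focusing step (our reading of it, not the paper's exact update rule), the engram recenters on the stimulus that hit it and shrinks by a random, state-dependent amount:

```python
import numpy as np

rng = np.random.default_rng(0)

def focus(center, radius, stimulus, rate=1.0):
    """One focusing event: the engram's center jumps to the stimulus, and its
    angular radius shrinks by a random multiplicative factor. Drawing the shrink
    from an exponential density is our interpretation of the model's
    probabilistic shrinkage, not a quote of its exact form."""
    new_center = stimulus / np.linalg.norm(stimulus)  # recalibrate to the newest hit
    shrink = np.exp(-rng.exponential(1.0 / rate))     # random factor in (0, 1]
    new_radius = radius * shrink                      # sharper, but never exactly zero
    return new_center, new_radius
```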

Speaker 2:

Okay, so that's the force pulling inward, making memory sharper. But if that's all that happened, wouldn't every memory just shrink down to a single point and disappear?

Speaker 1:

It would. And that's where the counterforce comes in. Forgetting. Or losing sharpness.

Speaker 2:

So in the absence of any new stimuli, the engram starts to grow.

Speaker 1:

It grows. It gets fuzzier, more generalized. And the really clever mathematical insight here is that the speed of forgetting isn't constant. It depends on the engram's size.

Speaker 2:

Okay, the paper has an equation for this. Something like alpha times e to the minus l. What does that actually mean for how we forget?

Speaker 1:

It's a perfect description of the dynamic. It means you get this really sharp, fast forgetting right at the beginning. The concept expands quickly, but that's followed by a much, much slower relaxation.

Speaker 2:

Ah, so once a concept is already pretty fuzzy, it gets harder and harder for it to become even fuzzier.

Speaker 1:

Exactly. Think about trying to remember a name you just learned versus, say, the definition of a broad concept from high school. The name disappears fast, but that old fuzzy concept is surprisingly stable. It's in a state of very slow decay.
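
Reading that spoken formula as a growth rate for the engram's size l, i.e. dl/dt = alpha * exp(-l) (our assumption about what the equation governs), a few lines of Python reproduce that fast-then-slow relaxation:

```python
import numpy as np

def forgetting_curve(l0=0.1, alpha=0.5, t_max=50.0, dt=0.1):
    """Integrate dl/dt = alpha * exp(-l): the size l of an untouched engram grows
    quickly while the concept is still sharp (small l), then ever more slowly as
    it gets fuzzier. The closed form is l(t) = ln(exp(l0) + alpha * t)."""
    ts = np.arange(0.0, t_max, dt)
    ls = np.empty_like(ts)
    l = l0
    for i in range(len(ts)):
        ls[i] = l
        l += alpha * np.exp(-l) * dt
    return ts, ls

ts, ls = forgetting_curve()
print(ls[0], ls[50], ls[-1])  # big early jumps, tiny late ones
```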

Speaker 2:

So the size of any memory in your head is just this constant mechanical trade-off between focusing and forgetting.

Speaker 1:

It is. And what's so fascinating is this tension it creates, a tension between the system's receptivity to new things and the sharpness of its existing concepts.

Speaker 2:

This is an unavoidable tradeoff.

Speaker 1:

Completely. Let's imagine a five-year-old learning about cars versus an expert mechanic.

Speaker 2:

Okay, so the five-year-old has very high receptivity.

Speaker 1:

Extremely high. Their engram for car is huge and fuzzy. It probably includes trucks, buses, maybe even a train.

Speaker 2:

Right. They're very receptive. Almost any vehicle fits the category. But the concept is not sharp at all.

Speaker 1:

Exactly. That's a high forgetting rate, weak focusing, large engrams that cover a lot of ground.

Speaker 2:

And the mechanic is the opposite. They have incredibly sharp, specialized concepts. The difference between a specific type of engine is obvious to them.

Speaker 1:

But their engrams are tiny. If they see a totally new kind of engine, their system might not even register it. The stimulus falls outside their sharp, focused little concept areas.

Speaker 2:

So that's low receptivity.

Speaker 1:

That analogy is perfect. And it mirrors what the paper calls the classical bias-variance trade-off from statistical learning. It's amazing.

Speaker 2:

It maps a psychological thing onto a fundamental rule of math. High receptivity is high bias, and high sharpness is high variance. Got it. Okay, this is where the model really shifts gears and gives us that huge counterintuitive insight. They took this kinetic model and scaled it up to multidimensional space.

Speaker 1:

Right, using those spherical caps we talked about, but now in a space with D dimensions, representing concepts with D different features.

Speaker 2:

And the main goal of the simulation was to track the number of distinct centers, or NDC, as you change the number of dimensions.

Speaker 1:

Basically, the NDC is the total number of separate, unique concepts the system can hold on to.
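
As a purely illustrative sketch of what counting distinct centers could look like in code (the angular tolerance here is our choice, not a value from the paper):

```python
import numpy as np

def count_distinct_centers(centers, tol_angle=0.05):
    """Count engram centers that remain geometrically distinct: any two centers
    closer than tol_angle radians on the sphere are treated as one shared concept."""
    distinct = []
    for c in centers:
        c = np.asarray(c, dtype=float)
        c = c / np.linalg.norm(c)
        if all(np.arccos(np.clip(np.dot(c, d), -1.0, 1.0)) > tol_angle for d in distinct):
            distinct.append(c)
    return len(distinct)
```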

Speaker 2:

And you'd think intuitively that the more dimensions you add, the more capacity you should have. More features, more space, more room for memories.

Speaker 1:

That is absolutely the intuition. But the math shows this profound twist. The relationship is non-monotonic.

Speaker 2:

Meaning it doesn't just go up and up?

Speaker 1:

Not at all. The number of distinct concepts, the NDC, initially increases with dimension D, but then it hits a peak, a critical dimension, and after that it just plummets.

Speaker 2:

Why? Why would having too many dimensions cause your memory capacity to collapse? That seems impossible.

Speaker 1:

It's a very strange consequence of high-dimensional geometry, a phenomenon called the concentration of measure.

Speaker 2:

Okay, break that down for us.

Speaker 1:

In really, really high dimensional spaces, everything becomes almost equally far apart from everything else. All the points, the centers of your memory engrams, become equidistant.
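
You can see concentration of measure directly with a few lines of Python: sample random directions on the unit sphere and watch the spread of pairwise angles collapse as the dimension grows (the dimensions and sample size below are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)

def pairwise_angle_spread(dim, n_points=200):
    """Sample random unit vectors in `dim` dimensions and return the mean and
    standard deviation of all pairwise angles (in degrees)."""
    x = rng.normal(size=(n_points, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    cos = np.clip(x @ x.T, -1.0, 1.0)
    angles = np.degrees(np.arccos(cos[np.triu_indices(n_points, k=1)]))
    return angles.mean(), angles.std()

for d in (2, 3, 7, 50, 500):
    print(d, pairwise_angle_spread(d))  # the spread shrinks: everything ends up ~90 degrees apart
```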

Speaker 2:

So there's no close or far anymore.

Speaker 1:

Pretty much. And the focusing force, which needs to pull an engram center toward a specific nearby stimulus, it just stops working effectively. Can't tell the difference.

Speaker 2:

Because everything is equally far away, so the concepts just start merging.

Speaker 1:

They start merging, sharing centers, and they all collapse into one giant, uselessly fuzzy super concept. The whole system loses its ability to make distinctions.
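
A toy version of that merging step might look like this (the overlap rule and the merged radius are ours, invented to illustrate the effect, not the paper's mechanism):

```python
import numpy as np

def maybe_merge(c1, r1, c2, r2):
    """If two engram caps overlap (their unit-vector centers are closer than the
    sum of their angular radii), collapse them into one fuzzier 'super concept';
    otherwise keep them distinct."""
    angle = np.arccos(np.clip(np.dot(c1, c2), -1.0, 1.0))
    if angle < r1 + r2:                                # caps overlap
        center = (c1 + c2) / np.linalg.norm(c1 + c2)   # shared center
        radius = angle / 2.0 + max(r1, r2)             # wide enough to cover both
        return [(center, radius)]
    return [(c1, r1), (c2, r2)]
```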

Speaker 2:

Wow. So the very complexity that you thought would help actually destroys the geometric machinery of memory.

Speaker 1:

That's a great way to put it.

Speaker 2:

And when they ran the numbers, they found a very specific value for that dimensional sweet spot.

Speaker 1:

They did. For a whole range of parameters, especially for systems with higher forgetting rates, the simulations showed that the critical dimension, this DC, saturates right around 7.

Speaker 2:

7. Just the number 7. Pulled from the pure math of these evolving shapes. How do we even begin to connect that to the real world?

Speaker 1:

Well, the paper makes this really profound leap. It connects the features that define the dimensions to an organism's senses.

Speaker 2:

So if you assume a one-to-one link between a feature of a stimulus and one of our senses.

Speaker 1:

Then the dimension of the engrams is the same as the number of senses.

Speaker 2:

So the model is proposing that the absolute peak capacity of our conceptual space, the richest possible perception of the world, is reached when the number of senses is seven.

Speaker 1:

That's the implication, that the system is optimized for maintaining distinct concepts with an optimal number of input channels.

Speaker 2:

Was this number robust? I mean, if you change the rules a little, does the seven just disappear? Did they try to break it?

Speaker 1:

Yeah. Oh, they definitely did. They found the same general behavior, the existence of a critical dimension, even in more complex versions of the model. Like what? For instance, a model for learning from scratch, where a new memory only forms after, say, three or four stimuli hit the same empty spot. Or models with inhibition, where if two engrams overlap, only one of them is allowed to shrink.

Speaker 2:

And in all those cases, the capacity still peaked and then collapsed.

Speaker 1:

It still did. The non-monotonic relationship held up, which really confirms that this is a fundamental geometric consequence of high dimensional space.

Speaker 2:

So what does this all mean for you, the listener? This kinetic model, it gives us this surprisingly clear, quantitative way to think about the tradeoff between memory receptivity, how open you are to new things, and concept focus.

Speaker 1:

It translates all that messy biology into a very clean geometric competition.

Speaker 2:

And the primary takeaway here is this powerful idea that our maximum capacity for storing distinct concepts might be tied to a specific dimensional limit. The number seven.

Speaker 1:

It connects this very abstract math directly to the fundamental structure of how we might learn and how our sensory systems might have been optimized over time.

Speaker 2:

That is a powerful thought. The idea that we are, in a way, geometrically limited in how complex our perception can be.

Speaker 1:

Indeed. And there was one final little detail the analysis showed. It found that the critical dimension actually decreases slightly when the forgetting rate goes up.

Speaker 2:

So if you forget faster, your optimal number of dimensions is a bit lower.

Speaker 1:

A little bit lower. And that leaves us with an important question to chew on. If a highly flexible, fast-forgetting system is optimized for a slightly lower dimensional space, does that mean an organism that prioritizes rapid learning inherently has a slightly more limited potential for conceptual complexity compared to a slower, more focused system?

Speaker 2:

A trade-off between speed and complexity. That is definitely something to think about. Thanks for listening today. Four recurring narratives underlie every episode. Boundary dissolution, adaptive complexity, embodied knowledge, and quantum-like uncertainty. These aren't just philosophical musings, but frameworks for understanding our modern world. We hope you continue exploring our other podcasts, responding to the content, and checking out our related articles at helioxpodcast.substack.com.
