No Way Out

Explainable AI: Active Inference, Free Energy Principle, Spatial Web & Boyd's OODA loop with Denise Holt | Ep 25

Mark McGrath and Brian "Ponch" Rivera Season 1 Episode 25


What if we told you there's an AI that goes beyond the limitations of traditional large language models (LLMs) and is modeled after our very own brains? Join us in our captivating conversation with Denise Holt, Spatial Web podcast host, as we unlock the secrets of active inference AI and its groundbreaking potential to reshape our interaction with the world around us.

Discover how active inference AI, inspired by the same sciences behind the OODA loop, is more energy-efficient and offers a transparent, auditable, and explainable decision-making process. Together with Denise, we delve into the Spatial Web as the next evolution of internet protocol and its role in providing a framework for interoperability between emerging technologies. Learn how active inference AI can be used to create action-perception loops for intelligent agents and how the Free Energy Principle influences both the external and internal states of an agent.

Lastly, we explore the fascinating world of brain perception and inference through the lens of Adelson's Checkerboard Illusion. Uncover the importance of active inference in the brain's ability to make predictions about the external world, and how sensory signals serve as a currency within our bodies. Don't miss this riveting discussion on the future of AI, the Spatial Web, and the incredible potential of active inference AI.

Denise Holt on LinkedIn
Spatial Web AI
Twitter:
@deniseholt_
@spatialwebai
Verses AI

John R. Boyd's Conceptual Spiral was originally titled No Way Out. In his own words: 

“There is no way out unless we can eliminate the features just cited. Since we don’t know how to do this, we must continue the whirl of reorientation…”

A promotional message for Ember Health: safe and effective IV ketamine care for individuals seeking relief from depression. Ember Health's evidence-based, partner-oriented, and patient-centered care model boasts an 84% treatment success rate, with 44% of patients reaching depression remission. The team has extensive experience, with over 40,000 infusions and more than 2,500 patients treated, including veterans, first responders, and individuals with anxiety and PTSD.

Stay connected with No Way Out and The Whirl Of ReOrientation

X: @NoWayOutcast · @PonchAGLX · @NoWayOutMoose

Substack: The Whirl Of ReOrientation - www.thewhirl.substack.com







AI and Spatial Web Interaction

Brian "Ponch" Rivera

All right, hey, good afternoon, good evening, good morning, wherever you may be. Ponch Rivera here, and I'm here with Denise Holt. Denise is the host of the, let me make sure I get this right, Spatial Web AI podcast. Did I get that right? Correct? Yep. What is the Spatial Web, and why the heck would you want to talk to me? That's what I'm curious about.

Denise Holt

Well, okay. So the Spatial Web is basically the next evolution of our internet protocol. We are currently living in a world where we're interacting with websites and web pages, and yet all of these converging technologies are coming together that really require us to have a network of spaces and all of the things within those spaces. So the Spatial Web protocol, HSTP, the Hyperspace Transaction Protocol, and its programming language, HSML, the Hyperspace Modeling Language, give us the framework to move into this kind of digital twin computing environment where every person, place, or thing in any space, in any reality, is now locatable and identifiable, and the protocol allows for gatekeeping around each entity.

Brian "Ponch" Rivera

Right. So if we go back to the early 1970s and DARPA coming up with what we now know as the internet, is that considered Web 1.0? Is that right, or where are we in the other channels?

Denise Holt

Yeah, I believe so. Yeah. Okay.

Brian "Ponch" Rivera

So we have DARPA coming up with this idea back in 1972, or whatever it was, in the 70s. We grew up with it, my generation. We've seen Web 2.0. We're moving towards Web 3.0. And is that the Spatial Web, or is that something else?

Denise Holt

Yeah, Web 3.0, Spatial Web: they're interchangeable.

Brian "Ponch" Rivera

Yes, Okay, okay. So that's pretty important, but that's an enabler for artificial intelligence, correct?

Denise Holt

Yes, yes. So the magic of this new network of everything is that it really gives us a foundation for an entirely new type of AI called Active Inference AI.

Brian "Ponch" Rivera

Okay, so Active Inference AI. I think a lot of people are familiar with ChatGPT-3 and GPT-4, and I believe those are called LLMs, large language models, right?

Denise Holt

Yes, yes, that is correct.

Brian "Ponch" Rivera

Okay. So LLMs and Active Inference artificial intelligence: night and day, correct?

Denise Holt

Yes, night and day, two very different types. The difference can be equated like this: these LLMs are transformer models. It's machine learning AI; they're algorithms, and they're incredible tools, but they're limited in what they can and can't do, and they have no awareness, no understanding, no ability to gain any within themselves. So they're like tools in a toolbox. Active Inference is modeled after biological systems, like our brain and the way our body works and the way nature works. So it's like a toolbox versus the human.

Brian "Ponch" Rivera

Right. So there's a connection, and that has to do with biological systems, complex adaptive systems, and why Denise is on the show today. And that is because Active Inference AI, and I'm going to say this out loud to the world right now, is OODA loop AI, right? So there's a difference. A lot of people think about linear systems like Plan-Do-Check-Act, PDCA. Nothing wrong with them; they're linear things, they're pretty cool, they work great in a context. To me, that's the LLM approach. It's not a closed system necessarily, but it has a data source that it's learning from, and it's not an open data source; it's kind of a closed data source. It's limited, right? That's one way I think about it. And then Active Inference AI has sensory capability that looks outside and adapts. And I have a couple of notes here, and I want to see if these make sense. And you already said this.

Brian "Ponch" Rivera

But LLMs do not actively interact with the external environment, correct? They don't actively interact, and they don't take actions to reduce uncertainty, or what John Boyd called mismatches. So if you're familiar with the John Boyd OODA loop in competition, one of the things we want to do is create mismatches in the environment: to slow down, to defeat, to get ahead of whatever you want to think of in competition, to get ahead of your enemy or your competitor, or to get inside the thought process of your customer. So you need to take some type of action to reduce uncertainty, to avoid things like being taken advantage of by a thief. That's what we're doing all the time in our brain. So that's a little bit more on Active Inference AI. Any other differences that you can think of, Denise?

Denise Holt

Well, so you know these yeah, these machine models.

Denise Holt

There are two different aspects to the way the AI works, too. These machine models require enormous amounts of data to be fed in to train the model, right? That's because they're pattern-matching, pattern-recognition machines. I mean, that's all they do. So the more data you feed them, the more likely they are to recognize the patterns and make decisions based on what they recognize as familiar patterns that we already use and recognize. So with big data coming down, it's not a very scalable approach.

Denise Holt

Now, Active Inference can take any small amount of data and make it smart, because instead of looking inward to the historical data that it was trained on, it's able to look outward to the data that is presenting itself at that moment, and it can update in real time. So it can take in data from sensors, from cameras, from the actual Spatial Web protocol itself, with the programmable context that is informing it of the relationships between the data and between the entities that are within the spaces. And because of the gatekeeping aspect of the protocol itself, the same Active Inference AI can be applied to proprietary data and kept internal, or it can be applied to enormous amounts of interlinked data throughout the planet. It has a lot of flexibility.
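The real-time updating Denise describes can be sketched as an online Bayesian belief update, the perception half of active inference: beliefs are revised from live data rather than a frozen training corpus. This is a minimal illustration with made-up numbers, not code from Verses or the Spatial Web protocol.

```python
import numpy as np

# Likelihood matrix A[o, s]: probability of observation o given hidden state s.
# (Two observations, two hidden states; values are invented for illustration.)
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

belief = np.array([0.5, 0.5])  # prior belief over the two hidden states

def update_belief(belief, obs):
    """One online Bayesian update: posterior is proportional to likelihood times prior."""
    posterior = A[obs] * belief
    return posterior / posterior.sum()

# Each new sensor reading updates the belief in real time; no retraining needed.
for obs in [1, 1, 1]:
    belief = update_belief(belief, obs)

print(belief.round(3))  # belief now concentrates on hidden state 1
```

Three consistent readings are enough to sharpen an uncommitted prior into a confident belief, which is the sense in which a small amount of live data can be made "smart."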

Brian "Ponch" Rivera

So what I'm hearing from you, and maybe I might be wrong on this, is that an LLM requires, I'm going to say this, more energy. And if you go back to biological systems and what I understand about the free energy principle and Active Inference, which we'll probably touch on here a little bit, and a little bit about the brain: the brain weighs 2% of our body weight, yet it burns 20% of our energy. It's burning a lot of calories, right? So it has to find efficiencies. And, if I remember correctly, Active Inference borrows from that type of thinking in the brain, that we have to find a way to reduce the energy spent on how we perceive the world.

Brian "Ponch" Rivera

So I believe what's going to happen is, if we do shift from LLMs to this Active Inference, the energy required to do so is going to be lower, which allows novel things to happen with Active Inference, the same way that happens with our brain. And, by the way, that's what the OODA loop is about. I believe the OODA loop actually captures Active Inference before Active Inference and the FEP were a thing, and we can talk about that here in a little bit. But did I get that right? Is it a lower energy cost to use Active Inference approaches?
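The "energy" being minimized here is, formally, variational free energy. As a reference point, this is the standard formulation from the active inference literature, not an equation quoted in the episode:

```latex
% Variational free energy of beliefs q(s) about hidden states s, given observations o:
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\text{inference error}}
  \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Minimizing $F$ simultaneously makes the agent's beliefs approximate the true posterior and makes its observations unsurprising, which is the formal sense in which perception and action "save energy."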

Denise Holt

Yeah, and you know, one other really important difference, too, is with these machine models. Because they are neural nets, they take in all this input, they compare it to the training data, and they give you an output, but all of that activity within the neural net is obscured. So you can't explain how it comes to its decision. It's not explainable; they're not even observable, and therefore they're not auditable either. So with those neural nets, it's kind of like a black box. That's one of the biggest differences with active inference, because with active inference, the entire action-perception loop of how it is getting to its decision-making is transparent, it's auditable, and it provides a situation where the AI itself can reflect on its own decision-making process and report on it.
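The auditability Denise describes can be illustrated with a trivial sketch. This is hypothetical code, not any real Verses API: because each belief update and action choice in the action-perception loop is an explicit computation, the agent can log every step and report the evidence behind each decision.

```python
# Each entry records the evidence and reasoning behind one decision step.
audit_log = []

def logged_step(step, belief_before, observation, belief_after, action):
    """Record one pass of the action-perception loop so it can be audited later."""
    audit_log.append({
        "step": step,
        "belief_before": belief_before,
        "observation": observation,
        "belief_after": belief_after,
        "action": action,
    })

# Hypothetical trace of a single decision (all values invented):
logged_step(1, [0.5, 0.5], "sensor=1", [0.11, 0.89], "approach")

# The decision can be traced back to the exact observation that produced it.
print(audit_log[0]["observation"])  # -> sensor=1
```

An opaque neural net offers no equivalent hook: the path from input to output is distributed across weights with no step-by-step trace to log.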

Brian "Ponch" Rivera

You've got this black box doing stuff, and you have no idea what's going on inside, right? This is very dangerous. When we're coaching organizations about black boxes, not necessarily black box thinking, although there is a connection there: in black box thinking, you figure out what's actually going on on the inside. Black box thinking comes from the cockpit of commercial aircraft, where you have the black box that tells you what happened in the aircraft, but you have to use that at some point in the future, usually in an accident report.

Brian "Ponch" Rivera

But when we talk about black box thinking or black boxes in an organization, they're usually just these things where magical things happen and nobody knows why. That's very dangerous, right, and that costs a lot of energy. What we want is to know what happened: that explainability and accountability within that black box. And I believe active inference gives us the capability to learn from the AI: how did it come up with this decision? What was its process? Is that correct?

AI and Spatial Web Protocol

Denise Holt

Yes. And when you're thinking about the value of the two different types of AI, it's really important to think of our use cases for AI, too, right? So if you take one of these LLMs, one of these transformer models: if you're just concerned with making things, like, write me a report, make me a picture or a video, perform a task, create code, write me some code, they're good at that. They're amazing. They save you a lot of time. It's wonderful. But when you have no way to know how it's arriving at its outcome, then it's impossible to use it for mission-critical decisions, like anything to do with medicine, or operating an airport, or anything where you have to be able to, number one, understand how it's getting to its decision, so that if something goes wrong, you can correct it.

Brian "Ponch" Rivera

This is awesome, because this is what we actually coach organizations on: we want to separate your outcomes from your decision-making, and that means we want to look at how you interact as a team to see how that works, so we can repeat that over and over. And once you repeat it, you borrow from somebody else, you repeat it, you improve it over time, over and over again. So the parallels between what we're coaching in organizations and what I'm hearing from you on active inference, and even LLMs, are pretty powerful. Now I've got a quick story.

Brian "Ponch" Rivera

As you know, we talked not too long ago. I'm in Colorado; my mother passed away. There are a lot of things that I think active inference could do in future scenarios that are similar to my mom's.

Brian "Ponch" Rivera

But for an LLM: I used ChatGPT-3 the other day to take my mother's obituary and some stuff from her life, and I had it write four different songs: a George Strait song, "Straight Outta Compton," and then something from Elvis, and I can't remember the fourth off the top of my head. But it was such a novel thing that it captured my mom's life in song, and everybody at the funeral was like, how did you do this?

Brian "Ponch" Rivera

It was a very powerful way to recognize my mom's life but, at the same time, have some fun and show people some new technology. But an LLM could not have saved my mom's life, I guarantee you that, because to me it's looking backwards and not looking out, right? And we need capabilities that look out, to bring disparate information together and get away from these siloed approaches. And, by the way, that's what we're trying to do with organizations in the first place: get them to work across their silos, because you can't break them down; work across them so they can learn as an organism and create better outcomes. Is that what you see happening with active inference? Is it creating different connections?

Denise Holt

Yeah, and you know, one of the things with marrying up the active inference AI with the Spatial Web protocol is that it's a framework of these nested spaces, right? So then you have the advantage of using Markov blankets, and it allows for this action-perception loop between all things in real time. It's really interesting, because when you think of the free energy principle, obviously there are all possibilities for outcomes at all times.

Denise Holt

The way active inference works within the protocol, and then all of the sensory input, it really works in a way to minimize all of the external noise and to actually focus on what's important, what actually relates to the task at hand or the desired outcome. And it's happening across a network simultaneously. So it's scalable without the requirement of all of this extra energy that's really not needed. You know what I mean? Yeah.

Brian "Ponch" Rivera

Yeah, and you know, I'm hearing some stuff that LLMs are hallucinating. There's a great story in the last couple of weeks from Time magazine: lawyers are using them to reduce their workload, but at the same time, they're asking them to find cases, and these LLMs are actually creating new cases that don't exist. So they're hallucinating. And we have a John Boyd bot right now that looks back at some of our data, and it hallucinates too. It makes crazy connections, and we're like, that's not true. So LLMs, and this is my perception of LLMs right now: they're pretty cool, but it's not the coolest thing out there on the block. Everybody's talking about it right now, though. That's what everybody wants, the LLM.

Brian "Ponch" Rivera

My belief is, as people learn how we perceive the world, how our mind actually works, how our reality is a controlled hallucination, all these great things that are coming from mental health, psychedelic-assisted therapy, from active inference, from Karl Friston, from Dr. Hipólito, you name it, all these amazing things are coming together. My belief is that I can show this in a short amount of time by going through John Boyd's OODA loop, and we can talk about that down the road. So my next question to you is: if my perception of the current reality is common, which is that a lot of people are focused on LLMs, who in the world is actually looking at active inference AI? Are there any companies out there, or anybody, we should pay attention to?

Denise Holt

Well, yeah, okay. So Verses AI is a cognitive computing company. Let me back up for just a minute to give a little bit of background to the audience. So back in, I don't know, probably 2016, 2017...

Denise Holt

The founders of Verses were seeing all of these converging technologies coming together, but there was no underlying framework for the interoperability between them, and they had a vision and an idea for what they wanted to do. But this framework had to be there, that framework being the Spatial Web protocol, this protocol that takes us from websites and web pages into 3D spaces, and this digital twin capability. So they built the protocol and donated it to the public, because nobody can own the internet. They donated the protocol and the IP to the IEEE, which is, of course, the world's largest core standards body, the one that has everything to do with electronics and engineering and is responsible for all of the core standards around things like Wi-Fi and Bluetooth and even nuclear energy. That happened almost three years ago. So in the last three years, core standards have been built around this technology, with some of the biggest brains all over the globe. And at the same time, Verses now had the foundation to build what they intended to build on top of it, which is the COSM operating system, which enables these intelligent apps to then be built and utilized within the network.

Denise Holt

Right? So it becomes this network of distributed intelligence. And since then, they've been working with Fortune 500 companies. They've been working with governments all over the globe that are building smart cities. They've been involved for almost three years in a European drone project called Flying Forward 2020, to work out and prove the concept of using the protocol and the active inference AI, and to be able to translate human laws, like all of the airspace laws and all of the different things, translate them so that the drones can understand them and act accordingly. So that project has been met with huge success.

Denise Holt

And so what's happening later this year is that all of this is now coming to the public, because COSM, with this app store where anybody can build an intelligent app, is going to be released to the public. So it has this public face, and along with that, the interface for this new internet experience is called GIA, and that's a general intelligent agent. It becomes every person's personal assistant to navigate and browse the web, but even more so, it literally can act on your behalf. So yeah, there's a lot coming. And it's really interesting, because it's going to really change our lives and the way we exist. It's going to bring all of these technologies, the interoperability between them, this augmented existence that we've all kind of envisioned, but the foundation really hasn't been there. It's all coming into fruition. So things are going to look a lot different once we're living and working in that space.

Brian "Ponch" Rivera

What I find interesting is, just in the last 24 hours, there's a new paper that came out. I think it's called Designing Explainable Artificial Intelligence with Active Inference: A Framework for Transparent Introspection and Decision-Making. And I look at the folks on there. I believe Gabriel René is the CEO of Verses. Is that correct?

Denise Holt

Yeah.

Brian "Ponch" Rivera

Next to him is Karl Friston. Karl Friston, Dr. Friston of fMRI fame, and then Active Inference and the Free Energy Principle. A lot of math involved in there.

Denise Holt

There are some other names I'm not too familiar with. Scientists as well.

Brian "Ponch" Rivera

Yeah, oh, he's a chief scientist for them. So there's a lot to unpack here, just on the names on this paper. And for our listeners: early on we had Dr. Inês Hipólito on, who talked about Active Inference and cognitive science, neuroscience, and some things like that. We even talked a little bit about psychedelic therapy. But as I look at the names on here, these are the who's who in the zoo right now of not just artificial intelligence but how the brain actually works, right?

Denise Holt

Yeah, yeah.

OODA Loop and Active Inference AI

Brian "Ponch" Rivera

And then, going back to our show again, I had a quick conversation with Gabriel René not too long ago about the Constructal Law. He's familiar with that, which is from Adrian Bejan, a law of physics, a law of flow systems really. And then there's a lot of discussion now about complex adaptive systems theory, and one of the advisors to my company, and who was our first guest on our show, was Dave Snowden, a complexity theorist and the creator of the Cynefin framework. So what I'm getting at is we came to meeting Denise almost accidentally; this was all by no design. But I do want to get back to this paper here. So I'm making these connections about how all these folks are interrelated. They all have different disciplines, but at the end of the day, they're all coming together to solve some of these, not problems, but opportunities, right? So on this paper: you kind of pointed out some things on it already when we started. Explainable AI, that's what they're talking about in the paper.

Denise Holt

Yeah, I'm sorry, I was going to say, you asked me who's paying attention, who's going to pay attention to this. This paper, and another report that they announced they're going to be releasing mid-month, called The Road to Autonomy: A Path to Global AI Governance. So this paper, talking about how active inference AI provides explainable AI, solves the issues that we're having with trying to put governance around AI. So all of this is going to make the world pay attention.

Brian "Ponch" Rivera

Right. So what I like about this is human-understandable explanations, right? When you look at the free energy principle, I'm a human, and when I read it, it's just full of math, right? I'm just blown away. I'm like, I'm done, I can't read it. I like drawings, I like pictures.

Brian "Ponch" Rivera

Unfortunately, John Boyd gave us the OODA loop, but too many people think of it as just a simple decision-making process. Now I'm going to share something with you. We put a question into our bot earlier about the origins of the OODA loop, and this is coming from a pretty big data source that is not open; it's from John Boyd's archives. And we just asked it, you know, what informed the OODA loop? And I think there are some interesting connections here. It comes from a variety of scientific disciplines, including physics, engineering, psychology, and biology. This is the first thing our bot put out. It didn't say it came from fighter aviation, right? We know this; we don't know that all the time. He drew on concepts such as thermodynamics.

Brian "Ponch" Rivera

So, the second law of thermodynamics, which is extremely important inside of the free energy principle, and complex adaptive systems and so forth. Cybernetics is absolutely critical to the free energy principle, game theory, and theories on decision-making and strategy. He also studied the behavior of fighter pilots in air combat. So, my background being fighter aviation, many people make the association that it's all about fighter pilots going through OODA loops. No. The OODA loop, and again, this is Brian's view: when you put a Markov blanket on it, which we'll probably talk about down the road and have talked about in the past, you create the action-perception loop, and you get to see what's actually happening on the inside of the Markov blanket, with the sensory states and active states.
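The Markov blanket partition Brian describes can be sketched as a tiny dependency graph. This is purely illustrative, not the formal literature's notation: internal and external states never influence each other directly; every path between them runs through the blanket's sensory and active states.

```python
# Toy sketch of the Markov blanket partition: who directly influences whom.
# The world writes to the senses, perception informs internal states, internal
# states drive action, and action writes back to the world.
PARTITION = {
    "external": {"influences": ["sensory"]},
    "sensory":  {"influences": ["internal"]},
    "internal": {"influences": ["active"]},
    "active":   {"influences": ["external"]},
}

def path_exists(src, dst, hops):
    """Can src influence dst in exactly `hops` steps around the loop?"""
    frontier = {src}
    for _ in range(hops):
        frontier = {n for f in frontier for n in PARTITION[f]["influences"]}
    return dst in frontier

# Internal and external states never touch directly...
assert not path_exists("external", "internal", 1)
# ...but each reaches the other through the blanket.
assert path_exists("external", "internal", 2)   # via sensory states
assert path_exists("internal", "external", 2)   # via active states
```

That separation is exactly the action-perception loop: sensory states carry perception inward, active states carry action outward, and the blanket is the boundary where the two meet.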

Brian "Ponch" Rivera

There's also more behind this. John Boyd was influenced by a wide range of scientific disciplines, including mathematics; Norbert Wiener, who developed the field of cybernetics, we already talked about that; living organisms; machines; and complex adaptive systems thinking and systems thinking. All right. So there's a lot there, and this is why, when we had Dr. Hipólito on, we were kind of like, hey, is this not the same thing? Are these things related? And the answer is: they may be. There's no need to connect anything, by the way. But if we're going to explain how the brain works, how people perceive reality, and at the same time show them how active inference AI works, wouldn't it be easy just to show them the same thing? That's one of the things we're trying to do here. Denise, any thoughts on that? Am I just talking out of turn here? Am I crazy?

Denise Holt

Oh, no, no, no. It's interesting, because when you reached out to me, I looked into the OODA loop theory, because you had said that the show focuses on that. And as soon as I saw it, I was like, oh, I know why he's reaching out to me, because they're very closely related. So yeah, I definitely see that connection.

Brian "Ponch" Rivera

Yeah, no, it's great. And I know you and I looked at some slides not too long ago. We may be able to go into them here if you want to. I just want to see if you have any questions about what we're doing with the OODA loop, or any overlap that I'm missing between what you understand about the Spatial Web and what I understand about human performance, anything like that.

Denise Holt

Honestly, I think they both kind of come to the same resolve, in that it sharpens your perception to achieve a more accurate outcome, and it's a very similar action-perception loop for that process. So yeah, I mean, I definitely see the relationship.

Brian "Ponch" Rivera

Yeah. So, Spatial Web AI. And then what else are you associated with? Do you have any other connections? You've got the IEEE working group; anything else?

Action Perception Loop and Inference

Denise Holt

Yeah, I am. I'm a voting member on that IEEE working group. So I've been kind of watching and participating with these core standards being developed. And that in itself is an interesting process. But one thing I will say is that with all of the people involved in this process, there's a level of integrity, a level of awareness, and even introspection, right, in understanding that this is a monumental task and it's going to have an effect on future generations in a tremendous way. In a lot of ways, I feel really grateful to be a part of it. But I've felt like that about technology for quite some time. I feel lucky to be alive right now; we're in the greatest time of technological discovery, to be able to be a witness to all of these things happening and evolving.

Denise Holt

You know, I'm a tech geek girl at heart, so it's fun to me.

Brian "Ponch" Rivera

Hey, so I want to invite you to continue the conversation, but I want to transition over to some slides, which means our listeners may not get the full experience if they're listening to this. We'll try to do our best to walk through these slides. We'll also put this on our YouTube channel, and you can definitely have a copy of this as well. So before we jump into that: how close are we to active inference AI becoming as prominent as LLMs, ChatGPT-3, ChatGPT-4? Are we five years away, 10 years away, a week away? Where are we?

Denise Holt

No. So, as I mentioned earlier, Verses AI is launching their COSM operating system, with an app store on top of it, and the COSM operating system will have GIA, which is an intelligent agent interface for the public. So that is ushering in active inference AI in a tangible way for us, as humans, to now interact with, as far as the public being able to interact with it. And anybody can build one of these intelligent agents. These intelligent agents are going to be different from the apps you'd build in, say, the App Store or Google Play.

Denise Holt

With those, you're just building siloed software, right? These are intelligent agents that are aware of each other and aware of everything within the network, so they will be able to act accordingly. So you'll see this active inference process playing out, and as it interacts and acts within the network, it'll improve its learning ability, it'll improve its awareness, and it'll grow from there. It's been described to me that when Verses first launches this, the operating system, and people start building the apps, the differences in capability between these machine models and active inference are not going to be totally apparent. But it's been described to me like this: if you take a toddler and a chimpanzee, they seem like they have about the same amount of mental capability, right? But the difference is that the toddler is going to grow into an adult, whereas the chimpanzee stays the same.

Denise Holt

So we'll start to see it evolve over the next couple of years, but the access is coming very soon.

Brian "Ponch" Rivera

Okay, great. No, this is fascinating. So what I want to do next with you, and again, this is all voluntary, is dive into some slides, and I want to invite our guests to listen in and potentially shift over to the YouTube channel to watch this; we'll give links for that in the future. What we're going to do is go back and kind of recap what we talked about, but put some animations to this so people can understand it. We're also going to make some connections to action-perception loops. We'll draw out the free energy principle, then I'll go through Markov blankets and make a connection to the OODA loop, and then we'll show you what it looks like through the lens of the loop, to really see what active inference can mean without the math. No math involved today; math some other time. But we can always plug that in. Is that something you want to do?

Denise Holt

Absolutely Yes, okay.

Brian "Ponch" Rivera

I'm going to try to share a screen here, which could be kind of fun. I'm going to go to a window, I'm going to go here, and here we go, and it's coming up. We'll mark this. It's going to take a moment to come up, for whatever reason.

Denise Holt

Cool.

Brian "Ponch" Rivera

Transition over. Okay, we'll start there. It should transition over to the full screen here in a second, but we're going to talk about the action perception loop. Let me make sure this comes out correctly. Let's weigh something. One of the limitations of these new technologies is they don't like to play with each other. And there you go. Let me play this first, and we'll try it that way.

Denise Holt

This is so fun.

Brian "Ponch" Rivera

This is fun, all right. Waiting for the computer to come up, there we go. Okay, basic thing: action perception loop.

Denise Holt

Just the whole power system.

Brian "Ponch" Rivera

All right. So our guests heard us talk about action perception. This is the basic foundation of how we interact with our outside world. The idea is that we observe things. They come into our sensory signals or, excuse me, our sensory organs. We have five. We may have more. I'm not going to argue if we have five or 20. It just depends on your perspective of the world. Right now I do believe we have more than five, but that's another story And then we emit some type of action to change the external world. So this is the basic foundation of it.

Brian "Ponch" Rivera

Some of the notes you see up there, denise, are from different books that I have around here. One is from Bobby Azarian. We got stuff from David I forgot the name already. I apologize, but anyway I'm borrowing from different folks on here. Okay, so what's on here?

Brian "Ponch" Rivera

We have living organisms constantly engaged in reciprocal interactions with their environments. That's the truth. I mean, that's again science-based, right? We emit actions that change the environment and we receive sensory observations from it. From physics, we know that observations are interactions with the outside world. Observations are interactions, right? So we're interacting in some way. That's one way to think about it, and there's other things on here: any interaction between two physical objects is an interaction or observation. Interaction permits vitality and growth, while isolation leads to decay and disintegration. That's from John Boyd, right? So we have to interact with our outside environment, and the animation I have here is pretty simple. We have our sensory signals, our eyes, ears, excuse me, our sensory organs, eyes, ears, nose, mouth, skin, interoceptive capabilities, picking up some type of vibration, photons, whatever it may be, from the external world, right? And then we emit some action to change that external world. It could be closing an eye, it could be tilting our head, it could be actively listening to somebody, it could be swatting a fly, it could be whatever, right? It could be putting information on the internet. Those are all actions. So that's the action perception loop that Denise and I were talking about previously.
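For readers who like to see the loop in code, the reciprocal interaction Ponch describes can be sketched as a tiny simulation. Everything here is invented for illustration: the one-number "environment," the 0.5 action gain, and the preferred state of 0.

```python
# Minimal action-perception loop: an agent senses an external state,
# then emits an action that changes that state (illustrative sketch).

def sense(external_state):
    """Sensory organs pick up a (simplified, noise-free) observation."""
    return external_state

def act(observation, target=0.0):
    """Emit an action nudging the environment toward a preferred state."""
    return 0.5 * (target - observation)

external_state = 10.0
history = []
for _ in range(20):
    o = sense(external_state)   # perception: world -> observation
    a = act(o)                  # action: observation -> change
    external_state += a         # the action changes the world
    history.append(external_state)

# Reciprocal interaction drives the world toward the preferred state.
print(round(external_state, 4))
```

Each pass around the loop halves the mismatch, which is the whole point of the picture: observe, act, and the world you observe next is partly of your own making.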

Brian "Ponch" Rivera

So let's go ahead and do something to this. We're going to label the observations with an O and the action with an A, and this is going to lead to something very important. This leads to the free energy principle. So observations are represented by a sphere. All right, I'm making it a sphere on purpose, and for those of you who follow or understand John Boyd's OODA loop, this sphere is observations. And then for action, we're making it into a box, a 3D box, and that's action. That's the observe-act loop, if you will. So what this animation shows here is we're going to take these sensory signals and we're going to go ahead and put observation and action in there, and then eventually we're going to add a boundary in a moment, which is going to be absolutely critical, not only for how we perceive the world, but for active inference AI.

Brian "Ponch" Rivera

Okay, before we do that, just touch on the second law of thermodynamics; it's very important. We're not going to spend a lot of time on this. Going back to your story about cooking and free energy: useful energy is called free energy. There's a lot of energy in the atmosphere or in the environment. If it's useful to us, it's free energy. If it's not useful to us, it's just waste, right? So it kind of goes back to your analogy there. We must continually extract that free energy, otherwise we die. So one thing about the second law of thermodynamics when it comes to biological systems: we don't necessarily abide by it, right? We're not always going towards a state of higher entropy, or equilibrium. We don't want to go to equilibrium; we want to be far from equilibrium with our external environment. But there's a lot of second law of thermodynamics involved with FEP. I just want to touch on that here, because we're about to talk about the real fun stuff in a moment, and that's the Markov blanket. Could you walk us through your basic understanding of the Markov blanket, Denise?

Denise Holt

Yeah, so well.

Denise Holt

So I'll do it in regard to how the active inference works within the spatial web too.

Denise Holt

So you know, when you have these nested spaces, these nested entities, you know, and that could be an object inside of a house, inside of a city, inside of a country, you know like, or you know a restaurant inside of a skyscraper building, inside of a city, inside, you know, there's nested entities, right?

Denise Holt

So you know, the way the spatial web is structured is in a Holonic structure And you know, holonic architecture is basically where, you know, something can be a whole in and of itself and be inside of another whole that is a whole outside of it, right? You know, in this nested kind of structure And we're like that in our bodies, you know each cell is a whole cell, but you know the heart is a whole heart and it's made up of these individual whole cells, and then your body contains the heart, contains the cells, but each one is a whole, you know, nested inside of the other. And so the Markov blanket becomes a boundary around each nested entity And it determines, you know, it basically separates the internal state from the external state, in the sense of allowing the larger entity to govern the smaller entity, but allowing the inside entity to still govern itself, you know? So yeah, that's my understanding of it.
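Denise's holonic picture, wholes nested inside wholes, each wrapped in its own Markov blanket, can be sketched as a toy data structure. The class and field names here are invented for illustration, not anything from the actual Spatial Web protocol:

```python
# Toy holonic structure: each Holon is a whole in itself and may be
# nested inside a larger whole; its Markov blanket (sensory + active
# states) is the only interface between inside and outside.

class Holon:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent       # the larger whole that governs it
        self.children = []         # the smaller wholes it contains
        self.internal = {}         # internal states: private, self-governed
        self.blanket = {"sensory": {}, "active": {}}  # the boundary itself
        if parent is not None:
            parent.children.append(self)

    def depth(self):
        """How many blankets separate this holon from the outermost whole."""
        return 0 if self.parent is None else 1 + self.parent.depth()

city = Holon("city")
building = Holon("skyscraper", parent=city)
restaurant = Holon("restaurant", parent=building)

print(restaurant.depth())   # the restaurant sits two blankets deep
```

The key property is the one Denise names: each holon governs its own `internal` states, while the parent only reaches it through the blanket.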

Brian "Ponch" Rivera

So this is critical, because when we have multiple people looking at something, we want to be able to define that boundary, to say what's on the inside and what's on the outside, and your point about the heart or a cell, even the mind and the body, we can put a blanket around that. But we have to have a common understanding of what's on the inside and what's on the outside. And once we have that common understanding, we can now talk about how this works. I believe that's a common thing with the second law of thermodynamics as well, and the constructal law: we create a boundary, and the moment you put an input and an output on it, you just created a flow system. All right, this is really cool. So now we get into the constructal law, we get into complex adaptive systems, biological systems, all that. But we have to define, as an observer, where that boundary is. And I may be wrong on that, but I think Adrian Bejan said we have to define that system or that boundary. And that's that Markov blanket, which is, it's not a real blanket, by the way; I'm not sure why they call it a blanket, but it's a boundary. All right, that's how I kind of look at it. So yeah, the idea here that we have on the slides is the precondition for any adaptive system: it must enjoy some type of separation from the external environment, just like our bodies do, right? So we have to have some permeable separation from the external environment. It's a statistical boundary.

Brian "Ponch" Rivera

I believe it is. There's math involved in this too. I'm not going to go through the math, because I'll scare everybody; I don't do that. But there's something very important about having the boundary here, and I'm going to draw something while we've got this, hopefully it works on here. And that is the sensory states, which are listed as an O, remember, observation. These are called sensory states in the free energy principle, or observations in active inference. And then below that we have another set of states, which are called our active states. So sensory states in the green, active states in the red, and again, sensory states are listed with an O as a sphere, the way I have it drawn, and a box with a small a on it is the active states, right? So those are part of the blanket, the parts that interact with the external world, which is very important here. So I'm not sure if I have an animation on this or not, but the key point here is your sensory states, the observations, or O, and the active states, the A, are part of the blanket.

Brian "Ponch" Rivera

And then this blanket, by the way, and I don't want to go through this today, but it's an entropic blanket that we use. We borrow it from REBUS and the entropic brain hypothesis from Robin Carhart-Harris. He borrowed a lot of ideas from Karl Friston. So for those listeners that are out there trying to figure out what we're talking about, there's plenty more depth we can go into. We're not going to dive into that today, but there is a reason why it's an entropic boundary, the way we have it drawn. So here we go, a quick animation there. What we're doing here is we're starting with the sensory states and active states and we're putting a boundary, in this case around the mind and the head. But it's not the brain that we're targeting here, it's just a boundary. You can put the boundary around your body, you can put the boundary around your head and a tablet or a computer or a notebook, it doesn't matter, it's just a boundary. We've got to define that boundary. So that's what we're animating there, Denise. Any questions on that? Thoughts, concerns, questions?

Denise Holt

No, no, go right on ahead.

Brian "Ponch" Rivera

I wanted to do this with you because I'm like, hey, this is fun for me, because we get to have a conversation and go, where are we wrong on this? What are we messing up? Okay, let's see what's next on here. I'm going to just pause for a second and see what we can dive into next. I don't want to dive into the.

Free Energy and Active Inference

Denise Holt

So I do have, you know, just, I guess, an observation or a question, or maybe it's just kind of the way my brain works.

Denise Holt

So when I'm looking at that, you know and you're talking about the Markov blanket, you know, and the observation and the action being, you know, at the boundary point, right, yeah, you know. So basically, i mean that's saying that we can choose to observe or not observe, or take in an observation or not, right, or we can actually block off certain sensory observations. And what came to mind with me is if you put a blindfold on right, you're not taking in any sight, and what that's going to do is that's going to increase the likelihood for, you know, uncertainty of your environment, uncertainty, you know, it increases the ability to be surprised by things, because now you've blocked off the ability to see that.

Brian "Ponch" Rivera

Yeah, and I think there's a great example of that is when you go into a dark room. I think there's a lot of free energy in there until you turn on the lights, right. So I think that's where you're kind of getting at too And I may be wrong on that Yeah.

Denise Holt

Yeah.

Brian "Ponch" Rivera

Yeah, no, there's a, we're going to build on that, so we're going to come back to that: what are the couple of things we can do when it comes to changing the environment or adapting our internal model? So I'm going to continue this, hopefully this works out. Okay, back to the slides here. There we go, okay. So now we're going to dive into the free energy principle schematic, and this is a basic schematic that you'll find online now. It's been modified over the years. I think I have it captured correctly.

Brian "Ponch" Rivera

Now I may be off on a few things, and that's okay, but at the end of the day, what we're going to see is we're going to have the external states, the internal states, and then the two things we talked about earlier, being the active states and the sensory states. So we have a boundary around a brain and a head, if you will, where we can see the brain, which we're going to call just the internal states, which is everything inside the blanket. It doesn't mean the brain is an internal state. It means everything inside of the.

Brian "Ponch" Rivera

The Markov blanket is an internal state. We have the active states and sensory states that are part of the blanket, and then we have this thing on the outside which we call the external world, which could be a social system, a physical system, a culture, whatever it may be. It's something external to the blanket, right? All right. So this is kind of interesting. And, Denise, by the way, I'm still a student of this. I don't think anybody is as smart as the people that came up with this, but we're learning. This is interesting.

Denise Holt

I'm definitely learning.

Brian "Ponch" Rivera

So the external states contain hidden causes, right? And for those of you that read Anil Seth's Being You, he talks about this and tries to break it down; he does a great job breaking this down. There are causes in the external world that our sensory organs pick up, the causes that are happening out there, what we pick up through our sensory organs, right, or our sensory systems, which could be sensory capabilities that we use in artificial intelligence too. So those are hidden causes that are out there. What else can we pull out of this right now, before we animate this? There's a couple of things that may not mean a lot to a lot of folks, but internal states cannot directly change external states; they can do so vicariously by changing active states. And again, we'll have these slides available for folks.

Brian "Ponch" Rivera

External states cannot directly change the internal states, but can do so indirectly by changing sensory states. A lot of words there. We'll animate this. I'll take Denise through this real fast, and we'll explain how, what do you call it, an illusion can help us understand this in a moment. Okay, and I think this is where we want to get back to where we started, which is organisms follow a unique imperative: they want to minimize surprise of their sensory observations, minimize that uncertainty, minimize those mismatches. That's what we talked about in the first half of this.
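The two constraints being read off the slide, that internal states only touch the world via active states and external states only reach inside via sensory states, can be written as toy update rules. The linear dynamics and coefficients below are invented purely to make the routing visible:

```python
# Toy Markov-blanket dynamics: internal and external states never
# influence each other directly; all influence is routed through the
# blanket (sensory and active states).

def step(external, sensory, internal, active):
    sensory_next  = external                        # outside -> blanket
    internal_next = 0.5 * internal + 0.5 * sensory  # inside reads only sensory
    active_next   = internal                        # inside -> blanket
    external_next = 0.9 * external - 0.1 * active   # outside reads only active
    return external_next, sensory_next, internal_next, active_next

# Changing the external state does not change the internal update on the
# same step; its influence arrives one step later, via the sensory state.
same_blanket_a = step( 5.0, sensory=1.0, internal=2.0, active=3.0)
same_blanket_b = step(-5.0, sensory=1.0, internal=2.0, active=3.0)
print(same_blanket_a[2] == same_blanket_b[2])   # True: internal unchanged
```

Notice `internal_next` never mentions `external`, and `external_next` never mentions `internal`: that is the conditional independence the blanket enforces, in four lines.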

Brian "Ponch" Rivera

And I think that's what's going on when we talked about the difference between an LLM and active inference AI: LLMs cannot do this, right? They cannot reduce or minimize the uncertainty or surprise coming into their sensory systems. This is the big difference. Okay, any thoughts, questions, concerns there? This is fun, by the way. I'm having a blast.

Denise Holt

Yeah, no, I'm having a blast, you know. So that whole minimizing the uncertainty, you know, gives us a sense of temporary control, like narrowing down the options to get to that end goal, correct?

Brian "Ponch" Rivera

I said yes, I'm sorry, I was having a quick drink there. Oh, it's okay. Yes, absolutely.

Brian "Ponch" Rivera

And we'll dive into that again. This is really fun, okay. So maybe there's an animation here, I can't remember. There we go, quick animation: external, internal states. There's our observations going into our internal states. Here comes an active state, and now we're emitting some type of action on the external environment. We have not done anything to the action perception loop; we just defined a few more things within it. Foundationally, we put the sensory and active states on a blanket, within a boundary, and we put it around, in this case, the head of a human or a biological system, with space on the left or right side of it. Okay, let's go deeper. Let's talk about this next; this would be kind of fun. Let's come back to the monkeys. You were bringing up some chimpanzees. I don't believe these are chimpanzees, but we'll take a look at this again.

Brian "Ponch" Rivera

All right, before we dive into what I call the entropic OODA loop, I want to go back to surprise, active inference and free energy here. So surprise: how much an agent's current sensory observations differ from its preferred sensory observations, desired minus sensed states, right? What do we internally desire and what are our sensory organs sensing? The difference between them is surprise, or free energy, or uncertainty, or, from John Boyd's world, mismatches, right? This is absolutely critical. And this goes back to the blindfold that Denise brought up earlier, and that is, minimizing surprise is not something that can be done passively; it has to be done actively, and this is what active inference AI does, right? And this is the big difference between the LLMs and active inference AI. And there's more on here. I'll just read these off real fast. Agents must adapt to control their action perception loops. Remember, we haven't changed the action perception loop; we just added a few things to it to understand what's on the inside and what's on the outside.

Brian "Ponch" Rivera

Surprise minimization permits living organisms to temporarily resist the second law of thermodynamics. Why is that temporary? Well, all organisms die, for the most part. I'm not sure if that's 100% true, but what we want to do is, we don't want to be in equilibrium with our environment. If we are, you're six feet under. That's not a good thing. That could be a good thing. Living organisms cannot directly minimize surprise; they must minimize a proxy called variational free energy. And I believe free energy is properly called variational free energy, if I'm correct on that. And then we act to minimize expected free energy, reduce expected surprise or, more simply put, resolve uncertainty. These are all pretty cool things, right? So, so what? Who cares about all this? And this is why we teach the OODA loop.
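The slide's two definitions, surprise as the mismatch between the model and the observation, and variational free energy as a tractable proxy that upper-bounds it, can be put in toy numbers. This is a standard Gaussian textbook example, not anything from the episode: hidden state x ~ N(0,1), observation o = x + unit noise, so p(o) = N(0,2) and the true posterior is N(o/2, 1/2).

```python
import math

def surprise(o):
    """-log p(o), with p(o) = N(0, 2) under the toy generative model
    x ~ N(0,1), o = x + noise, noise ~ N(0,1)."""
    var = 2.0
    return 0.5 * math.log(2 * math.pi * var) + o**2 / (2 * var)

def free_energy(o, q_mu):
    """Variational free energy with approximate belief q(x) = N(q_mu, 1/2).
    F = surprise(o) + KL(q || p(x|o)), and the posterior is N(o/2, 1/2);
    for equal-variance Gaussians the KL term is (q_mu - o/2)^2."""
    kl = (q_mu - o / 2) ** 2
    return surprise(o) + kl

o = 1.3
assert free_energy(o, q_mu=0.9) >= surprise(o)   # F upper-bounds surprise
best = min((free_energy(o, m), m) for m in [0.0, 0.3, 0.65, 0.9])
print(best[1])   # 0.65: F is lowest when q matches the true posterior mean
```

That is the whole trick the transcript gestures at: the organism can't compute surprise directly, but it can push F down, and since F is always at least the surprise, squeezing F squeezes surprise too.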

Brian "Ponch" Rivera

If you don't understand this, if you can't understand this, if you don't spend any time understanding this, people are going to take advantage of you. You're going to lose. You're not going to understand how to win in sports. You're not going to understand how to win in organizations. You're going to buy things that cost energy and time that do not work. If Denise is right, and I believe she is, and Verses, Karl Friston and Gabriel Rene are right, what's coming with active inference AI is going to reduce the energy cost for so many things, and then there could be good or bad out of it. I'm not sure. I'm not sure, but for the most part, what I understand is there's a lot of upside good on this. And Denise, are there a lot of people talking about the downside to active inference AI right now, like some of the bad things that could happen? No, no.

Denise Holt

I mean, you know, it's interesting because, like Dan Mapes, he's one of the co-founders of Verses, and you know, he describes active inference AI within the Spatial Web network as a nervous system for the planet. It's going to give us so much ability to actually, you know, optimize all of these operations and all of these problems that we face, optimize solutions for them, and the AI itself is going to be able to have this self-organization to, you know, maintain systems and to detect issues and correct them as it goes, in real time.

Denise Holt

So it's going to give us so much capability to, you know, resolve a lot of the issues that we have that we really couldn't do otherwise. you know, in the same way that you know these LLMs are so powerful for us to be able to save time and be efficient and, you know, in our tasks and our creativity, this is going to do it on an operational level. Yeah.

Brian "Ponch" Rivera

So, when you think about what humans are about, and this is my belief and maybe my reality, that is, we explore. You know, we are here to go further beyond, maybe our own atmosphere, our own universe, if you want to put it that way. We have buddies that are, well, one of our friends is going to lead the Artemis mission to the moon. All right, I flew with him years ago. That's what he wants to do, lead these missions. Right, that's cool.

Brain Perception and Inference

Brian "Ponch" Rivera

But what I'm getting at is we're explorers, we have to expand our knowledge, and I believe what we're seeing here is, if we understand not just active inference AI, but what John Boyd gave us, what people are talking about, if we can simplify them and help people understand them more, we can, I'm going to use the term here, yokoten, or spread the knowledge of what's coming next, and we can leverage it together and be a pretty amazing society. Let me connect one more thing here. Competition is one thing. Conflict is one thing as well. At the end of the day, we understand that conflict ends up becoming collaboration. Right, collaboration is more important, and that's what I believe is going to happen here: with a solid understanding of how things work, we're going to collaborate more as a global society, and who knows what the limits are?

Brian "Ponch" Rivera

And that's why this is so exciting. I'm going to go build more slides here. We talked about that, okay. So let's talk about the evolution of the OODA loop in AI, and I'm just going to pause there while I get the slides sorted out here. But I want to take you through the process a little bit, and I'm going to do that with, let's do this. Yeah, this is Adelson's checkerboard. I'm going to share the slide with you here on screen.

Brian "Ponch" Rivera

Many people have seen this from either David Eagleman or his books, or Anil Seth, or, if you've ever been with me in a class, I do this all the time. This is pretty basic, right? The question is, when you look at these two squares, and for those of you that are listening to the podcast, you have a checkerboard with a ball that's casting a shadow upon a couple of squares, or one square, and the other square is not covered. They're listed as A and B. The question you ask is, which square is darker? And, Denise, it doesn't matter if you get it right or wrong. Everybody usually gets this right.

Denise Holt

The crazy thing is, I know what this is getting to, because I've seen it. It's like, you know, where you have the split square, but it's the surrounding colors that make your perception off. They're the same color. Yeah, they're the same color.

Brian "Ponch" Rivera

So you ask this question, which square is darker? And generally, I've never had an audience that didn't answer that way: that's the darker square, right, which I just highlighted.

Denise Holt

It appears, so It appears so right.

Brian "Ponch" Rivera

So what I did is I just matched some colors here. I'm going to slide them over, and what we see here is that the square that's underneath the shadow of the ball, in this case, is actually the same color as the square that's outside the shadow. And going back to Denise's point, even though she knows this, she's seen this before, even though you know this, your mind still plays the same trick on you.

Denise Holt

My mind is the same.

Brian "Ponch" Rivera

Right, it doesn't change, and the reason for that is anything inside of a shadow cast by an object appears to be darker, so we compensate for that, right? So we know that that's active inference. That's our brain working. We're going to talk about that here, and this is a great way to explain to folks what active inference is really about. So that's Adelson's checkerboard, okay. So let me connect this back to John Boyd's OODA loop, and I'm going to draw a few things out here as we go.
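One way to make the compensation story concrete is a caricature in code. All the numbers below are invented: the eye measures identical luminance for both squares, but the brain reports reflectance by discounting the illumination it expects, so the shadowed square comes out perceptually lighter.

```python
# Same measured luminance, different inferred surface color.
# luminance = reflectance * illumination, so the brain's estimate is
# perceived reflectance = luminance / expected illumination.

def perceived_reflectance(luminance, expected_illumination):
    return luminance / expected_illumination

luminance_A = 120.0   # square A, in full light
luminance_B = 120.0   # square B, under the ball's shadow

# Prior knowledge: shadows dim things, so less illumination is expected there.
illum_open   = 1.0
illum_shadow = 0.6

percept_A = perceived_reflectance(luminance_A, illum_open)     # 120.0
percept_B = perceived_reflectance(luminance_B, illum_shadow)   # 200.0

assert luminance_A == luminance_B   # the pixels really are identical
assert percept_B > percept_A        # yet B "looks" lighter
```

The illusion survives knowing about it because the division by expected illumination happens below anything you can consciously switch off.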

Brian "Ponch" Rivera

Here we have a Markov blanket. It's an entropic, excuse me, it's an entropic boundary, and with it we have our sensory states and our active states, and we have this little thing in the middle which is called orientation. That orientation you can think of as an internal model of the external world, right? Okay? So I think you used the phrase earlier, digital twin, right? So I don't know if this is accurate enough, but sometimes we use a digital twin to understand what's going on in the real environment. I'm not saying it's exactly like that, but think of this as your internal map of the external world. We call it orientation. There's a lot of cool things within it. We're not going to dive into those today. You can see our sensory.

Brian "Ponch" Rivera

I guess not signals yet, but they're the photons, the molecules, the sound waves, the vibrations of the world that are hitting our sensory organs, in our case our biological systems. For snakes they're going to have infrared capability, for bats echolocation, things like that. So every type of animal has a different type of capability. Humans have great eyesight. For the most part, our hearing is bad At least that's what my wife tells me And then sense of smell is not as good as a sense of smell of a dog, anyway. So we're going to take you through this real fast using the Adelson's checkerboard analogy in a second and hopefully have the correct animations for this.

Brian "Ponch" Rivera

All right, so let's see how this one goes. There's our animation. We've got our sensory signals coming into our sensory organs, excuse me, the photons hitting our sensory organs, the vibrations hitting our organs there, and that's on the left side, and we're just going to transition this into the photons hitting our eyes, and that's what you see on the left side of the screen. You've got a globe, you've got these things hitting the O, or observation, portion of the OODA loop, and what happens next is pretty cool. There's our head, the connection to the eyes, if you will.

Brian "Ponch" Rivera

This here is what are known as sensory signals. Sensory signals are a currency within the body, in that it doesn't matter where they come from; our brain interprets them based off of previous experience, you know. So I believe the signals from our eyes are pretty much the same as the ones from our ears, or from our skin and all that. They're sensory signals. It's the currency that is needed in our body, right? And currency being used from the concept of flow and flow systems. Okay, so we get these. It goes through some calculation, a generative process, and this is what's really cool about this. At some point, we make a decision or a prediction or a hypothesis about the causes. Remember what I said about the causes that are coming from the external world. We make a prediction of them.

Denise Holt

Yes.

Brian "Ponch" Rivera

And this is this loop right here. So, in this loop, and I'm not sure I can do it right here, yeah, this goes around and around and around, and this is known as prediction error minimization, right? The difference between what's coming in from the outside world through our senses and what we expect to see. And, going back to Adelson's checkerboard, it's the same thing. We have a prior expectation of what we expect to see, and that's why we can't see that the colors in Adelson's checkerboard are the same. We see them as two different colors, excuse me. But that goes through a process, and this is predictive processing; it's one portion of it, right? So, Denise, is this striking a chord with you? Is this kind of fun?
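In its simplest form, the loop being traced here, cycle the prediction against the incoming signal and shrink the mismatch, is just gradient descent on prediction error. The signal value and learning rate below are invented for illustration:

```python
# Prediction error minimization: repeatedly nudge the internal
# prediction toward the sensory signal until the mismatch is small.

sensory_input = 0.74   # what the senses report
prediction = 0.0       # the brain's current best guess
learning_rate = 0.2

errors = []
for _ in range(40):
    error = sensory_input - prediction    # the mismatch ("surprise")
    prediction += learning_rate * error   # update the belief to reduce it
    errors.append(abs(error))

# The error shrinks on every pass around the loop.
assert errors[0] > errors[-1]
print(round(prediction, 3))
```

Each cycle the residual error drops by the same factor, which is why the "around and around" of the loop ends in a settled percept rather than endless churn.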

Denise Holt

Yeah, absolutely Yeah.

Brian "Ponch" Rivera

All right, by the way.

Denise Holt

I am having fun.

Brian "Ponch" Rivera

What's that?

Denise Holt

I said, I am having fun. That was fun.

Brian "Ponch" Rivera

I am showing you the OODA loop without talking about the OODA loop. This is really fun. We'll continue building this out as those signals go around. I'm actually going to do this. Something else happens here. Something has to go to the active states. In our active state, in this case, let's use Adelson's checkerboard as an example. In this case, we, which is this line right here.

Brian "Ponch" Rivera

This is a counterfactual, a what-if scenario. This goes back to Denise's example of thinking through what I need to do next, not actually acting on the external environment: but maybe if I tilt my head, close my eyes, use another sensory capability, maybe if I pay attention more, that's a what-if scenario. Using Adelson's checkerboard: what if I squint? What if I turn my head? Not actually turning my head, but what if I do this? That's going on inside your brain. I believe these are called conditional predictions, what-if scenarios. And going back to Denise's concept of scaling, or nested loops, if you scale this to a team, this is planning. This is a planning process. You're going to plan what you're going to do to the external environment.

Brian "Ponch" Rivera

Very, very powerful. Active inference is predictive processing plus a counterfactual. But wait, there's more. I hope I can do this. Ready for the more part? There's this piece here, which is going to be hard to see on here. There's the actual piece where we turn our head, pay more attention, and that's the loop that's coming from the outside of the active state, the box on the right, and it's going to the outside world and it's interacting with that. It changes the sensory capabilities. We're doing something to the environment, which is changing how our sensory organs pick up things. I don't know if I did a good job explaining that.

Brian "Ponch" Rivera

But that is basically what active inference does. But wait, there's more. Let me try to grab this slide here. At some point we have to have some type of perception of reality. That's what this loop does here. Here's the famous "reality is a controlled hallucination" loop that Anil Seth talks about, David Eagleman talks about, a lot of neuroscientists talk about. You have to have some type of construction of reality, and that is top-down, inside-out construction of our external world. That is perception. Up here is our perception. Over time it gets better and better.

Brian "Ponch" Rivera

It's interesting, yeah.

Denise Holt

Yeah, I was going to say, what's interesting is, I believe that would be the Spatial Web protocol itself, as far as what that does for active inference, because it gives the world model. It provides an understanding of the way the world works, in that all of the context is baked into each thing, and then the relationships between the things, and everything from identity to location to activity, to even things like temperature and pressure. All of these things provide that understanding of reality. That's why active inference is so different than these other machine models, because they have none of that, they have no ability to have that. There's nothing for them to measure the world against.

Brian "Ponch" Rivera

They're not updating this by themselves, and that's what the OODA loop is about: you've got to update this thing called orientation, or what is known as the internal map. I want to point out something else that's cool. Going to your point, this perception of the external world, or reality, is our mental models, the schemas, the repertoire, the things that are embedded in there that we want to update as well, because if we don't, and remember, we all see reality differently, we need to update that thing. This is also known as the default mode network of the brain. This is really cool.

Brian "Ponch" Rivera

Your ego is here. When you're in a stuck state or you're going through trauma, which we'll talk more about in later episodes with some trauma specialists, we need to suppress this thing. You need to suppress this to get down to these states down here, to change your model of the world. It's absolutely mind-blowing the way this works. That's pretty wicked. There's a connection back to REBUS and the entropic brain hypothesis through Carhart-Harris, and the work that Anil Seth did is baked in right here, as well as everything below here. You had something to say? Interesting.

Brian "Ponch" Rivera

It's pretty cool. There's one last piece.

Denise Holt

I just said interesting. I'm like I want to hear more.

Brian "Ponch" Rivera

All right, then the last piece here is this one. This one is another implicit guidance and control aspect of the OODA loop. That's the piece I just drew up here. You can call it the autonomic response. I tell my kids, when they learn how to play basketball or dribble: you don't want to be down here trying to figure out how to dribble. Your body needs to know how to do this. That's our response to the Adelson checkerboard illusion: the squares look different even though we know they're the same color. That's our autonomic response.

Brian "Ponch" Rivera

We can get into proprioception and interoceptive capabilities and nested OODA loops and things like that. But this is John Boyd's OODA loop in a nutshell, without talking through all of it: it's observe on the left, then orient, decide and act. Orientation, you're going to love this, orientation determines how we make sense of the world, decide in it and act in it. That is why active inference and the OODA loop need to be brought together. There's so much more to dive into. Denise, I just wanted to share this with you because I've been dying to get this out there. But it needs to be done with somebody who is, one, not familiar with the OODA loop and, two, very familiar with what's happening with artificial intelligence. I'm going to stop sharing there, and if I can figure out how to do that, there we go. That's that. Now, I want to dive deeper into this in the future with you, if we can, Denise, and make some more connections as we go. Sure. But thoughts, questions, concerns?
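The observe-orient-decide-act cycle described above can be sketched as a tiny action-perception loop in code, in the spirit of active inference's prediction-error minimization. This is a hedged toy, not Boyd's formulation or any published algorithm: the scalar world state and the 0.5 gains are arbitrary illustrative assumptions.

```python
# Minimal sketch of one observe-orient-decide-act cycle for an agent
# that tracks the world with a simple internal model.
# Everything here (scalar state, gains of 0.5) is an assumption.

def ooda_step(internal_model, observation):
    # Observe: compare the model's prediction against the sensation.
    prediction_error = observation - internal_model
    # Orient: move the internal model toward the observation
    # (perceptual inference, i.e. updating orientation).
    internal_model += 0.5 * prediction_error
    # Decide + act: choose an action that would reduce the
    # remaining error (acting on the world instead of the model).
    action = -0.5 * prediction_error
    return internal_model, action

model = 0.0
for obs in [1.0, 1.0, 1.0]:
    model, action = ooda_step(model, obs)

print(round(model, 3))  # prints 0.875: the model has converged toward 1.0
```

Each pass shrinks the gap between model and world, which is the loop-level sense in which "over time it gets better and better."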

Denise Holt

Oh no, I love seeing all of that laid out, because it gives me a deeper understanding, too, of how it fits together, because I see the mechanics within active inference and the free energy principle, and then how it becomes this AI, how it interacts within the Spatial Web protocol, and how all of that together lays the foundation for a distributed network of intelligence that can update in real time and everything else. And it's really interesting to see you lay it out like that, because then you see where the sense-making comes in, and where all these other aspects fit. It's almost like it's giving me a bigger pattern to look at, one that is present in the other, but broken down even further.

Brian "Ponch" Rivera

Yeah. For my understanding, we talk about distributed leadership quite a bit, and there's a connection there. So we're talking about human performance in organizations, creating agility, innovation, resilience, safety, employee engagement, psychological safety, whatever it may be. You can go into so much more if you understand these basic ideas. And this is where I want to go with all this: if I can explain to your leadership team in a matter of minutes, or 20 or 30 minutes, how you perceive reality and how that connects to active inference AI, we're going to be able to create common protocols or behaviors or interactions, the way we're going to work with AI in the future.

Brian "Ponch" Rivera

Right, and this is absolutely critical as we transition, I don't know if it's a transition or a phase transition, but as this new reality emerges all around us, you either learn this or you get left behind. Right? You interact with your external environment, or you go to your dark room or your oxbow lake and hide in there, and let's see what happens. I will tell you what: I'd much rather take people through this to show them what's possible. And if I'm wrong, I'm fine. You know, I'm fine being wrong.

Denise Holt

Well, it gives you an understanding of the optimization process that goes on within each point of decision-making within your mind. If you understand how that works, then you can better optimize your outcomes. You can be more aware, you can have more self-awareness, to where you can really put it into action, rather than giving in to distractions and irrelevant things. You'll understand focus better. To me, that's what this whole process really comes down to: it's like an explainability of focus and action.

Spatial Web and Active Inference Blog

Brian "Ponch" Rivera

Yeah, I love it. So, hey, I want to thank you for being on the show today. This was just a good conversation to bounce some ideas off you and talk about what's possible. I think I want to have you back on the show in the future to dive deeper into where we are with active inference AI, and to do anything we can to help you get the word out there. And, as you know, our audience is pretty broad. A lot of folks will come on and go, why am I learning about psychedelics? Or why am I learning about leadership and mission command and all these other things? Well, because it's part of the OODA loop. You've got to understand how these things work, but it's not easy to do. So, yeah, if there's anything we can do for you on Spatial Web, or for your clients and guests and listeners, let us know.

Denise Holt

Thank you so much, and thank you so much for having me today, Brian. This has been a blast, and I'd be happy to support you guys in any way I can, too. So thank you so much.

Brian "Ponch" Rivera

Oh hey, before I forget, our listeners can find you on the Spatial Web AI podcast, correct?

Denise Holt

Yeah, so I'm on YouTube, and it's also on Spotify, Apple, anywhere else. I also have a blog under my name, Denise Holt, and I do a ton of writing on the Spatial Web and active inference AI and what's coming, so my blog's a great resource that points to all of the external stuff, too.

Brian "Ponch" Rivera

Okay, awesome.

Denise Holt

And I'm @deniseholt_ on Twitter.

Brian "Ponch" Rivera

Perfect, all right, and we look forward to those blogs. I read the one on LLMs, and you had ChatGPT-4 write about the difference from an LLM, is that right? Yeah, pretty cool. So practice what you preach, right? Figure out what the system knows about itself. Awesome. Well, I appreciate your time today.

Denise Holt

Thank you so much. All right, take care.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Huberman Lab (Scicomm Media)
Acta Non Verba (Marcus Aurelius Anderson)
No Bell (Sam Alaimo and Rob Huberty | ZeroEyes)
The Art of Manliness (The Art of Manliness)
MAX Afterburner (Matthew "Whiz" Buckley)
Inchstones with Sarah | Autism Parenting & Neurodiversity Insights (Sarah Kernion | Profound Autism Mom and Advocate for Neurodiversity)