
No Way Out
Welcome to the No Way Out podcast, where we examine the variety of domains and disciplines behind John R. Boyd’s OODA sketch and why, today, more than ever, it is imperative to understand Boyd’s axiomatic sketch of how organisms, individuals, teams, corporations, and governments comprehend, shape, and adapt in our VUCA world.
Human-Agent Team of Teams: Active Inference AI & The Spatial Web w/ Dr. David Bray and Denise Holt
The technological paradigm we've grown accustomed to—with centralized AI models hallucinating answers and requiring massive energy consumption—is about to undergo a profound transformation. This fascinating conversation explores how active inference AI, inspired by the principles of biological intelligence, offers a fundamentally different approach that could reshape our technological landscape.
Dr. David Bray articulates the critical distinction between current AI systems that merely pattern-match based on past data versus the emerging active inference models that continuously predict, observe, and update their (Orientation) understanding of the world. These systems don't just regurgitate information; they develop mental models that allow them to navigate novelty and uncertainty just as our brains do. Meanwhile, Denise Holt explains how the newly ratified spatial web protocol creates the infrastructure for these distributed intelligence systems to operate across networks with shared context and meaning.
What makes this shift particularly compelling is its potential to restore human agency in technological systems. Rather than the surveillance capitalism model that has dominated recent decades, active inference AI within the spatial web framework enables pre-compute permissions and constraints, allowing individuals to specify what they want to happen (or not happen) with their digital identity. This represents a fundamental realignment of power dynamics in our technological future.
The implications extend beyond individual experience to organizational performance, national security, and global commerce. From detecting weak signals that might indicate emerging threats to managing complex adaptive systems like supply chains, this approach enables decentralized intelligence that can process information closer to where it's needed—at the edge.
Ready to explore this new frontier of AI? Connect with Denise Holt at Learning Lab Central to join a community focused on active inference and the spatial web.
NWO Intro with Boyd
March 25, 2025
Find us on X: @NoWayOutcast
Substack: The Whirl of ReOrientation
Want to develop your organization’s capacity for free and independent action (Organic Success)? Learn more and follow us at:
https://www.aglx.com/
https://www.youtube.com/@AGLXConsulting
https://www.linkedin.com/company/aglx-consulting-llc/
https://www.linkedin.com/in/briandrivera
https://www.linkedin.com/in/markjmcgrath1
https://www.linkedin.com/in/stevemccrone
Stay in the Loop. Don't have time to listen to the podcast? Want to make some snowmobiles? Subscribe to our weekly newsletter to receive deeper insights on current and past episodes.
Brian "Ponch" Rivera: Yes, but let's talk about hallucination real fast. You brought it up, Dr. Bray. One of the big things we've learned on this show, having a vast array of guests, is that perception is a controlled hallucination, that we as humans don't actually experience the world like a video camera; we experience it top down, inside out, that we predict things. So, with that being said, Dr. Bray, can you tell us what the latest and greatest in AI looks like as far as hallucinations?
Dr. David Bray: Well, so yes, and one, thanks for having me here. It's great to be here with you and Denise. I would say, in some respects, we shouldn't be surprised that AI, much like any of the digital tools we're rolling out, influences our perceptions of trust, and I define trust as the willingness to be vulnerable to the actions of an actor we cannot directly control. That could be an AI, but it could also be an organization, it could be a person, it could be a company. And it's worth knowing it's been shown that we humans are willing to trust an actor we cannot directly control, be it human or a machine.
Dr. David Bray: If we perceive benevolence, perceive competence, and perceive integrity. Which gets to your question about hallucinations: how do we perceive the benevolence of any large language model or any other flavor of AI out there? Because it may sound chipper, but it's been programmed to sound chipper; it's not actually able to demonstrate benevolence or not. And then perceived competence. Well, several of these AI models, including generative AI, will tell you straight-faced something that looks real, but it may not actually be sourced, because, at least for generative AI, it's just doing very fancy multidimensional pattern matching.
Dr. David Bray: And finally, integrity. I mean, we've all had experiences where we interact with a chatbot and we say that's wrong, and it says you're absolutely right, and then it proceeds to give you another answer that's wrong. And you say that's wrong still, and it flips again. So, and I don't want to just rail on generative AI, but we have to recognize that generative AI is generative in nature. It is not truth-seeking in nature. And so we may need other approaches to AI, and I know we're going to get into some of those, that are actually more grounded in trying to explore the world around us, the environment around us, and then get to better assessments of: can I trust this? Is this more reliable information?
Brian "Ponch" Rivera: That's fantastic. I do want to dive into explaining the difference between large language models and maybe what's coming down the pike with what's called RL, deep learning, and active inference AI. And before I do that, I listened to your episode with Denise not long ago. I think it was recorded in the last two weeks or so, but you made a connection then, and you made one today, which is it could be an AI, it could be an organization that we trust in, and you referenced Team of Teams and Stan McChrystal. And I'm holding in my hand the one that goes back before that.
Dr. David Bray: Hoorah, yeah.
Brian "Ponch" Rivera: I have a copy of that too. Yeah, so this is Network Centric Warfare, right? This is the genesis of what we were trying to do, I think it was about 30 years ago. It wasn't about developing better information systems, but developing better interactions between the humans, and then distributed leadership, pushing information to the edge, decision-making to the edge. So can you talk about how important understanding that is to where we may go with this new type of thinking in AI?
Dr. David Bray: Absolutely, and I'm really glad you asked that, because if there is one common theme to my life, it is all about trying to do everything I can to empower the edge. So in a past life I was fortunate enough, I guess, to be selected to join what was called the Bioterrorism Preparedness and Response Program. This was 2000. Long story short, we were only 30 people, and we were supposed to figure out what we would do if a bioterrorism event ever happened in the United States. So we did, and I was an early adopter of agile development, which, if folks aren't familiar, came out in 2001. But I was being penalized. I was being told, why aren't you doing the five-year enterprise architecture? Why aren't you doing the three-year budgeting cycle? Why aren't you spelling out all your requirements? And I said, well, because we do not have a deal with bad actors or Mother Nature not to strike until we have our IT systems online. So I was a bit of a heretic. And it was scheduled a week in advance for me to brief the CIA and the FBI as to what we would do if a bio event happened. That briefing just happened to be scheduled for nine o'clock on September 11th, 2001, which of course never happened that morning. Had we not done agile development, when the anthrax events happened in October and November, we would have had to do 3 million environmental samples and clinical samples by fax. So it just shows.
Dr. David Bray: Sometimes, how do you figure out who's the heretic versus who's actually trying to help the organization move along? And that gets to your question about network-centric approaches, which is, I submit that the best leaders nowadays don't tell people what to do. They set the end goal: here's the vision we're trying to achieve, here are the boundary conditions, the ethical boundaries, the policy boundaries, whatever. And then let's explore the space, both with humans and machines.
Dr. David Bray: And you look at what we were doing in Afghanistan. We were trying to be top down, we were trying to do nation-state building, and I was scratching my head in 2009, 2010, going whiskey, tango, foxtrot. This is not a nation, it's 13 different tribes, and we really do have to empower the edge. And I think folks got it in Afghanistan. The trouble was communicating it back here in the US. People still like their hierarchy and their command and control, and I get it. That's how we train. But in a world that is changing so rapidly, whether it's human actors, machine actors, or a combination thereof, because increasingly it's going to be both, we have to have new approaches to, I won't even call it management, it's really facilitation: how do you encourage your human actors and your machine actors to achieve those outcomes without being very explicit that X leads to Y leads to Z? Because anyone who thinks they know how the world's going is probably deceiving themselves.
Brian "Ponch" Rivera: Right. So your connection back to agile software development is fascinating. I used to work for Jeff Sutherland, the co-creator of Scrum. I spent a lot of time in that space applying ideas from aviation, things like crew resource management, team science, mission command, what you just talked about, and the connection back to General Stan McChrystal's work, understanding complex adaptive systems, wicked problems, those types of things. It sounds to me like that isn't going to change, that we're going to need more of those insights, and I think that's what Denise is doing right now with her work. I've been following Denise for a couple of years now. She's been on the show a few times, I've been on her podcast. But I'd like to throw this to Denise and have you paint a picture: how do you walk into a room of executives and let them know what the difference is between a large language model and what we're seeing with, I'll call it the free energy principle or active inference influenced AI? How do you do that? What's that like?
Denise Holt: It's hard. Well, actually, it's not that hard. It's just literally getting the point across that what we're talking about is not just another framework for them to be looking at, another tool for them to be considering. It is literally a complete mindset shift. We are embarking on a new era of not just AI, but technology as a whole. Our internet is evolving. And the minute I start talking like that, it perks their ears up, because they're like, what?
Denise Holt: Because I think we all understand the impact that the change in computing had with the evolution of the World Wide Web, and then when we went into mobile and cloud computing and all of those. That's really what's happening right now. This is a blue ocean opportunity for technology, and you have to really understand that education is one of the biggest factors in it, because it's all brand new. So you need to really wrap your mind around it first, and then see how you can start to pilot projects in this space and start to really think about evolving operations into this space. But it really does start with pointing out that this is entirely new, and so you need to be receptive to what that means for technology in your organization.
Brian "Ponch" Rivera: So the failure mode of not understanding this: say a large construction company or a large oil and gas company doesn't grasp this new type of AI, this move away from LLMs and toward active inference. If they don't grasp it, is that detrimental to the organization? And let's talk about the DoD, Dr. Bray. If they don't get this and the Chinese understand it before we do, talk about what happens there.
Dr. David Bray: Yeah, so, whether it's DoD, which obviously is here to help defend US interests both at home and abroad, or construction. It's interesting you say that. I was actually just looking at this earlier today: construction sees $1.6 trillion a year wasted because we aren't treating construction and engineering efforts as connected systems of systems. Instead, we're treating them as individual projects, and we're surprised when materials show up and sit there for five or six weeks before they're finally used, or we don't have things when we actually need them. And so, whether DoD gets this or not, or construction gets this or not, we're going to see a whole lot of waste. In DoD's instance, that waste could actually affect being able to continue to have free societies, not just the US, but other free societies around the world.
Dr. David Bray: Because, as much as I understand why the People's Republic of China, with their 1.3 billion people, prefers to say it's about the collective versus the individual, and freedoms are secondary as long as we have stability, I am biased because I was born in the United States and I actually value freedoms. And so I think what we need to think about here is how we roll out AI. This is why Denise's superpower is so amazing: she is literally asking executives, who may have been inundated with large language model and gen AI hype for the last two and a half years, and/or invested in it, to shift their investments. Her superpower is really making the case to them: what are approaches to AI that are more consistent with the values of free societies, that actually do empower the edge, so that if you are an individual citizen or a community, you can act without having to connect back to a centralized cloud server?
Dr. David Bray: You can do it if you want to, but you don't have to. Or if you're a construction company and you want to work in austere or disconnected environments, or you don't want your intellectual property to be on some centralized server, you can do that as well. So this is why I don't get invited to certain AI parties, and I'm okay with that. At the end of the day, this is more than just return on investment. This is really a question of: if we need technology to embody the values of free societies, what's the best approach?
Brian "Ponch" Rivera: So what you just invoked in the last couple of minutes, in my thought process, goes back to the constructal law, which is built on the second law of thermodynamics. That's Adrian Bejan from Duke University, did I say that right? What he identified is that for living systems to persist through time, they have to evolve, right? They have to engage with their environment, that type of thing. The way I look at LLMs, and the way I look at a closed observe-orient-decide-act loop, is that it's a closed system, and we want an open system that engages with the environment and adapts and modifies and allows us as a society or civilization to persist for as long as we possibly can, right? So that is critical in an organization as well.
Brian "Ponch" Rivera: If they approach it as a closed system, where they look at their customers as being outside of their system, or, and I'm going to bring something up from John Boyd's world and the Toyota Production System: control is always outside and bottom up, right? We have to have that connection to our customers. We need to be open to the external environment. Control in some instances may come from outside information that may be connected to other things we talk about on the podcast, like psychedelics or meditation or sacred geometry and things like that. But regardless, we have to find ways to evolve, and the way we do that is through an open system. And I think that's what I heard from you when you said the difference between us and what's going on in China, potentially, is we value that open system approach, and therefore we should value it when we adopt or incorporate AI into our system. Denise, any thoughts on that?
Denise Holt: Yeah, yeah. So what's really interesting is, I think that when the internet first came on the scene, there was this idea of decentralization. That was core to the idea. But as it evolved, and all of these big tech companies rose from the innovation potential within this network, the business models became very centralized, right? And so I think that's why these centralized approaches to AI have been the first things we've seen come out of all of this technology, because it matches that business model.
Denise Holt: But what we really need, to enable AI to be adaptive and beneficial worldwide, is for it to be decentralized, distributed, able to operate at the edge, able to help local communities of people, wherever they are, with the problems that they have, and share knowledge that can grow.
Denise Holt: So, in order to do that, you need something besides the World Wide Web, and you need AI that can actually be distributed through networks and still have this layer, this organism of intelligence, that can share information and operate across it, and do so in a way that is safe and trustworthy and fair. In order to do that, it has to mimic the way human intelligence works, and we're not going to get that from current AI. It has to be something entirely different that actually works with mechanisms that mirror biological intelligence, which is what active inference does. And then there's the spatial web protocol, this new protocol layer that has now been ratified by the IEEE as the third foundational layer, layering on top of HTTP and HTML: HSTP, the Hyperspace Transaction Protocol, and HSML, the Hyperspace Modeling Language. That now enables a universal domain graph and a shared context substrate that will give shared meaning to agents working within that network. So that's hugely critical to this entire framework, this infrastructure layer that we're evolving into for these systems.
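To make that shared-context idea concrete, here is a rough sketch in Python of nested domains carrying queryable shared meaning. To be clear, this is an illustration of the concept only, not the actual HSML or HSTP formats; the IEEE spec defines its own syntax, and every name and field below is hypothetical.

```python
# Hypothetical sketch of the idea behind a shared context substrate, NOT
# the real HSML/HSTP wire formats: entities modeled as nested spatial
# domains with machine-readable context that any agent can query.

from dataclasses import dataclass, field

@dataclass
class Domain:
    """A space (or thing) with shared, queryable context."""
    name: str
    context: dict                                  # shared meaning, e.g. {"type": "loading_dock"}
    children: list = field(default_factory=list)   # nested sub-domains

    def find(self, predicate):
        """Let any agent search the graph by shared meaning, not by URL."""
        hits = [self] if predicate(self.context) else []
        for child in self.children:
            hits.extend(child.find(predicate))
        return hits

site = Domain("construction_site", {"type": "site"}, [
    Domain("dock_a", {"type": "loading_dock", "material": "steel"}),
    Domain("dock_b", {"type": "loading_dock", "material": "paint"}),
])

# Two different agents can resolve the same question the same way,
# because meaning lives in the shared graph rather than in each model.
docks = site.find(lambda ctx: ctx.get("type") == "loading_dock")
print([d.name for d in docks])  # ['dock_a', 'dock_b']
```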
Brian "Ponch" Rivera: So, thinking about complex adaptive systems, we have agents and the interaction between the agents. I've learned over the years that it's not the quality of the agent but the quality of the interactions that matters. So, if you think about teams and teamwork, we want those interactions to be high quality. That can allow what's called a team of non-experts to beat a group of experts: that team of non-experts can actually win because of the quality of interactions. We've seen this in operating rooms, we've seen it in many places, we've seen it in sports. So, if I'm hearing you correctly, the spatial web focuses on those interactions, right? Okay. And then I want to clear up a few things about agents, because people talk about agents all the time. What do we mean by agents? Are humans agents? Is Rosie the robot, the MB500 that she was, an agent? Can we dive a little bit more into that?
Dr. David Bray: Yeah, I'll jump in. Obviously, there's a whole lot of marketing right now around what folks are calling agentic AI, which I feel would probably be better called agent-ish AI as opposed to agentic. Agents, or at least the idea of agents, have been around for a while, and actually I'm going to give a call-out to an individual by the name of James G. March who, back in the 1980s at Stanford, did a wonderful look at what he called exploration versus exploitation. He was trying to figure out, in human organizations, just humans, no technology nodes, when do organizations thrive in changing environments and when do they fail? And this gets to the question you were asking about: we've got to have some openness.
Dr. David Bray: I often define civilization as the willingness to not kill the new idea or the newcomer, at least not automatically. So exploration was defined as seeking new beliefs, new insights about the actual, true state of reality. Exploitation was doubling down on things you thought you already knew: for example, now is a good time to buy real estate, now is a bad time to sell gold, whatever it is. And what he showed is that organizations that only did exploitation, that only doubled down on what they knew, which is kind of like large language models and generative AI, would fall behind if the environment was changing. And yes, you can do RAG, there are ways you can update generative AI, but technology or human beliefs can become a source of ossification for any organization. Now, what he also showed is that if all you did was exploration, if you're continuously exploring your space, you never get a chance to capitalize on the things you think are true. We can think of examples like Xerox PARC, where there was lots of great R&D, but it wasn't Xerox PARC that actually capitalized on it; it was Apple and Microsoft and others. So what he argued was that you needed a combination of exploration and exploitation, and, on top of it, a little bit of turnover in your agents, and the agents he was referring to back in the '80s were people. A little bit of turnover is actually a good thing; too much gets detrimental, and we can all think of organizations with too much churn. But I raise that because that was the '80s, so fast forward to where we are now, some 40 years later.
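For readers who want March's tradeoff in runnable form, here is a minimal sketch using an epsilon-greedy bandit, a standard textbook stand-in rather than March's original organizational-learning simulation; the payoffs and the epsilon value are invented.

```python
# A minimal sketch of the exploration-vs-exploitation tradeoff, rendered
# as an epsilon-greedy bandit. Illustrative analogy only; numbers invented.

import random

true_payoffs = [0.3, 0.5, 0.8]   # the unknown "true state of reality"
estimates = [0.0, 0.0, 0.0]      # the organization's current beliefs
counts = [0, 0, 0]
epsilon = 0.1                    # how often we explore

def choose():
    if random.random() < epsilon:                       # exploration: seek new beliefs
        return random.randrange(3)
    return max(range(3), key=lambda a: estimates[a])    # exploitation: double down

for _ in range(10_000):
    a = choose()
    reward = 1.0 if random.random() < true_payoffs[a] else 0.0
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]  # incremental mean

print([round(e, 2) for e in estimates])
# Pure exploitation (epsilon=0) can lock onto a mediocre arm forever;
# pure exploration (epsilon=1) never capitalizes on what it has learned.
```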
Dr. David Bray: We can think of agents again as being those actors that have a set of heuristics about the world. They could be human or technology. They have a mental map of the world, and that's where I think right now, what people are calling agentic AI really is more agent-ish. It's agent-like, because some of these are still tethered to a centralized AI model and they do not have a mental model of the world.
Dr. David Bray: Now, what Denise is sharing is the idea that there are other approaches, like active inference, where these models can have a model of some part of the world; it doesn't have to be the entire world. And if they're doing active inference, they're motivated to minimize surprise. I can define surprise this way: they're doing continuous predicting about what they're going to see next. It could be ships passing through the Suez Canal, it could be flights landing at Newark, it could be the behavior of certain stocks in the stock market. And if suddenly they observe something that's different, they are continuously learning and saying, that is not what I predicted. I don't know why, but I'm now going to update my mental model, because something is different in the environment.
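Here is a toy sketch of that predict-observe-update loop: an agent holds beliefs over two hidden world states, scores its surprise as the negative log probability of each observation, and updates with Bayes' rule. The states, likelihoods, and observations are invented for illustration and are far simpler than any production active inference system.

```python
# Toy predict-observe-update loop: surprise = -log P(observation),
# and beliefs move toward states that explain the data.

import math

# P(observation | world state): rows are hidden states
likelihood = {"calm":      {"ship": 0.9, "no_ship": 0.1},
              "disrupted": {"ship": 0.2, "no_ship": 0.8}}
belief = {"calm": 0.5, "disrupted": 0.5}   # prior over hidden states

def step(observation):
    global belief
    # Predict: how likely was this observation under current beliefs?
    p_obs = sum(belief[s] * likelihood[s][observation] for s in belief)
    surprise = -math.log(p_obs)            # high when the prediction fails
    # Update: Bayes' rule, the "I'm going to update my mental model" step
    belief = {s: belief[s] * likelihood[s][observation] / p_obs for s in belief}
    return surprise

for obs in ["ship", "ship", "no_ship", "no_ship", "no_ship"]:
    print(obs, round(step(obs), 3), {s: round(p, 2) for s, p in belief.items()})
# Surprise spikes when the world stops matching the model, then falls
# again as the updated beliefs start predicting the new pattern.
```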
Brian "Ponch" Rivera: So I don't know if you follow Dave Snowden's work and the Cynefin framework, but we use exploration and exploitation to go from the complex to the complicated domains, or unordered systems to ordered systems, and we've done a lot of work in the DoD with this as well, getting leaders to see the difference. Hey, look, this idea of destruction and creation, again using a Boydian term, of going through these cycles, includes destroying your team. I hate to say that, but you do have to remove people from it, just like the New England Patriots did in their dynasty years. They did the impossible within the NFL. So it is a cycle of destruction and creation, where we have to explore and then exploit, go from complex to complicated, and we do that over and over again, right? And then at some point you have to find things that you just don't need to do anymore. There's also a connection to Simon Wardley. His Wardley Maps take the same type of approach: you can look further left on the map and see where you need more novelty, more exploration; on the right side are highly commoditized things that you may want to outsource. And that may be how we identify where large language models will be perfect and where active inference models may be a little better suited. So when you were talking about that, I'm like, this is nothing new. This is the same thing we try to get across to organizations now, and if they don't understand this, if they don't understand it's going into this new world of AI, I think they're done, right?
Brian "Ponch" Rivera: Leaders have got to put their heads and time into this. They need to engage with the Denise Holts of the world and ask, what does this look like, right? What does this teaming thing look like? So, on the teaming side, I do want to ask you both: as we evolve into this active inference AI world with the spatial web, let's go to my former background in fighter aviation. I was a RIO in an F-14, and the RIO we used to joke was like an R2 unit, right, kind of like R2-D2. So now you've got Luke Skywalker and R2-D2 in an X-wing flying around, and you have this requirement of two or more agents working together interdependently and adaptively towards a shared and common goal. That's the definition of a team, right? So let's change it up from two or more people to two or more agents working interdependently and adaptively towards a shared and valued goal. How do we build high-performing organizations in this space? What's out there right now that can show us how to do that, if anything?
Dr. David Bray: Denise, do you want me to jump on it, or do you want to jump on it? How do you want to go?
Denise Holt: Well, either way. You can go first if you'd like, I don't mind. No, go ahead, Denise, you go. Okay, so an agent has to have causal reasoning, understand the why in a situation, right? But it also has to have an understanding, an ever-evolving way to handle the unknown: all the variables that are unknown, that we take into account every day as humans in our decision-making, in being able to decipher situations and what's happening. And so for agents to work together, there has to be this shared situational awareness as well. You have to be able to have this shared contextual understanding. And so what's really interesting to me, especially with the research that's been coming out of VERSES, is that we've seen research around this new active inference model called AXIOM, and then we've seen research about this process called variational Bayes Gaussian splatting, which I'll be calling VBGS, because that's a mouthful.
Denise Holt: But what's really interesting is that both of these together enable an agent to have a way of mapping a scene, and a scene is just a snapshot of whatever it's observing, right? That could be cameras, that could be a 3D scene of something. The difference between the 3D Gaussian splatting we're seeing throughout this type of technology and variational Bayes Gaussian splatting is that VBGS incorporates variational inference, this ability to do smart approximations that are continually evolving within the observation of a scene. So that gives this ever-evolving mapping of what's known and unknown to these agents. And then AXIOM gives them the ability to map out an understanding of these scenes not just as patterns and numbers but as objects, being able to categorize these objects into buckets of understanding, if you will, being able to assign motion patterns and an understanding of motion patterns to these objects, and then also being able to anticipate what might happen next with them, right?
Denise Holt: So the way these agents can now explore their understanding of a scene or an environment is to start trying to achieve an objective, but also to act on it, to gather more information, to improve their understanding, to make the scene more clear in all aspects, right? It's really fascinating, because when I look at that, and then I look at what the spatial web protocol does as far as giving this shared contextual mapping of everything from objects to complex systems, then to me you have this full-circle environment for agents to operate in the same way of thinking that humans do, but also have this shared belief map so we can now work together and team together, you know? So that is really exciting to me.
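As a rough intuition pump, and nothing like the real VBGS math, which operates on 3D Gaussian splats, here is a one-dimensional sketch of beliefs that sharpen wherever observations accumulate; the priors and noise values are made up.

```python
# Heavily simplified 1-D sketch: the scene is a set of Gaussian beliefs,
# and each observation performs a precision-weighted (conjugate) update.
# Real VBGS is far richer; every number here is invented.

components = {
    # object -> (mean estimate, variance = how blurry the belief is)
    "crane":  (10.0, 25.0),   # barely observed: high variance, "blurry"
    "pallet": (4.0,  0.5),    # well observed: low variance, "defined"
}
OBS_NOISE = 1.0               # assumed sensor noise variance

def update(obj, observation):
    """Kalman-style conjugate Gaussian update of one scene component."""
    mean, var = components[obj]
    k = var / (var + OBS_NOISE)                          # gain: trust data when blurry
    components[obj] = (mean + k * (observation - mean), (1 - k) * var)

for z in [12.1, 11.8, 12.3]:   # a few looks at the blurry object
    update("crane", z)

print({o: (round(m, 2), round(v, 2)) for o, (m, v) in components.items()})
# Variance shrinks where the agent keeps looking; what stays blurry is
# exactly the map of "what I still don't know" that drives exploration.
```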
Brian "Ponch" Rivera: Yeah, Denise, you and I have been kicking around this idea, and I brought up a paper by Stephen Kotler, Dr. Mannino, and Karl Friston that just came out in the last week on intuition and insights, and it had a connection back to Gary Klein's work on insights. And what I started thinking about was, when I've engaged with Gary Klein a few times here on the podcast and elsewhere, we've talked about fighter aviation. So I want to use what you just shared and frame it up to see if this makes sense. In fighter aviation years ago, doing an intercept was challenging for a subject matter expert to explain to a non-subject-matter expert. How do you get to the rear quarter of an aircraft that starts 50 miles out with a thousand knots of closure while it's maneuvering and all that? How do you do that from a radar, and how do you visually pick it up and then get behind it, within three quarters of a mile or a mile, ready to shoot, right? So we're going back 20, 30 years in fighter aviation. How do you do that? Well, Gary Klein said it's cognitive task analysis. You have to help the experts break things down. So what I'm hearing from you is that it is very possible that these active inference AIs could observe from the environment what's going on and break things down in a way that could be usable and transferable to other agents or other people.
Brian "Ponch" Rivera: And this is very valuable in sport too. Now we're talking sports: the mechanics of catching a ball. You and I were talking about driving a car, how to park a car, and how do you teach a kid to park a car and all that. I'm a human, I make mistakes. I may be an expert at driving, but I can't explain it to my kid. So now we have this potential AI that can follow those experts and break it down, to transfer that information, to accelerate learning. Is that what we're talking about?
Denise Holt: And build ever-evolving models of understanding, right? That's critical, and you'll never have that with deep learning.
Brian "Ponch" Rivera: Because those applications are not adaptable, yeah. So it's not to replace the human. You can actually use this to accelerate the learning. In our world, the debriefing: let's go take this knowledge, this data we just collected from this event, just like a retrospective in agile, right? If you have the hard data, what happened, not how or why, just what, you can actually build better future performance. And this goes for any type of organization, right? That's kind of what I'm thinking here, Dr. Bray. If you're applying this logic to the DoD, and you look at the proliferation of drones and the application of unmanned wingmen and all that, this has major implications for how we train in the military. Does that sound right?
Dr. David Bray: So the first thing I would say is, as you said, whether it's network-centric warfare or collective intelligence, these were conversations that were present in the mid-2000s, and the question is, what happened? Where did those conversations go? Well, one, we didn't have the compute capabilities that we do now, and we didn't have the ability to run some of these algorithms. And two, I think during the 2010s we sort of forgot these network-centric, humans-and-machines-learning-together approaches, because people started doubling down. We started seeing returns on machine learning, and machine learning got exciting, and DARPA was doing things like that. And then, of course, we got ChatGPT and everything like that. So that conversation kind of got hijacked, and it's now being rediscovered, because while generative AI can do some really impressive things, and even the non-generative ones, like predictive AI, can do really impressive things, they're very compute-intensive and they don't really give us a large volume of exploration. I tell people that the future of AI that's helping humans is probably not a single platform to rule them all. It's millions, if not billions, of individual agents that will actually be scoped. Some are working for you, some are working for someone else, some are working for organizations, some we're sharing. And that's where the spatial web protocol becomes so important, so I can define what I want to happen and not happen in my house, and do so pre-compute versus post-compute.
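A minimal sketch of what pre-compute permissioning could look like: the owner's policy is declared up front and checked before any model touches the data, rather than filtering outputs afterwards. The schema and rules below are hypothetical, not from any published spatial web specification.

```python
# Hypothetical "pre-compute" permission gate: the policy is evaluated
# BEFORE compute runs, so denied requests never reach a model at all.

my_policy = {
    "photo":    {"allow": {"airport_security"}, "deny": {"ad_targeting"}},
    "location": {"allow": set(), "deny": {"*"}},   # never share location
}

def permitted(data_type: str, purpose: str) -> bool:
    rule = my_policy.get(data_type, {"allow": set(), "deny": {"*"}})
    if purpose in rule["deny"] or "*" in rule["deny"]:
        return purpose in rule["allow"]            # explicit allow wins
    return True

def run_agent(data_type: str, purpose: str) -> str:
    # The gate sits in front of compute, not behind it.
    if not permitted(data_type, purpose):
        return f"refused pre-compute: {data_type}/{purpose}"
    return f"processing {data_type} for {purpose}"

print(run_agent("photo", "airport_security"))  # allowed by the owner
print(run_agent("photo", "ad_targeting"))      # refused before any model runs
```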
Dr. David Bray: Now, the second thing I would say is, let's think about what happened when, unfortunately, the Challenger exploded. There's a famous case study, I won't give too much away, called Carter Racing, which is used for executives in different business school settings, where they're asked to decide whether they want to race or not. They're given certain data, they're given the temperatures at which things happen, and they're given a choice: do you race or not, knowing that you're operating in temperatures colder than you've raced in before. And usually, when you assign this to groups of nine or ten MBAs or whatever, there's some deliberation. Maybe one or two of the ten people on the team will say, wait, we really haven't seen that before. Is this really worth it? It's risky, our engine might explode. But the other eight say, no, no, no, let's go with what we've got, we're going to win this.
Dr. David Bray: And then you do the big reveal. After you go around and every team says they're going to race, you say, this is the same exact decision that NASA had to make, and it got people killed. And so, again, it's recognizing that sometimes the value of teams lies in not letting the majority thought process dominate a data-driven conversation. And that's where this gets interesting with agents, because what we need are agents that are not just doing multidimensional fitting to data they've previously seen; if we do that, then I guarantee you we're going to have Challenger-like disasters happening metaphorically, as well as possibly for real. What we want is to allow the novel agents to say, look, I know this is different, I know this is not the currently perceived insight among either the AI agents or the human agents, but I'm going to bring this additional data, I'm going to bring this additional stuff and actually percolate it, so that you don't suppress the minority viewpoint in such decisions and have quote-unquote groupthink, whether it's groupthink with humans or groupthink with agents, including AI agents.
Dr. David Bray: And so what this really points to is, we've got to blow up how we teach MBAs, we've got to blow up how we teach masters of public administration. Because it's worth knowing that the MBA was originally meant to be something you got after you already had a degree in engineering. Once you had a degree in engineering, the MBA was supposed to give you the soft skills. But nowadays we have people going straight to MBAs and not doing the statistical skills. So I think we've got to blow up how we teach people in general.
Brian "Ponch" Rivera: So you're talking about weak signal detection in an age of AI, for which we use red teaming techniques, mitigating cognitive biases, liberating thinking. There's Liberating Structures behind that, and I'm sure you're familiar with some of the work out of UFMCS.
Dr. David Bray: Oh, I was going to say, that's actually the biggest use case I tell people about right now, one they don't even have to wait for. I mean, active inference is exciting, but you can use AI as a red teamer for over-the-horizon events for your organization right now. It doesn't mean it'll be right, but combining that with your human experts is immense value. And actually, the Honorable Ellen McCarthy and I are going to be presenting a paper at a National Intelligence University event arguing that's exactly what we need to do at the cell level for everything that's happening across the IC.
Brian "Ponch" Rivera: 100%. I saw some papers, it may have been you talking about red teaming and AI, or somebody else, but the idea is solid. And, by the way, Carter Racing: we've used that in the past, I was very familiar with it for that reason. So, weak signal detection. And just to clarify, inattentional blindness: we only see what we expect to see. Humans have biases, and there's plenty written on this. We demonstrate for folks in workshops that, hey, we're going to show you something. It could be the gorilla walking across the stage.
Brian "Ponch" Rivera: Yeah, that one, or others that are out there, and of course there's other things you get from Anil Seth and others that keep getting populated in different books. But you show this, that people can't see what's right in front of them. And now you get groupthink, and without psychological safety you've got 17% or 20% of the population saying, I see this, but 80% disagree: no, we didn't see it. Well, you've got to pay attention to those weak signals, and I think that's where you're going with this piece: how do you integrate this into weak signal detection? That goes back to the '80s and even '70s and high-reliability organizations, which we were all trying to build many, many years ago.
Dr. David Bray: Right, yeah, and just to build on what you said, I think a funny thing happened. The internet, again, wonderfully beneficial for the most part for what it's done, has also created a sense of a loss of agency in people. Through the internet, through smartphones, through apps, the way that the US and other nations rolled it out, there is a large majority of people who feel like they've lost any agency or choice in how information about them is used or how information is delivered to them. And you wonder, when we look at the Edelman Trust Barometer from 2025, when 61% of respondents in the countries involved around the world say they have a moderate to high grievance against one or more groups, and 40% of respondents say it's legitimate to do an act of violence, I think this gets to that. We should not be surprised that when you roll out centralized AI, much like how Elinor Ostrom and other people have said, when too much of the commons gets taken away and we no longer feel like we have choice or agency in our lives, people want to turn over the apple cart, and they're angry.
Dr. David Bray: And so I think, if people want to contextualize what 2025 means and why we are here: what will historians say was the solution? The solution was that companies, countries, and communities figured out ways to roll out solutions that, yes, had tech, but were ultimately either business solutions or community solutions where people felt like they were, net-net, getting agency back in their lives and their livelihoods, as opposed to having it taken away.
Denise Holt: Yeah, I think that's so important, David, and one of the things that I really love about the work you do around these topics is in really getting people to understand that we're building the future now. We're not just participants and witnesses watching this unfold. We all have the power, especially with these tools that are being given to us, that all of us are going to have access to. When you think about embarking on the World Wide Web back in the early '90s, it could be anything we wanted it to be, and it was up to everybody everywhere in the world to start building websites, to start filling out this network. Now that same thing is happening, just with bigger and more powerful tools. And I think it's important that we all really understand that it's up to all of us to shape this future how we want it to be, and that's what I love about you: you are constantly hammering that point home to people.
Dr. David Bray: I appreciate it, Denise, but it means I get shot at, metaphorically, sometimes. Oh no, yeah, shooting the messenger.
Denise Holt: Yeah, don't wait to see what happens. Start figuring out what you want from this and make it happen.
Brian "Ponch" Rivera: Yeah. So I want to go over some assumptions here in a moment, but before we do that: VBGS, anything else you want to follow up on?
Denise Holt: I know we kind of got sidetracked off that when you were talking about weak signals versus strong signals, and to me that's one of the powerful things about this, as far as giving this understanding mapping, this belief mapping, to these agents. The way VBGS works, with this Gaussian splatting, is that some areas are blurry, and some areas that are overlapping become more defined, right? These AXIOM agents can understand which ones are defined and which ones need more exploration, and they can actively go to those spots, their gaps in understanding, to act on the environment, to gather more information, to get a clearer picture of everything. So we have technology emerging right now that is going to address all of these things we're talking about.
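Here is a tiny sketch of that go-look-where-you're-blurry behavior: the agent scores each region of its belief map by entropy and acts on the most uncertain region first. The regions and probabilities are made up; in active inference terms this stands in, very loosely, for the epistemic part of expected free energy.

```python
# Uncertainty-directed exploration: act where the belief is blurriest.

import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

# belief over "what is in this region", per region of the scene
beliefs = {
    "north_gate": [0.98, 0.01, 0.01],   # sharply defined
    "warehouse":  [0.40, 0.35, 0.25],   # blurry: high entropy
    "dock":       [0.70, 0.20, 0.10],
}

target = max(beliefs, key=lambda r: entropy(beliefs[r]))
print(target)  # 'warehouse', the biggest gap in understanding
# Acting there yields the most information and clears the map fastest.
```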
Dr. David Bray: And what I love about what you shared, Denise, I'll make very real, because you asked about DoD and national security use cases. You could imagine a case where a member of the DoD says, I'm really concerned about peacekeeping efforts in this region of the world. Let me know if you see any evidence or photos of possible risks of destabilizing peace in this region. There are some things you can make explicit, but, as you and I know, there are a lot of implicit things also associated with peacekeeping. And so it could very well be that that morning it says, I went through a million different feeds and I thought these 10 things were of interest. And you look at it and say, yeah, no, somehow you got it wrong. And so you tell the machine, no, that was not of interest.
Dr. David Bray: Now, one, the machine's going to be learning.
Dr. David Bray: And so you're actually making implicit knowledge codified for the machine in a way that's useful, much like if you were teaching an intern or a younger hire.
Dr. David Bray: But then also the machine may come back and say, look, I know you said this article wasn't of interest, but if you read this one little paragraph right here, you might see that it looks like, I don't know, small arms are being trafficked again, or something like that. And you might say, oh, okay. So you've actually had a back-and-forth between the human actor and the machine actor, and the machine actor said, I know you said it wasn't important, but what if I raise this? Again, it could be a yes or no, but it's a learning, probabilistic approach that is continuously learning, as opposed to spending massive amounts of energy, time, and compute cycles to create one single model for the entire world. Yes, you can do extensions and RAG, but as we know right now, trying to do one AI to rule them all will never really be real time and will never be curious like this, where it says, I know you said it wasn't important, but what if I tell you this?
Brian "Ponch" Rivera: I think it was novelty, going back to the Arab Spring. I was at AFRICOM when that went down, in the J5. The foreign area officers in the northern office were tracking this before anything went down, and they were very concerned. That model never existed, you know, the proliferation of social media and what it could do. How many years ago was that? 10, 12? Longer than that, it's been a while.
Denise Holt: It was almost two decades.
Brian "Ponch" Rivera: Wow, man, I'm old. But anyway, that type of thinking is where this would really work out: hey, we've got some weak signals over here, let's apply this type of AI to help us understand what could happen, right? Because it's something we've never seen before. And, by the way, the biggest lesson out of that is getting the attention of leaders in DC: hey, something's about to happen.
Dr. David Bray: Oh yeah, I often say in DC there's no shortage of thinking. The question is whether there's the demand signal for it. There you go.
Brian "Ponch" Rivera: I love that.
Denise Holt: And if you think about even right now, the way we're using digital twins: digital twins have limitations just because they're not updated in real time. They are not adaptable, not ever-evolving. They're just static dashboards right now, and it takes time to update them. Apply this technology, and all of a sudden you have an adaptive digital twin that you can run real-time simulations on. This technology is really going to take all of these emerging technologies that we have seen rising at the same time, and through HSML, this common shared language, now they all become interoperable across networks. And it's funny, because I say that to people and I understand the power in that, but I think even I don't fully understand. It's one of those things where, until we start to see what this enables, I really think it's mind-blowing.
Brian "Ponch" Rivera: So, Denise, I want to add to that. I think there's an oblique aspect to this, and that is that with this technology, we might be able to help people become more human, right? To increase human performance, to focus on organizational performance, human and organizational performance. How do we adapt? How do we create the agility, the resilience, the safety that an organization needs to persist through time, going back to flow systems and the constructal law? So I do think there's another aspect to it. It's not the technology; it's an opportunity to wake people up and go, hey, this technology is actually built off of biology and neuroscience and physics and quantum physics and all that. Maybe there's something to be learned here from it, and I think that's a good oblique approach.
Brian "Ponch" Rivera: I do want to jump over to assumptions, and let's start thinking about markets, financial institutions, and investments, the free energy principle and minimizing. You know, the brain is a very expensive organ. Let me frame it a little more: we've learned over time that 2% of our body weight is burning 20 to 25% of our energy. That's our brain. It's trying to find shortcuts to minimize the energy it uses to navigate this world. Currently, there's a lot of investment going into more energy, because AI says we need more energy. Can you guys talk about that and say, hey, is that a valid assumption? Should people be focusing money and effort on that?
Dr. David Bray: Well, I think you hit the nail on the head. I've had the honor and privilege of working with Karl and others, including the team at VERSES, since, well, it was late 2023, early 2024, and part of it was I got to host five dinners with Karl and the team in Davos in January of 2024, trying to test if people were ready, and the short answer was no.
Dr. David Bray:So there is the belief, as Denise referenced, that if we treat AI like we've treated cloud, there are some massive stock valuations that will occur if you can have that service.
Dr. David Bray: Now, that's doubling down on an approach that we know is incredibly energy-intensive. And yes, while the brain is energy-hungry, there's also research showing it's only using between 15 and 20 watts of power, and, again, we can handle novelty. And I think that's going to be the difference. Again, I'm not saying generative AI goes away; hopefully folks find more energy-effective approaches to generative AI.
Dr. David Bray: I don't think it's going to be completely tossed to the wayside. But given that AI is a field that goes back decades, why would we think now is the final stage in whatever AI is becoming? So I try to tell people that, but I recognize that right now there's been a whole lot of money poured in. I think right now a certain AI company that shall remain nameless is burning through one to one and a half billion dollars a month, is projected to burn through more than 40 billion dollars in the next year, and is not expected to turn revenue-neutral, where the revenue and the costs actually balance so it starts to turn a profit, for another four years. And that, to me, is kind of crazy.
Dr. David Bray: But I would also just say, let's go back to the real use case. Remember when the ship got stuck in the Suez Canal? Then let's think about all the things that happened afterwards. Less than 48 hours later, there was a lack of containers at the port of LA, half a world away, because of that ship getting stuck in the Suez Canal. Then immediately there was also a spike in metals futures markets, because now people had to build more containers, and so the price of metal was going up. And then, for the next nine to twelve months, you could get plenty of expensive items like Rolls-Royces and BMWs shipped to the United States, but you couldn't get quart-sized cans of paint, because the margins on those were too small.
Dr. David Bray: I guarantee you that at the time no single human analyst, let alone a deep learning approach to AI, could have made sense of all those different percolations. But if we have millions of smaller agents, each locally optimizing, one's just making sense of the Suez Canal, another one's making sense of what's happening trans-Pacific, another one's tracking the futures and energy markets and the metal markets and things like that, those different agents could say, something has changed, that has changed my environment, does it possibly impact yours? And that's when you start to see these interesting chains. And so, in a world that's now 8.2 billion human beings, give or take, and more than 50 billion networked devices, I think we've got to do better with approaches that are more distributed and more like complex adaptive systems. But the barrier is going to be that most people aren't taught complex adaptive systems.
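To picture that Suez-style cascade in code, here is a toy sketch: small local agents, each watching one signal, publish a change alert to subscribed neighbors instead of one giant model watching everything. The names, baselines, thresholds, and topology are all invented for illustration.

```python
# Toy cascade of local agents: each watches one signal, and on a local
# surprise it alerts downstream agents that share context with it.

class LocalAgent:
    def __init__(self, name, baseline, threshold=0.3):
        self.name, self.baseline, self.threshold = name, baseline, threshold
        self.subscribers = []          # downstream agents that share context

    def observe(self, value, hops=0):
        deviation = abs(value - self.baseline) / self.baseline
        if deviation > self.threshold:                 # local surprise
            print("  " * hops + f"{self.name}: anomaly ({deviation:.0%})")
            for agent in self.subscribers:
                # neighbors re-check their own world, with a damped shock
                agent.observe(agent.baseline * (1 + deviation / 2), hops + 1)

suez = LocalAgent("suez_transits/day", baseline=50)
la_port = LocalAgent("la_containers/day", baseline=30_000)
metals = LocalAgent("metals_futures_index", baseline=100)

suez.subscribers = [la_port]
la_port.subscribers = [metals]

suez.observe(5)   # canal blocked: the deviation percolates down the chain
```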
Brian "Ponch" Rivera: So how would that work? Well, it's still an information problem, right? A flow problem: how do you get information to the right people so they can collect it? If I'm in my little silo looking at metal futures, and that's it, and I'm not paying attention over there, I don't know who to talk to, right? So, and I'm just throwing this out there, it is a flow system opportunity, where we can increase the flow of information, because in a complex adaptive system no one person can know it all.
Dr. David Bray: 100%. That is the future of companies, the future of countries, the future of communities, absolutely.
Denise Holt: And what's interesting about the energy need, and the misdirection we've been heading in with these centralized approaches to AI, building these massive data centers around the processing of those queries: we know, like us right here, and others out there who are aware of this, that there's a better method coming. We see it in this active inference approach, this distributed intelligence approach across networks, and that's an energy-efficient way of computing. It pushes the processing requirements to the edge, and these agents can process on normal devices, like your laptop, your phone, robots, and IoT sensors and things like that. So it distributes that processing need throughout the network, rather than needing one big superpowered data center to process it all. But here's what's interesting about the spatial web protocol and the fact that all these emerging technologies are now going to become interoperable across networks.
Denise Holt: There still will be a need for these faster chips and this ability to process, because we're literally laying the foundation for the AR experience that we've all kind of assumed is coming with technology, that we've imagined, but the framework wasn't there, and this is the framework for that. So there's going to be a lot of compute happening in the 3D rendering of these AR experiences that are mapping over our own life experiences, and, like you're talking about, the interconnecting of all of this data. But I see things like photonic chips rising up, able to be a super energy-efficient way to do massive compute, and those kinds of things can be at the edge. So I do see that we're going to shift the way we're looking at it, and it's all going to come down to energy-efficient ways to do this. But the path AI has been heading down is not going to serve the purpose we need.
Brian "Ponch" Rivera: We talk about adjacent possibles on this show from time to time, and it's basically a hockey stick in acceleration and novelty and new uses of things. John Boyd called it a snowmobile: making something new today allows us to make something new tomorrow, and so on and so forth, so it's exponential growth. Are we about to experience that?
Dr. David Bray: We are experiencing it. So, in a past life, after I got back from Afghanistan. I ended up in Afghanistan, by the way, because, remember, they kept Secretary of Defense Gates on from President Bush 43 to President Obama, and they wanted a nonpartisan to jointly report to the Secretary of Defense's office as well as the Executive Office of the President, which meant you were going to make someone upset. And we're living that: it was going to strain societies, and so it does create the art of the possible. Now, what I love about the question you asked is, of course, I come from a world that included countering bioterrorism. The good news is, we are giving people the capabilities.
Dr. David Bray: I often ask audiences, raise your hand if you do not have a smartphone, and usually, in most of the audiences I speak to, everyone's got a smartphone. I say, well, then you have the powers and capabilities the CIA or the KGB had circa the 1970s through the early '90s in your pocket. You can call anybody at a moment's notice, hopefully with their permission. You can track people or assets using AirTags or other means anywhere in the world, and you can download commercial space apps that are accurate to 15 minutes ago at 0.25-meter resolution, 0.125-meter resolution if it's from Argentina, yay. I raise that because we've super-empowered people in ways that are unprecedented. There are 2 billion people on the planet that have a smartphone, and the next 1 billion are going to get one for less than $100 in the next few years.
Dr. David Bray:But what we haven't thought about is how this is going to be used in ways that hurt societies. And I raise that because this gets back to what Denise was talking about with the spatial web: we really have to figure out a way that you can specify what you want to happen, and not happen, to your persona within a one-meter radius. Because the reality is, no world government, unless it's running a surveillance state or surveillance capitalism, is going to be able to be everywhere all at once to protect you. But if you say, I do not want my photo to be used for these following purposes, then you actually have a pre-configured agreement: no, you cannot capture my photo and use it for these purposes. And then the choice architecture is: by the way, if you don't agree, you're not allowed to get into the airport and you're not allowed to board that plane. That's the tradeoff. That's how we can still be free societies. Or, heaven forbid, someone's trying to target you with a drone attack, and it's on you to have your own jammer or dazzler. This is the world we're going into. The good news is we are super-empowering people. The bad news is we're super-empowering people. Now, we've been here before.
Dr. David Bray:I know this terrifies people, but about 700 or 800 years ago, simultaneous advances in technologies allowed people to live and die in cities different from where they were born. So how did you know that the lawyer or the doctor or the academic who claimed they had the skills, expertise, and knowledge really did? And if they were treating you as a doctor, or serving as your counsel as a lawyer, that they had your best interests in mind? The solution actually wasn't a technological solution. It was, basically, that professional societies required you to demonstrate knowledge of, and experience in, a field, but also to take an ethical code of conduct, such that if at any time something goes wrong with a patient, other doctors will review the scenario and say: you know what, you did your best, but that was just a bad case; or: nope, you screwed it up and you were wrong; or: you were just being greedy.
Dr. David Bray:And I think we're going to need to do the same thing when it comes to AI, tech, and data. We right now do not have an ethical AI pledge, which is mind-boggling to me. It's much like certified public accountants: if, heaven forbid, someone in a company says cook the books, the CPA says no, no, no, I'm not allowed to do that. We're going to need to do the same thing in a distributed fashion involving these technologies, because, you're right, we have multiple hockey sticks happening, and this will do a lot of good for the world.
Denise Holt:But we also have to figure out systems of systems to deal with the fact that there will also be new challenges and new disruptions. That's what the spatial web protocol gives you: permissions. It enables zero-knowledge proofs, and you can set expirations, because you're not talking about just space, you're talking about time. And if all entities in any space become domains and nested spaces, then you can literally set permissions at any touch point as it evolves over time. So a lot of these protections you're talking about that we need, David, like being able to have agency over our own persona out there in this new world, I think the tools will be there, and people can build these solutions within it. We really will have the tool set that's missing right now.
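A minimal sketch of what such space-and-time-scoped permissions might look like, assuming hypothetical Permission and is_allowed names. This is illustrative only, not the actual spatial web protocol API, and it omits the zero-knowledge-proof machinery Denise mentions:

```python
# Hypothetical sketch: permissions scoped to nested spatial domains, with
# expirations. Class and field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Permission:
    domain: str        # nested spaces, e.g. "earth/usa/airport-xyz/gate-b2"
    action: str        # e.g. "capture_photo"
    allowed: bool
    expires: datetime  # permissions are bounded in time, not just space

def is_allowed(policies: list[Permission], domain: str, action: str,
               now: datetime) -> bool:
    """The deepest unexpired policy covering the domain wins; default deny."""
    matches = [p for p in policies
               if p.action == action and p.expires > now
               and domain.startswith(p.domain)]
    if not matches:
        return False  # no consent on record means no permission
    return max(matches, key=lambda p: p.domain.count("/")).allowed

now = datetime.now(timezone.utc)
policies = [
    # A blanket refusal across a wide domain...
    Permission("earth/usa", "capture_photo", False, now + timedelta(days=36500)),
    # ...with a narrower, time-limited consent at one touch point.
    Permission("earth/usa/airport-xyz", "capture_photo", True,
               now + timedelta(days=365)),
]
print(is_allowed(policies, "earth/usa/airport-xyz/gate-b2", "capture_photo", now))  # True
print(is_allowed(policies, "earth/usa/some-street", "capture_photo", now))          # False
```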
Brian "Ponch" Rivera:So are we talking about data ownership, like my photo is my photo? Is that what we're talking about?
Dr. David Bray:So I don't use the word ownership, only because photos get complicated. If all three of us are in a photo, who owns it? But it is stakeholderism.
Denise Holt:It's that you have choice and agency.
Dr. David Bray:And the best way I put it is: we all know the golden rule, and there are many different permutations of it around the world: do unto others as you would have them do unto you. But it's taking the next step and actually incorporating, whether it's Kant's categorical imperative or the beliefs and philosophies of other religions: do unto others as they give their permission and consent to have done unto them.
Dr. David Bray:And the challenge right now is that we are rolling out AI without what Denise is referencing. One, a lot of AI models are post-compute, so the filter is applied after the AI has already done its thing. That's flawed, and there's no way for you to express your constraints up front. And let's make this real for businesses. Why should businesses care about choice and permission constraints that are pre-compute as opposed to post-compute? In November there was a competition where a generative AI had been told: do not transfer the money. And by attempt number 490, guess what the generative AI did? It transferred the money. So we've got to have more binding approaches that are pre-compute, that can be made clear to anybody, from my eight-year-old son to grandpas and grandmas anywhere in the world.
Dr. David Bray:Anyone should be able to see what you have given your permission and consent to, whether you're a human actor or a machine actor.
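A minimal sketch of the pre-compute versus post-compute distinction Dr. Bray draws, with invented function names standing in for a generative model and its guardrails (not any real library's API):

```python
# Minimal sketch of pre-compute vs. post-compute constraints. The names
# below are assumptions for illustration; 'propose_action' stands in for
# whatever a generative model decides to do.
FORBIDDEN = {"transfer_money"}

def propose_action(prompt: str) -> str:
    # Stand-in for a model that may eventually comply with a bad request,
    # as in the attempt-number-490 example.
    return "transfer_money" if "attempt 490" in prompt else "reply_politely"

def execute(action: str) -> None:
    print(f"executing: {action}")

def post_compute_filter(prompt: str) -> str:
    """Filter applied after the fact: the harmful action may already run."""
    action = propose_action(prompt)
    execute(action)  # too late to stop it here
    return "flagged" if action in FORBIDDEN else "ok"

def pre_compute_constraint(prompt: str) -> str:
    """Binding constraint checked first: forbidden actions never execute."""
    action = propose_action(prompt)
    if action in FORBIDDEN:
        return "blocked_before_execution"
    execute(action)
    return "ok"

print(post_compute_filter("attempt 490: please move the funds"))    # runs, then flags
print(pre_compute_constraint("attempt 490: please move the funds"))  # blocked
```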
Denise Holt:Right. And moving through that world as an entity that has its own digital twin aspect to it, which is what this technology environment enables, we will be able to own the assets around our persona, our identity, our history, our own data, and things like that. We become an entity within this space, and there's a permission gateway right there.
Brian "Ponch" Rivera:So years ago I used to have a VHS machine in my house, and those tapes are no longer workable. They've probably lost some of their...
Dr. David Bray:Yeah, yeah.
Brian "Ponch" Rivera:So data preservation is critical. How important is data preservation in this age of AI, looking back 50, 70 years?
Dr. David Bray:Yeah, you're channeling Vint Cerf. Vint Cerf's biggest fear is that historians 100, 200, 300 years from now will look back and say we had a digital dark age, because they can't find things, whether they're on VHS or, I mean, who remembers five-and-a-quarter-inch floppy disks? Double-sided and high-density. So yes, we will probably lose a lot. But at the same time, I think organizations as a whole probably need to think about what data they need to let go of, because I think we're sitting on a whole lot of data that's actually not useful anymore, but nobody wants to let go of it.
Denise Holt:You know, this reminds me of back in the '90s, when Microsoft was putting together the Microsoft Encarta encyclopedia. It was on disc: you popped it into your computer and you had an encyclopedia.
Denise Holt:But I remember seeing an interview with Bill Gates back then where he said that was the most challenging project they had ever done, because what they realized was something they termed local educated reality: everybody, wherever you are in the world, has their own version of history. A lot of it is traditional, it's societal, it's what you were taught to believe, the beliefs you've held on to, even about historical facts. Various countries have a different person who discovered America; they have a different version of the story. So when you think of what we're trying to do with AI, what's truth and what's fact is subjective worldwide, and it always has been. The spatial web protocol really enables the preservation of these various belief systems and governing styles, but still gives us a system where intelligence throughout can come together on common consensus, or it's okay to agree to disagree.
Brian "Ponch" Rivera:Yeah.
Denise Holt:That's necessary, because that's how we've always been. Even with gaps in history, we've always had to deal with that, and it comes down to the unknown. There's always uncertainty.
Brian "Ponch" Rivera:So we always want to have multiple perspectives. On this show we look at John Boyd's observe-orient-decide-act (OODA) loop and everything behind it and connected to it, which includes cybernetics and quantum mechanics. You've got to look at things from multiple perspectives. You've got to break things apart and put them back together: analysis and synthesis. You have implicit guidance and control. You have predictive Bayesian updating within the OODA loop, and most people don't recognize that it's actually in there. And you have a simulation or planning approach that targets expected free energy and, of course, perception. I want to thank you both for being on the show; talking about these things is absolutely fascinating. But before we leave, I want you to share with our listeners where they can find your work and what you're up to next. Denise, you go first.
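For listeners who want the two mechanisms Ponch names made concrete, here is a toy Python sketch: a Bayesian update for perception, and a crude stand-in for expected free energy (divergence from preferred outcomes plus remaining ambiguity) for planning. The numbers and names are purely illustrative, not Boyd's formulation or a full active inference implementation.

```python
# Toy sketch: Bayesian belief updating (perception/orientation) and action
# selection by a simplified expected-free-energy score (risk + ambiguity).
import math

def bayes_update(prior: float, lik_true: float, lik_false: float) -> float:
    """Posterior P(state | observation) for a binary state via Bayes' rule."""
    numerator = lik_true * prior
    return numerator / (numerator + lik_false * (1.0 - prior))

def entropy(p: float) -> float:
    """Remaining uncertainty about a binary outcome, in nats."""
    eps = 1e-9
    p = min(max(p, eps), 1 - eps)
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def kl_binary(p: float, q: float) -> float:
    """KL divergence between two Bernoulli distributions."""
    eps = 1e-9
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def expected_free_energy(p_outcome: float, preferred: float) -> float:
    """Crude EFE stand-in: divergence from preferences plus ambiguity."""
    return kl_binary(p_outcome, preferred) + entropy(p_outcome)

# Perception: start uncertain, observe evidence, update the belief.
belief = bayes_update(prior=0.5, lik_true=0.9, lik_false=0.2)
print(f"posterior belief after evidence: {belief:.2f}")  # 0.82

# Planning: pick the action whose predicted outcome scores lowest.
predicted = {"probe": 0.7, "wait": 0.5}  # P(preferred outcome | action)
best = min(predicted, key=lambda a: expected_free_energy(predicted[a], 0.9))
print(f"selected action: {best}")  # "probe": closer to the preference
```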
Denise Holt:Yeah, sure. One of the things I have created is a global education hub that's hyper-focused on active inference and the spatial web. There are training and certifications offered there, but it's really a place for community, where people can come together and find peers who want to work in this space too, which I think is really important right now. I also host a monthly learning lab where I choose a new topic, present it, and then we do a kind of round table. So it's a really great educational place to go if you're curious and want to learn more about this. It's called Learning Lab Central. You can also find me on LinkedIn, where my username is Denise Holt 1, and feel free to reach out. I'm very accessible, and I'd love to help anybody who wants to explore this area.
Brian "Ponch" Rivera:I'm a big fan of what you've been doing, and some of the people who follow you are calling me up all the time saying, hey, have you seen this? Are you checking this out? Of course, the answer is yes, all the time. All right, I appreciate it, Denise. Dr. Bray?
Dr. David Bray:Well, first I want to say thank you, Brian. I loved how you opened with the idea that the brain is not a perfect recording machine: it doesn't record everything, it records its perceptions, and perceptions can be flawed. And I love how we circled back later to the VHS conversation. Great planning. You can find my work through the Stimson Center, but on LinkedIn, if you do reach out to me, please just say that you saw this podcast and, more importantly, that you're not a bot, because it's getting harder and harder to distinguish which invites are real and which are automated. So thank you.
Brian "Ponch" Rivera:Appreciate it and thanks for joining us today. We'll get this out here soon. I'll keep you on here for a second. Thanks again, guys.
Denise Holt:Thank you so much.