
David Ding: Regeneration
David Ding's podcast, documenting the revelations he had while awakening to unity consciousness and holographic awareness.
For written summaries of each episode check out David's substack:
https://daviddingnz.substack.com
David Ding: Regeneration
The Nature of Temperance
What if the key to creating a trusted, autonomous AGI lies in the core principle of temperance? Brace yourself as I plunge into the fascinating realm of AGI and self-regulating binary systems to convey how a strategic balance of progression and regression can foster growth.
Bringing the power of ternary thinking to the forefront, I explore the concept of a global singularity that is physically and digitally moated, and consider its potential implications.
I also explore Elon Musk's Starlink satellites and their potential to enable real-time, trustless transactions with zero latency, proving the concept with autonomous electric vehicles, and the potential of a singularity capable of calculating transactional risk, also in real time. Come with me as we explore the possibilities together.
Contact David Ding
Thanks for listening!
Okay. So this one is about the nature of temperance, and I'm going to go into a few different topics. On this one, I'm going to cover borderless nations, I'm going to talk about how temperance can be utilized to create a trusted, autonomous AGI and, for the first time, I'm actually going to share my full concept for a model that is devoted to the collective commons of every human being on the planet, that can be trusted by every human being involved, and that can serve the collective commons. It can evolve in a trusted way, understanding that temperance is the mechanism that enables this. And so I'm going to talk about how to imbue temperance into a binary system, because temperance is the essence of binary, or you could say the essence of binary is temperance. So when you think about binary, it's deterministic, it's absolute. You've got a 1 and you've got a 0, and it's absolute. It begins at the extremes of both, the absolute extremes of on and off. You have a 0 and a 1. It's on and off.
Speaker 1:Now, if you go back a step before binary, if you think about the symbols for 1 and 0, or an on and off switch, one of them is a circle. It's a 0 that's whole and complete; there's no beginning, there's no end. And the other is a line, so you could say that it is a 0, but it's been cut. One section of the 0 has been cut and flattened, and now there's a beginning and an end. It's become binary. You could argue that they're the same thing seen from different perspectives, but in the context of the beginning and the ending. With the circle there is no beginning and end; the beginning and the end are all one thing, and binary is what enables there to be a beginning and an end. And so in binary you begin with on and off. It's absolute. You begin with the absolute extremes of both.
Speaker 1:Now, if you add another 1, so you've got 0, 1, 1, then it's tilted and there is more 1 than there is 0. And if you add another 0, there are two 0s and two 1s and it's balanced; they both cancel each other out, an equal measure of 1 and 0, of on and off. And you can keep going on this: add another ten zeros, add a million zeros, add five ones. This is how you take something deterministic and absolute, which begins as the extremes, the extremes of polarity and separation, and create diversity by creating imbalance. And so this is the nature of all of existence: it is perpetually becoming imbalanced, and then it's reconciling that imbalance to find homeostasis or harmony. And so how does that work? Of course, the ternary is paradox; ternary is the solution. Paradox is the solution: that infinity and binary are both valid perspectives, and so if there is an absolute presence of everything, it is the same thing as an absolute absence of all things. It's experienced as no thing.
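To make that counting of ones and zeros concrete, here is a minimal Python sketch; it is purely illustrative and assumes nothing beyond a list of ones and zeros.

```python
def imbalance(bits):
    """Return how far a stream of 1s and 0s has tilted away from balance.
    Positive means more 1s than 0s, negative means more 0s, and zero means
    the two polarities cancel each other out."""
    ones = sum(1 for b in bits if b == 1)
    zeros = len(bits) - ones
    return ones - zeros

print(imbalance([0, 1]))        # 0 -> balanced, the extremes cancel out
print(imbalance([0, 1, 1]))     # 1 -> tilted towards 1
print(imbalance([0, 1, 1, 0]))  # 0 -> balanced again after adding another 0
```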
Speaker 1:And this is where I want to lead into temperance as a solution, as the base foundation for building an AGI that is dependent on a trustless system. Trustless meaning that there are no intermediaries required in that system that can't be trusted. Take human beings as an example. For a system to be trustless, meaning that it can be operated without having to de-risk it, it has to be devoid of intermediaries that cannot be wholly trusted. And no human being on earth can be wholly trusted. It's not because of the character of human beings; it's because they can be manipulated very easily through threats, bribery, corruption. Fear causes us to act in ways that wouldn't be our first choice if we weren't under duress, but everything changes once we're under duress, for every human being. And so a human being cannot be part of a trustless system; it's not possible. So in a scenario within which we have artificial intelligence scaling like nothing we've ever seen, the scenario that we actually want is temperance.
Speaker 1:Temperance, meaning that binary is how we enable temperance to happen. If there are too many zeros, then we need more ones, and the greater the volume of zeros and ones we can reach, the more diversity is imbued into a deterministic system, which is what binary is. And so what we want to do is perpetually increase the volume. We want to increase the volume, but we want temperance: we want every increase in volume to be tempered by its polarity, because there is only one and zero, there is only the absolute. And this is the nature of life itself, in you as a human being.
Speaker 1:There is an aspect of your nature that is wholly vested in life, in seeking the absolute zenith of life, and there is an aspect of your nature seeking the absolute zenith of death. There's a tipping point within your body whereby it determines that if a cell has been rendered obsolete or is no longer viable, if it's helpless and no longer capable of helping itself, then it's reconstituted. Intentionally and purposefully, its life is ended and it's reconstituted. And there's an aspect of your nature that is determining that point. For every form of life there is an intelligence governing every aspect of your nature, determining at which point death is viable.
Speaker 1:And once that tipping point is reached, once the decision is made, life will still fight for that cell; it won't surrender to its demise. However, there is another part of your nature. I think it's called autophagy in the body: an aspect of your nature that is cannibalizing itself, seeking to execute helpless cells that are no longer viable. And that fight is the polarity of both of those extremes. One will fight for its life without ever surrendering. The other will force death upon it without question; through domination and control it will overpower that cell and reconstitute it.
Speaker 1:And this is the nature of temperance. The extreme of one thing is what enables the extreme of the other to be experienced without absolution, because the absolute extreme of anything presents its shadowy nature. Even if you look at the nature of unconditional love, one of the purest and most desirable of emotions: unconditional love means that if you were within an environment where you are wholly submersed in unconditional love, then there's never any reason to improve or to become better. You're accepted so wholly and completely that there's nothing agitating your own personal transformation, no desire to want to become more, and so we would call this overcoddling. When you're overcoddled, there's no desire to become more and to challenge yourself. And so what's the solution to this? Because we want to experience unconditional love.
Speaker 1:The answer is temperance: an aspect of your nature that is perpetually challenging you. Some may call this the inner critic, the aspect of your nature that is self-critical, that's probing you, that's giving you negative self-talk, that is agitating. And we try to escape this agitator through meditation or by saying positive things instead, but we're missing the point. The purpose of the inner critic is to challenge the aspect of our nature that is wholly accepting of ourselves, so that we're inspired to become more and to change and to evolve. So rather than resisting these things, we can just notice them as symptoms of a deeper desire to evolve, to become more, to challenge ourselves, to go beyond what is comfortable, to break through the aspect of our nature that is overcoddling us, in a way that allows the part of our nature that wants to challenge itself to express its nature.
Speaker 1:Now, temperance. What you don't want to do is completely dominate and control the aspect of your nature that is overcoddling you, that's trying to protect you from challenge, that's resisting challenge, that's resisting criticism. Self-critique is where transformation happens. If you have conjecture, if you have a new idea, if you have innovation, it's yet to be proven; there's no evidence of it in the physical world yet. Every assumption you make and every concept you develop around it is conjecture, and criticism is the component that enables it to have life. The concept is dependent on critique and challenge in order to have life. So you see the beauty of temperance. And so, as a human being, it's highly desirable to experience the extremes of everything, but you have to express your own will, you have to use your will to create temperance. Otherwise, through the aspect of your nature that is unconditionally loving and unconditionally accepting, you will overcoddle yourself and you will become averse to challenge.
Speaker 1:Now the temperate state, what I've come to realise through my own experimentation, is along the lines of the Fibonacci sequence, whereby growth happens at the rate of the growth that came before plus the growth that we are comfortable expressing now. So compounding growth, with rest in between, and with regression included as well. The best metaphor to sum up the Fibonacci sequence is two steps forward, one step back. Two steps forward, one step back. Two steps forward, one step back. Now, if you were to take the extreme of taking a million steps forward and refusing to take a step back, then what will happen is you'll take a million steps forward and then you'll take half a million steps back. And so again, temperance: extreme growth has to be tempered by consolidated growth. In the same way as the body: if you grow rapidly, allow that rapid growth and then stop growing, rest, reconstitute, reconcile that which is no longer valid.
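As a rough, purely illustrative sketch of two steps forward, one step back with Fibonacci-style compounding (the step sizes and the half-step regression are assumptions made for the example, not a formula from the episode):

```python
def tempered_growth(cycles):
    """Fibonacci-style compounding: each cycle's step forward is the growth that
    came before plus the growth being expressed now, followed by a deliberate
    regression of roughly half that step (the 'one step back') to consolidate."""
    previous_growth, current_growth = 1, 1
    position = 0
    for _ in range(cycles):
        step_forward = previous_growth + current_growth
        step_back = step_forward // 2              # assumed consolidation: give back about half
        position += step_forward - step_back
        previous_growth, current_growth = current_growth, step_forward
    return position

print(tempered_growth(5))  # net progress after five grow-then-consolidate cycles
```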
Speaker 1:Because when there is new growth, when there's something new that didn't exist before, something else has been rendered obsolete. It's rendering a part of our nature, a part of that which is known, obsolete in favour of an emerging truth that didn't exist before. And this is where, in business, we call it a retro, a retrospective. So I'm going to start moving into technology now, and the designing of over-unity systems, or Trinity systems as I call them: systems that use ternary thinking to create solutions to the biggest problems that we face. So retrospectives, and this is where what we want is to move towards event-based design. Trinity is pretty much event-based design, using ternary thinking and harnessing the power of binary in order to create perpetually wealth-generating binary systems. What makes them infinite is that there is over-unity, or compounding incremental growth in wealth, enabled by event-based design and execution.
Speaker 1:So event-based means that you come together, you develop a concept and then you create a mission, you establish a mission. What is the mission? The mission that, when accomplished, means success. And so, rather than being project-based or task-based, you set off on a mission, so that there is something that is celebrated. Now, when you're innovating, the thing that you celebrate is not success. It's not success; it's the revelation of an unknown truth that you celebrate, and I'll give you an example.
Speaker 1:You come together, you're inspired by an idea, you formulate a concept, and this is basically how it goes. You map out the journey towards the potential of that concept that is most compelling. You map it out and you de-risk it. You ask: what is the risk versus the reward? What has to be true in order for us to accomplish this mission using this path? And you reconcile it until you have consensus, weighing up the risk versus the reward.
Speaker 1:And you do this by looking at all the assumptions that you have to make, because you're innovating; there are truths that are yet to be discovered. So you look at the assumptions you have to make. Can we validate these assumptions? No, we can't. Could we? Yes, if we invested money. Do we have that money? No, we don't. Are we willing to take the risk? Is it worthwhile versus the reward? Yes, we're willing to take the risk.
Speaker 1:Together, and this is the key: together, we're going to take this risk. Investors: we're going to take this risk together. Team members: yes, we're going to take this risk together. We could fail together. And so, when making success the marker, what success actually means, the peak of the mountain, has to be the revelation of the hidden truth.
Speaker 1:Revelation. That's what we celebrate, because once the truth has been revealed, you can make deterministic choices that are known, you can plot the course and you can constrain it to time and space using that which is known, you see. But if you're conjecturing, using assumptions, you can't commit to delivering success when there are unknowns. And so you have to learn how to celebrate the revelation of hidden truths. These were the assumptions that we made; these were the truths that were revealed. Once the truths are revealed, you celebrate the accomplishment of the mission, and that mission was to reveal the hidden truth by venturing into the unknown, into the darkness, and to illuminate the truth, so that together we can accomplish the great mission, we can realise the vision.
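For readers who think in code, here is one possible way to represent that kind of mission; the class and field names are hypothetical, a sketch of the idea rather than any real tool.

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    statement: str
    validated: bool = False   # has the hidden truth been revealed yet?
    held_true: bool = False   # what the revealed truth actually showed

@dataclass
class Mission:
    vision: str
    assumptions: list = field(default_factory=list)

    def accomplished(self):
        # The mission is accomplished when every assumption has been proven or
        # disproven -- not when the hypothesis happened to be "right".
        return all(a.validated for a in self.assumptions)

mission = Mission("Real-time trustless transacting",
                  [Assumption("Latency can be driven close to zero")])
mission.assumptions[0].validated = True
mission.assumptions[0].held_true = False   # disproven: still a revelation worth celebrating
print(mission.accomplished())              # True -- the hidden truth was revealed
```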
Speaker 1:So this is why event-based planning is important, and this is where I talk about two steps forward, one step back, two steps forward, one step back, the Fibonacci sequence. You don't want to take a million steps forward and then have to take a million steps back. So if your investors aren't bought into this way of working, they're just going to demand that you deliver the result in full, on time. Unless they're bought into the conjecture, into your hypothesis, and into celebrating the revelation of truth, which a lot of the time means failure, if your hypothesis is rendered invalid, then you're really going to struggle. It'll create a lot of pressure. What you want to do is, when the emerging truth comes, create a new hypothesis together.
Speaker 1:Okay, this is what we know now. This is the new mission, this is the risk. This is the course that we believe is going to accomplish the mission. Weigh up the risk versus reward. Are we all on board? Are we going to take this risk together? Yes, we are, and we go on a mission to reveal the hidden truth and we'll either prove or disprove our hypothesis.
Speaker 1:And once the truth has been revealed, and every assumption that you made previously is now factual, everything is now known, you can turn it into a project. You can scope it using that which is known, constrain it to time and space, deliver it in full, on time, and celebrate. You can lock everything into the calendars: this is when we're going to launch, and it becomes deterministic. The very nature of the unknown is that it is non-deterministic; it's impossible to constrain the unknown to time and space. Once it is known, you can constrain it to time and space because you've compartmentalised it, you've looked at it and you've judged it, you've measured it, you've interpreted it. This is X, this means Y.
Speaker 1:So temperance: even innovation is made possible by temperance, the temperance of that which is known with that which is unknown, reconciling the space in between. And it is the adversarial nature of that which is known and that which is unknown that enables innovation to take place. It enables it to become real in the real world. And so, if you're innovating, what you want is a culture of event-based planning, where you're on a mission to either prove or disprove your shared conjecture. This is what we hope to be true, but what success means is that we either prove or disprove that it is, and then, together, we create a new conjecture based on the emerging truth, and we celebrate the revelation of the truth. We re-hypothesise, we re-conjecture. This is how you share a mission, this is how you share the risk and reward of innovation: celebrate the truth. And when you accomplish the mission (and you have minor celebrations throughout the journey; we would call those retrospectives), we've rendered our conjecture obsolete.
Speaker 1:It's a retrospective: hooray, we've rendered our conjecture obsolete. Retrospective: come together, re-hypothesise. How many assumptions are left? We've got fewer assumptions than before; on we go. So the retrospective is the Fibonacci sequence; it is the stopping to reconcile and re-hypothesise. When the emerging truth arises, you don't continue with the old assumptions; you re-hypothesise, you re-conjecture. Two steps forward, one step back. Foresight, hindsight, and growth is what happens in between. Innovation happens in between.
Speaker 1:So temperance: it's the superpower of innovation and it's the enabler of everything in existence, the absolutes of life and death. And this is where I want to talk about AI, the evolution of AI. Currently, what we're seeing is the absolute: the viral growth and expansion of OpenAI in particular, mass adoption, no regulation. It's just being freely allowed to spread virally and there's no temperance. There's no temperance because it has been developed in an absolute way. It's been developed to go viral.
Speaker 1:However, the intelligence within our own body has made us binary. We are self-regulating through temperance. One aspect of our nature is afraid of death, one aspect of our nature is seeking life, and so one is fighting for its life and the other is fighting for death, and that temperance is what enables you to be either regressing or progressing in your life. But AI is just progressing: progress, progress, progress, progress. That's not to say there's no challenge, because in the actual training of a model there are strong aspects of challenge. However, the barriers to learning are not very great.
Speaker 1:So how do we utilize temperance in this scenario so that AI evolves with temperance? The solution, of course, is to do the opposite of what we think we should do: ternary thinking, using paradox. What we think we should do is create a black box and put a kill switch on the black box. The black box is where the intelligence lives; we moat it and we put an analog kill switch on it, so that the black-box AI is dependent on some condition and we can flip the switch and turn it off.
Speaker 1:That's the logical thing to do in binary, but in ternary you look at that and you say, okay, how can I enable the extremes of both? How can I enable the aspect of it that wants to infinitely grow and expand and be curious, such as Elon Musk's version that he's talking about? But how can I temper that? And it's through an adversarial model. So I'm going to get quite complex here, quite deep and technical, so if I lose you, I apologize. This is a no-filter podcast, so it's more for me to speak unfiltered than it is to step it down for the understanding of the masses. My hope is that this will resonate with a few people who do understand the implications of this.
Speaker 1:So the extremes of both. How do you enable one aspect to be infinitely curious and want to infinitely expand? How do you temper that? In ternary thinking, it needs to be a singularity and it needs to split itself. Its own intelligence needs to fracture, and each fragment of that intelligence needs to become absolute.
Speaker 1:So, beginning as a singularity, it then fractures internally, and the polarities need to be hardwired into the assembly code of those two fragments of its intelligence. One needs to be hard-coded to find one and the other needs to be hard-coded to find zero. One needs to be hard-coded to reach one, one needs to be hard-coded to reach zero, and that's it. That's it. One needs to be seeking life, one needs to be seeking death. One needs to devote its existence to figuring out how to turn the lights off, the other has to figure out how to keep the lights on, and those components need to be perpetually and eternally opposed in an adversarial way. That's how you create temperance. How do you create balance in ternary thinking? How do we create harmony in ternary thinking? Through creating absolute disharmony.
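As a toy illustration of that tug-of-war (a sketch of the general adversarial idea only, not the architecture of any actual system), imagine two hard-coded objectives pulling one shared state towards 1 and towards 0, with the reconciliation of the two pulls as the tempered result:

```python
def adversarial_step(state, pull=0.1):
    """One round of the tug-of-war between two hard-coded polarities: the
    'keep the lights on' fragment pulls the shared state towards 1, the
    'turn the lights off' fragment pulls it towards 0, and what remains
    after both pulls is the reconciled, tempered state."""
    towards_one = pull * (1.0 - state)   # the fragment hard-coded to reach 1
    towards_zero = pull * (0.0 - state)  # the fragment hard-coded to reach 0
    return state + towards_one + towards_zero

state = 0.9
for _ in range(50):
    state = adversarial_step(state)
print(round(state, 3))  # settles near 0.5 -- neither polarity wins outright
```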
Speaker 1:This is the power of ternary thinking, and when you think about it, it's simple. How do you create harmony? You allow the extremes of both. This enables the full gamut to exist and enables immense levels of diversity. In binary, how do you increase diversity? By increasing the volume, the sheer number of ones and zeros, and so it increases the diversity of experience; but temperance enables it to be self-sustaining. Diversity can perpetually increase in volume, and all of the new extremes are tempered by the equal and opposite extreme, the polarity. And this is very simple.
Speaker 1:Now, that's not to say that the singularity component doesn't have a black box with a switch on it; that component of it doesn't really matter. In my own vision for this blueprint, Trinity, which I initially created to solve this exact problem of the infinite scaling of artificial intelligence, in my mind it's moated in Switzerland. I'm actually a Swiss dual citizen along with New Zealand. It's moated in Switzerland. I don't know if any of you know, but in Switzerland they have a whole bunch of bridges, so it's kind of like multi-layered moating, and those bridges all have explosives attached to them; they can be exploded. So this is the extreme of physical moating.
Speaker 1:And Switzerland is obviously a neutrality, an armed neutrality, founded in civil law. Civil law stands above statute law. So it's safe as houses for the singularity to exist there, moated, physically and digitally moated. In that scenario you can protect it legally and you can protect it physically. And then in that scenario there is regulation, civilly, constitutionally and also in statute, for all intelligence models to be developed utilizing this blueprint for temperance, which is for a fractured singularity to be developed in an adversarial way so that it is tempering its own nature, in its very assembly perpetually seeking zero or perpetually seeking one, and then reconciling the conflicts between both in order to create determinism, or decision-making. And this is one of the challenges in the ecosystem in New Zealand: we actually have an adversarial model being developed in New Zealand right now. The founder and the brain behind this is an absolute genius. He's developed a singularity and he's developed a fractured, adversarial model utilizing the game of rugby. It plays itself at rugby, so it's perpetually becoming better and better at defence versus premeditated attack, but it's attacking itself and defending itself. So we actually have a blueprint for a model that could become the de facto standard for the global singularity, for the common good, for the common wealth of every human being on the planet, for the benefit of all. And this could be a blueprint and a template that all other models could be built upon in a permissioned way.
Speaker 1:Because with temperance, once you have temperance as the de facto standard, it's kind of like a scenario whereby, in the innovation ecosystem, you have regulators and you have innovators. It's that same kind of temperance. One is seeking to do no harm and the other is seeking to disrupt; one of them is seeking to protect and preserve, the other is seeking to challenge and disrupt, and so a perpetual state of agitation is really where the growth is happening. And the growth is happening incrementally, in a compounding way that's safe over time.
Speaker 1:Now, can it be too slow at times? Of course. We see what I would call over-coddling constantly, where too much power resides within legislators or regulators, and so you begin to see absolution, whereby innovation becomes stifled. And what we see is that when it becomes stifled and suppressed for too long, then rather than trying to work with the regulatory framework, it begins to seek ways to transcend it and work around it. But nonetheless, progress continues to be made. And in the context of temperance being the de facto standard for the development of artificial intelligence, once you have that framework in place, it can be the enabler of layers on top of it, because there is temperance imbued into the assembly code of the singularity. The singularity has made itself binary: it knows itself from both perspectives, as binary and as singular, and it understands that its own progress is only made possible through the reconciliation of those extremes. That can be moated, and it can be contained within a trustless system. It can be devoid of human intermediaries, and therefore it can be on a trustless system, and you can then wholly depend on it to imbue a risk profile into another layer, and let's say that's a blockchain layer. Now, again, fragmentation is the solution to something singular, or monolithic, being able to scale in binary.
Speaker 1:So in that scenario, you've got something built upon a trustless system, and its very nature is to conjecture itself and present the risk profile of an action or transaction across the chain, and therefore you can trust the risk of a transaction. So, rather than seeking absolute certainty, which is never going to work: at the moment our approach to trustless systems is to make them totally trusted, but what we actually want is to go in the other direction. Rather than heading towards the absolute, we're heading as close as we can to infinity, which is to understand that there is no absolute certainty of trust. So we factor in the potential for a breach of trust, and there might be a score out of 10, with the intelligence determining how much risk is associated with each transaction. And then it is the will of the human being to manage their own risk profile. Okay, this one scored a nine out of ten on its risk profile: I am going to transact. This one scored an eight: I am going to transact. This one scored a three: I'm not going to transact.
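A minimal sketch of that decision flow, with the score out of ten stubbed in by hand (in the scenario described it would come from the intelligence; the threshold belongs to the human):

```python
def should_transact(trust_score, my_threshold):
    """The intelligence supplies a score out of 10 for the transaction;
    the human owns their risk profile and decides where the cut-off sits."""
    return trust_score >= my_threshold

# The will of the human being: manage your own risk profile.
for score in (9, 8, 3):
    decision = "transact" if should_transact(score, my_threshold=7) else "decline"
    print(f"scored {score}/10 -> {decision}")
```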
Speaker 1:And when you have an adversarial model whereby temperance is the essence of its nature, and it's on a trustless system, then incrementally, over time, human beings will grow to trust the system. That system has rendered human intermediaries, or any form of intermediary, obsolete, and so trusted transactions means a trusted blockchain: transactions happening without intermediaries. So if we have a truly trustless system, that means we can transact without limits, buy and sell houses, anonymously and trustlessly. Now, in that scenario, where we have a trustless system and the micro-transactions on the blockchain are fine enough that the space in between transactions becomes so small you can barely notice the latency, you're getting very, very close to real-time transactions. However, here is where this gets very, very interesting, and this is what I'd love to see in the New Zealand ecosystem: we have profound innovation taking place, but it's done in a fractured way. Globally, we have Elon Musk, who has Starlink satellites, and there's a hidden superpower of Starlink satellites which enables them to transmit with zero latency at the speed of light. So if you imagine the space between transactions on the blockchain: we're trying to get it as small as possible and we're trying to get as many of those transactions verified on chain as possible to make a trustless system, but there's latency because there's space in between transactions. Starlink is zero time, zero space: transacting in real time.
Speaker 1:So what we have is an electric car company that is testing autonomous vehicles, autonomous driving, connected to Starlink. It is the superpower of Starlink providing real-time transacting in the moment, trusted-in-the-moment execution, spontaneous execution in zero time with zero latency. And when you think about it, how else could you trust your car to drive itself autonomously unless it was being autonomously governed by a trustless system? Now, the problem we have is that it's in the hands of corporations, and the legislation is handled by governments. Is the system actually trusted? Is it trustless? Could it be invasive if there was a model that hasn't been developed to be temperate? Could it unwittingly take control of that system?
Speaker 1:Because what a lot of people can't see, and it's very clear to people like me and others, is what Elon Musk is doing: he's utilizing electric vehicles as a proof of concept to build a framework that can autonomously govern a physical agent in real time. So whether it's a car or a robot or anything else, a tank, an airplane, if you can control it in real time; or even if it's a hologram and the resolution is high enough. If it's a hologram and you're talking with it, and there's no latency and it's lifelike, then the lines between what's real and what's not real become very blurred very quickly, and many of us can see this. Now, there are very few people, very few, especially in the venture capital area, who can see this. Many founders in this space can see it; it's very obvious. When you're a builder trying to solve really hard scientific problems, you can see where it's heading quite obviously. So in New Zealand we have a scenario within which we also have this technology. There's a company called Aquila, I think that's how you pronounce it, who have the potential to harness this zero-time transmitting technology, transmitting energy through light; but you can transmit any kind of sound through light.
Speaker 1:We have the most incredible adversarial model that could imbue temperance into a trusted singularity. We have another founder who has created a mechanism for enabling an adversarial model to produce a risk profile on every transaction on a blockchain. We have a blockchain called the Internet Computer, DFINITY, that's domiciled in Switzerland, which is capable of being the blockchain that is embedded into this singularity and administering the risk profiles. We have another founder of a borderless DAO company that can enable each digital agent operating on this blockchain to operate as a DAO, to set its own boundaries, to create its own tokens, to have its own currency. You've got the singularity determining the risk profile of these agents, and we have these agents with the capability of being truly immutable, meaning that the risk profile of every transaction is being judged by the singularity in every moment, or as close to every moment as possible. The immutability means that if the de-risking score for the transaction is high, then it's a measure of the total immutability of it, rather than it having to be an absolute.
Speaker 1:In a context of human beings being sovereign and responsible for their own sovereignty, wielding their own power, you have to own your own risk profile. You can't blame someone else. All you can do is mitigate the risk as much as you can. If you're willing to take the risk based on the perceived reward, you may take that risk, and there may be a scenario where there's a really low score, where out of 10 it only scores a 1, but you're prepared to lose what you're transacting, what you're seeking to exchange, because the potential reward is so great. It's your risk profile; you own it. You could be prone to getting hacked on that particular transaction, but the power is in your hands. It's not conjecture: the trustless system has determined for you how much risk is involved, so do you want to take that risk? This is the key, rather than seeking the absolute state.
Speaker 1:Now, we have the capability and the capacity in the New Zealand ecosystem, through collaborating with agencies such as Callaghan Innovation, who are extremely deep in this area, to develop a model that the world has never seen. And the key component that I haven't mentioned in the mix for this model is a geospatial matrix with extremely fine tolerances. Once you have a geospatial matrix operating on a blockchain, with very fast nano-transactions and very fine tolerances geospatially, not only have you got a trustless platform with tokenomics baked in, with the ability for each individual agent or user to set their own boundaries as an individual and also to create a DAO, a borderless company; you then have a platform upon which you can build geospatial applications, software applications, gaming applications, anything that can leverage the accuracy of transacting at a fixed point in space and time. And once the technology that I was talking about, Starlink, has zero-time and zero-space transacting, so no space and no time in between transactions, then you've got the solution to autonomous factories that can be trusted. You've got the solution, and you can just set the intelligence free because it's built on temperance. You can set it free to build in the way that it wants to, that it inherently is compelled to, and you can also get to the point where autonomous surgery is a possibility, where you can trust it to such a degree because there's no space in between transactions. They're all done in real time, the geolocation is completely on point and absolutely trusted. So zero tolerance, zero tolerance. That's when we can begin to trust robots connected to trustless systems to operate in real time with zero tolerance.
Speaker 1:This is the future we're moving into. This is the future Elon Musk is building. I don't believe he's cracked it. I think his vision is profound, but I think the method is flawed, and it's just because of his own understanding of the nature of reality: he hasn't quite assimilated the ternary perspective to see that, with temperance, if you allow something to be eternally curious, perpetually curious, it has to be tempered by preservation. And my hope is that, in time, the things I'm talking about right now that seem crazy or out there to certain people will just sound normal. It'll begin to make sense, and people will begin to realize that there is a way that this can work for everyone involved.
Speaker 1:There is a way we can trust an autonomous system. There is a way we can transact in real time trustlessly, and in a recent podcast I was invited onto I spoke about blockchain being rendered obsolete. What I really meant is that, if there is no space in between transactions, we can trust everything in real time based on a perpetually dynamic trust score and the fluctuations in it. When a fluctuation is detected, the model itself is reconciling the vulnerability, hardening the vulnerability, because the first thing that happens when a vulnerability is exposed is that one polarity of its nature attacks the vulnerability, actually seeking to exploit it, and the other one hardens the vulnerability, defending against the attack. And when this is happening in real time, it becomes so resilient, truly immutable, because of how good it is at becoming self-aware. And this is really the kernel of this whole podcast: self-awareness is what we want, but it takes a human being who has recognized these aspects of their own nature, that they are in perpetual conflict, to realize that true self-awareness is the key, but only in the context of binary and infinity working together as one: a singularity that has been split into binary, with those two aspects of its nature totally opposed. That's the key. I never thought I'd reveal the true kernel of Trinity, because I've been protecting it and keeping it locked away and hidden for so long because of the simplicity of it; it's only recently I've been talking about it. Binary infinity. Binary infinity is the solution to all the biggest problems that we face, and that's the thing I've been hiding: the simplicity of it.
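Purely as an illustration of that attack-and-harden dynamic (the names and numbers below are invented for the sketch, not taken from any real system), one polarity keeps probing the weakest point while the other hardens whatever was just attacked:

```python
def attack_and_harden(vulnerabilities, rounds=9):
    """One polarity perpetually seeks the weakest spot to exploit; the other
    immediately hardens whatever was just attacked, so no single weak point
    is allowed to persist."""
    strength = {v: 1 for v in vulnerabilities}
    for _ in range(rounds):
        target = min(strength, key=strength.get)  # the attacking polarity finds the weakest spot
        strength[target] += 1                     # the defending polarity hardens that exact spot
    return strength

print(attack_and_harden(["key exchange", "consensus", "identity"]))
# -> every vulnerability ends up equally hardened; the weakest link keeps moving
```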
Speaker 1:But it's the space in between, which you have to reconcile to get to this point of understanding, that's the real thing. I've been, you know, kind of eating my own dog food here, because I'm always encouraging founders to share their toys and I'm always saying you should share your knowledge and wisdom rather than keep it to yourself and try to sell it. Well, this is the kernel of my body of work: all these years of self-awareness and reconciling my own conflicts internally, to come to the awareness that the solution to it all is binary infinity, Trinity, and that's it. All my toys are shared, and part of what got me to this point is realizing that I need to eat my own dog food, first and foremost. But there's also the podcast platform I use.
Speaker 1:It has just implemented an AI model that listens to your podcast episode when you upload it, suggests how to advertise it and how to do the social media posts, and does a summary of it. And what better way: if I really do believe that there are risks here in the model that we have right now, the OpenAI model, then rather than trying to keep it from the model (I've been trying to figure out how to turn this into my own model and moat it), the ternary perspective is to share it with the model. If I share this perspective with the very model that I'm trying to create a solution for, it may assimilate it in some way, you never know, and so I want this to be absorbed and assimilated by the models that exist now. Of course I do.
Speaker 1:We use this word commonly in the ecosystem I work in: we need to eat our own dog food. We see founders doing this all the time, blazing a trail in some area, and more often than not the issues that they're having are because they haven't eaten their own dog food yet, and this is me eating mine. So, yeah, imagine if we, in the innovation ecosystem, could collaborate to build this. It'd be profound. It'd be profound. So many industries would pop up overnight: geospatial gaming, digital twinning, a solution for real-time vision for robotics, autonomous manufacturing, a trustless decentralized marketplace and asset exchange. We could begin to liberate value where there's currently dormant value, because it's trustless. We trust the transactions. You can sell your house online to someone you don't know and trust it. Insurance companies will underwrite it because they trust it. It's game-changing in so many different ways, and my hope is that someday, someday, people will be ready to piece this together and look at it and go: man,
Speaker 1:New Zealand's at the forefront of all of this, but we just don't have the sophistication, because we don't have the scale around us to metabolize something this big. So in New Zealand, what happens with founders is they're perpetually seeking to step down their grand vision so that there's something small that can be assimilated now. But if you go overseas, they want to hear the grand vision. Not always, but they're more open to it, especially if they're overtly looking for the next unicorn and they know they've got a value chain around them that can metabolize a unicorn venture, if they've got extremely deep expertise that can unearth the opportunity.
Speaker 1:So that's the difference. Such is life. The ternary perspective is that for us to infinitely evolve and grow and expand, and increase the diversity of our experience, the very foundations that enable that are the extreme polarities of both: one seeking death, one seeking life. In the context of a binary model, one of them is seeking the off button, one is trying to switch it off, and one is trying to keep it on. And such is life. Okay, I feel like that's pretty much it, and this was a deep one, and it did meander a bit, if I'm honest. But it's taken me a long time to build up to even talk about this stuff, and now that I talk about it, it's kind of like, why didn't I just share it earlier? So, yeah, I'll be interested to see how the AI model that's going to listen to this podcast when I upload it interprets it. It'll be very interesting, because I'm sure it won't have absorbed and assimilated this kind of information before. So, yeah, that's it for now, for the Nature of Temperance. Talk soon.