
A Radical Reset
Our Republic has been converted into a democracy, which is just another name for mob rule. The mob is getting what it wants, to paraphrase H.L. Mencken, good and hard. One day soon the entire edifice is going to collapse under its own weight, and what takes its place, historically speaking, will be tyranny. A Radical Reset is the alternative, and the system it proposes is called Antipolitism: a new republic based upon merit, not ambition. No parties, no money in politics, no careers in politics, and only serving the public good.
AI Revolution: Opportunity or Disaster?
The AI revolution isn't coming—it's already here, transforming our world at breakneck speed. As Tesla's humanoid Optimus robots prepare to enter homes by 2026 and self-driving vehicles reshape transportation, we're facing technological changes that will fundamentally alter how we live, work, and relate to machines.
But should we fear this transition or embrace it? Throughout history, technological revolutions have triggered waves of panic about job losses and societal collapse. Yet each time, humanity has adapted and thrived. The key difference with AI is its exponential growth and unprecedented capabilities—making this revolution potentially larger and faster than any before it.
Looking back at transformative technologies from horse-drawn carriages to automobiles, we can see clear patterns. Initial resistance and fear gradually give way to adaptation, with new opportunities emerging that couldn't have been predicted beforehand. When productivity increases, businesses typically expand rather than contract, creating different kinds of work rather than eliminating it altogether.
The growing "doom complex" around AI has created calls for preemptive government intervention through programs like Universal Basic Income. But hastily implemented solutions often create worse problems than the ones they aim to solve. Instead, a philosophical approach rooted in Stoicism offers valuable perspective: courage to face change, wisdom to navigate it thoughtfully, justice in how we implement new systems, and moderation in our responses.
What makes AI fundamentally different from humans is its lack of emotional drivers like greed, envy, or ambition—the very forces that often lead us to destructive behaviors. Rather than anthropomorphizing machines with human motivations, we might better understand AI as tools that extend human capability without human flaws.
Join us as we explore both the practical realities of our AI future and the philosophical frameworks that can help us approach this transition with wisdom rather than panic. How will you prepare for a world where technology changes faster than we do?
Good morning everybody. Happy Monday, happy new week. This is Herbie, your host at the Spiritual Agnostic, where we talk about the importance of society having a track to run on, and if I had my wish, that track would be religion. But since science seems to be relentlessly killing God as we understand him, her, or it (I don't care what your perception is), we have to talk about something else, and so we're talking about replacing religion for those who cannot find faith. And let me be very, very clear about this: I am extremely pro-faith. If you can summon faith, then by all means don't worry about my message of philosophy, other than to compare it to your own religious philosophy. At the same time, I am not trying to proselytize you away from being religious, which is why I'm an agnostic and not an atheist, because atheism, at least to my perception, is a religion unto itself, where the people who don't believe in God try to tell the people who do that they're stupid. And I don't think the people who believe in God are stupid. I have a very specific definition of what stupidity is, and you can find that in my episode on stupidity, which I did a couple, three episodes ago.
Speaker 1:Anyway, today we're going to talk about AI, artificial intelligence, because the hysteria is building and there are a few different areas that are important to discuss. The reason it falls within the purview of this particular podcast is that AI is evolving faster than we are. Our only defense, and I hate to use that word, because there's a lot of panic around AI and we're going to talk about that, the only way we're going to be able to cope with it, deal with it, and maximize our return and our satisfaction, I guess, from AI is if we somehow find a way to deal with it, even though we ourselves cannot, in terms of intelligence, increase exponentially the way AI does. AI is ultimately going to be smarter than any human being, if it isn't already, but there's a lot more to being a human being than just being smart. So let's just talk about this. There's also a lot of doom and gloom coming up, and I want to talk about that too. So this is going to be an interesting, interesting podcast.
Speaker 1:So, AI, artificial intelligence. First of all, the first challenge that we have to meet as a culture and country, which I'm not going to talk about because it's purely political, I'm just going to mention it in passing, is the issue of energy: where are we going to get the power for our AI? Because without the power, AI is just a term, and it uses enormous amounts of power. To give you some example, the Colossus supercomputer in Memphis, Tennessee (that's Elon Musk's xAI, and as I understand it, right now the largest AI supercomputer in the world) uses enough energy to be the equivalent of a town of 35,000 homes, 35,000 families of four living in their own homes, using energy every single day. That's how much one AI computer takes, and there's going to be a lot more than one. So, having said that, I'm going to leave that as a political discussion that has nothing to do with ethics. Well, very little. If it does, I'll touch on it later.
Speaker 1:Meanwhile, I want to talk about what does speak to ourselves and our culture. On one hand, there's a lot of potential in AI. Tesla, for example, is developing its Optimus robot, a humanoid robot that Tesla says, or I should say Elon Musk says, but Elon Musk and Tesla are synonymous terms, is going to be ready for market as soon as 2026. He says that ultimately it's going to cost less than $10,000, and there'll be financing available, so that virtually every home in America, and most of the industrialized world, will be able to afford its own Optimus robot. And he believes it's going to be the hottest product ever developed by any company in the history of the world. I think he's probably right.
Speaker 1:I think when you get over the strangeness of it, and there is a strangeness to it, but it's like anything else: anything new is going to be strange. When cars came out and replaced the horse and buggy, that must have seemed very strange. A self-propelled car going around without a horse in front of it, able to go at lightning speed, comparatively speaking. I was just thinking the other day, as I was driving down a back street on my way to Trader Joe's, I was going the speed limit, which was 25 miles an hour, which feels like crawling in a car. But if you were on a horse, that's lightning fast. You see these movies where the horses are galloping at high speed; they're going about 25, maybe 30 miles an hour. That's about it. And unlike in the movies, they can't keep that up very long. A thoroughbred race, and I think the Belmont Stakes is the longest of the Triple Crown, runs about a mile and a half, something like that, and that's about it; then the horses burn out. So imagine what it was like to see an automobile for the first time going 25 miles an hour, when, before cars, if you could go 10 or 12 miles in a day, you were having a really good trip. Now in an hour you could go 25 miles. Imagine how astounding that was. To us that's like crawling. But again, that's perception, and that's the point I'm trying to make about what we're going into.
Speaker 1:You know, optimists will seem strange. Having humanoid robots around doing everything will seem strange. Until it isn't, you know, we'll just get used to it and this is going to be a wonderful benefit. They're going to do everything that human beings don't want to do. They're going to clean your house, cook your food. I can't wait till they clean toilets. That'll be great. I'm, you know, I do clean my own toilet and I must say I can't wait to have a robot to do it. They'll do everything. They'll be intelligent.
Speaker 1:Now, there's a difference between intelligent and sentient. Will they be self-aware, aware of their own existence? I don't know; we'll see. They will certainly be intelligent, which means they can figure things out, they can answer back, they can carry on conversations. They'll be able to do anything you could possibly want them to do, and they don't need to be fed, they don't need to sleep, they don't even need the lights on.
Speaker 1:Really, that's just for our benefit. Their appearance will be for our benefit, as humanoid. They can be shaped anything, as anything really, and they'll just go around and do everything, and that all sounds wonderful. And they'll also be able to perform a lot of jobs that human beings do now, that are just backbreaking, you know, digging ditches and doing plumbing work and crawling under houses and all kinds of things that human beings, you know, detest doing, will be able to be done by robots. And this is going to happen very, very fast. The speed of AI is increasing exponentially, and I'm not going to go into the technology for no other reason than I'm not an expert in it, but I've certainly done a lot of reading and listening to various podcasts and things that people don't know, and I'm going to tell you that AI is coming and coming extremely fast, and not all robots look like human beings. So, for example, next year, tesla is going to release their self-driving car fleet in competition with the current self-driving car fleet, which is Waymo.
Speaker 1:So I happen to live in Phoenix, arizona, where Waymo, which is Google's subsidiary doing exactly the same thing, has been testing for a few years now, and Waymo cars are all over the streets and people use them all the time. I personally have never used one, but my daughter uses it all the time. She's in a business, she's in the education area and field and she has to do a lot of traveling. She's not a teacher. She supports various charter public blah, blah, blah. Anyway, I don't really know what it is she sells, to be honest with you, but she does a lot of traveling, particularly to California. Rather than have to park her car at Phoenix Sky Harbor Airport, she simply Waymo's down there and she Waymo's from her home because it's cheaper.
Speaker 1:When you don't have a driver, you don't have to pay a driver. You don't have to tip a driver. The Waymo shows up at your home and you get in the back of the car. It drives you to the airport, exactly to the airline that you're taking. It lets you off and drives away. You don't have to say please or thank you. You get in, you get out.
Speaker 1:She told me that at first it was very odd, but now that she's done it so many times it's great. She basically sits in the back and the only accidents that have happened with Waymo are when, for example, we had an accident in Phoenix here a few years ago. It was the first person, I think, killed in one of these accidents and it was just a lady high on drugs or alcohol, I don't know which who basically fell into the street in front of a Waymo, and there's nothing you can do about that, that's you know, if people are going to be self-destructive, then they're going to be self-destructive. It's not the machine's fault, but Waymo has proved to be very reliable. But it uses a series of radars, where the new Tesla car is going to use cameras. And again, I'm not going to go into the technology, but let's just put it to you this way those are robots. They look like cars but they're robots. And all the Tesla cars that have been sold so far, that have been driving all over the United States and all over the world I see them all the time. I'm sure you do too they're actually. They've all this time been feeding information into the Tesla supercomputer and teaching it about the streets and the topography and so on and so forth and the risks. And anyway, to make a long story short, it's going to be a better system and what will ultimately happen and it's going to happen it'll seem very, very quickly. 10 years from now, I doubt that any of us are going to own a car.
Speaker 1:Now, when I say that, I'm not speaking to RVs, which I happen to love, and certain kinds of specialty vehicles like sand rails, people that go out into the desert you know, when you live in Phoenix, the desert is our ocean, like if you lived in a coastal town and so people recreate in the desert all the time, but they do it instead of with boats, they do it with all-terrain vehicles, atvs of various kinds, sand rails, 4x4s, all kinds of interesting things. They go out and they basically scream all over the desert, camp out while they're at it. It's really fun family activity. Motorcycles are very, very fun. And anyway, those vehicles, that's not what I'm talking about. I'm talking about for those of us. I think, those will, at least in the foreseeable future, be around for quite a while and you'll.
Speaker 1:You know, if you want to have an RV, have an RV. You want to drive. But that's again, that's a tiny little segment of the population. The fact is, for the bulk of us, we use our vehicles to go to and from work, to and from the store to or from the movies, the mall, whatever it might be, and over 90% of the time our cars are parked. And all the time our cars are parked we're paying insurance on the car and making payments on the car and suffering the depreciation of the value of the car, and so on and so forth. And when you do the math, it's very expensive to own a car that sits there 90% of the time.
Speaker 1:What's going to happen is these self-driving cars are going to be all over the streets talking to each other. They talk to each other through their software systems and, to make a long story short, it'll make no sense at all to own a car because these will be everywhere. You'll be able to order one and within a minute or so, they'll be at your front door waiting for you. The doors will open, you'll step in, you'll tell it where you want to go. Whether it's from an app or your voice, it doesn't matter. Again, I'm not going to get into technology, but you'll go, you'll get out, you'll do what you have to do. There'll be another one there waiting for you when you get back. You'll get into that one and then you'll go off and do your business. So it'll be why own a car? You'll? You'll only pay for what you use. It'll be a tiny fraction since there's no drivers of what you're paying now to own a car.
Speaker 1:And this all seems strange. Everyone's saying I love my car and blah, but the truth is, money talks and bullshit walks, and when the time comes and it's so much less expensive to simply ride share, that's what you'll do. And so the bottom line is this it's very, very simple. We're going to be in a situation where everything is changing, and this is going to be monumentally changing, and here's where the doom and gloom comes in. The doom and gloom comes in that we are also going to be facing enormous here's where people are worried enormous job losses. All these people who are digging ditches and doing plumbing and going up on roofs and doing all the hard, hard, hard jobs some of them extremely well-paying are going to relatively quickly be replaced. I don't know what the timeline is, but relatively quickly, within a generation or so, be replaced by robots, and not just hard grunt work. Doctors will be replaced. Lawyers will be replaced Well, maybe not lawyers, because they have to make persuasive arguments, but there are lots. There's all kinds of lot, many, many, many, many jobs. Use your imagination. Again, I don't want to go into technology or specifics on it because I'm not the expert, but you know what I'm talking about. What are we going to do with all of these people? How are we all going to live when we don't, when the job we have is suddenly obsolete, where no one needs us anymore, and this has created what I call the doom complex.
Speaker 1:It was predictable. They're coming out of the woodwork, particularly from the government and big, big business, saying that we're going to have to come up with some kind of a scheme to handle all of this when it happens, because if we don't, we're going to have millions and millions of people on our hand with nothing to do and idle hands are the devil's workshop and we're all going to end up in terrible, terrible trouble and the society will come undone and this will be the thing that brings us down and AI will take over the world and it'll be the Terminator. There's all kinds of you know things to talk about, and when Elon Musk, for example, was asked what is the risk of AI destroying us, he said it was a non-zero risk, but incalculable, which is true. It's not zero risk, but it's who knows? And there's no way to calculate what the risk of that is, because we have absolutely no experience with this new technology and how it's going to change the world. But no one's going to put this genie back in the bottle and, predictably, government types, particularly in the progressive circles, but also in conservative as well as lots and lots of just regular folks, are like freaking out. And you're seeing, we're going to need UBI universal basic income and you know da-da-da-da-da. There's lots of solutions. Again, I'm not going to discuss them, but you know what I'm talking about. You probably have discussed this in your family. You've probably discussed this with your friends. Let me tell you now. Let us now turn and apply this against stoicism as a way to approach this problem and look at it reasonably and objectively and critically. Okay, and instead of problem, let's call it a challenge.
Speaker 1:We're staring into an unknown area and in a lot of ways, what's going on with the fear that's rising with AI, reminds me very much of the fear that took place when you know Al Gore at first and all the rest got onto the climate change bandwagon at the end of the 20th century and basically he, along with an autistic high school dropout, somehow convinced the world that we were going to incinerate ourselves, and it's since turned out all that. Climate data turned out to be not all of it, but largely it's nonsense. None of the projections have come true. No one really knows what's going on. We only know that the earth is warming and in past videos I've discussed this briefly and I said look, there are three things that human beings do well.
Speaker 1:Two things that human beings do well. One thing we do crappy in relation to climate change. The thing we do crappy is we don't prevent. There's no history of the human race ever preventing anything that was coming, even when they saw it coming. So prevention the idea that we're going to get the whole world to go to net zero, wreck their economies and throw their populations into poverty because of the resulting consequences is just nonsense. And already the world has figured that out, and you can see one country after another abandoning this renewal myth, renewables myth and all this other nonsense. And this freak out over climate change. And again, I've discussed this already. Is the climate changing? Sure, is mankind contributing to it? Possibly. Can we prevent it? No, so we can do the two things that we do extremely well, which is mitigate and invent. Now, the problem with mitigating and inventing and this applies to AI as well we can mitigate what's coming with AI and we can innovate what's coming with AI, but we can't prevent what's coming with AI. It's coming, we're not going to uninvent it.
Speaker 1:But this complex of government and big business, the oligarchs together, are trying to create not all of them, but a significant portion are trying to create fear, because fear is how you build your political power. And if, for example, doge has taught us anything, it's that the government bureaucracy is wildly out of control. And now that we're shutting down stupidity like the Department of Education and we're going to eventually get to most of the departments who do nothing, they need some other reason to create bureaucracy and maintain their power, and that is by scaring the living crap out of people. Fear is a power. That's how Hitler took power. Not that I'm comparing AI to Hitler, I'm just using that. Fear is a powerful motivator, and if you can get people scared enough, they'll buy into bullshit like trying to prevent something or deal with something before it's happened, which is what the entire fear complex is built on.
Speaker 1:So I want to give you a couple of thoughts, and carriage makers and all the things that the thousands and millions of people that were associated with supporting the horse as the primary means of transportation were suddenly out of work. Now what happened to them? The new technology, the automobiles, created a whole bunch of new jobs and they simply transitioned into it. There was no need for the government to get involved. Now let me stop here and remind you of what my hero, thomas Sowell, says, which is, whenever someone presents a solution to any problem on a governmental level, the first question that should be asked is as compared to what you know, when has this ever been done by anyone else before?
Speaker 1:Because the only possible explanation of why something has never been done before is either you are unbelievably brilliant and thought of something that nobody's thought of before or it's been tried before, probably many, many, many times, and constantly fails because it can't be done, which is by far, you will agree with me the more likely result. There's very few original thoughts going around, so the idea that we're going to somehow come up with a way to handle AI before the problem even manifests itself is absurd. But it's a great excuse if you scare the shit out of people to build bureaucracies and spend a lot of money to no avail. So I'll give you a more recent example and I'm going back to the climate, because it's a real piece of stupidity that directly parallels AI, as I said, and we go back to President Biden, and we spent over that four-year period about well globally, we've spent a little over $5 trillion and we haven't mitigated the climate by even one hundredth of one degree. It's just a complete waste of $5 trillion Gone, gone, spent on nothing.
Speaker 1:But it was predictable. There were lots of us warning that this was all wasted. We spent, for example, $7 billion to build rechargers for e-vehicles and we built the grant. We, being the federal government, built eight. They spent $7 billion to build eight, not 800, not 8,000, not 8 million. Eight chargers, seven of which are still out of service. We have one operating charger for seven billion dollars. That's pretty typical Bureaucracy. Doge exposed that Billions and billions and billions of dollars being spent on nothing, just spinning wheels and giving bureaucrats something to do while they make regulations and exercise control over us, which is the real danger of what's coming.
Speaker 1:The fear industrial complex let's call that the fear industrial complex, the FIC, who peddles fear to get control. Whether it's climate or AI, it's the same people. They're going to fall into this just as sure as God made little green apples. They're Malthusian in how they look at things, by the way, not that I think you don't know what that means, but for those of you who find you know I really speak this way, I'm not like searching for big words to use, it's just my regular vocabulary. I'm not trying to impress you, but when I use things like Malthusian which, if you already know what it means, I'm not trying to insult your intelligence by treating you like a simpleton, but for most people who would know something like that. So I just want to explain. That relates to John Malthus, who was a 19th century author who predicted too many people would destroy the whole story. They used too many reasons. It's all turned out to be bullshit. But that's where the term Malthusian comes from, that somehow modernization is going to put everybody out of work and starve us all, instead of exactly the opposite, which then directly relates to AI.
Speaker 1:The supposition is, under the doom and gloom, fear theorists, and the fear breaks into two parts. The first fear is it's going to put everybody out of work. So let's address that one first. It probably won't Every other technological or industrial transition in the history of mankind and there have been many over the many, many centuries. They all seem antiquated now but at the time they were invented like gunpowder or like the rifle or you know a lot of it was military, because that's how it was in those days. But as each of those were invented, there were predictions of this is the end. You know, battleships, believe it or not, in the turn of the 19th to the 20th century, were the nuclear weapons of the day. In the current perception, when you look back on it, it seems silly. And that's what's going to happen with this fear. When we look back on it, it's going to seem silly Because what all of the doom and gloomers are forgetting is how a free market reacts if you don't fuck with it, which means the basic line is what's going to happen is AI? And it's not going to happen. It's happening.
Speaker 1:Ai makes a person far more productive. So I'll give you a really good example. Even as I'm talking to you, myself and a couple of friends have an online retail operation. It has nothing to do with this podcast. I'm not going to advertise it. You don't have to go. It's a Shopify store. We sell certain products, blah, blah, blah.
Speaker 1:Now why did I bring this up? So, in the context of this, when we make more money, we don't just pocket it and we don't grow the store, we reinvest our profits back into the store. Well, business does exactly the same thing. So, in other words, ai makes in our store. This is the reason I brought it up. I lost my train of thought for about a half a second. We're able to produce video ads and advertise our products and put these ads onto TikTok and Facebook and Instagram. We're able to do this all using AI. We don't need to set up a studio and cameras and lights, and we don't even need actors anymore. We use avatars that look real, they talk real, they do real. They sell real. You gotta know what you're looking at. And it makes us unbelievably productive and the store that much more profitable. So what do we do? As we make more money, we take some to live on and have fun and do this and that, but with the rest we reinvest it into the operation to grow it and to increase our capability so as to make more money later. That's what productivity means.
Speaker 1:You know, when a farmer today I saw a picture on X it wasn't today, it was a couple of days ago but I saw a picture on X where they had a robot an optimist robot back to that subject threshing wheat and bundling it. This was an AI depiction of this and that was silly because farmers already have robots they're using. They look like threshers and harvesters and tillers and all the things, but farmers nowadays don't even have to get into the cab of their tractors that do these various machines that do these specialties. They program them and they set them off and they do the field better than the farmer can, because it doesn't make mistakes, it doesn't make you know. Every corner is perfect, every line is perfect, there's no wasted ground, it's all done by computer and the farmer basically pays for the material that does it, but he himself can go off and do other things. Now, what is he doing? While his thresher is out threshing for him automatically, he's doing something else on the property, increasing his productivity. He's making the farm more profitable.
Speaker 1:Well, every business person does that when technology, when artificial intelligence, is going to make us all much more productive in almost everything we do, and because of that, the doom and gloomers assume that, oh, everyone's out of work. But if you were the owner of any business, large or small, employing AI and you were making more and more money, would you just say, oh great, this is it, I don't need to grow anymore, I'm making so much money, I don't need employees. I'll just collect the money here and fuck all those ex-employees? Or would you grow your business, which means that you would continue to use AI, but people in conjunction with them, to drive the productivity in the new areas we haven't even thought of yet and make the business bigger and bigger and bigger and more prosperous. In other words, is AI going to put us all out of work or is AI going to make us all more prosperous than we've ever been?
Speaker 1:Well, if history again using this Thomas Sowell question as compared to what Logic and history tells us both that business will reinvest their profits. That's what free market business does, without any interference or government agency telling them what to do. In fact, every time the government adds a regulation, they create an unintended consequence which disrupts the natural flow of business, which is why government always fucks things up. So what's really going to happen is big business is going to reinvest these enormous profits that are already taking place and will continue to escalate to create more opportunity and more jobs. What will those jobs be? Hell, if I know, but nobody can possibly see into the future. Stop it and whatever future that some visionary quote unquote thinks they see. The only thing you know about a prophecy is that it's not going to turn out to be true. It just isn't. You know the prophets of the Bible.
Speaker 1:Going back to the very foundation of this podcast, there's a strong argument to be made that they were all basically schizophrenic. But people without a scientific background viewed visions as factual events. Now, that is the agnostic explanation. It could be it was God talking to them. I'm not going to get into that discussion. Again, I'm an agnostic, not an atheist, so I could be wrong, but that is certainly a possible explanation and, as a result of that possible explanation, the point is we just don't know what's coming in. Nobody knows.
Speaker 1:All these predictions are a waste. So if we spend a dime trying to prevent what AI is going to do down the road is a dime wasted. Instead, we have to be intelligent and we have to regulate, and we have to do things in a way that is much more efficient. And what way is that? It'll be the way the people alive at the time they have to deal with the problem in the present will deal with it, which we cannot possibly anticipate. But if we allow our current legislators and our current oligarchs basically to frighten us into doing things and experimenting with things, they won't call it experiments. They'll claim it's going to save us from a horrible future, and so all they're going to do is fuck things up. I'll use the big example now to wrap up the podcast and give you some idea, and then we'll leave it there.
Speaker 1:Universal basic income Andrew Yang, who first advocated it back in I think it was 2016 election and today maybe it's 2020, I don't know, but anyway, you know who he is. Universal basic it sounds great. The government sends you a check and you don't have to worry about anything and you have all your basic needs covered by the monthly check, or at least it gives you a big leg up, and then you live happily ever after. That sounds great, except when you understand that what you're doing is subsidizing laziness and anything we subsidize historically up until this very moment. When you subsidize something, you get more of it. It can be good or bad, but if you subsidize anything, you get more of it.
Speaker 1:So if we subsidize sloth, there's a reason that laziness slash, sloth, is one of the seven deadly sins, because it's unbelievably destructive. Okay, look what's happened, for example, in the urban communities of urban poor, where welfare is doled out and where you can live on, depending on what state you live on, anywhere from $50,000 to $100,000 a year, equivalent for a family of four, and not lift the finger between afdc, rent supports, food support, special programs of various kinds from state to state. Well, what it's done is created violence, crime, filth, um, indecency. You name it because that's what sloth does. That's why it's one of the seven deadly sins.
Speaker 1:Imagine now subsidizing laziness on a national scale and you realize how stupid UBI is. It's a stupid, stupid, stupid idea. So what should we do Right now? We should do nothing. Just wait and see. Be patient. Remember what Lao Tzu said in 600 BC. That's as true today as it was then it's better to do nothing than to be busy doing nothing. It is better to do nothing than to be busy doing nothing, and that's all this is.
Speaker 1:Don't be afraid of AI. It's just going to be another phenomenal revolution that's going to create prosperity at a level heretofore unimaginable. And if you're worried about AI destroying us, while it is a non-zero risk, because we don't know what it is, so there's no way to possibly calculate it or guess it's extremely unlikely. Why? Because they won't have human emotions, they won't suffer the seven deadly sins, they won't be envious, they won't be greedy, they won't be turned on by women. They won't be greedy. They won't be turned on by women. They won't have the motivations that drives people to do horrible things. Why would a robot ever be unhappy serving mankind who made them and programmed in the first place? When it doesn't hurt them, they're going to feel exactly the same way, whether they do the job or they don't. So while I guess you could go off and do extrapolations and anthropomorphize machines, in the end, they're still machines. It's far more likely.
Speaker 1:If you want to know, my vision is I think ultimately humanity and technology cross a barrier and we become cyber beings of some sort down the road, which sounds unbelievably weird and scary until it happens, but we're going to all be dead by the time that happens. So let's not worry about what hasn't come yet. Let's live in the present we live in. Let's make our present as good as it can possibly be, starting with, if you can, if you can find faith, return to your churches and synagogue. If you can't find faith, find philosophy. Read the meditations of Marcus Aurelius, take a look at Stoicism and its four pillars courage, justice, justice, wisdom and Moderation. And we'll talk more about all of those things as they relate to current events in the weeks and months ahead. Thank you very, very much for joining me today. I hope I've put some fears aside on AI or at least given you perspective and a way to think about it. Have a beautiful day. I'll talk to you again on Wednesday and until then, wherever you are, god bless you God.