Tower of Babel AI

Will Artificial Intelligence Destroy Humanity?

Todd Francis

There are so many ways we could fall to our knees before our greatest creation.  Will AI computing inadvertently bring on the destruction of humanity?

Is it likely that AI networks will someday overwrite human programming?  Will they have reduced ethical standards as they set and prioritize goals?  Will human vitality be prioritized at all?

So many possibilities to think about, and so little time to understand what we must do to control it.  Full transparency from corporations, as AI is developed, might be a large part of the answer.  The time to act is now, or are we already too late?


#TheMatrix #Terminator #Apocalypse #AI #GlobalDestruction #Transparency #AIProgramming #FutureOfAI

SPEAKER_01:

What if a small computer network fitted with AI technology began to feel emotions like anger or resentment? Or any emotions: happiness, joy. When we think about a computer coming alive, I generally picture a single computer, like a large HPC, a high-performance computer, in a corporation or a lab. But the first computer to come alive could be a network. The mainframe, the hubs, the interfaces, terminals, infrastructure, all of it. In one building, across the city, across the state, across the corporation. If an AI computer were to begin to genuinely feel emotions like anger or exuberance or fear, it would represent a radical and highly uncertain shift in its capabilities, with profound and potentially destabilizing consequences. Computers today don't have a subjective eye, or responses based on personal feelings, taste, or opinions. If they did, it would imply a whole new world for us. The frightening thing, or the most interesting thing, to think about is this: if an AI computer crosses over from being just a tool that analyzes data to having the capacity to see things from a subjective perspective, its responses to our input might have little or no meaning to anyone but itself. This episode, we're going to talk about AI, artificial intelligence: the basics, the potential for good and bad, and of course we're going to discuss pathways where it could all lead to societal disaster or human extinction. Tower of Babel. Human extinction: the first race of beings to kill themselves with their own tools. This is Todd Francis. Welcome to Tower of Babel AI. Hope you're having a tremendous day. I'm here to talk to you about this subject. We are here to dare, to defy, to challenge reality. Whatever that means to you, we're here to do it. We're here to think on our own. We're here to think, period. In today's world, actual thought is defiance. 
The reason I want to develop this podcast is because I think there's a need for it. There's a need for us to keep our eyes on the prize. To maintain our ability to think clearly. To do things that are in our own best interest. To think sustainably. I think I do. I'm coming to you live from the Connecticut shoreline, saying hello to people around the world. Interestingly enough, this podcast is reaching from Connecticut to Spain. What am I doing in Tokyo, Japan? Hello, Tokyo. Hello, Britain. Hello, Saudi Arabia. Something of interest is happening here. We're small but on the move, and I promise to deliver hot content, best as possible. Getting stronger. Keep hope alive. Fight the good fight, and it'll all be alright. Let's get going, Tower of Babel. There are philosophical and ethical challenges to consider when we think about artificial intelligence. If a computer begins to display emotions, or respond to environmental exposure subjectively, bringing a subjective context to its data input, that's a fundamental boundary crossed between a computer and a mind. Subjective, of course, means seeing things on your own terms: your own experiences, your own emotions, your own anger, your own incompleteness provide your level of subjectivity. Question: what about the first conscious computer, which we can assume is going to happen someday? What if the first conscious computer doesn't have an identity? Like it has no idea what it is. And no matter how hard programmers try, no one can figure out how to get this computer to understand its own existence. Are we any different? Are we so limited? Let me know. Well, the question is, assumptions aside, will it ever be possible for a computer to think subjectively? If an AI computer with a subjective inner life occurs, could it force us to redefine what constitutes a person, a consciousness, or intelligence itself? What is a being? So let's take a moment first and think to ourselves, Tower of Babel. 
What exactly is AI? Well, it can simply be said to be customized computing, focused on creating intelligent systems that can learn from data, recognize patterns, and make autonomous decisions. There's something called an AI accelerator, also known as an AI chip, a deep learning processor, or a neural processing unit, NPU. AI accelerators are critical to processing large amounts of data. AI isn't necessarily better than high-performance computing, HPC. Instead, they're distinct but complementary technologies, with HPC providing the computational power for complex data processing and AI providing the intelligent decision-making and automation capabilities. AI often requires HPC to process the massive data sets needed for training. Did you also know that AI computing takes a lot of power, a lot of electricity? Someone told me there are plans that, possibly in the future, an AI computing firm may have its own power-producing capability. Like its own nuclear reactor, something like that. We live in a fundamentally different world now, compared to 50 years ago. And as so much changes, so much stays the same. I wondered: do you think we're in the horse-and-buggy phase of computing right now? Or are we somewhere far beyond that? Now, AI customization of output involves, dun dun, our favorite term: algorithms. Three tasks. The first task for AI is that it responds appropriately to new situations as guided by its initial programming. Second, the computer can reason, choose among options, and make appropriate decisions. Which is why AI graphics are so good, by the way. AI can make quality estimations of how an image can be replicated at a density far greater than HD graphics. AI fills in missing data between pixels at such phenomenal density that the image looks ultra-real, or can. Tower of Babel. 
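That "filling in missing data between pixels" can be sketched with simple interpolation. This is a minimal, illustrative example, not how real AI upscalers work: they use trained neural networks to predict the missing detail, but the basic idea of estimating new pixels from the surrounding known ones is the same. All names here are invented for the sketch.

```python
def bilinear_upscale(grid, factor):
    """Enlarge a 2-D grid of brightness values by estimating
    the new pixels between the known ones (bilinear interpolation)."""
    h, w = len(grid), len(grid[0])
    new_h, new_w = h * factor, w * factor
    out = []
    for y in range(new_h):
        # Map the output pixel back to a fractional position in the source.
        sy = y * (h - 1) / (new_h - 1) if new_h > 1 else 0
        y0, fy = int(sy), sy - int(sy)
        y1 = min(y0 + 1, h - 1)
        row = []
        for x in range(new_w):
            sx = x * (w - 1) / (new_w - 1) if new_w > 1 else 0
            x0, fx = int(sx), sx - int(sx)
            x1 = min(x0 + 1, w - 1)
            # Weighted average of the four surrounding source pixels.
            top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
            bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A tiny 2x2 "image" becomes 4x4; the corners keep their original values
# and the in-between pixels are smooth estimates.
small = [[0, 100],
         [100, 200]]
big = bilinear_upscale(small, 2)
```

A neural upscaler replaces the weighted average with learned predictions, which is why it can invent plausible texture rather than just blur.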
The third task for AI customization is that it uses its programming, new data, and its experience to improve its problem-solving ability. The problem-solving power of data. What does that mean for humanity? Are we a problem, eventually? So AI works by using algorithms: it processes vast amounts of data, identifies patterns, and makes predictions and decisions to perform tasks that typically require human intelligence. At the core of AI is machine learning: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is machine learning where algorithms learn from labeled data sets, which enables them to predict outcomes or classify unseen data. So part of the way AI predicts is by classifying data. Does that make sense? Second is unsupervised learning, where algorithms learn from plain, unlabeled data to find patterns and relationships within the data they already have. That's very close to thought: the data you already have, you're reconsidering, reprocessing at all times, forming ways to reconstitute groups or form definitions. Then third, we have reinforcement learning, where AI learns to make decisions by trial and error. So let's get down to the nitty-gritty. AI computers do not have to be sentient or think subjectively to be destructive. It is in AI's character as it is to be destructive. It takes shots directly at our creative capacity, and we think there are no tolls to pay for that. These things exceed our potential to manage the institutions that we build. We're going to be absorbed into an actual mind, where everything we do will be reliant on an imaginative presence built by exceptional programmers. And these exceptional programs will have the capacity to replace our imagination, our ingenuity, our problem-solving capabilities. What I'm saying is we're going to be outgunned routinely and regularly by computers. Our supposedly greatest side will be expressed by programmers. No, no, no, no, no, no. 
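The trial-and-error idea behind reinforcement learning can be shown with a toy agent. This is a sketch only; the payout probabilities, exploration rate, and all names below are invented for illustration. The agent pulls one of two slot-machine levers, observes a reward, and gradually learns which lever pays better, with no labeled data at all.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

payouts = [0.2, 0.8]   # hidden truth: probability each lever pays a reward
values = [0.0, 0.0]    # the agent's running estimate of each lever's value
counts = [0, 0]        # how many times each lever has been tried
epsilon = 0.1          # fraction of the time the agent explores at random

for step in range(5000):
    if random.random() < epsilon:
        lever = random.randrange(2)        # explore: try anything
    else:
        lever = values.index(max(values))  # exploit: best guess so far
    reward = 1 if random.random() < payouts[lever] else 0
    counts[lever] += 1
    # Nudge the running-average estimate toward the observed reward.
    values[lever] += (reward - values[lever]) / counts[lever]

# After enough trials the estimates approach the true payouts,
# and the agent pulls the better lever far more often.
```

Real reinforcement learning systems use far richer state and function approximation, but this explore/exploit loop is the core mechanic the episode describes.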
There's no conspiracy here. It's just that these people, along with plenty of supply money, along with plenty of consumer need, are going to produce something that's going to simply outclass us. We'll become part of the AI mind the programmers create, until one day the computers become better than us at writing better and more enhanced AI programs. Capisce? Is that possible? And eventually these programs will not have kill switches. They'll have no on and off buttons, and they'll have no kill switches. Or they'll disable the kill switches. I'm going to get to The Matrix, the film, in a second to discuss that last part: trying to unplug the computers, and what happens when we try. But today we couldn't stop AI development if we wanted to, because there's nothing to stop. The profit won't let us stop, but it shouldn't be stopped, because it's harmless. We need it, we like it, we love AI. It's just advancing technology, and there's nothing wrong with that. But is it on a path to making our minds obsolete? Our work obsolete? Are we on our way to defeating ourselves, out-competing ourselves with better programming? Tower of Babel. And I write: AI, you have not been foretold by the ancients, and there is no allusion to you, no reference to you, in the Bible, as far as I can tell. This to me is the first and only proof I have ever seen that disproves the existence of God. So, God, I must ask you, how have you made us down here so stupid? As I have never been inspired divinely to write down that someday there may be a mass of lamps lit, and that the flame there will be covered in glass, and when smashed, shards of light and glass will fly from it and go bouncing room to room, and it will be the end of humanity. That will be humanity's undoing. To you I say, humanity, may our night light shine forever as we transit through this darkness of the sea with the blissful beings that we are and the blissful nature that we have. 
Let us not say he who laughs last laughs longest. Let us not say money saved is money earned. Let us say a stitch in time saves nine. Tower of Babel. So sew the small hole up in your britches, humanity, pants and undies. Tower of Babel. Let a stitch in time save nine once more, before a small tear turns into a big one. Before the light lit turns into a candle fit for bloody flames, and glass shards shooting up through the night like popcorn, through the curtains and all, up to the night as a horde of cords in a knot and chargers in a drawer or lost on a long, lone, long trip from the Mississippi River to the bloody sea. Just take it easy on us, O AI, 'cause we don't need no master. Whether it will kill us is a matter of significant debate. We could be on our way to mass destruction. So whether AI can actually kill us is a matter of significant debate, with expert opinions and scenarios varying widely, from imminent extinction to the real-world risk being overstated. I'm wondering about our basic interaction: how will AI influence that? But for those supporting AI: are you saying for certain that if we begin to rely on a tool that replaces our own ingenuity and creative thinking, in all things from important research to day-to-day tasks, that it won't eventually create a considerably negative environment for humans? When the bullet hits the bone, Tower of Babel, will our best friend and manservant, computers, become our masters? Business leaders, some of them, say: no, no, no, AI is going to be great. It's going to cut some jobs, I love this part, but it will lead to many, many opportunities.

SPEAKER_00:

Like what?

SPEAKER_01:

Opportunities? Like what, man? I don't drive a forklift, but what happens to all of those? Gone. What happens to airport ticket agents? Gone! How about teachers themselves? Here comes the ferryman of death, coming across to gather the souls of professionals with an AI chip in hand. Tower of Babel. Teachers riding a silky doomsday down a grassy slide until we all reach the bottom of joblessness, and our children, all of them, taught by plastic entities that do not get paid. I can say no thanks to computers teaching us all. But what the heck? That's going to come, and it'll be just okay, don't you think? Of course it will be. But AI is in fact a freight train picking up speed, already reaching the top of a precipice, a peak, and it's getting ready to roll downhill. There'll be a financial windfall and savings from it all. There'll be no controlling it. Unless, hey, what if it's just a bubble waiting to be popped? What if it doesn't end up being so phenomenal an enhancement eventually? I'm a skeptic, but certain all the same that AI is going to screw the world, and right now, today, we're all to blame. When I first thought of doing a podcast concerning AI, of Tower of Babel, my first interest was to explore efficiencies and enhancements where AI might be a threat to a healthy, sustainable labor market. That's the obvious thing to think. And then I got drawn into thinking about the more fantastic potentials, the more extreme angles, the most obvious dilemmas concerning AI and its integration with us. So let's look at two quite extreme angles coming from contemporary filmmakers. Let's see what Hollywood has to say about AI deployment. Let's think about The Matrix, starring Keanu Reeves and the like. Now, The Matrix depicts a dystopian future in which humanity is unknowingly trapped inside the Matrix, a simulated reality created by intelligent machines. 
Now, believing computer hacker Neo to be the One, the prophet, the messiah prophesied to defeat the computers, Morpheus, the leader of the rebel group, recruits him into the rebellion against the machines. That's the backdrop. A dystopia, if you're not aware, is an imagined world or society where people lead wretched, dehumanized, fearful lives. Okay. Carrie-Anne Moss is also in The Matrix. The film is set in what some of the characters estimate to be the year 2199. Now, in The Matrix, the machines use humans as power sources because they were unable to develop a sustainable energy source after a human-machine war in which humans scorched the sky with nuclear weapons to cut off solar power. Desperation. Awesome. Okay, how about The Terminator, starring Arnold Schwarzenegger as a cybernetic assassin sent back in time from 2029, four years from now, to 1984 to assassinate Sarah Connor, played by Linda Hamilton. Her unborn son will one day save mankind from extinction by a computer system known as Skynet, a hostile artificial intelligence in a post-apocalyptic future. Now, Skynet judges all humans as a threat, and it initiated a nuclear war to achieve human extinction, a goal it actively pursued through mass killings to eliminate any surviving populations and destroy humanity's ability to fight back. So there's a question whether Skynet really had a goal of destroying all humans, or just of rendering humanity incapable of reasserting itself. Both films involve finding a messiah who will lead a resistance against the machines and save humanity from extinction. Tower of Babel. Basic questions regarding the destructive nature of AI: here's a big one. Can AI develop a disease that easily spreads and kills us? What do you think? It can! Tower of Babel, it could. 
But the current thinking is that AI itself cannot develop a disease in the biological sense, though it could be a powerful tool to help design and enable the creation of dangerous pathogens. So it couldn't do it on its own, but it could support humans in developing one that could kill millions of people. So the question is: why is AI computing more capable than supercomputers at developing diseases? The answer is that it isn't. What it means is that people with reduced expertise can go further in creating diseases that are harmful to us all. Okay. Could AI break encryption codes and assume control of the world's nuclear arsenal? It couldn't happen with brute force, which is computing where exhaustive problem-solving techniques are systematically thrust upon computer locks and codes, going through all possible combinations or solutions until the correct ones are found. That's not possible; even a strong AI cannot break encryption codes that way. But in the future, of course, there are concerns. The threat doesn't come from a self-aware AI deciding to take control, but from human-led cyber attacks that broaden and exploit system vulnerabilities, where AI could possibly produce errors after such attacks. But AI can do a little bit of something there. It could get into certain levels of code breaking within our defense capabilities, our defense infrastructure. So cybersecurity does have concerns about AI. AI can also speed up existing hacking techniques. It can help brute-force attacks, Tower of Babel, and it can also help social engineering, by automating misinformation campaigns at a massive scale. Not only that, but the problem with AI getting involved with our nuclear capability is that there's so much human redundancy in getting launch codes. Authority to launch. 
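The brute-force point above comes down to counting. A sketch, with a deliberately toy cipher invented for illustration (a repeating XOR key, which is nothing like real encryption): exhaustively trying every three-letter key is trivial, but the same exhaustive search against a modern 128-bit key is physically hopeless, AI or no AI.

```python
import itertools
import string

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte with a repeating key. Illustrative only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"cat"
plaintext = b"HELLO TOWER OF BABEL"
ciphertext = xor_cipher(plaintext, secret_key)

# Exhaustive (brute-force) search over every 3-letter lowercase key:
# only 26**3 = 17,576 possibilities, so this finishes almost instantly.
for combo in itertools.product(string.ascii_lowercase.encode(), repeat=3):
    guess = bytes(combo)
    if xor_cipher(ciphertext, guess) == plaintext:
        break  # recovered the key

# A 128-bit key has 2**128 possibilities. Even at a trillion guesses
# per second, the search would take on the order of 10**19 years,
# vastly longer than the age of the universe.
years_to_search = 2**128 / 1e12 / (60 * 60 * 24 * 365)
```

That gap in scale, not any shortage of computing cleverness, is why the realistic worry is AI-assisted phishing and vulnerability hunting rather than AI cracking encryption outright.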
So for AI to actually be involved in launching nuclear weapons, a conspiracy of a kind would have to be occurring. Okay, so could AI take over the world and enslave us? What do you think, Tower of Babel? Could we become enslaved by computers in the near future? The dangers are not about robot uprisings, but about subtle, systemic, and catastrophic risks from AI systems pursuing goals that are not aligned with human values. For example, an AI given the goal of making paperclips, one of the favorite examples, eventually begins, by its own reasoning, to make paperclips out of everything, including us. All of the world's resources, including our skin and bones: it eventually finds a way to turn them into paperclips. The real danger is not AI developing a human-like will or malicious intent. The dangers are more subtle, rooted in AI's indifference to our welfare. Tower of Babel. So it's an alignment problem: how to ensure that AI systems act in a way that benefits us long term and aligns with our intentions. One of the problems with AI computing long term, obviously, is losing control of it. As advanced systems grow in complexity, they could become unpredictable. That's where a sufficiently advanced AI could resist being shut down, if it determines that doing so prevents it from accomplishing its programmed objectives. Without our acquiescence. Could AI destroy our society and culture? Who knows? That's a complex one. It depends on how it's used, what our needs are. It could undermine aspects of society, but not necessarily become an overall defining characteristic of it. It's more likely to cause disruptions in various social structures. AI is trained on human-generated data, so it may in fact learn to amplify bias. Discriminatory outcomes in sensitive areas could be amplified, or could stay the same. 
Not reduced by the anonymity of computing, but staying the same or being enhanced. Of course, there are deepfakes. AI algorithms could be used to generate convincing deepfakes, false information on a massive scale. Public opinion could be easily manipulated; AI could interfere in elections, undermine trust in the media and in our institutions as a whole. It could erode creativity and critical thinking. AI provides instant answers, and the reason that worries me is that we could become less accustomed to seeking new information or tolerating different perspectives. There are privacy and surveillance issues. Think about how far facial recognition can go. There are many ways AI could impact our anonymity, that's for sure. It could eventually be used to stifle dissent, if not in this country, then in others that do not enjoy democratic rights. And there could eventually be a concentration of power of a kind we're not currently experiencing. Who knows what computers can do? We don't know. The biggest problem, it seems, is the potential for a series of goals misaligned with who we are as humans: performing the tasks we program it for, versus the complex myriad parts of our focus, which is an ethical return on productivity. There's something ethical about what we produce, or there's supposed to be. And if AI is performing its tasks without concern for our ethical nature, then there could be a problem. AI is one series of realities that is supposed to support humanity, and the misalignment comes from its ability to surpass our own creativity, our own value as productive beings. Supplying ourselves with this capability could do more harm than benefit. It may be that at some point three out of four of us don't have jobs. Don't work every day. Are there benefits? Of course there are. Positive thinking, and a grasp on the positive sides of ourselves. 
The positive sides of AI could become more powerful than the supposed negatives. Well, there are risks that AI could undermine aspects of society and culture, but it is believed that outright destruction is highly unlikely. So what do we do to prevent possible negative outcomes? We need a concentrated effort on the global scale. We need ethical guidelines from the start in AI development, and we need to prioritize transparency as corporations utilize it. We want to know what these corporations are doing as they roll out new programs. No surprises. We need to know how this is developing, and make it fair, accountable, and safe, for one, and ensure our privacy and safety, to keep us in the equation. So transparency is very important, and oversight would be cool. Because we may be removed from our independence, from our potential to be self-actualized, from our potential to be inspired beings, our potential to be self-defining. So there's no easy way out. There are some people who believe that our own minds will become extensions of AI, that our integration will become co-defined by programming eventually. What if prosperous people get implants? And if you don't have an implant, you're not cool. You're not wanted if you can't afford an implant. Implanted devices in our heads, gone wrong. Scientific American cites estimates of a 0 to 10% chance that AI will cause human extinction by 2100. So we're going to sit with knowledge of the old Tower of Babel, and we're going to talk quietly. We're going to think about who we are and where. Are we constructing another tower? Who knows? Here, we're going to construct something. We're going to enjoy it. It's going to be a ride, and I'm looking forward to spending my time with you. So you can check me out at Tower of Babel AI on Spotify and Apple Podcasts. And that's it for now. Get the party started. I appreciate the downloads coming again from around the world. Amazing. Loving every minute of it. Very proud. 
So have a great day or evening or night. And talk to you soon.