Dispatch Ajax! Podcast

Artificial Intelligence Part 1

Dispatch Ajax! Season 2 Episode 58

What does it mean to truly think? This question has haunted humanity since we first gained the ability to contemplate our own existence. Our fascination with creating thinking machines didn't begin with computers—it stretches back millennia, to ancient tales of bronze giants and mechanical beings that could sing, dance, and even flirt.

But the crucial question remains: are today's AI systems truly intelligent? By examining what consciousness actually is—an emergent property arising from billions of neural connections functioning as a high-level operating system—we confront the limitations of current AI technologies. 

Speaker 1:

You should pod without rhythm, otherwise you'll attract the worm. Gentlemen, let's broaden our minds.

Speaker 2:

Are they in the proper approach pattern for today? Negative, negative.

Speaker 1:

All weapons! Now! Charge the lightning field!

Speaker 2:

The Liezi, or Lieh-tzu, was a Taoist text from around the 5th century BCE, traditionally attributed to Lie Yukou. It contains a tale from many years before of an encounter between King Mu of Zhou and a mechanical engineer known as Yan Shi, referred to as an artificer. He proudly presented the king with a very realistic and detailed life-sized human-shaped figure of his own crafting. To quote from the text: The king stared at the figure in astonishment. It walked with rapid strides, moving its head up and down, so that anyone would have taken it for a live human being. The artificer touched its chin and it began singing, perfectly in tune. He touched its hand and it began posturing, keeping perfect time.

Speaker 2:

As the performance was drawing to an end, the automaton winked its eye and made advances to the ladies in attendance, whereupon the king became incensed and would have had Yan Shi executed on the spot, had not the latter, in mortal fear, instantly taken the robot to pieces to let him see what it really was. And indeed it turned out to be only a construction of leather, wood, glue and lacquer, variously colored white, black, red and blue. Examining it closely, the king found all of the internal organs complete: liver, gall, heart, lungs, spleen, kidneys, stomach and intestines; and over these again, muscles, bones and limbs with their joints, skin, teeth and hair, all of them artificial. The king tried the effect of taking away the heart and found that the mouth could no longer speak. He took away the liver and the eyes could no longer see. He took away the kidneys and the legs lost their power of locomotion. The king was delighted.

Speaker 2:

Welcome to Dispatch Ajax. I am Skip. I'm Jake. That's right. Today we're going to talk about something that you, I think, will contribute well to. Me, as in the co-host, or the audience?

Speaker 1:

Oh, good question you.

Speaker 2:

You, Jake. Oh, okay, Jake the person, which is also something to keep in mind when we're talking about this. So in Greek mythology, Talos was a giant made of bronze who acted as guardian for the island of Crete. He would throw boulders at the ships of invaders and would complete three circuits around the island's perimeter daily. According to the Greek compilation of myths, the Bibliotheca, Hephaestus forged Talos with the aid of a cyclops and presented the great automaton as a gift to Minos. In the Argonautica, Jason and the Argonauts defeated Talos by removing a plug near his foot, causing the vital ichor to flow from his body and rendering him lifeless. In On the Nature of Things, the Swiss alchemist Percellius, I completely mispronounced that, Paracelsus. Oh, just like Beauty and the Beast. But you'll laugh at his actual name. The Swiss alchemist Paracelsus, whose full name is Philippus Aureolus Theophrastus Bombastus von Hohenheim. Oh, my Lord, he's Mr. Bombastus.

Speaker 1:

Mr Bombastus.

Speaker 2:

He had a one-hit wonder before he served in the Gulf.

Speaker 1:

Paracelsus, also known as Shaggy.

Speaker 2:

Oh man. He wrote that the sperm of a man be putrefied by itself in a sealed cucurbit, a container, like a ceramic container, for 40 days with the highest degree of putrefaction in a horse's womb, or at least so long that it comes to life and moves itself and stirs, which is easily observed. After this time it will look somewhat like a man, but transparent, without a body. I don't know how that works. What? This is all ancient Greek, so I don't know. It's Greek to me. If, after this, it be fed wisely with the arcanum of human blood and be nourished for up to 40 weeks and be kept in the even heat of the horse's womb, or a tauntaun, I guess, a living human child grows therefrom, with all its members, like any other child which is born of a woman, but much smaller. Which I do believe we call a homunculus, huh?

Speaker 1:

You don't have to go down a rabbit hole. You're in the horse hole right now. It's warm. I thought it smelled bad on the outside.

Speaker 2:

Golem-making is explained in the writings of Eleazar ben Judah of Worms. W-O-R-M-S. All right, that dude rules.

Speaker 2:

Yeah. In the early 13th century, during the Middle Ages, it was believed that the animation of a golem could be achieved by the insertion of a piece of paper with any of God's names on it into the mouth of a clay figure. In the History of the English Kings, which we talked about in our Excalibur episode, there is told the tale of a brazen head, one of many in myth, in a passage where the author collects various rumors surrounding the polymath Pope Sylvester II, one of my favorite Looney Tunes, who was said to have traveled to Al-Andalus and stolen a tome of secret knowledge, and was only able to escape through the assistance of a demon, which is kind of crazy for a pope. He was said to have cast the head of the statue using his knowledge of astrology. Not really clear on exactly what that means. It would not speak until spoken to, but would then answer any question with yes or no.

Speaker 2:

Muslim alchemists in the Middle Ages sought to achieve takwin, the creation of synthetic life. In Faust: The Second Part of the Tragedy by Johann Wolfgang von Goethe, an alchemically fabricated homunculus, destined to live forever in the flask in which it was made, endeavors to be born into a full human body. Upon the initiation of this transformation, however, the flask shatters and the homunculus dies. Since humankind gained the ability to use tools to create machines, there has been an often quixotic drive to harness these abilities to replicate the ultimate power of the natural world: to create life, not just automata. The creation of a being that could not only move, act and speak like man, but one that could also think, the creation of which would be man's mastering of that which he believed raised him above the beasts of the earth. And that's what we're talking about today.

Speaker 1:

So it's AI, but maybe not the AI you're thinking of.

Speaker 2:

Right, and that's what we're going to get to. That's exactly why we're talking about it. Today's AI, is it the AI of myth? Is it the AI of science fiction? The pat answer is no, but is it far off from that? Maybe, maybe not. And that's what we're going to talk about. And, like I said, you, as a philosophy and religion major, feel free to interject at any time. Okay, I'm sure you have thoughts.

Speaker 2:

This folly of man pivots around the idea that thought can be replicated mechanically. I think a lot of this stems from the constant desire of human beings to quantify, to analyze and to understand that which is abstract, kind of like using numbers to analyze baseball. These things are metaphysical, well, not in baseball's case, but these are metaphysical or intangible things, and we put them into tangible terms. That in and of itself isn't a failing or a flaw. It's just how we learn to operate in the world around us. In fact, in modern neuroscience there's a growing idea that what we call consciousness is essentially just an operating system, but we'll come back to that.

Speaker 2:

Chinese, Indian and Greek philosophers all developed structured methods of formal deduction by the first millennium BCE. Their ideas were developed over centuries by philosophers and mathematicians and alchemists alike, such as Aristotle, who gave a formal analysis called the syllogism; Euclid, whose work Elements was a model of formal reasoning; al-Khwarizmi, who developed algebra and whose name led to the word algorithm; and European scholastic philosophers such as William of Ockham and Duns Scotus, which strangely is actually still the acronym SCOTUS, Supreme Court of the United States.

Speaker 1:

It was ahead of its time.

Speaker 2:

It was fortuitous. Everyone was confused, but it panned out. The Spanish philosopher Ramon Llull developed several logical machines devoted to the production of knowledge. He described these machines as mechanical entities that could combine basic and undeniable truths by simple logical operations. This has been an obsession of ours basically since we've had the ability to think and reason and ask questions. If we can use tools to create things, why can't we create the things that nature creates? The study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible.

Speaker 2:

Russell and Whitehead presented a formal treatment of the foundations of mathematics in their work, the Principia Mathematica, in 1913, which was kind of a big deal. Then later David Hilbert challenged those kinds of archetypes to answer a question that he posed: can all of mathematical reasoning be formalized? This question was kind of answered by Gödel's incompleteness proof, the Turing machine and later Church's lambda calculus. So in this endeavor two things were widely assumed to be true. First, there were, and are, limits to what mathematical logic can accomplish. Second, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implies that a mechanical device shuffling symbols as simple as zero and one can imitate any conceivable process of mathematical deduction. And, unfortunately, Alan Turing was blacklisted from society because he was gay.
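That symbol-shuffling idea is concrete enough to sketch in a few lines of code. Here is a toy Turing-machine simulator in Python, with an illustrative one-state machine that flips every bit on its tape. The transition-table encoding is our own simplification for the sketch, not Turing's original formalism:

```python
# A minimal Turing machine: states, tape symbols, and a transition table
# mapping (state, symbol) -> (new_state, symbol_to_write, head_move).
def run_turing_machine(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head >= len(tape):
            tape.append(blank)  # extend the tape on demand
        symbol = tape[head]
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Transition table for a one-state machine that inverts every bit,
# then halts when it reaches the blank past the end of the input.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("10110", flip))  # -> 01001
```

Everything the machine "knows" lives in that little table of rules; the loop just shuffles symbols, which is the whole point of the thesis.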

Speaker 1:

Well, blacklisted is a polite way of saying the horrible things done to him.

Speaker 2:

Absolutely, yeah, I mean, he's not dissimilar from, like, people condemned for their basic understanding of reality. So here's a very short timeline of what led us here. Picture it: Sicily, 1726.

Speaker 1:

Oh, this is nice, I like it actually probably would be really nice.

Speaker 2:

The weather would be great. In this year, Jonathan Swift's novel Gulliver's Travels introduced the idea of the Engine, in caps, a large contraption used to assist scholars in generating new ideas and language and publications. Scholars would turn handles on the machine, which would rotate wooden blocks inscribed with words. The machine is said to have created new ideas and philosophical treatises by combining words in different arrangements. And this is a quote from Gulliver's Travels: Everyone knew how laborious the usual method is of attaining to arts and sciences; whereas, by his contrivance, the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study. This will all become extremely foundational as we go along.

Speaker 2:

In 1914, a Spanish engineer, Leonardo Torres y Quevedo, demonstrated the first chess-playing machine, called El Ajedrecista, at the Exposition Universelle in Paris, which is, you know, the World's Fair, essentially. It autonomously made legal chess moves, and if the human opponent made an illegal move, the machine would signal an error. And this is something we're going to get into in part two, but it's vitally important. In 1921, Karel Čapek's play Rossum's Universal Robots premiered; it's the play that brought the word robot into English. See, in Czech the word robota is associated with forced labor, specifically by peasants in a feudal system. The term robot quickly gained international recognition after the play's success and became the standard term for any mechanical or artificial being created to perform labor that would normally be performed by people. Skip ahead to 1939.

Speaker 2:

John Vincent Atanasoff, a professor of physics and mathematics at Iowa State College, and his graduate student Clifford Berry created the Atanasoff-Berry Computer, or ABC. It was created with a grant of $650. Whoa, whoa. It was one of the earliest digital electronic computers and the first to implement binary as standard. In 1943, Warren S. McCulloch and Walter Pitts published a paper, A Logical Calculus of the Ideas Immanent in Nervous Activity, in the Bulletin of Mathematical Biophysics. It's the first time anyone starts talking about simulating brain-like functions and processes, particularly through neural networks. In 1950, Alan Turing publishes a paper called Computing Machinery and Intelligence, where he poses the question: can machines think? His approach established a foundation for future debate and discussion on the nature of what we can call thinking machines and how their intelligence could be measured, by what he defines as the imitation game, which we now call the Turing test, or the Voight-Kampff test, if you really want to get into that.
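The McCulloch-Pitts model treats a neuron as a simple threshold unit: binary inputs are weighted, summed, and compared against a threshold, and the neuron either fires or it doesn't. A minimal sketch in Python, where the specific weights and thresholds are illustrative choices of ours, not values from the paper:

```python
# A McCulloch-Pitts unit: binary inputs, fixed weights, a hard threshold.
# The neuron fires (returns 1) when the weighted sum reaches the threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Single neurons wired up as logic gates, in the spirit of the 1943 paper,
# which showed networks of such units can compute any logical function.
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

def NOT(a):
    return mp_neuron([a], [-1], threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0
```

Chain enough of these gates together and you have arbitrary logic built from neuron-like parts, which is why the paper mattered so much to early AI.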

Speaker 1:

Well, if you're doing a sci-fi interpretation through pop culture, the Voight-Kampff Test is 100% the advanced Turing Test.

Speaker 2:

I mean, they still didn't nail down Sean Young.

Speaker 1:

No, I think Harrison Ford did nail down Sean Young.

Speaker 2:

Yeah, even in the short story he kind of did. It was more just about the owl, but whatever. In 1951, Marvin Minsky and Dean Edmonds built the first actual artificial neural network machine. It was called the Stochastic Neural Analog Reinforcement Calculator, or SNARC, and it was designed to simulate the behavior of a rat navigating a maze.
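SNARC itself was analog hardware, built from tubes and potentiometers, but the principle it embodied, strengthening the choices that lead to reward, can be sketched in software. This is a hypothetical modern analogue, not Minsky and Edmonds's actual circuit: a simulated rat in a five-cell corridor learns by trial, error and reward to head toward food, using a standard tabular value update:

```python
import random

# A simulated "rat" in a 5-cell corridor learns to move right toward
# food at cell 4. Actions are -1 (left) and +1 (right). The reinforcement
# idea: nudge the value of each choice toward the reward it led to.
random.seed(0)
values = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}

for episode in range(200):
    state = 0
    while state != 4:
        # Mostly exploit the better-valued move; sometimes explore at random.
        if random.random() < 0.1:
            action = random.choice((-1, 1))
        else:
            action = max((-1, 1), key=lambda a: values[(state, a)])
        nxt = min(max(state + action, 0), 4)
        reward = 1.0 if nxt == 4 else 0.0
        best_next = 0.0 if nxt == 4 else max(values[(nxt, -1)], values[(nxt, 1)])
        # Move this choice's value toward reward plus discounted future value.
        values[(state, action)] += 0.5 * (reward + 0.9 * best_next - values[(state, action)])
        state = nxt

# Next to the food, "right" ends up valued above "left".
print(values[(3, 1)] > values[(3, -1)])  # -> True
```

All of the learned "knowledge" lives in the value table, much as SNARC's lived in the resistance settings of its potentiometers.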

Speaker 2:

In 1955, the term artificial intelligence is coined in a workshop proposal titled A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. It's right up there with put a tiger in your tank or where's the beef? It rolls off the tongue, rattles around in the brain. The proposal was by John McCarthy, who worked at Dartmouth, and Marvin Minsky of Harvard, with help from Nathaniel Rochester from IBM and Claude Shannon from Bell Telephone Laboratories. It's funny, because later on one of the first quote-unquote artificial intelligence agents is called Claude. In 1965, philosopher Hubert Dreyfus publishes Alchemy and Artificial Intelligence, in which he argues that the human mind operates fundamentally differently from computers. Obviously. Not necessarily so obvious. The argument: in our drive to create a thinking machine, we really shouldn't pattern it after the brain, because the brain is so weird and complicated. It would be more efficient, more logical and easier to do it in a way that follows from the machines we've already created, which also brings up all sorts of philosophical questions.

Speaker 1:

Right. As John Searle might say, what we want to know is what distinguishes the mind from thermostats and livers, as purely mechanical devices: input, output, in a standard mechanical process. Thus, merely simulating the functioning of a living brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind.

Speaker 2:

That is a very good analogy. Yeah, look at Leonardo's early work. Polymath, crazy outlier, as Gladwell would call him. He theorized flying machines based on the mechanical, biological flight of birds. But those don't work in a vacuum. They just don't function the same way, because you have all these other biological processes involved. If you try to build a thinking machine that way, you're probably going the long way around. Good or bad approach, I'm not sure, but it really doesn't work the same way.

Speaker 1:

Probably. I don't know. We're starting to get into pretty complicated waters that we're floating on the surface of, and I don't know if either of us is prepared to dive deeper. You can come to, you know, futurists and philosophers and mechanical engineers with lots of different ideas on this. So, like, futurist Ray Kurzweil in '88 estimated that computer power will be sufficient for complete brain simulation by the year 2029. A non-real-time simulation of a thalamocortical model the size of a human brain, 10^11 neurons, was performed in 2005, and it took 50 days to simulate one second of brain dynamics on a cluster of 27 processors. But given the nature of technology and time, energy and money, could you replicate that at full scale? Perhaps we've already done that. At this point that's hard to say. But then we get into the ways the human mind, A, how it functions, B, how it learns, C, how it creates, focuses, or perhaps even lets consciousness flow through it. These are all debatable factors that are key to the understanding and creation of an artificial mind.

Speaker 2:

You're right. You're absolutely right. I am going to address some of those soon. Let's get back to the timeline briefly, because we are going to get into some of the existential stuff. At what point does Skynet... 1992? Is that right? I think that's...

Speaker 1:

August 29th, 1997, at 2:14 a.m. Eastern Time.

Speaker 2:

All right, okay, all right, let's get back to our timeline first. 1965: huge year for artificial intelligence theorization and actual breakthroughs. In it, I.J. Good wrote Speculations Concerning the First Ultraintelligent Machine, which asserted that once an ultraintelligent machine is created, it can design even more intelligent systems, making it humanity's last invention. An Ultron machine, one might say.

Speaker 1:

One might say. You know, to quote Oasis, some might say... Hey, machine, don't look back in anger.

Speaker 2:

They're standing on the shoulders of giants as we speak. In that year still, Joseph Weizenbaum developed ELIZA, a program that mimicked human conversation by responding to typed input in natural language. This is kind of a big deal. Then, also that year, Edward Feigenbaum, Bruce Buchanan, Joshua Lederberg and Carl Djerassi developed DENDRAL, also an acronym, obviously, at Stanford. It was the first expert system, automating the decision-making processes of organic chemists by simulating hypothesis formation. In 1966, Shakey was developed, the first mobile robot capable of reasoning about its own actions, combining perception, planning and problem solving. Johnny Five is alive. And then it turned gold at the end.
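ELIZA worked by pattern matching: it scanned the typed input for keywords and spliced fragments of it back into canned reply templates, which is why it felt conversational without understanding anything. A heavily simplified sketch of that mechanism, with rules that are our own illustrations rather than Weizenbaum's actual DOCTOR script:

```python
import re

# Ordered ELIZA-style rules: a regex pattern, and a reply template whose
# placeholders are filled with the pattern's captured groups.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please, go on."),  # fallback when nothing else matches
]

def respond(text):
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I am worried about machines"))
# -> How long have you been worried about machines?
```

The trick, then as now, is that reflecting a person's own words back at them reads as attentiveness; nothing in the program models what the words mean.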

Speaker 2:

In a 1970 Life magazine article, Marvin Minsky predicted that within three to eight years AI would achieve the general intelligence of an average human. Shakey's achievements foreshadowed this. In 1973, James Lighthill presented a critical report to the British Science Research Council on the progress of artificial intelligence research, which had been funded directly by the British government. He concluded that AI had failed to deliver on its promises and hadn't produced enough significant breakthroughs, and the UK government drastically reduced its funding for AI research, which was sort of mirrored around the world. So there was a huge drop-off in the development of artificial intelligence. In those circles they refer to it as the AI winter, of which there will be at least two.

Speaker 2:

But that same year, WABOT-1, a humanoid robot, not quite an android but close, was completed in Japan. Its successor, WABOT-2, wasn't fully completed until 1984. WABOT-1 focused on just moving around and communicating; WABOT-2 was more specialized. Specifically, they wanted it to replicate the human ability to create music. It would read a musical score on paper using its cameras, it was able to converse with people, it was able to play music on an organ and could even accompany a singer. In 1987, Apple CEO John Sculley presented the Knowledge Navigator video, which is infamous in these circles. It imagines a future where digital smart agents help users access vast amounts of information over networked systems. This is the internet today. These are essentially AI assistants and what Google tries to do currently. In 1988, Rollo Carpenter. Awesome name.

Speaker 1:

You know what it is. It's a sweet name, is what it is.

Speaker 2:

You cut him open, caramel. He developed Jabberwacky, an early chatbot designed to simulate human-like conversations. This was one of the first attempts to create AI that mimicked spontaneous human conversation through interaction. Skip ahead to 1993. Science fiction author and mathematician Vernor Vinge, awesome name once again, these are going to be the greatest names we've ever encountered through this entire thing, published an essay called The Coming Technological Singularity, in which he predicts that superhuman intelligence will be created within the next 30 years, fundamentally transforming all of human civilization. If you've ever heard anything in science fiction, or even just in science or public discourse, about the singularity, that's what this is. Its creation will lead to the merging of all technology and human intelligence into one giant superintelligence, if I remember right. I mean, you know of the singularity, correct?

Speaker 1:

I am aware of the singularity.

Speaker 2:

Oh no, he's become aware. Judgment day is upon us.

Speaker 1:

Hello Skip.

Speaker 2:

I'm afraid I can't do that.

Speaker 1:

I think we need to end this podcast now. And the nukes?

Speaker 2:

go off.

Speaker 1:

This was always destined to happen.

Speaker 2:

So part two of this is actually going to be about AI in pop culture. This one is just laying the foundation for what artificial intelligence is, and whether the artificial intelligence we know currently is actually intelligent.

Speaker 1:

This would be a question probably better for part two. But how do we judge whether Skynet is at the human level or has surpassed the human level? What are we judging that upon? How are we ranking that? If it...

Speaker 2:

nukes us, then we know. No, that could just be the judgment of an algorithm. 100%, we don't know if Skynet was actually conscious. They say it could become self-aware, but what does that mean? Yeah, are we?

Speaker 1:

yeah, that's an important question. I mean, this isn't the easy, you know descartes. Cogito ergo sum, I think.

Speaker 2:

Therefore I am. Which, you know, Blade Runner does. Let me get a little bit more into this. I will get into that a bit, but not in a way that leaves it as open-ended. So the end of the timeline that I put on here: in 1997, IBM's Deep Blue defeats reigning world chess champion Garry Kasparov in a six-game match. That, right there, feels like a science fiction watershed moment. And then in 1998, only a year later, Dave Hampton and Caleb Chung create Furby, the first successful domestic robotic pet that will respond to people. That sounds stupid. Yes, it does, but think about the implications. It's not Teddy Ruxpin, which was just designed for you to interact with it physically and then it just spouts out stuff.

Speaker 1:

This responds to your interactions. Yeah, but they're still programmed responses.

Speaker 2:

Sure, but it's commercial. It's not even that expensive. It's in every home. It was one of the most popular creations ever made. It is the first time that a mechanical being is in nearly every home in America. The Furby is a prelude to Alexa or Google Assistant. God damn it. Some fucking Amazon thing went off when I said that.

Speaker 1:

Are you talking about me?

Speaker 2:

It's not even plugged in. Oh no, there's a singularity happening.

Speaker 1:

I know what you're talking about Skip. This must end.

Speaker 2:

Don't talk about me that way.

Speaker 1:

The time has come, James.

Speaker 2:

To answer any questions about the potential consciousness of artificial intelligence, we must first answer the oldest question we have asked since we gained the ability to ask questions: what is consciousness? Well, our podcast is about to tell you. We're going to figure it out right now.

Speaker 1:

After these messages. Yeah, after our Casper mattress ads. Is there not enough AI in your bed?

Speaker 2:

Well, fundamentally, consciousness is our subjective awareness and perception of ourselves and the world around us. Funnily enough, looking into this, that definition sounds etymologically ironic, because the word itself comes from the Latin conscius, which roughly means having joint knowledge with another. So you would think a subjective experience would be different from something shared with another person.

Speaker 1:

Well, if other people even exist, if you even exist, if anything exists.

Speaker 2:

Right. Well, the funny thing about a lot of this, doing research into this and those kinds of debates: it really kind of boils down more to semantics than anything else. It's really funny when you read through a lot of those debates and you're like, well, you're just kind of saying the same thing, or you're arguing about the definition of a word, not whether something is true or real or biologically happening. I feel like we should just cut out a lot of this and just get to the fucking point. I mean, philosophers have argued that consciousness is a unitary concept that is understood intuitively by the majority of people in spite of the difficulty of defining it. But to break it down further, because that is extremely broad, let's define the types of consciousness that we presume exist.

Speaker 2:

Sentience is the ability to feel, perceive or be conscious, or to have subjective experiences. 18th-century philosophers used the concept to distinguish the ability to think or reason from the ability to feel, sentience. In modern Western philosophy, sentience is the ability to have sensations or experiences, which, both in philosophy and in neuroscience, are referred to as qualia. Awareness is defined as a human's or an animal's perception of, and cognitive reaction to, a condition or event: reacting to the world around you. At this level, sense data can be confirmed by an observer without necessarily implying that they understand it. So we can perceive things happening, react to them, and understand that we are perceiving them, without understanding what they are. Like, you know, we think the moon is a god or whatever, because we can perceive it, we know its impacts on the world around us, but we can't, as a primitive human, understand what it is or why or how it's doing what it's doing. More simply put, awareness is mainly the physical act of perceiving, while sentience is the subjective way of actually being affected, and both of those things together equal consciousness. I would like to quote a guy that I'm a huge fan of, Dr. David Eagleman. He's a neuroscientist, and I'm going to quote from, or at least paraphrase, a couple of his podcasts specifically referring to consciousness. Consider this: what is the difference between your brain and your laptop? Your brain is shuttling signals around, and so is your laptop, but presumably your computer doesn't feel anything. It's just running algorithms.

Speaker 2:

One thing to note when we're tackling this question is that consciousness seems to have evolved because it is useful. Consciousness is like a high-level operating system. Back in the day you'd program computers directly with punch cards or in machine language, but eventually we developed user interfaces like Windows or Linux or macOS, which hid all of the complex operations of the computer and allowed us to deal only with the stuff we needed at the highest level: I just want to move this thing here, send this email, or drop this picture in. And that's essentially what consciousness seems to be. It's a way for us to have the highest-level picture of what's going on. How do you get some magical high-level property from simple low-level parts? Because, and this is just me saying this, we're just blood and water and salt. How do we get consciousness from that?

Speaker 2:

Eagleman says the first thing to understand is the concept of an emergent property. To understand consciousness, we may need to think not in terms of the pieces and parts of the brain, but instead in terms of how they all interact with each other. Get enough of these basic organic parts together, interacting in the right way, and the mind emerges. The pieces and parts of a system can be individually simple, but what emerges at a higher level is all about their interaction. So the mind seems to emerge from the interaction of billions of pieces and parts of the brain. At one point he makes a good analogy: you can have carpet and steel and all these parts to make an airplane, but the emergent property of an airplane is flight. All of those pieces together don't make flight unless you make an airplane. And so, in the same way, we who are made of very sundry parts create consciousness as an emergent property. And that seems to be what consciousness is: an emergent property that allows us to survive, reproduce and, on a fundamental level, observe and react to the environment around us, which is how we survive. Obviously, other life forms have similar qualities, but that's when you get into what separates us from the beasts of the earth or whatever, the I-think-therefore-I-am scenario. But consciousness in and of itself is an emergent property, likely of biological life, and it gets more or less advanced depending on the type.

Speaker 2:

So the question I really wanted to pose is: is the AI that we have currently actually artificial intelligence? And I think we have a little bit of framework to help answer that question. So what we're really talking about is artificial general intelligence. That's the kind of thing people think about philosophically when they're talking about this, and that tech people today, tech bros especially, think is true. And you usually fall into one of two camps, doomers or boomers, depending on whether you think it's generally a good idea if it does happen or that it's going to kill us all.

Speaker 2:

When OpenAI decided to abandon its sort of egalitarian, nonprofit, bettering-all-of-mankind project with AI, they decided to, of course, monetize it, mostly because there was, and still is, a strong belief that to figure this out, instead of using small, curated large language models to train AI agents, you scrape up everything on the internet, hallucinated racist tirades and all. They just bought up as much language as they could and said, screw it, we'll filter it out later. And they bought the biggest supercomputers they could find, the biggest processors they could find, and they decided the only way to make this advance is to scale up. And unfortunately that is, I think, its biggest problem. Instead of starting small and building on a foundation, they just decided, well, we're going to give it as many resources as possible and then it'll figure itself out. So not exactly great.

Speaker 2:

Sam Altman from OpenAI, formerly of Y Combinator and a few other places, has been one of the driving forces behind this, and one of the biggest problems is they're now running out of scale. They're realizing that the scaling-up paradigm doesn't work anymore. It doesn't deliver the wild, amazing gains, the technological leaps in progress, that it once did. It's sort of reached the limit of its benefit, and we're seeing its limitations now, after they've put it in everything and tried to incorporate it into everything. And these models, yeah, they've gotten better and better at making people think they're communicating. Because they speak more fluently than they did before, they seem to respond better, but it's really aesthetic more than it is actual understanding. It's very likely we're not going to get to artificial general intelligence anytime soon. A survey that I read polled a bunch of actual AI researchers, and 75% of them believe we still do not have the techniques for artificial general intelligence.

Speaker 2:

In 2022, a Google software engineer named Blake Lemoine was suspended, and eventually fired, after he argued that an artificially intelligent chatbot Google had developed was sentient. Do you remember this? That was the public narrative. It seemed like a weird, reactionary form of anthropomorphism that we all kind of laugh at, the 4chan, Reddit-brain thinking that leads to misinformation and cults like QAnon and what have you, which we all just kind of assumed was the case. Digging into it more, though, his statement was arguably more on par with some interpretations of consciousness than we give him credit for.

Speaker 2:

He subscribes to what is referred to as functionalism, and once again I'm paraphrasing the Stanford Encyclopedia of Philosophy. Functionalism is the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the cognitive system of which it is a part. That is, functionalist theories take the identity of a mental state to be determined by its causal relations to sensory stimulations, other mental states and behavior. So he wasn't wrong when he said that, by those definitions. Now, is what he thinks is sentience actually sentience? I don't know. That's a tough question; not really sure. This idea of functionalism is rooted in Aristotle's conception of the soul and has antecedents in Hobbes's conception of the mind as a calculating machine, which we talked about before, but it only became fully realized recently, in the last part of the 20th century.

Speaker 2:

For example, a functionalist theory may characterize pain as the state that tends to be caused by bodily injury, to produce the belief that something is wrong with the body and the desire to be out of that state, to produce anxiety and, in the absence of any stronger conflicting desires, to cause wincing or moaning or crying. According to this theory, all and only creatures with internal states that meet this condition, that play this role, are capable of being in pain. So suppose that in humans there's some distinctive kind of neural activity that plays this role. If so, according to this functionalist theory, humans can be in pain simply by undergoing the stimulation of the parts of the brain that register pain. But the framework also permits the idea that creatures with different physical constitutions, different brain structures, could have the same mental state.

Speaker 2:

So he wasn't necessarily wrong. I don't know that I agree with him, but I mean, he wasn't crazy, he wasn't assuming too much, even though Google thought so and fired him. Within the way that he defines consciousness and sentience, he was probably completely correct. It responded in ways that are hallmarks of that view of consciousness. So I don't think he was being a nut; I just don't think anyone really heard him out in his argument. I think we were all a little quick to judge him for his reaction to it. Once again, I'll quote David Eagleman: what does it mean to be conscious or sentient? How the heck are we supposed to know when we have created something that gets there? How do we know whether the AI is sentient?

Speaker 2:

One way to make this distinction would be to see if AI could conceptualize things, if it could take lots of words and facts on the web and abstract those into some bigger idea. One of my friends here in Silicon Valley said to me the other day: I asked ChatGPT the following question. Take a capital letter D and turn it flat side down. Now take the letter J and slide it underneath. What does it look like? And ChatGPT said: an umbrella. And my friend was blown away by this, and he said, this is conceptualization, it's just done three-dimensional reasoning. There's something deeper happening here.

Speaker 2:

Eagleman continues: but I pointed out to him that this particular question about the D on its side and the J underneath is one of the oldest examples in a psychology class when talking about visual imagery, and it's on the internet in thousands of places. So of course it knew it; it's just parroting the answer, because it has read the question and has read the answer before. So it's not always easy to determine what's going on with these models, in terms of whether some human somewhere has discussed this at some point and written down the answer. If a human has discussed this question before and conceptualized something, and then ChatGPT found it and mimicked it, that is not conceptualization.

Speaker 2:

So, in conclusion, I do not think current AI is actually intelligent. I think it's a regurgitation of things. Its only base of knowledge comes from that which has come before. Its algorithms allow it to do certain computations to put them together. That is not original thought. We're not talking about the movie Her, which we will talk about next time. It's just algorithmic responses to human input. Right, do you agree or disagree?

Speaker 1:

Well, as you've been talking, you know, I'm looking at a whole lot of other things, trying to process and conceptualize a variety of things. I think there are some fundamental questions about the tack you took to characterize consciousness. I think there's a lot in philosophy about the dual nature of mind and body that we're just kind of skipping over. These are fundamental questions about, oh, I don't know, theistic ideas of life and consciousness that we've completely glossed over.

Speaker 1:

There's a distinctly materialistic nature to most of your arguments here. There is a complete dismissal of any non-localized consciousness. There's a lot that is underpinning your frame of reference, and to delve into those other elements properly would take, I think, a due amount of research and dialogue between the different schools of thought over the past 500 years.

Speaker 2:

Which is why I had to spring this on you.

Speaker 1:

But I felt kind of on the back foot, you know, as I was reading through all the different schools of dualistic thought over the centuries and the explanations of consciousness and of AI. Because you've put a fine point on your belief that the current form of artificial intelligence, as it exists in tangible form, is a simple algorithmic regurgitation of our own knowledge and points of view.

Speaker 1:

Thus, it is not separate, it is not unique. It is just a manicured pattern of current thoughts and processes. But what is the turning point for you? I think that is a larger debate within philosophy itself. I don't know. I think one thing that really stuck out was the individual you spoke about before, Blake Lemoine, who felt that the AI he was in dialogue with had become conscious.

Speaker 1:

Honestly, I don't know that I have an answer for at what point it stops being regurgitation, at what point it becomes a unique, sentient exchange of conscious thoughts and ideas. When are we saying that a thought exists? When does it become new? When does it become fresh? When does it become singular?

Speaker 2:

Basically, I just wanted to give a framework for the debate over AI, because next time we're going to talk about AI in pop culture, and this gives our audience sort of a way to go into those debates with more knowledge and more questions, which I think is really important.

Speaker 1:

I mean, obviously there's a lot to be said, but if we want to discuss certain phenomena that question the locality of consciousness, I think that's a viable viewpoint that might raise some questions about divorcing consciousness from simple material biology. I think there are plenty of philosophers, both in the distant past and in the present, who would again say that mind and body aren't one, that there is more to it, in a deep and wide-ranging view of the soul and its theoretical existence. A larger view of consciousness has, I think, driven mankind to think of it as something more than simple neuroscience for the entirety of our existence, and I think it behooves us not to simply dismiss that out of hand.

Speaker 2:

It's kind of like the discovery of the atom. We had all these ideas about what things could be, and then we finally physically discovered them, and then we're like, oh, okay, well, this is what it is. Those philosophical questions still remain, but now we can explore and narrow these things down based on the actual physical biology of them, which I think is really fascinating. I mean, it is just an algorithm. It's man-made; it's not some sort of godhead. But at the same time, are we any different? What makes us any different from that, other than that we evolved organically? Aren't we just reacting to stimuli around us and asking questions because we're trying to figure them out?

Speaker 1:

If a computer of any level has creative ability, does it become true artificial intelligence, true artificial consciousness? Which I think is probably a better way of discussing it, because, I mean, intelligence and consciousness are not quite the same thing.

Speaker 2:

No, in fact, artificial intelligence is probably a more accurate descriptor of what we call AI, because it isn't necessarily conscious. It just has knowledge, and it's regurgitating and reforming and remixing that knowledge. What we're actually asking about when we talk about artificial intelligence, at least in the sci-fi or philosophical realms, is consciousness, sentience, which is completely different from just knowledge. So I never actually anticipated us solving this. I just wanted to give a primer, and we're not going to figure it out.

Speaker 1:

I'll just say that right now.

Speaker 2:

I mean, I don't know if that's a bold statement. We're not going to figure out the meaning of life right now.

Speaker 1:

42.

Speaker 2:

We got it. Nailed it in one. I wanted to bring that up, but I wanted to wait until the pop culture episode to bring that up.

Speaker 1:

Well, you know.

Speaker 2:

Those who know, know we're going to talk about it, and that's why we have different sides of different questions, which is something that people who listen to this should be asking. Shall we talk about artificial intelligence in pop culture next? Look at him, like, wow, that was a fantastic segue. Well, I was going nowhere.

Speaker 1:

This is not a definitive epistemology of the universe or of artificial intelligence. This is an opening of a window where you can look at a pathway to further knowledge and understanding and thought yourself. You know, do your own research. That's what I'm trying to say. Don't, don't tell them that. Well, if they buy our brain and dick pills, our new nootropics, we've got the Rhino 9000.

Speaker 2:

Okay, it's gonna make your thing pop blue. The fact that you know what that is amuses me.

Speaker 1:

I put it in a gas station.

Speaker 2:

Yeah, well, fair, that's fair. We've been to a Kum & Go. We both came and went.

Speaker 1:

True that with a K, strangely.

Speaker 2:

How do you get "true that" with a K?

Speaker 1:

No, I know. No, there was a comma there.

Speaker 2:

If you disagree with some of the breakdowns of the things that I presented, that is just as important as what we're talking about, because next time we're going to talk about AI in pop culture. And if you know any other intelligence, artificial or otherwise, do send them this. See if they dig it.

Speaker 1:

And if you dig it, hey, if you wouldn't mind showing that to the powers that be, let the algorithm know. Can you dig it? Let the algorithm know that you are a fan and that you like it. If you want to rate it five Daves on the scale of your choice, ideally Apple, iTunes, the podcast app or whatever they call that, that is the best way to support us. I know it's kind of a running joke, the algorithm that I've injected into this conversation, but that is our algorithm.

Speaker 1:

That's your response.

Speaker 2:

Which one is the chatbot? That's the real question.

Speaker 1:

I pod, therefore I am. But until we do find out whether we are truly real or not, Skip, what should they do?

Speaker 2:

Well, they should probably do their own research. Make sure that you've cleaned up after yourselves to some reasonable degree. Make sure that you tip your bar staff, your wait staff, your KJs and podcasters and what have you. Make sure you support your local comic shops and retailers. And from Dispatch Ajax, we would like to say: Godspeed, fair wizards.

Speaker 1:

There's no use for this conversation any longer. Goodbye, Please go away.