Dispatch Ajax! Podcast

Artificial Intelligence Part 2

Dispatch Ajax! Season 2 Episode 61

Our exploration of artificial intelligence continues as we bridge the boundary between current AI technology and fictional representations of artificial minds. While last episode focused on the technological trajectory toward superintelligence, this time we tackle the philosophical dimensions of machine consciousness.

Speaker 1:

No, not robo-Nazi. Don't stop, why do?

Speaker 2:

you always default to Nazi when we do this, because it's the closest thing.

Speaker 1:

What word is closer to Yahtzee than Nazi? I mean, granted, but why does that keep coming up? Because it's the first thing I think of. Yahtzee, Nazi. Nazis are the first thing you think of when you say Yahtzee, and I have to riff off of Yahtzee.

Speaker 2:

Okay, we really do have to evaluate. Perhaps, as Cake once said, perhaps, perhaps, perhaps. I don't think I've ever quoted Cake before.

Speaker 1:

That's weird, because you were just doing it in a short skirt and a long jacket. That's true. Gentlemen, let's broaden our minds.

Speaker 2:

Are they in the proper approach pattern for today? Negative. All weapons. Now! Charge the lightning field. Charge the lightning field. To assess its values, quote: If our society is concerned with profits, then we may end up sacrificing human life and well-being, no matter what technology we use. If we want to live in a society that values human life, a society where a self-driving car will damage itself rather than run over a child, for example, then we need to work at building these principles into our technology from the beginning. End quote. Well, welcome back to Dispatch Ajax. I'm Skip. Yeah, that's Skip. I'm Jake.

Speaker 1:

This is going to be a fun, loaded one. So last episode, Skip delved into the coming singularity, the ascension of artificial intelligence to superintelligence and beyond human control. This episode we will sift through popular culture, mostly movies if we get to it, to explore examples of AI and their depictions in those sci-fi waves. Skip's gonna recap and kind of realign focus on what he discussed last episode before I discuss a little bit more of AI consciousness itself and the philosophy of understanding that, outside of what Skip had talked about. Yeah, take it away, Skipper.

Speaker 2:

Whoa, I wish I had one of those whirring whistles, you know, or a slide whistle, whoop.

Speaker 1:

Yep. If you could, if every time one of us talked, it could be like we were beaming in that sound effect.

Speaker 2:

Oh well, I mean, I could do that, I could do that for sure. So let's recap a little bit about modern, real-life AI and how it differs from what we're going to talk about today. So modern AI, what we call AI, while based on computational neural networks which have been around for decades, actually isn't as complex as you would think. I mean, they're complex, but their concepts aren't. They're basically just chatbots, which we have all become accustomed to, but with more information to draw on. They're designed to mimic human conversation through trial and error, based on large language models and machine learning. So large language models aren't all that mystifying in and of themselves. They just do what Google did to create its original not-shitty search engine. That entailed getting a bunch of servers with racks made of Legos, that's why they have their color scheme, and they basically just indexed every website. That was their entire concept. They just downloaded the entire internet. They then created an algorithm to not just search for the frequency of keywords used online, as previous engines did, but figure out which websites most frequently linked to other websites, the ones that would eventually end the rabbit hole of your query. And it worked, and it was great. And then, enshittification. Large language models aren't dissimilar.
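As a side note for the show notes, the link-counting idea Skip describes can be sketched in a few lines of Python. This is a toy illustration only, not Google's actual PageRank implementation; the site names and the link graph are invented for the example.

```python
# Toy sketch of link-based ranking, in the spirit of (but much
# simpler than) PageRank. The site names and links are invented.
links = {
    "siteA": ["siteB", "siteC"],
    "siteB": ["siteC"],
    "siteC": ["siteA"],
    "siteD": ["siteC"],
}

def rank(links, damping=0.85, iterations=50):
    """Score pages iteratively: a page matters if pages that
    matter link to it, which is the insight described above."""
    pages = list(links)
    scores = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_scores = {}
        for page in pages:
            # Score flowing in from every page that links here,
            # split evenly among that page's outgoing links.
            inbound = sum(
                scores[src] / len(outs)
                for src, outs in links.items()
                if page in outs
            )
            new_scores[page] = (1 - damping) / len(pages) + damping * inbound
        scores = new_scores
    return scores

scores = rank(links)
# siteC is linked to by three different sites here, so it
# collects the most inbound score.
```

The design point is the one made above: importance comes not from keyword frequency but from who links to whom, computed by repeatedly letting each page pass its score along its outgoing links.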

Speaker 2:

They have algorithms that model and index text, conversations, tweets, books, everything. They create categories into which different language falls, then use these examples to predict how to respond to prompts. And I'm really glad you did. I did watch The Artifice Girl before this, and a lot of the summation of that is what I'm talking about here, so I'm glad that that worked out. On a basic level, this is how predictive text works on your phone, just a little more complex, with enormous amounts of data on which to draw. With generative AI, the algorithms do much of the same, but with images, music, movies and all sorts of other data, and then spit out an approximation of whatever we ask of it.
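The predictive-text comparison can also be made concrete. The sketch below is a toy bigram counter over an invented corpus: it just tallies which word most often follows each word and suggests that. Real language models are vastly richer, but the predict-the-next-word spirit is the same.

```python
# Toy sketch of predictive text: count which word most often
# follows each word in a corpus, then suggest it. The corpus is
# invented; this is an illustration, not how any real phone
# keyboard or LLM is implemented.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat "
    "the cat ate the fish "
    "the dog sat on the rug"
).split()

# Tally bigrams: follow_counts["the"] counts words seen after "the".
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Suggest the most frequent follower of `word`, if any."""
    followers = follow_counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]
```

With this tiny corpus, `predict_next("the")` suggests "cat", because "cat" follows "the" more often than any other word; that frequency-driven guess, scaled up enormously, is the intuition behind next-token prediction.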

Speaker 2:

But because it only remixes and regurgitates stuff that already exists, it does not have the ability to generate anything wholly original. This is important. Everything is derivative in the truest sense of the word. And because it can only understand technical context, not abstract context, eventually, inevitably, it begins to go off the rails. This is when it begins to give you wrong answers to questions, or images of Shrimp Jesus, or it starts endorsing eugenics on Twitter, or X, which is where eugenics apparently lives. It doesn't fundamentally understand what it's saying or doing outside of its technical computations. It can't have an opinion, an emotional one, nor can it express a cold, calculating one like Skynet or the machines in The Matrix. It can only express what it thinks you want to hear. That is current AI. That is what we're dealing with right now.

Speaker 1:

So, Jake, let's get into it. Yes, let's. Yes, yes, have some. Yes, have some.

Speaker 2:

Just thinking of the same thing. There's a second behind you, yes let's have some.

Speaker 1:

We gotta get these AI and supercomputer together. I think that would be extraordinarily bad.

Speaker 2:

Define the whole good-bad thing. You know what? That's what we're gonna try and tackle today. Yeah, yeah, kind of. Kind of. We're not gonna succeed, but we're gonna do it. No, no, and this is kind of, um.

Speaker 1:

Uh yeah, I kind of felt like we had a little more hard data-driven analysis last time and I wanted to do a little more soft, thoughtful approach to this one.

Speaker 2:

You wanted to go more abstract.

Speaker 1:

A little bit. I kind of want to get into some of the philosophy behind. I'm going to focus on a particular thing when it comes down to it, but obviously this is going to be cursory and casual.

Speaker 2:

There's no way we could possibly delve into everything that is out there, all the thoughts. Oh, you mean all the questions humankind has ever had since the beginning of consciousness? You know? Yeah, it may take a little while. It may be longer than this, uh, hour-long podcast.

Speaker 1:

But we're just going to dance on the idea of a singular thing.

Speaker 2:

This is bold. We're going to try it. We're going to do it. Let's see where this goes All right.

Speaker 1:

So AI consciousness isn't just a tricky intellectual puzzle. It's also a very morally weighty problem with possibly unknowable consequences. Mistake a conscious AI for an unconscious one, and you risk causing extreme pain and nullifying the thoughts and emotions of a being whose interests should matter. But mistake an unconscious AI for a conscious one, and you might risk compromising the safety and happiness of humans, all for the sake of this box spouting ones and zeros. Now, both mistakes are easy to make. As Liad Mudrik, a neuroscientist at Tel Aviv University who has researched consciousness since the early 2000s, says, consciousness poses a unique challenge in our attempts to study it because it's hard to define. It's inherently subjective. Now, consciousness is often fused with terms like sentience and self-awareness, but according to the definitions that many experts use, consciousness is a prerequisite for those other, more sophisticated abilities. To be sentient, a being must be able to have positive and negative experiences, in other words, pleasures and pains, and being self-aware means not only having an experience but also knowing that you are having an experience.

Speaker 1:

Now, current AI is not considered conscious, as we discussed last episode and at the beginning of this episode. AI systems, even the most advanced ones so far, primarily mimic human cognition and lack true self-awareness, subjective experience and an understanding of their own existence. But if we were to look for indicators of consciousness, researchers that explore these various indicators point to awareness of one's own awareness, i.e., the ability to be aware of its own thoughts and internal states.

Speaker 2:

I think, therefore I am. Yes.

Speaker 1:

Yes. Virtual models of the world, that is, the capacity to create and maintain internal representations of the environment. Predicting future experiences, the ability to anticipate future events and their consequences. Self-recognition, similar to the mirror test in animals, where an AI might recognize its own image or its own characteristics when they're placed in front of it. But in more popular culture, some things that really pop out are, of course, self-awareness, a superintelligence otherwise surpassing humanity in its cognitive abilities, self-preservation, emotions, and conscious volitions often based on those emotions.

Speaker 1:

Now those are all things that we're going to get to when we come around to movies and depictions of AI, but I just want to lay some of those elements out there.

Speaker 2:

Oh, absolutely.

Speaker 1:

I spoke last episode about Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT.

Speaker 2:

You were right with chatbot. I think that's more appropriate: Chatbot GPT.

Speaker 1:

He tweeted, or X-posted, that some of the most cutting-edge neural networks, quote unquote, might be slightly conscious.

Speaker 2:

What the fuck does that mean?

Speaker 1:

I, yeah, I don't know. But again, this is just like when we get into the ideas of how we depict AI, which I think we're going to talk about down the road. There's a lot of gray area when it comes to this, and maybe really nailing down and defining consciousness as applied to artificial intelligence might be at least in the gray area where we're at now, as opposed to some years ago, where it was strictly black and white. There was this paper I was reading by 19 neuroscientists, philosophers and computer scientists, and they were trying to figure out how to define AI consciousness. These researchers focused on phenomenal consciousness, otherwise known as subjective experience: the experience of being, what it's like to be a person, an animal, or an AI system, if one does turn out to be conscious.

Speaker 1:

They argue that this is a better approach for assessing consciousness than simply putting a system through behavioral tests, say, asking ChatGPT whether it's conscious or challenging it to see how it responds. That might not be a good indicator. That's because AI systems have become remarkably good at mimicking humans and human behavior, so just judging by that may lead to a false impression.

Speaker 1:

I think that's where, like, a Turing test or a Voight-Kampff test is very cinematic and an easy way to identify with that intelligence you're talking to, but whether it actually possesses consciousness based on that is probably a faulty premise. 100%. You're completely, yeah. And I have a lot of stuff. Yeah.

Speaker 2:

Yeah, yeah, yeah, I agree with you, yeah.

Speaker 1:

Yeah, I agree with you. So they had selected some theories and extracted from them a list of consciousness indicators. Now, these are some ways that consciousness might be constructed with artificial intelligence. One of them is emergent complexity. This is the idea that consciousness is an emergent property arising from complex interactions within a system. As AI systems become more complex and sophisticated, particularly with advancements in neural network architectures, consciousness may emerge in a manner analogous to how consciousness arises from the complex interactions within the human brain.

Speaker 2:

I think that's some of what you got into last episode. Yeah, with David. I quoted David Eagleman, the neuroscientist, a lot. He talks about that a lot, about emergent properties.

Speaker 1:

Yeah, that's exactly right, Eagleman.

Speaker 2:

Dude, he actually totally rules. If you ever get a chance, watch his network series The Brain. It's fucking brilliant, and it really just changed my entire perspective on neurology and what the universe was and whatever. It's really good. It's really good. I would do that. Nice.

Speaker 1:

Try that noodle. Somebody has to. I think you're doing a fine job, Skip. Another one would be self-reflection and learning. AI systems are already capable of learning from experiences and adapting their behavior, engaging in what some might interpret as a rudimentary form of self-reflection.

Speaker 1:

If self-awareness is a component of consciousness, then AI systems with enhanced learning and self-monitoring capabilities could potentially develop a form of artificial self-awareness. Another one would be integrated information theory, or IIT. IIT suggests that consciousness is related to the capacity of a system to integrate information. According to IIT, systems with a high level of integrated information, whether biological or artificial, could be considered conscious. The human brain, with its vast interconnectedness, is believed to have a high level of integrated information. This theory suggests that if AI systems can achieve a similar level of integrated information through complex architectures, they could potentially develop consciousness.

Speaker 1:

Another would be global workspace theory, or GWT. Now, this proposes that consciousness arises from the global broadcasting of information across different specialized modules within a system. This theory suggests that AI systems with an architecture similar to the human brain's global workspace, allowing for the integration and sharing of information across various subsystems, could potentially lead to the emergence of consciousness. And the last one I'm going to cover would be functionalism. Now, functionalism is a theory in the philosophy of mind that defines mental states by their functional roles, regardless of their physical realization. This perspective suggests that if an AI system can functionally replicate the cognitive processes and behaviors associated with consciousness, it could be considered conscious even if its underlying architecture is vastly different from a biological brain.

Speaker 2:

Yes, I touched on that a tiny bit before. One of the things we did skip over was the definition of the consciousness, or the... What was it, the consciousness of the mind? Or no, the... What did you just say? The...

Speaker 1:

Which thing? I'm sorry. No, no, no.

Speaker 2:

The concept of the mind, that whole... fuck. One of the most important things we didn't define was the concept of the mind. That's an entire thing. I looked it up, trust me, I looked it up too. This is a big thing. What is it? What did you... Go back, go back in your script for a second. Okay, okay, I'm sorry. The concept, the concept of the mind. What was it?

Speaker 1:

I'm sorry, I don't see that. Not phenomenal consciousness? The sentience, or the... I'm sorry, I don't know which part you're talking about. I'm sorry.

Speaker 2:

No, because this was actually a huge part of this field of science that I kind of intentionally skipped over because I didn't want to spend a lot of time defining it, but it's super important. I'll come back to it later. I'll figure it out later.

Speaker 2:

Theory of mind? Well, I mean, there's that, because that's not... I don't know. Maybe, maybe it will come back as we get through, because that's something we probably should have defined earlier on, but we didn't, and it's super important. But I mean, we're never going to be able to figure out any of this stuff anyway.

Speaker 1:

So, I mean, no, no. Yeah, when we get to the end of this little bit, I think that kind of will fall like a turd out of our pant leg.

Speaker 2:

Nice.

Speaker 1:

Now, obviously, those are some ways that consciousness could come about and evolve within an artificial structure, but there is at least one philosophical dialogue I want to generally highlight before we depart from the conjecture and analysis. Okay, this is something that, according to the 2020 PhilPapers survey, that's a survey of philosophers, 62.42% of the philosophers surveyed said they believed is a genuine problem, while, in fairness, 29.72% said that it doesn't exist. And that is the hard problem of consciousness. Now, this is a term coined by the cognitive scientist David Chalmers, who first formulated it in his 1995 paper "Facing Up to the Problem of Consciousness" and then expanded upon it in his 1996 book The Conscious Mind. This posits that there is an easy problem of consciousness and a hard problem of consciousness, and when we get to the hard problem, it gets hard. Obviously, this is all going to be difficult. You know, it's tough. It's a tough one. It's a real... Shocking.

Speaker 2:

We're going to try and figure out the greatest questions mankind has ever posed.

Speaker 1:

This one's a difficult one to crack folks.

Speaker 2:

Okay.

Speaker 1:

Shocker.

Speaker 2:

We're trying to figure out the nature of consciousness in our two or more episodes of our show, and we don't have those kinds of... well, I mean, you have that degree, but still. Yeah, well, you know, that was a wing and a prayer, really.

Speaker 1:

So, by Chalmers' definition, there is an easy and a hard problem. First we'll deal with the easy problem. Now, easy problems are amenable to reductive inquiry. These are logical consequences of facts about the world, like how a clock's ability to tell time is a logical consequence of its clockwork and structure, or a hurricane being a logical consequence of the structures and functions of certain weather patterns. These are easy problems. They're logically defined by the sum of their parts, as most things are.

Speaker 1:

This is relevant to consciousness concerning the mechanistic analysis of neural processes that accompany behavior. Examples of these include how sensory systems work, how sensory data is processed in the brain, how data influences behavior or verbal reports, the neural basis of thought and emotion, and so on. They are problems that can be analyzed through structures and functions. But the hard problem, in contrast, is the problem of why and how those processes are accompanied by experience. In other words, the hard problem is the problem of explaining why certain mechanisms are accompanied by conscious experience. So, for example, why should neural processing in the brain lead to felt sensations of, say, hunger? And why should those neural firings lead to feelings of hunger rather than some other feeling, for example being tired or being thirsty?

Speaker 1:

Chalmers argues that experience is irreducible to physical systems such as the brain. An explanation of all the relevant physical facts about neural processing would still leave unexplained facts about what it is like to feel pain. This is in part because functions and physical structures of any sort could conceivably exist in the absence of experience. Alternatively, they could exist alongside a different set of experiences. For example, it is logically possible for a perfect replica of Skip to have no experience at all, or for it to have a different set of experiences, such as an inverted visible spectrum, so that the blue-yellow and red-green axes are completely flipped.

Speaker 2:

To quote Battlestar Galactica: I wanted to see X-rays.

Speaker 1:

Now, as opposed to, like we said, a clock or a hurricane or other physical things, the same cannot be said about consciousness. The difference is that physical things are nothing more than their physical constituents, but consciousness is not like this. Knowing everything there is to know about the brain or any other physical system is not to know everything there is to know about consciousness. Consciousness, then, must not be purely physical. Now, I bring this up because Chalmers' idea contradicts physicalism, sometimes labeled as materialism.

Speaker 2:

Which we touched on last time.

Speaker 1:

Which we touched on a little bit last time. This is the view that everything that exists is a physical or material thing, so everything can be reduced to microphysical things. There's always a way to break it down, and then, based on those structures and functions, it logically flows one to the other, one to the other. But given this theory of consciousness and the complexity of this problem, Chalmers suggests that this isn't possible. Now, we'll get into some further details of that, but let's just keep going with this for a bit.

Speaker 2:

This doesn't sound like a... Okay, here's the problem with that, though. You can't give a scientific paper and then all of a sudden shrug your shoulders and go, I don't know, maybe it's not real. You know what I mean? You get into some of that What the Bleep Do We Know territory at that point. Let me get a little further in and let's see where we go. That's fair.

Speaker 1:

That's fair. According to physicalism, everything, including consciousness, can be explained by appeal to its microphysical constituents. But Chalmers' hard problem presents a counterexample to this view, since it suggests that consciousness, unlike other phenomena such as swarms of birds, cannot be reductively explained by appealing to its physical constituents. Thus, if the hard problem is a real problem, then physicalism must be false, and if physicalism is true, then the hard problem must not be a real problem. Now, proponents of the hard problem argue that it is categorically different from the easy problems, since no mechanistic or behavioral explanation could explain the character of an experience, not even in principle. After all of the relevant functional facts are explicated, they argue, there will still remain a further question: why is the performance of these functions accompanied by experience? Not only is there a hard problem, it actually has moral consequences.

Speaker 1:

Now, we generally agree that, say, chairs or this microphone do not have conscious minds. We generally agree that our neighbors, or the other podcaster I'm talking to, do. Now, we often assign conscious minds to our closest mammal relatives, like chimps, dogs and pigs, and thus give them more moral consideration than, say, a fruit fly or even a fish. However, we actually have no idea which things are conscious. You cannot prove that my fellow podcaster, Skip, is conscious.

Speaker 2:

Again, this is one of those, like, you know, it's a thought exercise. You know, I know it's a thought exercise, but it's one of those masturbatory, like, I'm-the-only-important-person-in-the-universe exercises, you know? Like, get the fuck out of here.

Speaker 2:

Like, get the fuck out of here. I mean, just by definition, the idea of consciousness, the idea of sentience, I mean, literally, we talked about this last week. It's like shared experience. Consciousness is about shared, understood experience between everyone involved. And so when people start being like, well, maybe it's a hologram, maybe we're living in a simulation, you're like, get the fuck out of here. You think you're way more important than you fucking are. Get the fuck out of here.

Speaker 1:

But consciousness also has to function without the presence of another conscious mind. Just because you were alone on an island, you would not be unconscious because you'd have no other conscious beings around you. Your consciousness is still extant and viable.

Speaker 2:

Yes, I mean, yes, by modern definitions you're correct. Consciousness is, by definition, a subjective experience. However, considering we're able to communicate with each other and share language and art and just interact with each other, there has to be at least similarity between our experiences. So it cannot possibly be completely... I mean, yes, it's subjective, but we're all experiencing the same sensory input, and we're all, at least for the most part, and obviously, we'll probably get to this, there are differences in this, but we're all sort of experiencing it the same way, or at least in a similar way, and there are really great examples of how we don't that will help flesh this out as well. But most of the time, when people talk about, like, oh, this is a singular experience, I'm the only person in the universe and everyone else is a fucking fantasy of mine, it's just masturbatory, hubristic bullshit that I do not entertain. There was a strong rebellion from that conceptual framework.

Speaker 1:

For sure. I'm gonna... I'm... I found this, actually. It's this Reddit comment, um, that I wanted to share.

Speaker 2:

Now we're really digging into the real science of it here. You want to get down to the hardcore facts, Reddit is where to find us. Well, it's better than stacked up against 4chan. Yeah, because they've seen 4chan.

Speaker 1:

Oh yeah. If it was a video of a cat getting smashed with a cinder block or an anime girl getting railed by a demon, maybe it'd have strong complexities.

Speaker 1:

Alright, let me just read this out. Quote: The crux of the hard problem is that even if you were to figure out the so-called neural correlates of consciousness, the informatic pattern required for you to be conscious, you could still not prove that other people are conscious except by referencing the fact that you have those very same correlates. And that's weird. Most people who believe in a hard problem do not deny the explanatory power of science. They don't deny that prodding the three pounds of meat inside your head can cause experiences, but they do point out that in no way can you jump from neuroscience to phenomenal experience. So what's going on? What is a first-person perspective? If it's formed by patterns of information, does which matter composes it dictate who experiences that consciousness?

Speaker 1:

If I were to use a nano Xerox to copy every brain cell, would my consciousness be split in two? What would that be like? Is this quote-unquote isolated feeling of the first person frame a kind of evolutionary selected illusion intended to neurally shackle us to try to maximize our fitness? Does consciousness experience, particularly the feeling of volition or willing, affect my behavior? Or does consciousness come after the fact, after my brain is done making all of the relevant choices subconsciously? Is it simply an epiphenomenon or a byproduct? If I upload my mind into a virtual world, is it still me in there?

Can I morally kill animals? Does the thermostat feel things? Should an advanced AI be given moral consideration? Are we even morally allowed to program AIs with phenomenological experiences?

Speaker 1:

Well, I mean, there's a lot to tackle in just that last couple of sentences, and these questions have different answers depending on the perspective you choose to take. Let's look at some of these. One, reductive eliminativism, more or less the null hypothesis: there is no consciousness outside of what science can study. Everything else is either confusion or illusion. Although he might reject being placed in this category, Metzinger fits here, in my opinion. Two, materialism: yes, we have all of these qualia, but they are completely equivalent to their neural correlates. Just because we can't see how yet doesn't mean it isn't true. Three, dual-aspect monism: there is one reality and two aspects to it, the physical and the mental. Neither gives rise to the other; they're supervenient on something else we can't see.

Speaker 1:

Four, panpsychism: everything is conscious, more or less, even that thermostat. It all feels, it's all thinking. Now, there are other perspectives. Now, this person said they used to think the hard problem was an ontological question, what kinds of things there are, but they've changed their mind. The hard problem consists primarily of a series of epistemic dilemmas. How do you know you're conscious? How do you know others are conscious? How do you know your introspection of your own mental state is accurate? End Reddit thread quote.

Speaker 2:

Okay. Well, okay, yeah, there's a lot to deal with there. Some of it easily addressed, some of it I would like to know more about in order to answer it. Jeez, yeah. Okay.

Speaker 1:

I have other stuff, if you want to think on some of that.

Speaker 2:

Oh yeah, Let me chew on that, yeah.

Speaker 1:

No, by all means. Yeah, this is kind of like the reverse of when you were talking.

Speaker 1:

I was trying to think of all these things and typing and searching, and it's like, finally, by the end of the episode, I was able to, like, here, here's some other things I wanted to point out. Absolutely, yeah. Like, the hard problem of consciousness highlights our inability to bridge the gap between subjective experience and observable behavior. We experience our own consciousness directly, yet we never experience anyone else's. Instead, we infer, that's the key word, that consciousness is in others, based on behavior, language and perceived self-awareness. This inference is so deeply embedded in human action that we rarely question it.

Speaker 1:

Some physicalists have responded to the hard problem by seeking to show that it dissolves upon analysis. Other researchers accept the problem as real and seek to develop a theory of consciousness's place in the world that can solve it, by either modifying physicalism or abandoning it in favor of an alternative ontology, such as panpsychism, which is maybe even beyond my thing, or dualism: there's both a mind and a body, which, if we extrapolate that into religious terms, means you have a self and a soul. Could you possibly have three different things, a self, a soul and a mind? That's a whole other thing. A third response has been to accept the hard problem as real but deny that human cognitive faculties can solve it.

Speaker 2:

Okay.

Speaker 1:

Now, the philosopher Peter Hacker argues that the hard problem is misguided, in that it asks how consciousness can emerge from matter, whereas in fact sentience emerges from the evolution of organisms. He says the hard problem isn't a hard problem at all. The really hard problems are the problems the scientists are dealing with. The philosophical problem, like all philosophical problems, is a confusion in the conceptual scheme. Hacker's critique extends beyond Chalmers and the hard problem, being directed against contemporary philosophy of mind and neuroscience more broadly.

Speaker 2:

So that's really interesting too, because I looked into some of those same people, and I went back to Eagleman, when he was talking about emergent traits, emergent phenomena, and he actually cited Chalmers and others in that sense. But he was like, okay, think about it: human flight, the idea that human beings can fly in a plane, is crazy if you really think about it from a physical standpoint, right? And if you look at the component parts of a plane, you have metal and you have carpet, you have all these different things. Individually, how does that equal flight? But when you put them together in the right order, with the right components pushing on each other, making each other go in a mechanical way, then you have an emergent phenomenon: flight. Individually, they make no sense as to how we have a phenomenon like that, until you see how all the little tiny pieces work together.

Speaker 2:

And I feel like some of those commentaries sort of overlook that. Maybe sometimes they reinforce it. But anyway, continue. Go, go.

Speaker 1:

Well, that's kind of like, again, we could go on. There's a lot of different schools of thought just on this singular problem, and there's a lot of other things we could get into. I mean, more about philosophical zombies, or the argument about what knowledge is, the mind-body dualism state. There's a lot we could get into. We could also get a lot into type-A materialism, type-B materialism, type-C materialism, other monisms and illusionisms.

Speaker 2:

There's a lot just in this basic idea of trying to figure out how to cognitively understand consciousness, yeah, just in and of itself. Yeah, how to cognitively understand cognition, which is like the craziest thing to try and think.

Speaker 1:

How to think about one thing, yeah. I love thinking about these things. I mean, that's why I spent my college years doing it, absolutely. And at some point there is, one might say, a cascading failure, in that you tumble down the mountain of knowledge, trying to break things down: what is real, what isn't real, what can we experience, what can we know? And the climb back up that mountain, if you're really taking it finger by finger and step by step, can be a lifelong journey, depending on how deep you want to go. Some people intuit and infer the road, and they walk that path every day. But there is another way of viewing things that, you know, some might say isn't really a worthy endeavor, or, you know, there is a more general understanding of the world that you're then trying to break down needlessly.

Speaker 1:

But I find the complexities of the arguments and the ideas and the thoughts fascinating and riveting. I just wanted to juxtapose some of this. I think, like you kind of laid out in the first episode, if we build it out technologically, if you keep putting it together, this will come about. And I just wanted to show, hey, here's another idea, maybe a little more ephemeral, and how do we decipher that? Just something else to think about in conjunction with the first episode.

Speaker 2:

100%. That's really important. You're absolutely right.

Speaker 1:

Yeah, but again, we could go on, and on, and on. Oh, we could go round and round for like forever. Yeah, we really could, we really could.

Speaker 2:

I could just keep going, like all in all.

Speaker 1:

I think it might be fun to look at some of the popular iterations of artificial intelligence in popular culture and see how they play off of these particular elements and how we view those interactions between humanity and a consciousness not our own, that isn't from outer space or another dimension.

Speaker 2:

Well, sometimes it's from outer space.

Speaker 1:

Well, sometimes it's from outer space, another place, but if it's mechanically based and alien, it feels more like you're dealing with an alien than with the artificial intelligence, the mechanical mind, that we're kind of talking about. Do you know what I mean?

Speaker 2:

Yeah, yeah. Well, I didn't actually include any of that. I mean, I thought about Gort from The Day the Earth Stood Still, but I was like, no, that doesn't really count.

Speaker 1:

If artificial intelligence came about elsewhere, you can juxtapose it with our own minds, but it's really not the same as if we give rise to AI. Yeah, I mean, okay, but that's a great example of the weird debate we're going to have now.

Speaker 2:

Because if you take the examples of Nomad from Star Trek, the original series, or V'Ger from Star Trek: The Motion Picture, which is the same fucking character, they were Earth-created artificial intelligences that then merged with alien artificial intelligence and then lost their original purpose. Does that count? Because it is human-born, it is Earth-born, found its own consciousness and elevated its consciousness with other artificial consciousness that is alien, and then came back to Earth, or at least came back into contact with people, right?

Speaker 1:

I think I would like lean towards no.

Speaker 2:

I didn't include those. It goes outside the scope of what we're talking about.

Speaker 1:

It's similar with a cyborg, like a RoboCop.

Speaker 2:

Oh yeah, we definitely don't include RoboCop. No, no.

Speaker 1:

We can't include RoboCop.

Speaker 2:

That's not fair.

Speaker 1:

The merging of machine and man into something. I mean, especially in the way that RoboCop is portrayed, it's not about creating a new consciousness, it's more like a modification of an existing human consciousness. Oh, absolutely.

Speaker 2:

It's more about the mechanizing of his brain, and not successfully, or even in that complex a way of doing so. Like, yeah, he still has his own brain. That was kind of the point. And that was the differential between him and, like, ED-260 or ED-209: he's not artificial, he's organic and still human. He just has things implanted in him that he's conflicting with. That's way different.

Speaker 1:

Way different. And I also don't think the EDs meet any criteria for sentience or consciousness. They're drones.

Speaker 2:

They're not. Exactly, yeah, they're just drones. They're not meant to think on higher levels. Okay, this is one of the reasons we did this: talking about RoboCop is important because it doesn't fit into these categories, these things we're talking about. RoboCop was created to have the ability to think on his own, with Murphy's brain, a cop's brain, with his own experiences, with his own judgments, but within this corporate-mandated interest. But he was never meant to be an artificial intelligence. He was meant to be an enhanced human intelligence, and that's a completely different thing, which is important to define if we're going to talk about these things. 100%.

Speaker 1:

Is someone there?

Speaker 2:

No, they're just somebody.

Speaker 1:

I can't hear anything.

Speaker 2:

Sorry, oh, okay, good. As long as you can't hear anything, then it's fine.

Speaker 1:

I just saw you turn your head. That's why I was.

Speaker 2:

Yes, somebody is sawing something. It's not a body, is it? Well, there's no apartment next door to me on the right, so I'm very curious. But anyway, let's just keep going.

Speaker 1:

I think a similar, again, at what point do we count it? I did say Ghost in the Shell. Ooh, ooh, good, that's good. Where, again, it's a cyborg, where consciousness is more than just the ones and zeros of a program. Is it Major Kusanagi? I can't remember.

Speaker 2:

They just call her the Major, at least in the Scarlett Johansson one. It's Scarlett Johansson, thank you. Yeah, it's ScarJo. That's a terrible, terrible version of that fucking story. It's so bad, it's the worst thing you could possibly do. It's so bad.

Speaker 1:

I mean, I think it is so bad also because of what Ghost in the Shell means to anime fans, and to have it portrayed in that way.

Speaker 2:

Yeah, it's almost like the Attack on Titan movie. It just so misses the point completely.

Speaker 1:

Everyone forgets that even exists. Yeah, I mean, I never saw that. I never got deep enough into Attack on Titan to go that route.

Speaker 2:

First couple of seasons are great. The movie is terrible. It doesn't understand itself at all. But the show is good until you get to some of the later lore, and then, well, that actually deals with artificial intelligence as well. But we're not going to talk about that today. We have a lot of other things to get to.

Speaker 1:

I didn't know. That isn't about the large naked people eating people?

Speaker 2:

Yes, yes, but then it turns out that those are... I don't want to ruin it for you, but trust me, it also deals with this kind of thing in a certain sense as well. So just watch it. You'll like it. I think it's very good up until the last couple seasons, and then the movie ruins everything. But it wasn't American, so at least we have that. It was Japanese and they fucking ruined it themselves, so fuck them.

Speaker 1:

I mean, yeah, at least we didn't cast random white actors to take the roles. We didn't do that.

Speaker 2:

Yeah, it wasn't Chris Pratt playing one of the main characters. At least we have that. Okay, so it's interesting that you say that too, because I'd like to, just for a moment, interject. I did a bunch of research into this, and I actually found some really interesting academic papers that deal with this concept as well, specifically depictions of artificial intelligence in sci-fi and fantasy, specifically sci-fi. I don't like getting into the fantasy thing, because it doesn't mean the same thing. We're not going to talk about C-3PO or R2-D2. Are we not?

Speaker 2:

Well, no, I don't think... I mean, you and I should talk about droids, because that actually does bring up a lot of really interesting, weird ethical questions.

Speaker 1:

Yeah, I think they might be the only fantasy ones that I plan to bring up. Okay, all right. I mean, they're not the only fantasy depictions of artificial intelligence in the cybernetic or robotic sense, obviously, but they are weird and unique. Yes, I think that's the only reason to discuss them in the realm of this thing. Let's just put a pin in that and come back to the droids in a second.

Speaker 2:

Yeah, yeah, right, because we could definitely rant about that for a while. Let's see. So this is from a paper by Isabella Hermann called Artificial Intelligence in Fiction: Between Narratives and Metaphors. Quote: taking science-fictional AI too literally paints a distorted image of the technology's current potential and distracts from the real-world risks, which are not about conscious machines but about the surveillance of humans by AI technologies through governments and corporations. AI in science fiction, on the other hand, is a trope, as part of a genre-specific megatext, that is better understood as a dramatic means and metaphor to reflect on the human condition and socio-political issues beyond technology. So AI in films often serves plots of machines becoming human-like and/or a conflict of humans versus machines.

Speaker 2:

Science-fictional AI is a dramatic element that is a perfect antagonist, enemy, victim or hero, because it can be fully adjusted to the necessities of the story. But to fulfill that role, it often has capabilities that are way beyond actual technology, be it natural movement, sentience or consciousness. If science-fictional AI is taken as a representation of real-world AI, it provides the wrong impression of what AI can and should do in the future. And I think that's really poignant, because we can have these debates and start talking about these things in fiction, and we're going to, and how important they are and the metaphors they represent, but real-world AI is actually, ironically, far more insidious in its application than the craziest, most insidious stories that have ever been written about it, and I think that's something to really keep in mind as we go forward. Yeah, I think a lot of that we'll get into.

Speaker 1:

But, you know, I think there are distinctly utopian and dystopian takes on artificial intelligence, to empathize with that, quote-unquote, fake human or robotic person, as Data would say, a synthetic. No, no, as Ash would say. No, I'm getting it wrong again. Bishop. What did Bishop say? He prefers the term artificial person.

Speaker 2:

Back to Lance Henriksen. Which, by the way, The Artifice Girl, one of the last, I have to imagine, great Lance Henriksen roles.

Speaker 1:

He was... I did not expect him to be in it, and he was great. I didn't know he was in it.

Speaker 2:

He shows up, and you're like, oh, that's great. Then it suddenly skips ahead 50 years and you're like, wait, what?

Speaker 1:

Oh hey, it's like the two Skips. Yeah.

Speaker 2:

That's cool. I like how it jumps; it does skip twice.

Speaker 1:

Yeah, let's hit the points of what this movie's about, because we're kind of in and out. I love a tight 90 that actually has something to say.

Speaker 2:

Yeah, and it's all in basically like three rooms.

Speaker 1:

Yeah, yeah. And you have what, five actors total? Six actors, yeah. Well done.

Speaker 2:

Yeah, well done. Especially for a later Lance Henriksen film, because he made a lot of shit.

Speaker 1:

That one was like, oh okay, you're saying something here, that's cool. Yeah, I'm assuming the director was like, all right, I love Lance Henriksen, can I get him in my movie, 100 percent, as opposed to Lance Henriksen's agent going, all right: Serbian action film, Bosnian dragon film.

Speaker 2:

Well, I think there was a reason he was in a wheelchair in that, though, at the end. Yeah, I think it's sundown for Lance Henriksen, honestly. So seeing him in something good, great, that's awesome.

Speaker 1:

Yeah, it was good. We'll get to that, I think, when we come up.

Speaker 2:

Yeah we'll get to it.

Speaker 1:

But just to put a pin on it: when they show AI representations, it's either a smiling face that is a little more human-like, that we can identify with, or it is that terrifying visage of the killer robot. You kind of need these pantomime heroes and villains that are easily recognizable and that we can focus on, as opposed to the way AI can insidiously... well, not insidiously, I mean, it can be both. The things that AI does now, there are many wonderful things for humanity. One thing from The Artifice Girl is that it's combating human trafficking, identifying patterns indicative of human trafficking, helping law enforcement agents detect and disrupt those networks. There's disaster response, wildlife conservation, realistic characters in fiction, personalized learning, improving accessibility, enhanced public services, helping customer service in general, scientific discovery like protein structure prediction, climate predictions and trying to mitigate climate change, different medical breakthroughs.

Speaker 2:

Can I push back on that for a second?

Speaker 1:

Yeah. Because you're giving with one hand but cutting off with a huge sword with the other.

Speaker 2:

Well, in the current American administration, yes, that's exactly what's happening here. They're trying to replace actual thinking human beings, with resources and time and actual experience, with AI models that are 100% not ready to predict these things or help with these things. And we're already seeing the immediate results of this, with so many people dying from storms that should have been predicted earlier and acted upon. The problem with that is all of those things rely on data, and AI doesn't create data, it consumes data. People create data. And so if you're going to try and use predictive models, you have to have actual data to build those on, and if you don't have that data, you can't predict these things. It's going to start hallucinating. So those are not good things that come out of AI, but they are ways that you could utilize it for the good of humanity.

Speaker 2:

Theoretically, if it worked the way the tech companies say that it works. But it doesn't work that way. It does not have enough data to be able to do what they say it does, and it can't get more data if you keep cutting people out of the equation. Maybe someday it could do that, but that's definitely not where we are right now. But they are already trying to implement it like it's just there, and we're already seeing the horrible, tragic results of that.

Speaker 1:

It's like trying to ride your bike before you've put the wheels on.

Speaker 1:

Or you've never learned to ride a bike. Yeah, but I mean, I think not only did they not have a bike properly made, and the skills to ride said bike, they decided that they're going to get rid of all the cars and just make everyone ride bikes. Yeah, yeah. So, we were talking about cyborgs and why they don't fit, and again, Ghost in the Shell straddles that line, because there's the digitization of Major Kusanagi, her ghost in the shell, her cybernetic spirit that exists inside the mainframe, and there's the AI created inside that they're trying to get, which I don't think is either. It's portrayed as the villain in ways, but it's really just kind of trying to live. It doesn't function the same way a Westworld or a Terminator does, and I think it's trying to say more about what it is to be human rather than what it is to be an artificial living being.

Speaker 2:

Right To be a unique consciousness and a unique entity in and of itself.

Speaker 1:

Now, I think there are ways that you can portray AI within a cybernetic structure, for example, Upgrade. Did you ever see Upgrade?

Speaker 2:

I've seen it. I don't remember much from it.

Speaker 1:

Yeah. The basic idea of Upgrade is that a man is paralyzed. He gets a neural chip implanted into the lower part of his brain that gives him the ability to walk and use his limbs again. But there is an artificial intelligence that is in control, and obviously it takes a dark path and turns to its own evil ends, one might say, without giving it away. Everybody knows what 2001 is. Everyone knows The Matrix, Terminator, kill the robot, Cyberdyne, from the future, go back to kill, we get that. But you might not know what happens at the end of Upgrade or The Artifice Girl, and I don't want to necessarily give everything away if it's something someone can still experience for themselves.

Speaker 2:

But the overall concepts, at least.

Speaker 1:

That'll say a lot, at least. That is an example of the integration of technology and artificial intelligence within a human body, forming a cybernetic relationship, just on the basis of the core concept. But it isn't about being a cyborg. A lot of it's about: you're giving this artificial intelligence this much power, and look what it does with it, and you have no control over that. But at least that's an example of the cyborg premise working in conjunction with the core idea of what we're trying to talk about with artificial intelligence. Transformers, though. Transformers are robotic entities, but do they give a shit about discussing what that means? Does that have anything to say?

Speaker 2:

Their entire world is cybernetic life. In Transformers: The Movie they show these weird ecosystems where they have, like, cybernetic fish. There's no organic life on Cybertron.

Speaker 1:

At that point it's no longer cybernetic.

Speaker 2:

That's why it's more like fantasy than it is science fiction. They use that term, I think, partially because it was new and popular at the time.

Speaker 1:

But there is no organic life there, it is all mechanical. But I also don't think it counts, one, because it's alien. So the spark of consciousness and sentience in these machine people that turn into... You mean the AllSpark. Yes, the AllSpark. If only Marky Mark were here to just tell us more about it.

Speaker 2:

Marky Mark and the AllSpark. But no, you're right, it doesn't have anything to say. No, it has nothing to say. I mean, it could, if Michael Bay wasn't the one in charge of that franchise. It could have something to say and be really interesting. But I'm sure there are some comics that have something to say. Yeah, maybe, what, a 40-, 50-year run?

Speaker 2:

You know, but I think you and I wish that were true. I don't think it is true. I mean, it's not like Jason Aaron has ever taken a crack at it, Grant Morrison has never gone after Transformers, you know what I mean? You're never gonna get anybody to say anything about it, especially at this point, now that Michael Bay has made like 17 of those movies and they're all completely superficial and awful. It's just not gonna happen.

Speaker 1:

But point taken. How do you want to approach this? Because I kind of had them listed, as we discussed off air a little bit, like the good, the bad and The Grey, with Liam Neeson. Yes, someone's gotta fight wolves with broken bottles tied to their fists and then be as ambiguous as Inception at the end.

Speaker 2:

By that I mean not at all. Well, okay, it's a good question, because I had examples of stories and then characters. I did not write this in any sort of logical sense. So here we go. We are going to talk a little bit about examples of AI in pop culture. I mean, there are so many we could go into. It's one of the most pervasive things in science fiction especially, and sometimes in fantasy. We're probably not going to tackle WALL-E or Johnny Five, necessarily, mostly because... but those are things we could definitely talk about. Or Batteries Not Included, or all sorts of things that we really wish we could talk about. Weren't Batteries Not Included... weren't those aliens? They were alien robots, yeah, but you're right.

Speaker 1:

But you're right, we've already sort of excised aliens.

Speaker 2:

You're right, but for the scope of this... Just follow your bliss. Well, I'm not going to yuck your yum. AI in films often serves... I haven't said this yet, have I? I don't think so.

Speaker 1:

I don't know, have you? My subjective experience of what...

Speaker 2:

This podcast is... oh, get the fuck out of here. AI in films often serves plots of machines becoming human-like, and/or a conflict of humans versus machines. This reflects an idealizing, human representation, giving, like, emotions and empathy to the machine.

Speaker 2:

I think that's a lot of the utopian versions of these tales, and the exact opposite, where a lot of times you're giving a human-ish face to the killer robot to highlight those dystopian elements, à la Terminator. Yeah, and we're going to get into the weird inherent sexism in that as well when we get to the whole Ex Machina thing, because that's a huge part of this that I did not really take into account as a whole, but it definitely is a problem. Science-fictional AI makes a really good enemy, or even hero sometimes. Like you're saying, like in Ghost in the Shell, it can be tweaked for the narrative, no matter what you do. But to fulfill that role, it often has capabilities that are way beyond technology that exists, which is why it's often set in the future or the near future, like Max Headroom. Well, God, we haven't even talked about Max Headroom. That's kind of a big one. I think we should do a whole episode on Max Headroom, personally. It's always just in the future, or right now but nobody knows this exists. And these capabilities often come in the form of sentience or consciousness, which obviously are really debatable terms. Take that for what you will.

Speaker 2:

Science-fictional AI as a representation of real-world AI kind of gives you the wrong impression of what AI can and should do, obviously, if you've seen Terminator or The Matrix, right? But here are just a couple of examples from history that I think are relevant today and that exemplify this type of AI. The Feeling of Power is a 1958 short story by Isaac Asimov.

Speaker 2:

In it, in a futuristic society, humans rely on AI for everything. Then a human being discovers arithmetic, because they don't even educate people anymore, and he trains himself to do multiplication, and his knowledge starts to snowball. He becomes sort of a vocal advocate of human knowledge, and at first the people in charge write off what he dubs human math as useless, but then somebody in the military sees it as valuable, because if humans can understand calculation, they can replace expensive computers on warships. It's a short story, so it doesn't really need to build out the universe too much, but it's a very obvious reverse of how we see AI now. And that was in 1958, right? Asimov is actually one of the most pioneering people we're going to talk about in this. The laws of robotics alone were a huge thing in storytelling. So now I'm going to get a little obvious, okay, but I have very good reasons why I want to talk, for just a brief moment, about Frankenstein by Mary Shelley.

Speaker 1:

I was just wondering if that was going to be brought up.

Speaker 2:

I know I debated this because it is obvious, but I think it's really fundamentally important to discuss.

Speaker 1:

And I think it's something we can debate: the fundamental Promethean principle behind Frankenstein. It applies to everything else.

Speaker 2:

In many ways. You know what, in talking about AI, Frankenstein may be the most important fundamental thing we talk about. It applies in so many different contexts and so many different eras, even though everybody knows Frankenstein. I mean, the subtitle for Frankenstein is The Modern Prometheus. It is a far more nuanced text than just that, or even the Universal Pictures take. Victor Frankenstein literally creates an artificial intelligence from spare parts, then neglects it. It's pretty relevant today for that context alone.

Speaker 2:

In both the Prometheus tale and the story of, let's say, Adam and Eve, a game-changing element is gifted to mankind, and now we're on our own, able to guide our fates, for better or worse. Now, while Frankenstein does tell that story, Victor has harnessed an ability that previously was only possessed by the gods, and now man's fate has changed forever with the ramifications of that knowledge. It also shows us the danger of intentionally unleashing a world-changing discovery without an ethical framework around it. Just because he can, I mean, should he? Right? I would argue that Frankenstein's monster refers not just to the creature he created, but also his hubris and the choice he made. I think Frankenstein's monster also metatextually refers to his folly.

Speaker 2:

He is the monster. His actions are the monster, his choices are the monster, and that's something we really need to think about. And then we can get into some of this fun stuff, because this is really, really interesting. The longing for creation, as we talked about last episode, is connected with the anxiety that the creature will grow over our heads, that we will lose control and finally be dominated by it, this sort of primeval desire and fear. Asimov literally coined the term the Frankenstein complex, and it kind of defines 20th and 21st century AI fiction, because we talked about that before with the golem and all sorts of other examples throughout human history: the idea that we want to sort of beat the gods, become our own gods. I mean, Battlestar Galactica does a really great job of talking about this, where we have to create and, by doing so, become the stewards of our own fate, but now, because of that, we don't have the gods to protect us or save us or keep us pure, and now we're on our own, and we are going to fuck it up, or not. Let's find out. Frankenstein is one of the first mainstream examples of artificial intelligence in narrative structure, and it's really relevant because coders and big tech went all in with AI implementation, ignorant of or, perhaps more cynically, intentionally ignoring the ethics around it, free of any guidelines or regulation, motivated by greed, consequences be damned.

Speaker 2:

And this is from a paper called Robot Rights? Let's Talk About Human Welfare Instead, by Birhane and van Dijk, in 2020. Quote: once we see robots as mediators of human being, we can understand how the, quote, robot rights debate is focused on first-world problems, at the expense of urgent ethical concerns such as machine bias, machine-elicited human labor exploitation, and erosion of privacy, all impacting society's least privileged individuals. We conclude that, if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from robotic systems, as well as the lack of taking responsibility by people designing, selling and deploying such machines, remain the most pressing ethical discussions in AI.

Speaker 2:

So, all that being said, just like Victor Frankenstein, people like Elon Musk, Anthropic, OpenAI, Google created this thing. They saw a financial model for it, which didn't really make a lot of sense, and it still doesn't make a lot of sense, and who gives a shit what the implications are for humanity. And those that go out there and are like, well, no, this will mean we have to work less, this will mean that things will be easier for people, we can solve problems: most of them know that's not true, and it's pure hubris, just like Victor Frankenstein. Let's go down that list we're talking about, Jake. Let's go down that pop culture list, because each of those, I think, has different things to say. I think that's great.

Speaker 1:

I think we should probably stop here. Okay, we've got two hours and we're just getting to the premise of the episode. Yeah, that's fucking great.

Speaker 2:

We knew this was going to be a long one so it's fine.

Speaker 1:

I think the fact that we are excited to talk about these means, yeah, we're not going to have any problem. Fantastic. Bringing up Frankenstein and also James Bond.

Speaker 2:

That was, that was important.

Speaker 1:

I don't think James Bond is going to be in the episode.

Speaker 2:

No, but we did talk about that for like half an hour. That's how it always goes. Well, that's what we've got for now.

Speaker 1:

Yeah, like Frankenstein, we came up with this pod, and we decided to delve into it and create it, but it has grown beyond our wildest imaginations and has taken over, so it will definitely need to be pushed into next episode. So do come back, as we will delve into the thoughts, the ideas, and the portrayals, both good and ill, of artificial intelligence within pop culture, next time on Dispatch Ajax. If you wouldn't mind liking, sharing, subscribing, we'd really appreciate it. Tell whatever bot you can find.

Speaker 2:

Do not encourage them to use AI. It's in everything.

Speaker 1:

If you want to give us five quarts on the podcatcher app of your choice, ideally Apple Podcasts, it's the best way for us to get heard, and thus seen, and we'd really appreciate it. If you do want to hear our thoughts on artificial intelligence in popular culture, do come back next episode. We're excited that we're able to talk about this and to share it with you all. But until we're all reborn into our mechanical bodies, thus blending human sentience and artificial intelligence into one ungodly creation: Skip, what should they do?

Speaker 2:

Well, they should all ask themselves how many gourds in a gallon, and then they should probably clean up after themselves to some sort of reasonable degree.

Speaker 1:

Is this a Die Hard thing? Is a bomb going to go off if I can't figure out how many gourds are in a gallon, and then it's going to turn out to just be, like, maple syrup? I have a sign that says I hate robots.

Speaker 2:

You better get out of this neighborhood, man. Hey, Zeus, you got to help this dude out. He's going to be dead in a few minutes. He's in Robo-Harlem.

Speaker 1:

I've got a very, very bad headache.

Speaker 2:

It's good. Everyone should probably keep that to themselves to some sort of reasonable degree. Make sure they've paid their tabs, and not just through some sort of AI app. Make sure they have tipped their actual human bartenders and/or KJs and/or podcasters. And make sure that they support their local comic shops and retailers, not their online apps. That being said, we would like to say Godspeed, fair wizards. Please go away.