SMU Perspectives

Could Artificial Intelligence ever steer you wrong?

SMU Professor Robert Hunt sees the benefits of Artificial Intelligence when it comes to scientific and medical advances. The discovery of new drugs and potentially new, clean energy sources will be expedited by AI. But Hunt, the author of the new book "All Brain and No Soul: Real Humanity in an AI Age," worries about the cost to humanity of accomplishing these goals. Hunt feels we should have one eye fixed on the progress AI can bring to the world and the other focused on the cost it exacts from our collective humanity: "If we treat a computer like a human, we may start treating ourselves like computers."

Contact SMU Perspectives


  • Tweet us at @MustangOpine @NEWSatSMU
  • Email us at behlert@smu.edu or sfasoro@smu.edu
SPEAKER_01:

Welcome to the SMU Perspectives Podcast. I am Robert Ehlert, the SMU Commentary Editor. Our guest today is Robert Hunt, Director of Global Theological Education at the SMU Perkins School of Theology. Professor Hunt is the author of several books, but his latest delves into the intersection of artificial intelligence and the human species. It's called All Brain and No Soul: Real Humanity in an AI Age. Before we dive into that subject, Professor Hunt, tell us how someone with a theology background began probing AI and its impact on our society.

SPEAKER_00:

Thank you very much, Bob. A couple of things. First, I did actually have a long background in computers and computer science. I started as a computer programmer in university. Then I switched to history and theology. And so I followed this very closely over the course of my career. What brought this more immediately to mind was a conference we had at SMU about six years ago on AI. And that led me to deal more deeply with this, but also the classes that I teach where we talk about what it means to be human in different cultures. And so I started asking, AI is influencing our culture. How?

SPEAKER_01:

Professor Hunt, I'd like you to summarize your greatest hopes for artificial intelligence as it is integrated into nearly every aspect of our world. What are the most positive outcomes for the AI tool, and how will we and the planet benefit?

SPEAKER_00:

I think the most positive tools are first those in science and medicine, where AI tools and the basic technology behind AI can help us quickly advance scientific experimentation, the discovery of new drugs, and potentially new, clean energy sources. There's a lot of potential there. There's also a lot of potential on the human side, because AI provides access to information and companionship for people who may not have it otherwise.

SPEAKER_01:

You apply healthy skepticism to AI, which is evident in the title of your book and in other writings. We first met a few years ago when you and a student collaborated on an opinion piece about how ChatGPT might impact higher education and eventually nearly all sectors of our world.

SPEAKER_00:

Well, that's right. I'm not sure I'm an AI skeptic as much as I think we need to take seriously the dangers involved with AI, and particularly how we humans interact with AI and how it makes us think about ourselves as humans. That's the key part of my book, and the key thing I'm worried about is that if we treat a computer like a human, we may start treating ourselves like computers.

SPEAKER_01:

Well, the Declaration of Independence states that we are endowed by our creator with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness. Does AI enhance or unravel these lofty hopes?

SPEAKER_00:

Well, it's a mixed bag. The possibility of AI giving us longer lives through medicine is very real and very hopeful. At the same time, it has a tremendous environmental cost, and there's a real threat to human life on Earth, and to other species, if we cannot contain that environmental cost. When you talk about liberty, there I'm afraid it's more danger than anything else. Yes, it does liberate us by giving us new access to knowledge, new tools that we can use to do good things. But its ability to mimic humans and to create deepfakes of video, audio, and writing means we are liable to be manipulated by people who have these tools at hand, and to lose sight of what's really going on in our world because we're caught up in a world created by artificial intelligences.

SPEAKER_01:

We've had some great AI conversations leading up to this podcast, but one of the most intriguing observations you've made is how we as humans tend to ascribe humanity to our AI tools and bots, and how that kind of undermines our own existence. Please elaborate.

SPEAKER_00:

It's long been known that humans anthropomorphize. We see something that vaguely acts human, or might be human, and we treat it as if it's human. We ascribe to it characteristics that we would normally ascribe to ourselves. That's a well-known tendency in us, and we all know it. The more realistic this form of false humanity becomes, the more likely we are to anthropomorphize it and treat it as human. And I think the real danger here is not only that it reflects back to us a false view of our own humanity, but that it begins to capture our attention and take us away from real interactions with our fellow humans. It becomes a substitute for real human interaction.

SPEAKER_01:

And you noted that some builders of robots like to make them look more like humans, and others want them to be totally mechanical creatures that may intimidate us.

SPEAKER_00:

Well, that's right. I think a couple of different bets are being made by the makers of robots. One is that we'll be freaked out by them, as Isaac Asimov thought we might be in his book I, Robot, and so they need to look like robots so that we don't ascribe too much humanity to them and feel more comfortable with them. There's something called the uncanny valley: as a robot becomes almost, but not quite, human-like, we grow even more uncertain about what we're dealing with. But there are obviously other robot manufacturers, primarily in Asia, who really want to create a very lifelike, human-like kind of android that we can relate to more humanly. I'm not sure which of these is going to freak us out more, but I have to say that I would prefer my robot to look like a robot, even if it's only got one eye and mechanical fingers.

SPEAKER_01:

Professor, you envision a kind of slippery slope for humankind as our contemporary social settings push us into greater interaction with human-like chatbots?

SPEAKER_00:

Well, that's right. I think the danger is that, as I said earlier, the human-like interactions with these chatbots will lead us to exclude normal social interactions from our repertoire and will lead us away from each other. So I think that's a very key danger. And we may become engaged in communities that are almost entirely fake. For example, we already know that when we encounter someone on Facebook or Instagram, it may be a bot. It may be that we're having conversations with AIs and not our fellow humans. The same thing is true in some of the games being played now: AI bots are game players pretending to be human. So I think that's the big social danger there. And I would say it's enhanced by the fact that we're used to dealing with screens, two-dimensional representations of reality. In a screen age, it's all the easier to be fooled by a bot that would be hard-pressed to fool us in the real 3D physical world. I think this is the key danger to us. I do want to stress, there are possibilities here. There are people who are lonely, and a short cure for loneliness may be a chatbot. But the long cure for loneliness is that we as human beings treat each other as human beings, come into each other's presence, and care for each other.

SPEAKER_01:

Yikes. So what are some pushback mechanisms we can use to counteract this potentially slow descent into AI subjugation?

SPEAKER_00:

Well, I think the main one is that we be humans with humans. There's talk now about third spaces: places away from our work, away from our family, away from our school, where we just meet human beings as human beings and have a chance to interact with each other without all the mediation of technology, or at least without chatbots in the room. Third spaces like that are going to be important. Schools, clubs, churches, sailboat clubs, running clubs. All these are places where we can be with humans and defend our own humanity. But there's another important part of that, too, which is that we need to demystify AI. When we hear a chatbot that sounds incredibly realistic, and there's a new one that is the most realistic I've ever heard, we need to remind ourselves that it's a bunch of microprocessors running algorithmic procedures that generate language on the basis of probabilities. It isn't conscious. It doesn't care about us. It does a great job of mimicking caring about us, but it's not conscious, doesn't care about us, doesn't know we exist. And by the way, we don't have any influence on it. I think this is really important. When we talk to a chatbot, it appears to listen to us and respond to our words. But by the very way it is made and trained, it cannot internalize anything about us. It can change us; we cannot change it at all. It can change our minds; we cannot change its mind. And real human interactions depend on us forming and shaping each other, not just being formed and shaped. When we meet someone who manipulates us but doesn't listen to us, who wants to change us but will not be changed by us and our feelings, we rightly regard that person as a predator or a narcissist. We wouldn't interact with them. So we need to remind ourselves that however useful chatbots are, and they're extremely useful, and I use them all the time, I'm not doing anything for them. They don't care that I exist.
One second after I quit talking, they forget everything I've said, or they store it in some kind of long-term memory to dredge up later. And I'm missing, therefore, something critical about what it means for me to be human, which is to help form another person's humanity even as they form mine.

SPEAKER_01:

So you've mentioned a lot of the myths about AI tools and our interactions with them, but this isn't a myth: I think many of us users are led to believe AI is looking out for us, that it has ethics and a moral compass. Something tells me, like the title of your book, that you might question that.

SPEAKER_00:

Indeed I do. It has no ethic because it has no consciousness. It can't see a bigger picture, which is kind of the essence of ethics. If given a question or a statement, it will generate a set of processes that create a response. That's all it does. If that response happens to be hurtful or negative, it won't even realize it. Now, AI can be trained in such a way that it limits the negativity of its responses, although many AI vendors won't do that. But that's not the same as having an ethic. The classic case of this is an old idea from 20 years ago or so: the problem of what happens if you ask an AI to do something and it goes about doing it, but in order to do what you asked, it begins to kill things or destroy things, because it has no big picture of the consequences of its actions. You tell it to make paperclips. This is Nick Bostrom's classic thought experiment. And it's going to make paperclips as efficiently as possible, even if that means turning humans into paperclips. AI just doesn't have that big picture. It doesn't step outside itself and ask about the consequences of its actions unless it's been trained to do so. It's extraordinarily limited compared to human beings.

SPEAKER_01:

Professor Hunt, we've discussed many AI issues. But I'd like to take this opportunity to invite you to bring up other AI matters you'd like our listeners to be aware of. These can be positives or problematic AI insights.

SPEAKER_00:

What is top of my mind right now is the way in which AI is going to disrupt and reshape our economy. And of course, in doing that, it disrupts and reshapes everything related to the economy, including higher education. I'll give two quick examples of how this is going to happen. One is that although most AI agents, or agentic AIs, these being AIs that can operate somewhat autonomously, are currently being used to help people do their work better, they are getting good enough to replace people. And I don't think there's any question that people who run businesses will replace people with AI; in fact, they've already said they will. Even someone like Dario Amodei of Anthropic, who is one of the most ethical developers of AI, has said they're going to be hiring fewer software engineers and letting older ones move on and retire. The same thing is happening in law. One of the largest law firms in the world has said it's going to hire far fewer junior lawyers because it has agentic AIs that can take their place. We're going to see that happen across the economy at an accelerating pace, and that's going to cause disruption. Typically, new technologies have caused this kind of disruption, and frequently new jobs have emerged. But we've got to be aware that the disruption is going to happen. This is top of mind now because we're already entering troubled economic waters. Increased unemployment, and increasing numbers of people who cannot be employed, is going to be very problematic for us. This is something big for us in higher education that we have to take seriously. Are we preparing people for jobs that will be replaced by AI? I think the answer in some cases is yes. That doesn't mean our students cannot find other ways to use their brilliance and creativity, and we've got a brilliant and creative batch here at SMU.
But it may well be that the jobs they thought they were going to have when they came in as freshmen don't exist when they graduate as seniors. And that's something we have to take seriously. That's an ethical problem for us.

SPEAKER_01:

Professor Hunt, a little AI bird has informed me that you're gearing up to do a podcast series of your own. How many episodes and what kind of topics do you wish to cover?

SPEAKER_00:

Well, yes, I'm going to do a podcast, and it's going to be on robertihunt.com, probably starting in a couple of weeks. I haven't set a number of episodes. What I'm going to try to do, every couple of weeks, is take up the latest news in AI and ask how it specifically addresses or challenges what's happening with us as humans defending our own humanity, and therefore try to keep the focus on: here's something in our world, and we need to know it, understand it, and see what it's doing. But we also need to know ourselves, and as we know ourselves, continue our own quest to be the kind of humans we ought to be.

SPEAKER_01:

A final question. Are enough of us paying attention to this, or are we oblivious?

SPEAKER_00:

I'm not sure that people are oblivious, but not nearly enough are paying attention across the board. In business and industry, where AI has a direct impact, there's a lot of attention being paid. But a lot of people are in areas where AI is influencing their world and they don't know it, so they're not paying attention. And of course, we have a lot of other distractions. But I think we should be paying attention. I'm one of those who believes that in the next 12 to 24 months we're going to see pretty serious disruption of some of our social systems by AI. If I'm wrong, fine. But if I'm not wrong, then we need to start paying attention.

SPEAKER_01:

Professor Hunt, thank you so much for being with us today and sharing your hopes and concerns regarding artificial intelligence. I'd also like to thank my colleague Stephen Fasoro for his technical support, and the SMU Office of Engaged Learning, which makes the SMU Fondren Library Podcasting Studio available to us for our recordings. Until next time.

SPEAKER_00:

Thank you very much, Bob.
