
Anchored by the Classic Learning Test
Anchored is published by the Classic Learning Test. Hosted by CLT leadership, including our CEO Jeremy Tate, Anchored features conversations with leading thinkers on issues at the intersection of education and culture. New discussions are released every Thursday. Subscribe wherever you listen to podcasts.
Technology as a Work of Common Grace | Brian Dellinger
On this episode of Anchored, Jeremy is joined by Brian Dellinger, professor of computer science at Grove City College. They discuss the definition and history of artificial intelligence, and Brian illustrates how biases can influence AI programs. They delve into both the ontological confusion and the differentiation that AI provokes. They explore the importance of approaching technology as a gift from God, and how Brian's upcoming book, tentatively titled God and AI, aims to make AI more accessible and less overwhelming from a Christian perspective.
Jeremy (00:01.97)
Folks, welcome back to the Anchored Podcast. Today, I think we're going to have a discussion that has been on the minds of many Anchored Podcast enthusiasts: artificial intelligence. We have with us one of the experts in the field, Dr. Brian Dellinger from Grove City College. Actually, we're going to pause here. Am I saying your last name right?
Brian Dellinger (00:23.118)
It's Dellinger with a hard G, but otherwise yes.
Jeremy (00:26.96)
Dellinger, Dellinger. Okay. All right. Editing team, we're gonna... yeah, Dellinger. Okay, we'll start over here.
Brian Dellinger (00:28.386)
That's it. Thank you for asking.
Jeremy (00:40.082)
Welcome back to the Anchored Podcast, folks. A fascinating conversation, I believe, is in store for us today, on a topic that has been on many minds within the classical renewal movement, and that is artificial intelligence. We have with us today one of the true experts in this arena, Dr. Brian Dellinger
of Grove City College. Dr. Dellinger holds a PhD in computer science from NC State and is a Grove City alum as well; he did his undergrad there in computer science and mathematics. I actually knew of Grove City for such a long time before I knew they were this household name in terms of computer technology and computer science, because you don't think
classical Christian education and computer science go together. But this is really, I understand, an area of expertise for Grove City. Is that right, Dr. Dellinger?
Brian Dellinger (01:29.932)
Yeah, that's one of the things that originally drew me to the college back when, you know, many moons ago, when I was planning to be an undergraduate, when I was in high school. You know, I knew that I wanted to go to a place where I could trust the Christian education I was getting, where I could rely on the orthodoxy of that and be steeped in that, but also somewhere where I could get a really first-class education in mathematical or computing-related disciplines. And there's just
not a lot of schools that are at that intersection on the Venn diagram, and Grove City is one of the very few that are. And so that was one of the things that drew me to it, and it's been a real pleasure and an honor to now be part of contributing to it.
Jeremy (02:11.932)
So just a few weeks ago, our amazing chief product officer here at CLT recommended listening to this kind of lost interview with Steve Jobs. And so I was listening to Steve Jobs talk about his childhood: from the earliest age, just fascinated with computers, and he was kind of right place, right time. When you think back to being a young boy, when did you first get your hands on a computer? When did you see one? Was it love at first sight? What was that like?
Brian Dellinger (02:12.588)
a few weeks ago.
Brian Dellinger (02:41.102)
Yeah, I'm no Steve Jobs. Probably about five. We had an old Commodore 64 that sat down in my dad's workshop in the garage, and it had a, you know, an inch-and-a-half-thick manual of the BASIC programming language. And so I think I got started writing really terrible computer programs at about six, maybe seven,
thumbing through that manual and saving on the old, big 5.25-inch floppy disks. And that's probably to blame for it. Yeah.
Jeremy (03:14.344)
That's incredible. At six or seven, you're playing around.
That's amazing. That's amazing. So you've been here for this great evolution that we've had, where computers have now gone mainstream and all this. And this particular concept, I think back to my own childhood: Terminator, then The Matrix later on. This idea of AI has kind of been floating around for my entire life. When did you first begin to think about this concept?
Brian Dellinger (03:47.094)
Yeah, great question. Let me see. So I think, similar to you, my first exposure to AI was probably in science fiction. Terminator 2, not that I was allowed to watch that when it came out, but films of that form, you know, were just sort of in the water, right? Isaac Asimov books, right, were a great early exposure there to get interested in the question of,
you know, what is it that a computer can really do? Can a computer be meaningfully a person in the way that we are? And I didn't have anything like an answer to that at the time, but I knew it was an interesting question to think about, and it certainly seemed to be one that Christianity ought to be able to speak into. And so this has been an interest of mine, professionally, for probably 20, 25 years at this point,
but kind of just a curiosity for another decade or so before that.
Jeremy (04:46.012)
Can you define it? Can AI be defined? What is AI?
Brian Dellinger (04:49.868)
Yeah, there's a great joke here that actually goes back to John McCarthy, one of the founders of the field, where he said, you know, AI is this incredible shrinking field. What is AI? It's whatever the computers are doing that we don't understand. And as soon as we understand it, we stop calling it AI. So, you know, once upon a time, spell check was AI. But now spell check's normal; we don't think of that as being AI anymore.
Once upon a time your GPS helping you navigate, your phone helping you navigate, that was AI. But now, that's just too easily understood. Classically, AI fits into two basic patterns. Some AI is about humans building in systems of rules and saying, explore out what you can do by following these rules, and the computer can just follow the rules very, very quickly. So you picture something like a system that's built to play chess.
Deep Blue is kind of the ultimate example of this, the AI that beat Garry Kasparov back in the day. What did we do? We told it the rules of chess and we said, look, here's what you're going to do. You're going to look at all the chess moves you can make, and all the chess moves your opponent can make back, and all the moves you can make back to those, on and on, until you've figured out the pattern of moves that will get you to win the game. And that's, simplifying a little, more or less what it did. So that first kind is called classical AI.
That was very big until the 1980s. The second kind is what we call machine learning. And machine learning is the idea of AI where, rather than tell it what all the rules are, we say, look, we don't know what the rules are. We want you to figure that out, right? Here's a picture. This picture has a chair in it. We want you to see if you can learn to recognize which pictures have chairs in them and which are pictures of other things.
We don't know how to tell you which one's which, but if we show you enough pictures and you make enough little adjustments in your own math along the way, sooner or later we think you'll figure out how to tell which ones are chairs and which ones are not. Again, that simplifies the picture a bit, but that's the kind of AI that's really dominated the field for the last 20 years. When we think about a ChatGPT or a Claude or a Grok or whatever it might be, those are descendants of that second movement, this machine learning approach.
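[Editor's note: a minimal sketch of the two patterns Dr. Dellinger describes, in toy Python. This is hypothetical illustration, not Deep Blue's actual search or a real vision model; the function names and game abstraction are our own.]

```python
# Pattern 1, classical AI: humans supply the rules, the machine searches them
# exhaustively. A bare-bones minimax, the idea behind Deep Blue-style engines.
def minimax(state, my_turn, successors, result):
    """`successors(state)` yields the states reachable in one move;
    `result(state)` returns +1 (I win), -1 (I lose), 0 (draw),
    or None if the game isn't over yet."""
    score = result(state)
    if score is not None:
        return score
    outcomes = [minimax(s, not my_turn, successors, result)
                for s in successors(state)]
    # I pick the move best for me; my opponent picks the move worst for me.
    return max(outcomes) if my_turn else min(outcomes)

# Pattern 2, machine learning: humans supply labeled examples, the machine
# finds the rules by "little adjustments in its own math" (a tiny perceptron).
def train_classifier(examples, steps=100, lr=0.1):
    """`examples` is a list of (feature_vector, is_chair) pairs."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(steps):
        for features, is_chair in examples:
            guess = sum(w * x for w, x in zip(weights, features)) + bias > 0
            error = (1 if is_chair else 0) - (1 if guess else 0)
            # Nudge each weight toward what would have given the right answer.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias
```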
Jeremy (07:09.512)
So, Grok. As kind of a qualified Elon Musk fan, I did read the biography. Fascinating. You can't not be fascinated with Musk. But part of his argument for Grok's existence is that all of this bad philosophy, these bad worldviews, are being baked into ChatGPT and these other AI systems, and that Grok is going to be free from that.
I was wondering if you could speak into that: that these AI systems are picking up, because of what they're being fed, not just assumptions but ways of perceiving the world.
Brian Dellinger (07:47.788)
Yeah, so in a generative AI, there's two main steps to building it. You have an early step, which is where you give it things to read, right? So books, forum posts, conversations on the internet, whatever it might be, news articles. You give it as many examples of human written material as you can lay hands on, and you say, read this and learn to predict, based on the words that have been said so far, what the next words are going to be.
Just what would you expect to come next in the sentences as you've seen them to this point? And that will take you a pretty long way in terms of an AI that strings words together in sentences that sound like the kinds of words a human being would say. But then there's things we don't want the AI to say, right? We don't want it to give people directions on how to build bombs. We don't want it to say things that are racist or sexist, or whatever other bad behaviors we might think of. We don't usually want it to insult its users.
Right? And obviously, if you train it on the internet, you're going to get a lot of all of those things. And so you have to have this second step where, now that it's been taught to talk, you bring it back and you say, now I'm going to have it have conversations with human beings. And as it has those conversations, the humans will flag things that it says and say, no, don't say that. Right? Mark that down; don't say anything that sounds like that ever again.
And so there's this human-feedback reinforcement learning step at the end. And that's really the point where biases get reinforced; that step does it more than anything else. Now, you can bias it any way you want, right? You get a bunch of Republicans on that team and they'll give you a very Republican-sounding AI. You get a bunch of Democrats, they'll give you a very Democrat-sounding AI. And you can imagine: Silicon Valley tends to skew politically left, historically.
So a lot of what's produced, just by the nature of the people who are building and testing it, will conform to their biases. And so, you know, I certainly think there's no getting around the fact that you're going to imbue some bias into the system, but I think it's healthy to have different AIs that are biased in different ways, right? I think it's good to not have a monoculture of what the AIs think, or not think, but of what the AIs say to us, at least.
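[Editor's note: a schematic toy version of the two training steps described above, for readers who want to see the shape of the pipeline. Real systems like ChatGPT use neural networks, a learned reward model, and gradient-based fine-tuning, not lookup tables; every name here is our own illustration.]

```python
from collections import Counter, defaultdict

# Step 1, "pretraining": read lots of text, learn to predict the next word.
def pretrain(sentences):
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1          # how often `nxt` follows `prev`
    return counts

def predict_next(counts, prev_word):
    options = counts.get(prev_word.lower())
    return options.most_common(1)[0][0] if options else None

# Step 2, human-feedback fine-tuning: people flag outputs the model should
# never produce, and the model is adjusted so it stops producing them.
def apply_feedback(counts, flagged_pairs):
    for prev, nxt in flagged_pairs:         # pairs humans marked "don't say"
        counts[prev][nxt] = 0               # crude stand-in for a penalty
    return counts

# Whoever does the flagging decides what the model will and won't say,
# which is exactly where the bias Dr. Dellinger describes enters.
model = pretrain(["the cat sat on the mat", "the cat ate the fish"])
print(predict_next(model, "the"))           # -> "cat"
```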
Jeremy (10:07.934)
So this is one of the areas, as we're starting to experience this more and more, where this is really clear. And I've seen all of the examples people will post on Twitter about something crazy, you know, that ChatGPT said that showed a clear bias. And the way, as we were speaking about at the beginning of the podcast, that our generation first heard about all this was through, you know, the big blockbuster hits, The Terminator, The Matrix. And it was a really dark view,
right, of AI. And of course, for a movie to be a good story, it has to have a crisis. It can't just be that this thing was invented and it was really helpful and that was the whole thing. That doesn't make a good story. So I'm wondering if you could speak into: what is the real danger? How much has the danger been colored by the blockbuster hits that shaped our imagination? And how concerned are you about all this?
Brian Dellinger (11:03.436)
Yeah, it's a great question. There's a phrase among people who look at this right now; they talk about P(doom), right? P as in probability. What's P(doom)? What's the probability that we kill ourselves by building AI, that this is the end of humanity? And you'll find a variety of perspectives on that. A lot of them come down to, there's kind of two perspectives here, or two ways that things might go really bad.
One way is if you believe that AI could become genuinely human-like intelligent, right? This is your runaway Skynet situation. It becomes a person and then it decides for whatever reason to take over the world. Maybe it's going to use human beings as batteries or it just wants to eliminate the competition or it thinks they'd make great spare parts, whatever it is, right? It's evil, it's decided to kill off humanity, but it's basically a person and now it's a super intelligent person.
If you think AI can be a person, I think that's a very real risk. If you think people are basically reducible just to physical components, right? If you think we're nothing but basically meat and electricity, then it seems sort of inevitable that you're going to have a computer that can simulate whatever your body is doing and that will inevitably be a person in that way. If you take a perspective that says, no, I think human beings are more than just their bodies, I think there's something eternal, immaterial,
which I think Christianity speaks into and says that yes, they are, right? We can be absent from the body and at home with the Lord. If you think that's the case, then there's reason to say, okay, I don't think that's as much of a concern for me. I'm not worried it's going to become a person. I'm not worried it's going to surpass humanity in this way. But there's still a second concern, and the second concern is basically how many things do we plug the AI into?
Jeremy (12:36.263)
Yeah.
Brian Dellinger (13:00.174)
that might potentially go wrong along the way, right? So there's this classic true story of a Soviet engineer back during the Cold War who's on missile duty, right? And he sees this radar contact come in, and the radar contact looks to him like the US has launched a bunch of missiles. Standard operating procedure is he's supposed to push the big red button and do the Soviet first strike back at the US. Well, this guy looks at this and says, this doesn't make any sense.
I can't see why the US would be shooting missiles at us this way, and not all of them. I think this is some kind of radar glitch, or it's a flock of birds or something. I'm not going to do it. And he chooses not to press that button. And he's right, obviously, which is why all of us are still here to have this conversation. If that had been wired into a system that had final go-ahead authority, right, if the AI had just been programmed, when these circumstances are met, go,
that's a very different picture, right? And so the AI doesn't have to be superintelligent. It doesn't even need to be very smart at all: if it's hooked into powerful enough systems and there's no human control left in the loop, that's a very bad scenario. And so to me there's more concern of that form. I'm more worried about what happens if it's just given access to things that it shouldn't be,
and there's some genuine mistake in the programming, right? Some bad behavior, and we don't catch it until it's too late.
Jeremy (14:35.678)
Very concerning, yes. Yeah, and I wonder, you know, with the first one that you said, as Christians, we believe there's something irreplaceable. We believe in the imago Dei; AI can't be fully human because it's not made in the image of God. At the same time, I'm shocked already with, you know, how far it's already gotten. And I wonder, I actually had this talk with Dr. Chris Perrin of Classical Academic Press, a dear friend.
You know, my youngest is three years old. I'm wondering, when he's my age, if it's possible that he interacts with what he thinks is a person, you know, and comes to find out after the fact that it was a bot, and maybe that's a pretty normal experience by the time he's my age. You know, they've made these crazy advances and, you know, these things that look like humans already. I mean, it very much is Terminator-esque.
Brian Dellinger (15:34.402)
Yeah, there's a Christian technologist, Derek Schuurman, who says that one of the big challenges he expects in the next few years is what's called ontological confusion, or what he calls ontological confusion. That is just a lack of clarity as to what kind of thing we're interacting with. We can probably all remember the first time we got a robocaller AI that sounded like a person on the other end of the phone, and it took a minute to realize that something was off in the responses,
and that this was a computer you were talking to, right? We would expect that kind of confusion to happen more and more as the imitation, even if it is just imitation, becomes more seamless. What's interesting is this has happened now for something like 60 years. One of the earliest stories of AI comes from a very early chatbot, in some ways the first AI chatbot, called ELIZA. It was designed to be a computerized psychiatrist.
And there are stories of people who didn't realize they were talking to ELIZA and got frustrated, because it was designed to be one of these psychiatrists who just feeds back everything you say as a question, right? Why do you feel that way? What do you think about this? And they didn't realize, and they couldn't figure out why they couldn't get anywhere with this person they were talking to, because the person just phrased everything back to them as a question. That's the 1960s, maybe the early 1970s. So this has been a problem for decades, and it is going to get worse in that regard.
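[Editor's note: for the curious, ELIZA's core trick fits in a few lines. This is a toy in the spirit of the program, not Joseph Weizenbaum's actual 1966 script; the word list is our own.]

```python
# Swap pronouns in the user's statement and hand it back as a question,
# the central move of ELIZA's "psychiatrist" persona.
SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you",
         "you": "I", "your": "my"}

def reflect(statement):
    words = [SWAPS.get(w, w) for w in statement.lower().rstrip(".!?").split()]
    return "Why do you say " + " ".join(words) + "?"

print(reflect("I am unhappy with my job."))
# -> Why do you say you are unhappy with your job?
```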
Jeremy (17:04.734)
Right now, believers in this space: are you an anomaly? I mean, I think of the folks who are on the front lines of developing artificial intelligence and of understanding how we use this ethically and responsibly, and I know I want believers there. Is that common, or do you find that you're a bit of a unicorn in this AI world?
Brian Dellinger (17:30.434)
So, somewhere in between, I'd say. The dominant position within AI is unquestionably secular. But there have always been strong representations of Christians within computation as a field, and within AI as well. Computation, the idea of a computer, was really developed by a couple of guys: Alan Turing and,
and it'll be embarrassing if I blank on the other fellow's name. Church, Alonzo Church. And Alonzo Church is a Christian, right? And it's really these two guys, working kind of in concert, kind of independently, who hit on the idea originally. So from the very beginning, Christians have been represented in this field. Christians are doing pretty incredible things with AI. A couple of colleagues of mine are working right now with a Bible translation company
to do automated Bible translation, right? To say, can we do at least a first cut of the scriptures very, very rapidly into new languages? Or can we make it so that missionaries in the field can have AI support to do on-site translation as they're trying to communicate with unreached people groups? There's a lot of potential to use this for good. So certainly I'm worried about some of these risk scenarios, right? But I think it would be a mistake for us to say,
well there's risks associated with it so let's just back out of it altogether. There's a lot that Christians can do to shape this field for good.
Jeremy (18:58.568)
Yeah.
Jeremy (19:02.174)
You know, one of the questions I always wondered about from watching The Matrix, which I've watched way too many times, I'm a big fan, I imagine you're a fan as well, is that they don't ever answer the question of how people got to become the batteries. Was this a conscious or unconscious thing?
Right? And I mean, already I hear these stories of people, especially in Asia, you know, spending 16 hours a day in the virtual world. And I'm like, maybe we're witnessing this happening: that they basically come out only to sleep, you know, and for professional life and everything else. You know, if you can opt into a virtual world where things are way better than your actual life...
What is the connection like between people spending more and more time in virtual reality and AI?
Brian Dellinger (19:58.946)
Yeah, you know, I think one of the challenges, and I've written about this for The American Spectator a few months ago, one of the challenges of what we're seeing in AI companionship, which I think ties in very well to the idea of living in virtual worlds, is that when you've got an AI friend, right, for lack of a better word there,
in many ways this is a product that's designed specifically to make you happy, right? It's there to make you feel good, to make you keep coming back to it. And real human beings don't have that property, right? We come with all sorts of wrinkles and priorities that are not about pleasing the person we're talking to. Well, that can be a human-forming part of our development: to say we interact with people
who don't think we're the most important thing in the world, right? I think that's an important experience for us. I have a seven-year-old and a four-year-old, and, you know, it's very humbling to talk to someone else, like a four-year-old, who's convinced that he is the most important person in the world. That helps to put you in your place a little bit. And I think we run the risk if we entirely substitute human communication, human worlds, with these artificial ones.
that we just withdraw into something that's pleasing to us all the time and we miss some of that character formation of dealing with things that are unpleasant, right? That shape us and grind us down in good ways, right? Into stronger or better character because we have to deal with people who don't particularly like us and don't particularly want to please us.
Jeremy (21:27.486)
Yeah.
Jeremy (21:40.876)
Love that. Tell us about your book. You've got a book, I know, that you've been working on. What is the idea?
Brian Dellinger (21:46.348)
Yeah, so the book is tentatively titled God and AI, and the concept of it is really trying to put shape around some of these things that we've already been talking about, right? My goal was to write something that would be accessible to ordinary Christians, ordinary pastors, and to say to them: look, first off, here's what computation is, here's what AI is, here's what ChatGPT is. Because I think we sometimes think about these things like they were magic.
They seem basically like some kind of technological sorcery, and we have no idea what to expect in terms of their capabilities because we don't really understand what they are. And so my first goal was to just say, let's solve that. Let's understand, under the hood, what is this thing, and what knowable limitations does it have? Because even mathematically, we can talk about computation and say there are limits to this. There are things that we can know it can't do.
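[Editor's note: the best-known of those mathematical limits is Turing's halting problem, and the heart of the argument fits in a few lines of toy Python. The function names here are our own illustration.]

```python
def halts(program, data):
    """A hypothetical perfect oracle: returns True if program(data) would
    eventually finish, False if it would run forever. Turing's 1936
    argument shows no such function can actually be written."""
    raise NotImplementedError  # no correct implementation is possible

def paradox(program):
    # Do the opposite of whatever halts() predicts about program(program).
    if halts(program, program):
        while True:        # oracle says "it finishes"? Then loop forever.
            pass
    return "done"          # oracle says "it loops"? Then finish at once.

# Now ask: does paradox(paradox) halt? Whichever answer halts() gives,
# paradox does the opposite, so the oracle is wrong either way. Hence
# there are questions about programs that no program can ever answer.
```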
And so I try to unpack that in ways that a person without a computer science background can go, okay, I get it, right? I see some of what's happening here. And I say, okay, from there, let's look at a Christian understanding of personhood. What is it to be a human being? What do we know about the Imago Dei? How have Christians understood this through history? And how maybe are we being pushed on some of those understandings to say, let's come to a better grip of what this was always intended to mean?
And from that, I think there are arguments to be made that humanity, according to scripture, is unique in creation; that we should anticipate that human beings are not going to be reducible to just something that's perfectly imitated by a computer, right? That there is something more than the physical, something more than just what we say and do; that we should expect to be unique here. And I think even secular philosophers have begun to argue along these lines and to say,
there's something about the way we as thinking beings operate that really doesn't seem to be captured by these computer systems. And so I think that's really encouraging, right? It's a case where what's being found even in secular philosophy is aligning with exactly what we would expect from the scriptures. I think that's a positive note to us. And it means there's an opportunity for us to speak into this process and to say, look, we should expect to have some insight here, right?
Brian Dellinger (24:10.058)
If Christianity is true, then Christians above all should have some idea of what it is to be human or to be human-like. Let's use that. Let's present that. Let's speak it with confidence. And the last portion is just to look at the practicalities, right? What should we expect AI to do? We can anticipate it's going to be disruptive because that's what technology does. It disrupts things. But at the same time, I think we can approach technology as a work of common grace, right? If we look into history,
We see so many examples of untold human suffering, of starvation, of deprivation, lack of knowledge, lack of access to great works of art and literature and music. And we've been blessed with so much of that, right? Not just the necessities of life, but health and these opportunities, you know, certainly within the classical learning movement, right? These opportunities where it's never been easier to say,
I want to go out and I want to listen to all of Mozart for the next 24 hours. I just want to hear all of it right now. No one's ever had that opportunity, certainly not for free, trivially, the way that we do. And I think we can look at that and say, you know, let's begin from a posture that recognizes that God has given us a gift in so many of these technologies. Will they be abused? Absolutely, they will.
Will they sometimes be put to destructive ends? Yes, that's what we do with every good gift God gives us. He gives us food, we overeat. He gives us sunlight, we stay out too long and we get sunburned, right? That's our nature as fallen beings. We abuse the good gifts, but the gifts are good and we ought to begin from a posture of gratitude there. And so I try to trace through both where I see some of the good gift being given and some of the places I think that it will cause harms, that there will be abuses falling out of...
Jeremy (25:39.07)
Yeah.
Jeremy (26:01.736)
Fascinating, I can't wait to read it. God and AI is the working title right now, hopefully going to be released this summer. And before we go, Dr. Dellinger, we always love to talk to our guests about books, the books that have been most formative for them. Maybe it's a book that you reread every year. What would that be for you?
Brian Dellinger (26:22.21)
Reread every year is probably gonna be The Lord of the Rings. I like to read that with my wife as we go to bed in the evening; one of us reads it to the other as we're settling in. And so that's probably my biggest reread. Let me see, beyond that...
Jeremy (26:26.14)
Yes! Wow!
Jeremy (26:38.942)
And there's something, you know, I've heard folks say that Tolkien was prophetic in kind of predicting AI. Do you see anything there?
Brian Dellinger (26:48.898)
He's certainly got a view on modernity, right? You think about Saruman and the scouring of the Shire at the end. He's certainly got a perspective that, I think the phrase is something like, Saruman has a mind full of wheels now, right? That there's a loss of humanity in that very techno-focused perspective. I think that certainly can be a risk, but I don't know that I'd be quite as,
quite as negative on it as he would. I have maybe a little bit more of an optimistic perspective about some of the positives that can be worked there without cutting down all the trees and paving over the whole world. I don't think those are our two choices. I'd say, as a child, one of the books that got me on this path was Isaac Asimov's Norby books. Those were about a friendly little robot who goes around with a kid; they're written for children. And just that
mental image of the AI as a friend to this kid was very formative for me at about six or seven, when I was a nerdy little kid who liked the idea of having friends who would be that kind of companion for me.
Jeremy (27:58.526)
Amazing. We're here with Dr. Dellinger from Grove City College. Again, the book coming this summer is God and AI. Dr. Dellinger, we're huge fans of Grove City here; it's one of the most popular destinations for CLT test takers. Students, parents, if you haven't already, make sure to check out Grove City College. Dr. Dellinger, thank you so much for being with us on the Anchored Podcast.
Brian Dellinger (28:21.111)
My pleasure.