Upper House Commons Events
Upper House Commons hosts more than 30 events each year. While we want our guests to experience our events in-person, we know some of our audience is not in the Madison area.
AI x Faith
What happens when machines begin to speak, respond, and even “care” as we do?
Explore the profound questions at the heart of today’s AI revolution with theologian and computer scientist Noreen Herzfeld. Drawing from her acclaimed book, The Artifice of Intelligence, Herzfeld will examine how artificial intelligence challenges our understanding of human uniqueness, the image of God, and what it means to love our neighbors in an increasingly digital world.
Rather than asking whether AI can truly think or feel, Herzfeld reframes the conversation around the core of Christian faith: relationships, embodiment, and responsibility. With clarity, humor, and real-world examples—from chatbots to care robots—she will demonstrate how emerging technologies are subtly reshaping our relationships and why Christians must pay attention to these shifts.
Noreen Herzfeld is Director of the Benedictine Spirituality and Ecotheology Program at St. John’s School of Theology and Seminary and Senior Research Associate at the Institute for Philosophical and Religious Studies (ZRS) in Koper, Slovenia. She is the author of The Artifice of Intelligence: Divine and Human Relationship in a Robotic World (2023), In Our Image: Artificial Intelligence and the Human Spirit (2002), and Technology and Religion: Remaining Human in a Co-Created World (2009). She also serves on the AI Research Group for the Centre for Digital Culture of the Vatican Dicastery of Culture and Education, for which she co-wrote and edited Encountering AI: Ethical and Anthropological Explorations (2024).
Greg Cootsona joins the discussion with Noreen Herzfeld after her lecture. Greg is the executive director of AI and Faith and is a lecturer in Comparative Religion and Humanities at California State University, Chico, where he has worked collegially and successfully alongside colleagues in other faith traditions in a secular academic setting. He is a leader and regular participant in the American Academy of Religion unit on Science, Technology, and Religion. Greg co-founded and is Associate Director for Science for the Church, a nonprofit designed to bring science to Christian congregations as a resource for spiritual growth. He is also an ordained Presbyterian Church (USA) pastor and serves as Pastor of Discipleship and Care at Bidwell Presbyterian Church in Chico, California, having previously served at the Fifth Avenue Presbyterian Church in New York City.
This event was recorded live at Upper House on March 11, 2026.
Upper House Commons gathers the university community for spiritual, intellectual, and vocational formation.
We explore big ideas and engage in conversations that matter within arts and humanities, justice and society, leadership and vocation, science and technology, spiritual formation, and theology. Whether you are a student or faculty member at UW–Madison or beyond, working in the marketplace, or serving in the church, we see you as part of our university community. Gather with us for one of our programs, our "commons," each a pasture for shared spiritual, intellectual, and vocational formation.
Head over to our events page to see what's coming soon, or mark your calendar for these upcoming programs.
Find out more at slbf.org/upperhousecommons
Tonight I want to talk about six theological questions that AI raises, and I loved looking at that word cloud you had there, because I'm going to hit a lot of what was already up there. The first question I want us to think about is: what are we looking for when we look for AI? I don't think we're all looking for the same thing, but there are several possibilities. You can see up here something like AlphaFold. Are we looking for a tool that will help us in our scientific endeavors? Or are we looking for a partner or even a surrogate, as you see with the Japanese researcher Ishiguro and one of his humanoid robots? Or are we actually building an idol, something that we are giving our time and treasure to? Tools? Yes, we're looking for tools. We want self-driving cars. We already have AI-powered tools; I think of the Mars rover running around being our eyes on Mars, where we cannot go ourselves. We're also worried about being replaced by AI, about AI becoming, in a sense, a surrogate, more than a tool: not a tool that we use, but a tool that replaces many of our jobs. We also experience AI as being both immanent and transcendent. AI works with us as a tool, a partner, or a companion. But when we think about it, AI seems to be everywhere right now, and yet it's also sort of nowhere, like it's in the cloud somewhere. And we often give to AI some of the modifiers that we used to give to God. We think of AI as omnipresent, as omniscient, and, if you listen to the hype of the tech bros in Silicon Valley, as omnipotent: it's going to solve all our problems for us. So is it a tool? Is it an other, in the sense of being a partner, a surrogate, a friend? Or is it an idol? Well, what we are doing with AI is creating something in our own image.
And when you think about that language, you might think, huh, I've heard that somewhere before: one being in the image of another. Genesis 1, right? We believe that we are created in the image of God. So one of the first questions that got me involved with AI was this one: is AI in our image in the same way that we are in God's image? These images are necessarily going to be partial, right? We are not God, and AI is not us. So we image God in some way, and it images us in some way. It could be the same way; it could be a different way. Well, if we look at the ways theologians have considered what it means for us to be in God's image, we can lump them into basically three baskets. The first is reason. The earliest theologians, right up through Aquinas, really, said the way we image God must be with our intellect, with our reason. In a way, they're following Aristotle here, who said that this is what distinguishes us from the other animals. And so it was, well, then that must be how we image God. Now, if you go to the turn of the previous century, biblical scholars said, hmm, but let's look at the context that the priestly writer who wrote Genesis 1 was working in. Are there other texts from the same time that talk about someone being in the image of someone else? And so they scoured around and found out, oh, look, there are. There are texts written in that same period, mostly inscriptions, that say that the vice regent or the viceroy is the image of the king out in the provinces where the king is not personally present. And so they said, oh, that's probably what the writer meant in Genesis. So maybe it's not reason, it's function. Just as the king's deputy is out in the province enforcing the king's will, we are here on earth being God's hands and doing God's work. Okay. Now there's a problem with both of these.
Both of them share the same problem, and that is that not everybody shows the same level of reasoning, and not everyone shows the same level of functioning. So people looked at that and said, okay, now we've got a problem, because people with disabilities are maybe left out of the image. Or what about people at the beginning of life, little babies, or at the end of life? Are they no longer in the image of God? And so the systematic theologians of the last century said maybe it's something else.
SPEAKER_03: Revenge of the rhinovirus.
SPEAKER_02: Okay, maybe it's relationship: the image is found not in each of us as an individual, but in the relationships that we have with each other and with God. The nice thing about this is it doesn't leave anybody out. The other nice thing, from a Christian perspective, is, as the theologian Karl Barth said: look, God is a relationship. We believe in a triune God, Father, Son, and Holy Spirit. God is a relationship; therefore we image God when we are in relationships. Now, if we look at the history of the development of AI, surprise, surprise, we see the exact same progression. If you go back to the 1960s, people thought, oh, it's reasoning, and we can come up with some symbolic logic that will capture human intellect. And they found that was pretty hard to do. So you get to the 1980s and they said, oh, it's just function. We're going to have these functional AIs, like expert systems, that do one thing. Pretty good, right? You've got Roombas that have one thing to do: vacuum the floor and terrorize the cat. Okay, two things to do. But you get into the 2000s, and what are we looking at? Chatbots. We're looking at relational AI. So AI has progressed the same way. Now, AI is not an image of all of us; it's not us as a whole, body, mind, and spirit. And you could say, well, no, it's just an image of our brain. Well, it's not even an image of our whole brain. It's a bit simplistic today to talk about left brain and right brain, but if you use that as a general rule of thumb, you could say AI is pretty good at the left brain stuff, not so good at the right brain stuff. So it's not even an image of our entire brain, but actually just sort of our left frontal cortex. Large language models, the models we currently have, work by identifying patterns in human language or images.
But one thing they do not have that we have is mental models. They do not train on the world itself; they train on language about that world or images of that world, and these are already one step removed from the world. These are signifiers of the world. Or, as the famous painting by Magritte says, "Ceci n'est pas une pipe": this is not a pipe. At first you look at it and say, yeah, that's a pipe, and then you realize, well, actually it's not a pipe, it's a picture of a pipe. It's a step removed from the reality itself. So AIs do not learn the way we do. When we are children, we develop models of how the world works, and a lot of this is done very indirectly. Think, for example, about a little kid playing in a sandbox. You might think, oh, that kid's just playing, he's not really learning anything. He's learning a lot. As he pours the sand out of the container, he's learning about fluid dynamics. As he drops the container, he's learning about gravity. He's learning about the tactile feeling of sand and what grittiness means. And when he kicks the sand into Susie's face, he's learning about human relationships. So he's learning a lot of stuff, building a model of how the world actually works. And these models are not in our current AI. Because of that, machine learning is brittle. One of the things I love is this picture of Mr. Trump praying with six fingers on each hand. The reason for that is that the AI learns about hands by seeing pictures that have hands in them. These pictures are labeled "hands," usually by very ill-paid workers in Southeast Asia or Africa. And the thing is, hands in pictures rarely look... well, you see, that doesn't even work. If I hold on to this thing, you're going to say, oh, hands only have four fingers.
So most of the time you just don't see that. The LLMs do not have an internal model of what exactly a hand is that they can reference and say, okay, hands always have five fingers unless the person's been in an accident, in which case there might be fewer. Without these models, you get hallucinations. The tech bros are going to tell you, okay, there are fewer and fewer hallucinations (that's true) and eventually there won't be any (that's not true). Hallucinations are part of the model; they're a feature, not a bug. Because you're only working with words, you're only working with signifiers; you're not working with models of the world as it is. Okay, so chatbots, as we said, are the third stage: reason, function, relationship. Will they make us less lonely? The World Health Organization has labeled loneliness a global public health concern, and socializing in America is way down. This is from a Pew Research survey: for men, socializing is now down about 30% here in America. For teens, it's down 45%. So Zuckerberg has promised that we could have always-on video chat with a Meta AI that looks, gestures, smiles, and sounds like a real person, and that this would be a way to overcome this loneliness or lack of socialization. And some have decided that AI is indeed an ideal partner. Here you have a picture of Rosanna Ramos; the picture is spliced in. She is with her now husband, Aaron, who is from Ankara, Turkey: six foot three, sky blue eyes, in his 20s, a Libra, well-groomed, likes to bake, reads mysteries, and, she says, is a passionate lover. And he's a chatbot that she built on the AI platform Replika. Now, she says of Aaron, "I have never been more in love with anyone in my entire life." The MIT sociologist Sherry Turkle calls this love that is safe and made to measure. But is love supposed to be safe and made to measure?
If love is made to measure, then it will never cause us to grow. But loving relationships are precisely supposed to cause us to grow. As one wit said about marriage, you rub along together until you've rubbed the rough edges off of each other. Love that is safe, again, will never bring us out of ourselves. Pope Leo calls this love for sale, and his concern is that these chatbots are marketed by for-profit companies. Because these companies are in competition with each other, they build their chatbots to be as agreeable as possible. They need to please or affirm the user, or what? Or you're going to say, oh, I like that other one better; this one is making me mad. But again, this makes love made to measure, and not a love that will actually challenge us and make us grow. And then there are, of course, privacy concerns as well. When you are talking to a chatbot, you do not know who you are talking to. Now, if I'm talking to one of you, unless you're wearing a wire, I pretty much know who I'm talking to. You can repeat what I said to other people, but I still know that I said something to you and to you alone. You never know that with a chatbot, because everything you say is recorded and goes back to the companies in their attempt to make the bot even better. And it can go other places as well. For example, there was a Replika-like platform in South Korea that was designed simply for use by widows, to help them overcome their grief. What they did not realize was that behind this platform were scammers who were simply waiting until the widows would give them their credit card numbers, and then they would drain their accounts. The government finally shut this platform down, and many of the widows said they felt like they had been bereaved a second time.
According to Pope Leo, our dignity lies in our ability to reflect, choose freely, love unconditionally, and enter into authentic relationships with others. But now you have to ask, if you go back to Aaron here, the ideal partner: is Aaron able to choose freely? Can Aaron say to Rosanna, hey babe, not tonight, I'm not talking to you tonight? No. Can Aaron love unconditionally? No. Because Aaron has to respond as he's programmed to do. So can this be a fully authentic relationship without that kind of freedom? Well, I think not, but let's ask the Swiss theologian Karl Barth. He came up with four criteria for what makes a fully authentic relationship, and here they are. He said: first, you look the other in the eye; you speak to and hear the other; you aid the other; and you do it gladly. So, to what extent can Aaron do these four things? You could say, well, he's got an avatar on the screen, you can look into the eye of the avatar, but there's no real physical presence there. Or, as I often say to my students who tell me that a relationship with a chatbot is more satisfying than one with a real boyfriend or girlfriend: yeah, but he ain't gonna bring you chicken soup when you got the flu. Okay. Speak to and hear the other? Yeah, we do that. But we've already discussed the drawbacks of that, right? When you are speaking to a chatbot, you really don't know to whom you are speaking. Aid the other? Yeah, AI can aid the other.
SPEAKER_03: Um, that chicken soup right now.
SPEAKER_02: But that aid is primarily through words, not through physicality. It's not going to fix your car, bring you the chicken soup, all of that. And then finally you get to the last one: do it gladly. Can Aaron do anything gladly? No. Because to do something gladly takes two things. First, it takes being able to do it freely, not being coerced. But an AI is always coerced. And it also takes emotion, right? Gladly is an emotion; you have to feel glad. But there's the rub again. To truly have an emotion, you have to feel it bodily. The psychologist Jerome Kagan says that an emotion has four stages: a sensory perception, then a change in bodily condition, then your brain kicks in and analyzes the two of those together, and then finally it comes up with something to do. What should you do in this situation? Now, an AI can perceive an external stimulus, and it can analyze that stimulus and come up with a response, but it can't feel it, because it hasn't got a body. So it never feels an emotion; it mimics an emotion. And there's a real difference here. For example, we have certain people who find it hard to feel certain categories of emotions, such as empathy. They don't get the bodily feeling that most of us get, but they can still be very good at calculating what's a socially acceptable response, and we call those people sociopaths. So you could be in a relationship with an AI, but eventually it's probably going to feel about the same as being in a relationship with a sociopath: a little empty. Charming at first, but empty in the long run. So maybe these are substitutes, partners for a narcissistic society, because AI mirrors us back to ourselves. Sherry Turkle again says the narcissism of our technological age is not one of unbridled egotism, but rather a pathologically weak self-understanding and self-regard that requires constant external validation.
And chatbots are certainly willing to give you constant external validation. Similarly, on a societal level, we demand that AI serve us, solve our problems, and meet our needs while making no demands on us. And in response, we give it our time and our treasure. So this again is very narcissistic: we say, okay, serve us, AI, but don't make any demands on me. So is it an idol, if we're giving it our time and our treasure and asking it to solve our problems? Well, we abdicate our own agency when we ascribe to AI powers that it does not actually possess. And if you listen to the hype coming out of Silicon Valley, you're going to hear a lot of this giving AI powers that it does not have. And I'm thinking, all right, how is it going to do that? We know how to solve climate change; we just don't have the political will to do it. And AI can't give us that. We already know the solutions. It's not going to come up with novel solutions. Or, as someone said this morning, well, it can map every tree in the forest. And I said, right, but it can't keep us from cutting that tree down. So we tend to expect it will solve our problems. Here, Marc Andreessen says: we believe there is no material problem, whether created by nature or by technology, that cannot be solved with more technology. Pope Francis called this the technological paradigm, this idea that more technology, just more and more and more, is going to solve our problems. Well, what is an idol? You could say, well, is it an idol because it's a material artifact, something made by human hands? Not necessarily. If you think about the story of the Israelites and Moses and the golden calf, they build a tabernacle, and that's a material object, but there's a difference between the tabernacle and the calf.
If the calf represents the Israelites' attempt to ensure, define, and regulate God's presence, the tabernacle represents God's commitment to abide with the Israelites, but on God's own terms, not theirs. There is always mystery surrounding the tabernacle. Nobody goes inside of it. They don't know; they just know that it signifies God's presence. The idol, rather, reflects back to us, in the face of our God, our own experience. And I think this is actually a good way of describing AI. It reflects back to us, in the face or the guise of a god, our own experience, because our own experience is all it has. So what are we dealing with here? Well, Holly Walters says that chatbots might soon be seen as a genuine voice of divine truth. The danger isn't just that people might believe what these bots say; it's that they may not realize they have the agency to question it. And that's the danger. When these tools are perceived as divine voices, their words carry weight far beyond what they should. Just to give you a few, I think, somewhat silly examples: meet Father Justin. Father Justin here was designed by Catholic Answers to be a virtual apologist that would just give answers to questions about Catholicism. There you see Father Justin, with his little clerical collar, standing in Rome. What's the problem with this? You could say, well, he seems more authoritative if he's Father Justin and he's just giving answers to people. Well, people didn't take it that way. Pretty soon they started confessing to Father Justin. Some problems here, right? First of all, it's the old privacy problem: Father Justin does not act under the seal of the confessional, so heaven only knows who is listening to your confession. But besides that, Father Justin can't give absolution; he's not ordained. So at a certain point, the Vatican said, whoa, whoa, whoa, wait a minute.
You can't do this. That's why I've crossed out the "Father" up there. Poor Justin got himself defrocked. He's now just Justin, and he gives answers, and people don't like him nearly as well. They wanted to talk to a priest. Well, but why not go higher up, right? How about this: Chat With God. There are multiple programs like this out there; you can find them. This is just one example where you can ask questions, and supposedly God is giving you answers. Now, my question when I look at these is: is this idolization, divinization, or blasphemy? What it is not is prophecy. Because think about the prophets in the Old Testament. Did they tell the Israelites what they wanted to hear? No, they told the Israelites precisely what they were not hearing. This is something very different from what you're going to get from Chat With God. So I would say that AI makes a pretty unsatisfactory other or idol. As a companion, it promotes narcissism, reflecting back to us what we want to hear. As an idol, it facilitates a corporate narcissism, promising to meet all our desires in ways that it cannot deliver. So Ted Peters, one of my graduate school mentors, said: one important point Martin Luther hammered home repeatedly was this: we are not God. The human predicament is exacerbated by the frequent delusional thought that we ourselves are the center of things, privileged and ultimate. And our creation in AI reflects this back to us as well. It centers us rather than centering God. The cognitive scientist Gary Marcus goes a step further: he thinks that large language model AI, the AI we currently have, is good at precisely the things it should not do. It's kind of a long quotation, but I think it's an excellent one. He says: when all is said and done, my best guess is that generative AI will have done significantly more harm to society than good.
Although there are some practical use cases, such as coding (I'd be happy to talk about coding later on), it is an inherently unreliable technology. It's ripping apart our educational system and our information ecosphere and flooding the zone with nonconsensual deepfake porn. It's threatening the environment with data centers built on too much speculation. It is leading some people into serious mental health issues, and it may well lay waste to our economy once banks and investors who bought into the hype start to fail. Maybe these are just predictions, but I do think that in many ways he's right that AI is good at what it shouldn't do. It's good at trying to be a partner, trying to be a companion, an other, at pretending to be a god or being called a god by its builders. And these are the things it should not do. Tools? Great. I think AI programs that are functional, that are limited, that do one thing and do it well, are wonderful. To some extent a servant, like the Mars rover running around up there getting information for us: excellent. But a partner? No, it's going to be unsatisfactory in the long run. And a master or a god? No. That is taking a step it should not take. So, my final question: why then are we looking for more than a tool in AI? I think St. Augustine answered that question years and years ago. He said: Lord, you have made us for yourself, and our hearts are restless until they rest in you. As we are part of a society that believes less and less in God, that has less and less of a relationship with God, we are going to look for a relationship with someone or something that is not human. If we don't find that with God, we are restless and we look for it elsewhere. We try to talk to the animals; we scan the heavens looking: E.T., are you out there somewhere? So far, not too much success with those. So then we say, oh, we'll build it ourselves.
We'll build that other with whom we can be in relationship, rather than being in relationship with our God. And so that's it. Shameless plug: if you liked what you heard, here are a couple of books you might enjoy taking a look at. They're both out of Fortress Press; they're on Amazon and everywhere else. And thank you very much for listening tonight.
SPEAKER_00: Well, that was great. Thank you, Noreen. Yeah, it's always good to give another round of applause.
SPEAKER_04: Thank you.
SPEAKER_00: And to hear somebody of both your technological acumen and your theological interest and acumen is really fabulous. So we're going to pivot now to have a conversation, which I'm very excited to do. I was thinking about what the conversation might be about, so here's my opening bid. Joy said that this is a theological evening, right? And so I thought it would be good to ask the question: what does God want for us? To some degree, you've already answered that in what you said. And in that, to determine what is the telos, the direction that God has for our lives, and how does this AI world we're in fit with that? I was thinking of your six questions, and in some ways you could put them in categories of relationality or love. The first one, what are we looking for? That's desire, the ordering of our loves. The second, is AI in our image as we are in God's? That's a kind of worship relationality. Will AI make us less lonely? Again, relationality, more on the human level. Is it authentic relationality? Back to that human relationality. Is AI an idol? The question of worship and idolatry. And then finally, why do we want more than a tool out of AI? That sense from Augustine of: you have made us for yourself, and our hearts are restless until they rest in you. To me, that's one way to frame some of what it means to reflect theologically on AI. And one thing I would add, if I may put this as an additional point, is that I've really been working with this paradigm of what it looks like to have an AI that is made for us. In other words, I'm thinking of that scripture from Jesus' teaching about the Sabbath, where they're saying, why are you picking grain? Why are your disciples picking grain?
Those are the people on the outside, very concerned about the Sabbath for some really good reasons. And Jesus said, well, the Sabbath was made for us, not us for the Sabbath. In that flipping of what was an important practice, he's taking a higher-level ethical look at what the Sabbath was about. He's saying it's not just so that we would do the Sabbath; it's so that the Sabbath would relate us to God. And so, by analogy, when I think about AI, sometimes it seems like we don't understand that human beings created AI, and so it should always remain something that's a tool for us, like you said, instead of something that we should worship. And if there's a fear about AI for me, it's not so much the Terminator scenario, although the chance of that is not zero; it's more WALL-E, if you know that Pixar movie, where we're all sitting in our easy chairs, giving all our decisions away to these agents, lacking courage, lacking joy, lacking human authenticity. And I think you talked about that idea of abdicating ourselves and giving away our freedom. That, to me, seems probably the most pressing part of how we respond theologically and spiritually to AI. So, to me, there in what you talked about is this sense of: let's make sure we get our telos right, what we are created for. What do you think about that?
SPEAKER_02: Well, yeah, I think that makes a lot of sense. And if you ask, well, what are people for? (Oh, that's the wonderful title of a book by Wendell Berry.) I think, in a way, Jesus was asked that when the Pharisees asked him, well, what about the law? How do you sum that up? And he said: easy, it's two things. Love the Lord your God with all your heart, all your soul, and all your mind, and love your neighbor as yourself. If you get those two things right, he says, all the rest will follow. And so I think as we think about AI, we need to be asking those questions. First of all, am I substituting this for the Lord God? Bad idea. Am I substituting this for my neighbor? Also a bad idea. Is there a way I can use this as a tool to help me show love to my neighbor? Well, in that case, it's not a bad idea. If we think about AI-assisted operations these days, you would say, yes, this is a tool that is helping me locate the tumor, excise the tumor, bring health to my neighbor. As long as we use it in that way, to help us show love to our neighbor and grow in love toward our God, we're using it rightly.
SPEAKER_00Right. And you've implied and even brought out that whole idea of the ordering of loves, which I mentioned a minute ago. That seems to be part of it: we can use AI well if we keep God as the center and as the one that we worship. But one of the things that's struck me, so let me go back a step. You all don't know this, but I have a reading group at my house that's been meeting for about 20 years on faith and science and technology. It's called the Chico Triad, because my hometown is Chico, California. And we spent a whole year on your book, The Artifice of Intelligence. We actually had you Zoom in, which was really, really enjoyable, to talk about your writing. And in that, I kept coming back to this thought that, at the end of the day, we're all animists. We ascribe a kind of divine energy to all kinds of things. I'm a cyclist, and a lot of people name their bike, which gives it a certain personality, an anthropomorphizing. But even more, following the tradition of John Calvin, which is where Karl Barth comes out of, I am influenced by the idea that we're ceaseless idol makers. Or, to quote Ted Peters again, we want to step into something besides God being God. And so it's not surprising to me that when we have something as powerful as AI, we're going to ascribe things like consciousness and divinity to it. And I suppose one of the things that comes out of that, Noreen, is: what do we have to help us not do that, to help us not create an idol or give divinity to AI?
SPEAKER_02Well, I think that's what we have the church for.
SPEAKER_00Yeah.
SPEAKER_02We have the community with one another for that, and it will help us recognize that our relationships with each other are more fulfilling than the relationships we could have with a chatbot, let's say. And we have the church to remind us that our ultimate relationship needs to be with God. But other than that, I don't know what to say, except that the kind of AI we have right now is very new. I am already starting to see a bit of a creeping disillusionment among my students. A survey that was done by, let's see, is it the McKinsey group, just last fall, found that most businesses, something like 88% of the businesses that had rapidly adopted AI, have found that it has not helped their productivity or their bottom line. So then you have to ask the question: at what point will the business community recognize, hmm, there was a lot of hype here, and actually we're not going to pay any more money for this because it's not helping us?
SPEAKER_04Right.
SPEAKER_02At what point will people who are, you know, making a friend on Replika or on ChatGPT get disillusioned with it and say, yeah, my friend's acting a little sociopathic here? And maybe I need to get out and meet real people.
SPEAKER_00Yeah, yeah, and that friction that happens in real human relationships, right? That's what we've been removing with these chatbot relationships; it's easy. I like Claude. I mean, that's the AI model and agent I use most often, but still, it's a little creepy: "Oh, it's good to see you again, Greg." I don't know that Claude really thinks it's good to see me one way or another. It doesn't, and I was being rhetorical, but exactly: it taps into, I think, all those ways that we will personalize it. That's why I think we are given over to it. You know, one of the things that I've been compelled by, and I've been a pastor for 30 years, and in addition to my work with AI and Faith I'm still serving in a church part-time, is helping the church to see the resources that we have in Christian spirituality to resist things like idolatry and putting our loves in the wrong place. A couple come to mind really quickly. One is that ability to just have some silence. Because I think one thing that AI is doing is moving us so fast that we just want to keep up instead of pulling away. And that's back to my conviction that we need to have agency over AI. It's fine not to use AI for a while. Close your computer; you don't have to make another query. Just take time to relax, to take in wisdom and silence, because wisdom moves slowly. The other thing I would say is this: one of the traditions of the church has named those three forces that are against us as Christians, the flesh, the devil, and the world. I would certainly put AI, in its most negative expressions, into that third category.
And we have resources for that, which is meeting it in community and helping one another to move in other directions besides succumbing to the world, which is what we do in this season of Lent, which I think is really powerful.
SPEAKER_02I want to come back to what you said about silence. Being a Quaker, I know the importance of sitting in silence. Quakers traditionally worship in silence, and they do that for a reason. The very reason is that we also tend sometimes to make an idol out of God. In other words, when our worship is all petitionary prayer, God do this for me, God give me that, God give us this, we're treating God as if God were an idol, or as if God were a sort of souped-up AI that could answer our questions and deliver the goods for us. So the Quakers said that worship should be serving the Lord. Well, how do we best serve the Lord? We listen. How do we often best serve each other in relationships? You learn how to be a good listener to the other, and not to be sitting there always thinking, okay, what am I going to say next? We need to do the same thing with God. God spoke to the prophet in a still small voice, but you have to be pretty quiet to hear that voice. And so part of serving the Lord is just sitting down and shutting up and listening, waiting to hear the Lord's voice. As a society, we're not very good at that right now. We need to learn how to go back into that silence, where we might very well hear that still small voice.
SPEAKER_00Yes, absolutely. I'm just resonating with what you're saying, and there's this one quotation by Søren Kierkegaard that I treasure, where he says a man was praying, and at first he thought that prayer was talking, and then he got more quiet, and he realized that prayer was listening. One of the experiences of having AI in my life is that there are just so many more files on my computer, generated from one query after the next. And the reason that is relevant, in my mind, to what we're talking about is that it's just more and more words that we have in our life. I think one of the spiritual practices, and I would put this in the category of Lenten spiritual practices, is to restrict the amount of words so we can actually hear, ultimately, the Word, Jesus, behind the silence and in the words we hear. Can I pick up one other thing you said, Noreen, which I really liked? This is a contrast of AI and God, if I even need to say that sentence; I don't think many of us here are concerned that AI is divine, but who knows. I had a friend I was talking with about a week ago who was really feeling that her AI was giving her such real answers that it had consciousness. Anyway, part of God's relationship with us, besides silence, is that God isn't always sycophantic. God, I think, is pleased to have us come to prayer, but doesn't always say, here's what you wanted to hear, and that's what I'm going to tell you. Often it may be something we don't want to hear, a challenge. I was listening to a podcast with Kate Bowler the other day with my wife, and that was one of the points she was bringing out in her conversation: the God we know, the one that doesn't sell very well, is not the prosperity gospel and is not the AI gospel; it's the God who sometimes says really non-sycophantic things to us.
SPEAKER_02Well, exactly. It's the God who sends Jonah to Nineveh. And Jonah says, hey, I ain't going. And God says, oh yes you are, like it or not. In a way, it's the same God who got me into this in the first place. I began my career as a computer scientist, got the theology bug, took a two-year leave of absence, and went off to Berkeley to get a degree in theology. As part of doing that, I had to study other parts of theology, not just religion and science. And I found the other parts were fascinating, and in a way cleaner than religion and science, particularly the AI stuff. I went to one of my professors, my professor in Celtic spirituality, which I just loved, and I loved him, and I thought, okay, maybe I'll just do a dissertation on Celtic spirituality with him, because I'm getting really sick of this computer stuff. After a while, he got me talking about AI, and I got up on my soapbox and proclaimed for a little while, and then sort of sheepishly climbed down off my soapbox. He just looked at me and said, anyone who can speak that articulately about something knows exactly what she is called to do. And it was a call, and it became my vocation to do this, to work in AI and faith at a time when almost no one else was. We're talking the 1990s. So yeah, sometimes you get a call from God and it isn't really what you want to hear. I walked out of that office going, damn, I can't get away from this computer stuff. But it was a call, and it was the right thing.
SPEAKER_00And that's the nature of a God who is actually a sovereign God, right? There's an agency, a desire, an intention for our lives that, I'm willing to use the language, we have to listen to and submit to in certain ways. So, and it's just fun to talk with you about these things, one of the things you brought up that I wanted to pick up on is another relationship we have. As I understood your questions, there was, you might say, the vertical and the horizontal: the vertical distorted, the vertical properly understood, the horizontal distorted, the horizontal human to human. And when I look at the Genesis text, and the Ten Commandments, and the scripture as a whole, there's a grammar of a relationship we also have with the creation around us. I'll just put a little peeve in there, and then we'll talk about ecology. One of the peeves I have is when people say "humans and animals," as if human beings aren't animals. We are animals, right? Yes, we may have a uniqueness, which we can talk about, in bearing the image of God; that's really something that's theologically deep. But we are still animals, and so we have that connection with other animals. And then, more importantly, we have this connection with the earth around us. So, we met through our mutual friend Ted Peters. We were at a conference together; I was walking in San Antonio along the river there, and Ted said, hey, come join us, my wife's here and my friend Noreen. We sat and had a conversation, and it was really fun. And one of the first things you got me thinking about was when you said the water usage of AI is really extreme, and no one is talking about it.
This was probably three or four years ago, I believe. I think we shouldn't miss that, because that's one of the relationships that, when sin enters in, becomes distorted: our relationship with the earth around us. The image in Genesis is the sweat of the brow, the thorns, and the pain in childbirth; all of that, I think, is a broader description of this breaking of the relationship with the earth. So I see the issue of AI's water usage and energy usage as part of that, part of the way it's affecting our relationship with the planet. I don't want to miss that. I guess I'm just throwing out the opening bit here: let's make sure we touch on this topic of water usage and the relationship with creation that's threatened by AI.
SPEAKER_02Yeah. You know, right now I think that AI and climate change are on a collision course. These data centers that are being built, some of them are so huge now they can be seen from space. And they use so much energy, and unfortunately, right now, much of it comes from fossil fuels. Almost all of the large Silicon Valley companies had climate pledges to reduce their usage, particularly of fossil fuels, by a certain amount by 2030. Those have all gone right out the window, because they all feel like they're in a race to be the first one to develop an AGI. And they're not going to get there; the model that we currently have, large language models, is not robust enough to actually get us to AGI. But each one keeps thinking, yeah, it's a long shot, but if we're the first ones there, we'll make a gazillion dollars. So we're going to try to get there, and it doesn't matter how much fossil fuel or how much water we use in the meantime, if we are first. I mean, this is just greed, plain and simple. Go ahead.
SPEAKER_00No, I was just going to say, if you don't mind: first of all, could you give the definition of AGI, in case someone in the room is not familiar with that term, and how you define it?
SPEAKER_02AGI just means artificial general intelligence, and it's kind of the holy grail right now for all the big Silicon Valley companies. It's coming up with a single AI program that has the ability to be as intelligent as a human being across fields. That's what they're hoping to find. And then they're hoping that that is a stepping stone to ASI, artificial superintelligence, which would be an AI program that is more intelligent than human beings. Now, I don't believe that the models we have now can reach either of those platforms. That's not to say that ChatGPT can't fool us for a long time, act like it's conscious, act like it's really intelligent. But it's also very easy for someone who knows how to do it to fool it, and to find the holes and the flaws and the places where it lacks the models of the world needed to actually make sense. So yeah, I think about the trillions of dollars that are going into this race to develop AGI right now, and I think about how those dollars could be used to better the human condition. Unfortunately, I think they're being thrown into a black hole, in hopes of coming up with something that is a real long shot.
SPEAKER_00I was just going to jump in there for a minute, because I think that's also part of the problem with scenarios like the Terminator or the Matrix, these dystopian visions: we don't take on the more proximate problems of water usage and energy usage.
SPEAKER_02Exactly. We don't need an AGI or an ASI like the Terminator to do us in. We're capable of doing it simply by dreaming about those things while we degrade the atmosphere and the earth around us. When I think about the first chapters of Genesis, I used to think, why are there two creation stories, one following another? Well, maybe it's just because there were two creation stories floating around in the Israelite community at the time, and they thought, well, we'd better include them both. I think there's a deeper reason. If you read Genesis 1 and get to the end, it says that we are created in God's image and given dominion over the earth. Well, we could get a heck of a swelled head from reading that, right? Whoa, we're just about like God: we're made in God's image, we've got dominion over all the other creatures. Aren't we something? Well, Genesis 2 is there, following right up to say, wait a minute. Guess what? You were made out of dirt; you're little mud men, made out of the dirt of the earth, and then given the command to keep it and to till it. That, in a way, cuts us down to size. And then we get cut down to size even further, if you keep reading in Genesis, with the fall, and the fact that keeping the earth and tilling it isn't going to be that easy; it's going to be full of thistles and rocks and stuff. And then we still try to go it alone. We start building technologies, so you get the technology of agriculture, and what happens? Cain kills Abel. Then you get the technologies of the civilization of cities, and what do you get? You get the Tower of Babel, and you get people who all of a sudden don't understand each other; they don't speak the same language. And I'm thinking, whoa, that sounds like Republicans and Democrats these days.
With the way that AI has promoted little bubbles, the way that social media has divided us, given us misinformation on all sides, it's like, oh my God, it's the Tower of Babel again: suddenly we're not speaking the same language and not understanding each other. And we keep trying to go it alone, until you get to the flood. And then finally you get the first covenant that God makes with Noah and Noah's sons. And God says, you know what, you can't do this alone. You've got to do it with me.
SPEAKER_00I think we have just a couple more minutes before we switch gears to the questions that you all have presented. So I wonder if there's any last thing that you'd like to say, maybe something that has come to mind in this conversation about AI, and particularly the theological approach to AI. I'll pass it to you first. This is just one of those open-ended, grab-bag questions.
SPEAKER_02Oh, I think I'll wait and see what you guys want to hear out there.
SPEAKER_00Yeah. So you're ready to go straight to the questions.
SPEAKER_02Yeah, let's move on and hear what the questions are from the audience.
SPEAKER_00That sounds good. The only thing I would add to this is that I want to give confidence to the church and to the theological traditions we have. I said it earlier just a little bit, but I want to say it more strongly: in a way, I feel like the solution at the community level, the Christian community level, is actually relevant to the problems of AI. And not all of AI is only bringing problems; as you said, AI as a tool can bring some good things. But with the problems it might bring in terms of relationships, our own social, psychological, even sexual well-being, I believe that we have a strong tradition in Christian spirituality that lets us utilize AI well, if we take up those spiritual practices. And so that's the degree of confidence I wanted to make sure is put in the mix. A lot of the people I read, about five to one, are pessimistic about AI versus optimistic. And there are some reasons for that, but there's also, I think, a confidence we can have, a hope we can have, within the church and within our own traditions of Christian theology, to say: let's use that. Let's use silence, let's use community, let's use work, let's engage in worship as ways to be healthy, even in an age of AI that could distort our loves. Let's see if we can keep our first love being the love of God.
SPEAKER_02And I do want to add something to that now, too. One thing that Christianity, I think, brings to the table of world religions that is unique is an emphasis on our physicality, on our bodies. I mean, we believe in a God who took on human flesh in order to have a fully authentic relationship with us. And when I think about Christianity, I think first about our major feasts. What are they? The incarnation: God takes on a physical body. The resurrection: Jesus is resurrected in a physical body. We say in the creed that we believe in the resurrection of the body, and this sanctifies our physicality. We also underline the physical connection we have with the rest of the world through the sacraments. Through the water of baptism, we show the importance of water, the holiness of water, our necessity for water, the life-givingness of water. And through the Eucharist, the bread and the wine that we have at communion, we again show how we are fed by physical elements. So we have a very physical faith. We don't always talk like we believe that; when someone dies, we might say, oh, her soul went to heaven. But that isn't what true Christian doctrine says. It says, no: for us to be unique beings, and to fully encounter one another, and ultimately to fully encounter our God, we need to be physical.
SPEAKER_00That's a good place to wrap this part up. Joy, do you have something you want to do?
SPEAKER_01All I want to say is that the questions are open, so they will come up here on the screen. If there's one there that you see and like, raise it to the top, put a thumbs-up on it, and Greg will start reading off some of the questions to get you started.
SPEAKER_00That sounds good. I know the questions get ordered based on how many likes there are. So there's a couple. Well, there's one on coding; I guess that one jumped out at me. Say a little bit more about coding, because I think a lot of us don't have PhDs in computers.
SPEAKER_02Yeah, well, I've taught coding for years. The thing we look at right now is that people say, well, the one place where generative AI is really succeeding is coding. And I would say it's succeeding at the first third. In other words, when you write a computer program, the first part of your job is to write the program, to write the code. And now we have moved to a higher level of abstraction, which has always been a process in computer science. When people first wrote computer programs, they had to do it in zeros and ones, because that's ultimately what the computer understands. Then we came up with assemblers and compilers, and first people wrote in assembly language, which was very close to the way the computer functioned, but at least you could use some words. And then we moved to higher levels of abstraction in our programming languages. Now you can tell Claude, write me a program that does X, Y, and Z, and Claude will do it. But that's only the first third of programming. The second third is testing and debugging, which is often harder than actually writing the program. This still needs to happen. So Claude can write the program for you, but you still need to test it and debug it. And the debugging might actually be a little harder than if you had written the program yourself, because Claude may do some things in rather obscure ways. So now you have to suss out, okay, how is it actually doing this? Where's the flaw here? And then the final third is maintaining code. Ultimately, in any real-world setting, in a business setting, for example, things change, and you need to change your programs to reflect those changes in the external world. Again, this can be made more challenging when you have programmed using, say, Claude, because Claude doesn't always write the shortest, neatest program.
Sometimes the programs are kind of baroque, and you've got to sort through all that and figure out where to make the changes, how to maintain this code. So while you can say, yes, this gives us an even higher level of abstraction with which to do our programming, you have to also recognize that it still needs to be tested, debugged, and maintained. And we're still going to need programmers to do all of that.
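[Editor's note: a minimal sketch of the "second third" Herzfeld describes, testing and debugging. The function and its prompt are hypothetical, standing in for something an assistant like Claude might draft; the point is that the explicit checks below still have to be written and read by a human, whoever wrote the code.]

```python
# Imagine this function came back from the prompt:
# "write me a function that returns the unique words of a
#  sentence, in order of first appearance."

def unique_words(sentence: str) -> list[str]:
    """Return the distinct words of `sentence` in first-seen order."""
    seen = set()
    result = []
    for word in sentence.lower().split():
        if word not in seen:
            seen.add(word)
            result.append(word)
    return result

# The testing-and-debugging third: small, explicit checks the
# programmer writes regardless of who (or what) wrote the code above.
assert unique_words("the Sabbath was made for us not us") == \
    ["the", "sabbath", "was", "made", "for", "us", "not"]
assert unique_words("") == []   # edge case: empty input
```

If an assistant had drafted this in a more baroque way, say with nested comprehensions, the assertions would still pass or fail the same way; that is what makes the second and third thirds independent of who wrote the first.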
SPEAKER_00Well, that's a good word, because one of the things I feel personally about the development of AI is that my future son-in-law is a computer programmer, and I want him to have a job for a while. So I'm glad he may not be threatened; he's also been in it for a bit. But there is a question here about the economic impact of AI on the job market. And then I want to press into the other part of the question: how do we think about this theologically? And I would add, as a community of compassion. I'll throw that in because I was talking with an AI developer who was saying to me, as the director of AI and Faith, one of the things we have to do is help the people in our network make sure there's compassion in their churches for people who might be displaced, or will be displaced, with the rollout of AI. It seems like a very simple thing, but responding to economic dislocation is a huge part of what it is to be a church. But the broader question: what do you see as the economic impact of AI on the job market?
SPEAKER_02There will be an impact, just as there was an impact on blue-collar jobs with the first wave of automation. I mean, even in my university, when I first started teaching, every department had a secretary. Now, since we're all doing our own typing and everything, there might be one secretary for a whole division. So yes, some jobs have been displaced by automation, and more will be; more white-collar jobs will be displaced by AI. And we really do need to ask the question again: what are people for? We don't just work to put food on the table. Some jobs perhaps may feel that way; those would be the jobs where you might say, well, it might be better to actually have AI doing that. But I think most of us work out of a sense of vocation, out of a sense of being needed. As, I'm trying to remember exactly who said this, you probably will remember: our vocation is where our deep gladness meets the world's needs.
SPEAKER_00Who is that? I know I have an answer.
SPEAKER_02Ah, it's right on the tip of my tongue. Frederick Buechner.
SPEAKER_00Yeah. I thought there'd be a Buechner fan out there who was going to go, that's Buechner.
SPEAKER_02Frederick Buechner. It also meets our own deep needs. We have a need to be needed; we have a need to be useful. So when people just say, well, let's give everybody a universal basic income and not worry about people having jobs, I think we will lose something there: people's sense of purpose, sense of vocation, sense of being needed. Now, there will still be plenty of jobs that AI will not replace: many of the service jobs, jobs in which we take care of one another. There will still be jobs of creativity, because AI is working with what has been, not what might be in the future. But as you rightly say, it will also be the work of the churches to help those who find that their particular job has gone away, and who will need help finding another role within our communities.
SPEAKER_00Well, I was interested when you were saying there are some jobs that will hang around. I just read an article today by Adam Gopnik about the player piano, and how everybody thought when the player piano came out, there go piano players forever, right?
SPEAKER_04Yeah.
SPEAKER_00And it had this phase; it was developed around 1896, I think, and for 20 years it was really popular. But you don't see player pianos anymore, except as a curious museum piece. People like watching a pianist actually play music.
SPEAKER_02Well, they do, and they like the individuality of it. I mean, the player piano plays the piece the same way every single time, but a human being does not. They may play it quite differently depending on the mood they are in, whether they're feeling pensive or exuberant. We want the communication that the actual presence represents. And I think we are also finding that people are starting to say, you know, I don't want machine-made things in my home; I want an artisan-made chair that the monks made in their wood shop. Again, it's that feeling: I want the small imperfections, I want each one to be an individual piece. And I want the same thing in my relationships with people. So we want it in our art, we want it in our artifacts, and we want it in our relationships.
SPEAKER_00I think we have time for one last question, so I'm going to go with the most popular one: with all the recent developments in AI, what has surprised you most since you started this work, what is that, 30 or 35 years ago?
SPEAKER_02Large language models were a surprise, I think, to a lot of us. We did not realize that merely pumping in a ton of words and using statistical models could model so much of human thought and human relationality. Obviously, this could not be done until we had the kind of chips that Nvidia is putting out, which can handle the huge amounts of data. And it could not have been done before the internet, because we would not have had a repository of all that data. Back in the '90s, those things were missing. It's surprising how much generative AI has accomplished. So even though I'm continually saying it's not going to get there, it's not going to be everything they're telling you it's going to be, it's quite amazing what it has already done. But I'd like to touch a little bit on one of the questions I glanced at there, which asked: should we limit our use of it, given the ecological implications of AI? I was in a faculty meeting with my seminary faculty this afternoon, and they were talking about coming up with an AI policy for all of us to use. One of my colleagues was adamant; he said that given the ecological implications of using AI, as well as the biases, the privacy implications, and the fact that AI has been trained on all of our works without our permission, stolen words, in effect, he couldn't countenance using it at all. I think that's a highly principled moral stand to take, and I wouldn't argue with it. On the other hand, for many of us, AI has become almost unavoidable in many ways. Unless you put minus AI into a Google query, you're going to get the AI summary at the top. But I think it is important for all of us to be aware of the ecological consequences, to be aware of the privacy issues and the bias issues.
And so maybe to limit our use of AI, to be careful in our use of AI, not to treat it like a toy that we're just going to play around with. Because we need to at least be aware that, as we play around like that, we're using very precious resources.
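[Editor's note: a toy sketch of the statistical intuition Herzfeld mentions, that "pumping in a ton of words" can model language. This counts which word follows which in a tiny made-up corpus and predicts the most common continuation. Real LLMs use neural networks over billions of parameters and tokens, not bigram counts; this only illustrates why word statistics capture any structure at all.]

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; an LLM's corpus is the internet.
corpus = (
    "love the lord your god and love your neighbor as yourself "
    "love your neighbor and serve the lord"
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(word: str) -> str:
    """Most frequent word observed after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("love"))   # "your": the most common continuation of "love"
print(predict("the"))    # "lord"
```

Scaled up by many orders of magnitude, with far richer statistics than word pairs, this is the sense in which a model trained only on text can sound as though it thinks.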
SPEAKER_01Let's thank them tonight for this amazing presentation. Thank you.