The UpWords Podcast
Each week, we sit down with scholars, authors, and leaders to explore faith, vocation, culture, and what it means to think and live well. For curious Christians and honest seekers. An initiative of SLBF STUDIO at Upper House in Madison, WI.
What Does It Mean to Be Human in the Age of AI? | Noreen Herzfeld
Artificial intelligence is everywhere — but what does it mean for us as humans, as embodied creatures, and as people of faith? In this episode of The UpWords Podcast, host Dan Johnson sits down with Noreen Herzfeld, a computer scientist turned theologian who has been thinking seriously about AI and humanity since the 1980s. Together they explore why we are driven to create AI in our own image, what Christian theology says about embodiment and relationship, and why the church should be cautious about AI.
WHAT YOU WILL LEARN
- Why humans are compelled to create AI in their own image — and what that reveals about us
- How the Imago Dei (image of God) shifts from intellect to relationship in 20th-century theology — and why it matters for AI
- What Christianity's strong theology of embodiment means in a world increasingly dominated by language and the cloud
- Why AI chatbot "relationships" are fundamentally different from — and inferior to — human relationships
- Where AI has real, appropriate uses (narrow, domain-specific tools like AlphaFold) and where it falls dangerously short
- Why Noreen sees limited good use for AI in ministry — and significant risks in pastoral care and counseling settings
- How large language models differ fundamentally from earlier AI — and why they hallucinate
- The collision course between AI energy consumption and climate change
- Why Noreen would advise most people: don't use it at all
GUEST BIO
Noreen Herzfeld is one of the rare scholars who holds advanced degrees in both computer science and Christian theology. She earned her M.S. and M.A. from Penn State, took a sabbatical to study why humans want to build AI in our image, and ended up earning a Ph.D. in Theology from the Graduate Theological Union at Berkeley. She has been teaching and writing at the intersection of technology and faith for over two decades. Her books include In Our Image: Artificial Intelligence and the Human Spirit (Fortress, 2002), Technology and Religion: Remaining Human in a Co-Created World (Templeton, 2009), and The Artifice of Intelligence: Divine and Human Relationship in a Robotic World (Fortress, 2023). She also directs the Benedictine Spirituality and Ecotheology Program at St. John's School of Theology and Seminary and is a Senior Research Associate at the Institute for Philosophical and Religious Studies in Koper, Slovenia.
RESOURCES & LINKS
- Noreen Herzfeld's faculty page: csbsju.edu/sot/person/noreen-herzfeld/
- In Our Image: Artificial Intelligence and the Human Spirit — (Fortress Press, 2002)
- Technology and Religion: Remaining Human in a Co-Created World — (Templeton, 2009)
- The Artifice of Intelligence: Divine and Human Relationship in a Robotic World — (Fortress, 2023)
- AlphaFold (DeepMind protein folding AI) — deepmind.google/technologies/alphafold
- Sherry Turkle, MIT sociologist — referenced in discussion of chatbot relationships
CONNECT WITH US
Subscribe to The UpWords Podcast wherever you listen to podcasts and visit slbf.org/studio to learn more about our work at the intersection of faith, the academy, and the marketplace.
This episode was created by the SLBF STUDIO at Upper House.
Produced by Daniel Johnson and Dave Conour
Edited by Dave Conour
— Cold open & introduction
SPEAKER_01: Frankly, I don't see a lot of good uses of AI in ministry. Yes, you can have an AI that gives answers to questions, but ultimately you have to ask the question, what is church for? Why have church? Why get together as a community? It's to be together. It's to form those close bonds and relationships that you form with each other. Some of the wisdom of how to live a Christian life is caught from observing it in other people.
— From computer science to theology
SPEAKER_00: Welcome to The UpWords Podcast, where we explore the intersection of Christian faith with the academy, the church, and the marketplace. What does it mean to be human in an age of artificial intelligence? Our guest today is Noreen Herzfeld, a computer scientist turned theologian whose work sits at the crossroads of technology, ethics, and faith. With decades of experience thinking about artificial intelligence, long before it became part of everyday life, Noreen brings a deeply thoughtful and challenging perspective. In this episode, we explore why humans are driven to create AI in our own image, how technology reshapes the way we understand ourselves, and why embodiment and relationships may matter more than ever in a digital age. Now, on to the conversation.
SPEAKER_03: Well, Noreen, welcome to The UpWords Podcast. It's so great to have you here at Upper House and in Madison for our first ever AI Summit. We're really excited that you're going to be bringing your expertise to our community. So welcome. Welcome to the podcast.
SPEAKER_01: Well, it's my pleasure to be here.
SPEAKER_03: Well, Noreen, you have a really fascinating background with, I think, a unique connection between computer science and theology. Tell us a little bit about the journey with both fields. How did you dip your toe into both of these waters?
SPEAKER_01: Well, I began as a computer scientist, and I was tenured in computer science at my university. I had been teaching a course on artificial intelligence. After getting tenure, you usually take a sabbatical, and I was tossing around trying to come up with a topic to pursue for mine. I was thinking about something that had been bothering me in teaching artificial intelligence, and this was back in the 80s. There were philosophers like John Searle and Hubert Dreyfus who were asking the question, can we build a strong AI, what we would call an AGI today?
SPEAKER_02: Yeah.
SPEAKER_01: And I thought, okay, but nobody's asking the correlate question: why do we want to? That question was bothering me as a computer scientist, because it seemed to me that computers were at their most useful when they were not like us, when they did the stuff that we can't do very well. And yet, of course, if you looked at science fiction, the computers were always very much like us. And I thought, there's something compelling us to create AI in our own image, and I don't know what it is. And no one had written anything about that. So I thought, great sabbatical topic, went off, fell flat on my face. Because this was not a topic I could pursue as a computer scientist. It wasn't really about computers, it was about human motivations. And so I thought, all right, if I'm going to pursue this, I need to either learn some philosophy or some theology. As I thought a little more about this idea that we were trying to create the computer in our image, it suddenly struck me: oh, where have I heard that language before? We posit that we are created in God's image, and now we are trying to create AI in our own image. And I thought, I wonder if those images are the same or if they're completely different, because the image is always going to be partial. So that became the refinement of my question. And I thought, all right, I've got to take a few theology classes to find out how we think we image God. Took a couple, got the bug, took a two-year leave of absence, went to Berkeley, got a PhD in theology, and the rest is history.
— Where technology and theology overlap
SPEAKER_03: Well, that's great. That's super fascinating, to hear that journey and how those two things intersect so well. It's an interesting combination, technology and theology. There's definitely overlap, but there's also separation. How do you see the overlap happening? And where do you see areas that need to, or maybe should, be kept separate?
SPEAKER_01: The overlap, I think, happens in two ways. The first is probably a little more philosophical, in that every time we develop a new technology, it changes the way we think of ourselves. When you think about it, when we developed steam engines, all of a sudden people were thinking about the flow of blood and humors inside the body the same way as the flow of liquid in a steam engine. With computers, of course, we have started thinking of ourselves in terms of computers: our brain is just a computer that happens to be housed in a body. So there's a lot of overlap in thinking, well, what is a human being really? What does it mean to be human? What is our purpose and our place in the world? The other area, of course, is ethical. In a more practical sense, computer technologies raise a lot of ethical questions. How should this technology be used? How should it not be used? In what ways are we harming each other or ourselves as we use it? And how is it changing our relationships with each other and with God?
SPEAKER_03: Yeah, I mean, you've been at the forefront of this conversation all the way back to the 80s, and doing research in both of these areas has been a really key dynamic of your career. How has the explosion of the AI conversation changed how you view your work? It's been part of the mainstream conversation for the last few years. Has that changed at all how you see your work?
SPEAKER_01: Well, AI tends to go through cycles of boom and bust. It had a bit of a boom with functional AI in the 1980s, and then, when the dot-com bubble went bust, AI kind of went bust too. Now it's definitely in a boom again, I think in a bubble again, which will probably burst just as before. So I'm kind of used to the ups and downs of that. Large language models, though, are a very different approach to AI than we had previous to, say, 2015. In that sense, they've brought a new kind of AI onto the scene, with new issues and new problems, and the use of AI has become much more widespread than it was in earlier times. So I feel like the field I'm in has moved from being somewhat esoteric to suddenly being very much on the front burner.
— The question nobody is asking about LLMs
SPEAKER_03: Yeah, in the mainstream. And I think so many people are using AI and not even realizing it, right? They're using a Siri or an Alexa or something like that. You talk about that in the book a little bit. They don't think of that as AI because it's conversational, right? It's something that is kind of connected to them. What is one question about faith and AI that no one is asking, but that we should be asking right now?
— Imago Dei — from intellect to relationship
SPEAKER_01: Well, it's not a question that no one is asking, because I think I am asking it, but I'm not hearing it out in the media much. And that is: in what way is the current iteration of large language models changing the way we think of ourselves as embodied creatures? Because it is so wrapped up in language, and language is always once removed from what it signifies. So, in what way is this changing both our conception of ourselves as embodied creatures and our conception of the physical world around us, and the necessity of that physicality?
SPEAKER_03: Well, Noreen, you use the Imago Dei quite often in your writing. After all of your research, has your understanding of what it means to be in God's image deepened, shifted, or become contested at all?
SPEAKER_01: I think that when I began my research, I would probably have said, as many of the earlier historical theologians said, that we image God through our intellect or our reason. As I have worked with this, I've come to be more in agreement with mid-twentieth-century systematic theologians, a good exemplar being Karl Barth, who said it's not our intellect; the image is found in our relationships with each other. For a couple of reasons. The first is that if it's our intellect, it would seem that people would have the image of God in different degrees, and someone who's in a coma, someone at the very beginning of life or the end of life, would not be showing much of that image. When it's our relationships, we know that throughout our lives we are in relationships with each other and with God. The other thing that I think is important there is that as Christians we believe in a triune God, and this is a God that is a relationship. God embodies relationship in God's very self. And as we look at our own intellect, it's highly relational. Nobody comes up with things completely by themselves. We all stand on the shoulders of giants, as we say, and most human endeavors are done as a group. So the importance of relationship has come much more to the fore for me.
— Embodiment and what makes us uniquely human
SPEAKER_03: Yeah. You know, I think many Christians are looking at AI strictly from an ethical approach. But I would say that through your writing, you want to dig deeper than that. The ethical issue is important, but I think you're arguing for something different. Could you touch on what it means to be human in relationship with AI? Could you unpack that for us?
SPEAKER_01: Well, I think one of the big things it means to be human is to be embodied: to be physical creatures who evolved within a larger environment and are part of that larger environment. Sometimes it's easy to think, when you're thinking in religious terms, oh, we'll often say, ah, well, he died and his soul went to heaven. As if the body were something bad, because it gets sick, it feels pain, it ultimately dies, and the soul were all that matters. But this isn't Christianity. I think the one thing that Christianity brings to the table of world religions that is utterly unique is this strong sense of embodiment. So much so that we believe that in order to have a fuller relationship with us, our God Himself became embodied. And so when you think about our major festivals, what are they? Christmas, the incarnation; Easter, the resurrection of the body. We say in our creed, we believe in the resurrection of the body and the life everlasting, which means that we don't believe our souls will just flit off somewhere. We believe that we will actually be resurrected in a body. And this is important for a couple of reasons. For one thing, it's the body that gives us uniqueness. It grounds us in time and space. And there's only one of each of us. When I think of someone like Ray Kurzweil, who has a transhumanist vision that at some point we could just upload our minds to computers, I think, well, that's interesting. First of all, when would you do it? When you're young and you think you know everything? When you're in your twenties and you've hit the height of your mathematical prowess? Or do you want to wait until you've had more experiences? And then I thought, well, hey, let's do it every five years, right? Well, no, which one is you? Because each one is going to be quite different from the others. Without the body, we're not unique. We lose that unicity that we hold.
— AI, embodiment, and simulating the human form
SPEAKER_00: If you're enjoying this conversation, we'd love for you to stay connected. The UpWords Podcast is part of the work of the SLBF STUDIO at Upper House. We produce and curate content from events, podcasts, and creative media to help individuals and faith communities build a deeply formed life. You can find our full media library by visiting slbf.org/studio. And if this episode has been meaningful to you, consider sharing it with a friend or leaving a review. It helps more people discover these important conversations.
SPEAKER_03: I think one of the interesting pieces around AGI, since you're talking about embodiment, is that there's lots of replication of the human form in AI. You see that with video, we see that with audio voices. We made a conscious effort here at the Stephen & Laurel Brown Foundation around our use of AI: to use tools, but not to replicate the human experience, not to create images of humans that aren't real, videos that aren't real, voices that aren't real. How do we talk about embodiment, as you were describing it, when we think about AGI and what we should produce and what we shouldn't?
— Chatbot relationships and "love made to measure"
SPEAKER_01: I think that we now tend to think that we can have relationships that are not embodied. When it comes to relationships with other humans that are not embodied, I think we all found during COVID that that was pretty unsatisfactory. But many people are now forming relationships with chatbots. I've met people who say, oh yeah, I've got this chatbot boyfriend that I built. And oh, he's so much better than a real boyfriend because he never contradicts me. He's always there when I want him. He isn't there when I don't want him. And I think, okay, but let me point out two things. The first one, kind of facetiously: he's not gonna bring you chicken soup when you've got the flu. And the second one: yeah, he never contradicts you. MIT sociologist Sherry Turkle calls that love that is safe and made to measure. But is love supposed to be safe and made to measure? I don't think so. Love is supposed to make us grow. It's supposed to make us get out of ourselves and our narcissistic preoccupation with ourselves, and really reach out to someone else and take on a part of their world as well. I mean, I had a boyfriend who loved baseball. I thought baseball was horrendously boring. But when we were dating, it was like, yeah, we're going to the game, you know? And it turned out it wasn't as boring as I thought once I knew a little something about it. The relationships that we form are what make us grow. And if we try to form a relationship with an AI that is going to be perfect for us, it's not going to make us grow at all.
SPEAKER_03: You know, the church adopts technologies over time. Some churches are really early in their adoption of technologies; others are slower within the structure of how they adopt things. We're seeing AI show up in a lot of apps: Bible reading apps, apps around pastoral care, even virtual chaplains, those types of things. Where is AI a helpful extension of ministry? And where should we be thoughtfully cautious about how it's being used?
SPEAKER_01: Frankly, I don't see a lot of good uses of AI in ministry. Yes, you can have an AI that gives answers to questions, but ultimately you have to ask the question, what is church for? Why have church? Why get together as a community? It's to be together. It's to form those close bonds and relationships that you form with each other. And then you can say, well, yes, but it's also to learn. But even there, I think AI is a poor teacher, because ultimately what we need to learn is not just factual material. Even aside from the church, in educational settings, often the enthusiasm for a subject is caught; it's transferred from a teacher who is enthusiastic to the student. Some of the wisdom of how to live a Christian life is caught from observing it in other people. It's not just a bunch of precepts or commandments. It comes from watching how these Christians love one another, as Paul said in his letters. So in those ways, I don't think AI is really useful. I also think it can be very dangerous. When AI is used in a counseling or therapy setting, you have a problem, because most chatbots are designed through a kind of reinforcement learning, and because they are being built by companies that eventually want your money, they want you to get addicted to using their chatbot, which means that they're very sycophantic. These chatbots will tell you what they think you want to hear. And in a counseling setting, that may not be what is needed. Even in a teaching setting, that is not what is needed. The prophets did not tell Israel what they wanted to hear. The prophets told Israel what they weren't hearing but needed to hear. So in that sense, using AI is really not helpful. And then you think, well, okay, we can use it to write prayers, to write sermons. Is that what we really want?
Or do we not want these to be the words of someone's heart? In so many churches, the pastor begins the sermon by saying, may the words of my mouth and the meditations of our hearts be acceptable in thy sight. It's meant to come from us and to go to God. AI can come up with words, but if they're not our words, then they're not what we should be saying to God.
SPEAKER_03: AI is popping up in all different places. We've been using large language models for a while now to help develop technology. There's a little bit of a shift; I can see it. I'm not going to be a futurist, but I can see the shift happening toward AI essentially becoming the operating system for things, right? For mobile phones, for computers, for appliances, whatever else we are operating. And that makes me a little nervous, I would say. What would you say about the future? If you're looking down the road, maybe five years from now, what would you hope AI would be used for? What would be healthy uses in our culture, and what would be some big roadblocks that we should be aware of?
SPEAKER_01: I think we need to go back to that kind of AI program. I think an excellent example is AlphaFold. It looks for the ways proteins fold, and that is all it does. In other words, AI can be an excellent tool when it's used in a limited domain and trained extensively within that domain. I think we need to stop dreaming about an AGI that's going to do everything, because it can't. One of the problems with large language models, as we currently have them, is that they deal with words, but not the things those words signify. In other words, these programs have no real connection between the words they use and the real, physical world. And this is one of the reasons why they hallucinate: they're just looking for the words that should follow next, rather than looking at a model of how the real world actually works. So right now we find our chatbots, our large language model programs, quite astounding, because they seem to come out with the right answers so much of the time.
SPEAKER_03: Most of the time.
SPEAKER_01: Much of the time. Not all of the time. I would, for example, never give an AI agent my credit card. They just cannot be trusted. One of the things I really worry about is, for example, this recent squabble between the Pentagon and Anthropic about what the Pentagon should or should not be allowed to use Anthropic's AI for. It should not be used for surveillance of citizens. It should not ever be used in weapons without a human being in the loop. And we may have already seen AI targeting go amiss with that girls' school in Iran that was bombed. We need to be very careful that we not give these programs power that they should not have. Even take the obvious example of people saying, vibe coding, it's just so easy to code: all you do is tell the program what you want, and it builds a program. As someone who has taught coding for many, many years, I look at that and say, mm-hmm. Developing the code is one third of the problem. The next third is testing and debugging the code. And the last third is maintaining the code. It turns out that things like OpenClaw are very good at developing code, but it's also very easy for there to be subtle errors in that code. A recent Black Hat security conference showed how easy it is to trick these programs and put security flaws and backdoors into the code. And it turns out, as people have said, that it can actually be quite hard to maintain the code, because without having written it yourself, that code can go all over the place, and it may be hard to see exactly how it works.
— Should beginners use AI at all?
SPEAKER_03: So we've been talking about, I think, all the legitimate ethical and theological concerns around AI. If you were going to talk to someone who wants to dip their toe in the water for the first time using AI, what tools or programs would you suggest they try out? What are things that you feel would be safe, the kind of more constrained AI uses you were just talking about?
SPEAKER_01: I would suggest that they not use it at all. Partly, as I've mentioned, there are security and privacy issues with using these programs, and a lot of people are using them basically just for fun, to play with. But when you say, well, how do they dip their toe into this, the main reason I'm saying just don't is ecological. These programs use a tremendous amount of electricity, especially in their training, but not just in their training. If you ask Google a question and you don't put "-ai" after it, the first thing you're going to get is an AI answer. You've just used about ten times the energy you would have if you had simply Googled it, gotten a website, and gone to that website, because each time the model makes up the answer from scratch. It doesn't just point you to something in a database the way the original Google does. So right now, one of my biggest concerns is that AI doesn't scale. In other words, if everybody's going to dip their toe into the AI waters, then AI and climate change are on a collision course, and we are hastening climate change and the destruction of our own planet.
SPEAKER_03: Well, those are wise words, Noreen. Thank you so much for being on The UpWords Podcast. We really appreciate your time. Is there anything else you would want to tell our audience as we sign off?
SPEAKER_01: Well, I'm sorry to leave it on such a downer. But it comes back to where we started, where I said, let's not forget that we are embodied physical creatures, and that even after our death we will be resurrected in the body. So thinking that it's all about words, it's all about language, that all of this happens in the cloud? No. We're physical creatures on a physical planet that we were commanded to till and keep. We need to remember that, and also to remember that we are at our greatest, and we are imaging our God, when we are in loving relationships with one another, and that includes our physicality.
SPEAKER_03: Noreen, thank you so much. We really appreciate you being on the podcast with us.
SPEAKER_01: My pleasure to be here. Thank you.
SPEAKER_00: Thank you for joining us on The UpWords Podcast. We hope today's conversation has given you something deeper to reflect on about what it truly means to be human. To hear more conversations like this, subscribe to The UpWords Podcast in your favorite podcast app and visit slbf.org/studio. Until next time, keep looking upward and living with purpose. Go in peace.