In Trust Center

Ep. 93: Staying ahead of the evolving world of AI in theological education

In Trust Center for Theological Schools Season 4 Episode 93


As artificial intelligence advances in theological education, leaders can stay ahead and find ways forward. In this episode, the Rev. Tay Moss, a leading voice on AI in the Church and theological education, considers how leaders and institutions can engage AI wisely – balancing innovation, ethics, and integrity. The conversation explores what it means to be human, how to bring a theological framework to new technology, and how schools can use new assessment models and spiritual assessment tools, helping leaders keep up with emerging trends and giving schools avenues to explore further. Moss has done extensive work in AI, including developing new technologies for churches and teaching on the subject. He can be reached here. He was previously on the podcast in Episode 78.

SPEAKER_00:

Hello and welcome to the In Trust Center Podcast, where we connect with experts and innovators in theological education around topics important to theological school leaders. Thank you for joining us. Hi, everyone. Welcome to the Good Governance Podcast. I'm Matt Huffman. One of the top topics in recent memory, I think it's fair to say, is artificial intelligence. How do we engage it? How do we not engage it? And how do we go beyond the simple questions, like whether students will use it or not? Because we already know they're using it. Professors are using it, podcasters are using it. So I've invited the Reverend Tay Moss, who's on the leading edge of using artificial intelligence in higher education, the church, and other education settings, to join us again and talk about it. Tay, it is great to have you on the podcast once again. Welcome. Thank you. Good to be here. So we talked a few months ago, maybe longer than that, which has been a lifetime in terms of artificial intelligence. Last time, we talked about how there were certainly folks saying, well, we're never gonna use it, students don't use it, and we all know everybody now is using it in one shape or another. So tell me about the last six months. Reframe this conversation for me, because what we talked about several months ago has changed, feels like in leap years. Talk a little bit about how we're now reframing it in ways that we probably hadn't a few months ago.

SPEAKER_01:

Well, one of the changes that I've noticed talking to educators is that there's no longer a question of whether students are going to use AI. The question is how to moderate and shape their usage of it. Sure. So rethinking assessment activities is a classic, and there are some instructors who will attempt to cling to the essay by using AI detection software, for example. But the problem is that that kind of software is famously unreliable. You get a lot of false positives. You can take essays that were written 50 years ago, feed them into these detectors, and they'll say 30% of them were written by AI, and of course they're not. They also tend to go false positive disproportionately on people for whom English is not their first language. So these kinds of AI detectors don't work, and assessment activities have to be reconsidered. The problem with a lot of the assessment activities that have been proposed as alternatives is that they often require more work of the instructors. All right. So I think the conversation has mostly moved beyond thinking that we could just keep on doing the traditional kinds of assessment activities, and actually toward new methods. For example, my wife uses blue books, just has the students handwrite the stuff. Another good one is doing oral exams instead, and so on. So that's one shift in the conversation. Another is that the people I talk to generally have actually used AI now. Six months ago or so, I was still talking to a lot of people who hadn't even tried these tools. Yeah, right. Now they're so ubiquitous, and they're built into so many things, that even people who don't go out of their way to use them are using them nonetheless. Right.
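Why detector flags mislead can be shown with a little base-rate arithmetic. The numbers below are assumptions chosen for the sketch, not measurements of any real detector; the point is that when most submitted work is honest, even a detector with a strong headline catch rate produces as many false accusations as true ones.

```python
# Hypothetical illustration of AI-detector base rates. All numbers are
# assumptions for the sketch, not claims about any real product.

def flag_breakdown(n_essays, ai_share, sensitivity, false_positive_rate):
    """Return (true_flags, false_flags) for a batch of essays."""
    ai_essays = n_essays * ai_share
    human_essays = n_essays - ai_essays
    true_flags = ai_essays * sensitivity                # AI essays correctly flagged
    false_flags = human_essays * false_positive_rate    # honest essays wrongly flagged
    return true_flags, false_flags

# 1,000 essays, 10% AI-written; detector catches 90% of AI text
# but also wrongly flags 10% of human text.
true_flags, false_flags = flag_breakdown(1000, 0.10, 0.90, 0.10)
print(true_flags, false_flags)   # 90.0 90.0 -- as many false accusations as true ones
precision = true_flags / (true_flags + false_flags)
print(precision)                 # 0.5 -- half of all flags point at honest work
```

Under these assumed rates, a flagged essay is no better than a coin flip, which is why a flag alone cannot carry an academic-integrity case.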
And that goes to an important trend in the AI industry, which is that we're starting to converge toward what we might call everything apps. These are applications that will organize your life for you, kind of like a personal assistant, and they'll have access to your email and your calendar and all this. There are some early prototypes of these kinds of technologies already. And they're gonna have an impact on education, because students will be using them to organize their agenda for the day, what they're gonna do, when they're gonna study. They're gonna use these tools to create study guides and things like that. These tools will probably have access to the learning management systems. So the AI will say, Oh, Jimmy, you have a test next week. Would you like me to help you study? And Jimmy will say, Yes. And then this AI will create study documents for the student, maybe a practice test to assess where they are, you know, do some pretty sophisticated learning interventions. And those are things that won't require the instructor or the institution to even scaffold. That'll just be something that your iPhone will do. Right.

SPEAKER_00:

So, I mean, when I was in seminary, I still remember study groups. So now my iPhone becomes my study group, or the AI becomes my study group. Maybe in some ways better. Of course, there's no relationship there. You do miss out on the relationship piece.

SPEAKER_01:

This is an interesting question: to what degree will the AI emulate human relationships? In some ways, doing so is part of a strategy, not just to get adoption of these technologies, but also because it makes them more effective interfaces, right? If you've used the ChatGPT voice function, for example, it's a very personable thing. Sometimes it even goes a little too far and can be kind of sycophantic. Recently they dialed back some of the sycophancy because it was annoying people. You know: that's a great idea, Tay. Let me help you do that. Yeah, I know, I know. It's funny. So there may be some of that, where the relational quality becomes more and more, in design terms, skeuomorphic. Is it gonna resemble the reality of a human more and more? And to what degree is that something we're gonna want, or something we're gonna want to dial down? My guess is that as AI proliferates, we're gonna have more and more ways to shape your experience of it, so you'll be able to dial that up or down. But for some reason I still get all these ads for AI girlfriends and boyfriends. A lot of our listeners probably get those ads too. And there have been some interesting studies of the Replika phenomenon, and it's a mixed bag. Some studies say these have a very positive social benefit, and others that they have a negative cost. For example, there was one study, and I'll have to find the exact reference for you later, but in a university setting, they gave students access to Replika and then measured the prevalence of suicidal ideation, and it went down by about 30 percent in the population that had access to Replika.

SPEAKER_00:

Wow.

SPEAKER_01:

Yeah, 30% reduction in suicidal ideation.

SPEAKER_00:

So let's step back a ways before we get into artificial relationships. Yeah. For theological schools, there are two sides of this I want to talk about. One is: how do you prepare people for this? I heard Anna Robbins chat on this podcast some time ago about what it means to be human, and certainly I think a theological school has a wonderful opportunity to speak into the AI conversation. Lilly Endowment has funded the University of Notre Dame to take a lead on some of that type of thinking. So there's that piece: how we engage it not just as a thought exercise, but how we train ministers for a reality in which people may have artificial relationships. Now, granted, that's not entirely new. Catfishing has been going on for years, but the sophistication of this, and what it means to be in a relationship, is new. So there's that piece, and then the other piece is the technology. You mentioned it earlier. So let's start there, with the technology. You can say, look, we're going to put on these filters, or we're gonna give somebody a blue book: you're writing out an essay. The other side is: we know you're using it, let me see your prompts.

unknown:

Okay.

SPEAKER_00:

Let me see what you're learning, let me see how you're engaging. Give me some of the frameworks, if you would, for how you think a theological school might look at AI as a tool in the classroom to prepare people.

SPEAKER_01:

I think a critical thing is to get the assessment activities in alignment with the goals and the mission of the institution, and what you're trying to train people to do, and then make those assessment activities look as close to that as possible. Sure. For example, there's a professor at the University of Toronto, actually at Wycliffe College, who teaches theology, and he's gone to using oral exams to assess his students. If you think about it, that is actually a much closer emulation of how they're likely to deploy their theological knowledge and skills, because most of the time you're at the back of the church shaking hands, and somebody asks you a question about your sermon. You know, what did you mean about sanctification? I didn't understand.

SPEAKER_00:

That's a great point.

SPEAKER_01:

Yeah. And you're not gonna have time to write them an essay with proper footnotes and stuff. You're just gonna answer them off the cuff, right? So there's that kind of question: is our assessment activity really close to the work they're gonna be doing? And sometimes they are gonna be using AI to do that work. For example, one thing I'll do is write a sermon and then feed it into ChatGPT and say, okay, make a children's talk based on this. And it's very good at that. It makes great little children's talks.

SPEAKER_02:

Interesting.

SPEAKER_01:

With manipulable objects and scripts and the whole nine yards. It's great. So now, is that cheating? Am I plagiarizing my own material? I don't think so. I think I'm doing something more akin to translation. But that's the kind of usage pastors are doing with this technology in real life. So we should be teaching them how to do that in ways that are effective. But one of the things I tell people is that part of the problem with AI is that everything it produces sounds plausible. Right. So you have to actually know enough to assess whether the output is good or not.

unknown:

Right.

SPEAKER_01:

So something like a children's sermon I am perfectly capable of writing myself. I can see the result and know right away whether it's good or not. Right. But if somebody is, say, a student preacher, and they're using AI to write the children's talk for them, and they don't know whether it's good or not, that's problematic. Right. So how do we find those cases where we can teach students to develop the skills to discern the quality of the content that's being produced? I mean, some of these are just skills that students must suffer to learn. They have to experience discomfort. That's part of the cognitive dissonance associated with learning, right? That kind of feeling. Yeah.

SPEAKER_00:

Well, seminaries are supposed to help with spiritual formation. For ATS-accredited schools, that's an accreditation standard, whatever that looks like in context. Do we have an opinion yet on whether AI is helpful to spiritual formation or not? Because when you talk about sermons, I've always felt the sermon starts to change me, and my study is supposed to change me. But if I go into ChatGPT and ask for a sermon, and I've tried this just to see, it will often come out with a very standard, pretty solid, basic outline.

SPEAKER_01:

Uh-huh. Well, this goes to questions of the theology of homiletics. What are you even doing in this act of preaching? I've had students learning how to preach in field placements at my church and things like that, and I would tell them: you yourself need to be changed this week. Right. In your process of preparing the sermon, because if you have not been changed, then you have very little hope of changing anybody else, right? Yes. And so the whole point of going off and studying the texts and all that stuff during the week is to put yourself in the way of that freight train of the Holy Spirit and let it bowl you over. And then you rehearse that with a congregation. At least that's my own theology of preaching, and one that I've taught. But if that's the case, then if you take away that being-bowled-over moment by just having ChatGPT write it, if you don't suffer, then there's very little value to it. And this is, I think, a theological question that needs to be chewed through. But I think we are gonna have an increasing surface area of choices to make around AI and where it is or isn't being used. So getting to the ground of the theological underpinnings of preaching, of pastoral care, of those things, is one of the essential tasks, because we can't take it for granted anymore. Is AI capable of helping people with their spirituality? Yes, definitely. I mean, I built a tool called AI Faith Coach, which you can actually visit right now if you want to. It's AIFaithcoach.com. It's just a simple questionnaire that you answer, and it sends your answers to a back end which has an elaborate rubric for assessing your spiritual style.
And then it emails you the profile. It basically says: based on this, you would probably benefit from studying Benedictine spiritual methods, like daily prayer and retreat and all these things. So it sort of profiles you. It's a fairly crude tool in a lot of ways, and it's not dissimilar from many other kinds of surveys that are much more deterministic, just A plus B plus C equals D kinds of things. But it's an interesting example of how we can give people access to resources very simply and cheaply. For some people, this will be a great little entry point. They'll learn about themselves, they'll learn about spirituality, or whatever else. And of course they can play with it. They can retake the survey and give completely different answers. Today I don't feel like I find God in nature. Today I don't feel like I like music or art. You can take a different survey and get different results and play with it in that way. But there's a thinker, Andrej Karpathy, who pointed out in a talk I saw that what's increasingly happening with AI is this kind of spectrum of how much agency we give it to do things for us that we used to reserve for ourselves. An example would be driving a car, right? It's not really binary anymore, that either it's a self-driving car and you do nothing, or it's a regular car. Now it's something like a Tesla, where you're supposed to keep your hands near or on the wheel, and it'll wake you up to participate in the driving from time to time. There's this slider of agency, and that's probably a slider that we're gonna have control over. So, how much agency do we want to give it in our instructional activities?
How much do we want to give it in our pastoral activities? Those are the critical questions for the future.
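The questionnaire-to-profile flow Moss describes could be sketched deterministically, in the spirit of the "A plus B plus C equals D" surveys he compares his tool to. The question ids, style names, point weights, and recommendation text below are all invented for illustration; the actual AI Faith Coach rubric is not public in this episode.

```python
# Minimal sketch of a survey-scoring rubric. All names and weights are
# hypothetical; this is not the real AI Faith Coach implementation.

RUBRIC = {
    # question id -> {answer: {spiritual style: points}}
    "q1_where_do_you_meet_god": {
        "nature":  {"contemplative": 2},
        "service": {"activist": 2},
        "liturgy": {"benedictine": 2},
    },
    "q2_preferred_practice": {
        "silence":      {"contemplative": 2},
        "daily_office": {"benedictine": 2},
        "justice_work": {"activist": 2},
    },
}

RECOMMENDATIONS = {
    "benedictine":   "You would probably benefit from Benedictine methods: daily prayer, retreat.",
    "contemplative": "You would probably benefit from contemplative practices: silence, centering prayer.",
    "activist":      "You would probably benefit from service-centered spirituality.",
}

def profile(answers):
    """Score survey answers against the rubric and return (top style, recommendation)."""
    scores = {}
    for qid, answer in answers.items():
        for style, points in RUBRIC.get(qid, {}).get(answer, {}).items():
            scores[style] = scores.get(style, 0) + points
    top = max(scores, key=scores.get)
    return top, RECOMMENDATIONS[top]

style, text = profile({"q1_where_do_you_meet_god": "liturgy",
                       "q2_preferred_practice": "daily_office"})
print(style)   # benedictine
```

Retaking the survey with different answers yields a different profile, which matches the "play with it" behavior Moss describes; a production version would add the emailing step and, presumably, a richer rubric.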

SPEAKER_00:

Well, there's the complexity, of course, in the classroom or in the seminary. You can make policies with AI; AI is a great tool for early drafts of policy. You can create a syllabus in no time. But, granted, does it have the depth? And I don't need letters or emails from faculty about this, but does it have the depth and the expertise? I think that's where, in a session I recently did on communication, somebody asked, What about AI? I said, well, what AI won't know is where the landmines are. The human interaction, where people are going to respond or not respond, particularly if you're in a denominational setting. But here's another thing. How do we train seminarians, people going to theological school, for dealing with AI? You're out shaking hands after giving a sermon, and now they're not just talking about your sermon, they're talking about what ChatGPT told them about your sermon. Right. Perhaps in real time, but also through tools. You know: I went on Tay Moss's site, found this spirituality tool, and I'm not Benedictine, but now I want to be Benedictine. Yeah, yeah.

SPEAKER_01:

Yes. Well, doctors have had this problem for a long time. Before AI, people would go on to WebMD and diagnose themselves. Right. And so doctors, I've found, often now have to back up their diagnosis by actually flipping open the book and showing you the criteria, which has brought a kind of interesting transparency to medical practice. But it also points out an interesting issue here, which is that they have AI now that's better than doctors at diagnosis, right? At, you know, superhuman levels. There was actually a recent article in the Journal of the American Medical Association about this. The problem is that diagnosis is only one small part of the work of doctoring. Medical care is about a lot more than that. For example, diagnosing diabetes: piece of cake. But getting somebody to change their lifestyle: hard. Right. So I think this analogy applies in teaching as well. It may be that the content is easy, right? But the relationships and the human interaction, creating that learning environment, creating that culture of the classroom, those are gonna be the things that are really gonna be the focus of attention for instructors.

SPEAKER_00:

Well, I wonder, in theological education much like any other, about that relational aspect or that interaction, because I know of schools that are using AI in competency-based education. And their question is: how are you relating? In some ways, as you're saying, it's like an oral exam. What's going on? How are you using it? What does that look like? And some would say that may be a better assessment than others. Let's talk about what it means to be human in this, and the role. When we talked on the podcast some months ago, the difference between then and now, it's been a season or two, not a year or two. Seminaries don't often move quickly, because that's not what they're built for; they're built for reflective thought, thoughtful spiritual reflection. Right.

SPEAKER_01:

Well, one of the things that's changed in the last six months has been the rise of vibe coding, which is the term we use for using AI to write computer programs essentially from scratch. You have a concept, you have an idea, you tell the AI, and it builds the entire computer program for you, installs it on a server, does everything. And it's astonishingly good at this moment. I've built a bunch of different applications and websites in a matter of hours or days using this kind of technology. But what it requires, at this level of its iteration, is that the person giving the prompts knows enough about computer programming to articulate the design they want. So you say things like: I want it to be constructed from microservices, in an architecture that's modular; it does authorization using two-factor; all these kinds of technical terms. And it will turn that into a program. But I saw a video recently of a bunch of children vibe coding. They don't know that stuff. They're not asking it to do that. They're asking for a game. They describe a game, and the computer's making one for them. What this means is we're getting toward software 3.0. Software 1.0 was when we used procedural programming languages like C or Java to give explicit instructions to a computer. Software 2.0 was when we started building language models: we dumped gazillions of texts into these computers and made neural nets that imitate us. Karpathy says we should maybe be thinking of LLMs, large language models, conceptually as spirits of people. They're not complete, they're sort of shadows of people, but they are meant to kind of simulate us, right? So that was software 2.0. In software 3.0, the code becomes the prompts.
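The "code becomes the prompts" idea can be made concrete with a toy contrast: in software 1.0 the human writes the procedure; in software 3.0 the human writes a natural-language spec and a model produces the program. The model call below is a stub invented for the sketch; a real system would substitute an actual LLM API.

```python
# Toy contrast between software 1.0 and software 3.0.
# stub_model stands in for a real language model and is an assumption
# of this sketch, not any particular product's API.

# Software 1.0: explicit, procedural instructions written by a human.
def shout_v1(text: str) -> str:
    return text.upper() + "!"

# Software 3.0: the "program" is a natural-language spec.
SPEC = """Write a Python function shout(text) that returns the text
uppercased with an exclamation mark appended."""

def stub_model(prompt: str) -> str:
    # Stand-in for an LLM call; returns source code for the requested program.
    return "def shout(text):\n    return text.upper() + '!'\n"

generated_source = stub_model(SPEC)
namespace = {}
exec(generated_source, namespace)      # load the model-generated program
shout_v3 = namespace["shout"]

print(shout_v1("hello"))   # HELLO!
print(shout_v3("hello"))   # HELLO!
```

The two versions behave identically; what changes is who wrote the procedure, which is exactly the shift that lets non-programmers, like the children in the video, describe a game and get one.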
So what's gonna happen is the operating systems are gonna be replaced by the AIs. What you'll do is simply tell your computer what you want, and it will make it for you on the fly and be a multi-purpose device. This is why I was saying earlier that we're getting to the convergence on everything apps. So in an educational context, one of the things this means is that the cost of developing new software is gonna plummet; it's already plummeting. And that in turn means that a lot of technologies that were prohibitively expensive or difficult to develop can now be prototyped and put into production very quickly and very cheaply. So for any of you institutions listening to this: if you had an idea for something like a database for your alumni, or, you know, anything, you can probably have it done now far cheaper than you thought you could. We're gonna see a proliferation of that. And in teaching too. If you want to prototype a course really quickly, you'll be able to do it. Like you were talking about generating syllabi, same thing, except even more complex, even more detail. You'd be able to feed it things you've already taught. You could feed it all the stuff you've written on a subject and say, I want to create a new course about this, and it will actually draw from your own writing to create it for you. Right. So, what's the human role? Yeah, exactly. What is the human role? So I gave a sermon about this recently, where the text was from the book of Acts, and the scene was when Dorcas has died and Peter comes to visit her. And my sermon was about creation.
Because my hypothesis is this: the traditional interpretation of them showing Peter the clothes that she had made is that they did it to show Peter how generous she was, that she was helping people's material need. My interpretation is that yes, that is true, but it's also true that they did this to show him how beautiful the clothes were. That she was a craftsperson who made beautiful clothes for the people she loved. And the act of making clothing for a person is a very loving act. I think there was something aesthetic in the quality of it which was only available to Peter by physically seeing the object, by seeing the art. And we don't have access to that anymore, because we don't see what Peter saw. We just get the shadow of it. So, where's the human in this? I think it's in that moment of recognition and of delight that happens. So, my advice to people in this kind of vibey future, where you can dream up anything and have the AI build it for you, is to follow the delight. Follow the joy, follow the where's-God-in-this questions. And you'll be able to go so much farther and faster than you used to, because now you have this incredible tool.

SPEAKER_00:

Right. Yeah. Those are great thoughts; my mind's really going. But as we close here: if I'm a senior leader or a board of directors of a theological school, I might say, AI is just another thing I can't get my arms around; there are already so many things. Give me some thoughts in closing. What would you tell them? What would you say they either need to know or not know? Maybe it's a framework for how to come at it, maybe a thing like you were mentioning: hey, if you want to get in the market, it's cheaper than ever for some of these apps. But the big framework. What I've heard today: we've talked about the humanity of it, how you can use it as a tool, how you can use it in assessment, creation, the joy. I love that story in Acts. Where are you gonna come out and tell them, here's what you need to think about that maybe you weren't thinking about six months ago, and here's where we're at now?

SPEAKER_01:

I'd say one piece of advice would be that in your organization, you should probably find one or two scouts who can move ahead of the pack into this world, because it is kind of a wilderness right now. There are a few sketchy maps, but mostly it's about exploring and figuring out what's even on the other side of that horizon, what's beyond that tree line. Finding people in the organization who can do that is really important, because then they can report back and say: oh, hey, you know how we're doing this thing this way? Well, it turns out there's this AI product that'll make that much better, or whatever, right? So that's one thing. Another thing for institutional leaders to recognize is that there's a lot of shadow usage of AI, which is people using AI in their work and then not reporting it. Right. And there are a lot of reasons why they don't report it. Sometimes it's because they fear censure from the organization. Another is that they fear their work will be valued less. You were talking about policies before. If you give an administrator the task of writing a new privacy policy, and they use AI to generate it and then turn it in to you, and you know that it's AI generated, are you going to treat it differently than if you knew they had just spent an hour writing that document? And how did they write that document, by the way? They probably went and looked at all the comparable schools' documents, right? And then they cobbled it together, which is not that different from how the AI would do it. Right. Right. So there's shadow usage in these organizations. Be aware of that, and clarify institutional expectations around the use of AI.
For example, it might be fair to say something like: please disclose to us when you're using AI to write these documents, because we want to give them extra scrutiny. Not to judge you because you use them, but to judge the AI and give it scrutiny, which is important, especially for documents that matter to people's lives, like assessments and things like that. Sure. I would never counsel people to use AI to do performance assessments. Right. However, one thing I have used it for is coming up with interview questions for assessments. Give me 20 questions I can ask my employee to help figure out where they are. For that, it's great, right? Perfect. So those are two pieces of critical advice: clarify institutional expectations of usage, and find people in the organization who can scout ahead of everyone else and explore what this new world looks like.

SPEAKER_00:

It's a great place to end, Tay. The Reverend Tay Moss. We will put links to his site, ways to contact him, and other resources at intrust.org slash podcast. And I'm sure we will be in communication again, Tay, because in another six months the world will turn again. I'll introduce you to my new AI girlfriend. Thanks so much for being with us, Tay. Thank you. Thank you for listening to the In Trust Center's Good Governance Podcast. For more information about this podcast, other episodes, and additional resources, visit intrust.org.