In Trust Center

Ep. 78: AI's potential for theological education

In Trust Center for Theological Schools Season 3 Episode 78

The Rev. Tay Moss, an Episcopalian priest, media producer, and educator, has produced an AI-driven website to help people explore the Anglican church, AskCathy.ai. In this episode he explores the potential of AI for theological schools and how they can use it to enhance student engagement and streamline access to information. He also discusses the potential for new pedagogy, as well as the challenges AI poses for schools, and offers some thoughts about how schools and leaders can start to engage with AI.

SPEAKER_00:

Hello, and welcome to the In Trust Center Podcast, where we connect with experts and innovators in theological education around topics important to theological school leaders. Thank you for joining us. Hi, everyone. Welcome to the Good Governance Podcast. I'm Matt Huffman. At the In Trust Center, we've talked a lot about artificial intelligence. We've had some podcasts, we've had magazine articles, and there's far more to come, because we're just exploring this in the field. In that vein, I'm very excited today to have the Reverend Tay Moss on the program. He is, as his bio likes to say, an American Episcopalian priest who moved north to the greater confines of Canada. But in addition to being a priest, he is a media producer and ministry coach. He's led innovative church initiatives for years, including some wonderful work in art and spirituality, and he has done extensive work in online and e-learning. He works with ALLLM, the Association of Leaders in Lifelong Learning for Ministry, and he works with AI use in ministry. Wonderful thinker, wonderful thought partner. He's the creator of one of my new favorite things on the internet right now: a website called AskCathy.ai. That's C-A-T-H-Y, and it stands for Churchy Answers That Help You. I love that. It's a resource for the Anglican Church that we'll talk about, and I'll put a link on our podcast page, intrust.org/podcast. But first, Tay, welcome to the podcast. Thank you.

SPEAKER_01:

Good to join you.

SPEAKER_00:

You know, we've had a few conversations, and I want to start with the big, oh, I don't know that it's the elephant in the room, but it's certainly one of the talking points: when people talk about AI, particularly in the church and theological education, there's an immediate fear or reluctance toward the unknown. And I think that's a framing issue, in part because of what's not known. Now, you've done innovative ministry work for years; you're very fluent in the world of AI and computers and tech. Tell me, how do you think people ought to approach this? Give me some framing and understanding. If I'm a president or a board chair of a theological school and I say, hey, I hear all this stuff, what should I be thinking?

SPEAKER_01:

I think the best attitude to take is one of curiosity, and playfulness as well. These are tools. They're like other tools: good for some things and not great for others. But as tools, they're a little unusual in that they do a lot of things that computers used to not be able to do very well. And so they've created this new field of possibility, which is very scary. One of the other things that's quite unusual about this technology, and Ethan Mollick talks about this, he's a professor at the Wharton School who's written extensively, academically, about pedagogy and AI, and he's written some books as well. Anyway, Ethan Mollick said that the frontier of what the AI can or can't do is a very jagged edge. So you're always approaching it wondering, can it do this, or can it do that? And the only way to discover is to actually try it. Sometimes you'll be disappointed and sometimes you'll be delighted, right? But if you go through it holding your expectations fairly lightly like that, I wonder if it'll do this, I wonder if it'll do that, you're much more likely to have a good outcome. It's also notable, I spoke to a university professor who is in charge of the AI deployment at a California university. I think he'd rather I didn't use his name, because he was very frank with me. But he told me that his experience working with professors in their system was that it really just came down to exposure: if he got them to actually try the tools and experiment with them a little bit, they both had lower anxiety and also started to get excited about what this could do for them pedagogically and professionally.

SPEAKER_00:

So I think where a lot of the AI discussion started was with plagiarism.

SPEAKER_01:

Yes, right.

SPEAKER_00:

And I think some folks had policies and just shut it down completely because of fear. Now, as we've talked about, I'd love to hear your thoughts on that, because there's so much more beyond plagiarism here.

SPEAKER_01:

Yes, plagiarism is definitely an issue, and it's gonna get worse, frankly, because the systems are gonna get much better. What'll happen is students will be able to submit a corpus of their work and have the AI use their voice and even their ideas, right? As a shortcut. So that's something that definitely has to be dealt with, but it raises some interesting questions as we deal with it, such as the prominence of the essay as both a pedagogical instructional tool and as a tool of assessment. And the problem is that there are certain kinds of assessment that are very difficult to do otherwise. If you're trying to test people on their ability to write academic papers, then writing academic papers would seem to be the correct learning activity. Right. Right. But maybe that's not actually the outcome we're trying to achieve in a lot of education. People who are English majors might not actually have any ambition to become English professors. Maybe they're going to be writing copy for advertising. Maybe they're going to end up working in some other field that may be humanities-related, where their English training is useful to them, but it's not academic writing, right? So I think we have to approach this whole question of, well, what is the role of the essay in instruction, and figure out whether there are alternatives. Sure. Yeah. So right away I would say plagiarism is an issue, but it calls into question some larger pedagogical questions. Another interesting factor: when you look at essay writing, for example, you could still do things like blue books, right? But with blue books, I remember when I was a TA for a course one time, and we had a woman from China for whom English was not her first language.
And so she would only be able to complete about a third of the written blue book assignment. But she was extremely diligent and smart, a great student, and knew the material inside and out. So as the TA, I said to the professor, we are gonna give her an A, right? Even though she only completed about a third of it? And he's like, oh, her? Oh yeah, yeah, don't worry. And I'm not sure that was the best way to handle that, but I understood his point: clearly she knew the material. She showed up for every discussion section, for everything. She was wonderful. It's just that she didn't have the ability to write English very quickly at all. So with the blue book thing, we now have this question of, well, does that kind of handicap certain students, versus a take-home essay where they can take as much time as they want? So for every technology there is a response; it changes the equilibrium of how things are done. I mean, before ChatGPT, it was Wikipedia. Right. It was essay-writing services. I think the problem with ChatGPT, in a sense, is that it's made it easier than hiring an essay-writing service. For sure. But the other reality we have to deal with as educators, I think, is that these tools are going to be used in the workplace, in the real world, quote unquote. Right. So I'm regularly having people in my workplace use ChatGPT to draft first drafts of letters, agreements for hiring people, memorandums of agreement for partnerships with organizations, all kinds of different tasks. When I write sermons now, I'll use ChatGPT to help, right? Sometimes.
And so one thing I might ask ChatGPT is, list every example of Jesus healing Gentiles, right? And it'll just spit that answer out in a fraction of a second. Now, is that the kind of thing I should know off the top of my head? Maybe. I mean, certainly I can think of some examples, like the Roman centurion and the Syrophoenician woman, right? But are those all the examples of Jesus doing that? I'd sort of puzzle over it; I'd be like, oh, I need to look that up in a concordance or something, right?

SPEAKER_00:

Right, right. Well, it certainly crunches, I mean, compresses the amount of time you might spend on sermon prep. In some ways, I mean, I've played with it: sermon outline on this passage, and in 15 seconds or less, usually much less, there's a great outline. Yeah, that's right.

SPEAKER_01:

And for most purposes, what I tell people is you've got to think of this as not being equal to you at this stage. You have to think of it as something more like an intern, right? Or like a high schooler you've hired to help you with sermon prep or something. They're not gonna know everything, they're gonna make mistakes, but they may be good for starting you on a draft. You can ask a broad question and get yourself started. That can be very useful.

SPEAKER_00:

Well, there's any number of things, but I think one of the things in education we have to realize, and in the church: your congregation doesn't need to go to Father Tay, they can go to ChatGPT and say, did Jesus ever heal Gentiles? The question may become one of meaning. They come to you for, well, what does this mean? Although ChatGPT can give you some of that. There's a difference, I mean, when everybody in the pew has it and they can pull up their own sermon notes, if they're that familiar, it changes the way I should be thinking about how I educate people. I've got to educate them for that reality.

SPEAKER_01:

Right. I mean, there's some precedent for this. For example, when doctors suddenly had people coming in self-diagnosed because they looked up their symptoms on WebMD, right? So now everyone's a doctor, apparently. It's sort of similar; it's been a problem for a while now with the democratization of access to information, that people can look stuff up. But presumably there's more happening in a pastoral relationship than simply giving people information. And I know you want to talk about the Cathy project, and this will maybe be a good transition into it. One of the things with the Cathy project, when I look over the chat logs that people are having with the bot, one of the things I notice is that there's a pivotal point where you can see things shifting from it being basically a retrieval problem to being a conversation. And that happens when somebody asks something like, what is the Episcopal Church's teaching about abortion? Right. And Cathy answers very accurately. And then they say, well, how is that consistent with whatever? Right. And then Cathy responds with whatever backs up the argument of the Episcopal Church. And then the person comes at it from a different angle, right? And right in there, I would say we're seeing the beginnings of some kind of a learning moment happening. My pedagogical theory is that I basically follow social constructivism. So I think there is this kind of space of encounter that happens, and people can have that kind of encounter with something like this. So I look at these bots as not necessarily just solving a retrieval problem of giving people the right facts, but actually engaging them in back-and-forth conversations that can create learning moments.

SPEAKER_00:

Well, let's talk about AskCathy for a second, because I find it fascinating. If I want to know the names of priestly vestments, I can go in and ask, and Cathy will give me that. Um, that's an easy one, yeah.

SPEAKER_01:

Right?

SPEAKER_00:

I mean, that's the easy one. And you know, I grew up in a Catholic church, and now I go to a very low church, so I'm always intrigued. So that's where I start. But then I want to know about the sacrament. What's the view of the sacrament? Who can deliver it? How does that work? So first I want to start here, because one of the things Cathy does is provide all this information. If I am new to the Anglican movement or the faith, I can go in and ask any question. I don't have to call anybody, I don't have to look up another website, I don't have to search through PDFs or anything. It's all there. So you have cataloged or found, however you did that, whatever the magic behind the scenes is, right? You've done that. So let's start there. I mean, you put all this information somewhere? You're retrieving that.

SPEAKER_01:

Yes. So the traditional problem with large language models, such as ChatGPT and others, Claude, Gemini, et cetera, is that the information they provide is not necessarily accurate, domain-specific, or verifiable. In fact, the verifiability problem is often one of the things that seems counterintuitive to people. But if you ask, how long should I boil an egg, and ChatGPT gives you an answer, and then you ask, how do you know that? ChatGPT doesn't necessarily actually know how it knows that. It'll make something up about how it thinks it knows that, but it doesn't actually even necessarily know its own mind.

SPEAKER_00:

I like the idea that AI is kind of winging it. I appreciate that.

SPEAKER_01:

Yes, and there have been some really fascinating examples of what they call emergent properties, where when you train these AIs, they suddenly have some new ability, and no one, including the AI, understands how they came to have that ability. Right. There's an interesting, controversial, and maybe apocryphal example: a leaked memo, supposedly between OpenAI and the NSA, in which some sort of large language model project had been tested by the NSA. They were doing some kind of joint experiment, and it was able to crack an encryption algorithm that is extremely difficult to crack, and they don't understand how it did it. They can't understand the math, but they know it can do it. You can give it the ciphertext, the encrypted text, and it can decrypt it. And they're like, how are you doing this? It doesn't know, right? They can look at the training data and say, well, we gave it all these mathematical papers to study, we gave it this information. So somehow it was able to extrapolate from all this mathematical material an answer to this encryption problem. Oh my god. That's an example of an emergent property. So there's a lot of that. So, anyway, in general, these large language models don't understand what they know or why they know it. They don't understand where their information comes from, they are trained on general data and don't necessarily have domain-specific context, and they're not necessarily accurate. They've gotten much more accurate over time, but you're always a little suspicious of the answer, and it's always good to fact-check. So to solve that problem, there is a related strategy called retrieval-augmented generation, or RAG for short.
And the basic technique there is you take a user's question and you first run a search process with that question over some corpus of information, some big collection of documents or databases or other information. So you retrieve accurate information that is domain-specific and whose source you know, and then you package that with the original question and pass it on to a large language model such as ChatGPT or Claude or one of the others, and you say, given this background context, answer this user's question, and provide your source. And then it'll generate an answer that is now domain-specific, accurate, and even verifiable. In many cases it'll have the quote; it'll tell you that's from page 356 of the Book of Common Prayer or whatever. To build a RAG system, there are a couple of different steps. But in essence, what you're doing is selectively creating a kind of canon of information that will be considered authoritative for the purposes of the RAG. And one of the interesting problems, by the way, just a little footnote: let's say you're doing lecture transcripts. If you have in that lecture a quote from a movie, and you're using it as, say, a counterexample, the RAG system doesn't necessarily know that's a counterexample. It may think that's actually an example of ethical behavior when it's really a counterexample. So that's why you have to be very careful with the information that you put into this corpus. But when you do this, it turns out that you can get very, very effective bots, such as Cathy.
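As a rough illustration of the RAG pipeline Tay describes (retrieve domain-specific passages, then package them with the question and a citation instruction), here is a minimal sketch. The tiny corpus, the word-count scoring, and the prompt wording are all illustrative assumptions; a real system would use vector embeddings and an actual LLM API call.

```python
# Rough sketch of retrieval-augmented generation (RAG), stdlib only.
# CORPUS, the scoring function, and the prompt template are stand-ins.
import math
import re
from collections import Counter

CORPUS = [
    ("Book of Common Prayer",
     "The Holy Eucharist Rite Two begins with the opening acclamation."),
    ("Catechism",
     "The sacraments are outward and visible signs of inward and spiritual grace."),
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def score(query, text):
    """Cosine similarity over raw word counts (a stand-in for embeddings)."""
    q, t = Counter(tokenize(query)), Counter(tokenize(text))
    overlap = sum(q[w] * t[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in t.values())))
    return overlap / norm if norm else 0.0

def retrieve(query, k=1):
    """Step 1: search the authoritative corpus for the most relevant passages."""
    return sorted(CORPUS, key=lambda doc: score(query, doc[1]), reverse=True)[:k]

def build_prompt(query):
    """Step 2: package retrieved context with the question; ask for a citation."""
    context = "\n".join(f"[{src}] {text}" for src, text in retrieve(query))
    return (f"Given this context:\n{context}\n\n"
            "Answer the user's question and cite your source.\n"
            f"Question: {query}")

# The assembled prompt would then be sent to an LLM such as ChatGPT or Claude.
print(build_prompt("What are the sacraments?"))
```

Because the model only sees the retrieved passages, its answer can name the source they came from, which is the verifiability Tay points to.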

SPEAKER_00:

As you're talking, I can hear this in my head. I remember when I first went to college and there was this massive catalog, right? And then as time went on, the catalog moved online, and syllabi moved online, lecture notes moved online, all of this. There's a huge body of material a theological school, or any institution of higher learning, has: student services, registration information, calendars, dates. I would think I could bring all that together and put it in one spot. So a new student says, well, when do I have to register? Or, what do you all believe? If I'm looking at a seminary, do you match my beliefs? All of that is stuff I could control and put up there with some sort of chatbot.

SPEAKER_01:

Yes. One of the leaders in this right now is York University here in Toronto. I've had a number of conversations with them; I'm very interested in what they're doing. They had a bot built on the IBM Watson system, a machine learning technology, and they were spending a lot of money on that system, and they've replaced it completely with ChatGPT, essentially, with AI, because they found it was far cheaper and more powerful and effective in all these different ways. I saw a demonstration of what they're working on. And one of the things they're doing is going beyond simply the kind of thing I talked about before, where you're retrieving information from some corpus of data, to having what you would call agentic capabilities, agent capabilities. That means adding tools to the language model: giving it the ability to do things like, for example, contact the registrar on your behalf, or send you a text reminder of when your next class is and its location. These kinds of capabilities are the next frontier of the development of a lot of these models. In their case, they're quite sophisticated, because they're also talking about developing, and I've seen them prototype, not just a singular bot, but sub-bots of various types. So there could be a bot that is an expert on your class schedule as a student. If you ask any question related to your class schedule, the main bot will parse that question, realize what it's about, and send it to that sub-bot, which has access to your registration information, knows who you are, et cetera. That solves a lot of problems around privacy and control of information, but it also means that you can specialize these sub-bots to know a lot about that particular thing.
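The main-bot-plus-sub-bots pattern described here can be sketched as a simple dispatcher, where the main bot classifies the question and hands it to a specialist that alone holds the sensitive data. The bot names, the keyword routing, and the canned replies below are hypothetical; a production router would classify with an LLM call and wire in real tools.

```python
# Hypothetical sketch of a "main bot + specialist sub-bots" router.

def schedule_bot(question, student_id):
    # Only this sub-bot touches registration data (the privacy boundary).
    return f"[schedule-bot] Checking the timetable for student {student_id}."

def tutor_bot(question):
    # This sub-bot is specialized on one course's lecture transcripts.
    return "[tutor-bot] Let's review that topic from the lectures together."

def main_bot(question, student_id):
    """Parse what the question is about and dispatch to the right sub-bot."""
    q = question.lower()
    if any(word in q for word in ("schedule", "class", "register", "registration")):
        return schedule_bot(question, student_id)
    return tutor_bot(question)

print(main_bot("When is my next class?", "s12345"))
print(main_bot("Can you explain social constructivism?", "s12345"))
```

The design point is that specialization and access control live in the sub-bots, so the main bot never needs broad access to everything.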
So one of the things you can do is create a tutor bot that is an expert on the course material within a specific course. It can be trained on things like the lecture transcripts. For example, Randall Reed at Appalachian State University teaches religion, and he's done this with his courses: he's had his students, as a course requirement, engage a bot that has been trained on his lecture material. And then he actually reviews the chat logs as well. He tells the students, I'm going to look at how you talk to the bot. He's also interviewed people and run surveys about what they think about this. And the feedback from the students has been very positive. They really enjoy this quite a bit. One of the things that's unique about a bot experience is that it's a much lower-stakes kind of conversation. Yeah. You don't have to worry about feeling embarrassed because you're asking a really basic question that's been covered multiple times. You can ask questions that feel a bit risky to bring up. You can experiment with ideas without feeling like you're going down some wasteful tangent that is burning the office time of your professor, right? Right. So they found that that lower-risk kind of engagement was very helpful. Also, the bot was prompted to review the lecture material with the students. So it walked them through something like a Socratic-method dialogue, back and forth, and got the students to rehearse, essentially, what was already covered in the lecture, so they get higher retention of the information as a result. So that's a real-world example of this being done. And by the way, I believe Randall Reed is working on a book, or at least a chapter of a book, but it hasn't come out yet. So if you're interested in following his work, you can look for that.
Ethan Mollick, in one of his most recent papers, I think it was in April or May of this year, 2024, wrote a paper called Instructors as Innovators. And it had at least seven real-world examples of exercises, learning activities that you can do with just off-the-shelf AI products. For example, creating simulations: you would prompt a bot on strategies for negotiation that have been taught in a course, and then create a simulation exercise for the students based on that, where they have to demonstrate that they understand these principles and can apply them.

SPEAKER_00:

One of the things I feel, I mean, I can even feel this as we're recording, is a bit of anxiety in some of our audience, because they say, well, it sounds like money and work to try to create this. Talk to me a little bit about that, because I know you come from the church world, where money gets stretched. If I am a president of a theological school, a dean, a board member, I hear this and go, gosh, I've got all these other things I've got to do, and now I've got to think about this. What's the reality of jumping into this?

SPEAKER_01:

The barriers to entry into this are far lower than people expect, especially since the price of these things keeps dropping. There are basically three costs related to implementing AI. There are information storage costs: having information in a server cluster where it can be accessed. There are what they call compute costs, also called inference costs, where you're paying for the conversation. This is done on a per-token basis, so in most systems we would say this is metered, almost like a utility: they're measuring how many pieces of information go back and forth. And the third cost would be development cost: basically hiring people like me to create these systems and integrate them into the existing information systems of the institution. All three of those costs are actually a lot lower than most people would expect, to be honest. For example, for Cathy, the stats right now are pretty impressive. Since we launched in May, we've had about 2,735 conversations, or 26,254 messages, an average of about 10 messages per conversation, back and forth. And our cost to run that right now is about $25 a month. Wow. Okay. That's what I mean by the cost being a lot lower than people expect. The compute costs are not extraordinary, the storage costs are not extraordinary. They're certainly less than, I mean, they're a fraction of what institutions are spending on their learning management systems or their Zoom accounts or their Teams accounts or those kinds of other technologies. And the reason it's so low has to do with scale: the scales we're talking about allow for a lot of efficiencies in how things work. In terms of development cost, that really depends. It's the same kind of problem as, you know, how much does it cost to build a classroom, right?
Well, I don't know, how many students do you want to host? How good do you want the HVAC to be? How nice do you want the art on the walls to be? How good an A/V system do you want? You can always spend more, right? But you can also get away with very little. You could just take the students outside and gather them in a circle on the grass, right? It's a similar thing with these AI tools. There are a lot of free tools available that you can use. And Ethan Mollick, you know, a lot of his examples are based on stuff that's available for free, because he knows that a lot of institutions aren't ready to pay for their own systems yet. One of the advantages that does come from paying more, though, is that institutions want to have control over things like the flow of data, privacy issues, things like that. Once you start getting into paid solutions, yes, you can solve all those problems. You can host your own, I mean, your own large language model. Instead of using ChatGPT, you can use an open-source model; there's one called Llama, for example. And if you run Llama locally, on your own servers, your data isn't going outside of your university system at all. That kind of solution is very attractive in healthcare settings and other businesses where there's a lot of concern about security and maintaining privacy. So if you want to spend money to get that kind of security and privacy, you certainly can.
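The per-token, metered-like-a-utility compute cost Tay describes reduces to simple arithmetic. The token counts and the price per million tokens below are round numbers assumed for illustration, not actual vendor rates.

```python
# Back-of-envelope estimate of metered inference ("compute") cost.
# Token counts and the per-million-token price are illustrative
# assumptions, not real vendor pricing.

def monthly_compute_cost(conversations, messages_per_conversation,
                         tokens_per_message, usd_per_million_tokens):
    """Usage is billed per token, like reading a utility meter."""
    total_tokens = conversations * messages_per_conversation * tokens_per_message
    return total_tokens / 1_000_000 * usd_per_million_tokens

# e.g. ~2,700 conversations of ~10 messages, ~500 tokens per message,
# at an assumed $2 per million tokens:
print(f"${monthly_compute_cost(2700, 10, 500, 2.0):.2f}")  # → $27.00
```

Under these assumed numbers the estimate lands in the same ballpark as the roughly $25 a month Tay reports for Cathy, which is the point: usage at this scale is cheap.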

unknown:

Yeah.

SPEAKER_00:

But I think about first steps for a school, where they might say, you know what, let's put our catalog into a language model, let's put in our requirements and calendar, the things where a student might ask, particularly students who don't like to pick up the phone, right? There's a generational thing; some folks don't like to call. They want to get on a website and say, hey, when is registration? What's my tuition cost going to look like? Some basic things you get to before you have a discussion with somebody. The entry for that, I think, is where a lot of schools might want to start. And what I hear you saying is we're not talking about a lot of money, and we're not talking about a lot of time.

SPEAKER_01:

You are correct. There are off-the-shelf solutions that are designed to integrate with pre-existing websites. You basically build your bot in a software-as-a-service kind of setup. In other words, there's a company that provides an off-the-shelf solution. So you set up an account, you put a company credit card down, you start uploading your documents, and you start engineering your prompt, writing it and perfecting it. You have a test environment; you can have all your people and your staff try it. And based on the test environment, you can tweak the training. You can say, oh, if they ask this question, then reply this way, and so forth. And then once you're ready to deploy it, you have a small snippet of code that you can add to your pre-existing website, and boom, you've got your little chat thing. In fact, a lot of people will recognize these little chat bubbles that appear in the lower corner of websites. 99% of those are based on something like that: there's another service that's not hosted on that WordPress site at all, it's being provided by another provider, and it's just an extra little bit of code on the website that gives access to it.

unknown:

Yeah.

SPEAKER_00:

Yeah, it's amazing what can happen. Having overseen more than a couple of websites, I can tell you there's a lot of cool functionality that doesn't cost a whole lot of money, and takes 10 minutes to put up, in terms of some of that. We're not quite talking about that with AI; there's a little more. But we haven't talked about a lot of things, so there's another conversation or three I think you and I need to have, about things like pedagogy, things like policy. Yes. So as we start to wrap up here, let me ask you this. If you're talking to a board of directors or trustees of a school, or you're talking to senior leaders, and they're like, hey, this is all great, but we're a small school, we don't have the ability, we don't have the time, or, oh my gosh, there's all this fear, right? That's where we started, with the framing that people come in not having the knowledge that you do. What do you want to tell them? What do you want them to take away from the first conversation you have with them about AI?

SPEAKER_01:

I think I would say that this is not actually a technical problem; this is a culture problem. Okay. It comes down to the question of institutional culture and what kind of institution that particular school wants to be. And I don't mean, do they want to be a technology-first, crazy connected hub of innovation, necessarily, but do they want to be a place where students have exciting, engaged learning experiences that represent the best practices of pedagogy available with the current tools of our generation? It's those kinds of things. What are our goals here as an institution? Is it excellence of education? Is it student experience? Or even if it's an institution whose goal is creating original and important research, then we would be talking about the AI research tools that are available. Right. But in any case, where an institution can identify what its goals are, what its vision and values are, I'm pretty sure they're going to find that AI can help enable and enhance those.

SPEAKER_00:

Interesting. Interesting. I mean, I think about the history of Christianity, the desert fathers and mothers, people on hilltops who spent years, even generations, thinking and slowing down. And many of our schools are like that, right? They're a place where you go for a couple of years to think and process and learn. Now there's AI.

SPEAKER_01:

I think AI extends the spaces in which contemplation can be done. The nature of the interaction I have with ChatGPT when I'm brainstorming a sermon is different from the kind of interaction I have with my books, and different from the kind of interaction I have with my memory. This is a different mental space. And that's not something to be afraid of. It's just another field of mission, another field of possibility, another field of epistemology, right? It's just another way. One of the things that's interesting to me, looking over the conversation logs, is how people actually engage Cathy differently than they would talk to a human being. Sometimes that can be problematic, because they can be rude, or otherwise act in ways they would not act with a human person, inappropriate ways, right? Like arguing in a way they probably wouldn't argue with a human being. In other senses, however, they have expectations of Cathy that they would not have of a human being: knowing facts, for example, or being able to go back and forth about obscure heresies that I've never heard of, all kinds of other stuff. Or they'll ask questions they would definitely be embarrassed to ask a real pastor. So I think we can see this as just opening up another door into another room in the mansion of God. This is just another space. And if we put it within the context of God's creation, right? That God made this? Then there's got to be some good in it, I would think.

unknown:

Right. Okay.

SPEAKER_01:

I mean, maybe some people will argue with me about the theology of that, but I think we can see this as another space. And I think that removes the idea that this is somehow replacing the normal interactions between an instructor and a student, or that it's going to replace the role of a registrar or other people. In fact, I think it will extend those people's capabilities to reach more students, to have more effective learning engagements, and to have happier students who learn more.

SPEAKER_00:

That is a great place to end this conversation. We will have you back on the podcast to talk a little more about some of the how-tos and the pedagogy. There's so much more to talk about. You and I have had some wonderfully rich conversations off recording, and I want to get those on the recording. One thing I am going to do, though, is show some of the power of this. We'll have links on our website to askcathy.ai, along with some of the people you've referenced. I'm also going to use an AI service to take the transcription of this episode, and we'll post that, along with a couple of different AI-generated summaries. So for those folks listening who aren't sure how AI works, you'll see the power of it in the different ways it summarizes the conversation. It's really quite a tool. And Tay, I appreciate how you ended that, because the idea of extending what we have through this technology, and the power of that, is a great way to end. Tay, thanks so much for being here today.

SPEAKER_01:

Thank you very much.

SPEAKER_00:

Thank you for listening to the In Trust Center's Good Governance Podcast. For more information about this podcast, other episodes, and additional resources, visit intrust.org.