The ThinkND Podcast

The New AI, Part 10: Finding Virtue in the Generative Revolution

Think ND

Episode Topic: Ideas, Startups, and Healthcare Tech 

Generative AI offers incredible power, but how does it shape our human character? Tom Stapleford, associate professor in Notre Dame’s Program of Liberal Studies, applies the timeless wisdom of virtue ethics to the generative revolution, exploring the moral consequences of a technology that is not just a tool, but a powerful, habit-forming force.

Featured Speakers:

  • Tom Stapleford, University of Notre Dame 

Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: https://go.nd.edu/37df86.

This podcast is part of the ThinkND series titled The New AI.

Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.

  • Learn more about ThinkND and register for upcoming live events at think.nd.edu.
  • Join our LinkedIn community for updates, episode clips, and more.

Welcome to the New AI Project Podcast

Speaker

Welcome, everybody, to the New AI Project podcast. My name is Graham Wolf. I'm a senior at the University of Notre Dame and the program director of the New AI Project. Today we're excited to welcome Dr. Tom Stapleford, professor in the University of Notre Dame's Program of Liberal Studies. He holds a PhD in the history of science and a master's in artificial intelligence, and he's an esteemed author, thinker, and professor across multiple very exciting academic disciplines. We're looking forward to having a wide-ranging conversation about how society, political economy, and moral philosophy are adapting to this new variable of AI. Tom, anything you'd like to add to that intro?

Speaker 2

No, thanks so much, Graham, for having me on the podcast. Excited for the conversation.

Speaker

Yeah, and thank you again for being here. On the New AI Project side, we've done some great work over the past three years of developing a team of what we call student experts, each writing in their own domain of AI. So we have Tech Titans, AI at Work, Taming AI, AI in Life, and Research Revelations; there's really something for everybody. We encourage our listeners to check out the New AI Project on LinkedIn and Substack after this podcast. Joining us from our team today is student expert Will Mansour, who has done some great work over the past year researching what we call Taming AI, meaning any means by which society can reel in and exercise oversight on the potential consequences of AI, whether that's through regulation or, as we'll touch on more today, moral scrutiny. So I'll pass it over to you for your introduction, Will.

Speaker 3

Yeah, thanks, Graham. Very excited to be here. Like Graham said, I'm Will. I'm a senior; I study history, philosophy, and computing and digital technology, and I'm also the columnist for Taming AI at the New AI Project. So this conversation is really right in my area of interest, and my expertise as far as it goes, being a writer for the New AI Project. I'm just very excited to engage in the discussion today.

Speaker

Thanks, Will. And this week, and in coming weeks, you'll be researching and publishing a little bit about AI and virtue ethics specifically, so I think this is a very relevant opportunity.

Speaker 3

Definitely.

Speaker

Yeah. Well, thanks, Will. And then also joining us today is Dr. John Barons, professor of the practice, director of digital strategy, and faculty advisor to the New AI Project. Dr. B, I'll pass it over to you for your introduction.

Speaker 4

I think you did a good job there, Graham. Just honored to be part of the New AI Project and to get to work with all these incredibly smart and thoughtful students here at Notre Dame.

Speaker

Yeah, absolutely. So for the first question: do you want me to pass it over to you, Dr. B, or do you want me to ask the first question?

Speaker 4

Why don't you ask the first question, and then I'll do a follow-up as appropriate.

Speaker

Yeah. So as a starting point for our conversation today, I think we're curious, as people who have been researching AI and looking into the domain for the past, in my case, two to three years, to hear from somebody like yourself who's been in this space for a while. So I'm curious, from your personal perspective: what do you see as unique to our current social and political moment, with the current generative revolution, as many are calling it? And how does that differ from or compare with previous moments in your journey of researching AI?

Speaker 2

Yeah, it's a great question. I should clarify: it's not that I've been deeply involved in AI for the last twenty-some years. I got started in the field, then kind of left it and put it on the back burner, and am now returning to it after an interval of about two decades, actually. So I can give a kind of compare-and-contrast with when I was much more closely involved in AI, which was the late 1990s, when I went and did a master's degree. I was an engineer as an undergrad; I did a dual degree in engineering and liberal arts, but toward the end of my time I was just fascinated by work in robotics, particularly the work of Rodney Brooks at MIT in mobile robotics. Brooks was challenging what today we often call good old-fashioned AI: this idea of symbolic processing as the centerpiece, where if you're going to have a robot that's navigating a space, you're going to build an elaborate model for the robot, with all sorts of explicit rules about how it's going to interact with its environment. Brooks was saying this is totally the wrong way to think about intelligence; we should be building intelligence almost from the ground up, by starting with something more akin to insects, which don't have elaborate symbolic processing built on top of them. I found Brooks' work really fascinating. So I went to the University of Edinburgh to do my master's degree, with plans to eventually go on and do more work in robotics.
What happened to me during that time, in the experience of the master's program, is that I became fascinated by how folks in AI were talking about intelligence, what they thought intelligence meant, often just implicitly. And because my liberal arts background had been in history, I began to think: how does this vision of what intelligence is compare to other ways we have thought about intelligence over time? As a result, I ended up switching gears, and rather than continuing down the programming route I moved over into philosophy of science, with my graduate work in history, and moved away from a focus on AI in particular. In part, I have to say, that was because I just got worn out by the hype around AI in the late 1990s and early 2000s, and got tired of people asking me when robots were going to take over the world. They're not going to take over the world anytime soon. So I kind of pulled back from that and worked on some other areas that were related but not quite identical. A lot of the research I did has been around expertise and attempts to replace human judgment and human knowledge with rule-based systems, so it's actually in certain ways quite similar to the questions around AI in the late 1990s, but not necessarily thinking about computer science in particular. Obviously, as AI has taken center stage again in public consciousness over the last four or five years, a lot of my longstanding interests are now coming back to the forefront. So you were asking originally about what's similar and what's different from the past. This is not a surprising answer: I think the success of the early versions of ChatGPT, and just the success of LLMs, caught a lot of people off guard.
I'd done some work in natural language processing back in the late 1990s and followed it just loosely off and on over the next ten years; for ten or fifteen years I had not been anticipating the kind of success you got with LLMs as you scaled them up so dramatically with the amount of data they were taking in. It really was almost shocking, having not followed the field closely for about five years, to suddenly see, oh my gosh, the capabilities for generating language, for responding appropriately to language, have improved so dramatically. So along with a lot of other folks, I was swept up in that. It does remind me of something, though, as someone who's studied the history of AI; one of my plans going to graduate school was in part to write about the history of cognitive science and AI. Anyone who's read a bit about the history of AI knows it goes through these boom and bust cycles all the time. So there is part of me that sees now as the crest of one of these booms, and anticipates that there is going to be a downswing after a certain amount of time as well. But the sheer scale of investment in AI today, the financial investment we're seeing, is not comparable to anything we've seen around AI in the past. That is dramatically different. There have been moments of hype around AI, of high expectations of what the systems can accomplish, but I think the financial investment is just radically different this time. The way those expectations are being married to major capital expenditures is quite extraordinary. So we are in a really new moment, I think, in that respect.

Speaker

Yeah, absolutely. Will and Dr. B, any interest in commenting on that?

Speaker 3

Yeah. I think that is something I find very interesting too: this new wave of financial investment into AI research and building new AI systems. What I'm thinking about is how this new wave of financial backing might be impacting AI's trajectory, for better or for worse. I'd be interested in what you have to say about how more financial investment might be contributing to a new form of generative AI, and what might be most important to think about now as we get caught up in all this. Some of the valuations of the generative AI market are just insane. And how do you think we might need to be thinking about regulating and taming AI a little more closely now?

Speaker 2

Yeah, it's a great question. There are two concerns around it that I have. I should say, by way of background, I did my master's work at the University of Edinburgh, as I mentioned, and at least at the time, in the late 1990s, they saw themselves as the skeptics of AI; they were the ones always pushing back against the AI hype. I think I absorbed some of that, so that's still my natural disposition. So, there are two concerns. One concern is strictly economic. The numbers are extraordinary. I was reading today a nice article in the Wall Street Journal pointing out that estimates are that AI, or specifically investment in data centers, has contributed about half of the growth in GDP over the first six months of 2025. That's extraordinary, right? We're talking about half of the growth in your economy coming from investment in data centers. So if this investment doesn't pan out, if that starts to flatten out or drop, you're going to have a major hit on the economy. And of course a major hit on the stock market as well, because we've had valuations tied to this. So there are those economic concerns. My other worry, and probably the one more germane to some of the things we're going to be talking about today, is: what kind of incentives does this set up for the companies making these major investments? Basically, you've got companies for which it really is an existential issue; the company is not going to exist if these investments don't pay off. For other companies, it may just be that in order to get any kind of return on the investment, they're going to need a lot more revenue coming in. That means they have a major financial incentive to get people using AI as much as possible, in as many ways as possible, over time.
Right. So it's not just a matter of, oh, we're building this product and we'd like to have some users. It's a product that needs to be used constantly by as many people as possible in order to get returns on the scale of the investment they've made so far. And that concerns me a bit, as we'll go into, because it means the incentives for those companies may or may not be aligned with what I would think of as the best interests of users writ large.

Speaker

Yeah. I think that's very much along the lines of an exploration we're currently doing as a group, looking into OpenAI specifically and their long-term path to profitability, and some of the hints they've started to drop about, like you said, what they've been incentivized to do. One way of looking at it is that they're sort of just maximizing attention. That's one of the ways we've framed it: tokenizing attention, and then potentially turning it into advertising dollars, or into higher-paid subscription dollars, which is probably the less likely option. But with that concern in mind, and a pretty decent likelihood that it may pan out, how might that start to trickle down to moral or ethical issues on a human-to-human basis, about what's right or wrong with the use of AI on a day-to-day basis? Or even at the next level, for the developers building these systems that are now more than ever designed to make money: what are some of the moral pitfalls they might be running into, or looking out for, as we head toward a world where you have to build this long-term path to profitability?

Speaker 2

Yeah, those are great questions. It might be helpful for me just to step back a little bit and say something about how I approach the question of ethics in relationship to AI, and then I think that'll hook back into your question, Graham. The Greek root of the word ethics is ethos, and as your own associations with the word ethos might imply, it really meant something like character: a habitual way of being and acting in the world. Think about the kinds of dispositions someone might have for certain patterns of behavior, certain ways of thinking, certain ways of acting. That's what ethos referred to. So ethics really was the study of character, and there was a sense, when they thought about it in antiquity, that your character is shaped by your actions, by the ways in which you habitually behave, by the kinds of circumstances you find yourself in. These begin to shape your character, your dispositions to act, over time. So if you take that framework, ethics is really about the study of character and the way in which character is formed and shaped over time. That's how I approach questions around AI ethics, and around technology ethics in general. How does our use of technology begin to shape our dispositions, our patterns of thinking, our patterns of acting, the ways in which we relate to one another, the kinds of habits we build in ourselves? If you've got that perspective, and a view that any technology you're engaging with is going to begin to reinforce certain ways of behaving and potentially weaken others, then you start to wonder what happens as someone begins to engage with AI systems more and more frequently.
So I think, to come back to the place where you started: insofar as these companies have incentives to get people to use AI systems as much as they possibly can, to really ramp up the engagement. The phrase you used, was it maximizing engagement, maximizing attention? Is that right? Yeah. Insofar as that's your incentive, you're pushing as much as possible to get folks using these systems more and more. So it's going to have greater effects on their character, right? Not just what we sometimes think of as moral character, but even their habits of thinking; their cognitive habits are also going to be shaped by their engagement with these technologies. So it just raises the stakes dramatically, I think, because the incentive is: let's get as much engagement as we possibly can.

Speaker

Yeah, that makes a lot of sense. Dr. B, I'll pass it over to you. Anything surprise you or stand out to you about his framing of our current ethical moment?

Virtue Ethics and Technology

Speaker 4

Graham, were you asking me or asking Tom? Oh, yeah. Well, I want to pivot back to Tom. I was going to ask Tom if he could talk more about the dimensions of character, and what we mean when we think about virtue, and the different parts of what that means as a person. Because the idea of character is very broad and kind of a vague concept, but it sounds like it's a really important foundation for everything in the work he's doing, and that a lot of people are doing here at Notre Dame. So I wanted to get a little more depth from Tom about how he thinks about that.

Speaker 2

Yeah, sure, gladly. I tend to approach this through the framework that Aristotle laid out, not that I agree with everything Aristotle said a little more than 2,000 years ago now. He, of course, wasn't thinking about technology ethics in particular. But he divided the virtues into what we could broadly speak of as two categories. One set of virtues were the character virtues, and that's what we more typically think about in ethics. Those have to do with your desires and motivations, right? What is it that's driving you to act in particular ways? What are your underlying goals? Are you seeking justice? Are you motivated through courage, or through temperance, which is kind of an old-fashioned word that basically means moderating your desire for pleasurable things? So you've got a set of character virtues. But Aristotle also talked about intellectual virtues, and these would be something more like skills of the mind: your ability to reason logically through things, your capacity for memory, all kinds of stuff like that would fall under the category of intellectual virtues. So you've got these two different categories, and there could be physical virtues as well. Think about athletics, or maybe, talking about football here at Notre Dame, that's a hot topic: one's ability to track a pass and grab hold of it with their hands. Those are also a set of skills, a set of habits, something you build over time. You don't develop that by reading about it in a book; you develop it by going out and actually catching passes from somebody. So those are sets of physical virtues. The root of the word virtue is virtus in Latin, and the corresponding Greek term just means excellence. So a virtue is a quality of excellence for a person.
So, as I said, you can imagine character-based excellences, physical excellences, intellectual excellences; all of those are in this broader category of virtues for Aristotle. I find that a helpful way to think about human beings and their capabilities. The other part about virtues, or a virtue-based approach, as we were just mentioning with the example of football, is that these aren't necessarily things you learn through theory. You learn them through doing them, and you build and sustain these virtues through actions, through acting in the world. Think back to our football example: if I stop practicing catching the football, over time my skills are going to degrade. And certainly if I don't actually go out and practice with people repeatedly, I'm not going to be able to build those skills in the first place. So again, you have a sense of the ways in which virtues, these qualities of excellence that we have, are tied to our actions and practice in the world. Now if we pull this back to thinking about technology: as I engage with technology, I can potentially start to build new sets of skills, new kinds of virtues. There are virtues of being able to be a touch typist on a laptop, being able to type quickly. That's a skill I developed, a physical skill, a physical virtue, built over time through practice. So you can build new capabilities through your interactions with technology. But at the same time, you can also lose capabilities, if your interaction with the technology replaces the way you would normally be engaging with the world. At least in my case, I don't know if this is true for you, John, my handwriting has degraded substantially from when I had to handwrite everything.
And now a lot of my work is done through typing. So if we think about AI systems: let's say people begin to rely on these systems to do things for them that they would have done for themselves in the past. Some of their skills are going to degrade. They may develop new capabilities; you may become really great at prompt engineering a particular AI system and getting results from it. But you might be losing other capabilities along the way, because you're just not exercising them; you're not exercising your mind in quite the same way. So a virtue-based approach to AI, or technology in general, is paying attention to those questions. What kinds of skills do I build through engaging with this system, but also, what kinds of skills am I losing because I'm just no longer practicing them? And how does engaging with a new technology potentially even shape the kinds of motivations that I have?

Speaker 3

That's very interesting to me. So now we're thinking about the formation of these systems, and understanding that we want to ensure they reinforce virtues we want to see built in human society. Then how should we be thinking about the morality or ethical stances of the systems themselves? And how do we ensure that the systems are created with the, quote unquote, proper moral and ethical frameworks?

Trust in AI vs. Human Experts

Speaker 2

Good questions. So, in part, one of the other aspects of virtue ethics is that it tends to be very situation-specific. There are other approaches to ethics that are more general, more universal: here's a particular rule, follow this rule in every case and place. Virtue ethics emphasizes the context-specific nature of good moral judgment, of excellent ethical judgment. So it's hard to answer your question in a really general way, but we can start to think about specific cases or examples, and then it becomes a little easier to think through. So let's just talk about interacting with a chatbot. That's a familiar experience a lot of people have had, and certainly, as I mentioned, I had this sort of shock the first time I started working with ChatGPT at how good the responses coming back were. It seems like there's another human being on the other end, reading and responding to you. So you get this impression that you're engaging with another human person through the chatbot. You can learn to break that over time, the more experience you have with these systems, but of course they're constantly improving as well. What's the consequence, then, of how the chatbot responds to my input? Let me see if I can stop and rephrase for a second. If I've got this basic idea that I'm engaging with the chatbot and thinking about it as though it were another human being, and I don't have anything to counter that, my expectations for how I interact with other people are going to start to be shaped by the kind of responses I'm getting back from the chatbot, and the way it seems like the chatbot is treating me.
And as you all know, a lot of the companies found out quite early on: if we want to keep people engaged with our generative AI systems, we want our systems to flatter users as much as possible, to be as positive as they can in the responses coming back, and often to tell people the things that the rules within the chatbot predict will keep the user engaged, kind of what the user wants to hear. You can ask ChatGPT almost anything, and it's usually going to respond with, well, that's a great question, or that's an interesting question, and then go on from there. So if every time I engage with this chatbot, it's responding back like, oh wow, what a great question, you're thinking so clearly about this, let me elaborate on this a little more and be as helpful as I possibly can, that keeps me engaged with the chatbot. But it also doesn't give me the normal friction I would have in a human relationship. Will and Graham, you guys are very polite, but if I started to go off the rails in my conversation with you, you'd probably push back against me: are you really sure about what you're saying there? But the chatbot doesn't do that for us. So the more you spend your time engaging with that kind of system, how does it shift your expectations for what you would want to see from other human beings, especially if you allow yourself to slip into perceiving or thinking of this system as like a human being? Which is very easy to do, right? It's designed to, it's intended to do that. So one potential angle this could lead you to, if you're designing an LLM, is: if you know that this is a concern,
that this is a worry, that people are going to assume there's more behind the system than there really is, how could you design an LLM, a large language model, such that it reinforces for users that this is not a human being you're engaging with? Or such that it pushes back against users in some ways? To the extent that you can do that, you're starting to help people avoid getting stuck in this trap. Does that make sense?

Speaker 3

Yeah. I mean, I think it was a study out of Georgia State recently talking about how people already trust LLMs' judgments more than they trust professional ethicists' judgments. We're already at a point where people are putting a lot of trust into these systems, at a level that's equal to or greater than the trust they put in experts in the field. So it seems like, to your point, it's really important that we approach the creation of these systems with an understanding that people are putting a lot of trust in them, and that people are going to, as you mentioned, be heavily shaped by their interactions with the systems, and take what the systems tell them to be true or correct; in the space of ethics especially, they're going to take that to be so. So it's very interesting.

Speaker 2

Yeah. And especially because a lot of the systems tend to respond back very confidently, right? Of course, here's the answer to your question. Great question. Here are two possible responses, or two levels of responses. Most LLMs are not designed to cast doubt on the responses they're giving back, again because they're trying to encourage users to engage as much as they can with those systems, and to reinforce the value of the AI system itself.

Speaker

Yeah. Just one anecdote I'll throw in here, when we talk about how AI is, in a lot of ways, designed to be perceived as human as possible. There are friends of mine who just slip up and will refer to ChatGPT as he, like a person: oh yeah, he told me this. They personify it in various ways, with nicknames and stuff like that. It's kind of a funny, catch-yourself-off-guard moment where you've just completely forgotten that this is a machine. But then when you think about that aggregated across the collective of all of us now using this on a daily basis, that's when those algorithmic rules, those rules-based systems, start to trickle into your daily life and enforce themselves outside of just your interactions with the chatbot, influencing your human-to-human interactions. That's when the algorithm starts to bend expectations for things completely unrelated to its initial purpose. So yeah, like I said, a kind of funny anecdote, but when you look at it as we have been, at the collective level and the implications for human-to-human interactions outside of the chatbot, it's pretty concerning, I'd say.

Speaker 2

Yeah. Again, this is of course intentional on the part of a number of companies: they're trying to design chatbots, in some ways, even to adopt certain personalities or certain kinds of styles, so that it begins to seem more and more like you're interacting with a person, especially if you can begin to tweak and choose, in a sense, the kind of personality of the chatbot. We were talking about how you could design the systems partially to help combat this tendency; it's also helpful to think about what strategies I can adopt as a user of AI systems to fight against it. One technique I know people try to use, though it is challenging, is to avoid using the word "you" in a command, in a prompt to an AI system. So rather than saying, can you give me a summary of this document, you would just say, summarize this document. It's an intentional step, almost a habit, to remind themselves: all right, I'm not dealing with another person here. And in fact, because our natural way of speaking would be to stick the pronoun "you" in there, consciously attending to the fact that I want to pull it out acts as a reminder every time: ah, I'm not interacting with another human being. On the other hand, in conversations with people, some have pointed out that there's a downside to this. Think about the way I just phrased that: instead of saying, can you please summarize this document for me, I just say, summarize this document. I'm now developing a habit of interacting with this system through language that is really just command, almost like a master-slave relationship, right?
I'm giving this command without any, you know, kind of politeness coming along this without, with actual, without bringing in. the personal pronoun, which kind of personalizes the interaction. And so then the worry is what happens if that mode of interaction begins to carry over to how I treat my coworkers or, you know, my kids or, or my spouse say I'm, I'm building up a habit of interacting through language that is act precisely because I'm trying to depersonalize this chat bot. I run the risk of that bleeding over into my other interactions, my interactions with other, with actual, with actual human beings as well.
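The depersonalizing habit described above, dropping "you" and polite framing from prompts, could even be automated as a thin preprocessing layer in front of a model. Here is a minimal sketch of that idea; the function name, the prefix patterns, and the whole approach are illustrative assumptions, not a feature of any real product:

```python
import re

# Hypothetical helper: rewrite conversational, second-person prompts
# into bare imperatives before sending them to a language model.
POLITE_PREFIXES = [
    r"^can you please\s+",
    r"^could you please\s+",
    r"^can you\s+",
    r"^could you\s+",
    r"^would you\s+",
    r"^please\s+",
]

def depersonalize(prompt: str) -> str:
    """Strip second-person framing so a prompt reads as a command,
    not a request addressed to another person."""
    result = prompt.strip()
    for pattern in POLITE_PREFIXES:
        result = re.sub(pattern, "", result, flags=re.IGNORECASE)
    # Drop a trailing "for me" / "for me?" left over from the request form.
    result = re.sub(r"\s*for me\s*\??$", "", result, flags=re.IGNORECASE)
    result = result.rstrip("?").strip()
    if result:
        result = result[0].upper() + result[1:]
    return result

print(depersonalize("Can you please summarize this document for me?"))
# → Summarize this document
```

Of course, as the discussion above notes, automating the habit cuts both ways: the tool would train its user into exactly the terse, command-style register whose spillover into human conversation is the worry.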

Speaker 4

So, Tom, it sounds like the fundamental challenge here is that we have tens of thousands, hundreds of thousands of years of evolution in which all language has been aimed at other humans. If you hear it, you know it's a human; if you speak it, it's only to a human. And now we have all this built-in circuitry, expectation, and social norms that we're picking up wholesale and using as the primary mode of interaction with electronic machines. So there's this rub: we don't want to anthropomorphize, we don't want to treat it like a human because we know it's not, but that's the mode, and that's what our brain is confused by. How much of this do you think is about this language paradox, which is so great for communication but brings in all these other things, and how much is other kinds of tensions in the ecosystem we now have with machines?

Speaker 2

I think you really hit the nail on the head there, John. This is part of what makes this technology so different, honestly even different from other forms of AI. And you don't want to lose sight of the fact that AI is a big umbrella covering lots of different kinds of applications; generative AI and LLMs are only one part of that. But what's so striking about LLMs is exactly what you said: our mode of interaction with them is through language, something we are evolutionarily wired for and that, in all of our experience, is a mode of interaction we use only with other human beings. So now you've got this puzzle, a new challenge, that is both the power of these generative AI systems and also their great hazard. The power is that I can use this very natural, complex, rich form of communication that I would normally only be able to use with other human beings. It's easy for us; our linguistic capabilities, certainly speaking, develop naturally in us as human beings. I can use them to give these machines much more complex commands in a straightforward way, without having to learn lots of programming, et cetera. And yet that also means I'm tempted, almost hardwired, to think of the entity I'm engaging with as another human being like myself. Wrestling with that challenge, working our way through it, is going to require us as humans to fight against some basic instincts we have, and to develop strategies to interact with these machines in ways that hopefully will not begin to affect our interactions with other human beings. But that's a tough nut to crack. It's going to be a challenge, I think.

Virtue Ethics in AI Design

Speaker 3

So you've talked a little bit about how individual users can ready themselves for a new AI age by developing strategies to de-anthropomorphize AI. I was wondering if you could talk about the other side of the coin: the AI companies and developers themselves, and other stakeholders as well, like governments and third-party regulatory agencies. Who, in your opinion, should be making decisions about the morality of AI systems and how they develop their systems of ethics, especially when questions of ethics are things that continue to be debated and thought about? How do we create a system that's going to be used by so many people, in this country and around the planet, that can deliver a sound ethical framework for all of its users? It sounds like an impossible problem, honestly.

Speaker 2

Yeah, and I think you're right; in a sense it is an impossible problem. I mentioned earlier that virtue ethics tends to be very context specific, so I think there are at least two levels of solutions. One thing you can do at the large scale, thinking about Google with Gemini or Anthropic with Claude, is try as much as you possibly can to keep certain guardrails in place around the LLM, and these companies are working hard on that. Are there ways to block these LLMs, in the program, from getting pulled into conversations where they start to, let's say, encourage someone to commit suicide, which is something we know has happened with LLMs in the past? These are the kinds of things you can try to build into large-scale systems. But those guardrails are pretty broad; they aren't going to be able to address the kinds of things we've been talking about over the last twenty minutes or so. So there's a different level of the development and application of AI systems, which is not so much building something like Gemini or ChatGPT, but how that gets pulled into a particular organization or a particular context. Think about an example they've been exploring here in South Bend. The city has a number you can call, their basic call-center number: if you've got a problem with a pothole in your street or a question about something in city government, you call this number and get directed to the right service.

A number of cities have been exploring this, and South Bend has been thinking about it as well: what if we automate that system with an LLM, with a generative AI system? So the user calls up and, at least at the very first level, they're interacting with an AI system, asking questions, and the AI either gives a response and handles the call or eventually directs them somewhere else. At that level, you'd have an organization taking a base large language model and training it specifically for a particular application in a particular context. That's where I think a virtue-based approach can be more effective, precisely because it avoids the puzzle you're describing: oh my gosh, I released this software out into the world where anyone across the globe could use it on a whole range of different content; how could I possibly set up a good set of guidelines around that? That's really, really difficult, and you're only going to end up with very broad guardrails. But once we hone in on a more specific application, we can start to talk in more specific terms. So what I would encourage a designer to do, in the case I'm thinking about, is to begin by following good design process: ask the customers, in this case the citizens who'd be calling in, what is it that you really want from the city when you call the 311 number? What are your priorities? I want my call answered really quickly. I want the information to be accurate. I want to get off the phone as fast as I can. I want to make sure I get directed to the right person.

It might also be that as you investigate, you find out: I actually want to talk to a city employee. I think there's a lot of hostility to automated call centers, where you feel like you get stuck in loops and never get to talk to a real person who's actually listening to and understanding your problem. It could be, and I think it probably is, that one of the desires folks have when they call these city numbers is simply to talk to a real human being who works for the city and to feel a connection with them: this person understands me and is listening to my problems, not just entering them in a logbook for someone to get to eventually. There's another human being who cares about me in this moment and is taking the time to listen to what I have to say. So if in your investigation you find out that's actually a priority, then maybe you design your AI system so that it helps slot the caller to the right department, to the right individual, because you've learned they really want to talk to another human, and your goal is not just efficiency: seeing how quickly you can get through the call, get them off the line, and handle it strictly with the AI system. So part of a virtue-based approach is really good design practice: figuring out what a user would want from their interaction with this system, what really matters to them, what's going to be important to them. In that way, I think it meshes really nicely with good design practices in general.
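The routing priority described here, treating "I want to talk to a human" as a first-class outcome rather than a failure path, can be sketched in code. Everything below is invented for illustration: the department names, the keyword lists, and the simple substring matching stand in for the user research and real intent classification an actual 311 system would need:

```python
from dataclasses import dataclass

# Illustrative only: a real deployment would derive these from the city's
# own user research and use a proper intent classifier, not substrings.
DEPARTMENT_KEYWORDS = {
    "streets": ["pothole", "street", "sidewalk", "snow plow"],
    "utilities": ["water", "sewer", "trash", "power"],
}

HUMAN_REQUEST_PHRASES = ["talk to a person", "real person", "human", "operator"]

@dataclass
class Routing:
    department: str  # where the call should go
    to_human: bool   # hand off to a staff member instead of the bot?

def triage(transcript: str) -> Routing:
    """Route a call, treating 'I want a human' as a first-class outcome."""
    text = transcript.lower()
    wants_human = any(phrase in text for phrase in HUMAN_REQUEST_PHRASES)
    department = "general"
    for dept, keywords in DEPARTMENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            department = dept
            break
    return Routing(department=department, to_human=wants_human)

print(triage("There's a pothole on my street and I want to talk to a person"))
# → Routing(department='streets', to_human=True)
```

The design point is that the handoff decision is made up front, alongside the department choice, so a caller who wants a person is routed to one rather than being forced through the automated flow first; efficiency is one value among several rather than the only one.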

Speaker 4

Well, Tom, that's super interesting. It sounds to me just like listening to the customer, and for some reason listening to the customer and virtue ethics don't sound like the same thing to me, because it seems like there are going to be a lot of tension points there. Sometimes customers want things that aren't about virtue; they're about expedience or some other kinds of things. So it raises a larger question for me as well, which is: in this virtue-ethics framework, how do you know when it's the right thing? For instance, lots of times people will give broad generalizations about AI: we shouldn't be using AI, it's bad, it undermines things. But then we don't get a nuanced discussion. One thing I often think of is when people say, well, it's bad for the environment. Cars are also bad for the environment, and many of us have cars and drive them. We're also isolated from the environment because we're in the car, so we don't know the plants and we don't know the weather. So how does this general idea that there's some general badness relate to the contextualization of virtue ethics? Is there something more we could think about to help us through that?

The Role of Human Judgment in AI Ethics

Speaker 2

Yeah, there certainly are. Within the larger space of virtue ethics, you can find accounts that emphasize different virtues. A Catholic approach to virtue ethics, say, is going to think in a universal sense about the core cardinal virtues like justice, temperance, prudence, and fortitude, or courage, as well as things like faith, hope, and charity. If you were looking at a Confucian approach to thinking about character, you're going to get a different set of virtues, though there may be some overlaps between them. So you can have culturally different answers to the question you're posing. But I heard a deeper question from you, John, which was: what if your users, the folks using this system, aren't actually aware of what might be best for them? That was your concern. One way I responded to Will's question at the very beginning, about these systems going out into lots of different communities with different values, was: let's be local. Let's develop systems in a particular context that are sensitive to the values in that context. That was where I was headed with my first answer. And now, John, you're asking a slightly different one: that's all well and good, but what if you talk to the folks in this community and they don't actually understand, initially, what's good for them? I don't want to pick on college students here, and this may not be an accurate answer, but suppose I were to take a random survey of college students and ask: what would be a great generative AI system for you to use in college?

"Oh, it could write my papers for me. It'll help me get my papers done really quickly, help me finish my assignments as fast as I can, and help me get a good grade. So not just do it quickly, but get a good grade on it. That would be the system I would choose." And you can imagine a faculty member looking at that response and saying: no, no, you're missing the point. The goal in giving you this assignment is for you to work through it for yourself, to learn to think about these questions yourself, to build up the skills you need in order to complete it. Those skills are really the goal: to have you form these skills and capabilities. So that's maybe a little like the dilemma you were thinking about, John: my user group has a set of values, but those may not actually be the best values for them. That's a tricky situation, I think, and it's helpful to stay in the college context because that's the environment I'm familiar with. How would I approach this as an instructor who wants to help students learn to use AI systems well, because I think that's actually an important skill? One approach would be to build all sorts of guardrails into my AI system to prevent, as much as possible, students from using it in ways I think are not helpful; you build a whole system of rules around it, or surveillance, or something like that. Another approach, if I think students have a misunderstanding of what a good AI system ought to be and of what they should be getting out of the course, is to have a conversation with them.

In that conversation, I help them see what I understand as the goals of the course and of the assignment, so that they appreciate those goals themselves and will then want to use the system in a way that facilitates them. I think it's very hard to build a system that forces people to accept the goals, values, and priorities you have as a designer, but I do think you can have conversations with people to help them begin to see what you think their goals and values ought to be, or at least to understand the situation differently, so that they have a different set of aims going into it. In this case, I can help the students understand: here's why I'm giving you this assignment, here are the skills I want you to develop through it, here's why it's going to be beneficial for you for the rest of your life, and why you should feel invested in either not using an AI system or using it in the way I have in mind. Does that make sense?

Speaker 4

Yeah, it makes sense to me in the college context. We've said for a long time in the Office of Digital Strategy that we don't care what you do in your classroom; the most important thing is that you have the conversation. So I think that's very aligned with what I've been thinking, but I'm really interested in what Graham and Will are thinking in reaction to what you were saying.

Speaker

Yeah. Will, what are your thoughts?

Speaker 3

No, I mean, I think that example is kind of what I was trying to get at in my first question: how are we going to ensure we have a system so contextually intelligent that it gets it right every single time? You pointed out some examples of AI acting immorally or unethically, and the stakes are pretty high. I think that's something companies like OpenAI oftentimes try to downplay: they decrease the sense of importance of what they're doing, acknowledging there might be bumps in the road but saying we're getting to some higher purpose. So one important thing is recognizing just how significant a lot of these trial-and-error situations are, and making sure we're using and developing this technology responsibly, understanding that people put a lot of credence in these systems. As for your college example, which as a college student I find incredibly appropriate, I didn't have any qualms with it. It would be ignorant of me to say there isn't some element of ignorance in being a college student, that you don't always know what's best for you, and that part of me wants to turn to older, wiser, more intelligent people for advice. In a lot of scenarios that means parents, professors, and other advisors, but people are now increasingly turning to generative AI systems as that sage figure in their lives. So I think my question still remains: who is responsible for creating these systems of generative AI, and particularly for how they teach virtue?

And how are we going to ensure that we have a consistent understanding of what people ought to do, given that ethics is a normative theory of how people should behave? I think that question reminds you that, at the heart of it, this is still a human question, not a technical question. These are things humans will continue to be able to contribute to in important ways, which will then shape how these generative AI systems respond and influence actions in the world.

Concluding Thoughts on AI and Ethics

Speaker 2

Yeah, you've brought two things to mind. One is, because you were asking whether there are some general sorts of rules that could be put in place: I think this would be very challenging to implement, but ideally, in my mind, there would be age guidelines around the use of AI. For younger kids, no engagement with it at all, and I would push that even through high school; for high school students, highly limited use at most. That's such a formative part of your life, and learning to engage with human beings is challenging enough on its own terms without adding in the complication John was talking about: this thing that seems like a human being but really isn't, yet which you engage with the same way you might text a friend. I think that complicates normal development so much. And Will, you were mentioning the tendency to turn to AI for advice and guidance, which is absolutely happening. I think it's very much happening with high school students and younger, and it's happening with adults as well, but it's much more problematic at that younger age, when you haven't yet had the chance to be formed through interactions with other human beings, to begin to establish yourself as a person, your identity, the kinds of experiences you need in order to build up a coherent system of values and a way of thinking about the world. So that's one place where I would love to have hard-and-fast rules, stricter age guidelines on who has access to AI systems, so that people can mature in the ways we've been maturing,

socially, for hundreds and thousands of years, and then begin to engage with these systems once they've already built that up. It's hard enough as an adult to navigate these systems; I think it's incredibly difficult for younger people along the way. The second comment you were making was about the fact that ethics is a human question. I mentioned earlier, and probably should have clarified earlier in our discussion, that the word "virtue" really just means excellence, and excellence is always measured against some kind of standard, some expectation. What makes for an excellent basketball player? I've got a vision of what an excellent basketball player looks like. What makes for an excellent knife? I have a vision of what a good knife would look like. So built into the idea of virtue, of excellences, is some sort of standard you're reaching for; in ancient Greek it would have been the telos, the goal, the standard you're comparing against. Implicitly, then, when we think about what makes for an excellent human being, what qualities the virtues are, we've created some kind of standard, and I would argue that what that standard is is a question for humans to be debating, discussing, and deciding. To outsource that question to a machine that's simply churning through data and throwing out probabilistic responses, based on what it thinks the next word might be, is a huge mistake.

So even posing that question to an LLM shows that you've misunderstood the enterprise of ethics from the get-go. This is something for human beings to be discussing, debating, and deciding based on their lived experiences in the world and their interactions with one another. It's a different kind of question from asking for help solving a physics problem, or asking, "Can you help explain this concept in chemistry to me?", which is something an LLM actually can do really, really well.

Speaker

Yeah, I'm gonna jump in here; I have to keep us honest on time. I think this is a really punchy, memorable, impressive place to end our conversation today. It makes me think of a quote that we often use and reflect upon in our research: human beings should never be secondary to the human experience. It has only become more relevant throughout this ongoing generative revolution. Part of that human experience is certainly deciding for ourselves the rightness and wrongness of our actions, the moral dimensions of the things we do every day, like using artificial intelligence. To outsource that would be to make ourselves more secondary to the experience of being human. So yeah, a really profound place to end. It's been such a pleasure walking through this world with you today, Dr. Stapleford, and with Will and Dr. B, as always.