The Sentience Institute Podcast
Kurt Gray on human-robot interaction and mind perception
October 30, 2022, Episode 19
Sentience Institute

And then you're like, actually, I can't know what it's like to be a bat—again, the problem of other minds, right? There's this fundamental divide between a human mind and a bat, but at least a bat's a mammal. What is it like to be an AI? I have no idea. So I think [mind perception] could make us less sympathetic to them in some sense because it's—I don't know, they're a circuit board, there are these algorithms, and so who knows? I can subjugate them now under the heel of human desire because they're not like me.

  • Kurt Gray

What is mind perception? What do we know about mind perception of AI/robots? Why do people like to use AI for some decisions but not moral decisions? Why would people rather give up hundreds of hospital beds than let AI make moral decisions?

Kurt Gray is a Professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides.

Topics discussed in the episode:

  • Introduction (0:00)
  • How did a geophysicist come to be doing social psychology? (0:51)
  • What do the Deepest Beliefs Lab and the Center for the Science of Moral Understanding do? (3:11)
  • What is mind perception? (4:45)
  • What is a mind? (7:45)
  • Agency vs experience, or thinking vs feeling (9:40)
  • Why do people see moral exemplars as being insensitive to pain? (10:45)
  • How will people perceive minds in robots/AI? (18:50)
  • Perspective taking as a tool to reduce substratism towards AI (29:30)
  • Why don’t people like using AI to make moral decisions? (32:25)
  • What would be the moral status of AI if they are not sentient? (38:00)
  • The presence of robots can make people seem more similar (44:10)
  • What can we expect about discrimination towards digital minds in the future? (48:30)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Speaker 1:

Welcome to the Sentience Institute podcast, and to our 19th episode. I'm Michael Dello-Iacovo, strategy lead and researcher at Sentience Institute. On the Sentience Institute podcast, we interview activists, entrepreneurs, and researchers about the most effective strategies to expand humanity's moral circle. Our guest for today is Kurt Gray. Kurt is a professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides. He also has a dark past as a geophysicist, just like myself.

Speaker 2:

Alright, so I'm joined now by Kurt Gray. Kurt, thanks so much for joining us on the Sentience Institute podcast.

Speaker 3:

Happy to be here.

Speaker 2:

Um, so one thing I noticed when I was, uh, doing some background, uh, for this interview is we both actually have a geophysics background. Um, so I'm not sure if you know, but I did my PhD in the use of seismic for space exploration, uh, and I'm doing something very different now. And same for yourself. I saw that, uh, you also worked in geophysics and are doing something very different. So what made you switch from geophysics to social psychology? That seems like two very different things.

Speaker 3:

Yeah. That's a, uh, <laugh>. I think you're the first person I've met in, uh, in social psychology land that cares about geophysics, or that cared, I guess. Um, I did resistivity imaging. That was primarily what I did, um, here on, on Earth, looking for natural gas. Um, I guess what made me switch is that I didn't really care about rocks and I didn't care about natural gas or oil. Um, not only because I didn't really want to, you know, dig up beautiful wilderness to find oil, but mostly I just cared more about people than rocks. And so I switched without ever, ever taking a class in psychology, but never, never looked back.

Speaker 2:

Yeah. Well, how did you find the switch given you didn't have a psychology background?

Speaker 3:

Well, I thought, I mean, I thought it was great. I, I think I always was interested in psychology, but I thought it was, uh, it was too soft, you know, I thought I needed to do like science, <laugh> not realizing that science takes all different kinds. Um, and yeah, my first class was in organizational behavior, and, uh, it was interesting, but it just seemed like it was psychology in organizations, which is, you know, kind of what it is, and that's okay. But I wanted to study more, more things, more weird things, and, uh, and so that's what I'm studying now. More weird things. Yeah.

Speaker 2:

Yeah. Cool. Excited to talk about the weird stuff. But, um, yeah, just so you know as well, I also worked in, um, oil and gas. I did, um, seismic exploration for oil and gas, and then went to do my PhD. Uh, and now, now I'm doing something very different.

Speaker 3:

<laugh>. Okay. Are you, are you from out west, Western Australia?

Speaker 2:

Uh, no, South Australia, um, originally. Okay. But, uh, I worked in Central Australia for a little bit. Oh, got it. Yeah. Okay. All right. So, um, it looks like you do quite a few different things. I know that you work with the Center for the Science of Moral Understanding and also the Deepest Beliefs Lab. Uh, so I thought we could just start by maybe you giving a quick pitch on what both those organizations do.

Speaker 3:

Right. So I run a lab called the Deepest Beliefs Lab, and it used to be called the Mind Perception and Morality Lab, um, because we studied how people make sense of the minds of others and how people make their moral judgments. And then kind of my, my early research and ongoing research too kind of argues that those things are kind of the same. So when we make our judgments about the minds of others, we also use those perceptions of the minds of others to make our moral judgments. Um, but then as I kind of did more work and, and got more connected with things like politics and religion and even how we understand the rise of AI, um, I realized that the, the lab was a little broader and so rebranded, uh, in a, in a corporate sense to be the Deepest Beliefs Lab so we could, uh, study whatever we felt was interesting, uh, as long as people cared a lot about it. And so that's what we study. And then the, the Center for the Science of Moral Understanding is really a kind of more applied endeavor, and that's to explore how best we can bridge moral divides in a kind of divided world, um, and especially in a, in a divided America.

Speaker 2:

Great, thanks for that. Uh, so I've, I've got lots of, uh, follow-up questions. So, uh, we'll start with mind perception, I guess. So, um, first, what is mind perception? How would you describe that?

Speaker 3:

Yeah, so I think the easiest way to get into what mind perception is, is to take a step back and think about this longstanding philosophical problem called the problem of other minds, right? And so the problem of other minds is great when we're doing remote conversations like we are now, right? Because I'm looking at you on a screen and, and you're talking to me, but all I see is this, this face on a screen, and like, I, I can't even see the back of your head, right? So I don't know if, if you are a human being like I am, or you're just like some robot, right? With some like fleshy face on top of some, you know, uh, mechanical skeleton that's just like parroting those words. And so what applies on, you know, Zoom calls also applies if, you know, a lover tells you that they love you, but how do you really know, right? How do you know if you see blue, it looks the same to me, or strawberries taste the same, right? These are questions that, you know, maybe when you're a teenager and getting stoned and talking with your friends, kind of like come up naturally. But, but there are kind of like deep philosophical issues in the, in the idea that other minds are ultimately inaccessible. And so because they're inaccessible, we're just left to kind of infer the presence of other minds. We're left to perceive other minds, right? And so you can, uh, see this maybe best when people think about dogs, you know? So, um, my parents have two dogs. They're purebred aquis. They're very pretty dogs. I don't think they're particularly smart dogs, right? But when I look at the dogs, I just see like two dogs, just plain dogs, you know, like they're looking at the wall, they're thinking about nothing as far as I can tell, right? I don't see a lot of mind. But my parents look at these dogs and think that they're, you know, nostalgic about the way that things were, or thinking about the state of the government, right? They infer huge amounts of mind in these animals. And so we perceive different amounts of mind, uh, in the same thing. And so that's really what mind perception is, the idea that two people can look at the same thing and perceive different amounts of the ability to think and to feel.

Speaker 2:

Is that what you'd say your parents are doing in that example, anthropomorphizing the dogs to an extent?

Speaker 3:

Yeah, exactly. So anthropomorphism is, is kind of inflating mind perception above and beyond what most people would, would guess. You know, people can actually, or things can actually think and feel. And then of course, the other way is, is typically called dehumanization, right? So if you take people that do have minds that can think and feel like other people, and you deny them their mind, right? So you treat them like they're animals or machines or something like that.

Speaker 2:

So, okay, what exactly is a mind then? How would you describe mind?

Speaker 3:

Yeah, I, that's a question I get all the time, and I, I never have a good answer. I mean, if you look at the dictionary definition, there's lots of definitions of what a mind is. And I wrote a book with the word mind in the title <laugh>. I don't even know what a mind is. I mean, I think you can think of a mind in, in the sense that it takes incoming information, it takes sensations, it takes perceptions, it takes things in the outside world, and it performs some set of kind of computations, cognitions. There's beliefs in there, right? There's these internal states, and then you, you transform the kind of input, using the kind of like internal states of the mind, to an output, right? Behaviors, actions, things like that. So I think, you know, I don't know if this is like too computer focused, but I think it's really kind of like, there's inputs, there's like computations, and there's outputs. But I think those computations are not just, you know, that sounds very behaviorist, right? Like there's some black box and it's like going beep boo boo, right? And then spitting out some behavior. But like, there's something that it's like to be a mind too, right? And those are, those are feelings and sensations and, you know, the, the feeling of love or the sensation of the color red, right? These like rich conscious experiences. And so I think that you need to emphasize, in that chain from input to output, that there's like that thing in the middle that's really this, like this person typically, right? These like feelings and, and, and sensations. So, um, I think that's what a mind is.

Speaker 2:

Yeah. And you just mentioned output and input, uh, and I think I've heard you say those can be thought of as agency, um, which is the doing and thinking, and experience, which is the feeling and sensing. Is that right?

Speaker 3:

Yeah, that's right. So experience is the capacity to feel and to sense, and agency is the capacity to do and to act. And then, you know, something like beliefs, you know, what is that? I'm not sure, uh, where that fits into that, but, but at least when people perceive the minds of others, they perceive them along these kind of like two broad dimensions, like thinking and feeling, and they can come apart, right? So usually you think of dogs as being more about feeling and less about thinking, and you think of, uh, corporations like Google as being more engaged in thinking than feeling. And then you or I, you know, everyday people, we can think and feel, and things like rocks, neither. Um, but of course, right, these things are a matter of perception.

Speaker 2:

Yeah. And that, that brings me to another question. So, uh, you've also said that, for an interesting example, um, moral exemplars are often thought of as, uh, less susceptible to pain. Uh, could you talk about this a little bit? And, um, would I be right in saying this is related to, uh, people seeing moral exemplars as being perhaps more agentic than experiential?

Speaker 3:

Yeah. So the phenomenon you're talking about is something that we call moral typecasting. And moral typecasting is the idea that when we perceive those in the moral world, we kind of bin them into one of two categories. And one of those categories is like those who do moral deeds, like good or evil people, right? They're moral agents. And the other category is those who receive moral deeds, and we call those moral patients, right? So a victim is a moral patient. And on the other side, like heroes and villains, they're moral agents. And so I guess if we take a step back, when we think about morality, you know, the average person, they think about like good and evil as like the big dimension of morality, right? Like on one pole, there's like Hitler and Satan, you know, the evil people. And then there's the good people like, you know, um, Martin Luther King Jr. and Mother Teresa and all these great people. But those people, like MLK Jr. and Mother Teresa, are still doers, right? They're still moral agents. They still, you know, impact others, whether through help or harm. And on the other side in the kind of, you know, moral world, we've got our agents and our patients, like victims, right? Like victims of crime, people who are struck by natural disasters and need help, right? They're like more about experience, they suffer, and agents are more about doing, right? That's agency. And so when we think of the moral world, we put people into either like an agent bin or a patient bin. And this means it's hard to see them the other way. So if you think of, you know, heroes, like you think of action films, right? Like there's Arnold Schwarzenegger, and like he's on a mission to like save someone. And you know, he gets like stabbed, he gets shot, he gets punched. You're not really worried. I mean, he's super ripped, obviously he's buff, but you're not really worried about Arnold Schwarzenegger because he's such an agent. Like he's a doer. And then you see like the victim of the movie, and, and you know, the villain just kind of like slaps the victim lightly and you think, oh my God, what a monster, right? So evil, right? Because you're worried about the victim's experience and suffering, and you only think about the kind of hero's agency, and the same with the villain. So that, you know, that's how we divide the moral world. And I think the most interesting thing there is when, you know, if people think of you as a hero, you know, you help out a lot around the house, or you do a lot at your workplace, then it turns out that they think you're generally insensitive to pain, and that means they ignore you when you're suffering, right? So we have this other paper showing that like leaders in workplaces where, you know, they're heroes, like firefighters, let's say, like, people don't think those leaders ever need help because you just think of them as heroes, right? But in fact, right, everyone needs help. We just need to kind of change our perceptions.

Speaker 2:

Yeah. Uh, it's, it's interesting. So you mentioned, um, movies, for example, heroes and villains. And first of all, I've heard you say this before, but I didn't really think about it in terms of, um, villains as well, like, also having that aspect where people feel like they don't, um, feel as much, uh, where we see them as more of agents, I guess. I, I'd thought about that in terms of just moral exemplars, uh, on the good side of things. Um, so it's interesting to hear that that seems to apply for villains as well. But, uh, for the movie examples, do you think any of this is driven by, uh, depiction in media, in fiction? Or is it more an innate thing that is kind of like expressed in, in media?

Speaker 3:

Yeah, that's a good question. I think the answer is almost always both with these things. I mean, there's a reason why we depict them in these ways, I think, but also these depictions kind of reinforce it a little bit. In our studies, so we have a paper with, you know, like seven studies, and in our paper we, we show this effect even without kind of muscle-bound pictures of Austrian bodybuilders, you know, <laugh>. So it's just like the, the very fact that someone is kind of like committed to good or committed to evil makes you think of them as being able to, let's say, hold their hand in a bucket of ice water longer, or feel less pain if they step on a, on a piece of glass. I mean, I, I think a, a good example is Gandhi, right? Like, if you want to talk about physical types, Gandhi is about as far away as possible from Arnold Schwarzenegger as you can get. And yet, right? People think that, you know, Gandhi can endure all sorts of things in the service of helping India achieve independence, right? In, in the service of heroism. Same with Mother Teresa, right? People think that Mother Teresa is kind of relatively insensitive to pain, and I don't think she could, you know, bench press more than, you know, <laugh>, I don't know what Mother Teresa could bench.

Speaker 2:

Yeah. To, to what extent do you think that is kind of, um, a, a true thing? Because, uh, I guess one might think, let's take the example of Gandhi. I mean, he would often go on hunger strikes, and I mean, it seems like to an extent maybe he was less susceptible to pain, uh, or maybe that's just that his moral convictions were so strong that he was more willing to endure that pain. So is there an extent to which that's actually a, a true thing and not kind of a mirage that people are feeling when they, when they think about moral exemplars?

Speaker 3:

Yeah, I mean, you know, I don't, I wouldn't say it's a mirage, right? I think a lot of our perceptions are grounded in, in some kind of truth. And I think you bring up a good point that this is true too. So, like, amazing heroes, they're, they are kind of, um, generally more agentic, right? That's how they can do such heroic deeds. But I think the way to think about it is like maybe more of like a stereotype of heroes, and then people can apply it to other cases where it doesn't really fit. You know? So if, if you think of your mom or dad as a hero, as kids often do, and then, you know, you see them cry, maybe the first time you see them cry, you think, what is going on? Right? Like, how can you cry? Like you are a, a sheer agent, you're a hero. Like, how can you even, you know, feel, feel pain or sadness? And so I think we, we like generalize this beyond the limits of where we should, and just totally neglect the, the suffering of heroes, like doctors as well. I mean, doctors do endure a lot of things to go through medical school. They like never sleep. Um, they work long shifts, but there's also, you know, chronic burnout among doctors, and they work way too many hours and they make foolish accidents because people think that they can do more than they can, right? So I think there's a little bit of a, you know, A and B here.

Speaker 2:

Mm. Are there any examples of groups of people, types of people that we might see as both, or not even necessarily people, any other entities that we might see as both agentic and experiential? Or is it very much often one or the other, and it's an, um, inverse relationship?

Speaker 3:

We can see ourselves, uh, as both those ways, as agents and patients, and we can see our very close others as agents and patients. But even then, even then, right, if you're, if you're speaking to your, your spouse or your best friend, you can recognize in the kind of abstract that they are both agents and patients. But then when you're talking to your friend and they're telling you about this terrible thing that happened to them, or they're telling you about this amazing thing they did that helped others, you still get that typecasting that way. So I think it, you know, it's not a, it's not a physical law, but I think it's a pretty powerful tendency.

Speaker 2:

Mm. Okay. Let's, uh, switch a little bit to talk about AI and robots, but keep the same topic for now. I'm interested in what this tells us about how people might perceive AI and robots in the future. So whether or not they're actually sentient, uh, if there are AI and robots that feel and look sentient, what does this imply for how we might, um, interact with, uh, robots and what we might think about them? Um, yeah, I'm curious to hear you talk about that.

Speaker 3:

Yeah, it's super interesting to think about how people interact with robots. I'm actually writing a, a big chapter on it now for this handbook, and it's made me think about things in a lot of ways. And, you know, robots and AI are unique in that they are the only agents in the world, in a sense, that humans have created just to replace other agents, right? Like, yeah, there's like dogs, but they like come from nature, you know, like maybe we selected them or whatever, and other people, right? They're, they're born, but like, people make machines, and they make machines to replace other people, typically, right? To replace people at work, to replace, uh, there's actually this Australian guy, I'm sure you're not friends with him, but maybe you are. He wants to marry a robot, you know, like he wants, he wants to, to be the first person to marry a, a robot. And, you know, this is obviously replacing, uh, a human spouse, right? And so as we're trying to replace people, we make those robots more people-like, and then we come to think of them as having human thoughts and human emotions, and, uh, all sorts of weird and wonderful things happen. So the, the first thing that happens, uh, that, that I've kind of studied the most is something called the uncanny valley, which, um, folks may have heard of, but it's this old, it's this old idea from 1970, um, when robots were far from being human, by this, um, Japanese roboticist whose last name was Mori. And he thought that, um, people would like robots more the more human they looked. So if they're cute and anthropomorphic, we like them more, but then they get too human, they get so human that you're, you're not sure, are they human? Are they robots? Are they zombies? Are they dead? And then people don't like them, right? They get weirded out. And so this is called the uncanny valley, cuz that's the sudden drop in liking when something becomes too human. And so this has been discussed a ton in, um, in kind of like human-robot interaction. It's important in movies. So there's lots of movies, like The Polar Express or the Beowulf animated movie that came out, that just creep people out. They like kind of tanked because people didn't like them. And it's why Pixar, if you've ever seen a Pixar movie, refuses to do realistic human beings. That's like their, like literal policy. They only do like fun, cute, you know, cartoony people like The Incredibles. Never like a realistic one, you know, with like dead gray eyes or whatever, which is like what happens when people try to make real people. And so the, the explanation for why the uncanny valley's there, at least, you know, before, you know, we started looking into it, was that just like something about the face is super creepy, right? Like, no one likes the human-like, uh, face. But we thought it was really about perceptions of experience, right? It's like perceptions of mind, that when you see a human-like robot, you feel that it can sense and feel, it can love, it can be afraid, it can feel pain. And we have this fundamental intuition that like, machines don't, don't get to do that, right? That that's off limits. And so it's that mismatch between machines are made of silicon and like are just made to, like, you know, vacuum my floor, and this thing that looks like it can, you know, feel, and feel deeply. And that's what's creepy about it. And so, yeah, we've shown in a lot of studies that, that people get creeped out when you tell them that robots can feel.
And yet I think when <laugh>, when people get used to a feeling robot, like this movie Her, where he like falls in love with his smartphone, um, who happens, by the way, to be voiced by Scarlett Johansson, right? Which everyone can appreciate is like a pretty, you know, uh, attractive woman. And I should say they actually had a different actress voice her, and then it didn't work. So they redid it with Scarlett Johansson, because everyone can appreciate that Scarlett Johansson, you know, has this like ability for experience. But, but I think, I think people get used to robots with experience, right? It's creepy at first, it's uncanny, and then you think, yeah, you know, I have a robot wife, that's cool. And so then the question is, and I think this is an unanswered question mostly, like what happens when we are living with robots that we really think can feel? And we're doing one set of studies on this right now, led by, um, my graduate student, Danica Wilbanks, and <laugh> basically what we, what we're finding is that people are afraid, and this is not published, so, you know, don't scoop me here. People are afraid that robots with experience will resent people for kind of like mistreating them, treating them as slaves in some sense, and that they're gonna rise up and destroy us, right? <laugh> Because, you know, they're gonna realize that like we're having them vacuum our floors, and that's kind of not a great thing to do. And then they're gonna, you know, fast forward from Roomba to Skynet and the Terminator, and then we're all, you know, running for our lives, uh, away from machine guns.

Speaker 2:

Uh, cool. Let's, let's go back there. Thanks for that. So, um, first, it's interesting, I, I have seen Beowulf and The Polar Express. I don't think I was particularly creeped out by them, but I hadn't really thought about the uncanny valley in terms of, I guess I, I've only thought about it, uh, in relation to robots and how they look, and not necessarily, um, animation and it becoming closer and closer to lifelike. Uh, and I guess the uncanny valley sort of implies, once you get past that valley of being really close to human-like, but not quite, if you can get past that, and now they actually are human-like, I presume that people's reaction to them would, would improve. Is that the idea?

Speaker 3:

Well, that's the idea of the uncanny valley. Yeah. Yeah. So the idea is you can climb out of it. Mm-hmm. <affirmative> I don't know if you can ever climb out of it. So I've called the uncanny valley the experience gap, in the sense that like, there's always a gap. And that's because you have a fundamental expectation that machines should lack experience, the capacity to feel, right? Like if we're talking right now and you look fully human, and then you open your head and there's a circuit board and you're like, surprise, I'm a robot. I wouldn't be like, cool <laugh>, you know, like, that's great. I'd be like, that's real creepy <laugh>, you know?

Speaker 2:

Yeah. Yeah. So it's, it's the fact that you just know that they're robots that means you can't fully climb out of it. But I guess, unless maybe they're so lifelike that you can't even tell, and you, you just don't know and you think you're interacting with, uh, a human because the robot is so lifelike, I guess, then you could presumably climb out like that.

Speaker 3:

Yeah, totally. Yeah. If, if, if they're just, to you, a human, right? If you are actually a robot and I can't tell, then for all intents and purposes, you're a human. I mean, yeah, I think to illustrate the experience gap, it also goes the other way. So if you're, if you're told about a human being who fundamentally lacks the capacity to feel pain or pleasure or love, like a psychopath, right? American Psycho, speaking of movies, he's super creepy, right? I mean, he like murders people, obviously, but there's also something about him that, like, you know, just being devoid of that like capacity for, for emotion is just like so unnerving. So people don't like humanlike robots, and they don't like robot-like humans.

Speaker 2:

Hmm. Yeah. Just on that, I'm not sure if you've seen No Country for Old Men, but I think the villain in it was really effective and quite disturbing, um, for, for similar reasons.

Speaker 3:

Yeah. Yeah. He had the dead eyes, right? Javier, uh, uh, Bardem, I think his name is. Yeah. Like, he just always was so impassive. I think that's a great example.

Speaker 2:

Yeah. Uh, so, um, what can we expect about the mind perception of AI and robots? We talked about that in the human context and even for non-human animals. Um, but what can we expect about how people might perceive minds in AI and robots? And, um, and I wanna talk about perspective taking as well, which I think is maybe a little bit different. But let's start with mind perception of AI and robots.

Speaker 3:

Yeah. I think generally when people perceive the mind of robots, they perceive them as having agency but not experience, right? So the capacity for planning and thinking, right? Like, you can, if you're a robot, you can plan out the best route for me to take on my flights. You can crunch a hundred numbers, you know, when doing insurance adjusting or something like that. But people generally, again, don't expect robots to be able to, to feel. But, but these are kind of like squishy. Um, and they depend on the appearance of the robot, as we've discussed. They also depend on the person perceiving, right? So some work by, um, by Adam Waytz shows that if, uh, a person's very lonely, you know, they'll perceive more mind in all sorts of things, including AI and robots. I mean, again, right? Like this, the Australian guy who wants to marry a robot, or, um, the movie Her, right? Like, the reason that he's in love with his phone, ostensibly, uh, is because he's lonely, right? There's this, there's this movie on <laugh> BBC you can find on YouTube called Guys and Dolls, and it's about men who, I dunno what you call it, date or, um, hang out with a lot, uh, RealDolls, these like very lifelike sex dolls. And you or I might look at those dolls and think that, you know, they're just like lifeless silicone, but for them, they perceive a rich amount of, of mind and experience in the dolls. So, you know, you get lonely enough, Tom Hanks sees a, a mind in a volleyball. And so I think AI and robots, it's like fair game for us to kind of like project our own desires and needs upon them.

Speaker 2:

So sort of related, I think, to mind perception is the concept of perspective taking. Uh, so this is where we're not just perceiving a mind, but we're trying to take the perspective of that mind, um, and, uh, I guess really put ourselves in their shoes, so to speak. So how might this work with AI? There's some research about, um, taking the perspective of, uh, other humans or non-humans, uh, as a, as a tool potentially to, um, I guess improve our, our perception of them and to maybe be less prejudiced towards them. Could this potentially work with, uh, AI as well?

Speaker 3:

Yeah, <laugh>? Yeah. That's interesting. I hadn't thought about that. I mean, the, I guess there, there's a couple ways to think about it. One, are you just kind of trying to project your human consciousness into an AI? So that film, I think it was just called AI, right? The Haley Joel Osment, right? The kid,

Speaker 2:

I haven't seen it. Sure.

Speaker 3:

Yeah. He's like an AI, but he just looks like a kid. And so you perceive him and, and like bad things are happening to him, right? I think he's on a quest to become a real boy, see the Blue Fairy, like Pinocchio. And so you just like feel terrible for him, and you sympathize with an AI, but what you're really doing is sympathizing with a child. Mm. I mean, it's the same thing. Like you take Scarlett Johansson's perspective in Her, but you're really just taking Scarlett Johansson's perspective, right? So are we trying to get people to project and just see AI as more human, or are we trying to get people to, to feel what it's actually like to be AI? And if that's the case, one, I don't know if we could ever do that. So there's this famous philosophy, um, paper, kind of, that gets people to think about what it's like to be a bat, and they're like, what's it like to be a bat? And people are like, I don't know, it's fun. I'm like flying around eating mosquitoes, like squeaking. It's like, no, no, you're imagining if you were like a human in a bat costume, like you're Batman, you know, uh, who eats some mosquitoes. But like, now imagine you're like echolocating, so you're like clicking, you can't see, right, all these weird things. And then you're like, actually, I can't know what it's like to be a bat. There's a fundamental, again, the problem of other minds, right? There's this fundamental divide between like a human mind and a bat, but at least a bat's a mammal. Like, what, what is it like to be an AI? I have no idea. You know? So I think there, it, it could make us less sympathetic to them in some sense, because it's like, I don't know, they're, they're a circuit board, there're like these algorithms. And so who knows, right? Like, I can, I can subjugate them now under, you know, the heel of human desire, because, right, they're not like me.

Speaker 2:

Sure. Yeah. That's interesting. So to jump topic a little bit, uh, but still in the realm of human-robot interaction, uh, you've also talked about how people like to use AI for decisional contexts, like what stocks to invest in, for example, but they don't like to use them to make moral decisions. Uh, why is that?

Speaker 3:

Yeah, moral decisions are increasingly being done by AI. So medical triage decisions, right? Like who gets, who gets a ventilator if there's only one ventilator but a number of patients, or parole decisions, like which inmates are most eligible for parole, to have algorithms kind of make recommendations there. They're biased, I just wanna say. You know, people use 'em because they're, they're supposed to be unbiased. They're still biased because we program them with kind of biased information. But it's

Speaker 2:

Like how we've seen AI that are racist because they read the internet or something, they're trained on the internet or something like that. Exactly.

Speaker 3:

Right. Yeah. Right. Or, you know, there's like, um, take something very minor, like, um, technology that, uh, that, you know, when you put your hand under a tap and it turns on. Well, it turns out that technology doesn't work very well if you're black, if you have black skin, because those technologies were trained by the engineers who made them, who were predominantly white, right? And so it's kind of like, whatever your model is, right? The output's gonna be biased towards that model. But there is some promise for AI being less biased, right? If you program them right, with unbiased data, assuming that exists, they'll be less biased. And so, I, I don't think we should scrap, you know, AI decision-making systems because they're biased. I think they have a lot of promise, and we should take that promise seriously. But here's the thing. People generally don't like AI making moral decisions because they think that AI, again, doesn't have the capacity for experience. They can't feel. And we, we like to think that to be a good moral decision maker, you have to have compassion, right? So if a doctor comes into the waiting room and you're, you know, sitting there without your pants on in one of those gowns, you know, it's open and you're feeling vulnerable and exposed, and like robo-doctor comes in and says, you know, like, this is your treatment, you need to do this, it's the best because it's got the highest percentage chance of working, I think you'd be like, I don't feel great about it. You know, like, robo-doctor is making a good call, I'm sure, but what I want is someone who's like, oh, like you must be so afraid, whatever. Like, I, you know, I want you to do well, and so I, I care about you and I'm choosing this treatment. And so people want those making moral decisions to care about the value of other human lives, and robots, at least now, do not. And so that's medicine, that's self-driving cars, that's drones, right? You want, you want people to be on the other end of those decisions because you want something that cares about other people.

Speaker 2:

Yeah. So it sounds like, to sum it up in simple terms, it's because they can't feel, or we perceive them to not be able to, to feel or, or have emotion or empathy. Um, but, but even if they can't actually feel, if an AI could be made to, to act like it feels and to become realistic in, in expressing emotion, is, is that enough? Uh, does it actually need to feel? Or, from our perspective, I guess, what's the difference between an AI that's actually sentient and one that just really seems like it? Uh, maybe we still have the same problem where it might seem like it, but people just might not perceive AI or robots as, as being able to feel. But if we could have, say, a robot doctor that really feels like it's really, uh, emotional and empathetic, is that enough? Could, could that be enough?

Speaker 3:

I think that'll be enough. Yeah. I mean, you know, mind perception can be explicit. So you can ask people, like I do in my studies, like, how much mind does a robot have, rated on a scale of one to five, you know, how much can they feel? But in everyday life, it's much more implicit, right? You're just like hanging out with people, you're talking with 'em, you know, you're like seeing them on the street. And in those cases, I think just the sheer appearance is enough to convince people that they generally do, like most people aren't, aren't philosophers pondering the problem of other minds. They're just trying to go to the doctor and get some food and, you know, go about their lives. So I think, if they're convincing enough, I think that'll work for 99% of the cases. I mean, I think maybe an analogy is, in, in Japan there are these, uh, these like girlfriend clubs, uh, if you've heard of these, but you know, you're, you're a businessman, you go to these clubs, you, you pay money, you'll, you know, buy drinks, whatever. Uh, you tip these, these women who act like they're your girlfriend in some sense, they like talk, they listen to you, right? And, and I think, if you think about it explicitly, you're like, I am paying these people to listen to me and to act like my girlfriend, like to be kind and considerate. But implicitly, in the moment, right? You're just like, wow, someone's listening to me and it's great, and we're connecting on this deep level. And is it fake? Probably a lot of the time, maybe sometimes it isn't, but in the moment you can convince yourself that it's real because it feels that way. And I think AI could be the same way. And I think, you know, this is how this guy can marry his robot, or wants to, right? Cause he's just like, in the moment, it feels real, and that's enough.

Speaker 2:

Yeah. So even if an AI seems like it's sentient, uh, but we can't prove they're sentient, um, what would be their moral status, or how should we see their moral status? And just for a bit of context, we spoke to David Gunkel recently who, uh, suggested that even if AI isn't sentient, um, and even if it could never be sentient, how we treat them might actually have consequences for how we treat, uh, other humans or, or animals. Uh, like treating them badly might lead to us, um, being more likely to treat humans or animals badly. And so we might want to grant them rights regardless in the legal system, in the same way that we grant, uh, corporations and rivers legal rights. It's not that they're sentient, but that's kind of a tool in our legal system. So do you have any thoughts on, um, I guess the granting of rights in the legal system to AI? Uh, and then we'll go from there.

Speaker 3:

Yeah, that's interesting. I guess regarding the question of whether, you know, AI deserves rights because they have moral status as AI per se, I'm not, I'm not super convinced, but again, with the problem of other minds, it's hard to tell. So the, you know, the old experiment for determining if an AI was human enough was the Turing test, right? And so the idea there is if you have a conversation with an AI and you can't tell if it's a computer or a human, then it's human enough. Mm-hmm. <affirmative> But, you know, I've talked a lot with chatbots, and sometimes they seem pretty human. I'm not ready to grant them rights, right? I, I don't think they like get to vote. I don't think they get to, you know, marry who they want or, you know, work wherever they, you know, all the rights we grant human beings. But I could imagine a time at which AI becomes sophisticated enough that it does deserve some rights. Part of that's gonna be perception. Um, and it's gonna be harder than with other humans. I mean, we had the same thing with humans, right? Humans of different races, of different nationalities, of different religions. You know, we like, as a society, struggled with like, well, shouldn't rights only be held by rich white landowners? You know? And then turns out like, no, uh, because these people can feel as we feel, right? And, and there it's easy because it's like, literally they're people like us, right? Like the exact same species. With AI, I think it, it's harder to guess when they're the same, but I don't know, right? If it can write a, a stirring opera, but then there's like GPT-3, right? That, that can write beautiful things, right? Or like if it paints paintings, but then you've got, like, WALL-E, right? I think, uh, DALL-E. I think DALL-E. That's right. Right. Um, I'm confusing it with Disney movies. I've got some kids, right? So, so like, it's like doing all the things that, you know, humans who can experience can do, but I still don't think that, like, DALL-E or, you know, GPT-3 can, can feel. So I don't know what it would take, but sure. Yeah. As to the argument of like, it could be good for society if we grant robots rights, maybe, you know. I think, I think, you know, there's that classic study with kids seeing the adult punch the Bobo doll, and then the kid gets in there and knocks the stuffing outta the Bobo doll. Like, yeah, I think I don't want my kids being a jerk to Alexa, because I think that, you know, kids just shouldn't be. But do I think we need to change the legal system to enshrine AI rights in it? I don't know. That seems a little premature, but maybe, maybe one day. Sure.

Speaker 2:

So I've also heard, uh, that you, you talk about the fact that, um, in one study, someone found that, uh, people would rather give up hundreds of hospital beds than let AI make moral decisions. Uh, I didn't get any more context on that, but that just sounds kind of wild. So could you talk a little bit about what they actually looked at in that study? What, what happened there?

Speaker 3:

Yeah, so that's a study that I love that, like, didn't really make it into the paper. Um, but I always like talking about it in talks because it's, I think, in my mind, the most interesting. So it was just a trade-off, right? So you can have a, a human doctor, uh, like we do now, or you can have an AI who can make recommendations about life or death decisions. And it turns out the AI's cheaper. It's free once you get it. You don't have to, like, pay for it. It's never gonna, you know, try to get a raise, uh, complain about vacation. And so you can save more hospital beds with, uh, an AI doctor. And the question is, well, how many more hospital beds do you have to save to want an AI doctor in a hospital instead of a human doctor? And so we're like, one? Some people are like, oh, you know, the rare person's like, yeah, it's better, okay, one. But I think the transition point, and I should look at these data, was something like 50 beds or something. Like that's the break-even point. Like, people are like, fine, you save like 50 people, that's a lot, right? Like, some small hospitals don't even have 50 beds, right? Like, you know, 50 beds in a hospital. And some people said, like, never, you know, like, a thousand? No. You know, so they were just, uh, uh, fundamentally opposed to the idea of AI doctors.

Speaker 2:

I'm guessing in that study it was probably stipulated that, um, the AI doctor would be as good as a human doctor or something like that. Does that sound right? So is what's happening here just that people really don't like AI making moral decisions, and they, they think even if it's as good and as effective at its job, it's just that they think it's not going to have empathy and therefore is not gonna be as ideal a doctor as a human?

Speaker 3:

That's right. Yeah. I mean, people, people will say, well, is it as good as an expert? And so we ran studies where, if you, if you present people with, like, here's a, here's a doctor who's terrible, he's gonna kill you, <laugh>, you know, like, you're feeling sick, he's gonna recommend hemlock and you'll die. Like, not that extreme, but like, you know, doctor's not great, AI better, you know, and imagine that you're, like, sick, people are like, okay, give me the AI, I don't want to die. Mm-hmm. <affirmative> But if you don't make it such a stark comparison, and you're just like, oh, here's an AI that's 90% good, or a, or a human doctor that's 90% good, people way less want the AI, even if they're equally good.

Speaker 2:

Hmm. Another, another interesting, uh, example I heard you mention as well is that the presence of robots can actually make people seem, uh, more similar and reduce discrimination. Uh, so I, I'll let you describe this, but just to prime you, I think this was from a, a blog post, um, where you talked about humans as being red squares in slightly different shades of red, and then you introduce robots, which are blue squares. And then the, um, the example of participants imagining they're the treasurer of a post-apocalyptic community. But I'd love to hear you talk about all that.

Speaker 3:

Yeah, <laugh>, that's a lot of background. <laugh> If you're listening, there's like, there's colored squares and there's a post-apocalyptic community where you're a treasurer. Um, and so this study was run by Josh Jackson, and he had this intuition, well, kind of a, a counter-intuition. So most people expect that the rise of robots will turn people against each other, right? Like the robots steal the jobs, there's not so many jobs, people turn against each other, they turn against immigrants, whatever, there's lots of hate. But we wondered whether robots could bring people together. If people realize, and I think this is the, the key point, that like, they realize that robots are different. They're not people, right? Like if, if you look at the immigrant who's competing for the same job that you are, and you think, well, at least we're human beings, right? At least we feel the same things and we eat, eat the same food, and a robot doesn't eat anything but, like, I don't know, motor oil and electricity, right? Like the robot's different. And so maybe recognizing the fundamental kind of like unhumanness of robots can bring people together. And so we found that in, in a number of studies. But I think the funnest one that, that you mentioned is, um, is we ask people to imagine that they're living in this post-apocalyptic community where there are, uh, either just humans in the community, of different races, so white, black, and, and Asian, this is in America, or the other community, where there's like four races in a sense, like white, black, Asian, and, and robots. And so we ask people to, to kind of divide up salary money, uh, across folks in the commune based on their job. So you could be the blacksmith, you could be the, I don't know, like the cook, you know, whatever you need to keep the community running. And so, you know, you're the treasurer, you're in the treasurer's shoes, and like, how do you give out money? And so typically what you find in these cases is, is white participants are racist, you know, with these outcomes, right? They give more money to the white folks and, and less money to the black folks, even if they have the same job. And so it turned out when there were no robots in the community, the white treasurers were indeed racist, giving less money to the black people for the same job, right? Not, not great, uh, but expected. But it, it turns out that if there are robots in the community, then this gap gets way smaller. Um, I forget the data now. I'm not sure if it totally disappears, but it definitely gets way smaller, because you're like, <laugh>, I'm not giving any money to the robot, cause it's a robot. And, and we're all people here, right? Like, it doesn't matter, you know, what your skin tone is, we're all people. Let's screw the robots over and keep the money for the humans. And so it can bring us closer together, because, you know, at least robots aren't humans.

Speaker 2:

Yeah. Do we, do we see this effect as well within, uh, within humans, say with, like, you know, I don't know, you can imagine, say, like a country, people in a country with their differences banding together because of immigrants or something like that, because the immigrants are even more different than the people within the country. Does that effect still hold in other cases like that?

Speaker 3:

Oh yeah, it holds. I mean, this is a super robust effect. It holds, it holds with every social category. So in the movie Independence Day, aliens come, you know, like we put aside all our differences to blow up the aliens. And then, you know, as soon as the aliens are gone, it's like, wait a second, you're a Muslim, I'm a Christian, like that, you know. But, you know, and then like even Muslims and Christians can agree that atheists are evil, right? So just like, it just matters how you kind of slice the pie up, but it's always there.

Speaker 2:

Sure. Yeah. Uh, so the last topic I'd like to talk about before we wrap up is, uh, moral circle expansion. So that's this idea of, um, uh, hopefully an ever-growing, expanding circle of moral concern, where, um, at the center of the circle is yourself and the people closest to you. Um, but we'd like to expand that circle to include, um, other species, and maybe, if, uh, robots and AI are sentient, then hopefully them as well. Uh, so there's this, um, idea of, uh, substratism as well, which, um, you can think of as like racism or speciesism, where, if an entity is sentient, it shouldn't matter what substrate their mind is in; it shouldn't matter if it's like a biological mind or like a synthetic, um, mind of an AI, as long as everything else is, is the same. Uh, so do you have any thoughts about what we might be able to expect in the future for, I guess, uh, we've talked about this a little bit already, but, uh, discrimination against, um, uh, digital minds in the future?

Speaker 3:

Yeah, it's a good question. I mean, it's, it's, I think these are, these are super important questions and really interesting to think about, right? But if you have to, like, worry about kind of like prejudice, like, AI would be like so far down on the list in terms of like all the prejudices within, within people. And you know, based on the paper we just talked about, like, if AIs lack kind of full human sentience, maybe being total jerks to AI is the way we can bring people together, right? Like that's the solution to all the, the racism and, and discrimination against like religion and creed and politics, right? Like, we all get together and we all hunt robots down and hurt them, right? Like, as much as robots can feel pain. I'm not saying we should do that, but if we're talking about thought experiments here, I think we need to think about, like, whose sentience are we most concerned about? Um, and I guess the other general point about like expanding moral circles is, I mean, I'm, I'm all for kind of like expanding moral circles, but I also, I wonder if it's costless, right? So if we expand moral circles to animals more, right? So people care a lot about their pets, right? Pets used to be something that we abused, or just like kicked or put in dog houses, but now we buy them sweaters and get their DNA tested, right? But does it come with a cost? Like, are you more likely to walk by the homeless person, right? Who, who needs a, a kind of like a, a warm meal and a bed, and just ignore them, because, like, you know, your dog, Mr. McGillicutty, like, needs a new cute sweater, maybe, you know? Uh, and so I think it's always useful to think about, like, well, what's the trade-off? Because there's nothing, nothing free in the world. But yeah, I think as our, as our kind of senses of morality expand, I think it's possible and reasonable to think that we could care more about AI. Again, I don't know if it's a good thing, right? Like if you care more about whether your phone's upset, like some kind of new-age Tamagotchi, and there's people dying in, uh, third world countries, you know, developing nations. Like, I think, I don't know, right? Like, if we had to choose between her, and we had to choose between someone who's, you know, a family who's got, like, malaria in sub-Saharan Africa, I'd be like, let's pick the, the human beings, you know? And, and then I guess the, the pragmatic question of, like, could you make a mind out of silicon? It's like an interesting question, right? Like if we hit the singularity, I think for sure, if, you know, silicon minds can improve themselves. And I think there's a way that it's possible, but I think the more we learn about biology, the more I'm kind of, like, astounded at the fact that humans are, you know, conscious, and, and other animals have consciousness, and like the computers still kind of suck <laugh>, you know, like they can do things, they can, you know, they can parrot things back. But I think like the, you know, like the levels of magnitude of like a neuron and then all the things that happen in the, in the neuron and all the things that happen within the, in the organelles within the neuron, it's just like, it's mind-boggling. So maybe, maybe it's fair to be prejudiced against silicon substrates, but, but I think if the day comes when, when humans can convince me that, or sorry, that, that robots can convince me that they're fully human, then I think they deserve moral rights.
But I don't know when that day will come, if ever.

Speaker 2:

Sure, yeah. Just to get back to moral circle expansion, it's interesting, you can easily think of a positive potential effect and a negative potential effect. The positive might be that if we expand the moral circle, it kind of brings everything else along with it, into our moral concern. The opposite effect is, as you said, if expanding the moral circle somehow makes it more diffuse, then the things in the center of the circle, we still care about them, but maybe care about them less, as if there's a cap or something on the amount of moral concern we can have. A practical example might be that you almost get burnt out from trying to care about everything: the more things you try to care about, the easier it is to get burnt out to an extent. But I'm curious, do you know of anyone who's tried to study this before or test this? It doesn't seem like it would be too hard to test, I guess. What do you think?

Speaker 3:

Yeah, I'm trying to think. There is some work showing that kind of collapse of compassion, or empathy fatigue, right? That you get tired, you get burned out. Certainly for people in the caring professions this happens. And I think there is some work by Adam Waytz showing that liberals and conservatives vary in the moral circle they emphasize. Conservatives emphasize the moral circle closest to them: their community, their kind of tight-knit church community, their family. And liberals are more likely to be universalists. I'm assuming most folks at your institute are more liberal, right? Because it's the uber-expanded moral circle: animals and possibly AI. And to be honest, in a lot of ways I think the world would be better off if we cared in general more about the distant moral circle. At the same time, I think it's not costless to not care as much about your family. So let's go back to Gandhi, right, who's not Arnold Schwarzenegger, but still a moral hero. He was not a good father, he was not a good husband, right? Martin Luther King Jr., also not, right? And in fact, I think they did a lot of things that, if you think about what it means to be a good father or a good husband, make them terrible examples. And yet they were amazing heroes who effected incredible social change for so many people. So if you had to pick, should Gandhi be a national hero or a good dad? I think the world would pick hero, but if you ask his kids, I don't know what they would say. And I'm a dad, you know, and I feel this trade-off a lot, right? Effective altruism says give your money away where it does the most good, but also my kids need to go to college, and it's expensive in America. So what do I do with my money? Well, I think I need to protect my kids in some sense. So it's a struggle, and there's a tension, and I don't know the best way to resolve it. But I think there is the tension.

Speaker 2:

Yeah. I thought of some follow-up questions, but we could almost have another podcast interview about that, so I might leave it there. But I'll leave you with one last question, which will be a little bit general, just for us to finish up on. I'm interested in any thoughts you have on what social psychology can tell us that might be useful to people interested in getting others to care about more sentient beings. You could think of that as expanding the moral circle, but just in general: how do we get people to care more about others? What does social psychology tell us? Any tools that you'd like to share?

Speaker 3:

Yeah, that's interesting. I think the work on empathy is really applicable here, right? It's just caring about suffering. So you have to do two things. One, you have to recognize that people are suffering, or animals, or AI, and often that's just enough: once you recognize it, you care about it. But if people don't spontaneously care about it, then you have to get them to care, right? And there's lots of work on empathy, on how you get people to simulate it so they feel it in their own hearts. You know, we're trying to raise our kids as moral kids, and so what do we do? Say my younger daughter punches another kid. I'm like, look, that causes harm to the kid, right? It makes this other kid cry, and now she appreciates it. But that's not enough, because, yeah, yeah, that kid's crying, you know <laugh>. I mean, sometimes it's enough. And then you have to say, well, what would it feel like for you to get punched? Wouldn't that feel bad? And, oh yeah. So I think you need to, we talked about perspective taking, right? You need to get people in the shoes of these other people, of these animals. This is why people turn vegetarian after they see those videos of the horrors of factory farming, right? Because they're like, oh my god, they're suffering, and I can project my own self in there, and that feels terrible. So if you want to expand the moral circle, get people to recognize suffering and then care about it.

Speaker 2:

Great. Well, thanks so much for your time today, Kurt. Where can people go to see more about your work if they're interested? Anything you'd like to plug?

Speaker 3:

Yeah, sure. They can find me on Twitter, at the Deepest Beliefs Lab site, or at the Center for the Science of Moral Understanding. Or, you know, if you Google me, you'll find some stuff, and if you're interested you could watch that stuff <laugh>. So that's all I'll say.

Speaker 2:

Great. And what's your Twitter handle?

Speaker 3:

It's Kurt J Gray.

Speaker 2:

Cool. All right, well, we'll have links to those websites and all the other studies we discussed in the show notes. So thank you again, Kurt, really appreciate your time.

Speaker 3:

Great. Thanks for having me.

Speaker 1:

Thanks for listening. I hope you enjoyed the episode. You can subscribe to The Sentience Institute Podcast on iTunes, Stitcher, or any podcast app.