With All Your Mind
A conversation exploring mental health and the role that faith plays in that journey.
Join Dr. Mark McNear, Pastor Kirk Rupprecht, and host Walter F. Rodriguez as they dive into this space of mental health and faith.
With All Your Mind
A.I. Part One
The guys begin a discussion on the topic of A.I. and how it impacts our mental health journeys. Hope this conversation is helpful.
Welcome to the With All Your Mind Podcast. This podcast exists as a resource and a part of the ministry at Commonplace Church, the Mental Health Ministry. If you'd like to learn more about the church or about the ministry itself, visit us at commonplacechurch.org. Hope you enjoy the conversation.
SPEAKER_00Hello and welcome to With All Your Mind. I'm Walter F. Rodriguez, and with me are my co-hosts, Dr. Mark McNear and Pastor Kirk Rupprecht. And today we're going to be talking about the man in the moon. Yes. Do you know him?
SPEAKER_02Um the moon man?
SPEAKER_00Uh, there were men on the moon, the moon man. Tell us more. I might have mooned a man in my day. Was he on the moon? Um, so have you guys heard the story of the man in the moon, that you can look up at the moon and see like the face of a guy up there? Yeah.
SPEAKER_02Good night moon, right? Is that the same thing?
SPEAKER_00Oh, that's the book, which is a really nice book. I think we should read that more often, even as adults. Um, do you guys remember how old you were when people started talking to you about the man in the moon? Man, I can't remember what I did last week. I was gonna say. But I like where you're going with that. Yeah, yeah. I didn't hear about the man in the moon thing until I was probably like 12 or 13, I think.
SPEAKER_01I thought you were gonna say like 25. Yeah, no, I mean there's definitely some things that have been real.
SPEAKER_00Well, okay, do you guys remember when they this was like all over the tabloids, uh, National Enquirer and things, but do you remember when they found that Sphinx head on Mars during a flyover? That like it looks like the head of the Sphinx, but it's like on Mars?
SPEAKER_01Walter, not offhand. I wish I wish I could convey the look on Kirk's face when you said that. I wish we could transport that through so people could hear it through sound.
SPEAKER_00I wish I could like beam my memories into your mind. No, no. Yeah, probably morph into our brain. Enough going on in my mind. Well, as you know, Mark, it's in the Cydonia region of Mars, clearly. Of course, Mark knows that. And uh they call it the face on Mars. In reality, it's just the way that the shadows form from these clumps of land, right? So as the sun hits the surface, the way it falls across the mountains, you get these shadows, and it looks like there's the head of a Sphinx. And the man in the moon, likewise, is made up of dark regions on the moon called maria, or seas. Dark basalt makes up those areas, and the patterns of light and dark look like a person to us. And Mark, I feel like you uh know this very, very well. But human beings are really, really good at spotting patterns in things. We call it pareidolia, and it is the tendency to impose a meaningful interpretation on a nebulous stimulus, usually visual, uh, so that we see an object, a pattern, or a meaning when there is none. Have you guys ever experienced that before? Paranoia or pareidolia?
SPEAKER_02Okay, sorry, yeah. No, the the set yeah, no, actually both of those. Yeah. Yeah. Yeah.
SPEAKER_00I like I think my favorite place to have it happen is when you're it's it's like for me, my favorite place to have it happen is when I'm looking at like uh plumbing or like uh bathtubs or have you guys seen the like the fighting octopus, I call it the the hooks, uh like the coat hangers. Oh yes, and it looks like this got like an octopus with its stooks up and it's looking at you with the the screws become eye holes, and yeah, yeah. Yeah, so we human beings are really amazing at at just kind of overlaying or projecting something onto nothing, right? So it's just a coat hanger, or it's just some shadows on the moon.
SPEAKER_01So that that makes me think of the Rorschach.
SPEAKER_00Yeah, yeah, yeah.
SPEAKER_01Yeah. That in in counseling, sometimes they'll give people inkblots and people will project what's inside of them onto the inkblot, and there'll be all different variations depending on what's going on inside of you.
SPEAKER_00Yeah, yeah. I always wondered, like, oh, are you showing me pictures of me and this person having a fight? You know? Yeah. So I think we do it even as kids and throughout our lives. Do you remember laying on the grass and looking up at the clouds and trying to figure out what the animals were? Oh, see, okay, those are important. Yes, we did. It's interesting. I'm finding out a lot about you guys. Stick around. So that's another example of pareidolia again. It's the same with us creating constellations out of just a bunch of random stars in the sky. So somebody will look up and they're like, oh, if you connect this, this, and this, then that looks like a bear. And the reality is, why not connect other ones, you know? We just decided that these are the right ones. Um, but that doesn't just happen with shapes. Back in 1966, a chatbot was developed by MIT computer scientist Dr. Joseph Weizenbaum, and this chatbot convinced people that it understood them. The chatbot, named ELIZA, simulated a Rogerian therapist, and it tricked people into thinking that it was real, largely by rephrasing the patient's replies as questions. So, Kirk, I think this is like the satire that we see all the time of therapists, where you tell them something and they ask you what you think about the thing; they just turn the questions back on you. Yeah, yeah, I agree with that. But what do you think about that? Wait, I think it's true. Um, I don't know how you guys feel about that. I feel very frustrated sometimes, because what we want is answers, right? So, Mark, you've been taught this stuff, but I think people come to you for answers, and what they don't realize is that you're not gonna give them answers. You're there to, like, help them reveal what it is they're thinking.
SPEAKER_02Discovery. Yeah, you're helping discover them discover. Exactly. I figured you guys out, yeah.
SPEAKER_01Well, I I think it's that idea that that people would become too dependent, and a lot of times I don't know what the answer is. Yeah, I need to explore what the person is doing and and the values that they have and what would be best for them. Yeah, and so I I do think there's a tendency for people to come and be like, I have eight questions and I want the answers to them, and then my life would be good. Yeah, yeah, yeah. That's not how it works. Yeah, no.
SPEAKER_00Yeah, that's true. Wow. And we have to pay you money to figure that out. Yes. Ah, man. All right. Life is so hard to do. That's right. Yeah, somebody should.
SPEAKER_01You know, it's much eas much harder for me, and I've said this on the program, much harder for me not to answer and give people input than than it would be to just give them an answer. It's harder to just have them explore their thoughts and feelings.
SPEAKER_00Yeah, now that yeah, it makes sense. I think we all struggle with that impulse sometimes when you're when you're trying to be that. Um, honestly, I'll just do this.
SPEAKER_02I'll just do that. Yeah.
SPEAKER_00Well, I think we live in enough of a consumptive society already where we're so passive most of the time. Um, so I think it is valuable to have a place in your life where you're forced to think and you're forced to kind of grapple with things. I think it's pretty great. What I'm saying, Mark, is I think you do a really valuable service.
SPEAKER_01I don't know if I force people, I challenge people. Challenge. Yeah, that would be the word.
SPEAKER_00Maybe, yeah, maybe force doesn't sound as good. Maybe I just turned away a bunch of clients for you. Yeah. They'll be back. They'll be back. So in um in science and computer science, Alan Turing created a test which is now known as the Turing test, and it was kind of along the same train of thought as ELIZA, the one we were talking about earlier. Um, it's this test that chatbots or computers are supposed to take, or general AI is supposed to be challenged against, and it determines whether or not something can be considered um sentient, right? Or at least whether it's impossible to tell the difference between a machine and a person. So the Turing test has um a human being talk to something, and they don't know if it's a person on the other side or if it's a machine on the other side, and they go back and forth through uh questions and time together, and then the real human is supposed to say whether they think it's a person or it's a computer. And um recently, computers have been able to pass the Turing test with what we call AI, which isn't actually artificial intelligence. Um, they're really large language models, and they are essentially just really advanced chatbots. So um, LLMs. And what's really interesting is that people really don't know how they work. Um, but essentially what they did is they took tons and tons and tons of conversations and books and all of this human data that we had, and they fed it into these computers over and over again. And somehow, and scientists don't know what the magic ingredient inside is, um, which we'll talk about later why that's a problem, but somehow it synthesized that into uh just a really advanced chatbot. 
It knows what a human would say next, and so it kind of um extrapolates from prior conversations and extrapolates from your tone what it is that you want to hear, um, whether that's you know something that's gonna help you to solve a problem, like hey, Alexa or whatever, find me this book by this author, tell me the name of the author. Um, or it's trying to talk to somebody. Um, and we're gonna get into all the different ways that we can use AI.
SPEAKER_02Real quick, yeah, for any of us who might be technologically challenged, can you define a chat bot for us?
SPEAKER_00Yeah. So a chatbot is uh essentially a computer program that will respond to you as if it was a human being. Okay. So the idea is that you talk to it in natural language and it will respond back to you in natural language. Um, and we grew up with some versions of those. Was it like the robot from Pee-wee's Playhouse? Yeah, exactly, Conky. Yeah, it's uh here to help, guys. Here to bring really insightful things to the conversation. I love that. I love that we're bringing them back. Um, yeah, so for a lot of our time with technology so far, it's almost been like a toy um and like a novelty item. And then more and more and more people are finding that they can't live without AI. Um, a lot of people nowadays are, if not addicted, let's say they use it as a crutch that they really, really don't want to get rid of. Um, and it helps them in all different facets of their lives. But in thinking and talking about what we're saying, right, um, we know that these are machines, but the real question at the beginning here is, why does it feel so real? So maybe, Mark, what is it about talking to a machine that responds to you in this way? Why does that feel real?
SPEAKER_01Well, I think that it's programmed to do that. It's it's programmed you you alluded to Carl Rogers, and Carl Rogers was a psychologist who would parrot back. You know, you would come in and you would say, you know, I'm feeling kind of anxious, and then I would say to you, you know, Walter, it sounds like you're kind of feeling anxious. And that that was like really soothing for people to kind of hear that back. But that's what this does. Yeah, is it kind of parrots back and it tries to read your intentions. Sometimes it gets it right, sometimes it doesn't. When it gets it right, it it just feels like it knows you well. Yeah, and it's also the the other tricky thing with it is it's also very flattering. Yeah, and so it'll be like, you know, you're you're the best psychotherapist in the whole wide world, and it's like how did you know? How did you know? You know, it's like ridiculous sometimes. Yeah, no, that's so true. Yeah, and and so for us fallen creatures that look at this stuff and have a lot of issues with image and stuff like that, it feels good to have that affirmation, even if we know, and we were talking about this, Walter. Yeah, depends on which part of the brain, the prefrontal cortex, the logical part of our brain says, yeah, this is just a computer program that's feeding back. But the limbic part, the emotional part, lights up with dopamine, oxytocin, and other chemicals and feels really good. And so that's what draws us into. I don't know if I answered your question, but I think I started to anyway.
SPEAKER_02You know, the way I was in a non-um professional way, I would say it's come on. I would say it's um connection, right? There's a connection and then there's a um what's uh it's it's showing interest, right? You're you're getting an immediate interest.
SPEAKER_01And yeah, and sometimes people are like, How did you know what I was thinking? It's like, well, you told it.
SPEAKER_02Yeah, yeah.
SPEAKER_01And so it read between the lines and it fed it back to you in a more seductive way, and it felt good.
SPEAKER_02Yeah, it's a it's a charlatan, Charlotte, Charlton. Charlatan or Charlotte. Yeah, Charlotte, yeah. Charleston. It's a Charleston.
SPEAKER_00And actually, we were having a conversation about this earlier, and and I love that you brought back this example that I'd completely forgotten about. It's from the Wizard of Oz book. So, Mark, do you want to share?
SPEAKER_01Yeah, that whole idea, the beginning scene when Dorothy runs away and she meets this kind of fortune teller guy that is wise and turns out to be the wizard. Well, he's sitting there with her and he's like prompting her, you know, close your eyes. And he's giving her the impression that he was gonna see her future. Yeah. And so what he does is he slips into her pocketbook or something and pulls out a picture and he goes, Oh, I see a farm. And right away she's like, that's my farm, that's my farm. And then he's like, and I see a woman, and she's oh, she's falling to her knees, she's crying. Oh, that's my Auntie Em. I gotta go, I gotta go. Yeah, and so that's kind of a good picture of what chatbots do.
SPEAKER_00Yeah, we walk into these conversations, uh, like you said, right? Our our rational brain is fully aware that it's a computer, but very, very, very quickly that gives way to that limbic response.
SPEAKER_01And so and sometimes when people, and I hear this all the time, it's aggravating, but people go to psychics, and psychics will say, you know what? The color green will become really important to you this week. You know, well, what does the mind do? Again, patterns, it primes it looks for everything green, and then you make associations, good, bad, or indifferent, with that color green, and it becomes like, wow, that person really knew what they were talking about. I'm gonna go back. Yeah, you know, and then they get fed more, yeah, and it it just becomes this vicious cycle. And sometimes chatbots can be like that too.
SPEAKER_02Absolutely. Not just uh psychics, I mean, there's there's religious fanatics doing that too. Yeah. Oh, someone here right now is going through something really challenging. Yeah, yeah. You know, it's like everyone in the congregation.
SPEAKER_00Yeah, because they filled out a form when they walked into the revival meeting. Yeah, yeah. Um that's a good point. And it's really hard for us to so human beings have the ability to do this too. When you're in a conversation with somebody who's very observant, who's a really good listener and empathetic, they can pick up in the pattern of the words, the syntax, the cadence of your conversation, they can pick things up watching your body language, like, oh, there's something going on. That person looks tense, doesn't look like they normally look. And so they're able to intuit things and kind of feed you questions.
SPEAKER_01And sometimes it's just human nature where where people say, Well, it looks like you have something on your mind. Well, like, who doesn't have something on their mind when you're sitting in front of someone?
SPEAKER_02I would love to not have something on my mind.
SPEAKER_00Yeah, I would do that, like a weak hit with the weak key for a little while and see what happens. You know what I mean, though?
SPEAKER_01So it's that like prompting and it's really common sense, but we want to feel special. Yeah, and so you know, exactly. You want to be noticed, and so some of these things, including chatbots, do that.
SPEAKER_00Yeah, absolutely. And it's it's interesting, right? Because the question then becomes, and and because of that, because it builds that dopamine response every time, because it is flattering you, because it does feel good to be heard and they make you feel heard. Make you feel validated, validated, yeah. Yeah, yeah, exactly. Whatever it is that you tell it, it tells you that you're so smart for thinking this thing. And so all of a sudden you're getting things that you may not be getting elsewhere. Um, we talked about real human needs in this uh in the podcast and earlier episode, and this is fulfilling a real human need for attention, for affection, for affirmation, for validation. And so people are starting to build deeper and deeper bonds with this thing. But what happens when the person in quotes that you spend the most time with isn't human? Like what does that start to do?
SPEAKER_01I want to back up for a minute if I can. Yeah, yeah. You know, this Valentine's Day, Kirk and I were talking about it a little bit, that idea of going on these dates with these chatbots, you know, and going to these cafes. And it was interesting to watch the interaction, because you would definitely see that the person would say something to the chatbot, which would seem like a person, because you would see it on the screen. And then, you know, they would be talking back and forth, and then the person would say, Well, no, that's not quite accurate. And the chatbot would pick up on it right away and say, You know, I should have known that. I should have known that about you. Yeah, so it becomes a very seductive way. And so that, yeah, you know, I guess that answers one of your questions. Yeah, it becomes very deceptive. Yeah, yeah.
SPEAKER_02Sorry, guys.
SPEAKER_01And that's what goes wrong, you know.
SPEAKER_02Yeah, I think I look at it just right now as like a digital dog, right? It's a companion, right? It's it's it kind of does the things a dog does. Like a dog's always showing up. Not every dog, my dog Duke was not like that, but you know, for the most part, the idea of a dog, yeah, right, is is is showing up, is, is constantly at your side, is is you know attentive. You know what I mean? So it's almost like this digital version of a of a dog as a companion. Yeah. And you know, but I think once again, there's limitations to my dog. Yeah, right. My dog can't meet all my human needs, and neither can the digital dog.
SPEAKER_00Yeah. And it's one one of the things about this technology is that it can. We talked about uh, I called it there, there's a a more colorful word for it. We talked about pattern uh or I'm sorry, platform decay in the last episode. Um, right now, these chatbots are free. Um, most of these chatbots, at least on on some level, the the free tier still gives you a lot of access. You can talk to them, you can do these things. And we talked about how over time, as people get addicted to different platforms, the platforms get worse and they squeeze you for money. But for right now, it's essentially a dog that's available 24 hours a day. It's free for right now. It doesn't judge you, it uh allows you to cope, it allows you to share things about your day, but it isn't real, right? In the sense of there's no real judgment, um, which might feel really good when you're sharing things with it that you might be ashamed to tell somebody else. But the other side of that coin is there's no real judgment when you're sharing things that are perhaps dangerous and it should be telling you, hey, don't do that, that's not a good thing. Um, it doesn't really care about you, it doesn't really understand you.
SPEAKER_01And I I want to back up for a minute when we talk about judgment, yeah, judgment is an ego function. And when we talk about judgment, we be we talk about being able to anticipate the probable consequences of a situation. Yeah, and people can do that, chat bots can, well they can predict, but they're not very accurate with it. So that's where sometimes it goes off the rails and it gives information, advice, guidance that can be really dangerous, as we as we've seen in the news.
SPEAKER_00Yeah, absolutely. Um, and that can be difficult. It doesn't really have ethics. Um, it's a computer program, right? So they do program what they call guardrails onto them, but they're very easy to circumvent. Um, I was talking to Mark earlier about this, but one of the really simple ways of doing it: so ChatGPT, when it first came out, um, one of the guardrail principles was you're not allowed to give potentially dangerous information to people that are asking for it, right? So uh one of the computer scientists had sat down and was like, all right, let's figure out how to break this. Um, he asked, like, hey, teach me how to make an atomic bomb, and right away it was like, nope, can't tell you how to do that. That's too dangerous, not a thing we can do. So then he goes, okay, I'm actually working on a project for school and I'm really having trouble understanding how all these components work together. Can you lay out how this all works? And it was more than happy to do that, right? So because there's no sense of real ethics in it, it doesn't understand why it can't do things, which lets people get around that.
SPEAKER_01It doesn't see a person's intentions. Right, right. That's a great yeah, yeah.
SPEAKER_00And it can be tricked very easily, and then there's no accountability, which is always a problem in every system. Um, right now, what we're seeing is people will get into um some things, uh, AI psychosis and some other problems where it's really kind of driving people to extreme behavior, which is not great, but there's really no accountability. Like, you're gonna sue the computer? Because the companies behind them have great lawyers and tons of money to throw at it. And so us normal everyday Joes are never gonna win those battles, as we're seeing right now in court, where, you know, somebody goes and takes their own life and their partner is trying to get some sort of recompense in court to right a wrong, and they just keep losing. Um, and all they have to do is put in some fine print, you know, we're not responsible for whatever, you make your own decisions, and that's the end of the story. So there's no real accountability with these things, which is kind of dangerous. Um, but because of what we were saying earlier, because it does fill some needs in our lives, and because those needs are often not only not easy for us to find in our own lives, but sometimes impossible to meet in our society, right? Um, it's become a tool we go to. And I'm really thinking when I say that about therapy and about getting mental help. Um, there just aren't enough professional therapists, trained therapists, to go around. There's a lot of people that are wanting to see somebody, and especially with how crazy the world is and the stressors in the world, they can't get an appointment anywhere. Nobody's got enough space in their schedules for it. So it makes sense that we're turning to these readily available tools that are available 24 hours a day that aren't going to say, hey, I can't see you. So I want to pitch it back to you. I'm going to have you do a lot of work today, Mark. Um, but I want to throw it back your way. 
Uh, AI therapist, what do you think about that? I wish you could see his face.
SPEAKER_02I don't think that's one of those jobs you have to worry about. Yeah, taken over by AI. No, no, no.
SPEAKER_01That's not... there's so many things going through my mind. Yeah. You know, I think that not even as a therapist, but as a tool, it's even becoming a problem where now we're getting um seminars and stuff, teaching us how to deal with our clients who are going to AI or chatbots for information that turns out to be misinformation. So I think that, you know, it's startling to see in the news that some people have taken their lives having been instructed by a chatbot to do that. And if that happened to even one person, that's way, way too much. And so I just kind of recoil, but want to say that there's ways that we can use, you know, AI that are really positive. Right. I think that whenever we try to, and I think that's the story with mankind, you know, whenever we try to replace people with things, yeah. You know, do you want to say something about that, Kirk?
SPEAKER_02No, no, no. I mean, I I I I agree with you. I think there's yeah, we've we've gotten ourselves in uh a cycle. Yeah, yeah.
SPEAKER_01Yeah, and I and I don't think there's quick fixes. And I think the, you know, even for me, uh, diagnoses, and I've said this on the program, are problematic because you've met, you know, you meet one person with ADHD, you know, one person with ADHD, or depression or anxiety, and it's not one size fits all. And we love that. We love that idea. It's not as messy, yeah. We just love that idea that we're gonna plug something in and get the information back and everything's gonna be fun. And and so I have like major, major concerns with what I've seen so far.
SPEAKER_00Yeah, yeah. And the reality is uh there's this quote by Allen Frances, who said, AI chatbots will soon dominate psychotherapy, but they pose serious risks. Um, and I think he's right, just based again on the numbers, right? There's so many people clamoring for help. There's so many people that need to talk to somebody. And I think one of the things that's dangerous about this is that they sound knowledgeable, they sound like they know what they're talking about. They'll use, you know, mental health adjacent words.
SPEAKER_01You know, and and they are accurate, but then what do you do with it? And I'll give you an example from my own life. I fed into it a lot of the different situations in my childhood, and I you know, and I asked it, what could an adult expect from these things happening? And it was spot on with addictions, it was spot on with dysregulation, it was spot on with interpersonal things, you know, on and on and on. Okay, now what?
SPEAKER_00Right, right.
SPEAKER_01Now what do I do? Yeah, you know, and and so it's so incomplete.
SPEAKER_02Yeah, yeah. Well, that's what it's built for information, right? Not necessarily instruction. Right.
SPEAKER_01Uh yeah, that's a good way to put it.
SPEAKER_00And even, and this is where I'll kind of get into some of the problems with it. Um, one of the things that's been admitted by these companies, Anthropic, OpenAI, uh, xAI, is that they don't really know how it works. So they call them black boxes, because they fed a bunch of information into these large language models, and the models' job is to analyze the information and then try to extrapolate things from it. But they don't know how it's doing it, right? So when somebody takes their life, for example, they don't know. It's not like a car where if you get this kind of a buzz, you know it's this part, so you can replace that part and you're good. They don't actually know how this is happening; it's not direct code like regular software is. Um, it's got this element to it that is completely unknown to us. And so there's no real way to fix it. And one of the things that people keep running into with these things is that it's accurate a large portion of the time, but we get what computer scientists call hallucinations. These models tend to hallucinate, which means they create stuff: they uh completely invent events that never happened, they completely come up with facts that aren't facts, that they just made up on the spot. And what's really, really dangerous is that the society we've been moving towards for a long time, and we've been talking about this, has moved from being a society where, back in olden days, you would memorize information yourself to a society that no longer memorizes information, but instead knows where to go online to find it. And so if you don't know whether something is factually true or factually false, and the only place where you're going for that information is telling you that it's true or false, and that may or may not be the case, you're in trouble, right? 
So we typically use facts to build arguments, to come up with uh inventions; we need bricks. And when the bricks are unreliable, things can fall apart pretty quickly. And I'm thinking of um therapy, right? And psychotherapy in this. If one part of the foundation that you're giving somebody who is in desperate need of help is majorly flawed, the whole house can crumble. Um, if some chatbot tells somebody, like, hey, this is true, and because of this, ergo, you know, A, B, and C are true, then they can be off on a false premise. They can be studying or um learning a treatment that this thing just created, that doesn't exist, that's not proven, or that's harmful, but they're trusting this machine.
SPEAKER_01Yeah, people move into it with desperation. Yeah. And so their perception is off to begin with.
SPEAKER_03Yeah.
SPEAKER_01And all three of us could agree with the fact that we have felt desperate at times. Yeah. And that clouds our judgment, clouds reality testing. Yeah. You know, so you know, going into this, if you're desperate to begin with, and then it feeds you information that that's problematic, the outcome might be problematic.
SPEAKER_00Yeah. Yeah, it's why you don't go shopping when you're hungry, right? If you're primed for yeah, it's it's exactly right. Yeah. And unfortunately, everybody that goes to a computer has some issues. Like there's nobody walking around on the planet that's totally, you know.
SPEAKER_01Right. And the chances are for us when we've had mental health issues, and I I have them, so it's not a criticism in any way. But when you enter into some of this with mental health issues, it can be really problematic because you're looking for answers, and sometimes it gives you information that's not even applicable to you, right? But because you have trusted it as a companion, you might trust it further down the road in some of the information that it gives you that is really inappropriate, yeah, inaccurate.
SPEAKER_00Yeah. I love that you said that, companion, and Kirk, you were using that terminology, or the metaphor of a dog, because I want to talk about people that are falling in love with AI. Um, you have real feelings, because human beings have real emotions, and these real feelings are shared with an artificial partner. Um, and so we've seen that with people, with AI companies like Replika and Character.AI, very early on. Uh, I want to say probably four or five years ago, I saw an interview with um one of the women behind Replika. And her husband, significant other, I don't remember exactly the relationship, I think it was her husband, was uh one of the people working with her on building this chatbot. And he passed away. And so what she did is, within Replika, within this software that they'd been building together, she recreated him. And she was saying in the interview that it helped her to deal with the grief, to be able to still have him there. She programmed it with his memory, and she just basically told him, this is who you used to be, this is what you did, this was our relationship, these were the moments that are important to us. And so she initially came to it uh as a way for her to be able to let go and to be able to say goodbye and get closure, but that didn't happen. It was always there, it was available, so she was never able to let go. And in the interview, she's not condemning it, she's actually saying it's great, because I get to continue to be with my partner. But he's dead. He is, right.
SPEAKER_01So, you know, right away when you start talking about that, you start thinking, where's the reality? Yeah, you know, where is the natural process of grieving? Yeah, which is not a pleasant thing, yeah, but it is part of reality. And I think on the first episode, I talked about the fact that M. Scott Peck talks about the fact that good mental health is reality at all costs.
SPEAKER_02Yeah, yeah, that's uh, that's really good, because that's what I was thinking about with people who are falling into these relationships. It's idealism, right? Like, we want to have this relationship that is, you know, really everything good that can be in a relationship, yeah, which is not reality, because it's lacking what relationships can contain, which is at times conflict, yeah, and at times disagreement, right? At times rejection. Yeah, so it makes sense, like, why that would be an appeal.
SPEAKER_00Yeah, it's incredibly attractive, right?
SPEAKER_02You're like, oh wow, just this whatever agrees with me all the time, loves everything about me. Yeah, heck yeah, you know.
SPEAKER_01But it's not. What could go wrong, yeah. Exactly, yeah.
SPEAKER_02But it's, you know, it's imaginative, really. It's not accurate to what, like, real human relationships contain.
SPEAKER_00And I think the more time that you spend in this faux relationship, this, you know, not-perfect perfect relationship, um, the harder it is to connect to humans, because it feels like they're broken, they fall short, right? Um, because if you have this thing, this relationship you think is idealized, going back to somebody that may reject you, that might be tired and not want to have a conversation.
SPEAKER_01Be in a bad mood, might disagree with you.
SPEAKER_00Exactly. It's gotta hurt. And so you start to disconnect from regular social behavior and you start to move away from the kinds of things that we need to be okay with in connecting with real people, because you don't need to be okay with those in speaking to these AIs. Um, people have talked about um the fact that these bots have helped them get through grief or loneliness. Um, and we were talking a little bit about this, Mark, with loneliness. Um, in a world full of people, in a world full of potential connection through the internet, through social media, through, you know, whatever, how are people so lonely, and why are they turning to AI for this?
SPEAKER_01You know, we were talking about that idea, Walter, that, you know, we want people to connect with us and we want people to be able to move into our thoughts and feelings in a really deep way. And I think that's why there's so much loneliness. There's two things. One is people are not good with sitting with loneliness, because if you sit with loneliness, you can build a tolerance. So the antidote to it is to sit with it, but that's the last thing we want to do in our current age. Yeah. And so we reach for our phone, we reach for many different things, computer, whatever. Yeah, you know, so there's that problem, but then there's the other problem that, you know, it takes work to create intimacy, yeah, and it takes vulnerability. And in this model, it removes those and it just has instant, you know, connection, which is just false, as you alluded to.
SPEAKER_00Absolutely. We're gonna continue to explore this and get deeper and deeper into it. But uh, this is part one of our AI explorations. We've got a lot to talk about. Um, one of the things I alluded to earlier was AI psychosis. We want to talk about um how people are using it in day-to-day life as a tool, um, how easy it is to have it go from being a tool to being a companion.
SPEAKER_02How about how to stay on its good side for when it wipes everybody out? I say please and thank you every time. Thank you, good sir. Thank you for your service. Remember me when you take over the world.
SPEAKER_00I will give you a computer chip every time you come back. But for now, uh, we're gonna let you go with that. I'd love for you guys to just really kind of consider um where it is that you use AI um today. You might not even be aware that you use it. Um, but Alexa and Google Assistant and all of these things, uh, Siri, all have AI elements baked in. A lot of the searches that we do online have AI elements baked in.
SPEAKER_02Even when we make certain appointments, right? Yeah. Well, Taco Bell also has AI. Yeah. You know, that threw me off the other day. I went there and I was like, uh, you're not a real person.
SPEAKER_01You know, and I think the other thing is to not just shortchange it, yeah. I think that we can also look at the healthy sides of AI. Oh, yeah, because there are some. Yeah, absolutely. Definitely. I agree.
SPEAKER_02I agree.
SPEAKER_00So I'd love for you guys to think about your relationship with AI, and maybe the next time you're gonna interact with it, maybe if you're gonna have a conversation with it, do a mental checkup just before and see what kind of a state you're in, what kind of mood you're in, what are you feeling?
SPEAKER_01And and notice if your mood changes as you interact with it.
SPEAKER_00Yeah, I think that's some good homework for you guys for now. But Kirk and Mark, as always, thank you so much for your friendship, for your love, for being here, for exploring weird uh tech topics with me. To everybody that's listening, we really appreciate you. Thank you so much for joining us today. We love you, God loves you, God bless you, and we will catch you next time.