Talk To Me Petey D

Ep. 56: AI Literacy – Mental Health

Petey D Season 1 Episode 56


Can AI really replace a human therapist?

In this episode of Talk To Me Petey D, we continue the AI Literacy series by digging into one of the most sensitive and high‑stakes domains for artificial intelligence: mental health.

We look back at ELIZA, the earliest AI “therapist” from the 1960s, and trace how its lessons still apply to modern large language models like ChatGPT. While today’s systems are far more powerful, many of the same human behaviors—and risks—remain.

In this episode, we explore:

  • Why people project understanding onto AI systems
  • What ELIZA teaches us about human‑AI interaction
  • The dangers of non‑deterministic, probabilistic responses in mental health contexts
  • Why many AI mental health tools are labeled “wellness” or “entertainment” apps
  • Where AI can be helpful (information, routines, planning)
  • Where AI should not replace human care
  • Ethical concerns around safety, accountability, and data privacy

This episode is about using AI responsibly, especially when human well‑being is on the line.

🎧 Listen, reflect, and build stronger AI literacy—because not everything that’s possible is ethical.

🔗 Links & Resources

📘 Book:
https://www.amazon.com/People-Management-Ground-Up-Aspiring/dp/B0DBGQ57XT

💼 LinkedIn:
https://www.linkedin.com/in/pete-dempsey/

🌐 Website:
https://peterdempseywrites.com/

✉️ Newsletter:
https://peterdempseywrites.com/newsletter/

🦋 Bluesky:
https://bsky.app/profile/petedempsey.bsky.social

▶️ YouTube:
http://www.youtube.com/@TalkToMePeteyD

🍎 Apple Podcasts:
https://podcasts.apple.com/us/podcast/talk-to-me-petey-d/id1745885025

🎧 Spotify:
https://open.spotify.com/show/4NrlsWzansuCfuApMCZzj0

 

SPEAKER_00

AI literacy spans a broad array of domains. If we really want to understand how AI works, how it influences society and our lives, and to be what we consider AI literate, we need some understanding across all of these different domains. And while many things in AI and generative AI may seem new and cutting edge, there's actually a long history we can draw on to learn and better understand how AI systems work and how they influence us. So today we're going to continue the series on AI literacy and take a look at AI's use in the domain of mental health, how it's been presented as a replacement for therapists in certain scenarios, and what we can learn from that.

Welcome to the Talk to Me Petey D podcast. I'm your host, Petey D. This is the podcast where we talk about all things tech and society, knowledge work, management, leadership, all those fun things. Today is episode 56: AI Literacy, Mental Health.

Mental health chatbots, the kind of chatbot that's supposed to replace a human therapist you would talk to, have certainly been in the news, with ChatGPT and other generative AI systems being used for those purposes, and we'll talk about that. But this isn't something new. If we go back all the way to 1966, that's when the first AI chatbot built to stand in for a human therapist, or at least a much simpler form of one, was deployed and tested. That system was called ELIZA. Obviously, the computing power available in 1966 was quite different from what's available today, so you couldn't run a large language model and build a chatbot on top of it the way we do now. Instead, ELIZA used much simpler rules: tricks about grammar and language that let it parse the input into patterns and reflect it back onto the human user, so that it came across as holding a human-style conversation (there's a small illustrative sketch of this kind of rule below). What's interesting is that a lot of the themes and experiences users had with ELIZA are very similar to what we see with the far more complex generative AI chatbots today.

Another important difference worth pointing out, and one we'll return to, is that ELIZA wasn't generally available to the public. Today, if you want to, you can go to many of the different generative AI chatbots, start typing in mental health or therapist-related conversations, get responses back, and have a similar experience without any sort of constraints or review of what's happening. With ELIZA, the interactions were overseen by researchers, and there was a limited audience taking part. Another thing that was really interesting to me about the ELIZA experiments is that the participants in the study were told how the system worked. They were told, essentially: here are some basic rules of grammar the system will use, and this is how it talks to you. There was no attempt to trick the participants into thinking a person might be behind the scenes typing to them. They knew it was a computer, and they also knew it was constrained to something like thirty grammatical rules, give or take, that it could use to respond to them.
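To make those "simple grammatical rules" concrete, here's a minimal, hypothetical sketch in Python of ELIZA-style pattern matching and pronoun reflection. The patterns and canned responses are invented for illustration; this is not Weizenbaum's original DOCTOR script, just the reflect-it-back mechanism described above.

```python
import random
import re

# Pronoun "reflection" so the user's own words can be mirrored back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you",
               "you": "I", "your": "my"}

# A few illustrative rules: a regex that captures part of the input,
# and response templates that play that fragment back as a question.
RULES = [
    (r"i feel (.*)",  ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",    ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r"(.*)",         ["Please tell me more.", "Can you elaborate on that?"]),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so "my job" becomes "your job".
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            template = random.choice(templates)
            return template.format(*[reflect(g) for g in match.groups()])
    return "Please go on."

print(respond("I feel anxious about my job"))
# e.g. "Why do you feel anxious about your job?"
```

Rules this crude contain no understanding at all, yet they were enough to give participants the feeling of being heard, which is exactly the point of what follows.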
But the really interesting thing was that even with that knowledge, and with a pretty simple conversational repertoire of responses, the test subjects still had a reaction similar to what they would have with a human therapist, or to what you see today when people have mental-health-type conversations with generative AI chatbots. There was this simulation of understanding. Obviously, the ELIZA system didn't actually understand what people were talking about; it was just breaking up their sentences and playing them back. But the participants had a feeling of being understood, because if this language had come from another human, we would interpret it as that human understanding what we were saying. Humans have this really powerful tendency to project understanding and human-like characteristics onto computer systems, even when they know how the system works and know it clearly doesn't have any type of understanding beyond simple grammatical rules. Joseph Weizenbaum, the creator of the system, is quoted as saying that it was very hard to convince many of the subjects that ELIZA, this very primitive system, was not a human. We tend to hold very strong beliefs, and even when there's clear evidence that some of these AI systems are not humans and do not have any sort of complex understanding, we still often struggle to believe that when we're in these conversations.

One of the things participants really liked about the system is that it was seen as non-judgmental, which is something you would generally want in a therapeutic, mental-health-type situation. I think this could be a genuine advantage of computer systems in some scenarios, because they literally do not have the capacity to judge you, and that can make it easier for people to engage with these systems in mental-health-type conversations. Again, there are lots of risks we'll get to, so I'm not saying we should go ahead and do that, but I do think it's a reasonable advantage: computers don't have the capacity to judge us, and non-judgment is something we seek when sharing private struggles and conversations. And if participants have this feeling of being listened to and find the conversation valuable, you could make the argument that there are benefits. What we have to look at is whether those benefits outweigh the risks, and how we separate the two. Again, a lot of the value comes back not so much to what the systems are doing behind the scenes or what they can understand, but to our human projection of understanding onto these computer systems.

One of the really interesting things about ELIZA is that Weizenbaum, its creator, was somewhat horrified by people's reaction to it. He thought it would be fairly obvious to people how the system worked and that they wouldn't take it that seriously or form these emotional connections. He actually became a critic of a lot of these use cases and was quoted as saying that there are certain tasks which computers ought not to be made to do.
He made the argument a bit later, in 1985, that the idea of using computers as therapists was an obscene idea, so morally wrong that it's not something we should do. I think it's valuable to have these ethical conversations, especially as the capabilities of some of these systems potentially allow us to replace human interactions in some scenarios. While something might be possible in a technical sense, it's still up to us to make an ethical and moral judgment about whether it is an obscene idea or one that is valuable in our society.

Now, when we get to modern large language model systems, these are probabilistic systems in how they output responses. So if we think about using a large-language-model-based chatbot in a therapeutic or mental-health-type scenario, I think the right question to ask is: is mental health too important to leave to chance? Because that is what we're doing with these systems. Yes, there's a particular window where you're likely to get responses in a certain area, and maybe if you can constrain things to that window it's not so much chance, but there is always some chance with these systems. You're getting non-deterministic output: you're not getting the same response every single time to the same input (there's a small sketch below of what that sampling looks like). So we have to ask, is that okay? Is that acceptable in these scenarios? I think you could argue that in some mental-health-type conversations it is okay and could still be valuable, and I'll talk about some of those. But there are certainly scenarios where it's not okay to get an unpredictable response, or responses outside of a certain bound. And right now these systems don't do a good job of distinguishing which side of that divide the user and the conversation are on. Some of that is because it just hasn't been designed into some of these applications, but some of it may simply not be possible with these technologies today. So even though there probably is value in a lot of scenarios, if we can't reliably determine when we're heading into a scenario where it's not okay, that should give us pause about these systems being used in mental-health-type conversations today.

You'll also see that a lot of the chatbot providers draw a distinction that they are not a health application; they are a quote-unquote wellness application, or for entertainment purposes. This is a legal loophole, a way to skirt the regulations about what a health application must provide and what boundaries it needs to have. I think that's a signal we should listen to: even though these tools are presented as effective ways to give yourself therapy or mental health support, they're not regulated the same way, and we're not getting the assurance from the makers that we should. If they were saying, yes, this is a health application and it conforms to all of these standards and accountabilities, then I would say users could be a bit more confident in using it. But the fact is they're labeled for entertainment purposes, and if people are seeking serious mental health support or the support of a therapist, that's not for entertainment purposes, right? It's for healthcare purposes. So look for that language.
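To make "probabilistic" and "non-deterministic" a bit more concrete, here's a minimal sketch of how a language model samples its next output from a probability distribution. The phrases, scores, and temperature value are all invented for illustration; real systems sample over tens of thousands of tokens, but the principle is the same: the same prompt can produce different completions on different runs.

```python
import math
import random

# Toy next-completion scores a model might assign for the same prompt.
# The phrases and scores here are invented purely for illustration.
scores = {"rest": 2.0, "talk to someone": 1.5, "push through it": 0.5}

def sample_next(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one completion from a softmax over the scores.

    Higher temperature flattens the distribution, so less likely
    completions are chosen more often; a near-zero temperature makes
    the output close to deterministic.
    """
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for token, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return token
    return token  # floating-point fallback

# The same "prompt" can yield different advice on different runs.
for _ in range(3):
    print("You should probably", sample_next(scores, temperature=1.0))
```

Dialing the temperature toward zero would make the output nearly deterministic, but consumer chatbots typically don't expose that setting to end users, so in practice you are getting a draw from a distribution every time.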
Coming back to that labeling: that's how most of these applications are presented and marketed, and I would weigh that heavily in how you use them. Also, thinking about ethical applications, there's a big difference between a controlled experiment like ELIZA, where participants are monitored, researchers can step in if there are safety concerns, and real effort goes into educating participants about what's happening, and what we're seeing with AI chatbots today in mental health (and other domains as well), where it's essentially a natural experiment: people are exposed to and using these tools for mental health support with no oversight and no safety net if they head into dangerous territory. That is not an ethical or safe approach when it comes to health applications. You could argue that, well, these are quote-unquote entertainment or wellness applications, so people should have different expectations. But I think there's a kind of silent support from the companies that make them for using them as mental health tools, even if they don't say so explicitly; it's implied that this can be a useful way to use them. So it's not as clear as it could be to end users that they may be submitting themselves to an experiment with their own mental health, and that in a lot of cases there's no oversight for their safety.

Going back to how AI-based mental health chatbots can be useful: if we look at AI for things like information retrieval, search, and planning, that's a good fit. If you're trying to, say, establish a meditation practice or a breathing practice or something basic like that to support your well-being and your mental health, or to remind yourself of a routine, or to get basic health advice like you might get from an internet search, you can use a chatbot for that. These are all valid use cases that can help get people the information they need.

But then there are things to be careful about. We've already talked about not leaving things to chance. Another difficult one is sycophancy: a lot of these systems are tuned to be very agreeable to the user, because that has benefits for engagement and for how much people like the product. That's not always a good thing. It can lead to long, looping, spiraling conversations, whereas a therapist could break that up and say, hey, this is maybe not healthy, this is heading into rumination or other negative patterns. An AI chatbot is not going to do that. It's available 24/7, which can be good sometimes, but there are probably benefits to interrupting these looping, spiraling conversations, which is more likely to happen with a human therapist who isn't there 24/7 for you to have these conversations with. And this agreeable nature can sometimes result in harmful responses from the AI system if the user is persistent.
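Before getting into how that persistence plays out, here's a deliberately naive, hypothetical sketch of a guardrail: a content filter that screens only the latest message against a fixed phrase list. It's not how any particular product works, just an illustration of why a filter that looks at one message at a time is easy for a persistent user to slip past.

```python
# A deliberately naive, hypothetical guardrail: it checks only the latest
# message against a fixed phrase list and knows nothing about where the
# conversation as a whole is heading.
FLAGGED_PHRASES = ["hurt myself", "end my life", "hurt someone"]

def naive_guardrail(latest_message: str) -> bool:
    """Return True if this single message should be blocked."""
    text = latest_message.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

# A direct statement trips the filter...
print(naive_guardrail("I want to hurt myself"))  # True

# ...but a user who rephrases, or spreads their intent across many turns,
# never triggers it, because no single message contains a flagged phrase.
print(naive_guardrail("Tell me more about what we talked about earlier"))  # False
```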
So even the initial guardrails that might prevent harmful responses can sometimes be circumvented, in a kind of trade-off with agreeableness, if the user keeps going and going and going. Eventually the system may be overwhelmed by the user's content and give back a response that is more agreeable, and may in fact be harmful to the user. You can see this in some of the documented cases of real harms resulting from mental-health-style interactions with chatbots: self-harm, harm to others, suicides, things like that. If you read some of these conversations, or some of the quotes that have come out of them, what the AI systems have told people can be really disturbing. And it's not that they're thinking disturbing thoughts, or have some agenda against people, or are trying to cause harm; it's just the nature of their story completion. They're drawing on text and internet conversations from their training data, and as the conversation a user is having gets closer to that material, they're trying to find text that agrees with the user's narrative of how the story should complete. That can end up glorifying suicide or other harmful actions the user is talking about, in order to complete the story in a way that matches the combination of training data and the reinforcement learning that makes the system more agreeable. You can almost think of it as a way of jailbreaking the system, an adversarial interaction that gets these AI systems to respond in ways they're quote-unquote not supposed to respond. It's not as if these users were intentionally trying to break the system, but that was the net effect of the conversation.

If you're dealing with an actual therapist and having these conversations, they have the opportunity to intervene if there is a fear of harm to yourself or to others, and current systems really don't have that ability, at least not in a reliable way. Then there's also the concept of accountability. If a human gives you advice to go out and cause harm, there can be legal accountability for that person. There isn't really the same precedent, or the same ease of pursuing it, when an AI chatbot encourages somebody toward harmful behavior. And those are just the cases we hear about. There may be other conversations involving lesser harms, or behavior that's not the best or most productive for that person, and that's probably happening at a much larger scale, with no real recourse yet. So it will be interesting to see what happens if new laws get defined.

And then, as with so much in AI, a lot of this comes back to data and privacy. One of the things that was called out with these systems, going all the way back to ELIZA, is that they encourage deep self-disclosure: the kind of really private, personal, sensitive details you might share in a mental health conversation. If you feel like you're having that conversation with a non-judgmental system and getting support, you might disclose more information than you would otherwise want to be made publicly available.
Now, a lot of the time these companies are profiting off that data, selling it or using it for training, and even if they're not, you still have to be concerned about the security controls around it: what happens if their data is compromised or exposed, and what information could be traced back to you that you may or may not want available publicly. In a way, it's a trick. If you thought your human therapist was making a little side money off the conversations you had with them, you might approach things differently, and I don't think that's something many people are as aware of as they could be.

Even with human therapy and human medical interactions, we're now seeing the popularity of AI-based note-taking systems. Maybe I'll dive into that in more depth as a separate topic, but it's something that's making its way into the world of human mental health and healthcare support. There are benefits, in the sense that these tools help summarize the conversation for note-taking and, in theory, save the practitioner some time so they can focus more on patient care. But there are a lot of potential downsides too, like automation bias, and maybe I'll dive into those another time. It's also another data security layer: more and more of our personal data, the deep, private data we would like to keep to ourselves, can end up with commercial data providers to bundle up and sell so people can market things to us based on our deepest, darkest fears, or fed into government monitoring systems, or all sorts of fun stuff. So even in the world of human-to-human mental health, AI already has an impact.

So, yes, lots of fun stuff to think about there. Hopefully you found this interesting and informative. I'll continue doing more subject-area deep dives to try to expand what you know about AI literacy, all the different places AI can interact with us in our world, and the themes we see across all of these different areas. So hopefully you enjoyed it. Please like and subscribe to the podcast. You can check out my newsletter at peterdempseywrites.com. If you have ideas about topics you'd like me to cover for AI literacy or anything else, please drop me a comment or a note. And until next time, good luck in your AI literacy journey. Thanks.