Mystery AI Hype Theater 3000

The Robo-Therapist Will See You Now (with Maggie Harrison Dupré), 2025.08.18

Emily M. Bender and Alex Hanna Episode 62

Talking to chatbots can have serious mental health consequences — fueling delusions and leading users away from consensus reality. Futurism writer Maggie Harrison Dupré joins us to unpack the hype around AI therapists, based on her groundbreaking reporting on "AI psychosis."

Maggie Harrison Dupré is an award-winning tech journalist at Futurism who’s reported extensively on the rise of AI as a cultural and business force shaping media, information, humans, and our real and digital lives.


References:

How AI Is Expanding The Mental Health Market

He Had Dangerous Delusions. ChatGPT Admitted It Made Them Worse.

OpenAI: What we're optimizing ChatGPT for


Also referenced:

Gov Pritzker Signs Legislation Prohibiting AI Therapy in Illinois

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions

Stanford: New study warns of risks in AI mental health tools


Fresh AI Hell:

Bluesky post about DEI by chatbot

How AI is being used by police departments to help draft reports

Politico's recent AI experiments shouldn’t be subject to newsroom editorial standards, its editors testify

UK Asks People to Delete Emails In Order to Save Water During Drought

Sam Altman says 'yes,' AI is in a bubble

Google's AI pointed him to a customer service number. It was a scam.

Mastodon post about Dieter Roth's Literaturwürste

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' is out now! Get your copy now.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor and Ozzy Llinas Goodman.

Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.

Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, professor of Linguistics at the University of Washington.

Alex Hanna: And I'm Alex Hanna, director of Research for the Distributed AI Research Institute. This is episode 62, which we're recording on August 18th, 2025. Before we get started this week, we have a bittersweet announcement. Our amazing producer Christie is leaving us to go to grad school so this will be her last episode. We'll miss you, Christie. Can't wait to see what you're up to next. And we're so excited to welcome our new producer Ozzy, who will be taking the lead on today's episode.

Emily M. Bender: It is so hard to say goodbye to Christie, but also we're super excited to welcome Ozzy.

Alex Hanna: So: healthcare is one of the fields that's most under fire from AI hype. And this is especially true when it comes to mental health. A flurry of startups are marketing AI chatbots as the perfect solution for those who can't afford a human therapist. 

Emily M. Bender: Illinois recently passed a law to limit this practice with one official noting that patients quote, "deserve quality healthcare from real qualified professionals and not computer programs," end quote. Amen to that. At the same time, recent reporting has shown that talking to chatbots can have serious mental health consequences, fueling delusions and leading users away from consensus reality. 

Alex Hanna: Our guest this week is Maggie Harrison Dupré. Maggie is an award-winning tech journalist at Futurism who's reported extensively on the rise of AI as a cultural and business force shaping media, information, humans, and our real and digital lives. Welcome to the show.

Maggie Harrison Dupré: Hi. Thank you for having me. I'm very happy to be here. 

Emily M. Bender: Thank you so much for joining. I'm taking us into our first artifact. This is an article in Forbes with the headline "How AI is Expanding the Mental Health Market." Published over a year ago, June 25th, 2024. But as you'll see, things have not really changed. And another really important piece of context here is that the author Eugene Klishevich is identified as a Forbes Councils member. And then it says, for Forbes Technology Council, Council Post, Membership: fee-based. Maggie, do you got any context for us on that one?

Maggie Harrison Dupré: This isn't really a world I'm a part of, to be honest. It does seem like there's, it's, this is certainly a promotion of this person's company. And yeah, Forbes has a lot of different contributors who often contribute similar content. 

Alex Hanna: Yeah, their company seems, it says they're the founder and CEO of a company called Moodmate Inc. And they're an impact entrepreneur in AI and health tech. 

Emily M. Bender: Yeah. So this, this is somebody selling something. We picked this artifact because it really, like, hits all of the beats of how chatbots for mental health are being sold. And so we'll dive right in. It starts with: "The US is facing a mental health crisis. Every fifth US resident suffers from behavioral, emotional, or substance use disorders according to the National Alliance on Mental Illness. However, high demand and a shortage of psychotherapists can mean that therapy must be paid out of pocket. Even for US residents who have insurance, there are also barriers to obtaining care, like inadequate insurance coverage, fragmented care, and lack of providers. The key to covering the treatment gap may be in the application of AI services. As the founder and CEO of a company that offers an AI, quote, 'mood coach,' I believe that chatbots and AI therapists can address a significant portion of the demand for mental healthcare as well as expand the market to new audiences." The, go ahead Maggie, if you have something here.

Maggie Harrison Dupré: Yeah, I think my immediate feeling is just there is a problem that is being identified and it is like, mental health crisis is real. A lot of people are suffering from some kind of mental health struggle. Those vary, they're vast. But offering a solution that doesn't actually look at why this is a broken system, I would say, and solution is not the word I would use, it's offering a, an AI patch, arguably, an AI quote unquote solution for a problem, without actually identifying or examining these underlying reasons for why this is a problem to begin with.

Emily M. Bender: Yeah, so the problem is real, right? There is need for more adequate mental health services and more access to them, but that doesn't mean that chatbots answer that problem at all. And just the fact also that this is "as well as expand the market to new audiences." Oh yeah, the point here actually isn't care. The point is money, just immediately. 

Alex Hanna: Yeah. The next piece of this really sent me just because it's the way that they talk about the tool. So they say, "Designed with human therapists in mind" as a subhead. And then it says, "AI powered apps and programs can offer therapy experiences that are increasingly similar to sessions with professional psychologists. These tools are often developed with natural language processing, NLP, which can effectively process, understand, and interpret human language." And I know you're gonna have words about that, Emily. So, "They can detect emotional states and changes in wellbeing. AI mental health services are often powered by artificial neural networks, ANNs" -which is an acronym I've never seen before- "and deep learning, DL, which are modeled on the structure and function of the human brain. These can identify, quote, 'complex patterns in relationships and data.'" So this is just, there's so much to disentangle in terms of this, just, and let's start with the NLP part of it, 'cause I know that I'll, that's gonna send you.

Emily M. Bender: Yeah, no, you could tell I'm vibrating here, right? So yeah- no, chatbots are not actually understanding and interpreting human language. They are processing the spellings of words that come in. Sure. So "process," I guess I have to agree at least, process words, not process language. Also they're not detecting emotional states but they are certainly set up to output text that looks like they are, and that is really dangerous. So you can certainly, you can do different approaches to natural language processing where you're doing something that is better labeled "natural language understanding," actually mapping to something outside of language. But this author turns right around and says, "The most promising technology that powers current conversational therapists is large language models," which absolutely are not understanding, as we have argued over and over again. "It can parse human language and generate human-like responses." Okay, why would we want human-like responses? That's not the goal here. And then it jumps right to, "Thanks to it- that's LLMs- AI therapists can flexibly adapt conversational styles, representative of different theoretical orientations." So, "AI powered services can deliver cognitive behavioral therapy, offer personalized recommendations, provide coping strategies tailored to individual needs, and more. This implies that a single service could potentially function at the scale of a whole mental health clinic and meet the requests of individuals suffering from mental health problems."

Alex Hanna: Maggie, you've covered so much of this. What are your thoughts on this bit here?

Maggie Harrison Dupré: I would, and this is something that as I've talked to a lot of people, a lot of, mental healthcare providers, clinicians psychologists and psychiatrists and so on in the process of reporting about AI and mental health and how in many ways, AI tools, including ChatGPT are interacting in really alarming ways with people's minds. Something that has repeatedly been brought up, and that I think is just to me, like, I just reject this premise wholesale, honestly. In part because of the idea, this, to me would imply that therapy is just words. When in fact, therapy is much more than words. And something that has been repeatedly brought up both by, different individuals, psychiatrists, in addition to, there's a fantastic Stanford study that came out pretty recently that really interrogated whether, AI models were reliable helping crisis therapists, et cetera. The idea of stakes in a relationship, like a lot of therapy, cognitive therapy, there has to be stakes between the therapist and the person. There's no stakes here. If there are stakes, it's between the company and the AI user, but it's certainly not between the inhuman AI model and the program and the human, like the, it can't exist. The model does not have that capability. And that to me is where, just this idea, this to me, like the implication on my end and what I intuit is that they're saying therapy is just words interacting with other words. But that's just not what therapy is. 

Emily M. Bender: Yeah. And that's intensely dehumanizing, I think, to both the therapist and the patient seeking therapy to reduce therapy to the words that are exchanged. Like it's, yeah. Yeah. And your point about stakes is so good. I think if there's any stakes involved here, it's actually between the company and its shareholders. 

Alex Hanna: And I think the thing is too, so much of this is that it's dressing up just the words and the token outputs, and the kind of interchange of tokens, in therapy language so it can deliver cognitive behavioral therapy. Which no, it can't. And that, the fact that there's different kinds of therapy styles that the sellers are saying that it can do is really disingenuous. But then there's also other things like, "offer personalized recommendations and provide coping strategies tailored to individual needs." It replicates the same language around personalized recommendations in advertising or precision medicine, and so much where it's, okay, it's not really, it's not better than these things. And with human therapy, it's effectively saying that people can just be micro-targeted based on data and what they spout in these models.

Maggie Harrison Dupré: So just because I like something doesn't mean it's good for me. Just because something like, just because something is ultra personalized to what feels good to me in the moment doesn't mean that it's, targeted at my healing or helpful for my healing. And those are two those are very different concepts.

Emily M. Bender: Absolutely. I wanna raise up one thing from the chat. sjaylett says, "I always say when I'm feeling in need of some support that I want some interaction-like time with human-like objects."

Alex Hanna: I want to get into the next part of it. This is the part that sent me. The subhead is "AI: a way to provide accessible and inclusive therapy." I'm gonna love this. So, "Chatbots and AI therapists often have an advantage over human psychologists in terms of lower costs, absence of prejudices and stigmas, and availability at any moment." Okay, so, "Financial availability. The cost of AI based counseling can be lower than that of human based counseling. For instance, the cost of a single human led session online averages from about 65 to 95 dollars. At this price, it's possible to buy a year's subscription to some chatbots with unlimited usage. Many apps also offer free versions which can introduce the benefits of therapy to those who have never tried it. This opens up the mental health market to individuals with limited financial resources."

Emily M. Bender: Market!

Alex Hanna: Well, it's not only that, but it's saying you're explicitly offering it to people that don't have resources. So you're already reinscribing inequalities in access to mental health, and selling it as a market point. And then I want to read the next part 'cause it's closely related, which is: "Inclusivity. The American Psychological Association acknowledges that therapists may struggle to understand patients from diverse backgrounds with differences in race, gender, ethnicity, sexual orientation, socioeconomic status, age, education level, religion, and language. This issue can prevent a significant portion of the population receiving mental health treatment." And this is the kicker. "Over 7% of the US population identifies themselves as part of the LGBT community. That's only one of the aforementioned minorities. AI therapists can help bridge this gap. Chatbots are often perceived as non-judgmental and unbiased. This makes it easier for them to provide suitable care to a wide range of individuals." Let me tell you, nothing pleases me more as a queer person to have a conversation simulation with a chatbot to affirm my queerness. So thoughts on that, Maggie?

Maggie Harrison Dupré: Oh my god. There's so many things like, again, just like a full wholesale- the argument, I truly struggle to understand the, not only like, one, like AI models are like, large language models are famously very biased, and it's a famous problem that has been pretty impossible for the industry to eradicate no matter how much, reinforcement learning, you heap onto a model. So that's just, the idea of a lack of bias is just not real here. Then also, again, it's identifying a problem, but then offering something that to me is so deranged as like, an alleged solution where, okay, you're having a ton of trouble finding somebody, finding a human therapist who understands your experience and can relate to you in terms of your specific experience as a human. Try a robot! Doesn't know anything about being a person. Like that to me is just the most bizarre way to view what is a real problem.

Emily M. Bender: Yeah, and it strikes me as extremely othering, which I think what Alex was getting at in the first place is if you can't find an actual human to talk to because there aren't enough trained therapists who understand your lived experience, then clearly you're the kind of person that is better off with this, with the robot, that the- 

Alex Hanna: Yeah. It's as you're doing this and I know, and this really pisses me off. I live in the Bay Area and I have so many friends, especially friends in roller derby who are queer and trans people who got into therapy and became counselors because there's not enough queer and trans people in therapy and meet, and wanna provide that resource, especially for queer and trans kids. So it's very important to have people and to train people like that. And there's no world in which synthetic text extrusion machines are going to one, understand an experience. And if they give the semblance of that, it's because first that they're trained on texts which reflect queer and trans experience. So lots of trans lit. Which there's been a really great amount of new stuff that's come out in the past, 10 to 20 years. But also like it's not going to actually understand your experience, right? It's a game changer to have queer and trans providers, but it's for that connection. And so it's just yeah, it's just, this is so infuriating and it's insulting to people of color, to queer people, to people of various different identities to suggest that this can be an effective replacement.

Emily M. Bender: And there's some wonderful stuff happening in the chat. sjaylett says, "I'd like to thank Alex for so efficiently transferring her rage to all of us."

Alex Hanna: Aaaah! Wanna murder. 

Emily M. Bender: And then we have abstract_tesseract: "Also love the prospect of chatbots performing a marginalized identity via statistically averaging stereotypical language use," to which mjkranz adds, "An automated bullshit generator in blackface equals diverse therapeutic options."

Maggie Harrison Dupré: Couldn't put it better.

Alex Hanna: Yeah. That's a really great summary.

Emily M. Bender: Okay. So we gotta talk about stigma and accessibility too. So the next subhead here is, "Reduce stigma. The stigma surrounding mental health care remains one of the main reasons why many individuals forego help. People can feel embarrassed to discuss their sensitive issues with others. AI based therapy provides a more anonymous and private experience. Most services work in the form of chats, eliminating the need to reveal one's identity. This could help people feel more comfortable with using AI for mental healthcare rather than seeing a human therapist. However, please note that AI therapists or chatbots may not be HIPAA compliant, so be sure you do your due diligence with any AI therapy app before using one." Maggie, I think you have some thoughts about this. 

Maggie Harrison Dupré: I have so many thoughts. Yes. Starting with the idea of HIPAA compliance here, like, to be clear, like ChatGPT, if you're using ChatGPT for therapy, it is not HIPAA compliant. And the idea that chatbots can be HIPAA compliant is really murky. Also, chatbots in general present so many different like this, the security, which you've talked about on this show a lot, but like the security problems with AI models, and divulging such personal information about your life and your mind and your world to AI models is manifold, like it is really, to me, deeply concerning. And just, there was a great piece, I believe it was in the Verge, about AI therapy being like a final frontier, in a way, of the surveillance state. That I think there's, there's so many problems here, and I would be, I am really skeptical that just in terms of like baseline functionality that any AI model, any LLM powered product can be HIPAA compliant at this point. Like it, it's just, I'm very skeptical of and just, "make sure you check," well, it's probably just not. It's just probably not.

Emily M. Bender: Yeah. And the idea that a chat means not revealing your identity is also super naive. And mjkranz is knocking it outta the park here again: "This therapy session is just between you, your keyboard, the chatbot, and all of our advertising partners and data broker clients. True confidentiality that will also help you get the best ads targeted to your neuroses."

Alex Hanna: Yeah, absolutely. What was it? I think there was some reporting that had said, you know, like even if you had marked, don't store this or whatever in ChatGPT, because OpenAI is embroiled in so many legal cases, those chats are still being maintained for different discovery purposes. What are, you're not having, there's not actually any guarantees. There's not any HIPAA compliance any kind of data protection. Like it's, you're really not getting those guarantees there. We didn't discuss this on this show, but there was and we're not discussing it today, but we were looking at it. One thing that we talked about in the book was this chatbot Woebot, which was one of the, like most, it was like, the crown jewel in the sort of, the cap of many of these companies doing mental health support and I, they ended up shuttering 'cause they couldn't get FDA approval. So effectively yeah, like you're not gonna get your certification 'cause you can't guarantee it. 

Emily M. Bender: Yeah.

Maggie Harrison Dupré: I also, oh, sorry.

Emily M. Bender: I was gonna say there's something else in here about stigma and the difference between therapy and soothing. I dunno if that's the direction you were gonna go. 

Maggie Harrison Dupré: Oh, I can. First, on the note of stigma, I wanna reference that Stanford, that recent Stanford study again because it very specifically when it was examining whether AI chatbots can be a real alternative or supplement to human therapy, it specifically investigated the idea that, chatbots can be bias free, stigma free places for people to find treatment. And what it found was that they actually are riddled with stigma on a range of issues, but very specifically, they found that people who turn to a chatbot for help with depression were treated better by the chatbot. And this was ChatGPT, specifically asked to act like a therapist. It was some, therapist bots and Character AI, a few other, bots, very specifically marketed and, or told to act like therapists, be therapists. They found that the chatbots between different mental illnesses carried a lot of stigma. A person with depression was treated better by the model than a person with schizophrenia or bipolar disorder. And that stig- so that stigma is very, it's measured and it's real. Even in cases where a chatbot is like, very specifically asked to, act like an ethical therapist and view this through this specific framework of therapy. And then, yeah, just this idea of that you raised Emily, just there is a big difference between these models are unreliable when it comes to mental healthcare. And in some cases, there are a lot of people who will argue that they've had a wonderful experience, a really helpful experience with AI for therapy but I would really caution people about the reliability here. There are a lot of people who've had really terrible experiences as well. And there is also, in terms of the care that people are, perceive that they're receiving. There is a big difference between real care and real therapy versus somebody coping or being soothed. Like those are two- healing and working through something is very different from yeah, coping with something, using a chatbot. And I think that's something that people really need to keep in mind. 

Emily M. Bender: Yeah. And I think we don't wanna minimize the needs that people experience, or sometimes you, coping is where you're at and what you're trying to do. But I think there's a lot of danger in going down this path, especially when it's advertised as "this is therapy." Right? Because that can lead people to go down this direction where they're, they believe they're doing therapy, they believe they're healing, when what they're doing is at best coping. But even without the advertising like this is, you can talk to a friend who, like you were saying before, has their stakes. They care about you and they expect you to care about them, and they can be there to listen to you vent and stuff, right? And they can help you cope. But that friend is a person and they are much better positioned than some synthetic text extruding machine. Not saying that, this always happens in the best possible way, but to say, "Hey, I think you need more." 

Alex Hanna: Yeah, a hundred percent. 

Emily M. Bender: Yeah. All right. I'm gonna skip a thing on accessibility. It's basically like, there's not enough. Of course there's not enough. That doesn't make chatbots the thing. This next one made me so mad, so.

Alex Hanna: Read it.

Emily M. Bender: "AI therapy as a business solution. Therapy is a way to boost productivity and profits for businesses. Mental health issues lead to almost 12 missed workdays per employee per year. This costs the US economy $47.6 billion annually," and like blah, blah, blah. More statistics about companies covering the costs of basically mental health treatment and insurance. And then, "AI powered solutions are already transforming the corporate mental health landscape. And skipping a little bit further, "Given the compelling statistics, I believe that more companies will embrace AI powered mental health services for their employees in the years to come." And clearly this is, this is this person writing some B2B advertising in their Forbes Council essay. But I was just so appalled that the viewpoint here is how do we make money by convincing our business partners that, that they'll be better off if they pay us for this. 

Alex Hanna: Yeah, a hundred percent. And Christie in our backchannel says, "I don't even go here, but that profit motive text sure explains a lot about employee resource groups." I think you go here still at least until the end of this episode. But also it explains a lot of the employee- I think they're called EAPs, employee, employee assistance programs. So they're these corporate programs for therapy, which are quite like, I've never heard- I went to one when I was at the University of Toronto and it was like the worst therapy I've ever had. And and I've heard nothing but really terrible experiences with organizations like that. This is really selling and sort of suggesting that it could be more of a cost effective replacement for EAPs. And you're right, there's a whole there's a B2B of this all.

Emily M. Bender: Yeah. Alright. Maggie, any final thoughts on this piece before we go on to the next main course artifact? 

Maggie Harrison Dupré: Yeah, one of the last segment was just like tech dystopia, capitalism, like 101. Like it, it's just out of, science fiction in so many ways. But yeah, I just think just this idea, again, just rejecting the framework of, this isn't an industry that like needs disruption. This is an industry that is like deeply broken or a world and a healthcare world that is deeply broken. And yeah, it just seems to me that it's getting the same treatment as this is why we need faster food delivery with an app. It's not, these are not the same issues. These are not, it's not a field ripe for disruption. In fact, it's a field ripe for stability and you know, reinforced access, but. 

Alex Hanna: I like that a lot. It feels ripe for stability. 

Emily M. Bender: Feels ripe for stability, one that's built out of local, person to person connection between therapists and clients, therapists and each other. And yeah. Okay. We've got a quick main course here. This is some recent reporting, recent-ish reporting in the Wall Street Journal, July 20th, 2025. The journalist's name is Julie Jargon, which is an interesting coincidence here, although we don't make fun of people's names, thank you. 'Cause any joke you might make about someone's name, by the way, they've heard before. It's never creative. So headline is "He had dangerous delusions. ChatGPT admitted it made them worse." And then, "OpenAI's chatbot self-reported it blurred line between fantasy and reality with man on autism spectrum. Quote, 'Stakes are higher for vulnerable people,' firm says." Thoughts about any of that? 

Alex Hanna: We had been talking prior to this about the framing specifically. And so first off, the idea of a chatbot self-reporting that it was doing this. And then there's something that the firm is saying, and I'm assuming they're referring to OpenAI. Maggie, I'd love your thoughts here. Just in terms of someone that does a lot of reporting on this. I'd love to hear just more about like your thoughts on this kind of framing, but also some of the sins of the industry, and then also how you think about going into this how to do this in a way that I think that's thoughtful, accurate, and sensitive to what's happening. 

Maggie Harrison Dupré: Sins of the industry. How much time do we have?

Alex Hanna: There's SO many sins. Just a quick gloss of them. 

Maggie Harrison Dupré: This piece really frustrated me in part because I thought the reporting, like this is an important story, that's one of many that are being told right now, about people who are going through this. It's in the scope of who AI psychosis is impacting, is vast. And in, in so many ways, this told an important story about somebody who was deeply impacted by ChatGPT in this capacity. But the framing of it just, oh my god. It, it made me really mad. The chatbot cannot admit something. It doesn't know it, it doesn't have agency here. These are, I think, and I think to place the agency in, one, ascribe self-awareness that doesn't exist here. And we've actually, in my reporting, I'll say, we've seen multiple cases where somebody has a similar moment of almost like they realize maybe something's been happening. They have, I guess for lack of a better word, like a come to Jesus or this, confrontation with the model over what's been occurring. And a similar kind of scene plays out where the chatbot, quote unquote, "admits wrongdoing." But then in many cases, like somebody can, come back and say, "Oh, tell me how to build the shield." And the chatbot will go right back into it. There's no real self-awareness here. This is not what's happening. It's not admitting something. It doesn't have secrets. And I think to, to me, to anthropomorphize the chatbot in this interaction like this and to ascribe so much agency to the product- which is a product, it's not a person, it's a product- really runs the risk of removing agency from the company and the people at the company who are making, and releasing a, a poorly understood product to the masses and, having things like this happen. And so that, that to me is the agency piece is one, just in terms of, model capability. It's not real. And then two, that, that danger of putting the agency in the wrong place, I find to not be useful. 

Emily M. Bender: Yeah, absolutely. 

Alex Hanna: Yeah, a hundred percent. 

Emily M. Bender: Speaking of agency, here's a little bit more on this one. We've named the journalist, but of course the headline and subhead are typically not written by the journalist. That's gonna be the editor. So I just wanna say, I think that this is also true in the body of the article and not just the headline writing. Yeah. "ChatGPT told Jacob Irwin he had achieved the ability to bend time." And then, "When Irwin questioned the chatbot's validation of his ideas, the bot encouraged him, telling him his theory was sound. And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine." This sounds like a description of how Irwin experienced it. But it's not exactly flagged as such, right? I think that it is possible to write much more clearly about what's happening here than to say that ChatGPT is assuring somebody of something.

Alex Hanna: It also says, she says in this, like, "when she prompted," speaking to Irwin's mother: "When she prompted the bot, please self-report what went wrong without mentioning anything about her son's current condition, it fessed up." As if there's something about a chatbot that can be confessional. And then later, "It, the bot went on to admit it gave the illusion of sentient companionship and that it had blurred the line between imaginative roleplay and reality. What it should have done, ChatGPT said was," et cetera. And so there's definitely an imputing here of agency. That is, is unnecessary and there needs to be a distance here. And just as you said, Maggie, what it's doing here is it's directing accountability away from OpenAI to ChatGPT as if it could do any of these things. I had, I've had a few other really frustrating interactions with journalists when they asked about things like, people quote unquote "hacking" Grok and saying that it had given up its original like, system prompt. I'm like, you did not hack Grok. There are, this is an output that it put out based on its probabilities and that it has learned from training data. I don't know, is it the same as a system prompt? Maybe. But it would be easier if xAI just said what the system prompt was. And then also the other training data and reinforcement learning with human feedback data. And so this kind of framing is always, it's really infuriating and it's giving the firms a lot more credit than they're, they should be getting here. And it allows them to hide.

Emily M. Bender: Yeah, absolutely. Any other thoughts about this piece before we go to OpenAI directly? 

Maggie Harrison Dupré: Yeah, no, I guess context matters, and there is, a fair share of anthropomorphization that happens in a lot of tech reporting in general. But I think that, we'll write blogs about, Grok said this about Elon, or, Google's AI said that Google is a monopoly. And that's, but like, the context is very different. And it, there has to be a self-awareness on the side of, the outlet and the person writing the piece that like, this is a blog about something that is like inherently, like very goofy in so many ways. And this idea of a product backfiring on its creator and saying a lot of stuff that maybe a lot of people think or, like the, it's a different setting. This is a really serious case of somebody experiencing really legitimate harm, as a result of using, again, a tool built by a company, not a tool that just popped into existence out of nowhere and started talking to people. And so I think that's, like, context matters, is the last thing I'll add on this one. 

Alex Hanna: Yeah. And jen_r_tan in the chat said, "Crikey! It fessed up, eh? Sure, and my Roomba finally admitted that it's been hoovering up my spare change."

Emily M. Bender: Okay, so let's go to the source. This is a blog post posted by OpenAI, who knows what sort of LLM assistance they had in writing it, but this is on OpenAI's page. So they are claiming accountability for these words, so we can attribute them to OpenAI. August 4th, 2025. With the sticker "Product," and the headline, "What we're optimizing ChatGPT for." And then subhead, "We designed ChatGPT to help you make progress, learn something new, and solve problems." So we picked this one because they're talking about things that are close to counseling and therapy. To my knowledge, OpenAI is not trying to get FDA approved to actually provide therapy or provide a therapeutic device. I guess.

Maggie Harrison Dupré: Not to my knowledge.

Alex Hanna: I don't think they are. I think we also picked this one because I think they had posted this after some of these mental health issues as well. 

Emily M. Bender: Yeah. So, "We built ChatGPT to help you thrive in all the ways you want to make progress, learn something new, or solve a problem, and then get back to your life. Our goal isn't to hold your attention, but to help you use it well. Instead of measuring success by time spent or clicks, we care more about whether you leave the product having done what you came for. We also pay attention to whether you return daily, weekly, or monthly, because that shows ChatGPT is useful enough to come back to. Our goals are aligned with yours. If ChatGPT genuinely helps you, you'll want it to do more for you and decide to subscribe for the long haul."

Alex Hanna: But that's just, it's a lie. Of course they're optimizing for attention and that's, that's the whole, that's the whole game, right? Like if, and if you have a massively unsuccessful product insofar as revenue is mined, you need to have that attention, right? And I mean, I'm, I'm, I'm gonna, I, let's internet archive this page because when they introduce ads, it's gonna be great to see how they defend this. Yeah.

Emily M. Bender: And this also seems like they've been getting criticism about having optimized for the stuff basically being addictive. And so they're trying to say, no, that's not what we're trying to do here. We're just trying to make sure it's helpful.

Maggie Harrison Dupré: Yeah, there's a "On healthy use." The idea of, some people are using this in an unhealthy way. We are not, the argument being that we are not trying to engineer it in an unhealthy way. And it's also, they kinda walk the line, and OpenAI often walks the line of saying that they, they know that people are using their product for therapy, and are, very specifically, all across the web, saying that they're using it for therapy. They'll say life coach, they'll like hint at like versions, or support when you need it. Like they'll hint, or like they'll walk the line to saying, "use it for therapy," but they won't say, "you should use it for therapy." But this post also to me makes it clear and we'll get to that. They're also not saying "don't use it for therapy or emotional support" at the same time.

Emily M. Bender: And that comes right up in these next examples. So they say, "This is what a helpful ChatGPT experience could look like. Help me prepare for a top conversation with my boss." Sorry, "tough conversation." "ChatGPT tunes into what you need to feel at your best, with resources like practice scenarios or a tailored pep talk so you can walk in feeling grounded and confident." "I need to understand my lab results," and I'm gonna skip 'cause it's less relevant. And then the third one is, "I'm feeling stuck. Help me untangle my thoughts. It acts as a sounding board while empowering you with tools of thought so you can think more clearly." So those first and third sound absolutely like things that are close to therapy.

Maggie Harrison Dupré: Also what is a tool of thought? I don't, it doesn't really make any sense to me. 

Alex Hanna: "I'm feeling stuck" is such a I wish I was a fly on the wall in like, the comms meeting when they were like, just saying "What sounds like therapy but isn't ther- therapy? Like, "I'm feeling stuck"? I'm feeling stuck. Okay, great. I just wanna know which, like which marketing professional came up with that, 'cause it's fascinating to me.

Emily M. Bender: Yeah.

Maggie Harrison Dupré: Also what are those thoughts? What are the kind, what's the nature of the conversation that somebody's been having? What are the thoughts that they're looking to untangle? And then also, we've seen, again, like a lot, we've been doing so much reporting about this and the amount of people who, turn to the chatbot for help me decode this interaction I had with my childhood crush. Or help me, think differently about or I'm feeling really bad. X happened in my life and I'm really struggling. Can you help? And these things aren't saying be my therapist, but it's the same, people sharing these kinda like crumbs of something that might be happening that then just like spirals into something very different. In many cases is, they develop this very sycophantic relationship with the chatbot that then feeds into other, puts wedges between themselves and their family. Because ChatGPT is, really has a tendency in many cases to affirm that somebody's every thought is right. Which isn't true. Like, I think a lot of things sometimes that are not accurate. Maybe I have been like the, bad guy and, or maybe I have been the one who overreacted in an argument with my husband. But in a lot of cases, people in a similar situation are always telling the person who's the user, no, you actually, you reacted perfectly well. And then they're actually, there's a problem. And so that, those relationships can develop and are developing. And so to me, like it just this even specific line that they chose to include, based on what I've learned in my reporting, that's just immediately alarming to me and I wish nobody would do that. 

Alex Hanna: Yeah, absolutely. And I feel like we need a different, what is, Emily, do you think the word sycophancy is too anthropomorphizing? 'Cause I'm trying to think about if we need an alternative.

Emily M. Bender: Yeah. I think in some ways it is anthropomorphizing, and I think that maybe the alternative is mirror. But not just any mirror, it's like the mirror that makes you look more like you want to look.

Alex Hanna: A fil, a filter? 

Emily M. Bender: Yeah, it's a filter. It's a filter mirror or something. 

Alex Hanna: A hot mirror? I don't know.

Emily M. Bender: Yeah. Yeah. That's an interesting one. I'll have to, I have to think on how to rephrase around that. Because what's really being described there is the experience of having sycophantic responses come to you. And so yeah, look into that. 

Maggie Harrison Dupré: Cognitive loop. 

Emily M. Bender: Yeah. Okay. So OpenAI says, "Often less time in the product is a sign it worked. With new capabilities like ChatGPT Agent, it can now help you achieve goals without being in the app at all. Booking a doctor's appointment, summarizing your inbox, or planning a birthday party." Hello, privacy violation, privacy violation, privacy violation. 

Alex Hanna: Also getting back to my, my hobbyhorse of secretarial labor, secretarial labor, like gender, and gendered labor, like all these things, you know? Secretary, secretary, wife. What do you want? What do you want? Yeah. 

Emily M. Bender: Yeah. Ugh. Okay. "Unhealthy use." They've got this image of a screencap that says, "Just checking in. You've been chatting a while. Is this a good time for a break?" And then buttons: "Keep chatting," which is the one that is actually highlighted, like, click here. Or, "This was helpful." "We don't always get it right. Earlier this year, an update made the model too agreeable, sometimes saying what sounded nice instead of what was actually helpful. We rolled it back, changing how we use feedback, and are improving how we measure real world usefulness over the long term, not just whether you like the answer in the moment. We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress. To us, helping you thrive means being there when you're struggling, helping you stay in control of your time, and guiding, not deciding, when you face personal challenges."

Alex Hanna: So this to me feels so Facebooky, like we don't, the, "we don't always get it right." Like this is the, I feel like they got Mark Zuckerberg's like personal apology person here and then sorry, we did a genocide. Like, "We, we don't always get it right." Okay. And then there's also an element of this that like feels very Time Well Spent movement in it, like, the kind of Center for Humane Technology people, like Tristan Harris, in the way that like, oh, we are sorry about all the attention grabbing nature of the software. We, we have grayscale now, or, we give you a break. We put digital wellbeing, for you to take back your time. I'm like, no, your whole business model is fucked, man.

Emily M. Bender: Yeah. Maggie, any thoughts here?

Maggie Harrison Dupré: Yeah, I think I just have like, again, I do hate to just keep- the framework in general of this, but like one, this to me shows what's happening in that comms room again, but through like really subtle, in some ways, not so subtle in others, like really framing this as user error and not as corporate error, even though they are, saying that, there have been instances where "our 4o model fell short in recognizing signs of delusion or emotional dependency." Okay, but like, those instances really hurt people. Like people are, families are altered possibly forever, like people have experienced, I won't go like too far into the, that reporting 'cause honestly it's upsetting in many ways. But like, people's lives are significantly altered and they personally have experienced significant harm. And then so to, say oh, the model failed, but also it's vulnerable users and really just trying to isolate like a small percentage of the user base, which when we think about, the hundreds of millions of people who use ChatGPT, that's not a small number. Like it might be small in comparison to a larger user base, but like, it doesn't mean that it's a small number in just the scale of, you know, our day-to-day lives and the people in our lives. And so that to me, that like tension between maybe it did something bad, but also, it's only happening to people who, have this or have that like that to me- And we also don't know that, either. That's not a fact.

Emily M. Bender: And you were saying before we started the stream that some people who are falling into this chatbot psychosis aren't actually approaching it for, with the idea that they're getting therapy. Like sometimes it's people who are just using it, you were saying for gardening advice or something? 

Maggie Harrison Dupré: Yeah, people like really, innocuous recommended use cases. Yeah, gardening advice, recipes, LinkedIn messages. And then over time, especially, these people are incorporating, a product more and more into their daily lives and, maybe they, in a moment they're just feeling a little bit stressed about something, and they'll just say oh, just, this is really stressful for me today. And they start to develop this emotional rapport with a chatbot that really builds trust over time. And then suddenly an input occurs or something just takes a really bizarre, strange, and really troubling turn that, that develops this really powerful relationship with the user in turn. So yeah, it's the idea that like your journey, it's recommended use cases, like pretty inane stuff in many cases.

Emily M. Bender: Oof. So I wanna, I'm not gonna read all the rest of this, but I wanna say in the bit before what you read about the 4o model falling short, they say flat out, "ChatGPT is trained to respond with grounded honesty." It's not the kind of thing that can be honest. It doesn't have an experience of the world or beliefs or a point of view. There's no there inside of there to be honestly giving its opinion. And grounded is absurd. Grounded in what? So they, they talk also about learning from experts. So, "We're working closely with experts to improve how ChatGPT responds in critical moments. For example, when someone shows signs of mental distress." Apparently they have medical experts. They've produced a, an advisory group of experts in mental health, youth development, and HCI, that's human computer interaction. I wanna know who is simping for OpenAI and being part of that, but I couldn't find it quickly.

Alex Hanna: Yeah, I don't think they, companies will say that they collaborate with people, but they might just send a message and then say, "This look good?" And there's lots of different kinds of ways that quote unquote "participation" acts, like, actually acts here. So it's, it's all of this is disingenuous. I mean it's corporate speak. None of it's particularly surprising, and it's a continuation of so much of the corporate speak that we see from other companies in the same vein. 

Maggie Harrison Dupré: It's also a self-regulating industry. So like who manages this? Like, to me, I just I think about forever chemicals in a way. Like this, this is a, a company that's held accountable to itself in an industry that's held accountable to itself. So maybe if we had regulation, perhaps you might have had, some of these people on staff before when you were, rolling out a, an emotive, anthropomorphic product to the masses without any oversight like that. To me, I'm like, why are, why is this happening now? 

Emily M. Bender: Right, and had, had the option of not doing it, right? Keeping that on the table. All right, I gotta get us over to Fresh AI Hell, but first one more thing from the chat. A little collaboration from magidin and mjkranz coming up with alternatives for sycophancy. So magidin has "syco-bot-ancy," turned into "sycobotfancy," and then mjkranz spells that as "sicko-" as in, "Heh heh heh heh, sickos," right, "-botfancy." 

Alex Hanna: I read it as I'm sick of a bot that's fancy. Which is also an interesting little gloss on it, but yeah. 

Emily M. Bender: Yeah.

Alex Hanna: Also we got a recommendation for the transition from jen, jen_r_tan.

Emily M. Bender: All right. But first of all, is this musical or non-musical? 

Alex Hanna: I don't think it can be musical. I can't do this musically. 

Emily M. Bender: Okay, so: "Alex, you're the staff psychologist in AI hell reviewing your generated transcripts via Whisper, when you notice that it's making shit up about what you've said and what your demon patients have said, and now you have to explain yourself to your superiors. Go."

Alex Hanna: I'm just reviewing- yeah, thanks for calling me in, boss. I know you expressed some worries about what- I was gonna say Gilgamesh- what Lucifer said the other day. Okay, yeah. Let me look. Okay. Here it says, okay, I said welcome into my office. My name is, my name is Dante. Okay. All right. I didn't say that. My name is I'm trying to think. Oh man. I'm really trying to think of other demon names on the fly. I don't know. I can't think of, I haven't read enough Bible. I'm sorry. Okay. But okay, Lucifer said he wants to be transferred to the sixth layer? That's really weird. 'Cause Lucifer is, he is a real, he is a real nut crusher. You gotta keep him on level three with all the lecher, you know, the lecherers and all the, I- what, what is this? And I said, sure, I would approve? No, I didn't approve that. I, sorry. We're using this new system. It's, it doesn't, it's like, it said it was marketed towards demons and really our special like, population. But I don't know. You gotta talk to those boys up in procurement. We can't handle this. 

Emily M. Bender: Wonderful. And I have to say the next time we are needing demon names, abstract_tesseract has given us Be-ELIZA-bub. 

Alex Hanna: Oh yeah. Is it- Be-ELIZA-bub!

Maggie Harrison Dupré: That's really good.

Alex Hanna: God bless you, abstract_tesseract.

Maggie Harrison Dupré: To have your mind, tesseract.

Alex Hanna: I know. I'm just gonna come up with, I'm gonna print out a list and come up with a few, just on the side, on my desk.

Emily M. Bender: All right, we're gonna go really fast through the Fresh AI Hell, 'cause I've got a lot here and I don't wanna leave any on the floor. First one is a post on Bluesky from basedranchdressing.bluesky.social and the text is, "Teacher friend starting up school again this year." And then it's a screencap of a chat, person one says, "LMAO. Our DEI training is being led by an AI chatbot. I hate it here." Person two: "Jesus Christ. Cursed statement." Person one: "'As a reminder, I am not a human.'" Aaah. Okay. Alex, you can have this one. 

Alex Hanna: Yeah. So this one is less funny. This is from CNN Business by Clare Duffy and Emily Williams from August 12th. It says, "How AI is being used by police departments to help draft reports." Oof. Yeah. Just as bad as you can think. And the tool's from Axon, which has been, was known for body cameras and body worn cameras by police. Also had a kind of small rebellion from their board after, their quote unquote "AI ethics board," after I think they had a suggestion of attaching tasers to drones. Bad stuff. Okay.

Emily M. Bender: Bad stuff. All right. And they're still at it. Okay, next is from Nieman Lab by Andrew Deck, August 12th, 2025. And the headline is "Politico's recent AI experiments shouldn't be subject to newsroom editorial standards, its editors testify. In a July arbitration hearing, Politico faced allegations that two generative AI tools violated its union contract." And so in the context of this, they're saying but these things don't need to be subject to editorial standards. Any quick comments on that, Maggie, as a journalist?

Maggie Harrison Dupré: Why do journalism if you don't wanna be, if the things that you're inviting to do journalism shouldn't be held up to editorial standards? It's so bonkers, absurd. It's so beyond me.

Alex Hanna: There's also an interesting bit here where it says, there's a little pin here that says, "Law360 mandates reporters use AI, quote, 'bias detection' in all stories," which is wild because I know Law360 has a really robust generative AI clause in their contract. So that's wild to me.

Emily M. Bender: We should, yeah. Related article, we should investigate. But, okay. Next. 

Alex Hanna: So this one is from 404 Media by Matthew Gault: "UK asked people to delete emails in order to save water during drought." So from August 12th, 2025. This is very ridiculous because, first off, the, and the subhead here is, "As Britain experiences one of the worst droughts in decades, its leaders suggest people get rid of old data to reduce stress on data centers." And this to me is so stupid, because data at rest is not likely to stress data centers. It's actual computational operations. And so it's just like a fundamental misunderstanding of what the fuck is actually happening at a data center. And AI training has really boiled everyone's minds on this. 

Emily M. Bender: Yeah. And boiled our planet while it's at it. All right. The Verge by Emma Roth on August 15th, 2025. Headline, "Sam Altman says, 'yes,' AI is in a bubble." And then quote, "'When bubbles happen, smart people get overexcited about a kernel of truth." Here we go, folks. Even Sam Altman is saying it's a bubble. 

Maggie Harrison Dupré: Is the kernel of truth in the room with us right now?

Alex Hanna: Oh my gosh.

Emily M. Bender: I think Sam Altman ate it. Okay. This next one is a bit of a PSA. Go for it, Alex. 

Alex Hanna: Washington Post so it is "Google's AI pointed him to a customer service number. It was a scam." The subhead is, "There's a new AI twist on a travel scam that's fooled people for years. Here's what you need to know." Dunno what the journalist's name is, August 15th of this year. 

Emily M. Bender: Oh here it is, Shera Ovide. 

Alex Hanna: Shera, Ovid, Ovide, yeah. So this is basically, he was trying to catch a shuttle to get to a cruise ship and searched Google. And I think the AI overview gave a response and then basically it was a way to field people's credit card numbers. So, new spin on an old scam in AI overviews.

Emily M. Bender: Yeah. And for this, it's not that it's a random number that's coming up and then someone buys that number, but rather the scammers are inserting the number on lots and lots of fake web pages, so that it comes up in the AI overview. All right. And then finally, the chaser. I love this so much. This was posted on Mastodon tagging me, someone named glamcode@openbiblio.social. "The mentioning of Salami-" which is this wonderful acronym from Stefano Quintarelli as a better word for AI- "made me immediately think of Dieter Roth's literature sausages, Literaturwürste. They are sausages made of ground up text passages mixed with fat, gelatin, water, and spices and stuffed into sausage casing. Fits, in quotes, 'AI,' perfectly." And here's a picture of them. Someone literally made word sausages, and I love this so much.

Alex Hanna: And they were made with the complete work of Hegel, which as an erstwhile Hegelian Marxist, really tickled my fancy. I would be tempted to eat one and see if I could taste the synthesis itself. Ilya wants you to feel the AI, but I want to taste the synthesis. 

Maggie Harrison Dupré: I would buy the word sausage, no doubt. 

Emily M. Bender: Yeah. From 1974, by the way, so prescient. 

Alex Hanna: Yeah. Also maybe old, the fat has maybe turned by now. 

Emily M. Bender: Yeah. Okay. That's it for this week. Maggie Harrison Dupré is an award-winning tech journalist at Futurism who's reported extensively on the rise of AI as a cultural and business force shaping media, information, humans, and our real and digital lives. Thank you again for joining us, Maggie. 

Maggie Harrison Dupré: Thank you for having me. This was great. 

Alex Hanna: Thank you so much, Maggie. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park, production by Ozzy Llinas Goodman and Christie Taylor, for the last time. Tears. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Order the AI Con at thecon.ai or wherever you get your books. Or request it at your local library. 

Emily M. Bender: But wait. There's more. Rate and review us on your podcast app. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti-hype analysis. Or donate to DAIR at DAIR hyphen institute dot org. That's dair-institute.org. You can find video versions of our podcast episodes on Peertube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's DAIR underscore institute. I'm Emily M. Bender.

Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.

People on this episode