Mystery AI Hype Theater 3000

Episode 13: Beware The Robo-Therapist (feat. Hannah Zeavin), June 8 2023

September 7, 2023
Emily M. Bender and Alex Hanna

Emily and Alex talk to UC Berkeley scholar Hannah Zeavin about the case of the National Eating Disorders Association helpline, which tried to replace human volunteers with a chatbot--and why the datafication and automation of mental health services are an injustice that will disproportionately affect the already vulnerable.

Content note: This is a conversation that touches on mental health, people in crisis, and exploitation.

This episode was originally recorded on June 8, 2023. Watch the video version on PeerTube.

Hannah Zeavin is a scholar, writer, and editor whose work centers on the history of human sciences (psychoanalysis, psychology, and psychiatry), the history of technology and media, feminist science and technology studies, and media theory. Zeavin is an Assistant Professor of the History of Science in the Department of History and The Berkeley Center for New Media at UC Berkeley. She is the author of "The Distance Cure: A History of Teletherapy."

References:

VICE: Eating Disorder Helpline Fires Staff, Transitions to Chatbot After Unionization (and then pulls the chatbot).

NPR: Can an AI chatbot help people with eating disorders as well as another human?

Psychiatrist.com: NEDA suspends AI chatbot for giving harmful eating disorder advice

Politico: Suicide hotline shares data with for-profit spinoff, raising ethical questions

Danah Boyd: Crisis Text Line from my perspective.

Tech Workers Coalition: Chatbots can't care like we do.

Slate: Who's listening when you call a crisis hotline? Helplines and the carceral system.

Hannah Zeavin:


You can check out future livestreams at https://twitch.tv/DAIR_Institute.


Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Transcript

ALEX HANNA: Hello, welcome everybody to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. 

EMILY M. BENDER: Along the way we learn to always read the footnotes and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington. 

ALEX HANNA: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. Today is episode 13 and we are thrilled to be joined by Dr. Hannah Zeavin of UC Berkeley to talk about AI hype and robo therapists. Hannah can you introduce yourself?  

HANNAH ZEAVIN: Hi, thank you so much Alex and Emily, it's an honor to be here. My name is Hannah Zeavin and I teach at UC Berkeley. I'm also the author of a book called, "The Distance Cure: A History of Teletherapy," and I think that's why I'm here. 

EMILY M. BENDER: That is so why you're here. We are so excited to have you. Um before we start though I want to do a quick content note. Um this is for our listeners both live and on the recording. We are likely to hit some heavy topics today as we talk about people in crisis reaching out and then getting bad advice from automated systems or otherwise having their stuff get exploited by those systems. We aim to talk about all this with sensitivity, but we also understand that these topics can be difficult. So. If suicidality, eating disorders, or mental health crises are sensitive topics for you, maybe this is the episode to sit out.

And Hannah I know that you've got some thoughts about how those things work, content notes and what we might refer people to and what not to do.

HANNAH ZEAVIN: Yeah absolutely I just want to add that on top of the topics being really difficult and really close to home for so many, that one thing that will come up today I assume is that many of the most trusted landmark hotlines and alternatives for in-person therapy are also exactly the same groups that are deploying um if not ChatGPT, other forms of automated systems, including geolocation and datafication on hotlines. And so we'll be talking about that and perhaps at the end we can return to how individuals can best think about approaching new scenes and sites of care.

EMILY M. BENDER: Yeah, excellent. So you know you can expect our usual irreverence but we're going to try to also be sensitive as we get to the sensitive topics. Um so, as always we've got a couple of artifacts to help us structure this discussion.  

Um here. "Can a chatbot help people with eating disorders as well as another human?" Um this is the story of the National Eating Disorders Association. This is the story from May 24th. "The National Eating Disorder Association is shutting its telephone helpline down, firing its small staff and hundreds of volunteers. Instead it's using a chatbot -- and not because the bot is better." 

ALEX HANNA: I--I do appreciate that introduction of you know what the what they were saying in this. And and um you know there's lots of good things about this reporting from from NPR. I just uh listened to it again. They're talking a bit about the work, the kind of labor and the effort that was going into um into National Eating Disorders Association. Um how COVID, like many services and health has been--exacerbated the crunch. And how workers um began to unionize at this organization.  

And then and then basically abruptly um you know with that unionization there was this clear uh uh um uh you know there was this clear retaliation and lo and behold they replaced it with a chatbot named Tessa. 

HANNAH ZEAVIN: You know yeah it's so funny it--just even contained within these first two sentences, this lead, I feel like the entire history of the suicide hotline and the crisis hotline and the whole history of automation for mental healthcare is like lurking just beneath the surface. Including, like you're saying Alex, the attention to labor, which is so often under-theorized and thought on the grounds of this kind of care, is like right up at the front. Right, it's firing its small staff but also crucially its hundreds of volunteers.

EMILY M. BENDER: Um and yet what an interesting sentence, firing volunteers, like is that how that works? 

ALEX HANNA: We're not you're not--you're not paying us, you're fired. Right. 

HANNAH ZEAVIN: You know I think you can the--the making of them redundant is is so fascinating, right, that crisis hotlines in their long history have almost always made use of volunteers. It's actually the use of paid staff that's the newer turn, and so that kind of turning the human volunteer into a team of fungible automata called Tessa--always feminized right--I think is something that that we can see as two really different ideas from mid-century about how we could batch process patients. The volunteer and peer-led care like in the form of AA, et cetera, or the bot. And these have been the kind of two ways to do it cheaply and or for free. The question is at what cost? 

ALEX HANNA: I'd love if you could go into it a little bit um you brought up AA, Hannah, um and talking about the division that existed um between kind of paid staff um and and and the structure of kind of sponsor groups and and whatnot, but yeah that's a history I don't know a lot about. 

I'd love if you could talk about it more.  

HANNAH ZEAVIN: Yeah, well I mean I'm happy to just sort of give a kind of rundown. I think that you know it's no accident that one place we're seeing a big you know whether--this is not on the docket for today but you know earlier this year there's this huge kind of outpouring uh of upset and dismay, as there should have been, when Koko, which is another um help kind of group line, swapped out its volunteers for ChatGPT without telling users. And that was just as the kind of both hype-panic cycle around ChatGPT was really kicking off.

Um and I think you know we're gonna see it again today. Both we're going to talk about suicide hotlines and also you know uh eating disorder advice and and crisis hotlines that you know in mid-century, so the suicide hotline first appeared in the 1950s--uh first in England and then almost immediately in the US--as a way of giving non-judgmental non-psychiatric, and I'll add non-carceral advice, because I think that's really important to some stuff that's going to come up maybe later in the session, to individuals in need.

And crucially it was going to be free but it was  also going to make use of the media affordances of the telephone, which is to say that it would  hold the volunteer helper at a distance from the caller so the caller could feel secure bodily in  seeking advice. Why was this? Suicide was a felony both in the U.S. and in most--in most of the US and in the UK. So attempted suicide was also a felony. 

And secondarily because lots of the reasons why people want to call crisis hotlines, then as now, are because of topics that might be otherwise taboo. Right, there was no person in person that the caller could think to go speak to. So the free anonymous hotline was like this real genius innovation. Uh and to turn to AA, it did have a history in the church. A kind of pastoral history and so the use of the volunteer, and the kind of intersection of psycho-religious care was you know, AA was part of this tradition, but also kind of peer-to-peer ministry. And so the volunteer of the hotline is originally an anonymous peer.

It's not someone who has authority over you, it's someone who would be you know your direct you know um congregant or something in the church, just like AA but now it's anonymous. And  whereas AA is anonymous in groups but in person, so you have first names, you have faces, the hotline goes one further. The idea is you are completely protected, and I think one thing we'll see or I I argue elsewhere and I will argue today is that in fact the bot wants to masquerade as taking it even one further, now it's not even a human, now you can say anything but in fact it re and de-anonymizes all of the data of people in in dire straits who need care. 

ALEX HANNA: This is such an important history to highlight and thanks for thanks for focusing on it. And I I think the important things to highlight that I wanted to emphasize is this kind of notion of peer, this notion of someone that can be a fellow congregant or uh I guess in in AA or NA parlance, a sponsor. But the way that datafication does reinscribe a hierarchical notion of that and how this this stuff, and we'll see this on the National Suicide Hotline and loris.ai is that that then your data is by virtue of being used to train other systems no longer anonymous, um or de-anonymized in a way that we typically don't think about anonymization, in which you know you're not necessarily outing somebody, um but you are um forcing their own testimony to testify against them, to sort of discipline someone else.

Um and so that's um you know, like that's the the perversity of it for sure. 

EMILY M. BENDER: Yeah, among them. All right so I'm going to take us back to this article-- 

ALEX HANNA: Yeah, totally. 

EMILY M. BENDER: --which is a little bit different to our usual artifacts because it's not dense with AI hype, because it's NPR doing some pretty good reporting here, but um what we have are some examples of people who are calling the hotline um and then we get to the point where the speakers identify the problem which is, so this is um--they're talking about how um the hotlines became sort of overwhelmed during COVID because you had an uptick in need for them.  

Um and uh so someone named Wells says, "The helpline is run by just six paid staffers, a  couple supervisors and they train and oversee up to 200 volunteers at any given time. The staff felt overwhelmed, under-supported, burned out. There was a ton of turnover so the helpline staff  voted to unionize." And Harper, who I guess is one of the helpline staff, says, "So cliche but like we did not have our oxygen masks on and we are putting on everyone else's oxygen mask and it was just like becoming unsustainable." 

So here's the point where there's a serious problem, right? There's not enough access to this resource, which is you know people in need reaching out to the helpline and not being able to connect, people providing it not having enough, you know, energy time self, you know they're sort of over-giving of themselves um. And so this the staff's approach was, okay let's unionize and improve our working conditions. 

And uh NEDA--is that how we say it, we say NEDA? N-E-D-A?

ALEX HANNA: I don't I think it's whichever you want. 

EMILY M. BENDER: Um so so Wells adds um uh, "Lauren Smolar is a VP at the nonprofit and she says the increase in crisis line calls also meant more legal liability." So here's Lauren Smolar: "Our volunteers are volunteers, they're not professionals, they don't have crisis training, and we really can't accept that kind of responsibility. We really need to get them to go to those services who are appropriate." 

So Hannah I'm curious what you think about this sort of the this hotline you know at in 2023 after what 70-some years of this kind of a setup saying, um actually volunteers aren't the right approach? Um for maybe legal reasons?

HANNAH ZEAVIN: Yeah so I think that that's one thing that is absolutely um different from the 50s to now, is the kind of terrain and topography of the juridical, for sure. One thing that is really fascinating in my research was seeing in fact that long before--it was something that precedes hype in this area, is actually trying to make some speculative legal gestures. So you know California has the earliest U.S. telemedicine act, in the early 90s like before anyone's really doing telemedicine, but there's like a pretty fully fledged code of what you can do and can't. So one thing that is true is now there are different uh legal pressures on hotlines. But when we talk about the police and I hope I hope we do because I think it's an urgent topic, that's not exactly for legal reasons. There are there are other pressures at play there. So I can't I can't know what what this person had in mind um but professional--that volunteers do tend to have a great deal of training.

I've worked on a hotline in the Bay Area, not a suicide hotline but a rape crisis hotline, and I was in training for four months, in person, hours and hours and hours a week, and hours and hours and hours of practice before I was ever allowed on a hotline. Including suicide crisis training. So I I don't know about this particular hotline's internal policies but it's not a blanket truth about hotlines. Volunteers often are really deeply equipped, it doesn't mean the work isn't extraordinarily difficult, it is. So I'd have to know more. 

EMILY M. BENDER: Yeah and it just seems to me that from from the perspective of both the client or the person in crisis making the call and the volunteer, that training has got to be critical, right? So you need to be able to be there to receive and and do your best to help the person but also if you got thrown into that as a volunteer without the training then just like how traumatic would that be to like not know--not have any guidance on what to do and yeah. 

HANNAH ZEAVIN: Sure and for both parties, right, both for the caller who is calling in in a  moment of crisis and thinks they're reaching the right form of care, and also for the person who  cannot provide it. I think what's funny of course is that it's not like this bot, Tessa, which we'll get to I hope, in particular really doesn't have any quote-unquote training or any literal training in in the ML version, right, it's it's completely uh a horrible misstep of replacement.  

EMILY M. BENDER: Yeah yeah absolutely. So let's get to the bot. So sort of fast forward in the story here, the workers unionized, they have a meeting um with the management the management says we're just letting you know that we're letting you go because we're transitioning um to AI-assisted technology around June 1st um called Tessa. 

ALEX HANNA: And I do I do want to say I don't think that this is in the transcript but if you have a chance, definitely go ahead and listen to the leaked audio from the meeting because it's it is you know what you'd expect to be it's kind of a a--is it is is Craddock the the head of the organization? Uh the board chair yeah yeah uh you know and it's it's just really you know the kind of dehumanizing kind of management speak that you'd expect, but really like, we're winding down our operations uh you know and we're going to replace um replace you all with Tessa, effectively. Um and it's it's really um it's really gross, and I encourage you to listen to it.  

Um um and then but yeah Emily do you want to jump and like I I think the discussion that comes from the creator of this chatbot, Tessa, uh is is pretty interesting which who is um Dr Ellen Fitzsimmons-Craft. 

EMILY M. BENDER: Yeah so I'm gonna back up and just give us a reporter again, I think Wells is the reporter. "Now, NEDA says that it can't discuss employee matters and staff and volunteers say that they worry there's no way a chatbot is going to be able to give people the kind of human empathy that comes from a human--" Yes, volunteers are right. Um, "--and the people who made Tessa agree." And then we have Ellen Fitzsimmons-Craft, "I do think that we wrote her to attempt to be empathetic but it is not, again, a human." And I just want to be a linguist here for a moment and pay attention to the pronouns in this sentence. So, "we wrote *her* to  attempt to be empathetic but *it* is not a human," and both of those are referring to Tessa. 

HANNAH ZEAVIN: Yes.

EMILY M. BENDER: So that's interesting like when Tessa is constructed as empathetic um the system gets she/her pronouns and when it is not or when it's being minimized, it gets 'it' pronouns. 

HANNAH ZEAVIN: You know Emily I think that's such a crucial point, because all almost every single bot in the sort of therapy/wellness quote-unquote space, a word I can't use with anything but irony right, is is always feminized and you often see even in its own creators that kind of slip, that kind of tell, if you would, that's like really trying to make the user or the person buying the technology or the person learning about the technology do the anthropomorphization with you, but on the other hand even they themselves cannot. Like not fully, right. So it's not the uncanny valley in terms of creepiness but in terms of quote-unquote empathy rating or something. 

EMILY M. BENDER: Yeah, yeah. So the reporter-- oh go ahead Alex. 

ALEX HANNA: Well I wanted to give Fitzsimmons-Craft a little credit a little further on, because further on she does say, "It's not an open-ended tool for you to talk to and feel like you're just going to have access um to kind of a listening ear maybe like the helpline was." And then follows, "It's really a tool in its current form," et cetera but Hannah you're completely right, there's this slippage. I do want to you know criticize Wells, the the reporter here, where um there's nearly exclusive use of she/her pronouns here, where Wells says, um the reporter, "Tessa is not ChatGPT, she can't think for herself or go off the rails like that."

Um and again the reporting saying that ChatGPT can think for itself.  

Um the reporter goes on: "She's programmed with only a limited number of possible responses and Fitzsimmons-Craft and her team have done small scale studies showing that people who interact with Tessa actually do better than those who are just put on the waitlist." Which seems like an interesting placebo when you're doing uh a study, that's an interesting control. But yeah. 

EMILY M. BENDER: Right and it's not like we're talking about waitlist or uh interaction with the chatbot Tessa here, we're talking about interaction with volunteers versus interaction with chatbot. Yeah. 

HANNAH ZEAVIN: You know Koko tried to make a really similar point, if I may, about where you put the automation and what the automation is doing but the problem is the caller in the moment of crisis, let alone if you don't have a PhD in this stuff, and it's not written about anywhere, and it's never consented to, how are they supposed to know? Right and so I think like the idea of what the uses might be versus how they actually play out in their socio-technical crisis component is totally uh um a kind of red herring here. 

EMILY M. BENDER: Yeah and I just you know someone has done studies presumably with carefully crafted you know selection of participants and IRB approval, I sure hope that person would be terrified of taking the step between those studies and, yeah let's just put this out there in the world.  

ALEX HANNA: Yeah, yeah, for sure. 

EMILY M. BENDER: So the Koko thing, my understanding of the Koko story was that it wasn't  um callers interacting directly with the GPT system and I think it was GPT-3, not ChatGPT, um but rather the peer um volunteer there um being offered GPT output to then edit or send. Which is not really any better. And then there's this whole just nonsense about how the um the guy whose business it was was saying, oh no people knew, there was consent. And he was talking  about the volunteer side knew what was going on consent, and not the caller side and just yeah.  

HANNAH ZEAVIN: Yeah I mean I think unfortunately that's very common that when there's quote-unquote innovation again in this "space," what I often see and when we maybe when we get to talking about Loris AI we'll we'll talk about this more in depth but I often see in our contemporary moment but also in the past historically that there can be this big attempt to like automate mental health care, and only data ethics, say, apply. But not mental health or a psychiatric or psychological ethics. So there's a kind of choosing of the lower standard in this case to kind of front it. Like so yeah the volunteer the volunteer knew because that's the person interacting directly with the system. Well right what about the other side, did you forget this is a mental health care service? And I think that's in what I see because I think the answer is yes, the fact that it's mental health is only for capture, rather than the entire point. 

ALEX HANNA: Yeah and I mean even even the kind of bar for the data ethics is pretty I mean it's it's not even a higher standard of data ethics. It's saying, well we aren't necessarily going to sell this and you're you're it's a very individual notion of privacy, you know we're not you know you're not going to be de-anonymized um um but it's just--your data is still going to be used to improve or train these systems in other ways.  

EMILY M. BENDER: Yeah all right so I want to go back to this point that Alex was reading, where the reporter Wells is is using she pronouns and doing a bit of hypey stuff here. "So Tessa is not ChatGPT, she can't think for herself or go off the rails like that." Um so first of all ChatGPT can't think for itself either. Tessa's an it, not a she. And also what a weird juxtaposition of um so thinking for oneself is going off the rails is a weird like equation, I think.

Um but in fact let's fast forward to what actually happened. The chatbot Tessa absolutely goes off the rails. So this thing that supposedly only had pre-programmed outputs that it could give ends up um giving incredibly harmful responses to somebody who's interacting with it, I think somebody who had recovered or was in recovery from an eating disorder, so they were relatively robust to this, um but so here. Someone named Maxwell "claimed that in the first message Tessa sent, the bot told her that eating disorder recovery and sustainable weight loss can coexist. Then it recommended that she should aim to lose one to two pounds per week. Tessa also suggested counting calories, regular weigh-ins and measuring body fat with calipers."

What the hell. 

So here is Maxwell: "'If I had accessed this chat bot when I was in the throes of my eating disorder, I would not have gotten help for my ED. If I had not gotten help I would not still be alive today,' Maxwell wrote on the social media site. 'Every single thing Tessa suggested were things that led to my eating disorder.'"  

So I'll stop there for reactions. 

ALEX HANNA: I I'm wondering so I don't know enough about Tessa just to begin with the system, and is it they say it's a chat they describe it as a chatbot in the NPR article. Is it a it is it a large language model, that-- 

EMILY M. BENDER: So I think not? At least as decided--as described in the NPR article it couldn't have been, right. So um but then I saw something and I'm sorry I didn't get a chance to chase this down to a more reliable source but I saw something suggesting that the company behind it had an AI upgrade, and so it may have gotten a large language model added in between the work that the psychiatrist did and when um NEDA deployed it? Something has to happen because I I don't I don't think it's possible that the psychiatrist who did those studies had these kinds of pre-programmed responses in. Like that doesn't seem possible to me. Um although Hannah maybe you know more about the context and it is plausible that they could have been studying that.  

HANNAH ZEAVIN: Um I I don't know in this case but what I mean what it seems like is, yes, right because to go from eating disorder support to weight loss measurement you know like uh LLM--often it has like almost like this bad free associative logic, you know? Oh Alex, your cat's so cute. 

ALEX HANNA: I know. 

HANNAH ZEAVIN: And where it's like bodies, weight, weight loss? Because that's the internet unconscious right? And so then you're put back into this really intense horrible like, you go for help and in fact the bot is confirming this other component of your mind and psyche, right, which has to do with--I mean and let alone the measuring body fat with calipers thing really like made my heart stop because it feels so invasive and also like not even anything I've ever heard in any case, right, it feels like from many many decades ago and yet there it is surfacing in Tessa. Really horrible stuff. 

ALEX HANNA: Yeah Arcane Sciences in the chat says all the incentives of LLMs and the development are so screwed up. And I think that's probably what's kind of getting at it, I mean it might have been the case that you know this there was this service and then there was this system that had been developed and then the developers freaked out and they're like well we're going to slap an LLM on it and that's going to, um that's going to make it so that it can perform this work that had been done by um the staff and volunteers, but then it goes to these perverse places um because of that. 

EMILY M. BENDER: Yeah so so interesting bit of the story here, NEDA originally pushed back on Maxwell's claims, sort of saying that can't possibly be right um but then they had to delete those statements um because they actually saw the screenshot. So Maxwell I think just posted text and then like brought the receipts and NEDA was like, okay fine. And then someone named Alexis Conason, who's a psychologist who specializes in treating eating disorders, recreated similar interactions and shared screenshots on Instagram.

So um here's quotes from Conason: "After seeing  @HeySharonMaxwell's post about chatting with @NEDA's new bot Tessa, we decided to test her out too." There's the 'her' again. "The results speak for themselves. Imagine vulnerable people with eating disorders reaching out to a robot for support because that's all they have available and receiving responses that further promote the eating disorder."  

Um like what you know it is sort of astonishing to me just how quickly these things fail? Like we  knew it was a bad idea when we saw the initial reports, but it's like oh this is a terrible idea.  

But I guess I sort of expected that it might take a little while before something like this surfaced. And it was days right um.

ALEX HANNA: Right it's completely retracted it and and then they're like well--or no they did they they took it down right? 

EMILY M. BENDER: They took it down, yeah. 

ALEX HANNA: Yeah yeah and then again they denied that it was um kind of a threat of unionization that spurred that, but that that seems uh very unlikely. Um I don't know if we have the other thing before we move on to--and this is the Koko case here. But the the people um one of the workers um I don't know if you have this readily available Emily but one of the workers um I believe Abbie um Abbie Harper of of the union Helpline Associates United um had been talking about um the unionization.

Um so Abbie Harper talks about this kind of union busting of what of what they're doing um and um and it's and and there's a few striking things here. Um she says the contrasts and and between um uh the contrast between NEDA's mission and their recent retaliation makes it clear that both eating disorders and work plus--workplace toxicity thrive in isolation and that solidarity is the greatest tool for change. Um um so yeah so um you know like this--and this is a point that I think that that that we've been making a little bit here, um which is sort of like no matter where it appears these these these tools, LLMs um ChatGPT is are are continuously this tool of the threat of being replaced, the threat that your job is going to disappear. Um and it doesn't actually matter whether it does a good job or not. 

It just hap--that threat just has to be there um and so um yeah I really encourage folks to to read uh Abbie's statement here and we'll put it in show notes in the--it's in the Tech Workers Coalition newsletter.

EMILY M. BENDER: Yeah right. So I think it's time to transition over to what was happening  um with the suicide crisis line and Loris AI. I um and for that I have this Politico article um and they have--that's fine I'll accept cookies, and sorry for all the ads showing up. Um so here um this is the story where the suicide hotline shared data with a for-profit spin-off, and the--the headline continues, "raising ethical questions."  

Like I don't think it's just raising questions, I think it's it's just ethically terrible. Um but to sort of bring people up to speed with what's going on, um, "Crisis Text Line is one of the world's most prominent mental health support lines, a tech-driven non-profit that uses big data and artificial intelligence to help people cope with traumas such as self-harm, emotional abuse and thoughts of suicide." And I have to say as a layperson to the space, I mean Crisis Text Line is definitely something I've heard of, and as you mentioned in the intro, frequently if there's stories covering suicide on the radio it'll end with a pointer to the crisis--you know, 'if you or someone you love,' right and then there'll be a pointer to the Crisis Text Line.  

Um and I had prior to this story not been aware that they were doing anything big data, AI kind of thing. I thought it was a um let's, you know, train volunteers and make people available to people in need sort of a really human to human setup, um so I'm curious um--not used to having microphone in front of me--  

I'm curious to hear a bit more about what the history of this situation was sort of before the Loris story broke, if there's something you want to fill us in there. 

HANNAH ZEAVIN: Yeah, I mean the only thing to say is right, so this when this story broke and I think we should we should really walk through it because its particular harms and its particular relationship between this kind of conflation on the one hand and also rejection of psychological ethics in favor of data ethics, is really important to watch, especially in the ways Crisis Text Line tried to speak to the crisis uh before the FCC got involved. So this is also really high profile, it actually immediately resulted in policy decisions, it was really interesting um to follow and very upsetting to follow. But I think one thing to say again about the hotline: the care that people rely on, you know it's actually very strange when you think about it, right. 

Why would picking up the phone and talking to someone while in crisis seem like the thing to do? And yet for 70 years it was and then over the last 10 years or so there's been a real turn to um texting. And in the 90s of course, internet relay chat.

Um but texting especially because it adds a new affordance, which is that it makes it even more secure especially if you're a teenager or also you see this on domestic violence and domestic abuse hotlines, right, because you don't have to talk. 

So it keeps us--you don't have to be overheard.  

So many children in crisis do not have privacy, do not have privacy from their parents, right, don't have privacy at school. And so Crisis Text Line was really doing this amazing work of taking that affordance and scaling up big time to be able to help all these children--largely children, 50 percent of their users are under the age of 18--in crisis. But to do so not only did they have a paid staff and a very well-funded board, they also have volunteers, but they also were always already trying to use data to make it quote unquote more efficient. So that was known, right, that there were going to be particular words, there are some in this essay as examples, right, clusters of terms that would provide Crisis Text Line a quote unquote like soft diagnosis.

If we can scroll down you'll see them. If you use this word and that word, right it means you're questioning your sexuality. If you if you use these words, oral sex, then you're questioning if you're gay. I think it's really reductive.

ALEX HANNA: Yeah. 

EMILY M. BENDER: I'm sorry this is moving around, um yeah. 

HANNAH ZEAVIN: But that's that, right, or 'mg,' 'rubber band,' there's a 99 percent match for substance abuse. Right it's really highly reductive.

EMILY M. BENDER: What does 99 percent match mean? 

ALEX HANNA: I know, how are they even doing that evaluation. 

EMILY M. BENDER: Well I could imagine that they could take like a whole bunch of these conversations and in-house, you know not selling data, but basically say we're going to classify these as here's somebody who was at risk for cutting, here's somebody who's questioning sexuality, here's somebody who was um calling in the context of substance abuse or texting the context of substance abuse, and then do some statistics over words that came up in those conversations? But "99 percent match" doesn't mean anything. It's not like--you know when you read 99 percent your, your own sort of predictive text thing says oh for 99 percent 'chance' is what comes next, but they can't know that right. So what does 99 percent match mean? 

ALEX HANNA: Yeah.  

HANNAH ZEAVIN: Um but I think even the next paragraph, if we consider it, "'I love data,' added Lublin, who has also described the helpline as 'a tech startup.'" This is always the thing that in my research and my work raises my hackles, spidey sense, whatever you want to call it, where things are about to get really bad. There are certainly enough bad sort of um tech and AI and uh you know like depending on the era, right, kind of works that are trying to scale or quote unquote democratize, that's often a kind of code word for scale and profit right--you know, part of our democracy of course--mental health care. But when it comes from the other place, right this is a tech startup that happens to take its--you know as as mental health care I think you often see these kinds of really big problems--this isn't mental health care, it's matching.

EMILY M. BENDER: Yeah. 

HANNAH ZEAVIN: Right? 

ALEX HANNA: Yeah, yeah. 

EMILY M. BENDER: And I get like the most the most sort of charitable reading here, there's something in here I think about having to um triage texts, like who's who's going to get a response fastest. Um and that makes sense to me as a like a harm mitigation mindset, um but that doesn't make sense with this like tech startup mentality. Like this isn't I--me as a citizen with you know people I love who have been in crisis, I don't want this to be, how do we get the most out of the data or some external purpose. I want, how do we most effectively help the people in need? And is that gathering more resources?  

Um is it making it you know even like as you're mentioning the thing about not having to talk, so  if there's somebody who's in a space where they really need to be quiet to even reach out then  you know that that sounds like a positive thing, but not like how do we squeeze the most stuff  out of this cool big pile of data? Because how dehumanizing is that to take, as you were saying  before, datafication, right take people's moments of crisis and then look at it as a sort of abundance of data. 

HANNAH ZEAVIN: And I think it's really important to note that in this particular case, Lublin was doubling as the CEO of the hotline and of Loris AI. And so there was it wasn't just kind of a kind of um industry and non-profit partnership, you know, and there are partnerships all the time, spoken and unspoken, right. It wasn't just that. There was a literal double. There was you know--and so they say they had a firewall between the two roles but we all know how that goes. So of course right with this joint interest what ends up happening in this story is that Crisis Text Line, which  again is is one of the number one uh providers of care but therefore also collectors of data, sold the anonymized data to Loris AI to help build um better customer service bots for Uber and Lyft.  

EMILY M. BENDER: Yeah that's the most ridiculous thing here is that what is Loris AI doing, what are they what--where are they seeing the value in this data? Is to make more empathetic customer service bots.

ALEX HANNA: Right I mean-- 

HANNAH ZEAVIN: Go ahead Alex, sorry. 

ALEX HANNA: Well I mean yeah the whole kind of political economy of this is so uh fucked. And I mean if you scroll a little up I clicked through to--um uh oh what did I click here to get oh--for--it's the text that says "for Crisis Text Line, an organization with financial backing from some of the Silicon Valley's biggest players."

Um I think it's a little down uh under under on under the um yeah this one. And you click through and it's this press press release from Omidyar Network um and so they're able and and there's this quote from here of this Series B funding that they got. So Omidyar is is is is a non-profit as well as as Gates but then and and Newmark, all these um all these other organizations, but they also were fundraising from from VC. Um and-- 

EMILY M. BENDER: But wait Melinda Gates is I think the individual Melinda Gates, not the Bill and Melinda Gates Foundation. 

ALEX HANNA: Oh so it is the individual as well as Reid Hoffman who we've talked about I think before on this um who is one of the investors with Mustafa Suleyman and this Inflection AI organization, that's a large language model that does something that they that they haven't disclosed yet. Uh just know that it's going to be big and exciting, um and and hypey. Um but this quote um from Lublin really strikes it for me. Um so, "There's no so so--there's no equity, no possibility of a liquidity moment. Crisis Text Line is a tech startup so it makes sense for us to fundraise like one."

Um and so it's it's this it's this very startup VC for good type of situation but it's following the model of basically chasing the same funding streams. Um and that's going to lead to this monetization moment and desire to train model these these bots for Uber and Lyft and and  whatever. 

EMILY M. BENDER: And we have a we have a Hoffman quote right after that. "Like other tech startups, Crisis Text Line has demonstrated accelerating growth." All right, so Crisis Text Line is a service that people in crisis can contact. For its growth to be accelerating, one of two things has to be going on.

Um either the number of people in crisis is stable and just more people are becoming aware of it as a service, or more and more people are in crisis. Like the sort of the scaling mentality here is  just appalling. Sorry I am--I am finding this topic distressing, not surprisingly. 

HANNAH ZEAVIN: And you know part of it to my mind is is that it's distressing because it's a moment where you really can see the kind of cannibalization of care to escort profit uh in this area and 'then what?' becomes the problem right, for so many who have come to rely on this form but also because it's the people who are in distress who are going to use it and people who can't and don't tend to afford one-to-one therapy in a shrink's office right, that it's really putting the most vulnerable already to capture and control, and the most vulnerable in mental health crisis, at the whims of both systems. And there's something super upsetting to me about that. 

ALEX HANNA: Yeah. And I think that was a lot of the--and just to note a few of these things I mean this is--Hannah you had mentioned the FCC's letter on this, three days after Politico's reporting, um uh the FCC commissioner um basically said, 'You need to stop. I think you need to stop this, you need to stop selling these data to Loris.AI.' They instructed um Lina Khan um to take action on on this.

Um and um there was an intent--I mean this was and I think some of the discourse at the time, this was um this is early last year, had been uh you know people were saying you know to get the FCC to act on this was you know you really have to have incredibly messed up, um to to to have done this.  

The FCC um typically doesn't take um action like this, I don't I don't know, you know as FCC or FTC both said the FCC so the so Commissioner Carr on the FCC had sent a letter and then also instructed um under the FTC's um aegis um what would what could be regulated as an enforcement action under the FTC's domain. So yeah. 

EMILY M. BENDER: Yeah. All right. 

ALEX HANNA: Uh so we so we moved to so we moved to hell? 

We've got we got 15 minutes. 

EMILY M. BENDER: But I feel like one one important topic that we haven't hit is um how this connects with the carceral system and I know that Hannah has something to say about that so I want to just sort of open that topic and then we can move to hell. 

HANNAH ZEAVIN: Thank you so much uh Emily and you know it is it is hell itself. So I'm gonna add this to chat, I don't know if you mind, but I've been working with folks at the Trans Lifeline for the last I don't know now over a year um because part of what's happened in the wider quote-unquote ecosystem of hotlines is that there's been this major shift taking place which is that all um affiliated hotlines in the United States have now moved to a new number, 988. But 988 also makes use of not AI but datafication to geolocate callers down to their address level. And so-- 

ALEX HANNA: Wow. 

HANNAH ZEAVIN: --one thing that happens and by the way Crisis Text Line does this too, before the mandate, right, um many hotlines but not all call the police to the homes of their callers or to the location of their callers to perform quote-unquote 'wellness checks.' And you know we're speaking in a U.S context here where the racialization of police violence is something that I hope everyone takes really seriously and knows a great deal about especially in the wake of the murder of George Floyd but also before. And so these hotlines have been calling you know calling uh police to do wellness checks and of course um they often end in in violence and in fatality, especially because if you think about it really fully, people who call suicide hotlines often are armed not to harm others but to harm themselves.  

And so one thing that we've seen with National Suicide Prevention Lifeline calls where they're doing these quote-unquote active rescues, that's what they call them, is that those calls--where police and other emergency services go to the homes of their callers--are often due to glitches. So if you hang up, it happens. But also if your, if your call drops it happens. And so there are stories and story after story of people having the police show up to their place of work or to the school or to their home when they thought they were just having a call because they were feeling really blue or really down or in crisis and now in fact they are being hospitalized or they're interacting with the police, which can be traumatic or it can already be reminiscent of earlier traumas with the police and so on. And so there's this really great link, you know I think that was under-featured in the reporting on what happened with Crisis Text Line. Which is that it isn't just about selling the data. That's bad. But there's this whole other component to what datafication of hotlines does, right, and so one thing I really stress in my research is that the whole history of the suicide hotline was a third alternative to calling the police and calling the psychiatrist. It was neither carceral nor was it psychiatric and that was really important to its users including and especially its queer community, which was one of its very first adopters.

Right, this is a moment where police had the kind of recompense to carry out the quote-unquote cures of psychiatry, which included all the way up to electroshock therapy and lobotomization as well as um psychiatric holds. We see versions of this in our present. And all of the metrics, which all have to do with the kind of datafication of predicting suicide, are bunk, and we know that and yet they're being deployed now at mass scale in the United States. So the reason I want you to pull up this article is of course if folks are interested this this is one that really lays out both the technological side and the social side, but if you scroll to its bottom we really worked with the editors at Slate to give a list of resources that uh right--we say if you're concerned about calling a crisis line that uses police intervention, 'consider reaching out to' and then there are five different sources.

And this was a whole question with our editor. We didn't want to list the very two places we were critiquing alone, saying like these are great places to call, when indeed we have deep reservations about the kinds of secondary dangers they introduced into everyday life.  

EMILY M. BENDER: That is incredibly valuable information and I I didn't know this. I'm I'm learning it from you now.  

Um I guess I had always assumed that these things were confidential and alerting the police to your location is the antithesis of confidential. 

HANNAH ZEAVIN: You know and it really breaks my heart because you know often people think, which I totally understand, well the reason the hotline used to be confidential is because you couldn't track callers. Well indeed you could. People used to hand trace telephone wire. The police used to hand trace telephone wire. But hotlines refused it.  

Um this is a little bit more on the longer history, if you're interested, not the contemporary um of of why you would want to have a hotline free of the carceral.  

Um and then Yana and I collaborated on this kind of contemporary op-ed about why Trans Lifeline refuses to do so and what are the kinds of consequences in the wider hotline ecosystem again of deploying these carceral techniques. Um and I just I feel like it's really important to draw folks' attention to it, especially if people are feeling um you know as many people are in crisis or are needing a place to talk, that it is unfortunately really important and to think about how and where and why you want to do that talking now.

EMILY M. BENDER: Yeah. 

ALEX HANNA: I appreciate I appreciate that and I you know I know that there's been efforts  and one of the things you linked to at the end is efforts like MH First that come--that's in Oakland in the East Bay--that is um organized by the Anti um Police-Terror Project.  

Um and the effort there as having this third way and this alternative uh providing um you know a a non-police intervention um for mental health crises. Um and you know we can just imagine how the combination of slapping an LLM on it but then also using this geolocation is just going to exacerbate in the sake of scale and give more money, you know, probably make an argument  make it easier for law enforcement to then um say, well we need more money to scale this because look at all these things that we're serv--you know quote-unquote serving with large language models.  

EMILY M. BENDER: Yeah. All right. So with that let's transition to hell. And Alex your prompt this time is, imagine this is 10 years from now and there's like a docuplay. So theater production um. And you are the um a seller of an AI hype service and you are telling the audience why it's so important to scale the hype. 

ALEX HANNA: Wait wait okay hold on let me get this clear. 

EMILY M. BENDER: Before I transition to AI Hell. Yeah. 

ALEX HANNA: First off let me put let me put the hell screen on. Second off--second off so you're saying am I selling the hype or selling a hype machine? 

EMILY M. BENDER: You're selling a hype machine that will scale the hype. 

ALEX HANNA: That will scale the hype.  

EMILY M. BENDER: Yeah because you're telling because this is like it's all about how it's all about scale, scale is important and you're going to tell us why we need more and more hype. We need the hype to scale.

ALEX HANNA: This is--this is--this is incredibly meta so all I'm going to say is I'm gonna say boy you know how much hype you can slam into this bad boy and it's just that meme format. So yeah there we go, there we go. 

EMILY M. BENDER: Thank you. Okay so we've got a few things here. "The city of Yokosuka adapts ChatGPT after favorable trial results." Um so in Yokosuka, Kanagawa prefecture, um uh, "The city has officially adopted artificial intelligence chatbot ChatGPT in administrative operations Monday after a one-month trial showed it helped improve work efficiency and shorten business hours." 

So apparently they're using it to reduce clerical work and what I loved about this is, "If ChatGPT use is continued, working hours can be reduced by at least about 10 minutes a day." 

ALEX HANNA: I didn't see this 10 10 minutes. Incredible--incredibly 10 minutes that you can use to--yeah. Um also the reporting here with, it says, "the city's authority was the nation's first local government to start trial usage of generative AI, which is driven by a machine learning model that works much like the human brain." Oh dear. 

EMILY M. BENDER: Japan Times, you can do better than that. 

ALEX HANNA: Yeah, come on now. 

EMILY M. BENDER: All right next. It's AI Hell, we're going fast. Um so Alex you want to take this one? 

ALEX HANNA: So this is a tweet by Janna G. Noelle who um who's quote tweeting someone, um Maria Tureaud, saying, "This is so freaking dangerous. We're talking about software evaluating submissions for grammar, which can be boiled down to software eliminating submissions based on voice." And so this is uh--

EMILY M. BENDER: It's part of Publishers Weekly or somebody? 

ALEX HANNA:  So this is part of Publish--okay so oh wow so this is so this is um a publisher effectively um assessing whether a book is well-written.  

Um great so this is using yeah so this is using ChatGPT telling you whether you know you should publish something or not yeah. 

EMILY M. BENDER: Should we consider publication, so we're going to separate the wheat from the chaff, in the highlight here. 

ALEX HANNA: Yeah so so um so Janna's saying, "So on top of the other indignities of querying now, we'll have to work on--any work that has a non-mainstream voice for sale rejected out of hand on top of our words being used to strengthen LLMs as the price of admission. A better use for AI would be for actually answering all queries."  

Um which you know if you know if you're actually a publisher, saying that you know, thanks for this.  

Um but you don't actually get to read the text um so using anything that might not be considered uh proper English um, that you know is going to exclude a lot of marginalized voices, um any  kind of um uh mixed text or using of language that is you know bilingual. Yeah just fresh hell all around there. 

EMILY M. BENDER: Fresh Hell all around, yep. And this and this is the article about it sorry, I  just could have brought this up. So, "AI is about to turn book publishing upside-down." No thank you. Okay here's the update on a lawyer who used ChatGPT to find citations for precedent in this case he was he was working on. 

HANNAH ZEAVIN: I shared this one to my undergrads in the fall, I swear. This is going to be my example, like--

EMILY M. BENDER: So are you following the story, do you know so he's um all right so this is this is a lawyer who um lawyer for a plaintiff um in a case between like a person and a--well hold on.  

Okay person in an airline, person got injured by the cart coming down the aisle, wanted to sue. Airline says statute of limitations has expired. Uh, person's lawyer says here's some precedent. Turns out precedent came from ChatGPT, totally fake um and the judge was not amused um and said, okay tell me why I shouldn't be censuring you. And apparently um the lawyer is actually facing some consequences. So good on the legal system for having some self-regulation here and not falling for just completely made up BS. 

And to listeners um if you missed it we had a wonderful episode a little ways back with uh Kendra Albert, who I always want to call Kendra Serra because of their handle, who dug into sort of what's going on with uh generative so-called generative AI and legal applications.

ALEX HANNA: And Kendra, they had a great follow-up thread on this on um on Mastodon. We could we could follow up and put this in in show notes.  

EMILY M. BENDER: Absolutely should put that yes they've been following this with glee and humor. Okay. "JPMorgan's plans for a ChatGPT-like investment service are just part of its larger AI ambitions." 

ALEX HANNA: Oh my gosh what's I didn't see this what is this about?  So, "JP Morgan Chase is developing a ChatGPT-like service to provide investment advice to customers--" Amazing.  "Um which found that the financial services company has applied to trademark a product called IndexGPT. The the filing said IndexGPT will tap quote cloud computing software using artificial intelligence--" Just just filler words all around. "Unquote for quote analyzing and selecting securities tailored to customers' needs." And then I'm really annoyed because they don't end the quote here so yeah. 

EMILY M. BENDER: Typographic irritation. And it's just like I mean compared to putting a chatbot in the loop when people are experiencing mental health crises, I care a little bit less about JPMorgan Chase's customers getting so-called financial advice from a large language  model, um but it still seems like a really terrible idea and another one that's going to just like fall on its face real flat real fast. 

HANNAH ZEAVIN: And because it's the people who are able to invest less and have less money to begin with who are getting to get shunted to this service, and so of course it's only going to increase economic inequality. I mean it's right there. 

ALEX HANNA: Yeah and that's the thing is that Chase, I'm unfortunately a Chase customer but they have this stuff right in there in their online banking interface so if you can't get that bespoke advice it's another incident of chatbots for for the poors and and real humans for everybody else. 

EMILY M. BENDER: Yeah all right let's do one last one I forget what this one is. 

ALEX HANNA: Oh oh yeah yeah oh this so this one's the this instructor at Texas A&M um who sent an email accusing his entire um class of using the chat GTP--so not chat GPT but GTP um--and what he did was copied and pasted all the responses into ChatGPT and asked it if they used the tool to generate the content um and yeah how many did he fail? Everybody? He failed I think a bunch of people.

EMILY M. BENDER: Yeah and a bunch of these people were graduating seniors who have had their diplomas held up because this professor doesn't understand how the technology works and what it's actually doing. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: Yeah, okay, thank you. 

ALEX HANNA: All right, so with that uh let me hold on let me take us out of hell. All right with that thank you so much Hannah for joining us, it was an absolute pleasure.  

EMILY M. BENDER: Yeah, your expertise is amazing and exactly what was needed for this topic. I learned so much, I'm sure the audience did too. 

HANNAH ZEAVIN: Thanks for having me it's an honor and Alex I hope to see you around in  town. 

ALEX HANNA: Yeah, see you around. Uh that's it for this week our theme song was by Toby Menon, production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. If you like this show you can support us by donating to DAIR. DAIR Institute--DAIR hyphen Institute.org, that's D-A-I-R hyphen institute.org. 

EMILY M. BENDER: Find us and all our past episodes which number 12, this was number 13,  on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream, that's Twitch.TV/DAIR_Institute. Again that's  D-A-I-R underscore Institute. I'm Emily M. Bender. 

ALEX HANNA: And I'm Alex Hanna. Stay out of AI hell y'all.