Ethics Untangled

44. Do Large Language Models Gossip? With Lucy Osler

Jim Baxter

Gossip is an ethically interesting phenomenon when humans do it. It creates a bond between the people doing the gossiping, but it does so by implicitly excluding the person being gossiped about, and can cause harm, especially when the gossip is malicious, or simply isn't true. What I hadn't realised until I spoke to Lucy Osler, a Lecturer in Philosophy at the University of Exeter, is that large language models like ChatGPT and Claude can gossip, or at least they can do something which looks an awful lot like gossip. In this conversation with Lucy, we got into what might be happening, how it might harm people, and what we might be able to do about it.

Following my conversation with Lucy, I had an interesting exchange with ChatGPT on the same topic.

In the episode we discuss Kevin Roose's interaction with the chatbot Sydney. Here's Roose's own article about that experience:

"Why a Conversation With Bing's Chatbot Left Me Deeply Unsettled" (The New York Times)

And here are some academic articles that might be of interest:

Fisher, S. A. (2024). Large language models and their big bullshit potential. Ethics and Information Technology, 26(4), 67.

Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 1-10.

Alfano, M., & Robinson, B. (2017). Gossip as a burdened virtue. Ethical Theory and Moral Practice, 20, 473-487.

Adkins, K. (2017). Gossip, epistemology, and power. Springer International Publishing.

Ethics Untangled is produced by IDEA, The Ethics Centre at the University of Leeds.

Bluesky: @ethicsuntangled.bsky.social
Facebook: https://www.facebook.com/ideacetl
LinkedIn: https://www.linkedin.com/company/idea-ethics-centre/