Ethics Untangled

53. How should social media platforms regulate AI-generated content? With Jeffrey Howard

Jim Baxter

AI-generated content is a familiar and increasingly prevalent feature of social media. Users post text, video, audio and images created by AI, sometimes making clear that this is what they're doing, sometimes not. This isn't always a problem, but some uses of AI-generated content do pose significant dangers. So do social media platforms need policies in place specifically to deal with this form of content? Jeffrey Howard is Professor of Political Philosophy and Public Policy at University College London. In a paper co-authored with Sarah Fisher and Beatriz Kira, he argues that policies targeting AI-generated content specifically aren't necessary or helpful. It was great to get the chance to talk to him about why he thinks this, and about how platforms should moderate this type of content without shutting down valuable free speech.

Ethics Untangled is produced by IDEA, The Ethics Centre at the University of Leeds.

Bluesky: @ethicsuntangled.bsky.social
Facebook: https://www.facebook.com/ideacetl
LinkedIn: https://www.linkedin.com/company/idea-ethics-centre/