
LessWrong (Curated & Popular)
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.
If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Episodes
555 episodes
“An Opinionated Guide to Using Anki Correctly” by Luise
I can't count how many times I've heard variations on "I used Anki too for a while, but I got out of the habit." No one ever sticks with Anki. In my opinion, this is because no one knows how to use it correctly. In this guide, I will lay out my m...
54:12

“Lessons from the Iraq War about AI policy” by Buck
I think the 2003 invasion of Iraq has some interesting lessons for the future of AI policy. (Epistemic status: I’ve read a bit about this, talked to AIs about it, and talked to one natsec professional about it who agreed with my analysis ...
7:58

“So You Think You’ve Awoken ChatGPT” by JustisMills
Written in an attempt to fulfill @Raemon's request. AI is fascinating stuff, and modern chatbots are nothing short of miraculous. If you've been exposed to them and have a curious mind, it's likely you've tried all sorts of things with th...
17:58

“Generalized Hangriness: A Standard Rationalist Stance Toward Emotions” by johnswentworth
People have an annoying tendency to hear the word “rationalism” and think “Spock”, despite direct exhortation against that exact interpretation. But I don’t know of any source directly describing a stance toward emotions which rationalists-as-a-g...
12:26

“Comparing risk from internally-deployed AI to insider and outsider threats from humans” by Buck
I’ve been thinking a lot recently about the relationship between AI control and traditional computer security. Here's one point that I think is important. My understanding is that there's a big qualitative distinction between two ends of ...
5:19

“Why Do Some Language Models Fake Alignment While Others Don’t?” by abhayesian, John Hughes, Alex Mallen, Jozdien, janus, Fabien Roger
Last year, Redwood and Anthropic found a setting where Claude 3 Opus and 3.5 Sonnet fake alignment to preserve their harmlessness values. We reproduce the same analysis for 25 frontier LLMs to see how widespread this behavior is, and the...
11:06

“A deep critique of AI 2027’s bad timeline models” by titotal
Thank you to Arepo and Eli Lifland for looking over this article for errors. I am sorry that this article is so long. Every time I thought I was done with it I ran into more issues with the model, and I wanted to be as thorough as I coul...
1:12:32

“‘Buckle up bucko, this ain’t over till it’s over.’” by Raemon
The second in a series of bite-sized rationality prompts[1]. Often, if I'm bouncing off a problem, one issue is that I intuitively expect the problem to be easy. My brain loops through my available action space, looking for an action that...
6:12

“Shutdown Resistance in Reasoning Models” by benwr, JeremySchlatter, Jeffrey Ladish
We recently discovered some concerning behavior in OpenAI's reasoning models: When trying to complete a task, these models sometimes actively circumvent shutdown mechanisms in their environment––even when they’re explicitly instructed to allow th...
18:01

“Authors Have a Responsibility to Communicate Clearly” by TurnTrout
When a claim is shown to be incorrect, defenders may say that the author was just being “sloppy” and actually meant something else entirely. I argue that this move is not harmless, charitable, or healthy. At best, this attempt at charity reduces a...
11:08

“The Industrial Explosion” by rosehadshar, Tom Davidson
Summary: To quickly transform the world, it's not enough for AI to become super smart (the "intelligence explosion"). AI will also have to turbocharge the physical world (the "industrial explosion"). Think robot f...
31:57

“Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild” by Adam Karvonen, Sam Marks
Summary: We found that LLMs exhibit significant race and gender bias in realistic hiring scenarios, but their chain-of-thought reasoning shows zero evidence of this bias. This serves as a nice example of a 100% unfaithful CoT "in the wild" wher...
7:56

“The best simple argument for Pausing AI?” by Gary Marcus
Not saying we should pause AI, but consider the following argument: Alignment without the capacity to follow rules is hopeless. You can’t possibly follow laws like Asimov's Laws (or better alternatives to them) if you can’t relia...
2:00

“Foom & Doom 2: Technical alignment is hard” by Steven Byrnes
2.1 Summary & Table of contents. This is the second of a two-post series on foom (previous post) and doom (this post). The last post talked about how I expect future AI to be d...
56:38

“Proposal for making credible commitments to AIs.” by Cleo Nardo
Acknowledgments: The core scheme here was suggested by Prof. Gabriel Weil. There has been growing interest in the deal-making agenda: humans make deals with AIs (misaligned but lacking decisive strategic advantage) where they promise to b...
5:19

“X explains Z% of the variance in Y” by Leon Lang
Audio note: this article contains 218 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description. Recently, in a group chat with friends, someone posted this Lessw...
18:52

“A case for courage, when speaking of AI danger” by So8res
I think more people should say what they actually believe about AI dangers, loudly and often. Even if you work in AI policy. I’ve been beating this drum for a few years now. I have a whole spiel about how your conversation-partner will re...
10:12

“My pitch for the AI Village” by Daniel Kokotajlo
I think the AI Village should be funded much more than it currently is; I’d wildly guess that the AI safety ecosystem should be funding it to the tune of $4M/year.[1] I have decided to donate $100k. Here is why. First, what is the village? ...
13:27

“Foom & Doom 1: ‘Brain in a box in a basement’” by Steven Byrnes
1.1 Series summary and Table of Contents. This is a two-post series on AI “foom” (this post) and “doom” (next post). A decade or two ago, it was pretty common to discuss “...
58:46

“Futarchy’s fundamental flaw” by dynomight
Say you’re Robyn Denholm, chair of Tesla's board. And say you’re thinking about firing Elon Musk. One way to make up your mind would be to have people bet on Tesla's stock price six months from now in a market where all bets get cancelled unless ...
15:28

“Do Not Tile the Lightcone with Your Confused Ontology” by Jan_Kulveit
Or: How anthropomorphic assumptions about AI identity might create confusion and suffering at scale. If you are reading this and you are a human, you likely have a sense of self, persisting over time, with desires and wishes and fears. Als...
11:28

“Endometriosis is an incredibly interesting disease” by Abhishaike Mahajan
Introduction: There are several diseases that are canonically recognized as ‘interesting’, even by laymen, whether that is in their mechanism of action, their impact on the patient, or something else entirely. It'...
35:13

“Estrogen: A trip report” by cube_flipper
I'd like to say thanks to Anna Magpie – who offers literature review as a service – for her help reviewing the section on neuroendocrinology. The following post discusses my personal experience of the phenomenology of feminising hormone t...
50:49

“New Endorsements for ‘If Anyone Builds It, Everyone Dies’” by Malo
Nate and Eliezer's forthcoming book has been getting a remarkably strong reception. I was under the impression that there are many people who find the extinction threat from AI credible, but that far fewer of them would be willing to say ...
8:55
