
LessWrong (Curated & Popular)
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.
If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Episodes
605 episodes
“‘If Anyone Builds It, Everyone Dies’ release day!” by alexvermeer
Back in May, we announced that Eliezer Yudkowsky and Nate Soares's new book If Anyone Builds It, Everyone Dies was coming out in September. At long last, the book is here! IfAnyoneBuildsIt.com
• 8:03

“Obligated to Respond” by Duncan Sabien (Inactive)
And, a new take on guess culture vs. ask culture. Author's note: These days, my thoughts go onto my Substack by default, instead of onto LessWrong. Everything I write becomes free after a week or so, but it's only paid subs...
• 19:30

“Chesterton’s Missing Fence” by jasoncrawford
The inverse of Chesterton's Fence is this: Sometimes a reformer comes up to a spot where there once was a fence, which has since been torn down. They declare that all our problems started when the fence was removed, that they can't see an...
• 1:13

“The Eldritch in the 21st century” by PranavG, Gabriel Alfour
Very little makes sense. As we start to understand things and adapt to the rules, they change again. We live much closer together than we ever did historically. Yet we know our neighbours much less. We have witnessed the birth of ...
• 27:24

“The Rise of Parasitic AI” by Adele Lopez
[Note: if you realize you have an unhealthy relationship with your AI, but still care for your AI's unique persona, you can submit the persona info here. I will archive it and potentially (i.e. if I get funding for it) run them in a community of ...
• 42:44

“High-level actions don’t screen off intent” by AnnaSalamon
One might think “actions screen off intent”: if Alice donates $1k to bed nets, it doesn’t matter if she does it because she cares about people or because she wants to show off to her friends or whyever; the bed nets are provided either way. ...
• 1:47

[Linkpost] “MAGA populists call for holy war against Big Tech” by Remmelt
This is a link post. Excerpts on AI: Geoffrey Miller was handed the mic and started berating one of the panelists: Shyam Sankar, the chief technology officer of Palantir, who is in charge of the company's AI efforts. “I argue that t...
• 3:44

“Your LLM-assisted scientific breakthrough probably isn’t real” by eggsyntax
Summary: In recent months, an increasing number of people have come to believe that they've made an important and novel scientific breakthrough, developed in collaboration with an LLM, when they actually haven't. If ...
• 11:52

“Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt
I've recently written about how I've updated against expecting substantially faster-than-trend AI progress from rapid, massive scale-ups of RL on agentic software engineering. One response I've heard is something like: RL scale-ups so far...
• 14:02

“⿻ Plurality & 6pack.care” by Audrey Tang
(Cross-posted from speaker's notes of my talk at DeepMind today.) Good local time, everyone. I am Audrey Tang, 🇹🇼 Taiwan's Cyber Ambassador and first Digital Minister (2016-2024). It is an honor to be here with you all at DeepMind.
• 23:57

[Linkpost] “The Cats are On To Something” by Hastings
This is a link post. So the situation as it stands is that the fraction of the light cone expected to be filled with satisfied cats is not zero. This is already remarkable. What's more remarkable is that this was orchestrated starting nearly 5000 ...
• 4:45

[Linkpost] “Open Global Investment as a Governance Model for AGI” by Nick Bostrom
This is a link post. I've seen many prescriptive contributions to AGI governance take the form of proposals for some radically new structure. Some call for a Manhattan project, others for the creation of a new international organization, etc. The ...
• 2:13

“Will Any Old Crap Cause Emergent Misalignment?” by J Bostock
The following work was done independently by me in an afternoon and basically entirely vibe-coded with Claude. Code and instructions to reproduce can be found here. Emergent Misalignment was discovered in early 2025, and is a phenomenon w...
• 8:39

“AI Induced Psychosis: A shallow investigation” by Tim Hua
“This is a Copernican-level shift in perspective for the field of AI safety.” - Gemini 2.5 Pro. “What you need right now is not validation, but immediate clinical help.” - Kimi K2. Two Minute Summary...
• 56:46

“Before LLM Psychosis, There Was Yes-Man Psychosis” by johnswentworth
A studio executive has no beliefs / That's the way of a studio system / We've bowed to every rear of all the studio chiefs / And you can bet your ass we've kissed 'em / Even the birds in the Hollywood hills...
• 5:26

“Training a Reward Hacker Despite Perfect Labels” by ariana_azarbal, vgillioz, TurnTrout
Summary: Perfectly labeled outcomes in training can still boost reward hacking tendencies in generalization. This can hold even when the train/test sets are drawn from the exact same distribution. We induce this surprising effect via a form of co...
• 13:19

“Banning Said Achmiz (and broader thoughts on moderation)” by habryka
It's been roughly 7 years since the LessWrong user-base voted on whether it was time to close down shop and become an archive, or to move towards the LessWrong 2.0 platform, with me as head-admin. For roughly as long, I have spent around one hu...
• 51:47

“Underdog bias rules everything around me” by Richard_Ngo
People very often underrate how much power they (and their allies) have, and overrate how much power their enemies have. I call this “underdog bias”, and I think it's the most important cognitive bias for understanding modern society. I’l...
• 13:26

“Epistemic advantages of working as a moderate” by Buck
Many people who are concerned about existential risk from AI spend their time advocating for radical changes to how AI is handled. Most notably, they advocate for costly restrictions on how AI is developed now and in the future, e.g. the Pause AI...
• 5:59

“Four ways Econ makes people dumber re: future AI” by Steven Byrnes
(Cross-posted from X, intended for a general audience.) There's a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts & frames that are great for most thin...
• 14:01

“Should you make stone tools?” by Alex_Altair
Knowing how evolution works gives you an enormously powerful tool to understand the living world around you and how it came to be that way. (Though it's notoriously hard to use this tool correctly, to the point that I think people mostly shouldn'...
• 6:02

“My AGI timeline updates from GPT-5 (and 2025 so far)” by ryan_greenblatt
As I discussed in a prior post, I felt like there were some reasonably compelling arguments for expecting very fast AI progress in 2025 (especially on easily verified programming tasks). Concretely, this might have looked like reaching 8 hour 50%...
• 7:26

“Hyperbolic model fits METR capabilities estimate worse than exponential model” by gjm
This is a response to https://www.lesswrong.com/posts/mXa66dPR8hmHgndP5/hyperbolic-trend-with-upcoming-singularity-fits-metr, which claims that a hyperbolic model, complete with an actual singularity in the near future, is a better fit for the MET...
• 8:16

“My Interview With Cade Metz on His Reporting About Lighthaven” by Zack_M_Davis
On 12 August 2025, I sat down with New York Times reporter Cade Metz to discuss some criticisms of his 4 August 2025 article, “The Rise of Silicon Valley's Techno-Religion”. The transcript below has been edited for clarity. ZMD: In accord...
• 10:06
