LessWrong (Curated & Popular)
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.
If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Episodes
668 episodes
“Please, Don’t Roll Your Own Metaethics” by Wei Dai
One day, when I was interning at the cryptography research department of a large software company, my boss handed me an assignment to break a pseudorandom number generator passed to us for review. Someone in another department invented it and ...
•
4:11
“Paranoia rules everything around me” by habryka
People sometimes make mistakes [citation needed]. The obvious explanation for most of those mistakes is that decision makers do not have access to the information necessary to avoid the mistake, or are not smart/competent enough to think ...
•
22:32
“Human Values ≠ Goodness” by johnswentworth
There is a temptation to simply define Goodness as Human Values, or vice versa. Alas, we do not get to choose the definitions of commonly used words; our attempted definitions will simply be wrong. Unless we stick to mathematics, we will ...
•
11:31
“Condensation” by abramdemski
Condensation: a theory of concepts is a model of concept-formation by Sam Eisenstat. Its goals and methods resemble John Wentworth's natural abstractions/natural latents research.[1] Both theories seek to provide a clear picture of how to posit l...
•
30:29
“Mourning a life without AI” by Nikola Jurkovic
Recently, I looked at the one pair of winter boots I own, and I thought “I will probably never buy winter boots again.” The world as we know it probably won’t last more than a decade, and I live in a pretty warm area. I. AGI is li...
•
11:17
“Unexpected Things that are People” by Ben Goldhaber
Cross-posted from https://bengoldhaber.substack.com/ It's widely known that Corporations are People. This is universally agreed to be a good thing; I list Target as my emergency contact and I hope it will one day be the best man at my wed...
•
8:13
“Sonnet 4.5’s eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals” by Alexa Pan, ryan_greenblatt
According to the Sonnet 4.5 system card, Sonnet 4.5 is much more likely than Sonnet 4 to mention in its chain-of-thought that it thinks it is being evaluated; this seems to meaningfully cause it to appear to behave better in alignment evaluations....
•
35:57
“Publishing academic papers on transformative AI is a nightmare” by Jakub Growiec
I am a professor of economics. Throughout my career, I was mostly working on economic growth theory, and this eventually brought me to the topic of transformative AI / AGI / superintelligence. Nowadays my work focuses mostly on the promises and t...
•
7:23
“The Unreasonable Effectiveness of Fiction” by Raelifin
[Meta: This is Max Harms. I wrote a novel about China and AGI, which comes out today. This essay from my fiction newsletter has been slightly modified for LessWrong.] In the summer of 1983, Ronald Reagan sat down to watch the film War Gam...
•
15:03
“Legible vs. Illegible AI Safety Problems” by Wei Dai
Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to...
•
3:29
“Lack of Social Grace is a Lack of Skill” by Screwtape
1. I have claimed that one of the fundamental questions of rationality is “what am I about to do and what will happen next?” One of the domains I ask this question the most is in social situations. There are...
•
11:08
[Linkpost] “I ate bear fat with honey and salt flakes, to prove a point” by aggliu
This is a link post. Eliezer Yudkowsky did not exactly suggest that you should eat bear fat covered with honey and sprinkled with salt flakes. What he actually said was that an alien, looking from the outside at evolution, would predict th...
•
1:07
“What’s up with Anthropic predicting AGI by early 2027?” by ryan_greenblatt
As far as I'm aware, Anthropic is the only AI company with official AGI timelines[1]: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI action plan they say: As our CEO Dario Amodei writes in...
•
39:25
[Linkpost] “Emergent Introspective Awareness in Large Language Models” by Drake Thomas
This is a link post. New Anthropic research (tweet, blog post, paper): We investigate whether large language models can introspect on their internal states. It is difficult to answer this question through conversation alone, as genuine in...
•
3:00
[Linkpost] “You’re always stressed, your mind is always busy, you never have enough time” by mingyuan
This is a link post. You have things you want to do, but there's just never time. Maybe you want to find someone to have kids with, or maybe you want to spend more or higher-quality time with the family you already have. Maybe it's a work project....
•
4:17
“LLM-generated text is not testimony” by TsviBT
Crosspost from my blog. Synopsis When we share words with each other, we don't only care about the words themselves. We care also—even primarily—about the mental elements of the human mind/agency that pro...
•
19:40
“Why I Transitioned: A Case Study” by Fiora Sunshine
An Overture Famously, trans people tend not to have great introspective clarity into their own motivations for transition. Intuitively, they tend to be quite aware of what they do and don't like about inhabiting their cho...
•
17:21
“The Memetics of AI Successionism” by Jan_Kulveit
TL;DR: AI progress and the recognition of associated risks are painful to think about. This cognitive dissonance acts as fertile ground in the memetic landscape, a high-energy state that will be exploited by novel ideologies. We can anticipate cu...
•
21:27
“How Well Does RL Scale?” by Toby_Ord
This is the latest in a series of essays on AI Scaling. You can find the others on my site. Summary: RL-training for LLMs scales surprisingly poorly. Most of its gains are from allowing LLMs to productively use longer chains of thoug...
•
16:11
“An Opinionated Guide to Privacy Despite Authoritarianism” by TurnTrout
I've created a highly specific and actionable privacy guide, sorted by importance and venturing several layers deep into the privacy iceberg. I start with the basics (password manager) but also cover the obscure (dodging the millions of Bluetooth...
•
7:59
“Cancer has a surprising amount of detail” by Abhishaike Mahajan
There is a very famous essay titled ‘Reality has a surprising amount of detail’. The thesis of the article is that reality is filled, just filled, with an incomprehensible amount of materially important information, far more than most people woul...
•
23:54
“AIs should also refuse to work on capabilities research” by Davidmanheim
There's a strong argument that humans should stop trying to build more capable AI systems, or at least slow down progress. The risks are plausibly large but unclear, and we’d prefer not to die. But the roadmaps of the companies pursuing these sys...
•
6:34
“On Fleshling Safety: A Debate by Klurl and Trapaucius.” by Eliezer Yudkowsky
(23K words; best considered as nonfiction with a fictional-dialogue frame, not a proper short story.) Prologue: Klurl and Trapaucius were members of the machine race. And no ordinary citizens they, but Constructor...
•
2:22:21