LessWrong (Curated & Popular)

"Precedents for the Unprecedented: Historical Analogies for Thirteen Artificial Superintelligence Risks" by James_Miller

Since artificial superintelligence has never existed, claims that it poses a serious risk of global catastrophe can be easy to dismiss as fearmongering. Yet many of the specific worries about such systems are not free-floating fantasies but extensions of patterns we already see. This essay examines thirteen distinct ways artificial superintelligence could go wrong and, for each, pairs the abstract failure mode with concrete precedents where a similar pattern has already caused serious harm. By assembling a broad cross-domain catalog of such precedents, I aim to show that concerns about artificial superintelligence track recurring failure modes in our world.

This essay is also an experiment in writing with extensive assistance from artificial intelligence, producing work I couldn’t have written without it. That a current system can help articulate the case for the catastrophic potential of its own lineage is itself significant: we have already left the realm of speculative fiction and begun to build the very agents that constitute the risk. On a personal note, this collaboration with artificial intelligence is part of my effort to rebuild the intellectual life that my stroke disrupted and, I hope, push it beyond where it stood before.

Section 1: Power Asymmetry [...]

---

First published:
January 16th, 2026

Source:
https://www.lesswrong.com/posts/kLvhBSwjWD9wjejWn/precedents-for-the-unprecedented-historical-analogies-for-1

---

Narrated by TYPE III AUDIO.