LessWrong Curated Podcast

Many arguments for AI x-risk are wrong

March 09, 2024
Show Notes
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

The following is a lightly edited version of a memo I wrote for a retreat. It was inspired by a draft of Counting arguments provide no evidence for AI doom, although the earlier draft contained some additional content. I personally really like the earlier content, and think that my post covers important points not made by the published version of that post.

I'm thankful for the dozens of interesting conversations and comments at the retreat.

I think that the AI alignment field is partially founded on fundamentally confused ideas. I’m worried about this because, right now, a range of lobbyists and concerned activists and researchers are in Washington making policy asks. Some of these policy proposals seem to be based on erroneous or unsound arguments.[1]

The most important takeaway from this essay is that [...]

The original text contained 8 footnotes which were omitted from this narration.

---

First published:
March 5th, 2024

Source:
https://www.lesswrong.com/posts/yQSmcfN4kA7rATHGK/many-arguments-for-ai-x-risk-are-wrong

---

Narrated by TYPE III AUDIO.