LessWrong MoreAudible Podcast

"Why I think strong general AI is coming soon" by Porby

October 12, 2022 · Robert
"Why I think strong general AI is coming soon" by Porby
LessWrong MoreAudible Podcast
More Info
LessWrong MoreAudible Podcast
"Why I think strong general AI is coming soon" by Porby
Oct 12, 2022
Robert

https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon

I think there is little time left before someone builds AGI (median ~2030). Once upon a time, I didn't think this.

This post attempts to walk through some of the observations and insights that collapsed my estimates.

The core ideas are as follows:

  1. We've already captured way too much of intelligence with way too little effort.
  2. Everything points towards us capturing way more of intelligence with very little additional effort.
  3. Trying to create a self-consistent worldview that handles all available evidence seems to force very weird conclusions.

Some notes up front

  • I wrote this post in response to the Future Fund's AI Worldview Prize. Financial incentives work, apparently! I wrote it with a slightly wider audience in mind and supply some background for people who aren't quite as familiar with the standard arguments.
  • I make a few predictions in this post. Unless otherwise noted, the predictions and their associated probabilities should be assumed to be conditioned on "the world remains at least remotely normal for the term of the prediction; the gameboard remains unflipped." (A worked expansion of this conditioning follows these notes.)
  • For the purposes of this post, when I use the term AGI, I mean the kind of AI with sufficient capability to make it a genuine threat to humanity's future or survival if it is misused or misaligned. This is slightly more strict than the definition in the Future Fund post, but I expect the difference between the two definitions to be small chronologically.
  • For the purposes of this post, when I refer to "intelligence," I mean stuff like complex problem solving that's useful for achieving goals. Consciousness, emotions, and qualia are not required for me to call a system "intelligent" here; I am defining it only in terms of capability.
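As a worked illustration of the conditioning in the predictions note (the event names here are illustrative, not from the post), the stated probabilities are conditional ones, related to the unconditional probability by the law of total probability:

$$
P(\text{prediction}) = P(\text{prediction} \mid \text{unflipped})\,P(\text{unflipped}) + P(\text{prediction} \mid \text{flipped})\,P(\text{flipped})
$$

The numbers quoted in the post correspond to the first conditional factor, $P(\text{prediction} \mid \text{unflipped})$, not to the left-hand side; the unconditional probability is lower whenever gameboard-flipping outcomes carry non-negligible probability.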
Chapter Markers
Is the algorithm of intelligence easy?
What does each invocation of a transformer have to do?
Transformers are not special
The field of modern machine learning remains immature
Scaling walls and data efficiency
Lessons from biology
Hardware demand
Graph 1
Graph 2
Near-term hardware improvements
Graph 3
Physical limits of hardware computation
Avoiding red herring indicators
Monitoring your updates
Will it go badly?
Graph 4
Graph 5
Graph 6
Why would AGI soon actually be bad?
Optimism
Conclusion
Semi-rapid fire Q&A