AI Safety Fundamentals
Listen to resources from the AI Safety Fundamentals courses!
https://aisafetyfundamentals.com/
In Search of a Dynamist Vision for Safe Superhuman AI
BlueDot Impact
By Helen Toner
This essay characterises AI safety policies that rely on centralised control (surveillance, consolidating AI development into fewer projects, licensing regimes) as "stasist" approaches that sacrifice innovation for stability. Toner argues we instead need "dynamist" responses to AI risk that preserve room for decentralised experimentation, creativity and risk-taking.
Source:
https://helentoner.substack.com/p/dynamism-vs-stasis
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.