The Trajectory

Dan Hendrycks - Avoiding an AGI Arms Race (AGI Destinations Series, Episode 5)

Season 1 Episode 5


Episode length: 54:15

This is an interview with Dan Hendrycks, Executive Director and Co-Founder of The Center for AI Safety (safe.ai).

This is the fifth and final installment of our 5-part series about "AGI Destinations" - where we unpack the preferable and non-preferable futures humanity might strive towards in the years ahead. This episode was recorded in October of 2023.

Read more of Dan's ideas at https://www.safe.ai/work/research

This episode refers to the following other essays and resources:

-- The Intelligence Trajectory Political Matrix: danfaggella.com/itpm
-- Natural Selection Favors AIs over Humans: https://arxiv.org/abs/2303.16200
-- The SDGs of Strong AGI: https://emerj.com/ai-power/sdgs-of-ai/

Watch this episode on The Trajectory YouTube channel: https://www.youtube.com/watch?v=arqYqHX13eM

Read the Dan Hendrycks episode highlight: danfaggella.com/hendrycks1

...

About The Trajectory:

AGI and man-machine merger are going to radically expand the process of life beyond humanity -- so how can we ensure a good trajectory for future life?

From Yoshua Bengio to Nick Bostrom, from Michael Levin to Peter Singer, we discuss how to positively influence the trajectory of posthuman life with the greatest minds in AI, biology, philosophy, and policy.

Ask questions of our speakers in our live Philosophy Circle calls:
https://bit.ly/PhilosophyCircle

Stay in touch:
-- Newsletter: bit.ly/TrajectoryTw
-- X: x.com/danfaggella
-- Blog: danfaggella.com/trajectory
-- YouTube: youtube.com/@trajectoryai