The Trajectory

Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway (AGI Governance, Episode 6)

Daniel Faggella


This is an interview with Eliezer Yudkowsky, AI Researcher at the Machine Intelligence Research Institute.

This is the sixth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence.

Watch this episode on The Trajectory Youtube Channel: https://www.youtube.com/watch?v=YlsvQO0zDiE

See the full article from this episode: https://danfaggella.com/yudkowsky1

...

About The Trajectory:

AGI and man-machine merger are going to radically expand the process of life beyond humanity -- so how can we ensure a good trajectory for future life?

From Yoshua Bengio to Nick Bostrom, from Michael Levin to Peter Singer, we discuss how to positively influence the trajectory of posthuman life with the greatest minds in AI, biology, philosophy, and policy.

Ask questions of our speakers in our live Philosophy Circle calls:
https://bit.ly/PhilosophyCircle

Stay in touch:
-- Newsletter: bit.ly/TrajectoryTw
-- X: x.com/danfaggella
-- Blog: danfaggella.com/trajectory
-- YouTube: youtube.com/@trajectoryai