The Trajectory

Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]

Daniel Faggella

This is an interview with Eliezer Yudkowsky, AI Researcher at the Machine Intelligence Research Institute.

This is the sixth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence.

Watch this episode on The Trajectory YouTube Channel: https://www.youtube.com/watch?v=YlsvQO0zDiE

See the full article from this episode: https://danfaggella.com/yudkowsky1

...

The four main questions we cover in this AGI Governance series are:

1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?

If this sounds like it's up your alley, then be sure to stick around and connect:

-- Blog: https://danfaggella.com/trajectory
-- X: https://x.com/danfaggella
-- LinkedIn: https://linkedin.com/in/danfaggella
-- Newsletter: https://bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954