The Trajectory

Ben Goertzel - Regulating AGI May Do More Harm Than Good (AGI Destinations Series, Episode 3)

Daniel Faggella Season 1 Episode 3


Episode length: 1:05:55

This is an interview with Ben Goertzel, CEO of SingularityNET and a longtime AGI researcher.

This is the third episode in a 5-part series about "AGI Destinations" - where we unpack the preferable and non-preferable futures humanity might face in the years ahead.

Watch Ben's episode on The Trajectory YouTube channel: https://youtu.be/faU0EdQHDpY

See the full article from this episode: https://danfaggella.com/goertzel1

Read more from Ben on X: https://twitter.com/bengoertzel

I often recommend Ben's "Cosmist Manifesto" as a relatively frank and honest take on posthuman / AGI futures: https://www.amazon.com/Cosmist-Manifesto-Practical-Philosophy-Posthuman/dp/0984609709

Some of the resources referenced in this episode:

-- The Intelligence Trajectory Political Matrix: http://www.danfaggella.com/itpm
-- The SDGs of Strong AGI: https://emerj.com/ai-power/sdgs-of-ai/

...

About The Trajectory:

AGI and man-machine merger are going to radically expand the process of life beyond humanity -- so how can we ensure a good trajectory for future life?

From Yoshua Bengio to Nick Bostrom, from Michael Levin to Peter Singer, we discuss how to positively influence the trajectory of posthuman life with the greatest minds in AI, biology, philosophy, and policy.

Ask questions of our speakers in our live Philosophy Circle calls:
https://bit.ly/PhilosophyCircle

Stay in touch:
-- Newsletter: bit.ly/TrajectoryTw
-- X: x.com/danfaggella
-- Blog: danfaggella.com/trajectory
-- YouTube: youtube.com/@trajectoryai