The Trajectory

Ben Goertzel - Regulating AGI May Do More Harm Than Good (AGI Destinations Series, Episode 3)

Daniel Faggella Season 1 Episode 3

This is an interview with Ben Goertzel, CEO of SingularityNET and an AGI researcher of many decades.

This is the third episode in a 5-part series about "AGI Destinations" - where we unpack the preferable and non-preferable futures humanity might strive towards in the years ahead.

Watch Ben's episode on The Trajectory YouTube channel: https://youtu.be/faU0EdQHDpY

See the full article from this episode: https://danfaggella.com/goertzel1

Read more from Ben on X: https://twitter.com/bengoertzel

I often recommend Ben's "Cosmist Manifesto" as a relatively frank and honest take on posthuman / AGI futures: https://www.amazon.com/Cosmist-Manifesto-Practical-Philosophy-Posthuman/dp/0984609709

Some of the resources referenced in this episode:

-- The Intelligence Trajectory Political Matrix: http://www.danfaggella.com/itpm
-- The SDGs of Strong AGI: https://emerj.com/ai-power/sdgs-of-ai/

...

There are three main questions we'll be covering on The Trajectory:

1. Who are the power players in AGI and what are their incentives?

2. What kind of posthuman future are we moving towards, or should we be moving towards?

3. What should we do about it?

If this sounds like it's up your alley, I'm glad to have you here.

Connect:
danfaggella.com/trajectory
twitter.com/danfaggella
linkedin.com/in/danfaggella

Newsletter:
bit.ly/TrajectoryTw