The Trajectory
What should be the trajectory of intelligence beyond humanity?
The Trajectory podcast covers realpolitik on artificial general intelligence and the posthuman transition - by asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.
Episodes
43 episodes
Robin Hanson - A Successor Must be Adaptive (Worthy Successor, Episode 16)
This new installment of the Worthy Successor series is an interview with Robin Hanson - Associate Professor of Economics at George Mason University, polymath, and author of The Age of Em. Hanson is one of the few thinkers who approac...
Season 2 • Episode 16 • 2:01:18
Joe Carlsmith - A Wiser, AI-Powered Civilization is the “Successor” (Worthy Successor, Episode 15)
This new installment of the Worthy Successor series is an interview with Joe Carlsmith, a senior advisor at Open Philanthropy, whose work spans AI alignment, moral uncertainty, and the philosophical foundations of value. In this conversation, J...
Season 2 • Episode 15 • 1:52:41
Blaise Agüera y Arcas - AGI Symbiosis and the Arrow of Intelligence (Worthy Successor, Episode 14)
This new installment of the Worthy Successor series is an interview with Blaise Agüera y Arcas, Vice President and Fellow at Google, and CTO of Technology & Society. In this conversation, Blaise talks about how life and intelligence...
Season 2 • Episode 14 • 1:23:58
Brad Carson - AGI Competition with Civility and Understanding (US-China AGI Relations, Episode 5)
This is an interview with Brad Carson, who served as a U.S. Congressman and as Under Secretary of the Army. Later, he served as the Acting Under Secretary of Defense for Personnel & Readiness, and now serves as President of Americans for Re...
Season 5 • Episode 5 • 1:04:25
Irakli Beridze - Can the UN Help with Global AGI Governance? (AGI Governance, Episode 11)
Joining us in the eleventh episode of our AGI Governance series on The Trajectory is Irakli Beridze, Director of the UNICRI Centre for Artificial Intelligence and Robotics under the United Nations mandate. In this conversation, Irakli d...
Season 3 • Episode 11 • 52:52
Dean Xue Lan - A Multi-Pronged Approach to Pre-AGI Coordination (AGI Governance, Episode 10)
Joining us in the tenth episode of our AGI Governance series on The Trajectory is Dean Xue Lan, longtime scholar of public policy and global governance, whose recent work centers on AI safety and international coordination. In this episo...
Season 3 • Episode 10 • 37:01
RAND’s Joel Predd - Competitive and Cooperative Dynamics of AGI (US-China AGI Relations, Episode 4)
This is an interview with Joel Predd, a senior engineer at the RAND Corporation and co-author of RAND's work on "five hard national security problems from AGI." In this conversation, Joel lays out a sober frame for leaders: treat AGI a...
Season 5 • Episode 4 • 1:09:41
Drew Cukor - AI Adoption as a National Security Priority (US-China AGI Relations, Episode 3)
USMC Colonel Drew Cukor spent 25 years in uniform and helped spearhead early Department of Defense AI efforts, eventually leading projects including the Pentagon's Project Maven. After government service, he's led AI initiatives in th...
Season 5 • Episode 3 • 51:15
Stuart Russell - Avoiding the Cliff of Uncontrollable AI (AGI Governance, Episode 9)
Joining us in the ninth episode of our AGI Governance series on The Trajectory is Stuart Russell, Professor of Computer Science at UC Berkeley and author of...
Season 3 • Episode 9 • 1:04:32
Craig Mundie - Co-Evolution with AI: Industry First, Regulators Later (AGI Governance, Episode 8)
Joining us in the eighth episode of our AGI Governance series on The Trajectory is Craig Mundie, former Chief Research and Strategy Officer at Microsoft and longtime advisor on the evolution of digital infrastructure, AI, and national security....
Season 3 • Episode 8 • 36:45
Jeremie and Edouard Harris - What Makes US-China Alignment Around AGI So Hard (US-China AGI Relations, Episode 2)
This is an interview with Jeremie and Edouard Harris, Canadian researchers with backgrounds in AI governance and national security consulting, and co-founders of Gladstone AI. In this episode, Jeremie and Edouard explain why trusting Chi...
Season 5 • Episode 2 • 1:33:06
Ed Boyden - Neurobiology as a Bridge to a Worthy Successor (Worthy Successor, Episode 13)
This new installment of the Worthy Successor series features Ed Boyden, an American neuroscientist and entrepreneur at MIT, widely known for his work on optogenetics and brain simulation - his breakthroughs have helped shape the frontier of neu...
Season 2 • Episode 13 • 1:19:27
Roman Yampolskiy - The Blacker the Box, the Bigger the Risk (Early Experience of AGI, Episode 3)
This is an interview with Roman V. Yampolskiy, a computer scientist at the University of Louisville and a leading voice in AI safety. Everyone has heard Roman's p(doom) arguments; that isn't the focus of our interview. We instead t...
Season 6 • Episode 3 • 1:28:56
Toby Ord - Crucial Updates on the Evolving AGI Risk Landscape (AGI Governance, Episode 7)
Joining us in the seventh episode of our AGI Governance series on The Trajectory is Toby Ord, Senior Researcher at Oxford University's AI Governance Initiative and author of The Precipice: Existential Risk and the Future of Humanity. To...
Season 3 • Episode 7 • 1:24:49
Martin Rees - If They’re Conscious, We Should Step Aside (Worthy Successor, Episode 12)
This new installment of the Worthy Successor series is an interview with the brilliant Martin Rees - British cosmologist, astrophysicist, and 60th President of the Royal Society. In this interview we explore his belief that humanity is ...
Season 2 • Episode 12 • 1:17:06
Emmett Shear - AGI as "Another Kind of Cell" in the Tissue of Life (Worthy Successor, Episode 11)
This is an interview with Emmett Shear - CEO of SoftMax, co-founder of Twitch, former interim CEO of OpenAI, and one of the few public-facing tech leaders who seems to take both AGI development and AGI alignment seriously. In this episod...
Season 2 • Episode 11 • 1:30:42
Joshua Clymer - Where Human Civilization Might Crumble First (Early Experience of AGI - Episode 2)
This is an interview with Joshua Clymer, AI safety researcher at Redwood Research and former researcher at METR. Joshua has spent years focused on institutional readiness for AGI, especially the kinds of governance bottlenecks that coul...
Season 6 • Episode 2 • 1:51:37
Peter Singer - Optimizing the Future for Joy, and the Exploration of the Good [Worthy Successor, Episode 10]
This is an interview with Peter Singer, one of the most influential moral philosophers of our time. Singer is best known for his groundbreaking work on animal rights, global poverty, and utilitarian ethics, and his ideas have shaped...
Season 2 • Episode 10 • 1:25:55
David Duvenaud - What are Humans Even Good For in Five Years? [Early Experience of AGI - Episode 1]
This is an interview with David Duvenaud, Assistant Professor at the University of Toronto, co-author of the Gradual Disempowerment paper, and former researcher at Anthropic. This is the first episode in our new "Early Experience of AGI" ser...
Season 6 • Episode 1 • 1:55:59
Kristian Rönn - A Blissful Successor Beyond Darwinian Life [Worthy Successor, Episode 9]
This is an interview with Kristian Rönn, author, successful startup founder, and now CEO of Lucid, an AI hardware governance startup based in SF. This is an additional installment of our "Worthy Successor" series - where we explore the ...
Season 2 • Episode 9 • 1:47:40
Jack Shanahan - Avoiding an AI Race While Keeping America Strong [US-China AGI Relations, Episode 1]
This is an interview with Jack Shanahan, a three-star General and former Director of the Joint AI Center (JAIC) within the US Department of Defense. This is the first installment of our "US-China AGI Relations" series - where we explor...
Season 5 • Episode 1 • 1:41:56
Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]
This is an interview with Richard Ngo, AGI researcher and thinker - with extensive stints at both OpenAI and DeepMind. This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intellige...
Season 2 • Episode 8 • 1:46:15
Yi Zeng - Exploring 'Virtue' and Goodness Through Posthuman Minds [AI Safety Connect, Episode 2]
This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing Institute for AI Safety and Governance (among many other accolades). ...
Season 4 • Episode 2 • 1:14:19
Max Tegmark - The Lynchpin Factors to Achieving AGI Governance [AI Safety Connect, Episode 1]
This is an interview with Max Tegmark, MIT professor, Founder of the Future of Life Institute, and author of Life 3.0. This interview was recorded on-site at AI Safety Connect 2025, a side event from the AI Action Summit in Paris.
Season 4 • Episode 1 • 26:06
Michael Levin - Unfolding New Paradigms of Posthuman Intelligence [Worthy Successor, Episode 7]
This is an interview with Dr. Michael Levin, a pioneering developmental biologist at Tufts University. This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserv...
Season 2 • Episode 7 • 1:16:35