How can a binding international treaty be agreed and put into practice, when many parties are strongly tempted to break the rules of the agreement, for commercial or military advantage, and when cheating may be hard to detect? That’s the dilemma we’ll examine in this episode, concerning possible treaties to govern the development and deployment of advanced AI.
Our guest is Otto Barten, Director of the Existential Risk Observatory, which is based in the Netherlands but operates internationally. In November last year, Time magazine published an article by Otto, advocating what his organisation calls a Conditional AI Safety Treaty. In March this year, these ideas were expanded into a 34-page preprint which we’ll be discussing today, “International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty”.
Before co-founding the Existential Risk Observatory in 2021, Otto had roles as a sustainable energy engineer, data scientist, and entrepreneur. He has a BSc in Theoretical Physics from the University of Groningen and an MSc in Sustainable Energy Technology from Delft University of Technology.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks.
Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries.
Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.
Selected follow-ups:
Other people mentioned include:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our subject in this episode may seem grim – it’s the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own human activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high-energy physics experiments causing a cataclysmic rupture in space and time.
These scenarios aren’t pleasant to contemplate, but there’s a school of thought that urges us to take them seriously – to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last couple of decades, few people have been thinking about the unthinkable more carefully and systematically than our guest today, Sean ÓhÉigeartaigh. Sean is the author of a recent summary article from Cambridge University Press that we’ll be discussing, “Extinction of the human species: What could cause it and how likely is it to occur?”
Sean is presently based in Cambridge where he is a Programme Director at the Leverhulme Centre for the Future of Intelligence. Previously he was founding Executive Director of the Centre for the Study of Existential Risk, and before that, he managed research activities at the Future of Humanity Institute in Oxford.
Selected follow-ups:
Our guest in this episode, Ramez Naam, is described on his website as “climate tech investor, clean energy advocate, and award-winning author”. But that hardly starts to convey the range of deep knowledge that Ramez brings to a wide variety of fields. It was his 2013 book, “The Infinite Resource: The Power of Ideas on a Finite Planet”, that first alerted David to the breadth of scope of his insight about future possibilities – both good possibilities and bad possibilities. He still vividly remembers its opening words, quoting Charles Dickens from “A Tale of Two Cities”:
Quote: “‘It was the best of times; it was the worst of times’ – the opening line of Charles Dickens’s 1859 masterpiece applies equally well to our present era. We live in unprecedented wealth and comfort, with capabilities undreamt of in previous ages. We live in a world facing unprecedented global risks—risks to our continued prosperity, to our survival, and to the health of our planet itself. We might think of our current situation as ‘A Tale of Two Earths’.” End quote.
Twelve years after the publication of “The Infinite Resource”, it seems that the Earth has become even better, but also even worse. Where does this leave the power of ideas? Or do we need more than ideas, as ominous storm clouds continue to gather on the horizon?
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, our guest is Rebecca Finlay, the CEO at Partnership on AI (PAI). Rebecca previously joined us in Episode 62, back in October 2023, in what was the run-up to the Global AI Safety Summit at Bletchley Park in the UK. Times have moved on, and earlier this month, Rebecca and the Partnership on AI participated in the latest global summit in that same series, held this time in Paris. This summit, breaking with the previous naming, was called the AI Action Summit. We’ll be hearing from Rebecca how things have evolved since we last spoke – and what the future may hold.
Prior to joining Partnership on AI, Rebecca founded the AI & Society program at global research organization CIFAR, one of the first international, multistakeholder initiatives on the impact of AI in society. Rebecca’s insights have been featured in books and media including The Financial Times, The Guardian, Politico, and Nature Machine Intelligence. She is a Fellow of the American Association for the Advancement of Science and sits on advisory bodies in Canada, France, and the U.S.
Selected follow-ups:
Probably the most eagerly anticipated development in AI this year is the arrival of AI agents, also referred to as “agentic AI”. We are told that AI agents have the potential to reshape how individuals and organizations interact with technology.
Our guest to help us explore this is Tom Davenport, Distinguished Professor in Information Technology and Management at Babson College, and a globally recognized thought leader in the areas of analytics, data science, and artificial intelligence. Tom has written, co-authored, or edited about twenty books, including "Competing on Analytics" and "The AI Advantage." He has worked extensively with leading organizations and has a unique perspective on the transformative impact of AI across industries. He has recently co-authored an article in the MIT Sloan Management Review, “Five Trends in AI and Data Science for 2025”, which included a section on AI agents – which is why we invited him to talk about the subject.
Selected follow-ups:
In this episode, we return to a theme which is likely to become increasingly central to public discussion in the months and years ahead. To use a term coined by this podcast’s co-host Calum Chace, this theme is the Economic Singularity, namely the potential all-round displacement of humans from the workforce by ever more capable automation. That leads to the question: what are our options for managing society’s transition to increasing technological unemployment and technological underemployment?
Our guest, who will be sharing his thinking on these questions, is the prolific writer and YouTuber David Shapiro. As well as keeping on top of fast-changing news about innovations in AI, David has been developing a set of ideas he calls post-labour economics – how an economy might continue to function even if humans can no longer gain financial rewards in direct return for their labour.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guests in this episode have been described as the world’s two oldest scientifically astute longevity activists. They are Kenneth Scott, aged 82, who is based in Florida, and Helga Sands, aged 86, who lives in London.
David has met both of them several times at a number of longevity events, and they always impress him, not only with their vitality and good health, but also with the knowledge and intelligence they apply to the question of which treatments are best – for them personally and for others – at keeping people young and vibrant.
Selected follow-ups:
Our guest in this episode is Jeff LaPorte, a software engineer, entrepreneur and investor based in Vancouver, who writes Road to Artificia, a newsletter about discovering the principles of post‑AI societies.
Calum recently came across Jeff's article “Valuing Humans in the Age of Superintelligence: HumaneRank” and thought it had some good, original ideas, so we wanted to invite Jeff onto the podcast and explore them.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our subject in this episode is altruism – our human desire and instinct to assist each other, making some personal sacrifices along the way. More precisely, our subject is the possible future of altruism – a future in which our philanthropic activities – our charitable donations, and how we spend our discretionary time – could have a considerably greater impact than at present. The issue is that many of our present activities, which are intended to help others, aren’t particularly effective.
That’s the judgement reached by our guest today, Stefan Schubert. Stefan is a researcher in philosophy and psychology, currently based in Stockholm, Sweden, and has previously held roles at the LSE and the University of Oxford. Stefan is the co-author of the recently published book “Effective Altruism and the Human Mind”.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Amory Lovins, a distinguished environmental scientist and co-founder of RMI, which was founded in 1982 as Rocky Mountain Institute. It’s what he calls a “think, do, and scale tank”, with 700 people in 62 countries and a budget of well over $100m a year.
For over five decades, Amory has championed innovative approaches to energy systems, advocating for a world where energy services are delivered at the least cost and with the least impact. He has advised all manner of governments, companies, and NGOs, and published 31 books and over 900 papers. It’s an over-used term, but in this case it is justified: Amory is a true thought leader in the global energy transition.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Some people say that all that’s necessary to improve the capabilities of AI is to scale up existing systems: more training data, larger models with more parameters, and more computer chips to crunch through that data. However, in this episode, we’ll be hearing from a computer scientist who thinks there are many other options for improving AI. He is Alexander Ororbia, a professor at the Rochester Institute of Technology in New York State, where he directs the Neural Adaptive Computing Laboratory.
David had the pleasure of watching Alex give a talk at the AGI 2024 conference in Seattle earlier this year, and found it fascinating. After you hear this episode, we hope you reach a similar conclusion.
Selected follow-ups:
In David's life so far, he has read literally hundreds of books about the future. Yet none has had such a provocative title as this: “The Future Loves You: How and Why We Should Abolish Death”. That’s the title of the book written by the guest in this episode, Ariel Zeleznikow-Johnston. Ariel is a neuroscientist, and a Research Fellow at Monash University, in Melbourne, Australia.
One of the key ideas in Ariel’s book is that so long as your connectome – the full set of the synapses in your brain – continues to exist, then you continue to exist. Ariel also claims that brain preservation – the preservation of the connectome, long after we have stopped breathing – is already affordable enough to be provided to essentially everyone. These claims raise all kinds of questions, which are addressed in this conversation.
Selected follow-ups:
Related previous episodes:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Sterling Anderson, a pioneer of self-driving vehicles. With a master’s degree and a PhD from MIT, Sterling led the development and launch of the Tesla Model X, and then led the team that delivered Tesla Autopilot. In 2017 he co-founded Aurora, along with Chris Urmson, who was a founder and CTO of Google’s self-driving car project, which is now Waymo, and also Drew Bagnell, who co-founded and led Uber’s self-driving team.
Aurora is concentrating on automating long-distance trucks, and expects to be the first company to deploy fully self-driving trucks in the US, with big driverless trucks (16 tons and more) running between Dallas and Houston from April 2025.
Self-driving vehicles will be one of the most significant technologies of this decade, and we are delighted that one of the stars of the sector, Sterling, is joining us to share his perspectives.
Selected follow-ups:
Previous episodes also featuring self-driving vehicles:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Parmy Olson, a columnist for Bloomberg covering technology. Parmy has previously been a reporter for the Wall Street Journal and for Forbes. Her first book, “We Are Anonymous”, shed fascinating light on what the subtitle calls “the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency”.
But her most recent book illuminates a high-stakes rivalry with potentially even bigger consequences for human wellbeing. The title is “Supremacy: AI, ChatGPT and the Race That Will Change the World”. The race is between two remarkable individuals, Sam Altman of OpenAI and Demis Hassabis of DeepMind, who are each profoundly committed to building AI that exceeds human capabilities in all aspects of reasoning.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Andrea Miotti, the founder and executive director of ControlAI. On their website, ControlAI have the tagline, “Fighting to keep humanity in control”. Control over what, you might ask. The website answers: control deepfakes, control scaling, control foundation models, and, yes, control AI.
The latest project from ControlAI is called “A Narrow Path”, which is a comprehensive policy plan split into three phases: Safety, Stability, and Flourishing. To be clear, the envisioned flourishing involves what is called “Transformative AI”. This is no anti-AI campaign, but rather an initiative to “build a robust science and metrology of intelligence, safe-by-design AI engineering, and other foundations for transformative AI under human control”.
The initiative has already received lots of feedback, both positive and negative, which we discuss.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is David Wakeling, a partner at A&O Shearman, which became the world’s third-largest law firm in May, thanks to the merger of Allen & Overy, a UK “magic circle” firm, with Shearman & Sterling of New York.
David heads up a team within the firm called the Markets Innovation Group (MIG), which consists of lawyers, developers and technologists, and is seeking to disrupt the legal industry. He also leads the firm's AI Advisory practice, through which the firm is currently advising 80 of the largest global businesses on the safe deployment of AI.
One of the initiatives David has led is the development and launch of ContractMatrix, in partnership with Microsoft and Harvey, an OpenAI-backed, GPT-4-based large language model that has been fine-tuned for the legal industry. ContractMatrix is a contract drafting and negotiation tool powered by generative AI. It was tested and honed by 1,000 of the firm’s lawyers prior to launch, to mitigate risks like hallucinations. The firm estimates that the tool is saving up to seven hours from the average contract review, which is around a 30% efficiency gain. As well as internal use by 2,000 of its lawyers, it is also licensed to clients.
This is the third time we have looked at the legal industry on the podcast. While lawyers no longer use quill pens, they are not exactly famous for their information technology skills, either. But the legal profession has a couple of characteristics which make it eminently suited to the deployment of advanced AI systems: it generates vast amounts of data and money, and lawyers frequently engage in text-based routine tasks which can be automated by generative AI systems.
Previous London Futurists Podcast episodes on the legal industry:
Other selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Matt Burgess. Matt is an Assistant Professor at the University of Wyoming, where he moved this year after six years at the University of Colorado Boulder. He has specialised in the economics of climate change.
Calum met Matt at a recent event in Jackson Hole, Wyoming, and knows from their conversations then that Matt has also thought deeply about the impact of social media, the causes of populism, and many other subjects.
Selected follow-ups:
Our guest in this episode is Karl Pfleger. Karl is an angel investor in rejuvenation biotech startups, and is also known for creating and maintaining the website Aging Biotech Info. That website describes itself as “Structured info about aging and longevity”, and has the declared mission statement, “Everything important in the field (outside of academia), organized.”
Previously, Karl worked at Google from 2002 to 2013, as a research scientist and data analyst, applying AI and machine learning at scale. He has a BSE in Computer Science from Princeton, and a PhD in Computer Science and AI from Stanford.
Previous London Futurists Podcast episodes mentioned in this conversation:
Other selected follow-ups:
Our guest today is Pedro Domingos, who is joining an elite group of repeat guests – he joined us before in episode 34 in April 2023.
Pedro is Professor Emeritus of Computer Science and Engineering at the University of Washington. He has done pioneering work in machine learning, such as the development of Markov logic networks, which combine probabilistic reasoning with first-order logic. He is probably best known for his book "The Master Algorithm", which describes five different "tribes" of AI researchers, and argues that progress towards human-level general intelligence requires a unification of their approaches.
More recently, Pedro has become a trenchant critic of what he sees as exaggerated claims about the power and potential of today’s AI, and of calls to impose constraints on it.
He has just published “2040: A Silicon Valley Satire”, a novel which ridicules Big Tech and also American politics.
Selected follow-ups:
Our guest in this episode is the journalist and author James Ball. James has worked for the Bureau of Investigative Journalism, The Guardian, WikiLeaks, BuzzFeed, The New European, and The Washington Post, among other organisations. As special projects editor at The Guardian, James played a key role in the Pulitzer Prize-winning coverage of the NSA leaks by Edward Snowden.
Books that James has written include “Post-Truth: How Bullshit Conquered the World”, “Bluffocracy”, which makes the claim that Britain is run by bluffers, “The System: Who Owns the Internet, and How It Owns Us”, and, most recently, “The Other Pandemic: How QAnon Contaminated the World”.
That all adds up to enough content to fill at least four of our episodes, but we mainly focus on the ideas in the last of these books, about digital pandemics.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
In this episode, we have not one guest but two – Brett King and Robert Tercek, the hosts of the Futurists Podcast.
Brett King is originally from Australia, and is now based in Thailand. He is a renowned author, and the founder of a breakthrough digital bank. He consults extensively with clients in the financial services industry.
Robert Tercek, based in the United States, is an expert in digital media with a successful career in broadcasting and innovation which includes serving as a creative director at MTV and a senior vice president at Sony Pictures. He now consults to CEOs about digital transformation.
David and Calum had the pleasure of joining them on their podcast recently, where the conversation delved into the likely future impacts of artificial intelligence and other technologies, and also touched on politics.
This return conversation covers a wide range of themes, including the dangers of Q-day, the prospects for technological unemployment, the future of media, different approaches to industrial strategy, a plea to "bring on the machines", and the importance of "thinking more athletically about the future".
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Jordan Sparks, the founder and executive director of Oregon Brain Preservation (OBP), which is located in Salem, the capital city of Oregon. OBP offers the service of chemically preserving the brain in the hope of future restoration.
Previously, Jordan was a dentist and a computer programmer, and he was successful enough in those fields to generate the capital required to start OBP.
Brain preservation is a fascinating subject that we have covered in a number of recent episodes, in which we have interviewed Kenneth Hayworth, Max More, and Emil Kendziorra.
Most people whose brains have been preserved for future restoration have undergone cryopreservation, which involves cooling the brain (and sometimes the whole body) down to a very low temperature and keeping it that way. OBP does offer that service occasionally, but its focus – which may be unique – is chemical fixation of the brain.
Previous episodes on biostasis and brain preservation:
Additional selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Our guest in this episode is Holly Joint, who was born and educated in the UK, but lives in Abu Dhabi in the UAE.
Holly started her career with five years at the business consultancy Accenture, and then worked in telecoms and banking. The latter took her to the Gulf, where she then spent what must have been a fascinating year as programme director of Qatar’s winning bid to host the 2022 World Cup. Since then she has run a number of other start-ups and high-growth businesses in the Gulf.
Holly is currently COO of Trivandi, and also focuses on helping women gain more power in a future dominated by technology.
Calum met Holly at a conference in Dubai this year, where she quizzed him on-stage about machine consciousness.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
How do we keep technology from slipping beyond our control? That’s the subtitle of the latest book by our guest in this episode, Wendell Wallach.
Wendell is the Carnegie-Uehiro fellow at Carnegie Council for Ethics in International Affairs, where he co-directs the Artificial Intelligence & Equality Initiative. He is also Emeritus Chair of Technology and Ethics Studies at Yale University’s Interdisciplinary Center for Bioethics, a scholar with the Lincoln Center for Applied Ethics, a fellow at the Institute for Ethics and Emerging Technologies, and a senior advisor to The Hastings Center.
Earlier in his life, Wendell was founder and president of two computer consulting companies, Farpoint Solutions and Omnia Consulting Inc.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration