Our guest in this episode is Noel Hurley. Noel is a highly experienced technology strategist with a long career at the cutting edge of computing. He spent two decade-long stints at Arm, the semiconductor company whose processor designs power hundreds of billions of devices worldwide.
Today, he’s a co-founder of Literal Labs, where he’s developing Tsetlin Machines. Named after Michael Tsetlin, a Soviet mathematician, these are machine learning models that are energy-efficient, flexible, and surprisingly effective at solving complex problems, without the opacity or computational overhead of large neural networks.
AI has long had two main camps, or tribes. One camp works with neural networks, including Large Language Models. Neural networks are brilliant at pattern matching, and can be compared to human instinct, or fast thinking, to use Daniel Kahneman’s terminology. Neural nets have been dominant since the first Big Bang in AI in 2012, when Geoff Hinton and others demonstrated the power of deep learning.
For decades before the 2012 Big Bang, the predominant form of AI was symbolic AI, also known as Good Old-Fashioned AI. This can be compared to logical reasoning, or slow thinking in Kahneman’s terminology.
Tsetlin Machines have characteristics of both neural networks and symbolic AI. They are rule-based learning systems built from simple automata, not from neurons or weights. But their learning mechanism is statistical and adaptive, more like machine learning than traditional symbolic AI.
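To make the “simple automata” idea concrete, here is a minimal sketch, in Python, of a two-action Tsetlin automaton, the building block from which Tsetlin Machines are composed. This is our own illustrative code under simplifying assumptions, not Literal Labs’ implementation: each automaton walks along a chain of memory states in response to reward and penalty feedback, and whichever half of the chain it currently occupies determines its action, for example whether a Boolean feature is included in a learned rule.

```python
import random

class TsetlinAutomaton:
    """A two-action Tsetlin automaton with 2 * n memory states.

    States 1..n select action 0; states n+1..2n select action 1.
    Rewards push the state deeper into the current action's half
    (reinforcing the choice); penalties push it towards the boundary
    and, eventually, the other action.
    """

    def __init__(self, n: int = 100):
        self.n = n
        # Start on the boundary, with no strong preference either way.
        self.state = random.choice([n, n + 1])

    def action(self) -> int:
        return 0 if self.state <= self.n else 1

    def reward(self) -> None:
        # Reinforce the current action.
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self) -> None:
        # Weaken the current action, possibly switching to the other one.
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1
```

A Tsetlin Machine combines many such automata, each voting on whether to include or exclude a Boolean literal in a clause, and the clauses in turn vote on the final classification, which is why the learned model can be read back as human-legible rules.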
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Could the future see the emergence and adoption of a new field of engineering called nucleonics, in which the energy of nuclear fusion is accessed at relatively low temperatures, producing abundant, clean, safe energy? This kind of idea has been discussed since 1989, when the claims of cold fusion first received media attention. It is often assumed that the field quickly reached a dead end, and that the only scientists who continue to study it are cranks. However, as we’ll hear in this episode, there may be good reasons to keep an open mind about a number of anomalous but promising results.
Our guest is Jonah Messinger, who is a Winton Scholar and Ph.D. student at the Cavendish Laboratory of Physics at the University of Cambridge. Jonah is also a Research Affiliate at MIT, a Senior Energy Analyst at the Breakthrough Institute, and previously he was a Visiting Scientist and ThinkSwiss Scholar at ETH Zürich. His work has appeared in research journals, on the John Oliver show, and in publications of Columbia University. He earned his Master’s in Energy and Bachelor’s in Physics from the University of Illinois at Urbana-Champaign, where he was named to its Senior 100 Honorary.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
This episode of the London Futurists Podcast is a special joint production with the AI and You podcast, which is hosted by Peter Scott. It features a three-way discussion between Peter, Calum, and David on the future of AI, with particular focus on AI agents, AI safety, and AI boycotts.
Peter Scott is a futurist, speaker, and technology expert helping people master technological disruption. After receiving a Master’s degree in Computer Science from Cambridge University, he went to California to work for NASA’s Jet Propulsion Laboratory. His weekly podcast, “Artificial Intelligence and You” tackles three questions: What is AI? Why will it affect you? How do you and your business survive and thrive through the AI Revolution?
Peter’s second book, also called “Artificial Intelligence and You,” was released in 2022. Peter works with schools to help them pivot their governance frameworks, curricula, and teaching methods to adapt to and leverage AI.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
The guest in this episode is Hugo Spowers. Hugo has led an adventurous life. In the 1970s and 80s he was an active member of the Dangerous Sports Club, which invented bungee jumping, inspired by an initiation ceremony in Vanuatu. Hugo skied down a black run in St. Moritz in formal dress, seated at a grand piano, and he broke his back, neck, and hips when he misjudged the length of one of his bungee ropes.
Hugo is a petrolhead, and has done more than his fair share of car racing. But if he’ll excuse the pun, his driving passion was always the environment, and he is one of the world’s most persistent and dedicated pioneers of hydrogen cars.
He is co-founder and CEO of Riversimple, a 24-year-old pre-revenue startup which has developed five generations of research vehicles. Hydrogen cars are powered by electric motors using electricity generated by fuel cells. A fuel cell is electrolysis in reverse: you put in hydrogen and oxygen, and what you get out is electricity and water.
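To spell out that “electrolysis in reverse” picture, the overall reaction in a hydrogen fuel cell is the standard textbook one (a general chemistry sketch, not a Riversimple-specific figure):

\[ 2\,\mathrm{H_2} + \mathrm{O_2} \;\rightarrow\; 2\,\mathrm{H_2O} + \text{electrical energy} + \text{waste heat} \]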
There is a long-standing debate among energy experts about the role of hydrogen fuel cells in the energy mix, and Hugo is a persuasive advocate. Riversimple’s cars carry modest-sized fuel cells complemented by supercapacitors, with a motor for each of the four wheels. The cars are made of composites, not steel, because minimising weight is critical for fuel efficiency, pollution, and road safety. The cars are leased rather than sold, which enables a circular business model involving higher initial investment per car and no built-in obsolescence. The initial, market-entry cars are designed as local run-arounds for households with two cars, which means the fuelling network can be built out gradually. And Hugo also has strong opinions about company governance.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Can we use AI to improve how we handle conflict? Or even to end the worst conflicts that are happening all around us? That’s the subject of the new book by our guest in this episode, Simon Horton. The book has the bold title “The End of Conflict: How AI will end war and help us get on better”.
Simon has a rich background, including being a stand-up comedian and a trapeze artist – which are, perhaps, two useful skills for dealing with acute conflict. He has taught negotiation and conflict resolution for 20 years, across 25 different countries, where his clients have included the British Army, the Saudi Space Agency, and Goldman Sachs. His previous books include “Change their minds” and “The leader’s guide to negotiation”.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guest in this episode is Nate Soares, President of the Machine Intelligence Research Institute, or MIRI.
MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Eliezer Yudkowsky, with support from a couple of internet entrepreneurs. Among other things, it ran a series of conferences called the Singularity Summit. In 2012, Peter Diamandis and Ray Kurzweil acquired the Singularity Summit, including the Singularity brand, and the Institute was renamed MIRI.
Nate joined MIRI in 2014 after working as a software engineer at Google, and since then he’s been a key figure in the AI safety community. In a blog post written at the time he joined MIRI, he observed: “I turn my skills towards saving the universe, because apparently nobody ever got around to teaching me modesty.”
MIRI has long had a fairly pessimistic stance on whether AI alignment is possible. In this episode, we’ll explore what drives that view—and whether there is any room for hope.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guest in this episode is Henry Shevlin. Henry is the Associate Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where he also co-directs the Kinds of Intelligence program and oversees educational initiatives.
He researches the potential for machines to possess consciousness, the ethical ramifications of such developments, and the broader implications for our understanding of intelligence.
In his 2024 paper, “Consciousness, Machines, and Moral Status,” Henry examines the recent rapid advancements in machine learning and the questions they raise about machine consciousness and moral status. He suggests that public attitudes towards artificial consciousness may change swiftly, as human-AI interactions become increasingly complex and intimate. He also warns that our tendency to anthropomorphise may lead to misplaced trust in and emotional attachment to AIs.
Note: this episode is co-hosted by David and Will Millership, the CEO of a non-profit called Prism (Partnership for Research Into Sentient Machines). Prism is seeded by Conscium, a startup where both Calum and David are involved, and which, among other things, is researching the possibility and implications of machine consciousness. Will and Calum will be releasing a new Prism podcast focusing entirely on Conscious AI, and the first few episodes will be in collaboration with the London Futurists Podcast.
Selected follow-ups:
Other researchers mentioned:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
How can a binding international treaty be agreed and put into practice, when many parties are strongly tempted to break the rules of the agreement, for commercial or military advantage, and when cheating may be hard to detect? That’s the dilemma we’ll examine in this episode, concerning possible treaties to govern the development and deployment of advanced AI.
Our guest is Otto Barten, Director of the Existential Risk Observatory, which is based in the Netherlands but operates internationally. In November last year, Time magazine published an article by Otto, advocating what his organisation calls a Conditional AI Safety Treaty. In March this year, these ideas were expanded into a 34-page preprint which we’ll be discussing today, “International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty”.
Before co-founding the Existential Risk Observatory in 2021, Otto had roles as a sustainable energy engineer, data scientist, and entrepreneur. He has a BSc in Theoretical Physics from the University of Groningen and an MSc in Sustainable Energy Technology from Delft University of Technology.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks.
Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries.
Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.
Selected follow-ups:
Other people mentioned include:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our subject in this episode may seem grim – it’s the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own human activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high-energy physics experiments causing a cataclysmic rupture in space and time.
These scenarios aren’t pleasant to contemplate, but there’s a school of thought that urges us to take them seriously – to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last couple of decades, few people have been thinking about the unthinkable more carefully and systematically than our guest today, Sean ÓhÉigeartaigh. Sean is the author of a recent summary article from Cambridge University Press that we’ll be discussing, “Extinction of the human species: What could cause it and how likely is it to occur?”
Sean is presently based in Cambridge where he is a Programme Director at the Leverhulme Centre for the Future of Intelligence. Previously he was founding Executive Director of the Centre for the Study of Existential Risk, and before that, he managed research activities at the Future of Humanity Institute in Oxford.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guest in this episode, Ramez Naam, is described on his website as “climate tech investor, clean energy advocate, and award-winning author”. But that hardly starts to convey the range of deep knowledge that Ramez brings to a wide variety of fields. It was his 2013 book, “The Infinite Resource: The Power of Ideas on a Finite Planet”, that first alerted David to the breadth of his insight about future possibilities – both good possibilities and bad possibilities. He still vividly remembers its opening words, quoting Charles Dickens from “A Tale of Two Cities”:
Quote: “‘It was the best of times; it was the worst of times’ – the opening line of Charles Dickens’s 1859 masterpiece applies equally well to our present era. We live in unprecedented wealth and comfort, with capabilities undreamt of in previous ages. We live in a world facing unprecedented global risks—risks to our continued prosperity, to our survival, and to the health of our planet itself. We might think of our current situation as ‘A Tale of Two Earths’.” End quote.
Twelve years after the publication of “The Infinite Resource”, it seems that the Earth has become even better, but also even worse. Where does this leave the power of ideas? Or do we need more than ideas, as ominous storm clouds continue to gather on the horizon?
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
In this episode, our guest is Rebecca Finlay, the CEO at Partnership on AI (PAI). Rebecca previously joined us in Episode 62, back in October 2023, in the run-up to the global AI Safety Summit at Bletchley Park in the UK. Times have moved on, and earlier this month, Rebecca and the Partnership on AI participated in the latest global summit in that same series, held this time in Paris. This summit, breaking with the previous naming, was called the AI Action Summit. We’ll be hearing from Rebecca how things have evolved since we last spoke – and what the future may hold.
Prior to joining Partnership on AI, Rebecca founded the AI & Society program at global research organization CIFAR, one of the first international, multistakeholder initiatives on the impact of AI in society. Rebecca’s insights have been featured in books and media including The Financial Times, The Guardian, Politico, and Nature Machine Intelligence. She is a Fellow of the American Association for the Advancement of Science and sits on advisory bodies in Canada, France, and the U.S.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
The most highly anticipated development in AI this year is probably the expected arrival of AI agents, also referred to as “agentic AI”. We are told that AI agents have the potential to reshape how individuals and organizations interact with technology.
Our guest to help us explore this is Tom Davenport, Distinguished Professor in Information Technology and Management at Babson College, and a globally recognized thought leader in the areas of analytics, data science, and artificial intelligence. Tom has written, co-authored, or edited about twenty books, including "Competing on Analytics" and "The AI Advantage." He has worked extensively with leading organizations and has a unique perspective on the transformative impact of AI across industries. He has recently co-authored an article in the MIT Sloan Management Review, “Five Trends in AI and Data Science for 2025”, which included a section on AI agents – which is why we invited him to talk about the subject.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
In this episode, we return to a theme which is likely to become increasingly central to public discussion in the months and years ahead. To use a term coined by this podcast’s co-host Calum Chace, this theme is the Economic Singularity, namely the potential all-round displacement of humans from the workforce by ever more capable automation. That leads to the question: what are our options for managing society’s transition to increasing technological unemployment and underemployment?
Our guest, who will be sharing his thinking on these questions, is the prolific writer and YouTuber David Shapiro. As well as keeping on top of fast-changing news about innovations in AI, David has been developing a set of ideas he calls post-labour economics – how an economy might continue to function even if humans can no longer gain financial rewards in direct return for their labour.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guests in this episode have been described as the world’s two oldest scientifically astute longevity activists. They are Kenneth Scott, aged 82, who is based in Florida, and Helga Sands, aged 86, who lives in London.
David has met both of them several times at a number of longevity events, and they always impress him, not only with their vitality and good health, but also with the level of knowledge and intelligence they apply to the question of which treatments are the best, for them personally and for others, to help keep people young and vibrant.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guest in this episode is Jeff LaPorte, a software engineer, entrepreneur and investor based in Vancouver, who writes Road to Artificia, a newsletter about discovering the principles of post‑AI societies.
Calum recently came across Jeff's article “Valuing Humans in the Age of Superintelligence: HumaneRank” and thought it had some good, original ideas, so we wanted to invite Jeff onto the podcast and explore them.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our subject in this episode is altruism – our human desire and instinct to assist each other, making some personal sacrifices along the way. More precisely, our subject is the possible future of altruism – a future in which our philanthropic activities – our charitable donations, and how we spend our discretionary time – could have a considerably greater impact than at present. The issue is that many of our present activities, which are intended to help others, aren’t particularly effective.
That’s the judgement reached by our guest today, Stefan Schubert. Stefan is a researcher in philosophy and psychology, currently based in Stockholm, Sweden, and has previously held roles at the LSE and the University of Oxford. Stefan is the co-author of the recently published book “Effective Altruism and the Human Mind”.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guest in this episode is Amory Lovins, a distinguished environmental scientist and co-founder of RMI, which he helped establish in 1982 as Rocky Mountain Institute. It’s what he calls a “think, do, and scale tank”, with 700 people in 62 countries and a budget of well over $100m a year.
For over five decades, Amory has championed innovative approaches to energy systems, advocating for a world where energy services are delivered with least cost and least impact. He has advised all manner of governments, companies, and NGOs, and published 31 books and over 900 papers. It’s an over-used word, but in this case it is justified: Amory is a true thought leader in the global energy transition.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Some people say that all that’s necessary to improve the capabilities of AI is to scale up existing systems. That is, to use more training data, to have larger models with more parameters in them, and more computer chips to crunch through the training data. However, in this episode, we’ll be hearing from a computer scientist who thinks there are many other options for improving AI. He is Alexander Ororbia, a professor at the Rochester Institute of Technology in New York State, where he directs the Neural Adaptive Computing Laboratory.
David had the pleasure of watching Alex give a talk at the AGI 2024 conference in Seattle earlier this year, and found it fascinating. After you hear this episode, we hope you reach a similar conclusion.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
In David's life so far, he has read literally hundreds of books about the future. Yet none has had such a provocative title as this: “The future loves you: How and why we should abolish death”. That’s the title of the book written by the guest in this episode, Ariel Zeleznikow-Johnston. Ariel is a neuroscientist, and a Research Fellow at Monash University, in Melbourne, Australia.
One of the key ideas in Ariel’s book is that so long as your connectome – the full set of synaptic connections in your brain – continues to exist, you continue to exist. Ariel also claims that brain preservation – the preservation of the connectome, long after we have stopped breathing – is already affordable enough to be provided to essentially everyone. These claims raise all kinds of questions, which are addressed in this conversation.
Selected follow-ups:
Related previous episodes:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guest in this episode is Sterling Anderson, a pioneer of self-driving vehicles. With a master’s degree and a PhD from MIT, Sterling led the development and launch of the Tesla Model X, and then led the team that delivered Tesla Autopilot. In 2017 he co-founded Aurora, along with Chris Urmson, who was a founder and CTO of Google’s self-driving car project, which is now Waymo, and also Drew Bagnell, who co-founded and led Uber’s self-driving team.
Aurora is concentrating on automating long-distance trucks, and expects to be the first company to deploy fully self-driving trucks in the US, with big driverless trucks (16 tons and more) running between Dallas and Houston from April 2025.
Self-driving vehicles will be one of the most significant technologies of this decade, and we are delighted that one of the stars of the sector, Sterling, is joining us to share his perspectives.
Selected follow-ups:
Previous episodes also featuring self-driving vehicles:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guest in this episode is Parmy Olson, a columnist for Bloomberg covering technology. Parmy has previously been a reporter for the Wall Street Journal and for Forbes. Her first book, “We Are Anonymous”, shed fascinating light on what the subtitle calls “the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency”.
But her most recent book illuminates a set of high-stakes relations with potentially even bigger consequences for human wellbeing. The title is “Supremacy: AI, ChatGPT and the Race That Will Change the World”. The race is between two remarkable individuals, Sam Altman of OpenAI and Demis Hassabis of DeepMind, who are each profoundly committed to building AI that exceeds human capabilities in all aspects of reasoning.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guest in this episode is Andrea Miotti, the founder and executive director of ControlAI. On their website, ControlAI have the tagline, “Fighting to keep humanity in control”. Control over what, you might ask. The website answers: control deepfakes, control scaling, control foundation models, and, yes, control AI.
The latest project from ControlAI is called “A Narrow Path”, which is a comprehensive policy plan split into three phases: Safety, Stability, and Flourishing. To be clear, the envisioned flourishing involves what is called “Transformative AI”. This is no anti-AI campaign, but rather an initiative to “build a robust science and metrology of intelligence, safe-by-design AI engineering, and other foundations for transformative AI under human control”.
The initiative has already received lots of feedback, both positive and negative, which we discuss.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guest in this episode is David Wakeling, a partner at A&O Shearman, which became the world’s third largest law firm in May, thanks to the merger of Allen and Overy, a UK “magic circle” firm, with Shearman & Sterling of New York.
David heads up a team within the firm called the Markets Innovation Group (MIG), which consists of lawyers, developers and technologists, and is seeking to disrupt the legal industry. He also leads the firm's AI Advisory practice, through which the firm is currently advising 80 of the largest global businesses on the safe deployment of AI.
One of the initiatives David has led is the development and launch of ContractMatrix, in partnership with Microsoft and Harvey, an OpenAI-backed, GPT-4-based large language model that has been fine-tuned for the legal industry. ContractMatrix is a contract drafting and negotiation tool powered by generative AI. It was tested and honed by 1,000 of the firm’s lawyers prior to launch, to mitigate risks like hallucinations. The firm estimates that the tool is saving up to seven hours from the average contract review, which is around a 30% efficiency gain. As well as being used internally by 2,000 of the firm’s lawyers, it is licensed to clients.
This is the third time we have looked at the legal industry on the podcast. While lawyers no longer use quill pens, they are not exactly famous for their information technology skills, either. But the legal profession has a couple of characteristics which make it eminently suited to the deployment of advanced AI systems: it generates vast amounts of data and money, and lawyers frequently engage in text-based routine tasks which can be automated by generative AI systems.
Previous London Futurists Podcast episodes on the legal industry:
Other selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication
Our guest in this episode is Matt Burgess. Matt is an Assistant Professor at the University of Wyoming, where he moved this year after six years at the University of Colorado Boulder. He has specialised in the economics of climate change.
Calum met Matt at a recent event in Jackson Hole, Wyoming, and knows from their conversations then that Matt has also thought deeply about the impact of social media, the causes of populism, and many other subjects.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication