Bio(un)ethical

#18 David Thorstad: Evidence, uncertainty, and existential risk

with Leah Pierson and Sophie Gibert

In this episode, we speak with Dr. David Thorstad: Assistant Professor of Philosophy at Vanderbilt University, Senior Research Affiliate at the Global Priorities Institute, and author of the blog Reflective Altruism. We discuss existential risks, threats that could permanently destroy or drastically curtail humanity's future, and how we should reason about these risks under significant uncertainty.

(00:00) Our introduction

(09:32) Interview begins

(14:32) The longtermism shift

(23:17) Framework for objections to longtermism

(29:47) Overestimating existential risk: population dynamics

(36:06) Overestimating existential risk: cumulative vs. period risk

(39:44) Overestimating existential risk: ignoring background risk

(42:14) The time of perils hypothesis

(46:11) When and where should philosophers speculate?

(1:09:02) Extraordinary claims require extraordinary evidence

(1:21:44) Regression to the inscrutable and the preface paradox

(1:30:07) The tendency to quantify

Bio(un)ethical is a bioethics podcast written and edited by Leah Pierson and Sophie Gibert, with production support by Audiolift.co. Our music is written by Nina Khoury and performed by Social Skills. We are supported by a grant from Amplify Creative Grants.

Note: All transcripts are automatically generated using Descript and edited with Claude. They likely contain some errors; if you find them, please tell us. 

Leah: Hi, and welcome to Bio(un)ethical, the podcast where we question existing norms in medicine, science and public health. I'm Leah Pierson, a final year MD PhD candidate at Harvard Medical School.

Sophie: And I'm Sophie Gibert, a Bersoff fellow in the philosophy department at NYU, soon to be an assistant professor at the University of Pennsylvania.

Leah: Imagine three possible futures. One of peace. One where nuclear war kills 99% of humanity. And one where humanity goes extinct entirely. Which difference is greater? The gap between peace and near extinction or between near extinction and complete extinction? This thought experiment from philosopher Derek Parfit has become far more than an abstract question. Today, the allocation of billions of dollars as well as decisions about thousands of careers are being shaped by how we answer questions about humanity's long-term future and the risks to our survival.

Sophie: Today's episode examines existential risks, threats that could permanently destroy or drastically curtail humanity's future, and how we should reason about these risks under significant uncertainty.

We won't debate fundamental moral questions like whether creating additional happy lives is valuable, or whether the expected consequences of our actions should single-handedly drive what we decide to do.

Instead, we'll focus on a more basic question. What methods and evidence should we use when trying to evaluate humanity's long-term future? When we estimate how many people might exist centuries from now, should we use theoretical calculations about the universe's capacity, or should we rely more on conventional demographic projections? How can we meaningfully think about influencing events millennia into the future? In this episode, we'll focus specifically on the approaches longtermists within the effective altruism community take to estimating and evaluating existential risks. Our focus is on epistemic questions: questions about how we can know things, and what kinds of evidence we should rely on when making predictions.

If you don't know much about effective altruism, longtermism, and existential risk, we'll provide an overview now. But if you are familiar with these things, feel free to skip ahead to about minute nine for the start of the interview.

Leah: The effective altruism movement, which we'll refer to as EA, is a philosophical and social movement that aims to use evidence and reason to figure out how to benefit others as much as possible. Effective altruists, or EAs, take action on this basis, using the movement's ideas to inform their decisions about what charities to donate to, which jobs to take, what foods to eat and so on. EA is a young movement. The term effective altruism was only coined in 2011. But during its short lifespan, EA has had a lot of influence. Billions of dollars have been committed to EA causes and organizations. There are numerous centers, institutes, and grants focused on conducting research related to effective altruism. And there are many podcasts and publications that aim to explore and report on ideas related to effective altruism. We've benefited from these resources ourselves. We received an EA grant to start this podcast, and our research has been supported by EA funders, though we have editorial independence.

Sophie: Early on, EA was best known for promoting charities doing global health and development work. For instance, the charity evaluator GiveWell, which was founded in 2007, assesses and recommends charities that save or improve lives the most per dollar, like charities that prevent malaria or incentivize providing childhood vaccines. Over time though, a growing share of attention and funding within the EA community has shifted away from global health and development and toward causes related to long-term existential risks like those potentially posed by AI or nuclear war.

Leah: The intellectual foundations for this shift towards what gets called longtermism come partly from the thought experiment we referenced a few minutes ago. Specifically, Derek Parfit wrote, quote: "I believe that if we destroy mankind, as we now could, this outcome would be much worse than most people think. Compare three outcomes. One: peace. Two: a nuclear war that kills 99% of the world's existing population. Three: a nuclear war that kills 100%. Outcome two, nuclear war that kills 99% of the world's population, would be worse than one, peace. And outcome three, total human extinction, would be worse than two, nuclear war. Which is the greater of these two differences? Most people believe that the greater difference is between one and two. I believe that the difference between two and three is very much greater," end quote. In other words, Parfit thought that the difference between near extinction and complete extinction matters far more than the difference between peace and near extinction, implying that there is great value in preserving the potential existence of future people with lives worth living. This view and the broader normative and empirical work it has spawned have influenced and will likely continue to influence the allocation of billions of dollars and the attention of a large and influential community.

Sophie: In their widely cited 2021 paper, "The Case for Strong Longtermism," Oxford philosophers Will MacAskill and Hilary Greaves argue that impact on the far future is the most important feature of our actions today. They defend what they call axiological strong longtermism, roughly, the view that in the most important decisions we face today, the primary determinant of an action's value is its effects on the very long-term future.

Their argument has several key components. First, they contend that humanity's potential future is vast. Even if we only survive as long as the typical mammalian species, we have over 200,000 years ahead of us, and it could be billions of years before Earth becomes uninhabitable. Second, they argue that we can meaningfully affect this long-term future through actions like reducing existential risks or shaping the development of transformative technologies like artificial intelligence.

Finally, they show mathematically that even tiny reductions in existential risk, say decreasing the chance of human extinction by one millionth of one percentage point, would in expectation save far more lives than even the most effective near-term interventions, like distributing anti-malarial bed nets.

According to standard decision theory, this means it would be rational to aim for the reduction in existential risk over the near-term intervention. While they consider various technical objections, for instance, arguments against what are called fanatical decision theories, which tell you it's rational to accept arbitrarily large finite costs in exchange for infinitesimal chances of an infinite payoff, they argue that the basic case is robust. The paper provides much of the philosophical foundation for EA's growing focus on safeguarding humanity's long-term potential.

Leah: It's hard to say exactly how much this paper and philosophical arguments related to existential risk and longtermism more generally have motivated shifts in EA funding and attention.

As of 2022, most of the funding in the EA community was still directed to global health and well-being causes, and global poverty is still cited as a top priority for EAs in community surveys. But there have clearly been major shifts. For instance, between 2019 and 2022, Open Philanthropy, the major EA funder, increased its grant-making across global catastrophic risks by 700%. Since 2015, a growing share of EAs have prioritized historically longtermist causes, like AI risk, over near-termist ones, like global poverty. Today, most EAs rate a longtermist cause as their highest priority. In short, longtermist thinking appears to have shaped the EA community's priorities in significant ways.

Sophie: To help us understand the epistemic foundations of longtermist thinking and explore how different approaches to handling uncertainty might lead to different practical conclusions, we'll be speaking with David Thorstad. David is a professor of philosophy at Vanderbilt University.

His work explores the intersection of epistemology and ethics, with a particular focus on altruistic decision-making and technologies. Before starting at Vanderbilt, David completed a three-year postdoctoral fellowship at the Oxford Global Priorities Institute, a research institute connected to the effective altruism community. David also writes the excellent blog, Reflective Altruism, which aims to, quote, "use academic research to drive positive change within and around the effective altruism movement."

Leah: As always, you can access anything we reference in the episode notes or on our website, biounethical.com. And if you enjoy this episode, please consider subscribing, submitting a rating or review, or sharing it with a friend.

Sophie: Hi, David. Welcome to the podcast.

David: Thank you for having me.

Leah: So we think, and our sense is that a lot of effective altruists think, that you are one of the best critics of effective altruism, perhaps because you seem pretty sympathetic to many effective altruist ideas, engage deeply with effective altruism scholarship, and have a good grasp on not just the intellectual, but also the social, financial, and other relevant currents within the effective altruism community. How did you become one of the foremost effective altruism critics?

David: Well, first, thank you. That's really sweet of you. I think, honestly, at first, it was a pretty normal job for me. It was 2020, the pandemic was starting, job offers were getting yanked left and right. I got a great postdoc at Oxford with Hilary Greaves, who is unquestionably one of the best philosophers alive today. The terms of the job were great. And it was contractually obligated that half of my time would be on longtermism.

So it started both out of interest and out of contractual obligation. I did my job. I wrote papers about longtermism. I organized conferences about longtermism. I gave talks to effective altruists. I was in a building full of effective altruists, and I liked a lot of things. In particular, the short-termist work, work on poverty, work on global health, was great, really impressive to me. And I was less enthusiastic about the turn to longtermism. I saw that as moving from something very impactful and very supported by evidence to something less impactful and less supported by evidence. And I was really concerned to push back against that turn.

Sophie: You said that there were a lot of aspects of EA that you liked, especially the short-termist aspects. What aspects of EA today are you most and least sympathetic to?

David: Good. So what I'm most sympathetic to is the altruism. A lot of effective altruists give 10 percent, 50 percent, 90 percent of their income to charity, which is almost completely unprecedented. I know effective altruists in leadership positions who have given kidneys to total strangers. This is, you know, a very impressive degree of altruism.

The emphasis on evidence-based philanthropy, it's hard to overstate how impactful that was. Effective altruism came into philanthropy at a time when there wasn't a great evidence base for a lot of interventions, and a lot of money was being wasted, and it really pushed forward the tide of evidence-based philanthropy that has now taken hold across the world.

I liked that. And I liked the idea that you need to do substantive normative theorizing to do philanthropy. You know, the contrast class would be someone like Charity Navigator. They say, oh, we don't rank causes. But then what can you evaluate? You can evaluate overheads and staff costs and financial transparency. And the worry is, look, you're not really doing evidence-based philanthropy about the questions that matter. You actually have to talk about what matters. So that's what I like.

Some things I don't like: existential risk. I don't think it's very high. I don't think the interventions being funded are particularly impactful, and when they are, I think they often push us in the wrong direction. And more generally, some of the most speculative views, the time of perils hypothesis that, you know, if we just make it through this time, risk is going to fall by many orders of magnitude and stay there, ideas about tiling the universe with Dyson spheres around stars, optimally producing digital minds, sending out von Neumann probes, the very speculative stuff inherited from some of the transhumanist tradition, I think doesn't have the kind of evidential basis that I really liked very much in effective altruism.

Leah: So if effective altruism was just short-termist effective altruism, would you consider yourself an effective altruist?

David: Oh, I was with you until the very last sentence, maybe. I would certainly be very supportive of what they're trying to do, and I could and do donate to their causes, and probably wish I would donate more. As far as the movement goes, I've just never been a member of the effective altruism movement. And I don't know that in that case it would be anything about agreement or disagreement. It's just that I'm not a member.

Leah: Okay, but maybe you'd be, like, something in the realm of short-termist EA adjacent.

David: Honestly, I think the short-termist effective altruists do a lot of good work. I think it's underappreciated that a very large amount of effective altruist money still goes to that work, and that a very large number of effective altruists still give a lot of money to short-termism. And then, frankly, a lot of them give more money than I do, which kind of embarrasses me.

So when we talk about lifting people out of poverty, or preventing deaths from disease, especially among young children and disadvantaged groups in disadvantaged nations, that's very hard to argue with. And doing it at scale is impressive.

Leah: So as you mentioned, over the last decade or so, the EA community's priorities, financial resources, and attention have shifted somewhat towards causes and ideas associated with longtermism. Longtermism, understood as an axiological claim, or a claim about what states of affairs are best, says roughly that the best states of affairs are the ones that are good for people in the far future. Understood as a deontic claim, or a claim about what we should do, longtermism says, again roughly, that we ought to do the things that are expected to benefit people in the far future. From your perspective, what has motivated the shift to longtermism, and how significant or far-reaching has it been within the movement?

David: Sure. In terms of significance, I think it's been really far-reaching. I think broadly, there have been three waves of effective altruism. The first was the short-termist wave, very much focused on animals, global health, poverty, global development. Then there was a wave around 2020, 2021, 2022 to really shift the focus to longtermism.

And then there's been a new wave, 2024, 2025, to focus much more squarely on artificial intelligence, maybe to the degree that you're going to get arguments that we don't even need to be longtermists anymore to be worried about those risks. So we're somewhere between the longtermist focus and a shift to artificial intelligence, with an important holdover from the short-termist wave.

So that's, I think, how I see the sociology. As for why this happened, I think there are epistemic reasons and non-epistemic reasons. The epistemic reasons are exactly what Will MacAskill is going to tell you: look, the future is large. And by large, I mean very large. There could be not one planet, but billions or trillions of planets.

And we could be around not for a hundred years, but a million years or a billion years. And when you look at the number of sentient beings who could be around then, it gets to be pretty important to affect them, unless you think that benefits in the future should be discounted, and philosophers generally don't.

So I think that's the epistemic argument, and that's a decent argument. There's also an important sociological factor, namely that longtermism comes out of a history of transhumanism through things like the extropian movement with Nick Bostrom, through things like the rationalist community.

And the transhumanists have always been very concerned with technological futurism, with space development, with pushing the future of humanity. So it's not to say that these ideas don't have arguments behind them, but there is also an important sociological reason why these ideas exist in this community at this time.

Sophie: We take it that at least some of the courses of action that longtermists promote, like pandemic prevention, or as you mentioned, maybe some of the artificial intelligence projects, are ones that people who don't endorse longtermism might also promote. To what extent do the actual policies or practices advocated by effective altruists depend on the truth of longtermism?

David: That's important to note. It's definitely not the case that everything longtermists do is net harmful or even not worth doing. So you make a great case: pandemic prevention. Effective altruists were one of the few groups before COVID sounding the alarm, reminding people how bad the flu was in the 1910s and reminding us that it could happen again.

And it did happen again. And now already the world has become complacent. We did not prepare again for the next pandemic and effective altruists are still warning us that we need to do that. I don't have to tell you folks that the effective altruists are right about the importance of pandemic prevention.

We should distinguish the question of whether these actions are beneficial from whether they're best, which is what longtermists care about. And I think most longtermists tend to think that the reason you should be preventing pandemics is not because a million or 10 million people might die, but the entire future of humanity and maybe 10 to the 30th or 10 to the 40th sentient beings might die.

And when you push back against those existential risk claims, which I have, I think the value tends to drop to the degree that, say, preventing malaria might be more valuable. It's also important to stress that some of the interventions have been net harmful. So I think, for example, a lot of early discussions on internet forums about AI safety have really infiltrated, you know, the academic and the public discourse about the ethics of artificial intelligence.

And I think often this has made it harder to push regulations for, you know, more immediate threats, and it's made it harder to coalesce as a field around the issues that I think exist and matter. So definitely some actions are good when they're good; they're not always good. But there are definitely some actions which I wish would just stop.

Leah: Can you say more about specifically which actions you think have been harmful?

David: Sure. So I think that the focus on existential risk from artificial intelligence has been a mistake. I think that the extent to which governments have been lobbied to prevent existential risk from artificial intelligence has done two bad things. Number one is it's just produced a backlash against all AI regulation.

A lot of things like the bill in California just failed. And we got nothing, so very immediate issues like deepfake pornography, like fraud using artificial intelligence, like militarization of artificial intelligence didn't get regulated the way they should. And in the academic community, it's really been hard to get the focus back to the core methods and the core issues that we wanted to talk about around artificial intelligence, because there's just this massive influx of philanthropic money and media attention on a scale that's very hard for universities to compete with.

Sophie: I see. So just to make sure I understand the first part, are you saying that the move to work on artificial intelligence and existential risk has caused backlash from the public that has then prevented other AI-related policies or regulations from going through?

David: Yes, yes. Yes. I think that's thought by a lot of people, and there was backlash there. There was also corporate backlash. So if you think about OpenAI, effective altruists tried to fire Sam Altman. And what was the backlash? Well, people were going to lose their very lucrative stock options. And then people in Silicon Valley who had previously been very supportive of effective altruism got a little bit worried about effective altruism, because it was going after people's pocketbooks. So sometimes these actions can really make it harder to push forward more consensus policy, because there's such a stark disagreement over the edge case policies.

Leah: Interesting. Okay. I mean, I'm so deeply out of my wheelhouse in talking about this, but one observation I've had is that it seems like there is ample support for and attention being paid to other risks posed by AI that are not existential risks. And part of the reason I think that is because like, if you just look at, for example, jobs on PhilJobs, the number of jobs emerging in AI ethics that are not explicitly about existential risks, the number of grants that are being offered by institutions around this, it seems like there's actually a decent amount of attention being paid to, for example, issues around bias, issues around mis- and disinformation, deep fakes, all these different things. And so I maybe remain somewhat agnostic about this claim as to whether the attention and the emphasis on X-risks has actually led to backlash in these other domains such that it's actively undermined efforts to improve policy in these spheres. But you seem to feel more confident about that.

David: I thought you were going another way with that question. So I thought what you were going to say is there's room for both. That there's a lot of work being done on things that are not existential risk and there's a lot of work being done on things that are existential risk and maybe there's room for both.

Leah: I think that too.

David: But then you wanted to express skepticism about backlash, and I think maybe that claim needs more work than just the fact that there's work on both, because the fact that there's work on two things does not settle whether these two things are complementary, whether they're in tension, or whether they're neutral.

And I think it's definitely true that there's a lot of work on artificial intelligence that's not focused on existential risk. That's how I got hired at Vanderbilt. But I think there's definitely concern among many or most academics working on artificial intelligence that the discourse around existential risk has been a little bit too much and a little bit more invasive than we would like it to be.

Leah: Okay. So it seems like there are three general categories of challenges to longtermism. First, challenges to the moral assumptions baked into longtermism. For example, challenges to the idea that it's valuable to add happy people to the world as opposed to just improving the well-being of people who do or will exist.

And I'll just flag that we're basically going to bracket those challenges in this episode. The second is challenges related to object-level estimates on which longtermism is based, for instance, the estimate that existential risk is high. Third, challenges related to the epistemic or methodological assumptions that go into rendering object-level estimates.

For instance, the idea that when we're estimating the value of existential risk mitigation, we should focus on the theoretical population carrying capacity of the universe rather than the population projections rendered by demographers. It seems to us like you're often challenging longtermists from this third perspective and indirectly from the second one. Does this framework seem right to you and would you characterize your work in this way?

David: Yeah, thank you. I think that's an excellent way to characterize things. It's, I think, often assumed, and definitely when I came to Oxford in 2020 it was thought, that the only way off of the longtermism train, given the astronomical size of the future, was to reject consequentialism or totalism or one or another normative view that I kind of like. And I find this concerning because I am a consequentialist, but I'm not a longtermist.

So one way to think of me would be as something like a fairly orthodox consequentialist who doesn't love longtermism. I do want to stress that in addition to the third route of resistance, the second one you mentioned, pushing back against levels of existential risk, is something I also want to do a lot of. It's just that I mostly do it on my blog and not in the academic literature, and frankly, the reason I do that is because there aren't credible academic estimates of existential risk.

So a lot of my publication decisions are driven by what will get past the editor desk. And I've written a lot about climate risk, about pandemic risk, about AI risk, but I'm not in a position to write these things in academic journals, not because I don't have the stuff, but because the people I'm responding to have never put the arguments in a way that would meet the bar for an academic response.

Leah: Okay, got it. Yeah, we want to get into some of that discussion later on. One of the objections to longtermism that appears in two of your recent academic articles is that a lot of EAs significantly overestimate the value of mitigating existential risks, risks of events that could destroy or drastically curtail humanity's potential future and that they do this because of how they think about risk. Before getting into the details of your view, we first want to get clear on one thing, which is that there are existential risks aside from extinction risks. Because the term existential risk also refers to, quote, other ways that humanity's entire potential could be permanently lost, end quote. For instance, Greaves and MacAskill talk about concerns related to the development of artificial superintelligence by an authoritarian government that wanted to entrench its ideology. When you write about existential risk, are you generally focusing on extinction risk and not other risks that could lead to the permanent loss of humanity's entire potential? And if so, why?

David: Yes, you're quite right. So I always say, in like section two of the paper, that I'm going to focus just on extinction risk, and thank you for making this distinction. I think this is one of many terminological issues that should be policed more, because people often don't get it right. I focus on extinction for two reasons.

The first is just going where the action's at. I think a lot of the interventions that effective altruists are actually funding, and a lot of the concern, is actually directed at extinction risk. So it's quite right that we could be having a debate about risk of astronomical suffering or stable totalitarianism, but that's often not the practical debate that I want to affect.

And the second thing is that matters just get a lot messier. If you look at my models, they're pretty nice models. And one of the reasons they're nice is because it's very clear what happens when extinction happens. There's nothing else. But if extinction is coupled with one of a family of various catastrophes that could be bad in various ways for various durations of various probabilities, my models are gonna explode. And so I just don't think that a lot of my conclusions are gonna generalize. They're gonna be so messy I don't know what to say about them anymore.

Sophie: I see, so it's not that you think they're going to generalize but the math is so complicated you don't want to put it in this paper. It's more that you're not sure how it's going to generalize when it gets that complicated.

David: Yeah, okay, so here's one point that does generalize. If you think risk of extinction is high right now, then the value of pretty much everything long-term goes down a lot, and that'll be true not only for existential risk mitigation, including extinction risk and non-extinction risk, but also for things like passing on good values to humanity. Everything will go down.

So the point about high risk leading to low reward should transfer. All the other stuff about cumulative risks and background risks, there's probably a way of reframing it, and it might be interesting to think about the reframing. I'm not immediately sure how well it's going to work. Probably the population dynamics stuff would still work pretty well.

Leah: Hmm. Mm-hmm. Mm-hmm. Okay. So tell me if this is a correct synopsis: even if you were right about everything you say about extinction risks, someone could still get longtermism off the ground, potentially, by making a compelling case that these other kinds of existential risks still go through, and that we still have the ability to make progress on them.

David: I think so. Yeah, if you have both the view that extinction risk isn't too high and the view that non-extinction risks are fairly high and fairly severe, you should be able to get traction on that. I would need to hear more about the view, but I don't think there should be an armchair mathematical case against that. I think that should be worth thinking about. Yeah.

Leah: Okay. So one reason you think that EAs tend to overestimate the value of existential risk mitigation is that they overestimate the size of future human population due to focusing on the carrying capacity of each region humans might exist in. What are some important differences between how EAs like Hilary Greaves and Will MacAskill estimate future population size and how population demographers do?

David: Sure. So if you read Nick Bostrom, Greaves, MacAskill, if you read the Newberry report that many of them are citing, you're going to get an estimation method like this. They'll say, take a region we think humanity could live in, like Earth, like the solar system, and multiply two quantities: the amount of time you think we could be there, a million years, a billion years, whatever, and the number of people that this region could feasibly carry.

You know, at the technological limit, given enough time. And they say, well, if it could feasibly carry this many people for this much time, then clearly it will. And so that's our population. This wasn't always a bad way of estimating populations. Until a couple of centuries ago, human populations were Malthusian: what constrained the size of the population was land, food, resources.

We would quickly reproduce until we had used up almost all the land and resources. It was kind of horrible. Actually, it was almost guaranteeing we would be barely above a subsistence level of existence. And so if you just took the amount of land, feasible resources, the carrying capacity, that was a good estimate of population.

And then you multiply that by duration. That was great. That is totally out the window. I think no demographer thinks right now that we're in a Malthusian regime. In particular, there's a strong negative correlation worldwide between how wealthy a country is and its fertility rate. Having more resources gives you fewer children, not more children, and this is a fact of the present. Children now are the focus of an explicit decision, not a happy byproduct, and having more children is not really on the table, or at least is not immediately on the table, for a lot of well-off families.

Population size depends not on how many children people can have, but on how many children people would like to have. And precisely because of that, the question of carrying capacity doesn't really get that much traction. And not only does it not get traction, we can end up well below carrying capacity. So, in most of the wealthiest countries, fertility is well below the level needed to replace the current population.

Just over two children per adult human female. It's, I think, 1.6 in the US; in Japan and South Korea, we're talking about 1.2, 1.1. And so, of course, the worst scenario, one that I explored in some of my work drawing on work out of the Population Wellbeing Initiative, is that a future human population even of 8 billion is fairly optimistic.

But the more general point is that you just can't learn that much about the size of the human population from asking, if we wanted to fill the space, how many people could we fill it with? Because we're not on a mission to fill this space with as many people as we can have. That's just not how having children currently works.

Leah: Yeah, got it. So it seems like EAs are often thinking about the value of the far future in expected value terms. And it might make sense to consider scenarios where there are 10 to the 16 future people even if these scenarios are very unlikely. Do you think that longtermism still gets off the ground even if we accept demographers' estimates, simply because there will be a long tail of possibilities where there are massive numbers of future people?

David: Okay, so a couple of things to say. The first is they are not saying 10 to the 16. So Bostrom says a conservative scenario of an earthbound population is 10 to the 24, and that gives a lot more people. Greaves and MacAskill say they think any reasonable estimate needs to be at least 10 to the 24. If you read other papers, you're going to get estimates like 10 to the 40, 10 to the 15.

So talking about 10 to the 16 could be chopping 5, 10, 20 orders of magnitude off the case for longtermism. I think to contextualize that, let me use the word I'm going to use in my book, namely a strategy of shedding zeros. So, longtermists say, look, the axiological case for longtermism is 10 orders of magnitude or 15 orders of magnitude better than the case for competing short-termist interventions.

So, therefore, unless you are radically non-consequentialist, longtermism is going to win at that level. And I want to chip away a lot of zeros in those value estimates, and then maybe do some other deontic things too. And so if the longtermist is just in one swoop gonna hand me five or ten or twenty zeros, I think there's two things to say. The first is they might run out of zeros just there.

5, 10, 20 orders of magnitude is a lot. But the second is this isn't the only time I'm gonna ask them for some orders of magnitude back. And this thing that they do, which is correct, is they point at every single argument I make and they say, I can afford to pay that cost and that cost and that cost. But the question is whether they can afford to pay them all together, and I think, at least that's the line of argument in my book, that if we're really tossing orders of magnitude around that freely, we're probably going to run out of orders of magnitude quite quickly.

Leah: Got it. Okay. And I just want to follow up on the last thing you said. So has that been the response of the people who are writing on these issues? Like, do they read your work and say, yeah, I can see that?

David: Well, sometimes it's concessive, sometimes it's not. But almost always somebody raises their hand and says, David, couldn't I believe that and still be a longtermist? So I had to rewrite some of the demographics section in my paper. They said, look, aren't you uncertain about demographics?

Maybe there's a one in 10 to the 8th probability I'm right about demographics, so maybe I lose eight orders of magnitude, and the response there is, okay, maybe you do. And then they'll say about the time of perils, maybe there's a one in 10 to the 9th chance I'm right about the time of perils, maybe I lose nine orders of magnitude, and okay, you do.

Obviously, we have a disagreement about how many orders of magnitude are lost each time, but I think it's a response I see in isolation every time I give a paper, and I'd like people to see it as a response that works in isolation, but can't just keep being repeated.

Leah: Hmm.

Sophie: Okay, got it. Yeah, so let's get into some of those other issues. So another problem you point to is that longtermists tend to focus on cumulative risks rather than period risks. Can you walk us through this distinction and explain why it makes more sense to you to focus on period risks than cumulative risks?

David: Yeah, so here's the mathematics that was scaring everybody, and this is why no consequentialist thought they could get off the boat. Nick Bostrom says, let's be really conservative here. Assume there's a billion people on earth, a stable population we're going to reach that's not too big. And assume we'll make it for, I think the estimate was, a billion years. Yeah. A billion people, a billion years. He says, look, if you multiply carrying capacity times duration, 10 to the 9th times 10 to the 9th is 10 to the 18th years of life, or about 10 to the 16th normal human lifespans. So then he says, wait a second, that means if I take just a millionth of 1 percent off of risk, that's all?

Well, a millionth of 1 percent is 10 to the minus 8th off of risk. With 10 to the 16th people in expectation, a 10 to the minus 8th increase in the chance of getting them should in expectation get you a hundred million lives. So he says, even under really conservative assumptions, if I can shave a millionth of 1 percent, that's all, off of risk, that's as good as saving a hundred million lives. So why are you working on malaria?

Yeah. And if those numbers are right, he's of course right. So we need to ask what's wrong with the numbers. And what's wrong with the numbers is we can talk about risk in two ways. We can talk about risk during a fixed period of time, risk during a year, risk during a decade, risk during a century, or risk over the entire lifespan of humanity.

So Nick Bostrom, when he talks about pulling a millionth of 1 percent off of risk, means increasing by a millionth of 1 percent the chance that we make it for a billion years. But the problem is, if you ask what that means in risk-per-year or risk-per-century terms, it's not a millionth of 1 percent change.

It's a very large change. In particular, if you crunch the numbers, at a minimum you would have to think that average risk per century would drop down to about 1 in a million, and a lot of effective altruists think it's over 1 in 10. So on this story, the right way of putting it would not be, you know, Bostrom's cumulative framing, where you only have to shave a millionth of 1 percent off of risk.

It would be that you have to shave five orders of magnitude off of risk in every century without fail. And of course, we've both said true things. We've just talked about risk in different ways. But I think Bostrom puts the point in a way that makes it look very small and tractable.

And my reading puts the point in a way that reveals this is actually meant to be a very large and potentially intractable change.
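
The two ways of talking about risk that David describes can be checked with a few lines of arithmetic. The sketch below uses the figures quoted in the conversation (a billion people for a billion years, a one-millionth-of-one-percent cumulative risk reduction); it is an illustration of the bookkeeping, not a reconstruction of the models in his paper:

```python
# Bostrom-style expected value of a tiny cumulative risk reduction,
# using the conservative figures quoted above.
population = 1e9        # a stable population of a billion people
duration_years = 1e9    # surviving for a billion years
lifespan_years = 100    # roughly one normal human lifespan

future_lives = population * duration_years / lifespan_years   # ~1e16 lifespans
delta_cumulative_risk = 1e-8                                   # a millionth of 1%
print(f"{delta_cumulative_risk * future_lives:.0e} expected lives")  # 1e+08

# The same horizon restated as period (per-century) risk:
centuries = duration_years / 100                               # 1e7 centuries
for per_century_risk in (0.2, 1e-5, 1e-6):
    survival = (1 - per_century_risk) ** centuries
    print(f"risk/century {per_century_risk:g}: P(survive all centuries) = {survival:.2e}")

# At 20% risk per century, the probability of surviving the full horizon is
# indistinguishable from zero. Raising that probability by even 1e-8 therefore
# requires pushing per-century risk down to roughly one in a million and keeping
# it there: the "five orders of magnitude, every century, without fail" reading
# of the same intervention.
```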

Sophie: Got it. So would this be a correct summary that, he's making a sort of conditional claim, if we could shave this much risk off, then this would be incredibly good. And you're saying, yes, but notice that shaving that much cumulative risk off would actually be shaving off like a huge amount of period risk.

David: That's absolutely right. But then in practice, and you're going to see this in the next section of that paper, a lot of people flip between claims about cumulative risk and claims about period risk. So when they say how good it would be to reduce risk, they use the cumulative risk, you know, reducing risk over a billion years.

But then when they say look, here's how tractable it could be. They say, can't we shave 1 percent off of risk in our own century? And so we need to, if we're going to make claims about value in cumulative terms, make claims about tractability in cumulative terms, and that's going to be really hard, because it's really hard to tell people in a billion years what to do.

Leah: So a third mistake you point to has to do with the impact of background risks. You generalize this point in your article, "High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation." There you point to a general puzzle that confronts longtermists. A lot of EAs are pessimistic about existential risk. They think that every century the chances of an existential catastrophe happening are pretty high, 15 or 20 percent. And some EAs also think that efforts to mitigate existential risk are astronomically valuable, as we've discussed, vastly more valuable than things like poverty reduction or global health work. You argue that on almost any plausible model of the expected value of existential risk mitigation, pessimism about risk cuts against the value of preventing existential risk. Can you walk us through that argument?

David: Sure, I think maybe an analogy would help. I am 34 years old. Imagine I get really sick, I get cancer. The doctor says, look, David, you've got a 50-50 chance of living if I give you the treatment, but it's really not going to be very much fun. I've got an alternative for you, hospice, and you'll have a couple of years.

I would say, Doctor, hospice? What are you talking about? I'll take the 50-50 chance, and the reason I would take it is because if I make it through, I have a long, happy life ahead of me. But then imagine I'm 95 and the doctor says, look, you've got a 50-50 chance of making it through, but it's going to be two years of suffering.

Or you can go to hospice. Now the decision looks a little different, because even if I just make it through these dangerous times, I've got dangerous times ahead in a year or two. And so the idea is that the less attractive or shorter the future you have to look forward to, the less enthusiastic you should be about throwing all your resources into preserving that future, and the more enthusiastic you should be about just living life well while you have it.

So the thought is, if you take effective altruism seriously in thinking that right now, in this century, there's a 15 or 20 percent risk of human extinction, and you were to say the same next century and the next century and the next century, then you think humanity is a little more like, at the least, an 85-year-old.

And then it just becomes relatively more attractive to do what we can for say the global poor now and relatively less attractive to improve the fate of humanity in a million years because on this view, we're probably not going to have a million years.

Sophie: In the paper, you go through various ways in which we could try to reconcile existential risk pessimism with the claim that efforts to mitigate existential risk have astronomical value. The most promising route is to adopt a rather conservative approach, a strong version of what you call the time of perils hypothesis, the idea that you've already mentioned that we are in a perilous period right now, a period in which existential risk is very high.

But if we survive this period, we are likely to then be in a period of very, very low existential risk. Can you illustrate why strong forms of the time of perils hypothesis would get someone out of the conundrum that you presented?

David: Yeah. So the tension between risk pessimism, thinking that risk is high, and what I call the astronomical value thesis, thinking it's very important to reduce existential risk, that tension can never be made to go away. It can only be mitigated. So even, you know, the time of perils is only going to mitigate it.

The only way out of the tension is to more or less drop one of the theses. So the time of perils hypothesis says I'm pessimistic now. Now is a really dangerous time. Next century is a really dangerous time. But if we just make it through these next couple centuries, risk is going to drop. And by drop, I mean four or five orders of magnitude drop, and it's going to stay there for a very long time.

No blips for, you know, a very long time. And this will, of course, mostly get rid of the problem, because you've mostly gotten rid of pessimism. But precisely because it'll get rid of the problem, I think it's a very challenging view to hold. The first reason it's challenging is just that it's quite a strong claim.

You're claiming very soon we're going to drop risk by four or five orders of magnitude. You're claiming we're never going to have any regressions where new technologies or new developments bring us back up to risky time. And the second reason I think it's implausible has to do with the argument given for it.

So when you ask people why risk is high now, they'll follow Toby Ord, they'll say technology drives risk. Humanity finally has very powerful technology, technology is going up and up and up almost hyperbolically. They got Our World in Data to say that with a graph. And so you would think, okay, risk is going up and up and up hyperbolically is the story, and that's why we're supposed to be in peril now.

But then they need risk to go down and down and down really fast in the future, despite technology presumably tending to go up and up and up. And if that wasn't already hard enough without, you know, technology pushing in the other direction, they need a very sharp permanent drop, despite the phenomenon that they're pointing to going exactly the other way.

And you just need a very strong argument to explain how this is going to happen.

Leah: Yeah. Okay. And so just to make sure that we understand, the idea is basically that it's not that unintuitive to think that we're living in a risky time because we have all these rapid technological changes, these could be destabilizing in various ways. But in order for this to provide a justification for prioritizing existential risk mitigation, you need to believe that not only is risk going to go down, but it's going to go down a lot, like by orders of magnitude.

So it's not just that existential risk this century is 15 percent but it's going to go down to 5 percent next century. You need it to go down to, like, a tiny fraction of 1 percent.

David: Yes. And it has to happen quickly. You need it to happen, I think, in most of our models, within about 10 or 20 centuries. So none of this, you know, it's going to happen in a million years; that's not going to cut it.

Because the problem is, if you have the worry that we're probably not going to make it through the perilous time, if you say, "Oh, we've got this perilous time, 20 percent risk per century, but that's okay, we only have to face this risk 27 times," well, raise 0.8 to the 27th power and you get a small number.

So that's not a great idea. I mean, of course, some people do make it through. Some people make it to 110, 115. But every time you have to roll these dice, the value of taking a long shot on the future is looking worse just because the shot at the future is looking longer and longer by about 20 percent multiplicatively every century.
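
For concreteness, the compounding David gestures at works out as follows (this just restates his arithmetic):

```python
# Probability of surviving a "time of perils" of 27 centuries
# at 20% extinction risk per century, as in David's example.
per_century_survival = 0.8
centuries_of_peril = 27
print(per_century_survival ** centuries_of_peril)  # ~0.0024, i.e. about a 0.2% chance
```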

Sophie: Okay, so one common argument in favor of the time of perils hypothesis within effective altruism is that if we manage to survive the development of artificial general intelligence, if it doesn't kill us and doesn't wind up dramatically misaligned with human values, then we'll be able to use this huge powerful tool to prevent future existential risks. When asked about this line of argument, you have sometimes resisted engaging with it on grounds that good academics don't engage in that kind of speculation. So, we're not gonna ask you to speculate about it, but we do want to hear more about this general attitude toward speculation. So, how do you decide when to speculate about something like this, lending legitimacy to the inquiry, and when do you refrain?

David: Yeah. Okay, so I think there's a general view about forecasting in most formal models that if you try to predict something, your prediction is driven by three things. It's driven by the truth, you know, you've got a function of truth tracking. It's driven by random noise, kind of a normally distributed, mean zero noisy signal, which is noisier and noisier the harder it is to forecast, and it's driven by systematic biases of you or of your community.

And the thought is, if I make a forecast, there's some strength of truth signal, some strength of noisiness due to like difficulty of forecasting and some strength of systematic bias due to my situation as a particular kind of person, a particular kind of community with a particular kind of beliefs.

And if there's a pretty low truth signal, and a very high difficulty of forecasting, and a pretty reasonable accusation of systematic bias, because communities shape my opinions and effective altruist opinions about the future, then as soon as I make a forecast, I just think, knowing nothing else, that almost everything in the forecast is noise.

And so if that's right, the only reaction that would be rational to my forecast would be to almost completely discount it. But I'm quite confident that's not what's going to happen. I think that people would take me as speaking from a place of reasons and evidence and of reliability and would update on my forecast, which is precisely the wrong thing to do.

So instead, I really want to encourage people not to take forecasts any more seriously than they should be taken, and if that means not forecasting when there's not much value to forecasting, that's, I think, the right attitude now.
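
Below is a minimal toy illustration of the signal-noise-bias picture David describes. The functional form and all of the weights are assumptions chosen for illustration, not his formal model:

```python
import random

# A toy version of the forecast picture David sketches: a reported forecast is
# a weak truth signal, plus zero-mean noise, plus a shared community bias.
# Every numerical weight below is a made-up illustrative value.
random.seed(0)

def forecast(truth, truth_weight=0.1, noise_sd=1.0, community_bias=0.5):
    """One forecaster's report about `truth`."""
    return truth_weight * truth + random.gauss(0.0, noise_sd) + community_bias

truth = 1.0
reports = [forecast(truth) for _ in range(10_000)]
mean_report = sum(reports) / len(reports)
print(f"truth = {truth}, average report = {mean_report:.2f}")

# With weights like these, any single report is dominated by noise, and even the
# average of many reports (~0.6 here) is dominated by the shared bias (0.5)
# rather than the truth component (0.1). That is the sense in which a listener
# who updates strongly on such forecasts is mostly updating on noise and bias.
```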

Leah: Okay. I guess I don't know if this is a clarificatory question or a follow-up, but it might seem then that actually engaging, and trying to counteract the bias that you think might be baked into some of these estimates due to people all being part of the same community and sharing the same concerns and the same hype, is sort of course-correcting to some extent. And that might be an argument for engaging, even if you think that the entire enterprise they're embarking on doesn't have a lot of merit.

David: That's right, and that's what I do, and that's why I do that. But I need to be very targeted with where I engage. So I wrote about the Singularity Hypothesis because there is a serious academic literature on it; very good philosophers, Nick Bostrom, Dave Chalmers, have written defenses of it. But you'll notice what I did not do. I did not respond to Eliezer Yudkowsky. I don't read that sort of stuff, I don't write responses to that sort of stuff, and I don't think it's appropriate.

I did not respond to forum posts; I don't think that's appropriate. And I got a lot of flak from management for not doing that. And I said explicitly, I won't do this, because I don't think this is up to standards. And so what I've said about this argument, namely that artificial general intelligence gets us the time of perils, is that if you look where it's been written up, nobody's written it in a paper, nobody's given an extensive discussion of it.

The longest written discussion I think I can find is a forum comment by a famous effective altruist in one or two paragraphs specifying the thesis. So if the question is, why am I not engaging with the thesis, it's not because I wouldn't engage with it if the argument was there. It's because I don't have enough to grab on to and engage with and that's where I think I wouldn't really have any course to correct.

I would just be fighting speculation with speculation. And, you know, what I want to tell these folks is that if you don't have a substantive argument, that speculation should not be done.

Leah: Mm hmm.

Sophie: So the unwillingness to speculate seems liable to lead to substantively different views, because speculation is to some extent baked into longtermism, which in many ways gets off the ground via the possibility, however remote, of very valuable possible futures.

Leah: For instance, in discussing the possible value of digital sentience, Greaves and MacAskill write, quote, "One might feel skeptical about these scenarios, but given that there are no known scientific obstacles to them, it would be overconfident to be certain, or near certain, that space settlement or digital sentience will not occur."

End quote. Is your unwillingness to speculate just a methodological commitment, or is it deeper, say, an endorsement of a non-fanatical decision theory?

David: Oh no, it's not an endorsement of non-fanaticism. I think, like everybody, I have some worries about fanaticism, but I also don't have a non-fanatical decision theory I like. So I've never before in print pushed on fanaticism. I'm going to mention it in my book, but with the caveat that I'm not sure I want to go there. It's very much the concern that I don't like where the speculation is going to take us.

So if we were to engage in speculation with the understanding that our true signal is very small relative to the noise signal and the bias signal, and that we should probably not update on it very much, I would be a little more sympathetic. But when I see communities shifting first to one view, then to another, about what's gonna kill us all, in orders-of-magnitude shifts, and they're all shifting together, I think very much they're underestimating noise, and they're systematically talking each other into positions. And so, you know, systematic worries about everybody coming from the same positions, on the same Internet forums, talking to each other, are going to start to kick in, and then forecasting worries.

How well can you really predict all this? That's going to start to kick in. And I'm starting to see the kinds of behaviors and belief changes that I would predict if people were updating far too much on speculation. And so I tend to think that speculation is not serving people very well here.

Sophie: You mentioned that it would be responsible to recognize the weakness of these speculations and so not update on them. So I guess I was wondering, to what extent is your disagreement with these folks about what our priors should be? Because a lot of the time, it sounds like what's going on here is that the burden of proof is in a different place.

So when they say something like "there are no known scientific obstacles to these theories," that kind of sounds like we should have priors that they are likely to some extent to be true, and then if there's no evidence against them, we should just leave our prior there. So, yeah, do you have thoughts about that?

David: All of it, all of the disagreement is about priors. And that's why it's so hard, because by hypothesis these are things that are hard to agree about. When I sit down and talk with effective altruists, within an hour we usually get to a place where I think, and they think, we don't have much evidence in support of claims like the time of perils hypothesis, or existential risk from artificial intelligence. So, you know, whatever our priors were, updating on limited evidence is probably going to leave us pretty close to them. And so I say, look, it doesn't matter whether it's the claim that risk is going to drop by many orders of magnitude and stay there, the claim that expansionist aliens control 40 percent of the universe, or the claim that robots are going to kill us.

These are all really crazy. And they say no, they're, like, plausible. I say they're crazy and they say they're plausible. And then we start asking each other for reasons, but we've gotten to a point where reasons run out. And I don't know what to do, because we all agree on what's happened here, and we disagree on what to do about it. And that's where we are.

Sophie: Yeah, okay, well, let's table that a little bit because we want to talk about it more when we talk about what kinds of claims are extraordinary or call out for explanation. But for now, you have a paper called "Against the Singularity Hypothesis," which you've mentioned, in which you consider the claim that artificial agents might gain the ability to improve their own intelligence, allowing them to increase their intelligence at a rapidly accelerating rate and wind up orders of magnitude more intelligent than the typical human being.

You have said that you had reservations about engaging in this line of inquiry at all, but that you don't regret writing that paper. How did you think through whether to write that paper and why are you glad that you did?

David: I think that's right. Honestly, the driving consideration in writing that paper is that it was the only paper about artificial intelligence and existential risk that I could write, because it was the only one that would be responding to a body of credible academic work at the time. There have been some good papers on instrumental convergence and power seeking since then.

And that's why I was able to write a paper on that. But I wrote the singularity paper, even as singularity worries were coming out of focus in effective altruism, purely because it was the only paper where there was enough there, I think, to write. In terms of degree of speculation, I think first it helped that I had really weighty interlocutors to engage with.

So I could look through the Chalmers, I could look through the Bostrom really in detail and at least make the negative point that there wasn't that much argument there. And I could make some positive points. For example, many presentations of the singularity were positing constant or even increasing returns regimes; you know, Ray Kurzweil just says there's this law of accelerating returns where almost everything in the universe shows accelerating returns.

And this is not the way that mainstream social scientists think about the returns to things like investment in computer science or in research. So there were, I think, some evidentially backed points I could make. But at the same time, there's the worry: was I engaging in too much speculation?

I have that worry. A lot of editors had it, a lot of readers had it, and a lot of people looking at my tenure case are going to have it.

Leah: Okay, yeah. I mean, professional philosophers outside of EA circles tend to be very skeptical about the kinds of debates EAs have about AI and existential risk.

Do you think that the philosophy community is on the right track in how they feel about this, or are they too resistant?

David: Yeah, so let me give listeners some context. I think maybe philosophers might know this and other listeners might not. My latest paper, on artificial intelligence and instrumental convergence and power seeking, has gone down at two straight journals. The editors killed it both times. I won't give you the name of the journal, but I'll read you what the editor said about this paper.

This is, I'm sorry, a little mean, but I think people need to know what is being said behind closed doors. The editor said, "I think there's a way of spinning this paper as arguing that work on power seeking theorems (I use the word pejoratively here) is not sufficiently scientific because many of the key assumptions are insufficiently specified for serious reasoning. That's fine so far as it goes, but it doesn't mean it's a paper in philosophy. The arguments critically evaluated in the paper are all just over the place" (excuse me, I didn't write this, this is the editor) "verging from silly napkin math to speculative metaphysics to formal explorations of reinforcement learning agents. A small minority of that could be considered philosophy of science, but the rest of it, in my opinion, is computer science verging into bad philosophy of mind and futurism. The targets of this criticism definitely want to pretend they're doing science. I worry that publishing a critical takedown of these arguments could lend legitimacy to this enterprise."

This is something I hear all the time, and I hear it from people who I respect and who are going to have influence on my career. That is, they see what I'm doing as debunking conspiracy theories, and they see this as valuable for the world. You can target, you know, existential risk claims.

You can target QAnon, you can target claims that vaccines cause autism, but they think it's punching down, it's too easy, and it's not the kind of thing an academic does to win tenure. If I were to spend my time refuting QAnon, they would say, thank you, that was a great service for the world, but why don't you do it somewhere else? And similarly, when I spend time telling my colleagues that robots aren't going to kill us all, they're thankful. They always express gratitude. This editor expressed gratitude elsewhere that I was doing this, but said, look, this just isn't what we do here. And their stance is a stance that I agree with: that this is not very credible work, that it's not currently advancing the discussion, and that there's not enough there there to be tractable with serious tools.

And I know this is really tough stuff to say, and that these are fighting words, but this is what I believe, and this is what most of my colleagues believe, and this is where we're divided.

Leah: Yeah, I mean, I guess let's just grant, for the purposes of argument, that everything the editor said about the merit of these claims is true, and that they are akin to conspiracy theories. I guess one reaction I have is that that actually seems like exactly the kind of thing that philosophers should be doing.

I mean, I'm biased, because this is part of the reason we have this very podcast: we want to question existing norms in medicine, science, and public health, right? To the extent that people are getting things wrong in how they're allocating billions of dollars, and that philosophers can actually weigh in on these live public debates that have real implications for how we do things in the world, it seems critically important for philosophers to be ensuring that we're getting these questions right. And so the idea that this isn't rigorous enough in some theoretical sense, or that it's not flashy and precise or something, when it actually is responding to ideas that are shaping how our world goes, I don't know.

I feel like I want philosophers to be working on that. But you guys are both philosophers, maybe you disagree.

David: No, I mean, that's why I'm here. That's why I have a blog. I think it's really important to do this work. It's just, I think, important to distinguish, first, should it be done from should it be done in our journals? So, you know, the journals say, look, you're responding to a bunch of internet forum posts and a couple of articles. We don't respond to internet forum posts in journals. If they come to the journals, you can respond to them, but until they do, we won't. And I agree with that. But there's also just the question of what we want to reward as a profession. And I think very often we see philosophers doing things like philosophy of prisons, like public philosophy, as important, but we see the research work that we do at a research university, which is the job I was hired to do, as very much about producing a kind of research that isn't being done here. And so I think they're very glad that I'm using my skills to engage in publicly relevant work. But we can't just have philosophy departments being about doing publicly relevant work, because that's not our game, that's never the game that we'll win at, and then we would start losing at the thing that we're really good at, namely doing the best philosophy there is.

Leah: Yeah, I mean, there are various serious philosophers who seem to buy some of these ideas. We've been talking about a number of them in this interview. And so I'm just wondering how you reconcile this editor saying, this is not serious stuff, this is conspiracy theories, with a bunch of philosophers at Oxford and NYU and some of the top philosophical institutions in the world saying, no, actually, we should be taking these ideas really seriously.

David: Yeah, that's important. A lot of good philosophers believe this stuff. I was responding to Bostrom and Chalmers, who are world-leading philosophers. Many of my colleagues at Oxford who were very good believe this stuff. Many philosophers, especially young philosophers, are very sympathetic to this. So the claim can't be that no serious people believe this stuff. One of the frustrations I have in talking to academics about this is where the debate should be conducted. So I was very clear with my managers, with my friends: there's one reason I'm writing about the singularity hypothesis, namely that Nick Bostrom and Dave Chalmers put it in article or book form. And I said, look, I will not respond to your forum posts.

If you want me to respond to instrumental convergence arguments, you need to write very serious papers. Turner and colleagues put two in a row in NeurIPS, which is a good conference, and there I'm responding. And so I'm very open to responding to serious people. But I want to be very clear that we as a profession do not conduct our debates in public.

We do not conduct our debates on internet forums. My discussions with my colleagues are meant to be happening in journals, and they're expected, for good epistemic reasons, to be making their contributions there. The contributions that I can and would respond to, in the way I would respond to a colleague, are the ones done in their capacity as world-leading philosophers, namely philosophers writing in world-leading journals. So I'm very open to the idea that this could be done. Many of these people hit these journals every year. But I want to see them do it. And if they're not going to do it, I don't know what to do.

Sophie: Okay, yeah, so it sounds like it kind of comes down to the question of what philosophy journals are for. Are they for talking to other philosophers? And then, maybe more importantly, is there a value in that that we should be forming a field around? Or, I can imagine on the other side thinking, the value of philosophy is to talk to the public about ideas that interest them or that need to be addressed, or something like that. Would you agree that that's where the rubber hits the road, or not?

David: Well, so I think that's very important. The downstream value of philosophy is what we do for our students, what we do for the public. That's largely true. When we talk about science communication or communication of results in the humanities, generally the idea is the research happens first. We figure out as researchers what's true, if we're doctors, what works, and then we tell the public, do this. If we're scientists, we figure out what works, you know, what's happening with the climate, and then we tell the public, do this. What we do not do is intermingle informing the public with the process of figuring out the truth. We have a system for conducting research. We are all trained in the system. It works.

It's why people listen to us. And when we skip straight to talking to the public, at best we muddy the truth, because we've gotten out ahead of the system. But at worst, we give the impression of speaking from a place where the research has been done, which is what the public expects from us, whereas in reality we're speaking too soon.

And I think that's really abusing our platform, especially from leading universities. When we go to the public, if we're not very clear about the epistemic status of these claims, we can be trading on a really hard-earned reputation for rigor and research-first practice that academics have, and we can be misusing that reputation.

Leah: Okay. Interesting. I mean, yeah, one thing that's sort of striking to me about this is that there's been a move in research more generally to try to bridge the divide between what the public is curious about and what researchers are prioritizing in their research. And it's interesting, because EA loves expertise-driven decision making rather than, sort of, community-based or democratic decision making.

But this strikes me as almost like an example of research democratization where you have the community saying, you know, we're really concerned about this. We think these issues are really important and you might actually think that that itself is an argument for philosophers saying like, okay, people seem to think this idea is compelling.

It's been expressed in these various forum posts. How can we formalize this? How can we advance the most plausible formal hypothesis of it in an academic context? And that might seem to be a good thing to try to do. Right.

David: And that's fair enough. And just to be clear, I have taken, and I'm currently taking, some research grants intended to do exactly that. So my paper on instrumental convergence is funded by the Survival and Flourishing Fund. People like Yudkowsky and Habryka have influence over those funding decisions.

My job at GPI was funded through that. So the idea that public interest can generate funding, and funding can generate research, I'm pretty sympathetic to. But I very much think that research is done first in journals. So if I write something on my blog, it's either something that's just not publishable in a journal, or it's backed by my papers. And I respond mostly to journals. And what's frustrated me is that a lot of the people getting this funding have sometimes written trade books instead of academic books, have written forum posts instead of academic articles, have put articles on repositories instead of putting them in journals. What they said they were going to do is use this money to push the research forward, and then I'm hitting a brick wall, because I'm ready to do my end, but I'm running out of research to respond to.

Leah: Okay. If I can try to summarize sort of where we landed on this, there are these ideas that are sort of being informally advanced in forum posts, blog posts, and otherwise that sometimes are then being used as justification for decisions about what types of things to prioritize, what types of things to fund.

And it seems like one thing you're saying is like, the academic research piece needs to fit in there somewhere. So if you have the forum version of the idea, that's fine. But then at some point someone needs to try to publish this idea in an academic journal. And then once that has happened, then maybe you make the funding decisions, the prioritization decisions and so on based on the formal statement in the journal, but you can't just like skip that step. Is that fair?

David: You can't. And effective altruists used to think that. In their short-termist work, they were very big on evidence first, randomized controlled trials first. If they gave somebody money and they weren't doing RCTs, they said, we want to see better data before you get the money. And they've started skipping that step, and I worry that they're backsliding on the very most important thing they injected into philanthropy, namely an insistence on rigorous evidence.

Sophie: Following Carl Sagan and others, you have said at multiple points that quote, "extraordinary claims require extraordinary evidence." This seems to give people license to downplay or dismiss claims labeled extraordinary when they're not backed by strong evidence.

And so it seems important to have a good handle on what makes a claim extraordinary or not. Do you have a view about what makes a claim extraordinary?

David: I do, but people aren't going to like it. So by extraordinary, I just mean implausible. When I call a claim extraordinary, I mean, that's a pretty implausible thing to think. And then you ask me why, and we're mostly back to priors, I suppose. You've all been in debates with someone where you think that the premises they're starting with are quite implausible, and you don't really know what to do except tell them that these are implausible starting premises. And that's often where I find myself, and I realize that there is a group of people, namely effective altruists, who don't find these beliefs implausible. I think they are often wrong about that. And, that's basically what I'm saying.

Leah: Okay. Yeah. I mean, I take it that in 1750, if you had correctly predicted things that would happen in the next 300 years, like, we will go to the moon, women will be more educated than men in many countries, millions of people will travel through the sky every day, smallpox will be eradicated. These claims both would have sounded extraordinary and maybe implausible, and also they would have been hard to support with much evidence.

Of course, for each one of these claims, there probably would have been thousands and thousands of extraordinary and false claims. The true ones would have been needles in the haystack. But it seems like there are a couple of ways you could go here. You could either ignore the haystack entirely, needles and all, or, given the importance of identifying the true claims, you might think that it's actually really important to try to sift through the haystack and figure out if there are emerging problems that we should try to address, which I take it is what effective altruists do. So how should we reconcile the facts that some extraordinary claims are true, that it won't always be possible to support those claims with strong evidence, and that identifying these true claims is important?

David: Sure, yeah, implausible things happen every day. You know, at every second of my life it's implausible that I'm going to die, and yet at some second I am going to die. So we can't just deny that implausible things happen. The question is not, would it be good to find a needle in a haystack? The question is, if I sift through a haystack and try to find a needle, how likely am I to know a needle when I find it, and how likely am I to confuse hay for a needle? And these two probabilities are, you know, what Bayesians are concerned about. And the worry is, look, I think it's pretty hard to know a needle when you find it, and I think a lot of people confuse hay for a needle. I think that's what's going on with effective altruists right now. And so, of course, if people can give me decent evidence that they've got a needle, fine. But if what we all agree on is that it's going to be pretty hard to know a needle when we find it, then I'm kind of worried about why we're picking, you know, a piece of hay that might be a needle to bet on when there are perfectly good apples next door, and apples are tastier than hay.
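
To make that worry concrete, here is a minimal Bayes-rule sketch with purely hypothetical numbers (the base rate and error rates below are illustrative assumptions, not figures David gives): even a fairly good "needle detector" mostly flags hay when needles are rare.

```python
# Minimal Bayes-rule sketch with made-up numbers (illustration only).
base_rate = 1e-4             # assumed prior: 1 in 10,000 candidate claims is a true "needle"
p_flag_given_needle = 0.9    # assumed chance of recognizing a real needle when we see one
p_flag_given_hay = 0.05      # assumed chance of mistaking a piece of hay for a needle

# Probability that a claim gets flagged at all (law of total probability).
p_flag = p_flag_given_needle * base_rate + p_flag_given_hay * (1 - base_rate)

# Posterior probability that a flagged claim really is a needle (Bayes' rule).
p_needle_given_flag = p_flag_given_needle * base_rate / p_flag

print(round(p_needle_given_flag, 4))  # ~0.0018: most flagged claims are still hay
```

Unless the detector is far more discriminating than the error rates assumed here, most of what gets pulled out of the haystack is still hay.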

Sophie: So in 1930, someone might have reasonably said new technologies are emerging rapidly. These seem likely to fundamentally reshape our world in ways that are going to have profound, potentially good, but potentially bad consequences for humanity. It's hard to predict exactly what those consequences will be, but we should try to identify and mitigate them to the extent we can.

Do you think that this kind of approach is different from what effective altruists are doing today? And if so, why?

David: I don't, and I think it's a really good example. So imagine the 1930s; we're at the height of the Great Depression. Nobody's got work. Nobody's got food. Nobody's got money. Imagine you've got a large private foundation locking the granary doors, saying, we're not going to give you food or work; this is for future people. And then the people banging on the granary doors say, okay, at least tell us what you're going to do for future people. And they say, oh, I don't know, but there's probably something useful to do. This looks very bad. And this is something like the situation we're in right now, namely that many people in the world have very little in the way of food, money, and basic necessities, and we have a movement that was providing these things that is now not providing them.

And the thought is that you need a pretty good argument to do that. In particular, if you think about 1930, it's not clear what that foundation would have done after it locked the granary door. It's not immediately obvious, given knowledge at the time or even knowledge now, what would have been a good thing to work on. And it's quite plausible they would have worked on some bad things. Most wealthy folks in the United States in the 1930s were quite fond of eugenics. They thought that one of the problems was a rising tide of, you know, non-white folks taking over, and that maybe what the future needed was, you know, to promote some good white babies. And I think this reflection maybe helps to shed light on just how hard it is to find the actual needle in the haystack when you lock the doors, and just how easy it is to take a piece of hay, or maybe something a lot worse than a piece of hay, call it a needle, and go for it. So I think this is a good image for what effective altruists are doing, but it's maybe not the most attractive image for making their case.

Leah: Yeah, because I take it that what they would want to say in the 1930s philanthropy case is, we're trying to figure out what we can do to prevent World War II and the Holocaust, and if you could actually do that, that would be extremely valuable. And so I guess the argument also has to be that not only are they not going to do that, but they might wind up funding some of this bad eugenics stuff. You know, there's an empirical claim there about our ability to sort of identify the things that are going to... Yeah.

David: That's right. Well, to be clear, the 1930s United States was quite isolationist. They were very unconcerned about the world in terms of global politics. We put absolutely punitive sanctions on Germany with no understanding whatsoever that this would lead to a second world war. So, you know, the foreseeability of the war, and the likelihood that people were going to put this on their minds, were quite low. As for concern about the Holocaust, quite frankly, a lot of Americans were not particularly concerned about this, and I don't know that we would have expected anyone who had money in the thirties, which was mostly not my folks yet, to have been particularly concerned with preventing the Holocaust. In particular, some of them might have been, certainly not enthusiastic about genocide, but enthusiastic about some of the beliefs about race and religion that underlay it.

So of course, nobody's going to say if you could go back in time and prevent World War II, you shouldn't do it. But we can't just assert that if somebody sat down in the thirties and thought about what to do, they would have realized World War II was coming and that they would have stopped it because both of those claims, I think, are not that plausible without evidence.

Leah: Yeah, no, I think that's right about the 1930s case, but I guess I wonder whether it's right about modern EAs, because it doesn't seem like EAs' haystack searching has been entirely fruitless. You know, as we discussed earlier, they correctly predicted that we should be worried about pandemics before COVID, and they were worried about the risks posed by AI long before this became a mainstream concern that many lay people now share.

Of course, there have been misses too. I take it that the nanotechnology concerns of some proto-EAs in the 1990s and early 2000s might be a counterexample. But when deciding whether to pay attention to extraordinary claims or not, track record might be the kind of thing that seems to matter. So to what extent does EA's track record provide a reason to pay attention to the kinds of claims EAs are currently making about existential risk?

David: Sure, so I think two things to emphasize. The first and most important is counting the track record correctly. That is, Effective Altruists did not just say we should be concerned about pandemics. They said we should be concerned about existentially catastrophic pandemics. They did not just say we should be concerned about AI.

They said we should be concerned about existentially catastrophic risks from AI. As for pandemics, you know, I've written a great deal trying and failing to find some semblance of an argument that we might get existentially catastrophic pandemics in the future. I can't even get a detailed argument out of effective altruists. When I push them for an argument, they say, okay, we can't tell you, it's an information hazard. I've literally asked a very high-ranking effective altruist, why don't you tell the CIA? And gotten no response. So the track record is not just we should be concerned about pandemics and we should be concerned about AI.

The track record is of pushing very extreme worries that not only have not happened yet, but that really don't have much evidence behind them. And then I do think the negative track record is important. So, you know, you mentioned nanotechnology. Remember, Eliezer Yudkowsky and a lot of people in rationalist circles were following Eric Drexler, who was a pioneer of nanotech, and were following the Extropians.

They were very, very concerned about nanotech. Yudkowsky was so concerned that self-replicating nanobots were going to do us in. He said, there's one and only one way out of it. He said, we need to build the singularity because only the singularity is going to stop it. So he founds the Singularity Institute, now MIRI, to build the singularity.

He says, if we don't do this, we're all going to die. That doesn't happen. Now he says, if we don't stop the singularity, we're all gonna die. And this series of changing, very strong predictions of doom that don't come to pass is, I think, not a great track record. And of course it's right that some degree of concern about pandemics and some degree of concern about AI is warranted, and that's important, but I don't think the track record of making the more explosive claims has been particularly good.

Sophie: So, on the EA Critiques podcast, you cited existential risk claims from the EA sphere, for instance Toby Ord's estimate in The Precipice of a 1 in 6 chance of existential catastrophe before 2100, as well as other, higher X risk estimates, and said, quote, "These are strong claims. These are claims that you can give arguments for, they are claims you can defend, but I think everybody should admit these are niche views, and these are views that many people are going to find pretty surprising." You expressed hesitance about assigning your own X risk probability, but said that if you had to, you would give a number that's a couple orders of magnitude lower.

So, at risk of just asking you to justify your priors, in your view, what makes a 1 in 6 X risk estimate more surprising or niche or implausible than, say, a 1 in 600?

David: Sure, sure. So let's talk about extinction risk for a moment, because we can go back to existential risk more broadly, but that gets a bit more complicated. A couple of things to know: humans, like most species, are very hard to kill. Many species make it for millions and millions of years.

So many species don't have anything like, you know, a 1 in 10 or 2 in 10 risk of extinction per century. And many of the mechanisms being surveyed have a particularly low base rate of causing extinction. So if you ask, for example, which mammalian species have ever gone extinct due to a pandemic, remember, this is supposed to be the number two existential risk of the century, there's one we can name. It's a species of rat on a single island that went extinct from a pandemic. So to say that all of humanity is going to go extinct during a pandemic, despite there being more of us, despite us being spread out, despite us being able to think about it, make vaccines, and go on nuclear submarines, is just a very strong claim.

And then you go through some others. You go through asteroids; we all know the numbers, it's probably not very likely. You say, okay, maybe nuclear war is going to do us in. That sounded really plausible, but effective altruists were the ones who actually taught us that it wasn't a plausible claim. It's very hard to see how we would all die in an existential catastrophe from nuclear war.

And so we go on and on and on examining the evidence, and it looks like we're talking about phenomena that have very low base rates of causing extinction and not very plausible arguments for leading to extinction. And then they say, okay, but there are these, you know, couple of phenomena that we don't really have much evidence about, like future nanotech or biotech, who knows what that'll be, or like future AI, who knows what that'll be.

And that's the thing that's going to do us in. And it sounds an awful lot like, you know, you've got a series of claims that weren't very plausible on their face and that all failed when we examined the evidence, and now we're making similar claims in different areas where there's no evidence. And you say, given that these were kind of strong claims, and they all failed when we made them, why do you not believe, you know, the boy crying wolf this time? And it's like, well, where were the wolves?

Leah: Okay, so in your writing you talk about this idea of regression to the inscrutable in which effective altruists, this is a quote, place increasing confidence in the least evidentially and scientifically tractable risks, end quote.

As we understand it, the idea is something like, most existential risk doesn't come from obvious stuff like climate change, which we can measure and analyze. It comes from more abstract risks like those potentially posed by superintelligent AI. This is an interesting observation, but it almost seems like the argument that extraordinary claims require extraordinary evidence justifies this approach. In other words, we can systematically and rigorously assess the risk posed by things like asteroids and climate change, and we can determine that the existential risk posed by these things is relatively low. Conversely, as you note, it's harder to do this for more abstract risks, like those posed by artificial intelligence or engineered pandemics.

And on this view, it might be reasonable to say something like, to the extent that we're worried about existential risk, the bulk of this concern should come from places where it's hard to prove that we shouldn't be worried. This view would then be compatible with a low overall assessment of risk, but the distribution of that risk might look similar to EA's current distribution.

What do you make of this kind of view?

David: I think that's right. So the conditional claim, that to the extent we're worried about existential risk in the near term it has to come from exotic sources that are hard to evaluate, is absolutely right. We all agree: we looked at earthquakes, maybe they'll kill us, they won't. Supervolcanoes, probably not.

Asteroids, no. Climate change, no. Nuclear war, no. Pandemics, we're not agreed, but I think probably no. So we're all agreed that if we're going to have a substantial risk estimate, it's got to be somewhere else. So I look at that somewhere else, and I say, well, I don't want to put the probability mass very high anywhere else. I don't have any evidence here, so why would I put a lot of probability mass here? And we all agree that if there's, you know, probability mass to be had, it has to be at the tails. But to me, when the more plausible risk stories went down, the right thing to say about the tails is that they should go down pretty heavily, too.

Sophie: Mm hmm. Yeah, so this question felt a little paradoxical to me in a way that reminded me of the preface paradox. So, the idea that, you know, an author writes a book and states in the preface that despite their best efforts to be accurate, they believe there are surely at least some errors, some false sentences in their work.

And yet they also wrote the book, and they just revised it, and they believe every individual claim in the book is true. And this creates a sort of paradox, since the author, you know, simultaneously believes that all their individual claims are true while also believing that at least one of them must be false.

And this had me thinking that the analogy is something like, you check the first 50 of 5,000 sentences, you think they're all true. And so now you put higher probability on each of the following sentences being false because you're so confident that at least something you said is false.

And that seems like a paradoxical thing to do because it seems like if you look at the first 50 sentences and they all seem true, then you should just lower your credence in the idea that something's false in the book. Do you think that there's a similar paradoxical thing going on here?

David: Yeah, I think that's exactly right. So I believe of every second of time it won't be my last second on earth, but one of them will. I believe of every fairly specific cause of death it won't be my cause of death, but one of them will, and these are absolutely correct descriptions of the situation. I don't know exactly the right response to the preface, but I know what it's not.

Namely, nobody responds to the preface by saying, oh, that sentence is going to be the one that's wrong, or, you know, that cause of death or that second is going to be the one that kills me. What generates the preface paradox is supposed to be that we don't want to say that. And so there's certainly some kind of puzzle going on here, but the idea that the way out of a preface-type situation is to find the witness to the existential claim that something's wrong or something's going to kill me is not usually the route that we go.

Sophie: I mean, I know very little about responses to the preface paradox, but are you saying, most people don't want to say something like, if you check the first 49 sentences and there are only 50 sentences, and all the first 49 seemed true, then you should become very confident that the 50th sentence is false?

David: Oh, I see. That's a little different. So if you conditionalize on the fact that one of 50 sentences is false, and you have very good evidence the first 49 are not false, then you get pretty good evidence the 50th is false. Okay. If you don't conditionalize on the fact that one of them must be false, you get pretty bad evidence that the 50th is false. So if we were to say, existential risk is very high in this century, conditionalize on that, what does it come from? You rule out all the scrutable causes, you say it must come from the inscrutable causes, that would be a good inference. But if you don't conditionalize on risk being high in this century, I think you get very good evidence that risk is low in this century. And the right thing to do is just to push your risk estimates, maybe even lower than they started, which was pretty low.
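
Here is a minimal sketch of that difference, assuming 50 sentences that are each independently true with a hypothetical 95 percent reliability (the numbers are illustrative, not anything discussed in the episode):

```python
# Minimal sketch: 50 sentences, each independently true with probability p.
# We verify the first 49 and find them all true.
p = 0.95   # hypothetical per-sentence reliability
n = 50

# Without conditioning on "at least one sentence is false," checking the
# first 49 tells us nothing new about the 50th (by the independence assumption):
p_last_false_unconditioned = 1 - p                # 0.05

# Conditioning on "at least one of the 50 is false" AND on the first 49
# being true forces all of the doubt onto the last sentence:
p_last_false_conditioned = 1.0

# The flip side: verifying 49 true sentences should lower our credence
# that the book contains any error at all.
p_any_false_before_checking = 1 - p**n            # ~0.92
p_any_false_after_checking_49 = 1 - p             # 0.05

print(p_last_false_unconditioned, p_last_false_conditioned)
print(round(p_any_false_before_checking, 2), p_any_false_after_checking_49)
```

The analogy, as David draws it: conditioning on risk being high and ruling out the scrutable causes pushes the probability onto the inscrutable ones, but without that conditioning, ruling out the scrutable causes should mostly lower the overall estimate.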

Sophie: Yeah, so maybe the thought here is that there's a disanalogy because in the Preface Paradox case, you have some independent evidence for the claim that at least one of your sentences is false.

Whereas, in the long-termism case, there's no independent evidence that something really bad is going to happen other than just there being evidence that all of these individual bad things might happen.

Does that seem right to you?

David: That's absolutely right. And of course you could widen the comparison class to build the analogy. So you could say, there's very good evidence that some second in time will be the last moment for humanity, or there's very good evidence that some event will be, you know, the cause of the end for humanity. But then you can't go on to say: and that event is one of the things on this page of the preface, or, you know, if we make this table of, like, 10 things that could do us in, it's got to be one of these 10 things. That would be, I think, really putting the restriction in place where it shouldn't be.

Leah: Yeah. I mean, one thing that's maybe tricky here is, if we say, look, we've ruled out these obvious risks, then as we do this, the existential risk we assign should be decreasing, because the list of things that could potentially be contributing to risk is getting shorter and shorter.

And the obvious candidates are not. But one thing that's sort of challenging here is that it does seem like a lot of people share this intuition that existential risk is high. I mean, there have been surveys of just, like, average Americans and average Chinese respondents who cite risks on the order of one in 20 to one in seven.

We get these super forecasters predicting one in 100 this century, domain experts predicting one in 20 this century. You know, these risks aren't one in six, they're not one in two, but they are not one in 10,000. And so I guess I'm just sort of wondering, like, if we like this method of systematically eliminating risks and decreasing our probability, are we willing to then say that like everybody who's predicting these high risks is getting it wrong?

David: More or less. I think one thing that's important is if you look at the data behind these surveys and you don't take the mean or the geometric mean or the median, you just look at the distribution of responses. They're all over the place. So, my claim is there's a very high bias, very high random noise, very low true signal.

That claim would predict forecasts being all over the place, and the forecasts are all over the place. So, of course, you can take forecasts that are all over the place and average them. And if you're predicting something very tiny, the person who thinks it's not tiny is always going to win, because averages push it up. But I told a story, a story that at least is consistent with the evidence, on which these forecasts aren't tracking that much and we shouldn't take them that seriously.

So in particular, the idea with forecasters is that we take people, we train them to do very well on a six-month, one-year, two-year time horizon, and we prove they can do well on that horizon. And then we say, okay, I want you to forecast the next 1,000 years, I want you to forecast what's going to kill humanity. And we don't have any evidence, on the basis of that short-term reliability, to expect that they can somehow cut through all of the noise in forecasting the long term. And so we really need some reason to be averaging these forecasts that are all over the place before we say anything more than: it's not clear that people really know what's going on, and it's not clear how seriously we should take these forecasts.
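
A small illustration of the averaging point, with made-up forecasts rather than numbers from any actual survey: when estimates are spread across orders of magnitude, the arithmetic mean is dominated by the most pessimistic respondents.

```python
# Minimal sketch with hypothetical forecasts spread over orders of magnitude.
import statistics

forecasts = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 0.1]  # made-up per-person risk estimates

arithmetic_mean = statistics.mean(forecasts)          # ~0.019, pulled up by the 0.1 outlier
median = statistics.median(forecasts)                 # ~0.00055
geometric_mean = statistics.geometric_mean(forecasts) # ~0.00032

print(f"arithmetic mean: {arithmetic_mean:.3g}")
print(f"median:          {median:.3g}")
print(f"geometric mean:  {geometric_mean:.3g}")
```

Which summary you pick moves the headline number by roughly two orders of magnitude here, which is part of why David is wary of reading much into averages of forecasts that disagree this widely.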

Sophie: So we've talked about the sort of tendency to speculate, but one thing that's kind of in the background is just the tendency to quantify at all. So there's a tendency within EA to assign numbers to things that are really hard to estimate, like the cost effectiveness of an intervention to promote the welfare of shrimp or the likelihood of existential risk due to AI, or even maybe just things like well-being, quality of life.

So you ask in your blog post on epistemics whether a preference for quantification might not be as well suited to debates about new areas as it is in, say, Silicon Valley contexts.

What factors inform your decisions about when and when not to quantify?

David: Yeah, there's this thing I've called, in a review of some work by Dan Greco, Bayesian hegemony. And I say this as someone who used to be a very hegemonic Bayesian. Namely, Bayesians have had a very good couple of decades. They're making progress in computer science, in cognitive science, in every area of social science, in every area of philosophy. And they point to their many victories in explaining things as a reason to go ever further and further and further. They just want to explain almost everything. And there's no doubt that giving a precise probability to all events and a precise utility to all outcomes, and the other elements of the Bayesian formalism, is often very helpful.

This is why Bayesians have been so successful. But at the same time, Bayesianism is one of many models of uncertainty, and one that many people think gets less appropriate as uncertainty widens. So I think a lot of philosophers consider at least a small deviation from Bayesianism: the idea that when you're very uncertain, maybe you should use a set of probability functions or an interval of probabilities, and likewise for utilities. Further down the line, if you look at the kinds of people who get paid by governments to predict 50 to 100 years of the future, they don't even like that. They don't really think we should be forecasting in any way. They're going to use scenario planning. They're going to use robust decision-making methods like info-gap decision theory, which, when you look at them, are certainly ways of thinking through what the future could be.

And there are ways that people pay millions of dollars to get reports on how the future could be. But at least what most world governments and oil companies and long-term planners do today is to precisely quantify the probabilities and utilities they can, and, when they can't, to construct models that are a bit more appropriate to our level of uncertainty, rather than putting in numbers that are going to mislead us because, you know, they're not coming from a solid basis.
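
As one illustration of the kind of deviation David mentions, here is a minimal sketch (my construction, with made-up payoffs and a made-up interval of priors) of how a precise expected-value calculation and an interval-of-probabilities calculation can come apart for a speculative bet:

```python
# Minimal sketch comparing a single precise prior with an interval of priors.
def expected_value(p_good: float, value_good: float, value_bad: float) -> float:
    """Expected value of a gamble that pays value_good with probability p_good."""
    return p_good * value_good + (1 - p_good) * value_bad

VALUE_GOOD = 1e12   # hypothetical astronomical payoff if the speculative claim is true
VALUE_BAD = -1.0    # hypothetical modest cost if it is false

# Precise Bayesian: one prior in, one verdict out.
print(expected_value(1e-6, VALUE_GOOD, VALUE_BAD))   # ~1,000,000: looks like a clear win

# Imprecise / interval approach: report the range over all admissible priors.
p_low, p_high = 1e-15, 1e-4                          # hypothetical interval of priors
evs = [expected_value(p, VALUE_GOOD, VALUE_BAD) for p in (p_low, p_high)]
print(min(evs), max(evs))                            # from about -1 to about 1e8: no clear verdict
```

When the defensible priors span many orders of magnitude, the expected values do too, which is roughly why imprecise models decline to deliver the single decisive number a precise prior would.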

Leah: Yeah, I mean, this is somewhat related, but somewhat unrelated, going back to the AI risk thing. I mean, I don't work on AI risk and I, you know, find some of the numbers that people put forward about the probability of X risk due to AI to just be like, where the hell did you get that?

But I also want to be able to hold on to the claim that, like, AI developments are scary and some of the things that are being developed, intuitively seem like they could really set back humanity. They could be really good, but they could be really bad.

And even if we don't assign numbers to those things, they seem like the kinds of things that it's not unreasonable to be worried about and, to the extent that we can identify tractable opportunities to mitigate those risks, to be devoting time and resources to. Which, I mean, do you just reject that kind of claim?

David: No, I completely agree. I was hired at Vanderbilt to do that work. Here are some scary things about AI. Militarization: OpenAI and Anthropic this year signed major defense contracts, and Russia and Ukraine are currently exploring the idea of taking drones (they already have a lot of drones on the battlefield), giving them machine guns (they already do that), giving them an AI system, taking the human out of the loop, and just telling them to go kill some people. That's probably going to be on the battlefield when people make it work. So militarization of AI is frightening.

Deepfakes are frightening. I don't know if we can trust photographs and videos anymore. People can't trust their own, you know, privacy and control of their bodies. North Korea is stealing Bitcoin now for a living. I'm probably not allowed to say that, but, you know, they've been quite successful at it, and people committing digital crimes of that sort are going to have a major heyday.

Someone just convinced somebody the other day they were Brad Pitt and they needed 800,000 for cancer treatment. Everybody's worried about AI. That's literally my job. I am quite concerned about AI. I don't think that we need to link concern about AI to concern for the very least likely and the very worst possible outcome, precisely because there are so many good reasons for concern already.

Leah: Okay. And putting aside extinction risk, you're less worried about these scenarios of like an authoritarian government having control over a super powerful AI that sort of entrenches their power and their norms, which are bad norms?

David: Oh, I mean, I'm terrified of that. So obviously the Chinese government is not a democratic government; it is very actively invested in surveillance and has tied it to increasing aspects of people's lives, think of the social credit score. And I'm uncomfortable about what they'll do with that data, and Congress was uncomfortable enough about what they'll do with it to ban some major apps on the US market. There's a fairly right-wing authoritarian government coming in in the U.S., and in this country I really have a lot of concern about what they're going to do with AI, when they have, you know, the backing of people like Elon Musk, who have a lot of AI tools and a lot of data, and whose beliefs and motives I'm not particularly happy about. I'm quite concerned about what the right wing will do with AI. Again, I don't know that we need to jump to the claims that effective altruists make about, you know, a million-year stable totalitarianism coming from artificial intelligence to be worried about totalitarian uses of artificial intelligence that are already here or right on the horizon.

Leah: Got it. So it's like, we don't need the indefinite persistence piece of this to be very concerned about it. And in fact, that might be a bit of a red herring.

David: I think that's right. I think that's right. But it's a good concern, and I'm glad EAs are concerned about it.

Leah: Okay. That brings us to the end, so we'll go with our last question, which is: what is one rule or norm, broadly related to what we've been talking about today, that you would change if you could, and why?

David: Sure. I think you've probably guessed: speculation beyond the evidence. I think it's very important to milk everything out of your evidence that you can, to take nothing more out of your evidence than that, and to be very clear about the limits of what inferences we can and can't draw. So I think if we could get a bit more of a match between the degree of speculation and the degree to which the evidence warrants that speculation, we would see a return to many of the very helpful things we got from the early movements in evidence-based philanthropy, the early randomized controlled trials.

And I think we'd get a lot of the positive developments that effective altruists brought to philanthropy coming back in full force.

Sophie: Okay, great. Well, that seems like a good place to end. So thank you for being on the podcast. We've really enjoyed this conversation.

David: Thank you for having me.