The Sentience Institute Podcast

Tobias Baumann of the Center for Reducing Suffering on global priorities research and effective strategies to reduce suffering

July 28, 2021 Sentience Institute

“We think that the most important thing right now is capacity building. We’re not so much focused on having impact now or in the next year, we’re thinking about the long term and the very big picture… Now, what exactly does capacity building mean? It can simply mean getting more people involved… I would frame it more in terms of building a healthy community that’s stable in the long term… And one aspect that’s just as important as the movement building is that we need to improve our knowledge of how to best reduce suffering. You could call it ‘wisdom building’… And CRS aims to contribute to [both] through our research… Some people just naturally tend to be more inclined to explore a lot of different topics… Others have maybe more of a tendency to dive into something more specific and dig up a lot of sources and go into detail and write a comprehensive report and I think both these can be very valuable… What matters is just that overall your work is contributing to progress on… the most important questions of our time.”

  • Tobias Baumann

There are many different ways that we can reduce suffering or have other forms of positive impact. But how can we increase our confidence about which actions are most cost-effective? And what can people do now that seems promising?

Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.

Topics discussed in the episode:

  • Who is currently working to reduce risks of astronomical suffering in the long-term future (“s-risks”) and what are they doing? (2:50)
  • What are “information hazards,” how concerned should we be about them, and how can we reduce them? (12:21)
  • What is the Center for Reducing Suffering’s theory of change and what are its research plans? (17:52)
  • What are the main bottlenecks to further progress in the field of work focused on reducing s-risks? (29:46)
  • Does it make more sense to work directly on reducing specific s-risks or on broad risk factors that affect many different risks? (34:27)
  • Which particular types of global priorities research seem most useful? (38:15)
  • What are some of the implications of taking a longtermist approach for animal advocacy? (45:31)
  • If we decide that focusing directly on the interests of artificial sentient beings is a high priority, what are the most important next steps in research and advocacy? (1:00:04)
  • What are the most promising career paths for reducing s-risks? (1:09:25)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast


Speaker 1:

Welcome to the Sentience Institute Podcast, where we interview activists, entrepreneurs, and researchers about the most effective strategies to expand humanity's moral circle. I'm Jamie Harris, researcher at Sentience Institute and Animal Advocacy Careers. Welcome to our seventeenth episode of the podcast. This is the second episode with Tobias Baumann of the Center for Reducing Suffering. In the first episode, I spoke to Tobias mostly about why he thinks we should accept longtermism and focus on reducing risks of astronomical suffering in the future, also known as s-risks. We return to those ideas in this episode too, but we focus more on implementation: what we can do to reduce s-risks most cost-effectively. We discuss how to advance the field of global priorities research, what implications longtermism has for animal advocates, whether we should kickstart a movement focused on encouraging consideration of artificial sentient beings, and more. On our website we have a transcript of this episode, as well as timestamps for particular topics. We also have suggested questions and resources that can be used to run an event around this podcast in your local effective altruism group or animal advocacy group. Please feel free to get in touch with us if you have questions about this; we would be happy to help. Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce suffering, taking into account all sentient beings. He's also pursuing a PhD in machine learning at University College London and previously worked as a quantitative trader at Jane Street Capital. Welcome back to the podcast, Tobias. Okay, so as a quick recap of the last episode, what are suffering risks, or s-risks, and why are they plausible priorities?

Speaker 2:

Yeah, so s-risks are simply risks of worst-case outcomes in the future that contain a very large amount of suffering. The definition is that it exceeds all the suffering that has existed so far. And why is it a priority? Because that would be very bad, basically. So even a relatively small risk of that happening would be important in expected value. And one can also argue, well, last time we discussed a little bit how optimistic or pessimistic we should be about this, but I think the probability of something like this happening is not very small, given all of human history and what we've seen so far.

Speaker 1:

Great. And listeners can obviously go back to the previous episode and listen to that if they want a bit more detail on what the concept is and why we should or shouldn't think about working on this topic. So I guess we're going to dive a little more into some of the practicalities and the next steps that arise from thinking about these topics in the rest of the episode. So, first question: who is currently working to reduce s-risks?

Speaker 2:

This kind of depends a little bit on how broadly you construe working on s-risks, because a lot of things are potentially relevant. Somebody working on improving the way the system works, or on quote-unquote normal animal advocacy, that's also relevant to reducing s-risks. But I guess your question is about the more narrow conception of people who explicitly focus on s-risks, who've made this the main focus of their work. And in terms of that, there are two main groups that I would name, which are the Center on Long-Term Risk, CLR, and the Center for Reducing Suffering, which I co-founded. These two groups have slightly different focuses, and I can go into more detail on that.

Speaker 1:

Yeah, that was going to be my next question, really. And I guess just before we dive into that, I do agree that this definition thing is quite important, in the sense that to some extent literally everybody doing animal advocacy, or even more broadly anything related to moral circle expansion, could in some form be conceptualized as reducing s-risks. But as you say, I think it makes sense to focus more specifically on the organizations who are explicitly focusing on those kinds of long-term risks. So talk us through, I guess: you mentioned CLR and your own organization, the Center for Reducing Suffering. What is the main focus of each of these groups? We might as well start with CLR, seeing as they've existed for longer.

Speaker 2:

Yes. The focus of the Center on Long-Term Risk is on cooperative AI, artificial intelligence. The idea there is to prevent worst-case outcomes that might arise from escalating conflicts involving transformative artificial intelligence. So what they're doing spans fields such as bargaining theory, game theory, and decision theory, and they're looking to apply these insights to artificial intelligence in order to reduce the risk of worst-case outcomes, or s-risks, arising from interactions between one AI system and other AI systems, or between AI systems and humans, and to increase the chances of a cooperative outcome resulting from these interactions. With regards to the Center for Reducing Suffering, I would say that our focus is more on broader prioritization research and macrostrategy, and on exploring many different interventions for reducing suffering. We are also doing some work on advancing suffering-focused moral views and developing those further, which is something that CLR is not doing a lot of anymore. But I can go into a lot more detail on these things.

Speaker 1:

I'd love to, and I'm interested as well in how the focus of CLR, let's start with CLR again, has changed over time, because obviously they've been going for a few years. And listeners might hear that and think that sounds quite specific; if you think back to all the different types of s-risks we talked about in the previous episode, for example, there are lots of different potential pathways towards reducing suffering risks. So the question is how and why has CLR ended up on that particular path, do you think?

Speaker 2:

Yeah, I mean, I guess they simply think that this is the most important or the most neglected or the most tractable way to reduce s-risks. Whether or not that's true is of course a long story; they're making the case for that in their research agenda, and I think there's a lot to be said either way. And generally, if you're asking how it has come about, it's just a gradual evolution of priorities. When new people come in, they have new interests and new skills; for instance, I think CLR's current focus is obviously due to Jesse Clifton and his work. And over time you have topics that are being discovered, such as cooperative AI, but also malevolence, which we discussed last time. I think there is a spectrum of approaches and different views within both of these groups, the Center for Reducing Suffering, CRS, and CLR. It can be a little bit annoying that the acronyms are so similar, but it is what it is. So what you're seeing over time is a diversity of approaches, whereas earlier it was just one group, which tends to result in maybe a little bit of an intellectual monoculture.

Speaker 1:

And I get the sense, so I wonder, how much CLR's focus is about just the need to specialize somewhat, versus a confident view on the cause prioritization issue. For instance, I've noticed that in some of the funding they give out, I think they still fund groups like Wild Animal Initiative and are willing to fund a broader variety of intervention types that might reduce s-risks, or at least research streams that might reduce s-risks, even if their own research is slightly more narrowly focused.

Speaker 2:

Yeah, that seems true. It might be some combination of people being particularly interested in this and believing that it is a particularly tractable way to reduce s-risks.

Speaker 1:

Okay. So we've talked about CLR a bit, but I wonder as well, there are a number of other groups whose work is at least in some way relevant to this topic, right? An obvious example being Sentience Institute: we focus on moral circle expansion, which is one particular way of potentially reducing suffering risks. And there are lots of other explicitly longtermist organizations whose work presumably touches at least occasionally on s-risks. So for instance, how much of the work by groups like the Future of Humanity Institute is relevant to s-risk research efforts and s-risk reduction efforts?

Speaker 2:

Yeah, I mean, I think a lot of it is relevant. For instance, work on AI governance at FHI, surely that's also relevant to s-risk reduction. The work that's being done there just pursues a different goal, and therefore the questions they look at are usually not necessarily the ones that are most important from an s-risk perspective, but they're still somewhere between very important and somewhat important, I guess. So it's a spectrum of relevance of different things, that's what I would say. And I also definitely agree that the work of organizations like Sentience Institute and Animal Ethics is very relevant. And of course these organizations, while they're not explicitly talking so much about s-risks, the people there are very much aware of these risks and do things that are very much worth consideration.

Speaker 1:

Yeah, that's certainly true. In my conversation with Oscar Horta on a previous episode of this podcast, we touched on this topic briefly, but didn't dive into it as it was not Oscar's explicit focus. Are there particular individual thinkers whose work is especially relevant? The obvious example, and he is very much affiliated with CLR and a co-founder thereof, but as a kind of semi-independent agent, is Brian Tomasik. He's an obvious example of somebody whose work is focused explicitly on this, but there are presumably others. I know, for example, that David Pearce has quite a suffering focus in his work. Are there people like that, is David Pearce's work relevant, or are there other individual academics or thinkers who often crop up as having lots of relevant things to say on the topic?

Speaker 2:

So, I mean, the topic of s-risks in particular is quite novel, and I don't think there are so many independent thinkers that are really doing what one could call cause prioritization from a suffering-focused perspective. But if your question is more about people who have written about suffering-focused ethics, then names like Jamie Mayerfeld maybe come to mind, people who have defended views that could be described as suffering-focused ethics.

Speaker 1:

Yes. So in the work that's been done to date, are there any major gaps or blind spots that you think have been missed and that seem high priority to address through the next steps of research?

Speaker 2:

Yeah, that's a good question. I guess there probably are gaps, but I just don't know what those gaps are, because if I knew about them, then, you know, we'd work on them. I mean, I guess that's why they're called blind spots. Maybe one thing I would say is that there's surely a need for more empirical grounding, because a lot of the work that has been done could, maybe uncharitably, be described as armchair speculation, and putting things on a more solid footing would be quite valuable in my opinion.

Speaker 1:

Yeah, but necessarily more time-consuming. I guess that tends to be the sort of thing that Sentience Institute focuses on, but it takes a lot longer to write posts like that "How tractable is changing the course of history?" post that I talked about, and actually do all the digging into that research, than it does to outline some initial thinking on a topic. But yeah, I agree that that stuff is important and would be helpful as next steps on some of these questions. So I wanted to ask about a couple of topics, or almost buzzwords, that come up quite often, at least when I've seen others in the effective altruism community discuss this topic of s-risks. One is the topic of information hazards. This is something that Nick Bostrom defines as risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. My impression is that this is something many people concerned with s-risks tend to be very concerned about, essentially that sharing some work publicly might increase the risks of some negative outcomes. I do think about this topic periodically, but I just intuitively feel slightly less worried. What are your thoughts on this? How careful do we need to be in general when talking about these topics?

Speaker 2:

Yeah, I mean, I definitely agree that we should be careful. It always depends on the specific material in question, how likely it is to actually lead to some information hazard, and what exactly the pathway is to some person causing harm. For instance, info hazards have also been discussed in the context of biosecurity. If you're publishing a paper on, okay, this is how you could engineer a superbug and this is how you would produce it and disseminate it, then that's obviously very info hazardous. If you're just talking in the abstract about ways to prevent that sort of thing from happening, without giving away any problematic information, then that's not so info hazardous. And in other fields that have some affiliation to security, usually one is more or less openly discussing what the risks are, and the hazardous parts are only things that would really give potential attackers non-trivial information that they would not figure out on their own. So for instance, just talking about the idea of threats doesn't strike me as so worrisome, because that's something that you can see in every second James Bond movie. It doesn't really seem that this is something that nobody would come up with if I keep silent about it. But there are maybe other ideas that would qualify for that type of thing.

Speaker 1:

What are the rules of thumb, do you think, for what is more or less info hazardous? One thing you said there is that more generalized information, as opposed to more specific information, probably poses less of a risk. And another one, it sounded like you were saying, was that focusing on how to reduce risks, as opposed to going into detail about exactly how the risks might arise, might be another way to reduce the hazard.

Speaker 2:

Yeah. I mean, there are certain types of information that are more useful for preventing attacks, and certain types of information that are more useful for carrying out attacks. Like in the biosecurity example, it surely isn't so info hazardous to talk about, I don't know, vaccines against a potential superbug, but it is info hazardous to talk about how one might create such a bug.

Speaker 1:

Is the concern mostly about intentionally malicious agents looking out for this sort of information and then misusing it, or is it about some kind of indirect thing, where the information being out there somehow gradually increases salience and there are indirect effects that aren't necessarily through intentional action?

Speaker 2:

Yeah, I mean, it is actually often rather unclear to me what the main concern of people is, in more detail. This sort of effect of increased salience is definitely a candidate. Another candidate is that you are somehow transmitting valuable information to a malicious actor. It can be a combination of all of these things.

Speaker 1:

All right. So another idea that I often see discussed in the context of s-risks, or at least among people who are interested in working on s-risks, is the importance of cooperation. Magnus, your colleague at the Center for Reducing Suffering, has a post on CRS's website about why altruists should be cooperative. Why is there so much emphasis on cooperation within the community of people focused on reducing suffering risks?

Speaker 2:

Yeah, because cooperation is very important in general, I think, and also from an s-risk perspective in particular. The idea is that escalating conflict is in and of itself a risk factor for very bad outcomes, and there's also the effect that conflict might make it less likely that our concerns are taken into consideration. If the group that is voicing these concerns is somehow despised by many, then that makes it far more likely that people are going to ignore whatever compassionate people say. So it's much more promising to try and be on good terms with other people and use this goodwill to make our concerns about suffering heard as one concern among others, while also taking into account the values of others.

Speaker 1:

Cool, sounds good. So we've been speaking mostly about the work of CLR and work on s-risks in general to date. Let's talk more specifically about the Center for Reducing Suffering, the organization you co-founded with Magnus. What's the theory of change behind the whole concept of what CRS does and what its work is about?

Speaker 2:

Yeah, so we think that the most important thing right now is capacity building, that is, ensuring that in the future the relevant people will be both motivated and able to avert s-risks. That just makes a lot of sense in light of longtermism: we're not so much focused on having impact now or in the next year, but we're thinking about the long term and the very big picture. And then there's also this idea of cluelessness. We just don't really know in so much detail what exactly the future holds, and we're very uncertain about what exactly we can do now to best influence that, or in particular, to best reduce suffering in the long-term future. Given all that, it's quite natural to think that we should focus on building capacity now, as that is perhaps the most robust thing we can do, even if it can perhaps sometimes feel a bit unsatisfactory as an answer. Now, what exactly does capacity building mean? It can simply mean getting more people involved, building a community of people interested in reducing suffering, but I think it's also important to realize that this is different from just going out and spreading the word and growing as fast as possible. I would frame it more in terms of building a healthy community that's stable in the long term, rather than maybe disintegrating at some point or becoming toxic. For that, you need good epistemic norms and an open-minded, epistemically modest, cooperative culture. That also ties into what we talked about in terms of why we should be cooperative. And one aspect that's just as important as the movement building is that we need to improve our knowledge of how to best reduce suffering, given that it's still so unclear how to do it. You could call it "wisdom building", in analogy to movement building, and we need to do both. And CRS aims to contribute to both through our research.

Speaker 1:

Yeah, it's really interesting that you answered first of all with capacity building, because it's very clear how CRS's work plays into wisdom building, right? The research you do is quite clearly focused on understanding the problems and what can be done about them, and all that sort of thing, so it's obvious how this is a form of capacity building. But I guess if I think of capacity building, I think of explicit efforts within the effective altruism community, things like local groups where people meet people, basically more along the lines of what you were describing as an alternative: spreading the good word of effective altruism and welcoming people and supporting them to get engaged in various ways. There's also the more formal capacity building, more along the lines of what a lot of animal advocacy nonprofits do, of basically doing concrete work on the topic and providing some kind of infrastructure for more and more people to get engaged as awareness and discussion and all those sorts of things grow. So what's the model through which CRS contributes to capacity building? And how do you optimize for it? Is there an explicit way in which you think, this is what we're aiming for, or is it something quite diffuse and hard to operationalize in that way?

Speaker 2:

I mean, a few things are worth noting here. There is of course the EA movement, and we view ourselves as part of that, at least to a large degree, and in that movement there is already infrastructure that has been built for outreach and local groups and things like that. So we don't think it would make a lot of sense for us to just try and do the same thing with a suffering-focused flavor, which is why we're focusing more on the other aspects of capacity building that I outlined. And also we think that, given the nature of what we're doing, it's probably not necessarily a thing that is mass-compatible in any way, unlike, for instance, animal advocacy. So we're not necessarily trying to just get the word out to a lot of people; we're trying to contribute more to it being perhaps a small community, but one that is open-minded and reflects on all these important philosophical and strategic questions.

Speaker 1:

Yeah. So that brings me on nicely to: who's the main audience? Because it sounds like, from what you're saying there, the goal is to create a, would you say it's a research-focused community, or is it a kind of core of people who are flexible enough to pivot towards different kinds of interventions that might or might not crop up as promising? I guess there are two things there. One is, what's the end goal with the capacity building, and relatedly, who's the interim audience?

Speaker 2:

Yeah, so the thing is, the people that are working at CRS are primarily researchers, but I would definitely say that it's not a research project as a matter of principle; if we find something more concrete to do, then we would pivot to that. We are trying to build a community that is cause-neutral in the sense of being able to switch to something else. So one large part of the audience is definitely the effective altruism community, and especially the longtermist parts of it, and people who are at least sympathetic to suffering-focused moral views; it's not limited to people that completely share our views. Then I would also say that people who are interested in effective animal advocacy are a more specific target audience, in part because they tend to share our broader moral values. There's also some evidence that concern for animals correlates with more suffering-focused moral views. But obviously these different target groups overlap anyway.

Speaker 1:

And so on the research side of things, how do you decide what to focus on there? What's the prioritization process for the prioritization research? Oh, that's getting quite meta.

Speaker 2:

Yeah, I don't think there's anything formal. It's just a function of the individual interests and skills of a researcher, as well as our collective thinking on how important a question is in the scheme of things. So, how likely is it that you're going to come up with an important insight, and if you come up with an important insight, how much of a difference is it going to make to the overall prioritization? It's a combination of all these factors.

Speaker 1:

When you started, it was just you and Magnus, but I know you've done some work trials with a few potential researchers. So how's that been going? Do you plan to hire more researchers and grow quickly or is it more of a steady kind of conservative growth plan you have?

Speaker 2:

Yeah, so I would say perhaps more of the latter, but of course it depends on finding people that are talented and interested in contributing to our mission. I would actually say that so far we've gotten a surprisingly good number of applications, and very high quality applications; I was actually almost surprised myself. I was just uploading a form, and we didn't even promote it so much, and we got a few very promising applications. It's well known that in the EA movement there is maybe a shortage of available positions at EA organizations, and a lot of people apply for these positions. But yeah, this is going very well. We've just recently hired two additional interns, and they're doing very fascinating research.

Speaker 1:

You mentioned before that the work is often determined partly by the skills and interests of the researchers. How different does their work look from what you and Magnus are doing? Is it very similar projects, or is everyone going off in slightly different directions?

Speaker 2:

Yeah, of course it's not exactly the same, but I would say it is very aligned with our overarching framework and priorities. So Winston, for instance, is looking at the different resolutions of the Fermi paradox and their implications for s-risks and suffering-focused ethics in particular. That's inspired in part by the recent work by Robin Hanson on grabby aliens, which has been a significant update for me. It's not that the alternatives are only us being alone or us being in a densely populated universe; there are also other scenarios, such as a lot of civilizations coming up at roughly a similar time. And if that sort of thing happens, then it obviously has implications for suffering-focused ethics, in terms of, for instance, how likely it is that space will eventually be colonized by another civilization if we don't do it. It could also be relevant to think about, well, what exactly would happen if different civilizations meet in space? Could that be a potential source of s-risks? Winston is looking at all those questions. Our other intern, on the other hand, is looking at global totalitarianism and the question of how that relates to s-risks. So in what more specific circumstances would global totalitarianism result in s-risks, rather than just being bad in other ways? I think that's also a very fascinating topic, and it ties into what I was talking about on malevolence; these topics are obviously related.

Speaker 1:

How much are those topics, and I guess the wider work of CRS too, covered in or overlapping with work within academia? Because I have not read anything about the Fermi paradox, but it's a well-known thing, so I'm assuming it's a topic that's been looked into quite a lot within academia. My gut reaction is, is this CRS's comparative advantage, to focus on something like that if it has substantial overlap? I'm assuming there's some aspect of explicitly linking it to s-risks as well, which is important there.

Speaker 2:

Yeah, exactly. I mean, I would be somewhat less excited about simply doing work on the Fermi paradox, but looking at its implications for s-risk reduction in particular, that seems like a high priority. There is usually a lot of academic work on all kinds of topics, but the question is always how relevant it is to what our main concerns are, and working that out is where the value lies.

Speaker 1:

So it's a bit of aggregating various different things from different directions and interpreting them within a certain framework. Okay. So you kind of hinted at this before, saying that you had a lot of great applications even though you didn't publicize the role very widely, and that it's well known that within the EA community there's a lot of demand for roles. What do you think are the main bottlenecks to further progress in the general field of work on s-risks? And obviously this is mostly you guys and CLR.

Speaker 2:

Yeah, that's definitely a very interesting question. So the usual candidates are money and talent, finding skilled people who want to contribute. And I think both matter; it's not really a single bottleneck here, in my opinion. In addition there are also more intangible factors like organizational capacity and this problem of fruitfully involving people. I think this has also been discussed for EA at large: not everyone can do cutting-edge research or should do cutting-edge research, and then there's a question of how exactly people can contribute in other ways, and it's not always easy to do that. Which is of course very related to this problem in EA of even highly qualified applicants struggling to find a job. So progress on this would be really valuable, both for EA at large, but I think it's also a very significant problem for the s-risk community, given again that the nature of the work is so difficult that it's unclear how exactly people can best contribute. Now, in terms of funding, I would say it really differs quite drastically between different kinds of work and different organizations. It's also well known that in effective altruism some people have really a lot of money, like Open Phil, the Open Philanthropy Project, and other large grantmakers. So if the work that you're doing can tap into these funding streams, then money is maybe not so much of a constraint. By contrast, if you're doing something that these large grantmakers are not so keen on, then it can be quite funding-constrained. With respect to work on s-risks, I would say that some forms of it do enjoy support from these large funders, such as work on cooperative AI, while maybe other types of work that I think are important are things that these grantmakers are not so keen on, like work that is more about macrostrategy from a suffering-focused perspective, or work that is specifically endorsing a suffering-focused point of view. So that sort of work, I think, tends to be much more funding-constrained.

Speaker 1:

Cool. Going back to the idea that there are lots of great applicants, or lots of potential great applicants, for the research roles, and you also mentioned that it's not necessarily just funding. What does that look like in practice? If somebody said to you, here's another million dollars for CRS, what would be stopping you from just hiring as many more researchers as you could afford?

Speaker 2:

Yeah, I mean, I guess we wouldn't just hire a lot more people if we got a million dollars. It's a combination of whether or not these people are really contributing on a research level, which is really not an easy thing to do, and maybe there aren't that many people that can really do it. And then the other factor is what I was talking about in terms of organizational capacity: if you're starting off with two people, then you just shouldn't hire ten people at the same time. In fact, common startup advice is: hiring, don't do it. So there's a strong case for growing more conservatively, especially if you think, as I do, that the current state of CRS is small but working well.

Speaker 1:

Cool. So what about the streams of work that can tap into those funding sources, then? Could those aspects of the work not just be grown more rapidly than some of the other aspects that don't enjoy that wider support?

Speaker 2:

I mean, this is in a sense what CLR is doing, and they're doing things like giving out grants to people working in these areas. But it's not that simple, and you can always wonder, and this is a thing that I'm maybe hesitant to talk about, but I was alluding to it when saying that maybe not that many people can contribute. I'm not entirely sure, I haven't really settled on a view on this, but it might really be that the work of most people does not actually contribute that much to progress on macrostrategy. You might be quite elitist about that.

Speaker 1:

Yeah. So I want to dive into a couple of questions about some of the research that you've put out through CRS. One post you've written is called "A typology of s-risks", which lists out different categories of s-risks; you call the categories incidental s-risks, agential s-risks, and natural s-risks. And you've also got a post about risk factors for s-risks, essentially things that increase or decrease the likelihood of those more specific s-risks, and that post includes the categories of advanced technological capabilities, lack of effort to prevent s-risks, inadequate security and law enforcement, polarization and divergence of values, and interactions between these factors. Obviously there's partly just a definitional thing about what counts as a risk and what counts as a risk factor, but do you have an overall sense of whether it seems most cost-effective at this stage to work directly on the most plausible s-risks, or whether it makes more sense to address risk factors that might affect a number of different s-risks?

Speaker 2:

Yeah, good question. I would say that I lean maybe towards the latter, if only because, as I said, we don't really know in very precise terms what the most important s-risks are. We can gesture broadly at the sort of dynamics that seem most worrisome, but we are fairly clueless about the exact details. And given that, it does make sense to try and broadly improve the future and work on these risk factors for s-risks, without committing to a specific scenario. That's what inspired me to write this post about risk factors for s-risks, and it's also what inspires CRS's strategic focus on capacity building. Although of course it is a spectrum, and we're also not entirely clueless; we can narrow it down a little and say that some things are much more likely to be relevant than others. If you go out and try to reduce unemployment because that's a broad improvement of the future, then maybe I'm not going to be too convinced. So we can say, for instance, that animals and digital minds are particularly relevant because they're likely to be excluded from moral consideration, which is what we've talked about before. We can also say that conflict and escalating conflicts are an important risk factor for s-risks. And so, as an example, if you want to work on improving politics, then from an s-risk perspective, avoiding excessive political polarization is perhaps more promising than improving institutional decision-making, despite both being broad improvements, because polarization is more directly related to worst-case outcomes and s-risks. Whereas institutional decision-making is also a problem that's very much worth working on, but maybe not so directly tied to s-risks. So there are a lot of things that can be bad, but not as bad.

Speaker 1:

Sounds good. It sounds like your spread between those two categories is comparable at least to 80,000 Hours' spread, although they have a slightly different focus, obviously, focusing primarily on reducing extinction risks rather than reducing suffering risks. But they list their highest-priority cause areas as, two of them are fairly narrow work on specific problems, so positively shaping the development of artificial intelligence and reducing global catastrophic biological risks, but then they also list two types of work to build capacity, which is obviously what you said you focus on yourself at CRS, which are global priorities research and building effective altruism. So there's a lot of overlap with your focus at CRS there.

Speaker 2:

Yeah, I mean, I think the people there are reasonable when it comes to these questions.

Speaker 1:

Good to know. All right, so within the research that you do, within this general area of cause prioritization research, there are lots of different approaches that an organization could take. For example, one option is to try to rigorously assess some of the important underlying assumptions relating to cause prioritization, and that's the sort of thing that I see the Global Priorities Institute as doing, you know, going into detail on some of the specific philosophical ideas underpinning longtermism, for example. Then another option is to try to identify many possible cause areas and do some brief initial exploration of the promise of work on each of those areas, like Charity Entrepreneurship, who are hoping to incubate a new organization dedicated full-time to exploring and making a strong case for new cause areas. So they obviously have the sense that that aspect of short, brief exploration is not being covered that thoroughly at the moment. And then another option is to essentially pick a plausible priority cause area and explore it in relative depth, making progress on the problem while simultaneously gaining information about how tractable further work is. That's more comparable to what Sentience Institute is doing with our work relating to artificial sentience, in the sense that we're not necessarily always explicitly doing a research project that's intended to evaluate the promise of the cause area, but the idea is that by looking at it in some depth we'll gain a better understanding of the promise of certain types of actions. Do you have thoughts on which of those overall options is most needed at the moment?

Speaker 2:

Yeah, I think there's a place for all of this, and there's not really one right way to do it. It really depends a lot on one's interests and skills. Some people naturally tend to be more inclined to explore a lot of different topics; for instance, I think I myself fall into that category. Others have maybe more of a tendency to dive into something more specific and dig up all the sources and go into detail and write a comprehensive report, and I think both of these can be very valuable. So I think CRS is not committed to one side or the other of that spectrum. What matters is just that overall your work is contributing to progress on the most important questions of our time. I would still say, maybe, that in comparison to academia the sort of work that we do has a general tendency to be perhaps less specialized and much more about big-picture thinking, although maybe the tendency is to become more specialized as the field matures.

Speaker 1:

I guess that seems inevitable to some extent, in that presumably there are only so many different things you can uncover. I find it hard to imagine what the aspect of fleshing out the case for alternative cause areas looks like, unless you're just going down the list of already-known causes that people haven't tended to prioritize within effective altruism for some reason or another and steelmanning the case for them being promising, or I suppose potentially you're just extremely inventive somehow and you just come up with these ideas. But I don't know what the process would be there.

Speaker 2:

I mean, I think I agree. There was maybe a time, a couple of years ago, when I would have said that effective altruism is maybe a bit narrowly focused on a relatively small number of causes, but I think that has improved a lot. For instance, 80,000 Hours has a post on all kinds of different causes that might potentially be promising. So I think it has gotten a lot broader, and at the point we're at right now, if someone just wants to make the case for a new cause, then, okay, good luck with that. I don't necessarily think that this hasn't been explored at all, or too little.

Speaker 1:

I guess it's interesting, in terms of your own work, I get the sense that most of your posts fit into this category of slightly shallower investigations of a number of different topics. You've got a medium-sized post on space governance, for example, and some of your investigations of political topics. But then your post on malevolent actors is, at least as far as I can think of off the top of my head, notably longer than most of the other ones that you've done. And as we mentioned before, it was actually also one of the ones that's been most well received; I think it's the second most upvoted post on the EA Forum of all time, or something like that. So I'm interested whether you have thoughts on why that post had such a good reception. Do you think it was something to do with the depth of it, or do you think it was more about the novelty of the ideas in there?

Speaker 2:

Yeah, I mean, it's a combination of all of these things. I should note that a lot of the more in-depth research was done by David Althaus, who was the first author of this piece, because, as you say, I myself have more of a tendency towards big-picture thinking, which is what I said earlier about how this is about different people's interests and skills. And it's of course impossible to know how many upvotes this post would have gotten if it had been less comprehensive. I myself tend to be someone who's most interested in the basic ideas, but it definitely helps if a post is as complete and comprehensive as that one was.

Speaker 1:

On the subject of the EA Forum, there's a post on there by Sam Hilton called "The case of the missing cause prioritisation research". At one point, Sam writes that if you look at best practice in risk assessment methodologies, it looks very different from the naive expected value calculations used in EA. He goes on to say: I think there needs to be much better research into how to make complex decisions despite high uncertainty. There is a whole field of decision-making under deep uncertainty, or Knightian uncertainty, used in policy design, military decision-making, and climate science, but rarely discussed in EA. So this brings up the idea that there are certain methodologies that have been tried and tested in other fields which would be really useful if applied to cause prioritization research. I guess it also touches on the critique, which is sometimes shared, that people within the effective altruism community too often try to reinvent the wheel and do things their own way when it's been done well in other contexts. Do you have a sense that there are methodologies or types of research of some description that we should be using and that have just been ignored so far?

Speaker 2:

It might be worth looking into. I'm also somewhat hesitant when I hear things like, "EAs have not looked into this enough." I mean, is this even true? And if it is so, maybe there's a reason. I'm not entirely sure how much you would learn from this sort of very generic investigation of decision procedures or something like that.

Speaker 1:

I tend to agree, in the sense that I think the best way to work out whether a methodology is useful is to try and apply it, rather than to have some abstract discussion. But, um,

Speaker 2:

Yeah, I mean, I would just challenge people who believe these things to actually come up with a useful insight on cause prioritization.

Speaker 1:

In our previous discussion we focused on whether or not work on moral circle expansion is a plausible and potentially cost-effective way of reducing suffering risks, and we spoke about animal advocacy specifically within that. And we've been talking a lot about the effective altruism community and the longtermist community, but actually a lot of the people who work on animal advocacy don't necessarily explicitly identify with either of those communities, though that doesn't mean that they wouldn't agree with some of the underlying motivations, I think. So I suspect that there are various lessons for each of those groups, effective altruists and longtermists and animal advocates; they're partly overlapping, but I suspect that there are various ways in which these groups can share ideas and learn from each other. You've written a post specifically about longtermism and animal advocacy. What do you think are some of the implications of taking a longtermist approach for animal advocacy?

Speaker 2:

Yeah, I think that's a great question, because there are a lot of, I think, very important implications that it can have. And I also completely agree with what you said about how it doesn't mean that you have to agree with the longtermist community on everything, or that you have to apply this label to your own identity, to find this compelling; the idea that looking at the long term is important, I think, enjoys much broader support. Now, in terms of the implications of longtermism for animal advocacy, one implication is a stronger focus on achieving long-term social change and comparatively less emphasis on the immediate alleviation of animal suffering, because it's a marathon, not a sprint. So it's about achieving lasting change, in particular about locking in persistent moral consideration of all sentient beings; that at least is the hope. And from this longtermist perspective, it's also critical to ensure the long-term health and stability of this movement. So it's vital to avoid accidents that could impair our ability to achieve long-term goals, either as individuals or as organizations or as a movement. In a sense, maximizing the likelihood of eventual success, eventually achieving sufficient concern for all sentient beings, is arguably, from this big-picture perspective, more important than accelerating the process by a few years. In particular, one way to jeopardize this long-term influence is by triggering a serious backlash, or by the animal movement becoming toxic. So I think it's really important that we take reasonable steps to prevent that from happening, to prevent the movement from being too controversial. That could happen, for instance, because advocacy itself is divisive, or because the movement associates itself with other highly contentious political views, which is perhaps happening to some degree with social justice topics. This ties into what I said earlier about polarization and conflict being a risk factor. In addition to that, it's crucial that the animal movement is thoughtful and open-minded, and this is because of the uncertainty over what will eventually turn out to be the most important issue in the long term, which I've talked about before. In particular, we must ensure that this movement encompasses all of the relevant issues and all the relevant sentient beings, including wild animals, possibly invertebrates if they're sentient, possibly artificial minds if they're sentient. For instance, I definitely think that neglecting some of these beings is currently a major blind spot of the animal movement. This can be a reason to focus more on anti-speciesism and on careful philosophical reflection, rather than, for instance, just advocating veganism. And we should generally be mindful of how biases could distort our thinking, and should consider different strategies in an evidence-based way, including perhaps unorthodox strategies like earning to give or patient philanthropy.

Speaker 1:

Yeah, sounds good. Okay, well, I've long had it on my to-do list to write up a post with some of my own thoughts on this topic. I thought I might briefly get your reaction to some of those thoughts, to see if you agree with them. So one of them is, well, for some context, Brian Tomasik has written a post called "Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness", which among other things argues that different charities or interventions working on similar broad cause areas may have similar sorts of indirect effects and cross-fertilization. And so once we account for these indirect effects, the differences between interventions seem like they'll be smaller, basically. I think one practical implication of this is that we should be willing to invest in a broader range of tactics rather than doubling down on the interventions that current evidence suggests are most cost-effective. For example, I think we should invest in a broader range of institutional tactics rather than focusing predominantly on corporate campaigns, as we currently do, at least within the effective animal advocacy contingent of the farmed animal movement. And I think this point is further supported by the idea that if you're a longtermist you should be more patient as well, and therefore more willing to experiment with a wide variety of different tactics to work out essentially the ideal distribution of tactics in, say, ten or twenty years' time. How does that sound?

Speaker 2:

Yeah, that sounds good. I definitely strongly agree with this more institutional and political focus for the animal movement, rather than individual dietary change or even corporate campaigns; I definitely think it would be good for people to move more towards the former. And with regards to trying different strategies, that definitely seems right to me, maybe with two caveats. One would be that it shouldn't be something that endangers the long-term health of the movement, as I mentioned before. And the other problem, of course, is that for many of these things it might not be so easy to actually measure whether or not your intervention has been good; especially when you're talking about long-term impact and social change and things like that, it might not be so easy to measure how much your intervention has done to achieve that. So there's a risk of a bias towards things that are measurable.

Speaker 1:

Yeah, great. Another implication that I think of sometimes is that we should potentially be open to focusing on particular decision makers who might shape the far future. Especially if you're worried about some kind of lock-in effect, this potentially includes AI designers, but more broadly it might also just be policymakers and things like that. These people's impact might be greater than we would assume by just looking at the immediate effects for animals. What do you think about that, and especially this aspect of intentionally focusing somehow on people involved with artificial intelligence?

Speaker 2:

Yeah, it's a very interesting question, and one that comes up repeatedly. I think, relative to what you would do without focusing on AI at all, it does make sense to consider whether we should directly target this group. So I'm very much open to that, although I am also hesitant to embrace it completely, because there's a lot of uncertainty about the future of artificial intelligence and what the relevant scenarios are, and there are probably a lot of things that mediate the influence of AI developers in particular. Like, if your company is producing an AI that doesn't do what I want, then as a customer I would just go to this other company that does what I want, if they can do it. There's also going to be, of course, political regulation, some political and societal backdrop to this development of artificial intelligence. So I think it would be quite wrong to expect that AI programmers will rule the world, basically, but they might have a much larger influence than one would otherwise expect. So it's a very interesting question, and I haven't arrived at a conclusion on it yet.

Speaker 1:

I agree with the things you just said. I think one of the things that I hear discussed as well, going back to the idea of cooperation, is that, especially given the kind of overlap between the AI research community and the effective altruism community, of which we are a part in various ways, explicitly focusing on a particular group can seem like the opposite of cooperativeness.

Speaker 2:

Yeah. And I mean, what does it even mean to focus on a particular group? If you look at how people come to their attitudes towards animals, it's usually just shaped by the society and the context that they live in. It's not like AI programmers are this completely separate island; their moral priorities are going to be shaped by what society in general thinks. And I would hope that most people who are developing AI also think that it should be in the hands of all of society, rather than just sort of a power grab by AI developers.

Speaker 1:

Um, I suppose it does depend on a number of specific things about how exactly AI is developed, and, if something like AGI comes into being, whether, as we were kind of talking about last time with the timelines of AI, it's some particular company or entity or whatever that stumbles across the relevant discovery, or whether it's just a gradual accumulation of different factors. Because if it's more like the former, it seems like there's potentially some scope for the decisions and the processes used in various programming aspects, or whatever, to be disproportionately influential, even if only accidentally. It could be that they're just the people who design or train the algorithms, or something like that, and just somewhere in that process their values end up overly represented somehow.

Speaker 2:

Yeah, that's definitely a possibility. And it's more of a possibility the more you think that it's going to be a single invention rather than something distributed and gradual. I'm relatively skeptical about the more extreme forms of that, like "this company is going to develop it in the next year or so", but if you do believe that, then I think it makes a lot of sense to focus on somehow shaping the values of the people involved there.

Speaker 1:

I have a longer list of ideas, but we won't go through them all. I guess an easy way to summarize a lot of them is just that being willing to be patient could change a lot of your prioritization in terms of different tactics. And there are just a lot of implications that could stem from the idea that what we're essentially aiming for is something like the end of animal farming at some point, or at least before some kind of lock-in, as opposed to necessarily immediate impact for animals and reducing suffering over the next few years or something like that. I think there are just lots of implications that could stem out of that slight shift in focus. Yeah, cool. All right, well, let's move on then to the other side of the coin. Do you think there's anything that the longtermist community can learn from animal advocates?

Speaker 2:

Yeah. So maybe one thing is that animal advocacy is perhaps more about action, about changing things in the real world, and finding a balance between that and research. I sometimes worry about a certain bias, in some sense, to just default to research as the thing to do: if you have a question, then, okay, more research is needed. And maybe longtermists can learn from animal advocates to, yeah, have a sufficient balance between action and research.

Speaker 1:

Yeah, I think there are a lot of things relating to that as well. Something that I sometimes think about is that, well, I guess this is more assuming you are taking action, but assuming you are, I think a lot of the strategic lessons from the farmed animal movement could apply to some kinds of work. The main candidate, I think, is work that's explicitly about encouraging consideration of future generations, and there's a lot of potential strategic overlap there as well. Like, literally the tactic types, what makes tactics work, and various generic social movement lessons, which are a lot of the focus of our work at Sentience Institute, could be applied pretty much directly to that other question. And touching on what we were saying before, I think another aspect, this is more within the research side of things, is that there's somewhat of a distinction between people who identify as longtermists and people who don't. And I think that overlaps with a preference for different kinds of research, and almost different epistemics as well. We've touched on it before, but in the longtermist community there's quite often a lot of theoretical focus rather than an empirical focus, whereas I see pretty much the exact opposite in the research on animal advocacy, where it's very empirical: it's like, let's look at this past data, or let's run this experiment or whatever, and work out what that tells us, rather than starting with the theory. And I frequently find myself thinking that each group should do a little bit more of what the other one does.

Speaker 2:

Yeah, I mean, that sounds exactly right to me. You probably need some kind of synthesis of both more empirical and more abstract, big-picture work, and maybe you're right that each group needs to move more in the direction of the other.

Speaker 1:

Yeah, cool, sounds good. All right. We also talked before about artificial sentience advocacy and whether this is something that's high priority to do fairly directly. So if we did decide that directly working to ensure that society includes artificial sentience within the moral circle is one of the best opportunities for reducing s-risks, what do you think are the next steps, basically? What are the top priorities within that area?

Speaker 2:

Yeah, that's still quite uncertain. I definitely think one should perhaps be reluctant to immediately do broader outreach, especially to the broader public; it might be more reasonable to talk to academics or philosophers or people who are in effective animal advocacy. I think what's really most important right now is to figure out the best way to do it, even just to figure out the best way to talk about it, what framing we should use, whether it's about rights or about welfare. We should of course think about how it might potentially backfire, which also ties back into the implication I mentioned earlier about avoiding things that could permanently impair our ability to do something in this space. In terms of categories of research, I tend to be most excited about work that's looking at these macro-strategic questions, these framing questions. And there's perhaps a lot of room for psychology research on what sort of attitudes people currently have towards, well, not-yet-existing artificial beings, though that's still very much in flux.

Speaker 1:

Yeah, music to my ears really, because that is a substantial amount of the focus of what Sentience Institute is doing in this area: looking at those kinds of psychological aspects and attitudes, both towards current entities that kind of map onto this topic and, where possible, asking more explicitly about attitudes towards future entities, and even trying out some interventions that might affect that. I guess there are whole other streams of possible research that could be taken on as well, and I wonder what you think about some of those. So I hinted before about how our work is kind of a type of deep exploration of the topic, and I think that touches on global priorities research in the sense that by understanding some of these more concrete things, you get a sense of tractability and things like that, and so you get a sense of how plausible this is as a general area. But I wonder as well about the more explicit research that's intended to very concretely go through and test the promise of the area; for instance, you could go through some of the various questions we were talking about last time and just test those and try to make headway on them. Do you have thoughts about how important it is to do that targeted global priorities research versus having a go at just basically making progress on the problem?

Speaker 2:

Yeah, I mean, it depends on how you would go about it and how you would evaluate whether it's gone well. That's a general problem with almost all longtermist work: we don't have these tight feedback loops. Maybe we can have tight feedback loops with respect to certain sub-questions; like, I can just measure how many academics replied to the email that I sent them about artificial sentience, but how much does that actually tell you about the impact of this kind of work? Maybe it tells you something. So I'm not saying that this is a bad approach, but there are also limits to how far you can go in testing these things.

Speaker 1:

Yeah. I know you've done some work thinking about whether we need to kickstart a movement focusing specifically on artificial sentience. What's your current thinking on that? I know you were saying before that you think we should probably avoid some of the broader outreach types; do you have any other thoughts on this broader question?

Speaker 2:

Yeah. I still think that's potentially quite leveraged and quite high impact, if we can lay the groundwork for that and be among the first people to kick-start this movement. But I do think it needs more groundwork and perhaps consequently can't be started right away. There are just open questions about whether we should, for instance, integrate advocacy for artificial beings into the animal movement versus starting a more specific advocacy movement; I don't know which of these it should be. And likewise, on the practical side of things, would this need to be a new project, or can it be integrated at existing organizations like [inaudible]? And yeah, I guess it's also just the practical matter of there being sufficiently many people who are interested in doing that, who have enough drive to make it happen, and who don't have other things to do that they judge to be even more important. Such people don't really grow on trees, but if you're listening and you're one of them, please do get in touch.

Speaker 1:

Sounds good. Obviously it depends a little bit on how we're thinking of the term "movement". In the post introducing our interest in this area, called "The Importance of Artificial Sentience", the kind of suggestion I end with is that we should focus on field building, which is almost a more conservative form of outreach, focusing on people who, similar to what you're saying, have overlapping interests already, in the sense that maybe they're researchers doing relevant work or people who are already doing comparable advocacy. Maybe that's the lower-risk form of outreach, and it's almost the seed for a potential movement. Even if you wouldn't call it a movement already, it's like the relevant experts have done some preliminary thinking on it and there are potential people who could become involved if something happened next.

Speaker 2:

Yeah, I mean, of course it's debatable what you can call a movement, and maybe it's a little bit pretentious to think that we would be kick-starting that movement. But maybe a lot of the existing work on the topic is more philosophical or academic in nature, and not so much focused on actual social change and action, other than some philosophical discussion.

Speaker 1:

Yeah, certainly that's my impression from the literature review I've done, which is currently just a preprint, um, on a preprint server, called "The Moral Consideration of Artificial Entities". The title somewhat stems from what we found, in the sense that the vast majority of items I identified were, yeah, philosophical. There's a brief section at the end about relevant empirical research, and there are some adjacent fields of empirical research, like, to some extent, the whole field of human-robot interaction, which has got something to say on this topic. But in terms of thinking very concretely about what people think about artificial sentience, like what the predictors of concern are, or what works to encourage concern, even on the research side it's quite divorced from the actual relevant questions. That said, there's already policy interest, there are already people talking about this, and it's obviously covered in various forms in science fiction, so it doesn't feel as distant as what I just said might imply. It feels like, as you say, there are opportunities for leverage, because this stuff is already happening whether we get involved or not. Yeah, you mentioned just now that if people are interested in this, they should get in touch. Do you have thoughts about what people should do right now, if they do think this seems like an important area they would like to get involved in, to work on this and increase moral consideration of artificial sentience?

Speaker 2:

Yeah, I mean, definitely one thing that I would almost always recommend is to read more of the stuff on the topic that has been written by people in the EA community. Then, I guess, the next step would be to actually reach out to the people who are working on this topic, such as Jamie or, like, me.

Speaker 1:

Yeah, I agree. I do think there's scope for actually doing things now. It would be great if those people reached out to us so that we could discuss next steps. But I think, for example, if somebody worked for a research organization and was in a position to conduct relevant research, there's loads of stuff that people can get started on. You know, we were talking about outreach, and how very extensive, broad outreach is probably not a good idea for various reasons, but I think in certain contexts, like within the EA community or, if appropriate, within the animal movement, people can basically just start talking about this idea and mentioning it, and that could potentially help to find other people who are interested in it. So, yeah, I agree, and I think there are concrete things that people can do; it's not so amorphous. And of course there's always the option of donating as well, and things like that. Cool. All right, well, I wanted to finish with some discussion of career opportunities for somebody who wants to focus on s-risks. We were talking about concrete opportunities for artificial sentience there, but on s-risks more broadly: there's obviously been a lot of discussion within the effective altruism community generally about career strategy and what sorts of things people can do if they want to maximize their positive impact. Do you think that prioritizing s-risks substantially changes these kinds of career strategy considerations? Are there things that become more or less important if you prioritize s-risk reduction?

Speaker 2:

Yeah, I mean, I guess there is a lot of overlap there, and I would generally recommend, like, 80,000 Hours and their materials on this. One difference, perhaps, is simply that the s-risk space is smaller, and one implication of that is that there's less specialization and perhaps more of a need for generalists. So it's worth exploring whether or not that is something that one is interested in. Also, since there's a small number of organizations working on this, it's in a way quite easy to find out whether you have a chance of working on s-risks, simply by applying to these organizations and seeing what happens.

Speaker 1:

Yeah. So 80,000 Hours tends to divide things into various general categories. They've got research in relevant areas, which is obviously related to a lot of things we've been talking about, and work at effective non-profits, which, in the case of s-risks, substantially overlaps. There's also a category, though, for government and policy in an area relevant to a top problem. What about that? Is that an opportunity, or is it too premature for people to go into policy careers if they're interested in reducing s-risks? Is there anything that can be done?

Speaker 2:

No, I don't think that's premature, and I didn't mean to imply that the only thing you can do is the sort of research we're doing at the Center. I do think there are lots of possible careers, possibly very impactful careers, outside of this EA research space, such as a policy career, that could be quite important. It is, of course, not always entirely clear what you would be doing in that position, like what sort of policies you would advocate for that would reduce s-risks. But there are some things one could gesture at, such as maybe trying to reduce political polarization or increasing concern for all sentient beings, that might be feasible at this point. And the more abstract argument is simply that if something important happens in the future, then it's important to have sufficiently many people in the right positions. One aspect of that is political influence and having the opportunity to make moral concerns heard in that way, too. I think having people go into policy careers would contribute a lot to that.

Speaker 1:

Yeah, definitely. Okay. So I mentioned that research could potentially be conducted within nonprofits, but there's also obviously the other option of pursuing research within academia. I guess we touched on this before, about whether there's much overlap, and you said that there's lots of overlap and potentially it comes down to some of these other things as well, like what sorts of research you think are most needed. Do you have any further comment on this topic generally? Do you think there are additional pros and cons of academia versus nonprofit work if you want to focus on s-risks specifically, or is it just a similar trade-off to what other people interested in effective altruism might be going through?

Speaker 2:

It's definitely possible, I think, to have an academic career on topics that are very relevant to s-risks, depending on what exactly you want to do. It might be somewhat difficult, though; you might not be able to work on the questions that you find most important, or doing so might not be ideal for your academic progression. So that might be difficult for some people, and it depends on your own psychology whether or not you would be willing to compromise on these things. It also always depends on your supervisor and all of that, whether or not it is possible to work on topics that are really important.

Speaker 1:

So working to reduce suffering risks seems generally more demotivating than striving to achieve a flourishing future. Have you found this, and how do you manage to stay motivated?

Speaker 2:

Yes. I don't really find it so myself, but yeah, I think one needs to compartmentalize a bit and not constantly think about [inaudible], because it's probably not healthy to watch that kind of footage every day, and doing that is likely to just lead to depression. But yeah, it is a balancing act, because you also don't want to abstract things away so much that you stop caring and lose the motivation to do something about it. One way I think about it is that it is quite an incredible opportunity that we have, to make so much of a difference, in a sense. So if you start with "oh wow, I can actually save a child that would otherwise die", and then you realize, oh wow, [inaudible], then in a sense it's the next level of that to realize that our actions have the potential to help an incredibly large number of future beings, which is arguably more abstract, but also just as real in expectation.

Speaker 1:

Yeah, I agree. Um, I somewhat struggle with it, although I guess I think I might have gone too far in abstracting it away, and I sometimes think of things in terms of that kind of opportunity and get excited about opportunities to do good. I don't want to generalize without having asked these questions of many people, but I do get a sense that a lot of people who work on topics like this share a sense of excitement about being able to make progress on things.

Speaker 2:

Yeah, and I mean, I share that sense. In a way it's maybe not really the right way to think about it, but as long as it's enough to keep you going, maybe that's what matters.

Speaker 1:

Yeah, I agree. All right, well, it's been great to have you back for a second episode, and thanks so much for joining me for both, I suppose. Cool. Any last thoughts, like how people can get involved with CRS or work to reduce suffering more generally?

Speaker 2:

I would recommend just reading up on our materials and seeing what we are writing about. If you agree with our priority areas and would like to do this sort of work yourself, you can get in touch through our website. Great, thanks again.

Speaker 3:

Thank you. Thanks for listening. I hope you enjoyed the episode. You can subscribe to the Sentience Institute podcast in iTunes, Stitcher, or other podcast apps.