Bio(un)ethical

#13 Sarah McGrath: Are there moral experts?

with Leah Pierson and Sophie Gibert
Season 2, Episode 13

In this episode, we speak with Dr. Sarah McGrath, professor of philosophy at Princeton University. We discuss whether and when it makes sense to defer to others about the answers to moral questions, whether moral deference is any less appropriate than deference in other domains, like math or science, and whether we have reason to think bioethicists are moral experts.

(00:00) Our introduction
(04:28) Interview begins
(08:02) Varieties of moral deference: pure versus impure
(12:39) Outline of Sarah’s view and argument
(20:58) The (ir)relevance of meta-ethics (what ethics is and where it comes from)
(41:13) How to identify moral experts
(50:37) Are utilitarians likely to be moral experts?
(52:32) Does education in moral philosophy make you an expert?
(1:01:18) Practical implications: endowing bioethicists with authority
(1:14:55) Why talk of optimism and pessimism is misguided


Bio(un)ethical is a bioethics podcast written and edited by Leah Pierson and Sophie Gibert, with production support by Audiolift.co. Our music is written by Nina Khoury and performed by Social Skills. We are supported by a grant from Amplify Creative Grants.

Note: All transcripts are automatically generated using Descript and edited with Claude. They likely contain some errors.

Leah: Hi, and welcome to Bio(un)ethical, the podcast where we question existing norms in medicine, science, and public health. I'm Leah Pierson, a final year MD/PhD candidate at Harvard Medical School.

Sophie: And I'm Sophie Gibert, a Bersoff fellow in the philosophy department at NYU, soon to be an assistant professor at the University of Pennsylvania.

Leah: When faced with difficult moral decisions, we often turn to others for guidance. We turn to our friends about relationship dilemmas, to spiritual leaders about how to raise our kids, to professional mentors about ethical challenges at work. The field of bioethics takes this to the extreme: through hospital ethics committees, ethics consultation services, institutional review boards (IRBs), and policy advisory groups, we encourage, facilitate, and sometimes even mandate the quest for ethical guidance.

For instance, the research enterprise doesn't trust scientists to decide whether research involving human participants is ethical. They're required to have their research vetted by IRBs. This all raises the question of who we should turn to for moral guidance. And what makes those people qualified to weigh in on moral issues?

Who we select matters, as their judgments shape important outcomes: what research we conduct, what practices clinicians engage in, what public policies we enact, and so on.

Sophie: Sometimes we treat bioethicists as experts on moral issues simply because we recognize they've had more time to think about them. Or because they've had the opportunity to gather relevant empirical facts. Or because they can serve as impartial arbiters in fraught situations. Other times, however, we treat bioethicists as pure moral experts. In other words, we defer to them about moral matters because they've been marked out as being better at making ethical judgments, evaluating moral arguments, and identifying the moral truth.

It isn't just that they've thought about things more or learned more of the relevant empirical facts. Rather, they're supposed to be better positioned to make moral judgments. For instance, when clinicians face a moral dilemma and call an ethics consult, we take it that what they're doing is analogous to calling a cardiology consult when a patient develops an abnormal heart rhythm. Clinicians are seeking guidance from someone who may be seen as generally better positioned to weigh in on a specific type of problem.

In short, moral expertise is treated similarly to cardiology expertise.

Leah: But is there any such thing as a pure moral expert? In other arenas, we do treat people as pure subject matter experts. For instance, someone can be an expert in physics or mountaineering or cardiology, but is morality, like physics, mountaineering, or cardiology, a subject in which one can develop expertise?

And given how bioethicists are trained and credentialed, do we have any reason to think that they are the ones to defer to? This is the topic of today's conversation. We consider what it means to defer to someone about a moral matter, whether there is something inherently fishy about deferring to others in the domain of morality, as opposed to, say, science or medicine, and what we would have to know about someone in order to reasonably treat them as a pure moral expert.

Sophie: Our guest today is Sarah McGrath, professor of philosophy at Princeton University.

Sarah has written extensively on moral disagreement, moral expertise, and other issues at the intersection of metaphysics, epistemology, and ethics. She is the author of numerous articles in leading philosophical publications, as well as the book Moral Knowledge. In this domain, Sarah holds the view that moral expertise is, at least in principle, no fishier or more problematic than expertise in other domains, but that we face a special set of challenges when it comes to identifying moral experts.

Leah: As always, you can access everything we reference at our website, biounethical.com. And you can submit feedback there or email us at biounethical@gmail.com.

Hi, Sarah. Welcome to the podcast.

Sarah: Hi. Thanks for having me.

Sophie: So our topic today is moral deference. What is moral deference and why is it important?

Sarah: So by deference I just mean taking someone else's view and taking that as what you believe. So, a simple example of just deference would be, you're in a new town, you haven't been on campus before, and you ask someone for directions to the stadium, and they say, well, the stadium is that way, and you believe that the stadium is that way, and you don't have independent evidence, you haven't looked on your phone, you don't have a map, and you take that person's word and that's what you believe.

That's what I mean by deference. So moral deference would be a case where you ask someone a question, like, what should I do in this difficult situation? What's the right thing to do? Or it could be more general, like, is eating meat immoral? You ask someone, they answer the question, and you take what they said as your own belief. And actually, I should say, I understand deference a little bit more broadly. So it doesn't even have to be that the person says something.

They could just make a face, you know, like if you say, is eating meat impermissible, they could make a certain kind of face and you could come to believe it's impermissible based on the way that they indicated their view even if they didn't say anything.

Sophie: Okay, perfect. And then would treating someone as a moral expert just be adopting a general practice of deferring to them?

Sarah: Something like that. Yeah, so I make some more distinctions. So I'm going to talk a little bit about the difference between pure moral deference and impure moral deference. The impure kind is like I'm taking your moral opinion as my own, but not because I think you're better at making moral judgments.

It's rather that you have more information about the situation than I do, more non-moral information. So you can imagine going to a history lecture and having the professor tell you, well, the worst of all the Nazis was this one because this Nazi committed more heinous crimes. You might take that moral claim, and write it down in your notebook and come to believe that that was morally the worst Nazi. But it's not treating the history professor as having specifically moral information that you don't have. It's rather historical information. So you might be deferring to the history professor, but you're really deferring to them because of their expertise in history, not because of their expertise in morals.

So pure moral deference would be like, I know all the information that you know about, say, factory farming and facts about animals and their sentience and what experiences they go through. We share all that information and I still don't know what I think. It seems to me like an open question, whether it's wrong to use animals for food, or maybe it seems to me like it's fine to use animals for food.

But then you tell me, sharing all this other information about factory farming, you tell me actually it's immoral. And I think, oh, it's immoral then. And I take your view of the matter as my own. In that kind of case, which I call pure moral deference, I really am treating you as an expert in morals as such.

And so when I say treating someone as a pure moral expert or as a moral expert, I mean blindly deferring to them in the sense that you take their moral view to be your own. And it's not because they have some other non-moral information that you don't have, but just because you think that their moral judgment is better.

Leah: Got it. Okay. So there are many different kinds of moral deference ranging from cases in which deferring is relatively uncontroversial, perhaps like the Nazi example you just mentioned, to cases where it seems potentially more problematic. Can you walk us through some cases and discuss the relevant features that tend to make a given case of moral deference more or less controversial?

Sarah: Sure. Actually, before I go through any cases, there's an interesting discussion about - like you use the word controversial and some people in this literature use the word fishy. Like, why does this case seem fishy? That's David Enoch. And other people say, why is moral deference bad?

So one thing to ask is, like, wait, what exactly are we talking about? What do we mean by controversial or fishy? In some of my earlier papers, I would just start with two examples. So one example is like we're in the car together. I think you have a better sense of direction than I do.

And so when I don't know which way to go, and it might just really for all the world seem to me like I should turn right. But if you say no, go left, I'll just defer to you, 'cause I think that you're more reliable at directions than I am. And I contrast that kind of case with a case where someone looks at all the information that's available about the ethics of factory farming and thinks, actually, I think using animals for food is fine, but his wife tells him it isn't fine. It's immoral. And so he believes that eating meat is immoral. And he doesn't eat meat and he tells people, well, I'm a vegetarian.

And then when they say, well, why are you a vegetarian? He says, my wife told me it's immoral. That kind of gives us pause in a way that, like, if someone says, well, why did you turn left? and you say, my wife has a better sense of direction, she said turn left, that doesn't seem odd. So to me, I like the fishiness thing.

And I guess what I would mean by controversial, if that's the word we want to use, is something like, it seems kind of surprising or weird. And then the question is, like, why would that be? So I think that that kind of example that I just told you about, where you say to the guy, well, why are you a vegetarian? He says, well, it seems like eating meat is fine to me, but my wife told me it's wrong, so I believe it's wrong. That seems like someone is treating another person as a pure moral expert.

But there are other cases where deferring to someone about a moral matter doesn't seem odd at all. And I already gave you one example, like the history professor, you might defer to him about which Nazi was the most evil, but it's really because you're deferring about the deeds that the different Nazis committed. And another kind of example would be if you're in a fight with someone, and you want to know, am I being unfair? Is she being unfair? Is someone not thinking about this correctly? And Montaigne says at one point, in that kind of case, you grab a passing stranger. Because it's your own personal investment in the case that makes it hard for you to see things right.

So in that kind of case, you might defer to someone else just because you know your own judgment has been impaired by your own stake in the case. And another, you know, you're really, really tired, and you don't have time to think about it, you might defer to someone else. So there are kinds of cases where it doesn't really seem like there's anything fishy or weird about deferring to someone about ethics, but in the kind of example I give where the guy just defers to his wife about moral matters and just doesn't trust his own judgment at all and just defers to her.

That's the kind of thing where it looks like you're treating someone as a pure moral expert and it seems kind of weird.

Sophie: Okay. So the cases that seem the least fishy are the ones in which one, you defer to someone about a moral matter, because you think that they have relevant empirical information that you lack. For example, you defer to them about whether to eat meat, because they know a lot more about factory farming or the psychological lives of animals or something. Or two, you defer to them because you have some reason to think your judgment is compromised in some way, either in this particular case - let's say you have investments in Boar's Head and so you're not a very reliable judge about things that might affect their stock. Or compromised in general, let's say because you're really tired.

And then on the other hand, the most fishy cases are the ones in which none of that is true. You have the information, you have no reason to think that your judgment is compromised either in general or in this particular case. And yet you defer.

Sarah: That's right.

Leah: Okay, great. So before getting into any details, we want to make sure that we understand the basic picture you endorse and the structure of your argument. So we take it that you think there is nothing in principle wrong with the beliefs or the knowledge we get from deferring to others about moral matters, even in these fishier cases.

And that moral deference is in principle no more problematic than deference about non-moral matters like deferring to a physicist about whether the universe is expanding, or deferring to a biologist about how mitochondria work, and so on. Do we have that right?

Sarah: Yes.

Sophie: Could you say a little bit more about what you mean when you say that it's no more problematic in principle?

Sarah: Good question. So I was coming at this from the perspective of someone who thinks that there are right answers, there are correct answers to moral questions. It's objective, it's not just a matter of opinion. It's not just a matter of how you feel. So that was sort of my starting place.

And it seemed to me that one reason why moral deference might be fishy or problematic or seem odd would be something like there are no objective right answers, correct answers. There's just what I value, what you value, what your opinion is, what my opinion is. So on some views about what ethics is like, it's just relative.

You have your values, I have my values, so abortion might be impermissible for me because of my values, and it might be permissible for you because of your values. And there just isn't one single true morality. That would explain why it's weird to just adopt someone else's opinions as your own, because it would be like, when I taste something, I think it tastes kind of bad, but you told me it was good, so I like it. And that's the kind of thing that doesn't make sense. And so it looked to me like if we think there is a single true morality and we think it's objective and there are truths out there that we could learn, then it would seem that some people who study ethics more or who think about it more would be better placed to answer moral questions.

And when you've got some people that are better placed to answer the question than other people, then you've got experts, and then you've got people you should defer to. So if that's not the situation we're in, if it does look weird to defer to people, a natural explanation is that there are no correct answers.

It's just a matter of opinion. We should be anti-realists about ethics. So that's sort of what I was thinking. Like, if that were true, there would be an in-principle problem with deference, because there aren't any objective moral facts to defer about. So when I say, well, there's nothing in principle problematic, what I mean is, the things that make it seem weird are consistent with there actually being these objective facts to defer about.

Sophie: Got it. When you say that there's nothing in principle weird about it, do you also mean something like the beliefs that we get from deferring morally amount to knowledge or are justified?

Sarah: Yes, I do think that there are not deep disanalogies with respect to whether knowledge can be acquired, whether justified belief can be acquired. You can get all these epistemic goods by morally deferring just as you could with non-moral deference.

Leah: Got it. Okay. And as for your argument, it seems to us that it has two major premises. First, that non-moral deference or deference about things like science, math, and geography is often warranted and appropriate. And second, that even though there are some differences between the moral and non-moral cases, there's nothing special about the moral case that makes it in principle more problematic to defer.

Is that how you would put the argument or is there anything kind of big picture that you would add?

Sarah: That sounds good.

Leah: Great. Okay. So we planned to spend most of the time talking about that second premise, the claim that moral deference is no more problematic in principle than deference about things like science and math, and discussing what differences you do see between these kinds of deference. But we wanted to start just by getting a better understanding of the first premise, the claim that non-moral deference is often unproblematic or not fishy.

You've given lots of examples here, and you give lots of examples in your work where deferring about something non-moral doesn't seem fishy or weird at all. So deferring to a mathematician about whether Fermat's Last Theorem is true, deferring to physicists about truths in physics, deferring to someone who has a better sense of direction. But presumably not all cases of non-moral deference are like this. Sometimes non-moral deference is fishy. Are there any cases that you can give us of deference about empirical matters, math, this sort of thing, where deferring to someone does seem kind of weird?

Sarah: Sure. So, I mean, imagine that you and I were just looking out the window together and I ask you what the weather is like. In that kind of case, it's like we both are looking at the same exact scene. And if I want to defer to you about what we're looking at, that seems very strange because it's so obvious.

And similarly, if I defer to you about questions of simple arithmetic, that seems very odd because I should just be able to arrive at the answer myself. Or you could imagine, like, two mathematicians and they regard themselves as equally competent, it would be strange if one of them defers to the other one about the answers to the questions in mathematics.

So I think those would be the cases that look parallel to the kind of puzzling, fishy cases.

Sophie: Okay. And in that case, the source of the fishiness for you is that I'm just as competent a judge of what the weather's like or of what the truths of basic arithmetic are.

Sarah: Yeah. So one of the examples that I give is when I was at MIT during IAP.

Sophie: Independent activities period.

Sarah: Yeah, you could go to these lectures. And I went to this thing. It was about time travel and the physicist would explain how you could build a time travel machine and the things that they would say would go against your ordinary sense of how things are.

But because of their expertise, you would defer to them, right? So it's like they just have a deeper understanding and some of the things that they would tell you about ordinary objects or your ordinary surroundings - like there is no up, you know, this is an example. It's like you think there's an up and a down. No, there's no up - that really revises your understanding of what the world is like. And if it seems for all the world to be the case that there's an up and the physicists tell me there's no up, I defer to them.

So the analogous case would be like, it seems to you for all the world that you owe more to the people close to you than you do to distant strangers. And Peter Singer, you know, you fall asleep in his lecture, you don't know what the argument was, but you wake up and you hear this moral expert saying, actually, you owe the same amount to the distant strangers as you do to the people right next to you, and you just write that in your notebook and you say, well, I learned it in class. That seems strange. So when you ask people, well, why do you think that, why do you think that you don't owe anything more to your own kids than to kids on the other side of the world, you say, well, that's what my philosopher said, and I wrote it down in my notebook. It seems like that's the kind of thing that doesn't make sense.

Sophie: Right. Okay. So, if you have a case of non-moral deference, where you do have reason to think that you're a competent judge of things and that you have all the relevant information, that makes it fishy to defer about a non-moral matter. But it's fine to defer in the physics case because I'm not a physicist.

I'm not a competent judge of whether there's an up.

Sarah: Right. So that was why I just gave that original example of looking out the window, because if we're looking out the window together, I'm not going to defer to you about anything because I have all the information that you do. So the asymmetry is supposed to be just, treating someone as a pure moral expert looks funny in a way that treating someone as a pure physics expert doesn't look funny.

Sophie: Got it. Okay, so let's transition now to talking about the second premise, which is the claim that moral deference is no more problematic in principle than deference in other domains like math and science.

Leah: In your work, you consider and reject a number of views according to which moral deference is especially problematic. We want to talk through some of these. A first, rather intuitive proposal is that whether moral deference is especially problematic depends entirely on the true theory of metaethics.

That is, the true theory of what ethics is and where it comes from. But perhaps surprisingly, you and some others working in this area seem to think that the extent to which the nature of morality determines whether there is such a thing as moral expertise is quite limited. Can you say more about why?

Sarah: Sure, yeah, so just to kind of back up, different views in metaethics, I was referring to these earlier. Some people are individual relativists, so they think that whether a moral claim is true, spoken by a particular individual, depends on that individual's values. Then there are cultural relativists, so like, whether abortion is permissible, that depends on whatever moral code the culture accepts.

Other people are emotivists. No one's an emotivist anymore, but there's a whole strand of views that started with emotivism. So the emotivist thinks that when you say abortion is wrong, you're saying something like boo abortion. And when you say that we're required to recycle, to take better care of the environment, you're saying, yay recycling, yay taking care of the environment. So that was kind of crude. Emotivism was the original idea, and over the years it has developed into a view on which, when you make a moral judgment, you express something like a plan. So if I say abortion is impermissible, I express a plan to never have an abortion, to try to prevent other people from having abortions.

Right. I'm expressing an attitude toward future actions that isn't a belief. That's the crucial thing. So the emotivists evolved into the people that are called expressivists. And the key idea for the expressivist in metaethics is that when you make a moral judgment, you express an attitude toward an action or type of action that isn't a belief that could be true or false.

It's more like a plan, something that couldn't be true or false. So all of these views - relativism, cultural relativism, expressivism, emotivism - are called anti-realist views, because they hold that there's not a single true morality that we could be right or wrong about. In some sense, morality isn't really real.

It might seem like they are well placed to explain what's weird about moral deference. And I think one of the reasons that I came to think that that actually wouldn't be a good explanation is just that what would matter for explaining why deference seems fishy to us wouldn't be the real truth about morality.

It would be like how we perceive it. So if it seems to the ordinary person that there are correct answers in ethics, like correct answers about whether abortion is permissible and what we owe to children and what we owe to animals, if ordinary people think that there are answers, then deference shouldn't look weird, right? It doesn't matter which meta-ethical view - I mean, who knows what meta-ethical view is actually true, but it doesn't matter which meta-ethical view is actually true. What matters is what people think, how people think about morality. And it's interesting if you look at the kinds of things that anti-realists say, so the relativists and the cultural relativists and the expressivists, a lot of the work they do is to try to kind of recapture or resuscitate the seeming objectivity of morality because they think people feel like morality is objective.

They get in big arguments about it. Like when people think abortion is wrong, they do all kinds of stuff to try to make it the case that abortion is illegal and that people don't have access to abortion and that abortion doesn't happen. People don't act that way when they think it's just a matter of opinion.

They act that way when they think it's actually something objective that matters universally. So if you're an anti-realist in metaethics, a lot of what you try to do is explain how come so many people feel that morality is objective. So lots of anti-realists try to say things like, well, we can actually say moral beliefs are true and false.

We can say that there are moral facts. And then they say, well, when we speak that way, we're not really getting it right, it's because of this and that and the other. And there's a big story to be told about why it seems like morality is objective, or why it seems to be a fact-stating discourse when it really isn't.

So given all that, it doesn't really look like the truth about whether realism or anti-realism is the case would explain our behavior or our feeling that moral deference is fishy.

Sophie: I see. Okay. So the thought is even if you have an anti-realist view on which moral statements express beliefs about your own personal values or your culture's values, or maybe a view on which they express complex intentions or plans, you're going to be on the hook for explaining this observation about moral discourse, which is that it seems like it's fact-stating in a way that, say, physics is. Because for example, disagreement in ethics looks a lot more like disagreement in physics than disagreement in matters of taste. And the thought is whatever maneuvers you're going to make within your anti-realist theory to accommodate that observation, those same maneuvers are going to negate whatever it was about your theory that made it uniquely able to explain the fishiness of moral deference.

Sarah: Yeah. That crude form of emotivism that died out, it kind of died out because there's something that looks like moral argument and moral reasoning and if it was just expressing like boo, hooray, that doesn't really make sense.

So the versions of these views got more and more sophisticated until they really could recapture things like giving moral argument or like the logic of morals. They work really hard to get to that, to kind of recapture a lot of the features of our moral discourse, which make it look like it's fact-stating.

So that's one thing. But the other thing is just, like, when you read A.J. Ayer, who was one of the original emotivists, and you read, oh, you know, when I say abortion is wrong, I'm just saying boo abortion, and I'm not expressing a belief, I'm just expressing an emotion, that sort of comes as a surprise to us, because it looked like "abortion is wrong" was a subject-predicate sentence that could be true or false. That's what it looked like. So it didn't really look like morality was just boo and hooray talk.

But we do experience it as fact-stating discourse, and yet we find moral deference fishy. So even if emotivism is true, it can't explain why we find moral deference fishy, because nobody was thinking it was true. Take the example of rooting for sports teams, like Hooray for the Yankees, Boo for the Yankees. That's kind of like the paradigmatic case.

People are just expressing how they feel, and there we do think deference looks weird, doesn't really make sense. And so we do have an explanation, but that's because everybody already knew that Boo and Hooray for the Red Sox and the Yankees wasn't about anything objective. It's just rooting for your team.

So here we do have an explanation of why deference would look weird, but in ethics, it's just not true that the ordinary person is an emotivist. The ordinary person hasn't internalized that view. So her attitude toward deference, that it looks fishy, can't be explained by the truth of emotivism.

Sophie: Okay. Okay. I think that makes sense. So, on the one hand, you think that the truth of anti-realist views can't explain the actual fishiness of deference, because there's this evidence of moral discourse being fact-stating and whatever the views do to accommodate that is going to make it so that on their views, moral deference isn't especially fishy. And then on the other hand, what you were saying just now is that the truth of anti-realist views also can't explain why ordinary people find moral deference to be fishy because ordinary people don't go around acting as though moral discourse just is "boo-hooray" talk or just is the expression of plans or intentions.

I mean, I take it that one thing expressivists are trying to capture is this idea that ethical judgments are intimately connected to motivation. So for instance, on some versions of this theory, you know, if I sincerely think that donating to charity is the right thing to do, or take it to be the right thing to do, then, at least absent any countervailing considerations, pathologies of willpower, or other defects, I will be motivated to donate to charity. And if I don't, then I must not really think it's the right thing to do.

So something about moral judgment involves some kind of commitment to doing the thing that you think is right, and by contrast, you know, empirical beliefs or empirical judgments, just regular run-of-the-mill beliefs, aren't like that. There's nothing defective about me if I, let's say, think it's raining outside and I don't do anything in particular. I don't take an umbrella. There's nothing defective about that, unless I also don't want to get wet or intend not to get wet.

So if you like that kind of thought, then that would suggest that there's something to moral judgment that is commitment-like, and I was wondering what it even really means to defer to someone about a moral matter if that's right. Because it's more than just adopting a belief.

And so you might think that would be a really good reason to think there's an asymmetry between moral deference and other kinds of deference.

Sarah: Great. Yeah, that's a great question and a great point. So there's a puzzle about moral forgetting that is in some ways related to these puzzles about moral deference. The puzzle about moral deference is, why would deferring to someone about ethics seem different from deferring about physics? And the puzzle about moral forgetting is this: the philosopher Gilbert Ryle thought you can't forget the difference between right and wrong.

And he wondered what's the explanation for this. And his answer was that knowing right and wrong involves caring. And we don't call ceasing to care "forgetting." Forgetting is losing information. And if knowing the difference between right and wrong has to do with what you care about, we wouldn't call it forgetting. So that was Ryle's solution to his own puzzle. And so that's analogous to the solution that says, look, when it comes to your moral beliefs, they stand in a special connection to like what you care about and then what you do.

And so the reason why moral deference looks strange is because how could you pick up that caring from what somebody said? You can pick up a belief from what somebody says, a belief about where the stadium is, but if knowing that it's wrong to hurt animals involves caring about animals, you can't pick up a caring from another person in the same way that you could pick up a belief. And I got really excited once when I thought that that was the answer to the question of what's wrong with moral deference, but then I thought it wasn't right. And here's why I thought it wasn't right. Because it seems like the way it works - the way that your moral emotions and your motivation and your intending is connected to the belief - is first you get the belief and then you get the caring and intending and the motivation. So in other words, if I come to realize that I owe you more respect, first, I have the realization I owe you more respect. And then the change in my motivation and my action follows in the wake of that.

So it doesn't look like this could be the whole explanation because it seems like once I get the belief from your testimony that it's wrong to do this, if it really is a genuine moral belief, then a change in intention and motivation and emotion and all that stuff would come in the wake of that. So this idea that you're talking about, this idea that the expressivists are trying to capture, moral belief has a special connection to motivation and to action, that's what motivates some people to say, look, it's not even belief. It's something like a planning state.

When you think abortion is wrong, it's a kind of plan, and people call that internalism. There's an internal connection between a moral judgment and something like motivation or action. And I think that just can't be the explanation because if we agree that you can get beliefs from other people's testimony, and if the way it works is that moral belief gives rise to moral motivation in a rational person who's not weak willed, then we just don't have an explanation yet of the asymmetry.

Sophie: I wanted to hear more also about what you write about constructivism, another view in metaethics. I'm thinking in particular of the sort of content constructivist views on which moral judgments and the concepts that they invoke serve a very particular function: they're for something. And what they're for is answering a fundamentally different kind of question than theoretical judgments and concepts are.

They're answering the question, what to do. And the whole reason we have these concepts of, you know, should and reason, good, bad, right, wrong, is to answer these questions that we face in the first person while we're doing practical things like trying to figure out what to do. And by contrast, theoretical judgments, judgments like what the weather is, are answering a different kind of question like, what is the world like?

And I guess I would just find it really surprising if, were that kind of view true, the story about deference in these two areas, the practical and the theoretical, was the same. So do you think that that kind of constructivist theory is a threat to the claim about symmetry?

Sarah: Good question. And I've definitely been in the situation where a Kantian constructivist tells me that Kantian constructivists can explain this by saying, look, it's a practical judgment. It's made from the first person. I think there's just a question for that view. Like, is there an objectively correct answer or not?

If there is an objectively correct answer about what to do, then it seems like some people might be better placed to arrive at that answer than others. So constructivists will tell you about this whole procedure of working back and forth between judgments at different levels of generality. There's a computation.

And just as we think some people are better at math than others, it looks like some people would be better at doing this than others. So if there's an objectively correct answer, then it looks like just saying that the objectively correct answer emerges from a process of working back and forth or legislating or practical reason, just saying that wouldn't explain the fishiness of moral deference, because it seems like some people would be better at carrying out the constructivist procedure than others, or better at knowing exactly how to arrive at the answer to the question of what to do in a particular case.

So it doesn't seem like you get the whole explanation, although it's intriguing the way you put it, that if practical knowledge is somehow fundamentally different from theoretical knowledge, we would expect an asymmetry in where deference looks fishy. It's just not the whole story. So like that's an intriguing thing to say. And then the question is like, what is the full explanation?

Sophie: Okay. Yeah. Yeah. So the thought is, of course, the content constructivist is a realist. She describes herself as a procedural realist. She thinks there are correct and incorrect answers to questions about what to do. But she thinks that those answers issue from some sort of procedure that can be carried out correctly or incorrectly.

So just to give a concrete example for our listeners, one procedure you might use is the test of universalizability. So you take a principle or a maxim - an act done for the sake of an end - like "I will help people when they need it," or "I will break promises for the sake of convenience when it suits me." And one test for whether that's an acceptable principle to act on is whether you can universalize it.

That is, will that everyone follow that same principle without, in some sense, contradicting yourself or your own purposes.

So, okay, this is a bit simplified, but you can't really universalize the principle "I will break promises for the sake of convenience when it suits me," because if everyone followed that kind of principle, then promises would have no weight and you couldn't make promises at all. And so I take it your thought is that some people are going to be better than others at going through that reasoning process, figuring out whether you can universalize a principle. And so it looks like something like deference should make sense on the constructivist picture. That's interesting.

Yeah, I'm not sure what to think about this, but I do take the point that some people could be better or worse at implementing the central procedure of a constructivist view.

Sarah: Right, and crucially, the constructivists don't want to say anything goes. They don't want to say like, whatever Sophie decides, that is the right thing for Sophie. They want to say, no, Sophie could definitely get it wrong. And so once you've got that distance between like getting it right and getting it wrong, it looks like you've got room for improvement.

Sophie: Some people are better at it than others, and then you've got room for, like, experts relative to you.

Sarah: Right. That makes sense.

Leah: Okay. So we want to transition now to talking about how moral expertise becomes relevant in bioethics. Perhaps the dominant methodology of bioethics, and especially clinical ethics, is principlism, a view on which there are four ethical principles: beneficence, non-maleficence, autonomy, and justice. And ethical conflicts are to be resolved by specifying and balancing these principles. On at least some understandings of principlism, it is a version of value pluralism. In other words, the four principles represent different values, each of which is fundamental and cannot be reduced to or explained in terms of the others. Of course, not all bioethicists are pluralists. Some are monists. They think that there is a single fundamental principle of morality, like the principle of maximizing expected utility or respecting humanity. And some are particularists. They think there are no defensible moral principles. Rather, moral thought is a matter of exercising judgment in particular cases. And even the set of true moral judgments resists theoretical codification or statement in general terms. Is the existence of moral expertise more or less plausible or intuitive for some of these methodologies than for others?

Sarah: I think we could find domains where we want to be pluralists, where we want to be monists, where we want to be particularists, and find experts across those domains. In other words, I think we could find a domain where we're kind of inclined to be particularist and I think it makes sense to think that there are experts at just recognizing how these unsystematized particulars come together in this case. We could even just imagine that moral particularism is true.

That's kind of like a sort of Aristotelian picture. But Aristotle thought there's the phronimos, that's the moral expert who can just recognize what the truth is in every situation. So it's a matter more of seeing than weighing on the particularist picture. But just because it's a matter of seeing rather than weighing, that doesn't seem to rule out the idea that there could be an expert.

Sophie: And does it at least seem right that, depending on the method, there would be different ways of figuring out who the moral expert is, or maybe different standards for who counts as a moral expert? So, like, figuring out who the phronimos is seems very different from figuring out who the best balancer of the four principles is.

Sarah: It's a really interesting question, and I think it would be interesting to look at different examples to bring out our answer to the question of whether it's harder to identify a phronimos than it is to identify someone who's really good at weighing how the different principles interact in a particular case. My own view about the difficulty in identifying experts has to do with whether there's some independent way of figuring out who's getting it right, an independent way of calibrating their judgment.

To give the sort of paradigmatic example where we would have an independent way, think about different weather forecasters, or, I guess in our modern day and age, different weather forecasting apps. You can tell how reliable the app on your phone is because it'll say something like, it's going to rain in the next 30 minutes, and you can just see whether it's true that it rains by going outside. You don't have to consult another app and see what this app said - you have an independent way of seeing which predictor is reliable. And similarly, you know, for calibrating, and this is really very much in Aristotle as well, which cobbler is good.

So the different cobblers might articulate different theories about making a good shoe, but you can actually look at the shoes and see which shoes are good, which shoes are comfortable and last a long time. You don't have to take the guy's word for it.

Just one more example is like with the directions, different people might have different views about how to get from point A to point B, but you can just calibrate who's got the better judgment by seeing who can get you where you want to go without getting lost.

And so it seems like in the case of ethics, there's not some kind of independent way to tell which alleged expert is getting it right. Peter Singer is a consequentialist; he's a utilitarian. He thinks that the fundamental thing that determines whether something has moral status is whether it can suffer.

Then you have someone like Chris Korsgaard at Harvard. She's a Kantian. She has a completely different theory from Peter Singer. And she's going to come up with different answers. How can we know which of them is getting it right?

There's nothing straightforwardly analogous to like, here's our two meteorologists. Maybe they have a different theory, but we don't have to be a meteorologist to know who's more reliable. We can just look at the weather. What's analogous to that in the case of two alleged moral experts? How can we tell which one of them is more reliable?

It seems like the answer is, well, we have to use our own moral judgment, so that's not really an independent check. So that's what I think makes it difficult to identify experts. And I'm not sure how that difficulty would interact with these different methodologies that you mentioned. There might be some interesting differences.

Leah: Yeah, we want to get into this more because like you said, with empirical matters, you can kind of assess an app's track record or assess an individual's track record of predicting empirical facts. With moral matters, this is a lot harder and it puts us in a strange position in that the only people who are actually able to identify moral experts are the ones who are already reliable in their own moral judgments.

And so these people may be the ones who are least in need of moral expertise. If I'm unreliable with respect to moral matters, then I'll end up treating as moral experts people who are also unreliable, because they're going to be the ones that agree with me. Whereas if I have a terrible sense of direction, I can still identify someone who has a good sense of direction, right? There's nothing about my own ability in that domain that makes me unable to find the expert. This might seem like not just a problem for moral deference in practice; it might also seem to undermine the legitimacy of moral deference in principle. We're wondering if you can say more about whether you think this is a deeper problem, and if not, why not.

Sarah: Well, it does seem like a deep problem to me. The reason I say it's a problem in practice is that it seems like if circumstances were different, we might come to think it's not that hard to identify moral experts. Like, what if all the people in moral theory had this breakthrough, and then they all came to agree?

We might think in that case, okay, we can tell who the experts are. Another thought I have here is that it seems like it's a difference in degree rather than a difference in kind. I don't think that we're in the situation where we can't get any evidence about who's more reliable.

I just think it's not as straightforward. So a lot of people think, for example, that if we're doing immoral things more often, and if we adopt more immoral policies than we used to have, things will go worse for us because we're doing things that are morally wrong. You know, if there's unrest in a society, that's evidence that the society is unjust. So it's not like we're completely cut off from any feedback, it just seems less straightforward.

Sophie: Let's talk about one of the mechanisms that you discussed for identifying moral experts. You suggest that we can identify moral experts by looking for people who have consistently outperformed us in their judgments about moral matters in the past.

So you write, quote, "Suppose, for example, that when you and I have disagreed about some moral issue in the past, more often than not, I have come around to your point of view after further reflection, experience, or discussion. Perhaps the moral judgments that I once contested are now incorporated into my own current moral outlook. If this happens enough, then it might become reasonable for me to think that when it comes to those moral issues about which we currently disagree, you are more likely to be right than I am." End quote. One thing we were wondering about is whether we could do this on a larger scale. So for example, at least historically, it seems possible to identify examples of people who were morally ahead of their time, people who called for the abolition of slavery when slavery was widespread, or who advocated for women to have rights before this was the norm, and so on. And maybe we could even look at these people and identify traits that they had that were correlated with their having the right views. Do you think that there's hope for this strategy? And if so, do you think that there are any groups of people who are alive today who are likely to be experts on this basis, and/or traits that are likely to mark people out as moral experts?

Sarah: That's such an interesting question. How can we identify in advance the moral reformers who are getting it right?

I think a difficulty for doing that is, suppose that some practice is in fact wrong, but it is not widely recognized to be wrong. Then you're going to be in a situation where lots of really otherwise good people, just ordinary people, are doing this thing that's wrong. It's evidence that it isn't wrong that otherwise good people are doing it.

So I think there's really a difficulty for moral reformers to change people's minds about moral practices because of the pressure of like, look at all these good people, they wouldn't be doing it if it was wrong. It would be interesting to look at the examples of cases where we really think someone was ahead of their time, like some of those people would be moral philosophers.

Jeremy Bentham seemed to be ahead of his time with respect to the animals. And John Stuart Mill was ahead of his time with respect to women. And so sometimes it seems like systematic thinking actually helps with moral progress. In that way, it seems like progress might come from abstract thinking and applying an abstract principle to particular cases. Like with Bentham: he thought what matters is suffering, and what we should be trying to do is reduce suffering. And then he thought, well, wait, animals suffer, and so they need to be taken into consideration too. And it's a very abstract process of thinking. But other times it seems like moral progress happens not through abstract thinking, but just through, like, someone you know, who you think is cool, doing this thing which you think is immoral, and it just changes your imagination.

And so I think a really interesting example of this is how, over the course of the 1990s, public opinion really changed about same-sex couples, like, really rapidly. And it seemed like the way that that happened was that people in the public eye that everybody loved came out as gay.

And so you would see these examples of people, and if this person was a lesbian, or this was a gay man, then maybe it's not so bad, because look at them, they're great. You know, it almost seemed like the opinion changed from the bottom up as opposed to the top down.

So it just seems like it might be hard to sort of identify the traits of the people who are successful reformers, because reform comes in so many different varieties.

Leah: Yeah, I mean, I want to follow up on what you said about Bentham and Mill, because, you know, those thinkers weren't just applying any old abstract principle or different abstract principles. They were both from this utilitarian tradition. And oftentimes, utilitarians will cite this as evidence for their view.

They'll cite the fact that, you know, utilitarians were kind of ahead of the curve on all sorts of issues, whether it was like animal welfare, prison reform, abolishing slavery, women's rights, gay rights, like there's actually a pretty impressive list. This doesn't mean that present day utilitarians are ahead of the curve as well, but is it at least some evidence that they might be, or maybe to put this differently, does evidence like this suggest to you that utilitarians might be moral experts?

Sarah: Interesting. I think, yeah, that's a great question. So it looks like we've been ahead of the curve on this and this and this and this, and you think we're wrong about cutting up the one healthy patient to take his organs and give them to the five triage patients.

You know, you think we're wrong about that, but that's just because we're ahead of the curve. Yeah, that's a good question. And to be fair, we would have to compare the track records, whether the Kantians or the virtue ethicists were also getting things right. But to be fair to the utilitarians, it does seem like if they were ahead of the curve on all these issues that we now think they got right, that counts in favor of their view.

Leah: Yeah. I mean, I doubt that any utilitarian is going to try to argue that we should cut up the one patient to save the five. I think more likely this kind of argument is going to be made with respect to things like putting weight on the importance of future people. And, you know, the thought would be, look, we were ahead of the curve on expanding the moral circle in all of these ways - first to include women, to include gay people, to include people who were wrongfully enslaved. And now we're also ignoring, you know, future people. So, yeah, we were wondering if you think your view lends support to arguments like that.

Sarah: To be fair to them, I think I would have to say yes.

Sophie: Okay. So you might think that in identifying moral experts, we could do a little bit better than just comparing their track record to our own moral judgments. Peter Singer, for example, who's one of the most prominent defenders of the existence of moral expertise, proposes these five criteria for identifying moral experts that go beyond the track record evidence.

First, he thinks moral experts are people who are able to reason logically. That is, quote, "avoid fallacious reasoning in their own arguments and detect fallacies when they occur in the arguments of others." Second, they have, quote, "some understanding of the nature of ethics and the meanings of moral concepts." Third, they have a, quote, "reasonable amount of knowledge of the major ethical theories, such as utilitarianism, theories of justice and right, natural law theories, and so on." Fourth, they are reasonably well informed about the relevant non-moral information. And fifth, they have, quote, "time to think and reflect about ethical issues." You argue in your work that these five criteria are insufficient, though not totally irrelevant, for moral expertise.

Can you say a bit about what you think he is missing?

Sarah: Well, it's just that if you look at that list and then you think about undergraduates who have taken intro to moral philosophy, as long as they kind of paid attention, they're moral experts.

And I guess what I think is just, it's misleading to use the term moral experts in that way. Not because I have anything against the undergrads, but because it seems like meeting all those criteria is insufficient for having the kind of reliability that makes it the case that anybody who has that is more likely to get the answer right.

Because I'm thinking a moral expert is someone who's more likely to get it right than we are, who's more likely to be reliable than we are, and that's why it would make sense for us to defer to them, even if what they said contradicted our own sense of what was right.

But it's just very easy to see that meeting all those criteria doesn't make you more reliable. Just look at how many philosophy professors who meet those criteria to the nth degree just disagree with each other about all kinds of ethical truths. So some of them are just getting it wrong.

They have a PhD and a nice certificate on their wall, but we don't have a reason to think that they're more reliable than the guy in the next office who disagrees with them about everything. So if what we mean by an expert is like someone who's more likely to get the answer to moral questions right than the ordinary person, we don't have that from someone who meets the criteria on Singer's list.

Leah: Yeah, so to just get into this issue of reliability a bit more, one thing that might make it hard to identify moral experts is that there can be a mismatch between moral thinking and moral behavior, in a way that seems potentially disanalogous to the relationship between non-moral thinking and non-moral behavior. For instance, multiple studies have found that even professional ethicists do not, by their own lights, behave more morally in certain respects than non-ethicists.

To take just one example: despite being more likely to believe that people should donate to charity, ethicists don't donate more than non-ethicists. If a person who we thought had a good sense of direction said things like, "we should turn left at the light to get to our destination," but then turned right and got lost just as often as the rest of us, this might make it hard to know that they had a good sense of direction.

Does the fact that there's often a mismatch between someone's moral thinking and moral behavior in a way that doesn't seem to readily apply to non-moral thinking and non-moral behavior undermine the case that moral deference is no more problematic than non-moral deference?

Sarah: That's such a cool question. What that question makes me think of is the doctors smoking outside the hospital - it doesn't make you doubt their medical expertise that they're smoking outside the hospital. So I think there's something more going on with our attitudes here, and I think the question is really about identifying moral experts. Because with the doctors, we already know that they're medical experts, and that's not undercut by the fact that they're smoking outside the hospital.

So, yeah, I think even if Peter Singer meticulously lived by his principles, we might wonder whether those principles were true.

One thing that's interesting about the question is that it brings out the connection between what's possible for us and what the true ethical theory is, because you might think it can't be the case that the true ethical theory asks of us something we couldn't do. Let's say the true ethical theory tells us to give up everything we have until people in need are at the same level as us. Some people think that's too demanding, and there's an interesting question of whether what ethics asks us to do is something we can do - a sort of "ought implies can" question.

I think that might be related to this issue - do ethicists live the way they say we should live? You might think if nobody could live the way that they say we should live, then they're wrong about how we should live.

Sophie: Yeah, that makes a lot of sense. Another reason to doubt that moral philosophers are moral experts is that if you really look at it, professional philosophers aren't incentivized to directly pursue moral truth. Instead, they have professional incentives to make arguments for views that are not currently defended in the literature. And often this means raising novel objections to views that actually might be consensus views.

So philosophers are not necessarily incentivized to pursue consensus. And in fact, they may be incentivized to do the opposite. Do you think that in a world where somehow moral philosophers were professionally incentivized directly to pursue moral truth above all else, it would make sense then to defer to moral philosophers as moral experts?

Sarah: I actually think we would have to fill in the details of the case more. If, in that circumstance where they're incentivized differently, they still all disagree with each other, then no - because then we still get to think that only some of these guys are getting it right. And then there's also the question of what methodology they'd be using.

And I think that brings up really interesting questions about moral knowledge and experience, because there might be aspects of experience that really need to inform moral judgment that these people in the ivory tower just don't have. And to the extent that particular moral facts are going to depend on particular empirical facts - a lot of the time, philosophers don't bother to learn the empirical facts. So in this world where, you know, the incentive is really just to get at the truth, we might want other people on the team, people who have lots of empirical knowledge, to help.

I'm trying to figure out exactly how to formulate this, but I guess, let's say everybody was like highly, highly incentivized to get at the moral truth.

Leah: Like, let's say getting into heaven depended on it. Do you think that there would actually be that much disagreement among moral philosophers about, you know, what the right thing to do is?

Sarah: There might be, because our common sense thinking about morality, I think, is torn - and this is why moral philosophy is so interesting and why there are so many problems that are so hard. On the one hand, I think we're all committed to the idea that suffering is bad and that suffering is what ultimately matters. Maybe we think it's just human suffering, maybe we think it's all suffering, whatever - but we think that really matters, and that in deciding what to do, you should just be thinking about bringing about the best consequences.

But then we have other convictions, like that there are constraints on how you can pursue good consequences. You can't do it by lying to people. You can't do it by killing innocent people. And we really are committed to that too. And then there's, you know, the question of like, well, at a certain point, if the world's going to end unless we sacrifice the one innocent person, it's crazy not to do that.

And you're like, okay, well, in that case, yes, we sacrifice the innocent one. But what if we had to sacrifice five innocent people - you know, if you switch around the numbers? I think in our ordinary moral thinking, we really are torn. And so maybe what's going on in philosophy is that some people have started out being moved by the suffering, and they think that's the central thing, while other people are thinking about the constraints. Both of those things really are in our common sense moral thinking, and maybe that's why we have these philosophical views that pull out different aspects of it - and maybe it's a conflict that just can't be reconciled.

Leah: Yeah.

So we're talking about practical implications already a little bit, but let's get into that more. In many ways, the field of bioethics is premised on the idea that there are moral experts to whom we should defer about moral issues. We grant bioethicists positions of moral authority as members of hospital ethics committees and institutional review boards, as ethics consultants and as policy advisors.

Among other things, we want to now discuss some of the practical implications of what we've said so far for the field of bioethics. So just to start, given the difficulties you identify with determining who is a moral expert, do you worry about the very idea of institutionalizing moral expertise in the way that the field of bioethics does?

Sarah: I guess you know more about the field of bioethics than I do, and I want to know more about it, because I think this is a really, really interesting question. And one initial thought I have is just that, you know, the idea of authority is famously ambiguous. So physicists are authorities on physics, and what we mean by that is they know more about it.

It's this epistemic authority: if you want to know the truth, ask them. Then there's another kind of authority where you get to decide, you get to say - political authority. And we can ask about the legitimacy of claims to either kind. When the physicists claim to be authorities and we question them, we want to know if they really know what they're talking about.

If someone is a political authority or a legislative authority, we want to know: why do they have that authority? And those questions are central questions of political philosophy - like, is it that the people who are going to be governed get to decide? Or is it some kind of implicit consent?

So an initial question I have about bioethics: for the people on this committee, the ethics committee that gets to decide things, is their claim to authority like the physicists' claim to authority - that they're the ones who know the truth? Or is their claim to authority something more like political authority?

Sophie: Yeah, no, I think that's a really good question. I think it's some of both, and it might even be some of both within a given role.

I mean, the members of institutional review boards, the boards that review research, they have a kind of coercive power. I mean, they can say that a research study doesn't meet ethical standards and then the study just can't be done. Or at least it can't be done with federal funding.

And then in a different setting, you have ethics consultants who serve more in an advisory role, recommending the best course of action, but not requiring it. So you might think the institutional review boards are being treated more like political authorities and the consultants more as epistemic authorities. But yeah, I guess how would you think through the legitimacy of these different types of authority, depending on the kind of role the bioethicist has?

Sarah: Yeah, that's very interesting. So I think with the coercive power, that really makes the authority question quite pressing. When people are answering the question, does this meet the ethical standard? Is that sometimes basically an empirical question?

Sophie: There is a sense in which it's empirical because it's kind of like checking whether something meets legal standards. So you're just looking at the standards and applying them.

Leah: But there's a lot more ethical judgment that goes into it, because the standards are vague, and so you have to figure out how to interpret them. There are bioethicists - I mean, we had one on the podcast a couple of weeks ago - who work on the regulations that govern pediatric research. And, you know, his role is very much figuring out the best interpretation of the regulations, where that's largely a matter of ethical judgment. And then he's telling people on institutional review boards how they should interpret the regulations when they're exercising coercive power.

Sophie: Right. Mm-Hmm.

Sarah: Then I guess my question would just be this - and I'm sorry, I keep turning this around and asking you questions.

If we think that that person should be treated as an expert, as a moral expert, in the sense that we're going to defer to what they say, is that because we think that they're more likely to answer the moral question correctly than the rest of us?

Sophie: If we really are treating them as a pure moral expert, then we think they are more likely to answer the question correctly. And maybe in this particular kind of setting where you have doctors who have their job and nurses who have their job, maybe it does look a little bit like Singer's criteria.

Sarah: Like, this person actually had the time to sit there and think about it, and you could have done just as good a job if you'd had the empirical knowledge and the time to think about it. So I was curious whether we're thinking that person has better moral judgment, or whether we're thinking they just have the empirical information and the time.

Leah: Yeah. I mean, I guess, maybe I'm a little confused because it feels like there's like two layers of moral experts here.

We have the doctors who are sitting on the institutional review board, making decisions about whether to approve trials. And then we have our podcast guest, Dave Wendler, who writes about the ethics of children's participation in research. And to me, Dave definitely seems like a moral expert on these issues.

I'm a little biased because, you know, I was involved in inviting him to be on our podcast. But the IRB committee members don't necessarily seem like moral experts. And the reason I think the latter are not is because of some empirical facts I've learned about them, both from doing these podcast interviews over time and from reading some of the literature on these issues. One, they oftentimes are not very familiar with the regulations - like, what is the definition of minimal risk research? Which is one of the things you have to be able to define in order to decide whether to approve a given study.

Two, they're inconsistent in their judgments: if you look at different IRBs, they reach different conclusions about what minimal risk research is. And three, it just doesn't seem like they're particularly well positioned to have moral expertise by some of the Singer criteria, compared to anyone else. I don't know, would you agree with that, Sophie?

Sophie: Yeah. I mean, it certainly sounds like different IRBs just have different practices about how to review research that sort of vaguely track the regulations, but are definitely not consistent with one another and arguably not consistent with the regulations.

Sarah: And so we don't really have any reason to think that they're epistemic authorities. It's more just that they have authority in the sense that they have the power. With the pediatric research guy, I'm interested in the question of to what extent we're thinking he's a pure moral expert, versus an expert on these pediatric ethical questions because he's learned what's at stake and what all the relevant factors are. You know, often when someone wants to pronounce on some moral matter, our reaction is: you don't know what you're talking about. And then we want to tell them all the things this ethical issue depends on, filling them in on this big amount of empirical background.

I was interested in the fact that, when you described this, one of the things people have when they come out of, like, a master's degree program in ethical advising...

Sophie: Yeah. Yeah. The certificate programs for being a healthcare ethics consultant.

Sarah: Okay, that's what I was thinking of - you described how you're guaranteed that the person has had 400 hours of experience. And it really does seem like having that experience of talking to people who are in these situations and advising them would really get you up to speed on a lot of the, you know, contingent factors - the different variables, the different things that work out and don't work out.

So it really does seem like that kind of experience with these particular types of questions would be relevant to how reliable we think you're going to be.

Sophie: Okay, so it sounds like you're wondering how much of what goes on in the field of bioethics is really treating people as pure moral experts, versus treating them as experts because they have more empirical information, more time, more experience, what have you. And I think probably it's a huge mix. But I do think there is some pure moral deference going on. So I guess we were wondering: when it comes to treating them as pure moral experts, you sound more concerned about that?

Is that right?

Sarah: I guess I just want to know why we think they have better moral judgment. And you can hear my question in two different ways, because there are two different senses of having better moral judgment. One is just: you are better at discerning the answers to moral questions than somebody else who had the same amount of time and non-moral information, et cetera.

But in another sense, obviously, these people are going to be experts relative to the rest of us, because they have spent the time and they have looked at all the underlying non-moral factors that are relevant. So it just looks like they will be experts. And this example of the person you had on the podcast who talked about pediatric ethics almost makes the pure-impure distinction seem really artificial.

It's a big deal that someone knows all this information and has had all this experience. It's a really big deal, and it confers all the kinds of moral expertise that we would ever want. It starts to seem silly to ask, well, are you treating him as a pure moral expert?

Because here's a person who's going to be more reliable in that context; you don't really care whether you're treating them as a pure moral expert or not. I guess when you ask what I would be concerned about, my answer was possible slippage between the two senses of authority. Because there's a historical pattern where someone has the political authority to decide something - or, whatever we call it, the structural authority: this IRB is allowed to decide.

And there's an easy slippage into thinking, well, they know best. And it's like, wait, no - why did we think they know best? Because we said they were the authority? But we originally just meant authority in the sense that they get to decide because of who they are. So I guess my main concern would be slippage between "they get to decide because of who they are" and "they are more likely to give the correct answer."

And that just that seems like the thing to be worried about to me.

And it might be interesting at some point to think about the similarities and dissimilarities between an IRB and a jury. It's not like the jury is a bunch of experts; they're just people who are going to think about what seems true given what they've got to work with. Are they supposed to be epistemic authorities, or is it more a kind of authority they have because of who they are?

It would be interesting to think about how an IRB is or is not like that.

Leah: Yeah, I mean, it's interesting because in preparation for this interview, I was looking up some of the healthcare ethics consultant exam questions, like what they get asked in order to get the certification.

Sarah: Good. Yeah. Yeah. Good.

Leah: So some of them are purely empirical. It'll be like, you know: here's the situation that happens, the healthcare ethics consultant comes in and does this thing - what did they do in this encounter? And the answers will be things like, they evaluated the capacity to make informed medical decisions. That's a purely empirical question about what happened. But then there are other questions like: what is the underlying ethical standard guiding the healthcare ethics consultant's analysis of the case? Where you're supposed to choose: is it the best interest standard? Is it beneficence? Is it substituted judgment? And to me, that's not an empirical question.

It's like: what is the correct ethical approach to getting the right answer in this complicated ethical scenario? And so it is interesting, because it seems like if someone passes that exam and gets the license, they're having multiple different kinds of expertise conferred on them, and it's not clear we always tease those apart.

Sarah: When I teach introduction to moral philosophy, I always point out to my students that there's a funny thing: the exam never asks, is abortion permissible? Is eating animals permissible? The final exam does not ask them for the answers to first-order ethical questions. You don't give a multiple-choice question that says, which animals is it permissible to use for food?

So I was wondering: if someone really is an ethics expert with a certificate, is there a test where there really is a multiple-choice question asking for the right answer?

Leah: It is, it's multiple choice.

Sarah: Wow.

It's so interesting. I think part of what animates all these conversations about moral expertise is that, at some level, we kind of agree with Kant that morality is just common sense - and it has to be, because otherwise it wouldn't be fair to get mad at people when they do the wrong thing, to blame them and think they're a bad person.

But in another sense, we know that there are really, really hard moral questions, like questions about abortion. If it were just common sense, then we wouldn't have two sides vehemently disagreeing where, on both sides, people are really being sincere - you know, they really sincerely believe what they're saying.

People disagree. So that really pushes against the idea that, oh, it's just common sense. And like, we wouldn't need ethics advisory boards if everything is just common sense. So it's hard to reconcile those two things.

Sophie: We're coming to the end of our time. We like to close by asking our guests, what is one rule or norm broadly related to what we've been talking about today that you would change if you could and why?

Sarah: Well, there's this thing in the literature on moral expertise - I think it took a bad turn at a certain point, when people started asking who are the optimists and who are the pessimists. The optimists are supposed to be people who think there's nothing wrong with moral deference.

People should morally defer. And the pessimists are supposed to be people who think, oh no, there's something wrong with moral deference. And so then there are all these people that wrote papers that said like, oh, come on, here are all these examples where it's good to defer.

And I got labeled a pessimist. Again, that's supposed to be the view that it's wrong - bad - to morally defer. And nobody should think that, right? Because everybody should think there are lots of situations where the thing you morally ought to do, and epistemically ought to do, is defer to someone else about a moral matter.

Like, everyone should think that. The thing that I thought was interesting at the very beginning, when I started writing about this, was just that I took it to be a datum that we have a different attitude toward moral expertise.

Deferring to a moral expert seems odd or problematic in a way that deferring to a scientist just doesn't. That's what got me going - it's puzzling that there's this apparent asymmetry. But in thinking it was puzzling, I didn't mean to condemn moral deference. I just meant to say: wait, can moral realism be true if we have this funny attitude?

And what explains that attitude? And I do think there is this funny feeling that morality is supposed to be readily available to everyone so that we can be good people, but then again, there are these hard moral questions and we can't figure out what the answer is. So that's kind of what got me going, but I never meant to go on the record as being against moral deference.

Sophie: Interesting. So maybe the takeaway is that everyone's actually more okay with moral deference than everyone perceived other people to be.

Sarah: Yeah. I think saying that there's a debate between the pessimists and the optimists is just a big wrong turn. 'Cause I think the question is more like: what's the deal with our attitudes toward expertise and deference, and why do we have those attitudes?

Leah: That makes sense. All right. Well, that brings us to the end of our time. Thank you so much for coming on the podcast. We've really enjoyed this conversation.

Sarah: Thanks for having me.

Sophie: Bio(un)ethical is written and edited by me, Sophie Gibert and Leah Pierson with production by audiolift.co. If you want to support the show, please subscribe, rate, and review it wherever you get your podcasts and recommend it to a friend.

You can follow us on Twitter at biounethical, no parentheses, to be notified about new episodes and you can sign up on our website biounethical.com to receive emails when new episodes are released. We promise we won't spam you, but we may reach out to let you know about upcoming guests and give you the opportunity to submit questions. Our music is written by Nina Khoury and performed by the band Social Skills. We are supported by a grant from Amplify Creative Grants. Links to papers that we referenced and other helpful resources are available at our website, biounethical.com. You can also submit feedback there or email us at biounethical at gmail.com. Thanks for listening and for your support.