
Bio(un)ethical
The podcast where we question existing norms in medicine, science, and public health.
#12 David Wendler: Are we overprotecting kids in research?
In this episode, we speak with Dr. David Wendler, Head of the Section on Research Ethics in the Department of Bioethics at the National Institutes of Health Clinical Center and philosopher by training. We discuss the ethics of pediatric research: how much risk we should expose kids to in research; what we should do when the federal research regulations don’t make sense; and what was and wasn’t wrong with the Kennedy Krieger Lead Abatement Study.
(00:00) Our introduction
(05:25) Interview begins
(13:56) How risky can pediatric research be?
(32:38) What counts as “minimal risk”? Risks of daily life standard
(45:28) Understanding research participation as a charitable activity
(49:48) Why the rules say we can expose kids to more risk when they don’t stand to benefit
(1:00:05) How to interpret research regulations when they are flawed
(1:03:42) Do kids need to understand altruism to assent to research?
(1:12:49) Should there be laws governing pediatric research?
(1:20:40) David’s take on the Kennedy Krieger Lead Abatement Study
Mentioned or referenced:
- US pediatric research regulations: HHS/OHRP 45 CFR 46 Subpart D
- NIH Inclusion Across the Lifespan
- Creating Hope Act of 2011/2012
- Pediatric Research Equity Act of 2003
- Mott Poll Report: Is my child’s medicine FDA-approved?
- Hwang et al., “Completion Rate and Reporting of Mandatory Pediatric Postmarketing Studies Under the US Pediatric Research Equity Act”
- Grodin and Glantz, Children as Research Subjects: Science, Ethics, and Law
- Wendler and Shah, “Should Children Decide Whether They Are Enrolled in Nonbeneficial Research?”
- Shah et al., “How do institutional review boards apply the federal risk and benefit standards for pediatric research?” [note: we refer to this as a 2008 study, but it was actually published in 2004]
- Wendler and Forster, “Why we need legal standards for pediatric research”
- Grimes v. Kennedy Krieger Institute, Inc. (Court of Appeals of Maryland)
Bio(un)ethical is a bioethics podcast written and edited by Leah Pierson and Sophie Gibert, with production support by Audiolift.co. Our music is written by Nina Khoury and performed by Social Skills. We are supported by a grant from Amplify Creative Grants.
Note: All transcripts are automatically generated using Descript and edited with Claude. They likely contain some errors.
Introduction:
Leah: Hi, and welcome to season two of Bio(un)ethical, the podcast where we question existing norms in medicine, science, and public health. I'm Leah Pierson, a final-year MD-PhD candidate at Harvard Medical School.
Sophie: And I'm Sophie Gibert, a Bersoff Fellow in the Philosophy Department at NYU, soon to be an assistant professor at the University of Pennsylvania.
Leah: Imagine a world in which medical treatments were never tested in children. In which dosages were largely guesswork and side effects remained unknown until it was too late. This was once our reality and in many parts of medicine, it still is. Less than one third of the medicines we prescribe to children are FDA approved for use in children, and off-label prescriptions account for anywhere from 60 to 85% of all prescriptions in pediatric care.
This may not seem like such a big deal. After all, can't we just extrapolate from what we know about adults? In fact, the situation is more complicated than that. Children aren't just little adults. Their bodies are both structurally and functionally different. Their narrower airways and developing lungs make them more susceptible to breathing problems. Their smaller blood volume and larger surface area to volume ratio mean that even minor fluid losses can have major effects. Their higher metabolic rates and underdeveloped digestive systems make proper nutrition and hydration more of an issue. Children get sick and die from different infections, cancers, and congenital diseases. And their ever-growing bodies present special challenges for devices like implants, heart valves, and prostheses, which need to be specially designed to accommodate growth and tolerate replacement. The list goes on.
All of this makes pediatric research vitally important. However, children are especially protected under research regulations in the US and elsewhere. And for good reason. Children cannot give informed consent to participate in research the way that adults can.
And even when they have loving parents who can consent on their behalf, we might not want to expose them to the same risks we expose adults to, without them meaningfully agreeing to it. Moreover, the history of pediatric research is riddled with abuses. Research on unconsenting, uninformed, and often institutionalized children was common throughout the 18th, 19th, and 20th centuries.
Striking an appropriate balance between protecting children in research and ensuring their access to safe and effective treatments is a difficult thing to do. And so far, we have not succeeded. In the US, the first guidelines explicitly concerning pediatric research were introduced in 1977, when the National Commission for the Protection of Human Subjects, established under the National Research Act, published its report and recommendations on research involving children. These guidelines arose in response to research atrocities such as the experiments carried out on the children of Auschwitz by Dr. Josef Mengele during World War II, as well as the controversial Willowbrook studies. These were studies in which institutionalized children were deliberately infected with hepatitis to study the natural course of the disease. Understandably then, the guidelines emphasized the protection of pediatric research subjects rather than their access to the benefits that research can provide. Over the past 30 or so years, the default among funders, investigators, and institutional review boards has been to exclude children, as well as other groups deemed vulnerable, from research. And although there have been efforts to incentivize and even require more pediatric research, progress has been frustratingly slow.
Our guest today is our former colleague David Wendler. He has been writing about the ethics of pediatric research, and research ethics more broadly, for around 30 years. He is Head of the Section on Research Ethics in the Department of Bioethics at the National Institutes of Health Clinical Center, and is a philosopher by training, having received his PhD in philosophy from the University of Wisconsin-Madison. In his work, David has often advocated for less restrictive policies and practices around pediatric research, such as allowing net risk research in a broader range of cases and raising the risk threshold for adolescents who can understand and agree to participate in research studies. At the same time, he strives to strike the balance that has eluded us, and in some cases argues for more protective policies - for instance, greater respect for children's dissent and legal standards for pediatric research.
With David, we discussed the ways in which our current policies and practices around pediatric research are flawed and how we might change them for the better. One disclaimer before we get into the interview: The views that David expresses in this interview are his own, and nothing he says represents the US federal government, Health and Human Services, or anyone else but himself. As always, you can access everything we referenced at our website, biounethical.com, and you can submit feedback there or email us at biounethical@gmail.com.
Interview:
Leah: Hi, David. Welcome to the podcast.
David: Thanks. Nice to be here.
Leah: Over the course of your career, there've been numerous efforts to ensure children's access to safe and effective treatments via pediatric research. For example, the 2003 Pediatric Research Equity Act authorized the FDA to require pediatric clinical studies. The Creating Hope Act of 2012 encouraged the development of new drugs for rare pediatric diseases. And the NIH's 2019 Inclusion Across the Lifespan policy required the inclusion of children in research, unless there were scientific or ethical reasons not to do so. This is just to name a few. Moreover, you mentioned in your work that bioethicists have increasingly called into question what you call the "default of exclusion" - that is, the practice of excluding children from research by default due to ethical concerns. To what extent do you think that this default has changed over the course of your career? That is to say, have we made progress?
David: So I think that in one sense, the default has changed a lot - basically in a normative sense, in people's views regarding how we should approach pediatric research. I think 30 years ago, a lot of people, maybe even people like me who have been doing this, would say that we should be reluctant. We should be careful about enrolling minors in research.
I think that view as a normative view has changed a lot, and people now recognize the importance of doing pediatric research. People emphasize the need to do it. I think that's been a big change. Now, in terms of actually doing more pediatric research, I think there has been some progress, but I don't think it's been as significant, certainly as a lot of people would like, and the people who drafted, for instance, the policies you were talking about.
Some of that problem is a result of the fact that over that same time period, say over the last 30 to 40 years, there's been this enormous shift in who conducts clinical research. It used to be that the vast majority of clinical research was conducted by the NIH, by governmental organizations, by NGOs. Some people used to estimate it was 30 percent for-profits and 70 percent governments and NGOs. And that's basically flipped. There are estimates now that 75, maybe even 80 percent of clinical trials are done by for-profit entities. And the challenge there is that, fortunately, kids just don't get sick as frequently as adults do. And so the market is pretty small. The profit incentives for doing pediatric research are on the low side, and the worries about ethical concerns, legal concerns, getting sued are on the high side.
And so there's still reluctance to do it. We're doing more, we're doing better, but I don't think we're doing as well as anybody thinks that we should be.
Leah: Yeah, just to quickly follow up on that - I'm sort of struck that there seems to be a tension between two of the things you highlighted there. One, that there's been this flip toward private sector research, including research done on kids. And two, that there aren't really profit motives to do this kind of research. So how do you explain that change?
David: I don't know - I can only speculate. I think that a lot of this shift has happened in the US, and a big percentage of clinical trials that get conducted, get conducted in the US. So that's why what's happening in the US has had a big influence on that overall trend. Not completely, but to a large extent, the question comes down to why there has been this shift in the US. And there I think it's some combination of things - governments just stepping back, for one.
So certainly when I started working - I've been working at the NIH for 30 years - when I started working at the NIH, I think there was much less worry about budgets, about cutbacks of budgets, of eliminating departments. And there's just been much more of that. And I think some of that is just a general attitude across the US of shrinking government, and the NIH gets caught up in that effort, even though when you separate out the NIH from the government writ large and you ask people, "Well, what do you think about supporting the NIH?" Most people think it's great and they want to support it, but I don't think that gets necessarily always translated into federal budgets.
Leah: That's helpful. So just to make sure I understand - the basic idea then is it's not that private funders are going out of their way to fund pediatric research. It's just that relative to the amount that governments are spending on pediatric research, private funders have just been investing more, and some of that is going towards pediatric research specifically. And so this reflects a broader trend that doesn't really have to do with increased private sector prioritization of kids.
David: Right. I think that's right. And there certainly is a corresponding trend, at least in some cases, toward doing clinical trials. Clinical trials are really expensive - a given big clinical trial can cost $700 million. But if you get a drug, particularly if you get a blockbuster drug, you can make billions, literally billions of dollars a year. So the potential profits are huge. Pharmaceutical companies know that, and so they're fired up to try to realize those profits. But again, given the small number of kids who tend to get really sick, getting that kind of blockbuster drug almost just isn't going to happen in kids in the way it could happen with adults.
Here's one anecdote. This isn't about pediatrics, but it shows the level of the money interests here. This was about 10 years ago. There was a big worry that we were stalling in effective treatments and vaccines for malaria. And the NIH had developed, up to a certain point, an approach that looked really promising, but it had stalled. They had tried to get some pharmaceutical companies to pick it up and develop that approach, and nobody had, and there was a renewed effort on the NIH's part to try to see if we could get somebody to pick this up.
As part of that, I went out to a conference in Seattle and talked to the developers of drugs for two of the largest pharmaceutical companies. We went out to dinner, had a very nice dinner - you guys remember I like wine - I actually had two very nice glasses of wine.
The reason why I was there was because some people thought that the reason why the drugs weren't getting developed was basically ethical worries, and I thought the ethical worries could be addressed. So my role was to try to convince them of that. And we were at dinner and somebody raised one of the ethical worries, so I went into my spiel and both of these guys listened very politely. And after about five minutes or so, they said, "Okay, you might be right. We might be able to address those ethical worries, but it's still not clear that we're going to be interested in this."
"You don't understand that the market for this, even if it were successful, would probably make us four to 500 million a year." And this is where the wine came in. I heard that and I thought that's really strange. It sounds like he's describing 400 million a year as a reason not to pursue something. That sounds like a lot of money to me. I thought I usually don't get drunk on a glass of wine or two. I must be drunk. No, that's what they were saying. They were saying for them, that level of money just isn't appealing enough for them to go into the investment at the beginning. They look for big things like billions of dollars a year. And this wasn't a pediatric case, but it was just to illustrate the extent of the money here and the levels of money that these companies are interested in and they're playing with. It's big.
Leah: Yeah, that's wild.
David: Also, relevant context here is that malaria kills a lot more kids than it does adults. Malaria is a lot worse for kids. It really doesn't kill adults for the most part - it can make you absolutely miserable, but it's really cerebral malaria in little kids that's devastating. And it kills a lot of them.
Leah: Yeah, well, that's important. So let's get a bit more into how the different regulations treat adult and pediatric research. As we understand it, there are at least two main differences between how we conduct pediatric research and how we conduct research with adults.
The first has to do with risk. Because children cannot consent to research, we generally place stricter limits on how much risk they can be exposed to in the course of research as compared to adults. The second has to do with choice. Again, because children cannot consent themselves, we typically ask children's parents to consent to research on their behalf. We also seek children's assent or their affirmative agreement once they reach a certain age. Let's set aside choice for the moment and focus on the topic of risk.
Sophie: So, listeners of the show will be familiar with the idea that US regulations place different requirements on research studies, depending on how much risk they involve. When it comes to pediatric research, an important dividing point is between studies that involve minimal risk and ones that involve more than minimal risk. Could you just elaborate a bit on why it's so important whether a pediatric study is classified as minimal risk or not according to the regulations?
David: So, with respect to risk, as you said, and I'll get back to this in a minute, the minimal risk standard is really important in pediatric research, also in other research, but certainly in pediatric research. There are also a couple of other risk categories. In fact, the US regulations, unlike a lot of other regulations, have four risk-benefit categories for pediatric research.
So most regulations around the world, if you look, have minimal risk and prospective benefit. The US regulations certainly have those two, but they have two others. The first one is called a minor increase over minimal risk. Now, what constitutes a minor increase over minimal risk isn't defined anywhere. People debate it a lot. But for the most part in practice, it's understood as just a tiny bit more risk than minimal risk. So even there, the threshold of minimal risk is really important, because minor increase over is anchored to that minimal risk standard.
The other one is that, at least in principle or in regulation, there's another category in the US regulations, which is called the 407 category (45 CFR 46.407; the FDA parallel is 21 CFR 50.54). That's defined as research that's not otherwise approvable under the regulations. So what that means is it's basically a catch-all for any studies that don't fit under minimal risk, minor increase over minimal risk, or prospect of direct benefit.
Now that category requires a lot of extra review. There are two rounds of public comment. You have to get this independent federal board that reviews it. The important point is that there's no explicit risk limit in that category. Instead, what that category does is take out the risk limit and replace it with this fairly general requirement that the research has to be consistent with, quote, "sound ethical principles." Now, who knows exactly what that means, but presumably it means no excessive risks.
Now, two caveats on that. One is that when I've talked to people who are very prominent in pediatric research ethics, and I say that in principle there's no upper limit on the net risks to which pediatric subjects can be exposed, a lot of people in the audience just say I'm wrong, and they'll say, "No, it's minimal risk and minor increase over minimal risk." And I say to them, "Well, show me where in the 407 category it says anything about risk level." And what's amazing is a lot of them think it's there, and then they'll go and look and they're like, "Oh, it's not."
So that shows, I think, that in practice a lot of people, people on IRBs, see a risk limit as a protection - maybe even a requirement - for that category, even though it's not explicitly there. The second thing is that it turns out that very, very few studies are even considered in that category. As far as I can tell, talking to some people at FDA and OHRP, it may be that no studies have been reviewed in that category in the last 10 years. And it looks like the grand total over the history of the US regulations is no more than 20 for the whole country. So in practice it almost doesn't count.
In practice, what you said is right: minimal risk is really important. There's minimal risk and direct benefit. That's not true in terms of the regulations, but it's largely true in terms of what people think is okay and what gets done.
Sophie: Okay. Great. So just to make sure I understand. There are three categories of research that can be approved by IRBs, which are minimal risk, minor increase over minimal risk, and prospect of direct benefit. And then there's this fourth category which would have to be approved by some kind of special board. And in that category, there's in principle no upper limit on the amount of risk that children could be exposed to in research. But it's extremely rare for a study to be looked at in that category.
David: Yeah, no explicit written regulatory risk limit. All that's there in this regard is this requirement that the research be consistent with sound ethical principles. Now, I think a lot of people in pediatric research ethics think that means it can't pose more than minimal risk or a minor increase over minimal risk. And obviously they think that. And if they're on an IRB or they're in charge of an IRB, that opinion is going to have a lot of influence, but it's not something that's coming out of the regulations themselves. They're reading that into the regulations.
Leah: Hmm. Okay. One more question about the relationship between risk and benefit here, because prospect of benefit is treated as a separate category. But I think intuitively, it's hard to understand - like, benefit and risk seem like separate axes. Why are we pulling out prospective benefit as a separate category from minimal risk or minor increase over minimal risk?
David: Yeah, so I think the basic reason is that, as Sophie said before, the critical question is whether it poses minimal risk or more than minimal risk, and the basic idea of this starts with the National Commission. So for the most part, the current regulations we have for pediatric research in the US are a product of the recommendations of the National Commission in the early and mid-1970s.
And there the thought was - and this gets back to the underlying ethical issue, which I don't know if we want to talk about or not - just the question of why it is ethical to expose a two-year-old to any research risks for the benefit of other people, even if the risk is minimal, even if it's tiny. How do you justify that? Certainly, if you look at legal standards around the country, most will say parents should make decisions for their kids based on what's in the best interests of those kids.
And on most construals of what constitutes best interest for a two-year-old, it doesn't involve getting stuck even with a needle, which most people will grant as minimal risk, in order to collect information that might help somebody else. So that's just to say that there's this background worry that everybody has in mind when they're thinking about pediatric research. The National Commission certainly had that in mind.
And their response to that, the initial response, was, "Well, as long as that risk is minimal, as long as it's really, really, really small, then we're not going to be too worried." And so they start there. And that's the first risk-benefit category in the regulations: minimal risk. And the thought was, if it's minimal risk, then you can do pediatric research. And just to make the point we made before, this standard is relevant to adult research too. For instance, you may not even need to get informed consent at all for studies that are minimal risk, if they meet a couple of other conditions.
So they started there and they said, "Okay, okay, we're good. That's okay. We can accept that, we're comfortable with that kind of research." Well, that's not all research. That's probably going to be a small slice of the pie. What about the rest of it? And then what they did was they went to a kind of clinical care standard, and they said, "Well, look, some research is going to actually benefit the kids, and if it benefits the kids sufficiently, then we're going to think it's okay as well." So that got them to the prospect of direct benefit category.
Minor increase over minimal risk - my understanding, historically, is that some people just thought minimal risk was going to be too restrictive, so they put in minor increase over minimal risk. Okay. But one thing it may be helpful to clarify at some point is that I'm afraid people are going to have listened to what we're saying and made a natural assumption, which most people make, which is that these risk-benefit categories we're talking about - minimal risk, prospect of benefit - apply to studies. And actually, they don't apply to studies.
So if you look at the federal regulations, for instance, for prospect of direct benefit, it doesn't say you can enroll a child in a study when that study offers a prospect of direct benefit that justifies the risk. What the focus is of those standards is procedures or interventions. So it says when there's a procedure or intervention that offers the prospect of direct benefit, then the consequences of being in that category follow.
And why that's important is because obviously clinical trials are made up of a lot of different components, a lot of different interventions. There might be blood draws - maybe just a single blood draw. There might be some surveys. There might be administration of a chemotherapy. And what the federal regulations say is that it's not enough for the overall study - for instance, in the prospect of direct benefit category, it's not enough that the overall study offers a prospect of direct benefit that justifies the collective risks to which the kids are exposed in that study. You have to look at the individual-level procedures.
So I'll just give a specific example that people talk about in this regard. Imagine that I'm doing a study for a kind of internal cancer - so a liver cancer or a kidney cancer. And what I need is a biopsy of the liver or the kidney to see whether or not my drug is getting in there. So, the assumption is the drug's only going to work if it gets into the organ, and the way to see whether or not the drug is infiltrating the organ is to do a biopsy.
Now, it might be that overall, the chance of getting the experimental chemotherapy might be sufficiently beneficial or promising for kids to justify an overall assessment that the prospect of benefit of being in that study justifies the risks of everything, including the biopsy. Now, I think a lot of IRBs actually do it that way and they would approve that study, but that's not what the regulations tell them to do.
What the regulations tell them to do is you've got to look at interventions or procedures. So what you've got to do is you've got to look at the individual biopsy. So in this case, the liver or the kidney biopsy, and that has to either pose minimal risk, a minor increase over minimal risk, or offer a prospect of direct benefit. Well, if it's just for research purposes, then it's not going to offer a prospect of direct benefit. And then you're left with minimal risk and minor increase over minimal risk.
So the overall point of that is it's not just a limit on the overall risk-benefit profile of a study. It's also, at least in principle, a limit on the risk-benefit profile of the specific interventions or procedures that you include in your study. And that makes it, at least in theory, a lot more restrictive.
Sophie: Okay. That's super helpful because I agree with Leah that it's strange to think of these as risk categories if one of the risk categories is called prospect of direct benefit. I think it's easier to think of it as a sort of decision tree. So the first decision is, is this minimal risk or not? If it is, then we're good. Maybe there are some other requirements we have to fulfill, but basically we can do it.
If not, then we reach the next decision point and we ask, does it have a prospect of direct benefit for the person participating? Again, if it does, then we're okay. And if not, well, then we ask, is it only a minor increase over minimal risk? If it is, then again, we're okay to do it. And if not, then we would have to go to this fourth category, this special board to get approval. Is that right?
David: Exactly. And I think that's the way the regs are written is for IRBs to go through that process. So they first ask themselves, is it minimal risk? No, it's not minimal risk. Okay. Is there sufficient prospect of direct benefit to justify the risk? Yes or no. And then if not, do you go to this minor increase over minimal risk?
Leah: Yeah, I mean, it is weird though, in the context of what you were just saying about how we're supposed to be assessing every individual intervention or procedure that's performed as part of a study in terms of this decision tree. It sounds like what you're saying is that's not actually what happens in real life. But if it did happen in real life, it seems like individual procedures within a given study would be classified differently. So you wouldn't even be able to say, overall, this study is X, because it would be mixed depending on what different things you were doing.
David: It could be a huge mess. So just to take even a simple example - it gets very complicated, but to even take a simple example - we wrote a paper a long time ago now looking at whether or not and when you can approve placebo-controlled trials with kids. So take a standard placebo-controlled trial that randomizes kids, let's say kids with a disease, to either getting experimental treatment or getting a placebo.
Now, I think in practice what a lot of IRBs do is they'll just look at it from the sort of ex ante point of view of enrolling in the trial, with the, say, 50-50 chances, or whatever they are, that you could end up on one arm or the other. Is the potential benefit of getting the experimental treatment, and the experimental treatment benefiting you, sufficient to justify the risks in that arm, and also the possibility that you're on the placebo arm?
I think that's probably the way IRBs actually do it, but I think what you're saying is exactly right. The way they should be doing it under the regulations is at a minimum they should be looking at the active arm, giving that a risk-benefit assessment or categorization, and then looking at the placebo arm and presumably giving that a different one. Unless you think that placebos can offer a prospect of direct benefit, which almost nobody does, then almost all of those trials should really have in effect at least a dual risk-benefit categorization.
And then what you might find out is you have a three-arm trial. I can approve two of your three arms, but I can't approve the third one.
Leah: Okay, wild. And just one last question. I mean, when IRBs are misapplying the standard, do they realize they're doing that? Like, are people sort of flagrantly flouting the regs here?
David: I doubt it. I don't know, but I doubt it. I think that what happens in a lot of cases, what they're doing is that they're thinking, "What's the point of these regulations?" The point of these regulations is to make sure that nothing egregious is going on, basically. Or more specifically, protecting kids but allowing valuable research.
And so I think a lot of IRB members just use that lens to evaluate the studies that their IRB is looking at. I would love to see a survey that interviews 1,000 IRB members and just asks them, "How many of you have ever read the regs?" Because I bet a lot of them just haven't. They've never done it. And so I think there's likely very little deliberate flouting going on. You might think that's a good thing, but to a certain extent, never reading the regs is itself enough to count as flouting them.
At least some people think, well, that's okay: IRBs are supposed to be protecting kids. So as long as they're protecting the kids and they're allowing valuable research that'll help kids in the future, then they're doing their job, and the regulations should be a facilitator of that process and not a hindrance to it. Whether that's right depends - you have to have a theory of what you think about the status of the regulations.
Leah: Okay. And just one more question on the regulations themselves. Could you just clarify for us how these risk-benefit categories for pediatric research relate to the ones for research with adults?
David: Well, technically, under US regulations, there are just no risk limits on research within the US with capacitated adults who give informed consent. So the only requirement with regard to risks and benefits for adults is the requirement that the risks to the subjects have to be justified by the benefits to the subjects and/or the social value of the study.
And so, of course, what that means is that if you have an enormously valuable study, even if there's no potential benefit to the individual subjects, you could justify serious risks. It looks like a kind of utilitarian calculus. And that's how some people criticize the regulations. They'll say that if you're a strict utilitarian - and this is one of the classic dilemmas you get with consequentialism - then if you came up with a possible treatment that could save hundreds of thousands of lives, on a strict risk-benefit calculation, how much risk could that justify to, say, 50 research subjects? A lot. Probably a fairly high risk of death. And there's nothing in the US regulations that says you can't do that trial.
Sophie: Yeah. Okay. That makes sense.
Leah: So we've talked about how this definition of minimal risk becomes really important in terms of determining what research is allowable. And most research regulations around the world define minimal risks as the risks encountered in daily life. For example, US regulations define them as risks that are quote "not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests."
In 2008, you did a study of IRB chairpersons in charge of reviewing pediatric research to understand how they interpret this "risks of daily life" standard. There were some alarming results. For instance, you found that 70 percent of the IRB chairs classified allergy skin testing as more than minimal risk, even though US regulations define minimal risk in terms of the risks of routine physical examinations and blood tests, and allergy skin testing seems pretty similar to these things.
You also found a great deal of variation in how people apply the standard. Do you think that the situation has improved since then, either in terms of how consistently IRBs apply the risks of daily life standard, or in terms of how reasonably they do so?
David: No. I think there's a problem that all of this is running up against, which is really hard to address. And I think there are different sources of the problem. One of the sources is the fact that, evolutionarily, one of the things that we've been endowed with is a very fast assessment of risks in our environment.
So, when I take the dog for a walk in the woods in the morning and I see a snake, there's no sort of assessment or analysis or rationalization. I just jump back and I'm terrified instantaneously. It's just hardwired into me. I think risk assessments for the most part are hardwired into us. So I think we're pushing up against that. People just have a view of risks.
The problem is it's not fully hardwired in the sense that it's just evolutionary. It's really informed by our experience, and the way that we're so fast in evaluating risks is that we use these shortcuts, these heuristics - like, things that are familiar are safe and things that are unfamiliar are risky. Well, that's a bad heuristic to use in the context of research, where almost everything, or at least many of the things, are unusual. People are just going to think they're risky because they're not familiar with them. But that doesn't get us anywhere.
So, I think in terms of just human psychology, there's something we're running up against. The second thing is a more programmatic one, which is that a lot of what I think of myself as doing is trying to come up with standards, data, evidence to try to implement the regulations, protections for pediatric research. And what I've just been regularly, consistently, and repeatedly struck by is the fact that when it comes to these issues, I tend to be much more systematic than clinicians, doctors, surgeons, and scientists.
And I find this really striking and strange. So one of the first IRBs I was on was an infectious disease IRB. And the investigators wanted to do a study where they took healthy 11-year-olds and put them in an fMRI machine and did a quick scan of their brains. No chance of benefiting - these were healthy volunteers. Everybody thought it was an important study.
The way that IRB worked at the time - the way we implemented the risk-benefit categories we were just talking about - was that the chair just said, "Okay, let's go around." He went around this big oval table and asked everybody, "Do you think this is minimal risk, minor increase over minimal risk?" And I was towards the end. First person said one thing, second person, "minimal risk," third person, "minor increase over minimal risk." It went around, and as it started getting to me, it felt like this drumbeat that was just terrifying me, because I was thinking, "God, how are they doing this? They must have all of this data and this way of implementing it and making these assessments really fast in a way that I'll never understand. And I don't know anything about it." And I thought, "Oh, this is just crazy."
So it got to me, and I just abstained. I had absolutely no idea what to say. That was the last protocol. We were then walking out of the room - I had gotten to be friends with this one guy on the committee - and I said, "How did you know that fMRIs in healthy 11-year-olds are minimal risk? Are there studies that I don't know about? Is there a lot of data?" He said, "I have no idea. It just seems like minimal risk to me."
And it's this way in which, all of a sudden, the scientists and the researchers become like a priori philosophers - like they think you just sit in a corner, scratch your head for a minute, contemplate the form of an fMRI in an 11-year-old, and your intuition tells you whether it's minimal risk or not. And I think that's what most people did.
I'll just tell you one more thing. That survey that we did with the IRB chairs - the results were frightening, but what was even more frightening to me was the process. These were chairs who were experienced in evaluating pediatric research. We gave them this whole list of procedures to categorize as minimal risk or more than minimal risk. And not once, for a single procedure, did a single chair say, "I don't know, because I don't know the data on the risks of, say, allergy skin testing." I can imagine, if you're an oncologist, what do you know about the risks of allergy skin testing? There should have been a lot of cases where the chair said, "I just don't know. I don't have the data. I don't have the data in front of me." Didn't happen once. In every single case, they gave us their opinion. People are just doing it.
Some of it, I think, is a result of the psychological mechanisms I was talking about initially, but some of it has to do with a certain view of ethics. Look, these are people who, when they're doing their science, are incredibly technical in ways that I just can't understand. But then when they shift over to ethics, they somehow think, "What's data got to do with ethics? It's just what I think is right and what I think is wrong, what I think is minimal risk and what I think isn't." And it is scary to a certain extent.
I think for the most part they're erring on the side of caution. So the scariness here, I don't think, is kids getting exposed to excessive risk. I think it's more the possibility that valuable studies aren't getting approved. But until we started saying, "Let's actually come up with data for what the risks of daily life are," nobody had. When I started saying that to investigators, they'd say, "We never thought about that." Well, how could you never think about that? If you read the standard - just read it.
Leah: I was going to say - I mean, I'm not a huge defender of IRBs in general on this podcast, but I will say, I think a lot of IRB members are clinicians, and these are the kind of snap clinical judgments that you have to make in real life. Obviously you want your clinical recommendations to be informed by data and to follow guidelines. But with this fMRI in 11-year-olds, no one knows what the risks of that are. No one's collected data on the risks, no one's surveyed the 11-year-olds, no one's done a systematic analysis of this. And so it may be one of these things where you learn to recognize these situations where you're operating without a lot of data, and in those situations, you're used to relying on sort of heuristics and vibes and judgment.
David: I think that is what they're doing. And just to not let IRBs off the hook, but put everybody else on the same hook - I think everybody else is doing the same thing. So there's this group that does diabetes research, and they do a lot of research with IV glucose tolerance tests. And a couple of times I've gotten calls from the guy who heads that group - or at least used to head it - who said to me, "You've got to help us. The IRBs won't approve our studies because they think IV glucose tolerance tests are more than a minor increase over minimal risk. So we can't do them for research purposes."
I said to the guy, I said, "So when you present these protocols, what data do you present to IRBs in terms of the actual outcomes of kids who get IV glucose tolerance tests?" He said to me, "What do you mean?" I said, "Well, what data do you show to support your claim that they're minimal risk?" "What's data got to do with it? They're just minimal risk."
And then to your point - I asked him, I said, "Of all the people in your group, how many IV glucose tolerance tests in kids do you do a year?" And he said, "I don't know, maybe 600,000." And I said, "If you just collected systematic data on that for two months, you would have all the data you would need. And if you were right, IRBs would have to agree with you." Okay? And his response was basically, "That's just not what I do."
And part of it is just that it falls outside the scope of people's research. That guy was like, "Look, I'm a diabetes researcher. I'm not a risks of IV glucose tolerance test researcher." So just nobody does it and no one's getting the data. I've been trying to convince people we have all of this experience that goes on in the clinical setting. We could get all the data we need for most of these procedures pretty quickly, but nobody's doing it.
Leah: So not everyone thinks that the risk of daily life standard is appropriate for assessing minimal risk. And one objection is that some kids routinely face more risk in their daily lives than we think they ought to. For instance, kids who live in unsafe neighborhoods routinely face more risk than we would be comfortable allowing them to face in research. And it would seem unfair to expose them to greater risks in research just because they live riskier lives overall, especially when this fact is often due to injustices.
This has led some people to advocate for a "reasonable parent standard," which defines minimal risks as the risks a reasonable parent would intentionally expose their kids to, or just to not factor in the risks of daily life when they seem inappropriate. While we see that there are good reasons to adopt this kind of standard, we're also a bit wary of it because we worry it might end up being really conservative about how much risk we can expose kids to in research. I mean, we're not sure - would a reasonable parent intentionally expose their kid to any risks if there were no good reason to do so?
We had a similar reaction when reading this quote from one of your papers. You write, quote: "Some of the risks children ordinarily face in daily life are inappropriate and do not provide a standard for determining which risks are appropriate in research. The risks teenagers face when driving recklessly are inappropriate. They should be eliminated from daily life and to the extent possible, not allowed in clinical research." End quote.
Now, in principle, this sounds pretty good. It would be great if we could eliminate these kinds of risks from daily life, but obviously eliminating them would mean severely restricting teenagers' freedoms. And the fact that we don't do this suggests that we do think that the risk is reasonable. Are you similarly wary of the reasonable parent standard being too conservative?
David: Yes. I think it has to be too conservative. I also think given my experience of talking to people about risks, that it would just be all over the place. Given the things we were talking about before, for instance, people's judgments of risks are so influenced by their own personal experiences that you get completely different views depending upon the parents you talk to.
I think in principle, it's a nice idea. Maybe it's a heuristic that IRBs can use to think, "Okay, would a reasonable parent accept this?" But as an actual way to implement and judge studies, I don't think it works. And I think the fundamental problem is just that we don't have, as far as I know, an independent standard on what constitutes or who is a reasonable parent.
If we had that, right? If there was some psychological test where you give somebody 30 questions and they answer those questions, and if they get a score that's over 27 or something, then they're a reasonable person, a reasonable parent. If you knew that, then you just get the people who get the sufficiently high score on that test, and you just ask them. All you have to do is just give them the study and say, "Is this a study you'd enroll your kid in?" And you'd have the answer.
But of course, we don't have such a test. We don't know whether parents are reasonable or not. So what we end up being stuck with is we end up being stuck with their assessments of which research risks are acceptable or not. And then the whole thing just becomes circular because then you think, "Well, but I have to see whether or not I agree with what they said about the risks of this study or not." So I think maybe as a heuristic it's valuable, but I think as a way of actually implementing the standard, it basically just doesn't get us anywhere.
Leah: And in some of your work you've advocated for another standard, a "charitable participation standard," which thinks of minimal risks as those that children are regularly exposed to for the benefit of other people, such as the risks they encounter while participating in charitable activities or doing family chores. Could you explain the rationale behind this standard and how it might play out differently in practice when IRBs are reviewing research studies?
David: Sure. So the motivator for it was some of the issues we were just talking about. First is that some of the risks in daily life are just inappropriate. So the risks that basically any kid right now faces in Gaza, in Afghanistan, in Syria - any reasonable person would say that those risks are just excessive and inappropriate. And nobody thinks they provide a standard for assessing acceptability. That's the point, right? We're trying to see which risks are acceptable, and you can't answer that question by appealing to risks that everybody thinks are unacceptable. So that was the first worry.
The second worry was even when you think about appropriate risks in what people call relatively safe environments, there are some risks that we expose kids to because they're associated with activities that we think the kids will benefit from. So I have friends who take their kids snow skiing or take their kids hiking. Snow skiing actually turns out to be one of the riskier things that kids can do. And my guess is if you proposed a study that had the risks of snow skiing, most people would say there's no way an IRB is going to approve that.
So why do parents, even reasonable parents, let their kids snow ski? Well, presumably the assumption is they get something out of it. They're learning something or they're appreciating it or they're enjoying it in a way that seems at least less likely when it comes to clinical research. So my thought was I like the idea of the risks of daily life standard, because I like the idea of having some comparator that gives you a baseline to assess the risks of research. So we're not just stuck with our intuitions on which studies and pediatric research we think are ethical or not. We have some baseline to compare it to. I like that aspect of it, and I thought I want to keep that, but it needs to be narrower.
What we need is an articulation of it that knocks out some of these risks that seem inappropriate, either because they're in a place where the risks are excessive or because the risks are connected to an activity that offers a prospect of benefit. And so my thought was that charitable activities offer maybe a plausible way of drawing that limit. So we just look at the risks that we think are acceptable and appropriate in charitable activities.
So just in terms of process, I think it helps with those problems. The other two things I like about the charitable participation standard is that it reminds people what research involves. It reminds people that, particularly when we're talking about research without a prospect of direct benefit, it's a kind of charitable activity. And so I think that's a good reminder for IRB members, for investigators, for everybody.
The second reminder - and this one I feel like I have a harder time conveying - the hope was also just to remind people that outside of the research context, we do expose kids to some risks for the benefit of others. And almost every parent I've ever talked to thinks that - they might disagree about which activities are appropriate or not, but they think that in general, exposing their kid to some risk in the context of a charitable activity, to benefit other people, can be acceptable.
And so the thought was to try to get that into people's minds, to try to address some of this worry that hovers over pediatric research that I talked about before - this view that it's just complete and utter exploitation of defenseless little kids who don't understand and can't make their own decisions.
Sophie: Okay. Yeah. It's helpful to think of it like that as a sort of compromise between the overly lenient risks of daily life standard and the reasonable parent standard, which while being less lenient is maybe too hard to implement.
So another important concept in regulating pediatric research, as you mentioned earlier, is the notion of prospect of direct benefit. Sometimes participating in a research study can directly benefit you in addition to benefiting other people by producing generalizable knowledge. And not just in the sense that you get paid to participate. Research can offer some potential clinical or therapeutic benefit. For instance, sometimes participating in a study is the only way to access a treatment that has a chance of helping you.
Other times participating in research is purely for the benefit of other people, or at least there are some procedures involved in the research, like additional blood draws, that are purely for the benefit of others. In your work, you've pointed out a surprising feature of our regulations on pediatric research, which is that we actually, in a sense, apply stricter standards to research that does stand to benefit a child than research that doesn't. Here's our understanding of how that works. And feel free to correct us.
When a study doesn't offer a child any prospect of direct benefit, when it's purely for the benefit of others, the only requirement that study has to meet is that it doesn't exceed a certain threshold of risk: it can't be more than a minor increase over minimal risk. And so it's fine if the child faces what we might call net risks in that study - risks that aren't justified by the benefits that they get from the study. After all, they don't get any benefits directly from the study.
By contrast, when a study does stand to benefit a child, we place additional requirements on it. For one thing, it can't involve any net risks. Any risk that the child faces in the study has to be compensated for by the benefits that the child receives. Moreover, the risk-to-benefit ratio the child faces in the study needs to be favorable as compared to the alternatives they would access outside the study.
This feature of the regulations rules out certain kinds of studies. Suppose, for example, that you have an approved treatment for some condition and it's very expensive. Then you have another treatment that you think also works, but not as well and is less expensive. What you want to do is you want to find out how much better the expensive treatment is to see whether the additional cost is justified. Well, you can't do this kind of study in children because any child who received the inexpensive treatment would be facing a risk-to-benefit profile that's less favorable than whatever they would get outside the study just by accessing the more expensive treatment. To your understanding, why are the regulations like this?
David: So, let me say one thing as a preface to all this, which might be relevant when we come back to thinking about how we should understand and interpret the regulations. So, as you mentioned, this category is "prospect of direct benefit." And that raises the obvious question of what the heck is a direct benefit? We're stuck with that. That's what's in the regs and it's not defined in the regs.
If you go back and look again at the National Commission that I mentioned earlier, they're the source for all of these regulations. They were worried about researchers who might try to justify significant risks to pediatric participants by just the speculative possibility of future benefits sometime down the road. "Well, I don't know if I give these kids this test, if I do this biopsy in these kids, we might learn something that when they're 70, it could help them in this way or that way." They were worried about that kind of speculation.
And so what they did was they didn't say, as you might otherwise assume if it's just a risk-benefit assessment, that the potential benefits have to justify the risk. They said the direct benefits have to justify the risk, and then they explained that what they mean by that is relatively near-term, certain benefits. That's why they put that "direct" in there, and there's a whole cottage industry that's grown up explaining what "direct" benefit means.
And what's interesting - this is maybe understandable - is that when we're thinking about direct benefits in this context, we're thinking of clinical benefits, benefits that are actually going to directly affect a kid's health. The standard view, I think, if you ask most IRB members - and there are a couple of papers defending this - is that a direct benefit is a benefit that you get from receiving the experimental intervention that's being tested in the study. So it's pretty narrow. There's nothing in the regulations that limits it that way, but that's the way people understand direct benefit.
Okay, so that's just the background to all of this, but now your point is right that it looks like the risk tolerance that we have is lower or non-existent when it comes to the chance of benefit than it is for when there's no chance of benefit. And looked at in one way, that's just odd or makes no sense at all. Right? If we go back to the reasonable parent, can you imagine a reasonable parent saying, "Well, I'm okay with you exposing my kids to some risk to help other people, but not if there's a chance my kid's going to benefit?" It just doesn't make any sense at all.
So how did we end up here? I think there's a historical explanation and then maybe a potential sort of retrospective justification. So the historical explanation is you gave a really nice example of how a trial could offer a prospect of direct benefit and yet in a sense pose what I think of as "net risks" when you have an arm that isn't as good as what the kids would get otherwise. I think it's just a fact of the matter that the people who are making these recommendations were really talented, really smart people, but they weren't omniscient. And I just don't think they thought of that as a possibility. It just didn't occur to them. So that, I think, is the historical explanation. It's just an accident.
Now, some authors try to justify this approach. And their view is that the risk-benefit standards or the normative standards that apply to research aren't consistent across studies. So take the net risks test. I think that just should apply to all studies. The net risk test is just asking, what are we worried about here? We're worried about the possibility of exposing kids who can't consent to excessive risks for the benefit of others. That's what we're worried about.
So what we need to do is basically two steps. And this is the net risk test. The first step is you've got to identify whether or not the kids are being exposed to risks that go beyond the potential for benefit. Those, as I understand it, are the net risks. So if the potential benefits completely justify the risks, or outweigh the risks, however you want to say that, then there are no net risks. There are net risks when there are some risks that aren't justified by the potential benefit. And then in the second step, you ask: are those net risks too big, or are they small enough to satisfy the minimal risk standard?
I think that should just apply to all studies. So that assumes that the risk-benefit standards, the normative standards, apply across all these different kinds of trials - minimal risk, prospect of direct benefit, minor increase over minimal risk. I think that's right. What some people will say is, no, there are different principles depending upon the context.
So when you're in a non-beneficial study, then you're in what you might think of as a pure research context, and the argument is that the obligations that the researchers have, and most researchers, at least, are clinicians - the obligations they have are different between these categories of study. So that when they're doing a minimal risk study, then they're purely doing research. They're just researchers and the only obligations they have are obligations qua researcher. And those are things like, "I can't expose these kids to excessive risks for the benefit of others."
But the view here is that when you switch into the direct benefit category, now, you're in a kind of quasi-clinical setting. And the obligations that the investigators have, not as researchers, but as clinicians, kick in. And the assumption there is that as a clinician, you're not allowed to expose kids to any risks for the benefit of others.
Certainly, if you take your kid to the pediatrician's office and the pediatrician just says, "Just for the heck of it, I'm taking an extra blood draw from your kid. I think it would be really interesting and it would help my postdoc's research." That's malpractice, right? You don't get to do that. So they're certainly right that those are the standards that apply to clinical care. It's just a question of whether or not you think those standards are relevant in the research setting. They think they are, and most people, I suspect, agree.
Leah: And are we right to think that the net risks test is not consistent with the US regulations?
David: I think basically the answer is yes, although you could make it consistent with the regulations. So basically, there are two parts to the net risk test. The first part is identifying whether there are net risks and how big they are. And the second part is assessing whether those net risks, if there are any, are acceptable or not.
So you could jerry-rig a net risk test, which says that when you're in research that does not offer a prospect of direct benefit, you figure out the net risks, and the net risks just can't be more than a minor increase over minimal. But when you have a direct benefit study, then you apply the net risk test, and if there are any net risks at all, the study is unethical, because the clinicians and the researchers have these ethical obligations from being clinicians that apply in that setting.
So you could do that, but I wouldn't. And that, I think, gets at the puzzle of this view, which is: why is it that I could expose my kid to two extra blood draws when there's no chance of benefit, but not when there is a chance of benefit? It just comes back to the point we started with, but that seems, at best, counterintuitive.
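[Editor's note: the following is a minimal Python sketch, not anything presented in the episode, of the two-step net risks test David describes, alongside the "jerry-rigged" reading that tracks the current regulations. The category names, their ordering, and the thresholds are simplifying assumptions made for this illustration.]

```python
# Ordered risk categories, lowest to highest (an assumed simplification).
RISK_ORDER = ["none", "minimal", "minor_increase_over_minimal", "greater"]

def net_risk(risk_level: str, benefits_justify_risks: bool) -> str:
    """Step 1: identify the net risks - whatever risks are left over
    once the potential direct benefits have justified their share. If
    the benefits fully justify the risks, there are no net risks."""
    return "none" if benefits_justify_risks else risk_level

def uniform_test(risk_level: str, benefits_justify_risks: bool) -> bool:
    """The view David favors: one standard across all pediatric studies.
    Step 2: net risks are acceptable only up to (here, an assumed
    threshold) a minor increase over minimal risk."""
    nr = net_risk(risk_level, benefits_justify_risks)
    return RISK_ORDER.index(nr) <= RISK_ORDER.index("minor_increase_over_minimal")

def jerry_rigged_test(risk_level: str, benefits_justify_risks: bool,
                      offers_direct_benefit: bool) -> bool:
    """A reading that tracks the regulations: in a prospect-of-direct-
    benefit study, any net risk at all is disallowed; in other studies,
    net risks may go up to a minor increase over minimal."""
    nr = net_risk(risk_level, benefits_justify_risks)
    if offers_direct_benefit:
        return nr == "none"
    return RISK_ORDER.index(nr) <= RISK_ORDER.index("minor_increase_over_minimal")

# The puzzle from the conversation: a couple of extra blood draws (a
# small net risk) pass in a non-beneficial study but fail in one that
# offers a prospect of direct benefit.
assert uniform_test("minimal", benefits_justify_risks=False)
assert jerry_rigged_test("minimal", False, offers_direct_benefit=False)
assert not jerry_rigged_test("minimal", False, offers_direct_benefit=True)
```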
Sophie: Right. Okay. So we wanted to step back a bit and ask a big picture question about your work. It seems to us that what you're often doing is making recommendations as to how IRBs should make decisions where your goal is to recommend the most reasonable, defensible thing that IRBs can do that is consistent with the letter of the law.
So you're obviously not recommending that IRBs go against the regulations, but you're often pointing to inconsistencies or problems with them and recommending courses of action that, while consistent with them, are maybe not their most natural interpretation. So you're interpreting the regulations basically as liberally as you can, so that you can recommend courses of action that you think are defensible while still sticking to them. Is that how you understand what you're doing?
David: Yes, exactly.
Leah: Okay. And to the extent that others disagree with you, is it usually because they disagree about what the most defensible course of action is, or because they interpret the regulations differently, or because they see their mandate differently?
David: I think most people who disagree with me disagree because their interpretive process is different. So let's just talk about direct benefit again. We have this concept of direct benefit that's in the regs. We're not going to get rid of it, so we've got to work with it. How do we work with it? Well, my view is: let's understand why it's in there - and I sort of explained why it's a bit of a historical accident or mistake that it's in there.
And so given that, my thought is what we should do is try to interpret it as flexibly as we can without contravening ordinary standards of English and how people would understand what "direct benefit" means. That means we should try to understand it in a way that avoids the inadvertent limitations that we're stuck with. That's basically my view.
And so my view, for direct benefit, for example, is that we should think of it fairly broadly, as a benefit that accrues to the kids from being in the study. What other people do is they say, "No, that's cheating too much. We have this term 'direct benefit.' So what we need to do is bracket the ways in which it might inadvertently limit research and just do an honest assessment of the most plausible way to understand direct benefit."
So maybe the thought is this: to come up with the right, maybe unbiased, understanding of these terms, what you should be doing is bracketing the implications of alternative understandings. It's as if we gave this wording to a bunch of perfectly expert users of English, but we didn't tell them anything about the regulations or what it would mean for what research we can or can't do. We just said, "What do you think is the best way to understand this term or this word?" And I think that's what a lot of people are going by.
Okay. So Leah's question was why do people disagree with me? I think that's the side that most people come from. They think that you need to take this more, I don't know, kind of ordinary language evaluation of what the regs are. That's what most people do.
Leah: Okay. We want to now transition from focusing on risk to focusing on choice. So even when children aren't old enough to consent, researchers are usually required to get their assent, that is, their affirmative agreement, to participate in a study. However, federal regulations do not specify how old children need to be to give assent or what capacities they need to have.
You've argued that the age at which children's assent should be considered morally significant in non-beneficial research, that is, research that doesn't stand to benefit them directly, is 14 because that is around the time that children develop the concept of altruism. While children generally help others before 14, they don't do so for altruistic reasons. Rather, they do it to get rewarded or because a trusted adult tells them to, or because they feel bound by unwritten social rules or something else.
We're curious about this because we don't generally require that adults participate in non-beneficial research for altruistic reasons. That is, we think an adult can validly consent to participate in a study that offers them no therapeutic benefit, even if they're just in it for the money or the social clout or what have you. And you say in your work that we should not require children to assent for altruistic reasons either. So why think that this concept of altruism is so important for children's assent?
David: Good. So the basic idea is that the appeal to the concept of altruism isn't about the motivations of the kids in saying yes or no. Instead, it's supposed to be a test of whether or not they understand the research sufficiently to give assent.
So the basic idea is something like this: There are different justifications for the assent requirement, but I think this argument works for all of them. So I'll focus on the primary one, which is basically the view that even when individuals can't consent for themselves, if they can understand to a certain extent and can make decisions, then as a matter of respect, we should still ask them whether or not they want to be in the study. And at a minimum, we should take their answers seriously.
So when is it respectful to ask somebody to make a decision, and when is it disrespectful? Well, here's at least one principle that seems pretty clear: It's not respectful to ask people to make decisions that they just can't understand.
So in the context of pediatric research, I think it just wouldn't be respectful to take a two-year-old and ask them whether or not to assent to a complicated bone marrow transplant crossover gene therapy study. That's not respectful. Why not? Because the kid just can't possibly understand it, right? So that's the general principle. We could talk more about it, but it seems roughly right to me, and if you think that's right, then what it suggests is that in order to be able to give assent, kids have to understand the research.
And now obviously you could respond, well, research is complicated, includes a lot of different things. What are the different things they have to understand? What you could try to do is you could try to go through all of them.
And what that paper involved was basically trying to do a shortcut, which is asking the question: Well, what's the hardest or what's one of the hardest aspects of research to understand? And then the assumption would be that kids come to understand that one later in life or last. Right, that they'll understand the easy things first, like, you've got to go into the hospital tomorrow. That's easy to understand. And probably 3-4 year-olds can understand that. But that doesn't mean they're capable of giving assent because there are other aspects of the research they can't understand.
So the thought was, let's look for the last thing, or one of the last things, that kids come to understand. And my thought was that one of the most complicated or hard-to-understand aspects of research is just this: that research is intended to do things to kids, to collect information that can be used to help others - altruism, in other words: doing things to them, posing risks to them, to help others. That's a fairly sophisticated, complicated concept. And so the thought was it's plausible to think that's going to be one of the last, or maybe the last, aspects of research for kids to understand.
So the thought was, well, okay, if that's right and we can identify the point at which kids understand this, then we'll have at least a reasonable proxy for when they can understand the research. And then, under the principle that it's respectful to ask for assent at the point at which they can understand, the point at which they have the concept of altruism suggests the point at which it's respectful to ask for assent. The rest was then doing the paper. So that's the philosophy of it.
So I'm able to do that stuff - or at least I was; I was trained to do it. But what that requires is that you figure out when kids have the concept of altruism. That's obviously not a philosophical question. It's a developmental psychology question. I talked to psychologists, and basically their data suggested that on average - now, of course, it varies across individuals, so it's not magically at 14 that suddenly everybody gets this concept - it's probably between 11 and 14 that the vast majority of kids get it. So there's a range.
And then the final piece of the puzzle was, okay, if there's this range, should we go with the lower end? Should we go with the middle? Should we go with the upper end? Should we ask investigators, "Well, you got to test every kid?" The "test every kid" thing seemed excessive to me, so we could do the average. That just seemed too wishy-washy and Aristotelian to me. So I thought, okay, it's the low end or the high end.
And my thought was - I mean, I don't know if we're going to talk about dissent, because there isn't an explicit requirement of dissent in the regulations, which I think is a mistake - but to the extent that we can assume investigators are respecting the dissent of children, my thought was: if we go with the high end rather than the low end, then we won't be asking for assent from some kids who could understand the research. But as long as we're still respecting their dissent, they're not going to be forced into something they really object to, because they can dissent, and once they dissent, we'll take them out. And so that led me to think that we should go with the upper end, 13 or probably 14, as the age. So that was sort of the idea.
So basically, you're right. It's not about insisting or even nudging kids to enroll in research for altruistic reasons. Rather, it's a proxy for whether or not they understand.
Leah: I guess intuitively I just have this reaction of, you know, when we ask a six-year-old to pick up the blocks they played with in their kindergarten classroom so the kids after them can play with the blocks, or something like this - is that really so fundamentally different from when we're asking kids to engage in minor things in the course of research? So why doesn't the concept of assent work for six-year-olds in cases that are sort of analogous to the block example?
David: Yeah, well, so this is just an appeal to the developmental psychology literature, which, again, I'm definitely not an expert on. At least at the time, when I talked to developmental psychologists and read the developmental psychology literature, what they said was: you're right, in that case, if you say to the six-year-old, "Pick up your blocks so other kids don't trip over them, or put them away so they can use them or they know where they are," kids understand that. But - and this is why it's the concept of altruism rather than something more general, like just doing things to help others - according to developmental psychologists, at least the vast majority of kids do get that, and hopefully they'll do it, but they're not doing it because they think there's an ethical reason to do things to help other people.
They're doing it because they think if they don't do it, they're going to get in trouble. A lot of kids actually think there is somebody - and a lot of parents encourage this - who's watching kids' behavior all the time and, like Santa Claus, knows exactly who's naughty or nice. The explanation I was given is that in those cases, at least for a lot of kids, they often will do the right thing, but it's not because they're motivated by altruism. They're motivated by something else: they're going to get yelled at, or they have to do it, or they have no choice, or they're going to get punished, or something.
Sophie: Right. And the thought is that indicates that they don't actually even understand the purpose of the activity.
David: I think in a way, that's probably right. I think it probably indicates there's a sense in which they don't even understand the real reasons why they should put their blocks away.
Sophie: Yeah. Okay, so we're going to move to the last part of the interview. And we wanted to talk a little bit about some of the big picture changes you'd like to see in the practice of pediatric research. One question is about legal standards. Listeners might recall that the regulations on human subjects research in the US are not laws. If you don't follow them, then you can't get federal funding for your research and you can't have your product approved by the FDA, but it's not as though you've broken a law.
In 2003, you and a colleague of yours called for the development of legal standards for pediatric research. And to our understanding, this was in response to a court case in Maryland about the controversial Kennedy Krieger Institute-led lead abatement study. This was a study done in Baltimore between 1993 and 95 that aimed to find cost-effective ways to reduce lead exposure in low-income housing. The researchers wanted to know how effective different levels of lead abatement were in reducing children's blood lead levels.
So they recruited families with young children, largely low-income black families, to live in homes with varying levels of lead abatement. The participants were offered reduced rent as an incentive to participate. And although they were informed, questions were raised about whether they adequately understood the risks of participating.
The parents of two of the children enrolled in the study sued the Kennedy Krieger Institute, claiming that the researchers failed to warn them in a timely manner about the elevated levels of lead dust that were found in their homes. And although Maryland's lower court dismissed the case, the Court of Appeals decided that adhering to the federal regulations was not sufficient for avoiding legal liability.
We want to ask you about your views on that particular study. But first we want to ask, do you still think that we need legal standards for research and how much of a problem has this been since you wrote about it?
David: In practice, I don't know how much of a problem it really is. But in principle, it's a serious problem. It's exactly as you said. You could have a pediatric researcher who, in all good conscience, follows the federal regulations to the letter. They don't do anything that's inconsistent with the regulations. They don't even, say, adopt one of my kind of flexible, slightly fanciful understandings of any of the terms. They do it all by the letter. They're still vulnerable to being sued.
And the problem, as we were saying before, is that there's this background worry people have about exposing kids to any risk for the benefit of others. And that, at least, is in tension with a lot of state law, which says, again, that parents are supposed to be making decisions for their kids based on what's in their best interest. So you have this possibility that following the federal regulations could nonetheless get you in serious legal trouble. And that's what happened. It's a more complicated case, but in essence, that's what happened here. And that just seems to me like a bad situation to be in. If we have these regulations and researchers follow them faithfully, then that should offer some kind of legal protection for them.
That was the first part of it. The second part is just to have a kind of consistency. If we don't have legal standards, what we have now is that, yeah, if you do that study in Maryland, you might get sued. If you do it in Oklahoma - I'm just making these up - it's okay. You could do part of it in California, but not the other part. In terms of trying to do research and find ways to improve pediatric care, that just seems really counterproductive.
Sophie: Mm-hmm. And it could end up being more conservative because people might be afraid to do research even by following the regulations.
David: Exactly. If you're worried about getting sued, you might say, "Look, even minimal risk research, I'm not going to do it, because I don't know whether or not it could ruin my career." The other aspect of this - this isn't tied to Kennedy Krieger - but the other aspect, as you emphasized, is that in almost every country I'm aware of, the laws regarding research are basically similar to other laws, in that they apply to a geographical region or a jurisdiction, like a country.
So if you go to France and you try to do research anywhere in the country of France, you need to follow the French regulations. Same with Vietnam, same with Norway, same with lots of countries. That's not true, as you said, in the US. The US regulations don't apply to a country or jurisdiction. What they do is they attach to sources of funding and approval. If you get funding from the US government, if you use US investigators, or if you're going to apply, use your study to apply for approval from the FDA, then you have to have followed the US regulations.
But what that means is that for-profit companies and anybody else could be out there doing research on their own, and they're not subject to any research regulations at all. And that seems - now again, is it a problem in practice? I think it's not entirely clear, but at least in principle, that seems like a bad situation.
Leah: Mm-hmm. I assume that the FDA requirement does practically speaking limit what a lot of private companies are willing to do.
David: I hope so. Interesting. So this is speculation. I have a quick ax to grind here, which is that if you look at the literature on the ethics of running drug trials, it's voluminous. There are just thousands and thousands of articles. Okay, now do this tomorrow: go and look for papers on the ethics of the drug approval process. There's almost nothing. The FDA process is basically a black box. Now, there are reasons why they want to keep it a black box - there are proprietary interests in all of this - but it's really hard to know what's going on there and how things work. Which isn't to say that they're making mistakes. It's just to say that it's really hard to know.
So, in this case, if they're doing it, they're doing it because they want to develop a drug. They're going to want to get that drug approved. And so then the studies on which they try to get that approval are going to have to go to the FDA. So they're going to follow the regs. That seems reassuring to a certain extent. There are at least two possible worries you could have with it.
One is that they might want to do studies that are more early phase studies, just like proof of principle. Let's see if we can get this thing into kids. Let's see how they respond to this thing. It's not attached to any drug they're trying to develop. They're just trying to see.
The other thing I've asked people at the FDA - and I've gotten different answers on this - is: if I have a drug and I want to get approval for it, so I submit a package to the FDA for approval of that drug, what has to be in that package? To be really comfortable here, you want the answer to be any and all studies that you ever did with that drug. Then you have some reassurance that they were all consistent with the federal regulations. But it's not clear that that's true. At a minimum, this much is true: every study the company puts in the package has to be consistent with the regs. But is the FDA going to come back and say, "Are these all of them? Are there some phase one studies that didn't go that well that you're not putting in this package, that maybe weren't done exactly how we'd want them to be?" It's not clear the FDA does that.
Leah: Okay. What is your take on the ethics of the Kennedy Krieger Institute lead abatement study?
David: Well, so there are a lot of different aspects to it, and I don't think every aspect of it was appropriate. Just to take the aspect that the suit was connected to - this was about trying to reduce the amount of lead dust in homes. Lead is really bad for the developing brains of kids. You want to keep them away from lead, and that was the goal of the study - or at least to minimize their exposure.
And the way they tested it was they had these two methods: the sort of standard wipe method and this experimental cyclone lead dust collector method. And the study had five different groups. One group was houses that had lead paint in them that got completely abated. Another was houses that were built after 1978, when lead paint was prohibited, so they never had any lead paint in them. The other three were abated to varying degrees. Those are the really important ones.
So what they did is they just collected dust periodically from the homes using these two methods. What the parents argued was that some of those tests had suggested the possibility of worryingly elevated lead levels in some of the homes, and the parents weren't warned, or at least weren't warned in a timely manner. And as far as I can tell - I don't have perfect information on some of this; there was a legal suit, and people close down on information - part of the problem was that they batched these things. They collected a lot of information and then waited - I don't know how long, but they waited - before they looked at it and got back to anybody about the results.
That, I think, just seems to me like a mistake. If you're doing this and you're really trying to protect the kids, then what you want to do is you want to look at the results in as close to real time as you can. And if you get an alarming finding, then what you do is you let them know.
So, I'm on the ethics review board for the ABCD Study, which is a longitudinal study of kids. They do all sorts of tests - suicide screening, MRIs - and they have this very nice process when they do an MRI. If the radiologist looks at it and says, "Well, this doesn't look good," they don't batch that and wait six months. They call up the parents and say, "Here's what we found. Let's talk about this and decide what to do." That's what the Kennedy Krieger study presumably should have done. And it didn't do that.
Sophie: When you say there were increased lead dust levels, do you mean relative to the levels that existed when they agreed to live in the home?
David: No, relative to the thresholds where you'd start getting worried.
Sophie: I see. But when the participants consented to living in the homes, presumably, they already thought that the levels might be at a higher, more worrying level than what you might see in other homes. So is it that the researchers were finding that the levels were even higher than they had previously informed them they might be?
David: Well, the people who did this study were lead paint experts. They really knew what the impact was. They were some of the people in the world who had figured this out and were really worried about it. And so the initial proposal was: lead is bad for kids' brains, so we need to make sure that no little kids are in houses that have lead paint. We want either complete abatement or houses that never had lead paint in the first place. That's the gold standard. That's what people were after.
The problem was that you have all this old housing stock in places like East Baltimore that originally was painted with paint that included lead. Now, I don't know whether this is true or not, but the claim was made by landlords that a full abatement - in other words, taking out all of the lead paint from those properties - would just be prohibitively expensive; some estimates were that a complete abatement of one of those houses would cost more than the house was worth. And so no property owner was willing to do it. And no billionaire was stepping up to give them money to do it. And the government wasn't either.
So the investigators fought for a while to try to get full abatement. They weren't getting it. They thought, "We have a bad situation. These kids are still being raised in these houses. What can we do?" And they thought, "Well, maybe this goal of getting rid of all the lead paint is just too optimistic. And maybe clinically it's not necessary. We don't know. Maybe there are thresholds here, and if we get rid of enough of it, it might have a protective effect - a partial effect or maybe complete protection."
So what the studies did was they went in and removed some of the lead paint from these houses, and then kids lived in them. At that point, the worry isn't the lead paint itself, right? The worry is that it chips off the wall, the kids rub up against it, you get this dust, and then they breathe that stuff in. That's how the lead gets into their systems.
And so at a study level, we knew some of these kids were going into homes that had partial abatement. But what we didn't know - and that's what this study was about - was how much lead dust gets produced in those partially abated homes. That's what they were trying to find out. They knew, as you said, there was some lead paint in the partially abated homes anyway. Now, as they're going through this study, doing these wipe tests, they're starting to get readings of the lead dust levels: "Yeah, that's not a big deal. That's a little bit worrisome. And that's worrisome." And at least in some of the cases, they were getting findings that said, "This is a worrisome level of lead dust being produced in this particular house." And what you would have wanted, ideally, was for the investigators to then contact the parents or get the kids out of the home.
So there are aspects of it that are worrisome. There was also a lot of payment. The courts were really worried about the payment. I'm not so worried about the payment. Maybe I'll just say with respect to the lead dust, I think both courts got it wrong.
So what the lower court said was that researchers, as researchers, don't have any obligations at all to, say, warn people about health risks that they identify in their kids. And I just can't imagine any view on which that's right. Ethically, that seems like it's just got to be wrong. If you are a researcher and you find out something that's really, really important for some kid's health, you need to tell them. I think most people agree with that. So the lower court got it wrong.
It got appealed. And I think the higher court got it wrong too, because they stepped back and brought in this question I've been mentioning a couple of times, of exposing kids to risks for the benefit of others at all. And they took an extremely dim view of it. If you read the ruling, it's striking - they invoke the Nazis all over the place, and they're really worried about the study. So I think they went overboard.
The big worry that people have is the worry of knowingly putting little kids - particularly poor, often black or minority kids - in homes that, although partially abated, you knew still had some lead paint in them. That, I think, is the real ethical dilemma or challenge that gets raised by these studies.
It's a version of what's called in international research ethics, the standard of care debate. And it's basically the question: is there some level of care that you need to ensure across all the arms of a study? And how a lot of people answer that is they say, yes, you can't have an arm that offers something that's regarded as inappropriate. So in a drug trial, you can't offer a drug that's less appropriate.
So in the example you started with at the beginning, maybe you can't offer that cheaper drug. This came from the HIV studies in Africa where they couldn't afford the most expensive treatment that was being used in the US. And they said, "Well, let's test something that's less expensive that might be feasible there." So you are knowingly giving people what was likely less good than what you could otherwise give. Is that an ethical thing to do? Some people think it's just never an ethical thing to do.
I think it can be ethical, but it needs to meet certain conditions. I think basically there are four conditions under which you can do that [a checklist sketch in code follows the list]:
- When you need to do it to collect valuable information
- The study is relevant to the population or the community in which you're doing the study
- The information that you're collecting will be valuable to that community - so it'll go back to them or you'll report to them, or it could actually help them in some practical way
- And fourth, critically, that you shouldn't be posing greater risks to the subjects than they would face outside of the trial
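[Editor's note: a minimal Python sketch of the four-condition checklist above; the field names are the editor's illustrative assumptions, not David's wording.]

```python
from dataclasses import dataclass

@dataclass
class StandardOfCareCheck:
    # 1. Offering less than the best available care is needed to collect
    #    valuable information.
    needed_for_valuable_information: bool
    # 2. The study is relevant to the population or community hosting it.
    relevant_to_community: bool
    # 3. The information will be valuable to that community (reported
    #    back to them, or of practical help).
    information_benefits_community: bool
    # 4. Participants face no greater risks than they would outside the trial.
    no_greater_risk_than_outside: bool

def acceptable(check: StandardOfCareCheck) -> bool:
    """All four conditions must hold for the design to be acceptable."""
    return all([
        check.needed_for_valuable_information,
        check.relevant_to_community,
        check.information_benefits_community,
        check.no_greater_risk_than_outside,
    ])

# David's reading of Kennedy Krieger, on the assumption that the partially
# abated homes were safer than participants' realistic alternatives:
print(acceptable(StandardOfCareCheck(True, True, True, True)))  # True
```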
And so in the Kennedy Krieger case, my view is it actually probably passed those tests. Now we can't be sure, but what they tried to do is they tried to get kids who otherwise would have been living in houses that had full lead paint. Now they tended to be minority kids from poor families and so it looks really bad. It looks like what they're doing is they're specifically targeting kind of the worse off, the most vulnerable kids. And in a certain sense, that is what they were doing.
But there's another sense in which, looked at from this point of view, I think it's justified: if it's true that those kids would otherwise have been in a house with full lead paint - and none of the houses in the study had full lead paint; they were partially or completely abated - then, while it wasn't an ideal setting, and it wasn't what you would want for your kid, it was better, presumably, than what the kids would get otherwise. And I think it's regrettable that kids are in those situations at all. But if it really was a valuable study, then I think it was acceptable.
When I wrote that paper, I got this scathing email from this guy in Europe who said to me, "Look, you think you're an ethics person? What are you doing going around trying to justify this partial abatement strategy? We know that lead is bad for kids. What you should be doing is you should be calling the public health authorities and explaining to them that there are kids in Baltimore that live in houses with lead paint and they got to get them out of those houses."
And that's a perfectly reasonable response. If you're a European, you might think, "Look, in the US they would never let kids, if they knew it, be in houses and grow up in houses that have lead paint in them." And I wrote back and I said, "I understand what you're saying, but I can assure you the public health officials in Baltimore and Maryland were fully aware of the fact that there are kids and there still are kids - I think it's not very many anymore, hopefully - living in these houses. They knew that. It's just nobody had done anything about it."
And so basically it's a question of what you think are the ethics of sort of second and third best alternatives, testing them and trying to see if they'll help to a certain extent. And so I think in the end, it's a hard question. You could argue either way. I think it was ethically appropriate.
I mean, there are people - Lainie Ross, who's a very prominent person, has argued this - who say you could see these as prospect of direct benefit studies rather than risky studies. If the alternative truly was being in a house with full lead paint, then it's better to be in a house with less lead paint than more lead paint. So all the kids were better off in the study than outside of it. That looks like a direct benefit study, not a risky study. So I think, in the end, the only real objections you can have to something like the Kennedy Krieger study are on complicity grounds: these researchers were placing and studying kids in homes that had some lead paint in them. The homes were abated, but at least three of the five groups were not fully abated, and the researchers knew they wouldn't put their own kids in homes like that. And it raises, I think, really interesting questions about complicity. Some people think you just should never do that. I think that's a mistake, but it's a separate debate.
Leah: Do you know if the information that we gained from that study wound up being valuable to future efforts to do lead abatement?
David: So it's hard to answer that question, or at least any answer I give, or most people would give, is going to be based on a lot of speculation. There is some data to suggest that it was valuable. First, my understanding from the things that I've read is that all three of those partially abated groups showed pretty significant overall declines in blood lead levels in the little kids who lived in those houses, and that was really what they were after. They were trying to protect the brains of little kids. And the study suggested that those partial abatement processes did result in significantly lower blood lead levels in at least most of the kids who were in those homes. So that's the first thing.
And then the second thing is, if you look, there are a number of both state and national efforts in the mid-90s that really start promoting abatement. And I'm assuming that those two things aren't just a temporal coincidence, but that those policies, guidelines, and regulations, at both the state and federal level, were at least in part a result of the Kennedy Krieger findings, which showed that partial abatement was effective, at least to a certain extent. It's obviously not as good as complete abatement. It's not like the kids' blood lead just disappeared, but it went down significantly in most of the kids.
And so I'm assuming that yeah, there were results that looked positive, and I'm assuming that those results informed those policies and guidance, but again, that's just speculation. I mean, the timing is there. It's exactly the same topic. It's hard to believe that the people who were writing those regulations and guidance wouldn't have looked at the Kennedy Krieger results and been informed by it. So I'm assuming that in the end, it actually did have some beneficial impact.
Sophie: You said the scientists found a reduction in the kids' blood lead levels. So then was it scientifically important that they had kids in the study who had previously been living in homes with lead paint, so that at baseline, they had some lead in their blood?
David: Yes. So this is one of the big debates and one of the controversies about the study, but the claim is made that at least most of the kids or the vast majority of the kids who were in the study, if they hadn't been in the study, the claim was they otherwise would have been and would continue to be in homes that didn't have any abatement at all.
I mean, there are estimates that in those neighborhoods, something like 95 percent of the houses hadn't had any abatement at all. And the assumption is that's where those kids would have been living if they weren't living in these houses. Now, for kids who were moving - say, a kid moving into Baltimore - it's at least a hard thing to prove. But I think you can firmly say that at least most of those kids would have been in worse situations if they hadn't been in the study.
Is that true of all of the kids? I think that's probably impossible to say. It might not have been true of a few of them. Now, the consent form does warn about the possibility of lead and its dangers for little kids. And presumably, if the parents had an alternative, they would have put their kid in a house, I hope, that didn't have any lead paint in it at all. But putting something in a consent form and communicating it clearly are two different things, and who knows how clearly those messages were conveyed.
I mean, another thing that's interesting: by this time, this was really, at least among developed countries, almost a uniquely US problem. People had known about the dangers of lead paint for almost 100 years, and there had been a lot of efforts in the 1920s and 1930s to try to reduce it. And in fact, a lot of European countries prohibited lead paint completely, as early as the 20s and 30s.
And I'm not a historian, so I don't know this stuff that well, but my understanding is that there was an enormous advertising campaign by basically the lead paint industry to try to convince people that lead paint was safe and it looks like those efforts were successful. And that's why they continued to have lead paint in these homes through the 60s and 70s when a lot of other countries had eliminated it. So in a way that's sort of the real tragedy of the story.
Leah: So I just want to make sure I fully understand. So basically we know what the kids' baseline lead levels were because at the beginning of the study they got blood tests that assess that, and we also know that over the course of the study, most of these kids' blood lead levels decreased, but there were some kids whose lead levels went up during the study. And so the presumption maybe is that those kids were made worse off by having been in the study, or at least were not benefited by it.
David: Right, it's possible. Although it's interesting - and I don't know how accurate this information is - but right, it was the parents of two kids who brought the suit. And the worry was that their kids' lead levels had gone up and that they weren't notified.
And to the extent that that's true, I think that was a mistake. And I think the defense was that they were using this experimental cyclone collector, which to me doesn't seem like a very compelling argument. But what I have heard was that at least one of those kids was in a fully abated house. So that kid's lead levels did go up, but it basically could not have been a result of being in that house, and presumably, therefore, of being in the study. Now, I don't know what the speculation is about where that came from - whether they were visiting friends or staying over at friends' houses or what was going on. But at least for that kid, as I understand it, their lead levels did go up, but if they were in a fully abated house, it couldn't have been because of the study.
Leah: Right. I mean, arguably for them to have been harmed by being in the study, it also would have had to be the case that, counterfactually, had they not been in the study, their lead levels would have gone up less than they did in this study.
David: Yeah, exactly. You have to use the counterfactual again. You have to look at where they would have been living otherwise. And if it is a result of visiting other friends in the neighborhood, or even staying over at their houses, then if they had moved to a completely different neighborhood, they likely would have been better off. But you assume that for the vast majority of kids that wasn't an option, or else the parents would have taken it, I hope.
Leah: Well, we're nearing the end of our time. We like to close the podcast by asking our guests, what is one rule or norm broadly related to what we've been talking about today that you would change if you could and why?
David: So I'm going to try to give two instead of one. The first one is about participation in research and trying to get people to think about the participation of minors in research in a different way. I think a lot of the worries expressed about the ethics of pediatric research are based on a view of kids as very passive subjects.
So this goes back to the ongoing debate about whether you call the people in research subjects or participants, and people are trying to move to "participants," as I understand it, to emphasize the more active role that they play, the contributions they make to the research, as opposed to being just passive individuals who are subjected to research procedures.
I think that's a great idea. And I think it does provide more recognition for the importance of the participants and what they're doing. But a lot of people balk when you suggest that "participants" should be applied to minors as well, with the thought that if they can't consent for themselves, then they really are passive subjects rather than participants.
And I think, first of all, we've known from day one that 15- to 16-year-olds can understand just as well as most adults can, so their assent is, I think, basically as normatively weighty as a 25-year-old's. And secondly, for me at least, the distinction between participants and subjects isn't a matter of whether the individual consents for themselves. For me, it's a matter of what role they play in the study. And even two- and three-year-olds do things that actively contribute to research studies in valuable ways, and I think that's something we need to recognize better.
I mean, people who do pediatric research ethics need to recognize it. I think also, to the extent these things get to courts, courts need to realize it as well because I think it just transforms the way you think about the ethics of pediatric research. And we've done a bunch of studies where we talked to kids from 7 to 16, and they'll agree. They see themselves as making valuable, active contributions. They feel good about it. They feel proud about what they're doing.
The second thing I want to just sneak in is part of the problem here is just this way of doing research. And we're still relying on what I think of as a segregated model of clinical research where clinical research is considered distinct from care. It happens in a different place. It's done with different people. It's done by different people.
And I think this really is a legacy of things like Tuskegee, where the problem, as people regarded it, was that we pretended it was clinical care when it was really clinical research. And so people thought the solution was to really divorce clinical research from clinical care.
Now, I think that has been valuable in terms of preventing abuses. I think there are just very, very few abuses you can point to over the last 30 or 40 years. So I think it really has been protective of participants, but I think it's come at an enormous cost in terms of the amount of information that we've just lost. Every clinical encounter, particularly in an era of electronic health records, is a potential data point. And if you put all that together, smart epidemiologists and statisticians could learn a lot that could really help us.
And so I think what we need to move back to is a more integrated model - one that is cognizant of the worries about pretending that research is clinical care and about conflating the two, and that does it right this time.
Leah: That makes sense. I think that brings us to the end of our time. Thank you so much for such an interesting conversation and for coming on the podcast.
David: You're welcome. Good to see you again.
Host: Bio(un)ethical is written and edited by me, Sophie Gibert, and Leah Pierson, with production by Audiolift.co. If you want to support the show, please subscribe, rate, and review it wherever you get your podcasts, and recommend it to a friend.
You can follow us on Twitter @biounethical (no parentheses) to be notified about new episodes, and you can sign up on our website, biounethical.com, to receive emails when new episodes are released. We promise we won't spam you, but we may reach out to let you know about upcoming guests and give you the opportunity to submit questions.
Our music is written by Nina Khoury and performed by the band Social Skills. We are supported by a grant from Amplify Creative Grants. Links to papers that we referenced and other helpful resources are available at our website, biounethical.com. You can also submit feedback there or email us at biounethical@gmail.com. Thanks for listening and for your support.