Adverse Reactions

Bringing Cohorts in Cahoots with Lab Science

July 06, 2023 | Julie Goodman, PhD, DABT, FACE, ATS, Gradient | Season 3, Episode 3

The fields of epidemiology and toxicology sometimes find themselves at odds, but Gradient’s Julie Goodman, an epidemiologist and toxicologist, shares how the two disciplines can complement each other to evaluate public health risks. Dr. Goodman also dives into the finer points of systematic reviews and meta-analyses in her conversation with co-hosts Anne Chappelle and David Faulkner.

About the Guest
Julie E. Goodman, PhD, DABT, FACE, ATS, is an epidemiologist and board-certified toxicologist with over 20 years of experience. She is a Principal with Gradient and applies her multidisciplinary expertise to evaluate human health risks associated with chemical exposures in a variety of contexts, including products, foods, and medical applications, as well as occupational and environmental exposures.

Dr. Goodman is a fellow of both the American College of Epidemiology and the Academy of Toxicological Sciences.  She was also an adjunct faculty member in the Department of Epidemiology at the Harvard T. H. Chan School of Public Health, where she taught a class on meta-analysis for several years.  Before joining Gradient, she was a Cancer Prevention Fellow at the National Cancer Institute.  

Dr. Goodman has authored numerous original peer-reviewed research articles, review articles (including systematic reviews, meta-analyses, and weight-of-evidence evaluations), and book chapters on a wide variety of chemicals and health outcomes. She has presented her work to a wide variety of audiences.

Dr. Goodman obtained her master's in epidemiology and PhD in toxicology from Johns Hopkins University.

[00:00:00] Adverse Reactions “Decompose” Theme Music

[00:00:05] David Faulkner: Hello and welcome to Adverse Reactions. 

[00:00:08] Anne Chappelle: This season our theme is intersections, where we see toxicology intersect with another science.

[00:00:15] David Faulkner: Well, a lot of other sciences.

No person is an island, and no discipline has all the answers. But when scientific fields collide, some really interesting things happen. I’m David Faulkner,

[00:00:27] Anne Chappelle: and I’m Anne Chappelle. 

[00:00:28] David Faulkner: Welcome to Adverse Reactions Season 3: Intersections.

[00:00:32] Adverse Reactions “Decompose” Theme Music

[00:00:39] David Faulkner: Thanks for joining us just in time for today’s episode, “Bringing Cohorts in Cahoots with Lab Science.”

[00:00:46] Julie Goodman: For toxicologists, you might have a beautiful animal model but really think about, “Are we seeing this in people?” You know, if there’s good epidemiology studies and there’s no increased risk seen in these studies, then, maybe your animal model isn’t that relevant to people—and so, to consider that. I wish people in both fields would have a mutual respect for those in the other field and really just not be so siloed. 

[00:01:08] Adverse Reactions “Decompose” Theme Music

[00:01:11] Anne Chappelle: I want to say, “Welcome,” to my friend Julie Goodman, who is a principal at Gradient, which is an environmental and risk science consulting firm. 

[00:01:21] Julie Goodman: I’m excited to be here.

[00:01:22] Anne Chappelle: Full disclosure: Julie and I have known each other for some time, and I was thrilled when she accepted our invitation to join us today. 

[00:01:31] David Faulkner: Yay! Julie, you have a lot of acronyms behind your name: PhD, DABT, FACE, ATS. I have to wonder: did you always know that you wanted to have this really long list of titles? And also, how is your knighthood coming along?

[00:01:47] Julie Goodman: Working on the knighthood.

[00:01:48] David Faulkner: OK.

[00:01:49] Julie Goodman: I didn’t even really know these titles existed, except PhD, until after I started working, but I actually went to graduate school for toxicology. And then, in my first quarter there, I took an epidemiology course and had this revelation that you really needed toxicology and epidemiology—and nothing else! No, I’m just kidding. But basically, you need those two sciences to really understand disease causation in humans. Because, of course, in toxicology, you’re looking at mechanisms and high doses in animals that you can hopefully extrapolate to low doses in people. But in epidemiology, you’re looking at statistical associations in people, and because they’re associations, they don’t really always tell you about causation. So, if you have them both together, then you can really hopefully start addressing questions about causation.

[00:02:38] Anne Chappelle: So, could you explain, in language that my mom would understand, what it is that you do as an epidemiologist and toxicologist?

[00:02:48] Julie Goodman: Basically, I’m a consultant. I’m not in a lab. I sometimes get raw data but not often. Usually, I’m really looking at published or publicly available studies and trying to understand, as a whole, what they are saying. So, it could be about a certain chemical and a certain health outcome, say a specific cancer. And so, I would look at all the epidemiology studies on that topic that I could get my hands on, all of the bioassay data, all the in vitro studies, and take them all together. And, of course, when I say “I,” I really mean “I and my team of people at Gradient” cuz it’s a very big effort to do this. But basically, what do they all mean together, and are the results from epidemiology and toxicology consistent, or do they contradict each other? And if they seem to contradict each other, how do you explain that? What’s the most likely explanation for that?

[00:03:34] David Faulkner: So, I think what you’re describing sounds like something called a meta-analysis, and I know that phrase has shown up in a couple of places as different publications have come out about COVID-19 in recent years. So, when we talk about meta-analyses or things like systematic reviews or weight of evidence, it’s like, “These all sound like the same thing.” Are they different? What are the differences between them?

[00:03:57] Julie Goodman: Well, a systematic review, as the name implies, is one where you are looking at the literature as a whole and doing so in a systematic fashion. So, having a systematic way, a method of making sure you basically obtain all the relevant literature and you include all relevant studies, exclude non-relevant studies, and you evaluate them all in the same way. You look at study quality. Based on this evaluation, you interpret individual studies and then studies as a whole based on looking at these kinds of study-quality aspects. 

A meta-analysis is a type of systematic review where you’re actually calculating an overall estimate from all the studies you look at. In epidemiology, you often have studies that look at, say, relative risks or odds ratios, some measure of risk, and each study reports that measure. So, you take one risk estimate from each study, and then you take a weighted average across all the studies, and that is what a meta-analysis is.
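To make that “weighted average” concrete, here is a minimal sketch of the standard fixed-effect, inverse-variance approach in Python. All of the study numbers below are made up for illustration; a real meta-analysis would also consider random-effects models, heterogeneity, and study quality, as Dr. Goodman describes.

```python
import math

# Hypothetical per-study results: relative risk with 95% CI bounds.
studies = [
    {"rr": 1.20, "ci_low": 0.90, "ci_high": 1.60},
    {"rr": 1.05, "ci_low": 0.85, "ci_high": 1.30},
    {"rr": 1.45, "ci_low": 1.10, "ci_high": 1.91},
]

# Work on the log scale, where the estimates are roughly normal.
# Each study is weighted by the inverse of its variance, so more
# precise studies (narrower CIs) count for more.
num, den = 0.0, 0.0
for s in studies:
    log_rr = math.log(s["rr"])
    # SE recovered from the 95% CI: (ln(high) - ln(low)) / (2 * 1.96)
    se = (math.log(s["ci_high"]) - math.log(s["ci_low"])) / (2 * 1.96)
    w = 1.0 / se**2
    num += w * log_rr
    den += w

pooled_log = num / den
pooled_se = math.sqrt(1.0 / den)
print(f"Pooled RR: {math.exp(pooled_log):.2f}")
print(f"95% CI: {math.exp(pooled_log - 1.96 * pooled_se):.2f}"
      f"-{math.exp(pooled_log + 1.96 * pooled_se):.2f}")
```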

[00:04:54] David Faulkner: When you’re setting up these systems or criteria, is there a way to game the system, basically? Like to set up the criteria so that you are more likely to include things that you want to include because they’re favorable to your viewpoint or remove things that you would rather not include in your analysis?

[00:05:14] Anne Chappelle: Let me turn that around and say that I am always worried when these meta-analyses or systematic analyses are done because it’s so hard to say that you’ve gotten quote-unquote everything. Cuz you’ve got studies that may be from China or Russia or some other country, and it’s hard sometimes to really find all of that. So, when you’re looking at these studies, how do you make sure that they’ve really thought about all of the data or haven’t excluded certain data? Isn’t there always somebody that’s gonna come back to you and say, “You work for industry; you’re cherry picking,” or “You work for academia; you’re only choosing certain data”?

[00:06:03] Julie Goodman: There will sometimes be people that say that, yes. You could say, “Yeah, there’s no way that exposure metric is reliable at all,” and that’s just a very poor-quality study. Or sometimes, you’re actually looking at specific levels in air or of a chemical in blood or urine or something, and you could say, “Ooh, that’s a really good exposure metric.” But most studies, I would say, are somewhere in between, where it might be okay but it’s hard to say, and you don’t wanna just throw it all out cuz it’s not perfect. But then, you don’t wanna accept it as completely reliable because it does have some issues. And so, you have to make a lot of subjective decisions, but the key is to be transparent about the decisions you make. Someone might say, “I would’ve made a different decision,” but that’s okay. And then, also, what’s good is if I do a study and make this decision, and then you do it and make a different decision, let’s see if the results are different. If they’re the same, then it shows you that decision didn’t have an impact on the results. But if that decision did make a difference, that tells you, “Okay, we really need to make sure that these exposure metrics are right.” Because if they’re not, it’s gonna totally change how we interpret the data as a whole.

[00:07:00] David Faulkner: I think that’s an important lesson for anybody doing science: you should be able to reproduce this.

[00:07:05] Julie Goodman: Yeah, epidemiology and toxicology have a lot more in common than you’d think in how you interpret the data. But another thing I am loving, and again, not just for epidemiology, is that there are now several places where you can register a protocol. So, before you conduct a systematic review or a meta-analysis, you submit your protocol, and it’s basically on record as saying, “This is how I’m going to do it.” And the idea is that should minimize bias because you’ve decided before you started how you’re going to look at the data—and you’re not beholden to it. If you’re looking and then you say, “Oh, we really should do it this way,” you can put in an addendum or whatever to say, “Well, we changed the protocol for this reason.” But the idea is, I think, that’s another way to help show, “Look, we’re not letting the answer drive the analysis. We had the analysis plan before we started, so it’s as objective as possible, and now, we’re gonna go and do it—and we have proof.”

[00:07:56] Anne Chappelle: Do other people get to critique your systematic review—or just like swipe left or swipe right on some of these reviews?

[00:08:04] Julie Goodman: I didn’t know people were swiping. You submit them for publication in the peer-reviewed literature, so reviewers can have their day, but you can also do a review and, if you want, submit it for regulatory comment; they don’t just go out into the ether. So, hopefully, reviews are judged based on the merits of their methods. I strongly feel that a good scientist should be able to judge another scientific study in their field by reviewing the methods and determining whether those methods are adequate, regardless of who funded it, regardless of who’s publishing it. You should be able to look at those methods and draw conclusions as to how reliable those results are. 

[00:08:39] David Faulkner: So, it sounds like this is, I guess, the most objective and rigorous way to do research, basically. Cuz, you know, an individual study, you can always point to it and just say, “It’s an individual study. It says what it says and maybe it’s a really well-done study, but even so, it’s just one.” So, seems like this is…averaging isn’t the right word…

[00:08:57] Anne Chappelle: The weight of evidence. 

[00:08:58] David Faulkner: Yes, this is the way that you would decide what goes in, like, a textbook or something. Or…I guess, actually, that’s a good question: what do we do with these types of reviews once we have them? Obviously, you have a way that you use in your line of work, but what becomes of these reviews?

[00:09:12] Julie Goodman: I think they’re used for all sorts of things and the same thing for toxicology, right? It can be used to submit to regulators. If you wanna submit data to a regulator on a specific chemical, this is a good way to collect it all in one place. You know, in the world of pharmaco-epidemiology, that’s something you could imagine seeing all these studies reviewed on a particular drug to see either efficacy or toxicity, right? 

One project I worked on—it was very cool—using epidemiology and toxicology was benzoate. Benzoate is a preservative, and the acceptable daily intake in Europe was based on a study from 1965. And so, what we did was evaluate the literature from that point forward. And there was a lot of epidemiology and toxicology and toxicokinetics. And also, what was cool about this, too, was that there were also clinical data we could review. So, we looked at evidence from all these different streams, all these different disciplines, and based on all of them combined, we felt that it supported a less conservative ADI. We published it in the peer-reviewed literature, and then the analysis was submitted to EFSA, and they did end up actually raising the ADI. It was zero to five, and they raised it to zero to 10. That’s one way that it was used in a regulatory setting.

[00:10:30] David Faulkner: That’s really cool. That’s gotta be pretty exciting that you have a role in shaping regulatory policy for the European Union.

[00:10:38] Julie Goodman: It was pretty exciting. I really wanna see regulations based on good science. That was one reason, too, why that was exciting to see.

[00:10:45] Anne Chappelle: So, Julie, you and I have worked together on a bunch of different projects over the years, and one of the things that we’ve always struggled with was the exposure metrics back in the 50s, 60s, 70s, even more recently. A lot of times, those were really terrible, and you have these exposure scenarios in a facility that don’t reflect what happens now; it’s something that happened before they knew how to have good air-handling systems or whatever. So, how do you think about some of the old human data against today’s standards of evaluating data?

[00:11:31] Julie Goodman: I think you’re really getting at hazard versus risk, right? And it’s not just old studies versus new. It’s exposures even in an occupational setting versus an environmental setting. You can have studies that clearly show that high exposures to some chemical cause cancer, and it’s not debatable—the evidence is there. But just because something at a really high exposure can cause cancer doesn’t mean it necessarily causes cancer at a low exposure. And I think, maybe, people have an easier time understanding that concept when extrapolating high-dose animal studies to humans but have a harder time with it when extrapolating a high-exposure human study down to where we are today. 

If you watch the news at all, whenever they talk about exposures—“Flame retardants found here,” or “something else found there”—they never mention concentration—ever—because that’s not gonna get people to stay on the channel or buy the newspaper. 

[00:12:20] Anne Chappelle: I know that, with a news report, there’ll be something about an over-exposure all of a sudden—“This flame retardant!”—and you’re like, “Ugh, come on. That doesn’t matter.” And I find that I’m humphing a lot about the way that the news media perceives some of these things or why they get excited about it.

[00:12:37] Julie Goodman: Well, I think that one thing is context. They’re not providing any context at all. They’re just saying, “Chemical,” and everyone should panic. And then, the other is, a lot of times, I think, the way that they use numbers. Like they’ll say, “There’s a 68% increase,” when what they’re saying is that in an epidemiology study, there was an odds ratio of 1.68. And so, when you say, “68% increase,” that sounds like a really big number. When you say, “Less than a doubling,” that sounds smaller. And when you say, “Well, that’s actually a relative risk, and if your risk starts at one in a million and now it’s 1.68 in a million, that’s almost nothing.” And so that kind of context is really missing, and I wish that the media would get better about providing context when they discuss epidemiology data. 
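A quick numeric sketch of the three framings Dr. Goodman describes, using her odds ratio of 1.68 and a hypothetical one-in-a-million baseline risk, and assuming the outcome is rare enough that the odds ratio approximates the relative risk:

```python
# The same epidemiology result, framed three ways.
odds_ratio = 1.68            # as reported in the hypothetical study

# Framing 1: "a 68% increase" -- sounds large.
pct_increase = (odds_ratio - 1) * 100    # 68.0

# Framing 2: "less than a doubling" -- sounds smaller (1.68 < 2.0).

# Framing 3: absolute risk, assuming a hypothetical rare outcome so
# the odds ratio approximates the relative risk.
baseline_risk = 1 / 1_000_000            # one in a million (assumed)
exposed_risk = baseline_risk * odds_ratio
print(f"Baseline risk: {baseline_risk:.2e}, exposed risk: {exposed_risk:.2e}")
print(f"Absolute increase: {exposed_risk - baseline_risk:.2e}")
# ~6.8e-07 extra risk: about 0.68 additional cases per million people.
```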

I’m not trying to say anyone has any bad intentions. I think, sometimes, they just see a number, and they don’t have the context. I think it’s actually, maybe, our fault as scientists for not doing a good enough job of explaining what numbers mean and how to interpret them. 

Before COVID—except that time when Ross dated a molecular epidemiologist on Friends (bonus points if anyone remembers that episode)—people hadn’t even heard the word “epidemiologist,” and they always think, “epidermis,” and that you’re a skin scientist. And now, with COVID, everybody’s an epidemiologist. But I almost think it went the other way in that so many numbers were thrown at us—all the time, every day—that it was too much. So, we kind of need to get somewhere in the middle, where we do give people numbers—and I think people can interpret them—but they need to have the context and understand where they came from. And also, understand that if you have a study of a few hundred people, applying that to the entire country is maybe a bit of a stretch. It’s one small study. So, that’s kinda important, too.

[00:14:09] Anne Chappelle: So, you walk this very delicate line, Julie, between epidemiology and toxicology. So, you see, kind of, where they intersect. As a toxicologist, what do you wish that epidemiologists knew or vice versa? I think there can be some misunderstandings about exposure considerations or the role of systematic reviews or whatever it happens to be. What is it that you think provides this unique perspective that you bring to both sides?

[00:14:43] Julie Goodman: So, I think for epidemiology, I wish—and this is like a huge generalization, of course—but it would be nice if more people thought about biological plausibility. And it’s on there: it’s on the list of things you’re supposed to consider. But, I think, oftentimes, epidemiologists are very focused on the statistical associations—and they do think about things like confounders and other things that could lead to statistically significant associations that aren’t causal—but when it comes down to it, I don’t think there’s a lot of training or background into biological mechanisms and saying, “You might have beautiful statistics, but does it really make any sense biologically? Is it even possible that could happen?” And so, if epidemiologists would think about that more, I think it would be helpful.

And for toxicologists, you might have a beautiful animal model but really think about, “Are we seeing this in people?” You know, if there’s good epidemiology studies and there’s no increased risk seen in these studies, then, maybe your animal model isn’t that relevant to people—and so, to consider that. I wish people in both fields would have a mutual respect for those in the other field and really just not be so siloed. 

I am seeing more and more this idea of people in different disciplines being in the same room, but what happens is, when one of them’s on stage, if it’s someone not in your discipline, you’re gonna be on your phone. And so, the next step is to not be on your phone but actually listen to people in the other disciplines. Then you can think about how to interpret your results in light of the results of their studies.

[00:16:14] David Faulkner: I have a hard enough time with one field. I don’t know how you’ve managed two fields. It seems like a big challenge. How have you balanced your interests in both sides of the equation? 

[00:16:24] Julie Goodman: Well, I’m really lucky as a consultant because I have projects come in that are all really different: different chemicals, different goals of the project in terms of whether it’s to basically do a straight-out scientific review or to review, say, a regulatory document, reviewing their analysis and then commenting on it.

And so, I do get the opportunity to work on all different kinds of projects. Usually for regulatory review, there’s epidemiology, there’s toxicology, there’s exposure, there’s all different aspects. And so, I guess I’m just lucky that I have the opportunity to really look at all the areas within my expertise.

[00:16:58] David Faulkner: I think that there’s a lot of appetite for certainty in a lot of cases in our world. What would you say to somebody who says, “Do you know for sure about this?”

[00:17:07] Julie Goodman: Yeah, it’s funny, I was just saying that, as a scientist, to say, “It’s safe,” doesn’t feel right because how can you ever say something’s safe? I could say, “Based on a review of the evidence, there’s no evidence of increased risk,” which I guess means it’s safe? That’s another issue, I think, with science communication about how much uncertainty there is and how many data gaps there are. And we do not do a good job communicating that. I would say the general public does not have any sense of how many unanswered scientific questions there are. They think we know everything, and we’re just hiding it, or we’re trying to pretend that there’s uncertainty. When, “No, there really is uncertainty. We really don’t know.” I think that’s something else that’s really important in toxicology, epidemiology, risk assessment, in general, that we need to do a better job communicating that these uncertainties exist, but even with these uncertainties, we can still make decisions based on the data we have.

[00:17:57] David Faulkner: Yeah, I think that’s the real key: the idea that you can make a decision without all the information, that you can have enough information to make a reasoned response. Like earlier, you were talking about ways of conducting systematic reviews and meta-analyses and things like that to get at, “This is the sum of knowledge that we have on this,” and that allows us to make certain statements about what we know and don’t know. That’s an interesting balance between being able to say, “We don’t know everything, but we do know enough.”

[00:18:26] Julie Goodman: Or, “We know something,” right? Or, a decision has to be made. I feel like sometimes I’d feel better if they said, “Look, we just don’t know, and we’re arbitrarily picking a number,” than when they’re using the studies but there’s so much uncertainty that I don’t really think you can be confident that the number they picked is really science-based. I don’t think that happens all the time; I’m not trying to make a huge generalization, but I do think, sometimes, it does happen. 

[00:18:47] Anne Chappelle: I’m a toxicologist, and 10, 15 years ago, I said to myself, “A lot of the issues that come up for me in my day-to-day are because of some kind of stupid epi study. Now, I’ve gotta respond to this particular epi study.” So, I went and I took some classes at Hopkins. For the regular toxicologist, I think that having a stronger understanding of epidemiology and the differences between odds ratios and relative risks is good. So, where do you go to get some of that training? Do you think that’s something that maybe should be part of a tox curriculum?

[00:19:29] Julie Goodman: I think at least a class or two, you know, when you have your intro to tox, should be on epidemiology. I think at least a couple of hours would go a long way in terms of just being able to understand the epidemiology. You know, SOT often has epi courses as continuing ed. I think there are a lot of open classes now from different universities where you can take an intro class. I think it’s really helpful and will help with the foundation, so that when you’re doing your tox study, you can think about how to relate it to people.

[00:20:01] David Faulkner: You currently teach, or have taught, at the Harvard School of Public Health? 

[00:20:04] Julie Goodman: Yes, I taught a meta-analysis class at Harvard for seven years.

[00:20:07] David Faulkner: Oh, wow. Was there any kind of consistency of the types of questions that you tended to get from students or things that they had a harder time with? 

[00:20:15] Julie Goodman: Not everything fits into a nice little box, and sometimes, it’s really hard to figure out. Like when you’re doing a meta-analysis, and I talked about how, you know, you’re trying to systematically look at studies, so, ideally, you make a table where each study is a row and you have to fit the information from that study in each cell. And it’s not always easy to find that information or to pick out that piece of information, for anything. It doesn’t have to be epi or tox, whatever. 

And also letting people know, “You’re gonna have to make a decision.” There’s not a “right” answer. You can do this for this reason or this for that reason. You have to make a decision and then essentially justify it and say, “I had to choose, so I chose this, and this is why.” And then you keep going. It’s uncertainty. That’s what people ask most about. People don’t like uncertainty.  


[00:20:56] David Faulkner: Oh no, no, no. Nobody does. I especially do not. 

How do we use the types of information that we have, scientific literature, to inform policy decisions around reducing cancer risk, and is that different from ways that we might recommend individuals reduce cancer risk?

[00:21:16] Julie Goodman: That’s a good question. From a regulatory standpoint, we want to protect the most sensitive people, and we need to do two things—hazard identification and then dose response—so we can figure out what the risks are. The first thing we do, which is true from a regulatory standpoint or for an individual, is understanding if something’s going to increase risk of a specific cancer at all, and then, if it is, from a regulatory standpoint, we want to be on the conservative side to make sure we’re protective of everyone that we’re regulating. 

Whereas from an individual side, I think you really need to understand, “How much am I increasing risk, and is it a meaningful increase based on my exposure?” Your risk of getting cancer at all is one in three; breast cancer is one in eight—or whatever cancer it is—and we might have a very small increased relative risk, say 1.001, which I have seen, or 1.008. The idea is even though that increased risk is small, if you’re looking at a population of millions and millions of people, it is going to have a big public health impact—and it’s important. However, you’re assuming that estimated risk is correct, and that risk is so small. What that means is, if there’s any bias or maybe some kind of uncertainty, it might be that the risk is really zero; the relative risk is actually one. And so, I really think people are confusing those two concepts, and I wish people would separate them. First, determine how reliable the estimate is, even if it’s statistically significant—which it will be if you have enough people in your study. But just because it’s statistically significant doesn’t mean it’s accurate. And so, really think about how confident you can be about the accuracy, given maybe unknown information about confounders or other things, before you then say, “Wow, this is going to affect so many people.”
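As a back-of-envelope illustration of the two concepts she wants separated, population impact versus reliability, here is a short sketch with hypothetical numbers: a one-in-eight baseline, her relative risk of 1.008, and an assumed population of 100 million.

```python
# Hypothetical: lifetime baseline risk ~1 in 8, relative risk = 1.008.
baseline = 1 / 8
rr = 1.008
population = 100_000_000   # assumed population size for illustration

excess_per_person = baseline * (rr - 1)          # absolute risk increase
expected_excess_cases = excess_per_person * population
print(f"Expected excess cases: {expected_excess_cases:,.0f}")
# ~100,000 cases: a big public health number...

# ...but an RR of 1.008 is so close to 1.0 that modest confounding
# or bias could mean the true value is exactly 1.0 (zero excess cases).
# Statistical significance in a huge study does not rule that out.
```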

I think one other thing worth mentioning is called hypothesis-based weight of evidence; Lorenz Rhomberg, one of my mentors, came up with this term. And the idea behind it is that the evidence is what it is. You observe what you observe. The question is: how do you best explain it? And if everything is consistent, you have an easy explanation. But if things aren’t consistent or coherent, you have to figure out how you best explain what you’ve observed. And, I think, when we think about toxicology and epidemiology and exposure and other kinds of evidence together, that’s really what we should be doing. Hopefully, we all agree on the facts, that this is what was seen in these studies, and what we’re disagreeing on is how to interpret it. But be really honest about saying, “Okay, we accept that this series of studies has reliable results and these ones aren’t reliable, and we have reasons for saying why these aren’t reliable and why these are. But say we’re wrong. What if these ones aren’t reliable, and these ones are? What do we have to assume? What caveats do we have to make to essentially make all of these observations fit together to tell a story?” And, I think, that’s a really good way of doing it. I hope more people do it that way because, as opposed to just saying, “Oh, we think this is crap, and we’re focused on this,” it’s really forcing you to think about how the evidence—as you have it—supports different hypotheses and which hypothesis it supports best.

[00:24:20] Anne Chappelle: And that goes back to your biological plausibility, and that’s one of the things that is difficult.

[00:24:25] Julie Goodman: And it’s always possible that something is biologically plausible, and we just don’t know the pathway—we just haven’t discovered it yet. But, I think, there are some cases when we know about the biology, and we can say, “Based on the biology, it’s just not likely because we know the body has these defense mechanisms. And so, given we’re not seeing X and Y, it doesn’t make sense that there’d be some unknown Z that can explain it.”

[00:24:48] David Faulkner: Yeah, I feel like that’s the answer that I give a lot of my friends and family when they’re like, “Well, is it possible that this could be this?” And I always wanna say, like, “Anything’s possible or nearly anything’s possible, but I’m not gonna lose sleep over that. I mean, it just doesn’t seem likely.”

[00:25:03] Anne Chappelle: It’s possible that this could be the winning lottery ticket, but it’s not probable. You know, I’m just saying. … You have one, too? 

[00:25:14] David Faulkner: Yeah. Oh, yeah. No, more than one. I’m not gonna give details, but—

[00:25:17] Julie Goodman: How am I on a podcast with two people that bought lottery tickets?

[00:25:22] David Faulkner: We’re not epidemiologists. It’s that statistical training.

[00:25:25] Anne Chappelle: I did not do well in statistics, clearly.

[00:25:28] Julie Goodman: Okay, you definitely need to take that CE course then, Anne.

[00:25:34] David Faulkner: So, thinking about these questions of biological plausibility and just cuz you can measure something doesn’t mean that it’s necessarily meaningful, my mind goes to Prop 65 and seeing labels on Starbucks because of volatiles that are in coffee. And to me, that feels very counterproductive in terms of giving meaningful recommendations to consumers so that they can adequately make decisions about risk. And it seems like this is a problem to do with epidemiology and statistics. What do we do about this? Like, how do we make meaningful policy with these uncertainties and the temptation for politicians just to be like, “Let’s just slap a number on it, and we’ll call that good.”

[00:26:18] Anne Chappelle: You mean like being just too conservative? 

[00:26:20] David Faulkner: Yeah, being overly conservative.

[00:26:22] Anne Chappelle: If we’re overly conservative, is that a bad thing?

[00:26:25] Julie Goodman: Yes, for a number of reasons. To take Prop 65 as an example, if everything is labeled a risk, then people don’t worry about anything because there are just too many warnings. When something actually is a risk and they should be worried, it gets lost in all the others. 

From a regulatory perspective, it’s always better to err on the side of being conservative because you do want to protect the most sensitive person in the population, but I think it’s important to really be honest about it. For example, if you label something as a carcinogen when the evidence really isn’t there to suggest it’s a human carcinogen, then maybe there’s a chemical that has some really important uses and it can’t be used anymore because now you’ve labeled it a carcinogen when it isn’t. And it’s also possible that if uses are banned, then that might cost people jobs. And so, those are just two examples to just make the point there can be consequences to calling something a carcinogen that isn’t or saying something is riskier than it really is. 

[00:27:22] Anne Chappelle: When I think of epidemiologists, I think of industry epidemiologists, but would you say that there’s a certain percentage of epi specialties that are in more public health, or would you tend to see more in government? Or, kind of, where do you find your people? 

[00:27:41] Julie Goodman: We’re everywhere. 

I think you could divide epidemiology many different ways, but one way is to say descriptive epidemiologists versus more comparative epidemiologists. I’m not sure if those are the exact right words, but descriptive epidemiologists are really describing disease statistics. So, for example, with COVID: how many people got COVID? How many people got the vaccine? How full are hospitals? Things like that. So, describing what’s going on, which is incredibly important. And then, what I do is more comparative, where you’re looking at risks in exposed versus not exposed people, or people with certain features versus people without those features, and seeing if there’s a difference in disease risk.

But, I think, you have epidemiologists in academia that are generating hypotheses and testing those hypotheses. A lot of epidemiologists work at insurance companies, which have just tons and tons of data, so they can really go through that insurance data and look for associations. Pharmaco-epidemiologists can develop clinical trials or work on clinical trials or evaluate clinical data. Basically, anywhere that associations can be found, you can probably find an epidemiologist.

[00:28:46] David Faulkner: Wow. There are literally dozens, dozens of you. That was an Arrested Development reference for those—

[00:28:53] Julie Goodman: Yes. 

[00:28:54] David Faulkner: Appreciate it. 

[00:28:54] Julie Goodman: Yes.

[00:28:55] Anne Chappelle: Thank you so much, Dr. Goodman, for joining us today on this episode of Adverse Reactions and really showing how that intersection of epidemiology and tox can really help, I think, both fields of study.

[00:29:11] Julie Goodman: Thanks for having me. 

[00:29:12] Adverse Reactions “Decompose” Theme Music

[00:29:19] David Faulkner: On the next episode of Adverse Reactions, “Tox in the Family: Generational Exposure and DDT.”

[00:29:27] Barbara Cohn: An example is that we’ve done some work on testis cancer and shown a correlation between maternal alcohol use in pregnancy, and also the pesticide DDT, and the chance of testicular cancer later at age 20 or 30. People are pretty sure it has an embryological origin—just a couple cells that are hiding out that didn’t do what they should do during development and are prone to growth in a way that’s unhealthy later in life.

[00:29:56] Adverse Reactions “Decompose” Theme Music

[00:30:01] Anne Chappelle: Thank you all for joining us for this episode of Adverse Reactions presented by the Society of Toxicology. 

[00:30:08] David Faulkner: And thank you to Dave Leve at Ma3stro Studios, 

[00:30:11] Anne Chappelle: that’s Ma3stro with a three, not an E 

[00:30:14] David Faulkner: who created and produced all the music for Adverse Reactions, including the theme song, “Decompose.” 

[00:30:20] Anne Chappelle: The viewpoints and information presented in Adverse Reactions represent those of the participating individuals. Although the Society of Toxicology holds the copyright to this production, it has, 

[00:30:31] David Faulkner: definitely,

[00:30:33] Anne Chappelle: not vetted or reviewed the information presented herein,

[00:30:37] David Faulkner: nor does presenting and distributing this podcast represent any proposal or endorsement of any position by the Society.

[00:30:43] Anne Chappelle: You can find out more information about the show at AdverseReactionsPodcast.com 

[00:30:49] David Faulkner: and more information about the Society of Toxicology on Facebook, Instagram, LinkedIn, and Twitter. 

[00:30:54] Anne Chappelle: I’m Anne Chappelle, 

[00:30:56] David Faulkner: and I’m David Faulkner. 

[00:30:57] Anne Chappelle: This podcast was approved by Anne’s mom.

[00:31:01] Adverse Reactions “Decompose” Theme Music

[00:31:03] End of Episode

Chapter Markers

Introduction to the Episode
An Epidemiologist and Toxicologist!?!
Systematic Review vs. Meta-analysis
How Are Systematic Reviews Used?
Hazard vs. Risk
How Can Epi and Tox Complement Each Other?
The Frustrating Lack of Certainty
Making Hard Decisions during Analyses
How to Better Understand Risk
The Cautionary Tale of Too Many Warnings
Where Do Epidemiologists Work?
Episode Credits