Fact Check Your Health
Are you tired of the maze of health information online, unsure of what to trust? Welcome to "Fact Check Your Health," the podcast that teaches you how to confidently navigate online health information. If you've ever felt lost or uncertain about the accuracy of online health advice, this podcast is for you! Join Katie and Sydney as they break down the steps to finding accurate health information online in plain, everyday language. No medical jargon here – just practical tips and real-world examples to empower you in making informed decisions about your health.
For more information and additional resources check out the Fact Check Your Health website at https://factcheckyourhealth.squarespace.com
Fact Check Your Health
Episode 4 - You’re reading it all wrong
In the fourth episode Katie and Sydney tackle a common pitfall we all stumble upon daily – the misleading and out-of-context health "facts" that flood our social media feeds. Ever read one of those eye-catching headlines claiming something like coffee is the new miracle cure for a serious disease? Our hosts dive into why we should take these sensational claims with a grain of salt. They discuss the crucial differences between causation and correlation in health studies and why understanding these differences can save us from jumping to false conclusions.
This episode isn't just about debunking health myths; it's a practical guide to sifting through research findings and headlines to find the truth. Katie and Sydney explain why a statistically significant study won't necessarily change your life or health in meaningful ways. They also shed light on the tricky business of risk evaluation and how a seemingly scarier "triple risk" might not be as daunting when you look at the actual numbers.
So before you swear off your favorite foods or jump on the next health trend based on a buzzy article, tune in. This episode might just change the way you view health news and help you make better-informed decisions about what's truly beneficial for your well-being.
Episode Outline:
0:20 - Why that headline might not be true!
2:00 - Correlation vs. Causation. What is it and why does it matter?
3:25 - Example of Correlation vs. Causation
4:25 - What is statistical significance?
5:50 - What is clinical significance?
7:00 - Example of statistical vs. clinical significance
8:40 - Additional example of statistical vs. clinical significance
9:25 - Importance of risk in health research
10:15 - Medication risk example
13:15 - What is a natural frequency and why should you look at risk in natural frequencies?
Disclaimer: The information provided is for educational and entertainment purposes and is not intended as medical advice. For medical advice contact a licensed medical provider.
Episode 4
[00:00] Katie: So, welcome back to the fourth episode of Fact Check Your Health. This is an episode that we are super excited to talk about because it's something that you probably see almost every single day when you're scrolling through social media, and that is information that's taken out of context. So, we're going to help you break down how to figure out when information is taken out of context and what it actually means.
[00:22] Sydney: Right. Our first topic of the day is information taken out of context. You may see a news headline or see someone you know on social media talking about findings from a research study that seems interesting to you, and you may even go look up the study yourself and think that the study seems to support the claims that these people are making. However, there are several common mistakes that people make when interpreting research studies that lead them to take results out of context. [00:47] Katie: For example, we've all come across catchy headlines that make big, huge claims that say something like, "A new study claims that coffee can cure cancer."
[00:56] Sydney: Yeah, it would be wonderful if coffee had such miraculous powers, but it's crucial to approach these types of headlines with skepticism because often, news headlines oversimplify complex research findings, and they don't really present the full context of the research study.
[01:11] Katie: Whenever you see a headline making a claim like that, something that seems either too good to be true or too simple, some little red flags should go off in your head, and you can start to evaluate whether or not that headline is actually accurate.
[01:25] Sydney: Yeah. Consider it false unless proven true.
[01:27] Katie: Exactly. To truly understand the research, it's important to go beyond the headline and dive into what the actual study says. So, in the case of that coffee and cancer headline example, what you might find if you actually looked into the study is that maybe the study was conducted on mice in a laboratory setting, and it's yet to be replicated in human trials.
[01:49] Sydney: Right. Or the study could have found that drinking coffee was correlated with a reduced risk for cancer, not that it caused a reduced risk for cancer. This is something we talked a little about in the past episode, and it's a huge distinction that you should make when interpreting research findings.
[02:04] Katie: Exactly. So let's tackle the distinction between causation and correlation because I think this really is a critical concept, especially when it comes to health research. So, Sydney, do you want to give us an example?
[02:17] Sydney: Yeah. So remember, last episode, we talked about how observational studies can't prove causation like RCTs can, and that's because of the difference between causation and correlation. For example, let's consider a study that finds a correlation between high sugar consumption and obesity rates. Let's say this study was an observational study done over a 20-year period. While the study might show a strong correlation between sugar consumption and obesity—and it seems very plausible that sugar consumption is linked with obesity—since this study is an observational study, it doesn't automatically mean that sugar is what caused obesity.
[02:54] Katie: Yeah, exactly. Because in that example, there could be other factors at play, such as someone's overall diet, their physical activity levels, or genetic predispositions. So, to establish a causal relationship, you would have to do further research, like the randomized controlled trials we discussed in the last episode, to actually determine whether sugar caused obesity.
[03:16] Sydney: Right. So when you come across health claims or research findings, be mindful of this difference between correlation and causation.
[03:23] Katie: So to explain causation and correlation in a different way, two things can be correlated purely by chance, even though neither one actually causes the other. There's actually a website out there called Spurious Correlations that touches on this exact topic. If you go on the website, you can see all sorts of things that are correlated. For example, divorce rates in Maine are correlated with per capita margarine consumption. Obviously, margarine consumption isn't actually causing anything to happen with divorce rates; those two things just happen to be correlated with each other. That's why it's important to make this distinction between causation and correlation. While the examples we just gave make it obvious that correlation doesn't equal causation, health studies can be trickier. When a study links something plausible, like sugar and obesity, a correlational finding can be misleading because it's easy to assume the relationship is causal when it's really just correlational.
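[Editor's note: a quick way to see how two unrelated trends can line up numerically is to compute a Pearson correlation from scratch. This is a sketch with made-up numbers, not the actual Maine divorce or margarine data from the Spurious Correlations site.]

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch (no libraries)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Two hypothetical downward trends with no causal link between them.
divorce_rate = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1]   # per 1,000 people (made up)
margarine_lbs = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0]  # per capita (made up)

r = pearson(divorce_rate, margarine_lbs)
print(round(r, 3))  # strongly positive, close to 1
```

Any two series that merely trend in the same direction over time will correlate strongly, which is exactly why correlation alone can't establish causation.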
[04:25] Sydney: Okay, so next let's talk about the phrase statistical significance. Statistical significance plays a very big role in health research. For example, a lot of times in research studies they'll say they found a statistically significant result. Let's say we're looking at a study that examines the effectiveness of a new medication for treating migraines. Researchers might analyze the data and determine that the group that received the medicine had a statistically significant decrease in migraine frequency compared to the control group. But what does this actually mean? What is statistical significance?
[04:58] Katie: Statistical significance is a very big and complicated topic, so we're not going to go into all the details of what it technically means. But basically, in this scenario, statistical significance just means that whatever difference was observed between those two groups of people—the group that received the medication and the group that did not—was unlikely to have occurred by chance. So, what that suggests is that the medication might actually have an impact on reducing migraines. But again, it's a little more complicated than that. The most important thing to note is that statistical significance alone doesn't provide the whole picture. Specifically, when it comes to health, statistical significance isn't the most important thing we want to look at. Instead, we want to look at whether or not that difference is clinically significant. So, Sydney, do you want to frame clinical significance for people and give them an overview of what it actually means, and why they should be more concerned about clinical significance than statistical significance?
[06:05] Sydney: Clinical significance considers whether the observed effect is actually meaningful in real-life situations. Statistical significance looks at, "Okay, did this occur by chance alone?" But clinical significance is, "Does this actually mean anything? Is this important for us?" So continuing with our example, even if the group who received the migraine medication experienced a statistically significant decrease in the number of migraines they have, the number that it decreased by might be small and might not actually significantly improve the patient's quality of life.
[06:36] Katie: So let's say, for example, that before the medication, the average person had 30 migraines a month, but after receiving the medication, they had 29 migraines a month. Even though that decrease of one migraine a month might be statistically significant, it might not actually be clinically significant, since they're still having 29 migraines a month.
[07:03] Sydney: Right. A common area of research where you run into this issue is studies on weight and BMI. If you did a six-month study on a type of diet, you might find that people on the diet lost two pounds over six months, and this was statistically significant. However, for the average person, this two-pound weight loss isn't really clinically meaningful because it doesn't actually reduce their risk for disease. As a rule of thumb, clinical guidelines suggest you need about a 7% weight loss for a reduced risk of a lot of diseases. On top of that, if the diet was really difficult to adhere to, most participants would agree that the two-pound weight loss wasn't meaningful enough for them to continue with the diet. So for clinical significance, sometimes there are good clinical guidelines you can go by, depending on the outcome you're looking at; sometimes it's just an intuitive judgment—is this a meaningful change? Is this change worthwhile?
[07:54] Katie: And I think that's an important distinction to make, especially when it comes to health, because let's say, usually, we're talking about taking a medication or doing some type of lifestyle change. So taking a medication or following a diet could be expensive, it could be time-consuming, and there could potentially be side effects. So it's really important to consider whether the benefit that you're getting from that medication or that diet is ultimately worth it for the potential risk, time, etc. So that's why it's beneficial to really look at whether or not it's clinically significant instead of just statistically significant. And you're going to have to make that decision either for yourself or with you and your doctor to figure out if that difference is clinically significant for you.
[08:42] Sydney: Right, exactly. So let's say you were looking at studies on melatonin and sleep because you're trying to decide if you want to take melatonin. This could be a totally personal decision. Say you find studies showing that melatonin decreases somebody's time to fall asleep by 15 minutes. For some people, that 15 minutes could be really valuable and important, so they want to take melatonin because they don't want to lie in bed for an extra 15 minutes before they fall asleep. But for other people, maybe that's not really meaningful; they don't care about that 15 minutes. So it can be a very personal decision, and it's up to you sometimes to read through the results of a study and think, okay, what was the actual change here? And does it seem like it's worth it for me to try this thing I'm interested in?
[09:25] Katie: Yeah, exactly. So now let's talk about the concept of risk. When it comes to health research, it's really important to evaluate the magnitude of the risk and consider what it is in relationship to the benefits. So let's say, for example, that you're considering the risk of developing skin cancer from sun exposure. While we know that prolonged exposure to the sun may increase your risk of skin cancer, it's essential to weigh that risk against the potential benefits that you might get from being in the sun, like vitamin D or your overall well-being and quality of life.
[09:58] Sydney: Developing a good risk perspective allows us to make informed decisions by considering the likelihood and impact of different outcomes. It helps us to prioritize our health choices and recognize that not all risks are created equal. For instance, we might be more willing to accept a certain level of risk if the potential benefit is significant and the alternative options are limited.
[10:19] Katie: For example, let's say that you've recently been diagnosed with a disease, and that disease requires that you take a medication in order to survive. So you start talking to your doctor, and the doctor presents you with two different treatment options. The doctor says you can take either Medication A or Medication B, but there's a catch: each medication comes with an increased risk of developing a certain type of cancer. Let's say Medication A triples your current risk for an uncommon cancer, like ear cancer, where normally your risk of developing that cancer would be super small, like only one in a million. If you decide to take Medication A, your risk triples, so you would go from having a one in a million chance of getting that cancer to having a three in a million chance.
[11:12] Sydney: And then let's say Medication B doubles your risk for a common type of cancer, like lung cancer or breast cancer. So let's just pretend that your risk of cancer before taking the medication is more like one in 100, but then if you took Medication B, your risk increases to about two in 100. So your risk doubled and went from one to two in 100.
[11:33] Katie: So the question is then, which medication should you take? So at first glance, it might seem like that's a tough decision as both medications increase your risk. But in this scenario, we have Medication A, which is tripling your risk of cancer for a cancer that has a really small frequency. So even though it triples your risk, it's still only a chance of three out of a million. So that's still pretty low.
[11:59] Sydney: Yes. And then on the other hand, we have Medication B, which only doubles your risk, but it's a common cancer. So now your chance is two in 100. [12:07] Katie: So we can see that even though Medication A triples the risk, whereas Medication B doubles your risk, your overall likelihood of getting cancer is still going to be higher if you decide to take Medication B compared to Medication A.
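[Editor's note: the arithmetic the hosts walk through here can be written out explicitly, using the numbers straight from the hypothetical Medication A vs. Medication B example.]

```python
# Medication A: triples a rare baseline risk of 1 in 1,000,000.
baseline_a = 1 / 1_000_000
risk_a = 3 * baseline_a

# Medication B: doubles a common baseline risk of 1 in 100.
baseline_b = 1 / 100
risk_b = 2 * baseline_b

# Express both per 1,000,000 people to compare them on the same scale.
per_million_a = risk_a * 1_000_000   # 3 cases per million
per_million_b = risk_b * 1_000_000   # 20,000 cases per million

print(per_million_a, per_million_b)
print(risk_b > risk_a)  # True: the "doubled" risk is far larger
```

Putting both risks on a common denominator makes the point immediately: the tripled rare risk is still thousands of times smaller than the doubled common one.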
[12:21] Sydney: Right. The reason we're explaining this is because sometimes you'll see a study and it'll say, oh, this medication triples your risk for whatever outcome. That can be really scary and can make you feel like, oh, I shouldn't take this medication. But what's your baseline risk? If your baseline risk is one in a million, then maybe it does make sense for you to take the medication. Me, personally, I took a reflux medication for six months, and there are so many studies that say, oh, it doubles or triples your risk for various outcomes. But I have to look at my baseline risk and say, okay, my baseline risk is pretty low, so I'm okay with a doubling or tripling of my risk as long as this significantly improves my quality of life.
[12:57] Katie: And while I know that this can be a complicated idea to grasp, understanding this concept really is one of the most important things you can do when it comes to making health decisions because ultimately, not all risks are equal. So it's important to be cautious when you're reading information, especially if they give you risk in percentages. So if they say, you know, 50% or 200%, you really need to try to get those numbers in what's called natural frequency. So, in other words, a natural frequency is going to be something like one in a hundred or one in a thousand because that's going to let you compare two different risks together. So this could be for the effectiveness of a drug or for the side effects or negative outcomes of a drug. And unfortunately, what happens a lot of times is that they'll use those percentages that sound really big when it comes to the benefits, but then when it comes to the harms or the side effects of something, they'll usually put those numbers in natural frequencies so that it sounds less likely. So that's why it's really important for us to be able to grasp this difference between natural frequency, which is again, like one in a hundred or one in a thousand, compared to just looking at numbers like 50% or 75% or triples or doubles.
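[Editor's note: one way to do the conversion Katie describes is a small helper that turns a baseline risk plus a relative change into a natural frequency. This is a sketch; `to_natural_frequency` is a hypothetical name and the numbers are illustrative.]

```python
def to_natural_frequency(baseline_risk: float,
                         relative_increase_pct: float,
                         denominator: int = 1_000) -> float:
    """Convert a baseline risk and a relative increase (e.g. '50%')
    into expected cases per `denominator` people."""
    new_risk = baseline_risk * (1 + relative_increase_pct / 100)
    return new_risk * denominator

# "Raises your risk by 50%" on a baseline of 2 in 1,000:
print(to_natural_frequency(2 / 1000, 50))  # 3.0 per 1,000

# "Triples your risk" (a 200% increase) on a baseline of 1 in 1,000,000:
print(to_natural_frequency(1 / 1_000_000, 200, 1_000_000))  # 3.0 per million
```

Rewriting a headline percentage as "x in 1,000" this way lets you put a drug's benefits and harms side by side on the same scale, which is exactly the comparison the relative numbers obscure.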
[14:12] Sydney: So that wraps up our discussion on statistical significance, clinical significance, and the dangers of headlines that are taken out of context. Join us in our next episode, where we'll explore the world of competing hypotheses and how to evaluate these in health research.
[14:26] Katie: As always, thanks for listening to Fact Check Your Health.