Dr. Journal Club

The Unveiling of Homeopathy: Navigating Its Complexities, Part II

December 21, 2023 Dr Journal Club Season 1 Episode 39

Step into the realm of homeopathy with us! In this episode, we navigate the intricacies of its efficacy, examining debates and studies with insights from expert Mark Davis. Using the Linde et al. meta-analysis as our guide, we scrutinize statistical challenges, shedding light on potential biases and the scientific scrutiny that studies endure.

Next, we delve into the controversial Lancet study comparing homeopathy to conventional medicine. Unpacking its rigorous methods and unique design, we reveal a methodological maelstrom questioning the quality of homeopathy research. Join our critical discussion on homeopathy evidence, where we dissect studies, invite listener and expert contributions, and share personal reflections. Together, we embark on an enlightening quest for truth, ensuring even the extraordinary claims withstand the rigors of scientific scrutiny.

https://www.sciencedirect.com/science/article/pii/S1876382018304451
Petter Viksveen, Philippa Fibert, Clare Relton, Homeopathy in the treatment of depression: a systematic review, European Journal of Integrative Medicine, Volume 22, 2018, Pages 22-36.

Learn more and become a member at www.DrJournalClub.com

Check out our complete offerings of NANCEAC-approved Continuing Education Courses.


Introducer:

Welcome to the Doctor Journal Club podcast, the show that goes under the hood of evidence-based integrative medicine. We review recent research articles, interview evidence-based medicine thought leaders and discuss the challenges and opportunities of integrating evidence-based and integrative medicine. Continue your learning after the show at www.drjournalclub.com.

Josh:

Please bear in mind that this is for educational and entertainment purposes only. Talk to your doctor before making any medical decisions, changes, etc. Everything we're talking about is to teach you guys stuff and have fun. We are not your doctors. Also, we would love to answer your specific questions. On www.drjournalclub.com you can post questions and comments for specific videos, but go ahead and email us directly at josh@drjournalclub.com. That's josh@drjournalclub.com. Send us your listener questions and we will discuss them on our pod. Alright, hello everybody. This is Josh and Adam. We are going to continue our discussion of the homeopathy debate, which should be quite interesting. I was just looking at the data on our first homeopathy drop, which happened last week. Really high uptake, a lot of downloads. People are really interested in the topic. I think we're going to get a lot of interest in this series, which is good.

Adam:

Did we get Avogadro's number of downloads?

Josh:

Not quite. I think the exponent is a little off, but we'll get there, we'll get there, in our slow quest to take over the world. So last week we did Linde et al, which was the first major meta-analysis on this question of whether or not homeopathy is real, is a thing. If you guys haven't listened yet, it's a good one. We don't normally do podcast series, but I think for this one we'll probably do four or five or something like that. I want to go into this until we're sick of it.

Josh:

Do you know Mark Davis? He and I met years ago, maybe 13, 15 years ago, at a conference. We met over this paper, Shang et al. We were the nerds in the hallway fighting over the methodological interpretation of this paper. I was talking to him the other day and I was like, oh my gosh, I almost forgot about that; we just did this homeopathy series, and I think this was the paper that we became friends over, so it's kind of neat. Anyway, I think we're going to invite him on. He's also very interested in this topic. He sent us a couple other meta-analyses; one of them I've read, the other one I haven't, and so we'll kind of go through them together to continue the series, which should be kind of interesting. Anyway, anything you want to start with before we jump into this paper, which is sort of the response to Linde 1997?

Adam:

No, that's it. I think I'm ready to go on this one.

Josh:

All right, so super fast background for folks that didn't hear our last one. Homeopathy is very popular. It also kind of defies our normal understanding of the universe, especially with the high-potency dilutions, or potentizations, or potencies: we are beyond Avogadro's number, meaning there's very little chance there's any molecule at all left of the substance that was originally diluted. And so the question is, well, how could it work? And there are different potential mechanisms, like energetics and memory of water and all the stuff that I do not understand. But one of the things that I do understand very well is meta-analyses, and so one approach to figuring out if homeopathy is real or not is to do these meta-analyses: if it is just placebo, then when you meta-analyze all the placebo-controlled homeopathy studies, there should be no difference, you should get no effect. That makes sense. So we had this really amazing paper in the Lancet, which we talked about last week, check it out, which was very, very high quality, and it basically said, well, turns out, when you do that, there is an effect, it is statistically significant, it is a large effect, and even when you do all these controls and things like that, it's still there. And so that kind of blew everybody away, and what we're going to talk about today is sort of the response paper, Shang et al.
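The test Josh describes here, pool every placebo-controlled trial and check whether the average effect differs from zero, is a plain inverse-variance fixed-effect meta-analysis, which can be sketched in a few lines. The effect sizes below are invented for illustration; they are not the Linde data.

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance fixed-effect pooling of study effect sizes.

    effects: per-study effect estimates (e.g. log odds ratios,
             negative = favors treatment)
    ses: per-study standard errors
    Returns the pooled estimate, its standard error, and a 95% CI.
    """
    weights = [1.0 / se ** 2 for se in ses]            # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Made-up log odds ratios: if the remedy were pure placebo, these would
# scatter around 0 and the pooled CI would cover 0.
effects = [-0.40, -0.55, -0.10, -0.70, -0.30]
ses = [0.20, 0.25, 0.30, 0.35, 0.15]
pooled, se, (lo, hi) = fixed_effect_meta(effects, ses)
print(f"pooled = {pooled:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

With these made-up numbers the pooled confidence interval sits entirely below zero, which is the shape of the surprise in the 1997 Lancet paper: a statistically significant pooled effect where a pure placebo should have produced none.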

Josh:

So the fast summary from Linde is that there were about 90 studies, 89 randomized, placebo-controlled trials. There were high-quality and low-quality studies. There was a large effect when you meta-analyzed everything together. When you just looked at the high-quality studies, it was a smaller effect, but it was still there, still statistically significant. They did a worst-case assessment. What was it?

Adam:

They did high-quality studies published in Medline, and then only looked at preparations that were considered a moderate or high potency, meaning that there's even less substance in them than something that's considered, quote-unquote, low potency.

Josh:

So the argument always was, well, maybe homeopathy works for those because there's real stuff there that can have an effect, but certainly it's not going to work for stuff that's so dilute that there's nothing there. And so in this kind of worst-case assessment, they just looked at those studies that were super, super, super dilute, high quality, Medline, et cetera. And when they did that, there was a smaller effect, but it was still there and it was still statistically significant. And then the last thing that they noted, and this is gonna be important for today, and then I'll stop my rambling preamble, is that there was publication bias. Do you want to tell the audience briefly what publication bias is?

Adam:

I mean, I think you do a better job at it. But essentially what they were testing for, and what they found, was that smaller studies with greater effect sizes are being published, likely because they're statistically significant, but small studies without an effect are not being published, because that's not as sexy, if you will. A large study with a negative or not statistically significant result would still be relevant, because it provides value in the sense of showing something not working, whereas something that's small and doesn't show an effect doesn't really add much. So those studies are likely missing from the publication record.

Josh:

Perfect, yeah, that's great. So visualize it as a big funnel: you put the large studies on top and the small studies on the bottom, so that's sort of the y-axis, and the x-axis is the effect size. Let's imagine a scenario where there's no real effect and you have like a hundred studies, some large, some small. You would get like a Christmas tree presentation, where at the very top of the Christmas tree it's right on zero, no effect, and then as you get lower down the Christmas tree, the studies are smaller and there's more variability. So the effects are all over the place, they're negative, they're positive, but they average around that line in the middle, which is zero. That's what you would expect if there was no effect.

Josh:

Now the problem is like visualize that Christmas tree and now just scoop out all the studies that are on the bottom of that Christmas tree that say that there's no effect, right, those negative studies.

Josh:

Because, just like you said, they're not sexy, we're not going to publish them, but all we can see is the part of the Christmas tree that was published. So now, when you average those all together, when you do your meta-analysis, you see an effect, because you're averaging basically only the positive studies and you're not pulling it back towards null with the negative studies. So that's publication bias. Okay, so when they analyzed for publication bias in the Linde paper, they did see evidence of publication bias, not surprising, we see that everywhere. And then they did different clever ways of addressing it, like trim and fill, and I think they did a regression analysis, and what they found is that, yes, that lowers the effect, and it got pretty darn close to not statistically significant, but it was still statistically significant. Okay, so that is sort of the summary of Linde and the background leading into Shang. Any comments, insights on any of that before we move forward?
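The scooped-out Christmas tree can be shown with a deterministic toy example: build a symmetric "literature" around a true effect of zero, censor the small null-result studies, and pool what survives. All numbers here are invented purely to illustrate the mechanism.

```python
import math

def pooled(studies):
    """Inverse-variance pooled effect of (effect, se) pairs."""
    w = [1 / se ** 2 for _, se in studies]
    return sum(wi * e for wi, (e, _) in zip(w, studies)) / sum(w)

# A deterministic toy literature with a TRUE effect of zero:
# small studies (big SE) scatter widely, large studies sit near zero.
# Negative effects = apparent benefit (think log odds ratios).
all_studies = [
    (-0.90, 0.45), (-0.50, 0.45), (0.50, 0.45), (0.90, 0.45),  # small
    (-0.10, 0.10), (-0.05, 0.10), (0.05, 0.10), (0.10, 0.10),  # large
]

# Publication filter: large studies always get published; small studies
# survive only if "significant" in the favorable direction (z < -1.96).
published = [(e, se) for e, se in all_studies
             if se < 0.2 or e / se < -1.96]

print(f"pooled, all studies:     {pooled(all_studies):+.3f}")  # effectively 0
print(f"pooled, published only:  {pooled(published):+.3f}")    # pulled below 0
```

Only one of the four small studies survives the filter, and the pooled estimate shifts from zero to about -0.011: a spurious benefit created purely by which studies got published. Trim-and-fill and Egger-style regressions are ways of detecting and correcting that shift.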

Adam:

I guess one thing I would say is, even though they still found an effect, and they got closer towards no effect with these analyses, there's still always the possibility that finding an effect was due to chance. Even if it may be unlikely, what are your thoughts on that? They did everything to try to essentially find no effect and they still found an effect. How much of that do you think was due to chance?

Josh:

Chance? I think it's very small, because, especially with the primary analysis, you had very tight confidence intervals that were well beyond the no-effect line. I don't know what the exact p-value was, but it was probably, you know, 0.00001 or something like that. There were lots of studies with a large effect size. Chance causing this would be extraordinarily unlikely. And that's a perfect setup for this paper. So basically, this was published in the Lancet, which is the biggest medical journal in the world, and everyone freaked out, especially the meta-analysts that were doing methods work, because they didn't believe that this was real, and so they said, if our methods can't show that this isn't real, there's something wrong with our methods. I've had a few back-channel conversations with some of these guys, and they were all guys, and it's been quite interesting, the way they talked about it. And there's this wonderful book called Systematic Reviews in Health Care, published by the BMJ, where they do a little reading-between-the-lines analysis of the 97 paper and this new paper, and this is the gist of it.

Josh:

After this came out, everyone freaked out and they did all these other analyses, and this was the conclusion that they reached: there are quality issues, risk of bias issues, but there are risk of bias issues in all of medicine, and when you address that, the effect is still real, meaning when you only look at the high-quality studies it's still there. And yes, there's evidence of publication bias, and when you address that, it still looks like it's real. However, what they didn't do is correct for both of them together, rather than independently, and the argument was that both quality of individual studies and missing studies from the publication record were playing a role, and they should be addressed concurrently.

Josh:

And that's what Shang was all about: what are some clever ways we can think of to address both of those together, and will that effect survive that kind of analysis? And that's essentially what they did. So just the super fast lead-up to that: it was a standard meta-analysis. They updated the search that Linde did in 97. And so, instead of 89 randomized controlled trials, they found what, over a hundred now, something like that?

Adam:

Let me actually see. Yeah, it was 110. What they did was they searched 19 databases between 95 and 2003 without any language restrictions.

Josh:

Yeah, so a really extensive search. And we were impressed with the search in 97 as well. They basically took that, updated it, and took it even further. So they found 110 randomized, placebo-controlled trials of homeopathy. A larger dataset, which makes sense; it had been eight years since the last paper. And then they said, okay, we've got 110 studies, we're gonna do basically all the same analyses that Linde did, the same setup, the same compartmentalizations of high quality, worst case, et cetera. And when they did that, they initially found similar results, right? A large effect when you do a standard meta-analysis, a smaller effect when you look at risk of bias and all this, but all statistically significant. Now here's where it differs.

Josh:

They then said, okay, this is how we are going to address this issue. They did two things. One, they just looked at the largest studies from the meta-analysis, the largest quartile, I think, or quintile, and their argument was that these studies are gonna be the least likely to be shifted by publication bias. And when they did that, the effect was not statistically significant. But that was only, what, like eight papers or something like that, when they limited it to the largest studies. Then they said, well, let's do another way of looking at this too, because that's just eight studies, right? That doesn't seem right. So they did this weird regression where, again, if you visualize this as a funnel plot, that Christmas tree, they said, well, let's draw a regression line. It's gonna be most skewed at the bottom, because that's where things flare out from this apparent gap in the literature, and we're gonna draw it all the way up to the top, where the largest studies would be, and we're gonna estimate, based on this regression line, what the effect would be at the largest level of study. So it's interesting: it's not just meta-analyzing the largest studies, it's using all the studies, but drawing them down this line as if they were the largest study. And this was their kind of novel way of saying, we're gonna be informed by everything, but apply it only to the largest ones, if that makes sense. So that was sort of their second way of doing it, and that also was not statistically significant.
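The "draw a regression line up the funnel and read off what an infinitely large study would show" idea can be sketched as a weighted regression of effect size on standard error, with the intercept at SE = 0 as the extrapolated effect. This is a schematic of the general Egger-style technique, not a reimplementation of Shang et al.'s exact model; the data below are fabricated to lie exactly on a line for clarity.

```python
import math

def asymmetry_regression(effects, ses):
    """Weighted least squares of effect on SE (weights 1/SE^2).

    The intercept estimates the effect an 'infinitely large' study
    (SE -> 0) would show; the slope captures funnel asymmetry
    (small-study effects).
    """
    w = [1 / se ** 2 for se in ses]
    sw = sum(w)
    mx = sum(wi * se for wi, se in zip(w, ses)) / sw      # weighted mean SE
    my = sum(wi * e for wi, e in zip(w, effects)) / sw    # weighted mean effect
    sxx = sum(wi * (se - mx) ** 2 for wi, se in zip(w, ses))
    sxy = sum(wi * (se - mx) * (e - my)
              for wi, (se, e) in zip(w, zip(ses, effects)))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Toy data with small-study effects: the smaller the study (bigger SE),
# the bigger the apparent benefit. Effects sit exactly on 0.05 - 1.0*se.
ses = [0.1, 0.2, 0.3, 0.4, 0.5]
effects = [-0.05, -0.15, -0.25, -0.35, -0.45]
intercept, slope = asymmetry_regression(effects, ses)
print(f"intercept (SE -> 0): {intercept:+.3f}, slope: {slope:+.3f}")
```

Every study in this toy shows an apparent benefit, yet the extrapolated effect at SE = 0 lands slightly on the other side of zero. That reversal, an effect that vanishes as you project toward the largest possible study, is the pattern Shang et al. argued for.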

Josh:

Now, the story I heard back channel was that they submitted this to the Lancet and the Lancet, to their credit, was like, no, nobody does that. And that's true, nobody does that, and no one's done it since; I've yet to find it in any other paper. I think the same group tried to do it on traditional Chinese medicine, and that was it. Just nobody does that.

Josh:

And they're like, we have zero idea if, A, this particular approach, and B, this strictness of basically adjusting for two things at once, would hold up anywhere else. So what they said is, you need to show that this doesn't also negate the effect of conventional medicine, right? Cause there's like no control group here, essentially. And so they went back and they applied the same analysis to conventional medicine. And this was sort of a clever design, where they basically said, okay, we have 110 homeopathy studies, we're gonna pull 110 randomized, placebo-controlled conventional studies. We're gonna get, what was it, like five to one or something, and we're gonna match the outcomes, match the clinical condition, pick one study randomly for each homeopathy study, and do everything that we did to homeopathy, to conventional medicine.

Adam:

Meaning if they found a trial of homeopathy on, like, acute respiratory tract infections, then they also tried to look for a similar randomized controlled trial of a medication for acute respiratory tract infections.

Josh:

Exactly, exactly. And I don't think they matched on size, but when they looked at the range of study sizes, I think they were pretty equivalent across both groups. And so it was almost like a case-control study, but on a meta-analytic level.

Adam:

I thought that was really interesting, really interesting.

Josh:

The authors on this paper are like some of my heroes. Egger, you might know his name from Egger's regression, which is literally the statistical test for publication bias, Egger's regression test. And then Jonathan Sterne is like a hero of mine as well; he literally spearheaded Risk of Bias 2.0 for Cochrane and is on a zillion other things. So these are good, interesting folks. And what's interesting too is all the stuff that they later developed that is now standard in meta-analysis, like risk of bias and publication bias assessments; it's almost like they were testing them out, early iterations, before it was even a thing, and they were using it in this paper, which was kind of cool.

Josh:

So where was I? Okay, so there's this basically case-control study, and when they did that, they found some interesting things. They found that conventional medicine also looks like you carved out a bunch of your Christmas tree; there was evidence of publication bias in conventional medicine. Not surprising, we know it's a huge issue. There were a lot of crummy studies in conventional medicine. Not surprising, we know risk of bias is a major issue. Interestingly, the studies of homeopathy were, on like a two-to-one ratio or something, higher quality, right?

Adam:

I think it was a little bit more than that. Let me actually bring it up here. Yeah, basically two to one: 19% of the homeopathy trials were considered high quality and only 8% of the conventional medicine trials.

Josh:

Yeah, which was really interesting. And I don't know if you caught this, but when you look at the breakdown of the quality, it was pretty equivalent except for adequate concealment of allocation, aka blinding. Well, not necessarily aka blinding. For the most part they're equivalent, except for that one, adequate concealment of allocation. I wonder, is that talking about allocation concealment like we think about it now, or is that talking about blinding? We don't need to go down that. But essentially, the homeopathy studies were on net higher quality, and I don't think that's too surprising. We talked about this last time. It's so easy to blind homeopathy studies, so easy that I would be shocked if it were an issue, whereas some conventional medicine approaches might be really difficult to blind, right? So maybe that was part of the issue. Okay, but certainly the argument that homeopathy doesn't have randomized controlled trials: not true, 110, at least as of 2005. Certainly the argument that they only have low-quality studies: not true, they're better on that than conventional medicine on average.

Josh:

Look, the thing is, we don't do this for money, this is pro bono, and, quite honestly, the mothership kind of ekes it out every month or so. We do this because we care about this, we think it's important, we think that integrating evidence-based medicine and integrative medicine is essential, and there just aren't other resources out there. The moment we find something that does it better, we'll probably drop it; we're busy folks. But right now this is what's out there, unfortunately, that's it, and so we're going to keep on fighting that good fight. And if you believe in that, if you believe in intellectual honesty and the profession and integrative medicine and being an evidence-based provider in the integrative space, please help us, and you can help us by becoming a member on Dr Journal Club. If you're in need of continuing education credits, take our NANCEAC-approved courses: we have ethics courses, pharmacy courses, general courses. Interact with us on social media, listen to the podcast, rate our podcast, tell your friends. These are all ways that you can help support the cause.

Adam:

Also, I think one thing that's important to recognize is that they matched the studies without knowledge of the results of those studies.

Josh:

Ah, that's good.

Adam:

So it's not like cherry picking. They weren't cherry-picking the studies to match to. They basically had a computer-generated random selection of the trials, and they didn't know the results of those trials, and just tried to match them based on what the outcome was and what the clinical indication was. So, like I said before, if a homeopathy trial looked at children with acute respiratory tract infections, then they would try to match that to a study that looked at children receiving a medication for upper respiratory tract infections. But they didn't have knowledge of the results beforehand.
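The blinded matching Adam describes, randomly picking one conventional trial per homeopathy trial with the same condition and outcome while never consulting results, might look something like this. All trial records here are hypothetical, invented for illustration.

```python
import random

# Hypothetical trial records (invented for illustration). Each
# conventional record carries a 'result', but the matching step never
# looks at it -- mirroring the blinded, computer-generated selection.
homeopathy_trials = [
    {"id": "H1", "condition": "acute URTI", "outcome": "symptom score"},
    {"id": "H2", "condition": "migraine", "outcome": "headache days"},
]
conventional_pool = [
    {"id": "C1", "condition": "acute URTI", "outcome": "symptom score",
     "result": "positive"},
    {"id": "C2", "condition": "acute URTI", "outcome": "symptom score",
     "result": "null"},
    {"id": "C3", "condition": "migraine", "outcome": "headache days",
     "result": "positive"},
]

def match_blind(homeo, pool, rng):
    """Pick one conventional trial per homeopathy trial, matched on
    condition and outcome, chosen at random and blind to results."""
    matches = {}
    for trial in homeo:
        candidates = [c for c in pool
                      if c["condition"] == trial["condition"]
                      and c["outcome"] == trial["outcome"]]
        matches[trial["id"]] = rng.choice(candidates)["id"]
    return matches

print(match_blind(homeopathy_trials, conventional_pool, random.Random(0)))
```

The point of the design is that the selection function has no access to the `result` field, so favorable conventional trials can't be preferentially chosen, just as Adam describes.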

Josh:

Yeah, that's awesome. I think that's really important, and I wanted to talk to you later about the lack of registration of the methods on this, because you could see how people have strong priors on this topic, and you can analyze anything to death, if you will. But anyway, so they applied this to conventional medicine, and it was the same general thing: when they just did a meta-analysis, there was an effect, it was statistically significant, medicine works. And then when they did all this stuff, the effect was much smaller, got close to non-significance, but it remained statistically significant at the end.

Josh:

And so then they basically concluded, well, Linde suggested that their results are not compatible with the hypothesis that homeopathy is just placebo, and then Shang basically said, our results are in line with the hypothesis that it is placebo, and the effects that you see in the literature result from risk of bias at the individual-study level, compounded by publication bias at the evidence-universe level. Okay, ask me all the questions. I've just been talking for a long time, but it's a rather complicated approach, which is fascinating. But yeah, that's the basics of it.

Adam:

Yeah, so basically more positive results coming from just crumbier trials.

Josh:

More positive results coming from crumbier trials and more positive results coming from smaller trials.

Adam:

Yeah, I kind of lump the smaller trials within the umbrella term of crummier.

Josh:

Right. Well, that's kind of the crux of it, because they're arguing that they were independent issues. Because, to your point, they do talk about that, they do say that. And that was a little confusing to me, actually, because somewhere in the discussion they said the best way to address bias on a study level is actually just to look at the largest studies, because, to your point, smaller studies are not as well controlled and risk of bias tools aren't sufficient. But then, well, if that's true, then why? Yeah, it was just a little confusing to me. But their approach was basically, we're going to address both of these at once, and that did away with the effect size.

Adam:

And then I think, another thing to highlight, because I know that there's other people who would be like yeah, but what about this type?

Adam:

What about that? Kind of like what we get with the vitamin D crowd. The treatment effects did not vary by duration of follow-up, the clinical indication, the type of homeopathy or the clinical topic. Meaning it didn't matter if it was classical, clinical, complex or isopathy; it didn't matter if it was a single dose, multiple doses, for one week, for six months, et cetera. They didn't find any variation in the treatment effect across any of that.

Josh:

That's right. Yeah, one of the things I've always heard is well, if it was classical homeopathy. Da, da, da da. And it's funny because there's a line in their discussion when they said we talked to some colleagues and they said, well, if it was classical homeopathy.

Adam:

Yeah, no, I actually really appreciated that because they said, well, if it was this? And then they said, OK, well, we'll address it, and we're still not finding an effect.

Josh:

Yeah, yeah, that was interesting. And remember, though, that this is classical homeopathy minus the two-hour intake, which probably has therapeutic value in itself. So, again, they're basically randomized so that everybody gets the intake; the thing being tested in these studies is just the homeopathic remedy itself.

Adam:

So yeah, which only strengthens the argument that it's just a placebo because you're removing it from the theater.

Josh:

Yeah, exactly. And in mixed company we say non-specific effects, not placebo effects, non-specific effects of intervention. But exactly, once you remove the theater of medicine, the non-specific effects of intervention, it's just that pill: is there an effect? And they're saying that effect can be negated with this approach. So lots of interesting things. Oh, I want to make one more methods point, and then let's talk about what this means and whether it's fair, how they did it, all that sort of thing. The only methods point is they had this very interesting comment in the discussion that I don't think I noticed when I read this years ago, which is, they were basically arguing this publication bias issue is a phenomenon that you see across medicine, but you can't see it unless you have massive numbers of trials, like over a hundred studies. And so they said, for example, if you just looked at, I think they used the upper respiratory infection example, if you just looked at the homeopathic studies in upper respiratory infection, there is an effect. And if you looked for this Christmas tree effect, you wouldn't see it; it doesn't look like a Christmas tree with a gaping hole on one side, and you would assume that it works. Their argument is the alternative hypothesis, that there is publication bias playing a role here. But when you get into the nitty-gritty, just the five studies on this, you can't see the evidence of publication bias, and so you're basically defaulting to the assumption that it works. And this is sort of an interesting thing.

Josh:

And when you look at publication bias studies analysis, the argument is you really need at least 10 studies to to do this and it's underpowered otherwise and Was very and I'm going on a bit of a tangent and so I'm gonna shut up in a second.

Josh:

But the way we it wasn't invented in 2005 for this paper, but the way we look at this from a great perspective is one of the major things that we look for, for the confidence in the evidence is evidence of publication bias and everything else. When you go through it is like you know if they didn't do it, you knock it down. If you didn't do it, they knock. You knock it down like you have to find proof for it to survive. But for the publication by his one, it's like you know you don't knock it down if you don't see evidence, but you need a lot of studies to see evidence. So it almost is like the smaller number of studies in a meta-analyses will get a pass on this, because it's impossible to see evidence of it, and their argument is you might want to consider, you know, looking at things on a larger level. So I thought that was kind of curious.

Adam:

Yeah, no, I thought it was still an important remark that they made. Okay, so let's talk about that; that's basically the take-home.

Josh:

So whenever I presented this in class, when I was teaching at Bastyr, for example, (a) people were furious, furious, and (b), I think fairly, they were like, well, that's not fair. And I think that's a decent point. What are your thoughts about this approach in particular, and where do we go from there?

Adam:

Well, I think it kind of goes back to what we were saying earlier, where extraordinary claims require extraordinary amounts of evidence.

Adam:

And I mean, if you think about it, and I'm always sort of, you know, dogging on mechanism of action and how I don't really care for it, but I also do value the importance of it, especially when it comes to the early science of trying to develop therapeutic agents. From that standpoint I think it's important. But from a clinical standpoint, I think there needs to be a lot less emphasis on it, because you can become attached to the mechanism of action to justify why you're doing something even though it may not have any sort of clinical benefit. With homeopathy, there is no mechanism of action that makes any sense. There's no substance in the substance that you're trying to give someone, and so to say that essentially nothing is having an effect that's not due to placebo is going to require a lot of proof as to why we should believe that, or trust the evidence to suggest that there is something there.

Josh:

Yeah, I think I'd mostly agree with that. I do think it's a, quote-unquote, unfair approach. I've never seen anyone do a sensitivity analysis where you're adjusting for two things concurrently like that.

Adam:

But I think this is one of the exceptions where you have to, just because of how ridiculous the claims are when it comes to homeopathy.

Josh:

Yeah, I think that would be the argument: like you said, extraordinary claims, extraordinary evidence. So at some point, and if listeners are aware of them, we're going to have to look at mechanism studies. I'm sure there are a bunch of mechanism studies on homeopathy, so maybe we should turn our attention there. And you know, it's interesting, remember when Linde et al., in their discussion, were like, we don't think that doing additional meta-analyses or additional randomized controlled trials is going to help this argument; you're going to need replication studies and mechanism studies. And then Shang came along a couple of years later and was like, well, we'll come up with a method to make this not work, which is exactly what Linde said was going to happen. And they were like, yeah, you're not going to convince anybody, so you need to look at it this other way. So yeah, it'll be curious. I think we should transition to that.

Adam:

I do want to say that in the discussion these authors also talked about other studies that had similar results to their findings. One that they pointed to was a study of 23 trials of homeopathy that looked at studies implementing high-quality methods and found that only a few trials actually used objective endpoints, and all of those were negative.

Josh:

Yeah, so that's interesting, because objective endpoints should only matter if you're not blinded, right? In theory, when you do your risk of bias assessment, you're looking at blinding, and they're saying that when you look at high-quality studies, presumably with adequate blinding, they're still seeing this. So maybe this goes to Shang's point, which is that the risk of bias tools are just inadequate. And they said this before we got the Cochrane risk of bias tool, and now the Cochrane Risk of Bias 2.0 tool. But maybe even now they would argue that our current tools for risk of bias are inadequate and there might be quantitative approaches that are better; these folks clearly come from the quantitative camp. Anyway, that's a big debate in the Cochrane world. But yeah, that's a good point, and it doesn't make sense to me that that would be true unless we're basically saying the risk of bias assessments are inadequate and there is still unmasking or something like that going on, if that makes sense.

Adam:

Yeah, and one last thing: I thought this was a healthy conclusion, even if they were coming into this with some priors. I did like how they ended the paper by saying that rather than doing more placebo-controlled trials, more effort should be focused on the nature of the context effects and on the place of homeopathy in healthcare.

Josh:

Yeah, right, exactly, and I think that makes sense, and I think a lot of people get benefit from these long intakes. Although I have to say, I was required at Bastyr to go as a patient to a homeopathic intake, and, as you know, I was not a fan, so I did the one required one and that was it. It was traumatizing for me, so it's an n-of-1 anecdote. But they spend two hours digging into your soul, and it was quasi-counseling, in that they go really deep, but it felt like they didn't have the counseling training to keep it safe and package it up at the end. It was just like, okay, now that we've, you know, seen your soul, we're going to go find your remedy. See you later. I just remember that feeling. That being said, I think a lot of people did not have that experience, and it was a teaching hospital, so it might have just been an inexperienced student or something like that. But so yeah, anyway.

Josh:

So I think we're going to have Mark Davis on to talk about some of these other meta-analyses. One of them speaks to the replication point: he was saying there was a meta-analysis from a single study group that did a bunch of studies, I think replicated them, and did a meta-analysis. So that would go to that replication argument. And then I think we're going to have to find some good summary paper on homeopathy mechanism studies. So, Adam, that's our homework, yours and mine, to find something in the interim, and if any listener knows this stuff, I think that'd be great. There are some really, really brilliant homeopathy researchers that I discovered in my quest to understand the challenge that is homeopathy. One is Iris Bell; her randomized controlled trials are brilliant, and I think she's done some mechanism studies too, so she might be someone for us to look at as well. Cool. Sorry I dominated today. Any last-minute comments or things you wanted to bring up, Adam?

Adam:

No, I don't think so. I would just say, I know it can come off as crass and a bit direct, and by no means do I have any intent to disrespect those who use or practice homeopathy; it's more to get people to think about the evidence regarding homeopathy. And the good thing about evidence-based medicine and working with numbers is that it really shouldn't stoke an emotional reaction, although I can certainly see why it would cause people to be upset or happy, or play into their biases coming into it from either direction. But I do think it's important to recognize what the evidence is showing.

Josh:

Well, see, I think that supports the homeopaths' argument, because, honestly, the Shang paper is a bunch of methodologists trying to find a way to disprove it.

Josh:

But if you looked at it from any other conventional evidence-based medicine approach, there is a large, statistically significant effect that survives sensitivity analyses. So I don't know, man. Honestly, if you look at the evidence with eyes that are fair, there's no reason this shouldn't be a thing. Now, another question is, on an individual clinical outcome level, do we have enough evidence that there's something that works? But here we're talking about whether this is real or not, which is a bit of a different issue. So that's the thing: I'm struggling to make my worldviews work. I've got this worldview that homeopathy can't be real, and this worldview that I trust evidence-based medicine approaches. Maybe this is why I'm so obsessed with this, because this issue highlights the cognitive dissonance between those two worldviews in my mind. When I use a standard evidence-based medicine approach, this should be a thing. But it can't be a thing, and so I'm going to use all these fancy stats to explain it away. And that's okay because of priors, and because of Bayesian reasoning, and because of extraordinary claims. But is it? I don't know. I'm still clearly torn on this thing.

Adam:

Yeah, I mean, I think people know where I stand, but yeah.

Josh:

Yeah, no, no, fair enough. Fair enough. And I was definitely where you were, certainly in school and going to these conferences, buttonholing people, if that's the right word, when you corner someone after a talk and don't let them get to the refreshments because you have a zillion questions, trying to figure out the backstory of a study, tracking down random excerpts in random chapters of other books. But back then I was like, this isn't real and I want to understand why. Now I'm a little less, I don't know; I still think it's unlikely, but I feel like I'm less certain in things the older I get, like there's lots of stuff that I don't understand. But, that being said, this seems like a really big thing to somehow magically work. I mean, we've got to figure this out. They argue for this memory-of-water thing, that the water molecules somehow stay in formation and it survives the succussion.

Adam:

But then again it's like okay, that's an extraordinary claim, you have to prove it.

Josh:

Yeah, yeah, no, exactly. So I'm really curious; we need a good review of these mechanism studies. And if someone just tells me "quantum physics," period, I'm going to lose my freaking mind, right? I feel like that's always the argument for things people don't understand. They're like, well, quantum physics. And you're like, what is that?

Josh:

What does that even mean? Give me a little bit more. So anyway, I would like to know what these studies say, and I haven't looked at them yet. So that'll be the thing: we've got to find a review paper, ideally from an unbiased perspective, on mechanism studies. All right, homeopaths, if you're listening, please send us your data, send us your studies for us to take a peek at. Alrighty, should we leave it there?

Adam:

We should.

Josh:

All right, take care everybody. We'll talk to you next time. If you enjoy this podcast, chances are one of your colleagues and friends would as well. Please do us a favor and let them know about the podcast, and if you have a little bit of extra time, even just a few seconds, if you could rate and review us on Apple Podcasts or any other distributor, it would be greatly appreciated. It would mean a lot to us and help get the word out to other people who would really enjoy our content. Thank you. Hey y'all, this is Josh.

Josh:

You know, we talked about some really interesting stuff today, and one relevant thing is a course we have on Dr Journal Club called the EBM Boot Camp. It's really meant to help clinicians critically evaluate the literature, some of the things that we've been talking about today. Go ahead and check out the show notes; we're going to link to it directly, and I think it might be of interest. Don't forget to follow and interact with us on social media: we're Dr Journal Club on Twitter, and we're on Facebook and LinkedIn as well. So please reach out to us; we always love to talk to our fans and listeners. If you have any specific questions you'd like to ask us about research, evidence, or being a clinician, don't hesitate to ask. And of course, if you have any topics you'd like us to cover on the pod, please let us know as well.

Introducer:

Thank you for listening to the Dr Journal Club podcast, the show that goes under the hood of evidence-based integrative medicine. We review recent research articles, interview evidence-based medicine thought leaders and discuss the challenges and opportunities of integrating evidence-based and integrative medicine. Be sure to visit www.drjournalclub.com to learn more.

Homeopathy Debate and Meta-Analysis Response
Analysis of Homeopathy and Conventional Medicine
Examining the Evidence for Integrative Medicine
Debating the Evidence for Homeopathy