Dr. Journal Club

Soothing the Mind with Electrical Waves: Demystifying CES

March 07, 2024 Dr Journal Club Season 2 Episode 9

Discover the lowdown on cranial electrical stimulation (CES), an FDA-approved method with promising results for anxiety, depression, insomnia, and pain. 

Our exploration of cranial electrical stimulation (CES) highlights its promising potential for anxiety relief. While visual insights from randomized trials paint a positive picture, it's essential to approach these findings with scrutiny and an awareness of research complexities. As we navigate the evolving landscape of evidence-based integrative medicine, the conversation around CES continues, inviting ongoing exploration and discussion.

Chung FC, Sun CK, Chen Y, Cheng YS, Chung W, Tzang RF, Chiu HJ, Wang MY, Cheng YC, Hung KC. Efficacy of electrical cranial stimulation for treatment of psychiatric symptoms in patients with anxiety: A systematic review and meta-analysis. Front Psychiatry. 2023 Apr 6;14:1157473. doi: 10.3389/fpsyt.2023.1157473. PMID: 37091717; PMCID: PMC10115990.

Thank you for joining us on this insightful journey, and we look forward to continuing the conversation in our ongoing pursuit of knowledge and well-being within the realm of evidence-based integrative medicine.

Learn more and become a member at www.DrJournalClub.com

Check out our complete offerings of NANCEAC-approved Continuing Education Courses.


Introducer:

Welcome to the Dr. Journal Club podcast, the show that goes under the hood of evidence-based integrative medicine. We review recent research articles, interview evidence-based medicine thought leaders and discuss the challenges and opportunities of integrating evidence-based and integrative medicine. Continue your learning after the show at www.drjournalclub.com.

Josh:

Please bear in mind that this is for educational and entertainment purposes only. Talk to your doctor before making any medical decisions, changes, etc. Everything we talk about is to teach you guys stuff and have fun. We are not your doctors. Also, we would love to answer your specific questions. On www.drjournalclub.com you can post questions and comments for specific videos, but you can also email us directly at josh@drjournalclub.com. That's josh@drjournalclub.com. Send us your listener questions and we will discuss them on our pod.

Josh:

How many cups? Are we still counting? Are we doing that?

Adam:

I'm on my 7th.

Josh:

Okay, well, we're being honest. I'm probably not far away. Hello, dear listener, how are you doing? We're just discussing our caffeine addiction, although I just presented at Sonoran University with Heather, and she presented on a paper on non-sugar, non-dairy coffee and mortality. Basically, for everything besides cancer, it's beneficial, pretty much in a linear direction, although I'm sure at some point it becomes a U-curve.

Adam:

I think I read the meta-analysis where it said the health benefits started at the 6-cup point.

Josh:

Oh, that I didn't see. This was a pretty good paper she showed; it was New England Journal of Medicine. It was a U-curve. But even where you and I are at with 6-7 cups a day, it's on the U, but it's not dipping into the danger zone; it's still in the benefit range. It's just a little less beneficial than, say, 4 cups of coffee or something like that. We're good, all right.

Adam:

We should.

Josh:

I was going to say yeah. Go ahead.

Adam:

The American College of Cardiology did an atrial fibrillation guideline update, and they did say that coffee doesn't have an impact on it. They did caveat it: if you get subjective symptoms of palpitations and you think they're associated with coffee, then perhaps consider cutting back. But they were like, otherwise, it's fine.

Josh:

Yeah, okay, all right, I feel like that's pretty clear for a lot of people. But okay, cool. Yeah, no, I'm totally fine continuing to give in to my addiction here and finding appropriate evidence to support it. All right, okay, but we digress. So today we have a listener-requested review. Dear listener, always feel free to reach out to us if there's a paper of interest to you and you have a question about it, or what have you. Just say, hey guys, can you review this? I'd love to know what you think. So Dr. Ali Naviti reached out and was like, yeah, what about this cranial electrical stimulation thing? I didn't know what that was, I was embarrassed to say. But he was like, yeah, I think there's some evidence on it, do you mind doing a little bit of a review? And I found this recent systematic review and meta-analysis on it, and so I figured this would be a good one for us to go over.

Adam:

Yeah, let's get into it.

Josh:

All right, let's do it, okay. So let's start here. So maybe we just explain this, because this was news to me. So there's cranial electrical stimulation, which they call CES. Alpha-Stim, no affiliation, is apparently the classic company product that most people associate with it, if that makes sense to the listener. It's related to transcranial magnetic stimulation, but not really; maybe it's analogous, but further from the brain. You put these things on your ears, I think, or your eyelids.

Adam:

It seems similar to TMS; that's what I'm familiar with.

Josh:

Right that I know for like depression and things like that yeah.

Adam:

Yeah, there's a lot of research in the field of TMS, but it sounded like with CES, cranial electrical stimulation, the main difference was that it's not as intense, or there's less depth with the stimulation. But definitely not something I'm super familiar with.

Josh:

Right, me neither. And yet I was surprised to hear that it's been FDA approved, like for a while, for anxiety, depression, insomnia and different pain conditions. So I was like, wow, this is something that is FDA approved for all these conditions. Turns out there's a lot of research on it; we're going to talk about that. I felt very benighted not to know that this is a thing. So apparently this is a thing, and all of our listeners are going to be like, yeah, no kidding guys, but anyway.

Josh:

So that's about the background that I feel is necessary. If you're interested in learning more about it, absolutely go check it out. Like I said, there's an Alpha-Stim website, no affiliation; I think that's the main provider of this product that's still in business, I suppose is one way to put it. And then the last bit of background we need is that this focus is going to be specifically on anxiety, and the rationale given is that there had been previous systematic reviews on CES, but there was a lot of heterogeneity, and the hypothesis was maybe that heterogeneity was because of varied conditions and patient populations. So, very reasonably I thought, these authors decided: we're just going to pick a relatively homogeneous patient population, anxiety. We are going to measure secondary outcomes on depression and insomnia in an anxious population. So we are looking at different outcomes, not just anxiety, but the patient population is homogeneous. I think those are the only other background pieces that I picked up on. Anything else you want to add to any of that?

Adam:

The only thing I would add is that this paper wasn't funded by any private industry. So I know that you mentioned the Alpha-Stim, but there was no influence from industry on this paper.

Josh:

Yeah, no, so really good point. My spidey sense was definitely going off from the get-go. I was like, oh, this is definitely an Alpha-Stim-funded paper or some new company. And yeah, I couldn't find any evidence of that either. So good point, and it's important to call that out. Okay then, all right. So let's jump into the methods. So this was a systematic review and meta-analysis of only randomized controlled trials, so we're dealing with sort of high-quality evidence from the get-go. I would also point out that they have a PROSPERO registration. Although I say PROSPERO one way and other people say it another way. Have I been mispronouncing it this entire time? How do you know how it's pronounced?

Adam:

I know all things Josh.

Josh:

I don't know why. I think it's a Shakespeare character, so maybe it is pronounced that one specific way, I don't know. Anyway, it's the major systematic review registry. You've probably heard us talk about it a lot, but it's a good sign that it is registered. I did go and check it out. I didn't check the timestamp, I probably should have done that, but everything that was on the PROSPERO record, and I think if you change anything PROSPERO pretty clearly highlights it, I didn't see anything like that, all the outcomes there were in line with the outcomes reported here. I didn't see any red flags. And maybe, super briefly, Adam, do you want to tell the listeners what might be some things they want to quickly eyeball on a registry link?

Adam:

Well, you just want to make sure that what's in the protocol matches the methods of the paper, and that what they originally set out to do is actually being conducted and they're not drastically changing things. It doesn't mean that any change in protocol is necessarily a bad thing. Sometimes you have to change a protocol for feasibility of the study; maybe they needed to change recruitment criteria a little bit, because they're studying something that's a little bit more rare and they need to get more people into the study. But you just want to make sure that they're not making drastic changes. Let's say, since this paper was about anxiety: if their initial outcome was suicidal ideation, and then they just changed it to change in anxiety scales, that would be a big red flag.

Josh:

Yep, yep, exactly. So yeah, I actually was looking for that. I don't normally, I probably should, but I don't always go to the registry link. But I was literally looking to see if they said in their protocol that they were only interested in anxious populations, because that's like, huh, that's pretty specific. I was still suspicious it was funded by Alpha-Stim at this point, and so I was trying to dig some stuff up, but I couldn't find any dirt. It looked pretty legit, and their outcomes were as they are stated here too, so that looked good. Okay, awesome.

Josh:

So they did their search to find their randomized controlled trials. As far as they reported, I thought they did a very good job. They searched five major databases, which is excellent, more than you normally see. As far as I could tell, they didn't do a gray literature search at all, so that might be one little mark against them. I didn't see that they searched conference abstracts or Google Scholar or anything like that, but their database searches looked pretty robust. Anything on the search that you wanted to add, Adam?

Adam:

No.

Josh:

Okay, cool, let's see. If you're interested in the particular data from particular studies, dear listener, you can go to Table 1. They do a pretty good job of laying out the details of every specific randomized controlled trial. The rest of the methods I thought were pretty standard and pretty good. They had screening done in duplicate, with disagreements resolved by consensus. They did the same with extraction. These are all gold-standard recommendations. They used the Cochrane Risk of Bias tool to evaluate risk of bias in randomized controlled trials. That's also gold standard. They didn't use the updated one, but that's totally fine; gold standard, not necessarily platinum standard, but still pretty darn good.

Josh:

What else? Again, they very clearly stated their primary outcome was anxiety, although they were interested in secondary outcomes like insomnia and depression within anxious populations. They were very clear about their outcomes. Those outcomes matched up with the registration, presumably done a priori, which was also good. They warmed the cockles of my heart. They did a GRADE assessment in addition to a Cochrane Risk of Bias assessment. I was just losing it over here; I was very, very pleased. What else did they do? They noted thresholds for effect size. That was important. You want to talk a little bit about that one? You love minimal important differences, yeah.

Adam:

What they used was a standardized mean difference.

Adam:

The reason why we would do that is because if you have different studies using different types of methods of assessing anxiety symptoms, so perhaps they use a GAD7.

Adam:

That's used pretty commonly in a clinical setting, versus, let's say, the Hamilton Depression Rating Scale, which is used a lot in research, versus maybe a visual analog scale: on a scale of 1 to 10, how would you rate your anxiety? The hard part about pooling all that data is that you kind of can't. If you're able to somehow standardize all of those effects to give you one number, then you can pool all of those things together to create one effect size. It helps make the overall effect a little bit more clinically meaningful. However, it's still a little bit challenging to understand how much benefit you're getting from a specific intervention if you're reporting it as a standardized mean difference. However, it gives us a gist of how efficacious a treatment is. In this study they suggested that a standardized mean difference of less than 0.2 is considered a minimal treatment effect, 0.2 to 0.5 small, 0.5 to 0.8 medium, and anything greater than 0.8 a large effect.

Adam:

If you were then to say, okay, I'm administering a GAD7 in clinic and you had a large treatment effect, we actually don't know from an absolute standpoint how much that number would improve. The GAD7 ranges from 0 to, I believe, 21. If you were to have a large treatment effect, does that mean that a score of 18 now goes to 15? Does it mean an 18 now goes to a 9? We don't know. We just know, qualitatively, that the treatment effect is minimal, small, medium or large.
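
To make the standardized mean difference Adam describes concrete, here is a minimal sketch. The helper names and the GAD7 numbers below are our own invented illustration, not anything from the paper; it just computes Cohen's d from two groups' summary statistics and labels it with the thresholds mentioned above.

```python
# Hedged sketch: Cohen's d from summary stats, labeled with the
# <0.2 minimal / 0.2-0.5 small / 0.5-0.8 medium / >0.8 large thresholds.
# All numbers are made up for illustration.
import math

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def effect_label(d):
    d = abs(d)
    if d < 0.2:
        return "minimal"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Hypothetical GAD7 improvement scores: CES group vs. sham
d = standardized_mean_difference(m1=6.0, sd1=4.0, n1=40, m2=2.0, sd2=4.0, n2=40)
print(round(d, 2), effect_label(d))  # prints: 1.0 large
```

The point of dividing by the pooled SD is exactly what Adam says: it strips away the instrument's units, so a GAD7 trial and a Hamilton trial land on the same scale.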

Josh:

Yeah, excellent, I think that's totally right. You get the benefit of standardizing everything because you can compare across different tools, like you said, but of course it's not immediately translatable to the tools that we are used to using. Now, that being said, there are techniques to do that. We've done that in some of our meta-analyses and we plan to do more, where basically, I describe it as Celsius and Fahrenheit: they're both measuring heat, and you can convert one to the other and back and forth. It's a similar idea. If you've got the HAM-D and the GAD7, and you can argue that they're fundamentally measuring the same thing, which in theory you should be able to if you're doing a standardized mean difference, then you might be able to convert back and forth and convert these back to, okay, what does that mean? Well, it means on the GAD7 you drop five points, right? Not a lot of people do that. Anyway, long story short, they had clearly defined thresholds. Everything looks totally kosher. These are standard thresholds that everybody uses for what's considered an important effect size, so that all looked amazing.
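
The back-conversion Josh describes can be sketched as a one-liner: multiply an SMD by a typical standard deviation of the instrument you care about. The GAD7 standard deviation of 4 points below is a purely assumed, illustrative value, not one taken from the paper.

```python
# Hedged back-of-envelope conversion of an SMD onto a familiar scale,
# per the Celsius/Fahrenheit analogy. The instrument SD is assumed.
def smd_to_scale_points(smd, scale_sd):
    """Rough translation: SMD * typical SD of the target instrument."""
    return smd * scale_sd

pooled_smd = 0.96          # the paper's pooled anxiety effect
assumed_gad7_sd = 4.0      # hypothetical, illustrative SD
points = smd_to_scale_points(pooled_smd, assumed_gad7_sd)  # roughly 3.8 GAD7 points
```

A proper conversion would use a defensible population SD for the target scale; this only shows the mechanics of the idea.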

Josh:

They did some sensitivity analyses, including a leave-one-out analysis, which is good. It basically means: if you meta-analyze five studies, but one study has a dramatic effect and it's a decent-size study, it's going to shift your overall, quote-unquote, average. That may not be appropriate, because there might be something very unique to that study. To test the robustness of your result, it's always good to take each study out one at a time and see if it impacts things in any meaningful way. They did that. That looked awesome. They looked at funnel plots for publication bias. All in all, I was very pleased with the methods. Any other methods comments before we jump into actual results?
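
The leave-one-out procedure Josh describes can be sketched as follows. This uses simplified inverse-variance (fixed-effect) pooling, and every study effect and standard error below is invented for illustration; the paper's actual analysis may weight studies differently.

```python
# Hedged sketch of a leave-one-out sensitivity analysis: re-pool the
# meta-analysis with each study removed and see whether the summary
# effect moves much. Numbers are invented for illustration.
def pooled_effect(studies):
    """studies: list of (effect, standard_error); inverse-variance pooling."""
    weights = [1 / se**2 for _, se in studies]
    return sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)

studies = [(0.95, 0.20), (1.10, 0.30), (0.90, 0.25), (1.00, 0.35), (0.40, 0.40)]

overall = pooled_effect(studies)
for i in range(len(studies)):
    loo = pooled_effect(studies[:i] + studies[i + 1:])
    print(f"without study {i + 1}: pooled SMD = {loo:.2f} (overall {overall:.2f})")
```

If no single omission moves the pooled estimate meaningfully, the result is robust in the sense the hosts mean.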

Adam:

Yeah, what I would say, on the topic of the standardized mean difference, if people are still confused: it's like saying it's a little bit hot outside, it's really hot outside, it's scalding outside, versus it's 79 degrees, 80 degrees, 85 degrees. That's the difference between very hot, hot, not too hot and the actual temperature.

Josh:

You've got some studies reporting it in Celsius, and God knows what those numbers are, and some reporting it in Fahrenheit. Yet we can combine them all when we talk about how hot it is outside. That's awesome. I'll steal that for teaching. That's a good one. Jumping into results: they had one, two, three, four, five, six, seven, eight randomized controlled trials. That's a lot. Eight randomized controlled trials. Again, these are randomized controlled trials, high-level evidence.

Josh:

The risk of bias assessments looked pretty darn good across the board. Most domains had a low risk of bias, a few unclears, which probably means it was poorly reported. One study had a couple of high risk of bias elements around blinding, but for the most part this looks like a pretty robust evidence base as far as risk of bias in the randomized controlled trials. Any comments on risk of bias assessments? Nope. Okay, we've got a good, in my opinion solid, base of evidence to base our results on.

Josh:

Now let's look at the results. I'm just going to go through the forest plots. Adam and I have talked about this; we prefer the figures to the text. I just think it's a little bit more fun and easier. Figure 3, I think, is the first one, if you're following along, and we'll give a citation link, of course, in the show notes, where we're looking at the severity of anxiety symptoms between the CES, that's the cranial electrical stimulation, and control groups.

Josh:

This is the main outcome that they cared about: what did this do to the anxiety levels? What you see here, overall, when you meta-analyze all the results, is close to a 0.96 improvement in standardized mean difference. Remember, anything greater than 0.8 is considered a large effect. This is an improvement of close to one, so this would be considered a large effect size, a large improvement in anxiety scores across eight different randomized controlled trials. The 95% confidence intervals are pretty darn tight. Even worst case, at the lower bound, you're still talking about just shy of a large effect size, a 0.73 standardized mean difference improvement. Even worst case, you're still talking about a really sizable effect. The I-squared, which is heterogeneity, and you want that number low, was zero, which is great, which means that there's no statistical heterogeneity here. It looks like we're comparing apples to apples. Those were pretty strong results from my perspective. Any thoughts on that one?

Adam:

A couple of things. A positive is that one of the newer studies, from 2014, also carried the most weight in the analysis, 32.5%. That was Barclay 2014, one of the two studies with a perfect risk of bias score, so very low risk of bias. They used DSM-IV criteria to define anxiety. And of course, as we know, it was a randomized controlled trial, one of the larger trials and one of the longer trials in duration.

Josh:

Look, the thing is, we don't do this for money. This is pro bono and, quite honestly, the mothership kind of ekes it out every month or so, right? So we do this because we care about this, we think it's important, we think that integrating evidence-based medicine and integrative medicine is essential, and there just aren't other resources out there at the moment. If we find something that does it better, we'll probably drop it; we're busy folks. But right now this is what's out there, unfortunately, that's it, and so we're going to keep on fighting the good fight. And if you believe in that, if you believe in intellectual honesty and the profession and integrative medicine and being an integrative provider and bringing that into the integrative space, please help us. You can help us by becoming a member on Dr Journal Club. If you're in need of continuing education credits, take our NANCEAC-approved courses; we have ethics courses, pharmacy courses, general courses, interactions. Follow us on social media, listen to the podcast, rate our podcast, tell your friends. Those are all ways that you can help support the cause.

Josh:

Adam just spouted that out in a couple of sentences, so I wish we had visuals on this, but if you can, envision a forest plot. There are a few things here that he teased apart. One is, when you're looking at a meta-analysis, envision this forest plot; in a way, meta-analyses are really just fancy weighted averages, essentially. And so Adam's point is, whenever you look at this, always try to see what study or studies, if any, are driving these results, which ones are weighted the most, and make sure that we're happy with them. And to his point, this one was weighted way more than anything else, almost double the next largest one, and that's because it's a large study. And yeah, there's a lot of comfort in knowing that it was low risk of bias, perfect score, very recent, small confidence intervals, et cetera, et cetera. So yeah, really, really good point. No red flags, just green flags across the board here. So I was, yeah, go ahead.

Adam:

One more thing I just wanted to point out, too, is when you actually look at all the studies and their effect sizes, so if you're looking at the paper, the red squares are sort of the average effect sizes, they're all kind of in the ballpark, right on top of each other. So we're seeing consistency in these results, even though the confidence intervals are a little bit wider for some studies, and that's likely due to the sample size. However, there's only one study leaning towards a null effect with its confidence interval, but its average effect size is still sort of in line with the other ones. So we are seeing consistency across all the studies. Sometimes in a meta-analysis you'll see two with an effect, three with no effect, one or two favoring the control group, and it's kind of all over the place, and then when you put them all together you can get a better sense. But this is consistently right one on top of each other, having a large effect, with the exception of one study, Kim 2021.

Josh:

Yeah, exactly, and not to get too technical, but that is why, in that very well-described explanation, the I-squared statistic is 0%. The I-squared statistic, which measures the statistical heterogeneity, is literally measuring the spread of those effect estimates: are they stacked really nicely on top of each other, with any variation we see expected by random chance, or are they all over the place, like Adam said? And to his point, they stack super nicely. We see that with the I-squared statistic, and you can see it visually. And yeah, with that one exception, all the studies are reporting effect sizes that are pretty much one or above, or really darn close to one, which again would be considered a large effect size. So, very pleased with these results. Very impressive so far. Okay, I don't have any stock in Alpha-Stim, although maybe I should get some. Okay, so next is Figure 4. This is a subgroup analysis. So here they looked at Alpha-Stim. I swear we're not sponsored by Alpha-Stim.
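
The "stacked on top of each other" intuition maps onto how I-squared is actually computed: Cochran's Q measures how much the study effects scatter beyond what their standard errors would predict by chance, and I-squared expresses the excess as a percentage. A hedged sketch, with invented numbers:

```python
# Hedged sketch of the I-squared statistic: Q is the weighted scatter of
# study effects around the pooled effect; I^2 = max(0, (Q - df) / Q).
# Effects and standard errors below are invented for illustration.
def i_squared(studies):
    """studies: list of (effect, standard_error); returns I^2 as a fraction."""
    weights = [1 / se**2 for _, se in studies]
    pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
    q = sum(w * (eff - pooled)**2 for (eff, _), w in zip(studies, weights))
    df = len(studies) - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0

# Tightly stacked effects: Q falls below its degrees of freedom, so I^2 = 0%
tight = [(0.95, 0.30), (1.00, 0.30), (0.92, 0.30), (1.05, 0.30)]
print(f"I^2 = {i_squared(tight):.0%}")  # prints: I^2 = 0%
```

When the squares on the forest plot sit within each other's confidence intervals, Q stays small and I-squared bottoms out at zero, exactly the situation in this paper's main analysis.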

Adam:

But we might be in the next episode.

Josh:

Yeah, there we go. So of these eight studies, four of them used the same machine, so it makes sense that you would combine them. Also, the machine that was used in the older studies, that company doesn't exist anymore, so the authors' point is, look, this is one that's still available, that you could go out and buy, and so it makes sense that we would look at this subgroup. That makes total sense to me. And when you look at that, you see a very similar story.

Josh:

An effect size of 1.05, so a large effect size, even a little bit better than the overall result, and a 0% I-squared; everything stacks up nicely. To your weighting point earlier, that Barclay study continues to be weighted really highly; in fact, it makes up 50% of the weight for this meta-analysis, which, again, we've got no problem with, because it looks like that was a really good study. So it's not just an old machine that was used years ago that worked; it's actually something you can still buy now that has these large effects. Anything to add on that?

Adam:

And yes, when you actually look at the risk of bias figure, they do note at the bottom, with asterisks, that anything with an asterisk indicates that both the authors and the study received no financial support from private companies, and that Barclay study is one of those that did not have any financial support from private companies. So the two studies, Barclay 2014 and Kim 2021, which were both rated as low risk of bias, had no financial support from private companies, and at least Barclay, but not Kim, it looks like, did specifically use Alpha-Stim. Again, we have no financial compensation from Alpha-Stim.

Josh:

Yeah, I totally missed that. So that's another really nice point, an A for the authors of this paper, that they made it clear visually which ones were clear of that bias, which is outstanding. And maybe they were assuming we were going to have all the questions that we're having, like there must be some industry bias here. But yeah, really excellent points.

Josh:

And the only other thing I would point out, while we're back on that risk of bias graphic, and again, dear listener, if you're on the paper, it's Figure 2. It's sort of the classic Cochrane way of presenting the risk of bias in every domain across studies. You'll note that almost everything's green; the things that aren't green are yellow, well, with only two little reds. But my point about the yellows is that yellow is almost always because the study is not bad, but just poorly reported; they don't specifically say anything. And that is not surprising for old studies, because old studies were published before we had clear reporting standards, and so you'll note that a lot of these yellows come from studies done in the 70s or 80s, before we had these clear standards. So anything that looks even a little bit off, I'm even less suspicious of now. Yeah, so again, you know, we should have someone keep track. I feel like we almost always rip apart studies. This is one of the few exceptions; we also liked the last study we did, that umbrella review on diets. This is another paper that I really, really like, and the evidence is looking really good.

Josh:

Okay, so let's jump back in. Here they did a neat subgroup; we're talking about dates. They're like, well, you've got a bunch of these studies that were published in the 70s and 80s, and then a bunch of them published way more recently. Maybe there's a difference in effect across dates, and maybe that's because they used different machines, right, different machines were available at different times from different companies. Also, the quality and reporting standards are better now. So again, that's not an unreasonable subgroup. It wouldn't have been one that I would have thought of a priori, but if I've got eight studies, half of which were done in the 70s and half of which were done in the 2000s, this would probably be something that I would think about doing as well. So they did this subgroup assessment and there was no statistical evidence of subgroup effects. So it doesn't look like we should only trust one grouping over the other. The p-value is 0.21, which means there's no difference there between the years. Any comments on that one?

Adam:

No.

Josh:

Okay, cool. Now they do another subgroup analysis. You can kind of think of these as subgroup or maybe even sensitivity analyses. Now they're saying, okay, well, what if we just looked at patients with anxiety versus patients with anxiety who also had depression? Are we going to see a difference there? And no, again, the subgroup statistical analysis says there's no difference there. This should work equally well for those with anxiety alone and those with mixed anxiety and depression. In my opinion, as a clinician, that would be useful to me. They do a similar thing with, yeah, go ahead.

Adam:

However, the certainty of evidence for depression was low, so we need to keep that in mind.

Josh:

Yeah, okay, so let's remember to circle back to that when we do certainty of evidence; that's a good point. Okay. So then we look at monotherapy. So they took the CES, the electro-stim, alone, versus using it with medications, versus medications not controlled for. So again, that would be important, because if it's like, yeah, it works great, but in all the studies where it works great they were also taking meds, and that's why it worked great; we don't see any evidence that that's the case. It works.

Josh:

It seems to work just as well, at least statistically, whether it's just the CES or CES with meds, et cetera. And that's just because different studies will have different inclusion criteria, right? Some will say, if you're on medication already, don't change it, just add this on. Others will say, no, we only want to recruit patients that aren't on medication. So again, we look at this and it looks totally fine. Now we jump into the last figure, Figure 8, where we're looking at some of these other outcomes, depression and insomnia, and also pretty darn good effects. Depression is 0.69. So what did we say that was, like a moderate effect size?

Introducer:

Mm-hmm.

Josh:

Okay, it's a moderate effect size, statistically significant. There is some spread, there is some heterogeneity here, I-squared of 55 percent.

Adam:

But also remember that that was really more exploratory. They didn't focus on that, it was about anxiety, and so they were just kind of like, oh, by the way, we also looked at this. That's right. And so I would take these with a grain of salt and just kind of say, oh, this is more of like an interesting kind of thing. And also, if we look at the insomnia, all of those studies are from the 70s, so those are the older studies that are probably at more risk of bias than the depression symptoms, which has a little bit of a mix. But I do want to point out that, with the exception of the depression symptoms effect size, almost across the board everything is about like a point nine four to point nine six consistently. And so, you know, if one was like a point nine six and one was a point three and one was a point five, we wouldn't have as much consistency in the results, whereas with this it's pretty darn consistent with that overall initial analysis of point nine six.
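The consistency Adam is describing is what the I-squared statistic quantifies: the share of variation across study effect sizes that reflects real heterogeneity rather than chance. Here's a minimal sketch of how I-squared falls out of Cochran's Q under inverse-variance weighting. The effect sizes and variances below are made-up illustrations, not numbers from the paper under discussion.

```python
def i_squared(effects, variances):
    """Pooled effect, Cochran's Q, and the I-squared heterogeneity
    statistic for a fixed-effect meta-analysis of study-level effects."""
    weights = [1.0 / v for v in variances]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted squared deviations of each study from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I-squared: excess of Q over its expected value under homogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2

# Tightly clustered effects (like the 0.94-0.96 run described above)
# yield I-squared of zero; a wide spread yields a high I-squared.
consistent = i_squared([0.90, 0.95, 0.96, 0.94], [0.04] * 4)
dispersed = i_squared([0.30, 0.96, 1.50], [0.04] * 3)
```

So a run of effect sizes all near 0.95 contributes essentially nothing to Q, which is why consistency across studies and low I-squared go hand in hand.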

Josh:

Yeah, this seems like a thing. How do we, how do we not know about this? It's totally seems like a thing.

Adam:

I mean, I know that, you know, from an anxiety standpoint, in the study it's saying it's at high risk of bias and we're seeing these treatment effects. However, overall the duration of the studies is still small, right? The longest trial, I think, was six weeks.

Josh:

Okay.

Adam:

Relatively small sample sizes. There was only, I think, a total of like what, 337 total over the course of six weeks. So can we kind of extrapolate this out to a broader patient population? What are the long-term effects? There's still a lot of things that are unknown, but at least in the short term we do have some evidence to suggest that this may be beneficial.

Josh:

Okay, don't piss on my parade quite yet. I mean, I agree with some of that, but let me push back a little bit. So I definitely buy the short-term-ness, right? They're, you know, six-week trials or whatever. So I guess we don't know the stickiness. So, like, one question is, if you stop putting these weird things in your ears, like, are you going to no longer get better? And maybe it's not the end of the world if that's the case, right? Like you take, you know, an anxiolytic medication every day, like maybe it's not so crazy that you would regularly do Alpha-Stim, right? So that's one thing. But, to your point, we don't know about the stickiness and how long it lasts. But okay, so I was curious. I knew you were going to bring this up. I was curious about, and I want to have this conversation about, the patient count.

Josh:

So you're right, it's it's eight studies, but it's really just 337 participants, so that's not a lot of people. However, you will often say things like don't be a lazy critical analyst and just say, oh, there's not enough people. Why do we care about people? Well, we care about people for lots of reasons. Dear listeners we do care about people.

Josh:

When it comes to this, there are two things. One, is it representative of a patient population that we care about? If it's too small or too specific, it may not be applicable. But two is precision, right? Like, the more people you have, the more data points, the more precise the estimates. Even though there's only 337 patients, the precision here is stellar, and that's, of course, because these are continuous outcomes. It's easier to get, you know, tight confidence intervals with a smaller number of people.
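Josh's point about precision can be made concrete with a quick sketch: the width of a confidence interval around a standardized mean difference shrinks with sample size, and even a few hundred patients can give a comfortably tight interval. The formula below is the common large-sample approximation for the variance of Cohen's d; the specific effect size and the even split of the 337 patients into two arms are illustrative assumptions, not figures reported in the paper.

```python
import math

def smd_ci(d, n1, n2, z=1.96):
    """Approximate 95% CI for a standardized mean difference (Cohen's d),
    using the common large-sample variance formula:
    Var(d) ~= 1/n1 + 1/n2 + d^2 / (2*(n1 + n2))."""
    se = math.sqrt(1 / n1 + 1 / n2 + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# Illustrative: an effect size near 0.96 with ~337 patients split
# roughly evenly across arms (assumed split, for illustration only).
low, high = smd_ci(0.96, 168, 169)
```

With these assumed inputs the whole interval sits well above zero, which is the sense in which a modest total sample can still deliver stellar precision on a continuous outcome.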

Josh:

So I don't know that it's actually concerning, and in fact, and maybe this will segue into GRADE, you know, they didn't rate down for imprecision, because they had tight confidence intervals. So I guess one question is, does it matter as much that it's only 337 patients? And then the last thing is, okay, maybe it's not applicable. Well, they made a very specific choice to only recruit patients with anxiety, and so, yeah, you wouldn't be able to apply this to a depressed population, right? Or an insomniac population, right? These are anxious folks who, you know, might have insomnia, might have depression, right? So I don't know. You're right, and I don't know that it's as concerning as it would normally be for me in a situation like that.

Adam:

Yeah, I mean, I think you bring up some good points, but I still think, you know, when we're looking at those studies, we have individuals being recruited from the US, China and, I think it was, Korea, mm-hmm. And then within the US-based population, I didn't kind of go into specifics with regards to the demographics of that. I'm assuming that the Korean and Chinese populations are probably homogeneous, and so I'd like to kind of know what the demographic breakdown of that US population was a little bit more, and then see if we could then extrapolate that out for some more external validity. And perhaps this would play into, kind of enhancing your point, because the DSM-5, or the DSM criteria itself, is a little bit more stringent than some of these other diagnostic criteria.

Adam:

Only one study applied the DSM-IV, which was the Barclay study. Uh-huh. Whereas one looked at anxiety on a visual analog scale, one used ICD-10 codes, which, there's, you know, bias there, because there's subjectivity to how it was diagnosed. One was mixed anxiety and depressive, and then one was chronic hysteria, right? Interesting.

Josh:

Okay, this is a really interesting point. So you are suggesting, dear friend, that the diagnostic criteria might be a cause of outcome heterogeneity, and that that wasn't accounted for. The counter would be, even across these eight studies, there's so much consistency in results, we don't really see any heterogeneity. But we're underpowered for these sorts of sensitivity analyses. So, in theory, you're really only supposed to do one subgroup analysis for every five to ten studies that you have, and so they're already underpowered for what they've already done. So it's not like they could do yet another one looking at the diagnostic approach, although that would be interesting. And then to your other point. So that's a good point. And then the other point is about different ethnicities, perhaps, or different patient characteristics. And so you were suggesting that, well, maybe, like, we don't know about these populations, if they are super homogeneous, and you have a patient in front of you who's of a different ethnic group or race or anything like this, is it fair to apply these results to them? Yeah, so we don't know. That's a good point. We don't know.

Adam:

And also they never really got into the severity of anxiety. Yeah, one was graded at four out of 10 on a visual analog scale, but I still don't know what that means.

Josh:

Yeah that's a good point.

Adam:

Like. Were these people with severe anxiety? And if they have severe anxiety, is that why we're getting larger effect sizes? Because typically with the larger absolute values you would expect a greater result? Is this sort of like mild anxiety? So it's kind of hard to kind of know how to apply this to the person in front of me.

Josh:

Yeah, that's a really good point. That is all true, and I think this speaks not to limitations of these authors but limitations of meta-analysis in general, because these are all really important clinical questions and unless you have I don't know 100 studies, you're not powered on a meta-analytic level to answer these questions right Because you can't parse the data enough. That would be better for something like an individual participant data meta-analysis, where you could look at results by race or ethnicity or gender or whatever, and so, or diagnostic well, diagnostic criteria would be study level. So that would be, you just need more studies. But on the patient level, variability, that would be something that you would just need a different method for.

Josh:

But, yeah, very fascinating, excellent points. All of that, Adam, I love it. See, anything else? Oh, let's get into GRADE. Let's talk briefly about the GRADE. So we love GRADE. GRADE is a way of ranking the confidence we can have, or the certainty we can have, rather, in the estimate of effects. So here we have this study that says, you know what, there's a large effect size, period. There's a large effect size, right?

Adam:

Yeah.

Josh:

What's that?

Adam:

And do we trust it?

Josh:

And do we trust it Right? So you have a study that says there's a large effect size, but the question is, how much can we trust it? Like will a new study come out tomorrow and totally change our result, or will a new study come out tomorrow and it's just impossible, based on how solid we are, how much confidence we have right now, that it would shift things. That's one way to think about it, and grade is notoriously strict. So to be rated high is very, very rare. Very few interventions, including pharmaceuticals, are ever ranked at high.

Josh:

And here, believe it or not, pretty amazing, the certainty of evidence was graded at high for the primary outcome of the improvement in anxiety symptoms. So for that large effect, the authors believe you should have a lot of confidence in that result, and it's highly unlikely that any new study will come out with drastically different results that will change your opinion. And for the secondary outcomes, they ranked it as moderate for insomnia and low for the depression symptoms in these anxious patients. And so bear in mind that, like Adam pointed out earlier, we have less confidence in that large effect size for depression, so just kind of bear that in mind. That may change tomorrow with a new study. But as far as the impact on anxiety, that looks pretty darn solid. And that's not too surprising, because there's no evidence of publication bias, there's zero heterogeneity, there's zero risk of bias. So all these things we look for when ranking GRADE all look really, really good to me as well.

Introducer:

Agreed.

Josh:

Cool, all right, I'm just seeing if there's anything else I highlighted I want to talk about. Yeah, just that it's crazy that the FDA apparently approved using this for anxiety in the 1960s. Isn't that fascinating? Gosh, that is just so interesting, and they need a better marketer. I had not heard about this, or maybe I've heard of Alpha-Stim. Alpha-Stim sounded familiar, but maybe I just didn't know that it was called CES. And oh, they did point out, too, we don't know really about adverse events. They didn't note any severe adverse events, but they noted that the studies weren't very transparent in reporting on adverse events. So that's sort of a little bit of a question mark. But they didn't see any evidence of risk either. Yeah, man, I was impressed. Good systematic review, good evidence base. I think you should use CES. If you're a patient, talk to your doctor, but if you're, I don't know. Man, I'm pretty convinced.

Adam:

Yeah, I think you can consider it.

Josh:

Adam's like, I will wait until there's 8,000 people and 100 randomized controlled trials. So instead of 1960 to now, he's going to wait till 2060. But yeah, although I did find it interesting that if it really was approved in 1960, and it's as good as this looks like it is, why weren't there larger studies? Like, also, did you find it interesting that it was approved in 1960, when the trials listed are all from the 70s?

Josh:

Yeah, yeah. So I feel like all sorts of stuff went down in the 60s medically. I feel like things got approved that you would just laugh at now and be like, you approved that? Right. And I feel like things are getting grandfathered in, because the level of scrutiny and strictness and criteria have, like, yeah, for a new drug now you'd need a 2,000-person randomized controlled trial, multi-year outcome data, at least two randomized trials, right, phase 3 and phase 4 monitoring and all this. And it's like, you made this crazy thing that shoots electricity through your brain? And it's cool, it's the 60s, we'll accept it. You want to use it for depression? No problem. You want to use it for pain? No problem. Anxiety? I'll throw it in there. Like, okay. But apparently this thing actually works, so at least it seems that way anyway. So cool, Dr Naviti, thank you for the recommendation. That was a fun one to do. I think that's maybe one of six of the past 12 months that we actually gave a good score to. So that's awesome.

Josh:

Adam, any last minute words how is your hair? I note that you have hidden your hair behind a hat, so it must look quite interesting. You hid it for a podcast. Ah, ok, Dear listener, just be glad you're a listener and not a viewer is all I can say. And with that, our friends, we will talk to you later. If you enjoy this podcast, chances are that one of your colleagues and friends probably would as well. Please do us a favor and let them know about the podcast. And if you have a little bit of extra time, even just a few seconds, if you could rate us and review us on Apple Podcast or any other distributor, it would be greatly appreciated. It would mean a lot to us and help get the word out to other people that would really enjoy our content. Thank you.

Josh:

We talked about some really interesting stuff today. I think one of the things we're going to do that's relevant. There is a course we have on Dr Journal Club called the EBM Boot Camp. That's really meant for clinicians to help them understand how to critically evaluate the literature, et cetera, et cetera Some of the things that we've been talking about today. Go ahead and check out the show notes link. We're going to link to it directly. I think it might be of interest. Don't forget to follow us on social and interact with us on social media at Dr Journal Club. Dr Journal Club on Twitter, we're on Facebook, we're on LinkedIn, et cetera, et cetera, so please reach out to us. We always love to talk to our fans and our listeners. If you have any specific questions you'd like to ask us about research, evidence, being a clinician, et cetera, don't hesitate to ask. And then, of course, if you have any topics that you'd like us to cover on the pod, please let us know as well.

Introducer:

Thank you for listening to the Dr Journal Club podcast, the show that goes under the hood of evidence-based integrative medicine. We review recent research articles, interview evidence-based medicine thought leaders and discuss the challenges and opportunities of integrating evidence-based and integrative medicine. Be sure to visit www.drjournalclub.com to learn more.

Exploring Cranial Electrical Stimulation for Anxiety
Analyzing Treatment Effects and Results
Effect of Meta-Analysis on Anxiety Levels
Analyzing Electro Stim Study Data
Anxiety Treatment Evidence Assessment
Engage With Dr Journal Club