Physio Network

[Physio Explained] Do low back pain treatments beat placebo? What the evidence says with Dr Steven Kamper

In this episode with Dr Steven Kamper we discuss an interesting new paper on the effects of various treatments for low back pain vs placebo. We explore:

  • Limitations and strengths of the paper
  • The key findings of this research
  • Clinical takeaways to use in your practice
  • And more!

👉🏻 Learn more about Physio Network’s Research Reviews here - https://physio.network/kamper-podcast

Dr Steven Kamper is Professor of Allied Health at the School of Health Sciences and Nepean Blue Mountains Local Health District. He has over 190 publications in peer-reviewed journals, has presented his work in more than 10 countries, and has received more than $5 million in competitive research funding from Australia, Ireland, Canada, and Norway. Steve is senior editor of the Journal of Orthopaedic and Sports Physical Therapy, an associate editor in the Cochrane Back and Neck Group, and Vice-chair of the Executive Organising Committee of the International Back and Neck Pain Forum.

Reference to article: Cashin AG, Furlong BM, Kamper SJ, et al. (2025). Analgesic effects of non-surgical and non-interventional treatments for low back pain: a systematic review and meta-analysis of placebo-controlled randomised trials. BMJ Evidence-Based Medicine, 30, 222-232.

If you like the podcast, it would mean the world if you're happy to leave us a rating or a review. It really helps!

Our host is @sarah.yule from Physio Network

👏 Become a better physiotherapist with online education from world-leading experts:

https://www.physio-network.com/

SPEAKER_01:

The important question is how big is the difference? And then the interpretation, then, i.e., what you do with that, is does it matter? And then we can talk about does it matter to who and in what context and all that sort of stuff. They're all things which need to go into the interpretation.

SPEAKER_02:

Today I'm joined by Professor Steve Kamper, Professor of Allied Health at the University of Sydney and the Nepean Blue Mountains Local Health District. Steve trained as a physiotherapist, and his research focuses on chronic pain and evidence-based practice. Today we unpack a recent paper published in BMJ Evidence-Based Medicine titled Analgesic Effects of Non-Surgical and Non-Interventional Treatments for Low Back Pain: a systematic review and meta-analysis of placebo-controlled randomised trials. Steve provides a summary of this research and helps us consider how we should think about the evidence as we work towards translating this research into practice. I think you're going to love today's episode. I'm Sarah Yule, and this is Physio Explained. Well, welcome to the podcast, Steve. Thanks so much for joining us.

SPEAKER_01:

No dramas. Nice to be here.

SPEAKER_02:

Now, if you could start by giving us a summary of this study and what it actually set out to answer.

SPEAKER_01:

So this is a systematic review and meta-analysis, which I'm sure all the listeners are familiar with. And the idea here was really to estimate the effects of various treatments for low back pain versus placebos. It's an update of a review that we did in the 2000s. We originally published a version of this in, I think, 2008 or 2009, and this is an update to bring it up to 2024, I think, which is when the searches were done.

SPEAKER_02:

Fantastic. Can I clarify? How was non-specific low back pain defined?

SPEAKER_01:

Essentially, we used the definitions that the individual study authors used. Typically, that was pain between the 12th rib and the buttock crease, plus or minus leg pain, and excluding serious pathologies: fracture, rheumatological conditions, cancer, that sort of thing. For all intents and purposes, it was based more or less on self-report of pain in that area.

SPEAKER_02:

Yeah, great. And that's obviously very relevant for clinicians listening, as they can distinguish it from presentations like radiculopathy, which I know we've previously had discussions about on this podcast, around the heterogeneity of low back pain research as well.

SPEAKER_01:

Look, I think it's an open question as to how important it is and how feasible it is to distinguish. I don't have strong views on that at this point in time, and we didn't seek to make any of that distinction within this study.

SPEAKER_02:

Yeah, great. And what treatments were included?

SPEAKER_01:

You probably don't want me to go through all of them. There were 56 treatments or treatment conditions, and altogether there were 300 trials. It's a big body of evidence. Most of the usual suspects: all different types of drugs, manual treatments like mobilization, manipulation and massage, acupuncture, exercise, a pretty big range of most of the sorts of things clinicians would be familiar with. We didn't look at surgical treatments, which in hindsight, if we were going to do this again, we probably would have, because it wouldn't have added much more burden, there aren't that many placebo-controlled surgery trials, and it would have completed the picture a bit. One of the things that came out when doing the media for this study is that saying these results apply to non-surgical treatments sort of leaves open the implication, perhaps for some people, that maybe it's surgery that works, and that's almost certainly not the case. But that was a decision we made early on. Anyway, I think if we were to include surgeries in there, it wouldn't change our findings.

SPEAKER_02:

Fantastic. One study leads into another, doesn't it?

SPEAKER_01:

Yes, that's the nature of the beast of being a researcher.

SPEAKER_02:

Well, I suppose onto tapping into that research brain. Clinically, we often want a simple yes or no. Does it work? Doesn't it work? Can you explain the difference between dichotomous and continuous thinking and why it matters?

SPEAKER_01:

I think the point here is this question: what do you mean by does it work or not? And the answer to that question is: is there a difference between the effect of treatment A and treatment B? Treatment effectiveness questions are always comparative questions. So it's either a comparison between two treatments or a comparison between a treatment and nothing. Okay, but it's always a comparison. So what we're talking about there is the effect of nothing versus the effect of the treatment, or the effect of treatment A versus treatment B. In this case, treatment B was always a placebo. Okay, so the question I think isn't is there a difference? Because if your scale is sensitive enough, there will always be a difference. The question is, is there a difference which is big enough to matter? And when we say matter, it matters in terms of should we choose one or the other. And so for me, the dichotomous way of thinking, is it effective, is not really the important question. The important question is how big is the difference? And then the interpretation, i.e. what you do with that, is does it matter? Then we can talk about does it matter to who and in what context and all that sort of stuff. They're all things which need to go into the interpretation. But we're talking here about our main outcome, which was pain, and that was always measured on a continuous scale. So actually what matters is how big is the difference in effect between the two treatments, in this case, the treatment and placebo.

SPEAKER_02:

That makes sense. So it's sort of similar to the question of does exercise work versus how much benefit does exercise provide on average, and is it clinically meaningful?

SPEAKER_01:

Absolutely, absolutely. And so there's two bits, right? So does exercise work? That question is always compared to what? Does exercise work compared to placebo is a different question to does exercise work compared to nothing. So that's the first bit you need to ask. The second bit is, when you say work, what you're really asking is: is there a difference which is big enough to choose exercise, or whatever the comparator is? Because you might say yes, exercise works, but the difference is two points on a hundred-point scale. Is that worth getting out of bed to go exercising for? It might not be. But we can still say yes to the question of does it work? So the question is not the right one, in my opinion, if we're looking to apply this. And there's all sorts of methodological problems with defining something as works or not works; that's an unfortunate binary that we as scientists have brought about, which is often intensely unhelpful. But fundamentally, the question is how big, not yes or no.

SPEAKER_00:

Are you struggling to keep up to date with new research? Let our research reviews do the hard work for you. Our team of experts summarize the latest and most clinically relevant research for instant application in your clinic. So you can save time and effort keeping up to date. Click the link in the show notes to try Physio Network's research reviews for free today.

SPEAKER_02:

Fantastic points, and I think really helpful for clinicians as we incorporate this research and consider what it means for the patient in front of us. Well, I suppose the other big theme here is obviously the range of interventions compared with placebo. Can you walk us through which treatments fared better and which didn't, and what we should take from it, knowing of course that there are many interventions, but perhaps focusing on what might be relevant for the clinician listening?

SPEAKER_01:

So there's a couple of things that people who are using this evidence need to keep in mind. One is this issue of how big the effect is, and we can categorize that from quite small to pretty small to medium to quite big. That's one thing, so we've got this one axis which is how big is the effect. The other important axis is how sure are we that that estimate is right. Each trial will give us an estimate of how big an effect is. We might have several trials which we can get an average of, and so we get an average effect from those several trials for any particular treatment. From there, we have to make a judgment about how sure we are that that estimate is the same as the mean for the whole population, because that's what the estimate is trying to estimate. So we've got these two axes: how big is the effect, and how sure we are. And the how sure we are part of it depends on the risk of bias associated with the individual studies, how homogeneous they were in terms of their methods and the sorts of interventions and so on, and also how well their estimates lined up. Okay, so if you've got five trials and they're all telling us the same thing, then we're a bit more sure that that's right than if we've got five trials that are all telling us different things. We can still get a mean, but we're less sure that that mean is actually going to represent the population mean. So there's these two things: the size of the effect, which is the mean effect, and how sure we are. And so when we interpreted this, we only drew attention to the things where we had what we call at least moderate certainty evidence. There's a whole process that we apply as part of the methods which helps us determine how sure we are that the estimate we found is true. And for the ones where we had moderate certainty evidence, we split between the studies which looked at acute pain and the studies which looked at chronic pain, at a cutoff of three months. For acute pain, the only one that we found with moderate quality evidence was NSAIDs specifically, so not all anti-inflammatories. And for chronic low back pain, it was exercise, spinal manipulative therapy, taping, antidepressants, and what we call TRPV1 agonists, which is essentially a sort of topical capsaicin ointment, an irritant, where the idea is that it works on the nociceptive processes. So they all had moderate quality evidence, but for all of them, the effect sizes were very small. Okay, so around five points on a hundred-point scale versus placebo. So we can say we're moderately sure, but we're moderately sure that the effects are pretty small versus placebo.

SPEAKER_02:

I'm curious, when the evidence does show a small effect, even if we do have moderate certainty, how should clinicians go about communicating that to patients? Because we know that many of those treatments probably fall into our arsenal as part of management. What do you think, and how do you think we best navigate that conversation?

SPEAKER_01:

Look, if we look again at research on the average size of placebo effects in back pain trials, they might be eight to ten or so on a hundred-point scale. So, you know, one on a one-to-ten scale, something like that. So you might extrapolate and say, okay, if the effect is five, you might add that to the placebo effect, so it's about one point five versus doing nothing. That's one way that clinicians might communicate it. You hit on a really important point of how do you decide what to do with the patient, and I think that's not a question that I can answer. And probably, in an ideal world, it's not a question that any individual clinician should answer by themselves either. But I would argue the place of this is as part of a conversation: hey, we've got this option, this is about what we think, on average, you'll get out of it. Because of what I know about you, maybe you'll get a little bit more or maybe you'll get a little bit less. And here are some other options, and here's the same information, as we understand it, that applies to them. How are we going to choose to go forward?

SPEAKER_02:

Fantastic points. As you say, it's probably not necessarily about one study; it's reviewing these things on an ongoing basis, embracing the nuance, communicating that clearly with patients, and continuing to dissect the evidence and the methods and the rigour within each study, isn't it?

SPEAKER_01:

I would agree.

SPEAKER_02:

So, Steve, final question. I'm curious: for the clinician who's listening and wondering what they should do differently in clinic tomorrow morning in relation to this study and others, what would your advice be?

SPEAKER_01:

It depends a bit on what you were doing in clinic today. I'm not sure there should be anything different; you might be doing just fine. For me, things like systematic reviews, practice guidelines, and so on are a distillation of research evidence. None of that is a prescription for how to treat patients. It's a piece of information. There are always more pieces of information, other than research evidence, that go into a treatment decision. And so I think the important thing is that clinicians understand something about the methods, and they understand where this information fits in with the other information that comes from research as well. Because if they don't, then they're just being blind to one body of information, and then you only have access to other sorts of information, all of which may be important. All of which, by the way, needs to be assessed: how biased is that information, how much should I rely on it? In exactly the same way as we need to work out how much we should rely on research information. It's the same for your clinical experience, it's the same for the stuff you learned on any course on the weekend, it's the same for what you remember from uni. All of that is at risk of bias as well, and you need to make your assessment of that too. Bring that all together into the conversation with your patient and incorporate what they want and desire and feel and think and all that sort of stuff in making the decision. I don't envy you. Research is much easier.

SPEAKER_02:

Steve, thank you so much for helping us make sense of the evidence. I think today's discussion was a timely reminder that evidence doesn't necessarily make the decisions; people do. And our job as clinicians is to interpret the evidence, translate it, and bring patients along that journey in the process. So thank you for all of your wisdom.

SPEAKER_01:

My pleasure.

SPEAKER_02:

Thanks, Steve.