BJJ Podcasts

Publishing study protocols: maximizing research transparency and spotting spurious statistics

September 02, 2020 The Bone & Joint Journal Episode 30

Listen to Mr Andrew Duckworth, Mr Dan Perry, Professor Matt Costa and Professor Fares Haddad discuss the paper entitled 'Publishing study protocols: maximizing research transparency and spotting spurious statistics'.

Click here to read the article

[00:00:00] Welcome everyone to our BJJ podcast for the month of September. I'm Andrew Duckworth, and a warm welcome from your team here at The Bone & Joint Journal. As always, we'd like to thank our readers and listeners for the comments and support we continue to receive, as well as our many authors and guest interviewers who have taken part so far. 
We hope you've found our two recent podcast series on the COVID-19 pandemic helpful and informative during these difficult times. We also hope that you've enjoyed our podcasts that have accompanied The Hip and Knee Society supplements for the months of June and July. 
As we enter the last quarter, we plan to continue to build on the range of topics we've covered through our series so far this year, with our primary goal to improve the accessibility and visibility of the studies we publish, both for you as our readers and listeners and for our many authors. 
For this month's podcast, however, we'll be taking a slightly different format. We will not be discussing an original paper or instructional review; instead, we'll be focusing on an excellent editorial from two of my board colleagues here at the journal, the topic of which I'm sure will stimulate some excellent and informative discussion. 
So firstly, I have the pleasure of being joined by our Editor-in-Chief here at The BJJ, Professor Fares Haddad. Welcome Prof, and [00:01:00] thank you so much for taking the time to join us today. 
Thanks Andrew. It's great to be with you. 
Fares and I are delighted to be joined by two of our board colleagues here at the journal to discuss their editorial entitled 'Publishing study protocols: maximizing research transparency and spotting spurious statistics'. 
So firstly, I'd like to welcome Dan Perry, who's not only our specialty editor for Children's Orthopaedics here at The BJJ, but also the chief investigator of several excellent paediatric orthopaedic trauma trials that are running at the moment. Dan, many thanks for taking the time to join us today. 
You're very welcome. Hi Andrew. 
And to round off our guests today, we have the pleasure of welcoming back our specialty editor for Trauma here at the journal, Professor Matt Costa from Oxford, who we all know has a wealth of experience in this area. Welcome back Matt, and again many thanks for joining us today. 
Thanks, Andrew. Nice to be back. 
So, Prof, if I could start with you. Before we get onto the meat of the editorial: in terms of the journal's position, how has it evolved over time with regard to randomized controlled trials, in terms of trial registrations or protocols? Where have we come from and where are we now? 
Thanks Andrew. I think the critical thing to say is whatever kind of [00:02:00] study we're looking at the quality and transparency of the methodology and reporting have always been critical to The Bone & Joint Journal and are increasingly so. I think that's what differentiates us from a lot of other journals out there and makes the product very special. 
In terms of randomized controlled trials, these are increasingly seen within our specialties and it's really important to us that they are prospectively registered and that the reporting is appropriate. So, whereas in the past, we were happy to accept retrospective registration of these studies, as of the beginning of 2018, we made it the standard for us and across some of the other major orthopaedic journals that studies should be prospectively registered. In other words, they should be registered ideally very early on in the inception of the study, but certainly before the last patient is recruited or before the first outcome [00:03:00] is reached. 
So I think that's become a standard that we've stuck to. We've turned down some pretty interesting studies on that basis, because it's been a good two-and-a-half years since we set that threshold. We've always accepted that there may be the odd study that will have a very good reason why registration wasn't completed, but this is, I think, a pretty good rule to work from, one that everyone should stick to, and people should start prospectively registering their studies. 
Absolutely. Totally agree. And as you say, many of the bigger journals have certainly moved over to that. So Dan, if we come to you next: in your editorial you talk about transparency being at the heart of research, which is obviously intuitive, but what do you mean by that, particularly in terms of randomized trials? 
Sure. So by transparency I mean we need to be very clear that everyone knows what was done and at what stage. Right throughout the study, everyone needs to be really clear on what the study group was looking to achieve, what their main outcome was, so what their primary [00:04:00] outcome was, and what all of their secondary outcomes were. 
So it's really important when we register a study that we register that upfront and we're really honest and say, 'Look, this is our main outcome. This is the one that we're going to hang our hats on. This is the one that's going to drive our power calculation, and everything else is hypothesis generating. It's kind of interesting, and we're going to talk about it, but it's a bit less relevant to us. We've got this really important first outcome.' And then beyond that, transparency is about being really clear about how we're collecting the data and how we're conducting the study, so that as readers we can start to get a feel for whether there might be any bias or any other influence driving the way that the result eventually comes out. 
Yeah, no, absolutely Dan. And that sort of leads me onto my question for you, Matt. We know bias and error can compromise research, as Dan has just been talking about, but can you give our listeners some examples of how this can happen, even for randomized controlled trials where we're trying to eliminate all that bias, and also maybe for some other forms of research? 
[00:05:00] Yes. So last time I looked, which admittedly was a little while back, there were about 130 or 140 different types of bias that you can have within a study, and I'm sure that list is expanding all the time. So you've got to be very careful. 
I mean, one thing to say up front is that most systematic errors or biases within studies are not introduced deliberately. It's not that the researchers are trying to pull the wool over anyone's eyes or mislead. It's part of the study design, and the fact that the study design is not clear to everyone up front. 
So in terms of examples, well, two very obvious ones. The first is the inclusion of patients in the trial: what does the sample, the participants involved, look like compared to the population of patients with that condition? We describe the selection of patients into the sample that's reported in the trial by way of eligibility criteria. So inclusion criteria might be all adults with a particular fracture at a particular time and so on, and then we have [00:06:00] exclusion criteria, excluding maybe open fractures or other particular groups of patients who might not benefit from these interventions. 
So setting those out up front allows you to eliminate, or at least alleviate, the risk of selection bias within the trial. And the important thing is not necessarily the exact detail of every single eligibility criterion, but that those eligibility criteria are reported transparently at the beginning of the study. The easiest way to do that, as Dan's alluded to, is to register the trial upfront. 
The same happens at the other end of the trial, and Dan already touched on this, about reporting outcomes. You can have reporting biases when you're actually describing the outcomes, and the easiest way to avoid that is to clearly identify, as Dan said, the primary outcome, the outcome on which you're going to hang your hat to answer the question you've set in your research, and the secondary outcomes. If they are labeled up front and everyone knows what they are at the beginning of a study, then any inadvertent or deliberate attempt to promote a secondary outcome that might have a slightly more interesting [00:07:00] p-value to become the main focus of the study is alleviated. 
So that sort of reporting bias, that sort of outcome bias, is eliminated by this registration process, because everyone knows up front what was planned, and the reporting needs to follow that registration plan to the letter. 
Yeah. And would you say, do you think that latter one, looking for positive results shall we say, has probably been the most common thing that happened in the past, and that since registration has come in it's eliminated that? Do you think that's probably one of the most important things, defining that primary outcome and saying 'This is what we're hanging our hat on', like Dan said? 
Yeah, it's certainly right up there. It's a top thing, and we don't need to cast aspersions about it; I've done it as much as anyone, particularly when I was, you know, more junior. You collect a lot of data and you find one bit of data that is maybe surprising or particularly interesting, and it naturally becomes the focus of your discussion section, and sometimes even your conclusions; it suddenly becomes the focus. But of course, if you have 20 different outcome measures, one of them by chance at [00:08:00] a significance level of 0.05 will be statistically significant. So, 20 different outcomes, each with a one in 20 chance of coming out positive: you're going to get one. 
So it's just the way the statistics work. And if you focus on that one that just happens to be significant, then you are distorting the evidence. You're not reporting what you planned to do upfront. And that's where prospective registration of the study is so valuable because then everyone knows what you're going to report, and that temptation, whether it's deliberate or not, to focus on the more interesting secondary outcomes is eliminated because everyone knows what you're going to do. 
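Matt's one-in-20 arithmetic is easy to verify. As a minimal, illustrative sketch (not from the editorial itself), the Python snippet below simulates a study with 20 outcome measures where no true effect exists; assuming the outcomes are independent, the chance that at least one p-value falls below 0.05 is 1 - 0.95^20, roughly 64%.

```python
# Illustrative sketch: how often does a study with 20 truly null,
# independent outcomes produce at least one "significant" result?
import random

random.seed(1)
N_STUDIES = 10_000   # simulated studies
N_OUTCOMES = 20      # outcome measures per study, all with no real effect
ALPHA = 0.05

hits = 0
for _ in range(N_STUDIES):
    # Under the null hypothesis, each p-value is uniform on [0, 1].
    p_values = [random.random() for _ in range(N_OUTCOMES)]
    if min(p_values) < ALPHA:
        hits += 1

print(f"Theory:    {1 - (1 - ALPHA) ** N_OUTCOMES:.2f}")  # 0.64
print(f"Simulated: {hits / N_STUDIES:.2f}")               # ~0.64
```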
And the principle of registration is long-standing for trials, as I think Fares said, and that's meant that reporting for trials has generally been better than for other study types, particularly retrospective observational studies. But there's no reason why you shouldn't apply the same principles to an observational study, setting out what you plan to collect, from which patients, using [00:09:00] which outcome measures, and how you're going to analyze them upfront. Those study protocols are published less frequently, but we certainly publish our observational study protocols these days, and I don't see any reason why other groups can't. 
I think we'll give a bit more leeway to some observational studies within the journal, but for trials, as Fares said, I think the days of publishing trials that were not prospectively registered are rapidly coming to an end. 
I totally agree. And if I come back to you, Dan, to go into that in a bit more detail, maybe talking about protocol publication as well. That's a little bit further down the line than trial registration, but how is that essentially addressing some of those issues as well? When is it usually completed, and what are the advantages of both, I suppose, for us as readers and researchers? 
Yeah. So, if you're doing a high quality trial - trials are hard work. They're hard work from the outset, because you have to plan them and set them up and so on. And so when you start the study, you've got this protocol and you know what it is. And in a way, it's a good [00:10:00] early bonus for you to be publishing that protocol, because it tells the world what studies you're doing. If there are any other groups around the world doing research, it allows them to start to integrate their research alongside yours, and that's going to make for a really awesome meta-analysis in the end. 
And for some of the studies we're doing at the moment, there's a US group doing exactly that - they've taken a protocol and they're just repeating it. In one way you could say, 'Well, you know, they're just copying.' But they're not; they're strengthening the research. That's what it's all about: providing a stronger answer for patients, rather than everyone choosing a different timepoint and a different outcome. So publishing and being up front about protocols really early is really important. 
And it's not just the protocol about how you're running the study. There's also the protocol about how you're going to do the analysis, and for us, increasingly, in the sort of work that we do, how you're going to do the statistical analysis and the health economic analysis. These are called SAPs, Statistical Analysis Plans, and HEAPs, Health Economic Analysis Plans. 
[00:11:00] And the reason we do those is because, as we've already alluded to, in trials it's really easy just to pick a more interesting outcome because it was significant. And we've all done that as well when we were juniors: you'll do a t-test, and if the t-test isn't significant, you go, 'Ah, perhaps we'll just do a different test. We'll do every test that this computer can do, and one of them is surely going to be significant.' And one of them does become significant, and that's your test. We need to overcome that, because that's just as much of a problem as anything else. And that's why we publish the SAPs and the HEAPs, and why we try to be really, really clear about how we're going to do this. So if anything really does show p less than 0.05, you know, your magic number (which we're trying to move away from, but hey), then it really, truly is significant: there really is a finding. 
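Dan's point about doing every test the computer can do can also be made concrete. In this minimal, hypothetical sketch (not the journal's or any trial team's own analysis), two groups are drawn from the same distribution, several different tests are run on the same comparison, and "significance" is declared if any of them crosses 0.05; the false-positive rate drifts above the nominal 5%, even though the tests are strongly correlated.

```python
# Hypothetical sketch of "test shopping": trying several tests on the same
# null data and keeping the best p-value inflates the false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SIM = 2_000
ALPHA = 0.05
false_positives = 0

for _ in range(N_SIM):
    # Two groups drawn from the same distribution: no real difference.
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    p_values = [
        stats.ttest_ind(a, b).pvalue,
        stats.mannwhitneyu(a, b, alternative="two-sided").pvalue,
        stats.ks_2samp(a, b).pvalue,
    ]
    # Declare "significance" if ANY of the tests crosses alpha.
    if min(p_values) < ALPHA:
        false_positives += 1

print(f"Nominal rate: {ALPHA:.3f}")
print(f"Rate with test shopping: {false_positives / N_SIM:.3f}")  # > 0.05
```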
Yeah, I think it's interesting hearing you all say that. Like you say, everybody's done it in their time, particularly when you're a junior trying to find that significant result [00:12:00] and get that paper that'll help you move on. And like you say, it's often not done in a malicious way, but the point is to make the level of the research really high, because, as you say in your editorial, these trials change people's practice, thousands of people's practice, and it's important that they're at that level. So I think that's a really important point to highlight, like you've done. 
Matt, if I come back to you, in terms of the trial protocols and the registration, particularly the protocol: I think some people are interested in what happens when it deviates, because obviously protocols can deviate during the trial, or certainly at the feasibility stage. What are the common situations in which this occurs? And does that really matter when you're publishing your trial protocol? 
Yeah. I mean, everyone in this gathering has been involved in research for a long time, and many of the listeners will be as well. And we all know that, a bit like battle plans, nothing survives contact with the enemy, or in this case contact with the patients you're trying to recruit to the study. So protocols inevitably have to change: care and treatment pathways change, new information comes along, [00:13:00] patients don't present, patients don't want to take part. In fact, I'm always more suspicious of studies where something doesn't go horribly wrong than the ones where it does, because the truth is, a study that finishes exactly as planned at the beginning just doesn't happen. That's life, and that's clinical practice in trauma & orthopaedics. 
The key thing is being transparent about that and saying up front what changed, when it changed, and for what reason. If you've got a valid reason, an amended protocol approved by the research ethics committee and adopted by the R&D department of every trust you're working in, and you've documented that upfront, then there's no problem with that. We all accept that those things change, and have to, to make the trials work. 
So, yeah, some journals have gone as far as to ask for every version of every protocol that's ever been written in association with the study, confirmation of ethics approval for each of those, and the changes to the trial registration. We haven't quite done that yet at The BJJ, but it's good practice. I've never had a paper turned [00:14:00] down by any journal that has asked for protocols where we've had a justification for the changes; they go, 'Yeah, fair enough. That seems reasonable.' 
And occasionally reviewers have questioned changes, but as long as you've got an upfront and honest answer, I don't think it's a problem. We shouldn't be afraid of changing protocols; it's just about being transparent about it. 

Absolutely, yeah, absolutely. And if we sort of move away from trials just briefly. 
Dan, if I come back to you: what about registration for other forms of research? Matt's already alluded to it, but things like meta-analyses or systematic reviews. What could you tell our listeners about that? 
So, whenever you're doing a study, in essence you're doing it to find something out, and best practice would be to register all investigations right from the beginning. 
Then people know what your primary outcome is and what you're looking to achieve, and they can test how well you've done that and how reliable your statistics are likely to be. You know, even when you're trawling through your boss's [00:15:00] last 50 hip replacements, if people can just be a bit clearer about what you're actually looking for at the beginning, it's a bit more useful than telling them that leg length is 0.1 millimetre different or something. And it's the same for all studies: meta-analyses you can register with a site called PROSPERO, and ClinicalTrials.gov accepts registration of cohort studies and other study designs. So there are lots of places to register. 
And obviously, within the journals, so particularly Bone & Joint Open, that's what we want: these trial protocol documents to be really upfront and clear about the way things are done. 
Absolutely Dan. No, I totally agree. And just before I come back to Prof to discuss that: Matt, you've already mentioned retrospective research and registering it. Some purists would say RCTs are the best methodology we have to produce good evidence. Are we holding them to a higher standard than retrospective research, with [00:16:00] all these things such as trial registration and protocols? I know it's a difficult argument, but what are your thoughts on that? 
Well, obviously I'm heavily involved in randomized trials and I'm a big fan. They have a single, really big advantage over other study designs for assessing interventions, and that's that they eliminate confounders, both known and unknown. 
So, although there are other ways, statistical and clinical things you can do to try to minimize confounding, there's nothing that really does that in the way that randomization does, at the moment. They're all sort of second best. So if you're testing two interventions, I always think to myself: what are the reasons for me not to randomize people to these two interventions? Because that's the best methodology, so why wouldn't I? 
Sometimes there are good reasons, and sometimes you just do those other studies. So, you know, we have to adapt the study design. Having said that, most questions in trauma & orthopaedics are not actually about interventions. If you're looking at the epidemiology, the incidence and prevalence of different conditions, you [00:17:00] can't do an RCT for that. If you're looking at diagnostic tests, and which ones are more accurate for diagnosing patients or eliminating those diagnoses, then you wouldn't use an RCT design for that. 
So the important thing is that you're following a prespecified and transparent methodology, no matter what type of study you're doing. It's not just about RCTs. And as Dan said, through the EQUATOR Network, which sets the quality standards for reporting of all studies, they started with trials and the CONSORT statement, which many of the listeners will be familiar with, but there are now reporting standards for pretty much every type of study design you can possibly think of. At least 20 are on their website now. 
So no matter what research you're planning or doing, there are guidelines for how to set it up, how to publish and register your work, and how to report it at the end in the most transparent way. 
The only other thing I'd like to say on this point about minimizing bias is about involving people and setting up your data [00:18:00] management plans in a way that keeps clinicians away from the data. That might seem a very odd thing to say, but when you've got a big dataset in front of you and you've got that clinical background, you're looking for specific things, and that in itself is a bias in the reporting. 
So, just as an example, we did a trial once, ages ago, and Fares actually hates this trial, so I mention this one specifically. I was on my way to present the work at a meeting over in the US, and I got the dataset from the statistician just before I left on the flight. So I was working on the presentation on the way, you know, being very prepared as usual. And I got halfway over and realized that the two groups were labeled A and B, and at no point in the dataset or the analysis was it described which hip was A and which hip was B. 
So I go off in a bit of a panic at the other end, trying to get hold of the statistician. It was a weekend - I didn't know if he was probably still in bed, different time zone. And I said, 'You know, what the hell is which grouping?' He said, 'Oh, [00:19:00] oh yeah, it didn't occur to me really. I just thought they were really nice datasets, and I did the analysis and this is what it shows, but who cares which...' 
If you keep those data away from clinicians and let the statisticians go at it, they don't care. They just look at the dataset; which type of hip is which is, you know, irrelevant. 
If you include that sort of thing in your data management plan, which is another step on from your statistical analysis plan and your health economic analysis plan, essentially making sure that the database is locked and the analysis done before anyone gets sight of it, then you're pretty much bombproof. Because, well, people don't believe it, but myself and Dan and so on are really the last people to find out the results of the trials. You know, the statisticians, the health economists, the trial teams have known for ages, and then we get told right at the end: 'What do you make of that?' And that really protects you from all kinds of biases in the analysis and so on. It's good practice. I'm going on a bit now, so I'll shut up, Andrew. 
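The blinded-analysis step Matt describes can be sketched very simply. In the minimal example below (hypothetical names and data, assuming a two-arm trial; this is not code from any of the trials discussed), the treatment arms are replaced with neutral labels A and B before the statistician ever sees the data, and the key is only opened after the database is locked.

```python
# Minimal sketch of blinding the analysis dataset: the statistician works
# on groups A/B, and the label-to-treatment key is held back until the
# analysis is locked. All identifiers here are invented.
import secrets

def blind_allocations(allocations):
    """Replace real treatment arms with neutral labels A/B.

    Returns the blinded table (for the statistician) and the key
    (kept away from clinicians until after database lock).
    """
    arms = sorted(set(allocations.values()))
    assert len(arms) == 2, "sketch assumes a two-arm trial"
    labels = ["A", "B"]
    secrets.SystemRandom().shuffle(labels)  # unpredictable label assignment
    key = dict(zip(arms, labels))
    blinded = {pid: key[arm] for pid, arm in allocations.items()}
    return blinded, key

# The analysis dataset carries only 'A'/'B'; e.g. for a hip trial:
blinded, key = blind_allocations({"P001": "cemented", "P002": "uncemented",
                                  "P003": "cemented"})
print(blinded)  # e.g. {'P001': 'B', 'P002': 'A', 'P003': 'B'}
# ...run the prespecified analysis on A vs B, lock the results,
# and only then reveal `key`.
```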
No, no, no, that was great, Matt. And I think we'll just move to Prof, as you've mentioned him. How would you sort of [00:20:00] draw that all together, Prof, and how does our new open access journal, BJO, help us to maximize our transparency for these studies? 
Thanks Andrew. I think we've had some great comments. The critical thing is that with any research question, there is a best way of answering that question. For some that will be an RCT; for others it won't be. But whichever method is going to be used, we are keen for that to be set out in advance very clearly, well documented and then well reported. 
And for our RCTs, Bone & Joint Open gives us the opportunity to be a repository for protocols: for the way the data are going to be looked at, and the way the health economics are going to be looked at. So it's a really good way for orthopaedic researchers to profile their work early in its evolution, before the main study findings come into The Bone & Joint Journal subsequently. So I think Bone & Joint Open expands our remit. It also [00:21:00] gives us the opportunity to publish some sound, well conducted studies that just do not fit into what we can fit into The Bone & Joint Journal. So it's been great to see Bone & Joint Open expand and grow this year, and we hope it will serve a useful purpose for the research community. 
Absolutely Prof, and that's a really good note for us to finish on. So to all three of you, thank you so much for joining us. That was a really fun and informative discussion, and I'm sure everybody listening has gained a lot from it. So thanks very much, guys. 
And to our listeners, we do hope you've enjoyed joining us, and we encourage you to share your thoughts and comments through Twitter, Facebook, and the like. Feel free to post a tweet about anything you've heard here today that we've discussed. And thanks again for listening. Take care, everyone. 
 

Topic of discussion
How has the journal evolved with regard to RCTs?
Transparency at the heart of research
How can bias & error happen even for RCTs?
Eliminating looking for positive results
Protocol publication
Does it matter when trial protocols deviate?
Meta-analyses and systematic reviews
Are we holding RCTs to a higher standard than retrospective research?
How does BJO help maximise our transparency for studies?