BJJ Podcasts

Deep learning in orthopaedic research

The Bone & Joint Journal Episode 56



Listen to Andrew Duckworth, Fares Haddad and Jonathan Vigdorchik discuss the editorial 'Deep learning in orthopaedic research: weighing idealism against realism' published in the August 2022 issue of The Bone & Joint Journal.


Find out as soon as the next episode is live by following us on Twitter, Instagram, LinkedIn or Facebook!

[00:00:00] Welcome everyone to one of our BJJ podcasts for the month of August. I'm Andrew Duckworth, and a warm welcome from your team here at The Bone & Joint Journal. As always, we'd like to thank you for all your continued comments and support for our knowledge translation work here at the journal, as well as a big thanks to our authors and colleagues

who've taken part. We hope that you're continuing to enjoy our podcasts and all our knowledge translation work, including our animations and infographics. Our podcasts continue to focus on papers published each month here at the BJJ, as well as our accompanying special edition podcast series. So moving on to today's topic, which I think will be of real interest to many of our listeners and readers, given the real explosion of the literature in this area recently.

So firstly I have the pleasure of again being joined by our Editor-in-Chief at the BJJ, Professor Fares Haddad. Prof, welcome back. It's great to have you with us as always. Andrew, thank you. And thank you for doing this. Fares and I are delighted to be joined by another of our editorial board colleagues here at the journal, Dr.

Jonathan Vigdorchik from HSS in New York, to discuss the editorial entitled 'Deep learning in orthopaedic research: weighing idealism against realism'. Welcome, Jonathan. Thank you so much for taking the time to join us today. It's great to have you with us. [00:01:00] Thanks. A true honour to be here. Looking forward to the discussion.

Yeah, thanks for that. It's really interesting, isn't it? As I say, there's been a real explosion in these types of papers, particularly over the past few years. And even in this month's journal alone, there's an annotation about using AI and computer vision in orthopaedic trauma.

And we've got another paper looking at its use in hip and knee replacement. And I think it's such good timing that this editorial has come out; I think it's a really good explanation of where we are and what it is, more than anything. So if I could maybe start with yourself: as you say in the editorial, AI technology is already embedded in our daily lives.

So could you give us a brief, and as simple as possible, overview of what AI and deep learning are, and what potential roles they have, or already have, in medicine? So AI is a catch-all term. Artificial intelligence really refers to computers that are learning to do what humans can already do.

Mm-hmm. Problem solving, decision making, speech, language. We see [00:02:00] it in our everyday lives. It's things like Siri or Alexa on our phones, facial recognition software. You know, my phone can organize every single photo of me just by doing facial recognition. That's all AI. Self-driving cars.

Mm-hmm. If any of you use Instagram, Instagram will bring you ads based on your previous selections. That's AI. If you use Amazon, it's suggesting products based on your previous purchases. It's all where computers are learning how you think and trying to execute the things that you're trying to do. Mimicking human intelligence.

It's actually been around since World War II and, you know, even in the 1950s they were teaching computers how to do things. Yeah, that's really interesting, isn't it? I didn't realize it had been around for so long. And I think that's a good example of how it permeates our everyday lives.

It's just everywhere now, isn't it, like you say. And in terms of medicine so far, maybe even outwith our specialty, do you know of much use of it? Has it [00:03:00] actually been employed clinically, day to day? Yeah. So, you know, do you know what they call two orthopaedic surgeons reading an EKG?

It's a double-blinded study, right? So nowadays every time you get an EKG, it's got a little report on it saying if it's normal rhythm, if there are any abnormalities. That's the very earliest version of AI in medicine, just reading EKGs. Now with COVID, we've used it for vaccine development. The AI can actually simulate how the virus is going to mutate, so that vaccines can be created for the new strains.

Hmm. And even in radiology, I think they use it most frequently, where the computer can detect breast cancer from mammography, lung cancer from x-rays, equal to and maybe even better than the radiologist. You know, if I were them, I might be worried for my job in those minuscule tasks, and they may need to start going into more interventional types of fields.

Yeah. Yeah, absolutely. And you mentioned AI, but in your editorial you also talk about the methodologies of [00:04:00] AI, namely deep learning and computer vision. What are those? So AI is this big catch-all term, right, where the computers are learning to do what we can do.

Computer vision is basically a computer learning to do what a human eye can do: recognizing things from images, recognizing things from videos. Machine learning, another subset of AI, is when a computer is actually learning from the tasks that it's already doing. So every time it executes a command, it learns how it does it.

And then when it executes that command again, it takes its prior knowledge and keeps learning. Deep learning goes even further. It's where we're simulating what a brain can do: neural networks, where you have multiple layers. You know, it's almost like there are a hundred different computers, each one of them with a different minuscule task that it executes.

And then it combines and meshes all this information together in this kind of neural network architecture, just like our human brain does. So they're all different [00:05:00] subdivisions of what AI is, each referring to really a different subset of how we can do research or clinical tasks. Yeah, that's a really nice explanation.
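For readers who like to see things in code, the layered architecture Jonathan describes can be sketched in a few lines of Python. This is purely an illustrative toy; the weights below are invented, whereas a real network has far more units and learns its weights from data:

```python
def relu(values):
    # Rectified linear activation: negative signals are zeroed out.
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    # Each unit performs its own "minuscule task": a weighted sum plus a bias.
    return [sum(w * x for w, x in zip(ws, inputs)) + b
            for ws, b in zip(weights, biases)]

# A toy two-layer network: 3 inputs -> 2 hidden units -> 1 output.
hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
hidden_b = [0.0, 0.1]
out_w = [[1.0, -1.0]]
out_b = [0.2]

def forward(x):
    hidden = relu(layer(x, hidden_w, hidden_b))
    # The output unit combines and meshes the hidden units' answers.
    return layer(hidden, out_w, out_b)[0]

print(forward([1.0, 2.0, 3.0]))
```

Each unit does nothing more than a weighted sum passed through a simple activation; "depth" just means stacking such layers so that later units combine the earlier units' outputs.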

I think just in terms of those different subsets, these terms can get bandied around a little, and that's a really nice overview of how they all fit together. So Prof, maybe I could come to you. As you both point out in your editorial, there's been a real surge in this area, in the orthopaedic deep learning literature in particular.

And I suppose that, like you say, it's due to the increasing accessibility of these tools for both researchers and healthcare systems. What has been your own experience of it, maybe with two hats on: your role as editor of the journal, and your own role as a researcher and a clinician?

It's really had an impact on both sides. From the editorial perspective, like any new methodology, we've seen a huge upsurge in submissions that reflect some form of AI or machine learning. [00:06:00] Some outstanding: great ideas, important questions that need to be addressed.

And that's probably the way to do it. And some are just copy studies, as often happens with these things. We saw this previously, you know, with systematic reviews and meta-analyses; we saw it as soon as people got hold of big data in its various forms, including the registries. You know, it just becomes an exercise

where people just throw at it just about anything. And I think, you know, right now we're in that phase where we are seeing much more poorly carried out work, often by units that are just doing it in big volumes because they think it's a great idea that's going to raise their profile. You know, I was fascinated at the ORS, walking around, seeing one particular unit that I'd associated with some really tremendous basic science work in the past.

About 20 posters, just using the same methodology for different questions, without really having [00:07:00] thought it through with the rigor that we'd like people to think it through and, you know, without reporting it quite as we would wish. So I think it's something that's here to stay. It's something as an editor, though, that we're seeing a little bit too much of right now.

And in particular, we're seeing it, if you like, with that mystique of the black box, whereby people are creating and writing these reports that other people can't interpret or understand. Yeah, which is a problem. Absolutely. And with your research hat on, Prof, or your clinical hat, what have you seen?

What have you seen come through that has maybe impressed you, or that you think has an avenue we can use? No. I mean, you know, the flip side is we are dealing with very big data sets all the time. And, you know, in my current practice, computer-assisted planning and computer-assisted arthroplasty surgery is just an obvious example, where we've got, you know, gait data, we've got preoperative

imaging data, both plain radiographs and cross-sectional [00:08:00] imaging. And then we've got a tremendous amount of intraoperative data from these computers and robots. And actually, we're starting to put all that together with all the process data that we have; you know, we've all got electronic medical records now, or many of us have.

So in my own little world, that just seems to be an endless opportunity to really try and make sense of all that and, you know, try and create some patterns for understanding what we should be doing at an early stage, and also for prognosticating. So I think it's going to be a rich seam of work over the next few years.

So this is definitely something that's going to grow. It's going to be very useful; it's really just how we apply it, and the rigor with which we look at it. Yeah. No, and I think, like you say, Prof, sorry to interrupt, it's like a lot of these things you've described that have come up.

It's an answer to some questions; it's not an answer to all the questions, is it? And I think it's just about refining it [00:09:00] and using it as the best tool we can. JV, sorry, I interrupted there. What's your experience? Would you say it's similar as well? Yeah, very similar.

I mean, I think people are using the term AI for just about anything. AI is statistics on steroids in some cases, right? Just looking at bigger sets of data with the computer, doing very much larger multivariate analyses. So we have to be careful about what's actual AI and what's just multivariate analysis.

That's prone to the same errors that are possible with that, right? Like confounding. Yeah, absolutely. There's a really good example where they've looked at, you know, hundreds of thousands of medical records of people who were admitted with pneumonia, trying to predict death risk. And they actually found that if you have asthma,

or if you are greater than a hundred years old, those are actually your best predictors of not dying. And the reason is that if you're a hundred, people are going to be really aggressive with your care. And if you have asthma, you already have a doctor, you know what the symptoms of shortness of breath are, and you get earlier access [00:10:00] to care.

So these confounders in the data sets are prevalent, exactly as you would expect with many other kinds of confounding. So you have to be careful applying these big models to bigger data sets. No, I think that's a really nice example, JV; I hadn't heard of that one. And I think that takes us on nicely to the other part of your editorial, where you highlight some of the important limitations of this methodology.
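That pneumonia anecdote is easy to reproduce in miniature. The sketch below is a hypothetical simulation; every number in it is invented for illustration and none comes from the actual study. Asthma raises the true risk here, but because asthmatic patients are put on an early, aggressive care pathway, a model that only sees diagnosis and outcome would conclude asthma is protective:

```python
import random

random.seed(0)

def simulate_patient():
    # Invented numbers: asthma raises the true baseline mortality risk,
    # but asthmatic patients are always on the early-care pathway, which
    # sharply cuts the observed risk. Early care is the hidden confounder.
    asthma = random.random() < 0.2
    early_care = asthma or random.random() < 0.3
    risk = (0.20 if asthma else 0.15) * (0.3 if early_care else 1.0)
    died = random.random() < risk
    return asthma, died

patients = [simulate_patient() for _ in range(100_000)]

def mortality(group):
    return sum(died for _, died in group) / len(group)

with_asthma = mortality([p for p in patients if p[0]])
without_asthma = mortality([p for p in patients if not p[0]])
print(f"observed mortality with asthma:    {with_asthma:.3f}")
print(f"observed mortality without asthma: {without_asthma:.3f}")
```

The observed mortality is lower in the asthma group even though its true baseline risk was set higher, which is exactly the trap a big model falls into when the care-pathway variable is missing from the data set.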

So could you maybe just go over a few of those, Jonathan, in terms of, you know, what we have to look out for, and maybe some examples in the literature, like you've just done, where it can be a problem? Yeah. I mean, it's prone to a lot of the different errors that we see with any type of statistics. You can actually overfit your model.

What that means is, you know, with a lot of these models, when you look at big data sets you see these scatter plots, and the model can actually predict an entire scatter plot perfectly, as opposed to looking at the trends in the data over time. So sometimes the models are too good, and that's where a model really loses its applicability or generalizability to other data sets.
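Jonathan's point about a model that "predicts an entire scatter plot perfectly" can be shown in a few lines. This is an illustrative toy, not anything from the editorial: a model that simply memorizes its training data scores a perfect zero error on those points, yet generalizes worse than a model that only captures the underlying trend:

```python
import random

random.seed(1)

def sample(n):
    # The underlying trend is y = 2x, plus random noise.
    xs = [random.uniform(0, 10) for _ in range(n)]
    return [(x, 2 * x + random.gauss(0, 1.0)) for x in xs]

train, test = sample(50), sample(50)

# "Overfit" model: memorize every training point exactly and, for an
# unseen x, answer with the y of the nearest memorized x.
memory = dict(train)

def overfit(x):
    nearest = min(memory, key=lambda m: abs(m - x))
    return memory[nearest]

# Simple model: capture only the trend (least-squares line through the origin).
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

def simple(x):
    return slope * x

def mse(model, data):
    # Mean squared error of the model's predictions on a data set.
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(f"overfit model: train error {mse(overfit, train):.2f}, "
      f"test error {mse(overfit, test):.2f}")
print(f"simple model:  train error {mse(simple, train):.2f}, "
      f"test error {mse(simple, test):.2f}")
```

The memorizing model's training error is exactly zero while its test error is not; that gap is overfitting, and it is why the external validation on independent data sets discussed later in this episode matters.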

Mm. I think what we're starting to [00:11:00] look at also is what Prof was talking about a little bit: how do you report these types of studies? Yeah. And what's come out recently has been something called TRIPOD, the transparent reporting of studies. Actually, it's a huge thing.

Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis, TRIPOD. Yeah, thankfully it's got that abbreviation, but it's really a list of 22 questions that the author needs to go through to set out exactly how they did the study, in great detail. It's almost like the PRISMA way to do meta-analyses or systematic reviews.

Right? A very proper way to do it, so that everybody knows the kind of science and integrity that you've put into that study. And I think that's really important. No, I thought that was really interesting, actually, because, like you say, it seems to be going through the same evolution, shall we say?

Maybe as [00:12:00] meta-analysis, in terms of just making it more robust. And I think, maybe, I suppose you would agree as well, JV: like meta-analysis, or, you know, the examination of big data, it's the importance of data quality, isn't it? If you put rubbish in, I'm afraid you get rubbish out, don't you? Would you agree with that?

Yeah, absolutely. It's the same as any computer navigation system, any data set: garbage in, garbage out. Yeah. Because it's looking at trends, and it's so good at predicting things that there's a lot of confounding. Yeah. Simple things like just looking at different cutoffs in the data: if you initiate a certain treatment at a cutoff of a lab value, you're going to get an uptick in survivorship at that particular lab value.

Exactly. You know, 100 for your BUN, or a certain age value, which is just proving statistics based on something fundamental that we're doing, nothing to do with that exact number, while a regular multivariate analysis might show that it's a, you know, prediction over time, and as you age, you're getting worse off.

Yeah. Yeah, no, I agree. You also [00:13:00] highlight, I think, a really good point, and I think we are getting to the point where we do have this: you talk about having data from external, multi-institutional settings, with large, heterogeneous groups of patients, for that sort of cross-validation and independent testing.

That's really going to be key, do you not think? And we are getting the data sets that maybe we can do that with; would you agree? Absolutely. I mean, the Mayo Clinic just published on looking at several hundred thousand hip x-rays in their database, using AI for hip radiography. I mean, these are data sets that would take an army of a hundred medical students ten years to measure everything, and we can do it in a matter of hours.

Yeah. You know, we presented at the AAOS about 20,000 measurements done in an hour for leg length discrepancy in hip replacements. So massive data sets looking at automation of imaging can be done, looking at thousands of readmissions for risk prediction after joint replacement, or even any of the medical factors.

We can now do these much [00:14:00] larger-scale studies, if we do them correctly and interpret the data correctly. Absolutely. So Prof, if I maybe come back to you, just to sum everything up: we've touched on it already, but bringing it all together, how do you feel we harness the power and the potential of AI, as a first point, and what's the role of the journal in guiding that type of research,

would you say? Yeah, no. I mean, I think we need to recognize that it is a powerful tool that's now at our disposal. Mm-hmm. And we need to apply it correctly, because there are situations, there are data sets, there are questions where this is going to be our best bet for getting an answer and for being able to improve healthcare.

So from a journal perspective, what we've tried to do, initially with Bayliss and Jones' annotation, and then last year, where I'd really point people to Luke Farrow and Dominic Meek's paper, is to [00:15:00] announce, like we did with SEARCH, that we want to look at these papers, but that when we are looking at them, we really want to understand what the aims of the study are

and we need to be transparent about the methodology. What is the data, what are its limits, how's it validated, and how's it all been looked at, so that essentially somebody else could repeat that piece of work, mm-hmm, and so that we recognize that it is a rigorous, valid study. Mm-hmm. So I think from a journal perspective, we're going to continue to encourage this stream of work, but we're only going to encourage it if it is high quality and

for the right reasons. And, you know, as I've said many a time now, there are so many things that are unanswered in our specialty. Yeah. And there will be a different technique for answering many of those questions, and AI and machine learning, and this ability to work through massive data sets, will be a key part of what we're using over the next few years.

And I'm sure it will evolve in a much [00:16:00] more sophisticated way and become more freely available to many, many more researchers. Yeah, I think it'll be useful, but, as JV said, let's not forget that there's, ironically, going to have to be a level of human intelligence in the interpretation of

what all this data means. Yeah, no, I think that's a really nice point to finish on, 'cause I think that's the key to it all, isn't it? And I don't think we're obsolete just yet, so I think we're still needed as surgeons. So, a sincere thanks to you both for joining me today, shedding some light, and a really enjoyable discussion on a really important emerging topic in our specialty. It was

really informative and so nice to talk to you both. And to our listeners, we do hope you've enjoyed joining us, and we encourage you to share your thoughts and comments on social media; feel free to tweet or post about anything we've discussed here today. And thanks again for joining us.

Take care everyone.