MedStar Health DocTalk (series)

How AI (Artificial Intelligence) Is Transforming Breast Cancer Detection

Debra Schindler and MedStar Health physicians
Season 6, Episode 4

Would you like to share feedback on this podcast, or suggest another topic for us to explore? Email us at DocTalk@medstar.net.

In this episode of MedStar Health DocTalk, host Debra Schindler sits down with Dr. Nicole Sakla, a breast radiologist and published researcher in artificial intelligence, to explore how AI is changing the way mammograms are read and how it can help radiologists detect disease earlier, sometimes by years.

Learn how AI tools flag subtle patterns invisible to the human eye, act as a double-check system for radiologists, and may reduce unnecessary callbacks and missed cancers—all without replacing the human expertise behind every diagnosis.

Dr. Sakla also clears up misconceptions about AI in medicine, explains how it’s trained to be accurate across diverse patients, and shares what excites her most about the next generation of AI technology in breast imaging.

Listen now for a fascinating, accessible look into how human expertise and machine learning are working together to advance early detection and patient care.

For more information about breast imaging and screening at MedStar Health, visit medstarhealth.org/breasthealth or call the breast center at MedStar Washington Hospital Center: 202-877-2800.

If you would like to comment on this podcast, or recommend a topic for another episode of DocTalk, send an email: DocTalk@medstar.net



For more episodes of MedStar Health DocTalk, go to medstarhealth.org/doctalk.

- Comprehensive, relevant and insightful conversations about health and medicine happen here. On MedStar Health DocTalk, these are real conversations with physician experts from around the largest healthcare system in the Maryland-DC region. When it comes to detecting breast cancer, we know that finding it early makes every difference in what happens next in the journey. For decades, radiologists have relied on mammograms and their own trained eyes to find even the smallest signs of disease. But now a new tool is changing the game: artificial intelligence. From flagging subtle patterns invisible to the human eye to helping radiologists read images faster and more accurately, AI is beginning to play an important role in detecting and diagnosing breast cancer early, perhaps even by years. To better understand what people know and how they feel about AI in breast imaging, MedStar Health conducted a national survey of more than a thousand adults and found that only one third of the women surveyed were aware that AI was even being used to help doctors read mammograms. In today's episode of MedStar Health DocTalk, we are taking a closer look at how this technology actually works, what it means for accuracy and detection, and how it can ease, not add to, patient anxiety. Welcome, MedStar Health radiologist Dr. Nicole Sakla, who will help us separate fact from fear when it comes to artificial intelligence use in mammography. I'm your host, Debra Schindler. Dr. Sakla, thanks for joining me.

- It's an absolute pleasure. Thank you for having me.

- You have an extensive background of published medical research, including on artificial intelligence use for early breast cancer detection, in some of the most prestigious and high-impact medical journals. So while many of us are just learning about ChatGPT and artificial intelligence, you were already using it in diagnosing breast cancer; you've been studying it for years. When was the writing on the wall for you?

- It's interesting. Thank you again for having me. AI has been on the docket, some people would say, since around the eighties, and early research was done even prior to that. For me personally, I had my first exposure to artificial intelligence in medicine when I was a resident in New Jersey. A peer of mine, Dr. Duggan Deep Singh, was the one who first introduced me to artificial intelligence and its applications in medical imaging. What I started to do was research how AI can help the doctor, especially the radiologist, identify pathology on CT, MRI, and mammography better and faster. As I got further along in my training, I started to see that this technology may actually enable us to catch small findings that might not be visible to the naked eye, particularly with MRI. When I went into women's imaging, the application of AI for breast cancer became even more evident, and it has really astounded me how far we've come in just the last five years in the development of artificial intelligence algorithms and techniques.

- Let's start with the basics. We hear the term AI all the time. What does it actually mean in the context of breast imaging?

- That's a good question. I think when people hear the word AI, they automatically think of the Jetsons, and they think there's going to be a machine popping out of the wall that does the job of the doctor for you.

- Right?

- Exactly. And that is what I'm trying to dissuade people from thinking. Here's why: AI is more of an assistance tool. It's not meant to replace the radiologist. When it comes to how a breast radiologist specifically uses AI to find breast cancer, we have to divide our thinking into two main categories: the people who come in for their screening mammograms, and the women who come in for what are known as diagnostic examinations. On the screening side, women come in asymptomatic, just for their normal annual mammogram. They get two views of the right breast, two views of the left breast (they get the squeeze), and then they go, and their imaging gets sent to a list for the radiologist to read. When the radiologist sits down to read all of the screening mammograms from that day, or rather the week, there are often several hundred screening mammograms awaiting them. What the AI algorithm does on the front end is flag the cases it finds suspicious, the ones it thinks need to be prioritized. Perhaps a patient who was screened two days ago has a suspicious finding, but another 50 screeners came in after that patient. The AI will flag that study, mark it in a color for us, and say: read this study first, because it may need attention faster; maybe there's cancer on this exam. So on the screening front end, that's how we use AI. When the radiologist ultimately opens the exam, however, they are one hundred percent the one doing the interpretation from front to back. The AI is not doing the interpretation for them. I think that's a common misconception, this fear that AI is going to replace the radiologist in terms of interpretation, and that's simply not true. The radiologist is still interpreting, one hundred percent. Once they're done reading the screening mammogram and have made a decision about the findings, we get a little cheat sheet from the AI algorithm: did you look at this area? Did you look at this area? It will circle the findings that it believes are important. So you have this front-end and back-end use of AI: on the front end, basically a flagger that says, hey, check this case out first because maybe they have cancer; on the back end, a double check to make absolutely sure the breast radiologist didn't accidentally skip over calcifications, a mass, or a distortion, even though they've already reviewed the case. That's how we use AI with respect to screening. Now, the diagnostic patients are the ones who come into the breast reading room with either a symptom (maybe they were sent in by a doctor because they feel a new lump, or they have focal pain) or something we're following up: we saw something on their screening mammogram that we want to investigate. Those patients are always seen by a radiologist, live. Diagnostic imaging means you're going to be seen by the radiologist; most practices follow that, and it's pretty much the standard of care for breast imaging. So when we do our investigation with mammography on that day, we take special pictures, not your standard screening views. When we do those special pictures, the AI will tell us after we're done (and we already have an idea of what we're going to do with the patient) by again flagging any findings it thinks might be pertinent, and again, it acts like a double check. So overall, the thing to take away is that AI is not replacing the radiologist; it's helping the radiologist. Instead of having us go through our examinations and make a decision without any double checks, this acts as a double check, one trained through a number of studies and on numerous mammograms, to make sure we don't miss anything.
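For readers curious what that front-end triage looks like in software terms, here is a minimal sketch in Python of a worklist prioritizer of the kind Dr. Sakla describes. Everything in it (the `Study` record, the `ai_score` field, the 0.7 cutoff) is a hypothetical illustration for this article, not any vendor's actual product or API.

```python
from dataclasses import dataclass

@dataclass
class Study:
    patient_id: str
    days_waiting: int
    ai_score: float  # 0.0 (benign-appearing) to 1.0 (highly suspicious)

FLAG_THRESHOLD = 0.7  # assumed cutoff for "read this one first"

def prioritize(worklist: list[Study]) -> list[Study]:
    """Return the worklist with AI-flagged studies floated to the top.

    Flagged studies are ordered by suspicion score; everything else keeps
    its place in line by days waiting. The radiologist still opens and
    interprets every study; the AI only reorders the queue.
    """
    flagged = sorted((s for s in worklist if s.ai_score >= FLAG_THRESHOLD),
                     key=lambda s: s.ai_score, reverse=True)
    routine = sorted((s for s in worklist if s.ai_score < FLAG_THRESHOLD),
                     key=lambda s: s.days_waiting, reverse=True)
    return flagged + routine

if __name__ == "__main__":
    todays_list = [
        Study("A101", days_waiting=2, ai_score=0.12),
        Study("B202", days_waiting=1, ai_score=0.91),  # flagged: read first
        Study("C303", days_waiting=3, ai_score=0.45),
    ]
    for study in prioritize(todays_list):
        print(study.patient_id, study.ai_score)
```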
- Now, is this the standard of care in all hospitals that have radiologists reading mammograms? Is there always this AI setup?

- The current AI that we typically refer to nowadays is not standard yet, because we are not at the point where every practice in the country has the capacity, or the volume, to have this type of technology. Here at MedStar Health, we're very lucky in that we have what I consider to be the most futuristic AI technology, and access to it. We really do believe that's best for our patients, and I think we're very lucky to have it. However, some version of AI has been around in breast imaging for the last 20 years. To give you an example, the earlier versions of quote-unquote AI used on mammograms maybe 10 years ago would simply circle a finding, but the sensitivity and the specificity were not really there. The AI algorithms of today are markedly different and much improved.

- So go through your day as a radiologist. Describe how you review the images and where AI fits into that. Is it scanned first by the AI and then you look at it, or do you look at it first and then the AI looks at it? And what if a hospital or a system doesn't have AI? What's the standard of care then?

- Right, that's a good question. We have to remember that when patients come in for a screening mammogram, they're checked in by the front desk in most practices and then seen by a mammography specialist, a mammo tech who specializes in taking mammogram pictures. So they're not necessarily seen by a physician if they're screening. Those are the patients we were discussing earlier: they come in, they get two standardized views of each breast, called a CC and an MLO view, and once those images are obtained, they get to leave the center. Those images are read in what we call non-live situations; the radiologist can read them the next day or the day after. As we were discussing earlier, radiologists typically read screening mammograms in batches, uninterrupted, sitting for a couple of hours and reading many in a row. This is the best way to hone your interpretation skills and to prevent being distracted. Now, on my typical day, I come in in the morning and see mostly biopsy patients and diagnostic patients. As a reminder, the diagnostic patients are the ones who have symptoms or something that needs to be evaluated.
Perhaps we saw something on a screening mammogram that needs to be interpreted and investigated, or perhaps we're following up something we've seen before but want to keep a close eye on, or perhaps they have a symptom, like a lump or pain. These patients get specialized mammographic imaging most of the time, and in many of these cases we use a combination of mammography and ultrasound. When we take those special mammogram images, the AI algorithm is implemented the minute we start taking the pictures. So by the time the images are sent from the mammo machine to the reading room and I can pull them up on my screen, the AI has already made a decision as to whether it believes there's anything suspicious. The radiologist will typically go through those mammo images and make a decision independent of the AI; that's how most of us do it. But before we ultimately go see the patient, we look at that double check: did the AI flag or circle something that maybe we didn't look at, or that we want to take a second look at? Maybe we saw the finding, but we want to double-check. So we always review the AI before we finalize the study and make an ultimate decision. That's pretty much how we use AI in our everyday practice in-house. If we're reading screening mammograms, as I described, we use it on the front end to tell us which studies to prioritize first, and on the back end to double-check our findings.

- What's been your experience with that? Do you find that AI often picks up things that you didn't see?

- To be honest with you, I'm a breast-trained radiologist, and I think if you ask the majority of breast-trained radiologists today, we have extensive experience in reading screening mammograms and diagnostic imaging. At many facilities you are doing up to 40 diagnostics a day. With that level of screening and diagnostic imaging, you kind of become a machine yourself. That's actually what you ultimately want to become: the ultimate machine, with a process you never deviate from. Most of the time, the AI is flagging things you have already seen. It's quite rare for the radiologist to altogether miss a finding, if I'm being honest with you. The perfect example of when AI is useful is not so much when the radiologist misses something; it's when you're at a 50-50 impasse as to what you believe a finding to be. There are some findings you could go either way on. Sometimes you wish you had a second pair of eyes to compare and say: do I think this is benign? Is it probably benign, or is this something I should biopsy? Having an algorithm that's trained on thousands of additional mammograms and is statistically backed can actually help you make a decision in several of those cases. That's what I personally find to be the most useful utility of AI.

- How do you make that determination? For example, we've all seen a mammogram, and I'm not trained to determine what I'm looking at, or whether there is a mass in all of that white that's showing up, especially when there's heavy or dense breast tissue. Is it most helpful then, would you say?
- The more complicated the breast, the more useful the algorithm; I agree with that. Dense breasts are notoriously harder for radiologists to read because they can hide small masses. This is why we know that the denser a woman's breasts are, the higher the risk that she could have a missed mass. So having that second pair of eyes is incredibly important, because when you're going through such mammograms, especially with increased complexity (maybe the fibroglandular tissue pattern is more complex than in someone who has a homogeneous pattern, the same tissue pattern everywhere), AI becomes more useful. The easy cases, where the tissue appears very homogeneous and maybe has more fat content, are a little easier; AI really makes a difference in cases with increased complexity.

- You mentioned, and you were very emphatic about this, as we want to be in this podcast, that AI isn't replacing the radiologist and it's not making a diagnosis. It's just a helping tool.

- Correct.

- And that distinction is very important, because in that survey I referenced earlier, we found that while younger generations are a little more aware of AI's role, most people, especially women over 45, still aren't sure how it works or what it does, and the uncertainty can fuel fears. Do you ever have that conversation with patients? Do they even know that you're using this tool? How would that come up?

- Yeah, I've had a mixed experience with this. Some patients are now asking, more so this year, I believe, than ever before. They're starting to hear about this thing called artificial intelligence that's being used in breast imaging, and not just in breast imaging but in radiology in general. And they are asking, which I appreciate, because it gives us the opportunity to dispel any fears associated with it. The most important takeaway is that AI is used as a helpful double check, but it's never intended to replace the radiologist as far as interpreting, and I'll give an example. Patients come in with several different histories. Some have a history of breast reduction or surgery, or maybe they had breast cancer in the past and have had a lumpectomy. Those simple examples can result in people having scars, or calcifications in their breasts, because of the changes from surgery.

- And a calcification can look like, maybe, a cancer.

- Well, that's the thing. Sometimes these post-surgical changes, and I'm just using this as an example, are among several benign findings which, when you don't have the correct history in mind, can look like cancer. So the AI may flag it: whoa, look at this area of unusual fibroglandular tissue or distortion. However, we as the radiologists know our patients all throughout.
So holistic medicine, making sure you know your patient, their story, their surgical history, their pertinent medical findings, is incredibly important.

- And it...

- It doesn't really get replaced by an algorithm, because at the end of the day we may know: oh, it marked an area that was their surgical scar; they've had it for the last 30 years, no big deal.

- Right.

- So while I think AI is incredibly useful, its limitation is that it's still not a human. It's meant to be a help, not a replacement.

- 36% of women in our survey aged 40 and older told us they wouldn't feel comfortable with AI assisting their doctor. Why do you think that is?

- I think the inherent fear is a depersonalization, and we see it not just in medicine, unfortunately. We all know what it's like when you're trying to return something to an organization and you get a voicemail or an automated system. I think most patients want to know that there's still a doctor there, that we still have a support system, especially when we are dealing with cancer. My most important takeaway, and what I hope most people take away from this podcast, is that we are never deserting our patients. The interpreting physician is one hundred percent interpreting their mammogram from start to finish. That will never change. This algorithm simply helps us make fewer mistakes in the event that we are on, say, our 50th study of the day and looking at a scan that's particularly complex; it acts as a double check. I think the fear comes from a worry that the doctor is going to be removed one day, that now I'm going to have a machine looking, and maybe the machine will miss something, or won't know that I have a family history of breast cancer or that I felt a lump last week. My hope is that I can reassure patients that we are not trying to replace the doctor in any way. I love my patients, I love my job. I would never want to give away the opportunity to participate in a patient's care to a machine, because we're just not there in terms of sensitivity and specificity. But not only that: history is important, the physical exam is still important, and talking to patients is still important. So it's a help, not a replacement.

- I think whenever anyone talks about AI, the fear that it's going to replace humans comes front and center. But a study published last year in the Lancet said mammography screening has been a cornerstone of early detection of breast cancer since the eighties, which you've mentioned, and that among its challenges is marked variability between radiologists in diagnostic accuracy, which leads to unnecessary recalls and missed cancers. Is there any data that you know of that supports the use of AI in reducing recalls or missed cancer diagnoses?

- Several journals are currently trying to evaluate how AI is impacting our callback rate and our statistics: how often are radiologists calling back abnormalities from the screening mammogram? To address your first point: there is going to be inherent variability between humans. Between one radiologist and another, there may be differences in quote-unquote sensitivity. Some radiologists may call back a little more; others may call back less.
The goal in the United States, where we do have standardized numbers that we try to attain, is that the radiologist doesn't call back more than around 10 to 12 percent of screeners for diagnostic evaluation. We are not in the habit of trying to scare everybody and say, oh my gosh, this could be something, this could be something; we have to be reasonable. This is why getting mammograms every year is the standard of care in the United States: if you get a mammogram every year, it enables the radiologist to increase their specificity and be sure that, ah, this is a new finding; this wasn't there last year. Now, among the things that have improved over the years and actually improve radiologists' sensitivity and specificity, we now have fellowship programs dedicated entirely to women's imaging. I myself underwent that type of fellowship training. During that year, essentially what you're doing is honing your skill set. You're seeing mammogram after mammogram, so you start to see: ah, the last time I called back this calcification or this mass, it ended up being benign. What ends up happening is that with experience, training, and increased years of education, your specificity goes up and you're not calling back quite so many. AI assists with this because, like I said, there are several scenarios where you look at a finding and could go either way on it. You could say, I could buy that this is benign, but part of me thinks, it's the only finding in this part of the breast; maybe I should call it back for more diagnostic evaluation. If the AI algorithm says, no, this is nothing, it helps prevent too many patients from being called back in the first place. So it does have a role: it helps the radiologist resolve those in-between cases and downgrade them instead of overcalling. Now, as to the Lancet, one reason there's also some variety in how sensitive and how specific certain AI algorithms are is that several AI algorithms exist, and different patient populations have different breast densities. Some populations of the world, and in this country, have very dense breasts, and other populations have fattier breasts. We have to keep in mind that an AI algorithm may have been trained on a data pool that had denser breasts. So what happens when we take that same algorithm and apply it to a different group of women, maybe a different age group, a different density group, a different racial group? We have to be very careful, when we train these AI algorithms, to train them on a diversity of mammograms and breast types, because an algorithm can be trained over time on one specific subtype and then may not be as useful or sensitive on another.
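That concern about an algorithm trained on one population underperforming on another is typically checked with a stratified audit: compute sensitivity and specificity separately for each subgroup. Below is a minimal, self-contained Python sketch of that bookkeeping; the records and density categories are synthetic stand-ins, not real screening data.

```python
from collections import defaultdict

# Each record: (density_category, ai_called_suspicious, biopsy_proven_cancer).
# Synthetic examples only, for illustration.
results = [
    ("dense", True, True), ("dense", False, True), ("dense", False, False),
    ("fatty", True, True), ("fatty", False, False), ("fatty", True, False),
]

by_group = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
for density, flagged, cancer in results:
    counts = by_group[density]
    if cancer:
        counts["tp" if flagged else "fn"] += 1   # caught vs. missed cancer
    else:
        counts["fp" if flagged else "tn"] += 1   # false alarm vs. correct pass

for density, c in by_group.items():
    sensitivity = c["tp"] / (c["tp"] + c["fn"])  # share of cancers the AI caught
    specificity = c["tn"] / (c["tn"] + c["fp"])  # share of benigns it left alone
    print(f"{density}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

A large gap between subgroups in a report like this is exactly the signal that the training data needs more diversity, which is the point Dr. Sakla makes above.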
- And who are you talking to when you say "we have to be careful" to train the AI? The manufacturers?

- Right now, AI is being developed in large part as a kind of conglomerate between physicians and manufacturers. The manufacturers, or designers, of these softwares are largely medical engineers. Now, medical engineers don't just make algorithms without consulting physicians. Oftentimes physicians are on the boards for these things, because we need to know how to apply it, and who knows better how to apply it than the physicians? We know what we need for our patients, and we also demand a certain level of sensitivity and accuracy before we think it's good enough to participate in the treatment process. I will say this, though: the AI algorithms that exist today are already light years better than what was available even just five years ago. So that's very promising. It's very important to continue researching, but overall, I think the trajectory of AI is that it's only going to help our patients.

- I can see that the more we use AI, the more comfortable we all get with it. I've seen the reading room: it's dark, and you're sitting in there for hours on end; like you said, you might have 50 mammograms to read. How do you even keep yourself alert to look at each image? And wouldn't it be easy to get lackadaisical, to say, if the AI says it's okay, I'm going to let this one go?

- Radiologists, I think, are an interesting breed. If you talk to a neurosurgeon, whom I revere and respect immensely, they go in for 12-hour surgeries, and I sit here thinking, oh my gosh, how do they do a surgery for 12 hours? With radiologists, the common misconception is: oh my gosh, you're in a reading room, how do you stay awake? But we love our jobs. Most of us think of ourselves as detectives. It's fun and enjoyable for us to help patients through medical imaging. We're the ones who enjoy looking at every single millimeter of that image and participating in the medical process by finding disease that way. While fatigue can happen in any medical specialty, in radiology the dimness of the reading room rarely has anything to do with our alertness. We are trained and specialized in interpreting mammograms for hours, and quite frankly, most of us could do this for eight to ten hours if we so desired. Most of us don't, though; we typically read screening mammograms in several-hour increments, and the AI simply acts as a helper tool, supporting our sensitivity by making sure we have a double check, a two-check system. I'd say it has less to do with fatigue, because most radiologists know their limits: if my fatigue limit is two hours, I'm not going to continue reading beyond that. We know, as part of our process, when it's a safe time to sign off and resume work the next day.

- Explain the two-check system.

- With a two-check, or double-check, system, it used to be theorized: what would it be like if we had two doctors look at the same mammogram, as a double check on each other? Because back in the day, we really didn't have a second set of eyes unless it was another human.
Now, the problem with that is that taking two doctors away from patients on a given day, only to read screens, for example, is not very realistic, nor is it efficient. We want our doctors to be in front of patients. I want to be with the patients, making sure that I interpret imaging with them and alongside them, and performing biopsies with them. So when your double check, your two-part system, can be a component of AI, it increases efficiency: one doctor can go through a screening list confidently, with a double check, while another doctor is with the patients doing diagnostics at the same time. Efficiency is critical, and AI is very, very helpful with that.

- Are there concerns about bias, whether the technology performs equally well across all ages, races, and breast densities? What's being done to make sure these tools are fair and accurate for everyone?

- I should go into a little bit of what we mean when we say "AI," because AI is a very umbrella term, and it doesn't really say what we're talking about. It's obviously an algorithm, a machine, but what does that mean? Artificial intelligence typically refers to something called a deep learning network. When we develop a deep learning network, we're trying to simulate, with a computer algorithm, how the human brain works: how the brain interprets findings, whether that's imaging, a picture, speech, or even language. There are several ways to build one, and one subtype is something called a convolutional neural network, a CNN. When we talk about making AI in radiology for CT images, or MRI, or mammography, we're often referring to creating an algorithm that simulates how the human brain works. When we see something in a picture, our eyes take it in, and we convey that information, slice by slice in some scenarios if we're talking about CTs, or as two-dimensional imaging with X-rays, to our brain, and our brain does its magic, like it always does, and says: okay, I made a decision; this is what I think it is. When you train an AI algorithm to do such a thing through these convolutional neural networks or deep learning systems, you're training the algorithm to take slice-by-slice information, or voxel information from an MRI, or maybe just a two-dimensional image from a 2D mammogram, and make a decision and ultimately form a conclusion. That's how we typically build these AI algorithms. Now, with respect to your question about how we train it so that it's used to seeing a diversity of patients, whether that's dense-breasted patients, fatty-breasted patients, patients who are small or large, different ethnicities, different races, even different genders (because we sometimes have to use mammography for men as well), what we do is increase the data set. You can't train an AI algorithm on only ten patients; typically you need thousands, and we try to make sure that population is as diverse as possible. Then we test the algorithm against a doctor. A lot of these algorithms, when they're being developed, will ask: what did the breast radiologist think this patient had? Did they say it was a negative case or a positive case? Then they test the AI algorithm, and if there's ever a difference in interpretation between the doctor and the AI algorithm, the doctor's interpretation is ultimately what's used to train the AI. The more the algorithm can learn and practice on more and more mammograms, in a diversity of patients, the better and better it gets. It's very promising, and that's how we ultimately train it to provide good data regardless of the diversity of patients.
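As a concrete illustration of the pattern just described, here is a toy convolutional network in Python (PyTorch) that maps a 2D image to a single suspicion score and is nudged toward the radiologist's label whenever the two disagree. The architecture, image size, and labels are all illustrative assumptions; clinical models are far larger and trained on thousands of curated mammograms.

```python
import torch
from torch import nn

class TinyMammoCNN(nn.Module):
    """Toy CNN: one grayscale image in, one suspicion logit out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn local texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # pool to one vector per image
        )
        self.classifier = nn.Linear(16, 1)               # one logit: suspicious or not

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyMammoCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Toy batch: four random "mammograms" plus the radiologist's read
# (1 = suspicious, 0 = negative). Where model and radiologist disagree,
# the loss pulls the model toward the radiologist's label, which is the
# ground-truth convention described in the interview.
images = torch.randn(4, 1, 64, 64)
radiologist_labels = torch.tensor([[1.], [0.], [0.], [1.]])

for _ in range(10):                                      # a few illustrative steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), radiologist_labels)
    loss.backward()
    optimizer.step()
```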
- Well, it sounds a lot easier than getting in a car, getting behind the wheel, and letting the car drive you.

- It's very complicated, but thankfully we have such skilled medical engineers and scientists working on this nowadays, and they are including medical experts and doctors, so that we can develop this artificial intelligence to be useful for us and not the other way around.

- Have you had a discrepancy between what you found and what the AI found? And how do you handle that, if there is a discrepancy?

- Yes, absolutely. It actually happens quite often, but it's nothing to be feared. I gave an example earlier of patients with a history of surgery, where the AI algorithm thinks, oh my gosh, this is a distortion, this must be a cancer, a huge spiculated mass. However, the doctor looks at the image and knows this patient; they've been in the practice for years, and we know they had, perhaps, a reduction on that breast, or a lumpectomy. We ultimately know what the findings are, and we know why the AI is flagging it. So while it's still useful as a double check, if there's ever a discrepancy and the radiologist looks at the finding and has a reasonable explanation for it, the radiologist's interpretation, as usual, is the one that stands, not the AI's. The AI, again, is meant to be a help, not a replacement.

- Even if you inform the software, the AI, that the patient had a lumpectomy here, or the patient has dense breast tissue?

- It can still happen, unfortunately. It does its best with dense-breasted women; it does its best not to mark everything under the sun, and it's really improved over the course of the last five to ten years. However, it's not perfect. Again, it's not a human, and it's very hard to train an algorithm to know the things a human perceives when they talk to a patient. For example, if I go talk to a patient and the patient looks very nervous and is guarding a certain part of their breast, they may have already undergone mammography and ultrasound; but when you go in and talk to them, the patient reveals: I actually fell several days ago, and I have this lump on the left side of my breast. The physical exam has not lost its juice, if you will; it's still important to follow up with the patient and ask about history, because sometimes findings on mammography and ultrasound mean very little without a pertinent patient history. For example, a hematoma can look like a ginormous mass on a mammogram. If I know the patient had a trauma a couple of days ago, I'm going to lean toward: this is probably a hematoma.
If I don't have that history and I just leave it up to an algorithm to say, oh, there's a mass, and call it that, it may lead to unnecessary workup for something that's quite simply a benign finding.

- What excites you most about this technology, and what still needs to be figured out?

- What excites me most personally is how we can apply it to different modalities, not just mammography. Everybody talks about mammography because that's what we have the most exposure to: we can use AI in our screening mammograms and our diagnostic mammograms, as we've been discussing. But one of the things that excites me in particular is how we can use it in ultrasound or MRI. MRI is used a lot in breast imaging when it comes to breast cancer screening and detection, especially in our high-risk genetic carriers and high-risk patients in general. There are things AI can pick up, especially with contrast-enhanced exams like MRI, that we simply cannot see with the naked eye. With mammography, most of the time, AI is not so much picking up things we can't see with the naked eye; we can see them, and it's a matter of, did you look at this area? With MRI, however, when you give someone contrast and it spreads throughout the breast and lesions start to pop up, you can sometimes train AI algorithms to pick up on information that you just cannot see with the naked eye. For example, and I actually did a study on this several years ago, if you know someone has cancer and they get an MRI exam to stage it, to see how extensive the cancer is, where it went, how big it is, it's very interesting that AI, we're finding, can be trained to tell you whether or not a mass will respond to chemotherapy before the patient even undergoes chemotherapy.

- Wow.

- Now that's incredible, and incredibly useful, because you still need the radiologist to interpret the imaging; however, it adds a benefit that I otherwise would not be able to offer. It tells me, based on how this mass took up the contrast on a very micro level, looking at the voxels in an MRI, whether or not it thinks the patient is going to respond to neoadjuvant chemotherapy before ultimately undergoing surgery. Why is that important? If we develop an algorithm so good that it can tell me this person is not going to respond to neoadjuvant chemotherapy, because the cancer is picking up the contrast in a way that just doesn't look like it will be responsive, then I'm not going to waste time potentially delaying the patient's surgery for several months, waiting for the cancer to shrink on chemotherapy, if there was no chance for it to in the first place. So it offers additional information that we just wouldn't otherwise have at our fingertips from imaging alone. This is just a little slice of the pie, one idea of how we can apply AI in the future.
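Dr. Sakla's published method isn't detailed in this conversation, so as a loose sketch only: the radiomics idea she describes is commonly approached by summarizing voxel-level contrast uptake inside a tumor into a few features and fitting a classifier against known treatment outcomes. The Python below is a generic, synthetic-data illustration of that idea, not her study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def uptake_features(voxels: np.ndarray) -> np.ndarray:
    """Reduce a tumor's enhancement voxels to simple summary features."""
    return np.array([voxels.mean(), voxels.std(), np.percentile(voxels, 95)])

# 40 synthetic tumors: each a bag of voxel enhancement values, plus an
# outcome label (1 = responded to neoadjuvant chemo, 0 = did not).
tumors = [rng.normal(loc=rng.uniform(0.5, 2.0), scale=0.3, size=500)
          for _ in range(40)]
responded = rng.integers(0, 2, size=40)

X = np.stack([uptake_features(t) for t in tumors])
clf = LogisticRegression().fit(X, responded)

# Predict response probability for a new (synthetic) tumor.
new_tumor = rng.normal(1.2, 0.3, size=500)
print("Predicted response probability:",
      clf.predict_proba(uptake_features(new_tumor).reshape(1, -1))[0, 1])
```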
- Exciting stuff.

- Yeah, very. The AI is an assistance tool, but it is in no way a replacement for the physician. The physician still has to do the interpretation, and still ultimately has to make a decision based on all of the information at hand, whether it be the patient's history, surgery, overall health, and what the ultimate question is for each exam.

- I appreciate that, because the more patients understand about how AI is working and how it's helping their doctors, in whom their trust still rests, the more confident I think they'll feel about the whole screening experience, and the less they'll shy away from mammograms. Dr. Sakla, thank you for being with us here on DocTalk.

- Awesome, thank you so much for having me. It was an absolute pleasure. This was very fun. And yes, just remember: don't forget to get your screening mammograms every year.

- For more information about breast imaging and screening at MedStar Health, visit medstarhealth.org/breasthealth or call 202-877-2800. If you would like to comment on this podcast or recommend a topic for another episode of DocTalk, send an email to DocTalk@medstar.net.
