The Buzz Podcast
Welcome to THE BUZZ PODCAST, where business, technology, and innovation collide! We dive deep into candid conversations with the leaders who are pushing the boundaries in healthcare, technology, and beyond—people who are rewriting the rules, and in some cases, breaking them.
Episode 11: Unlocking Secrets, Risks, and Hidden Gold
Join us as we dive into the world of AI in healthcare with Julia Komissarchik, CEO and founder of Glendor. With over 20 years of experience in AI and 10 US patents, Julia shares her insights on the challenges and opportunities in using AI to improve healthcare outcomes. Discover how AI models are trained, the importance of data privacy, and the potential for AI to revolutionize patient care.
The three pillars of AI: models, hardware, and data
Challenges in accessing sensitive healthcare data
The role of federated learning in AI model training
The importance of diverse data sets for reliable AI models
The potential of AI in rural healthcare settings
The future of AI in predictive and diagnostic healthcare
Julia Komissarchik is a leading expert in AI with a background in mathematics and computer science. As the CEO of Glendor, she focuses on making healthcare data accessible while ensuring patient privacy.
Be sure to find us on TheBuzzPodcast.Net
Welcome to the Buzz Podcast. I'm your host, Mike Moschino.
SPEAKER_01:And I'm Maureen Dyleen.
SPEAKER_04:The Buzz Podcast is where business, technology, and innovation collide. It's where we deep dive into candid conversations with leaders who are pushing the boundaries in healthcare, technology, and beyond.
SPEAKER_01:It's also where you're going to hear industry experts who are rewriting the rules, and in some cases, they're breaking them. So don't forget to join us for each new episode, and like and subscribe to The Buzz Podcast.
SPEAKER_04:Hello, everyone. Mike Moschino with the Buzz Podcast here, along with my amazing co-host, Maureen. And we are with a wonderful guest today that's joining us to give us a little bit more of our intrigue that you love to listen to. Our episodes are tailored to give you insight, direction, and a little bit of clarity about what's happening in our world of healthcare. Today, I'd like to introduce Julia. I'm going to butcher your name. Maureen, can you say her name?
SPEAKER_01:Yes, it's Julia, Mike. Julia.
SPEAKER_04:Julia. Julia. We're not going to do it. She got me again. She always does this
SPEAKER_00:to me. Okay, if it helps. My college nickname was Juliak because nobody can pronounce my last name.
SPEAKER_04:It's an amazing name. Julia, welcome.
SPEAKER_01:Welcome, welcome. I had the privilege, Mike, of meeting Julia a couple of months ago. And after I finished my conversation with her, I immediately hung up the phone and called you and said, oh my gosh, we have got to have her on the show, because she is so fantastic. So Julia, I'm going to let you introduce yourself, along with your last name, here in just a second. But what I want our viewers to really hone in on in today's conversation is that the information you are going to hear is something different and unique, and you will want to understand it more and know more about Julia and the work she is doing, because I'll be honest with you: as long as I've been in healthcare, this was all new to me. So, Julia is the CEO and founder of Glendor. But more than that, and I definitely want to highlight the credentials, because she's got all of the science to back this up: she has a bachelor's in mathematics and computer science from the University of California, Berkeley. In addition to that, she has a master's in computer science and cognitive studies from Cornell University. So safe to say, Julia, you are a powerhouse. But please introduce yourself further and tell the viewers a little bit more about you.
SPEAKER_00:That's great, and thank you for having me on the show. So my name is Julia Komissarchik. I know, there won't be a test on how to pronounce my last name at the end of the podcast. And my background is AI. I've been in AI for over 20 years, I have co-authored 10 US patents, and we also have several more coming up. I've been living AI all my professional life.
SPEAKER_01:Yeah, and I think what I told Mike, as I said, I feel so silly having admitted this, but AI feels like a new wave to me. But actually, you've been at this for over two decades. So it's not new; it just may be new to me. So tell us a little bit more about why you got into the work that you're doing at Glendor, and how AI has transformed what you feel is the next wave for healthcare.
UNKNOWN:Yes.
SPEAKER_00:Yeah, absolutely. So AI depends on three things. One of them is models, and we're seeing an upsurge in new models right now, like LLMs and others. One is GPUs, or CPUs in the old times. And the third one is data. For GPUs, it's a question of resources and money: how much do you want to invest in creating those models and buying the machines that you want to run your models on? But the data is the trickiest part of all three, and here is why. If we look at ChatGPT or any other LLMs that are on the market right now, they were all trained on publicly available internet data, in billions of points. However, when it comes to AI in healthcare, the data that the models are trained on is, by definition, highly sensitive, highly personal medical records of patients. So access to that data is very limited. And that's the problem that we are solving at Glendor: how to make it possible for AI companies, and for researchers in general, to have access to the data while still protecting patients' privacy. Because that's the most important thing. Once the data is out there, it's not gonna be protected, and it's actually highly valuable on the dark web. If we look at pricing on the dark web, a Social Security number costs about 50 cents, but a single medical record costs $250 plus. So 500x. There's a reason for it. Credit cards come and go, you know; economically, sometimes we're better, sometimes we're worse. But if we had a childhood disease, or if we had COVID, that will stay with us forever. And a lot of this information is highly sensitive. It can be actionable in the wrong hands.
SPEAKER_01:Yeah. And so just to dive into that a little bit further and tease that out: one of the things that I didn't realize, of course, is that when I think about PHI, I think of it in very traditional terms, name, date of birth, but you actually go many, many layers deeper than that. It's a baby's fingerprint, or their footprint, or the back of somebody's head. So tell us a little bit more. It's not just the traditional PHI. You really are sanitizing a lot of information so that it's helpful but not identifiable.
SPEAKER_00:Exactly. So when we think about PHI de-identification, we're thinking about a report, which will have a patient's name, address, doctor's name, et cetera, on it. But it's so much more, especially, for example, if we're talking about medical images: x-rays, CT scans, ultrasounds. Both x-rays and ultrasounds are very famous for so-called burned-in PHI. That means if you look at your x-ray or your ultrasound, you will see your name actually stamped into the image. That is not an overlay or an addition or a metadata tag; it's actually part of the image itself. So that information has to be de-identified; otherwise that x-ray will be uniquely connected with you. But it goes beyond that, because consider, for example, brain MRIs or CT scans. What is an MRI or CT scan? It's an image which has slices in it. If you combine those slices together, that's a face. Let's say you de-identified the metadata tags and you de-identified the burned-in text; you can still recognize the face itself, because facial recognition is now so much better.
SPEAKER_04:It's so crazy to me.
SPEAKER_00:It's crazy.
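The two layers of de-identification Julia describes, stripping metadata tags and masking burned-in pixel text, can be sketched roughly as follows. The tag names and region coordinates here are illustrative assumptions, not Glendor's actual method; a real tool must first locate the burned-in text (for example with OCR) before masking it.

```python
# Toy sketch of two de-identification steps: (1) strip identifying
# metadata tags, (2) zero out a region where text is burned into the
# pixels. Tag names and coordinates are illustrative assumptions.

SENSITIVE_TAGS = {"PatientName", "PatientAddress", "ReferringPhysician"}

def strip_metadata(tags: dict) -> dict:
    """Drop identifying metadata tags, keep the rest."""
    return {k: v for k, v in tags.items() if k not in SENSITIVE_TAGS}

def mask_burned_in_text(pixels, rows, cols):
    """Zero out a rectangular region where text is burned into the image."""
    r0, r1 = rows
    c0, c1 = cols
    return [
        [0 if r0 <= r < r1 and c0 <= c < c1 else px
         for c, px in enumerate(row)]
        for r, row in enumerate(pixels)
    ]

image = [[100] * 8 for _ in range(8)]        # toy 8x8 grayscale image
clean = mask_burned_in_text(image, rows=(0, 2), cols=(0, 8))  # top banner
meta = strip_metadata({"PatientName": "John Smith", "Modality": "CR"})
```

Note that, as Julia points out, even this is not enough for 3D scans: the face reconstructable from the slices is itself identifying, which is why real pipelines go further than tag and banner removal.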
SPEAKER_04:Yes. I mean, it makes more sense that you would want to go this level deeper, because that data, like you said, in the wrong hands, just a sliver of it can go a long way in causing havoc in someone's life. Let's talk about data equity: those who can protect it and those who can't. And when AI is trained on the wrong data that's put into these systems, how do you see that being managed? What do you think happens? Because I speak on PHI and data and AI a lot. From your perspective, you see a lot of this in your world. What happens when you see AI trained at this level of depth and it goes wrong?
SPEAKER_00:There are actually two answers to this. One of them goes back to how the models are trained. One of the things which has been very promising for a long time, especially recently, is so-called federated learning. The idea here is that you actually send the models to the location where the data is, rather than sending the data to the model's location. So the perception was that if we do this, then from, let's say, a hospital, you only send back weights. The thought was that this would protect the data, because you're just sending weights. Unfortunately, based on those weights, you can reconstruct the original image, going back to that x-ray with the name in the corner. So as a result, whenever models are trained, federated learning or not, the data has to be cleansed of identifiable information; otherwise it can be reconstructed.
SPEAKER_04:But there's also... Who's liable in that instance there, Julia?
SPEAKER_00:Well, that's the hospitals and AI developers, et cetera. But there's also a second answer to your question, because your question can be answered both ways. The other question is: what is the data that the model is trained on? Because if the data the model is trained on is not data that represents us, all of us here in the room, then those models will not be reliable. And that's a big challenge, because, unfortunately, we need this data to be shared in order to make sure that those models are actually applicable to us. If we look at a publication in JAMA, the Journal of the American Medical Association, in 2020, they were discussing where the data for AI models is coming from. Granted, it was 2020, and we're now in '25, so the situation has changed a little bit, but we're still not where we need to be. In 2020, the data was coming from the coasts. I'm from Utah, for example; it's nowhere present there. And California was very well represented, but if we look closer, it's only the coast, not rural California. So no rural hospitals are participating, because it's easier to get data from the Stanfords and Mayos of this world. However, the data from rural hospitals is very much needed, because without it, those models will not be applicable to those of us who are not necessarily living on the coast.
SPEAKER_04:So when you talk about those models, and only taking the coast and not the rural areas, that creates those data silos. And the underserved population isn't represented. What are you seeing there? Because I've spoken on this. I'm now talking to an expert here. What are you seeing for these data silos for the underserved? And where do you see that going? It's 2025. We should have started working towards this by now.
SPEAKER_00:We as a society are starting to work on this. Some countries have actually intentionally created data sets for their AI specialists. For example, Korea. Korea created a data set that they offered or provided to companies that are training their models on that data, and that gave those companies a very big jumpstart. That's why one of the leaders in the space is Lunit, which is based out of Korea. So we are starting to see this. There are a lot of conversations, a lot of different initiatives on the federal level and on the state level. But yes, there's a lot of work that still needs to be done. Because, going back to those AI models: a thousand-plus AI models have been approved by the FDA. But what do you think is the average installation count for those models? Just take a guess. One. Each model is installed, on average, in only one location. And here's the reason: because it's trained on a certain data set, it works for that data set, for a certain population. But if you take it to another place, it won't. So, going back to where we started, right? We need to make sure that the data is representative. Otherwise, those models are just not, so-called, generalizable; they can't be used in different locations.
SPEAKER_02:Wow.
SPEAKER_01:I think the hard part is when I think about all of that, Julia, those rural areas, especially in the climate that we are in today, where funding is really... It's constricted in a major way. In those rural health areas, they are feeling massive, massive pinches. And every day we hear about a critical access hospital that can't stay open. So the data we know needs to be there. It needs to be represented. Those people need to have representation for these models to work correctly. But how are these rural health clinics and hospital systems going to be able to afford that? Mike and I were at a conference a couple of months ago, and we had someone in the room bring up an unpopular opinion, Mike, of can that data be monetized in some way? You know, is there even a capacity to be able to look at that? I know it's not everybody's favorite, but we've got to have a trade off. I don't know how we're going to get the data and have that data be utilized in a way that's effective and also have it be cost effective.
SPEAKER_00:Well, hospitals, rural hospitals in particular, are sitting on a goldmine, because the price for, let's say, common x-rays from larger hospitals is basically cheaper than the price for the same data from rural hospitals; it's harder to obtain. So rural hospitals are sitting on a goldmine. And just to add some statistics to what you mentioned about the situation of rural hospitals: the 2024 Chartis report on rural hospitals showed that 50% are in the red. Half of them. But the good news is they are sitting on a goldmine, because the data that they have is extremely valuable and extremely needed to train those models. The question is how to prepare this data. And to your point, a lot of times rural hospitals don't have enough IT resources to even think about it. That's something that needs to be addressed. Well, if I may pitch ourselves: for Glendor, our solution is, let's make it very easy to de-identify the data. Because if we can make it easier for hospitals to cleanse the data, then they can actually share it safely, in compliance with HIPAA laws and, most importantly, while protecting patients' privacy. One of the ways to think of it is like having a washing machine: you make sure that no dirty laundry, in this case PHI, leaves the house. So that's how it can be done. But we are part of a whole; there's more that needs to be done, like how to make it easier for hospitals to prepare the data and store the data. The whole IT infrastructure needs to be there, and there is some work done on it, and there are companies that are focusing on it. For us, our job, and that's what we focus on, is how to make sure that the data is cleansed so that privacy is protected and, at the same time, the data can be monetized. Funnily enough, I've also heard a lot of times that monetization is sort of a bad word.
But I just came back from SIIM, the medical imaging conference, in Portland, and monetization was something that a lot of people were talking about. So I think we're turning the corner on this "Oh, no, no, no, let's not do monetization, this is a scary word" attitude. But it must be done in a proper way, with proper data governance, so that the data is de-identified, so that things like MRI and CT scans are not just sent out into the world with faces still visible.
SPEAKER_01:It's just shocking, Mike. It really is when I look at all of this and think of the possibilities. But yet, what steps do we need to take in order to truly get there?
SPEAKER_04:Right. Because people think of security only around infrastructure and cyber. They don't think of security around the data of a patient's face, of a footprint, of a fingerprint, of a baby, as being that type of security. But it's still data. No matter the structure, attackers aren't always getting behind the scenes to get credit card information; sometimes they want images. And like you said, we were talking pre-show about very sensitive images that people want to get access to. They want that information for ill personal reasons or for financial gain. It doesn't matter which; it still breaks the laws and rules governing data inside a healthcare institution.
SPEAKER_00:Yeah. And it's also punishable. So, to let everybody in on the conversation: we're discussing sensitive images, like nude pictures, right? And there's an interesting paper, if I may say so myself, on sensitive imaging that was published in a JAMA journal last year. But there's also such a thing to consider as this: there's a hospital in, I think, Pennsylvania, that accidentally, it wasn't intentional, released hundreds of images of cancer patients, nude cancer patients, with genitalia, breasts, and everything else. They were fined $70,000 to $90,000 per patient. And this hospital has now merged with another hospital, because they had to pay, and it financially ruined them. So there are fiscal and legal implications, but also moral and reputational damage implications, in de-identification and the need for de-identification.
SPEAKER_04:And the need for patients to understand where their data is kept and transmitted, too. Yes. Because that's also key, right? You're not faxing images over to another office anymore, right? This is moving at hyperspeed and in large volumes.
SPEAKER_00:I wish that were not the case. But yes, everybody's using faxes. I was in an orthopedics office a couple of days ago, taking somebody else there, and they were discussing that they needed to fax XYZ to another location.
SPEAKER_01:I know. We're never going to get away from those things, are we? I know that we're in the age where everything is so fast and electronic, and we're still sending faxes back and forth. I think it's funny.
SPEAKER_00:That's right. I agree. Mike, you raised a really good point, because there's privacy and there's security. Let's say I'm a hospital and you're a research institution, and I'm sending you the data. Security is about making sure that my silo is not accessed by the bad guys, your silo is not accessed by the bad guys, and the data in transfer is not accessed by the bad guys. Privacy is about me choosing how much I should share with you. Because for most research, you don't need to know that this particular record came from John Smith at 123 Main Street. There are some exceptions. Let me give you one: dental imaging. Teeth identify the person, but at the same time, if you're doing dental research, there's no way around it; you have to have access to the teeth. So privacy is about deciding how much should be disclosed; security is about how to protect all of it. You're absolutely right: they're often perceived as one, but they are two different things, and they both need to be thought through. And also, I really liked your point that we as patients need to think about what's happening to our data, because we as patients can contribute to research. I have done that, and I know everybody else has. But at the same time, one has to be very careful about where the data is going, and whether or not it will be shared as-is or de-identified. It could be from a hospital, or it could be from a wearable device, IoT, the Internet of Things. Those devices are collecting a lot of data, and it can now actually identify a lot of things about us. How do they share it afterwards? All of this is in the small print. I encourage patients themselves to think through how their data is collected, where it's shared, and how.
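The privacy-versus-security distinction Julia draws can be sketched in code: one function decides what leaves the hospital at all (privacy), another protects whatever does leave (security). The field names are hypothetical, and the XOR step is a deliberately toy stand-in for real encryption, purely to make the separation of concerns visible.

```python
# Toy sketch separating the two concerns: "privacy" chooses which fields
# are disclosed for a given study; "security" protects the chosen payload
# in transit. The XOR "cipher" is illustrative only, never use it for real.

RECORD = {"name": "John Smith", "address": "123 Main St",
          "teeth_scan": "<pixels>", "diagnosis": "caries"}

def privacy_filter(record, allowed_fields):
    """Privacy: the data holder decides what is shared at all."""
    return {k: v for k, v in record.items() if k in allowed_fields}

def secure_transfer(payload, key=0x5A):
    """Security: protect the chosen payload in transit (toy XOR stand-in)."""
    return bytes(b ^ key for b in repr(payload).encode())

# Dental research needs the teeth, not the name or home address:
shared = privacy_filter(RECORD, {"teeth_scan", "diagnosis"})
wire = secure_transfer(shared)
```

The point of keeping the two functions separate is Julia's point exactly: encrypting everything (security) does nothing to answer the question of whether John Smith's address should have been in the payload in the first place (privacy).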
SPEAKER_02:Yeah.
SPEAKER_01:And of course, you know, Mike... no, go ahead. I know you're inspired too. I'm so inspired by everything Julia is saying right now.
SPEAKER_04:Right. She gives us great nuggets to pull from here. And for our listeners: Julia is an expert at this. This conversation is not with just a casual colleague in the industry; this is an industry expert with patents around data management. And I want to make sure you understand, if you're listening, that this is not just a conversation about that hospital in your town. These conversations are about you and your data. You're a listener, but you should care about this conversation, because you don't want to become a victim of your data being mismanaged, whether at a secure facility or in transport, where privacy has to be managed. So as we continue this conversation, Julia: how can AI help providers identify health issues? Say a patient is flagged, they're a high-risk, high-profile individual, they're in the provider system, and I need to look at this patient in one light, but I've identified them as having an issue in another light. Specifically, I ordered a screening of their lungs, but I've seen something in their liver. Talk to me about what you're seeing from an AI standpoint, and how is that going over for patients?
SPEAKER_00:So that's actually one of the beautiful things about AI, because when it looks at, let's say, images, it can see more than one thing, whereas usually doctors are focused on one thing, one diagnostic. Let me give you an example: mammography. Every year, a certain percentage of the population gets a mammogram. But that mammogram actually also shows vascular disease, and in women, it's often underdiagnosed. So the same image can be used for so many different things. It can be used to identify nodules, but also vascular issues. And that's just one example, right? So that's the beauty of it. Well, it's not just AI; it's a question of doing multimodal analysis and multi-diagnostic analysis. It's just that when it comes to humans, it's harder to do, because the radiologist has very limited time. If it was a mammogram, they're focusing on the nodules; they don't pay attention to other things. But with AI, we can actually do all of them from the same exam. And the beauty of things like that exam, or a dental exam, you know, dental x-rays, is that they are done on a regular basis, so you can actually see the progress.
SPEAKER_04:Yeah, right. I think she's in an exam right now, and I hope she's gone to a provider that can give her that next level of identification. Because I've been saved by it, and so has my brother. We've both been saved by AI-enabled imaging machines, where they were looking for one thing, and the next thing you know, we're getting calls about, hey, we need to pay attention to something else here. So I think you're right. I know you're right. I've been saved by the advancements of AI in tech and health, and in imaging specifically, to take better care of the patient and predict better care for the patient.
SPEAKER_01:Yeah. And Julia, do you think it also has the opportunity, though, to help with the burden on providers as well? I always go back to the rural health community, where they just don't have the access to providers that we do in the larger metropolitan areas, or even other countries that obviously don't have the level of providers that they need, but the care is still needed, right? You don't get to say, well, only a certain population now gets to be sick, so everybody else needs to stay healthy. But that being said, is there an opportunity to offload some of that burden in the healthcare community?
UNKNOWN:Yeah.
SPEAKER_00:Absolutely. There is, for example, this concept of so-called zero-false-negative models. Those are models that don't miss things. They don't claim to find everything; what they do is, if they say nothing is there, then with a very high percentage of confidence, nothing is 100%, but high 90s, there really is nothing there. So let's say 50% of the exams are marked as "nothing there" by this model; that reduces the number of, let's say, mammograms that the radiologist has to look at. Because, going back to mammograms, 95% are usually benign, meaning that there's nothing there. So that's one way to do this. The other way is triage: identifying potential issues and raising them to the radiologist, et cetera. But let me say, it's not all doom and gloom, right? The good news for rural hospitals, and for people who don't have healthcare providers in their immediate vicinity, is that x-ray machines are getting cheaper and cheaper, smaller and smaller. I live in Utah. We have one rural mining community called Helper, and they have a museum there. They still have it; it's great; I recommend it to anybody. They have a dental x-ray machine from the beginning of last century. It takes up a room. Now, if you go to the dentist, they have one this big, and I've seen some which are even smaller. So basically, with x-rays and these machines, something like the tricorder from Star Trek is possible. What is still an open question is making decisions, right? Making diagnoses. And the models will not get there, they will just basically die, if we don't have access to the data, because that's the only way they can do that. Having said that, at the end of the day, it still will be a human decision, right? AI is a heavy lifter, a shortcut; it can do certain things. But at the end of the day, you know, I'm always asked, oh, will AI replace radiologists? Nonsense.
It will help, hopefully, reduce the load on radiologists. The obvious cases, where there's nothing there, why would you want to waste time on them? Let's remove them. Or something which is really questionable, let's make sure that a human looks at it. So yeah, it's exciting. We're getting there. But in my opinion, if we don't break this wall of data access, AI will be one of those things where people say, oh, it was a great idea, but nobody trusts it. Because, Maureen, you mentioned AI has been around for a while. AI has gone through these cycles where everybody says, okay, AI will solve all the problems, and then everybody says, no, AI doesn't work at all. Then it goes, oh, no, no, AI will solve all the problems. So we're on one of those precipices, and if we don't do anything with data, we'll go down, and people will say, ah, AI, who wants to do this?
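The zero-false-negative idea above can be sketched as a simple thresholding rule. This is a toy illustration with made-up scores, not a clinical model: on a labeled validation set, pick the highest score threshold that still catches every positive case, then auto-clear exams scoring below it and send the rest to the radiologist.

```python
# Toy sketch of "zero false negative" triage: choose the largest threshold
# that misses no positive on the validation set, then count how much of the
# reading workload the auto-cleared exams would remove. Scores and labels
# are invented for illustration.

def zero_fn_threshold(scores, labels):
    """Largest threshold t such that every positive case has score >= t."""
    return min(s for s, y in zip(scores, labels) if y == 1)

scores = [0.05, 0.10, 0.20, 0.35, 0.40, 0.70, 0.90]  # model suspicion scores
labels = [0,    0,    0,    0,    1,    0,    1]      # 1 = finding present

t = zero_fn_threshold(scores, labels)
cleared = [y for s, y in zip(scores, labels) if s < t]  # auto-cleared exams
workload_saved = len(cleared) / len(scores)
```

On this toy set every cleared exam is a true negative, matching Julia's framing: the model does not claim to find everything, it only claims that what it clears is safe to clear. In practice the threshold would be validated on far more data, since a single missed positive breaks the guarantee.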
SPEAKER_04:Yeah, I think somewhere in the middle of what you just described is where we fall with AI. And I think those who have been affected by the use of AI in their care treatment planning have seen the benefits of it. And I think we will continue to see those benefits in predictive care. I think that is going to become the new norm for this patient evolution, this consumer-patient. You see these patients look for healthcare facilities where they can actually get to their primary care physician, seek treatment, advanced treatment, and predict when they need to come back to the doctor for care with a certain accuracy because of AI. And I think as that starts to evolve, our care models will change. Our in-facility care will change as well. So I think we're on an evolution of using AI, but there is always going to be a patient-AI relationship, and a doctor in the room. You always have to have that human in the loop, and I truly believe that.
SPEAKER_00:Yes, and I agree with you. The only advice I have is when it comes to AI, we need to be very thoughtful of how we are doing this, because if we don't think through how AI will help us, AI will make decisions for us, and that's very dangerous.
SPEAKER_04:Well, tell us, and our listeners out here: what excites you most about the future of AI and healthcare? You're an expert. You've got all these letters after your name, patents listed. What excites someone like you about our future?
SPEAKER_00:Well, it's what you just mentioned, the potential, what we can do, because it can help. If done right, it can make a difference because unfortunately, the population is growing older or growing larger and the number of doctors is not increasing. So we're having a gap and the gap is becoming worse and worse. We'll need to have something in place because we won't have enough doctors for all of us. And AI is very promising. Just as long as we treat it right and also give it data so that it can work for different populations.
SPEAKER_01:I love this. Oh my gosh, Mike, this is so great.
SPEAKER_04:I look forward to it, because I am a huge proponent of looking to data to solve our problems. And if AI already knows how the problem is solved, hey, bring it on for healthcare. Bring it on for our patients. Bring it on for our rural communities, and provide this care in a ready state. I think we've got this advancement not too far in the distant future. I'm ready to stand in front of that Jetsons screen, where you just stand in front of it, and it scans your whole body and tells you everything. I'm ready for that.
SPEAKER_00:It's in the house. Every sci-fi universe we can think of, the Jetsons, Star Trek, Stargate, they all discuss the fact that you have this machine which will scan everything and give you all the details, and then the humans decide what they want to do with it. 100%.
SPEAKER_04:Yeah. Well, Julia, I want to thank you for joining us today. And I want to thank all of our listeners. Tell our listeners, where can they connect with you? Where can they find you?
SPEAKER_00:Well, our website is glendor.com, G-L-E-N-D-O-R dot com. And you can also find me on LinkedIn. Please connect with me; let's have additional discussions.
SPEAKER_04:Well, thank you very much for joining us today. And thank all of our listeners for joining the Buzz podcast and listening to an expert like Julia on this show. This show is for you, those who care about the future of healthcare, advanced technologies, emerging technologies, but it's also for those that are looking to listen to someone that's a little different, that's an expert in the field, that knows what they're talking about. We're going to bring the reality of healthcare today and tomorrow. So we thank you for listening. Follow us, like and subscribe on your channels. And until next time, keep buzzing.
SPEAKER_03:Bye-bye.
UNKNOWN:Thank you.