Digital Pathology Podcast

Navigating Ethical Challenges in AI-Powered Pathology | Webinar recording

April 18, 2024 | Episode 89 | Aleksandra Zuraw

Navigating Ethical Challenges in AI-Powered Pathology

This episode is a webinar recording that delves into the complex ethical considerations of incorporating artificial intelligence (AI) into pathology.

Dr. Zuraw begins by exploring the fundamentals of ethics and moves on to discuss the impact of AI in pathology, focusing on:

  • ethical dilemmas, 
  • data diversity issues, 
  • biases, and 
  • the importance of maintaining professional and societal ethical standards as AI is integrated into the field. 


The session touches upon the ethical guidelines and regulatory frameworks guiding ethical decision-making in healthcare, alongside the role of regulatory agencies like the FDA. 

It also highlights the significance of data diversity and mitigation strategies to address potential ethical pitfalls in AI utilization. 

The webinar emphasizes the constant balance between advancing technology and ethical responsibility, underlining the need for transparency, governance, and accountability in deploying AI tools in pathology.

00:00 Introduction to Ethics in AI-Powered Pathology
00:30 Exploring the Ethical Dilemmas in Healthcare
00:36 Webinar Overview and Digital Pathology Insights
01:05 Defining Ethics and Its Importance in AI Pathology
02:05 Interactive Webinar Engagement and Audience Participation
03:25 Deep Dive into Ethics: Definitions and Applications
09:31 Ethical Considerations in Biomedical Research
10:34 Navigating Ethical Dilemmas: A Practical Example
13:57 Understanding Ethical Principles in Decision Making
17:14 AI Bias and Representation in Pathology
20:52 Frameworks and Guidelines for Ethical Oversight
25:02 AI Applications in Pathology: Ethical Perspectives
27:21 Exploring AI in Research and Its Capabilities
29:22 AI's Role in Medical Imaging and Diagnostics
33:11 Ethical Considerations and AI in Pathology
34:04 Addressing AI Challenges: Bias, Interpretability, and Security
44:26 AI as a Medical Device: Regulatory Perspectives and Future Directions
49:16 Concluding Thoughts and Audience Engagement


THIS EPISODE'S RESOURCES:

Support the Show.

Become a Digital Pathology Trailblazer and see you inside the club: Digital Pathology Club Membership

Transcript


Aleksandra:

Today's topic is ethics in AI-powered pathology. When you say ethics, behaving ethically, everybody knows what that is. But then you're confronted with an ethical dilemma, and I'm not talking about an ethical dilemma in your personal life. There are these scenarios that are designed to kind of not have a good solution. They're like little case studies, problems in ethics, and they're called ethical dilemmas. When you're presented with one of these, there is never a right answer. So how should we think about ethics where AI-powered pathology is concerned? Today I had a webinar about this topic, and this episode is a recording of that webinar.

Intro:

Learn about the newest digital pathology trends in science and industry. Meet the most interesting people in the niche, and gain insights relevant to your own projects. Here is where pathology meets computer science. You are listening to the Digital Pathology Podcast with your host, Dr. Aleksandra Zuraw.

Aleksandra:

ethical considerations in AI driven pathology, challenges, and mitigation strategies. We're going to start with what ethics actually is because, um, and it's a word used in, um, common language, colloquial language. Something is ethical means something is good. Something is. Okay, it's not wrong, right? Uh, and, um, we all as professionals, as scientists, and people working in healthcare, uh, we want to be ethical. It's, it has been, like, it has been studied, it has been, uh, there are regulatory bodies for this, there's like a lot around it, but AI is just coming into the picture, so how are we even being ethical with the use of AI? We're going to be talking about it today. But we will start with introductions, uh, introduction to ethics and definitions. And before we move on, uh, I want to welcome, welcome France, welcome North Carolina. Hi Leif, great to see you here as well. Okay. Amazing. Malaysia. Fantastic. Germany. So great to have you here. So today's presentation is going to be a little bit more text heavy, more maybe reference heavy. So, um, I'm going to be using my fantastic tool here, this red pen that I've been practicing since we started those webinars. Um, but. I wanted to let you know that if you would be interested in getting the slides from this presentation to go and read the, um, references, read everything on your own, look at it on your own time, then let me know in the chat, just write, uh, slides. When you write slides. In, with, on whichever platform you are, uh, because we're streaming on YouTube, LinkedIn, Facebook, uh, X, Instagram. Uh, let me know if you're, no, I don't think I have, I can see the comments from Instagram other than on my phone. So if you want the slides, and I'm going to repeat it a little bit later when everybody joins, then just write slides in the comments and I'm going to be, uh, sending you the slides later. So let's, let's start with what ethics is. There's going to be some definition reading, but I thought it was important to have like the official definition of what that is because I know everybody like knows that they want to be a good person. Uh, But there is a lot of discussion going on and my first, my first encounter was probably in vet school when probably it was a topic, it was a subject or part of philosophy or something. But recently I've given a presentation and at a lab conference for laboratory professionals. There was a speaker, she was speaking before me, just, she had a presentation just before my presentation about AI and pathology, and she was talking about ethics, and I probably normally would not have, um, joined her presentation, I mean, as important as this topic is, um, at that point it was more to me, okay, I think I'm a good person, I should be good, um, but I joined her, uh, presentation because she was just before me, so I wanted to be on time, and the, And she was a specialist. She was, I think, also a genetic counselor, and she was specializing in these topics of ethics, um, for biomedical field. And then, and she was giving examples and asking us questions, and I will have one example for you as well, uh, and we'll be waiting for you to let me know in the chat the answers. She was giving us, like, case studies or examples and asking, Hey, what would you do in this situation? Only when you are confronted with these like unsolvable dilemmas, um, do you realize, oh, let's put it differently. Did I realize how nuanced this field is. And it's not just as simple as what I think is right. 
Um, we're gonna get to it. So, I see people, okay, uh, you guys want slides. Yeah, please keep, uh, as you join, let me know if you want the slides to this presentation. I know I didn't show any today. images yet, but let's start with the definition ethics. Ethics is a study of moral right and wrong, good and bad. So basically like our normal understanding is like somebody is ethical, they're good, they're considerate, they're like good people, good professionals, the good science, right? Um, but we have here moral right and wrong. So morality is another, like, aspect of this that, um, maybe is not gonna be the same in all the cultures, um, right? Then, uh, what's the purpose? It guides behavior by defining moral principles. So it actually is defining those moral principles. And, and we have different branches. There is meta ethics, um, origin and meaning of ethical concepts, we have normative ethics, and something that I think is most, um, close to us and, uh, we're, we're going to be asking ourselves questions is this, uh, branch of applied ethics. So analysis of specific moral issues, um, and Ethics influences personal, professional, and societal decisions. And I think last time I showed you something cool with this tool, uh, this presentation tool. Do you see this, this little number one here? When you hover over it, so when they give you the slides, you'll be able to see them online. And when you hover over it, you'll see the references. Some of the references are going to be just at the bottom of the presentation, and some are going to be, uh, like we are, uh, used to from publications. So I thought that was a super cool, uh, feature of this tool. So if you ever want me to give you a tutorial for this tool, just write tutorial and, uh, I'm gonna make a good video or something. So, um, this is what ethics is. This is what Wikipedia tells us about, uh, about ethics. Um, there's a lot to think and there are diagrams that I'm going to be showing you as well. Um, and we as, um, Biomedical professionals, we have our own, uh, version of ethics. It's the bioethics, right? And bioethics is study of ethical issues in biology, medicine, and life sciences. So I think the majority of us is from those fields. Um, some maybe from the engineering fields, but, uh, if you're interested in this for pathology, then somehow you're touching. some of these aspects, so bioethics is going to be something that is applicable to you. And Gamal, yes, there's going to be slides. I'm going to give you the slides. Um, focus is balances benefits of research against ethical consideration for humans, animals, and the environment. Um, I want to emphasize this. Balances benefits of research against ethical considerations. So this ethics, often, uh, I mean, all the ethical dilemmas are not going to be, uh, a clear cut. They're going to be, uh, maybe clear cut in a certain situation, but maybe in another country it's not going to be, uh, a clear cut. It's a balancing act, uh, which doesn't make it any easier. Right. And, um, the scope of bioethics, it includes medical ethics, environmental ethics, public health decisions, um, a lot of stuff that is heavily discussed that are topics that cause polarization, I would say, because they're very impactful. So, obviously, there is going to be a lot of societal discussion there. And the goal is moral, moral decision making in healthcare research. and policy and bioethics ensures responsible scientific and medical practices respecting life and well being. 
Let me check what's my next slide. Yeah, I want to tell you something about the importance of this responsible scientific and medical practices respecting life and well being. You, you have heard about this scandal No, I'm going to tell you about the near the next slide. Sorry for the teaser. Sorry for the teaser. Let's move on. So, um, the the importance of ethics in biomedical research, uh, participant safety, of course, right? If we have, um, both human and animal, uh, subjects, they need to be safe. Research integrity. Oh my goodness, this, I cannot emphasize this, uh, strong enough. Public trust, um, because if the society does not trust that the research is done in an ethical way, uh, Um, then there's going to be pushback. There's not going to be funding and we're going to be not move forward. Informed consent, super important. Guarantee subjects understanding and voluntary participation. Equitable treatment. So fair selection and treatment of research subjects. A topic for, you know, a totally full presentation or a whole, um, workshop. Ethical dilemmas. Resolution and societal benefits, uh, drives discoveries to enhance health and quality of life. Um, and, oh, so before, let's start with this example. So, um, this is going to be something I want to ask you in the chat, uh, what you would choose here. So, Let me read you this situation. There is a situation. Hospital has a limited number of ventilators during a severe influenza outbreak. Two patients require immediate access. So two patients require immediate access to a ventilator to survive. But only one ventilator is available. First patient is elderly individual with COVID 19. multiple underlying health conditions. And the second one is a young, otherwise healthy adult. Both have an equal chance of survival with ventilator, ventilator support, but the hospital must decide to whom the ventilator will be allocated. So, Let me know in the chat, who would you give this ventilator to? And I'm not gonna be showing these, uh, these, uh, questions here on screen. Um, but, so my, uh, my, like, first thought, and this is, um, kind of dictated by my background, by where I come from. I come from a, give me, if you're, if you're, Uh up for it. Let me know in the chat Would you give it to the young person and would you or would you give it to the old person? So i'm gonna tell you what like my first impulse is that is dictated by my background, by where I come from, by how, um, so it cannot be given to both. Um, I see answers. You would give it to both. No, we have just one ventilator and one person needs to get it. There is no option to give it to both of them and only one is going to survive. So, um, I come from Poland, right? Now I live in the U. S. So the, uh, way, Healthcare is managed in those different countries is different. In Poland, you have, um, like, kind of a hybrid system where most I don't know if most, and that's a generalization. But, uh, when I was growing up, and in the previous communist system, everything was funded by the state. Now, part is funded by the state, and there is, like, a, um, a separate way of accessing healthcare privately. So, what is the, um, how is care administered when, um, the healthcare system is funded by the state? Well, it goes more by statistics. So statistically, or like, let's call it common sense, I cannot calculate statistics here, but, um, the old person, um, who has underlying health conditions. In theory, if, you know, nothing else happens, they're going to live shorter. 
So, like, the benefit of this care is going to be better for this younger person. And that was my first, like, impulse. But then I'm thinking, how do you know that, that this other person is not gonna have a car accident? Or, like, I start thinking, and I'm like, I am not in a position to say who should get it. And, yeah, and I see some answers in the chat, uh, no matter. No matter the decision, you must be transparent, uh, and it should be explainable, and it should be based on the same. And another answer is, I would probably give it to the younger person, because, uh, he has a better chance of living longer, right? Which was my first impression, first impulse. Let's call it impulse. Let's move on, because there is a little bit more to this. And Uh, let me tell you what kind of dilemma they have, like, those, uh, people who deal with ethics, uh, on a professional, um, level and, like, really do this for a living and answer these questions for a living. Um, this, uh, particular example is resource allocation in lifesaving treatments. Yeah, we were trying to, uh, allocate resources. Um, and, uh, ethical dilemma. The first one is justice. This principle demands fair distribution of scarce resources, ensuring that decisions are made impartially and equitably. I had this coming nowhere with the decision. It must be transparent and based in some explainable argument, right? So justice, this, this is like an official term in ethical dilemmas, justice, the first one. And the second one is beneficence versus non maleficence. Honestly, I didn't know, uh, This word existed before I started doing this ethical research. So, beneficence versus non maleficence. The healthcare team must act in the best interest of both patients beneficence, while also doing no harm. Non maleficence. However, choosing one patient over the other inevitably leads to harm. to harm by omission for the patient not selected. So, like, to me, this is, how, how are people doing this? I mean, there are guidelines, and that's what we're gonna, um, get to it, to this. But yeah, and, uh, I see comments, and we have people who are older. There are seven, uh, I have a person here in the comments saying they're 72. I'm gonna be 40 this year. You know, family members who just accidentally died, and it's, like, How can you say that this young person who gets this treatment is not going to go out to the street and get hit by a car? Like, you cannot say that, right? What's the probability? Yeah, you can calculate. And that's kind of what the state funded healthcare systems based their decision on. Some of them. But anyway. Let's move on and talk, um, about other considerations, right, about the considerations in, uh, in our example here. So this example of resource allocation in life saving treatment. Consideration. Should the decision prioritize the younger patient, which was my thought, based on potential years of life saved? And here, keyword potential, right? Uh, aligning with A utilitarian approach to the seek to maximize overall benefits. Should the decision be made through a lottery system? Treating all lives as equally valuable and leaving the choice to chance. So we had a comment like that, right? Um, thus upholding the principle of justice without bias. How do the principles of respect for autonomy, And informed consent play into the decision making process, considering both patients or their families preferences and values, right? So there are, like, more things. 
I have this beautiful picture, um, of, created by AI, with a prompt, pathologists thinking, asking themselves a question. And this, uh, it's, again, in this, um, presentation program that I'm using. If you're interested, you'd be interested in how I work with it, let me know in the comments. Um, write tutorial and I'm going to make a tutorial on YouTube. Um, but only if you want it, if you don't want it, then I'm just going to keep making these presentations. But obviously we see, uh, we gave the prompt pathologist thinking and we have a, uh, not too old, like middle aged, he has gray hair, white male. And who actually is wearing a stethoscope, like why would the AI needs to step up the game a little bit. But basically we have a white male, right? Um, and this is like the most common example of AI biased. Hey, you, you give a prompt and most probably, uh, Uh, it's gonna be a white male if it's a profession that is like a secretary is going to be a woman and a white A doctor is going to be a white male or or things like that, right? Uh, and I wanted to check okay, how? Biased is it so like given the the? information they gave AI, I said, pathologist thinking. Based just on like probabilities, how biased is this particular image, right? Meaning, like, what's the chance that it's going to be a male versus a female? So I actually checked it just before the presentation. I checked the number of active physicians in the US in 2021. This is from Statistica. Dot com and they have amazing statistics, but I think you have to pay to get access to all of them And by specialty and gender so we are not even like talking about race or any other minority aspect or any other characterizing aspect, but we're just talking anatomic clinical pathology And we have a female this shorter bar and male is the longer bar So we have four thousand. I think we have more pathologists than that in the u. s You But anyway, they, they checked and, um, almost, not, not even 5, 000 would be female and 7, and almost 7, 500 would be male. So percentage wise, I would very quickly have to calculate, but definitely over 50 percent is male. Um, 60, 50 percent is, uh, let's say 3. 54 would be 50%. Yeah, let's say 60 percent is male. So, given, like, the chance, uh, this image of a white male being in the previous slide is, like, less biased than you would think, based on the prevalence of male pathologists in society. So, just wanted to insert that because bias is going to be something we're going to be talking about. So, yeah, we have to consider a lot. So, um, who, Okay, and um, talking a little bit more about this research allocation in life saving treatments dilemma, it highlights the complexity involved in treatment and diagnostic decisions. Remember the keywords here? Treatment and diagnosis decisions. We're gonna get back to that in a, in a little bit. When healthcare resources are limited, requiring a balance between ethical principles. Justice, beneficence, non maleficence and respect for persons. So a lot of different principles that we have to consider that honestly I didn't even know were like officially defined. So, uh, here, admitting my own ignorance before I started researching the top, what are the frameworks that we are. Um, moving within like who says what is ethical, who determines what is ethical. Um, we have ethical guidelines and documents such as, uh, declarations of Helsinki and Belmont report. They said the broad principles like what should guide us. 
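For anyone who wants to redo the quick percentage estimate mentioned above, here is a minimal back-of-the-envelope version. The head counts are rounded approximations of the Statista 2021 figures the speaker quotes ("almost 5,000 female, almost 7,500 male" in anatomic/clinical pathology), not exact values.

```python
# Rough check of the male/female split among US anatomic/clinical pathologists
# described above. Counts are approximations of the Statista 2021 figures the
# speaker quotes, not exact values.
female = 4_900
male = 7_450

total = female + male
male_share = male / total

print(f"Total pathologists: {total}")
print(f"Male share: {male_share:.1%}")  # ~60%, matching the speaker's estimate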
Um, then we have regulatory agencies, of course, FDA in the US, EMA in Europe. Uh, and they ensure that. research meets ethical and safety standards. So, I was reading up to this. So, if you ever, let me know if you ever heard a presentation, um, put FDA in the comments if you have heard a presentation or like read on the, uh, like history of FDA. Basically, the history of FDA, uh, or, or like why FDA has the, like, rules that they have right now is kind of like how law is created. It's A history of preventing future debacles that already happened and then they created, um, a law, created regulations to prevent, prevent that from happening. And so you probably have heard about the, um, situation with thalidomide. Thalidomide was the, um, medication given to pregnant women for anxiety and it was supposed to be sedative that later caused, um, short limbs in, uh, in the newborns. And it was used in Germany and in Europe. And actually it was never Officially approved, or like, uh, yeah, approved for distribution in the U. S. And, uh, because the FDA official, Frances, I forgot her last name, which I will Google right now, because this is important. This is very, very important. Who that was? That was Frances, uh, Frances Oldham. She said, no, there's not enough. Proof for this thing to be used in the US and it wasn't at least it wasn't used officially. I don't know. I think there were like in total like 300 people who are actually suffering from from the disorder from those short limbs and because it was. somehow made it to the US unofficially and, and it was being given to pregnant women. So anyway, this was like the most famous debacle that happened. And there is like, when you listen to, to the history of the FDA, it's basically like a series of these things that some by omission or some by like us not knowing things, uh, happened. And then there was regulation. So. Now we have regulatory bodies that, rightly so, um, take care or, like, regulate these things, right? We have our institutional review boards, which are committees that review research proposals for ethical compliance. We have professional ethical committees, like American Medical Association, that, uh, provide, uh, ethical guidelines. We have legislation, uh, for example, the Common Rule and HIPAA and GDPR data protection in the, uh, in Europe. And, and obviously, society values current cultural norms and public opinion also influences ethical standards and practices. Here we need to remember that the cultural norms and public opinion is not the same everywhere, but it does influence. So, um, The ethical oversight involves a comprehensive framework of guidelines, regulations, and societal values. It's very good. How can we even survive being ethical? Should we even, like, touch it or should we not touch it? Especially when AI is entering the space and this AI sometimes goes crazy and hallucinates and, like, what can we do? How can we, uh, can we leverage these tools or not? Shall we or should we just stay clear of this? Let's, let's think about it. Okay, let's think about it. This slide, you might know from our previous webinar already. But, um, I decided to divide, um, the applications of AI in pathology, by what is it being used for. So, we have workflow optimization, research, and diagnostics. And I divided it in a way that, um, kind of when you think about it in the context of ethics, okay, how bad can AI be for workflow optimization? Like how, how, how big would the risk be? How big would the consequences be? 
I would say here it would be less and here it would be something like in between, right? That's how I divided it. And we're going to go through the, um, applications and, uh, see, okay, how, like, If this would go wrong and would not work and cause harm and how much harm would be caused here versus here so for workflow optimization. So AI for workflow optimization. So an example here, automated slide QC or speech to text for report writing slide QC. Hmm, what, like, if you, we QC slides for, uh, automatically with AI, and let's say there is a workflow designed that, okay, first flag, the slide goes automatically back to the scanner, it's being scanned. Second flag, a person needs to look at it. Like, how bad could that go? In the worst case, like if it's so, so bad, you're going to be scanning those slides twice. And after a day of scanning, you should be able to recognize that something went wrong. So I would say this is not such a high risk. I mean, yeah, you lost a day of scanning and maybe storage and things like that, but it didn't harm anybody. Um, it might annoy people that sure would annoy me, um, but I wouldn't be harmed. Speech text for report writing, uh, if we use it and actually like proofread it, then I don't think that would cause too much harm, right? If we don't proofread it, well, uh, then there might be some issues. misdictations. Uh, so obviously, uh, there need to be some kind of, uh, quality control, especially if it's a person signing this report. So we are responsible for what we sign. So we better read it, right? Uh, what about AI for research? Something that I am pretty fascinated with recently is so called retrieval augmented generation. Why am I so fascinated with it? Because it's like a super powered personal researcher that can go and do research on data and just deliver you the results and insights without going too crazy about it. Because, um, we are limiting The generative power of an AI tool and we're talking here about generative AI and about natural language processing so Although now it's a combination of different AIs, but basically AI tool can go Do survey data for you. You put all the publications information databases in there and it goes and answers Your questions based on only what you've given this model. So it has, it, it, it has, like, the ability retrieve, retrieve relevant things, but it's not gonna generate crazy stuff because it has to reference, um, it took the information from. An example, actually, here, for me, uh, let's, let's call it for research, um, In this, uh, in this presentation, in this tool, when I type something, and, uh, there is an option to find a reference for this. So, for example, if I would, uh, just start typing, like, what I think ethics is, and I'm like, ethics is a branch of science. Science or maybe branch of philosophy that deals with good and bad and moral decisions. It's kind of correct, right? but uh, if I wanted to like have an official reference, which is what I wanted and This this particular tool is gonna ask me if I want to reference and then I say yes, and it shows me Three or how many references but it's still my job to check it This information is actually in those references, so yeah, um, that's, that's how this presentation tool works, right? Then, um, we have research applications of image analysis algorithms, biomarker quantification, automatic lesion detection. 
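As a side note to the retrieval-augmented generation (RAG) idea described above, here is a minimal, self-contained sketch of the retrieval step: the system first pulls the passages most relevant to a question from the documents you supplied, and the language model is then asked to answer only from those passages. The word-overlap scoring and the example documents are toy stand-ins for illustration; a real system would use embeddings and a vector database.

```python
# Toy sketch of the retrieval step in retrieval-augmented generation (RAG):
# the generator is only allowed to answer from the documents you give it.
# Word-overlap scoring stands in for a real embedding + vector search setup.

def retrieve(question, documents, top_k=2):
    """Return the top_k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

documents = [
    "The Belmont Report defines respect for persons, beneficence and justice.",
    "Ki-67 is quantified as the percentage of positively stained tumor nuclei.",
    "The FDA regulates AI-based software as a medical device.",
]

question = "Which principles does the Belmont Report define?"
context = retrieve(question, documents)

# The retrieved context (not the model's open-ended knowledge) is what the
# generator is asked to answer from, which is why its output stays checkable.
print(context)
```

The point the speaker makes still applies: it remains the user's job to open the retrieved sources and confirm the answer is actually in them.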
So, here I would wanna, when it's applied to something, like an image, that is especially a biomedical image, a medical image, pathology image, Um, like the one here before, uh, below, it introduces another level of difficulty for the AI, for the algorithm, because this is not just a dog and cat and you're not just classifying something, uh, where you have a lot of data to train on. Our medical data sets are so much smaller than all the natural image and natural images are just images, uh, of things outside, like Dogs, cats, whatever, um, and these are medical images and we're gonna, I have a cool comment. I'm gonna pull it, Joe I'm gonna pull your comment in a second. Let me just finish this. So, uh, here, um, Image is like less verifiable for us and there's less data to train. So, um, and I would think, okay Quantification is easier than like automatic lesion detection because you have to train for this lesion, but then I'm When you quantify something, let's say a CHI 67 counting a dark nuclei in a stain, you will not be able to visually assess if all the nuclei in this particular slide were calculated correctly. And if your threshold is, for example, between 10 percent, the threshold is 10 percent, everything below gets one treatment or doesn't get the treatment, and everything above gets another treatment, you probably visually are not able to verify it. And if it collect corrects the correctly, um, quantified stuff in the slide. So here. We need to be cautious. And then AI for Diagnostics is like the highest bar. What we can do is we can automatically prioritize cases, which I thought maybe this is workable, but then I thought if it's prioritizing cases based on It's like a pre, um, pre trained model for, uh, diagnosing something, then it's going to be diagnosed. Computer aided diagnostics and predictions of molecular tissue properties from image, which are all, to me, like, super high bar if stuff goes wrong and we don't have a method to verify and check that it's, uh, okay. then, um, we're in trouble. And not only we are, but, um, the patients that, um, that were in, in treatment or were being seen are in trouble as well, which is worse. Speech text for report writing and image analysis is causing user bias to rely too much on AI for the work. And this is very much a concern for all the tools that, like, Suggests stuff. Um, it's totally a valid concern. We're gonna be talking about it. Mm hmm. And the FDA, uh, is uh, considering allowing a person who is not a doctor to approve negative cases that have been looked at by AI. These people will have some training which is yet to be defined. Yeah, so this is kind of a mimicking of the, for example, of a screening test scenario. And mammography is a screening test in radiology. So there are, there are technicians involved in pre screening the test and then only a medical doctor, um, does other part of the workflow. The same counts for, for cytology, for cervical smears. There is a certain percentages and it's defined in the, in the clinical lab workflow, how to do it. And now, uh, there are algorithms that are good enough on par with the non medical. Uh, medical doctor person. So, um, there is, uh, there, there are discussions how can this be implemented. What do we do? Do we use it? Do we not use it? I want to use it because it's leverage, but how can I use it ethically, right? I guess many of you are asking yourself the same question. So what could ethically go wrong? There is a paper. Let me accept my own cookies here. 
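Returning to the Ki-67 example above, a minimal sketch of the threshold logic the speaker describes may help make the verification problem concrete. The counts and the 10% cut-off are illustrative values taken from the example, not clinical guidance.

```python
# Illustrative sketch of the Ki-67 threshold logic described above.
# The counts and the 10% cut-off are example values, not clinical guidance;
# the point is that a small counting error near the threshold can flip the
# result, and a human cannot visually verify thousands of nuclei per slide.

positive_nuclei = 412      # dark (Ki-67 positive) nuclei counted by the algorithm
total_nuclei = 4_387       # all tumor nuclei counted by the algorithm
threshold = 0.10           # 10% cut-off from the example

ki67_index = positive_nuclei / total_nuclei
decision = "above threshold" if ki67_index >= threshold else "below threshold"

print(f"Ki-67 index: {ki67_index:.1%} -> {decision}")
# 9.4% here: a miscount of a few dozen nuclei could push the case over 10%.
```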
There are many papers, but I chose one, one of these papers for you because I had the co author, Dr. Richard Levinson, as a guest on my podcast and, uh, and I actually invited him to the podcast because of this paper and, but he's not even the first, the first author is, let me see here, Nakagawa. Keisuke Nakagawa is the first author. Why do I like this paper? Because it has a nice table and it basically tells you what could possibly go wrong with AI. They're very relevant. Um, so in this paper, if you get a chance to read it, you're going to have a super cool table. And basically it says, what can go wrong with AI? And I chose the things that are, um, connected to AI. ethics. So what do we have? What are the problems? Lack of data diversity. Lack of data diversity in a data set used to train AI models. Uh, the data sets are not diverse. The model may not be able to accurately identify pathology in a wide range of patients. Very valid concern, right? If we blame that a model is generalizable, but it was only developed on a certain type of population, it's not generalizable, right? Because it was not developed for this part of population. Then we also have bias in the data. We have biases in, uh, Let's, let, let me talk about me. I have biases in my head, um, based on my experience, based on my background, based on my life, uh, and I guess it's not that different for, for other people, people, uh, And obviously, AI models can perpetuate bias present in the training data, leading to inaccurate or unfair diagnosis for certain groups of patients. An example of bias that is like, not related to this, let's say I would train a chat GPT to speak like me, and I would train it on my podcast transcripts, and then I would ask it to write a letter to somebody, like, very, meh, maybe not special. That's not the greatest example, but let's say a formal letter and if it would only be trained on the way I speak it wouldn't have the data, the, the, the structures necessary to, um, write the formal letter, right? So, but it was biased in the data that they've given this algorithm to train. I only gave transcripts and I talk like I talk. This is not how I write an official research proposal, right? So. And I think lack of understanding. AI models can be difficult to interpret. This is super important as well. And I think we touched, um, on this last time when we were talking about applications and when we were talking about especially mutation predictions. And some people have said this is the worst case. not ready for prime time. I'm in the camp that says, yes, I would like to leverage it, but show me that it actually, um, predicts what it's supposed to predict. Yeah. Because they're difficult to interpret, they're making it difficult for pathologists to understand how a diagnosis was made. and therefore difficult to trust the AI diagnosis. If it's something we can visually confirm, um, that's less of a problem, because being a trained pathologist, you recognize tissue patterns, and you can confirm, yep, this is okay. Like, there, the, the mutation predictions, like, there are entities that have nothing to do Um, Nothing to do with a visual appearance of this tissue, right? That's just molecular property. Maybe there is something visual, but it's not, um, you cannot distinguish, uh, as a human. Then, obviously, security. AI models and data must be protected from unauthorized access, especially when dealing with security. sensitive patient information. Regulation. 
AI models will have to comply with data protection and health care regulation. And accreditation. AI models will have to be validated and accredited by regulatory, irrelevant regulatory bodies before they can be used in the clinical settings. And this has to do with. There was AI tools being classified as medical devices. There was a, um, a commentary in Nature, I'm gonna show it to you, that came out two days ago, and I had to read it before I gave this presentation, because it would, I saw it, so I couldn't omit this information. So when you go to this paper, You will also find a mitigation strategy. So there's this table one, which is like my favorite table out of Publication. I don't think i've have cited anything more Uh than this particular table because it shows you the challenges And and they're like it's divided then it shows it shows you the impact and then it gives you mitigation Strategies, which is fantastic because it assumes okay. It's going to be used. How can you mitigate? That this impact that is harmful is not going to happen. So this table is my favorite table. So for example, for a limited demographic diversity, the problem is going to be AI may result in AI models which do not generalize across populations, right? So what do we do? Data collection and sharing, sorry, sharing efforts across institutions around the world. We can do that, right? Interpretability, limited interpretability. What can we do? Um, use different tools. These are some computational introspection tools that tell you what these, um, like, how those weights and biases and how this model functions. Um, I would say this is more for computer scientists. Okay, we have tools and, um, there is this deep learning interpretability is a rapidly evolving research area and there are newly created tools. Another trend that I'm seeing is that, um, you have this, um, this digital AI tool. that is predicting something, and then you use a fast, cheap tool to confirm it. Like, let's say a mutation is predicted, and then you use a rapid PCR to confirm it, rather than, like, go and sequence the whole thing. So, here, you save time, you save money. Computing resources. Many hospitals lack the basic computing resources needed to deploy AI models. And, uh, what do we need to do? So, what is the impact? Well, there's going to be fantastic tools that they will not have access to. And is that fair? Is that ethical? It's probably not. So what can we do? And, you know, this is a publication from last year, you know, but they're like, there's new data, there, there, there is new strategies. But I think the way this is structured is just so informative and helpful. What can we do? I need to, I need to tell you a story. So I was talking to, uh, because, uh, for, Like, I always get so super enthusiastic about something, and then, um, I inform myself a little more, and then my enthusiasm is a little bit, um, let's say it's gated. Or it's more So, um, I was reading about, uh, AI that could be used on mobile devices and, um, you know, a phone that you can put, uh, to your microscope and, uh, take a picture and deploy a model, uh, and it would be cloud based. And I was talking to, um, to a person who actually had experience with doing something like that. And trying to deploy something like that in I don't know which african country but in africa on iphones and he said that he went Uh there then visited again and all those iphones that people donated they were uh, just like in this box And he was like what is it not working? 
What's wrong? And they said well, we don't have internet access It would be fantastic tool if you could actually do it locally on the phone But because you rely on cloud computing we don't really have internet So, those iPhones have been there for a year. I'm like, I wouldn't even have thought of that. Which I should, because when I go to Poland, in my village, I have like, terrible internet. Uh, but things like that, like, anyway, just, we need to think about those things. One time I visited, um, a hospital in Ethiopia, and they had this fantastic molecular PCR cyclers, molecular lab equipment that was donated by, um, some foundation or I don't know, some European union funds. And they had a person who was supposed to be the lab director. He was, um, trained in Germany, had a PhD from Germany. And we go and visit this lab and all this equipment is covered with those plastic dust covers. And we're like, what's going on? Why is it not working? Is it broken? No, no, perfectly fine, but we don't have the funds to buy the reagents. Nobody thought about it. Anyway, um, there's one more paper. This one, uh, is older. There, there are two more, more papers that, um, I want to tell you about. Um, this one focuses like on, on all the dependencies. This one, uh, ethics in, of AI and pathology, current paradigms and emerging issues. It focuses like on all those dependencies. Uh, and also all the. stakeholders, right? Patients, pathologists, institutions, and AI researchers. It's a cool chart that I'm gonna narrow down to this part of the chart. So, um, transparency, governance, and accountability. Transparency of how the tools are, uh, developed, governments with, like, regulations around the use of these tools, and accountability. So, accountability, it's like, who's accountable for this, right? Um, and like, uh, super crude example for me was an accountability day describing this paper like okay accountability when something goes right and accountability slash responsibility slash liability when something goes wrong so um for driving in the car right uh when i drive a car i hit somebody that's my fault right even though i was using At all the car and I should be accountable. I should be responsible for this. But what if the car was something happened, like a part that we got all those, um, uh, in different devices, a recall of parts and like all the owners have to say that they have the car produced in this and this year, because that part is the fact, uh, because of this part or automatic system or AI based system in this car, because of that, something happened. I. did, like, I harmed somebody, right? Who's accountable? Well, um, me, probably, as well, still, I was still driving the car, but then there's going to be the, um, manufacturing, uh, manufacturer of the, of the car, and maybe more parties involved in that, right? So, in that paper, before we go to AI as medical device, um, they discuss something that they think is under discussed in those ethical discussions, uh, is like, okay, who all need, uh, who needs to work together? And, um, That's what they show here, right? Patients, pathologists, institutions, AI researchers, and they have also, um, legal involved in this, so it's more complex than we would think. And AI as a medical device, when we look at the devices that were cleared or approved by the FDA, we have 171 devices. Uh, do you know, can you guess how many, uh, are pathology? These are AI based and machine learning. 
Let me know in the chat how many do you think are in the pathology space. And here, an important fact, 155 of them, uh, this was between August 2022 and July 2023. So, like, most of them, We're just recent. Uh, you know how many are, uh, in pathology? Any guesses how many? I'll let you know. Exactly three! We have three here. And these are molecular, so I kind of like Christina says two. You were super close, Christina! Uh, you're the closest. And, uh, just at under 30, that's, uh, that's also very, uh, pretty good. Christina, you're closer. Anyway, we have one, uh, that is actually image based and the other one, uh, are some molecular pathology, uh, diagnostics. So the, the, the one that is, uh, most famous in our, like, image analysis AI, uh, computer aided diagnostic space is the one developed by PAGE that, uh, got approved, uh, by the FDA in September 2021. So just one, right? And, and, but there is a little bit of like, why will there ever be more? Did I put this paper here? Yes, I do have the paper. Um, will there be more? We're going to touch on this, but my take home message is, uh, so I'm in the camp. I want to use it. Um, but I want to use it responsibly. I want to use it ethically. Can I always decide whether it's ethical use? Probably not. That's why we have such, um, the regulatory bodies, um, that are dealing with this. And this is a poem from the paper, uh, by Nakagawa et al. And Chad GPD. And it describes beautifully, uh, The machines, they may seem infallible, but their errors, they can be terrible. M is cancer, a false positive too. The consequence is dire and askew, but yet we still march on with this quest for faster results with little rest. But let us not forget the human touch, for in the end it, it is what matters much. So let us tread with caution and care as we delve deeper into the AI affair, for the stakes are high and the risks are low. In the field of pathology, let us not seal our fate with machines, but with our own eyes, and judgment, and expertise that never dies. A poem written by Chad GPD. But, what I wanted to show you that came out just a couple of days ago, 16th of April, by a student. Stefan Gilbert and Jakob Nikolaus Katter, it's a comment, and Nature Reviews Cancer, and, um, they say that as long as, uh, the, the AI use is, and, and they talk about, um, generalist AI, so generalist AI, um, is the one that, like, cannot, can be used for more than just one specific test. So, like, large language models would be, like, it's not gonna just predict one, like, give you one type of answer, uh, it can generate responses based on data, right? So, they say as long as, as, Um, the regulatory bodies see these tools as medical devices. There is not going to be specific enough use case to put it in this medical device regulatory framework. So there is not going to be too many approvals. Um, will that mean that they're not going to be used? Well, no, they will be used. Because, uh, like, um, thalidomide, right, it was not approved. And it was still used and caused harm. How can we, how can it be used? Um, and that's basically what they discuss in this commentary. It's just two pages. Um, you can, you can read it and it's fantastic. Uh, like a good point of view on, on. How to look at it because nobody's going to prevent me Uh from using this thing at home, even if I like don't put the data don't paste anything From work. 
I still have my head and can ask questions and the same is going to be People are gonna come to the doctor and say hey, I had these results So now you can get all those results, uh in the portal, right? I remember my history This is what I think I have. How about we try this and that and that basically there's going to be patients You Who are enabled with AI coming to the doctors who are not enabled because they're not allowed to use it, uh, at work, and they are, uh, in a disadvantaged position. The guardrails need to figure out how to do this. Good answer. And now, I will let you ask questions. And I'm gonna pull, uh, Um, so what I'm supposed to do now and if anybody is uh from this audience is in the digital pathology, um Club my members we have a private q a session after this, but I would like to take at least five minutes to To pull the start comments Um, I will be there in a second. So bear with me. So, um, Joe's comment, the AI vendors are only responsible per their contract to how much the hospital has paid for the product. This is standard for any platform. So what if, like, there is a recall on the tool? Like what would be the scenario here, Joe? Feel free to Let me know in the comments because I don't know if we have like enough use cases to, maybe there are, maybe you know more, but my question is like, okay, like how, how can they be like co responsible if something goes wrong where the, uh, people who don't, uh, who are in the hospital and the doctors who are using it had no insight into something like, like those car parts. Like we don't know that something went wrong on the assembly line and then they call us and give us. the cars back from 2023. So, and many of these comments or questions are probably not going to be not going to be or will not have a clear answer. A very important perspective, and thank you so much Joe for all your comments, is um, loss of cases and revenue. So revenue, especially in the US, uh, with like the, the way the reimbursement for medical procedures is structured is something that is driving how medicine is practiced. Should it? Probably not. And there's like a lot of guardrails around that, but there are also loopholes. So anytime a new tool is coming out that would like change this, this is. Uh, consideration, very much. Are there any that are not classified as medical devices, uh, but still use any AI tools? Oh yes, so many. So, well, uh, a lot. Like all those, uh, all the tools that we just have on the phone, They're in use, and it's only dependent on the user where you're going to be applying this to, but also there are algorithms or AI tools that were never approved or cleared or shown data to a regulatory body, but then are designed as medical devices. That's like another topic, but yeah, basically like who's preventing me. to ask, uh, chat GPT about something that happened, um, at work and how to solve this situation. Only I'm preventing myself because I might think this is not appropriate, um, but maybe somebody else, or in one scenario it's not appropriate, in another one it might be totally, like, harmless, right? So I'm the only gatekeeper between me using the AI, generalist AI. Or not. So, yes, there are tools. And that's what also, um, they described in this commentary. That there's gonna be, like, if the medical professionals are very, like, strictly banned from using things that are, um, that are not approved because they cannot fit into the medical device mode, so they will actually never get approved because of that. 
It's not really like a medical device, but there are also they say okay if it's a tool that they give a couple of examples if it's a tool that shows like a Differential diagnosis then it's okay Then if it's a tool that just like shows you one thing Because then you still need to use your brain and you're less biased if you have several options, then if you just have one option, uh, and you have to say yes or no. Then, um, also they say that, uh, the highest risk are things based on medical images rather than text, uh, and tabular data. Um, Checking if, if we have any more questions. I don't see any questions. Um, I will, when you watch the replay of this, feel free to leave your questions in the comments. Of course, I will monitor them and answer them. And of course, if anybody does not have the book yet, this particular webinar was not based on my book, Digital Pathology 101. But next week, or the weeks after, we're going back to going through the chapters of the book. So if you don't have the book, this is for free. The PDF version is for free. You can just download it. Scan the code, get the book, and then you're gonna be getting, uh, all the alerts that there's a webinar, what topic the webinar is on, and thank you so much for joining me today. I appreciate you so much. I don't take your attention lightly, and I talk to you next time. Have a great day.

Thank you so much for staying till the end. If you know somebody who needs to listen to this, please share it. And if you happen to listen to it or see it on social media, if you can give it a like or a comment, that would mean a lot to me. It would not only mean a lot to me because then I'm smiling when you like me, but it also helps get it to more people, especially the people creating new tools who are thinking of applications of AI tools, and they need to take care of doing this ethically. You are amazing. Thank you for tuning in, and I'll talk to you in the next episode.