The Vetrospective

Artificial Intelligence

The Vetrospective Season 1 Episode 7


Dr. Michael Kent speaks with Dr. Stefan Keller about Artificial Intelligence in the practice of Veterinary Medicine. 

SPEAKER_00

I'm very excited about the opportunities, but also a bit concerned about us getting complacent and not checking things as thoroughly as we should, which is dangerous. Yeah, absolutely.

SPEAKER_01

Hello. This is the Vetrospective Podcast, and I'm Dr. Michael Kent, a professor in radiation oncology at the UC Davis School of Veterinary Medicine, and your host. Artificial intelligence is in the news daily and seems to be integrating into our lives, with most of us only having a vague understanding of what it is and what impact it holds for us. Well, no one really knows what impact it's going to have on all our futures. Is AI going to be no more than a hyped-up productivity tool? Or is it going to cause massive disruptions in healthcare and other industries? Will this be a productive disruption, ending in better outcomes for patients, or will it erode the humanity in our healthcare system? On today's episode, we are talking about AI, how it's being used in veterinary medicine, and what the future might hold for all of us. I have invited Dr. Stefan Keller to join us today to take a look at this. Dr. Keller is an associate professor in anatomic pathology here at UC Davis. He went to veterinary school in Berlin and did his residency work both in Zurich and here at UC Davis, where he also received his PhD. He is a European diplomate in anatomic pathology. And Dr. Keller works to bridge diagnostic pathology, immunology, and the rapidly evolving field of artificial intelligence, which is why I invited him here today. His research group develops and deploys machine learning tools to improve diagnostic precision in veterinary medicine, including ANA, an open source analytics platform that links AI models directly with an electronic medical record system to support real-time interpretation of clinical and pathology data. Dr. Keller has contributed to areas such as clonality testing, immune repertoire sequencing, and an AI-assisted histopathology platform. He also teaches immunology and pathology, mentors graduate students, and collaborates with clinicians and computer scientists to help integrate AI thoughtfully and responsibly into everyday diagnostic workflows. So, Dr. 
Keller, thank you for joining me today on the Vetrospective. Thanks for having me, Michael. Of course. So first, I want to ask you why you became a veterinarian and what made you interested in pathology?

SPEAKER_00

So I guess I started out firsthand thinking I could be a wildlife veterinarian and travel to cool places and see exotic animals. So the combination of being a veterinarian and an adventurer, I think, initially lured me in. But then, as you go through the DVM program, you learn to see some realities, and you explore or find things that you hadn't really paid attention to before. And so I came across pathology, and pathology really intrigued me, because it gives you a chance to look at the cause of disease and the molecular and cellular events that then lead to a clinical manifestation. In other words, it allows you to dig deep and understand disease on a deeper level. As practitioners, we are oftentimes constrained by the financial resources of owners, for example, or by technical limitations. And so pathology, for those of you who are not familiar with what pathologists do, we, or anatomic pathologists, which is a subspecialty of pathology, look at tissue samples under a microscope. So biopsies. Biopsies or post-mortem exams, what we would call a necropsy. Correct. Absolutely. Yes. So sitting at the microscope, looking at the cellular level of tissues, is just very rewarding and oftentimes offers an insight into why a certain disease developed. Interesting.

SPEAKER_01

And so, in a sense, this deeper diving in and trying to figure out the causes of disease leads us over to your work in artificial intelligence as well. So I've seen artificial intelligence and its concepts broken into many different classification systems, ranging from its capabilities and thinking, to tasks it can carry out, to how it learns. I think what many of us have interacted with are the online AI platforms, using specific applications that can do specific tasks, such as creating an image or writing an email or a letter. Now, can you explain to me the different types of AI that are available to us? Specifically, maybe, what is the difference between, let's say, generative AI versus machine learning? Or maybe I'm missing the categories totally.

SPEAKER_00

No, that's absolutely a confusing terminology and field, because there are so many aspects and angles to look at it from. AI started out in the 1950s, essentially, and just basically means that we have a machine that does things akin to what humans would do. So AI is just a very general umbrella term for any computer system that mimics human intelligence. Initially, that was done by creating rules, meaning that we could say, for example, in the field of veterinary medicine, if we wanted to interpret blood work, we could create a rule that says, if the hematocrit is below a certain value, then we call it anemia. Hematocrit meaning what percentage of red blood cells you have in your circulation. Exactly, right. So it's essentially a mathematical equation that gives you an if-then statement, if you will. And that became more sophisticated with machine learning, which essentially refers to the fact that the machine can create some of those rules, or learn some of those rules, by itself. So we don't need a human to explicitly program an if-then statement. If you give enough data to a machine, especially if you provide a label, and a label in this context might be that this record is from a dog that is anemic, or this record is from a dog that is not anemic, then the machine can figure out the rule by itself. And then you can take, let's say, blood work that the machine has never seen before, and you can classify it with respect to whether or not the dog is anemic, based on a rule that the machine figured out by itself.
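
The progression Dr. Keller describes, from a hand-written if-then rule to a rule the machine learns from labeled records, can be sketched in a few lines. This is a minimal illustration with invented hematocrit values and an assumed cutoff, not any tool actually in use at UC Davis:

```python
# Level 1: an explicit, human-programmed if-then rule.
def is_anemic_rule(hematocrit: float) -> bool:
    # Canine reference ranges vary by lab; 37% is an assumed cutoff here.
    return hematocrit < 37.0

# Level 2: machine learning -- let the machine find the cutoff itself
# from labeled records of (hematocrit, is_anemic).
def learn_cutoff(records):
    # Try each midpoint between adjacent values; keep the threshold
    # that classifies the most labeled examples correctly.
    best_t, best_correct = None, -1
    values = sorted(hct for hct, _ in records)
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]
    for t in candidates:
        correct = sum((hct < t) == label for hct, label in records)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Toy labeled data: (hematocrit %, anemic?)
data = [(25.0, True), (30.0, True), (34.0, True),
        (40.0, False), (45.0, False), (50.0, False)]
learned_t = learn_cutoff(data)

print(is_anemic_rule(30.0))  # True
print(learned_t)             # a cutoff between 34 and 40
```

The point of the sketch is the shift in who writes the rule: in level 1 the human hard-codes 37.0, in level 2 the threshold falls out of the labeled data.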

SPEAKER_01

Which may not only be the hematocrit, it may take other things into account, right?

SPEAKER_00

Absolutely, yes. So the beauty of machine learning, or artificial intelligence, is that you can essentially feed it as much data as you want to. And usually, the more data you feed it, the better the model gets, in general. And then the third level of complexity, which has evolved only more recently, I would say in the last 10 years, is what's called deep learning. Deep learning is a bit of a different, we call it architecture, so the computer system that lies at the base of it. It's often also referred to as neural networks, essentially, and it's more complex in how it works and usually needs a lot more data. But with deep learning, essentially, what we can do is analyze data that is much more complicated than anything we've done before. For example, I'm an anatomic pathologist; I look through a microscope. At least in the olden days; now we're switching over to using digital images instead of the microscope.

SPEAKER_01

So you photograph the slide that you would normally look at, the glass slide, and digitize that.

SPEAKER_00

Absolutely. That's how it works. The key difference, or there are two key differences, with, let's say, radiology, is that pathology, or histopathology, requires you to zoom in quite a bit. You could imagine looking at a world map where you can see the whole world, but then you need to be able to zoom into California and look at, let's say, the Davis city map as well, right? So what we need are these large image files that allow you to zoom in and zoom out, which is a feature that radiographic images, for example, don't have, right?

SPEAKER_01

So there's also the 3D aspect to it too, right? I mean, almost a topographic map, because yeah, where we're sitting right now is sea level and very flat. But let's say we move over to the Sierras, the focus will be different. Do you have to take that into account as well?

SPEAKER_00

Absolutely. So usually a tissue section that we look at is three microns thick. And with the microscope that we use to photograph that, there are slight variations in what we call the Z plane, so the third dimension, the elevation, if you will. An up-down elevation. Up and down, correct, right? So there's some of that focus variability as well, which we usually try to account for at the scanner level. So the image that we receive usually has that third dimension taken out of play. For us, the third dimension is more the zoom level, how much we focus in, whether we're on the world view versus the Davis city map. So, in other words, it's a huge file that we look at, usually gigabytes in size. And so we need a computer system that can actually handle that type of image format. Before deep learning came around, we weren't able to really do that efficiently, or at least not with the kinds of tasks that we can tell it to do nowadays. And so that's been a real game changer. So, looping back to your initial question.
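
The world-map-to-city-map zoom described here is typically handled with an image pyramid: the gigapixel slide is stored at several successively halved resolutions, each cut into small tiles. A rough sketch of the arithmetic, using an assumed slide size and tile size (the numbers are illustrative, not from any specific scanner):

```python
def pyramid_levels(width_px, height_px, tile=256):
    """Return (level, width, height, n_tiles) tuples, halving the
    resolution at each level until the whole image fits in one tile."""
    levels = []
    level, w, h = 0, width_px, height_px
    while w > tile or h > tile:
        n_tiles = -(-w // tile) * -(-h // tile)  # ceil division
        levels.append((level, w, h, n_tiles))
        w, h = max(1, w // 2), max(1, h // 2)
        level += 1
    levels.append((level, w, h, 1))
    return levels

# An assumed 100,000 x 80,000 pixel slide scan (8 gigapixels).
levels = pyramid_levels(100_000, 80_000)
print(len(levels), "zoom levels")
print(levels[0])  # full resolution: over 120,000 tiles at this level
```

A viewer (or an AI model) then fetches only the tiles it needs at the zoom level it needs, which is what makes gigabyte-scale images workable.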

SPEAKER_01

I was about to re-ask you. Let's say, you know, what's the difference when I go online and I put in a photo and I say, make a cartoon out of me?

SPEAKER_00

Yes. So, which I've done, by the way. Correct. So if you look at generative AI, if you look at ChatGPT or things like that, they're usually now at the third level that I explained, so the deep learning type of programs that require huge computational resources, right? And underlying that is what's called a neural network. And gen AI refers to generative, as the name says. So we can use it to generate text, or we can use it to generate images, for example.

SPEAKER_01

So now, that seems almost a waste, me using a neural network that's been developed to do something pretty superficial, but it can be fun. So now we've got these fun aspects, but when people really think about how AI may disrupt industries, in a positive or a negative sense, it's these deeper machine learning things, right?

SPEAKER_00

Yeah. So in veterinary medicine, there are multiple ways we could be and are integrating AI into our diagnostic workflow. For example, here at Davis, we're just now launching a scribe technology. What that does is it essentially listens to the patient-clinician conversation. And do you mean patient, or do you mean their owners? Owner, right? Yeah. Owner and clinician conversations, and it transcribes that and modifies it into a summary that we can now include in our clinical record. So previously we had to take notes, and we had to synthesize that into a paragraph or multiple paragraphs that make sense. Now AI can do that for us. The advantage of that is obviously it could be a huge saving in time. So a productivity tool. A productivity tool, absolutely. And the clinician can focus on the patient; they don't have to write down every word. And from what I've seen, the summaries are pretty good, right? The downsides that we have to consider are multiple. One is that at this point we have no experience with how good the tool is, right? So a lot of it is probably fairly good, but without really checking that, we don't know what the accuracy or the quality is.

SPEAKER_01

And that can have huge implications. If we write down in the history that, you know, the dog was vomiting for three days, and the computer writes that it has been vomiting intermittently for three months, I'll know it was three days during my visit, but the next doctor who picks this up might be misinformed. Absolutely. Yeah. And then there's also a little bit of an ethical issue. I can't just record a phone conversation; that's someone's privacy. So how do we safeguard that?

SPEAKER_00

Yeah. So the rules that we're setting in place are that we need to have explicit owner consent to record these conversations, and they are obviously not shared; they're removed after a period of time. So we need owner consent for that. Yeah.

SPEAKER_01

So is this considered generative AI, if we are actually taking the inputs from what we're saying and synthesizing the conversation? I know you said it's also linked to the transcript, and you can click out to fact-check it. But is that considered generative AI, since it's creating the summary? Yes, it is. Okay, but there's obviously machine learning, or a neural network, behind it that's allowing it to do this synthesis.

SPEAKER_00

Yes, correct.

SPEAKER_01

Is synthesizing the word? I wonder. But we can ask AI. So, where else in veterinary medicine is it being used? How is this being looked at, besides the productivity tool of making it easier for me to do medical records? Which, by the way, is the bane of every clinician's existence: having to sit down at night and work on your records for hours.

SPEAKER_00

Yeah. So one kind of more superficial AI tool that we're trying to implement here at UC Davis is to interpret blood work, so routine blood tests that you might do if you go to your veterinarian around the corner, or here at our VMTH. Things like a complete blood count or a chemistry panel, we can use those to check whether a dog has a certain disease, yes or no, for example. And so colleagues of mine have developed three classifiers, as we call them, so three of these AI tools that are able to tell whether or not, in this case, a dog has a certain disease. And we're in the process of trying to incorporate those into our clinical workflow as well. But there are several hurdles that we're trying to manage before we can do that.

SPEAKER_01

But now, if I'm a halfway decent doctor and I just look at the blood work, shouldn't I be able to tell that? Do I need a computer? And you called it superficial. So is this something that's easy? Well, easy to do and easy to diagnose?

SPEAKER_00

Yeah. So I called it superficial because the underlying algorithm, compared to deep learning or neural networks, would be considered superficial. Almost if-then statements? Not quite. It's the machine learning level, level two, in between, right? So we don't need as many computational resources. And it gives us a simple yes-no answer with a probability attached to it, how certain the AI is about the diagnosis it spits out. With respect to your question about whether a veterinarian should be able to diagnose that, absolutely. It depends a bit on what the disease is, right? As you know yourself, we have those types of diseases or patients that are pretty straightforward, slam dunk diagnoses, and there are others that are more complicated. Second, we're all just humans, so we make errors. We still train our veterinarians and vet students to recognize and diagnose the disease. But it is nice to have a backup in case we have a bad day or, you know, some other things happen and we miss it. And so having a kind of backup co-pilot AI that helps us not miss diseases, I think, is desirable. And then third, if you're in a rush, no matter how good you are, sometimes things fall through. You can miss it.
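
The yes-no answer with a probability attached can be pictured as a small model that squashes a weighted sum of blood-work values through a sigmoid. The feature names, weights, and cutoff below are invented for illustration; this is not the actual UC Davis classifier:

```python
import math

def addison_probability(features):
    """Toy logistic model over blood-work features.
    Feature names, weights, and intercept are all assumptions."""
    weights = {"na_k_ratio": -0.9, "hematocrit": -0.05}
    bias = 25.0  # invented intercept
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))  # sigmoid -> probability in (0, 1)

# A hypothetical dog with a low sodium:potassium ratio.
p = addison_probability({"na_k_ratio": 22.0, "hematocrit": 42.0})
label = "flag for review" if p > 0.5 else "no flag"
print(round(p, 2), label)
```

Note that the output is a probability, not a verdict; the "flag for review" cutoff is exactly the decision-support framing described next, where the attending veterinarian still makes the call.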

SPEAKER_01

So it just flags it for the attending veterinarians so that they can then go and check more closely.

SPEAKER_00

Correct, yeah. So it would run in the background, and then we have a section that says machine learning algorithms, or decision support tools, and at the bottom you can look at that, or you can choose to ignore it if you don't want to.

SPEAKER_01

So now I know at least one of the diseases that's been worked on is Addison's disease, which can be really difficult to diagnose at first and can have very non-specific clinical signs, in other words, in what the dog is showing to the owner. So for this kind of disease, is this what AI is kind of made for, in a sense, for us?

SPEAKER_00

Absolutely, yeah. Oftentimes it's used in the very initial stages to help us guide further diagnostic workup, right? So things like Addison's disease, we would follow up with further diagnostic testing.

SPEAKER_01

Yeah. And just so people understand, Addison's disease is hypoadrenocorticism. You don't have your mineralocorticoids, which come out of your adrenal glands and balance the salts in your bloodstream, right? So you may have some very nonspecific signs at first, but it can be life-threatening.

SPEAKER_00

Absolutely. Yeah. Another one we've been working on is leptospirosis, which is a bacterial disease that affects the kidney and the liver. And those animals usually present fairly sick, to say the least. And so it is helpful at the beginning if we know, or at least have a rough idea, whether that disease is due to a bacterial infection, like in this case, or some other cause of kidney disease.

SPEAKER_01

Yeah. So if the computer flags it and you're still waiting for your, let's say, urine culture to come back, you may decide to go ahead and start treating it just in case, because this is life-threatening.

SPEAKER_00

Correct.

SPEAKER_01

Yep. Yeah. And now, so I guess, what other areas in veterinary medicine? Am I missing any? I've heard there are now companies out there that are reading x-rays, or radiographs, as we would call them. So images of, let's say, a dog's chest, and coming up with a diagnosis with AI. So where does that integrate in? What do we need to do to make sure this is safe? How do you test these things? How do you know that the computers, you know, I think we've all heard the term hallucination. How do we know it's not making something up?

SPEAKER_00

Yeah, that's an excellent point. And what we should say at the beginning is that there's no official agency that checks these types of algorithms. In humans, there's the FDA, and every algorithm that comes out is essentially a device that has to be approved by the FDA. There's no quality control in veterinary medicine analogous to that. And so in that regard, it is a bit more of a Wild West situation. There are certain colleges, for example radiology, that have proposed rules and guidelines on how to identify good or bad algorithms, or at least best practices on how to develop algorithms and then disclose the details of those algorithms. But they're not legally binding. So it basically comes down to the veterinarian, if you will, to decide whether or not something is worthwhile doing. What we've started to see, and will see more of, is papers that check a certain algorithm to, let's say, read out x-rays. The problem with, or one thing to consider with, artificial intelligence is that there are usually two stages. One is where we develop the algorithm; we call it training the algorithm. The second one is where we actually deploy it, meaning we give it something the algorithm has never seen, like a new x-ray, and then ask it to answer a question. And one of the tricky things about that is that the setting in which you use the AI tool has to be very similar to the conditions under which it was created. For example, in the case of histopathology, we know that very simple things, like the machine we use, a scanner, to take a picture of our histology slide having a slightly different color output, can vastly offset the result that the algorithm provides you with. Which means that if the model was developed using scanner A and then we deploy it in a clinic using scanner B, the algorithm might perform very differently, right? 
So it might misdiagnose. It might misdiagnose something, yes.
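
The scanner A versus scanner B problem described here is a form of what the machine learning literature calls domain shift. A toy sketch, with invented stain-intensity numbers, of how a decision threshold learned on one scanner can silently fail on another:

```python
def train_threshold(stain_intensities, labels):
    """'Training': put the cutoff midway between the two class means."""
    pos = [x for x, y in zip(stain_intensities, labels) if y]
    neg = [x for x, y in zip(stain_intensities, labels) if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(x, threshold):
    return x > threshold

# Scanner A training data: (stain intensity, tumor yes/no), all invented.
xs = [0.2, 0.3, 0.7, 0.8]
ys = [False, False, True, True]
t = train_threshold(xs, ys)   # 0.5

case = 0.7                    # a tumor case imaged on scanner A
print(predict(case, t))       # True: correctly flagged

# Scanner B produces systematically darker output (a -0.3 color offset).
print(predict(case - 0.3, t)) # False: same tissue, now missed
```

The model itself never changed; only the input distribution did, which is why validating an algorithm on the specific scanner and site where it will be deployed matters.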

SPEAKER_01

So, you know, full disclosure, I'm a member of the American College of Veterinary Radiology. And I am aware of the guidelines that they put forth. So let's say I was deciding to start a business and I wanted to be able to read x-rays. Can I train it on 10 cases? How do you classify them, or how do you go about building this kind of model in the first place?

SPEAKER_00

Yeah. So radiology is not my field of expertise, and even if it were, I probably wouldn't give you precise numbers as to how many cases you would need to train it. As a rule of thumb, the more subtle the pattern that you're trying to recognize, the more images or training data you need. In other words, if you need the classifier to identify a big honking mass that's super easy to recognize, you'll need fewer cases to train on than if you have very subtle differences.

SPEAKER_01

But with all the differences you can have in, let's say, a chest x-ray, or thoracic radiograph, as we would say, if you want to look at the difference between, let's say, bronchitis and asthma and pneumonia, and a large lymph node, or a big heart, or pulmonary edema, you might need hundreds or more of really well-documented, or annotated, radiographs to do it.

SPEAKER_00

Absolutely. Yes.

SPEAKER_01

And then you have to test it on a different set.

SPEAKER_00

Correct. Yes.

SPEAKER_01

And maybe radiographs taken from different machines.

SPEAKER_00

Ideally, yes.

SPEAKER_01

Yeah. So this can get very complex, and there are no rules on how this gets done at this point, is what you're saying, at least on the veterinary side. So what about the human side? Where has this been integrated? Are you familiar with that? Or is this something that's not your area?

SPEAKER_00

I have some knowledge, but I think not enough to broadly comment on that on the podcast.

SPEAKER_01

No, no, that's okay. That's okay. Yeah. And I've also heard people worrying about AI replacing doctors, particularly, let's say, the radiologists or pathologists, people who use pattern recognition. Because really, what we train our residents and our vet students to do is recognize patterns, right? You know, you see this pattern again and again, and so you know that this means this dog has pneumonia on the chest x-ray, this means this is a carcinoma or a sarcoma under the microscope. So where do you think we're at? Are we looking at replacing doctors at this point? Or, you know, do you see that happening?

SPEAKER_00

Very good question. So one of the things to note with most of the AI algorithms that we're implementing is that they're fairly narrow in scope, meaning they're meant to diagnose disease X or distinguish between disease A and disease B, right? It's a fairly narrow scope, and if you give the AI something outside of it, let's say you trained it to differentiate between inflammation and tumor and then you give it a third group or type of case, it'll perform fairly poorly. So I think where humans shine at this point is that you can give me any type of biopsy, it might be any disease process, most species, mammals, reptiles, amphibians, and I can come up with a reasonably close diagnosis. It might not be very deep with respect to, you know, brain tumor classification into a lot of sub-entities. Exactly, right? But I can make an educated diagnosis on a broad range of different cases. Again, with AI tools, if it's not within the narrow scope they have been trained for, they'll perform fairly poorly. So going back to your question about replacing veterinarians with AI, at this point I think we will use AI to look at very specific parts of our expertise and replace those. But I don't see any of us being replaced right away, right? That's one consideration. The other consideration is, again, we need to figure out how good those algorithms are. And so what we try to propagate is what's called a human in the loop, which means that the AI might initially make the diagnosis, but then we still need the human who will ultimately sign off on the case, right? Because you have responsibility for the case at the end of the day. And so you want to make sure. Yeah, exactly. That what the AI tool tells you is really correct. So there's always that consideration to it. 
Having said that, for radiology, I think you can already submit radiographs online and have them read out purely by an AI algorithm, where there's no true human in the loop anymore. And that is kind of a tricky thing. If you can be sure that the algorithm is really good, then maybe it'll work. But as I said before, just because there's a paper published that says the performance of this algorithm is excellent or good, it doesn't mean that in a specific, different scenario it performs as well, essentially.

SPEAKER_01

So, at this point, as a radiologist who's trained, the neural networks that are out there in the computers maybe aren't as good as your neural network.

SPEAKER_00

It depends. I mean, ultimately, I'm a firm believer that the machine is the better pattern recognizer. And I also believe that if we get to the point where a machine is better at diagnosing a disease than me, then we shouldn't be concerned about my job. We should let whoever makes the best diagnosis do the job. And if the machine is better than me, then, you know, we have to look at retraining pathologists to do something else. At this stage of the game, however, we're not there yet. And we need, at least in the interim phase, pathologists and radiologists to make sure that those algorithms actually perform the way we intend them to perform.

SPEAKER_01

So I've heard the term, let's say, robotic surgery, and a friend of mine asked me, when I told them I was going to be doing this podcast, they basically said to me, so is a robot gonna be doing my dog's surgery tomorrow, or next year? And I immediately said no. What are your thoughts on where we're headed there?

SPEAKER_00

Yeah, I'm not sure about surgery. I mean, the way I understand it, price is always a big determinant, right? And I know nothing about surgery and where we are in that regard. In robotics, yeah. Robotics and whatnot. I can really only speak to data being analyzed, rather than the actual mechanics involved.

SPEAKER_01

So the other question I got, which I also thought was really interesting: do you see a time when, a bit like Google Translate, which can read anything, you'll be able to translate your dog? You know, with the computer taking inputs of what your dog looks like, maybe the sounds they're making, things like that, and giving an output that lets you communicate better with your dog, or understand what they're saying.

SPEAKER_00

Boy.

SPEAKER_01

He's rolling his eyes on this one.

SPEAKER_00

You mean with respect to just general communication or actually a pathology where we're trying to figure out what the problem is?

SPEAKER_01

Maybe more just general communication there. Again, maybe this is the generative AI where we are taking a picture and making it a cartoon. But you know, for pet owners, this might be something that would be really interesting.

SPEAKER_00

Yeah, I think over time we'll have more and more devices that measure various things. Like, a lot of us wear some kind of a watch that tracks our heartbeat and whatnot. And you can do similar things now with pets. You can use activity monitors. Activity monitors, and you can use the litter box to derive certain data. And so I think as these tools become more mainstream, there's going to be a lot more data to analyze. And I think it's a cool and interesting field that's really worthwhile. Having said that, some types of input might be more worthwhile than others. Fair.

SPEAKER_01

So now I know one of the things you've been working on specifically is trying to distinguish between inflammatory bowel disease in cats and lymphoma in cats. And I know that's really tricky sometimes, for a pathologist and a clinician, to figure out which the cat has. You know, both are going to cause gastrointestinal signs, diarrhea, some vomiting. And they're almost a continuum of disease. So, what have you been doing in your lab to try to figure this problem out? Yeah.

SPEAKER_00

As you see, it's a pretty tricky field. So the basic dilemma that we have as pathologists is that we get what we call the slide, a section of tissue, and we have to look at that. And these types of diseases that we talked about are essentially determined by how many lymphocytes, which are a type of white blood cell, are present in a certain tissue section, where they are located, and what their morphology is. So how big are they, and what size is their nucleus, and whatnot. The issue with that is that, if you go back to the analogy of the world map and the city map of Davis, in order to look at a lymphocyte, you have to zoom in to the level of Davis, right? And there's different types of lymphocytes. There's different types, yes. Light microscopically, we mostly distinguish only one. But there's different morphologies within that, right? So what we are tasked to do as humans is to essentially look at a world map and then try to figure out what the distribution of people is in continent A versus continent B, or city A versus city B, which means we're constantly zooming in, zooming out, zooming in, zooming out, and then trying to summarize what we see across this world map. And as you can imagine, A, humans are not very good at estimating or gauging how many lymphocytes are in a specific field of view, and then, B, trying to summarize that across a large slide like that is difficult too. So, not surprisingly, if you give three different pathologists the same slide, they sometimes come up with vastly different guesses or estimates, or grades, as we call them. And that has led to the fact that histopathology is still kind of the gold standard, but it's not perfect. It's an art, also, right? It's an art as well, depending on how experienced you are, obviously. And so what my lab is trying to do is to take the human factor out of play here. 
So we trained an algorithm to recognize lymphocytes, and then we can measure how big each lymphocyte is, where it is located, and how many we have. And so we get a whole bunch of data; we take essentially the location and the size of the different lymphocytes, and we can do that across a lot of different cases. Traditionally, if you look at studies, they might have a two-digit, max three-digit, number of cases. What we did is we essentially went back through our archive all the way to the 1990s, and we pulled every single cat biopsy that we could find that roughly fits that entity. We came up with literally thousands of tissue fragments, and we ran them through our software, and we can now basically say what is normal and what is abnormal with respect to how many lymphocytes we have, where they are located, and how big they are. And then, which is really cool, we can take these data and put them into AI again, in something called unsupervised learning, and say, okay, can you find patterns in here? So we, in other words, don't have the human expert define the pattern; we let the machine try to find patterns that we may not have seen. That we may not have seen, exactly. And so it's a pretty powerful tool where you can get rid of a lot of the subjectivity that humans bring into the game here. And what we found, and we're about to publish, is essentially that there are distinct subgroups with respect to how lymphocytes are organized, their number, their spatial distribution, but it's a continuum, right? And so it is not surprising, then, that if you have different humans look at that, they come to different results. What the tool allows us to do now, though, is we can take a new case that the AI has never seen, and we can classify it based on the algorithm we have. And if we repeat that 10 times, the machine will always classify it exactly the same way. So it adds consistency. Consistency, absolutely. 
And so what we're trying to do now is to add outcome data to that, because ultimately, at the end of the day, what the pathologist calls it is not as important as how the cat's doing at home. Exactly. Does it respond to treatment? How does the cat do? How long does it survive? The tricky part with that is, A, getting that information, and B, as you know, treatment varies, right? So cat one will get treatment A, cat two will get treatment B. And so getting enough data to make conclusions or predictions about how a cat will respond to a certain treatment, given a certain histopathology phenotype, is tricky at that point. So that's our bottleneck for sure, where we need to get more data moving forward.
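The workflow described here — summarize each case by simple lymphocyte features, let an unsupervised algorithm find subgroups with no human-defined labels, then assign new cases deterministically — can be sketched in a few lines. This is a hypothetical illustration only: the feature pair (lymphocyte density, mean cell size) and the use of plain k-means clustering are assumptions for the example, not the lab's actual software.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Tiny k-means: group feature vectors into k subgroups.

    Deterministic for a fixed seed, so the learned subgroups are
    reproducible from run to run.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each case to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        # Recompute each center as the mean of its group.
        for i, g in enumerate(groups):
            if g:
                centers[i] = tuple(sum(x) / len(g) for x in zip(*g))
    return centers

def classify(point, centers):
    """Assign a new, unseen case to the nearest learned subgroup."""
    return min(range(len(centers)), key=lambda c: math.dist(point, centers[c]))

# Each case summarized as (lymphocyte density, mean cell size) -- assumed features.
cases = [(5.0, 7.0), (5.5, 7.2), (20.0, 9.5), (21.0, 9.8), (6.0, 7.1), (19.5, 9.6)]
centers = kmeans(cases, k=2)

new_case = (20.5, 9.7)
label = classify(new_case, centers)
# Repeating the classification 10 times yields the identical subgroup every time.
assert all(classify(new_case, centers) == label for _ in range(10))
```

Because the clustering is seeded and the assignment rule is a fixed distance computation, re-running classification on the same case always returns the same subgroup, which is the consistency gain over human grading that the conversation highlights.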

SPEAKER_01

And that's our real bottleneck in implementing a lot of these things is having these quote unquote curated data sets where you've proven at the end what that case actually is.

SPEAKER_00

Exactly. Yep.

SPEAKER_01

Yeah. So what haven't I asked you that I should have asked you about AI, sir?

SPEAKER_00

So personally, because we're a vet school, I find the aspect of training veterinary students, and what effect AI has on our skills as diagnosticians, a very interesting one. Right. So for example, if I know that my AI tool will always pick up Addison's disease, or at least at a higher rate than I will, I might not look at the blood work as thoroughly as I do right now. And as we have more and more of these tools developed, not today and not tomorrow, but, you know, a couple of decades or earlier from now, we probably have a machine that can diagnose most diseases more efficiently than humans can. And so what does that mean as a veterinarian? Do we train veterinarians as long as we're needed, to keep the human in the loop? Do we relinquish certain aspects that we train? Going back to the example that we talked about before with scribe technology. So right now, veterinarians go through a program that teaches them how to take notes and how to create a proper medical record. Now we're introducing an AI tool that can do that potentially as well, or as badly, as humans do. At what point do we say we're no longer training our veterinary students to learn that skill, right? And that now goes from a simple scribe technology to diagnosing a disease. And so ultimately That's a big step. It's a big step, right? So we're opening the floodgates here, where we say, okay, we might not want or need to teach that anymore. And here at the vet school, we currently don't have a real workflow or decision process to deal with that, right? At what point do we say we're no longer teaching that? And your standpoint so far is that the human does have to be in the loop. Exactly, yes. So for the scribe technology, it appears, or it looks like, we're still gonna require that veterinarians have to learn the skill.
But then on the other hand, it is important, I think, that the vet students learn the tools that they will encounter in practice. So we can't just say we're not gonna let them use scribe technology because once they get out into practice, they will use that. And we want them to be critical users of AI technology. So they have to understand the limits and pitfalls of it.

SPEAKER_01

Yeah, and I would argue, since we're chatting about this, that when I'm teaching the vet students to take a history, I'm not just teaching them to type in a computer; they know how to do that already. But what questions do you ask, and how do you ask them, and how do you see whether the person's understanding or not, so that you can get the answers that help you understand the problem and decide what diagnostic tests to do? So it's not just being a scribe, right? That's easy. They already know how to do that when they get here, but how do you actually ferret out the problem? So, you know, we have this saying in vet med: if you hear hoofbeats, think horses, not zebras. But occasionally the zebras are there. The computer probably finds the hoofbeats of horses really easily, but can it find the zebras?

SPEAKER_00

Yeah, I mean, ultimately I think it might be able to right now. I'm not sure. Yeah. Which leads us to the next problem I think I haven't touched upon, which is the validation and the ongoing monitoring of those tools. Because, as we said, the environment they were trained in might be different from the one we are using them in. And that may shift over time.

SPEAKER_01

And that may shift over time, right? We get a new CT scanner that's got better resolution. And do we throw out all the old algorithms? Correct.

SPEAKER_00

Or do we have to retrain and build new models? Absolutely. And so that requires a whole new compute and personnel infrastructure to be on top of that, to monitor these AI devices as we deploy them, but then also on an ongoing basis to make sure, as you said, once we get a new CT scanner, that it still works adequately. And in a resource-constrained environment right now with the state budget, it is hard to do that. So our IT crew is really awesome, but they have their hands full as is with just keeping the ship afloat. And now we come in and say, hey, we have three new classifiers where we need to monitor how they perform, and integrate them into our medical record, and have them notify the clinicians, and don't let them hallucinate. Exactly. All of these things. So there's essentially a whole new field that's been added to IT that they previously hadn't been doing, but they will be expected to do moving forward. And that's certainly a constraint here. My lab, for example, has been working towards integrating these classifiers into our electronic medical record system, because our IT group does not have the bandwidth to do that. But moving forward, we need more funding for these types of things if we say we're using AI.
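The ongoing monitoring discussed here can start with something as simple as checking whether a deployed model's inputs still resemble its training data. A minimal sketch, under assumed conditions: we track a single scalar input feature (a made-up mean scan intensity) and flag an alert when its mean has drifted by more than half a training standard deviation, as might happen after a scanner upgrade. Real monitoring pipelines track many features and use more robust statistics.

```python
import statistics

def drift_score(baseline, current):
    """Standardized mean shift of one model input feature.

    Returns |mean(current) - mean(baseline)| in units of the baseline
    standard deviation. A large score suggests the deployed data no
    longer match what the classifier was trained on.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mu) / sigma

# Assumed example data: mean image intensity of incoming scans.
training_intensity = [100.0, 102.0, 98.0, 101.0, 99.0, 100.5]
new_scanner_intensity = [120.0, 122.0, 119.0, 121.0]

score = drift_score(training_intensity, new_scanner_intensity)
if score > 0.5:
    print(f"ALERT: input drift detected (score={score:.1f}) -- revalidate the model")
```

A check like this is cheap to run on every batch of incoming cases, which is one way an IT group could stay "on top of" deployed classifiers without manually reviewing their outputs.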

SPEAKER_01

So that's obviously going to be really important, the validation and the like. So just to kind of wrap us up, where is all this headed? Where are we going to be in five years, 10 years? I know you can't predict that, as you laugh when I ask the question, but what do you think?

SPEAKER_00

Yeah. I mean, I'm very excited about the opportunities, but also a bit concerned about us getting complacent and not checking things as thoroughly as we should, which is dangerous. Yeah, absolutely. So I think what we should do, or will do here at the vet school in Davis, is to introduce things in a controlled way and test them thoroughly. To make sure that our vet students still learn the skills that they need to learn, but then also become critical users of AI technology. They need to be able to use those tools and be familiar with them once they get out. So I think we have a bit of a luxury here at the vet school in that we still, you know, can teach the old diagnostic ways. I think once our graduates get out into private practice, there might be more pressure on them with respect to using those tools. They obviously have to perform in very stressful situations. So there is more of a danger, I think, of missing certain diseases and relying more on these AI tools, right? So ultimately, I think it's inevitable: those AI tools will be there, they will be used, they will create some damage, and they will hopefully create more positive consequences than damage. But it's so difficult to predict, and it's so hard to keep up with a field that's moving at such a fast pace.

SPEAKER_01

Yeah, and we can't lose the humanity of medicine either. I know we teach doctoring, you know, and you still...

SPEAKER_00

Yes. And hopefully we'll never get there. We need the person.

SPEAKER_01

We don't need it to replace us as a race, as the human race. Yes. As a species, I guess, is a better way to say it.

SPEAKER_00

Well, I think that most people are probably comfortable, or do want to have that human to interact with. The question is, how much does that human have to know about veterinary medicine, right? Like, if you have a real-time transcription of the conversation, you could also have the computer put out real-time recommendations, therapies, diagnoses, things like that. So theoretically, I can imagine a world where you still have the human interface, but that human doesn't understand a lot about veterinary medicine, like in the far future.

SPEAKER_01

I hope that we're far away from that, because it just seems to me we'll lose that caring aspect, at least at this point. Yeah. Well, Stefan, Dr. Keller, thanks so much for joining me today on the Vetrospective. It's been a very enlightening conversation, and hopefully not a cautionary tale, but a way of moving forward to make sure that we're not giving up too much of veterinary medicine. Thank you for having me. Of course. The Vetrospective, as with life, takes a village. I want to thank those who suggested I start this project and everyone who has encouraged and supported me along the way. Particularly, I want to thank our producer and director, Dene Blyth Unti; Nancy Bay, who is our program coordinator; our sound mixer, Andy Cowett; and our theme music was composed and produced by Tim Gehagen. Thank you all, and we'll see you next time.