HealthBiz with David E. Williams

Interview with Carta Healthcare co-founder Matt Hollingsworth

March 07, 2024 David E. Williams Season 1 Episode 179

Clinical registries can provide profound insights into disease patterns and treatment effectiveness, and ultimately improve patient outcomes. New technology, including AI, enables faster and better data collection and analysis.

Today’s guest, Matt Hollingsworth, is co-founder of Carta Healthcare, which fuses technology, data, and human expertise in a quest to radically improve outcomes.

Matt shares how his mother's battle with cancer ignited his quest to marry tech prowess with a heartfelt mission to improve patient outcomes.

I like it when he says he didn't actually plan to start a company!

Matt strongly recommends the book “The Worlds I See” by Fei-Fei Li.


Host David E. Williams is president of healthcare strategy consulting firm Health Business Group. Produced by Dafna Williams.


0:00:11 - David Williams
Clinical registries can provide profound insights into disease patterns and treatment effectiveness, and ultimately improve patient outcomes. New technology, including AI, enables faster and better data collection and analysis. Today's guest, Matt Hollingsworth, is co-founder and CEO of Carta Healthcare, which fuses technology, data and human expertise in a quest to radically improve outcomes. Investors and customers are lining up. Are they on to something big? Hi everyone, I'm David Williams, president of strategy consulting firm Health Business Group and host of the HealthBiz podcast, a weekly show where I interview top healthcare leaders about their lives and careers. If you like the show, please subscribe and leave a review. Matt, welcome to the HealthBiz podcast. Thanks for having me. Listen, we're going to talk all about what you're doing now and all sorts of exciting questions that you're addressing, but I want to roll the clock back a little bit and talk about your background, your upbringing. What was your childhood like? Any childhood influences that have stuck with you in your career? 

0:01:10 - Matt Hollingsworth
That's a great question. So the biggest one is that my mom has had five forms of cancer. The first one was when she was pregnant with me, so it started even before I was born; it was breast cancer. So I got to be part of the health system from a very early age, hanging out with my mom while she was going to endless appointments throughout my life, and that eventually is what brought me to what the company is today. And my dad works in IT and taught me pretty much everything I know about computers, so my mom and my dad sort of combined to be what eventually became Carta. So, yeah, that sounds good. 

0:01:53 - David Williams
I mean, imagine if you'd had four or five parents, what you could have achieved. So that's pretty good. Now, on your mom's healthcare side: normally, as a kid, or even as a patient or a friend of the patient, you just go to the hospital or the doctor and assume they know what they're doing and have all the information available. Did you have a sense at any point that that wasn't 100% the case, that there was some potential that was left on the table? 

0:02:17 - Matt Hollingsworth
Yeah, so I don't think I really realized that until the fifth cancer, actually, because that was the first one where I was an adult and had a lot more involvement in actually going to appointments and being there. Before, I would be left with someone outside the doctor's office, so I had no idea what was going on inside. But there was one thing I kept seeing happen which eventually became the inspiration for me going down this path in the first place. My mom has an extremely complicated medical history, and she carried around this very long binder of everything that's happened to her. She's had 31 surgeries and has this whole cocktail of drugs keeping her alive. One of the cancers was thyroid, so she has hormone replacement, which interacts with everything under the sun, and it's just stuff that you have to know so you don't kill her. 

So every time she would go to a new care team, they'd do this intake process, and they kept doing it, and then turning around and typing it into the EHR, and it takes half an hour. So it was a memorable thing for me that they kept doing it, as well as just wondering: what's this EHR thing doing? It's not like this is changing, so what's it for? From that question I started digging and found out the whole thing was really complicated, and decided that if I wanted to make any difference I needed to study it, and so I ended up going to business school to focus on healthcare operations. And that eventually became part of this. 

0:03:33 - David Williams
Got it. You know, it sounds like you were going to diligent doctors who took it and put all the information in. A lot of times, what you see is that there'll be a big list, whether paper or electronic, and they'll say, you know, just tell me what's what, just tell me what I need to know. Which is difficult for the patient to do, of course, and shouldn't really be their job. 

0:03:54 - Matt Hollingsworth
Yeah, I mean, that's part of the reason why this was so impressive to me, because my mom's a very organized person. She'd spent this time, so they kind of didn't have an option but to listen and read through the document, right? But not everyone does that, and I think that's part of the reason why she had such great outcomes. I mean, she's had five cancers and she's still around, and plus the care teams were wonderful. So on the other side of the spectrum there are problems, but obviously the system works very well in some capacities, because my mom's still around. And so I both appreciate it and feel like there are things that could be improved, both dimensions more than your average person, I think, just because of what's happened. 

0:04:33 - David Williams
So you mentioned the business school side. What did you do before business school in terms of education? 

0:04:38 - Matt Hollingsworth
Yeah, so actually I started my career as a high energy physicist. I spent six years at a lab called CERN in Switzerland, part of the team that found the Higgs boson, and then basically spent the rest of my career doing AI engineering. It turns out that doing high energy physics is very similar to a big data science project, because it's mostly data mining. When I was there, I got to be part of building the detector, which was awesome. I got there right before the particle accelerator turned on, which was a lot of fun. But it was a really great education in data science that I decided to apply to other industries afterwards. 

0:05:11 - David Williams
That sounds good. Well, I saw a few things on your LinkedIn: Soar Technology, Global Dressage Analytics (is that the pronunciation?), Deepfield, and Samsung. So maybe lead me through that progression, such as it is. Absolutely. 

0:05:26 - Matt Hollingsworth
So Global Dressage Analytics was a company I was on the founding team for, which I started with a friend of mine from CERN, actually a professor there. The idea there: his wife is a dressage coach, and they have a place where they teach people, mostly kids, how to do dressage, and they also host competitions. We figured out that we could analyze the scores and predict how riders would perform at a particular competition, and also what needed to be trained, and we sold that to the folks who ride the horses and compete in the competitions. It's still around. 

0:06:01 - David Williams
I was going to say, we have a fairly educated healthcare audience here, but not everyone knows that dressage has something to do with horses. So yes, it's horse dancing, basically, which is the short summary. But anyway, applied data science, in this case for sports coaching. 

0:06:17 - Matt Hollingsworth
Soar Tech is about applying that to the defense industry, essentially, trying to find ways to help make that more effective. Deepfield was about doing it for telecom analytics. So there's a common theme here. And Samsung, basically everybody knows what they do. 

0:06:30 - David Williams
So, yeah, they make the smart refrigerators, right? It's going to tell me I need some new celery, and it's going to tell the Chinese what I'm doing. Exactly. Sounds good. All right, let's talk about Carta Healthcare, which has less to do with horses. Is that fair? Much less to do, yeah. So I got sort of the deep origins of the company, but how did this company actually come to life? Why this new company? And Samsung is a huge company, and you go from that to a startup. 

0:07:04 - Matt Hollingsworth
Actually, I wasn't really setting out to create a new company. All this began when I went to business school, where I also joined a research group called SURF, which is the healthcare operations research group at Stanford. The easiest way to describe what they do is to talk about the first paper that I published there, which was a paper on these things called preference cards, basically the list of things that you bring into an operating room before a procedure starts. We had the idea that we could predict more accurately what should be brought into the room beforehand than the current state of the preference cards did, and then use that prediction to update the preference cards so that you don't waste as much stuff. Because, and I don't know if you've ever seen a surgery in the OR, they bring this huge tray in, and then about 80% of it they just throw straight into the trash. 

0:07:50 - David Williams
Yeah, yeah. 

0:07:51 - Matt Hollingsworth
So our thought was: less trash, less waste, money savings, and just things not being thrown away. And so, long story short, we've saved about $5 million a year, this was for Stanford Children's, doing that project, which was really interesting, and then went on to do some other things. The other projects all fit that mold: collect the data set, predict something, and improve operations. 

0:08:10 - David Williams
Now, it's an interesting thing. I know we defined dressage, which is not a healthcare thing, but "preference" in the context of healthcare: what does that mean, a preference card? 

0:08:21 - Matt Hollingsworth
The preference cards are a combination of two things: what to bring into the room to prepare for a particular procedure, and also how to set the room up. So it's basically the instructions for the team to get it ready for the surgeon to do their work. That's why the preference part is there; it's up to the surgeon's preferences on how they want things to be set up. The part we were talking about is just the supply section of that. There's actually a lot more to it that also could be improved in many cases, but that was the thing that we focused on. So really, the thing to think about is a pick list or a to-do list: the list of things that I want in the room. 

0:08:57 - David Williams
Yeah, and I guess I was focusing on the preference aspect because it really is like old school medicine: what does the surgeon want? We put it there. It's not really evidence-based. I'd heard about this, I think from my brother, who was in training at the Brigham in Boston, and he'd said, you know, we order all this blood and all these things that are needed, and guess what happens to it? It's not usually needed, and so there's not that feedback loop necessarily, except that the surgeon still wants it, right? That's the feedback. Yep. 

0:09:25 - Matt Hollingsworth
That's right, and in fact you predicted one of the others. I was referencing a bunch of other projects that we did, and one of them was a sort of value-based analysis to understand what an ideal setup of orthopedic supplies is, because there are tons of vendors, and if you can consolidate you can get cost savings. But does that have a quality downside versus a cost upside, and all that sort of stuff? Because you're absolutely right: for the most part, hospitals want to serve surgeons, and I think that's the right way to do it. You want to make sure that they're happy and that they have exactly what they need to be the best that they can be. But if you can help shine a light on what is a good choice versus a bad choice on the financial side, then in many cases you'll find that the surgeon's like, oh well, I have a 0.1% preference on that, but if it costs half as much, whatever, I'll use that too. That's perfectly fine. 

0:10:18 - David Williams
So just being able to call those out is perfect. Even just culturally, there might have been somebody, maybe a resident or a nurse, who made the observation that that stuff's never used, but they may be afraid to tell the surgeon, or they don't realize that that's like $500,000 worth of something that's getting tossed. 

0:10:39 - Matt Hollingsworth
Exactly. That second one is a huge effect. Nobody knows how much these things cost, right? They have this stuff sitting on a tray, and there can be a $15,000 item right next to something that's five cents, and they look the same. They just throw things away. And if you just point and say, yo, that's $15,000, they're like, what? 

0:10:59 - David Williams
A lot of that? 

0:11:00 - Matt Hollingsworth
A lot of those savings came in the form of items that were multiple thousands of dollars that were just being thrown away, because nobody had taken the time to update the list of stuff they were bringing in, and they would have if they'd known it was that expensive. 

So just surfacing this information to the right people at the right time is the way to fix that, not making the decision for them, which is one thing this AI did not do. 

It did not just update them willy-nilly, and in fact this is a theme that we'll come back to with the company later. We gave recommendations to the lead nurse who was overseeing this, but recommendations meaning just pointing out: hey, you've never used this. And they're like, oh yeah, we never use this, what's this doing here? Let's get it off. Or: you always use this and it's not on the card. And they're like, oh yeah, I have to run out in the middle of the procedure every single time to get this, because it's never there. So that sort of recommendation to a subject matter expert, who can then turn it into something, is a good thing. Because sometimes the recommendations are wrong; maybe they just don't document that item because it's so small nobody cares about it. But giving them a recommendation and allowing a subject matter expert to decide is the right path, just like in almost every application of AI out there. 
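For a concrete picture of the kind of recommendation logic Matt is describing, here is a minimal sketch in Python. The item IDs, thresholds, and cost math are illustrative assumptions, not Carta Healthcare's actual implementation: it compares a preference card against usage logs and surfaces "never used" and "always needed but missing" items for a lead nurse to review.

```python
# Illustrative sketch only: hypothetical item IDs, thresholds, and cost math,
# not Carta Healthcare's actual logic. The idea: compare a preference card
# against observed usage and surface suggestions for a human to review.

def preference_card_recommendations(card_items, case_usage_logs, unit_costs,
                                    never_used_threshold=0.02,
                                    always_used_threshold=0.90):
    """Suggest edits to a surgeon's preference card from observed usage.

    card_items      -- set of item IDs currently on the preference card
    case_usage_logs -- list of sets, each the items actually used in one case
    unit_costs      -- dict mapping item ID to cost in dollars
    """
    n_cases = max(len(case_usage_logs), 1)
    counts = {}
    for case in case_usage_logs:
        for item in case:
            counts[item] = counts.get(item, 0) + 1
    usage_rate = {item: count / n_cases for item, count in counts.items()}

    recommendations = []
    # Items on the card that are almost never used: candidates for removal.
    for item in card_items:
        rate = usage_rate.get(item, 0.0)
        if rate <= never_used_threshold:
            recommendations.append({
                "item": item,
                "action": "consider removing",
                "usage_rate": rate,
                # Rough waste estimate: the item is opened every case but rarely used.
                "annual_waste_estimate": unit_costs.get(item, 0.0) * n_cases * (1 - rate),
            })
    # Items used in almost every case but missing from the card.
    for item, rate in usage_rate.items():
        if item not in card_items and rate >= always_used_threshold:
            recommendations.append({"item": item, "action": "consider adding",
                                    "usage_rate": rate})
    # A subject matter expert (e.g. the lead nurse) reviews every suggestion;
    # nothing is changed automatically.
    return recommendations
```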

0:12:04 - David Williams
Sounds good, all right. Well, I got you out of your childhood and, I think, out of Europe, and now back over towards Stanford, but you're still in your research phase. Okay, so the company, Carta Healthcare. So you weren't planning to start a company, but you did anyway. Yes, exactly. 

0:12:16 - Matt Hollingsworth
So we did more projects like that, and I think if you were to take someone off the street and ask what's hard in that story, they would say, oh, building this fancy predictive model that's in the middle of it, and that's just not true. The easiest way to see that is to look at the timeline for that project, or any of the other ones; the project length may vary, but the relative ratios are the same: 18 months of data collection, which looked like a bunch of nurses typing into this thing called REDCap, which is basically a HIPAA-compliant Excel used as a data capture program; about three days for me to build the model, basically a weekend; and then about six weeks of change management. The model spit out a whole bunch of recommendations, we fed them to the lead nurse, who then went and acted on them, and that process took about six weeks. So the vast majority of that time was in data collection, and all the other projects were the same exact thing; the ratio was the same. At some point I was like, well, maybe we should be working on that part, because we'd be able to do literally 20 times more QI projects with the same budget if we didn't have to rely on nurses to go out and manually collect all this data. So let me see if I can replicate the results of those projects: get the same model output, but instead of having someone manually curate the data, use AI to pull it instead. And since we're sitting here: well, that worked. It worked really well, and I was demonstrating it to one of the clinicians who was involved in one of the other papers that I didn't talk about, and he said, oh, you built an AI-assisted data abstraction system. I had no idea what he was talking about until he introduced us to this whole group of people. 

It was a group of about 120 people affiliated with Stanford Children's whose sole purpose at Stanford Children's is to sit there and fill out forms, for either quality reporting or research purposes. That's what they do all day. And he's like, this is basically the same thing, so maybe this will work. And this is a problem that every hospital has. Thus far we'd worked on research projects, because that's what I was working on, but these other things are the quality documentation part: these things called registries that every hospital has to submit to if they want to be accredited for different stuff. So it's a problem that every hospital has. 

That's the part that is a lot more interesting from a scalability perspective. Thus far this had been an academic exercise, so we thought, okay, maybe this will work. We prototyped it, it looked like it worked, and then we got a grant from the American College of Cardiology to try it for real on a couple of their registries, and lo and behold, it worked very well. They ended up investing, then we raised a round, and that's where Carta began. 

0:14:59 - David Williams
So let me ask you about that. Okay, so we have another word there: abstraction. It's sort of like, well, we're going to do something specific, but it just sounds kind of abstract. It sounds more like the collider or something. What's abstract about looking at the record? I guess it's just a weird name. 

0:15:17 - Matt Hollingsworth
I think people gave it that name because "data entry" is too simple for what it is. It's a skilled data entry problem, basically. I think a better term of art for this is swivel chair analytics: you have the electronic health record open on one side, with all of its notes and unstructured data and mess, and then you have a form on another screen, and you swivel over and fill it out. So you read, was the patient hypertensive upon admission? Then you go to the form, find that field, say yes or no, then go to the next field, and back and forth. That's what they're doing. And the key is, I couldn't do that. I don't have enough clinical expertise to do it; at this point I might be able to do some of them, but for the most part not. I'm not a clinician, I can't make clinical calls, I can't fill those things out accurately. But nurses and physicians can, and it has to be them. So it's not something where you can just hire somebody off the street to fill out the form. It's not robotic; you're not taking an answer from over here and putting it into the right place over there. You're actually reading through stuff, interpreting it, and filling out a form on the other end. So data abstraction equals skilled data entry, is what I would say. 

But to give a little perspective here: I don't think there's a word in the English language to describe what these forms look like. "Form" sounds like something you do at the DMV in 10 minutes or whatever, and that is not what this is. Just for perspective, take one registry, one of these forms that needs to be completed. It's called CathPCI, and it has to be completed whenever a PCI is performed in the cath lab. The form takes about an hour and a half to complete, which is often longer than the actual procedure takes. It's more work; it's just ridiculous, but that's how it is. So this is not a DMV form. It takes a long time, and a lot of expertise gets dumped into doing it. I think that's why people came up with another word for it: if you just say data entry, people get the wrong impression. It's something totally different. 
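To make the unstructured-note-to-structured-form problem concrete, here is a minimal sketch. The registry field and the extraction rule are hypothetical stand-ins (loosely CathPCI-flavored, not actual registry items), and a real system would be far more sophisticated; the point is the shape of the task, with a human abstractor confirming every field.

```python
# Minimal sketch of "swivel chair" abstraction as a data problem. The registry
# field below is a hypothetical stand-in, and the extraction rule is
# deliberately naive keyword matching.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RegistryField:
    name: str                    # e.g. "hypertension_on_admission"
    question: str                # the human-readable prompt on the form
    value: Optional[bool] = None
    confidence: float = 0.0
    needs_review: bool = True    # an abstractor confirms every field

def suggest_value(field: RegistryField, note_text: str) -> RegistryField:
    """Pre-fill one structured field from an unstructured note (naive heuristic)."""
    text = note_text.lower()
    if "hypertension" in field.name:
        if "no history of hypertension" in text:
            field.value, field.confidence = False, 0.7
        elif "hypertension" in text or "htn" in text:
            field.value, field.confidence = True, 0.7
    # Unmatched or low-confidence fields stay blank for the abstractor to fill in,
    # and everything remains flagged for human review.
    field.needs_review = True
    return field

note = "58M admitted with chest pain. PMH: HTN, hyperlipidemia. Underwent PCI of LAD."
field = suggest_value(
    RegistryField("hypertension_on_admission",
                  "Was the patient hypertensive on admission?"),
    note,
)
print(field)   # value=True, confidence=0.7, needs_review=True
```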

0:17:16 - David Williams
OK. So, at the risk of taking us further afield: someone's in the cath lab, they're doing the PCI, they're documenting it, and not in your mom's binder, mind you, but in the modern electronic medical record that somebody spent hundreds of millions of dollars on for a single instance, probably at Stanford. At least that's what they spend in Boston; I don't know how much they spend at Stanford. If they were doing a PCI, presumably in a quality way, with a protocol, and they document it in the EHR, why does it need to be abstracted for the quality registry? That's a great question. 

0:17:52 - Matt Hollingsworth
So the way it gets documented is a way that makes sense to clinicians, which is long-form notes: English, clinical terminology. That's what gets in there, and it needs to be turned from that into a form with structured data in it. It's this unstructured-to-structured thing that is the data abstraction problem, and the EHRs don't solve that problem. Let's imagine the way it's done in real life: you have a team that does this on the back end. But the nurses on the front end could do it too, right? They have the skills. Why isn't that the case? Well, because then they'd have to spend an hour and a half after every case filling out those fields. 

The content that's in notes that are typed or dictated is a lot richer, and takes a lot less time for humans to dump into the EHR, just because that's the way we think, than clicking through on the order of 350 checkboxes and text boxes. That just takes way longer. 

So the EHRs can provide the text boxes, but the problem is someone needs to fill them out, and that's the key: you have to transform the narrative, which is either in your brain or, more often than not, in a note that you're reading, into those structured fields. If you're doing it in the room, in principle it's in your brain, but you still have to go through and fill all of that out for an hour. So anyway, that's the problem: you have to actually fill the thing out. And there are hundreds of these fields, so it would add a huge amount of documentation burden, and it doesn't really matter whether it's on the back end or the front end per se, because you still have to spend the same amount of time. That's the reason it's difficult. 

0:19:28 - David Williams
All right. So you had spotted the problem of abstraction, it's not done very well, the cardiology folks saw you could do it better and wanted to invest, hence the company. But you're not the only one that does abstraction. Every hospital and department has it, some more centralized than others, some with dedicated people and some doing it as part of their time, and there are companies that serve that market. So what makes you different? What makes you better? 

0:19:57 - Matt Hollingsworth
Yeah, great question. There are several aspects to that. Let me start on the technology front, because that's, I think, one of the more interesting ones here. We're not the first people to think about using natural language processing to do this work, not anywhere close, and we won't be the last either. Here's an easy way to prove that. 

If you go to Google Scholar and type in "AI data abstraction", you'll find thousands of papers; people have done this. And just to save you time, here's the way these papers look. The abstract will say: okay, we decided to use natural language processing to figure out 10 clinical concepts. Was the patient hypertensive upon admission? Are they diabetic? Blah, blah, blah. So ten concepts. 

The method is that they'll then go build an AI model and try to replicate a gold-standard data set that was abstracted by humans. So they'll go out and collect, you know, 500 cases or whatever of those ten concepts, try to replicate them, and then publish an F-score or some other accuracy metric on the other end. On average, what you'll find is that seven out of the ten can be replicated, and then they'll say something like, oh, maybe a couple more of these would work with some changes, but the tenth one's just too hard. That's a summary of all those papers, and again, there are thousands of them, so people have been doing this for a long time. Which begs the question: why isn't this out there? 

0:21:14 - David Williams
Yeah, I mean, you saved me a lot of time reading all the literature; that was a quick summary. And anybody who's listening to the podcast on 2x heard it in 10 seconds instead of 20. 

0:21:25 - Matt Hollingsworth
Yes. So the main point there is that we're definitely not the first people to think about this, and not even the first people to succeed at doing some of it. The problem, though, is when you look at the unit economics of it. Just imagine that paper for a second, and let me give one more detail that's going to be important. The way this works is you build it at Stanford, or maybe at a few sites somewhere. The problem is that the model you built at Stanford will not perfectly translate without modification to another place, and the issue is you don't know which concepts work or don't until you validate it, which looks, again, exactly like data abstraction: you have to build a gold-standard data set, and so on. So when you look at the economics of that class of deployment, it doesn't work. If you're just building a spreadsheet of this, the cost is in the gold-standard collection to do validation or training, depending on whether you're building it or validating it at a new site, but it's the same exercise: you have to go out and get many cases and then check to see how the model performs. So that's the cost. The benefit is that you can save that labor in the future for whichever concepts work. So then the math question is: how much investment does it take to do the validation and rollout, versus how much time gets saved over time? And then, if you're building a business out of it, how much are you going to get paid for that? If you do out the math, you're upside down from a net present value calculation, or an enterprise value calculation. You have to invest too much for the cash flows that you get over time to work. The business just doesn't work, and it's not because people haven't tried it; it just doesn't work. And on the client side it's even worse, because you can't add any value until all of that is done, and it takes years. If you need several hundred cases, some of these registries don't have volumes that exceed that, so you have to wait several years. If you purchase this and you don't get any value for years, that's why these things haven't worked. People have done it academically, but you can't actually get ROI out of it, because of the way the math works. 
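As a back-of-the-envelope illustration of the "upside down" math described here, the sketch below uses made-up numbers: the per-site validation cost, per-form abstraction time, annual registry volume, and the share of savings a vendor might capture are all assumptions for illustration, not Carta's or anyone's actual figures.

```python
# Back-of-the-envelope sketch of why per-site model validation can be
# NPV-negative. Every number is an illustrative assumption.

def npv(cashflows, discount_rate=0.10):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cashflows))

# Cost side: build a gold-standard data set at each new site to validate the model.
gold_standard_cases = 500                 # manually abstracted cases for validation
hours_per_case = 1.5                      # skilled abstraction time per form
loaded_hourly_rate = 60                   # dollars per abstractor hour
validation_cost = gold_standard_cases * hours_per_case * loaded_hourly_rate   # $45,000

# Benefit side: labor saved once the validated model is live.
annual_registry_volume = 300              # forms this site submits per year
fraction_automatable = 0.7                # the "7 out of 10 concepts" that replicate
annual_labor_savings = (annual_registry_volume * hours_per_case
                        * loaded_hourly_rate * fraction_automatable)           # ~$18,900

# If you're building a business out of it, you capture only part of that saving
# as your price, and no revenue arrives until validation (roughly two years of
# this site's case volume) is complete.
vendor_share = 0.4
annual_revenue = annual_labor_savings * vendor_share                           # ~$7,600

cashflows = [-validation_cost, 0.0] + [annual_revenue] * 4
print(f"validation cost per site:    ${validation_cost:,.0f}")
print(f"annual revenue once live:    ${annual_revenue:,.0f}")
print(f"NPV over a six-year horizon: ${npv(cashflows):,.0f}")   # comes out negative
```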

What we figured out is that we can go and be risk-bearing. The whole purpose of this is to save money and time at the end of the day. So if today insert-hospital-here is spending a hundred dollars per form, we come in and say, hey, we'll do that for 60, which is roughly the same benefit you would expect if you built this yourself, once you add the software cost and subtract the time savings; asymptotically, that's what you would approach. We'll just offer it to you now; we can do it now. And then our goal is to evolve a manual process into a thing that allows us to substantiate that price point, and we can do that over time in ways that institutions, or other companies trying to build this stuff, can't. 

The key, though, is on the product front. 

What you need to do to make this evolve is that the system actually needs to learn from the abstraction team over time, so that when they come in on day zero, they're doing it manually. 

When they come in on day 30, it's saving them time; when they come in on day 60, it's saving them more time; that sort of thing. So on the product front, what we did here is we added a learning component that allows it to get feedback and actually learn from the abstractors, which has a lot of interesting ramifications in terms of the way we can deliver the product. But that's what's different: we figured out a way to make this learn and improve over time, which makes all the unit economics work out on both sides. The client gets the savings immediately, we can substantiate the savings because we have a lower labor base over time, and it works out really well. So the core of this is that (a) we figured out a way to repackage all this in a risk-bearing mechanism for clients, and (b) we figured out a product approach that actually makes that work on the business side. 
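Here is a minimal sketch of the human-in-the-loop learning pattern described above: the system pre-fills what it can, abstractors confirm or correct each field, and every review becomes labeled training data so the automated share grows over time. The class names, the majority-vote stand-in model, and the retraining trigger are assumptions for illustration, not the actual product design.

```python
# Minimal human-in-the-loop learning sketch. Class and method names, the
# majority-vote stand-in model, and the retraining trigger are illustrative
# assumptions, not Carta Healthcare's actual product design.

class MajorityVoteModel:
    """Trivial stand-in: predicts the most common confirmed value per field."""
    def __init__(self):
        self.by_field = {}

    def fit(self, examples):
        self.by_field = {}
        for _note, field, value in examples:
            self.by_field.setdefault(field, []).append(value)

    def predict(self, note_text, field):
        values = self.by_field.get(field)
        if not values:
            return None                      # no suggestion yet: fully manual
        return max(set(values), key=values.count)

class AbstractionAssistant:
    def __init__(self, model, retrain_every=100):
        self.model = model                   # anything with fit()/predict()
        self.training_examples = []          # (note_text, field_name, confirmed_value)
        self.retrain_every = retrain_every

    def prefill(self, note_text, field_names):
        """Day 0 this may suggest little; over time more fields come pre-filled."""
        return {f: self.model.predict(note_text, f) for f in field_names}

    def record_review(self, note_text, field_name, confirmed_value):
        """Every abstractor confirmation or correction becomes a labeled example."""
        self.training_examples.append((note_text, field_name, confirmed_value))
        if len(self.training_examples) % self.retrain_every == 0:
            self.model.fit(self.training_examples)     # periodic retraining

# Workflow: suggest, human reviews and corrects, feedback, better suggestions.
assistant = AbstractionAssistant(MajorityVoteModel(), retrain_every=3)
assistant.record_review("PMH: HTN", "hypertension_on_admission", True)
assistant.record_review("Hypertensive on arrival", "hypertension_on_admission", True)
assistant.record_review("No cardiac history", "hypertension_on_admission", False)
print(assistant.prefill("58M, chest pain", ["hypertension_on_admission"]))
# {'hypertension_on_admission': True}
```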

0:25:13 - David Williams
Got it, all right. So, when we were talking, what I'm hearing about is hospitals and providers as customers, and they're doing the abstraction for things like these quality registries for the medical societies. So that's one side of it. There are also registries in the pharma industry, either done post-approval for safety requirements or to gather real-world evidence. And I was thinking about that because, as you were describing the challenge of this data abstraction: pharma certainly has deeper pockets than some of the providers, and they spend a lot of money on real-world evidence, but a lot of the companies doing real-world evidence don't make any money because it costs them too much to do the abstraction. Do you play in that? Am I right? Do you play in that as well? 

0:25:54 - Matt Hollingsworth
You're absolutely right, and that is absolutely something we could do. The only reason we're not doing it now is just focus. We picked the registry stuff because it's a problem that we understood, and we will definitely be doing that at some point in the company's trajectory. It's the same exact problem, as you mentioned, just a different sales strategy and cycle and all that sort of stuff; because that's different and we wanted to focus, that's the only reason we picked registries first. But you're right, it's exactly the same problem. In fact, there are all sorts of things that are like that: all the risk adjustment things, clinical trial recruiting, same story, you have to go out and figure out who's in and who's out. So many things out there are a data abstraction problem. 

0:26:33 - David Williams
What's fun about healthcare from a business standpoint is that there are a lot of multi-billion-dollar niches, so you don't have to do everything in order to have a decent-sized company. All right, so you talked about how one of the things that makes what you do special is that it can learn as it goes. There's also this concept within healthcare AI that maybe it requires more human involvement and training than, let's say, financial services or some other areas. Is that right and, if so, why? 

0:27:01 - Matt Hollingsworth
Yes, and it's just that the data is way more complicated. In financial services, at the end of the day, you have transactions that go from point A to point B, and prices. That's basically what you need to communicate in some form or another, so it's really, really easy to make standards for it. In healthcare, even the standards themselves have millions of concepts. Millions. 

No human actually really understands all of those codings; no one can remember a million concepts. So it's just way harder, and there's no enforced standard for doing this, because, to take one of the standards, which is called SNOMED, if we forced everyone to document everything in SNOMED to the point where it would fill out the registries, a single patient would be, you know, eight hours of work, and it's just not feasible, so people don't. Because of that there's no real accepted standard, because not everyone can afford to throw an army of people at documentation; AMCs can often afford it more than the regional hospitals can. 

0:28:03 - David Williams
AMCs being the academic medical centers. And SNOMED, I think there's an expression here: it wouldn't stand a SNOMED's chance in hell of succeeding, right, as you described? 

0:28:16 - Matt Hollingsworth
All right, that's right. 

0:28:16 - David Williams
So healthcare is such a big mess, and while I'm a proponent of technology and change and so on, I wonder: do you think AI can actually make a meaningful difference in healthcare? A lot of things have been tried that haven't been so successful. 

0:28:31 - Matt Hollingsworth
Yes, it can, and I think people ought to expect it to. I think there's been a lot of overpromising and under-delivering under that general umbrella, which I find very, very sad, and I think it creates a lot of skepticism, which makes it harder to get stuff done that actually can work. I think the bar should be this: for basically every application that I've seen, you could figure out a way to make the contract risk-bearing. The problem is that big, very well-funded companies have come along and said, pay us three million dollars a year for AI, with sort of nominal ROI attached, and this will revolutionize everything. And then it doesn't, and everyone's disappointed, and they're stuck paying a three-million-dollar-a-year bill for five years or whatever, and it doesn't go anywhere. Instead, you can sit down and say: okay, this is an actual thing that this can do, and I'm so sure about it that I'll bear the risk on it, and figure out a way to do that. Because so much of AI is about freeing up mundane labor, you can tie it to labor rates eventually. So I think one of the biggest problems is the engagement model in these cases: you need two years before you're adding value, the company isn't actually bearing any of the risk of the engagement, all those sorts of things. 

And the second bit is, I think people need to quit trying to remove subject matter experts from the loop. That just never works. I've been doing AI for my whole career, and there's never a case where AI completely removes the subject matter expert from the loop. That's just a truism; if you peel back the curtains on how this stuff works, it just doesn't. The only times where you could make an argument that it's true is where the user is the subject matter expert, like Google, for instance. But again, you don't click Google and say, give me what I want, and it just pops up a web page. There's the I'm Feeling Lucky button that no one ever uses. Sad, yeah, but no one uses it because it doesn't work. 

You get a list, and then you read through it and decide, because you're the subject matter expert in that case, right? So I think this idea that this is going to put tons of people out of jobs and all that sort of stuff is just not true. You can make them more productive; it's just another productivity tool. So people should quit promising that and then not delivering it, and people should also quit buying that, because it can't be done. Then I think there's a ton of value that can be added; it's just a matter of making sure we do the right stuff with it. 

0:31:02 - David Williams
So I'm fairly optimistic that companies like yours are going to solve these problems on a B2B basis: put the right business model in place, the right incentives, the acknowledgement of the experts, the need for people and so on, and that it can actually address in a meaningful way some of the problems we have, like chronic lack of staff, burnout, et cetera. And that would be good. The part I'm less certain about, and sometimes a little surprised about, is that there's a lot of patient skepticism and even hostility to the use of AI in medicine, if you ask people whether they're comfortable using it here or there. Now, granted, we don't expect people to completely understand it, but at the same time it is important. If someone says, hey, I don't want any AI in my healthcare, how do you think about that, and where do we go from there? 

0:31:51 - Matt Hollingsworth
That's a great question. We actually did a survey relatively recently, and one of the questions was basically: do you think your doctor uses AI in your care delivery? About 50 percent of people thought yes and 50 percent thought no, which is interesting, because objectively it's 100 percent. My favorite example is the pulse oximeter. It uses AI algorithms to do peak detection and Kalman filters to remove noise. It's AI; it's just turned into a hardware package. 

But we have a great solution to make sure that this doesn't screw things up, which is the clinical trial, just like everything else. Drugs are way scarier than any form of AI, which is just software; it's not actually in your body. It's one of those things where we have a solution for it, and we just need to stick to it. If it passes the clinical trial, you should trust it just as much as you trust anything else that passed a clinical trial, if it's actually touching you. If it's administrative stuff, I don't think patients care about that. I don't care about that, unless I get a random bill or something, but then they care about the bill, and that gets addressed later. 

For the actual care part, I definitely don't think any AI should be touching patients unless it's gone through a clinical trial. You should absolutely be terrified of anything that hasn't, because anybody who's that undisciplined, you should be terrified of. But if it's been through a clinical trial, 

it's just like another thing out there. It's no different than a device or a drug. As long as we trust our clinical trial system, and I think we have a very trustworthy clinical trial system for the most part, then everything should be fine. But I think what people are pushing back against is this idea that they're going to be talking to a robot, which is legit. I don't want to do that either. I don't trust robots. I believe a lot in what AI could do, and I would never trust some AI system to go out and diagnose me and do whatever. I think people are pushing back on that, which is a completely legitimate thing. The main point here is that I don't think we need to put AI in a different class than devices or drugs. It's just a new thing that we need to make sure we have controls over, and we have an extremely powerful control mechanism that we shouldn't relax for any AI application. 

0:34:07 - David Williams
Sounds good. Let's look ahead a little bit. I've dragged you back into the past a few times, but how do you see things evolving or accelerating or becoming revolutionary over the next time frame? 

0:34:18 - Matt Hollingsworth
Whatever your time frame is, a year or five years, whatever the horizon is. 

During my career, there have been at least two other big hype cycles like this: there was the speech-to-text version, then there was the image recognition version, and now there's the generative AI version; each time we get a boost. The thing that's different about this one is, I think, that it's something everyone can interact with and understand. ChatGPT will probably go down in history as the first time the average person on the street interacted with AI directly and felt like they understood what was going on. People don't think of Google as AI, even though it's just completely smothered in it, more than almost any other product that's out there; it's like, oh, it's just a search box, right? They don't think about it the way people think of AI, but I think everyone thinks of ChatGPT that way. So now there's headspace in everybody's mind about what AI is, what could be cool about it, and what's scary about it, which I think is net very good. First off, it's going to keep people from using it for reasons that aren't kosher, like diagnosing a patient without going through a clinical trial, which matters in our space. But it will also help provide excitement about it, people wanting to learn how it works. And it's not magic, it's math, so everyone can learn how it works and learn how to apply it, and I think there will be a lot of interest in it, and eventually a sort of necessity. 

I think the excitement will overpower the skepticism to where people will sit down and be like maybe software as a service is not the right way to deliver this and start coming out with models that will actually work and business engagements that will actually work. 

So I'm very optimistic about where this goes. I think people have been burned enough with blind application of SaaS models to AI in ways that don't work, and over time we'll get more companies that are really good at taking this stuff and applying it in ways that guarantee value and create a lot of value. So, to bottom-line it here: I don't think the technology even needs to evolve very much for any of this to be true. It is evolving, but what's really going to evolve is people's understanding of how to apply it, both in terms of turning it into businesses and in terms of turning it into value for people at the end of the day, just because the population of people who understand it is exponentially increasing right now, which is really awesome. I think it's going to be great for everybody. 

0:36:58 - David Williams
Great. So, Matt, my last question for you is whether you've read any good books lately, and is there anything you would recommend, or anything you'd recommend to avoid? That's a good question. 

0:37:13 - Matt Hollingsworth
Yes, I want to give a good recommendation here, but I've been talking so much that I'm blanking on the most recent one I read. I can never remember the names of the books I'm reading. 

0:37:25 - David Williams
I'll just say, yeah, the one with the blue cover that has the white squiggly on it. 

0:37:31 - Matt Hollingsworth
So yeah, I'll remember it in a second; I'm just totally blanking on it. That's a good question. There is one I want to read, but I'm forgetting it. 

0:37:42 - David Williams
I'll put it in the show notes, if we can. 

Absolutely. So anybody who made it this far, you're going to look in the show notes and see what's what. Well, Matt Hollingsworth, co-founder and CEO of Carta Healthcare, thanks for being my guest today on the HealthBiz podcast. Thanks for having me. You've been listening to the HealthBiz podcast with me, David Williams, president of Health Business Group. I conduct in-depth interviews with leaders in healthcare, business and policy. If you like what you hear, go ahead and subscribe on your favorite service. While you're at it, go ahead and subscribe on your second and third favorite services as well. There's more good stuff to come and you won't want to miss an episode. If your organization is seeking strategy consulting services in healthcare, check out our website, healthbusinessgroup.com. 
