Informonster Podcast

Episode 40: Unpacking AI and Interoperability Trends with Joerg Schwarz

Clinical Architecture Episode 40

Joerg Schwarz, Senior Director, Global Industry Strategy & Solutions, Healthcare Data & Interoperability at Infor, joins Charlie Harp to explore the future of healthcare data. They unpack how AI is raising the bar for interoperability, why data quality can't be an afterthought, and how the PIQI Alliance is bringing the industry together to drive meaningful change. From FHIR adoption to scalable innovation, Joerg shares real-world insight into where healthcare data needs to go next.

Contact Clinical Architecture

• Tweet us at @ClinicalArch
• Follow us on LinkedIn and Facebook
• Email us at informonster@clinicalarchitecture.com

Thanks for listening!

Charlie Harp (00:09):

Hi, I am Charlie Harp and this is the Informonster podcast. Today on the Informonster Podcast, I'll be talking to Joerg Schwarz from Infor. Joerg, welcome.

Joerg Schwarz (00:20):

Nice to be here, Charlie.

Charlie Harp (00:23):

We're delighted to have you on the podcast. One of the things I usually start off with is why don't you let the listeners know a little bit about yourself and how you found your way into healthcare?

Joerg Schwarz (00:36):

Yes, happy to share that. I started in healthcare in 1995, so some people say, I am just beginning, but

Charlie Harp (00:46):

You're a young whippersnapper.

Joerg Schwarz (00:47):

Yes, yes, that's right. But time flies when you're having fun. I got into healthcare because my sales manager at the time said, you know what? With the accounts that I can give you, there's not enough revenue that you can generate, so why don't you go and develop the healthcare segment? So that's what I did, and the first customers that I found, together with a partner, bought a communication server. That was the beginning of interface engines. The partner had a lab system and he was selling the lab system, and I was selling the communication server and the communication server hardware. That was DataGate, which at the time was competing with Cloverleaf. In fact, at the time I met my fierce competitor, who was selling Cloverleaf, and DataGate and Cloverleaf in the nineties were the original interface engines.

Charlie Harp (01:51):

It's kind of funny how competitors become friends and friends become competitors in our industry. It's a very tight-knit one; you always have to be careful. You never know if the person that's your mortal enemy is going to be your boss and trusted advisor at some point.

Joerg Schwarz (02:12):

That's right.

Charlie Harp (02:14):

It's a funny industry.

Joerg Schwarz (02:15):

That's why HIMSS is such an interesting show, right? People meet each other and you have to check what's on your badge this year.

Charlie Harp (02:27):

Exactly. So you work at Infor. Can you talk a little bit about what Infor does and what your role is there?

Joerg Schwarz (02:36):

Yeah. Infor is known in the industry as an ERP company. Infor bought Lawson, which was one of the original ERP players in the healthcare sector, and Lawson itself had bought Cloverleaf just a year before. So I'm responsible for the Cloverleaf business, as the Industry and Solution Strategy Director. The Cloverleaf business serves over a thousand customers, I think we're at 1,500 or 1,600 customers worldwide, in the US, Europe, the Middle East, and Asia, but predominantly in the US and in central Europe, where Cloverleaf is used as the core integration engine by a lot of hospital customers. But we also sell to payers and to life science customers, especially medical device companies, et cetera, enabling the data transfer within the hospital and from the hospital to the outside world.

Charlie Harp (03:53):

Yeah, Cloverleaf has been kind of a staple in the industry doing that type of data exchange for as long as I can remember. Absolutely. As long as I've been plugged into the interoperability space.

Joerg Schwarz (04:06):

It's funny, we have a bunch of people in the Cloverleaf team with 25 and 30 year tenures, so they were with the original team, and they have been bought and sold many times, but for the last 10 years it's been Infor, and it's been a good ride. Infor is a good company and invests in Cloverleaf to keep it at the top of its game.

Charlie Harp (04:35):

That's fantastic. I think there's a lot of people in the industry that started in the late eighties, nineties when we got into healthcare that have been kind of sticking with the same product or in the same place for a long time, and I'm a little scared about what happens when those people start to retire.

Joerg Schwarz (04:56):

Yes,

Charlie Harp (04:58):

That's true. There's so much knowledge there. There's so much historical knowledge there. That's why I tell people, don't worry, I'm never going to retire. You don't have to worry.

Joerg Schwarz (05:06):

Yeah. When somebody asks me, I always say I have another 10 years to go, but I've been saying that for five years now, so that 10 years is a sliding scale. But you're right. The downside of having a lot of people with long tenure in our professional services team and consulting team, et cetera, is that some of them will retire pretty soon. But the good thing is we have young people who are being trained and who will follow in their footsteps

Charlie Harp (05:44):

And then they'll be the old timers in 20 years.

Joerg Schwarz (05:46):

Yeah, exactly. And who knows, maybe we'll still be around, Charlie.

Charlie Harp (05:52):

Of course we will. What do you mean, maybe? So, in your tenure, in your role at Infor and just in the industry, what's a notable trend, or what do you think is interesting about where we are today?

Joerg Schwarz (06:09):

Yeah, I think the role of interoperability has been changing. In the beginning, in the nineties when we got started, interoperability was about connecting different systems with each other just for the bare necessities, and we were addressing problems like systems that weren't even designed to be interoperable. In the beginning, I remember we had cases where there were drivers for printer interfaces, on lab systems for example, so you had to capture the printer output and turn it into HL7 v2 in order to get the lab results connected to the electronic medical record. Those were the problems that we were solving at the time. Nowadays, the role of interoperability is much more versatile because there are hundreds of applications in a modern hospital system, and everybody's talking about AI and machine learning and data analytics, advanced analytics, and somehow you have to get the data to fuel all this. So there's a lot more demand, a lot more variety of data. We're talking about unstructured data in addition to structured and coded data. So yeah, definitely a lot of change.

Charlie Harp (07:39):

I remember back in my first job in healthcare, I worked for SmithKline Beecham Clinical Labs in their stat lab division, and a lot of what I did was building instrument interfaces and reference lab interfaces with the old ASTM standard, before it was HL7. And I remember building interfaces to read data off of RS-232 ports

Joerg Schwarz (08:06):

And

Charlie Harp (08:06):

Building my own loopback connectors out of paper clips. So I mean, I think that one of the things that's interesting about interoperability is that the people who do it today don't fully appreciate the fact that they really don't have a physical interoperability challenge.

Joerg Schwarz (08:24):

Yes.

Charlie Harp (08:24):

Physical interoperability used to be the biggest challenge, and now getting the data to move from point A to point B is probably the most trivial thing we've got going on in healthcare.

Joerg Schwarz (08:33):

Right, right, right. Yeah. But the horizon shifts, right?

Charlie Harp (08:39):

Well, yeah, absolutely. Back in the nineties when we were doing this kind of thing, it was really critical systems. It was getting data out of instruments, it was getting transactions between partners. This idea of exchanging data for its own sake wasn't even something people thought about, because we didn't have a way to do it, and the systems were so siloed. I remember the systems, and this is probably true until maybe 10 years ago, when the patient left, they purged the data. That's why you filled out the clipboard every time you went into a doctor's office: you were repopulating their file with your data. They didn't keep it around. We didn't have the storage back then to do that,

Joerg Schwarz (09:28):

Right? That's right. Yeah. 2009, 2010 was a big push. That was the Affordable Care Act, and with the Affordable Care Act there was a big push for electronic medical records, and ONC at the time also mandated certain interoperability requirements for all participants. That's when we implemented IHE XDS.b interoperability profiles, exchanging CDA documents, et cetera. So there was a big push, and it's kind of funny, Charlie. I remember in those days, 2010, 2011, I was at GE Healthcare at the time, customers were saying, oh, if we only could get the data, if we could only get the data from all the different systems, I would be so happy. And we turned it on, and now they got the CDA documents and they had all the data, and then a few weeks later they said, turn it off, turn it off. It's too much data. The CDAs have so much redundant data and nobody has time to read through all of this. Turn it off, turn it off. I think

(10:47):

Be careful what you wish for.

Charlie Harp (10:50):

It's part of the problem. I think at that point, when the EMRs decided to stop purging data, they never really adopted an approach of creating a true continuum, a true longitudinal record. They just stopped deleting things. And I think one of the side effects of that is when you go to share data, a lot of the structured data is just replicated data. It's the same data from every visit over and over and over again.

Joerg Schwarz (11:20):

Exactly.

Charlie Harp (11:21):

Exactly. And when I think about the challenges we have in our industry when it comes to the data that we share, we've got to get a lot better at summarizing the current state of the patient and the relevant historical details, and not just turn on the fire hose and vomit our history out of our database into the network. I think that's one of the challenges, and I think you're right. The problem people have when you talk about exchanging data is they're like, yeah, but it's so much and I don't know what to do with it.

Joerg Schwarz (12:00):

Right,

Charlie Harp (12:00):

Exactly. If you can't ingest it, then you're putting it in front of a human, and what human is going to sit there and read through 10 years of encounter notes? It's not going to happen.
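The replicated-visit data the two describe here is usually tackled with key-based deduplication before anything reaches a human. A minimal sketch in Python; the record fields and the identity key are hypothetical simplifications, not an actual product's logic:

```python
# Deduplicate encounter records that were echoed between systems.
# Two records count as the same encounter if they share a patient,
# a date, and an encounter type (a simplified identity key).

def dedupe_encounters(encounters):
    seen = set()
    unique = []
    for enc in encounters:
        key = (enc["patient_id"], enc["date"], enc["type"])
        if key not in seen:
            seen.add(key)
            unique.append(enc)
    return unique

records = [
    {"patient_id": "p1", "date": "2023-04-01", "type": "office-visit", "source": "clinic-a"},
    {"patient_id": "p1", "date": "2023-04-01", "type": "office-visit", "source": "hie"},  # echo
    {"patient_id": "p1", "date": "2023-06-12", "type": "lab", "source": "lab-b"},
]
print(len(dedupe_encounters(records)))  # 2 unique encounters
```

A real pipeline would normalize codes and timestamps before keying, but the first-seen-wins idea is the same.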

Joerg Schwarz (12:09):

Right. And that was the fallacy, or the problem, with CDA documents, because they just get longer and longer and longer as data is amended, and doctors, or any care provider, just don't have the time to read through all these documents and find maybe the same encounter five times because it was bouncing back between different providers. You get all this repetition and redundancy, and you get stuff that is not interesting. If somebody had broken a bone three years ago, that might not be relevant for you,

Charlie Harp (12:49):

They were pregnant in 1987. There are some things that definitely need to be filtered out. Because the other thing, too, is that in many ways we're kind of lucky that we're not truly trying to ingest the data today, because what would happen is, when we get data and ingest it, I think we tend to share it out as if it's our data also. So we create this echo chamber of the same data, and I think it has the potential to snowball and become a problem if we don't figure out things like provenance and what is appropriate for us to share. I'm of the philosophy, and I'm curious what you think about this, that for us to do this better in the future, we need to, number one, come up with a solid way of tracking provenance: where the data's coming from, what's the format, what's the channel, what's the use case. And I also think that we should have a policy that we don't share data that we didn't create. What do you think about that? Am I crazy?

Joerg Schwarz (13:57):

No, I think that's a good idea. And from a technology perspective, we can do this nowadays with FHIR; we can keep track of provenance. For example, we have a FHIR server that keeps track of provenance, so you can separate data based on where it comes from. You could do that, which you couldn't do that well with the XML CDA format. Now with FHIR, you can be much more discerning about what you do. You can deduplicate the record more easily, to make sure there aren't many duplicates in there, and you can discern based on where the data is coming from. So there are definitely things you can do nowadays with FHIR that you couldn't do with previous formats.
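FHIR models the tracking Joerg mentions with a dedicated Provenance resource that links each piece of data to the agent that produced it. A toy sketch of separating data by source, using plain dictionaries as simplified stand-ins for the real resources (the field names mirror, but heavily simplify, the actual Provenance structure):

```python
# Separate clinical resources by originating organization, using a
# simplified stand-in for FHIR Provenance, which links a target
# resource to the agent (organization) that produced it.

resources = [
    {"id": "obs-1", "type": "Observation"},
    {"id": "obs-2", "type": "Observation"},
    {"id": "cond-1", "type": "Condition"},
]
provenance = [
    {"target": "obs-1", "agent": "Org/our-hospital"},
    {"target": "obs-2", "agent": "Org/outside-hie"},
    {"target": "cond-1", "agent": "Org/our-hospital"},
]

def resources_from(agent, resources, provenance):
    # Collect the ids of resources attributed to this agent,
    # then keep only those resources.
    targets = {p["target"] for p in provenance if p["agent"] == agent}
    return [r for r in resources if r["id"] in targets]

ours = resources_from("Org/our-hospital", resources, provenance)
print([r["id"] for r in ours])  # only the data we created
```

With this kind of index, a "don't share data we didn't create" policy becomes a one-line filter at the export boundary.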

Charlie Harp (14:51):

What are your thoughts just generally about FHIR?

Joerg Schwarz (14:55):

Well, thought number one is it's obviously not very well adopted right now. I can tell you, because so much of healthcare is running on Cloverleaf, that 99.5% of transactions are still in HL7 v2. That's the reality. I do think, though, that this will be changing in the near future, because there are a couple of use cases coming up where FHIR really shows what you can do that you couldn't do with HL7 v2. I think the reason FHIR hasn't been adopted very well is the saying, and in our industry this is more true than anywhere else: if it ain't broke, don't fix it. The core problems that we had in interoperability have been solved. You can share the data, you can share it in real time, you can encrypt it, so there's some privacy, et cetera.

(16:00):

So some of the core things are solved and it's working, and that's why nobody had a big desire to change it. But when you think, for example, about CMS-0057, the prior authorization rule, I think this will be a big sea change in the industry, because this is a back and forth between the provider and the payer about prior authorizations, and something that today can take several weeks, getting a prior authorization for a procedure, becomes something that can be done in seconds or minutes. You don't have to do fancy math to calculate the return on investment for something like that. Within seconds you can know, yes, I do need a prior authorization, and in a few more seconds you have all the questions and document requirements that this payer, for this patient and for this procedure, needs in order to approve it. Today, providers have to log into a portal, figure out what health plan and what requirements specifically apply to the patient, they think that because they went to the website, that's what is required, they submit it, and then they have to wait to hear back, et cetera. And then on the payer side, they have to process faxes and all kinds of different documents. I mean, we all know it, but healthcare is probably going to be the last industry where faxes are still used, right?

Charlie Harp (17:47):

Oh, yeah.

Joerg Schwarz (17:48):

Can you imagine another industry where faxes are still as commonly used as in the healthcare industry? I can't. But in healthcare it's still very common, and prior authorization is one of those cases where data is faxed from the provider to the payer so they can review it, and so on and so forth, and then it comes back. A lot of this will be automated and brought into the 21st century with a FHIR back-and-forth workflow. I think this will show people what is possible if we do use FHIR. We build a facade that can work with legacy systems, so people don't even have to upgrade their EMR to the latest version. We can build that on top of existing infrastructure, both on the payer side and on the provider side. On the payer side we map to X12; on the provider side we connect with legacy infrastructure to initiate the process and enable this real-time back-and-forth exchange. And who knows, if this works out, and we will know very soon, then there's no reason why all claims shouldn't be processed that way, instead of having the long delay, the three-month delay, between submitting a claim and settling it.
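The exchange Joerg outlines, check whether authorization is required, fetch the payer's questions, submit, and get an answer in seconds, can be simulated at a toy level. Everything below (the payer rules, procedure names, and status strings) is invented for illustration; it is not the actual CMS-0057 API:

```python
# Toy simulation of a CMS-0057-style prior authorization exchange:
# 1) check coverage requirements, 2) fetch the payer's questions,
# 3) submit answers and get a decision in seconds instead of weeks.
# All payer rules below are hypothetical.

PAYER_RULES = {
    "MRI-lumbar": {"requires_pa": True,
                   "questions": ["Duration of symptoms?",
                                 "Conservative therapy tried?"]},
    "X-ray-chest": {"requires_pa": False, "questions": []},
}

def check_requirements(procedure):
    # Unknown procedures default to requiring authorization.
    rule = PAYER_RULES.get(procedure, {"requires_pa": True, "questions": []})
    return rule["requires_pa"], rule["questions"]

def submit_prior_auth(procedure, answers):
    required, questions = check_requirements(procedure)
    if not required:
        return "no-auth-needed"
    if all(q in answers for q in questions):
        return "approved"
    return "pended-more-info"

print(submit_prior_auth("X-ray-chest", {}))
print(submit_prior_auth("MRI-lumbar",
      {"Duration of symptoms?": "8 weeks",
       "Conservative therapy tried?": "yes"}))
```

The real workflow runs these three steps as FHIR interactions between provider and payer systems; the point of the sketch is only that each step is a fast, structured query rather than a fax.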

Charlie Harp (19:13):

I've been doing some work recently with claims data, a lot of claims data actually. And the first thing that hit me, and it surprises me every now and then when I'm surprised in healthcare, but I had not been in the payer space. I've been really in the provider space, in life sciences and pharma. I was expecting when I got involved in some claims activity that I'd be dealing with X12, and nothing that I got was X12. It was all these proprietary flat files,

(19:43):

fixed-length files, delimited files. So I think that's definitely something that FHIR would be good at resolving. And I think it's interesting. If you look at the healthcare transactions that are happening right now with PIQI, we've been doing a beta and we're working with one HIE so far, and when you look at the data, the C-CDA data is okay; when you look at HL7 v2, BAR is not good, ADT is meh, and what's good is the ORU messages.

(20:20):

The ORU messages are at like 78% on the quality scale; BAR is at like 8% on average. And I think that with this payer example you're talking about, with FHIR and CMS-0057, for us to really tighten down and focus on getting something right, there has to be some kind of either regulatory pressure or monetary benefit. I think that's part of the challenge we've had with interoperability of clinical data: it was always kind of a you-have-to-do-it, the spirit of the law versus the letter of the law. But I kind of believe that today nobody is truly ingesting this clinical summary data that we're exchanging in a meaningful way, which is why it's never really become something that people feel like they can trust and rely on, and why we've never really focused on the quality before now. Because unlike a financial transaction, like a claim or a prior auth, there's been no direct benefit where somebody feels like, yes, this delivered a lot of value. I think when it comes to clinical interoperability, the value is speculative; we don't have a way to put our finger on it and say, this is what drives value. And I think that's starting to change with all the things we're trying to do with value-based care and AI.
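A completeness score along the lines of the ORU-versus-BAR numbers can be approximated by checking each message for a set of expected attributes. This is not the actual PIQI scoring model; the required-field lists below are illustrative stand-ins:

```python
# Score a batch of messages by the fraction of required attributes
# that are present and non-empty. The field lists are illustrative,
# not the real PIQI criteria.

REQUIRED = {
    "ORU": ["patient_id", "test_code", "value", "units", "reference_range"],
    "BAR": ["patient_id", "account", "diagnosis_code", "procedure_code"],
}

def quality_score(msg_type, messages):
    fields = REQUIRED[msg_type]
    if not messages:
        return 0.0
    hits = sum(1 for m in messages for f in fields if m.get(f))
    return hits / (len(fields) * len(messages))

orus = [{"patient_id": "p1", "test_code": "GLU", "value": "140",
         "units": "mg/dL", "reference_range": "70-99"}]
bars = [{"patient_id": "p1", "account": "", "diagnosis_code": None,
         "procedure_code": "99213"}]
print(quality_score("ORU", orus))  # 1.0
print(quality_score("BAR", bars))  # 0.5
```

A production scorer would also check conformance (valid codes, plausible units), but even simple presence checks surface the gap between message types.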

Joerg Schwarz (21:47):

Yeah, a hundred percent. Value-based care is going to be a big driver. Prior authorization is going to be a big value for providers, the administrative burden, et cetera. And as you said, value-based care cannot exist without data and data analytics. A lot of people have realized this. In the fee-for-service world, it's all about volume and being efficient at providing a large volume of services, but quality outcomes and data analytics and prevention are really not incentivized. And that is really the crux, the core, of value-based care. You do have to get into preventive care, you do have to provide quality outcomes, and for that you do need data analysis. And that requires a lot more data, and understanding of the data, than we've needed in the past.

Charlie Harp (22:51):

I mean, it all goes back to that report from the Institute of Medicine, the To Err is Human report. It kind of drove CPOE and all these initiatives around the idea that software can augment a provider to improve the quality of care. And I think that's been very speculative. I was in the decision support business, and even with good decision support, there are a lot of things that don't get incorporated into the data in a way that can really support the provider, outside of the provider making their own cognitive leaps and decisions about what's happening with the patient.

Joerg Schwarz (23:32):

A good example is care gaps, right? I recently published a paper about this. I think electronic clinical quality measures, or digital quality measures, are widely underused. There are hundreds of quality measures that represent the best practice of care, but EMRs use them today only to report, meaning you run the report once a year and you say, this is my quality. So we want to change this. We're working on a product right now that you can run every week. You can consolidate the data from across your ACO, your care network, and run it against quality measures to identify where the gaps in care are. And that's one thing that EMRs don't do proactively today, right? They record what the doctor does, but they don't tell the doctor what they should do: oh, this person hasn't had a foot exam or an eye exam in over a year, this is what should be done. And it's understandable. Care providers have a lot on their plate. They have to see a lot of patients. How can they know what they should be doing with each specific patient? If we could get closer to this, be more prescriptive, this is where the care gaps are for this particular patient, I think this would go a long way toward improving quality.
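The weekly care-gap run Joerg describes can be sketched as a simple rule over last-service dates. The one-year foot-exam rule below is an illustration, not an actual eCQM:

```python
from datetime import date, timedelta

# Flag diabetic patients whose last foot exam is more than a year
# old, or missing entirely. Illustrative rule, not a real measure.

def foot_exam_gaps(patients, today):
    cutoff = today - timedelta(days=365)
    gaps = []
    for p in patients:
        if not p.get("diabetic"):
            continue
        last = p.get("last_foot_exam")  # a date, or None
        if last is None or last < cutoff:
            gaps.append(p["id"])
    return gaps

patients = [
    {"id": "p1", "diabetic": True, "last_foot_exam": date(2024, 1, 10)},
    {"id": "p2", "diabetic": True, "last_foot_exam": None},
    {"id": "p3", "diabetic": False, "last_foot_exam": None},
]
print(foot_exam_gaps(patients, today=date(2025, 6, 1)))
```

The hard part in practice is not the rule but the input: the consolidated, normalized patient list across the ACO that the rule runs against.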

Charlie Harp (25:08):

Well, one of the things that's interesting: several years ago we built a product called the Inference Engine. I think we called it the Advanced Clinical Awareness Suite. What it does is it looks at the data; it's a data quality analysis tool that looks across all the things it can see about the patient. And we built inferences. My clinical team built them to look for things that are missing, or that don't look right, or that indicate something. One of those looked for diabetic patients. What it did is it looked at the patient's data, and if the patient had no mention of diabetes, it looked at their lab results, it looked at their medications, it looked at their comorbidities, and it would basically create a message that said, hey, this patient looks like an undocumented diabetic. Because, to your point on gaps in care, to be able to see that there's a gap in care, you have to have a clear picture of what is going on with the patient. What was interesting about this was that in a six-month period, in this one health system, we found 3,600 undocumented diabetics.

Joerg Schwarz (26:16):

Wow. 3,600 out of how many?

Charlie Harp (26:21):

Out of about a million patients.

Joerg Schwarz (26:23):

Wow.

Charlie Harp (26:24):

The same approach found, I think, 2,300 undocumented heart failure patients by looking at ejection fraction data. And what was interesting is when we reported this back to the providers, and this is where I got the idea that we need a data steward for patient data, the provider was like, yeah, yeah, I know the patient's diabetic, that's why they're on metformin. And I put that in my note,

Joerg Schwarz (26:50):

Yes

Charlie Harp (26:51):

But it never made it into the structured data. And I don't think people realize that if it's not in the structured data, you're not getting credit for it in your quality measures. Those people aren't getting picked up by your care management programs that are making sure that everybody's getting their foot exam,

(27:07):

you are missing those patients. They are invisible to the technology that we're employing to try to augment and help the provider. So I just thought that was fascinating. They ended up not turning on the product because the provider was like, yeah, I already know.

(27:25):

The problem is the software doesn't really know.

Joerg Schwarz (27:27):

Yeah, that's right.
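The inference Charlie describes, flagging likely diabetics with no coded diagnosis, can be sketched as a rule over labs and medications. The thresholds and code lists here are simplified illustrations, not Clinical Architecture's actual logic:

```python
# Flag patients who look diabetic from labs or medications but have
# no coded diabetes diagnosis. Simplified clinical logic: an HbA1c
# at or above 6.5% or an antidiabetic medication counts as evidence.

DIABETES_DX = {"E11.9", "E10.9"}              # sample ICD-10 codes
ANTIDIABETIC_MEDS = {"metformin", "insulin glargine"}

def looks_like_undocumented_diabetic(patient):
    has_dx = bool(DIABETES_DX & set(patient.get("diagnoses", [])))
    if has_dx:
        return False  # already documented, nothing to flag
    high_a1c = any(lab["test"] == "HbA1c" and lab["value"] >= 6.5
                   for lab in patient.get("labs", []))
    on_meds = bool(ANTIDIABETIC_MEDS & set(patient.get("medications", [])))
    return high_a1c or on_meds

patient = {
    "diagnoses": ["I10"],                      # hypertension only
    "labs": [{"test": "HbA1c", "value": 7.2}],
    "medications": ["metformin"],
}
print(looks_like_undocumented_diabetic(patient))  # True
```

The point of the exchange above is exactly this: the evidence is already in the record, but until the diagnosis lands in structured data, quality measures and care management programs never see the patient.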

Charlie Harp (27:29):

So speaking of software augmenting care, what's your take on all this stuff that people are trying to do with AI, with generative AI right now?

Joerg Schwarz (27:41):

I like a quote that I heard from you at HIMSS, that AI on bad data is artificial stupidity. I use that quote several times, because we are working on a couple of projects. For example, we have a collaboration with NVIDIA. NVIDIA has the Holoscan platform, which is a RAG, rules-as-code platform. A lot of people now talk about agentic AI, and in order to do agentic AI, you need to feed data into a RAG, rules as code, and develop the intelligence, so to speak, for the agentic AI. So in order to do that, we agreed on a data standard that we feed into the Holoscan platform, and then we take all the incoming data and convert it into that standard that NVIDIA expects, because NVIDIA doesn't want to deal with all the complexity of different implementations. And let me tell you, you go into a healthcare system and even if, let's say, they have Epic, but they have three different instances of Epic, you see huge variability between the three instances of Epic.

(29:03):

So the saying that I've heard many times in the industry is a hundred percent true: if you've seen one implementation of Epic, you've seen one implementation of Epic. And that can even be true within one system. If one system has multiple instances, there can be variability, and variability is bad. When you try to train an AI model, you have to bring the data into a normalized form to get good, solid, and consistent data. Because if you train the algorithm on bad data, meaning that sometimes data sits in different places, and the example that you brought is a perfect example: if I train the algorithm to look for a diabetic diagnosis and it's not there, because it's sitting in the notes but not in the coded information, it doesn't pick it up. So it now has patients that are diabetic grouped in with the patients that are not diabetic.

(30:16):

And so all the lab results and everything that you train the AI on get completely convoluted, because now you basically have diabetics and non-diabetics mixed together, but the AI doesn't know that, so it cannot make accurate inferences when you feed in lab results. So if you want accurate AI, and you know that when you go to a conference nowadays everybody has AI, everybody talks about AI, but if you don't have a clean, solid data foundation to base your AI models on, then the AI will not be good. Unfortunately, it's the old garbage-in, garbage-out principle.

Charlie Harp (31:07):

Absolutely. I think one of the challenges we have as human beings is we tend to anthropomorphize everything, whether we're looking at a cloud or at our cat, or whether we're talking to AI. And I think AI does a really good job of tricking us into thinking it's a really smart person, when in reality AI is just a probabilistic algorithm that's looking at data patterns and returning information, and it does it incredibly well. But I don't think we fully understand it as lay people. I don't consider myself to be an AI expert, but I think as lay people we don't always appreciate what biases do to AI. And exactly like you said, if I'm training AI, or I'm prompting AI, with data that's not good, the result is not reliable, especially with the variability in the way that AI works.

(32:05):

And I think it's a challenge. The other thing about AI, and I tell people this now: if you remember the beginning of the internet, internet searches were all kind of straightforward, and that's true of almost anything, whether it was YouTube or Google; any of these things were relatively straightforward. And then somebody comes along and figures out how to monetize it. I don't think they've figured out how they're going to monetize AI yet. And I think when that happens, people are going to start sponsoring ideas and say, hey, when they ask a question like this, I want you to give them the answer that comes to me.

(32:43):

And I'm a little worried about that when it comes to AI. There are two things that worry me, because we tend to get hooked on something, and then whoever we get hooked on decides they're going to monetize that in some way. So it's free for a while, or cheap for a while, and then that changes. That's just the nature of things. But I think the monetization of the AI patterns is one thing. And the other thing that concerns me is how much content today is starting to be produced by AI. AI originally took human content and used it for reasoning. Now, how much of the content that AI is going to be using is originated by AI, where you get to the point where, it's like the lemming effect, the AI is talking to the AI and

(33:33):

Everything we're getting is no longer actually derived from a human being having new ideas or thoughts. I don't know. I'm curious how the industry is going to deal with that.

Joerg Schwarz (33:48):

Yeah, it's a little bit like the snake eating its own tail, right?

Charlie Harp (33:52):

That's right. Ouroboros.

Joerg Schwarz (33:55):

Yes.

Charlie Harp (33:55):

How did I remember that? So one of the things that's come up recently, and I'll just self-servingly hit on this a little bit, is the PIQI Alliance. Infor has agreed to get involved with the PIQI Alliance, and we're super excited. Any thoughts you want to share, or what's your interest in the PIQI Alliance?

Joerg Schwarz (34:16):

Exactly what we talked about. Traditionally, Cloverleaf has been more or less shipping data from one place to another place. Of course, we look into the header and we see, okay, this is this type of message, it's an ORU message, and we know these are the recipients that want to get ORU messages, and then we send it to them. Or we decrypt the message, look in the header, encrypt it again, and then we send it where it needs to go. That's the traditional interface engine, right? But now with FHIR, we start to aggregate the data, deduplicate the data, de-identify the data, et cetera. So we do a lot more higher-value tasks. And once you start doing this in the context of what I told you about the product that we're building at Infor for population health analytics, we realized, when you aggregate the data from multiple EMRs, that that variability is really a big problem, and it's not necessarily the fault of the EMRs.

(35:30):

It's that people, like you said in your example, put something in the notes that should be in the coded information, or they put it in an observation instead of results, et cetera. So data is often incomplete, and if you want to get good analytics, you have to wrangle that Informonster, to use your term. And I don't take that personally; plus, your Informonster is so cute, how can you not love it? But yeah, you have to wrangle that Informonster. We just worked with a customer on getting their submission ready for CMS, and we had to go back over 20 times, maybe even more than 30 times, because the data wasn't complete. And we said, well, look at this, there are several hundred patients that don't have medication records. Go back and send us the data with medication records, or lab results, or whatever it was.

(36:40):

So we found this incomplete data, we had to go back to the source and they had to tune how the data is exported and so on and so forth. So when I heard about the PIQI Alliance where you work with the industry to improve the quality of data, so we can do things like utilizing digital quality measures to identify care gaps, we need clean complete data to do that. And that's why I think the PIQI Alliance is something that has a very good mission, and that's why we immediately joined the PIQI Alliance. I think at some point in time, I will be proud to say that we're one of the initial members of the PIQI Alliance.

Charlie Harp (37:28):

Well, I think part of the reason why I wanted to spin it up in the first place is to bring industry thought leaders together to take on this non-trivial issue and help move healthcare forward. And I'm very grateful that Infor has become a part of it, so I really appreciate that. One last topic I wanted to bounce off of you, as a salty dog like me: I've had conversations with folks out in the industry, and one of the things that comes up every now and then, and I want to check myself on this because I have certain feelings about it, but people will say, oh, Charlie, just think, in five years we'll look back and say, remember when we used to need terminologies in healthcare? The idea being that with the advent of AI, we'll just be able to do everything we need to do on unstructured data. Do you think that's realistic? I don't think that's realistic.

Joerg Schwarz (38:30):

I agree with you. I agree with you. I remember many years ago, at the time I worked at Agfa HealthCare, I was with our chief medical officer and we visited a startup company in the valley. This was the early stages of AI, but they were working on this, and they were explaining to us how they would throw all the data in and then let AI sort it out. And we said, so you don't care about the ontologies? And they said, no, no, no, no, no. The AI will figure this out. This is all outdated. And of course, they never got anywhere because they completely ignored this. And I think there's a reason why we have the ontologies, because there's so much nuance that is expressed in them. That's why they're so complex: there are so many different types of cancer, or so many ways to describe appendicitis, how severe it is, what other organs might be affected, et cetera, et cetera. So there's a reason why we have these ontologies, and I don't think we can get away without them. Maybe it'll be easier to map between them, just translating between the different ontologies. So it'll probably be easier to normalize if people use different ontologies or different versions and map between them. I can see that, but I cannot see this going away, where we throw everything in a big pot of soup and then the AI will sort it all out.

Charlie Harp (40:17):

No, I agree. I think that the ontologies that we have and the terminologies we build, I kind of see them as anchors, like deterministic anchors and guardrails that we can use. And I'm not an AI naysayer. I think that AI as a technology has a lot of applicability in accelerating what we can do. I use it with code and things like that today. But every now and then I've got to slap it upside the head and say, what are you doing? We said we're not going to do that. And it always comes back and says, you're absolutely right, Charlie. We did say we're not going to do that. And I'm like, it's acting like it knew all along that it was making a mistake. But anyways, when the person said that to me, in five years, I said, going back to what you said earlier, you realize we still use fax machines, right?

Joerg Schwarz (41:08):

Right, right. Yes,

Charlie Harp (41:09):

Yes. Maybe in a hundred years we won't have terminologies anymore. Maybe we'll have it all figured out. I think that we can use AI to fuel human discovery. It can see things because of what it can process and how it can process. And if we can figure out how to manage the biases, because it's all based upon all the stuff we did before, whether it was right or wrong, I think it's going to definitely be an interesting journey. My biggest challenge for a lot of folks in healthcare, and we've done this with NLP, we've done this with other technologies, is that we try to adopt a technology, we don't use it correctly, and then we decide the technology is bad. And with AI, in fact, I've already heard some of this is happening in parts of the payer industry, where they have applied AI, it caused a bunch of issues, and now they're saying, we're not going to use AI anymore. So I just think that we have to be smart about how we apply AI to the problems and not just assume that it's going to replace and handle everything for us. I just don't think that's realistic.

Joerg Schwarz (42:17):

Absolutely. Yeah. Remember when IBM Watson came out and how they said it could replace the radiologist, because Watson can be better at examining pictures than a radiologist, and blah, blah, blah. And that was always hype and obviously wasn't true, and it created a lot of skepticism that vanished when OpenAI came out with generative AI. And people are now enthusiastic again. But I think we'll go through the same hype cycle, and at some point in time we get to the realistic expectations and the realistic use cases. And by the way, to pick up on something that you said, I moderated a panel last year with a bunch of people from the healthcare industry, and I asked the audience who was still using faxes, and a lot of hands went up. And at the end of the panel I asked, do you think we will still use faxes in five years? And a lot of hands went up. So you said in five years AI will replace everything. Well, according to my panel and the audience last year, in five years we'll still be using faxes.

Charlie Harp (43:41):

Every now and then, I do come up with a thought where I'm like, I wish I had a fax machine or a dot matrix printer. There are certain things that were just right for certain use cases, but I'm an old timer. I guess that's expected. Well, the thing that I'm super excited about is what's happening with data quality, and I feel like AI has been a weird driver of a wake-up call that data quality needs to be better, because I've been beating my head against that wall for 15 years where people don't believe the data's not good. And I think having a way to measure it will lead to improving it. And I'm super excited about that because I think if we can do that, there are so many use cases downstream of having trusted data, whether it's for old school analytics and population health and measures or AI. And I'm excited that there are organizations like Infor that are embracing all this stuff to help make things better. So any last thoughts before we wrap it up for today?

Joerg Schwarz (44:51):

No, it's a valiant quest to go for data quality. And I couldn't agree more with you, Charlie. That's why I fully embrace the Informonster.

Charlie Harp (45:03):

Well, and I officially, on this podcast, declare that Joerg Schwarz is the OG Informonster, now and forever.

Joerg Schwarz (45:17):

Okay. Alright. I take it. I take it.

Charlie Harp (45:21):

Alright, my friend. Well, hey, thank you for your time today. I really appreciate it. I am Charlie Harp, and thanks to everybody for listening. This has been The Informonster Podcast.