Informonster Podcast

Episode 14: The Impact of Logica and How the Healthcare IT Industry Can Come Together, Part 2

December 14, 2020 Clinical Architecture Episode 14

Charlie Harp is joined by Logica's Chair of the Board, Stan Huff, and Chair of the Nursing LOINC Subcommittee, Susan Matney, along with Clinical Architecture's very own Chief Informatics Officer Shaun Shakib and EVP of Client Services Carol Macumber, to discuss the history and impact of Logica on the healthcare IT industry. In this second part of the Informonster Podcast's first two-part series, they discuss how information modeling interacts with terminology, why a standard by itself is only effective if we all agree to follow it, treating decision support like a medical device, how Logica is changing, and how decision support built on open and reliable data can ease provider burden in the trenches.

Contact Clinical Architecture

• Tweet us at @ClinicalArch
• Follow us on LinkedIn and Facebook
• Email us at informonster@clinicalarchitecture.com

Thanks for listening!

Speaker 1:

I'm Charlie Harp.

Speaker 2:

And this is the Informonster Podcast. This episode is part two of our two-part series on Logica Health. If you have not listened to part one, I suggest you go back and find it; this part will make a lot more sense if you do. This episode picks up where we left off with Stan Huff and Susan Matney from Logica Health and my Clinical Architecture colleagues, Carol Macumber and Shaun Shakib.

Stan, Susan, can I ask, maybe if we could go back a little bit, and, is my audio coming through or is it really light? So maybe if we back up a little bit. You've mentioned information modeling and terminology, and how those are both important components to achieving interoperability, but we have a wide variety of folks that listen to this podcast, so maybe just some of the fundamentals: why wasn't it enough, for example, when we licensed SNOMED for free for use? Why didn't that alone solve the interoperability challenge? What is information modeling, how does it interact with terminology, and why is that important for interoperability?

So I'll share the experience. You go back to the early nineties, and Version 2 of HL7 had come out, version 2.3, 2.3.1, and you had people using the HL7 standard. For those who are familiar, you have different kinds of messages, and those messages are made up of segments, and each segment has fields in it, et cetera. But what you have, basically, if you think about it, is a data structure. It's the form, or I don't know a better word than structure, that needs to be bound to particular meanings. And back in the early nineties, for a lot of people using HL7, there was no common terminology at all. I think that's the same era when the phrase was coined: if you've seen one HL7 interface, you've seen one HL7 interface. For every single interface you had to map manually between the codes that were being used in one system and the codes that were being used in the other system.

Now, when we started making LOINC codes, working with Clem McDonald and Regenstrief, we were struck with the realization that for things to be truly understood and interoperable, they needed to be in the context of a data structure. In other words, you had the OBX segment, and it provided the context for what the meaning of the code was. So you could think of the total meaning per instance: if you're in an order record, that code means this is something that I want you to come and do; I want you to come and draw the blood and do the testing on this. If it's in a result record, then the LOINC code is the name of the measurement that you made, a glucose level or a red blood count or a white blood count. And that led, basically, to the understanding that to get to a truly computable meaning, you needed to have a structure and you needed to know where the codes were used in the structure, because the structure adds context to the meaning. Those two things taken together can get you to a pretty computable meaning. But that evolved from sort of thinking about and recognizing that to saying, well, what if we wanted to think about that structure?
Not in the context of just an HL7 OBX segment, but as a logical model, if you will, that presented the logical structure of the information but then could be translated into many different syntaxes. It could be translated from an HL7 Version 2 style of information exchange, or it could be placed in a CDA document, or it could be placed in a CDISC message, or it could be placed in a FHIR transaction using a FHIR API.

Information modeling really is not something you can separate into two parts. I mean, there are two parts that you can talk about, the structure and the terminology, but those two things have to be considered at the same time in order to make something that is explicitly defined and computable. Because if you're not careful, you can get into a situation where the same data can be expressed either as part of the meaning of the code or as a separate element in the data structure. A common example is the LOINC codes that mean, basically, a serum glucose done by a testing laboratory versus a glucose level that's done on a home glucometer. There are at least three codes in LOINC that pertain to that: there's a general glucose level that doesn't have any mention of the methodology by which it's done, and then there are LOINC codes that say this is a glucose that was done by glucometer, or done by a test strip, where the LOINC code itself contains that meaning. You have to think about those things so that you're not saying "glucose done by glucometer" with a method of "glucometer" and getting that kind of redundancy and confusion in the representation. So that's a little bit, at least, about what we mean by information modeling and terminology binding.
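To make the redundancy problem Stan describes concrete, here is a rough sketch in Python. It is not LOINC tooling or anything from Intermountain; the record shape, the check, and the specific codes (which should be verified against current LOINC content) are purely illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """A tiny stand-in for an observation record: structure plus bound codes."""
    code: str                     # what was measured (LOINC-style code)
    code_display: str
    value: float
    unit: str
    method: Optional[str] = None  # separately coded method, if any

# Codes below are illustrative and should be verified against LOINC.
PRECOORDINATED_METHOD_CODES = {
    "41653-7": "Glucose [Mass/volume] in Capillary blood by Glucometer",
}

# Option A: a pre-coordinated code carries the method inside its meaning.
obs_a = Observation("41653-7", "Glucose by glucometer", 110, "mg/dL")

# Option B: a generic code plus an explicit method element.
obs_b = Observation("2339-0", "Glucose in Blood", 110, "mg/dL", method="glucometer")

# The problem case: the method is asserted in both places.
obs_c = Observation("41653-7", "Glucose by glucometer", 110, "mg/dL", method="glucometer")

def check_binding(obs: Observation) -> str:
    """Flag the modeling problem: saying the method twice."""
    precoordinated = obs.code in PRECOORDINATED_METHOD_CODES
    if precoordinated and obs.method:
        return "redundant: method is in the code AND in the method element"
    if precoordinated or obs.method:
        return "ok: method stated exactly once"
    return "ok: method not stated"

for obs in (obs_a, obs_b, obs_c):
    print(obs.code, "->", check_binding(obs))
```

Either option A or option B can be computable; the point is that the model has to pick one and bind the terminology accordingly, or downstream systems cannot tell whether the two statements of the method agree.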

Speaker 3:

The idea is that you create a structure, and the structure is really made up of what you think of as fields. A field has some name, like the test that was ordered or the test that was resulted, and then you have a value for that field, and then you have a field that contains the actual measured value. That information taken together, the combination of the coded structures and the terminology, really brings together the information model and the terminology.

You know, it's interesting when you go back in time. My first job in healthcare was developing interfaces for SmithKline Beecham Clinical Labs in 1989. That was back when I was writing interfaces for different pieces of equipment in the lab, and I was also creating ASTM lab messaging interfaces, which were OBX records and OBR records, and also a lot of these custom serial interfaces. What a lot of people don't realize is that back then, when you were getting a reference lab feed from someplace or pushing reference lab data, you didn't really map it, because what you were ultimately doing was putting it on a piece of paper to put in front of somebody. The human was the computer that was taking in whatever was being provided, and they were doing the cognitive lift to figure out what it meant.

Where we are today is, we're really trying to let the computer help the human, let the computer help merge the data into a pattern so that we can do things that a human can't reasonably do, or can't easily do. A lot of that lift from back when we were putting things on paper in front of a provider, to where we're doing things like population health and analytics and patient portals, is a big part of why it's important that we have these agreements. And really that's what it is, right? We are forging agreements amongst ourselves that this is how we're going to say that, because a standard by itself is kind of like the pirate's code in Pirates of the Caribbean: it's more like guidelines. People have to come together and say, this is how we're going to agree to leverage this standard, whether it's a terminology or whether it's a syntax. Do you think that's a fair statement, or do you think I'm off base?
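The OBX records Charlie mentions building interfaces around can be sketched, very roughly, like this. This is not a real HL7 parser; the field positions are simplified to the few needed for the illustration, and the sample segment is made up.

```python
# A minimal, simplified HL7 v2-style OBX reader -- real OBX segments have many
# more fields, escaping rules, and repetitions; this only shows "structure + code".
def parse_obx(segment: str) -> dict:
    fields = segment.split("|")
    code, display, system = fields[3].split("^")   # OBX-3: observation identifier
    return {
        "set_id": fields[1],
        "value_type": fields[2],   # OBX-2, e.g. NM = numeric
        "code": code,              # the terminology half of the meaning
        "display": display,
        "code_system": system,
        "value": fields[5],        # OBX-5: the value that was measured
        "unit": fields[6],         # OBX-6: units
    }

segment = "OBX|1|NM|2339-0^Glucose Bld^LN||110|mg/dL"
obs = parse_obx(segment)

# The same code takes its meaning from where it sits: in a result (ORU)
# message it names a measurement that was made; in an order message the
# equivalent code names something you are asking to be done.
print(f"{obs['display']} ({obs['code_system']} {obs['code']}): {obs['value']} {obs['unit']}")
```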

Speaker 4:

I think it's totally right. This is Susan. I mean, they are patterns. We started with ASN.1, Abstract Syntax Notation One, and now we're on the Clinical Element Modeling Language, which we have aligned with FHIR. We have more elements than FHIR because we're doing a hundred percent of a wound assessment; we're not letting you just assess 80% of it. We have those patterns at the highest level, and then we extend them or constrain them based on what type of clinical care it is. Is it a procedure? Is it a medication administration? Is it an order? All of those have specific patterns.
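A loose sketch of what Susan describes, a high-level pattern that gets extended or constrained per clinical domain, might look like this. It is not CEML or FHIR profiling, and all of the element names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# A toy "pattern" hierarchy -- not CEML or FHIR, just an illustration of
# constraining/extending a highest-level model for specific kinds of care.
@dataclass
class ClinicalStatement:                 # highest-level pattern
    subject: str
    recorded: str                        # ISO 8601 timestamp

@dataclass
class MedicationAdministration(ClinicalStatement):   # constrained pattern
    medication_code: str = ""            # binding to a drug terminology
    dose: float = 0.0
    dose_unit: str = ""

@dataclass
class WoundAssessment(ClinicalStatement):            # extended pattern
    # Extra elements so 100% of the assessment can be captured, not 80%.
    location_code: str = ""
    length_cm: Optional[float] = None
    width_cm: Optional[float] = None
    depth_cm: Optional[float] = None
    exudate_type_code: Optional[str] = None

wound = WoundAssessment(subject="patient-123", recorded="2020-12-01T09:30:00Z",
                        location_code="sacrum", length_cm=2.1, width_cm=1.4)
print(wound)
```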

Speaker 2:

One thing that I'd just like to express as part of this is more about the why. I said in general that we want to create this healthcare ecosystem where you have a standards-based ability to create applications that are shareable across healthcare providers. Going back to things that we've been doing at Intermountain Healthcare, starting with Homer Warner and Reed Gardner and Al Pryor and Paul Clayton, a whole bunch of pioneers in the informatics arena, and I would mention specifically also Scott Evans, who did the actual development of a lot of things: we, and I say the collective we, I'm a very small part of that, created hundreds of decision support applications that did things like suggest the best antibiotic, detect ventilator disconnects in the ICU, guide therapy of patients who are on chronic anticoagulation, and guide people in ordering the right kind of blood product based on what was going on with the patient, whether you use packed red cells or fresh frozen plasma or other things, et cetera. And over time we developed this feeling that we could only provide the highest quality and most appropriately priced healthcare if we used advanced decision support in the care of the patient. In all of those things that I've mentioned, we provided higher quality care implementing those kinds of algorithms, and almost invariably lowered the cost.

Then you think about that as a microcosm of what's possible in the world. There are statistics out there, very good studies. The one that I'm thinking about was published a couple of years ago in the British Medical Journal; the authors were from Johns Hopkins University, and their estimate was that 250,000 people a year die of preventable medical errors in the United States. That's sort of hard to grasp. As bad as COVID is, we're now just starting to approach the number of people in the United States who are dying from preventable medical errors. That's a number that's six times greater than the number of people who die in automobile accidents. It's 10 or 12 times the number of people that are affected by narcotic addiction and abuse. It's an astounding number. There are other studies that show physicians do the right thing about 50% of the time.

Those errors and those challenges are not going to go away unless we change what we're doing. It's not going to happen by teaching people better in medical school, and it's not going to happen by the typical zero-harm sort of programs, because both of those things assume that somebody knows the right thing to do. And everybody can't know everything; people can only know and use some very small area of knowledge where they may be the best expert. The only way for us to start making a dent in that kind of medical error is getting to a point where we can share correct, local decision support applications. And the challenge, and this comes back to interoperability and terminology and the modeling and all that sort of stuff, the challenge that we have is that at Intermountain we did those hundreds of things, and they haven't gone anywhere else. You can't use them at some other institution, because they don't have the same infrastructure, they don't have the same codes, they don't have the same assumptions.
So that's what we're trying to create: an environment where, if we figure out a good program for diagnosing and managing pulmonary emboli, if we can establish the infrastructure and the standards and use those standards as we access and use data in decision support, then that can go to another institution, hopefully with little or no modification, so it's ready to use. Likewise, if the Mayo Clinic does something good, creates a good program for diagnosing sepsis, occult sepsis, in the emergency room, or a great algorithm for weaning people from ventilators, or you go down the list, we can share those. And that's important, because no single institution has the resources to create all of the kind of software that would be useful and appropriate for patient care. Intermountain can't do it alone. Kaiser Permanente can't do it alone. You take the hundred things that we've done, and without exaggeration there are 10,000 things to be done, or more, probably even more than that. We've done things that are high volume, where there's high patient risk, but go down the list a little bit and say, well, what have you got there electronically that's going to help with management of Hashimoto's disease, or management of asthma, or management of fibromyalgia, or helping us determine the right time for a C-section, or other things? Literally there are 10,000 things like that that we could create, and unless we get to a point where what we create is shareable, we're never going to realize the potential improvement in the quality of care that we could realize based on those interoperability standards.

Speaker 3:

I think you make a great point. I spent 10 years at FDB, and four years while I was with FDB also at Zynx, and spent a lot of time in the trenches looking at the different ways people leverage things like evidence-based medicine and artificial experience. The ability to take what we know today and push it out and make it available to people who may not know it is really important. But I also think the other thing that limits our ability to leverage that is the quality of the information that we present to decision support. That's the other challenge: data quality being good, being understandable or interoperable, and being complete is, I think, another thing that we still struggle with as an industry. What are your thoughts on that?

Speaker 2:

No, absolutely. Along with what I would call the technical part of the modeling and the creation of value sets and codes, there also has to be a part of this that is based on clinical experts, the nurses and physicians and respiratory therapists and pharmacists and other people, who say, oh, you know, the kind of information that we need to make an accurate diagnosis of myocardial infarction in the ER: we need a blood pressure, we need a heart rate, we need a pulse oximeter, and then we need the different cardiac markers or enzymes that are indications of injury to the myocardium. And there has to be an agreement, because if there's no agreement, it doesn't matter if we have a model for it. If the clinicians don't agree that that's the data that's needed to make an appropriate diagnosis, if that's not the food that the decision support module needs to eat in order to be effective, then we're nowhere; just making the models doesn't help. So you need clinicians in at least two ways. One is to say what data we need, and then to agree that it's been modeled accurately. And I guess the third part is to actually figure out a way that it happens. Sometimes it might be just teaching clinicians that if you enter the data accurately, we can provide you with more help. But it's absolutely more than the technology. You have to have the people who know what's important guide the creation of the proper data elements and the representation of that information, to know that you're saying it in a way that is meaningful and defined, the way that a clinical expert would say it, in order that you have the raw material you need for these higher-level applications and decision support processes to work.
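A minimal sketch of the "food the decision support module needs to eat" idea: a clinician-agreed list of required inputs and a completeness check before any logic runs. The element names and the notion of an MI evaluation rule here are placeholders, not a published value set or guideline.

```python
# Illustrative only: element names are placeholders, not a real value set
# or diagnostic rule, and the required list would come from the clinicians.
REQUIRED_FOR_MI_EVAL = {"systolic_bp", "heart_rate", "spo2", "troponin"}

def missing_inputs(patient_data: dict) -> set:
    """Return the clinician-agreed elements we don't have yet."""
    return {e for e in REQUIRED_FOR_MI_EVAL if patient_data.get(e) is None}

patient = {"systolic_bp": 104, "heart_rate": 118, "spo2": 93, "troponin": None}

gaps = missing_inputs(patient)
if gaps:
    # Don't run (or don't trust) the logic until the agreed inputs exist.
    print("Cannot evaluate yet; missing:", ", ".join(sorted(gaps)))
else:
    print("All agreed inputs present; decision support can run.")
```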

Speaker 3:

Absolutely. Decision support in healthcare is near and dear to my heart, and one of the things that I think we periodically struggle with, and I'm really curious about your thoughts on this, is risk aversion. When I've done things in the past, like back in my early days, occasionally we would go to somebody and we would talk about leveraging decision support work they were doing in their system and commoditizing it or commercializing it, taking it out and packaging it. Both in terms of what you hear in the industry about treating clinical decision support like a medical device, and people who are concerned that if they give advice, where they recommend something and it gets outside of their ability to control it and somebody uses it, that can come back at them. What are your thoughts on that, relative to the stuff you guys have been doing with these models?

Speaker 2:

That's an important question, and I've got a jumble of thoughts about this. One is, you think about medicine, and I'll use my own example. I had prostate cancer, had surgery. Once I knew that I needed surgery, I tried to look at the medical literature and understand the difference between quote-unquote traditional prostatectomy versus a robot-assisted da Vinci device. What I found, I'll tell you, is that I'm convinced that the surgical literature is not science. There are so many variables in there: it's different surgeons, you don't know if they're doing the procedure in exactly the same way, it's not randomized or controlled in terms of what procedure people receive, all of that kind of stuff. My point in all that is, that's just one example within medicine where we don't know exactly the right answer. And for some reason people think of clinical decision support as something different, and they always think about the risk rather than the benefit.

Clinical decision support doesn't have to be a hundred percent accurate to be good for the population. Medications aren't good for every patient, and we're learning more and more about that. But if the medication in fact saves 10 lives, and maybe it has a complication of death or morbidity in one, we've saved nine people. We've got to do absolutely the best we can, and we have to do real science to understand the impact and to correct things, but we shouldn't expect that decision support has to be perfect before it's valuable to use it and to help patients with it.

In terms of the legal implications, the greater the body of evidence you get, it's going to turn around, and what's going to happen is that people are going to have bad outcomes where people didn't follow the decision support logic, or didn't even use the decision support logic. I think we'll see a change where the claims are: I'm not receiving the quality of care, I'm blanking on the buzzword, what's the right legal term, the appropriate standard of care, because you didn't use decision support logic. And I know that there'll be a lot of resistance from certain physicians, and maybe even physician groups, about that. But in the end, I think what's going to win out is: are you providing better care because you used this new tool? And we'll do what we have to in court to defend the fact that we're doing the best care that we know how to do, and that involves using these kinds of decision support applications.

Speaker 3:

No, I agree. I think back to the early days of CPOE, when they rolled out a lot of decision support to the providers, and there was a huge pushback because of alert fatigue: these alerts were popping up, they were getting in the way. And I think there were a couple of lessons there. One is, decision support is good if, like 85% of the time or better, when a provider is presented with information from decision support, they feel like it's relevant and they're happy that they saw it; it gave them some piece of advice or some piece of experience about what's going on with the patient that maybe, in the chaos of their daily job, they might have missed, or there was an article that came out and they weren't aware of it. The problem we had back then was you're throwing 23 alerts in front of somebody, and every single time it's like, yeah, I know, or no, this is irrelevant, and after a while they just stopped paying attention. A big part of it is us getting better at the precision of decision support, being able to take in the variables, which are much more available now than they used to be, and present something that's really relevant to that patient and that provider and their experience.

The other thing that's important, when people talk to me about decision support being a medical device: the thing I always struggle with is that a medical device is a machine, where it's predictable. You put something in and something comes out. Whereas with decision support, you're dealing with a very complex system, and there are a lot of probabilities, people's experience, the things that are not in the data, that are in the provider's head or in the note and didn't make it into the codified data. There are a lot of moving parts with decision support. The ultimate processor is the learned intermediary who is looking at everything that they know, getting the things that they didn't know, or the help they got from the decision support or the best practice or the evidence, and then factoring that into their calculus. So when they're making those critical decisions about the patient, they're less likely to fall into that statistic of medical errors, because we gave them a little bit of help that they needed in that moment when they needed it.

Speaker 2:

Yeah, it's really interesting. It reminds me of a lesson and a warning for the future, in exactly the way that you said: we have to make the decision support better. What we found, and this really comes from Dr. Scott Evans's work, is that we never do it right the first time. He did some of the foundational work on drug-lab interactions, drug-drug interactions, checking for allergies of patients, et cetera. With the first versions of any of those things that we put in, we would see 30, 40, 50% compliance with the recommendation, and we would quickly look at the situations where the clinicians didn't follow it. With laboratory alerts, for instance, there would be this alert on a high potassium, and then you'd go in and look, and a big bunch of those people had chronic renal failure, and if you looked, they'd had that same potassium level for months. Then you improve the logic and say, oh, if it's been high for a long time and they have a diagnosis of chronic renal failure, don't put that alert up, because all you're going to do is frustrate somebody. What we found is that by doing that and continuously improving it, we got compliance up into the high nineties, and clinicians would complain when you took it away, because it was providing real value to them.

Part of this came to mind when you mentioned your association with First Databank. We changed at one point from the alerts that we were maintaining locally to the things that First Databank was doing, and we had a marked decrease in the sensitivity and specificity of those interactions. They were developed from literature and not on the front line, and it's hard for somebody just reviewing literature to know the real outcome. It's absolutely essential that decision support, especially in the early phases, is something that you can rapidly iterate on. We were literally changing the logic and changing limits and levels and other things every few days, or once a week, until we got to that much more specific and exact sort of representation. And if the thing is a device that needs approval from the FDA, and the exact algorithm you use is part of that, it means you can't change it at the rate that you need to in order to improve it. So you really worry about the FDA regulating it like a device, like a physical device, because the whole life cycle, how you develop it and how you tune it, has to be different than what you would do for a mechanical device.
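The iterate-on-the-logic point can be sketched roughly like this. The threshold, the 90-day window, and the suppression rule are all invented for illustration; the takeaway is that the refinement Stan describes is ordinary, easily tunable logic, not something frozen into a device.

```python
from datetime import date, timedelta
from typing import Optional

# Threshold and window are made-up example values, not clinical guidance.
HIGH_K = 5.5                       # mmol/L, illustrative alert threshold
STABLE_WINDOW = timedelta(days=90) # "high for a long time"

def should_alert(potassium: float, high_since: Optional[date],
                 has_chronic_renal_failure: bool, today: date) -> bool:
    if potassium <= HIGH_K:
        return False
    chronically_high = high_since is not None and (today - high_since) >= STABLE_WINDOW
    # First version alerted on every high potassium. The refinement: if the
    # patient has chronic renal failure and has been high for months, don't
    # interrupt the clinician; all you'd do is frustrate somebody.
    if has_chronic_renal_failure and chronically_high:
        return False
    return True

print(should_alert(6.1, date(2020, 6, 1), True, date(2020, 12, 1)))  # False: chronic, stable
print(should_alert(6.1, None, False, date(2020, 12, 1)))             # True: new finding
```

Because the suppression condition is just data and logic, it can be tuned every few days as the compliance numbers come back, which is exactly the rapid iteration the FDA-device model would make difficult.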

Speaker 3:

I think there are parallels to self-driving cars, is what I think about. Because I'm driving a car, I've got a GPS, I've got all the controls on the dashboard, but I'm still driving the car. I'm still behind the wheel, I'm still making those decisions. And I think that's the thing for the folks who talk about decision support being regulated in that way. It's exactly like you said: the minute you do that, people will stop using it, because the minute it's not relevant for them, without ways they can go in and tweak it, just like you said, people are going to say, I can't use this. I just can't use it. And just like with a self-driving car, I would like one, I'd love to have a self-driving car. I'd like to take a nap between here and Nashville; it'd be great. But I'm always afraid that if I tried that, I just wouldn't wake up, because the car would crash between here and Nashville. There are going to be scenarios that the person who was testing that self-driving car just couldn't account for.

And then the appropriateness of the content: when you talk about drug decision support, when that was first rolled out into the ambulatory and inpatient setting, a lot of that stuff was built for retail pharmacy back for OBRA '90. What I said at the time is, a provider, a physician, is not the same as a pharmacist. They are not going to tolerate the same level of pushback. They're not going to tolerate the level of ambiguity that goes into something that's popping up or printing out on a patient info sheet. They're going to want something better. We would roll out 80,000 drug interactions back then, and the comment we would get sometimes was, you gave me 80,000 drug interactions, and I need to figure out how to turn off 79,950 of them. We've come a long way since then, though.

Speaker 2:

The advantage that First Databank, or all of us, have is scale. If we can share together, we can generate real-world data for tuning those on a population that no single institution could even dream about having. It comes back to the whole idea of the learning health system. If we represented the data and the outcomes in a computable way, then we could tune First Databank in the same way that we did it locally, but we could do it together, and we could do it on hundreds of thousands or millions of patients instead of on a hundred or two hundred patients.

Speaker 3:

We have amazing health systems and facilities here in the U.S. and across the world, and traditionally we had this approach where, if you look at a First Databank or a Wolters Kluwer or an Elsevier, they have these teams of people reading the literature, and they're synthesizing decision support and assistance, and they're creating these things and distributing them. And that's great. But one of the things that's exciting about an organization like Logica and what you guys are doing is that the problem with a lot of these big publishers is you have to live in their walled garden to use it. You typically use their terminology, you use their structure, you call their API, and maybe you can localize it, and that's great. But the beauty of having an accepted, shared, and open platform is that if somebody at the Mayo Clinic or at Partners or at Kaiser or Intermountain comes up with something really awesome, some great decision support, they can share it and everybody can take advantage of it. You don't have to say, oh, I'll have to create a custom interface so I can take advantage of that thing from Intermountain; theoretically, you should be able to just plug it in and use it. And people should be able to get the benefit of that, and not have to worry about reinventing that particular wheel or rediscovering it within their silo. Right?

Speaker 5:

Part of what you're talking about is this vision of a shared repository of standards-based things, and I think part of the barrier to success behind something like that is instilling the confidence in the community that the stuff that's been put out there is of a certain quality. You can't obtain that without getting people to use it and test it for you.

Speaker 3:

Absolutely. So I have one other question. I wouldn't say it's a loaded question, but if you say, Charlie, I don't want to answer this question right now, I get it, and I'll take it right out of the podcast. I have certain feelings about this, and I'm really curious as to what you guys think, and that is the push that we're getting right now in healthcare around machine learning and artificial intelligence. Not in general, because I think machine learning and artificial intelligence, when it comes to identifying patterns and doing discovery, is perfectly cool, assuming you trust the data that you're pointing it at. But when it comes to people that have tried to implement it, or people who want to implement these types of things for active decision support, I'm really curious what you guys think.

This is my thinking: the AI tools and machine learning and all of that are an incredible tool for gaining new knowledge and insight. But the analytic part, if you will, the part where I'm gaining new knowledge, that is half or less of the problem, because what I need then is an implementation of that new knowledge that guides future actions and care. The example I would use is that people did research, and I think it was Google that did this, really nice research, where they could basically

Speaker 2:

Predict, with almost a hundred percent accuracy, three or four days ahead of time, which people being admitted to the hospital were going to die during that hospitalization. So that's an example of predictive analytics and learning, recognizing what clinical situations were going to lead to the death of a patient. Well, my next question would be, okay, so now you've identified these three people; what do I do so they don't die? That's an entirely different question. I need knowledge now that says I've got to treat them with a different medication, or I've got to put them in an ICU bed instead of a regular med-surg unit, or I need to do renal dialysis, or I need to do something else.

In some sense, you can almost do the analytic parts and have no connection to the real world. What I mean by that is that you get into an n-dimensional space and all you're looking at is eigenvectors and other abstract stuff that predicts an outcome and correlates with the outcomes. But then on the other side of the equation you have to say, okay, but the elements that are contributing to that vector, what are they? What's the treatment? If I'm going to take actions in the real world, you've got to tell me the medication I need to order, or the procedure that I need to order, the lab that I need, or the nursing intervention, or the physical therapy intervention, or the respiratory care intervention. There are some people who are thinking, hey, we don't need to do any structuring of this data, we can learn from the data; they can sort of compensate for the errors in the terminology and the model of the data, for the inaccuracies in the learning, just by doing more and more patients. But that doesn't help in treating the patients and changing the behavior of the people. You have to have explicit models, and you have to know how those models tie to actual medicines in the real world and actual therapies and procedures in the real world that are going to improve the care of those patients so they don't die.

Speaker 3:

Fair enough. I want to address one other thing, and that is, you mentioned Graphite earlier. Tell me about where Logica is going and how you see it changing as we move forward.

Speaker 2:

So I don't know the answer to that completely. What Graphite is, is a new company that's being formed by Intermountain Healthcare and Presbyterian Healthcare in Albuquerque.

Speaker 3:

And we

Speaker 2:

Want to recruit other healthcare providers or healthcare organizations. It doesn't have to be acute care people; Susan would jump in and say it'd be great if we had some partners that were skilled nursing home people and public health people and all kinds of others. But the idea is that we want to create that plug-and-play level of interoperability, and it gets back to what we were saying earlier. We have an incredible level of interoperability that's enabled by FHIR and LOINC and SNOMED, and the things that you're doing all contribute to part of the solution. The thing that we have to have is actually not technical: it is a big enough group of consumers, of healthcare providers, people who are taking care of patients and who are at risk for the cost of taking care of those patients, to say, we're going to do this together, because we see the value of the shared software, of shared decision support. We think that's valuable enough that we'll agree to do it together, we'll agree together on the way that we're going to do it, and we'll back that up by saying we're going to buy things that are certified against the platform, and if we build things, we're going to certify them against the platform. That creates the marketplace. It's a realization of the vision of Logica, but it's doing it at a scale that we haven't been able to realize in Logica so far. We need millions of dollars instead of a few hundred thousand dollars to do the work that needs to be done. But even more than that, we need the commitments of organizations to be part of a voluntary coalition that really wants to create plug-and-play interoperable applications.

So I see Logica going forward; there are a lot of things we'll do together, because the goals are the same, and for that matter a lot of the organizations and people are the same. There are things about the terminology and models that we just want to do with Graphite and Logica together. We want the development sandbox to be a combined effort between Logica and Graphite, and the same thing with the marketplace. I think Logica has played, and probably will continue to play, a very important role; maybe that's the home for where we get the input from the clinicians about what we should do and what data we should collect and what's important, that kind of stuff. I see Logica continuing, and I see a very close relationship between Logica and Graphite, but there are parts of that that I don't understand yet. I don't know whether we should have a combined board of directors, for instance, between Logica and Graphite, or what the actual legal relationship might be between the two organizations.

Speaker 3:

Okay. So Stan, Susan, if somebody listens to this and they're like, gee, I'd really like to find out more about Logica and how I could get involved and where things are going, where do they go? How do they get in touch with you guys?

We have a website, and they're always welcome to just send me an email, or send Laurie Herrmann an email. I don't know if we have a way to publish this with it, but I'm happy to share my email so that people know how to get ahold of me.

We will post your website address, and if you're really sure you want me to share your email when the podcast goes out... I don't know; everybody that I don't want to know it already knows my email, so not much at risk. Well, we'll figure that out.

I also want to say that we've been working with you guys for a while now. I know Shaun has had the pleasure of working with Stan and Susan for a while; it's been a shorter time for me and the rest of Clinical Architecture. But I want to say, I really do appreciate the passion, the ingenuity, and the dedication to helping make things better, and the spirit of collaboration we've had working with you guys. It's really been a pleasure.

Thank you. Yeah, thank you from us too. We've really enjoyed working with you.

Absolutely. All right, any final words before I sign off? All right, I think we're good. Okay. Well, thank you, Stan. Thank you, Susan, Carol, Shaun. I appreciate it. And to all our listeners out there, thank you for tuning in. This is Charlie Harp, and this has been the Informonster Podcast. Thanks.

Speaker 1:

Thank you.