Informonster Podcast
Welcome to the Informonster Podcast, a podcast about the Healthcare IT industry hosted by Charlie Harp, CEO of Clinical Architecture. This podcast fosters an educational and professional discussion about healthcare information technology, including events in the industry, interviews with thought leaders, and much more! Have a topic you want discussed on the podcast? Email us at informonster@clinicalarchitecture.com.
Episode 49: Gene Vestel on FHIR, AI, and Data Quality
In Episode 49 of the Informonster Podcast, Charlie Harp sits down with Gene Vestel, host of Out of the FHIR Podcast and founder of FHIR IQ.
Gene shares his journey from pharmacy analytics to leading FHIR initiatives and advising organizations on digital quality measurement. He and Charlie discuss how FHIR has evolved, where it delivers value, and where organizations still struggle to make exchanged data truly computable.
They also explore AI and data quality and why governance and provenance matter more than ever. Gene offers his perspective on the future of US Core and what it will take to move from simply exchanging data to actually making it usable.
Contact Clinical Architecture
• Tweet us at @ClinicalArch
• Follow us on LinkedIn and Facebook
• Email us at informonster@clinicalarchitecture.com
Thanks for listening!
Charlie Harp (00:09):
Hi, I'm Charlie Harp and this is the Informonster Podcast. And today on the Informonster Podcast, I've got Gene Vestel. Gene's going to talk to us about all kinds of things. So why don't we start out? Gene, why don't you tell the listeners a little bit about yourself, how you got started and your journey into healthcare so far?
Gene Vestel (00:31):
Yeah. Hey, Charlie. Glad to be on. Thank you for having me. And this is kind of a role-reversal type of situation, because I'm usually the one interviewing people, so it's good to be in the hot seat here. I've been in healthcare now for 15-plus years, probably longer than that. Interestingly, it started when I was in college in New York City: I just walked into a retail pharmacy chain and got a job in the mail room, funny enough, while I was going to school. And from there, I worked my way up; by the time I graduated, I was a manager of pharmacy business analysis. So I got into this whole health data space while I was still in school. I got my degree, moved out to New Jersey, and got a job with Medco Health Solutions, which was the biggest PBM in the country at that point. It has since been acquired by and merged with Express Scripts, and now they're owned by Cigna. But it gave me a very quick introduction to the world of healthcare data.
(01:38):
They had been very big proponents of what at that point was called Teradata and Business Objects. I don't know if that rings a bell for you, but those were the hottest tools at the time. From there, I met my wonderful wife and we moved to Pittsburgh; that's where she was from. And Pittsburgh, obviously, is known as a big health tech town as well. So I worked for both of the health systems here in Pittsburgh, the Highmark-led Allegheny Health Network and then the University of Pittsburgh Medical Center, and I've done work on everything from patient safety measurement to quality measurement to leading a quality analytics team that submits all the HEDIS measures. So that was a jump right into the fire of what some would say is the hardest thing that health insurance plans do. I spent a good amount of time there.
(02:35):
And then for the last four years, I got a chance to join a health tech data startup called b.well Connected Health. I was a director of data analytics there for three-plus years, and I'm really thankful to that whole team. I got my education in FHIR, in HL7, and all these good things that everybody hears about but never really does unless you have a hands-on opportunity, which I would highly recommend to people thinking of getting into it. Just find a project and jump in. And so this last year I started my own consulting company. It's called FHIR IQ, and I actually had a chance to work with some great companies last year, including NCQA, Particle Health, Strata, and other folks that I've had the opportunity to have as clients. And yeah, I've just been really involved in the community, and I have my own Substack and a blog.
(03:33):
So check it out. It's called FHIR IQ Playbook.
Charlie Harp (03:37):
Sorry, what's the blog? What's the podcast again?
Gene Vestel (03:41):
The podcast is Out of the FHIR. So kind of like fire, but with FHIR.
Charlie Harp (03:47):
I already knew that because I've been listening to it or watching it, but I wanted the listeners to know. Yeah.
Gene Vestel (03:53):
Yeah. Well, thank you. You're one of the few, but there's like a consistent small number of people that do come back and listen, which is great.
Charlie Harp (04:02):
Oh, for people in the health informatics and kind of this health engineering community, resources like your podcast are really great. You learn a lot. I know I do. So what type of, just to kind of plug your consulting business, what type of stuff do you do for folks?
Gene Vestel (04:22):
Oh, so it's really anything to do with FHIR, and really outside of FHIR too. Obviously, we all know that everything anybody does now has to do with AI and how you quickly build and prototype things that used to take a long, long time. I just participated in the HL7 Connectathon in September in person, and that's how we met, Charlie, which was very nice. And then also in the virtual Connectathon in January, building things with AI in a matter of one sprint. In one day I was able to participate in some tracks, and I was able to create a client and a server and then an application. What I'm seeing a lot of demand for is that larger organizations are not as nimble, and they're not always as connected to what is the latest in the world of AI.
(05:24):
What is the latest in the world of FHIR? So I feel like that's where my sweet spot is: I've been jumping in and prototyping and doing things and learning. Plus my three-plus years of experience building FHIR solutions, helping software engineers build with FHIR, and understanding it also from a quality reporting and quality measurement perspective; my engagement with NCQA had a lot to do with quality, digital quality, and things like that. So helping build with CQL, terminology management, everything else under the sun. Basically, if you have any questions about FHIR, feel free to reach out. Sometimes I just field questions over lunch and help people.
Charlie Harp (06:06):
Yeah. It's a good skillset. I was just at the ASTP annual meeting in DC last week, and if you look at what's happening at ASTP, they're doing a lot to push FHIR. I think it makes sense. The last figure I heard was that about 13% of what we exchange in healthcare is FHIR today, and there are going to be a lot of incentives, both for claims and clinical data, to make more use of FHIR.
(06:35):
The thing I worry about sometimes is that people run around with a FHIR hammer and think everything should be FHIR. If it's not FHIR, why isn't it FHIR? My background is as a simple country programmer, and one of the things I struggle with sometimes is that if you take a tool that's built for a particular purpose and you try to make it work for everything, it ends up being kind of a lackluster tool. In some cases, it can even make it less useful for its original purpose. So I think FHIR is a great resource. For me, the biggest challenge is that we need to stabilize it, and we need to realize, like with AI: what is it good for, and what is it really not intended for or good at? One of the things I was thinking about, listening to your video podcast: you were remarking about CQL and some of the stuff with FHIR, and the fact that people want to put things in a FHIR repo and do analysis on it.
(07:38):
JSON or XML, FHIR in its natural form is not the best thing to use for analytics, right?
Gene Vestel (07:47):
Yeah. And I think a lot of people found that out the hard way. A lot of companies just get into it and think, "Oh, we want to report on FHIR the same way we've had a data warehouse for years. We had a relational MySQL or SQL Server or whatever it is, so just plug it in, just dump it, and you're good to go." Well, what you end up with is a mess on your hands: each FHIR resource becomes like 10 or 15 different tables when you flatten it, and then your data mart ends up being something like 120 or 150 tables of denormalized, flattened data. So how do you do anything with that? Well, it's very hard. So that's where things like SQL on FHIR come into place.
(08:29):
And that's actually a great initiative that's been going on for a couple of years now with those Health Samurai folks, and Arjun leads this working group. This is partly also where my whole idea of CQL to SQL came out of as well, because there's this need to provide the analytics and make it easy and simple without everybody reinventing the wheel themselves, which is what has happened lately. And I think it's headed in a really good direction. I don't have to rehash the whole CQL on FHIR thing; people can go read up on that, and I'm sure you're aware of it, Charlie, as well. But it's just picking what you need out of the FHIR data and then creating views on that, which is a lot better than just flattening everything, right?
Charlie Harp (09:19):
I'll tell you what, I felt very vindicated when I watched your video podcast and when I got into the stuff that you wrote to generate the SQL. When I looked at the tables you created for the different resources (and I'm not suggesting that I'm the source of this; I think the way you and I look at data and the way we try to get data computable is the reason), the PIQI model for clinical data and the model you get when you flatten things and pull the attributes out for your CQL and SQL work look remarkably similar. Because at the end of the day, I think FHIR is a great way to share robust information in a generic canonical model that we can interpret, but once we get it out of that package and want to make it computable, we're going to flatten it in some way or another to be able to leverage it.
(10:15):
And so it was just interesting, when I was looking through the schema for those tables that you were putting the data into, I thought, it's totally compatible with PIQI. I thought that was kind of cool.
Gene Vestel (10:25):
Yeah. And then it becomes not just compatible with PIQI, but it also opens it up for all these different use cases, right? Absolutely. You're not going to be doing your quality reporting on top of raw FHIR. That's what even all the work that's happening with NCQA and CMS is about: they want to enable quality measurement moving to a digital platform and be able to use all the wonderful EMR data and the clinical data that we have available rather than claims. But the results of that still need to live in some data platform, as they call it. So how do you get it into the data platform? Well, you have to convert it to a usable data representation. That's how you get it end to end. So yeah, I think there's a lot of opportunity there also to standardize the data quality of it, which I know, Charlie, you've led the charge on.
(11:18):
And with PIQI, that gives us a single way of looking at the evaluation of all of this. I actually remember listening to an ONC presentation you did, maybe two or three years ago; there was an ONC call with a webinar where you went into a whole deep dive into US Core and the number of lab elements that actually came through versus what was mandated, and how there was a huge disconnect. I remember seeing that and thinking, "Well, I'm sure the way you put that data together was not just pulling a whole bunch of JSON resources from your FHIR store." You had to transform that data to be able to get those numbers, right?
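The flattening the two are describing, projecting a handful of analytics-ready columns out of a nested FHIR resource instead of exploding it into dozens of tables, can be sketched roughly like this. This is a minimal illustration in the spirit of SQL on FHIR view definitions; the field paths follow the standard FHIR Observation structure, but the `flatten_observation` helper and the column choices are hypothetical, not from any spec.

```python
# Minimal sketch: project a nested FHIR Observation (JSON) into one flat,
# analytics-ready row, rather than flattening every repeating element into
# its own table. Helper name and columns are illustrative only.

def flatten_observation(obs: dict) -> dict:
    """Pull out the handful of columns an analyst actually queries."""
    coding = (obs.get("code", {}).get("coding") or [{}])[0]
    qty = obs.get("valueQuantity", {})
    return {
        "id": obs.get("id"),
        "patient_ref": obs.get("subject", {}).get("reference"),
        "code_system": coding.get("system"),
        "code": coding.get("code"),
        "display": coding.get("display"),
        "value": qty.get("value"),
        "unit": qty.get("unit"),
        "effective": obs.get("effectiveDateTime"),
    }

obs = {
    "resourceType": "Observation",
    "id": "obs-1",
    "subject": {"reference": "Patient/123"},
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "2093-3",
                         "display": "Cholesterol [Mass/volume] in Serum or Plasma"}]},
    "valueQuantity": {"value": 187, "unit": "mg/dL"},
    "effectiveDateTime": "2024-03-01",
}

row = flatten_observation(obs)
```

Each resource type gets one such projection, which is what makes the result look like a conventional star-schema table rather than 10 or 15 normalized fragments.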
Charlie Harp (12:08):
Honestly, most of that data was not FHIR. It was V2. Actually, when we did the project for ONC, I want to say that less than 5% of the data we got from the participants was anything other than V2. It's the nature of the beast. When I see people talking about utilizing FHIR for FHIR at rest, I feel like we're almost going back to what HL7 tried to do with the RIM model. I don't know if you go back that far, but they created this very prescriptive model for how we were going to represent healthcare data. And of course it failed, because when you start to sell something for a purpose it's not intended for, you create the perception that it's not good. It's kind of how I feel about AI.
(13:12):
I think that sometimes AI is an amazing tool. It's a huge accelerator for us as humans, but people oversell what it can do. And when it fails to solve all the world's problems, they go, "Oh, well, this thing isn't good." Well, it's perfectly good for what it's meant for. I think the same is true for FHIR. FHIR is a remarkable equalizer for how we use data, and the fact that I can take FHIR data and C-CDA data and measure the quality, so I can tell you whether or not the information you're sharing is actually consumable, is remarkable. I started Clinical Architecture not because I love terminology and normalization, although I do. I really, really do. But it was because I'm an analytics guy, and I thought if we don't have good terminologies, good ontologies, this good foundation, we can't really do good analytics.
(14:12):
And I feel the same way about where we're headed with FHIR. I want to get the data that I don't have, and FHIR is a great vehicle, especially if people are being driven one way or another to utilize FHIR, so I can get more data, so I have a better picture, so the software can actually do more of the heavy lifting and help me do things that I couldn't possibly do as a puny human. The trick is how not to get distracted. The other thing is to get to a point where we've stabilized FHIR, so that people aren't constantly waiting to adopt a version of FHIR that they think is going to be stable. I think we're pretty close to that, and I think that's one of the reasons why I'm hoping the accelerated adoption will really pick up.
Gene Vestel (15:01):
Yeah. Yeah, absolutely. I think all of us are looking forward to the day when everybody just memorizes the FHIR Observation profile from US Core, and you can look at one health system, then you go to Epic, then you go to another health system with NextGen or Meditech, and boom, they're all giving you the same terminology and they're filling it out the same way. What do you think? Is that possible, or is that still going to be-
Charlie Harp (15:39):
Here's the thing: I'm a firm believer, and I've been doing this a long time. This is probably my 38th year developing systems in healthcare. One thing I've reckoned with is that healthcare is local, and I think most industries are local. If you walk into a retail establishment, they have local data; it's just not as relevant as the data we have in healthcare. And I think the idea that you would have a standards organization building all the terminology we're ever going to need is never going to happen, because standards bodies tend to build things based on what they see happening in the real world of healthcare. The people at SNOMED are not in a hospital; they're not looking at cutting-edge drugs; they're reacting to what's happening in the world. Same thing with RxNorm, same thing with LOINC. So I think what's going to happen is really us changing our mindset: when we create things that are local, how do we make sure we're always aligning them to some standard, where the standard exists, so we can communicate?
(16:46):
It's the same thing with schema. If we say, "Hey, FHIR or the RIM model or the OHDSI CDM is the way clinical data is stored canonically in all systems," then we're not going to be innovating anymore. So I think that having these intermediaries, whether it's an intermediate canonical format like FHIR or an intermediate terminology like the one specified in US Core, allows the different environments to be more flexible and innovative and to accommodate our users in different regions and different practices. The question is, at some point we've got to have the data governance rigor so that when we're building this, we're mindful of the fact that we have to share it. Historically, our industry lived in these episodic silos, and we never thought about the fact that we have to share the data, so we didn't put energy into it.
(17:51):
And now I feel like we still don't completely understand the value of making the data shareable in a high-quality way. That's where I hope we're going. In fact, with AI and all the things we're trying to do now with data, I've been trying to get people to focus on the quality of the data for like 25 years, and I'm excited that now we actually have some reasons to make that a priority, which forces us to put more energy into governance and to think about what's going to happen when I try to share this data. But I don't think we're ever going to be at a point where we're all operating off of one terminology wheel. I think we'll always have to normalize. But I could be crazy.
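Charlie's "create locally, align to a standard where the standard exists" idea can be sketched as a small normalization step: a local dictionary entry carries its own code and text, plus an optional mapping into a standard vocabulary, and the original is always preserved so the receiver can verify the semantics. The map, the local codes, and the helper name below are all made-up examples, not any real terminology service.

```python
# Sketch: align local codes to a standard vocabulary where a mapping exists,
# while always carrying the original code and description along. The mapping
# table and all codes here are illustrative, not real reference data.

LOCAL_TO_RXNORM = {
    "MED-00417": {"system": "RxNorm", "code": "197361",
                  "display": "amlodipine 5 MG Oral Tablet"},
    # "MED-00999" is local-only: no standard equivalent exists yet.
}

def normalize(local_code: str, local_text: str) -> dict:
    """Return the standard mapping when one exists, keeping the original
    local code and description so a receiver can check the semantics."""
    mapped = LOCAL_TO_RXNORM.get(local_code)
    return {
        "original_code": local_code,
        "original_text": local_text,
        "standard": mapped,              # None when no standard term exists
        "is_normalized": mapped is not None,
    }

r1 = normalize("MED-00417", "Amlodipine 5mg tab")
r2 = normalize("MED-00999", "Investigational compound X")
```

The point of the design is the `original_text` field: even a successfully normalized record keeps what the source actually said, which is exactly the provenance Charlie asks for later in the conversation.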
Gene Vestel (18:37):
Yeah. Yeah, I agree. I mean, I think we're seeing a lot of movement lately with additional use cases that probably hadn't even been thought of in the last five years. Who would have predicted that we'd now be sharing our healthcare data with a large language model in our chatbot client, like Claude or OpenAI or Gemini? And we're now relying on the LLM to basically analyze the data quality of that data, determine whether it's usable or not, and then give us something back. I've seen kind of the whole spectrum of that. I did a whole article where I took my data recently and just used all the health tools that have become available in Claude. There's a connector, thanks to Fasten Health, that lets you pull your data from TEFCA connections.
(19:42):
At least for me, it was very sparse. It did not include all the data that I needed. So I had to go and think, oh, who are my providers, and go to them and make sure I had those portals set up. It's still very much a gathering exercise. When I did put it all together, though, I dumped it all in and it gave me better analytics and just a better experience than I've seen from any other tool in the past, just in the level of information. And yes, there were some things it was not too accurate on, but for the most part, the data that I did have was pretty complete and it contained all the right notes and things like that. A lot of times I forget about that: the narratives that come along with your encounter and the narrative that comes with your lab result.
(20:34):
Those are all part of the data. And I think a lot of times, from a data quality perspective, you really have to understand the context of that as well, right? What are your thoughts on that?
Charlie Harp (20:45):
I think what's interesting is, when you look at healthcare, there are three continuities in healthcare, I think. I'm making this up as I go, so there might end up being five, but we'll say there are three. The first one is the clinical notes that providers take, really for themselves, because they want to create continuity they can read to catch up before they walk into the room to see you, right? They're looking at your notes, your medical file of unstructured notes. So you've got that continuity. Then you've got the continuity that exists in the structured data that goes into the EMR. Some of it has been around for decades, for lab results and medication administration, but a lot of what's been introduced is really about claims. It's really about capturing information so people can figure out how to charge for things.
(21:43):
And we treat it as clinical data, but it's really a byproduct of the process of collecting enough data to process transactions. It's also very episodic, like a flip book of this episode, that episode; but so are the clinical notes. And then you have the third continuity, which is the patient: how the patient is thinking about what's going on in their life, and their more immediate caretakers. I think the challenge is that they're all kind of wrong, to one degree or another. And I honestly don't think in healthcare we have a true clinical graph of what's happening with the patient. So what we try to do is approximate it by looking at what we do have. And I think we need to make sense of what we have today to figure out what the next generation of clinical systems is going to look like, and how we capture data.
(22:45):
Because the other thing that's wacky is, if you take the unstructured note, the unstructured note has issues because a lot of it is templated too. A lot of it is not like prose, although maybe with ambient scribes that'll get better, but there's still a lot of uncalibrated uncertainty in a clinical note. That's in part because there's a lot of guessing that happens in healthcare. They can't yet feed our genome into a machine that says, "Oh, here's exactly what's wrong with Charlie Harp." They're creating a differential diagnosis, they know what's happening, and then they go into this process of trial and error. That's just the way healthcare works; that's why we practice healthcare, we don't execute healthcare, right? From a structured data perspective, the issue we have is that we're still using pre-coordinated terms, and this is why I don't think that's the way of the future, Gene. What we do is create a dictionary of pre-coordinated words, and for me as a physician to tell the software what I'm thinking, I type in a search, I find the thing that's closest to what I'm thinking, and I pick it.
(23:59):
And that's kind of a big pixel of information. It's a very low res picture when you look at things built on pre-coordinated terms, and that's what we feed to analytics, that's what we feed to AI, and we say, "Okay, figure out what's going on. " But what the provider was thinking when they picked that big pixel is probably much more granular and specific than what they picked. And so I kind of feel like the future of healthcare is us kind of abandoning ... We still need terminologies, but instead of having these big pre-coordinated assemblies, we have the ability to articulate a graph of information that we can feed to the AI so it can create kind of this continuum, this temporal continuum of what's happening with these health states that this patient is dealing with. But in healthcare, healthcare is pretty disruption proof in my opinion, because it's so critical and because there's so much inertia that the only way to make really effective change, unfortunately, is through these incremental improvements, these little quantum leaps.
(25:11):
And I think FHIR is one of those. I think the stuff we're doing with ambient, figuring out how to take the ambient data and turn it into something that we can really compute on and not just produce word salad out the other side. Because I also, I don't know, you tell me if you feel differently, but I worry about an AI going to the probability machine in the corpus, looking at a bunch of text and coming up with something. I think with people like you and I who know enough to be dangerous, the fact that we can look at something and say, "Well, that's obviously wrong, but that's right, and that's right, and we can be impressed with the things that are right." I think there's a lot of people out there in the world who are not going to be able to tell the difference, but that's just me.
Gene Vestel (25:59):
Yeah. Yeah, I would definitely agree with you. And think about the potential of putting together all the points of information that are not just coming from that one search. To your point, the EMR data is structured really for medical billing, and clinical notes may or may not represent what the provider was actually thinking at the time. And then where's all the other data, right? We're talking about medical devices, we're talking about genomics, we're talking about even social determinants, right? All those things together really paint the full picture of that health graph you brought up. And I love that, because I think that's what really ... When I put all of my stuff into Claude, I asked it: I really want to create my family's health journal. So I started with that health journal idea of, "I just want you to put all this together." Because I haven't even been able to get that.
(27:01):
I have never been able to get my complete data in one place, and it was able to do that. And by the way, it creates great visualizations and graphs, so it can actually show you, "Hey, your lipids; yeah, you'd better be starting a statin, Gene. Turning 45, not just a colonoscopy, but now a statin too." It's just fun. But with all that stuff together, it makes me think: okay, in the section where it's breaking down what you should now take action on and what you should now think about, yes, all of this makes sense, but I would want my PCP looking at this with me and saying, "Yes, this is all legit." And right now, unless I send it to them, which, yeah, I could take this link and send it to them and have them look at it.
(28:01):
He would just be like, "Well, I can't comment on any of this because it's all generated by an LLM and I don't know how they work." So how do we get to a point where we have that care team in the loop: your caretaker, your family member, your provider, your physician, even maybe a pharmacist too? Because guess what, I talk to my pharmacist more than I talk to my provider during the year. They should be able to see this thing that I just came up with. So how do we get to that point, do you think? And again, I'm the one asking the questions, but Charlie, who's getting interviewed here? Honestly, I think that's the million-dollar question: how do we put all of that together, and then how do we make this actionable and assess the data quality of the answers that are then generated for the patient? Which is great, because we want to empower the patient.
(29:00):
We want everybody to have access to the same information, but how do we do that when providers have their own ChatGPT, patients have their own, and your mother may be using another one, right? So it's like-
Charlie Harp (29:14):
I mean, I think part of the problem too is, you and I, we probably have 13, 15, 20 medical records. And the challenge, I was saying this in Washington last week: first of all, I'm not a healthcare network, but if I'm a healthcare network, I shouldn't be sharing data that I didn't create. For example, I've talked to one vendor who said, "We share everything we have." So if they got a bunch of data from TEFCA or from an HDU, when they go to share data with somebody else, they share everything. And I'm like, "Why would you do that?" Because the way we deal with provenance in FHIR today is still pretty rudimentary; we should probably have resource-level provenance, not message-level provenance. So that's thing number one.
(30:07):
Thing number two: if I'm sharing data with you and you're sharing data with me, how do we keep from sharing each other's data? You're giving me data that I gave you yesterday. We're already dealing with healthcare data doubling every 73 days. If we really open up the pipes of TEFCA and we're all sharing whatever we happen to have, not just our own originated data, it's going to be a tsunami of craziness, because every time we share it, we're going through a canonical, a syntactic, and a semantic transformation. So even though I shared data with you yesterday, I may not even recognize it when you share it back with me. It could have come back twice through two different terms. We're getting to the point where I think, as an industry, we need to take a breath and say, okay, what are the rules?
(31:01):
It's like Fight Club. What are the rules of interoperability fight club? We share only the information that we produce. We keep track of anything we've done to the data when we send it out to you, so you don't just know the RxNorm code, you know what the original description was, so you can check the semantics. And if you don't want to go from my code to the RxNorm code to your code, you can translate directly from my code to your code and minimize the skip, if that's what you want to do. For us to get rid of some of the uncertainty, that's one of the things we need to do. And since you're interviewing me, I'll tell you one more thing.
(31:41):
I think we also need a way in the system to send a correction. This would be a good thing for FHIR too. If somebody initiates something that says this patient is not diabetic, how do we let the network know? Because I removed diabetes from my profile, and tomorrow you send me that they're diabetic again. I think that's one of the reasons why people don't really share in a seamless, interoperable way. People give it a lot of lip service, but I think the reality of sharing data at that level is terrifying for most people. I don't know. What do you think about that?
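Charlie's "share only the data you produced" rule could be approximated today with resource-level checks against FHIR Provenance, where `Provenance.target` points at the resources it covers and `Provenance.agent` identifies who was responsible. The sketch below is a hedged illustration of that filter, not a real TEFCA implementation; the organization reference and resource IDs are hypothetical.

```python
# Sketch: before sharing outbound, keep only resources whose Provenance
# names our organization as the responsible agent, dropping data we merely
# received from elsewhere. Field paths follow FHIR R4 Provenance; the org
# reference and ids are made up for illustration.

OUR_ORG = "Organization/our-health-system"

def originated_here(provenances: list[dict]) -> set:
    """Collect references of resources whose provenance names us as author."""
    ours = set()
    for prov in provenances:
        agents = [a.get("who", {}).get("reference") for a in prov.get("agent", [])]
        if OUR_ORG in agents:
            for target in prov.get("target", []):
                ours.add(target.get("reference"))
    return ours

def filter_outbound(resources: list[dict], provenances: list[dict]) -> list[dict]:
    """Share only resources we originated."""
    ours = originated_here(provenances)
    return [r for r in resources
            if f"{r['resourceType']}/{r['id']}" in ours]

provs = [
    {"target": [{"reference": "Condition/c1"}],
     "agent": [{"who": {"reference": OUR_ORG}}]},
    {"target": [{"reference": "Condition/c2"}],
     "agent": [{"who": {"reference": "Organization/somewhere-else"}}]},
]
resources = [
    {"resourceType": "Condition", "id": "c1"},
    {"resourceType": "Condition", "id": "c2"},
]
outbound = filter_outbound(resources, provs)
```

This only works, of course, if provenance actually travels at the resource level rather than the message level, which is exactly the gap Charlie is pointing at.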
Gene Vestel (32:21):
Yeah, I do see the problem there. We're saying that data should be flowing freely, but you want to make sure it's the same data that's flowing freely, not five different versions of the same data that went through 15 different systems. And we do also want to have that provenance and data lineage all the way down to where it was generated. If we don't have that, or we lose it along the way, it's a big problem, I think. And we also can't deliver that feedback. Measuring data quality is super important, and so is assessing the utility of it, the fitness for use, as we love to say. But if we don't give that feedback back to the Epics of the world, they don't know that they're generating all this crud out there. There was a whole bunch of this when I first started looking at data coming out of the Patient Right of Access connection.
(33:22):
So PRoA data was supposed to be US Core version one originally; then they voluntarily went to version three, which is what it is now. And there are just different interpretations of even what a US Core FHIR profile is. So you have things like SNOMED codes that are technically supposed to be used to identify what type of encounter it is, but if Epic is not using them, what are we getting? We're getting the custom Epic encounter type, which is not interoperable with the encounter we're getting from MEDITECH. So if I'm somebody that's pulling all that data and putting it together across systems, it is not one-to-one. I think those are the types of questions we need to keep addressing and working on with the industry. And I saw, I think we're going to touch on this a little bit, Charlie, the new version of US Core that's just been released for comment, I think as a draft, version seven, and all the different things they've added, which is very interesting, definitely.
(34:27):
Do you think more is better, or what are your thoughts? Because I know there were a lot of people, like Brendan Keeler and others, who were like, "Oh right, finally we're getting all this stuff that we've been asking for for some time." What do you think? Is that going to make it easier or harder for us to standardize all this stuff?
Charlie Harp (34:44):
I mean, I was at the session where they talked about that last week, and I think it's fine. The biggest challenge I have with USCDI, and I've shared this with them, is the way they frame things. To me, US Core and USCDI say: these are the elements you should be sharing, and here are the code systems we expect to be interoperable. But there are some things where they just say, "Oh, you should share blood pressure." They don't say, "When you're sharing a vital sign, you should share the vital sign test code, the result, and the result unit in UCUM." They just say you should share a blood pressure, but I can't tell it's a blood pressure if I don't have a LOINC code that I recognize in a value set that says blood pressure.
(35:26):
So I think there's some orthogonal way of describing things that we should probably button up and say, "Hey, US Core 6.1 is what USCDI v3 is. Here are the types of things we think you should be sharing, but we can't tell what you're sharing unless you share these elements." That's what PIQI is all about. So when I try to build a PIQI rubric for USCDI, I end up going to US Core and looking at what's mandatory, what's zero-to-many, what's one-to-many or one-to-one. When you look at the things they've added in US Core seven, they're okay. I mean, I don't see any issue with saying, "Hey, we're going to expect this." The real issue is the when, not just the what. Because part of it too is, if you look at USCDI version four, one of the things in there is medication adherence.
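To make the "I can't tell it's a blood pressure without a LOINC code" point concrete, here is a toy rubric check in the spirit Charlie describes; it is not the PIQI framework itself. The LOINC codes (85354-9 panel, 8480-6 systolic, 8462-4 diastolic) and the UCUM unit `mm[Hg]` are real; the abbreviated value set and helper function are illustrative assumptions.

```python
# Toy rubric check: an exchanged vital sign only "counts" as a computable
# blood pressure if it carries a recognized LOINC code AND a UCUM unit,
# not just a human-readable label. Value set abbreviated for illustration.

BP_LOINC = {"85354-9", "8480-6", "8462-4"}  # panel, systolic, diastolic
BP_UCUM = {"mm[Hg]"}

def is_computable_bp(observation: dict) -> bool:
    codes = {c.get("code")
             for c in observation.get("code", {}).get("coding", [])
             if c.get("system") == "http://loinc.org"}
    qty = observation.get("valueQuantity", {})
    unit_ok = (qty.get("system") == "http://unitsofmeasure.org"
               and qty.get("code") in BP_UCUM)
    return bool(codes & BP_LOINC) and unit_ok

systolic = {
    "resourceType": "Observation",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8480-6"}]},
    "valueQuantity": {"value": 120, "code": "mm[Hg]",
                      "system": "http://unitsofmeasure.org"},
}
print(is_computable_bp(systolic))  # True
```

A record that only says "blood pressure" in free text, with no LOINC code or UCUM unit, fails this check even though a human would recognize it instantly; that gap is exactly what the rubric is meant to surface.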
(36:19):
Now, you're a pharmacy guy, like I was when I was at FDB. Medication adherence, how do you capture that? Who, other than the patient, knows whether they're actually taking the med? So some of these things are kind of aspirational. Somebody builds a quality measure that says, "Hey, tell me whether or not the patient's taking the med." I have no way of knowing. I know that they filled the prescription. They might be hoarding it in their closet. I don't know if they're taking it or not. So for some of those things, the question is really: what's the purpose of it, and what's the likelihood you can actually get it? I think social determinants of health are another thing we really struggle to get, even though they're super important, because we don't necessarily put all the levers in place to make sure we're getting them.
(37:09):
And then kind of determining what's the level of expectation that I'm actually going to get this. Is it a must have or a nice to have?
(37:20):
I think what ASTP is doing is good: coming up with new standards that they're going to implement down the line and giving us plenty of time to decide. I also think it's important, and I haven't always been great about this, but people have to give the feedback. If they think something's crazy, they have to say, "This is crazy." If they think something could be better, they have to take the onus to say, "This is how you can make this better." And not enough people do that. Not enough people who are actually in the weeds do that. But when I looked through the list of new things, I didn't think it was bad. I think it's just giving the industry time to figure out how to package it up, make it into FHIR, and share it.
(38:07):
And at some point, bite the bullet and say, "Just do it in FHIR." Because right now when you say USCDI, it's really CCDA and FHIR, it's really exchange in general, right?
Gene Vestel (38:19):
Yeah. And we're supposed to be getting away wholesale from CCDA, right? Do you think it's still going to stick around? I don't know. It's still the primary mechanism. Well, that's what the HIEs I was sharing data with were using, at least at this point, right?
Charlie Harp (38:37):
If you think about it, FHIR wasn't originally, and I could be wrong, but FHIR wasn't originally a messaging format like CCDA or like HL7 V2. It was really a much more intimate thing than how we're using it. So now we've created the FHIR bundles, and I still think there's some evolution there. I know people have talked about bulk FHIR, flat FHIR, and I'm not super plugged into that ecosystem. I've got people who are more plugged into it than I am. But I think we've still got to figure some of that out. How do we operationalize FHIR to make it less cumbersome?
Gene Vestel (39:22):
Yeah. And to your point, the whole USCDI versus US Core naming convention, come on, guys, just make it the same number. The number of times people mix those two up is probably every single time, just as I just showcased. But guys, come on: USCDI number, US Core number. Very simple, right? But nobody thinks about that, because we're just like, "Well, what do you mean? We have multiple versions." So yeah, I think it's definitely great to put that out. I'm glad they're putting it out ahead of time and saying, "Well, we want to add an appointment," and I'm going to go ahead and add my comment there that we also need the appointment type, which we just talked about is a big problem. And that's provided that the system generating the data actually can generate it. To your point about medication adherence, not every pharmacy system calculates medication adherence, and not every EMR does.
(40:20):
So where would you even get that? It's something you get after you process all these clinical records; then you can come up with your medication adherence ratio or whatever, and your quality measurements. So that's an interesting component of it, where we're going beyond just the raw data into the more calculated things, and how do you share those? And ...
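The "adherence ratio" Gene mentions is typically a derived metric like proportion of days covered (PDC), computed from fill records. Below is a simplified sketch: real PDC implementations shift overlapping fills forward rather than just unioning covered days, and the fill data is invented. And, as Charlie notes, even a perfect PDC only proxies filling, not actually taking, the medication.

```python
# Simplified proportion-of-days-covered (PDC) sketch computed from
# pharmacy fill records: (fill_date, days_supply) pairs. Real measure
# specs handle overlapping fills by shifting coverage forward; this
# illustration just unions the covered days.
from datetime import date, timedelta

def pdc(fills, period_start: date, period_end: date) -> float:
    """Fraction of days in [period_start, period_end] covered by fills."""
    covered = set()
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date + timedelta(days=offset)
            if period_start <= day <= period_end:
                covered.add(day)
    total_days = (period_end - period_start).days + 1
    return len(covered) / total_days

# Hypothetical fills: two 30-day supplies with a one-day gap on Jan 31.
fills = [(date(2024, 1, 1), 30), (date(2024, 2, 1), 30)]
ratio = pdc(fills, date(2024, 1, 1), date(2024, 3, 1))  # 60 of 61 days covered
```

The point of the example is Gene's: this number does not exist in any single raw record; it only appears after someone aggregates and processes the fill history, which is why "just share medication adherence" is harder than it sounds.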
Charlie Harp (40:41):
Yeah, it's a lot. I mean, the thing that I love about healthcare, but that's also the bane of its existence, is that we'll never run out of things to do, and it's always going to be somewhat complicated. It's the nature of the beast. I think you and I could probably just keep talking and talking. So let's wrap it up with this. And then I would love to be on your podcast, even though you're going to take my video and I have a face for radio, that's what everybody tells me. I'm happy to do this again if you're interested; we could have a quarterly Charlie-and-Gene hash-it-out. But in the next year, what are you excited about? What do you think could be meaningful in the next year in healthcare?
Gene Vestel (41:29):
Well, first of all, thank you, Charlie. I'll take you up on that offer. That would be really cool, and we could revisit some of these things. I'm really excited about the potential of getting past my whole issue with moving data around for the sake of moving it around. People talk about interoperability, and then they don't want to talk about interoperability, we're just talking about the use cases, whatever. But no, we are talking about interoperability. Now that we've invested all this time and all this infrastructure, all the way back from meaningful use to where we are now, are we finally getting to a place where, with AI, obviously, I can schedule an appointment, I can take an action, I can share things with my provider, and I can actually do those things more easily and simply than I've been able to before?
(42:20):
So I'm really excited about getting this stuff to a point where all the hard stuff becomes easy. Whether we can get there or not, I don't know, but a lot of what's popping out now is pretty amazing. You can even build your own healthcare assistant now, Charlie, if you want to go out this weekend, get a Mac mini, and put this OpenClaw thing on it. I don't know if you want to give it access to your health data, though; that's probably not smart. But it could probably do stuff for you already. So all of this together creates a new way for patients to interact with their health and the healthcare system and health insurance and everything else. And those are my thoughts on that.
Charlie Harp (43:13):
And I think if we can find a way to refine the data, that's why I'm such a quality zealot. If we can find a way to refine and clean up the data that we give to AI, so we can ask it intelligent questions about what's going on, I do think it could do some pretty amazing things and augment the provider. Personally, I like having a human in the loop. I know there are people out there who say, "Oh, we won't need human doctors. It'll all be AI." I think having a human in the loop who can call BS on something that comes out of the system is something I would require in my interaction with it. But I do think AI has huge potential to accelerate what we can do. I mean, I'm a programmer, and I've used it in my own work. I've had it go off and program something completely on its own, and it was a disaster, and I've had it augment things I was doing, and it was very helpful and fast.
(44:09):
So I think it's really a question of finding the right balance. And I think things like FHIR can help us get there; we just have to figure out the right balance of everything. But hey, it's been a great pleasure to have you on. Thank you very much. And I'm sure I'll see you out there in the world, because we go to a lot of the same places.
Gene Vestel (44:34):
Yeah. Thank you. Thank you, Charlie. Thank you for your time. It was really awesome to dive into all these important topics with you.
Charlie Harp (44:40):
And when you have me on your podcast, I can interview you. Yes, let's do it. All right, Gene, thank you very much. I'm Charlie Harp, and this has been the Informonster Podcast.