Informonster Podcast

Episode 36: Interoperability Insights from FEHRM and ASTP

Clinical Architecture Episode 36

In this episode of The Informonster Podcast, Charlie Harp talks with John A. Short, MBA, MS-IS, FEMA PDS, CPHIMS, Senior Executive Service (SES), Director of Federal & Commercial Interoperability at the Federal Electronic Health Record Modernization (FEHRM) Office, and John Rancourt, Deputy Director of ASTP's Office of Standards, Certification, and Analysis. They discuss how the government, specifically the DoD and VA, is addressing the challenges and opportunities in healthcare interoperability. They cover key topics such as USCDI, TEFCA, and the role of data standards in improving care and reducing costs. The discussion also highlights how data quality and trust play a crucial role in advancing AI and creating a more connected healthcare system.

Contact Clinical Architecture

• Tweet us at @ClinicalArch
• Follow us on LinkedIn and Facebook
• Email us at informonster@clinicalarchitecture.com

Thanks for listening!

Charlie Harp (00:09):

Hi, I am Charlie Harp and this is the Informonster Podcast. Today on the Informonster Podcast, I'm delighted to have John Short from the FEHRM and John Rancourt from ASTP/ONC, and we're going to talk about USCDI, TEFCA, interoperability in general, and any other topic that comes to mind. After we booked this, I thought to myself, wait a minute, we've got John and John. So the way we're going to resolve this semantic ambiguity is we're going to have John Rancourt as John R and John Short as John S. And so we're going to go ahead and start with John Short. John, can you tell the listeners a little bit about yourself and the FEHRM?

John Short (00:52):

Yes, thank you, Charlie. My background: I used to be a US Army medic and later a US Army infantry officer in the 82nd Airborne Division, and then I transitioned into the Signal Corps. So very broad military experience from that perspective. A lot of time airborne and some interesting parachute jumps. I went into some telecommunication startups, then had about 13 years of work at the Department of Veterans Affairs, where I had a unique project moving 30 years of legacy data from VA's legacy EHR, VistA, into the new commercial federal electronic health record system. We had the opportunity to clean that data to make it more accurate, so that all future uses of that data would be more effective. And then I moved on to DoD proper and to the DHA, the Defense Health Agency. As part of that, I'm in a joint office called the FEHRM, the Federal Electronic Health Record Modernization office. We focus on helping DoD and VA primarily, as well as the other federal agencies, improve and increase interoperability in the EHR space. Thank you.

John Rancourt (02:01):

My name is John Rancourt. I am the Deputy Director of the Office of Standards, Certification, and Analysis in the Assistant Secretary for Technology Policy's office, which is part of the US Department of Health and Human Services. That's quite a mouthful, but we are the agency engaged in trying to achieve the vision of better health enabled by data. The Assistant Secretary for Technology Policy was, up until recently, known only as the Office of the National Coordinator for Health Information Technology, or ONC, and we were founded in 2004 under President George W. Bush. Then in 2009, the HITECH Act, which was part of the Recovery Act, became our establishing legislation, and that's when we grew enormously in the work we were doing associated with health data, technology, and interoperability. Out of that we built our Health IT Certification Program. That's a program that falls under the office I'm the deputy director for, where we take standards and then build out a program where we can say that particular software is stamped certified to do certain functions according to certain standards or functionalities, et cetera. And then that certification can be carried to other programs, such as CMS programs or others, and then providers can participate in those programs and say that they are using certified software. So yeah, I've been at ONC, or ASTP, I'm still working on it, we just did the reorg this summer, for 13 years, working on interoperability throughout. And I'm excited, Charlie, to be here today.

Charlie Harp (03:47):

Thank you. Thank you very much. So one of the topics that people are talking about right now, well, at least the people I know, because I'm one of those people who is very entrenched in the healthcare technology world, is that there's a lot of talk around TEFCA and USCDI and those initiatives. One of the things I thought would be useful since you're here is to talk about the goals and the vision of USCDI and TEFCA and where you think they might go in the next four to six years.

John Rancourt (04:22):

Excellent, thank you, Charlie. Yeah, so as I mentioned, the office that I'm a part of is the Office of Standards, Certification, and Analysis, and standards is where it all starts; they're critical to all the work that we're doing across the industry. And one of those standards that is enabling a lot of great outcomes and activities is USCDI, or the United States Core Data for Interoperability. This is a data set that way back when was referred to as the Common Clinical Data Set, and then we rebranded and created a process for this to be an actual standard that we curate. We update it on an annual basis, and it is the core data set that's required for data sharing whenever a health IT technology that's certified by us is required to share data. That is sort of the starting point. And yeah, it is a core dataset required for certification for interoperability, and from that, folks can have an expectation of the types of data that they're going to get and going to need to share and so forth.

(05:37):

And we go through a process of versioning this standard every year. We are currently on version five and are anticipating soon releasing the draft version six, but right now what's in the marketplace is still version one. And that's because of the way that the certification program works; it's a program that rolls out over time and so forth. But that growth of USCDI has been constant throughout. I think we roughly doubled the number of data elements from the first version to the second/third version, which is due out in a couple of years here, as is required by the natural growth of our program. And then we want to see, and we expect to see, growth and expansion of that to further versions over time. So that's the plan for how it works. And yeah, there are other parts that I'd love to talk about too, but I want to pass it maybe to John S for the way that they used it over in the VA.

Charlie Harp (06:43):

Sounds good. John S.

John Short (06:47):

Thank you, Charlie, and thank you, John R. As far as where USCDI and TEFCA could go with the FEHRM, and what we're doing with that with DoD and VA: when you look at everything, you have your functional need for your clinicians and your patients. To get there, to support that, we need great technical standards to create that foundation. Without that, you can have the greatest needs in the world and the most beautiful building, but without those cornerstones of a technical foundation, we wouldn't be able to do that properly. TEFCA gives us those interoperability capabilities for those standards. The FEHRM is regularly working to develop and maintain the standards needed for DoD and VA. Going back to the NDAA 2014, Congress had the goal of us always making sure we maintained and improved interoperability every year. That's continuously the goal. We work with the HHS Office of the Assistant Secretary for Technology Policy, or ASTP.

(07:44):

I thought ASTP was a mouthful, but I think that's easier to say than the other. We also work with HL7 and other organizations, LOINC, SNOMED, the National Library of Medicine, in advancing the USCDI and TEFCA that John R just discussed. Also at the FEHRM, we're using USCDI and TEFCA as great ways to expand interoperability to other organizations and other parts of the agencies, to drive to higher levels of interoperability, to greater levels of data sharing, deeper levels of data sharing, as well as moving to higher levels of interoperability where we get into policy and workflow across larger organizations. The USCDI and TEFCA are instrumental in allowing us to unify the systems together as a system, to much higher levels and degrees of interoperability. There are many ways to get data. We have sneakernet, we have fax machines, we have different types of interfaces, we have bedside medical devices.

(08:44):

There might be 20 different ways to get data, and we don't always need 20 different ways. We need an efficient way to get data. And USCDI creates that framework to help us get there, as more manufacturers and more software developers incorporate it, and as we continue to work with ASTP to broaden that standard. Data quality is also essential for whatever we do, especially as we come up on AI readiness: it's either garbage in, garbage out, or good data in, good data out. And so that's why it's critical. Whether it's clinical decision reminders or an emergency room decision on what they're going to do for treatment, having that quality data is very essential. So with TEFCA, we have the right standard, the right structure, the right guardrails and security to get there. We also have tremendous amounts of data within the federal EHR.

(09:39):

As an example, I helped move over 30 years of data from the VA side, when I was working in the VA, into the federal EHR. That's a ton of data in itself, and DoD has moved their entire system to the federal EHR. So tremendous amounts of data. We have the right agreements in place for data sharing. We have the right protocols, permissions, and mechanisms so we can rapidly share data out and in, as appropriate and legally authorized, for higher levels of and improved care. This reinvents healthcare as a lot of people know it. Most people don't realize that we can get there; they've only dreamed of this type of connectivity. We can expand our interoperability model within DoD and VA, as well as with the other federal partners and on the public side, to take care of everyone.

Charlie Harp (10:24):

Thank you. I think that one of the other things, I was talking to somebody the other day about standards. This was specifically HL7, but I think it applies to USCDI as well. Sometimes people look at the standards and they get frustrated, they feel they're academic and oh, why are they trying to harsh my good time? And the thing that I always say is when you look at the people involved in standards, like USCDI, USCDI is a standard, but if you look at USCDI+, it's also a community. It's a platform. And so it gets people thinking about how to look at data across the silos because most people in industry, and I work in industry and I work with providers and folks across the spectrum. And the truth of the matter is if you're a large IDN, you've got stuff you're doing.

(11:15):

And so you don't always have the luxury on your own to think, well, how should other people get my stuff, and what's common if I were to send it to someone else? And so when you think about standards organizations or initiatives like USCDI and TEFCA, what it kind of does is it forces us to think about those things in a more ubiquitous, overarching way and come up with those least common denominators, because, probably in my lifetime, we're not going to get to the point where we're running on one homogeneous utopian platform. But at the same time, there is kind of a minimal viable data product that we should be able to convey to each other, sharing what we know about the patient to increase the chances that we get a better picture. And with USCDI, we did this data quality survey earlier this year, and it was interesting to see the mix of people that said USCDI is not doing enough versus USCDI is doing too much.

(12:12):

If you're not making everybody happy, you're probably doing a good job, is my philosophy. So between the two, I think it's interesting, and I think USCDI creates a map for quality. And I think that TEFCA, to be successful, needs to be able to make people feel like what they're going to get from TEFCA is going to be quality. And I think there are a couple of challenges. We talked about it prior to this. One of them is making sure I'm getting the right person and I can find the person, the whole identity management piece. And the other piece is when I get the data, it's got to be interoperable. So it's got to be something that I can understand, and that's where USCDI says, here's what we agreed on: it's LOINC, it's SNOMED, it's RxNorm. And then I think the other thing is that the data that comes in that envelope from that source, whether it's TEFCA or a Health Data Utility or a direct interface, has to be good data that's not going to junk up my environment. That's what's keeping people from letting it in.

(13:12):

But one of the things I was curious about: when I talk to my friends in the NHS, or I think about the VA or the DoD, I have this kind of utopian idea that since it's one holistic system, all these problems are gone. I see what USCDI and TEFCA are trying to do is get us to think across the country like we're one big system in reality and help each other. And I kind of think, well, the VA and the DoD is kind of one big system; there are still going to be these issues of impedance and whatnot. But I'm curious, John S: do you see what you guys have been able to accomplish in the federal ecosystem leading, and maybe serving as a role model for, some of the things we're trying to do outside of that federal ecosystem?

John Short (14:03):

Most definitely, Charlie. You look at it, part of my title is federal and private sector interoperability. So that's part of the goal. It's not just that we made part of the government more effective. I will step back and make one little funny comment on your standards piece, that standards often sound boring and academic. I often, in jest but also in seriousness, talk about that when we talk about standards. I'm like, if you need to fall asleep, just start reading a book about medical standards, and it can put you, your spouse, anybody else to sleep pretty quickly. However, I also make the point that if most people really understood the value of these standards and what they can do to revolutionize AI and healthcare, lowering the cost of medicine, lowering the amount of testing and retesting that has to be done, then talking about standards would actually be one of the most exciting things they ever talked about.

Charlie Harp (15:02):

The economics of it. You're absolutely right.

John Short (15:05):

It really is. And people that really understand it are like, yeah, you're right, but they hadn't even thought about it that way. And people that don't understand it still look at me like I've got four or five horns growing out of my head. But hey, I just wanted to say to anyone that hears this: if you don't think standards are an interesting topic, I promise you, if you actually understand them, it'll be a red pill moment for your life. Wow, standards in healthcare are actually super exciting. It means better outcomes for care, lower incidence of potential patient safety issues, lower costs for everybody, less retesting. How many extra X-rays do you want of your body? Probably no more than you have to have. So again, if you really understood the value of it, you would be excited. We look at it, to your question, with the core principles in improving interoperability, and we go to TEFCA.

(15:53):

And while we were developing the VA contract, when I was on the VA side, and this is all public, people can read the VA contract, we were working with HHS at the time as they were developing TEFCA. We included in that contract requirements for provenance and data quality and normalization and pedigree. And that's where we got to with all the legacy data we brought in from DoD and VA. Every one of those locations operated as though they were an island unto themselves, the commander on the DoD side or the director on the VA side. It was their world. We're not saying that in a negative way; that's the way they had to operate. They had to be responsible for everything. And so we had to make sure that we had all that data stamped properly, because if it did get shared somewhere else, they had to understand that dimension of where it came from.

(16:39):

And so almost all the data is stamped with date and time and location. And while those are the base principles you're going to need if you want to be able to share provenance and pedigree when you share that data out, there's a capability that we've started to deploy in a few locations in the federal EHR called Seamless Exchange. And that protocol was developed to meet part of the needs for TEFCA, sharing provenance and pedigree in a way that keeps it organized, et cetera. So as we expand that more, because we have all that legacy data stamped that way, we're going to be able to share that and be a leader in sharing provenance and pedigree information, which a lot of people are excited about when they get into it. Again, it might sound like a snoozer for some people, but think about how many times we make a thousand connections and all your data gets to 500 places. When you call on your data, you can get 500 copies of the same medication, the same printed prescription. You only need one copy if it's the same one.

(17:39):

And so with technologies like that, which we're deploying with Seamless Exchange, by having the provenance or pedigree, it says this date, this time, X location. I don't care that I got the thing 499 times from somewhere else; it tells me it actually came from that same one, so I can ignore those other ones and just note it in the log. I think that's really powerful. Again, if you love paying for extra storage you don't need, then maybe you're not excited about it, but if you don't like to pay for extra storage you don't need, then you're probably really excited about that capability, which was part of what was driving HHS ONC at the time, now ASTP, to come up with that requirement in TEFCA. DoD and VA have data stored in a way that supports TEFCA in many ways, as I'm highlighting. And as we expand this with Seamless Exchange and other capabilities and connect to more private sector partners (although right now we're connected to the majority of the country), it will allow us to expand that and, again, lead the effort for TEFCA in provenance and pedigree as more people in the private sector pick that up.
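The de-duplication John S describes, keeping one record per originating date/time/location stamp and merely logging the relayed copies, can be sketched in a few lines. The record shape, field names, and identifiers below are invented for illustration; they are not the federal EHR's or Seamless Exchange's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MedicationRecord:
    rxnorm_code: str       # what was prescribed (RxNorm code)
    stamped_at: str        # date/time stamp from the originating system
    stamped_location: str  # originating facility/system identifier
    received_from: str     # the partner that relayed this copy to us

def dedupe_by_provenance(records):
    """Collapse relayed copies: two records with the same originating
    stamp (code + date/time + location) describe the same event, no
    matter how many partners forwarded them. Relays go to a log."""
    seen, kept, relay_log = set(), [], []
    for r in records:
        key = (r.rxnorm_code, r.stamped_at, r.stamped_location)
        if key in seen:
            relay_log.append(r.received_from)  # note it, don't store it
        else:
            seen.add(key)
            kept.append(r)
    return kept, relay_log
```

In this sketch, 500 incoming copies of one prescription collapse to a single stored record plus 499 log entries, which is the storage saving John S points to.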

(18:39):

Again, that's going to drive down cost, and it's going to drive down risks, the chances of having duplicate prescriptions, et cetera. And having that pedigree of data allows us to de-duplicate multiple copies of the documents, which is another great feature of the Seamless Exchange we've been piloting and deploying, without damaging the data. And I say without damaging the data because you never want to change the electronic health record. Only the doctor or informatics staff should be the ones that modify a patient's record, because that's a very touchy legal subject. But if you can actually validate through provenance and pedigree that those 499 copies of that data were actually just copies...

Charlie Harp (19:18):

Looks like we might've lost John S just like he predicted before this call. He said, I need to back up just in case I lose my audio. And we just lost his audio, so we'll see if he comes back here in a minute. I think that the point he was making is a great point. I think that this whole idea of provenance and knowing where something's coming from is actually one of the things that could hurt TEFCA if we're not careful. Because when I talk to people out there and I ask them about TEFCA, they really like the idea of TEFCA. But when I say, are you just going to take the data that's available in TEFCA? They're like, eh, I don't know that I trust the data that I'm going to get. I don't want to be buried by a tsunami of data that is not relevant or it's something else.

(20:11):

And it's kind of like getting that, and I use this example on another podcast when people started sending me fruitcakes, but it's like receiving a fruitcake and you put it on the shelf like, that's great, but I'm not going to eat that fruitcake. So the idea I think with TEFCA is how do we get it so that when people are participating in TEFCA that they have confidence that they can identify duplicates, identify where the data came from. I asked a question in a call, welcome back John S, I asked a question in a call with a bunch of folks the other day about what do we have that uniquely identifies the data source where patient data comes from? Like an IP address for patient data? And the answer was, I don't think there is anything like that. And I think maybe this is the kind of thing that exists in the VA federal ecosystem is you have a unique identifier that says this is the system in the facility where this data came from.

(21:13):

But when I was working on the report for the ONC for the lab data quality stuff, and I was looking at the headers from all the messages to try to identify data sources, because I was going to try and slice the data by application and other things, it was impossible. And part of that is because right now people just put stuff that's meaningful to them in those buckets in the header, and this is just V2 data. Some people could argue to use the HL7 AO8, but that's really a physical location. The NPI can be used by multiple places, and it's also physical. And in reality, when we get data from somewhere via TEFCA, that data came from an application that services one or many facilities for a given use case. And I think those are the kinds of things where, to John S's point, if I'm going to give something a pedigree and a stamp of "I know where this came from and it's reliable," one of the main things I need is something that says, I know where this came from. And I think that's something we struggle with right now. So you foretold, John S, that you would be cut off during this conversation, and we didn't believe you. We were all in denial. So I apologize for doubting.

John Short (22:33):

It said computer shutting down.

Charlie Harp (22:39):

You're saying that talking about standards actually bored your computer to sleep. That's a first for me.

John Short (22:48):

Exactly. Because you didn't listen to the rest of my words otherwise it would've been excited too. Or maybe it said, I don't need healthcare, so I don't care.

Charlie Harp (22:55):

That's right. I will say one more thing about standards. I've been involved in AMIA and some of these other groups for a while now, and I have to do a shout-out for the people that dedicate time and energy, because if you have not gone to an HL7 meeting or an AMIA meeting or the USCDI+ meetings, the people that are at these meetings are very, very passionate and dedicated to trying to make things better. And a lot of them are not compensated. I actually have a number of people that go to these meetings, and I do it because, for my organization, I think it's really important that we participate in these things. I think if we really care about healthcare, and about improving software's ability to be a meaningful helper in the care and outcomes of patients, we have to dedicate energy like this to understand how to overcome some of these barriers. But anyway, I just wanted to say to the people that get involved in these things: it does get in the weeds sometimes, and we use acronyms that Muggles don't understand, but at the end of the day, I think the people that are involved in it deeply care about it and do great work, and I really appreciate them.

John Short (24:13):

Just one comment to your point on going through the lab items and determining where they're at. Having a national lab identifier schedule, a national pharmacy location identifier schedule, all those sorts of things: I don't believe those would be controversial. I think they would be fully supported, and they would really help out and support TEFCA, especially provenance and pedigree. Along with doing that on labs, I think it would be really critical to have the lab reference values of all those labs with that. So you have a lab location identifier and their lab reference values. I think that would greatly and tremendously improve interoperability and the normalization of data.

John Rancourt (24:57):

I think there are certainly a lot of things that we want to get to, and there is that vision. There are so many things there that I wanted to touch on. But to start with that directory question, to build that national healthcare directory: that is a vision, and CMS is doing a lot on that. They put out an RFI on the topic asking questions from the community (thank you, everyone, for those comments and input), including how things like TEFCA, and the directory that is supported under TEFCA, would enable that. It is a large-scale, long-term project to get to the vision that you both are talking about and that we want to get to, so that we can have the types of granularity and understanding of data and provenance to be able to better assess the quality of data. And there's a lot that's going on.

(25:55):

In FHIR, there is a provenance resource and even within particular resources within FHIR, there are identifiers of the data elements as well. USCDI features author and author role as data elements as well. And that also is an evolution, an ongoing thing. What we've heard from folks is that we're at a good place. We do have more to go. So I think that's what you all are saying as well. And yeah, we're optimistic. We're excited about the fact that TEFCA is a live network right now with data sharing that we have major industry partners that are QHINs on the network. It's really moving forward. So yeah, there is still a lot to go though.

Charlie Harp (26:42):

So wait a minute, you're in the government, John R. You mean you just can't fix it over a weekend? What's up with that?

John Rancourt (26:54):

Right? Well, it kind of reminds me of another point, Charlie, sorry, is just that it is a collective activity. We are, when we're working in the government, we need to think about the entirety of the industry. We need to think about patients, providers, we need to think about payers and employers as well, who are payers and what's the way to get to the best place overall. Really it takes time.

Charlie Harp (27:19):

Well, absolutely, and people always underestimate unintended consequences. That's why you guys work so hard to get comments and feedback. And when people complain to me about something, I say, did you read and comment on what they were doing before they did it? And they usually look at me and sheepishly smile and say, well, I don't have time for that. And I'm like, well, then you can't say they didn't ask. They wanted the comments. And back to your point, John S: back in the early days of my career, when I had hair down to here, I worked at SmithKline Beecham Clinical Laboratories, and we did an initiative that I think would've been brilliant if we could do it on a national scale. And that was essentially a smart accession number. I'm usually not a fan of smart identifiers, but imagine something almost like a GUID where you say, here's a code that represents the lab, whether it's the CLIA ID or whatever, here's a date timestamp, and here's the next sequence number that I got off of a wheel.

(28:24):

And if that was something that was shared with a lab result, think how easy it would be to deduplicate information that we get. I mean, something like that, because when I did the ONC thing, one of the things I found in about a hundred million lab results was about half a million duplicates coming from different places as if they were the source of that data. What I looked at was: the test is the same, the date and time is exactly the same, the result is exactly the same. This couldn't have come from 15 different places at that same point in time. But one of the things we tend to do in healthcare is we share what we have; we don't just share what we create. We share whatever we happen to have. And if a reference lab sends data to four places, those four places are going to share it as if it's something they produced. And we really don't have a great way, if they're not giving us the performing site as you described, to deduplicate those things and say, this is exactly where it came from. But no, I think that would be great. That would make things a lot better for lab.
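Charlie's "smart accession number" idea, and the test/date/result heuristic he used to find duplicates in the ONC analysis, might look something like the sketch below. The ID format, field names, and CLIA value are hypothetical, invented to illustrate the shape of the idea rather than any published standard.

```python
from itertools import count

_seq = count(1)  # the "wheel" handing out the next sequence number

def make_accession_id(clia_id: str, drawn_at: str) -> str:
    """Sketch of a smart accession number: originating lab (e.g. its
    CLIA ID), a date/time stamp, and a rolling sequence number."""
    return f"{clia_id}-{drawn_at}-{next(_seq):06d}"

def dedupe_results(results):
    """Without a shared accession ID, fall back to the heuristic from
    the analysis above: an identical test + date/time + value almost
    certainly did not originate in 15 different places at once."""
    seen, unique = set(), []
    for r in results:  # r: dict with loinc, resulted_at, value
        key = (r["loinc"], r["resulted_at"], r["value"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique
```

If every relay carried the original accession ID, `dedupe_results` could key on that single field instead of the three-part heuristic.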

John Short (29:26):

Yeah, completely agree. You look at it, again: having national lab identifiers and national lab reference values all correlated together, you need that sort of information so you can normalize that data. The reference values from one lab are not the reference values from another lab, and they're all very unique. They can be very similar, but sometimes, depending on which lab test you're running, that very minuscule difference is actually huge. And so it's really critical to have those lab reference values, so then you can normalize that and present it to providers in a way that's useful, instead of 25 pieces of electronic paper that they have to figure out themselves from the reference values, or, as may often happen now, running the lab again. I need to be able to be sure of it. It's not because they're just flippant and want to do another test; they want to make sure they have the right information so they can provide the optimal care for that patient.

(30:24):

And there are some things that people can do from a policy and technical standpoint that can provide that to the caregiver without having to do the additional testing and sticking the patient's arm one more time for a little more blood or whatever testing they're going to do. So I think the more we drive that and everyone embraces it, the better. They shouldn't see it as a threat in any way; it actually should improve commerce. And that way it allows the funding to be used for purposeful testing that hasn't been done and needs to be done, instead of a third or fourth version of a test because we're not sure what the values of the original tests were. So I think we can get there. We just need to work together, and we're making progress on it.
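The point about lab-specific reference ranges can be shown with a tiny sketch: the same numeric result is interpreted differently depending on which lab's range travels with it, which is why sharing the range alongside the value matters for normalization. The ranges and values below are invented for illustration.

```python
def interpret(value: float, low: float, high: float) -> str:
    """Flag a result against its originating lab's reference range:
    'L' below range, 'H' above range, 'N' within range. Ranges are
    lab-specific, so the flag only makes sense with the right range."""
    if value < low:
        return "L"
    if value > high:
        return "H"
    return "N"

# The same hemoglobin value interpreted against two labs' ranges:
# interpret(17.6, 13.5, 17.5) -> "H"  (hypothetical Lab A range)
# interpret(17.6, 14.0, 18.0) -> "N"  (hypothetical Lab B range)
```

A receiving system that drops the originating range and applies its own would flag these inconsistently, which is exactly the retesting pressure John S describes.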

Charlie Harp (31:06):

Well, one of the things that I've been involved in recently: about a year ago I started developing this framework for evaluating patient data quality, and recently it's gotten the attention of some folks at ASTP, CMS, the VA, and the CDC. Clinical Architecture is bootstrapping it, but our plan is to roll it out as a standard, an open standard for scoring patient data quality. And so I'm going to be talking with Carol Macumber, who's the chair-elect for HL7 and has been involved in HL7 forever, about putting together an informative specification and then rolling this into an open source, open standard solution, so that we can have an agreed-upon way of assessing the payload, a patient's data payload, regardless of the envelope. So whether it's HL7, FHIR, CCDA, OMOP, whatever you're sharing, be able to use a criteria like USCDI v3 and say, I took your message.

(32:08):

I looked at this core clinical data, this minimum viable data product for the patient, and out of a hundred percent score, you scored a 60, and you had two critical failures that make your data unacceptable. In fact, I was talking to some folks at the VA earlier today about it. I think that is one of the things that would be really powerful, because I think there are a lot of people out there that, when it comes to compliance with a standard or even USCDI, just kind of check the box. When I did the analysis for ONC, I found one site where every single hematology lab result was a glucose LOINC code. Now, it satisfied the checkbox that you will share lab tests as LOINC, but it's like somebody somewhere just sat down and said, oh, well, what's a LOINC code? Well, here's one.

(32:57):

And they just stuck it in their lab results. The letter of the law, you must give us a LOINC code, was satisfied. But if somebody had actually done a qualitative analysis and some plausibility checking on the data in that payload, they would've come back and said, no, this is bad data. And the nice thing about this approach, and this is the first time I've ever done anything that could become a standard, so I'm excited about it, but the nice thing about the approach is it's not just designed to measure the quality of the data in the payload. It's also designed to identify why you fail. It's called the Patient Information Quality Improvement framework, and the idea is not just to evaluate, but to elevate. You go back to the source and say, you fail because you're not giving me a code system, so I can't validate that it's LOINC. And it's the kind of thing where, theoretically, a data source could easily fix it, and then all of a sudden they're delivering quality data.
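To make the idea concrete, a payload-scoring check in the spirit of what Charlie describes could look something like the sketch below. This is purely illustrative: the field names, the scoring rule, and the plausibility heuristic are invented for this example, not the actual framework's specification.

```python
# Hypothetical sketch: score a list of lab-result entries and report
# *why* each one fails, in the "evaluate to elevate" spirit described above.

def score_lab_payload(labs):
    """Return (score_percent, failure_reasons) for a list of lab entries."""
    reasons = []
    passed = 0
    codes_seen = set()
    for i, lab in enumerate(labs):
        # A missing code system means we can't even validate the code as LOINC.
        if not lab.get("code_system"):
            reasons.append(f"entry {i}: no code system, cannot validate as LOINC")
            continue
        if lab["code_system"] != "http://loinc.org":
            reasons.append(f"entry {i}: code system is not LOINC")
            continue
        passed += 1
        codes_seen.add(lab.get("code"))
    # Plausibility check: many results all sharing one code is suspicious,
    # like the site where every hematology result carried a glucose LOINC code.
    if len(labs) > 3 and len(codes_seen) == 1:
        reasons.append("critical: all results share a single LOINC code (implausible)")
        return 0, reasons
    score = round(100 * passed / len(labs)) if labs else 0
    return score, reasons
```

Run against the glucose-LOINC scenario above, every entry satisfies the letter-of-the-law check, but the plausibility rule flags the payload as a critical failure, and the reasons tell the source exactly what to fix.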

(33:55):

But I think that when you talk about interoperability, one of the things we mentioned in the prep call was provenance: knowing where the data came from and maybe who's touched it and how they touched it. Another is normalizing the data to the terminologies that we agree are interoperable, so that we don't always have to semantically unwind it. And the other is data quality. And I think the idea of being able to measure the quality, whether it ultimately is PIQI or something else, is really important, so that we can actually give feedback to people.

John Short (34:32):

There are five things in what you just talked about that I think are key. We need more data, so we need to have more connections. We need to have the standards so we can connect to that data, and not just get one piece of data from that hospital over there; we got 25 pieces of data that I need in this standard payload. Then we need that data mapped to standards so we can validate against it. For this little hospital in Moncks Corner, South Carolina, when the chart said "big Bertha moment," we didn't know what the heck that was. But it's coded. I looked at it: oh, it's a code for myocardial infarction. Oh, it was a heart attack. Okay, that's what they call it at that one-doc shop, a "big Bertha moment." So now I can normalize that along with other cardiac incidents.
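The local-term normalization John describes could be sketched as a simple mapping table like the one below. The clinic identifier and local label are invented for illustration; the SNOMED CT code shown is the one commonly listed for myocardial infarction, but this is a toy mapping, not a real terminology service.

```python
# Hypothetical local-to-standard map: a site's home-grown label is
# translated to a standard concept so it can be pooled with other data.
LOCAL_TO_STANDARD = {
    ("moncks-corner-clinic", "BIG BERTHA MOMENT"): {
        "system": "SNOMED CT",
        "code": "22298006",          # myocardial infarction
        "display": "Myocardial infarction",
    },
}

def normalize(source, local_term):
    """Map a site-specific term to a standard concept, or flag it for review."""
    key = (source, local_term.upper())
    if key in LOCAL_TO_STANDARD:
        return LOCAL_TO_STANDARD[key]
    # Unknown terms are not silently dropped; a human curates the map.
    return {"system": None, "code": None,
            "display": local_term, "needs_review": True}
```

The design point is that unmapped terms are surfaced for curation rather than discarded, which is how a one-doc shop's vocabulary eventually joins the national picture.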

(35:16):

Then we have not just a connection to that hospital or clinic in Moncks Corner, South Carolina, but across the whole country. So we get all that data coming in, and then we need the quality and a way to validate it, which is what this new standard you're developing is about. Bringing all that together is really the crux of a lot of what we're talking about here today. When you bring all that together, that quality piece becomes key because of the really super exciting things people talk about today in AI. DoD and VA, who primarily run my office, have their own AI shops, and we have two other federal partners, DHS with the Coast Guard and the Department of Commerce with NOAA; they all have their individual AI offices, and they're going to direct how they want to do AI. But to me, a lot of the big work in ASTP land, at the FEHRM, and in standards bodies like HL7 and others is AI readiness and the quality you're talking about.

(36:15):

Because again, when I wrote a computer program, I think it was 1982, to make a screensaver in BASIC, and I saved it onto a cassette tape and then played it back on my computer, it didn't work at first. I went back, looked at the code, and saw I had one line wrong. I fixed it. Oh, it works, right? Garbage in, garbage out. And we still have that today, but in AI it's magnified exponentially: garbage in, garbage out. So AI readiness means making sure that you have all the appropriate data, you have the permissions to use that data for what you want to use it for, and the quality is there. That way, as long as you have a good AI tool or algorithm that the DoD or VA or a commercial party or somebody produced, you need a mechanism to test the quality of the inputs, so you can ensure that the results coming out of the AI are more prone to accuracy and less prone to AI hallucinations. So I think the standard you're talking about can also be used to help really improve the AI readiness that other people will need across the country. Any thoughts on that?

John Rancourt (37:23):

Well, I'll jump in and say, yeah, I agree that this is an exciting opportunity, and we're certainly looking forward to learning more about that standard. We've been thinking about data quality for a long time in different facets. One is compliance with a standard: we have numerous testing tools that we pay for, used under our certification program, to test and certify that software is able to comply with a standard. We have also funded different research projects to examine ways to improve data quality, including a LEAP project, a Leading Edge Acceleration Project, with Boston Children's to look at ways to assess both structured and unstructured USCDI data. And then just this year, a couple of months ago, we awarded a million dollars to Columbia to build out scalable computational approaches to evaluating the quality of data recorded by nurses, so that it can be used in AI. That was part of the executive order on AI as well. So yeah, we're thinking about this, and we want to bring all of these resources to bear, so that we as a community can move towards data that is of higher quality and better-known quality, and ultimately get to that better outcome of better health enabled by data.

Charlie Harp (38:50):

I think, and this applies to a lot of things in healthcare, TEFCA included, that in healthcare we have a bad habit: something new comes out, and on day one we try to use it, and if it isn't immediately great or perfect, we're like, oh, that didn't work. It's happened with NLP; it's happened with all these things. And honestly, that's one of my fears. I think TEFCA is fantastic. Our ability to have a common agreement and a way that we share data is fantastic, because it opens up the pipes between everybody. It actually creates that system where we can communicate. And the thing that's kind of scary is that the first time people get stuff, it's not going to be perfect. We haven't really focused on data quality the way we should.

(39:44):

And one of the things I really like about measuring data quality: I'm in the data quality business, and there are a number of people I talk to who go, oh, my data is great, my data is just fine. I want to be like the guy who comes into your house and tests your water: you think your water's fine? Look at your water. And right now there's no way to do that. So something that measures quality lets people look at it, because the scary thing is, whether it's TEFCA, whether it's AI, or even classic quality measures and algorithms, if your data is bad and flawed and full of garbage and duplicates, then anything you decide based upon that data is going to reflect those flaws. So I've always scratched my head when some people don't seem to get how important the quality of the data is and how we clean it up.

(40:35):

And so as we embark on these things, I'm always very optimistic: listen, TEFCA is going to open up, you're going to see some stuff, it's going to be a little rough, but you've got to give the industry time to do exactly what you said, which is, once we see the data, we can figure out what we need to do to fix it. Once we start deploying AI, we're going to start to see what some of these prompting issues can cause. And I may not be as gung-ho about AI as other people, because I feel like AI is a rear-facing technology. It can only do things probabilistically based upon what we've already done. And so I kind of feel like history's hands are on the planchette of a Ouija board, giving us an answer.

(41:28):

And the only way for us to steer that or to remove bias is to re-bias. Even when it was classic deterministic decision support in healthcare, when I was building those things at First Databank and Zynx, I always said a human, a learned intermediary, should be there to make the decision, because the AI is not going to feel bad that it killed your grandmother. Ideally, a human will have that in their calculus. So I think AI has a lot of potential; I still think we're trying to figure out where it fits into all the other things we're doing. I don't know if you guys disagree or have a different perspective on that. What's important is now we can say this podcast is about AI and we'll get more listeners. So go ahead.

John Rancourt (42:13):

Well, I can say a few things about our take on AI, which is that we at the department are AI optimists, but we also recognize that AI has risks, and those are very real. The optimism is around the idea that AI can improve care, reduce burden, and reduce costs, but there's also the risk that it exacerbates all of those things, or exacerbates inequities. So those that are using AI, like you're saying, Charlie, need to have transparency into the tools that they're using and understand what they are. And that's where our work has come in: building requirements for certified health IT that uses predictive decision support interventions to provide what we call a nutritional label for AI, with different source attributes, so the provider or entity or person using that system knows what the intended use is and what it is not. So yeah, there is a ton that can be done, but there are risks as well.

John Short (43:21):

I agree with that. Again, as I mentioned earlier, I'm a big proponent of AI readiness: having pristine data that's super clean, that's super accurate, that's coded to the maximum extent it can be. And then there's a program in place, not necessarily a software program, but a policy in place, where it's continually reviewed and updates are made, because sometimes codes are replaced and new codes are created. My team works on a lot of toxic exposure, creating new codes that never existed before, and location codes that never existed. So there should be a sustainment program that's continually going through and maintaining the data. Okay, I have pristine data. Well, guess what happens the next day? It's less pristine, and the next day it's less and less. It becomes tarnished unless you have a good sustainment program to maintain high-quality data so that it continuously stays high quality.

(44:10):

So you should be able to do that, and that should be part of the risk scoring that goes into how well an AI can be used. Let's say someone built the perfect Hogwarts magical AI that has had perfect results. But if you plug it into trash data, whether it be garbage data or data that has become inaccurate because you don't sustain it, then you're not going to get something beautiful out the other side; you're going to get something corrupted. It's not going to be a beautiful new doe; it's going to be, what, a chupacabra or something coming out the side of this AI program. And so we all have to make sure that feeds into the calculus. And when standards are developed for looking at that data quality, I think we have to say, okay, we can certify, as you were saying, we can determine the data quality: it's 60% quality or 80% quality.

(44:57):

That scoring you're talking about for that new standard, it would seem like AI companies and private and government institutions would incorporate it into their AI risk scoring. That way, we know the data going in is at a 60% score, and that should change the calculus of the risk of any result coming out of the AI. Because if it's made for use with 80% quality data or a hundred percent quality data, and I put in lower-quality data, then it should recalculate and say, any result you get out of here could be 60% or less accurate; you should know that. And I think those are rules that have to be set in place, probably in legislation, so everyone can have an equalized, normalized trust in AI. We want to make sure people talk about non-biased AI.
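One naive way to fold a data-quality score into AI risk scoring, along the lines John suggests, is to discount a model's stated confidence when the input data falls below the quality the model was validated against. The function name, the linear discount, and the 80% default are all invented for illustration; a real risk framework would be far more nuanced.

```python
def adjusted_confidence(model_confidence, data_quality, required_quality=0.8):
    """Discount a model's confidence when input data quality (0.0-1.0)
    falls below the quality level the model was validated against."""
    if data_quality >= required_quality:
        return model_confidence
    # Scale confidence down in proportion to the quality shortfall.
    return model_confidence * (data_quality / required_quality)
```

For example, a model reporting 90% confidence, validated on 80%-quality data but fed 60%-quality data, would be reported at 0.9 x (0.6 / 0.8) = 67.5% confidence, surfacing the degraded input to the clinician rather than hiding it.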

(45:46):

That's important. I talk about the readiness and quality, which I think everyone should focus on, but we also have AI trust, and I think that's the other thing we have to look at when we do legislation: how well the AI is trusted, whether it's commercial AI or government-derived AI, and you should understand what the trust score is. Otherwise, many clinicians either won't use it or will ignore it; they'll go back to using standard clinical decision support. But all these things are important. And again, let's go back to our base thing: this is a football, this is a standard. We need to make sure we have coding standards, data sharing standards, and policy standards like TEFCA. Look at TEFCA the way you look at the interstate road system. Okay, we had some roads, we had some cities, but we didn't have enough connectors, and people didn't know if they could join the road or not.

(46:42):

And TEFCA says, hey, everyone can join the road. Everyone's allowed to join the road. Everyone can share the road, but you might have to have certain agreements in place, purpose of use, that sort of thing. So look at what the interstate system did to explode commerce, capability, sharing, and travel for America. TEFCA does the same thing for all the healthcare data, under appropriate terms of use. That's another way people can visualize what TEFCA is really doing. I'm not saying TEFCA itself is building the new exit ramp, but it's putting the rules in place to encourage those to be built, rewarding the partners, or punishing them if they're blocking data, that sort of thing. If we didn't have the interstate system, if we didn't have on-ramps and off-ramps in our country, it would be difficult to travel.

(47:29):

And so TEFCA puts that in place, and as people join TEFCA and put it in place for their organizations, now we all have an interstate system we can share, and we have better information sharing. And then we go to the standards like you talked about to make sure we have quality, so we don't go off the exit ramp and fall off the side of the earth; we actually get to the city or data lake we're trying to reach. So that's where being able to validate the quality allows us to get the right data to the right place at the right time. And whether we're using the data for AI readiness or for other purposes, we can be assured that what we're deriving is of high quality and we can trust it. And that's why that standard and quality are really critical.

Charlie Harp (48:10):

And you actually used a word a bunch of times that I think is a great word. As an example, a few years ago we started working with a large provider, one of the largest IDNs in the US, and we did a lot of different projects with them about normalizing their data, the data coming from their clinics and their hospitals going into their central data repository. And I was having a conversation, and I always think of population health, quality measures, and all those things, and understanding that they're important, but one of the things a person there said is, in the old days, we would map things as a project, we would use the data, and then we'd move on. And they changed their philosophy: it became, say, a lifestyle choice. Data quality readiness is a lifestyle choice. You choose to do it; you do it every day.

(48:57):

And he gave an example: when the hurricanes came last year, we used to have nurses running around hospitals with clipboards identifying where the patients were that they might have to move and which patients were on ventilators. And it was this herculean task. He said, once we got everything normalized and mapped in a central repository, we were able to run a report that took 30 seconds, and we knew the exact same information. And when you think about public health, when you think about the pandemic we just went through, the idea of us having that kind of data liquidity and data readiness, it doesn't matter whether you want to use AI or analytics or decision support or just generate a report for the patient so they can see what's going on. Having better-quality, more complete data is a no-brainer because it satisfies all use cases. And I think that's really optimistic, even though AI seems to be really driving this, and that's fine, because I'm like a hitchhiker: you get in a car and you're like, I don't care who's driving or why, as long as we get there and you don't murder me. So when it comes to data quality, I'll get in the car. As long as you're going to data quality, I'm in the car, let's go.

John Short (50:10):

I agree. Often no one ever has the right number of people or the right amount of money. You never have all the oxygen you need to get everything done. And so I look at AI as another tank of oxygen: you're 30 fathoms under, or you're going to space, whatever, you need more oxygen to do what you need to do. And AI has a lot of oxygen right now; people are excited about it. Every time I go to an AI training session and I talk about AI readiness, people are like, what? Yeah, I'm for AI. And then I talk about all the data quality work in detail, and they're like, I guess we need to go back and look at that. So they all agree that's critical now. And people have told me, we've been trying to get people to work on improving our data quality, and they say, yeah, it's in the backlog for next year's funding, next year's funding, and they never get to it.

(50:58):

They give them a little bit or whatever. Now they're like, we need that for AI. Okay, yeah, we hear you. Here's some money. And so, in an effective, purposeful, strategic way, that's what leaders have to do. You have to find out where there's opportunity to move what you're trying to move, and not because I have a personal agenda: having data quality, no matter what element of healthcare you're working on, is going to improve healthcare, improve patient safety, lower cost, et cetera, et cetera. We could use a million other phrases. But you can't always get all the oxygen you need to move that forward, and by connecting it to AI in a logical, sound, and ethical way, people go, oh, I get it, I need to improve this. All of a sudden, you can improve something that everyone agreed to before but couldn't prioritize; now they understand why it's critical to prioritize it. And again, it's going to pay dividends in many other areas besides AI. That's great. It's only going to help everybody in sharing the data, improving patient outcomes, lowering duplicate testing, et cetera.

Charlie Harp (52:02):

So I've kept you guys for a long time; we should probably start wrapping it up. Are there any closing thoughts or ideas you guys want to put out there? How about we start with you, John R?

John Rancourt (52:14):

Well, Charlie, first of all, thank you so much. I appreciate the opportunity to talk with you, and John Short, again, always covering most of the points that I'm trying to make, I really appreciate the partnership. I wanted to close with a point we were getting at earlier, which is about community-driven effort. This is really large-scale, across-the-community work to try to improve on any of these activities, whether it's network-to-network data sharing through TEFCA or standards development activity. And so I do want to put in a plug for our ONC annual meeting. And Charlie, thanks for agreeing to speak there.

Charlie Harp (52:54):

Don't you mean your ASTP annual meeting?

John Rancourt (52:56):

Oh, Jesus, Charlie, thank you. Yes. I have to keep thinking about our new branding. That's in two weeks, on December 4th and 5th, here in D.C., and from what I understand, registration is still open, and it's free. We'll have a lot of great panels talking about these topics and more. I'd love to see people there, and if folks ever have questions or needs, please do reach out to me or anybody on the ASTP team. Charlie and John Short, thank you both so much.

Charlie Harp (53:30):

Thanks, John R.

John Short (53:32):

Well, John R, you've often accused me of taking your words, so apologies, but I'm glad we can sync up well. It shows that the FEHRM and HHS/ASTP, formerly HHS/ONC, have a tight bond and a lot of interaction with each other, so I'm glad we're able to do that. Before we go, I did also want to mention, because I only mentioned it one time, the toxic exposure work that we're doing, again, another standard we're working on. People are exposed to things all the time; military people got exposed to even more things, and the VA has had recent legislation called the PACT Act that supports this. We went through the record and found almost 800 different examples of elements people were exposed to, and we're going through a methodical process of getting all that data researched so we can get it submitted to the National Library of Medicine.

(54:28):

We're working with the VHA, the Veterans Health Administration, and getting these added to SNOMED and getting ICD-10 codes created. That way, other versions of these elements throughout the record, no matter where someone went in the private sector, can also be coded and mapped. So these are new national standards we're creating for the whole country. Last year we created about 60 new codes, and again, we've got about 800 to get through; we're working on a process to speed that up. But every one of these takes 70, 80, 100 hours or more of research by clinicians looking at white papers, research papers, peer-reviewed papers. Because when we submit it, it's not just, hey, I found this thing, toluene, and this is the reason it should be a code, because I think it does this. Well, what's your evidence? We have to provide evidence, peer-reviewed, from the Journal of Medicine, Nature Cell Biology, et cetera, that shows proof that this chemical causes certain things.

(55:24):

Again, we back it up with three, four, five different peer-reviewed papers. So it's a lot of work. And we recently started doing locations: anytime anyone was in a given location during a certain five-year span, they were, or could have been, exposed to X, Y, Z. The location codes were available to be used before, but they weren't being used, so we've pioneered that. We've now identified about 80 or 90 locations that we're going to get coded. So that's a big effort in my team; we're really excited about it. And again, this helps the whole country. So I appreciate the time today, Charlie. I think this was a great subject. Everyone should be excited about interoperability and excited about standards. And stepping back from that, what should they really be excited about? They should be excited about all those times when they couldn't get their record.

(56:10):

Why can't I get my record from that hospital? That won't happen anymore as we improve our ability. When you show up at the one-doc shop in Anchorage, Alaska, or Moncks Corner, South Carolina, or at Inova Health System in Virginia, or wherever, they'll have access to your full record, and it'll all be normalized, and there won't be a question. It will be as if you went to the same clinic, the same hospital, your whole life, even though you went to 50 of them across the country. And that's the goal. That should be exciting to most people, because they've been frustrated by that and have had to have another test done, and another test done. So think about that; anyone who hears this, that's what you should be excited about. All this good work people talk about, that's where we're going to get to. And hospitals have said, well, if you want our records, you've got to pay so much money. We're trying to get past all that, to where people are required to share their data so you don't have a barrier to getting your own data. I think that's what we're excited about, because we know that's the end result, and that's what the general public should be excited about. I thank you, John R, for the time together and for ASTP's efforts. And thank you, Charlie.

Charlie Harp (57:12):

It's my pleasure. And for both of you guys: John S, I personally really appreciate all you do for our current and former members of the military. The work that we do with you guys is important. And John R, the folks at ASTP/ONC, a lot of you guys have been fighting the good fight. Administrations change and you keep fighting the good fight, and I, for one, really appreciate and applaud all the work that you guys are doing out there, and I'll do whatever I can to push the agenda. So thank you both very much for being on today. And to our listeners, thank you; if you're still awake, we didn't put you out. "Expecto good outcome," to go with what John was talking about before. It's "expecto good outcome." I am Charlie Harp, and this has been the Informonster Podcast.