Informonster Podcast

Episode 41: PIQI Update and HL7 Connectathon Recap

Clinical Architecture Episode 41

In this episode of the Informonster Podcast, Charlie Harp shares updates on the PIQI Framework and his experience at the HL7 Connectathon. He highlights progress across newly formed PIQI Alliance work groups, from claims and explanation of benefits to USCDI and quality measures like eCQMs and HEDIS. He also talks about new SAM developments, including detection and noise, and how community feedback is shaping PIQI’s evolution to strengthen healthcare data quality.

Charlie also reflects on his first Connectathon, where PIQI sparked engaging discussions and valuable feedback from across the HL7 community. The experience underscored the collaborative spirit and shared commitment to advancing data quality in healthcare.

Contact Clinical Architecture

• Tweet us at @ClinicalArch
• Follow us on LinkedIn and Facebook
• Email us at informonster@clinicalarchitecture.com

Thanks for listening!

Hi, I am Charlie Harp and this is the Informonster Podcast. On this episode of the Informonster Podcast, I'm going to give you an update on the PIQI Framework and talk a little bit about my most recent experience at the HL7 Connectathon in Pittsburgh, Pennsylvania. In general, the work around the PIQI Framework has been going really well. The folks getting involved in the different work groups and in the executive steering committee have really put their backs into PIQI and have helped out a great deal with some of the evolutionary things around the framework. Within the PIQI Alliance we've established the executive steering committee, and we've also created a work group around claims and explanation of benefits based upon CPCDS, working with folks with experience in the CARIN Blue Button effort. We also have a work group around the USCDI rubric, and we have a work group looking at how PIQI could be used for quality measures such as eCQMs and possibly things like HEDIS measures.

(01:12):

The work groups are just starting out and have been talking about what falls within their scope of control. A lot of that is really: what are the requirements of the PIQI models, what are the requirements of the rubrics, and are there any novel SAMs or SAM patterns that we need to be thinking about? It's been going really well and we've gotten a ton of feedback. The other thing that's worth talking about is the betas. We've done betas with a number of information exchanges and kind of put PIQI through its paces, and it has not disappointed. We've gotten some really great feedback. We've added the concept of detection to PIQI so that you can look at the data moving through a channel and identify whether it's seeing members of certain value sets. The reason that's important is that I may be using a data stream to drive some kind of quality measure, and the quality measure is triggered by seeing members of certain value sets.

(02:11):

Let's say you're looking for diabetes patients and you've created a value set with all the terms that represent diabetes. If you're getting a data stream and you're never seeing any of the terms you would expect, and therefore never seeing any diabetic patients in that stream, then it's obviously not worthwhile to use that stream for a diabetes quality measure. Now, it could also be that the terms in that stream are not mapped correctly, and therefore you're not identifying those diabetes patients because they're not coded to something the value set understands. So the idea of detection is really that you identify things you think should be there and yet you're not seeing them at all. I think that's a very cool part of how PIQI has evolved, and we call that signal.
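
To make that detection idea concrete, here is a minimal sketch of a signal check; the value set contents, field names, and function are illustrative assumptions, not the actual PIQI implementation.

```python
# Minimal sketch of a detection ("signal") check: scan a stream of coded
# records and report whether any member of an expected value set appears.
# The codes and field names here are illustrative, not PIQI content.

DIABETES_VALUE_SET = {"44054006", "73211009"}  # example SNOMED CT diabetes codes

def detect_signal(records, value_set, code_field="code"):
    """Return True if at least one record carries a code from the value set."""
    return any(rec.get(code_field) in value_set for rec in records)

stream = [
    {"code": "38341003", "display": "Hypertensive disorder"},
    {"code": "44054006", "display": "Type 2 diabetes mellitus"},
]

if detect_signal(stream, DIABETES_VALUE_SET):
    print("Signal present: diabetes codes found in the feed.")
else:
    print("No diabetes codes detected; this feed cannot drive a diabetes measure.")
```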

(03:07):

So, am I getting signal? The other thing we're adding to PIQI is the concept of noise. Whereas signal is looking for things that should be there, noise is looking for things that are duplicates. When you're doing a SAM around duplication, you're really identifying which fields on the elements in the data class indicate that something is a duplicate when they're the same. For example, if I'm looking at the lab result data class and I've got records where the test, the result, the result unit, and the date and time are all exactly the same, that's probably a duplicate. Identifying that noise, that duplication in a feed, isn't really part of the quality measure in terms of the score, and neither is detection, because if you're not sending me diabetic patients, it's not like I can ding your score; it just means I can't use your signal.
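
Here is a minimal sketch of that kind of duplication check on lab results; the field names and the duplicate key are illustrative assumptions rather than the PIQI duplication SAM itself.

```python
# Minimal sketch of a duplication ("noise") check on lab results: two rows are
# treated as duplicates when the identifying fields (test, value, unit, and
# date/time) are exactly the same. Field names are illustrative assumptions.
from collections import Counter

def count_duplicates(lab_results):
    keys = [
        (r["test_code"], r["value"], r["unit"], r["effective_datetime"])
        for r in lab_results
    ]
    counts = Counter(keys)
    return sum(n - 1 for n in counts.values() if n > 1)  # count only the extra copies

results = [
    {"test_code": "2345-7", "value": "98", "unit": "mg/dL", "effective_datetime": "2024-05-01T08:00"},
    {"test_code": "2345-7", "value": "98", "unit": "mg/dL", "effective_datetime": "2024-05-01T08:00"},
]
print(f"{count_duplicates(results)} duplicate lab result(s) in this feed")
```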

(04:08):

And the same thing is true with duplication. If you are sending me noise, theoretically I could have some factor that I apply to the overall score, but that's really where you can shove too many things into a numerical score to the point where it's not really meaningful. So if you think about what this means for PIQI, you have the concept of a quality score for the things that should be there every time, or the stuff that is there: things that should be populated but aren't, things that are not populated correctly, things that aren't conformant, and possibly some of the plausibility stuff. Those things give me a score. But the other facets of the stream of data I'm receiving are things like duplication and detection, the signal and the noise in that channel. All of those things come together to indicate the overall quality of that stream of data.
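
As a rough illustration of keeping those facets separate, here is a small sketch of an assessment structure; the names and threshold are assumptions, not PIQI's actual output format.

```python
# Minimal sketch of reporting a feed with separate facets: a quality score for
# the data that is present, plus signal and noise observations that qualify the
# feed without being folded into the score. Structure and names are illustrative.
from dataclasses import dataclass

@dataclass
class FeedAssessment:
    quality_score: float    # populated, conformant, plausible data
    signal_detected: bool   # expected value set members were seen
    duplicate_count: int    # noise observed in the feed

assessment = FeedAssessment(quality_score=0.94, signal_detected=False, duplicate_count=12)
usable_for_measure = assessment.signal_detected and assessment.quality_score >= 0.9
print(f"Usable for a diabetes measure: {usable_for_measure}")
```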

(05:05):

So generally, those are the big things that are happening around PIQI. We are starting to deploy the plausibility SAMs and working on models and interfaces for them, and that's really exciting. Clinical Architecture is creating some content, and we've been talking to folks at Deloitte who are also creating some content, so hopefully we will soon have some results around deploying plausibility and seeing what that looks like. We've seen the core atomic quality of the attributes and the elements, and we've seen what detection looks like. We've got detection SAMs running now, we're working on the duplication, or noise, SAMs as well, and we're also working in parallel on the plausibility SAMs so we can look at things like: does this patient's birth sex align with these procedures and medications, and does the lab test match the unit and the specimen type?
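
For the lab test, unit, and specimen type example, here is a minimal sketch of what a plausibility check driven by a combination table could look like; the table and codes are tiny illustrative placeholders, not real plausibility content.

```python
# Minimal sketch of a plausibility check: do the reported unit and specimen
# type make sense for the lab test? The combination table is an illustrative
# stand-in for real plausibility content.

PLAUSIBLE_COMBINATIONS = {
    # LOINC code: (allowed units, allowed specimen types)
    "2345-7": ({"mg/dL", "mmol/L"}, {"Serum", "Plasma"}),  # Glucose
    "718-7":  ({"g/dL"}, {"Blood"}),                       # Hemoglobin
}

def is_plausible(loinc_code, unit, specimen):
    entry = PLAUSIBLE_COMBINATIONS.get(loinc_code)
    if entry is None:
        return None  # no content for this test; cannot assess
    units, specimens = entry
    return unit in units and specimen in specimens

print(is_plausible("2345-7", "mg/dL", "Serum"))  # True
print(is_plausible("718-7", "mg/dL", "Serum"))   # False: implausible combination
```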

(06:03):

So those are going to be the first plausibility SAMs we're going to roll out and make available in the PIQI Alliance community. Let's talk about the Connectathon. I drove out to Pittsburgh last Friday and arrived at three o'clock in the morning. Thank you, I-70. The Connectathon started the next day at the Wyndham downtown in Pittsburgh. We had two tables for the PIQI Alliance, and there were people sitting around those tables. This was my very first Connectathon, and what I will say is I really enjoyed it. I've been to HL7 work group meetings before, I've been to AMIA meetings, and I always enjoy the friendly, intelligent conversations I have there; it's always nice to catch up with people I've known for decades, and that happens at almost any healthcare conference I go to. The Connectathon was kind of fun, though, because people were sitting around a table and they were there to try out the PIQI framework. My team had prepared a lot to make the open source reference implementation of the PIQI framework available and usable, as well as our commercial version, the PIQXL Gateway, and the stuff we built to help transform FHIR, CCDA, and V2 into the PIQI format, so that people could mess around with it, give us feedback, and see what they could do.

(07:30):

It was a busy two days; a lot of people came by and asked a lot of really good questions. We had some debates around the tables, but they were always thoughtful and considerate. And honestly, I didn't feel like people were trying to block PIQI or shut it down. I think a lot of their questions and feedback were really about how to ensure that PIQI could be successful in this community around FHIR and HL7, and that was really nice. It was a really nice experience, and I got a lot of great feedback from people like Gay Dolin, Rob McClure, Bryn Rhodes, and folks that Clinical Architecture competes with who will remain nameless. I can't name everybody; if I name everybody, we'll be here all day. So I thought the conversations were great, and there are some modifications we're considering. We're looking to see if SAMs, which are basically code with content, can be driven by CQL as a kind of execution type.

(08:35):

So right now we're actively looking at CQL to see if we can apply it to a PIQI model and use it to articulate a SAM. I think there might be some complexity around that, but I'm more than willing to give it a try. The other thing we talked about is whether rubrics and the scoring could be represented by FHIR quality measures and FHIR quality measure reports, so we're looking at both of those things. I got an opportunity to talk with folks like Grahame Grieve about his perspective on PIQI and where the FHIR validator ends and PIQI begins. But once again, PIQI is not just FHIR; it's V2, it's CCDA, it's FHIR R4 and R5. The idea is that PIQI is this flat representation for checking quality regardless of where the data comes from. And that brings me to something that came about at the Connectathon that was really exciting.

(09:36):

I had people coming up to me and saying, hey, Charlie, can PIQI be used for providers? If I have a provider resource or a collection of resources around the provider, could I send that to PIQI and have PIQI validate it? And of course what I said, tongue in cheek, is, well, it starts with a P, so yeah, PIQI can stand for the Provider Information Quality Improvement framework. But in reality, since we made it so PIQI can take in any flat data model, you could use it for provider data as long as you create a provider model, and you'd use the same SAMs because you are looking at the same types of things you would look at in patient data. You're looking at dates, you're looking at fields that should be numeric, you're looking at codeable concepts and whether they're in the proper code system. I had someone else come along and say, if I've got a manifest of data for DirectTrust, could that be something I could validate with PIQI?

(10:31):

And I said, so you have a package? And they said yes. And I said, well, if it starts with a P, I don't see any problem; then it's the Package Information Quality Improvement framework. I say that jokingly, but one of the big takeaways for me is that the PIQI approach, scoring data against a rubric using a set of simple assessments and aligning that to a quality taxonomy, works for a lot of these things, and it's not that hard to do. It's not hard to create a model; it's just data. It's not hard to create a rubric. And what we're really doing over time is extending the number of SAMs and SAM patterns. So I thought that was very exciting, and with respect to the folks that approached me to discuss those use cases, you had me at hello. We've been very focused on patient data quality; we started out with clinical data and then moved to claims and these other use cases.
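
To make the "same SAMs, different flat model" idea concrete, here is a minimal sketch of simple assessments applied to a hypothetical flat provider record; the model, field names, and checks are assumptions for illustration, not a PIQI specification.

```python
# Minimal sketch of reusing the same kinds of simple assessments on a flat
# provider model: dates should parse, identifiers should be well formed, and
# coded fields should come from the expected code system. All names here are
# illustrative assumptions.
from datetime import date

def assess_provider(record):
    findings = []
    npi = record.get("npi", "")
    if not (npi.isdigit() and len(npi) == 10):
        findings.append("npi: not a plausible 10-digit NPI")
    try:
        date.fromisoformat(record.get("license_expiration", ""))
    except ValueError:
        findings.append("license_expiration: not a valid date")
    if record.get("specialty_code_system") != "http://nucc.org/provider-taxonomy":
        findings.append("specialty: not coded in the expected code system")
    return findings

provider = {"npi": "1234567893", "license_expiration": "2026-06-30",
            "specialty_code_system": "http://nucc.org/provider-taxonomy"}
print(assess_provider(provider) or "No findings")
```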

(11:29):

But in reality, all those things we work with in healthcare deserve to be quality data, and they all play a role in how we take care of people, how we pay for care, and all these other things. For all of these things, if it's a data asset in motion and we want to validate that it's good before we accept it, PIQI can be kind of that gatekeeper. And that's another big thing, by the way, that came out of these discussions. A number of people who approached me were saying, well, I thought PIQI was supposed to be applied against my EMR's dataset, and I want to reinforce this because I know that some of the stuff I've written maybe wasn't as clear as it should be. This whole idea, and I think maybe it was Rob McClure who said it when we were sitting around the table, is that PIQI is designed to protect you from other people's data.

(12:25):

For all intents and purposes, it's a mechanism that creates trust, because it's telling you: I've assessed this data with an objective rubric for your use case, and I have found that it is good. Now, you can also test the data that you're sending out to see if it satisfies the rubric; in fact, I encourage that. But really, PIQI was built for people that are receiving data and are trying to decide if they're going to let that data in or not. That's an important part of PIQI that I want to restate. The other cool thing that happened at the Connectathon is that I sat down with Keith Campbell and the folks at Deloitte and we were talking about knowledge assets, because a big part of PIQI is these plausibility content sets, and how do I share them? Some of them, like terminologies and code systems, are easy to check because we have FHIR terminology services that can say: is this a valid concept in the code system?
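
That kind of code system check maps onto the standard FHIR terminology $validate-code operation; here is a minimal sketch, assuming a hypothetical terminology server base URL (the ValueSet/$validate-code variant works the same way for value set membership).

```python
# Minimal sketch of checking a code against a FHIR terminology server using the
# standard CodeSystem/$validate-code operation. The base URL is a hypothetical
# placeholder; any R4 terminology service exposing $validate-code works the same way.
import requests

TX_BASE = "https://tx.example.org/fhir"  # hypothetical terminology server

def code_is_valid(system, code):
    resp = requests.get(
        f"{TX_BASE}/CodeSystem/$validate-code",
        params={"url": system, "code": code},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    # The operation returns a Parameters resource with a boolean "result" parameter.
    params = resp.json().get("parameter", [])
    return any(p.get("name") == "result" and p.get("valueBoolean") for p in params)

print(code_is_valid("http://loinc.org", "2345-7"))
```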

(13:18):

That's easy to check. And for things like value sets, we can do the same thing with a FHIR terminology server. We can say: here's the key, here are the members, here's my member attribute; is it in this value set that I've identified? Those are both things that FHIR terminology services are well suited to do, but there is a category of plausibility content that is not like that. Things like: if I've got this instrument device ID and this test kit device ID in my lab result, and this LOINC code, is the LOINC code appropriate? Is the specimen type appropriate? Is the unit appropriate for this combination? That's more than just a value set. That's like decision support content; it's like a drug interaction or a duplicate therapy check. Now, I'm not advocating using PIQI for decision support, but for all intents and purposes, PIQI is doing decision support.

(14:14):

It's just doing qualitative decision support while it's assessing your data. So there are patterns like that, and there are patterns like a sanity check on a range: here's the LOINC code, here's the unit of measure, here's a low, here's a high, is my result value in range? I'm looking forward to having more conversations with Keith Campbell around his Ike initiative and looking at how we can create a common approach to moving content assets from point A to point B, not just because of PIQI, but also because I was a clinical decision support guy before I was a terminology guy. The more we can make clinical decision support interchangeable, as opposed to having these fixed walled gardens of proprietary content, the more we can move assets that are good to where they can be put to use and let the best content win. So if one source has really great information around a certain subset of drugs and another has really great information around a different subset, I can mix and match the content I want to use for decision support instead of compromising because I can only work with one content vendor at a time.
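
For the range sanity check pattern mentioned above, here is a minimal sketch; the ranges and codes below are illustrative placeholders, not clinical reference content.

```python
# Minimal sketch of the "sanity check on a range" pattern: given a LOINC code
# and unit, is the reported value inside a broad plausible range? The bounds
# are illustrative placeholders only.

PLAUSIBLE_RANGES = {
    # (LOINC code, unit): (low, high)
    ("2345-7", "mg/dL"): (10.0, 1500.0),  # Glucose
    ("8867-4", "/min"): (20.0, 300.0),    # Heart rate
}

def value_in_range(loinc_code, unit, value):
    bounds = PLAUSIBLE_RANGES.get((loinc_code, unit))
    if bounds is None:
        return None  # no content for this combination; cannot assess
    low, high = bounds
    return low <= value <= high

print(value_in_range("2345-7", "mg/dL", 98.0))    # True
print(value_in_range("2345-7", "mg/dL", 9000.0))  # False: implausible result
```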

(15:32):

Content vendors may not love that, but I think it opens up the world to content that's not just authored by vendors, but also content that's authored by institutions that are creating new ways of doing things. Theoretically, legal and liability barriers notwithstanding, we can fast-track cutting edge decision support to someplace where it can be put to use in a more reasonable timeframe. So that was really cool. In general, I had a lot of time to think on the six-hour drive back from Pittsburgh Sunday night, and I've already reflected on this on LinkedIn, but I really do appreciate the spirit of the folks at HL7. I appreciate the willingness to take the time to learn about PIQI, sit around a table, tell some bad jokes, drink some okay coffee, and share thoughts and ideas about how we can make this thing better.

(16:31):

The universal theme, when you go from table to table, whether they're talking about measuring AI, doing things around prior auth, or things like data quality, is that all of the people who go to these meetings are really thinking about: how can we make healthcare better? How can we work together to create standard approaches that make healthcare better? So I just want to generally thank the HL7 community. Thanks to everybody who came by, chatted with me, and supported the PIQI framework. It's really been a humbling and wonderful experience. And that's a wrap for the PIQI update and the Connectathon summary. I'm Charlie Harp, and this has been another episode of the Informonster Podcast. Take care.