Informonster Podcast

Episode 35: Unlocking Value Set Quality in Healthcare

Clinical Architecture Episode 35

Charlie Harp is joined by Dr. Victor Lee, VP of Clinical Informatics at Clinical Architecture, to take a deep dive into how a methodical approach to value set creation and maintenance builds a solid foundation for downstream data analysis and improved healthcare outcomes. 

Dr. Lee introduces the 6 Cs Framework he developed to evaluate value set quality through dimensions like clarity, completeness, congruency, consistency, correctness, and currency. 

They discuss common pitfalls, the impact of high-quality value sets on clinical decision support, and why value set maintenance is essential over time.  

Download the full report for a fresh perspective on creating and sustaining value sets that make a difference in real-world healthcare outcomes. 

Contact Clinical Architecture

• Tweet us at @ClinicalArch
• Follow us on LinkedIn and Facebook
• Email us at informonster@clinicalarchitecture.com

Thanks for listening!

Charlie Harp (00:08):

I am Charlie Harp and this is the Informonster podcast. Today on the Informonster Podcast, you're in for a treat because I've got the Doctor Victor Lee with me here today, and we're going to talk about value set quality. Now for the people that don't know you, Victor, for the unfortunate souls who've not had the great pleasure of being able to work with you, can you give everybody kind of a high speed, low drag overview of your background?

Dr. Victor Lee (00:35):

Yeah, thanks Charlie. It's a pleasure to be here. And so hi everyone, I'm Victor Lee and I am currently Vice President of Clinical Informatics at Clinical Architecture. I've been here about eight years now. I had my early beginnings as an internal medicine physician. I practiced as a board-certified internal medicine hospitalist at Kaiser Permanente in Los Angeles, California. But I've always been a big geek, and so I was always looking for ways that computers and technology could make workflows better for physicians and that it would result in better patient outcomes. So I eventually gravitated into the world of health information technology and, for those of you who might not know, I've actually known Charlie for many, many years, dating back to when we worked together at a previous company, Zynx Health.

Charlie Harp (01:35):

It's approaching 25 years.

Dr. Victor Lee (01:37):

Yeah. Oh wow. Yeah, it's been quite a ride. And so I had always been interested in controlled terminologies and what the role of terminology and healthcare data would play in improving quality of care and making the world a better place. So here I am.

Charlie Harp (01:59):

Thank you, Victor. Now you said you're currently the Vice President. Is this something I should know?

Dr. Victor Lee (02:06):

There was no innuendo intended.

Charlie Harp (02:08):

All right, good. Good, good, good. And so, Victor has been in this role at Clinical Architecture, essentially as our Chief Medical Officer, for a number of years, and the content team at CA does a lot of things to support our clients and help create standard content that we can make available. But one of the things that Victor's team is responsible for is a value set that we call the Clinical Architecture foundations, which we originally built to drive the capabilities of our inference engine, but lately it has been used for a number of things. Also, Victor and his team maintain the Align product, which is essentially a value set for managing and rolling up lab data. So Victor, the topic today is value set quality, and I know you've been doing a lot of work around this recently. How do you want to kick this off?

Dr. Victor Lee (03:07):

Yeah, great question. Just in case there are listeners who might need a primer on what a value set is, I'll start there. So a value set is really just a fancy word or phrase for a collection of coded terms. I don't know if everybody in the industry thinks about value sets in this way, but I'm going to propose sort of a classification of value sets. And so I kind of think of value sets, these collections of codes, as falling into three general categories. The first one is what Charlie and I refer to as clinical roll-ups. For example, there might be this thing called type two diabetes and we might want to do something with type two diabetes as a concept. And by the way, I forgot to mention that value sets have many purposes in health IT. They can be used for analytics, research, clinical decision support, anytime that we need to understand things about a patient, an individual patient or a population.

(04:16):

There are a lot of codes floating around electronic health record systems and any place where patient data can be stored, there may be coded data, there may be free text, but we're kind of talking about coded data. So just wanted to back up there a little bit. So value sets serve many use cases that I mentioned and there's a lot more that I didn't mention, but okay, so getting back to the three general categories of value sets. So the first one is a clinical roll-up. And so going back to this type two diabetes example, a type two diabetes value set might include terms from potentially one or more code systems. So it could be ICD-10-CM, SNOMED CT, those are common code systems for representing diagnoses. And type two diabetes would be a diagnosis. So we might have many things that all share type two diabetes in common.

(05:11):

So it could be just plain vanilla type two diabetes mellitus. It could be type two diabetes with complications or without complications. We could have type two diabetes with acute complications or type two diabetes with chronic complications. So you get the idea that there can be a combinatorial explosion. And so those are clinical roll-ups, and I think a lot of us are kind of familiar with thinking about value sets in that context. But quickly, the other two types of value sets that I also see out in the wild are answer lists. So we might have a collection of codes that might be response values for a question. So it could be administrative gender, and the value set might contain things like male or female or unspecified or I didn't ask or unknown, flavors of null. And those are unlike clinical roll-ups, where clinical roll-ups have concepts that are all similar to one another.

(06:11):

An answer list, on the other hand, might have diametrically opposite terms, but they still belong in a collection because they serve a certain purpose. And then the third type is what I call subset catalogs. And those can be collections of terms that might be used to validate a certain field of information. So for example, if I have a problem list, I might say that valid problem list entries would be anything under the SNOMED CT diagnosis hierarchy. So that might be different than a clinical roll-up. I might have type two diabetes and I might have stroke and a bunch of other things that are heterogeneous. So that's kind of how I think about value sets in general.
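
To make those three categories a bit more concrete, here is a minimal sketch in Python. The code systems, codes, and descriptions are illustrative examples only and are not drawn from any published Clinical Architecture value set.

    # Illustrative only: tiny examples of the three general categories of value sets.

    clinical_rollup = {
        "name": "Type 2 diabetes mellitus (clinical roll-up)",
        "members": [
            ("ICD-10-CM", "E11.9", "Type 2 diabetes mellitus without complications"),
            ("SNOMED CT", "44054006", "Diabetes mellitus type 2"),
        ],
    }

    answer_list = {
        "name": "Administrative gender (answer list)",
        "members": [
            ("HL7 administrative-gender", "male", "Male"),
            ("HL7 administrative-gender", "female", "Female"),
            ("HL7 NullFlavor", "UNK", "Unknown"),   # a "flavor of null"
        ],
    }

    subset_catalog = {
        "name": "Valid problem list entries (subset catalog)",
        # Defined by a rule rather than a hand-picked list: anything under a
        # diagnosis hierarchy would be considered a valid problem list entry.
        "rule": "descendant-or-self of SNOMED CT 404684003 |Clinical finding|",
    }
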

Charlie Harp (06:54):

Now I'm going to go off on a tangent

(06:56):

Because this is one of the things, in the early days of healthcare IT for me, when I got involved with clinical informatics as an engineer, I really didn't like the term value set, and people at Clinical Architecture know this. I'm kind of militant about it because I feel like when I think of a value set, first of all, these are concepts, not values. So the very first thing that I react to is it's really a concept set. It's not a value set, but that's okay, I'll set that aside. The next thing was when I thought of a value set, it was more like the answer list. So you're saying here's a set of values, which are actually concepts, that I want you to pick from as a dropdown list. That's why, when we created the tool in the Symedical product, we called it the Element Set Manager, and we call the roll-ups elements because they're elements in an inference.

(07:50):

So for me, I think that, and this to me kind of feels like more of an academic way to describe it, but I kind of feel like these three things, they're very different in what they do, they're very different in philosophy when you're building them, as you've just described. And so one of the things I plan to do, it's on my bucket list, is to completely change the way the industry talks about these things and turn 'em into three separate things, because I think people get confused when you throw around the term value set. We might as well call it a something or whatever. But I'm going to stand down. I'm going to climb down from my soapbox and I'm going to thank you for giving the listeners a definition of value sets. And to restate what you said, there are really three types of value sets.

(08:44):

They're the kind that basically roll up a bunch of more specific clinical concepts or concepts in general into a broader concept for the purposes of some type of analytic grouping, there's the things that are basically lists of options or answers that go together. And there is kind of a subset of things usually based on some kind of category. For example, if I were in RxNorm and I say, these are all beta blockers, I'm not really rolling things up the same way I am diabetes. I'm actually grouping things based upon the fact that I've named this group this and these are the members that I've selected in this group. Is that fair?

Dr. Victor Lee (09:27):

Yeah, that's fair. And I just want to add, for the listeners out there who might not have some of the context, Charlie and I as well as other members of the Clinical Architecture team have had intense debates about the meaning of these words that we're throwing around. And we don't always have to agree, but we certainly have lively discussions about things like this, which makes us all nerds, and we love having these conversations. I will also say that our software tooling, in this case one of the Symedical applications called Element Set Manager, which we use to manage all three categories of value sets that Charlie and I had discussed, does the job for all three, and we have different applications for enhancing the experience and making sure that they're all very high quality. And so I think from that perspective, I guess where we converge is that no matter how we define these things and how we group or split the meaning of these three different categories of value sets, at the end of the day, they're lists of terms and they should hopefully have a code system, a code and a description, and however we group them or split them, for this conversation we'll just generally collectively refer to them as value sets.

Charlie Harp (10:53):

Now, are you going to get into the whole extensional intensional?

Dr. Victor Lee (10:57):

Oh, I certainly can. Yeah. Actually, this might be a good opportunity to define that. When we talk about intensional and extensional, we are really talking about how we're defining value set members. And so let me take one step back here and say that when we talk about value sets, again, they're collections of codes, but typically we think about value sets as having both definitions and expansions. So the definitions are kind of the instructions which, when executed, will yield a set of expansions, the members,

Charlie Harp (11:35):

And usually there's a rationale that says, why am I creating this value set in the first place?

Dr. Victor Lee (11:38):

Right. Some kind of rationale, or we might call it a scope statement, or we might have a field for the intended inclusion and exclusion criteria. But yeah, some sort of descriptive human-readable explanation for what you should expect to see. So the definitions are the things that can be extensional or intensional. So extensional definitions are basically individually handpicked terms. And you could imagine that if you have an extensional list of 10 terms, then your expansion will be exactly those 10 terms. However, we generally encourage our users to use intensional definitions. And when I say intensional, that's not with a T, it's with an S. I-N-T-E-N-S-I-O-N-A-L, intensional, which basically means that they're rule-based definitions. So an example of an intensional definition might be take this ICD-10-CM term called type two diabetes mellitus and then take all of its descendants based on the current ICD-10-CM hierarchy.

(12:56):

And that would be an intensional rule. So if you execute that rule, that one intensional definition might yield a hundred different type two diabetes terms. And so you could imagine that the value of using intensional definitions would be that you could potentially very efficiently yield a whole collection of terms with fewer definitions. And also perhaps equally or more important, if you care about the accuracy and the comprehensiveness of the value set over time, is that if there are changes to the underlying terminology, the rule that you execute today might yield a different expansion tomorrow. So if ICD-10 throws in some more type two diabetes mellitus terms under that hierarchy, you don't have to redefine your value set, because you can let the rules do the work of updating for you.
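
A minimal sketch of the extensional versus intensional distinction, using a toy hand-built hierarchy in place of a real ICD-10-CM release; actual tooling would query a terminology service rather than a hard-coded dictionary.

    # Toy parent-to-children hierarchy standing in for a small slice of ICD-10-CM.
    CHILDREN = {
        "E11": ["E11.2", "E11.9"],
        "E11.2": ["E11.21", "E11.22"],
        "E11.9": [], "E11.21": [], "E11.22": [],
    }

    # Extensional definition: individually hand-picked codes.
    # The expansion is exactly this list until someone edits it.
    extensional_expansion = {"E11.9", "E11.21"}

    def expand_intensional(root: str) -> set:
        """Intensional rule: take the root code plus everything in its descendants."""
        expansion = {root}
        for child in CHILDREN.get(root, []):
            expansion |= expand_intensional(child)
        return expansion

    # If the publisher adds new children under E11, re-running the rule picks them up
    # automatically; the extensional list stays frozen until it is manually updated.
    print(sorted(expand_intensional("E11")))   # ['E11', 'E11.2', 'E11.21', 'E11.22', 'E11.9']
    print(sorted(extensional_expansion))       # ['E11.21', 'E11.9']
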

Charlie Harp (13:54):

So for all intents and purposes, an intensional value set is like automation. I say, here's the intent I have in building out the members of this value set. And that has pluses and minuses as we know, because sometimes if you have a rule based on hierarchy or things that are there and somebody changes that underlying hierarchy, you have to manage that. It's not like it's on autopilot because sometimes things will drop out that you don't want to drop out. Exactly. You know this. But yeah, it's a good way to define those two things so that people realize for someone creating a simple pick list, you're probably going to do an extensional value set. But for somebody doing something like I want everything that really means the patient's diabetic, you're better off building a rule that you execute that brings in candidate members for you to kind of dial in or dial out.

Dr. Victor Lee (14:45):

That's a really good point, because the pick list answers or answer lists or lookup lists, however you want to refer to them, tend not to change as much over time. So if I had a question about smoking and the answers were current smoker, non-smoker, former smoker, there tend not to be a whole bunch of additional choices over time. But when you're dealing with clinical data, conditions, medications, labs, they tend to evolve a little bit more over time. And I want to underscore what you said about the automation, not just leaving it completely up to automation. We feel that adjudicating a recalculated value set expansion, based on rules that you might've defined 3, 6, or 12 months ago, is important, so you can see exactly what the differences are between today's calculation and the one from the past. And that's one of the things I really love about our Element Set Manager tooling, because it'll show potential additions and removals and it'll allow a learned intermediary to decide if that's clinically appropriate for what we intend the value set to represent versus, oh no, that's something a little bit different and I need to craft my rules a little bit differently to make sure that we don't get extraneous codes that are really not in line with the intended scope.
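
A rough sketch of that adjudication step, treating expansions as simple sets of codes; this is not how Element Set Manager is implemented, just an illustration of the additions-and-removals comparison a reviewer would see.

    def diff_expansions(previous: set, current: set) -> dict:
        """Compare an old expansion with a freshly recalculated one for human review."""
        return {
            "proposed_additions": current - previous,
            "proposed_removals": previous - current,
        }

    # Illustrative codes only.
    last_quarter = {"E11.9", "E11.21"}
    recalculated = {"E11.9", "E11.21", "E11.22"}   # the rule now pulls in a newer code

    # A learned intermediary reviews these candidates and accepts or rejects each one
    # before the published expansion is updated.
    print(diff_expansions(last_quarter, recalculated))
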

Charlie Harp (16:08):

Absolutely, and I think this is one of those things that, when I think about the evolution of how we use terminologies in healthcare over the last 20 years that we've been doing this, 20 years plus, when you think about people that traditionally use value sets, the things that are in PHIN VADS and VSAC and HEDIS and all those things, what always kind of surprised me about value sets is that people create value sets at one point in the year and they say, these are the value sets you're going to use. But anybody who's spent any real time in the terminology world knows that RxNorm is updated daily, LOINC and SNOMED are updated monthly or bi-annually or whatever. So the truth is that when you build a value set, if you're releasing that value set once a year, there are going to be things that happen throughout the course of that year that should be in the value set but are not in the value set. Which is why things like intensional rules, where you can automate that, matter, and theoretically, as we get more real-time in healthcare with these things, we can. Or if you're not doing it for something like HEDIS or eCQMs, but you are building a value set because you want some intelligence about your populations or whatever you're trying to do, you want that to be updated whenever the member terminologies that you're using update.

(17:37):

You don't want to do it once a year, right?

Dr. Victor Lee (17:39):

That's right. That's right. And I'm going to say something that might get me canceled, but I'm going to say it anyway.

Charlie Harp (17:46):

This is listener warning. Listener warning.

Dr. Victor Lee (17:50):

No, I mean this kind of in jest, but with value sets, the lifecycle of value sets, I make an analogy to having children, because there's a lot of attention paid and there's a lot of excitement around creating a new value set, like giving birth to a child. But what I think sometimes we don't recognize is that there's a lot of care and feeding of value sets over time. And I think we all know that raising children is a big commitment, both in terms of time and money and your emotional investment, but there are parallels to value sets, because we've seen plenty of examples where a value set might be excellent at a certain point in time, but to your point, terminologies evolve, new stuff comes out, new medications and drugs come out all the time, and things get inactivated. And a lot of changes happen over time, and you might not notice those changes at any given moment in time. But when 3, 6, 9, 12 months passes, you realize, oh, things are kind of out of date, and my excellent value set from last year is not so excellent anymore. And so I think it's a really good transition into our discussion around value set quality and the attention we need to focus on the quality of value sets, not just at a certain point in time, but over time, and the care and feeding that we need to pay attention to for value set quality.

Charlie Harp (19:20):

No, I think you're right. I think that, like I say about all these things in the terminology informatics space, it's not a project, it's a lifestyle choice. You create something like this, you really have to be ready to maintain it. In fact, and I don't want this to sound like a Symedical commercial, I usually don't talk about our products in these things, but part of our focus when we built our products is that there are tools out there that work fine to do things once. You can use Microsoft Excel and knock out a value set member list fairly quickly. But when it comes to making it something that you're maintaining and distributing, and you want to make sure you know how it's changed and what's going on with it, a lot of our energy here at Clinical Architecture goes into putting together tools and processes that are sustainable and efficient so that you can optimize those types of things. But anyways, I don't want to, this is product disclaimer, product disclaimer. So, talking about value set quality, what do you want to say about that, Victor?

Dr. Victor Lee (20:33):

Yeah, so I can share, I don't know, our origin story with value set quality. When I first started at Clinical Architecture, I was creating a lot of value sets and I wasn't really doing it with value set quality in mind. I was trying to solve other problems. And I think this is probably similar to the experience of a lot of other people who are trying to solve a problem, and oh, by the way, you kind of have to create a value set to solve this problem. But for example, I was doing a couple of things related to, first of all, rules-based logical reasoning. And the second thing I was doing was creating disease ontologies. I won't get into the exact reasons why, but let me use an example for rules-based logical reasoning. One of the things we were trying to do was to detect undocumented diagnoses.

(21:31):

So things like diabetes, hypertension, heart failure, chronic obstructive pulmonary disease. And so I needed value sets to complement the expression logic in our rules modules. So an example might be, and let me back up a little bit. So an undocumented diagnosis could potentially be two scenarios. One might be that, let's say, a physician didn't know that someone was diabetic because there was a lab value that was obtained at an outside lab and never got back to the physician. The other scenario might be that the physician knew very well that the patient was diabetic. In fact, the physician was treating the patient with anti-diabetic medications, but the problem list didn't say type two diabetes or

Charlie Harp (22:23):

Whatever it was, it was in their note. It wasn't in there.

Dr. Victor Lee (22:25):

It could be in

Charlie Harp (22:25):

The notes, but not in the discrete list.

Dr. Victor Lee (22:27):

And that's apparently a pretty pervasive thing based on our experience with our clients. So as we craft these rules, we might say things like, okay, if type two diabetes is not present in the problem list, and if I see these drugs, the patient might be on insulin, and I'd create an insulin value set. And then it gets a little complicated because you have short-acting, intermediate-acting, long-acting insulins, and if you're not creating your value sets so that you're catching all these terms, it might be that the term that you didn't put in your value set was in fact on the medication list, and that was your clue that the patient had undocumented diabetes, and you missed it. And so there are implications of not having a really high quality value set. And also, I'll give the other example, where I mentioned we were creating disease ontologies. In that use case, we were using disease ontologies to power the filtering of a display within, I'll just say generically, a clinical viewer that might have aggregated information about patients from potentially multiple sources.

(23:42):

It could be an HIE or some type of a thing like that. So we were creating disease ontologies that would power the intelligent filtering of things. So if I just wanted to take a look at a patient's profile from a heart failure perspective, I don't need to know that the patient had an ankle fracture or that they had pneumonia and they were treated with antibiotics. I only want to see things related to heart failure. So show me medications that are related to heart failure, the heart failure related labs, or maybe comorbidities or complications of heart failure. And I needed value sets to represent those things within a disease ontology. And then sometimes you have to get really specific. For example, I'll go back to diabetes, which is a great example. If I created a diabetes value set and the diabetes condition had diabetes with DKA, and then I want to relate that to a complication of diabetes, which is DKA, if I start conflating things, then I'll start having circular relationships where diabetes with DKA is a complication of itself.

(24:51):

And so things start getting illogical depending on how you use value sets. So you just have to be really aware of the intended use case and make sure you're being very precise in terms of how you're creating your value set definitions and expansions. And that's when I started, because I was thinking, oh, I just need to solve this problem and I'll just slop these value sets together. And I realized, oh, it doesn't make sense sometimes, or you miss things, or you might have false positives or negatives. And so I started realizing that we need to start paying more attention to value set quality if we expect our investments in health IT to pay off, if we expect it to do the right thing. And so that's when I started realizing, oh, I can't just willy-nilly put things together. I need to be very mindful of what happens downstream.
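
A minimal sketch of the undocumented-diabetes idea from this example, with made-up value set contents and a simplified patient record; it is meant to show why an incomplete insulin value set silently misses patients, not to reproduce Clinical Architecture's actual rule logic.

    # Illustrative value sets; in practice these would come from maintained intensional definitions.
    DIABETES_DIAGNOSES = {"E11", "E11.9", "E11.21"}                          # problem-list codes for type 2 diabetes
    INSULIN_PRODUCTS = {"insulin glargine", "insulin lispro", "NPH insulin"}

    def possibly_undocumented_diabetes(problem_codes: set, medication_names: set) -> bool:
        """Flag a chart where diabetes treatment is present but no diabetes code is on the problem list."""
        has_diabetes_code = bool(problem_codes & DIABETES_DIAGNOSES)
        on_insulin = bool(medication_names & INSULIN_PRODUCTS)
        return on_insulin and not has_diabetes_code

    # If a long-acting insulin product is missing from INSULIN_PRODUCTS, this rule quietly
    # returns False for patients on that product: a completeness failure.
    print(possibly_undocumented_diabetes({"I10"}, {"insulin glargine"}))   # True
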

Charlie Harp (25:40):

Well, and like you said earlier, when you talk about a value set, it's kind of like, to me, a value set is almost like a micro terminology.

(25:49):

And what I always tell people is, when they try to take a terminology, if they don't know what the purpose of that terminology or value set or whatever is, and what the intent of it is, and you try to use it for multiple things that are not its intended purpose, you end up with all kinds of unintended consequences. And if you try to adjust that, if you try to take something like a value set, and this is where I think you're going, if it's not, tell me, but if you try to take something like a value set and you say, well, I need a type two diabetes value set, I've kind of got one, and maybe I can tweak it so that it can do both what I originally intended it for and this other thing, instead of sitting down and creating a separate value set with a slightly different rationale and scope. If you try to lump those things together, you get sloppy results.

(26:40):

And I think we've seen examples of this in healthcare, whether it's a terminology or when people train models. For example, if you take a model that was trained to do NLP for radiology and you try to deploy it in internal medicine, it's not going to work very well. And if you try to start adjusting it to support internal medicine, it's probably not going to be great for radiology. And it's just one of those things we tend to do in healthcare: when we have a use case, we look in the drawer to see what we have and we try to duct tape something together to solve this other problem without doing the work. And for those of you who don't know Victor Lee, Victor is always willing to do the work. And it's one of the reasons why, when Victor and his content team develop something, I feel very good about it as the CEO of Clinical Architecture, because I know that we're not cutting corners, we're doing the work, we're making the thing that is fit for purpose, which doesn't always happen in healthcare.

Dr. Victor Lee (27:42):

I appreciate that. We certainly try our best. We're not perfect, but we do put in the effort and

Charlie Harp (27:47):

Wait, wait, wait. "Perfect" was on your resume back when I hired you?

Dr. Victor Lee (27:50):

Oh, that might've been a slight inaccuracy, a typo. You caught me. I forgot the "im".

Charlie Harp (27:55):

The imperfect.

Dr. Victor Lee (27:55):

That's right, that's right. Yeah, two letters makes all the difference in the world. No, but you're exactly right. And the idea of value sets being fit for purpose, it's a really important thing. And I make this analogy to, I guess I like analogies. So analogies are to me kind of like,

Charlie Harp (28:21):

Are you about to make an analogy about what an analogy is?

Dr. Victor Lee (28:24):

No. If I were clever enough, I might be able to think of something on the spot, but no. The idea is that creating and also consuming value sets is kind of like when I try to buy jewelry for my wife. And basically I think there was really only one time that I tried to do that, and that was to get the engagement ring. But if I were to walk into a jewelry store and take a guess as to what the gem should be and the size and the setting, I might get close and I might nail it, but I might also be wrong. And so it's probably better just to take my wife and let her decide, or have her just purchase it herself, because she'll feel that that's right for her. And when you consume value sets, it might be that something in our Clinical Architecture element set foundation will be fit precisely for the purpose that someone is trying to use that value set.

(29:24):

But it's also possible that someone had a slightly different intent. And it might be that that's a good starting point, or it might be a good example of how we can craft value set definitions, and our clients can take that knowledge and run with it and make the adjustments that they need. And so I think we recognize that two value sets from two different sources might sound similar, but you kind of have to read between the lines and understand, well, exactly what was this intended to do? And pay attention to the inclusion and the exclusion criteria and whether the stated scope actually is in alignment with what you see in the current expansions, because sometimes looks can be deceiving. And so I think I'm just trying to shine a light on being very aware of what you get when you take a look at a value set.

Charlie Harp (30:19):

Well, and that's important because a value set, it's kind of like mapping. When you create a content asset that is a value set, and for all intents and purposes the roll-up type of value set is really a map, you're taking a code system and a code and you're mapping it to the value set and you're rolling it up to this grouper for whatever purpose you're doing. But that is a blind machine when you deploy it. Now this is me speaking as a technologist: when I deploy that into AI or analytics or a rules engine, whatever I'm doing, once I deploy it, it's going to operate. So if it's got something wrong or it's got something missing, it's either going to operate or it's not going to operate. It's not going to stop mid-flow and ask somebody a question. So it's one of those things where, when you're building it and you're evaluating it and you're trying to make sure it's fit for purpose before you push the button that publishes it out to wherever the machine is going to use it, that's your ability to control that. Right?

Dr. Victor Lee (31:22):

Yeah, no, I think that's right. And we could probably have another podcast about maybe the similarities and differences between a value set and a map, but I get your analogy and I'm going to let that slide for now. This will be a dinnertime conversation that we'll have. But no, I think what you say makes a lot of sense.

Charlie Harp (31:42):

You recently sat down and put together a white paper. Is it proper to classify it as a white paper?

Dr. Victor Lee (31:49):

Yeah, yeah, that's right. It's a white paper

Charlie Harp (31:52):

That people can download from the website.

Dr. Victor Lee (31:54):

That's right. It's called The Six Cs Framework for Assessing Value Set Quality. And I've actually been thinking about this for quite some time, because through my team's experience with authoring value sets and the value sets that we have seen from various sources, one of which is the Value Set Authority Center, also known as VSAC, which is run by the National Library of Medicine, we've just seen a lot of variation in quality. And I don't mean that in a disparaging way, it's just that sometimes we find that there might be clearly wrong members of a value set, or we're not sure if it's right or wrong because the descriptions were either sparse or completely missing. So I couldn't even tell what the intended scope was.

Charlie Harp (32:52):

And some of those value sets have been out there a while. And

Dr. Victor Lee (32:55):

Yes,

Charlie Harp (32:55):

And it's kind of like to go back to your analogy, in some ways it's like a value set orphanage. You've got value sets that maybe haven't been updated in a couple of years,

Dr. Victor Lee (33:05):

Right? And so in this Six Cs framework, there are six Cs, and I'm going to try to rattle these off from memory. So the six Cs are completeness, correctness, currency, clarity, and I think there's a saying that you can only remember three things. So I've forgotten the rest of the six.

Charlie Harp (33:33):

Charlie is awesome, I think is the last,

Dr. Victor Lee (33:35):

Yeah, that sounds right. But we group them, so we call these dimensions, but we group them into two domains. And one domain is the accuracy domain, and that includes completeness, correctness, and currency. But then the other domain is usability, because if I'm trying to use someone's value set, how do I know what your intent was? So that includes clarity, which is, do we have a scope defined? And I'm going to have to go back and look at my notes, but you can download the white paper. And the idea is that there are ways that you can systematically assess the quality of a value set. What are you getting, and can you be confident that it's accurate and it's going to represent the things that you need it to do? So just diving a little bit into the accuracy domain.

(34:34):

So completeness, correctness, and currency, let me just focus on those. So completeness refers to, have I captured all of the terms that I intended to capture? I talked about how, if I am trying to detect undocumented diabetes and I have an incomplete insulin value set, I might fail to detect undocumented diabetes. So obviously that's not a great thing, but the opposite of that we call correctness. I had to kind of fit these into C words. So correctness is sort of the flip side of completeness, because I might over-include things. So if I have an insulin value set and then I threw in a beta blocker by accident, then I might say, hey, this patient is on insulin because I see atenolol, and that's obviously wrong. So it's kind of a false positive. So the omission of things is what we're looking at for completeness.

(35:36):

On the correctness side, we're looking for commission errors of over-inclusion. So both of those can make value sets fail to do the things that you expect them to do. And then the currency piece is really around, we talked about value sets going stale. If you kind of leave them over time and you don't have the care and feeding, then you could have those inaccuracies. And we have ways of proposing how those can be assessed and even semi-quantitatively measured. So those are important, but in the usability domain, for example, clarity refers to, do I have a well-defined scope? We've seen value sets that are just put out there and there is no scope statement. There might just be a title of a value set.
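
A rough sketch of how the three accuracy dimensions might be checked semi-quantitatively against a clinician-curated reference list; the inputs are hypothetical and this is not the scoring method from the white paper.

    def accuracy_report(expansion: set, reference: set, inactive_codes: set) -> dict:
        """Rough accuracy checks for a value set expansion.

        completeness_gaps  -> omission errors (reference terms the expansion missed)
        correctness_errors -> commission errors (terms that should not have been included)
        currency_issues    -> members the code system has since inactivated
        """
        return {
            "completeness_gaps": reference - expansion,
            "correctness_errors": expansion - reference,
            "currency_issues": expansion & inactive_codes,
        }

    # Illustrative inputs only.
    expansion = {"insulin glargine", "atenolol"}         # atenolol is an over-inclusion
    reference = {"insulin glargine", "insulin lispro"}   # insulin lispro was omitted
    retired = set()                                      # nothing inactivated in this toy example

    print(accuracy_report(expansion, reference, retired))
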

Charlie Harp (36:25):

You almost have to infer the rationale based upon the members,

Dr. Victor Lee (36:29):

Right? Did you mean to include or exclude certain things? And so that can become a usability issue, because if people are using third-party value sets, you should know what you should be expecting to see in a value set. And you shouldn't have to inspect the entire list of expansions. You might have tens, hundreds, thousands of expansion terms, and you shouldn't have to look at a ginormous list to figure out, okay, what was the intent here? So I'm blanking on the other dimensions, but basically you can download the white paper and you can read about the Six Cs framework. So there was kind of a light bulb moment where I realized, oh, I think we can actually characterize quality. And so we think that that's an important thing to consider for anyone who's authoring their own value sets. The framework might be useful to challenge your thinking about, am I checking the box on these six dimensions of quality? Or if you're not creating your own and you want to use someone else's, those are just things to think about. Buyer

Charlie Harp (37:36):

Beware. One of the things that I think is interesting, a lot of people are talking about large language models and AI right now, and what I always tell people is, when you look at a large language model, it's really making decisions based upon some body of information, some corpus. And one of the things that I thought was interesting, I remember back when we were doing the work looking for the undocumented conditions, and one of the conditions that you were working on was hypertension. And I know that one of the things that technology people do is lean on certain content assets. One of them is the UMLS Metathesaurus, which I won't get into, which is a fine asset, but it's not finely curated; it's kind of a lumper. And the other is SNOMED, which is also a great ontological asset, and they put a lot of work into it. But I remember at the time we were looking at hypertension, and one might think, oh, all I have to do is get hypertension and all of its children and I'll get all the hypertension codes that are in SNOMED. But if I recall correctly, there was an issue with that.

Dr. Victor Lee (38:48):

So sometimes when you look at an ontology, and SNOMED is a fantastic ontology, it's extremely comprehensive and detailed and there are a lot of rich relationships, but sometimes that also adds complexity. So if you look at the hypertensive disorders hierarchy in SNOMED CT, you'll find things that you would typically expect to find: essential hypertension, which is also known as primary hypertension, and you'll see secondary hypertension. But then you'll see all these other things that, when you start looking at it, you'll say, oh yeah, that kind of makes sense that there's preeclampsia and other obstetric-related things. And then you have to ask yourself, did I mean to put those in my value set? Because again, there might be unintended consequences: when you execute a rule set or whatever you're doing with the value set, it might yield things that you didn't expect. And so it's not just as simple as saying take hypertensive disorders and everything under it. You might have to start subsetting out certain branches; maybe you weren't interested in the obstetric hypertensive disorders. And what about a hypertensive urgency or a hypertensive crisis? Those are acute, whereas maybe you were thinking more of essential hypertension, which is more of a chronic disease. So there are all these things that make SNOMED great, because it's a polyhierarchy and things can have multiple parents, but then you also sometimes get more than what you bargained for, and you just have to be mindful of what's there. So it has to be clinically curated.
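
A minimal sketch of an intensional definition with exclusions along the lines described here; the SNOMED CT codes in the comments are illustrative and would need clinical review against a current release, and the descendant lookup is assumed to come from a terminology service.

    # Hypothetical rule-based definition: include the hypertensive disorder branch,
    # then carve out a branch that is not in scope for this use case.
    definition = {
        "include": [{"system": "SNOMED CT", "code": "38341003"}],   # Hypertensive disorder, systemic arterial
        "exclude": [{"system": "SNOMED CT", "code": "48194001"}],   # Pregnancy-induced hypertension
    }

    def expand(definition, descendants_of):
        """Apply the include rules, then subtract the exclude rules.

        descendants_of is assumed to be a callable that returns a code plus all of its
        descendants from a terminology service; only a toy stand-in is used below.
        """
        members = set()
        for rule in definition["include"]:
            members |= descendants_of(rule["code"])
        for rule in definition["exclude"]:
            members -= descendants_of(rule["code"])
        return members

    # Toy lookup standing in for a real SNOMED CT release.
    toy = {
        "38341003": {"38341003", "59621000", "48194001"},   # branch root, essential hypertension, PIH
        "48194001": {"48194001"},
    }
    print(expand(definition, lambda code: toy[code]))   # the excluded obstetric code is removed
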

Charlie Harp (40:24):

And the point that I'm trying to make is for technology companies. I'm the Chair of the Industry Partnership Council of AMIA, and I've had a lot of exposure in my career to clinical informatics. I mean, I started a company called Clinical Architecture, for crying out loud. But I think as a technology company, it's very easy for people to look at some of these assets and say, well, clinical people put this together, so I'm just going to use it; I don't need my own clinical people to do anything. And one of the things that I'm keenly aware of, and I don't think enough people are, is the value of somebody with clinical experience who's also an informatician, because there are these nuances that these assets create when you're building something and you want to make it fit for purpose.

(41:20):

I kind of think of it as going to the grocery store. When I look at SNOMED, I see the grocery store. And when you're trying to figure out what you're going to do with the things that are there and how you're going to use them, or what you're going to make for dinner, you could just say, I'll take everything in the cereal aisle, and then pile it into your cart and take it home. But a clinical resource is going to look at those things and say, well, for what we're trying to do, when you were looking at that particular case under hypertension, we can't just take everything here. We're going to have to go in and put in the elbow grease to make it fit for purpose, otherwise you're going to get unintended consequences. And this is a problem we have generally in healthcare, where sometimes we build things in a way that doesn't actually decrease the burden on the people that are on the front lines of healthcare, but increases the burden, because instead of being complete and correct, we just have everything. And then you end up burdening the provider with a bunch of nonsense as opposed to giving them just the information they need to do their job well and to give them kind of that safety net.

Dr. Victor Lee (42:29):

Yeah, yeah, I completely agree. I'll also, just to further the discussion, mention that we recently completed a mini research study where we investigated, I'll say, the applicability of this Six Cs framework in real life, because it's one thing to just come up with six Cs, it's another to see if you can actually realistically characterize quality among these six Cs. And so we took a look at value sets in VSAC, which I previously mentioned, from a variety of different stewards. The purpose is not to cast aspersions on what other people have done, but we just wanted to see if there was any sort of variation that we could detect, at least from the perspective of the Six Cs framework. So we took some value sets from Clinical Architecture's element set foundation, and then we systematically, and also in a random fashion, tried to pair them up with generally equivalent value sets, at least from what we could tell, that we found in VSAC.

(43:39):

So, kind of comparing the two: because we have an editorial policy, as you mentioned, we try to be very careful about how we craft our value sets and make sure they're accurate. We did comparisons, we evaluated them against the six Cs, and we found some meaningful differences. And so it was kind of a validation that this Six Cs framework could be applied in the real world. At least as of today, that's not available, but I think it'll be available for download on our website soon. So hopefully, by the time our listeners are hearing this podcast, you'll be able to go fetch that from our website. It's a research brief, a comparative analysis of value sets using that Six Cs framework. So if you're interested in this topic, take a look, and hopefully you'll be able to download that soon.

Charlie Harp (44:28):

Fantastic. Alright, this has been a long podcast. Everybody's asleep, so we can really pretty much say anything we want without fear of cancellation. So anything else you'd like to share before we wrap this up?

Dr. Victor Lee (44:43):

No, I mean, just kind of summarizing this discussion, we make a lot of expensive investments in health IT, and I just appreciate the opportunity to come onto the podcast and share a few thoughts about the quality of value sets, because I do think that we could derive more from our health IT investments if we're just mindful about value set quality. So I guess I'll just kind of give a shameless plug as well. If you guys have any questions or would like to engage in discussion about value sets, these are things that I eat for breakfast, lunch, and dinner. So contact us.

Charlie Harp (45:24):

Yes, we're working on the intervention. Don't worry, ladies and gentlemen. Alright, Victor, thank you so much for being on today.

Dr. Victor Lee (45:31):

Thanks for having me.

Charlie Harp (45:32):

It's been my pleasure. And to everybody else, thanks for listening. I am Charlie Harp and this has been the Informonster Podcast.