
Informonster Podcast
Episode 34: 2024 Data Quality Report
In this episode of The Informonster Podcast, Charlie Harp and Clinical Architecture’s Vice President of Marketing, Jaime Lira, dive into the insights from the 2024 Data Quality Report.
As they explore key trends from this year’s survey, the discussion highlights the increasing recognition of data quality's critical role in healthcare, impacting everything from population health to AI initiatives.
Download the full report below and sign up to be a part of the 2025 survey to help shape the future of healthcare data quality.
Download the 2024 Data Quality Report
Contact Clinical Architecture
• Tweet us at @ClinicalArch
• Follow us on LinkedIn and Facebook
• Email us at informonster@clinicalarchitecture.com
Thanks for listening!
Charlie Harp (00:07):
Hi, I am Charlie Harp and this is the Informonster Podcast. On today's Informonster Podcast, I'm joined by Jaime Lira, Clinical Architecture's Vice President of Marketing, and we are going to talk about Clinical Architecture's Annual Data Quality Report. Welcome Jaime.
Jaime Lira (00:24):
Hi Charlie. Thanks for having me today. I'm excited to be on the Informonster podcast.
Charlie Harp (00:29):
Well, you kind of said I had to have you on, so.
Jaime Lira (00:33):
I spent all this time producing them and I felt like this time I deserve to be behind the microphone.
Charlie Harp (00:38):
Alright. Alright, well I feel honored and privileged to have you on the Informonster Podcast. So we're going to talk about the data quality survey. We started sending it out last year. This is our second year. And the purpose of the data quality survey is for us to kind of ask people out in the industry what they think about data quality in general, and then we kind of drill into some specific areas, by and large to give us an idea of what people are aware of and what their sentiment is. We're not asking people to get out their yardstick, but to tell us: what is your sentiment about the quality of your data? And since this is our second year, it's the first year where we can do a comparison. We did add some new things this year, and we do share this report with everybody.
(01:24):
It's not something you have to pay for. You go to our website, there's a link, we might ask you to give us your email, but you get in and get the report. There's a lot of great information there and we are always happy to receive feedback and questions and suggestions. So on this year's survey, we kept most of the questions the same, but we added four new questions to get into a few more aspects of messaging, especially around TEFCA. And we also tried to tune in on people's job function a little bit better. We had a bunch of "others" and we wanted to get a feel for the people that are answering the questions. Are they clinical people, are they informatics people? And this year when we look at the breakdown of job functions that responded to the report, I'll give you some numbers and I'll give you a little more data, and then I promise I'm going to let Jaime talk too.
(02:15):
About 17% of the respondents were business people. 18% were IT, either developers or testers. About 13% were clinical people, 5% were academic people and 47% were health informatics or data science people, which is not surprising considering those are the people that we tend to reach most easily. Now in general, a lot of the responses this year were very similar to last year's, and we might go into some details on that, but 85% of our respondents said that poor data quality had a high impact on their enterprise goals, and that's up from 71% last year. And it could be maybe we got some different people, or it could be something else. We'll talk about that. Most of the questions mirrored last year's questions and were designed to really give us a feel for how things had changed over the year. And I have some thoughts on that, but I'm going to let Jaime jump in.
Jaime Lira (03:14):
Okay. Yeah, I mean I think it was important this year that we decided to put the job title information in there, because I feel like last year when we looked at the data and we saw whatever the trend was and how people were responding, the question was, yeah, I understand what market segment they're with, but what's the breakdown from that point? So I think it was good that we asked this question, and I am definitely interested to see how that maybe shifts next year. It would be perfect if we could have the same people continue to respond to the survey year after year, but we also want to continue to build and get more of an audience responding. And I say that as a marketer: I want more people to participate in our data quality survey. So when we kick that off and you see the invitation, please do it. It's actually just a handful of questions.
Charlie Harp (04:02):
It shouldn't take more than five minutes.
Jaime Lira (04:04):
Yes.
Charlie Harp (04:04):
So. And to talk about the market breakdown a little bit this year, the majority of respondents were in the care provider healthcare segment at 31.7%, 16% were considered consultants or they were in the consulting market, 13% analytics vendor, 8.3% academia, and 8.3% public health, 6.7% EMR vendor, 5% payer, 5% life sciences, and 5% value-based care.
Jaime Lira (04:36):
So we have a little bit from every segment. More is always better.
Charlie Harp (04:40):
And I want to also point out that we realize that this is anecdotal, we didn't survey every single person in healthcare, but for us it's really kind of taking a sample, taking a temperature and saying what do we think the state of the quality is now? Now just in general, when you look across all of the quality questions, for the most part the impact was rated a little bit higher than last year. In a lot of the categories, the quality itself was rated the same or maybe a little bit lower, something shifted from poor to mixed quality. The way we kind of categorize it is, I don't know the quality, I think the quality's bad. I think the quality's mixed or I think the quality's good. And I think though when you look at the questions about the impact of quality, one of the things we're seeing a lot this year is a lot more people are worried about the quality.
(05:34):
And I think that's because in part I think the industry is waking up to the fact that the quality of this patient data, it's kind of the ocean in which we all swim. The quality of this data impacts everything. From value-based care to population health to quality measures, to HEDIS measures. And if you want to try to put some kind of an AI function into your clinical practice or into your logistics processing, the quality of the data that you feed these things is really directly proportional to the quality of value they deliver and insights they deliver to you. So I think part of it is people are just realizing, wow, this quality stuff really does matter to me and I really need to find a way to deal with it.
Jaime Lira (06:23):
Agreed, definitely.
Charlie Harp (06:26):
So one of the things that we looked at, when you look at some of the questions that we talked about, one of them was how do you rate the overall quality of patient data in your enterprise? And 7% said poor, 60% said mixed, and that's slightly down from 63% last year. Once again, what we're seeing there is that not a whole lot has changed since last year. People are having the same quality concerns that they had before. And once again, if you want to see the mix across the different segments, that's all in the report. I'm not going to read you the report, this is not the audiobook version of the report, although that's not a bad idea. I could maybe get R. C. Bray or one of those other great audiobook guys to do it. What do you think, Jaime?
Jaime Lira (07:11):
I think that would be classified as a sleep story.
Charlie Harp (07:13):
How about Morgan Freeman?
Jaime Lira (07:15):
Okay, yes, everybody wants Morgan Freeman to read the data quality report.
Charlie Harp (07:20):
That would definitely be a sleep study.
Jaime Lira (07:24):
But I do think there's something interesting though when we look at that question, how would you rate the overall quality in your enterprise? Because truly when you look at how the different market segments rated it, that difference kind of jumped out at me. I noticed that public health and payers were the ones to rate their quality the lowest. And I guess as I go through the questions, maybe as a marketer seeing things very differently, I have the immediate reaction of why is it that they're the ones that think their quality is the worst?
Charlie Harp (07:54):
I think part of it is, and it's funny, I had a conversation with a guy, a mover and shaker in the payer market earlier this week, and he was very much like, oh, I don't care about the clinical data. The clinical data is to the right of the decimal place, and he's a well-respected guy. So part of me was like, why does he feel that way? And I think part of it is, when you look at the different facets of the industry and you look at the data that you have, there are people like payers who rely on getting data from other places. And most of that is in the form of claims. And claims data, compared to the clinical data that we share with each other, is much more controlled and structured, because if it's not, then you don't get paid. So people pay a lot of attention to make sure that it is correctly aligned, and they spend a lot of time doing what's called mapping, but it's not the way I think of mapping. It's really more of making sure that the things that are going to drive your payment for the claim are making it into the claim in the first place.
(08:52):
That's a very different world than in the provider space and the public health space, where we're looking at clinical data. And I think that when we talk about clinical data, we have the data that we have and we have the data that we send out. And I think that when you ask a payer about patient data, my question is, are they thinking about the claim? I think they think of that differently; I think that's claims data. Or are they thinking about the clinical data that's coming in with CCDAs or ADT feeds or other things that are not part of the claim? And they might be saying, well, compared to my claims data, that patient data, that clinical data, is really bad, because it's just something very different to what they're used to getting. That's speculation. I could be wrong.
Jaime Lira (09:37):
Okay, well that makes sense.
Charlie Harp (09:40):
So if we dig deeper into how you subdivide, I use the word domains, but when you look at the data we have for patients, especially clinical data, almost through the lens of USCDI, you have these data classes or these clinical domains. And one of the things we do in the report is we dive deeper into those clinical domains. So if you say the data's bad, well, let's talk about each type of data and you tell me if you think it's good or bad. And the domains or data classes we go into are medications, labs, diagnoses, procedures, demographics, allergies and social determinants of health. And just like last year, the overall loser in there is social determinants of health. And I think that's one of these things where we still as an industry haven't, I think, wrapped our head around social determinants of health from a lifecycle perspective.
(10:38):
I think people have built some codes. The things that are in SDOH are, I think, very dynamic and very temporal. It's very much a continuum of your life when you think about social determinants of health. And it doesn't fall into the episodic bucket that I think people are used to thinking of. And so we're really just capturing these snippets, and where that belongs is something we're still struggling with as an industry. Does that belong in the case report? Does that belong in the encounter note? Who's going to collect that and how are they going to represent it other than text? Because I would argue a lot of the social determinants of health stuff we have today is in an unstructured note. And so I know there's a lot of initiatives. We talk to clients about driving initiatives to look at an unstructured note and pull stuff out.
(11:23):
But it's challenging. What surprises me is the level to which people still struggle with some of these other categories, like allergies and labs. Allergies, labs and medications are kind of the hard data points. If I know someone's getting a medication and I assume they're taking it, if they told me they're allergic to something, if I have a lab result or a vital sign, those are all things that I feel are very concrete in the medical record, as opposed to the things that are speculative, like a diagnosis. The fact that people still feel like the quality is questionable there is really, to me, about how we're capturing it and how we're trying to use it and how we're trying to share it. But those numbers are still well below where I would think they would be for that kind of thing. So when we get into the reasons why quality is impacted, this is another thing.
(12:10):
So we ask the question of is quality important? The next question was how's your quality? The third question was, well, let's go a little deeper into what areas of your data are what quality. And then the next question we get into is why do you think the quality is bad? And so we throw out categories. Is it the effort, that it takes too much time to produce quality data? Are there too many standards, and is that too hard? Is the software designed in a way that makes it difficult to create quality data? Is it that we can't share information with each other? Is that it? Is there too much of the data that we need in free text, or are people just doing a bad job entering the data in the first place? Those are all categories that we kind of focus in on. And the results this year were kind of similar to last year, if I remember correctly.
Jaime Lira (13:02):
Yes, they very much fell in line with the things that people were saying last year. And I think it also echoes when I've gone to some conferences and I attend the different sessions about different challenges in healthcare. You see that same trend here in the responses that people are saying. You mentioned just a moment ago about free text entries. I think that that's still happening a lot. Maybe at the encounter level, people just don't have a whole lot of time. So instead of worrying about where to plug something in, they're just typing in what the person said and moving along. And that's fine I think while you're there and maybe within that specific hospital. But I think when you go outside to the larger health system or try to share that information on a broader scale, that's where all the problems happen. And I feel like that's basically what everybody's saying.
(13:48):
So it kind of begs the question, well, what can we do to try to make it easier for the people who are entering that data so that it can actually be captured in a way that is more easily shareable? And I think also you could dig down a little bit deeper into the different market segments and see what payers had to say about what they felt was more of a challenge. Specifically in the survey, a lot of payers indicated that free text, incorrect information and interoperability were the biggest contributors to why they felt the data quality they're dealing with is the way it is.
Charlie Harp (14:27):
So as somebody who's relatively new to this industry, even though you're like a freaking genius, when you look at, let's take interoperability as a problem because a lot of people felt like interoperability is a big problem, why do you think that is? Why do you think that is?
Jaime Lira (14:45):
Why do I think interoperability is a problem? Yeah, based on articles that I've read and the things that I've gathered the past, what, year and two thirds since I've been in the healthcare industry, it feels like people don't want to share information with each other. So there's a little bit of that. But then also from what I've seen just with some of the client work that we've done, I mean we have health systems that have multiple facilities, and the problem is they're not even all on the same software. So hospital A is on this software, hospital B is actually on a totally different software, hospital C's on a different version of the software that hospital A is on. And so I could see exactly why that's the problem. I mean, if I was an informatics professional at this health system and I'm trying to pull together information from 16 different facilities and none of it is on the same software, how are you even supposed to do that?
Charlie Harp (15:36):
That's very good. See, genius. I spoke at the LOINC conference a few weeks ago, and the way I summarized it is that we care more about the data that we get than the data that we give. Because it's kind of like if you're cooking something that somebody else has to eat and you don't feel like you're getting anything for it, you might not care about whether it tastes good or not. You just have to check the box and prepare it and send it out the door. And I think it's one of those weird scenarios, like an inverted Gift of the Magi, where we want really good data. I want you to give me really good data, but I don't necessarily feel the need to make sure the data that I give you is good. And to compound that, we don't have a system today where you have a mechanism to tell me that my data's not good in a way that I can try to fix it.
(16:27):
And the truth is, we're driving TEFCA, and I think the vision of TEFCA is great, and I think there's a huge amount of potential in us sharing data across healthcare, especially if we can fix a few fundamental problems that we have. But the idea of me creating something of high quality to share with you so that you can improve your picture of the patient, today, nobody ingests that data. And so what happens is, if somebody sends you crappy data, you just don't look at it. You're not really trying to ingest it, so you're not going back to the person. It's like if somebody gives you a, what do they call those things? The bread you get on the holidays with all the fruit in it.
Jaime Lira (17:11):
Ooh, a fruitcake?
Charlie Harp (17:11):
A fruitcake. You get a fruitcake from somebody, you don't eat that fruitcake, you give it to somebody else, give it to the man, man. Exactly. And that's the way interoperability works. Interoperability is a fruitcake scenario.
Jaime Lira (17:24):
Interoperability is a fruitcake.
Charlie Harp (17:25):
You give somebody data, they don't try it, they don't tell you whether it's good or bad, they just put it on the counter until it's time to get rid of it. And I think that the idea behind TEFCA and the good work the Sequoia project is doing around usability is turning that into a nice banana nut cake, something that people really want to eat as opposed to a fruitcake. And if I've offended the fruitcake lovers out there, please,
Jaime Lira (17:50):
It's too late.
Charlie Harp (17:51):
Accept my humble apologies. I'm just not a fruitcake guy.
Jaime Lira (17:55):
Well, what really is the incentive? I mean, why would anybody care about the quality of the data that they're sending out? I mean, what's the incentive for them to make sure that what they're handing out is good if what they're getting, what they feel like they're getting in return from others is junk?
Charlie Harp (18:11):
Well, that's an excellent point, and I think there are some that would like to put regulations in place to say that if you don't reach a certain quality, then you're not meeting the interoperability requirement. And if you use the payer market as a model, in the payer market there's a very simple way to improve the quality of the data: you don't pay somebody whose claims are bad. In healthcare, there's no financial equivalent. That's one of the things I learned early on. Because the true answer to your question is people should care about the data they send out, because the whole idea of sharing that data is to improve the care journey for the patient that you're sharing the data about. If you can complete that picture, wherever the patient goes, they're going to get optimal care because they have a complete, high quality picture.
(19:03):
But the truth is, even though the people on the front lines of healthcare, everybody, cares about patient care and wants to take care of people, what drives our industry is money. And right now there is no financial incentive for doing a really great job sharing that kind of data, and there's no real financial penalty for not doing a good job sharing that data. And the thing we have to be careful about is the road of unintended consequences, because we can say, oh, you have to share that data and it has to be like this. I was analyzing some data for a report we were doing and we had a bunch of lab reports, and there was one particular place that was sending data as HL7, and every single lab test was coded to LOINC. And when you look at it on the surface, empirically everything looked right, everything looked good. But if you actually go in and you look at the data, there were 88,000 records where they were mapping a blood test to a glucose test. So the LOINC code for the blood test was a glucose test. So even though on the surface it looked good, when you actually get into the quality of the data itself, it looks bad. And so when we say things like, you must have a LOINC code, that doesn't mean that they're going to make sure it's the right LOINC code. They might just put a LOINC code in there, or they might not know what they're doing. So that's one of the things where I think, for interoperability to improve, the first thing we need to do is have a way to measure it.
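The surface-versus-semantic distinction Charlie describes (a payload where every lab has a LOINC code, yet the code's meaning doesn't match the test) can be sketched as a two-level check. This is a hypothetical illustration: the tiny lookup table and sample records are invented, and a real validator would query a terminology service rather than keyword-match display names.

```python
import re

# Illustrative LOINC display names; a real check would use a
# terminology service, not a hardcoded table.
LOINC_DISPLAY = {
    "2345-7": "glucose [mass/volume] in serum or plasma",
    "718-7": "hemoglobin [mass/volume] in blood",
}

# LOINC codes are 1-5 digits, a hyphen, and a check digit.
LOINC_PATTERN = re.compile(r"^\d{1,5}-\d$")

def check_lab_record(record):
    """Return quality findings for one lab record (empty list = clean)."""
    findings = []
    code = record.get("loinc_code", "")
    if not LOINC_PATTERN.match(code):
        findings.append("missing or malformed LOINC code")
        return findings
    display = LOINC_DISPLAY.get(code)
    if display is None:
        findings.append(f"unknown LOINC code {code}")
        return findings
    # Crude semantic check: the local test name should share at least
    # one word with the code's display name.
    local_words = record.get("local_name", "").lower().split()
    if not any(word in display for word in local_words):
        findings.append(
            f"local name '{record['local_name']}' does not match LOINC {code} ('{display}')"
        )
    return findings

records = [
    {"local_name": "Glucose", "loinc_code": "2345-7"},    # surface and semantic: ok
    {"local_name": "CBC panel", "loinc_code": "2345-7"},  # surface ok, semantically wrong
]
for r in records:
    print(r["local_name"], "->", check_lab_record(r) or "looks ok")
```

The point of the sketch is that the first record and the second record are identical to a surface-level "is a LOINC code present" audit; only the semantic pass catches the mis-mapped one.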
(20:38):
And that's why I've been working on the PIQI framework for the last year. It's because we need a way across the industry to look at a patient payload and say it's good or it's not good, because you can't really tell somebody you sent me bad data and not be able to say, this is why I'm telling you it's bad. So if you send me a fruitcake and I say it's bad, I should be able to articulate why it's bad, not just that now I feel like I should try a fruitcake.
Jaime Lira (21:05):
Now you're going to get people sending you fruitcakes.
Charlie Harp (21:07):
Oh my God, don't send me, well send me fruitcakes, I'll put it out in the cafeteria and I'll run my own sociological experiment here at Clinical Architecture.
Jaime Lira (21:17):
They will go untouched.
Charlie Harp (21:18):
Alright, so I pontificated, what else do we want to talk about? External data, we talked about that within interoperability. I kind of set you up for that,
(21:25):
But this idea that people said, my data is bad and the only thing worse than my data is your data. And I think that that goes back to this idea that you have to realize, when I'm using my data, whether I'm on Epic or Cerner or Athena or whatever, it's living in the ecosystem for which it was meant. And even though it might not be perfect, it's still living in its natural habitat. When I take that data out of its canonical model and I twist it and map it and put it into an envelope like HL7, CCDA or FHIR and I send it to you, it's no longer in its natural habitat. It's no longer in its natural terminology set. And so what you're getting is this kind of, I won't say slapped together, that sounds pejorative, but this transformation of data that came from my natural habitat into the best approximation I can come up with to give it to you. So it doesn't surprise me when you look at the numbers we had. I think 29% said they're currently integrating external data. I don't believe that for a minute.
Jaime Lira (22:34):
You think it's less?
Charlie Harp (22:35):
Oh yeah. Oh yeah. Or we got a really rare set of people, because it depends on what you mean by integrating. When I say integrating, I mean when I get external data, I plug it in. I put it in my schema in my database and show it to my doctors along with all the data that I've collected. And there are some systems that are probably doing that. 29% seems really high to me when you see that 19% said they're very unlikely. I think the middle ground is people that take the data in and put it in a sidecar. So I have my main data here, and if you want to see the data I got on this patient from everybody else, put on your reading glasses and go take a look. It's all there in the PDFs. But I don't think they're formally integrating it. So it could be, I don't know if in the question we were very specific about what integrated actually means.
Jaime Lira (23:25):
True.
Charlie Harp (23:26):
But that number seemed a little high to me, because most people I talk to don't seem to be integrating it in that way. And the interesting thing is that's what TEFCA is all about. TEFCA is not just about moving a document that someone can read from point A to point B, it's about actually creating something that can be plugged in and used within that natural habitat I talked about a minute ago.
Jaime Lira (23:50):
With the hope of creating a consolidated patient record.
Charlie Harp (23:54):
Yeah, I want to fill in the gaps. If you have lab results for this patient I don't have, or you have medications I don't have, or you have problems that they're dealing with or procedures I don't have, what integration really means is I know what I know and I get what you know, and when I bring those together, I have all the information. So for example, I don't order a beta blocker if you're already taking one. I don't suggest a procedure that you've already had. I don't do something that doesn't make sense because for you clinically it's not appropriate. I mean today too, there are things like, I don't know your gender identity and I don't want to insult you or make you feel uncomfortable. So there are all these pieces of data that, ideally, if we bring them together, we create this high-quality mosaic of the pieces from everywhere into something that the care provider can leverage to deliver optimal care.
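The gap-filling that Charlie describes, merging what you know into what I know without duplicating therapy, can be sketched as a merge keyed on a shared terminology. This is an illustrative sketch only: the RxNorm codes and drug names below are sample data, and a real system would also reconcile dose, status, and provenance.

```python
# Illustrative sketch of gap-filling integration: merge an external
# medication list into the local one, keyed on RxNorm code, so a drug
# already recorded elsewhere isn't duplicated. Codes/names are samples.
local_meds = [
    {"rxnorm": "197361", "name": "amlodipine 5 mg tablet"},
]
external_meds = [
    {"rxnorm": "197361", "name": "amlodipine 5 mg tablet"},   # already known
    {"rxnorm": "866427", "name": "metoprolol 25 mg tablet"},  # fills a gap
]

def merge_medications(local, external):
    """Union of two medication lists by RxNorm code; local entries win."""
    merged = {m["rxnorm"]: m for m in external}
    merged.update({m["rxnorm"]: m for m in local})  # local takes priority
    return list(merged.values())

combined = merge_medications(local_meds, external_meds)
for med in combined:
    print(med["rxnorm"], med["name"])
```

Keying on a normalized code rather than a display string is what makes the dedup possible at all; two sites rarely spell "amlodipine 5 mg tablet" identically, but a shared RxNorm code matches exactly.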
Jaime Lira (24:52):
That makes sense. So I think I know this year in the survey we also asked people to get a little bit more specific about the formats that they're using with the messaging. And I think that that's a whole other something that people can dive into. If you want to look at it in the report, I think that you can, we have pages and pages of this data where we've dug down and said, what message formats are you using? What are you thinking about using? What are you not using? And all that kind of good stuff. But what I thought was really interesting this year is that we added a question about USCDI and I think it was interesting to throw that out there and kind of see what the feedback was.
Charlie Harp (25:32):
Yeah, I mean we did that. We asked people about the different formats and like you said, we are not going to go into that, but we asked the question about USCDI and it was kind of a, what do you think of it? Are you aware of it? Do you think it's just right? Do you think they're doing too much? Do you think they're not doing enough? And when you look at the numbers, what I found interesting was that 44%, probably not the majority, but the plurality, is that the right term?
Jaime Lira (26:03):
I don't know about.
Charlie Harp (26:03):
Of all the percentages, 44% of the respondents said they're happy with USCDI and that's pretty good. The other response is 20% said they're trying to do too much. 19% said they don't know what USCDI is.
(26:22):
And those could be, when you think about our mix, so if you take that 19% and you go to our participation numbers, theoretically, I don't know. I would think that, yeah, I can't justify it. That seems low. I don't know. I was wondering if maybe there was a segment of the market that doesn't use USCDI, but I would think most players in those segments do, maybe even the payer segment, now that FHIR is starting to get more involved in the payer segment. And then 17% said that USCDI is not doing enough. But the interesting thing about USCDI is that it's kind of a set of guidance that says you should comply with these things. And a lot of those things are terminology bindings. So it's like, if you send me a lab test, it should be a LOINC code. If you send me a drug, it should be an RxNorm code.
(27:15):
Or if you send me a condition or a problem or a diagnosis, it should be SNOMED, it should be ICD-10. And there are also other things that just request that you send certain data; there's certain data that should be there. There's kind of a weird group in the vitals where they say, you should send us these vital signs, which to me seems a little orthogonal to the way the rest of USCDI works. But I still think that I'm kind of in the bucket with people that are pleased, because it takes a lot to move an industry. And the challenge we have in healthcare is our standards are still guidelines as opposed to rules, which is different than the payer market, where you have requirements: you must give me this, you must give me this. And I think that that's kind of why you see the divide between the people that are building systems and don't want to have to worry about some of these regulatory guidelines.
(28:08):
The people that are trying to consume the data, maybe in the research world or other places, are frustrated that they're not getting more enforcement on the guidelines. But it's still the bulk of the people, 44%, who are like, yeah, I'm okay with it. Because I always tell people, when it comes to industry wide, society wide change, there's only so much you can apeeve. Can I use that as a verb? I don't know. There's only so much you can do in a period of time. And in healthcare, we always have to be conscious of breaking the machine. And so I'm kind of in the bucket of I'm happy with the progress of USCDI. I'm probably closer to they're pretty aggressive than they're not doing enough, because demanding these changes of vendors is tough when they're trying to do everything else they're trying to do. And facilities are slow on the uptake. We have clients that are on older versions of our software because it's hard to upgrade in healthcare. So some of these things are going to probably have a longer tail than most people would like.
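The terminology bindings Charlie walks through (labs should carry LOINC, drugs RxNorm, problems SNOMED or ICD-10) amount to a simple conformance rule that can be sketched in code. This is a hypothetical illustration, not the actual USCDI specification: the expected-systems table and the sample payload below are invented for the example.

```python
# Hypothetical sketch of USCDI-style terminology-binding checks: each
# data class is expected to carry a code from a particular code system.
# The table and sample payload are illustrative, not the real spec.
EXPECTED_SYSTEMS = {
    "lab": {"LOINC"},
    "medication": {"RxNorm"},
    "problem": {"SNOMED CT", "ICD-10-CM"},
}

def binding_violations(entries):
    """Return entries whose code system doesn't match their data class."""
    return [
        e for e in entries
        if e["system"] not in EXPECTED_SYSTEMS.get(e["domain"], set())
    ]

payload = [
    {"domain": "lab", "system": "LOINC", "code": "2345-7"},
    {"domain": "medication", "system": "NDC", "code": "00093-7180"},  # wrong system
    {"domain": "problem", "system": "SNOMED CT", "code": "38341003"},
]
for bad in binding_violations(payload):
    print("binding violation:", bad["domain"], bad["system"])
```

Note this only checks which code system was used, not whether the code itself is right; as the earlier glucose example shows, a payload can pass every binding check and still be semantically wrong.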
Jaime Lira (29:15):
What might be interesting for this question is what if we could pit the 20% that said they feel like USCDI is doing too much against the 17% that said they're not doing enough?
Charlie Harp (29:29):
Like a cage match?
Jaime Lira (29:31):
Yes! And get them to debate why they feel like it's too much and the people that say it's not enough, why they think it's not enough and see if they can meet in the middle ground. Is that just too mind-blowing that I'm introducing that?
Charlie Harp (29:43):
Well, hey, if you're listening to this podcast and you're in one of those two camps and you want to have a public debate on the Informonster podcast, let us know.
Jaime Lira (29:51):
We are inviting that.
Charlie Harp (29:53):
We can have the super ego, I can be the ego in the middle, and we can debate the pros and cons. I think that when it comes to any standard, like when you look at USCDI version four, there's stuff in there that I think is probably going to be very difficult to implement. And I would love to have a conversation with people of different opinions. And sometimes I talk to people that are like, oh, you've got to do more, you've got to do more, or I want you to send me this piece of information. One of the things in USCDI version four, I'll just say it, is this idea of whether somebody is compliant taking their medication. So I prescribe that the patient takes amlodipine for blood pressure, and there's a field that says, are they compliant? And I just don't know where that data's going to come from.
Jaime Lira (30:41):
That would be very difficult unless it was some kind of wearable. But still, how do you trust that?
Charlie Harp (30:46):
So when you say, Hey, I require that when you share medication data with me, I want you to tell me whether they're compliant. And one could argue, well, if their blood pressure goes down, I can infer they're compliant. But this is the problem in healthcare, we have all this uncertainty. And when you ask a question like that and you say, I require that you fill it in, how many people are going to do the due diligence to follow up and say, are you really compliant? Or are people going to assume that they're compliant? And if everybody's assuming they're compliant, then you might as well not ask the question because what you're doing is you're injecting a false sense of compliance into the system as a whole to satisfy the requirement. And so those are the things where when I talk about unintended consequences, you can say, well, oh my God, amlodipine must no longer be effective. I've got all these compliant patients and none of their blood pressures are good. You see what I'm saying?
Jaime Lira (31:38):
I do. That's a good example. Well, so really I was just peppering you with these questions, and more so the audience, because I am trying to invite people to comment on this data quality report. This is the second year that we've done it, and I personally just have some questions. I would like to know what people really think about this. So beyond providing your response in the survey, we've always invited people to write to us with comments and share their own thoughts, because I would really like to hear that in general.
Charlie Harp (32:11):
So you heard her, ladies and gentlemen. If you're one of my seven listeners of this podcast, please let us know. Well, let's put it to the intersection of the seven people that have listened to the podcast and the 12 people who got the report. Those numbers might be, hopefully they're bigger.
Jaime Lira (32:26):
Than that. Yeah, you downplayed it a lot
Charlie Harp (32:31):
And if you have an opinion, even if it's just, hey, I think it's good, I think it's bad, I didn't like this, I thought this was off base or that we can improve it, or ideas for this wacky podcast, we always love to hear from you guys. So please don't hesitate to reach out if you have any feedback, thoughts, or ideas.
Jaime Lira (32:54):
I agree.
Charlie Harp (32:56):
What else you want to talk about?
Jaime Lira (32:57):
I think we made it to the end of the rainbow on this one.
Charlie Harp (33:02):
That's Jaime's way of saying I'm done talking to you. So I'm going to go ahead and wrap it up for today. I am Charlie Harp and this has been the Informonster Podcast.