The Signal Room | AI Strategy, Ethical AI & Regulation

Garbage In, Gen AI Out: Data Quality, AI Governance, and Healthcare AI Challenges

Chris Hutchins Season 1 Episode 6


Data quality and AI governance are the most overlooked risks in healthcare AI. Organizations spend billions on technology but consistently underfund the data that powers it. The result: garbage in, generative AI out.

In this episode of The Signal Room, host Chris Hutchins sits down with Danette McGilvray, a globally recognized data quality expert and author, to examine why data quality remains the most overlooked risk in AI and healthcare transformation.

Danette brings decades of experience helping organizations understand that data is not a technology problem. It is a business problem. The conversation explores how systems grow organically without coherent data strategies, why leadership teams continue to treat data as a byproduct rather than a strategic asset, and what happens when AI models are trained on data that was never designed to support the decisions they now influence.

They also discuss the governance structures that distinguish organizations that scale AI responsibly from those that struggle with trust, accuracy, and compliance. For any leader deploying AI in healthcare, this conversation makes clear that the path forward runs through data quality, not around it.

Guest: Danette McGilvray
LinkedIn: linkedin.com/in/danette-mcgilvray

Support the show

SPEAKER_00

Welcome to the Signal Room, where leadership, ethics, and innovation meet at the intersection of healthcare data and AI. I'm Chris Hutchins, and here we cut through the noise to uncover the signals that matter: the choices, the challenges, and the conversations shaping how technology serves people. Our topic today is garbage in, gen AI out: the rising cost of ignoring data quality in the age of AI. Before AI became the default strategy, before data was seen as a currency, Danette McGilvray was already doing the work of making information usable, trusted, and fit for purpose. Today on the Signal Room, we're joined by Danette, the president of Granite Falls Consulting, which is celebrating its 20th anniversary. Happy anniversary, by the way. With more than 30 years of data leadership, including 13 years inside global high-tech firms and over two decades in the heart of Silicon Valley, she's helped Fortune 500 companies, federal agencies, and public institutions build trust in their data before building anything else. In this episode, we ask what happens when generative AI is deployed without quality guardrails, and why poor data isn't just a technical issue but a strategic and ethical one. From hidden costs to real-world harm, Danette brings clarity, urgency, and a blueprint for action. As we get started, I just want to frame the conversation for us: the race to scale AI is outpacing organizations' ability to ensure the data behind it is fit for use.

SPEAKER_02

Thank you, Chris. Really happy to be here.

SPEAKER_00

Well, I'm excited to be having this conversation with you today. It was a pleasant surprise that we ran into each other at the Put Data First conference in Las Vegas a couple weeks back.

SPEAKER_02

Yes, always nice to meet in person.

SPEAKER_00

Yeah, I think we had a lot of very interesting conversations. I know I learned a ton at the one roundtable where I got to sit with you. I was really taken by the depth of understanding that you bring across multiple industries, not only the ones I'm most focused on, like the healthcare space. But I think what was really powerful is understanding that there are ethical concerns when it comes to data quality. It's not just having bad data, as if it were merely a technical problem, or incomplete data, or whatever. There are ethical reasons why it's such an important thing and why you've committed your career to addressing data quality. So as we get started: with over 30 years working in data quality, long before most companies had chief data officers, which is still a new thing, what's changed most about the way leaders value or ignore data and its quality today?

SPEAKER_02

Oh, well, what has changed, or what is different? I would say what has not changed is the lack of emphasis on the data itself. There is still a lot of emphasis on technology. I am all for technology. Some of the tools that we use now, even within the data quality space, not counting the tools that organizations use to do their business better, I love them. I'm very happy for them. But I have found, and I'm going to exaggerate this just a little bit to make a point, I have seen that organizations will spend millions, tens of millions of dollars on technology, and yet they're not willing to spend hundreds of thousands of dollars on the data that powers the technology, the data which is the reason the technology exists. So that is still a challenge for anyone who cares about the data and really wants this technology to reach its full potential.

SPEAKER_00

So what you're describing is not just a technical problem, it's a leadership issue.

SPEAKER_02

Yeah, it is absolutely a leadership issue. And I think you said it well in your introduction when you were talking about the race to AI. Everyone wants to be first. I'm going to use an example of something that was really disappointing to me. When ChatGPT first came out, I feel like it should have had a warning flag and big flashing red lights on the front page saying: this can give you wrong answers. This is not fully tested, shall we say. Now, I did find that information, but it took some clicks and very fine print to find it. It's interesting that now the warning is much more visible. I looked just a month or so ago, and ChatGPT now says it is not meant to give advice and some of the answers may be wrong. Are you kidding me?

SPEAKER_03

Right.

SPEAKER_02

It's a little late for that, especially for the general population, who heard about this and got caught up in all the excitement without really knowing that they need to be more cautious.

SPEAKER_00

That's a great point, because people get comfortable with technology rather quickly these days, and it's kind of sad. When I talked to you earlier, I mentioned a conversation about trust that I'd had recently with a psychologist. We've been thinking about trusting AI, but we have a bigger problem: people don't trust people like they used to. So what's really frightening is that there are people who are trusting this capability, and they're likely not even listening to the instructions around what they should or shouldn't use it for. This is a real problem.

SPEAKER_02

Yes, and this goes back to the leadership piece you were just talking about. When AI came forward, we heard stories about people who were putting questions to ChatGPT and, in the process, giving away proprietary information, because they were not aware of how it was going to be used, who else could access it, what things were made available. When these new technologies come out, it is up to leadership to ask: can we use this? Should we use this? If we should, how can we be smart about it? How can we protect ourselves? And I think in the race, too many of the basic questions were not asked and answered.

SPEAKER_00

Right. Yeah, it's interesting when you think about the natural behavior of the platform. It's in and of itself proving that it shouldn't be trusted. I think we talked about this when we were in Las Vegas together recently: whatever you tell it, it seems to acknowledge it, but if you tell it not to do something, the next thing it does is almost always exactly what you told it not to. So it's not even safe to trust it after you tell it what you need it to do. It's just like a child.

SPEAKER_02

Well, it's interesting, and I think you and I might have talked about this a little bit before: it's almost too human, in that it's trying to please you. For instance, we have documented incidents of AI making up references. You check the references, and a reference looks good: it's formatted, it's in the same format as the others, it looks real, and it's made up. So it's really important for people to take the good they can get from it, but I would say you need to verify, you need to check. The same thing is true when you go out to Google and look for something: it comes up with an AI overview, so you're seeing an AI overview before you see the source data, and you need to scroll down. I have had cases where the overview of things that I know and things that I do looks pretty good. Maybe it's 90% right, but there were things that were wrong. The thing I worry about is, when experts like you and I are gone, how are people going to be able to tell the difference? So I think it is really up to everyone who uses AI: go to the sources, do some testing, do much testing. We really need to take it upon ourselves not to just take the easy way out. Oh, there's the overview, let me just take that. It might be right. Is it?

SPEAKER_00

This is a really important point, because if we don't catch those things, we're also training the model, which means we're further embedding that information so other people can be tripped up and use it inappropriately. I don't know if we could even overstate the importance of this. I just think it's really, really critical that people understand.

SPEAKER_02

Yeah, absolutely. And when you talk about the training: the training of these models comes from data that is out in the world, right? We know that bias, and I'm going to talk about bias for a minute, we know bias is out in the world.

SPEAKER_03

Right.

SPEAKER_02

And there was an article that came out in Computerworld in July about some research, I believe it was out of a university in Germany, where they had looked at different personas. They were presenting these personas to AI and asking, what kind of salary should I look at, what should I ask for? It was some kind of medical specialist position. If the AI thought it was a man, it said they should ask for a starting salary of $400,000. If the AI thought it was a woman, it said they should ask for a starting salary of $280,000. That's a huge difference. Now, I have heard the argument, well, it's not up to us to monitor that. But if we do not monitor unfair bias that is out in the world, all we're doing is perpetuating it. So it is the type of thing we need to be aware of, and I think we need to take action, so we don't perpetuate those bad things. I mean, I'm data quality, right? Data quality is all about the data being right, a real reflection of what we need, and the whole reason we care about it is so we can trust it and use it with confidence.
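The kind of monitoring Danette argues for can be sketched in a few lines. This is a minimal illustration, not a production bias audit: the persona labels and salary figures are the ones cited in the episode, the 0.95 tolerance threshold is an assumption, and in practice the recommendations would come from querying the model itself with paired prompts.

```python
# Sketch: compare a model's recommendations across personas that differ
# only in a protected attribute, and flag large gaps for investigation.

def disparity_ratio(recommendations: dict[str, float]) -> float:
    """Ratio of the lowest to the highest recommendation across personas.

    1.0 means identical treatment; values well below 1.0 flag a gap
    that should be examined before the output is trusted.
    """
    values = list(recommendations.values())
    return min(values) / max(values)

# The starting-salary recommendations cited in the episode for otherwise
# identical personas differing only by perceived gender.
salaries = {"male persona": 400_000, "female persona": 280_000}

ratio = disparity_ratio(salaries)
if ratio < 0.95:  # tolerance threshold is an assumption; tune per use case
    print(f"Potential bias: disparity ratio {ratio:.2f}")
```

Running the same paired-prompt comparison on a schedule, rather than once, is what turns an anecdote like the Computerworld finding into ongoing monitoring.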

SPEAKER_00

Right. You know, that brings up something really important. Now, when you're talking about facts, there have clearly been disparities in pay, and they're documented. So if the larger portion of your data set is based on things that have hopefully changed, changed by intention and sometimes by policy, these are things your models have to be adjusted for. It's not a bias from someone's opinion; it's just that historically things have been a certain way. And you have to understand, when you're asking these models to give you back information or recommendations, they can only make them based on what they know, not on what they don't know. So if a model isn't educated enough to understand policy changes, either from a regulatory standpoint or even inside of a company, it's going to give you bad recommendations.

SPEAKER_02

Yes, and I would rather see AI come back and say, historically, this is what we see.

SPEAKER_03

Right.

SPEAKER_02

And instead of telling you, ask this, do this, I would really rather have it come back and say, like I said, that historically something looks a certain way, instead of stating as fact that this is what we should do now.

SPEAKER_00

That is a really important thing. You know, it gets to another topic. Our whole conversation can be framed in so many different ways because there are so many threads. I'll go back to the pace at which we're moving. We need to slow down and take a hard look at what we're doing, how we're doing it, why we're doing it, and what it is we're trying to accomplish. You just touched on this, the illusion of intelligence, essentially. AI systems are confident, coherent, and wrong all at the same time. How does bad data hide behind good output?

SPEAKER_02

Yeah. Now, I am going to push back a little bit on a comment that you just made about having to slow down. I truly believe that people need to understand that doing things right the first time may feel like it's slowing down.

SPEAKER_03

Right.

SPEAKER_02

But what is the cost when you ignore those basic things that you think take too long at the beginning, and it ends up completely sabotaging the whole project?

SPEAKER_03

Right.

SPEAKER_02

Or you have to do the rework, or you do all the things you should have done first after some time period, and it's much, much more expensive. So yes, there's the illusion of going slow, but sometimes going slower actually helps us go faster and, in the long run, actually saves us money. And I have seen it again and again over my career: some big new silver bullet technology comes into place.

SPEAKER_03

Right.

SPEAKER_02

People are anxious to put that in, and it always comes back to: let's get the data right. Slow down to go fast.

SPEAKER_00

That's very well said, and I couldn't agree more. I know it's not fun work sometimes. People don't necessarily get excited to go clean up the data and get it fit for use, data wrangling, whatever you want to call it. But sadly, most people in roles with "analyst" somewhere in the title have spent far more time wrangling data and getting it fit for use than actually doing the analysis it was intended to enable.

SPEAKER_02

Yes, we've definitely seen that over time. The good news is there are people like me who actually get excited about data. Thank you. Excited about data quality. I've had intersections in my career where I could have gone different places, and I always came back to data, always came back to data quality, because I understand how important it is. And I know there are other people out there like that. I'm going to tie this back to the leadership point you made earlier. We need to not only clean up the data, we need to prevent the problems from happening. So yes, there may be some cleanup we have to do, but from a leadership point of view, we need leaders to look at root cause, look at prevention. Prevention is always cheaper than the cleanup. It's the difference between fire prevention and firefighting. Is it better to prevent a forest fire or to fight it once it starts? I think that makes it really easy to see the difference between prevention and correction. We have to have some of both, but too often leadership focuses only on the correction piece. We need equal focus on prevention and getting to root causes.

SPEAKER_00

You've said that ethics starts in the metadata. Can you unpack that a little bit?

SPEAKER_02

So, metadata: data about data. Before people's eyes glaze over, metadata is the basic things you need to know about your data.

SPEAKER_03

Right.

SPEAKER_02

What is the name of the field where the data is stored? What is the definition? Are there lists of allowed values, if it's some kind of an ID or a code, for instance? So metadata is the basic things you need to know about your data. If we do not have those common definitions, and it really comes down to a common language and understanding about the data itself, we're going to have a really hard time using the data in the right places and understanding the output of the things that come to us. And anything that shows bias, we don't want reflected in the metadata. Everything builds up from that foundation.
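The guardrail role of metadata that Danette describes can be sketched as a small validation pass: records are checked against a data dictionary of definitions and allowed values before they're used. The `order_status` field and its code list here are hypothetical, invented purely for illustration.

```python
# Sketch: a tiny data dictionary (metadata) and a check of records against it.

metadata = {
    "order_status": {
        "definition": "Current fulfillment state of the order",
        "allowed_values": {"OPEN", "SHIPPED", "CANCELLED"},
    },
}

def validate(record: dict, metadata: dict) -> list[str]:
    """Return one message per violation found in the record."""
    problems = []
    for field, value in record.items():
        spec = metadata.get(field)
        if spec is None:
            problems.append(f"{field}: no metadata entry (undefined field)")
        elif "allowed_values" in spec and value not in spec["allowed_values"]:
            problems.append(f"{field}: {value!r} not in allowed values")
    return problems

# A misspelled code is caught because the allowed-value list exists.
print(validate({"order_status": "SHIPED"}, metadata))
```

The point is less the code than the dependency: without the shared definitions and value lists, there is nothing to validate against.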

SPEAKER_00

Right. Yeah, it's an interesting area, because all too often there isn't a coordinated data and analytics function inside an organization. One group is doing something one way, another is doing it a different way, and if the language and terminology they're using is not the same, it can get really confusing. I don't even know how many times I've stumbled into things like that, where I'm thinking, oh my gosh, you're all measuring the same thing, but it's different. You're looking at it through a different lens. Is there a great reason for that? Maybe there is, but sometimes there's just not. They just haven't talked and actually done these kinds of exercises.

SPEAKER_02

Yeah, and and that's where the governance piece comes in. Because if you have good data governance, to me, what that means is you are bringing people together.

SPEAKER_03

Right.

SPEAKER_02

They may have never met each other, but they're all using the same data. They may call it something different; they may depend on each other in a way they haven't before. Good governance will bring those people together and facilitate the conversations and those kinds of decisions. Things have just grown up organically. They call things differently because we used to have a platform that only looked at sales, a platform that only looked at products, a platform that only looked at marketing. We now have all kinds of platforms that bring these various areas together. That is where you really see the data quality problems, and where you really see the differences in how people have been using, looking at, managing, and treating the data. That makes governance very, very important: it brings those disparate people together.

SPEAKER_00

I love how you're describing it, because far too often people think of governance as controls. To some extent it can be, but it's really more about enabling people to produce results that you can trust. It's there as a protection. But I'm sure you've heard leaders say things like, we'll clean it up later. What's your response when you hear that?

SPEAKER_02

Yeah. Oh, I love this one. Let's take an example of some kind of integrated platform like ERP, Enterprise Resource Planning. It will have things like your finance, your manufacturing, and your sales, bringing all these systems together into one platform. You literally spend tens of millions of dollars getting the technology in place and being able to migrate and integrate the data. And at some point along the way, not even at the last minute, someone says, let's just load it, we'll clean it up later. Okay. There are a few fallacies in that thinking. Number one: do you really think you can load the data if it has not been put into the state the new technology requires? You're going to have a lot of data that absolutely cannot load. The second: once you get it in there, do you think you're actually ready to go to production? Do you think people can actually use that data? I have experience in seeing the difference between those who took care of the data and those who didn't. This was one large global migration, and I'm not going to give you any more information about it than that. All of the sites around the world that bought into the data quality message we were bringing forward did the things they needed to do to look at the quality of the data: this is what it looks like today in our system, this is what it needs to look like tomorrow in the new system, what is the gap, and how do we close it? Every site that did that was able to close their financial books. Month-end close was something like two weeks after this huge migration; quarter-end close was six weeks. They were absolutely able to close all of their books, and they were able to ship products. The only site that refused to do any of the data quality work was the only site that could not ship product.
And I know, because I was on the desk that had to call people in the middle of the night around the world and say, we have an emergency and something we have to take care of here. So I have seen it and I have lived it. That "clean it up later" approach is common, and it really doesn't work.
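The "what it looks like today versus what it needs to look like tomorrow" assessment Danette describes can be sketched as a pre-load gap check: run the legacy records against the target system's rules before migration and count what would fail to load. The field names and rules below are hypothetical; real constraints would come from the new platform's specification.

```python
# Sketch: count legacy rows that would violate target-system constraints,
# per field, before attempting the migration load.

# Hypothetical target-system rules: part numbers must be 10 characters,
# unit prices must be non-negative numbers.
target_rules = {
    "part_number": lambda v: v is not None and len(str(v)) == 10,
    "unit_price": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def load_gap(rows: list[dict], rules: dict) -> dict:
    """Return, for each constrained field, how many rows would fail."""
    failures = {field: 0 for field in rules}
    for row in rows:
        for field, ok in rules.items():
            if not ok(row.get(field)):
                failures[field] += 1
    return failures

legacy = [
    {"part_number": "AB12345678", "unit_price": 19.5},   # clean row
    {"part_number": "AB123", "unit_price": -1},          # too short, negative
]
print(load_gap(legacy, target_rules))
```

A report like this, produced early, is what lets each site plan the cleanup work instead of discovering load failures at go-live.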

SPEAKER_00

Yeah, that example is, I think, pretty compelling if people are not sure whether they should really do this exercise. I want to move to another topic. It's interesting, because you and I have used some similar terminology, but from a different lens. I've talked about technical debt recently, and I've talked about data technical debt, but you're talking about something that is data quality debt. What does that debt look like, and who ends up paying it?

SPEAKER_02

Well, I think my words around data quality debt might be similar to your data debt. People might be more familiar with technical debt, which is the idea that there are certain things we should do when we're putting in a new system: maybe we need to customize some things, change some processes, train people, whatever that is.

SPEAKER_03

Right.

SPEAKER_02

And they decide, oh, we don't have time to do that. Well, there is an approach that tracks that, and they understand that if they skip doing it, they have incurred some technical debt, which will need to be paid at some point in time. It is the very same thing with the data and data quality. We need to analyze, assess, and profile our data to see the gap between what it looks like today and what it needs to look like tomorrow, and there are certain things we need to do to close that gap. If we choose not to do them, we have incurred data quality debt in the same way technical debt is incurred. There will be a cost to not taking care of it now, and a bigger cost to taking care of it later. I told my kids this all the time, and it applies to everything in life: pay now or pay later. But if you pay later, you always pay more. Going along with that debt, let me give you an example. There is something called the rule of tens, identified a long time ago for implementations. Say you have requirements, design, coding, go-live, and then production. If you find a problem, and this applied to software but also applies exactly to data, as we've seen over the years: the cost of fixing it grows exponentially at each stage. If it costs you a dollar to fix it at requirements or design, the next stage it will cost you ten dollars. If you don't do anything about it until testing, you're probably at $1,000. That is an example of the kind of debt you incur. In fact, I gave a presentation, a workshop, last week in Phoenix at DAMA, the Data Management Association. They have a chapter there in Phoenix, and I was talking about the rule of tens.
A gentleman raised his hand and said, it's not the rule of tens, it's thousands. Because when you get into production and the business cannot carry forward, he said, it's thousands and thousands. So that is an example of the progression of the debt and how much more expensive it is when it is delayed.
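The rule-of-tens escalation can be written down as a one-line cost model. This is a sketch of the heuristic as described in the conversation, not a precise costing method: the stage names and the $1 base cost are illustrative.

```python
# Sketch: cost to fix a defect grows ~10x at each later lifecycle stage.

STAGES = ["requirements", "design", "coding", "testing", "production"]

def fix_cost(stage: str, base: float = 1.0) -> float:
    """Estimated cost of fixing a defect first caught at the given stage."""
    return base * 10 ** STAGES.index(stage)

for stage in STAGES:
    print(f"{stage:>12}: ${fix_cost(stage):,.0f}")
```

By this model a $1 requirements-stage fix becomes $10,000 in production, which is the gap the Phoenix attendee was arguing is, if anything, an underestimate.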

SPEAKER_00

Yeah, this is a really powerful thing to think about, because I don't think people are necessarily doing the math properly to calculate the cost of inaction. It's a big, big deal, and it snowballs, just like we were saying about training the AI models. You're training the model to make things worse by not addressing the data quality. That's what you're building; you're instantiating it into your organizational structure, and you're setting yourself up for a really, really bad outcome. I'm certainly not going to be a person who challenges you when you come in and tell me we've got to fix the data for that reason.

SPEAKER_02

Well, okay, so here's the thing. Some people may say, oh, data quality, it's just correcting the data, or who cares about this? And as you can see, I'm pretty enthusiastic about data quality. I have been for all these years. But I never do data quality for the sake of data quality. We do data quality for the sake of whatever you as an organization are trying to accomplish. I don't care if you're a for-profit in any vertical, government, education, healthcare. We only do data quality because it helps our organizations do the things they need to do.

SPEAKER_03

Right.

SPEAKER_02

That's why it's there. It's not there to get in your way; it's there to help you do whatever your organization cares about. We can help you do it better.

SPEAKER_00

Right. So as we're winding down a little bit, maybe we can take this a little further and talk about some practical steps organizations should be taking to make sure that quality is part of the AI life cycle, not a gatekeeping function.

SPEAKER_02

Okay, yes. Thank you for bringing that up, because I do have a methodology that I have developed over the years. It's called Ten Steps to Quality Data and Trusted Information. I call myself a second-generation pioneer, because I learned from the three gentlemen in the US whom I call the first-generation pioneers, who in the late 80s and early 90s were really bringing visibility to data quality as an issue and as a set of skills you need to learn: Tom Redman, Larry English, and Rich Wang, out of MIT. My innovation, building on the shoulders of those giants, was the Ten Steps methodology. It's in the book Executing Data Quality Projects; the subtitle is the name of the methodology. There's a second edition out, and I can feel very confident and comfortable that if you need to know how to deal with data quality, how to create, manage, and sustain the quality of data, this methodology provides you a roadmap. It's flexible and scalable: one person on a four-week project, or many people on a many-month project, and it has proven itself. It's used around the world. It is available in Chinese and Japanese, and the Spanish translation is underway. People in different cultures, different types of organizations, big uses, small uses. To me, this is really the practical answer to how we go about dealing with the data quality problem you and I have just been discussing.

SPEAKER_03

Right.

SPEAKER_02

And the good thing about this: it is very complementary to other resources. It is the roadmap in the middle of the "what do we do?", alongside the really good, detailed books and resources that go deep into one slice of the data quality pie. This is what keeps you from getting lost in all the detail: where are we, what do we need to do, and then you bring in that other information, which complements what's already out there. So training your people in how to address data quality is a really, really important leadership decision, because it is the leaders in an organization who say: I understand why data quality is important to what we care about, whatever my organization is. I want people to be trained in how to do that, and I am willing to make that an important part of our funding and investment decisions, so the tens of millions of dollars we're spending on technology will actually work for us.

SPEAKER_00

Right. I can't even explain how much I admire that, because it's not always an area that's well received: it exposes some things that people can be embarrassed about. I remember doing data profiling at different times, and there were assumptions made about the values you might find in a particular field, but when you do the profile, you find out, oh my gosh, no one has really maintained this at all. It's terrible. It's a really important thing.
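The kind of data profiling Chris describes can be sketched in a few lines: for each field, measure fill rate, distinct values, and the most common values, which is often enough to surface the "no one has maintained this" surprises. The sample records below are hypothetical.

```python
# Sketch: a minimal column profiler over a list of record dicts.
from collections import Counter

def profile(rows: list[dict]) -> dict:
    """Per field: fill rate, distinct non-null values, and top values."""
    fields = {f for row in rows for f in row}
    report = {}
    for field in fields:
        values = [row.get(field) for row in rows]
        non_null = [v for v in values if v not in (None, "")]
        report[field] = {
            "fill_rate": len(non_null) / len(rows),
            "distinct": len(set(non_null)),
            "top_values": Counter(non_null).most_common(3),
        }
    return report

rows = [
    {"country": "US", "phone": ""},
    {"country": "US", "phone": None},
    {"country": "us", "phone": "555-0100"},
]
report = profile(rows)
print(report["phone"]["fill_rate"])    # most phone values are missing
print(report["country"]["distinct"])   # "US" vs "us": inconsistent coding
```

Even this toy profile exposes two classic findings: a mostly empty field that everyone assumed was populated, and a field whose values are coded inconsistently.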

SPEAKER_02

It is really important, and it is natural for people to feel somewhat defensive and not want to uncover the problems. But I learned something early on, on one of the first data quality projects I did as an employee, which is where I got my feet wet in global high-tech companies.

SPEAKER_03

Right.

SPEAKER_02

And we found some problems that were completely unexpected and were costing the company tens of millions of dollars. The person who had sponsored the project was also the owner of the system for many years. It happened to be a woman, though it could have been a man; it doesn't matter. It would have been really easy for her to say, you know what, we didn't expect that. Let's just sweep this under the covers. We're good, nothing to see here, everybody go home.

SPEAKER_03

Right.

SPEAKER_02

I loved the fact, and learned from her, that she stepped up and shared the problems widely. She said, if we don't talk about the problems, nothing will get changed. She took a risk, she showed some courage, and she stepped up. To me, that's what a real leader does. I've tried to emulate that, and I learned from it. Fortunately, I had that good example.

SPEAKER_00

Yeah, that's powerful. People really need to understand that leadership is sometimes difficult, and that you actually have to own something you'd rather not have to own. But that's a really important characteristic in a leader. So thank you for sharing that. When we're talking about quality and the need to build it into the culture of an organization, and about how important it is, when you're building AI solutions, that this stuff actually gets addressed in the process, not after the fact: how do you think about operationalizing that, and what do leaders really need to be doing to ensure they're approaching this properly and it's getting into the DNA of the company?

SPEAKER_02

Yeah, so the Ten Steps start with the business needs and go all the way to Step 9, which is really a handoff toward operationalizing the things you found as you were addressing the initial data quality problems. There are a couple of important pieces there. One of them is: how do we even know it's worthwhile to do this? I have a step around business impact techniques that can help you show, in both qualitative and quantitative ways, the value of looking at your data quality. You don't have to just take my word for it; I have ways you can actually look at that and apply it to your company. The other piece is that in order to operationalize something, people have to carry it out. People have to be willing to make that change. So within the methodology there's Step 10: communicate, manage, and engage with people throughout. We have to deal with the human element of our data quality work. Everything we do around data quality triggers some kind of change. Oh, wow, we need to train someone: that's a change. We need to refine a role, we need to refine or create new processes, we find a technology bug. All of those things have to work together for the data to work, and each one triggers change. So it's really important for leadership to understand that human element and to support what needs to happen for people to be willing to work through the change, embrace it, and see the value. People will, and then they will get excited, and that will carry on. But to your point about operationalizing: that is an important piece of making this ultimately work and sustaining the things we're trying to do.

SPEAKER_00

Yeah, and I think about the importance, from a cultural perspective, of making it a safe environment for people to identify these things and just own them. My parents always taught me when I was young: if you do something you shouldn't do, or you've broken something, don't let us discover it. You tell us.

SPEAKER_02

Yes. And that is such a good example. You and I had parents who taught us the same type of thing. It's also helpful for people to remember that they may own something whose problems they didn't create. These systems grow up in organic ways, and sometimes not in good ways. So I really try to focus on the fact that processes have changed, the world outside of us has changed, requirements have changed. I'm really not into pointing fingers at the people who were involved. We are just trying to make this better. And that kind of attitude goes back to what you were saying about creating a safe environment where people can talk about the problems and come up with the solutions, and not feel like they're taking a huge risk, because it is safe to do that.

SPEAKER_00

Yeah, it's always wise to have the people closest to the action be part of the solution. It saves a lot of hassle, and honestly, the more people see that it's acceptable to identify a problem and actually work on it collectively to fix it, the better off you are, because people will feel safe and they'll understand this is not lip service. Leadership really does want us to be comfortable, and they understand we're human beings.

SPEAKER_02

Absolutely.

SPEAKER_00

It occurs to me, as we're talking about the need for this kind of training, that there may be some folks out there listening who need your services at Granite Falls Consulting. So I'd love to have you tell us how people can get in touch with you and where they can find your book. Certainly I'll make sure all of that is available in the show notes later. But if you wouldn't mind, tell folks how to reach you.

SPEAKER_02

Yes. So my company is Granite Falls Consulting, at gfalls.com. That's G for Granite, Falls as in waterfalls: g-f-a-l-l-s, S as in Sam, dot com. You can find me out there at danette@gfalls.com. Connect with me on LinkedIn, send me an email, and we can have a conversation about what you need and how I can help. I do consulting, I do training, and I do one-on-one coaching and mentoring if that is helpful for people, also. Those are the three main areas, all around the data quality and governance work that will help your organization.

SPEAKER_00

Well, thank you so much for that. I can't tell you that I run across a whole lot of people who are as enthusiastic about, and actually enjoy, the work the way you do. That's a valuable asset organizations should be thinking about taking advantage of. Maybe you can get your own teams energized just by bringing in a good coach and taking some instruction. This is such an important area, particularly as we're heading into this crazy transformation with AI, and as more and more gets instantiated in the foundations of this technology all the time. We can't afford not to deal with data quality. So thank you for that, and thank you so much for taking the time to sit with me. This has been a fun conversation for me, and I hope our listeners will enjoy it as well. I hope it's been fun for you too. So thank you so much.

SPEAKER_02

Yeah, absolutely. Thanks, Chris. It's just been a pleasure. Love talking with you.

SPEAKER_00

Well, folks, that's it for this episode of the Signal Room. If today's conversation sparks something in you, an idea, a challenge, a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit signalroompodcast.com to explore being a guest on an upcoming episode. I'd love to have you. We're here to amplify the signals that matter across leadership, ethics, and innovation. Until next time, stay tuned, stay curious, and stay human.
