The Signal Room | AI in Healthcare & Ethical AI

Data Quality and AI Strategy: The Rising Cost of Ignoring Data Governance | Danette McGilvray

Chris Hutchins | Season 1, Episode 6




Data quality is not just a technical problem. It is a strategic and ethical leadership issue that determines whether AI deployments succeed or cause real-world harm. Danette McGilvray, President of Granite Falls Consulting and author of Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information, has spent more than 30 years helping organizations understand that the data powering their technology is the reason the technology exists. Her framing is direct: organizations will spend tens of millions of dollars on technology but refuse to spend hundreds of thousands on the data that makes it work.


In this episode of The Signal Room, host Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants, sits down with Danette to explore why the race to scale AI is outpacing the ability to ensure the data behind it is fit for use. Danette introduces the concept of data quality debt, paralleling the well-known idea of technical debt, and explains the rule of tens: a data problem that costs one dollar to fix during requirements costs ten dollars at the next stage and thousands by the time it reaches production. She shares a real-world case where every site in a global ERP migration that invested in data quality could close their financial books on time, while the only site that refused could not ship product.


The conversation covers how AI bias is perpetuated by historical data, citing research showing AI recommended a $400,000 starting salary for a male medical specialist versus $280,000 for a woman in the same role. Danette argues that ethics starts in the metadata, that governance is about enabling people rather than controlling them, and that doing things right the first time only feels like slowing down while actually saving money and accelerating results. Her ten steps methodology, now in its second edition and translated into Chinese and Japanese with Spanish underway, provides a roadmap for any organization that needs to create, manage, and sustain quality data.


About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.


Website: https://www.hutchinsdatastrategy.com 

LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/ 

YouTube: https://www.youtube.com/@ChrisHutchinsAi

Book Chris to speak:  https://www.chrisjhutchins.com

Christopher Hutchins:

Welcome to the Signal Room, where leadership, ethics, and innovation meet at the intersection of healthcare data and AI. I'm Chris Hutchins, and here we cut through the noise to uncover the signals that matter: the choices, the challenges, and the conversations shaping how technology serves people. Our topic today is garbage in, gen AI out: the rising cost of ignoring data quality in the age of AI. Before AI became the default strategy, before data was seen as a currency, Danette McGilvray was already doing the work of making information usable, trusted, and fit for purpose. Today on the Signal Room, we're joined by Danette, the President of Granite Falls Consulting, which is celebrating its 20th anniversary. Happy anniversary, by the way. With more than 30 years of data leadership, including 13 years inside global high-tech firms in the heart of Silicon Valley, she's helped Fortune 500 companies, federal agencies, and public institutions all build trust in their data before building anything else. So in this episode, we ask what happens when generative AI is deployed without quality guardrails and why poor data isn't just a technical issue, but a strategic and ethical one. From hidden costs to real-world harm, Danette brings clarity, urgency, and a blueprint for action. As we get started, I'll frame the conversation for us: the race to scale AI is outpacing organizations' ability to ensure the data behind it is fit for use.

Danette McGilvray:

Thank you, Chris. Really happy to be here.

Christopher Hutchins:

Well, I'm excited to be having the conversation with you today. It was a pleasant surprise that we ran into each other at the Put Data First Conference in Las Vegas a couple weeks back.

Danette McGilvray:

Yes, always nice to meet in person.

Christopher Hutchins:

Yeah, so I think we had a lot of very interesting conversations. I know I learned a ton at the one round table that I got to sit with you. I was really taken by the depth of understanding that you bring across multiple industries, not only the ones that I'm worried about most like in the healthcare space. But I think what was really powerful is understanding that there's ethical concerns when it comes to data quality. It's not just having bad data that's a technical problem or incomplete or whatever. There's ethical reasons behind why it's such an important thing and why you've committed your career to really addressing data quality. So as we get started, with over 30 years working in data quality, long before most companies had chief data officers, which is still a new thing, what's changed most about the way leaders value or ignore data and its quality today?

Danette McGilvray:

Oh, well, what has changed or what is different? I would say what has not changed is the lack of emphasis on the data itself. There is still a lot of emphasis on technology. I am all for technology. So some of the tools that we use now, even within the data quality space, not counting the tools that organizations use to do their businesses better, I love them. I'm very happy for them. But I have found, and I'm gonna exaggerate this just a little bit to make a point, but I have seen that organizations will spend millions, tens of millions of dollars on technology, and yet they're not willing to spend hundreds of thousands of dollars on the data that powers the technology, the data which is the reason that the technology exists. So that is something that is still a challenge for anyone who cares about the data and really wants this technology to reach its full potential.

Christopher Hutchins:

So what you're describing is not just a technical problem, it's a leadership issue.

Danette McGilvray:

It is absolutely a leadership issue. And I think you really said it well in the introduction when you were talking about the race to AI. Everyone wants to be first. And I'm gonna use this example of something that was really disappointing to me. So when ChatGPT first came out, I feel like it should have had a warning flag and big flashing red lights on the front page saying this can give you wrong answers. This is not fully tested, shall we say. Now I did find that information, but it took some clicks and very fine print to find it. It's interesting that now I can see much more clearly that ChatGPT, and I looked at this just a month or so ago, now says ChatGPT is not meant to give advice. Some of the answers may be wrong. Are you kidding me? It's a little late for that. Especially for the general population who has heard about this, it's caused a lot of excitement. Not really knowing that they need to be more cautious.

Christopher Hutchins:

That's a great point because people get comfortable with technology rather quickly these days, and it's kind of sad. When I talked to you earlier, I had mentioned the conversation about trust that I had had recently with a psychologist. But we've been thinking about trusting AI, and we have a bigger problem: people don't trust people like they used to. So what's really frightening is that there are people who are trusting this capability and they're not likely even listening to the instruction around what they should or shouldn't use it for. This is a real problem.

Danette McGilvray:

Yes, and this is going to go back to the leadership piece that you were just talking about. So when AI comes forward, companies, we heard stories about people who were putting questions to ChatGPT, and in the process of that were giving away proprietary information because people were not aware of how it was going to be used, who else could access it, what things were made available. When these new technologies come out, it is up to leadership to say, can we use this? Should we use this? If we should, how can we be smart about it? How can we protect ourselves? And I think in the race, too many of the basic questions were not asked and answered.

Christopher Hutchins:

Right. It's interesting when you think about the natural behavior of the platform. It's in and of itself proving that it shouldn't be trusted. I think we talked about this when we were in Las Vegas together recently, that whatever you tell it, it seems to acknowledge it, but if you tell it not to do something, the next thing it does almost always is exactly what you told it not to. So it's not even safe to trust it after you tell it what you need it to do. It's just like a child.

Danette McGilvray:

Well, and it's interesting, and I think you and I might have talked about this a little bit before, which was it's almost too human in that it's trying to please you. So for instance, we have documented instances of AI making up references. You go check the references, you look at the reference, and it looks good: it's formatted, it's in the same format, it looks real, and it's a made-up reference. So it's really important for people to take the good that they can get from that, but I would say you need to verify, you need to check. So the same thing would be true when you go out to Google and you look for something, and it comes up with an AI overview. You're seeing an AI overview before you see the source data. You need to scroll down. I have had cases where the overview of things that I know and the things that I do looks pretty good. It's maybe 90% right, but there were things that were wrong. So the thing I worry about is when experts like you and I are gone, how are people going to be able to tell the difference? So I think it is really up to everyone who uses AI. Go to the sources, do some testing, do much testing. But we really need to take it upon ourselves to not individually just take the easy way out. Oh yeah, there's the overview, let me just take that. It might be right, but is it?

Christopher Hutchins:

No, this is a really important point because if we don't catch those things, we're also training the model, which means we're further embedding that information so other people can be tripped up and use it inappropriately. I don't know if we could even overstate the importance of this. I just think it's really critical that people understand.

Danette McGilvray:

Yeah, absolutely. And when you talk about the training, the training of these models is coming from data that is out in the world. We know that bias, and I'm gonna talk about bias for a minute. We know bias is out in the world. And there was an article that came out in Computer World in July about some research that had come out, I believe it was out of a university in Germany, and they had looked at different personas, and they were presenting these personas to AI talking about what kind of salary should I look at, should I ask for? And it was some kind of a medical specialist position. And if AI thought it was a man, it told them that they should ask for a starting salary of $400,000. If AI thought it was a woman, it said that they should ask for a starting salary of $280,000. That's a huge difference. So I have heard the argument, well, it's not up to us to be able to monitor that. But if we do not monitor unfair bias that is out in the world, all we're doing is perpetuating that. So it is the type of thing where we need to be aware of it, and I think we need to take action on that so we don't perpetuate those bad things. I mean, I'm data quality, right? Data quality is all about the data is right, you can trust it, it's a real reflection of what we need, and the whole reason we care about it is because we can trust it and use it with confidence.

Christopher Hutchins:

Right. That brings up something really important. When you're talking about facts, there's clearly been disparities in pay, and they're documented. So if the larger portion of your data set is based on things that hopefully have changed and been changed by intention and sometimes by policy, these are things that your models have to be adjusted for. It's not a bias from someone's opinion. It's just that historically things have been a certain way, and you have to understand when you're asking these models to give you back information or recommendations, they can only make them based on what they know, not on what they don't know. So if they're not educated enough to understand policy changes, either from a regulatory standpoint or even inside of a company, they're gonna give you bad recommendations.

Danette McGilvray:

Yes, and I would rather see AI come back and say, historically, this is what we see. Instead of telling you, ask this, do this, I would really rather have it come back and say, historically, something looks a certain way. Instead of stating it as fact, that this is what we should do now.

Christopher Hutchins:

That is a really important thing. It kind of gets to another topic. Our whole conversation can be framed in so many different ways because there's so many threads. I'll go back to the pace that we're moving. We need to slow down and take a hard look at what we're doing, how we're doing it, and why we're doing it, and what is it that we're trying to accomplish. You just touched on this, the illusion of intelligence, essentially. AI systems are confident, they're coherent and wrong all at the same time. How does bad data hide behind good output?

Danette McGilvray:

Yeah. And taking the time, now I am going to push back a little bit about a comment that you just made about having to slow down. I truly believe that people need to understand that doing things right the first time may feel like it's slowing down. But what is the cost when you ignore those basic things that you think take too long at the beginning, but then it ends up completely sabotaging the whole project? Or you have to do the rework, or then you do all the things that you should have done first after some time period, much, much more expensive. So yes, the illusion of going slow, but sometimes that illusion of going slower actually helps us go faster and in the long run actually saves us money. I think that's really important, and I have seen it again and again over my career. Different new, you know, the big silver bullet technology comes into place. People are anxious to put that in, and it always comes back. Let's get the data right. Slow down to go fast.

Christopher Hutchins:

That's very well said. I couldn't agree more. I know it's not fun work sometimes. People don't get excited necessarily to go and clean up the data, get it fit for use, data wrangling, whatever you want to call it. But sadly, most people who have a title with analyst in it somewhere have spent far more time wrangling data and getting it fit for use than they have actually doing the analysis it was intended to enable.

Danette McGilvray:

Yes, we have definitely seen that over time. The good news is there are people like me who actually get excited about data. Excited about data quality. So I've had intersections in my career where I could have gone different places. I always came back to data, always came back to data quality because I understand how important it is. So I know that there are other people out there that are like that. And I'm gonna tie this in just a little bit to the leadership point that you made earlier. We need to not only clean up the data, we need to prevent the problems from happening. So yeah, there may be some cleanup that we have to do, but from a leadership point of view, we need leaders to understand:

look at root cause, look at prevention. Prevention is always cheaper than the cleanup. It's that difference between fire prevention and firefighting. Is it better to stop a forest fire or to fight a fire once it starts? I think that makes it really easy to see the difference between prevention and correction. We have to have some of both, but too often leadership focuses only on the correction piece. We need equal focus on prevention and getting to root causes.

Christopher Hutchins:

You said that the ethics starts in the metadata. Can you unpack that a little bit?

Danette McGilvray:

So metadata, data about data. Before some people's eyes glaze over, metadata is the basic things that you need to know about your data. What is the name of the field where the data is stored? What is the definition? Are there lists of allowed values? If it's some kind of an ID or a code, for instance. So metadata, the basic things that you need to know about your data. If we do not have those common definitions, and it really comes down to this common language and understanding about the data itself, we're gonna have a really hard time using it in the right places and understanding the output of the things that come to us. And anything that shows bias, we don't want to have that reflected in the metadata. Everything builds up from that foundation.

Christopher Hutchins:

Right. That's an interesting area just because all too often there's not a coordinated data and analytics function inside of some organizations. One group's doing something one way, one's doing it the other. And if their language and terminology that they're using is not the same, it can get really confusing. I don't even know how many times I've stumbled into things like that where I'm like, oh my gosh, you guys are measuring the same thing, but it's different. You're looking at it through a different lens. Is there a great reason for that? Maybe there is, but sometimes there's just not. They just haven't talked and actually had these kind of exercises.

Danette McGilvray:

Yeah, and that's where the governance piece comes in. Because if you have good data governance, to me, what that means is you are bringing people together. They may have never met each other, but they're all using the same data. They may call it something different, they may depend on each other in a way that they haven't before. Good governance will bring those people together and facilitate the conversations and those kinds of decisions. And things have just grown up. They call things different because we used to have a platform that only looked at sales. We used to have a platform that only looked at the products, we used to have a platform that only looked at the marketing. We now have all kinds of platforms that bring these various areas together. That is where you really see the data quality problems and you really see the differences in how people have been using, looking at, managing, treating the data, which makes governance very important for that reason to bring those disparate people together.

Christopher Hutchins:

I love how you're describing it because far too often people think of governance as controls. And to some extent it can be, but it's really more about enabling people to produce results that you can trust. And it's there as a protection. But I'm sure you've heard leaders say things like, we'll clean it up later. What's your response when you hear that?

Danette McGilvray:

Yeah. Oh, I love this one. So let's take an example of some kind of integrated platform like ERP, Enterprise Resource Planning. It will have things like your finance and your manufacturing and your sales, bringing all these systems together into one platform. You literally spend tens of millions of dollars in getting the technology and being able to migrate and integrate that data as it comes. And at some point along the way, they're saying, you know, yes, let's just load it, we'll clean it up later. Okay. A few fallacies in that thinking. Number one: do you really think you can load it if it has not been put into the state the new technology requires? You're going to have a lot of data that absolutely cannot load. The second one is once you get it in there, do you think that you're actually ready to go to production? Do you think people can actually use that data? So I do have experience in seeing the difference between those who took care of the data. This was the same large global migration that I'm not going to give you any more information about. All of the sites around the world who bought into the data quality message that we were bringing forward did the things that they needed to do to look at the quality of the data: this is what it looks like today in our system, this is what it needs to look like tomorrow in the new system, what is the gap, how do we change that? Every site that did that around the world was able to close their financial books. Month-end close was two weeks after this huge migration; quarter-end close was six weeks. Absolutely able to close all of their books. They were able to ship products. The only site that refused to do any of the data quality work was the only site that could not ship product. And I know because I was on the desk that had to call people around the world in the middle of the night and say we have an emergency and we have something we have to take care of here.
So I have seen it and I have lived it. "We'll clean it up later" is a common refrain, and it really doesn't work.

Christopher Hutchins:

Yeah, that example is pretty compelling if people are not sure whether they should really do this exercise or not. I want to talk about another concept. It's interesting because you and I have used some similar terminology, but from a different lens. So I've talked about technical debt recently and I've talked about data technical debt, but you are talking about something you call data quality debt. What does that debt look like and who ends up paying it?

Danette McGilvray:

Well, I think maybe my words around data quality debt might be similar to the data debt. People might be more familiar with technical debt, which is the idea that there are certain things when we're putting in a new system, and maybe we need to customize some things or we need to change some processes or we need to train people, whatever that is.

And they decide, oh, we don't have time to do that. Well, there is an approach there that tracks that, and they understand that if they skip doing that, they have incurred some technical debt, which will need to be paid at some point in time. It is the very same thing around the data and the data quality. Oh, we need to analyze and assess and profile our data to see what is that big gap between what it looks like today and what it needs to look like tomorrow. There are certain things that we need to do to close that gap. If we choose not to do that, we have incurred that data or data quality debt just in the same way that technical debt is incurred. There will be a cost to not taking care of it now, and there will be a bigger cost to taking care of it later. I told my kids this all the time, and it applies to everything in life:

pay now or pay later. But if you pay later, you always pay more. So going along with that debt, there is something called the rule of tens. This was identified a long time ago in the context of systems implementation. So let's say that you have requirements, you have design, you have your coding, you go live, you have production. If you find a problem, and this applied to software, but it also exactly applies to data, we've seen this over the years, the cost grows by a factor of ten at each stage. If it costs you a dollar to fix it in the requirements or design stage, the next stage it'll cost you ten dollars. If you don't do anything about it until testing, you're probably at a thousand dollars. So that is an example of the kind of debt you incur. And in fact, I did a presentation, a workshop last week in Phoenix at DAMA, the Data Management Association. They have a chapter there in Phoenix, and I was talking about the rule of tens. There was a gentleman who raised his hand and said, it's not the rule of tens, it's thousands. He said, because when you get into production and the business cannot carry forward, it's thousands and thousands. So that is just this example of the progression of the debt and how much more expensive it is when it is delayed.
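The rule of tens Danette walks through can be sketched in a few lines. The stage names and the one-dollar baseline are illustrative, taken from her example rather than measured figures, and the tenfold factor is the rule of thumb she cites, not a precise model.

```python
# Illustrative sketch of the "rule of tens": the cost of fixing the same
# data problem grows roughly tenfold at each later lifecycle stage.
# Stages and the $1 baseline follow the conversation; they are rough
# orders of magnitude, not measured data.

STAGES = ["requirements", "design", "coding", "testing", "production"]

def fix_cost(stage: str, base_cost: float = 1.0) -> float:
    """Estimated cost to fix a defect first caught at `stage`."""
    return base_cost * 10 ** STAGES.index(stage)

for stage in STAGES:
    print(f"{stage:>12}: ${fix_cost(stage):>10,.0f}")
```

Running it prints the escalation from $1 at requirements to $10,000 at production; as the DAMA audience member pointed out, once a business outage is involved the real multiplier in production can be far larger still.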

Christopher Hutchins:

Yeah, this is a really powerful thing to think about because I don't think people necessarily are doing the math properly to really calculate the cost of inaction. It's a big deal, and it snowballs. Just like we're talking about training the AI models. You're training it to make things worse by not addressing the data quality. That's what you're building, you're instantiating it into your organizational structure, and you're setting yourself up for a really bad outcome. I'm certainly not going to be a person that would challenge you when you come in and tell me we've got to fix the data for that reason.

Danette McGilvray:

Well, okay, so here's the thing. Some people may say, oh, data quality, it's just correcting the data, or who cares about this? You can see I'm pretty enthusiastic about data quality. I have been for all these years. But I never do data quality for the sake of data quality. Why do we do data quality? For the sake of whatever you as an organization are trying to accomplish. I don't care if you're for-profit in any vertical, government, education, healthcare. We only do data quality because it helps our organizations do the things that they need to do. That's why it's there. It's not there to get in your way, it's there to help you do whatever your organization cares about. We can help you do it better.

Christopher Hutchins:

Right. So as we're winding down, maybe take this a little bit further and talk about some practical steps organizations should be taking to make sure that quality is part of the AI lifecycle. It's not a gatekeeping function.

Danette McGilvray:

Okay, yes. So thank you for bringing that up because I do have a methodology that I have developed over the years. It's called Ten Steps to Quality Data and Trusted Information. I definitely call myself a second-generation pioneer because I learned from the three gentlemen in the US whom I call the first-generation pioneers, who in the late 80s and early 90s were really bringing visibility to data quality as an issue, as skills, things that you need to learn. So I call myself a second-generation pioneer because I learned directly from all of them: Tom Redman, Larry English, and Rich Wang out of MIT. And my innovation, building on the shoulders of those giants, was my ten steps methodology. Well, here's the book, Executing Data Quality Projects. The subtitle is the name of the methodology. So there's a second edition that is out where I can feel very confident and comfortable that if you need to know how to deal with data quality, how to create, how to manage, how to sustain the quality of data, this is a methodology that provides you a roadmap. Flexible, scalable. One person, four-week project, or many people, many-months project, and it has proven itself. It is used in different languages: it is available in Chinese and Japanese, and the Spanish translation is underway. People around the world, different cultures, different types of organizations, big uses, small uses. So to me, this is really the practical answer to how do we go about dealing with this data quality problem that you and I have just been discussing. And the good thing about this: it is very complementary to other resources, but it is the roadmap in the middle of the what-do-we-do. There are really good detailed books and resources that go into a lot of detail about a slice of the data quality pie. This is what keeps you from getting lost in all the detail. Where are we? What do we need to do? And you bring in this other information; it complements the other things that are out there.
So learning and training your people in how to address data quality is a really important leadership decision because it is the leaders in an organization who say, I understand why data quality is important to what we care about in whatever my organization is. I want people to be trained in how to do that. And I am willing to make that an important part of our funding and our investment decisions. So the tens of millions of dollars we're spending on technology will actually work for us.

Christopher Hutchins:

Right. I can't even explain to you how much I admire that because it's not always an area that's well received because it exposes some things that people can be embarrassed about. I remember doing some data profiling at different times, and there were assumptions made about the values that you might find in a particular field, but when you do the profile and you find out, oh my gosh, no one has really maintained this at all. It's terrible. It's a really important thing.
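The profiling exercise Chris describes, checking what values a field actually contains against what everyone assumed, can be sketched with a simple frequency count. The expected codes and the sample values below are invented for illustration.

```python
from collections import Counter

# A minimal data-profiling sketch: count the distinct values actually
# present in a field and compare them with the documented set. The
# expected codes and sample column values are made up for illustration.

expected = {"M", "F", "U"}
observed = ["M", "F", "f", "male", "", "M", "U", "FEMALE", None]

profile = Counter(observed)
unexpected = {value for value in profile if value not in expected}

print("value frequencies:", dict(profile))
print("values outside the documented set:", unexpected)
```

Even this toy profile surfaces the typical findings, case variants, spelled-out values, empty strings, and nulls, which is exactly the "oh my gosh, no one has maintained this" moment the conversation describes.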

Danette McGilvray:

It is really important, and it is natural for people to feel somewhat defensive, to not want to uncover the problems. But I learned something early on in my career, on one of the first data quality projects I did as an employee, which is where I got my feet wet in global high-tech companies. We found some problems that were completely unexpected and costing the company tens of millions of dollars. And the person who had sponsored this project was also the owner of the system for many years. It would have been really easy for that woman to say, you know what, we didn't expect that. Let's just put this under the covers. We're good, nothing to see here, everybody go home. I loved the fact, and learned from it, that she stepped up and shared the problems widely. And she said, if we don't talk about the problems, nothing will get changed. So she took a risk, she showed some courage, and stepped up. And to me, that's what a real leader does, and I've tried to emulate that. Fortunately, I had that good example.

Christopher Hutchins:

Yeah, that's powerful. People really need to understand that leadership is sometimes difficult and that you actually have to own something that you'd rather not have to own. But that's a really important characteristic in a leader. Thank you for sharing that. When we're talking about quality and the need to really build this into the culture of an organization, and when you're thinking about how important it is when you're building AI solutions that the stuff actually is addressed in the process, not after the fact, how do you think about operationalizing that and what do leaders really need to be doing to ensure that they're approaching this properly and it's getting into the DNA of the company?

Danette McGilvray:

Yeah, so the ten steps start with the business needs. They will go all the way to a step nine, which is really a handoff toward operationalizing the things that you found as you were addressing the initial data quality problems. An important piece, there's a couple of important pieces there. One of them is how do we even know that it's worthwhile for us to do that? And I have a step around business impact techniques that can help you be able to, in a qualitative and a quantitative way, show the value of looking at your data quality. You don't have to just take my word for it. I have some ways that you can actually look at that and apply that to your company. The other piece is in order to operationalize something, people have to carry it out. People have to be willing to make that change. So within the methodology, there's a step ten that involves visually communicating, managing, and engaging with people throughout. We have to deal with the human element of our data quality work. Everything we do around data quality triggers some kind of a change. Oh, wow, we need to train someone. That's a change. We need to refine a role, we need to refine or create new processes, we find a technology bug. All of those things have to work together for the data to work. What is that triggering change? So it's really important for leadership to understand that human element and to be able to support the things that need to happen in order for people to be willing to work through the change, embrace the change, and be able to see that value. And people will, and then they will get excited, and then that will carry on. But to your point of operationalizing, that is an important piece of making that ultimately work and sustain the things that we're trying to do.

Christopher Hutchins:

Yeah, and I think the importance from a cultural perspective of making it a safe environment for people to identify these things and just own it. My parents always taught me when I was young, if you do something that you shouldn't do, or you've broken something or whatever, don't let us discover it. You tell us.

Danette McGilvray:

Yes. And that is such a good example. You and I had parents who taught us the same type of thing. It's also helpful for people to remember that they may own something whose problems they didn't create. These systems grow up in organic ways, and sometimes not in good ways. So I really try to focus on the fact that processes have changed, the world outside of us has changed, requirements have changed. I'm really not into pointing fingers at the people who were involved with it. We are really just trying to make this better. And that kind of attitude goes back to what you were saying about creating a safe environment where people can talk about the problems and come up with the solutions without feeling like they're taking a huge risk, because it is safe to do that.

Christopher Hutchins:

Yeah, it's always wise to have the people closest to the action be part of the solution. It saves a lot of hassle. And the more people see that it's acceptable to identify a problem and actually work on it collectively to fix it, the better off you are, because people will feel safe and they'll understand this is not lip service, that leadership really does want us to be comfortable and understands we're human beings.

Danette McGilvray:

Absolutely.

Christopher Hutchins:

It occurs to me as we're talking about the need for this kind of training, that there may be some folks out there listening who may need your services at Granite Falls Consulting. So I would love to have you tell us how people can get in touch with you and where can they find your book. And certainly in the show notes later, I'll make sure that all that stuff is available. But if you wouldn't mind telling folks how to reach you.

Danette McGilvray:

Yes. My company is Granite Falls Consulting, gfalls.com. So G for Granite, Falls as in waterfalls, g-f-a-l-l-s.com. So you can find me out there, danette@gfalls.com. Connect with me on LinkedIn, send me an email, and we can have a conversation about what it is you need and how I can help you. I do the consulting, I do the training, and I do one-on-one coaching and mentoring if that is helpful for people also. So those are kind of the three main areas, but all around the data quality and governance piece that will help your organization.

Christopher Hutchins:

Well, thank you so much for that. I don't often run across people who are as enthusiastic about and genuinely enjoy this work the way you do, and I think that's a really valuable asset that organizations should be thinking about taking advantage of. Maybe you can get your own teams energized just by bringing in a good coach and taking some instruction. This is such an important area, particularly as we head into this transformation with AI, with more being built into the foundations of this technology all the time. We can't afford not to deal with data quality. So thank you for that, and thank you so much for taking the time to sit with me. This has been a fun conversation for me, and I hope our listeners will enjoy it as well. So thank you so much.

Danette McGilvray:

Yeah, absolutely. Thanks, Chris. It's just been a pleasure. Love talking with you.

Christopher Hutchins:

Well, folks, that's it for this episode of the Signal Room. If today's conversation sparks something in you, an idea, a challenge, a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit SignalRoomPodcast.com to explore being a guest on an upcoming episode. I'd love to have you. We're here to amplify the signals that matter across leadership, ethics, and innovation. Until next time, stay tuned, stay curious, and stay human.