Leadership Tea
On Leadership Tea, we talk about what it takes to reach the executive level, and how to thrive when you get there. Powerful leaders share their journeys, insights, and triumphs in conversations with hosts Shelby Smith-Wilson and Belinda Jackson Farrier.
Join us every other Wednesday to be inspired by the unvarnished stories of amazing executives who know what it's like to be "the only" at the table and who have succeeded regardless. They have proven leadership experience in their respective fields, from international affairs to the private sector to academia, and want to help others create their own success stories.
AI Can’t Replace Leadership: Wisdom, Strategy, and the Human Edge
AI is moving fast, but leadership still requires judgment.
In this episode, Shelby and Belinda are joined by Nadeen Matthews, Founder and CEO of Crescent Advisory Group, for a thoughtful conversation about leadership in the age of AI.
Rather than focusing on hype or tools alone, Nadeen explains why leaders need to connect AI to business strategy, customer impact, employee readiness, and organizational risk. She also breaks down what executive AI fluency really means, why bias and fairness remain critical concerns, and why women must take their seat at the AI table.
This episode is a practical, human-centered look at how leaders can engage AI with more wisdom, responsibility, and clarity.
NOTE: Since this episode's recording, Claude has surpassed ChatGPT in Apple App Store downloads; some other data mentioned in this episode may have changed.
We publish new episodes every other Wednesday.
Follow us on Instagram @Leadership_Tea for more inspiration and insights.
LT - S5 Ep5 - Nadeen Matthews
[00:00:00] Belinda Jackson: Leaders are spending too much time chasing AI tools and not enough time asking the right questions. One of the biggest mistakes leaders are making with AI is thinking about AI the wrong way. In this episode of the Leadership Tea Podcast, we sit down with Nadeen Matthews. She's an AI consultant, educator, and founder and CEO of Crescent Advisory Group.
[00:00:25] She's gonna help us unpack what executives really need to understand about AI right now. Nadeen challenges the idea that leaders need a separate AI strategy, arguing instead that the real question is: how does AI strengthen your existing business strategy, your people, and your customer experience? She also offers a powerful reminder for leaders.
[00:00:49] You do not need to be an engineer to lead in the AI age. Generative AI runs on language, judgment, and clear thinking, which means that leaders already have more of the skills they need than they realize. This conversation is practical, it's human-centered, and it's honest about both the risks and the opportunities.
[00:01:15] If you've been curious about AI, cautious about AI, or just wondering where to begin, this episode is for you.
[00:01:25] Shelby Smith-Wilson: Thank you so much, Nadeen, for joining us on the Leadership Tea Podcast. We're gonna jump right in and ask you a question about clarity and confidence at the executive level, and we're curious to know:
[00:01:41] where do you most often see leaders overthinking AI? And where are they underthinking it?
[00:01:50] Nadeen Matthews: Thanks for that question, Shelby. I see a lot of leaders very obsessed with the tooling and the technology. They're obsessed with creating an AI strategy and not thinking about what I think is the key question: how does the technology advance our existing business strategy?
[00:02:18] We don't need to create this brand-new strategy. How does it disrupt it? Yes, I believe CEOs are thinking about that. What do we need to be doing with our people? How do we need to be preparing them? And by people, I'm always thinking about team members, but also customers, because at the end of the day, the changes that you make strategically in terms of your products and services impact them.
[00:02:47] And if we're not talking to them and understanding how they're thinking and their willingness to adopt, then we're gonna run into some problems later on. And the last question is: what risks does AI introduce to the business? There is a fair amount of thinking around that. I do think there is a lot of overemphasis on the AI strategy as a standalone thing, and on the tools, and not enough thinking around the human beings, our customers as well as our employees.
[00:03:26] Belinda Jackson: Oh, thank you. I hear you. You know, it's really interesting thinking about how we can be overly focused on strategy, and in that effort forget about the client and the outward-facing side of what we're doing. That's very interesting. I wanna build on that and ask: what does AI fluency mean from a senior leader's perspective, versus the perspective of, say, an engineer who's down in the weeds?
[00:03:57] Nadeen Matthews: AI fluency at the senior leader level means being able to articulate a response to those questions that I mentioned: How is AI changing your strategy? How will it disrupt your strategy? What are the risks that it introduces? How should you be thinking about your people, employees, and customers? In addition to that, you have to know the basic terminology,
[00:04:28] in the same way that in the C-suite, irrespective of your position, you have an understanding of financial statements (balance sheet, income statement, cash flow) whether you're the finance person or not. AI and digital literacy have become one of those things where we do need to invest the time and learn some of the key terminology and what it means.
[00:04:54] But the most important thing is starting with how it relates to my business and the world, and being very clear about that.
[00:05:06] Shelby Smith-Wilson: It's interesting, because Belinda and I usually say that leaders need to be strategic about everything. But what I'm hearing you say is there's a moment where you need to step back
[00:05:17] and really ask these critical questions before you jump into building an AI strategy. It's something that you said about, as a leader, as an executive, as a CEO, making sure that you know the basic terminology when it comes to AI. There are so many tools out there, and we're wondering, where do you even begin, at least at the executive level?
[00:05:43] You know, to piggyback on what Belinda said, when you're distinguishing where you are versus your engineers and the folks who are more hands-on: what are the top two or three AI tools that you think executives need to adopt? I'm sure you get this question all the time, but
[00:06:02] Nadeen Matthews: I get this question a lot.
[00:06:04] So I'm gonna go two steps back. I totally agree with you and Belinda around the importance of strategy. I think the focus has been around creating standalone AI strategies, as opposed to: here is my business strategy, here is what I represent at the end of the day, and how does AI integrate with that?
[00:06:26] When we have two separate strategies, that can become very dangerous and disconnected; it's great if they come back together. The starting point has to be your enterprise or business strategy in terms of determining what it is that you need to do with AI. And I'll give an example about blockchain, because three years ago blockchain and NFTs were the big talk, and people spent a lot of time on them.
[00:06:57] Granted, I do believe that AI is more broadly applicable, but the truth is that 95% of enterprises never got anywhere with blockchain. So we have to be careful that we are not running after the shiny things, because everything about the shiny thing may not be relevant. And this is coming from someone who is, you know, a big proponent of not being afraid of technology and of adopting it, of course. But you still want to lead with strategy.
[00:07:33] In terms of the top tools for CEOs and senior leaders themselves, as persons, I'm gonna separate that from the enterprise stack, because there you want to be using tools that are compliant and put you in a position to protect your customers' privacy and all of that. From the CEO and senior leader standpoint, I think ChatGPT is an important tool to have in the stack, just because it has the most global users, and understanding what it can and can't do.
[00:08:11] It's kind of like not knowing how to use Microsoft Office. A decade ago, all of us had "Microsoft Office proficient" on our resumes. Remember that?
Shelby Smith-Wilson: I think there's...
Nadeen Matthews: Yes, pretty much.
Belinda Jackson: Like "Excel proficient."
Shelby Smith-Wilson: Excel proficient. People think you're a moron. Like, what?
[00:08:41] Nadeen Matthews: Who does that? So AI is that thing, and when people put it on their resume, they're talking about generative AI. They're not talking about traditional AI, which has been around since the fifties. So ChatGPT is a great tool to start with. I believe all senior leaders should be using AI note takers.
[00:09:01] It's a great tool to facilitate attention in meetings and quick distribution of the notes. And now many of the note takers are integrated with project management tools, so you can make sure that the action items are easily fed into your project management tool of choice. And I'll slide a third one in there, which is Gamma, for creating PowerPoints in minutes.
[00:09:29] I don't know why we would spend any more time than that creating the actual PowerPoint. Obviously you're gonna do the work in thinking about the content; that doesn't change. But in terms of putting together the slides, I remember when I used to be sizing the boxes and making them align and all of this stuff, or sending it out to a graphic designer.
[00:09:55] Absolutely not. And on the super personal side, I'm a big fan of NotebookLM, that's in the Google stack. It's an amazing tool for helping your kids to study. You can upload lots of material, all their study notes, textbook photos, YouTube videos or what have you, and it creates study materials: note cards, quizzes,
[00:10:21] podcasts, videos, presentations. And that's amazing, because kids learn very differently, and so for my daughter in particular, I use that so she can have different types of study materials. But for executives who are inundated with emails, as an example, and with these meeting notes coming from note takers, it's also a tool that you can upload those things into
[00:10:50] and listen to as a podcast, or just as a recording on your way to and from work, as opposed to reading through. So those are the tools in terms of personal productivity. I'm gonna get killed because I didn't say Copilot, and most enterprises have adopted Copilot because Microsoft was already there. But in terms of the general intuitiveness and level of adoption,
[00:11:19] we're not seeing that Copilot is there. On the enterprise side, it really depends. I like exposing team members to the generative AI tools because they're using them in secret anyway. The research shows that even the senior leaders are using it but are afraid to admit it, and that introduces a number of risks to the business if persons are using it but haven't been educated to use it.
[00:11:47] So think about your kid taking the keys and driving the car out at night. That you didn't bother to teach 'em to drive doesn't mean that they won't try to drive. So that's the example that I like to use. And I love the use cases around agentic AI, for instance, though I'm not a fan of the core premise of using it to replace employees.
[00:12:12] I think AI is quite powerful in cases such as customer care, as an example, or fraud alerts, or loan approvals, where you'll just never have enough humans to do it. So if you're running a 24/7 business and people are calling all the time, and waiting, and angry, and so forth, that's a fantastic case to leverage agentic AI, because you're augmenting and not replacing. And then you also want to make sure that you still always have humans in the loop.
[00:12:49] So what you deploy from an enterprise standpoint really goes back to what's your strategy. But I would bias towards allowing your team members to use the productivity tools such as ChatGPT, Copilot, Gemini, Claude (I'm in love with Claude right now), training them how to use them responsibly, and then exploring those agentic AI use cases where AI can be used to augment particular processes that you just don't have the human capacity to do well today.
[00:13:26] Belinda Jackson: No, that all sounds great. I feel a lot better, because everything you've mentioned I've at least worked with a little bit. So I feel like I'm
Shelby Smith-Wilson: not lost.
Nadeen Matthews: Right.
Belinda Jackson: No, I'm in the loop. Awesome. I agree with you on NotebookLM; there are just so many use cases there, and I think NotebookLM is a really good example of something I apply to other uses of AI: kind of trash in, trash out. Because NotebookLM is dependent on what you are giving it.
Shelby Smith-Wilson: Yes.
Belinda Jackson: Versus searching the whole web, you really need to be very thoughtful about what you're providing it, the prompts you're giving it, and what tools and elements you're using.
[00:14:09] And I try to apply that to my usage of Claude or ChatGPT or other tools: really thinking, what am I asking in this prompt? Where am I pushing you? How can I push differently? How can I prod? So definitely, we can't be afraid of it. We have to dive in and figure out how to use it safely. You know, I'm big on safety.
[00:14:33] So with that said, though, there's a lot of opportunity there, but there's still a lot of fear. And so when you engage with senior executives, what are some of the misconceptions that they have about using AI tools?
[00:14:49] Nadeen Matthews: Many of the senior leaders that I interact with have a misconception that they need to be technical in order to use the tools or get the tools deployed. In terms of the generative AI tools, there's a big one. I don't know if it's a misconception, but it goes back to what I mentioned about persons hiding it, that it's somehow cheating.
[00:15:17] So I'm just gonna spend a little bit of time on that, because you can absolutely cheat, if you will, using generative AI tools. And my best comparable for using generative AI is one that Andrew Ng uses, where he says, treat it like an intern. So if you had a very smart intern coming from university, they're super smart, but they have no domain expertise.
[00:15:48] And that domain expertise is yours, and you're still using it to guide them and shape them as you assign things and get work back from them. So today, or even before ChatGPT, senior leaders were using ghostwriters. And as someone who has worked in marketing and strategy, I've been one of those persons writing the things that senior leaders are reading, without mentioning me.
[00:16:19] And I'm not saying that they're cheating, because, one, they participated in shaping the content, and at the end of the day, they're accountable for whatever they deliver. They must believe it. And they also would've fine-tuned whatever they received from me or my team.
[00:16:40] And that's kind of the relationship that I expect us to have with the generative AI tools. There's a human always in the loop, but we're leveraging it as a copilot, thought partner, and intern while retaining our role as the domain experts.
[00:17:03] Shelby Smith-Wilson: It's funny, when you were talking about this aspect of cheating, I was immediately reminded of how Belinda and I, as former diplomats, were always talking about briefing memos and writing memos. It was funny just hearing you reflect on that aspect, because it's the same premise.
[00:17:24] You draft a memo, it gets cleared by like 15 people, and then by the time it reaches a senior leader, it's like, okay, who was the true author of this product? You've got lots of people contributing to it, but do you give credit to the 25 people that touched that particular document?
[00:17:43] So the parallels that you're drawing when it comes to generative AI are very powerful. I think it's a real practical way that leaders can think about it.
[00:17:54] Belinda Jackson: That really struck me. The idea of comparing it to speechwriting or talking-points writing, or something like that, I think is a really powerful description.
[00:18:05] Yeah.
[00:18:06] Shelby Smith-Wilson: I do wonder, though, and I wanna shift the conversation a little bit. We're talking a little bit about boundaries and safety, but
[00:18:14] Nadeen Matthews: mm-hmm.
[00:18:14] Shelby Smith-Wilson: You know, here on the Leadership Tea Podcast, we really focus on a human-centered, I would even say empathetic, approach to leadership. And I'm wondering, based on your expertise, what you've seen, and all of the examples that you've shared with us in terms of where AI can really be a tool that informs strategy and makes operations more efficient, obviously.
[00:18:41] But there comes a point where it's like, well, where do you draw the line?
[00:18:45] Nadeen Matthews: Mm-hmm.
[00:18:46] Shelby Smith-Wilson: What stays in the artificial intelligence realm, and what are the things that really are meant to be driven or decided by humans? I just wonder what your thoughts are on that.
[00:18:59] Nadeen Matthews: Yeah, there's this framework that a professor (he was at Harvard, but he's now at NYU) shared at a talk that I went to, and I quite like it. He talks about low versus high cost of errors, and tacit versus explicit knowledge. Low cost of errors: if there is an error and no one is harmed, it's okay. So today, if the agent sends an email with the wrong information, well, the data shows that customer care agents send things with the wrong information lots of times.
[00:19:41] So if we're talking about your Amazon package, that can be corrected. If it's a decision around whether to amputate, there's a high cost of errors. You amputate the wrong leg, we have a problem; you can't undo that. And then there's tacit versus explicit knowledge. Explicit knowledge is there, available to anybody who can read.
[00:20:06] But then tacit knowledge: the nurse who has information about the patient because she can see and she can understand. I mean, when we talk about healthcare and some things such as, you know, Black women and their hair, that's tacit knowledge, not explicit knowledge. It's not in the medical textbooks, unfortunately.
[00:20:29] And so in situations where there's a high cost of errors and there's a lot of tacit knowledge, that's the area where you want to steer very, very clear of having too much AI. Then there are the things where there's a low cost of errors and lots of explicit knowledge, and we're seeing that today with emails.
[00:20:55] I mean, Google flags things as spam sometimes, and puts things there that are not spam, but it's no big deal. The person doesn't get it, they write you back, they're like, "I sent you an email." You're like, "Okay, I didn't get it." You go and look, and it's there. Low cost of errors, relatively speaking.
[00:21:13] So that's the kind of spectrum that I like to look at. But even in cases of low cost of errors and lots of explicit knowledge, my personal perspective is that, where we are today, there must always be a human in the loop. And Dr. Fei-Fei Li put it well when she said that by calling it artificial intelligence, we've sort of disassociated from it in a way.
[00:21:46] However, human intelligence created artificial intelligence and continues to make decisions on how it behaves and how it's utilized. It doesn't operate in a vacuum by itself. And so I draw the line at deployments where we are thinking about not having any human intervention involved at all.
[00:22:13] We're not there in terms of the technology.
[00:22:17] Belinda Jackson: Yeah. So, as people experiment with AI and try to learn about it so that they can be very intentional about how they deploy it in the workplace: let's say someone's kind of using this on their own, casually. What are some of the risks, the reputational risks, the security risks, that they should be aware of as they're learning and experimenting?
[00:22:42] Nadeen Matthews: In the workplace, ideally, persons are not using it casually. And this is why it's important for organizations to train persons, be very clear about their policy, and identify the approved tools that persons can use. Tell them how to use it safely, because even a tool like email can be used in a way that's unsafe.
[00:23:09] You click on a link that is a phishing link, and you can bring down the entire organization just like that. And so it's important for organizations to take that approach. But in terms of our general usage, I recommend a few things. One, be very clear about the security and privacy settings.
[00:23:36] The default on many of these applications (not even opt-in, because it's the default) is some variation of "we use this data to learn and make the experience easy for you," which essentially means that they're using your data for training. Some people are okay with that, some people are not. Be very conscious about it.
[00:24:01] Then there's two-factor authentication. It's important to do that in any digital platform that you are using, whether it's your banking app or your generative AI applications. And although commitments have been made, we still don't know exactly how the data is utilized and all of that.
[00:24:27] So we don't want to put confidential data in your personal ChatGPT, Claude, and other LLMs; or at least be very conscious of what could happen. As an example, ChatGPT: because of litigation that's still happening, OpenAI was supposed to destroy some history, and that didn't get destroyed, because it's held up.
[00:25:01] So things like that, we have to be conscious of. With the enterprise tools, I mean, we've been in the cloud with Microsoft for a while; all your things are in the cloud. So there was already opportunity to be hacked, because there are opportunities for breaches.
[00:25:26] So the threat surface didn't only come into existence with the introduction of generative AI. And I would say the same precautions that those organizations are taking on the enterprise side are the same ones that we see the OpenAIs and the Googles and so forth taking. However, organizations need to be very vigilant. You need to read all the terms and conditions; your attorneys need to be involved,
[00:26:00] and your risk team needs to be involved, not just your IT practitioners, to make sure that you are in alignment with all the various regulations and laws in all the jurisdictions in which you operate. Because if you're multi-jurisdictional, that's something to think about. But beyond that, back to the customer, there's something you also want to be
[00:26:29] very mindful of: just because something is legal doesn't mean it will sit well with your team, or that it will sit well with your customers. And so wherever that conscience sits within your organization (and that's why I was so happy to be on your podcast, you know, share the tea, because of your empathetic approach), wherever that conscience sits, make sure it's
[00:26:55] activated, so those persons are pressure-testing what you're doing with the technology.
[00:27:04] Shelby Smith-Wilson: I do wanna go back to something that you mentioned on organizational training: how important it is for organizations to have actual training programs for employees on AI, and a very systematic approach
[00:27:21] in terms of how an enterprise is going to integrate, utilize, and adopt AI tools. It's very important to train people and not just have a willy-nilly approach where everyone is doing their own thing, and before you know it, like you said, there are other security risks that they're even more vulnerable to, because they haven't taken the time to put the proper processes in place.
[00:27:49] And so I wonder, just from an organizational culture standpoint, what do you think about the debate around bias and fairness? You know, because even when I'm using ChatGPT, I can sense that it doesn't necessarily get everything that I'm trying to convey from a cultural standpoint. You know, my background is different from, say, a white heterosexual male's.
[00:28:20] And so I feel like, because I've been using ChatGPT for a specific purpose for so long, I've kind of trained it, you know, to think like me. But I also know that it's not gonna think a hundred percent like me, and that there are still some inherent things that it can't do, because there's bias in how these systems were created in the first place.
[00:28:43] And so I wonder, from your perspective, how should organizations address bias and fairness when it comes to AI at an enterprise level?
[00:28:55] Nadeen Matthews: Consciousness of it is important, and that really is a cultural thing in organizations, because the reason AI has bias is that it has ingested data from biased humans.
[00:29:12] And so now you have the bias proliferating in the same way. In the HR function, there used to be bias before AI: some resumes come across, and oh, it's not from a particular school; people may look at names, and because of their name, a candidate is taken out of the pool. It's those same biases that now have been ingested into the AI platforms.
[00:29:41] So the organizations that address it are the organizations that were addressing it before it was exacerbated by technology. You made sure that you had a process for looking at: are we being fair? Are there any biases? You have a process for raising it, because it may not be easily detectable.
[00:30:10] Lending models, even before generative AI, would've had bias. In Jamaica, as an example, there is a legacy of farmers not getting loans, and in the US there is a legacy of specific communities not getting loans. How an AI model translates that is: this community doesn't qualify for loans, and so it automatically rejects them.
[00:30:38] You have to be conscious of that going in, so that you create alternative pathways to address the biases that you can think about, because for some of them it's very clear and you can address them. But then (whistleblowers is a strong word) you have mechanisms for persons to say, "Hey,
[00:31:02] I don't really think that's hitting the mark, because of these cultural issues." And the only way that you can do that is if you have a diverse team. And therefore, the companies that are set up to do that well are the companies that were already committed to that pre-generative AI, because it's a cultural thing and not a technology thing.
[00:31:29] Belinda Jackson: That really resonates with me. I think I've worked in spaces where maybe the table didn't have a wide range of voices and experiences, but people felt like they were reflecting that diversity by approaching it almost like engineers: "Well, we've made sure to include these words that data shows are inclusive, so, solved."
[00:31:54] Right. And so those structural things are not always enough.
[00:32:00] Nadeen Matthews: Yeah.
[00:32:00] Belinda Jackson: And the issues that we're discussing are often nuanced. So I think it is really interesting to think about how organizations still need to have tough conversations, even when introducing technology. One question I have for you: let's assume (actually, I've seen this happen) that AI is introduced at the enterprise level, and training even happens on basic usages
[00:32:26] at the working level, right? This is how it can help you organize your email, this is how it can help you organize your drive of saved files, et cetera, this is how it can help you schedule meetings. But say I were leading an organization like that, one that did do a training and has introduced something like a Copilot at the enterprise level.
[00:32:46] What's week one? What should I be doing? What should I be thinking about? My employees have this, but now I don't quite know what to do. How can I help them be more efficient about their work using it?
[00:33:02] Nadeen Matthews: Yeah, and that's happening a lot. And the thinking about that should precede that moment.
[00:33:12] And, you know, that gets back to the whole thing around AI being something that powers your business strategy. So it could be that the use case was: you've done an assessment and you see that your employees are spending way too much time in meetings. And that data exists in many organizations, by the way; we're seeing data showing that people are in two meetings at a time.
[00:33:40] You're like, how does that happen? Corporate gone awry. So you say, okay, our goal is to cut, you know, 25% of admin time around meetings, or something like that. The bigger point is, we should not have these massive technology deployments before answering the questions around what exactly this is going to do for the organization.
[00:34:10] Now, you can do tiny experiments, 'cause maybe you don't know. So if you don't know, you do a tiny experiment with one team or one customer group or something like that, and you get some insights, and you tweak and you refine, and you decide if that's something that you want to scale.
[00:34:35] But I think too often we are seeing these massive investments, and then: oh, well, what do we do now?
[00:34:44] Shelby Smith-Wilson: Piggybacking off of Belinda's comment, I did wanna ask a question, because I think of myself, as an executive woman learning to use AI tools, knowing that the people who report to me are way smarter than I am
[00:35:01] when it comes to really manipulating AI in ways that make our work more productive and efficient. I'm just wondering, for the senior women leaders who are listening to this particular episode, who are both cautious and curious, who know that they need to use AI in this day and age but are still a little hesitant, or not fully assured that they're qualified or fully competent to use AI:
[00:35:36] what would you want her to trust about herself?
[00:35:42] Nadeen Matthews: Trust that you are fully competent to use AI and there's never been a better time to learn AI because when we were students, everything required going back to school and getting some other formal degree. And the, the, the knowledge has been so proliferated now where there are so many low cost options, online options via these platforms.
[00:36:11] You have executive education, you have coaching programs. There are lots of different ways to learn, and then the tools are widely available for you to play with and get used to. Even YouTube has become a master academy. If there's something new that I really want to learn about, I mostly just jump on YouTube or even TikTok

[00:36:37] first, before I dive into any coursework. So that's one thing I would say: you have all the tools to leverage AI. Generative AI doesn't require technical knowledge; it operates on language, and many of us are excellent communicators and excellent at providing guidance, and that's what drives amazing results from LLMs.
[00:37:13] So that's one thing. The second thing is, when we look at the adoption numbers for women, the gap is closing, but there's still a gap. And it's really important that women take their seat at the AI table, because we do have all these ethical issues that we still need to contend with, and it's really important for women to be at the table shaping the policies that are

[00:37:40] still needed, the laws that are still needed, and the changes that are still needed in the data sets to continue to minimize biases. You'll never eliminate all bias, but it's important for us to be there in order for these things to change for the future. And so it's really imperative that we take our seat at the AI table.
[00:38:07] I feel very strongly about that. There are lots of strengths that AI brings to the table, and there are lots of imperfections. It doesn't mean that we abandon it, though. And I like to use analogies, as you ladies may have realized, so I'm going to talk about when cars first became a thing. When cars were first a thing, people were still saying we just need faster horses.
[00:38:30] Right? But cars come in, and people are dying left and right, because we didn't have seat belts, we didn't have stoplights, we didn't have all those sensors that we have now. We're like—
[00:38:49] Shelby Smith-Wilson: The yellow flashing light when a pedestrian crosses the street, and it's like, oh, oh my god. Yes.
[00:38:57] Nadeen Matthews: So over time with all technology, which, by the way, everything can be used for harm, even the most innocuous things in our home, you see two things happening. There's innovation to make the technology better, so all those safety features got added to vehicles over time.
[00:39:16] But then the regulation and the laws also catch up, and that will and must happen with AI. We never eliminated all the harm that can be caused by vehicles, because today people are still impacted by drunk driving, speeding, and all of those things. But compared to where it was initially, we have far fewer incidents than when these machines were introduced. And so I feel very strongly

[00:39:54] that women must take their seat at the AI table. They are abundantly equipped to use the tools and to shape the future of the world with this technology.
[00:40:10] Belinda Jackson: A hundred percent. You know, I feel like we have learned so much during this episode, and I know I'm walking away with a lot, even things to research, and most importantly, knowing that our voices need to be at the table.
[00:40:26] Yes.
[00:40:27] Nadeen Matthews: Yeah.
[00:40:27] Belinda Jackson: In order to mitigate a lot of different things, our voices do need to be at the table. We cannot be afraid of this technology. It's here, it's happening. We have to use it. So with that said, I want to make sure people can stay connected to you, find out more, and reach out to you.
[00:40:45] So what are the best ways that people can find more information about your work and what you're doing?
[00:40:51] Nadeen Matthews: So I love when people reach out to me on LinkedIn: Nadeen Matthews Blair, founder and CEO of Crescent Advisory Group. My website, www.crescentadvisorygroup.com, is also another great way.
[00:41:06] Or you can email me at info@crescentadvisorygroup.com.
[00:41:11] Shelby Smith-Wilson: Nadeen, it has been a true pleasure and honor to talk to you and learn about aspects of artificial intelligence that I hadn't really considered before. I even learned about a new tool. You mentioned Gamma at the beginning, which I've heard of

[00:41:29] and I'm familiar with, but I haven't used before. This conversation has motivated me to do just that. As you said, women need a seat at the table. Mm-hmm. You know, to make the world go round. We do things better anyway, most things, yes, we do.
[00:41:44] Nadeen Matthews: Yes,
[00:41:44] Shelby Smith-Wilson: better.
[00:41:45] Nadeen Matthews: Absolutely.
[00:41:47] Shelby Smith-Wilson: But no, it's really been a pleasure chatting with you.

[00:41:50] I really appreciate the very matter-of-fact way that you have broken down what I think some people feel are intimidating topics. The analogies and the human-centered approach that you bring when discussing these issues have been really helpful, and I'm sure our audience will benefit a great deal from it.
[00:42:14] So thank you so much.
[00:42:16] Nadeen Matthews: Thank you so much. I've enjoyed the conversation, and I really appreciate you having me on the Leadership Tea podcast.
[00:42:29] Shelby Smith-Wilson: Thanks again for tuning into this episode of the Leadership Tea Podcast. We are so grateful for your support and your viewership if you are watching us on YouTube, and we appreciate those of you listening to us on Apple and Spotify. If this content resonates, please do take a moment to leave a review and rate us.
[00:42:52] It's the only way that the algorithms will know to push our content to other potential listeners. Again, we're just grateful for this community and really appreciate your support. Belinda and I are also very excited to announce a new initiative that we are launching. It's called The Centered Leader.
[00:43:11] This is something that we have nurtured and curated with care. Essentially, the Centered Leader is a long-term container, a long-term space for leaders who are committed to practicing alignment, clarity, and sustainable leadership. If this is something that resonates with you, we invite you to go to our website,

[00:43:35] stirringsuccess.com, and look for the Centered Leader tab to learn more. Thank you again for watching, and we look forward to sipping wisdom and stirring success with you again real soon.