Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools

The Board Takes on AI With Gerard McInnis

Jonathan Green: Artificial Intelligence Expert and Author of ChatGPT Profits, Episode 384

Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this enlightening episode, we delve into the strategic role AI plays in board decisions with our distinguished guest, Gerard McInnis. Gerard, a seasoned board member, offers a wealth of experience in navigating AI's impact on corporate governance and risk management.

Gerard opens up about the challenges and responsibilities boards face as they integrate AI into their strategic planning. He highlights the importance of balancing technological adoption with maintaining core business values and emphasizes the need for AI tools to enhance, not replace, human elements in business operations.

Notable Quotes:

  • "AI in the boardroom is not just about technology; it's about redefining our business models while preserving our core values." - [Gerard McInnis]
  • "AI should support your team to do higher-level tasks, not replace them, enhancing customer relationships through efficiency." - [Gerard McInnis]
  • "The board's role is to optimize returns for the shareholder, balancing risk with opportunity." - [Gerard McInnis]
  • "Customers value empathy and sincerity; these can't be compromised by AI-driven automation." - [Jonathan Green]

Gerard provides insights into how boards assess risk, the importance of aligning AI strategies with organizational purpose, and why maintaining a human touch is crucial, even in an increasingly automated world.

Connect with Gerard McInnis:

Gerard shares his perspectives on leveraging AI for strategic advantages, urging businesses to embrace innovation while being mindful of ethical and risk considerations. Whether you're a board member or a business leader, this episode offers valuable takeaways for anyone looking to responsibly integrate AI into their operations.

If you're intrigued by how AI is reshaping corporate governance and want insights from an industry expert, this episode is essential listening! 

Connect with Jonathan Green

The board takes on AI with today's amazing special guest, Gerard McInnis.

Welcome to the Artificial Intelligence Podcast, where we make AI simple, practical, and accessible for small business owners and leaders. Forget the complicated tech talk or expensive consultants. This is where you'll learn how to implement AI strategies that are easy to understand and can make a big impact for your business. The Artificial Intelligence Podcast is brought to you by FractionAIO, the trusted partner for AI digital transformation. At FractionAIO, we help small and medium-sized businesses boost revenue by eliminating time-wasting, non-revenue-generating tasks that frustrate your team. With our custom AI bots, tools, and automations, we make it easy to shift your team's focus to the tasks that matter most, driving growth and results. We guide you through a smooth, seamless transition to AI, ensuring you avoid costly mistakes and invest in the tools that truly deliver value. Don't get left behind. Let FractionAIO help you stay ahead in today's AI-driven world. Learn more and get started at FractionAIO.com.

Now, Gerard, I'm so excited to have you here, because this is a really interesting topic. We've heard so many CEOs complaining, the board of directors told me to do AI, but they don't know what it means. So I figured it's time to finally let the board defend themselves. And I guess that's a great place to start, because we're in this cycle now where everybody wants AI, but they're not exactly sure why and they're not exactly sure what it means. I've seen this happen a couple of times in my lifetime, like when everyone needed a website. I'd ask, why do you need a website? I'm not sure, but I know I need it. And then, I need a Facebook page, but I'm not sure why. Certainly AI can help a lot of companies, but I think the uncertainty is why we're seeing some vagueness in decision making. So I'd love to start with the board's perspective. What is your mindset when there's a hype cycle like this or a new technology? How quickly do you wanna adopt it? Do you wanna be at the front of the line, do you wanna wait till it's a little proven, or do you wanna be late to the party when it's really secure?

Great question, and that would actually be the question the board would ask themselves. I thought it might be helpful to start from a perspective around the differences in what types of boards are out there. Are we talking about a full fiduciary board that has, what I'll say, full governance responsibilities? Or an advisory board, where they're more associated with the management team to guide and direct, particularly in areas where management doesn't have the deep expertise, and they have advisors on a panel or something like that. So I guess I'll start with this: AI is just so prevalent now that I'd say every director would be feeling some sort of responsibility. What does this mean for me? What does this mean for the business that I'm in? And they legitimately might not know the answer, so they might simply be asking of management, hey, what does this all mean for us? You'll have some directors that have actually taken quite a bit of interest, have been taking their own education programs on it, maybe have seen applications in other businesses that they serve on boards for, so they try to port that over too, right? So you've got a real mix.
Board members are people, just as management are people, and they have varying skills and varying levels of education when it comes to AI. So I guess I'm really trying to say that question is broad, and it would purposefully get a different answer depending on who's asking it, in what context, and for what type of role they've taken on.

So some of what I've seen is that there's this desire to implement AI, and it rolls downhill. I've worked on a project where the CEO was told by the board, you have to do some AI. So he found something and bought it, and then he told the CTO, hey, look what I got, figure out how to use it, justify my purchase. It starts to roll downhill. I've been the second step down, where I'm like, what problem does this solve? He goes, I don't know, but it seemed really good. And there's such a wide spectrum of pricing and different elements.

One of the things I wonder about a lot is that we're seeing a lot of security issues or mistakes happen. We just had a lawyer who got into a ton of trouble because his brief cited 24 cases and 21 were fake; only three were real cases. It's the biggest fine so far, and I'm sure they're gonna get bigger and bigger as people get caught making these mistakes. So there's this danger with AI that we just assume it's always correct because it's so confident; it will state a lie and the truth with the same level of confidence. And once you connect an AI to your systems, and I've seen people connect it to everything, all of your data is now transferring around. They'll say, oh, we have this new AI tool, just connect it to our payment processor, just connect it to our bank account, just connect it to our customer data. And I'm like, what's their security policy? What's their approach? Do they have a SOC 2? What's their policy if there's a data breach, will they notify us? All of these things, because it's just a fig leaf if we say, oh, they promised us they're not gonna keep a copy of our data; it can still be a really big problem. So from the board's perspective, do you also look at the security, and maybe participate before they make a buying decision, go, let's talk through this? Because this is such a novel technology, there are probably perspectives you haven't thought about. And the buying cycle seems so much faster. The buying cycle for technology used to be one year, two years, three years, and now it's one week, two weeks. I've worked on a project where they talked about something on Tuesday and then signed a contract on Wednesday, and I was like, wait, we need to audit their technology first. It's because we're so excited that we're bypassing our usual caution.

Yeah, no, that's fair. When it comes to the role of the board, there's this expression of noses in, fingers out, and that traditionally is how a board has functioned. They'd be asking guiding questions and high-level questions of management, and the board has the ultimate oversight for risk in the organization. So everything you just mentioned there, Jonathan, a hundred percent. Those are the questions the board would put to management: how are we protecting our data? How are we making sure that we're not breaching confidentiality, et cetera?
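Jonathan's vendor questions translate naturally into a written gate. As a minimal illustration (nothing discussed on the show; every field name and threshold here is an assumption), a due-diligence check like this could sit in front of any new AI-tool purchase:

```python
from dataclasses import dataclass

@dataclass
class VendorSecurityReview:
    """Answers gathered before an AI tool touches company data.
    Field names and thresholds are illustrative, not a compliance standard."""
    name: str
    has_soc2: bool                  # completed SOC 2 audit?
    breach_notification_hours: int  # contractual time to notify us of a breach
    retains_our_data: bool          # does the vendor keep copies of our data?
    trains_on_our_data: bool        # is our data used to train their models?
    data_residency: str             # country where our data is stored

def blocking_issues(r: VendorSecurityReview, allowed_regions: set[str]) -> list[str]:
    """Return blocking issues; an empty list means the tool can move on to a
    human approval step (this gate never auto-approves anything)."""
    issues = []
    if not r.has_soc2:
        issues.append("no SOC 2 report")
    if r.breach_notification_hours > 72:
        issues.append("breach notification slower than 72 hours")
    if r.retains_our_data:
        issues.append("vendor retains copies of our data")
    if r.trains_on_our_data:
        issues.append("vendor trains models on our data")
    if r.data_residency not in allowed_regions:
        issues.append(f"data stored in {r.data_residency}, outside approved regions")
    return issues

tool = VendorSecurityReview("ExampleAI", has_soc2=False, breach_notification_hours=24,
                            retains_our_data=True, trains_on_our_data=False,
                            data_residency="Canada")
print(blocking_issues(tool, allowed_regions={"USA"}))
```

The point is only that the same questions get asked the same way every time, before the contract is signed on Wednesday.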
The identification of specific use cases probably doesn't need to go to the board; that might be getting too close to fingers in. But you want to know, or have confidence, that there are policies in place and that the procedures management has taken to protect data are in place. And that might mean slow down in order to speed up, right? That would be guidance from the board. Now, AI can be used for internal purposes, maybe just operating efficiencies, or it can be more of an external application that's used more directly in the delivery of the product or service. Those have different implications in terms of the exposure you're gonna get by going outside of your own enterprise. My understanding, and I wanna be clear that I'm not a techie, although I play with this as much as I can myself, is that there are the equivalent of intranets within an AI environment nowadays, right? That means you can have sandboxes inside of your own organization where you can start to experiment with different AI tools and applications without exposing your data externally through your firewalls.

So you talked a little bit about risk management, which is something that, as someone who came from the world of entrepreneurship and bootstrapping, I always try to manage with the type of clients I work with. There are projects where you get paid up front and projects where you get paid a percentage on the back end, and I try not to have all of one or all of the other; one has no risk and the other has too much risk. So I'm very interested: how does the board assess risk? Is there an algorithm or a system for how you measure how much risk you're willing to take? I think this is especially important because we have a lot of startups who listen to the show, and when it's your first business, I certainly see this, it's all growth, and, oh, we'll deal with cybersecurity and all the other risk things later. So what is your framework for assessing risk?

Yeah, great question. Maybe I'll start by going back to what the role of the board is. The board is an intermediary between the capital provider, let's call it the shareholder, and management, right? That's the board's role. They sit in the middle, but they are a representative of the capital provider. So when it comes to decisions around risk, the board needs to be in sync with the risk tolerance of the capital provider, if that makes sense, because again, they're the agent of the capital provider. I use the word risk tolerance, but you actually start broader: there are objective ways of defining your overall risk appetite, and that might be measured in dollars. Then, taking that down a level, you have individual tolerances. Something like a data breach, the tolerance might have some collars on it: yeah, we're willing to take a bit of risk, and we might get our wrist slapped and there might be a penalty to pay, but don't let that slow us down. Something like harassment in the workforce, or a safety issue where someone might be seriously injured or killed on site: zero tolerance, right? So for each aspect of a business, you can really calibrate the risks based on these tolerances.
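Gerard's framing, an overall appetite measured in dollars with per-category tolerances and collars, can be sketched as a simple risk register. The categories, amounts, and zero-tolerance flags below are invented for illustration, not anyone's actual policy:

```python
# Categories, dollar collars, and the zero-tolerance flags are invented for
# this sketch; a real board would set its own, in sync with the capital provider.
RISK_TOLERANCES = {
    # category: (collar = max acceptable expected loss in dollars, zero_tolerance)
    "data_breach":        (250_000, False),  # some collar: a penalty hurts, a shutdown kills
    "regulatory_penalty": (100_000, False),
    "workplace_safety":   (0, True),         # zero tolerance: no dollar amount is acceptable
    "harassment":         (0, True),
}

def within_appetite(category: str, probability: float, impact_dollars: float) -> bool:
    """Compare an initiative's expected loss against the board-set collar."""
    collar, zero_tolerance = RISK_TOLERANCES[category]
    if zero_tolerance:
        return probability == 0.0
    return probability * impact_dollars <= collar

# A fast AI rollout with a 10% chance of a $5M privacy incident:
print(within_appetite("data_breach", 0.10, 5_000_000))
# False: $500k expected loss exceeds the $250k collar
```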
I chair the board of a private company where the capital provider has quite a high appetite for risk, so the decisions that the board makes and the guidance that the board gives to management respect that. You might have other capital providers, maybe pension funds, that have a lot more regulation or stricter expectations on them from their shareholders and what have you, and then their risk tolerance is gonna be different.

So if you're on the board of two different companies, and the capital provider of one is very risk tolerant and the other one's very risk averse, you would make different types of recommendations at each of those companies?

A hundred percent, yeah. Because again, the board, as the agent for the capital provider, is really making sure that it's giving management the appropriate direction. They might be saying, hey guys, we don't have time to wait; our business is gonna be disrupted if we don't change, and change quickly. One of the businesses I'm involved with is in financial services, with what's happening with fintech and the move to open banking, et cetera. The automation of tasks has been with us for a while, but putting that on steroids with AI and agents and stacking agents, the pace of change in terms of executing on the business model is so fast that management needs direction. This isn't the time to sit around and pontificate or take our time with it; we gotta get moving. So in a situation like that, the decision is to accept a higher level of risk so that you can have a faster-moving operation.

So you're trading risk for speed.

Correct. But the board still says, we gotta put some collars on that. We can't afford a privacy breach, because that would kill us in financial services; we'd have to shut down. We can't afford reputation risk, because we're brand heavy and our brand and reputation mean so much. So you say, it's one of these go-fast situations, but make sure you don't do this, this, and this. Don't put us in the ditch.

I see, so there are different categories for the types of risk. One of the things that I see a lot, and the language is coming from my own industry, is, oh, you could replace all of our employees with AI. And then they come to me and say, we wanna do a pilot project with our employees. I'm like, if you tell the employees, we're gonna launch an AI program and once it works, we're gonna fire you, they're gonna sabotage it, right? You've created a negative incentive to participate. Especially with how many people don't wanna go back to the office: if they don't wanna work at the Amazon offices, where there are five-star chefs and ping-pong tables, and I've never worked in an office like that, I can't imagine how nice that is, then if you don't wanna work there, you don't wanna work anywhere. So I always say, if you have people that come into the office and they're loyal, it's better to upskill them: an employee plus AI. And I see these dangers with morale, because sometimes when I'm talking to the employees, the first thing they'll say is, don't optimize me so much that I become redundant. I've had multiple people I work with say that to me specifically. I'm like, that's not what I do; I'm not that type of consultant. I understand that's more of an M&A thing, but I am aware of it. So with the mindset around AI, sometimes the hype is so big, but AI can't replace employees now.
We constantly see people go too risky. There was a lawsuit in Canada recently where a support bot said something, and the airline goes, that's not our policy, and the court said, it said it, so you have to honor it. So it doesn't matter that it's an AI; if it's your representative, whatever it says stands. And people aren't paying attention to that little risk. The beauty of AI is that it's creative, and that's also the danger, because it's gonna say something different to the same question every time, and every once in a while it's gonna say something you wish it didn't say.

Yeah, if your guidelines aren't perfect.

So when it comes to the morale perspective, bringing in AI while looking at the holistic value of the team, I've seen two approaches. One is a small pilot program: let's start with a couple of people, let's solve one problem and then another. And I've seen other ones where they say things like, you have to be 70% AI by the end of the month or you're fired. I've seen the two ends of the spectrum. Is the more measured approach better, or is the accelerated approach better, or does it depend upon the risk tolerance of the business?

I think it does depend on the risk tolerance of the business, but it also depends on where you see the biggest disruption from AI. If it's in the direct delivery of product and service to your customer, and there are other players that you've seen, or that you fear, are gonna react quickly or quicker, then you have to keep step, right? If it's a process-improvement, cost-efficiency type of internal initiative, you can probably go a little slower. At the end of the day it's gonna catch up to you, because everyone's demanding shareholder returns and everyone expects profitability to be increased with productivity, with the use and adoption of technology, et cetera. But if it's an internal use, you've got a little bit more time. If your business model is at risk because of the adoption of AI, then you need to move more quickly.

I love your example, though. I agree. I think in the early days, and we're talking six months ago, if we wanna call that early, it was: get buy-in, let the use cases come from the workforce, prioritize use cases with input from your employees, get some quick wins, get their trust and confidence that you're not there to replace them, et cetera. That approach is fine; I just feel there's a greater sense of urgency to move it along quicker. And I believe the sophistication of the tools has already leapfrogged where they were six months ago. Like you were saying about the adoption of the internet initially, I think people are just more comfortable, so the fear factor is subsiding a bit, and people are starting to say, hey, I can actually use this to help me in my job. I can do higher-level tasks, because for the lower-level tasks, I can get assistance from my AI agent.

There's a recent example that I became aware of, and I think it's a great one, where the exposure of the AI agent was only internal, direct interaction with the employee and management. It has to do with field staff. When field staff are on the clock and they have a ticket and they're being tracked by GPS, you can set parameters for when certain tasks are supposed to be completed. And if the task isn't completed by that time, an agent can reach out and say, hey, do you need some help? Do we need to send someone else? And in the stacking of agents, which is something I've just become aware of recently as well, an agent will actually connect with another agent if something needs to be escalated. But this is all within the walls of the organization; it doesn't have any risk or visibility to the customer. That's a great example of just starting to use it internally for your own operating efficiency.
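The field-staff example is, at heart, a small control loop: compare each ticket's deadline to the clock, nudge the employee, and hand off to a second agent if the ticket stays open. A minimal sketch of that loop, with all names and thresholds invented; a real deployment would sit on a ticketing system's API and the GPS feed rather than an in-memory list:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Ticket:
    ticket_id: str
    assignee: str
    due: datetime
    completed: bool = False

def check_in(ticket: Ticket) -> None:
    # Placeholder for the first agent's outreach ("Hey, do you need some help?")
    print(f"[agent-1] {ticket.assignee}: ticket {ticket.ticket_id} is past due. Need help?")

def escalate(ticket: Ticket) -> None:
    # Placeholder for agent-to-agent escalation (the "stacking agents" idea):
    # a second agent might dispatch another crew or alert a supervisor.
    print(f"[agent-2] Escalating {ticket.ticket_id}: consider sending someone else.")

def monitor(tickets: list[Ticket], now: datetime, escalate_after: timedelta) -> None:
    for t in tickets:
        if t.completed or now < t.due:
            continue                      # on track, nothing to do
        check_in(t)                       # first nudge goes to the field worker
        if now - t.due > escalate_after:  # still open well past due: escalate
            escalate(t)

now = datetime(2025, 1, 6, 14, 0)
monitor([Ticket("T-101", "crew-7", due=datetime(2025, 1, 6, 13, 0))],
        now=now, escalate_after=timedelta(minutes=30))
```

Note that everything here stays inside the organization's walls, which is exactly why Gerard calls it a low-risk starting point.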
One of the things that is interesting is that as soon as we get more time to work on a task, we just get assigned more tasks. So what we've seen is, oh, with AI you can do five days of work in four, so we're just gonna give you more assignments. I've seen this at some big Fortune 500 companies that I've talked to that do a lot of consulting: now they expect their people to deliver fully ready presentations instead of rough drafts, and they have to do three presentations a day instead of two. It's this idea of constantly maximizing for speed, which risks quality. The biggest danger with AI is that it's so easy to push the button and then not double-check its work. That's the danger of it, the easiness. We even had a judge who had to withdraw a decision recently because someone said, you wrote your decision with AI, and he goes, let me double-check that. So it's not just the lawyers doing it; it's the judges doing it. We've seen all these little cases, and probably for every case I hear about, there's a hundred I don't. The danger is that it's easy to cheat. When I was in high school, cheating was hard because there was someone walking around the room, but if there was no one watching you take the test, I'm sure cheating would be way higher. Because it's so easy to just go, you know what, I'll just copy and paste it; the last nine emails the AI wrote were fine, this one's probably fine. And that's when the holes in the Swiss cheese line up, and that's when something bad happens.

So I've also heard about the top-down AI approach, which is, hey, this is the system we're gonna use, we're gonna train you on it. And I've seen the bottom-up approach, where they go, see what tools you like, and then we'll approve it or not approve it. I've even heard someone say to their team, everyone needs to learn AI. And they go, which tools? He goes, just use tribal learning, figure it out with each other. And I was like, what? That's too broad. Give a little bit of direction, because there are fifty or a hundred thousand AI tools; how could you possibly? You need a little bit of guidance, in my opinion, on what problem you're gonna solve, at least as a starting point. What's your perspective on how to bring it in, and the type of strategy that you see as most effective?

Again, you touched on a great point. You have to legitimize the use, because the use is gonna take place anyway, right? We know of examples where employees have just taken work home, spun it through an application that they like at home, and brought it back. So I think you're better off. It's like a bring-your-own-computer policy, right? At the end of the day, you just throw your hands up and say, okay, we're gonna legitimize the use of AI, and agentic tools, and generative search tools.
But we need to trust each other, and we need to be transparent. Again, it comes back to policy: you make it such that you're not trying to catch someone, like, aha, you used the tool to generate that, that doesn't sound like your language. You legitimize it, right? And that way you can put the parameters around it in terms of how it's being used. You make sure that things like concerns around plagiarism or copyright violations are being respected. It's like trying to keep the calculator out of the finance office: eventually you gave up on that, right? These tools are here, they're not going away, and they're only getting more and more powerful by the minute, literally. Transparency, communication, policy, it all has to be commingled, I think. And let them have fun with it. I don't believe in being a police agent. You gotta trust that people understand why you need to put the collars, I'll call it, or the policies in place, but make them as liberal as possible so that you encourage innovation. Most of that innovation is coming from the younger generation as well, right? You want that to percolate up through the organization.

I think you just have to create guidelines for how you get a tool approved. That's important, so that people can say, oh, I need this tool. And I always ask this question: what problem is it gonna solve? That's my core question. Even when I'm working with the C-suite and they've bought a tool, I go, what problem does this solve? And if they don't have an answer, that's when I get worried. What happens is the same thing as when I go to the hardware store with my wife: if I buy a measuring tape, when I come home, I'm measuring stuff all over the house to justify my purchase. I'm measuring the kids; I'm measuring stuff you don't need to measure, because you wanna make it look like you didn't waste money. I do think there is room for innovation, because sometimes the employees go, we really want this tool, and I go, what problem does it solve?

Exactly.

Because if you don't have any system where they're allowed to ask or tell you, they'll just do it in secret.

Yeah.

Which creates that security risk. And I think that, especially now, we have to have more of a security policy, because if you accidentally give something to an AI, there's no undo button. If you accidentally give it credit card numbers or home addresses or Social Security numbers, there's no reverse. When I was working on a larger project, we had a license with Google Business, so we have the BAA with them, we have the HIPAA compliance, we have the more accelerated contracts and the security stuff, which is, we're not gonna copy your data. And someone was like, I wanna use this other tool. I'm like, the reason we use this tool is because we have security and compliance requirements. If you want us to add on another suite of tools, and some people really need ChatGPT, I said, okay, we have to get a BAA with them first, so make sure you're using the corporate account. Sometimes decisions are made at the top and the team doesn't understand: well, what's a BAA, and why does that matter?
And some are just like, oh, compliance, that's just the IT department, the tech department. It's no, everyone. Once they get a little bit of the decision-making process, people come more on board and go, oh, the reason I shouldn't just use ChatGPT on my phone is because it's a security risk, and now I understand that, and if I need a tool, we'll find a way to bring it into the ecosystem. That's kind of been my experience: you at least need to create a framework for the decision making of whether or not you'll accept a tool. I've seen where someone created a really long proposal for a tool that's $20 a month, and I was like, this is too long; this doesn't have to go to the board of directors for a $20-a-month decision. And then we bought something else that was like $20,000 a month with no decision making at all, and I was like, what's happening here? So I've seen both ends of it at the same company, and for a small startup, that's a big decision. It's understanding, at this price point, here's the process; at that price point, here's the process; which parts of the decision I get to make. And then what do we have to look at? Especially if you're taking a tool from a startup: if they've only been in business for six weeks, they don't have a security policy. They can't possibly have done SOC 2, because that's a six-to-twelve-month process. They don't have a CIO, they don't have a CTO, they don't have any security policies. And one thing people often forget about is when you're moving data between different countries. If their servers are in Canada and you're in America and data is going back and forth, there are a bunch of compliance rules about that which wouldn't even cross a regular employee's mind, because it's something they've never dealt with before.
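One way to encode the "at this price point, here's the process" idea Jonathan describes is an approval ladder where the checks scale with the monthly spend. The tiers, approvers, and required checks here are assumptions for illustration, not anyone's actual policy:

```python
# Thresholds and required checks are illustrative assumptions, not a standard.
APPROVAL_TIERS = [
    # (max monthly spend, approver, required checks)
    (100, "team lead", ["states the problem it solves"]),
    (2_000, "department head", ["states the problem it solves",
                                "vendor security questionnaire"]),
    (50_000, "executive, with board visibility", ["states the problem it solves",
                                                  "vendor security questionnaire",
                                                  "SOC 2 report",
                                                  "data residency review"]),
]

def approval_process(monthly_cost: float) -> tuple[str, list[str]]:
    """Return (approver, required checks) for a given monthly spend."""
    for ceiling, approver, checks in APPROVAL_TIERS:
        if monthly_cost <= ceiling:
            return approver, checks
    return "board decision", ["full procurement and security review"]

print(approval_process(20))      # the $20-a-month tool: a team lead can say yes
print(approval_process(20_000))  # the $20,000-a-month tool: can't skip review
```

The shape matters more than the numbers: a cheap tool shouldn't need a board paper, and an expensive one shouldn't be signed the day after it comes up.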
The reason I use the bring-your-own-device-to-work example is that for years organizations tried to fight that: no, you gotta use our computer, because we know how it's configured. And eventually it's, okay, we'll let you bring your own device, but you need multi-factor authentication, or you need this or that. So you put some parameters on it. The use of AI tools is, I think, the most democratized technology in recent times, right? So that horse is out of the barn. Creating that awareness is all you can do, and it comes back to, like I say, a general sense of risk. But do it, I believe, in a way that encourages the use of the tools and creates an exciting culture, to really, truly take advantage of these technologies as they emerge. Not a compliance fear culture, because I really think that'll go badly.

I think so, yeah. With how easy it is to get an AI tool, and free versions of everything, it's impossible to lock down unless you take everyone's phone at the front door. It's certainly one of the things that I find interesting, because I started off in IT way back in the 1990s, and back then you had an intranet. No one had access to the internet at work. You had only the internal documents, and if you wanted something external, you had to justify it. Now it's the opposite: you have access to everything, and then we block ChatGPT if we're a Gemini company, or we block Gemini if we're a ChatGPT company. It's the opposite perspective: everything's open, but we block the bad things. And I've really seen a shift in mindset. Everyone used to have two phones, your work phone and your personal phone, and you had your work computer. I remember when my dad wanted to work from home sometimes, the IT department came over and set up the computer with a special internet wire, and he had a little token that generated a different code for him to log in. It was like four passwords.

Oh yeah, it was a little SecurID token, and you have to wait for the authentication number, right? But to bring this back to the board: a lot of this I take a personal interest in, and it's probably at a level more detailed than a board discussion normally would go, or should go. In fact, I've done this: I've used generative AI to help develop my questions for the board. What are the ten things I should ask of management when it comes to security? There are the ten questions. It just creates a checklist for management: have we communicated a policy clearly? Are there any tools that we've authorized, or any that we've clearly said we don't want our staff to use? Are we only using these tools within our, I'll call it our intranet, within a safe internal environment, or have we authorized tools for external use? Those are the types of questions that a board would put to management, and from that you get an assessment of management's currency with these issues. You really wanna get confidence. You don't wanna be hanging over the shoulders of management telling them how to do things; that's not the role of the board. But you do wanna have confidence that they are in tune with these types of issues and concerns. Again, I look at risk management as underpinning the liberation of using the tools appropriately to drive value in the business. I wouldn't want it to be viewed that the board's role is only to manage risk. The board's role is really, and it sounds crass, to optimize returns for the shareholder, the capital provider. Value is a trade-off between risk and opportunity. You wanna understand those risks and manage them appropriately, but you do not wanna do it at the expense of capitalizing on the opportunity, if that makes sense.
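For what it's worth, the board-questions exercise Gerard describes a few lines up is easy to reproduce. A minimal sketch using the OpenAI Python client as one example; the model name is illustrative, and any chat-style model would do:

```python
# Sketch of the prompt Gerard describes; requires the `openai` package and an
# OPENAI_API_KEY in the environment. Model name is illustrative only.
from openai import OpenAI

client = OpenAI()

prompt = (
    "I chair the board of a mid-sized company. "
    "List the 10 questions I should ask of management about our use of AI, "
    "covering data security, policy, authorized tools, and internal versus "
    "external (customer-facing) use."
)

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whatever model your organization has approved
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The output is a starting checklist, not governance itself; the board still has to judge the answers management gives.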
So when boards are making decisions right now, how does AI rank in terms of importance? Are you talking about it all the time, or is it like the seventh thing on the agenda? Where does it rank in importance and top of mind for people right now?

Definitely top three. The way I like to structure my board agendas is working in the business and working on the business. Working in the business historically has been the compliance aspects of the board: governance, historical financial performance, safety records, employee turnover, whatever those compliance issues are. And working on the business is your strategic discussions. AI appropriately fits in both of those buckets. Working in the business: how are we employing AI in our business? How is it helping to drive efficiencies in costs or opportunities in revenue? How is it effectively impacting and contributing to our bottom-line results? And then AI working on the business: how is this shaping the business models of our clients? What's the impact on our clients? What are the impacts within our supply chain? How does it impact our own products and services? What are our competitors doing? Those are questions that you would ask when you have the working-on-the-business type of discussion. So AI is, I'd say, top of the agenda in both of those. I put it third because you still need to look at current-day issues, financial position, et cetera; there are other things that have to be dealt with in your board agenda. But AI is absolutely on the agenda for every board discussion these days.

One thing that I see a lot of AI startups and AI consultants talking about is increases in efficiency. They say, we'll make your staff 20% more efficient. I saw someone post this on LinkedIn and I couldn't believe it. He said, if you have 40 employees in sales and we make them this much faster, you save this much money. And I was like, only if you fire two of them; otherwise you still have their salary. Even if they're getting the work done in seven hours instead of eight, it's, in my opinion, the wrong calculation. What you really care about, from the board's or the business's perspective, is the increase in revenue. Because efficiency, I couldn't tell you if I'm more efficient today than yesterday. It's so hard to measure efficiency unless it's a factory and you're measuring output; you have to have really specific metrics. Efficiency is so vague and hard to grasp, but I see so much of this language: we're 7% faster, we're 17% faster, and it's undetectable. Sometimes I'll see two AIs and they'll go, we're 1.1% better. I'm like, that's not detectable. I can't tell if something weighs one gram more; if a laptop is 1.3 pounds or 1.31 pounds, I won't be able to tell the difference. So how do you feel about this focus on efficiency? Or is it more important to focus on increasing top-line revenue?

I'm more a fan of the strategic applications of AI for revenue and product and service development. I do think, as I mentioned earlier in our conversation, that there are opportunities for efficiency, and there is an expectation that productivity can be improved with the use of these tools. So I think it's both, but I get way more excited about the product and service opportunities, to increase revenue. And to be honest, we've had process automation for a long time now, right? So there's not really new thinking around a robotic process or a chatbot or what have you. I think what's different, again, is this idea of stacking agents; the sophistication of the tools has changed. And I don't mean this in a derogatory way, but low-skill or low-level tasks, we've already been identifying those as capable of being automated. And where did those employees go? Hopefully upskilled, hopefully doing higher-value tasks within the organization. But I think we'll continue to see these tools being used for what I call those low-level applications.
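A back-of-the-envelope version of Jonathan's objection, with invented numbers: a claimed efficiency gain only shows up in the financials if headcount, hours, or revenue actually move:

```python
# Invented numbers, purely to illustrate the fallacy Jonathan describes.
employees = 40
salary = 80_000            # annual, per employee
efficiency_gain = 0.20     # "your staff is 20% more efficient"

claimed_savings = employees * salary * efficiency_gain
print(f"Vendor slide: ${claimed_savings:,.0f} saved")   # $640,000 'saved'

# But payroll is unchanged unless you actually cut roles or hours:
actual_payroll_change = 0
print(f"Actual payroll change: ${actual_payroll_change:,}")

# The measurable version: did freed-up time convert into more sold work?
extra_deals_per_rep = 2          # hypothetical
revenue_per_deal = 15_000        # hypothetical
revenue_lift = employees * extra_deals_per_rep * revenue_per_deal
print(f"Revenue lift, if time converts to selling: ${revenue_lift:,}")  # $1,200,000
```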
I had a recent example where I was trying to get in touch with an airline because I had to make a change to my flight. Obviously I was dealing with a bot; it tried to sound like a person, but it was a bot. And that bot wasn't able to solve my problem, so they said, I'll transfer you to another agent, and they transferred me to another bot. So now I have another agent. What's interesting is they're basically triaging a customer issue with a low-level bot and then triaging it up to a bot that only took exception cases, but it was still a bot. So I guess I'm just saying, that did displace a human; there would've been a human taking that call previously.

Now you have to go through three sets of bots before you finally press zero and get to a human. I don't know if that's more efficient or not. I always ask people who wanna implement that: are you gonna give the bot the ability to give someone a refund? And they're like, no, what if it makes a mistake? I'm like, then the customer goes in knowing that the bot can never solve their problem; it hits a wall. You go in knowing right away, this thing doesn't have the power to solve my problem. I've worked at a lot of different companies, and I remember my friend was head of all support for, like, the top computer company in the world about 20 years ago, and he was like, I have to fill out all this paperwork just to send out a replacement mouse, and he was in charge of thousands of tech support people. There are always these kinds of inefficiencies, and having all the bots in place doesn't provide a better experience. It never provides a better experience. I always ask everyone, have you ever had a bot solve your problem? I've never had that. It doesn't create a positive experience. And I'm okay with putting bots at other places in the sales cycle, but when someone calls customer support, they're already not happy. No one ever calls customer support to say, I just wanna tell you guys I had an amazing flight. That doesn't happen. So it's already someone who's a little bit upset, and that's the worst place to test it in the process. Whereas if someone's making an inquiry, those are simpler questions a bot can handle. So I always say, let's start with something a little less risky, where someone's not already mad.

Sorry, I'll just give you an example. We're using it for loan adjudication in a business that I'm associated with. The task of the person that receives the inquiry from the customer is simply to capture information and put it into the system, and it does a background check, or a bank statement check, or whatever; it's automated. So that's an example where the task can be automated. The customer phones in, they provide the five pieces of information that are required, and then the system does its thing. You still want a human there to manage the customer relationship. But now the human's time hasn't been used to input data and do all that; they can be there right away to create a relationship with the customer and say, great news, you got approved. Now you're building a relationship, and you're not using your time just to get the data captured and be measured on why it's taking you so long to input data, why you take two minutes longer than the other person, or whatever, right?
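Gerard's loan-adjudication example splits cleanly into machine work and human work. A minimal sketch of that split; the five fields and the affordability rule are invented stand-ins for whatever checks the real system runs:

```python
# Field names and checks are invented; the shape is what matters: software
# captures and verifies, a human delivers the decision and builds the relationship.
REQUIRED_FIELDS = ["name", "income", "loan_amount", "employer", "bank_account"]

def capture(application: dict) -> dict:
    """Intake step: refuse to proceed until all five pieces are provided."""
    missing = [f for f in REQUIRED_FIELDS if f not in application]
    if missing:
        raise ValueError(f"Still need: {missing}")
    return application

def automated_checks(app: dict) -> bool:
    """Stand-ins for the background / bank-statement checks the system runs."""
    affordable = app["loan_amount"] <= app["income"] * 0.4
    has_banking = bool(app["bank_account"])
    return affordable and has_banking

app = capture({"name": "J. Doe", "income": 90_000, "loan_amount": 25_000,
               "employer": "Acme", "bank_account": "on file"})
approved = automated_checks(app)
# The human's job starts here: deliver the news and manage the relationship.
print("Great news, you're approved!" if approved else "Route to a loan officer.")
```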
Yeah, I think we're gonna see a shift toward an increasing value of human-to-human connection, because we want to know we're not talking to a bot. I don't see the appeal in talking to a bot on the phone, because there's no chance the bot is gonna feel bad for you. When you call customer support, you're hoping you can make a case: please have mercy on me. You're trying to create rapport with the person. With a bot, there's no chance of that, and there's no human connection; they don't know how you feel. And I always hear about these tools, like, oh, you can create an AI-generated podcast. Who would listen to that? Who wants to listen to two AIs talking to each other? I can't think of anything more boring. We sometimes think of these use cases that nobody wants. Just because you can do it doesn't mean you should do it.

There's another example, though, if you go back to the role of the board, which is what I alluded to earlier. If your secret sauce is the customer relationship, and your brand is associated with sincerity and empathy and caring, and the board doesn't oversee management making changes that compromise those values, then the board has failed. I'm trying to use that as an example to say the board puts these expectations on the management team: sure, we need to empower you to take advantage of these emerging technologies, but remember, our secret sauce is empathy, caring, and sincere customer relationships, and we cannot compromise that.

Yeah, I think that's such a strong thing to remember: just because it can do something doesn't mean it's a good idea. Because I have seen where you start using AI to put out all your social media content, and people go, this is AI generated, and you decrease your brand value. There's something about knowing that the company, or the people who work at the company, are paying attention to you or care about how you feel. If you didn't take the time to write it, why would I take the time to read it? That's my mindset on stuff like that.

It's a sense of culture as well, because it is a dichotomy. Right now we have all this emerging technology, but to your point about human connection, everyone, particularly in employee relations, expects bespoke management. They expect you to care about them. So you're right: I think the more that we become autonomous, automatic, robotic, the more we're gonna fail. We'll fail our employees and we'll fail our customers.

How can we balance efficiency without losing our central thesis, our humanity, the thing that makes a business special?

Yeah, that's a loaded question. I'm a big believer in organizational purpose and in having employees be truly aligned to purpose; every corporate strategy should be anchored in purpose. So I think it goes back to what I was just saying: sincerity. And that has to permeate down through the values of the organization, how we treat each other, and what the expectation is relative to how we treat our customers.
And that's tone at the top. The ultimate guide for culture is the board, again, as a representative for the shareholder, but that tone at the top has to come from the board.

I think that's really powerful, because I always wanna remind people that AI is just another tool. It's just a technology, like the calculator, the car, the phone, the pager, the cell phone, the smartphone. We want to buy from people; we wanna do business with people. And when I see a company saying things like, oh, we're replacing all of our employees with AI, I'm like, I don't wanna support that. We've seen companies get backlash for saying that. It's appealing to the shareholders financially, but it can really kill your relationship with the customers, because it's, what do you mean you're firing everyone? People don't wanna support that. People are paying more attention to the companies they do business with, and I think that's an important thing to end on. This has been a really powerful discussion, certainly very educational for me, so I appreciate your time, Gerard. For people who are interested and wanna see more of what you do, is LinkedIn the best place to see what you're writing about and the amazing things you're doing from the board's perspective? Which I think is really nice to see.

LinkedIn would be the best. I call it my portfolio life: I do board work, I do corporate advisory work, I do some peer counseling. So my LinkedIn kind of represents all the different things I do in the community.

Thank you so much for your time. This has been an amazing episode of the Artificial Intelligence Podcast.

Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and strategies on how to leverage AI to grow your business and achieve better results. In the meantime, if you're curious about how AI can boost your business's revenue, head over to artificialintelligencepod.com/calculator. Use our AI revenue calculator to discover the potential impact AI can have on your bottom line. It's quick, easy, and might just change the way you think about your business. While you're there, catch up on past episodes, leave a review, and check out our socials.