Tech in Translation
Welcome to Tech in Translation, the podcast where we break down the complexities of today's most innovative technologies into something we can all understand. Brought to you by Iron Bow Technologies, a next-generation solutions provider, we're committed to helping our customers across the government, education, commercial, and healthcare sectors turn their tech challenges into real, actionable outcomes.
At Iron Bow, we believe that technology should be a tool for empowerment, not a source of confusion. That's why each episode of Tech in Translation is designed to demystify the latest trends, tools, and technologies, making them accessible for everyone—whether you're a seasoned professional or just tech-curious. Join us as we translate complex tech into everyday language, one episode at a time.
Demystifying AI: Why It Matters for Today’s IT Leaders
Artificial Intelligence (AI) is more than just a buzzword—it’s a transformative force shaping industries worldwide. From automating tasks to driving innovation, AI is fundamentally changing how organizations operate. But despite its growing presence, many IT leaders remain uncertain about how to approach AI adoption. In our inaugural episode of Tech in Translation, we sat down with Nick Serapiglia, Managing Director of Digital Transformation at Iron Bow, to break down why AI is so critical for today’s IT leaders and how they can navigate its challenges and opportunities.
(0:02) Welcome to Tech in Translation, the podcast where we break down the complexities of (0:07) today's most innovative technologies into something we can all understand. (0:12) Brought to you by Iron Bow Technologies, a next-generation solutions provider. (0:16) We're committed to helping our customers across the government, education, (0:20) commercial and healthcare sectors turn their tech challenges into real actionable outcomes. (0:26) At Iron Bow, we believe that technology should be a tool for empowerment, (0:30) not a source of confusion. (0:33) That's why each episode of Tech in Translation is designed to demystify the latest trends, (0:38) tools and technologies, making them accessible for everyone, (0:41) whether you're a seasoned professional or just tech curious. (0:45) Join us as we translate complex tech into everyday language one episode at a time. (0:56) Welcome back to Tech in Translation. I'm your host for today, Tom Wooten. (1:00) Today, we're diving into one of the most transformative technologies of our time, (1:04) artificial intelligence. (1:06) Joining us is Nick Serapiglia, Managing Director of Digital Transformation at Iron Bow. (1:11) Nick, can you tell us a bit about your background and what brought you into the world of AI? (1:15) So first, I noticed you did not use your hands when you said Serapiglia. (1:21) Anyways, Tom, look, I just want to say it's really exciting. (1:25) I really appreciate you and Fran reaching out and giving me an opportunity (1:28) just to give you a little bit about my background. (1:30) We didn't get into this. You and I have talked quite a while, (1:33) and you learned quite a bit about me before we started recording. (1:36) So something that's interesting, though, that most people don't know is I have a psychology degree. (1:40) I don't actually have an IT degree.
(1:42) So my background, just a real quick story, my background in IT really came from a really good (1:47) friend of mine who just saw something in me and being someone that can solve problems (1:51) and meet challenges and just kind of figure things out, right? (1:54) So he helped me get into a job at EMC. (1:56) I think my title was customer engineer, which sounds pretty nebulous. (2:02) But you engineered customers. (2:04) I did. I did. (2:05) Hey, you want some customers? (2:07) Here you go. (2:10) So no, I was the guy that would go out in the field. (2:14) I've been to every data center, probably, in the Tampa Bay area. (2:17) I would go out there and replace parts, hard drives, motherboards, (2:20) do code upgrades, rack and stack, that kind of stuff. (2:23) But I quickly realized, because I thought, I was like, I'm an IT guy. (2:26) But I realized after a time, I'm not an IT guy. (2:29) I just know how to replace parts and solve problems. (2:32) So I had an opportunity to get into, and this is where I cut my teeth, (2:35) really actually started to understand IT. (2:37) I got into the resident program at EMC. (2:40) And think of a resident as somebody that's going on site to be a sysadmin, (2:45) or somebody going on site to be a consultant. (2:47) I mean, it depends upon the customer and what they need. (2:49) But at U.S. Central Command there in Tampa, at MacDill Air Force Base, (2:52) the EMC resident that was there was moving into a presales role. (2:56) Long story short, I took the job over, didn't know what I was doing, (3:00) figured it out on the fly. (3:01) And now I'm here, an IT engineer. (3:04) Did that answer your question? (3:06) Yeah, I mean, it's a fascinating background to get to where you are now. (3:11) We're digging into the topic of artificial intelligence during this podcast. (3:16) So you hear it, you see it. (3:18) Well, I don't know about see it, but you hear about it everywhere.
(3:21) There's some sort of AI buzz having to do with just about every sector out there. (3:26) Automating tasks to transforming entire industries. (3:29) So why is understanding AI so critical for IT leaders and organizations at this moment? (3:34) This kind of goes back to tie up the story before about CENTCOM. (3:37) When I was at CENTCOM, what I quickly recognized, and this leads into what makes AI so interesting (3:42) and what leaders really should be paying attention to, what I quickly realized is (3:46) there's so much data out there and so many things that you can do with data. (3:50) I realized then, and it's still a problem today, that nobody understood their data. (3:54) And once you understand your data, what can you do with it? (3:57) Okay, now I start to understand what I can do with it. (3:59) So as an IT leader, I think what makes it important for them to start to understand (4:05) AI and why it's so important is it kind of provides a strategic advantage. (4:10) You as an IT leader, you can use something like AI to support the business owners, (4:15) whether that's government or some sort of corporation. (4:17) It gives you a strategic advantage to identify and implement solutions that drive innovation, (4:22) can improve your efficiency, or maintain competitiveness, depending upon what you're (4:27) trying to do. (4:28) And it's all about data and kind of deriving information from your data. (4:32) There's massive amounts of data. (4:34) AI can help you get the actual insights from that data. (4:38) So I mean, AI, data-driven, it applies across any sector. (4:43) So you work with organizations across all the sectors, from the government to healthcare, (4:48) SLED, beyond. (4:49) So what's the number one challenge or opportunity you see when it comes to AI adoption? (4:54) The number one challenge I see is, I would say it's fear. (4:58) It's that resistance to change.
(5:00) And this is a common human condition, I think. (5:03) Something new is out there. (5:05) We're afraid of it. (5:06) So we resist it. (5:07) We don't want to adopt it. (5:08) We don't want to use what's available to us because it's different. (5:13) It's new. (5:14) And a lot of it is a lack of understanding. (5:16) There's a lack of training. (5:17) Well, yeah. (5:17) When you hear about AI, you think about, okay, this is taking my job. (5:21) This is going to replace me. (5:23) It is. (5:26) But I mean, that's a common problem. (5:28) It's absolutely a common problem. (5:30) That's what I think is the number one challenge that we face today. (5:32) If you look back at the history of all of the technology changes, not even technology, (5:37) but just the advances that we've made, and I'm probably diving way too deep into this, (5:41) but think about when the internet came about. (5:44) Think about when Google came about. (5:45) Think about when email came about. (5:46) Each time those came about, people were thinking Skynet. (5:50) Right. (5:50) Yeah. (5:51) Every time, oh, oh, you know, the post office, I don't need stamps anymore. (5:55) Right. (5:56) I've got this email. (5:57) You know, or when Google came out and people resisted that because they didn't understand (6:02) it and they were fearful of that change. (6:05) Right. (6:05) And it's no different today with AI. (6:07) I mean, it just, it feels like of the technologies that have come out, AI is a little bit more (6:12) intrusive, a little bit bigger deal than say the latest email technology. (6:17) A little bit. (6:18) Or exactly. (6:19) I would say a lot. (6:21) I mean, I use that sparingly, but yes, it's huge. (6:24) It is. (6:25) It's, it's annoying. (6:26) I mean, that's the challenge we're faced with, but the opportunity that it provides is incredible. (6:31) Everything that we want to achieve is made more possible with AI. (6:38) I mean, the advent of email solved a communication challenge, right?
(6:42) I don't have to send you a letter anymore and wait for you to get that letter and read it (6:45) and respond. (6:45) So that solved the challenge. (6:47) How impactful that was. (6:48) I mean, it was a big impact, but that really just solved a delay in communication. (6:52) Same with collaboration software. (6:54) Yeah, right. (6:54) That just kind of, and it's a big challenge, a single challenge. (6:57) You take AI and it's solving multiple challenges, and not only is it solving multiple challenges, (7:02) but the rate and the speed of innovation is accelerating because of AI. (7:07) So AI itself is accelerating its own innovation. (7:10) So it provides a massive opportunity. (7:13) But because of that, it's driving that fear of adoption far greater than we've seen before. (7:21) Well, okay. (7:21) Let's talk about it on the practical side then. (7:23) What are some powerful AI use cases you're seeing in industries like government and (7:26) healthcare? (7:27) You know, I'm going to break down one. (7:28) I'm not going to say the agency, because it's probably a little sensitive, but this was a (7:31) little bit near and dear to my heart when I heard about this. (7:34) And believe it or not, this was actually three or four, maybe five years ago. (7:37) And I'm saying that because this is kind of how long some of this technology has existed (7:41) and people have been taking advantage of it. (7:43) So imagine a situation, you're investigating a crime and part of your job to investigate (7:48) that crime is you have to, you, the human, because you have to be a subject matter expert, (7:52) you have to watch a video of the crime. (7:55) So this crime has happened. (7:56) You have to watch a video because you have to be able to say, me, the human, saw this (8:01) video and now I can go and testify. (8:05) And you're probably starting to figure out a little bit. (8:07) Based upon what you've seen.
(8:08) You're probably starting to figure out some of the industry I'm talking about. (8:12) Some data scientists at this particular government customer, what they started to do was figure (8:17) out, how can I use things like machine learning to identify markers where maybe a crime happened (8:23) in a video without someone looking at it? (8:26) Sounds cool. (8:27) Going back to why this is such a big deal to me, and this is a little bit of a sensitive (8:30) topic, but I think it'll be near and dear to everybody's heart, is think about that video (8:34) being a crime against a child. (8:36) And think about how terrible that would be. (8:38) You, the investigator, having to watch this repeatedly to annotate the video, document (8:43) what you've seen, and then go testify on this. (8:46) And now think about the fact that that's all you do all day. (8:50) Bring in AI, bring in machine learning. (8:51) And now I have something I can train on a model so that I'm reducing the amount of (8:57) hours, and not only am I identifying crimes faster, so that's a great thing, but for me, (9:02) more importantly, the human element, I'm relieving that human from the horror of watching (9:08) such a terrible crime and the pain and all of the things that kind of, you know, I think (9:13) all of us in various instances have dealt with pain in our lives, right? (9:16) If I could say, I've got this system that would reduce that long-term traumatic PTSD-like (9:22) symptom because I've got an AI something that's helping relieve that. (9:27) So that's just one use case, but you could take that across industry. (9:30) I mean, it's cancer detection. (9:32) That's a huge one. (9:33) We, as humans, we might miss something, you know, maybe we don't see it. (9:37) I train a model that understands what to see and improve that accuracy.
(9:41) And now I'm potentially catching something far sooner than I would have otherwise. (9:45) And I'm improving the quality of life of a patient that may have, A, suffered for (9:49) a long time, and B, maybe they've had this thing for so long because it's gone undiagnosed (9:53) that now their quality of life is significantly reduced. (9:56) And maybe their long-term outlook on life is potentially gone. (10:01) Now, unfortunately, and I understand everything you just said as far as the benefits, the (10:07) human benefits to that. (10:09) Right. (10:09) But as a cybersecurity guy, I'm also thinking of the deepfakes, the information that is (10:16) corrupted, hacked into, messed up to where AI is not making the proper markers, is not (10:22) categorizing things the way it's supposed to. (10:25) And it's actually doing things in contrast or contrary to what we need it to do. (10:30) And threat actors are using it already. (10:33) Yeah. (10:33) What about that side of it? (10:34) So, Tom, with great power comes great responsibility. (10:39) Yeah. (10:40) Outstanding. (10:42) Everything in life, there's, you know, I'm going to get philosophical here. (10:46) Everything in life is a little bit yin and yang. (10:48) You have a good and a bad. (10:49) We do something good for society, there's going to be somebody else who's going to figure (10:53) out how to take that good thing and do something bad with it as well. (10:57) I don't think that what we should approach it with is, hey, yeah, AI can do these cool (11:01) things, but look at all these bad things it does. (11:03) Unfortunately, that's a downfall, right? (11:05) So you're not looking at, you shouldn't make it prohibitive because it could be done bad. (11:09) Look at the benefits it does have. (11:11) Right. (11:12) Okay. (11:12) So bad is going to happen regardless. (11:14) Bad is going to happen regardless. (11:16) And we could probably spend an entire podcast or 10 on talking down that rabbit hole, right?
(11:22) But I think you focus on the good, but you acknowledge the negative side effects of things (11:29) like AI and what the threat actors are doing with AI. (11:32) And that's when you start to figure out things like governance and ethical conversations (11:38) and making sure that the people that are creating these systems are acknowledging the fact (11:45) that, yeah, I have good intention with this, but I need to make sure that I'm accounting (11:51) for what could be used from a bad intention perspective. (11:55) And that's where government can help. (11:57) And that's where some of the policies and the access, that's where all of that stuff (12:01) comes into play. (12:02) We need to make sure that while we're doing these cool things, we're accounting for the (12:06) negative side and building up systems in place to help prevent that. (12:11) Well, let's jump back to your example with the video. (12:13) It's basically dealing with decision-making and things like that. (12:17) So AI is often seen as a tool for better decision-making. (12:21) How can organizations leverage AI to make faster, smarter, and more accurate decisions? (12:26) The difference between us and AI, we sleep, we entertain ourselves at times. (12:33) And where I'm going with this is AI is always working. (12:36) So I create a model. (12:37) It's just some sort of an algorithm to do something. (12:40) And I tell it to do it. (12:42) And it's always going to do that. (12:43) It's always working. (12:44) It does not help my feelings about Skynet. (12:46) Just saying, well, it's always watching. (12:51) Great. (12:53) So my point is that that is the benefit of AI. (12:57) I mean, I know it's scary, but that's the benefit of AI. (12:59) It's always working. (13:01) And not only that, but I can take the human emotion out of my decision-making process.
(13:07) And sometimes that's necessary. (13:09) I'm sure all of us can think of times where if we just took a little emotion out of our decision, (13:14) we could have made a better decision. (13:16) More rational or logical thought versus emotional. (13:19) AI doesn't have emotions, which is scary. (13:22) It doesn't have emotions. (13:23) It doesn't care necessarily. (13:26) Actually, I think this is a good opportunity for me just to say, I mean, AI, (13:30) it's not this sentient being that's, oh, I'm out to get you, Tom. (13:35) No, it's just really a set of instructions that me, the human, have coded it with. (13:41) So it's inherently flawed, right? (13:43) Okay. (13:44) So that's something you got to keep in mind. (13:46) You know, any AI system is going to be inherently flawed. (13:49) And, you know, how we get better with that is we recognize those flaws and we try to (13:54) code out those flaws. (13:55) I'm oversimplifying a little bit. (13:57) Point is, as I get better at telling it what to do, I could then start to back out my human (14:03) emotion and I can stick to that algorithm. (14:05) My if-then. (14:06) If this happens, then this is what I want, without me interjecting an emotional response to it. (14:12) Okay. (14:12) Based on that, where do you see AI making the biggest impact? (14:16) Would that be across operations or customer experiences, driving innovation? (14:21) What do you think is going to be the biggest impact for artificial intelligence? (14:24) Yes. (14:25) Yes. (14:26) Okay. (14:27) And yes. (14:28) So improving operations, as you're asking that question, that was immediately what came (14:32) to mind, is how can I improve my operations without necessarily bringing on, and we're (14:38) going to start getting into the part of the conversation where people are going, I knew (14:41) it, without bringing on more and more people. (14:45) And, you know, by the way, I'm sure you can appreciate this.
(14:48) You know, you've worked projects before. (14:50) Adding more people to a project doesn't always make the project better. (14:54) Or make it go faster, right? (14:55) Or make it go faster. (14:56) And why is that? (14:56) Because again, I'm interjecting different personalities. (14:59) I'm interjecting different human emotion. (15:01) I'm interjecting potential for more flaws. (15:04) And by the way, all this stuff, there's nothing wrong with that. (15:07) If you want me to tell you where AI can make the biggest impact, it's, and again, going (15:11) back to the take out the emotion, I set it to do a task and it's going to do that task (15:15) very well, do it very repetitively. (15:17) And now I no longer have to do that task. (15:20) I can do more important things. (15:22) So across industry, regardless of industry, regardless of whether it's innovation, experience, (15:27) or what have you, it's just the simple removing of the emotional component. (15:32) Yeah. (15:32) Removing the emotional component and doing something that us as humans find boring. (15:36) What happens when we find something boring? (15:38) We start making mistakes. (15:39) Why do we make mistakes? (15:40) Because the task is boring and I'm not paying attention. (15:43) Boring or over-tasked. (15:45) You're suffering from burnout, and that's not something that AI is going to suffer from (15:49) at all either. (15:50) No, it's always working. (15:52) So that is, I would say, where I see the biggest impact, and we're getting into the (15:56) field and you may have already started to hear this term. (15:59) It's called agentic AI, right? (16:01) So now I'm taking AI constructs and I'm creating a type of digital agent. (16:06) It acts as, and this is where I think it's cool, (16:10) if you're leaning into it, it acts as something that helps me.
(16:13) It isn't something that you should be afraid of. (16:15) It helps me do my job because now I can do the more interesting things that I want to (16:19) do because I've got my digital assistant, my agent, however you want to put it, doing (16:23) the task. (16:24) And let me give you a quick for instance. How often do you get an email from somebody (16:28) that you need to take action on and you're over-tasked and you forget about that email (16:33) and now three days go by and you didn't go back to that email? (16:36) Me? (16:36) Never. (16:37) Never. (16:41) But I see your point. (16:42) Okay, so Tom, I don't know if you're kidding. (16:45) Maybe you are really good and that well organized. (16:48) But that is one of those things that I don't have much tolerance for, is anything in my (16:53) inbox. (16:53) Good for you. (16:56) I'm weird that way. (16:57) Don't look at my inbox. (17:01) So my point is, look, am I guilty of this? (17:05) Yes. (17:05) Do I know others are guilty of it? (17:06) Yes. (17:06) Are some people good at it? (17:08) Of course. (17:09) But point is, something that would make a big impact for me is I'm busy. (17:13) I'm working on stuff. (17:14) I get an email for something. (17:15) I look at it like, oh, yeah, I got to take care of that. (17:17) But I'm busy over here. (17:18) And then I get busy and I forget about it. (17:20) Well, I take something like an agentic AI construct, and this dives into gen AI and (17:27) things like that. (17:27) You're getting into like a digital assistant. (17:30) Well, in this instance, I'm talking about a digital assistant, but that's the world of (17:35) agentic AI. (17:36) I have this system that can read an email and understand the context of that email and (17:40) understand that there's a do-out on that email. (17:43) And I have it built. (17:44) It's an agent. (17:45) So I have it built so it can now interject into my calendar a reminder: go check this (17:50) email out.
(17:50) You need to do something with this. (17:52) There's an action there that you need to take care of. (17:54) All right. (17:54) I absolutely can see the benefits to that. (17:57) So have I improved operations, Tom? (17:59) Yeah, absolutely. (18:00) Have I made my job more impactful? (18:01) You've improved operations. (18:02) You've improved collaboration. (18:04) But outside of the humorous, maybe not so humorous, things that I've talked about, the Skynets, (18:09) the replacing people, what are some of the common misconceptions around AI that you've (18:14) experienced? (18:16) AI will replace you completely. (18:18) So yeah, we talked about it. (18:20) Let's talk about that, though. (18:20) We kind of danced around that a little bit. (18:22) You're talking about Skynet. (18:24) AI is inherently flawed. (18:26) It's built by humans. (18:27) If we are building it, is AI going to replace us? (18:30) No, because we need to continue to build it and make it better. (18:32) It's not going to completely replace us. (18:35) It's a common misconception that people are afraid of, that everybody will be replaced (18:39) by a robot. (18:40) But if everybody's replaced by a robot, what is there left to do? (18:43) There's nothing left, right? (18:45) AI is not a sentient being that's there to be a self-licking ice cream cone. (18:49) So that is one of the biggest things that I hear, just talking to different people (18:53) in different industries: people are so worried about how it's going to replace them. (18:57) Another misconception that I hear a lot is AI is like magic. (19:02) No, not really. (19:03) It's just years and years of really smart people that understand really high-end math (19:08) equations that far exceed my capabilities as somebody that understands math, that have (19:14) built algorithms around, again, it's basically a coding principle. (19:19) Do this. (19:19) If you see this, do this. (19:21) And it's a math equation, right?
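The agentic email assistant Nick describes, a system that reads an email, recognizes a "do-out" (an action item), and drops a follow-up reminder on your calendar, could be sketched roughly as below. This is purely illustrative: all names are hypothetical, and a simple keyword check stands in for the language-model step a real agent would use to understand the email's context.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical stand-in for the language-model step: a real agent would
# ask an LLM whether the message contains an action item; a keyword
# heuristic keeps this sketch self-contained.
ACTION_CUES = ("action required", "please review", "need your", "take care of")

@dataclass
class Email:
    sender: str
    subject: str
    body: str

@dataclass
class Calendar:
    reminders: list = field(default_factory=list)

    def add_reminder(self, when: date, note: str) -> None:
        self.reminders.append((when, note))

def triage(email: Email, calendar: Calendar, today: date) -> bool:
    """If the email looks actionable, schedule a next-day follow-up."""
    text = f"{email.subject} {email.body}".lower()
    if any(cue in text for cue in ACTION_CUES):
        calendar.add_reminder(today + timedelta(days=1),
                              f"Follow up: '{email.subject}' from {email.sender}")
        return True
    return False

cal = Calendar()
msg = Email("sales@example.com", "Past performances for the bid",
            "Action required: please review the attached docs.")
triage(msg, cal, date(2025, 1, 6))
print(cal.reminders)  # one follow-up reminder, scheduled for the next day
```

The point of the design is the same one Nick makes: the triage step runs on every email, never gets bored, and never forgets, so the action item surfaces on the calendar instead of sinking three days deep in an inbox.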
(19:23) So it's not magic, as much as it looks like magic. (19:26) That's what ChatGPT says. (19:31) There's two more that I want to bring up. (19:34) AI is only for big tech. (19:36) Not true. (19:36) AI is for everybody. (19:37) Everybody can take advantage of AI. (19:40) Now, could you say that some of the more advanced things only big tech can pay for? (19:44) Yes. (19:45) It's very expensive when you start to get into some of these higher-end use cases. (19:50) But AI is impacting all of our lives. (19:52) And then the final one that I want to talk about is AI is inherently neutral. (19:57) I don't want to get into a political conversation, but I just want to bring up a recent story that (20:02) I found humorous. (20:03) Not all that long ago, you could ask ChatGPT about President Biden. (20:08) Specifically, you would ask it to, you know, you would phrase it in a negative way. (20:12) And it would just be very neutral. (20:14) It would say, you know, Biden is the president, and it wouldn't say anything negative. (20:19) Ask it the same question about President Trump. (20:21) And again, I'm not getting, I don't want anybody to think this is a political thing. (20:24) It's just an interesting outcome. (20:27) You'd ask it about President Trump, and it would start to say negative things. (20:31) So naturally, people are like, oh, AI doesn't like President Trump. (20:34) No. (20:35) Whoever coded it obviously had some... (20:37) Whoever trained it. (20:38) Because it's, again, when you're talking large language models, it's training. (20:41) So it potentially could be garbage in, garbage out. (20:44) Correct. (20:44) And that's the key that you have to remember. (20:47) AI reflects the bias that is present in the data or the design of its algorithms. (20:53) Okay. (20:54) Right. (20:54) So there is, and I go back to, it's inherently flawed. (20:58) Is something like that always going to exist to a certain degree? (21:02) Probably. (21:04) All right.
(21:04) Well, let's step into one thing that you talked about. (21:06) When you're talking about the use of AI, and only some businesses can afford the cost (21:12) of certain aspects or features of AI. (21:15) Well, let's talk about the adoption. (21:17) You know, it can't be easy. (21:18) So what are some of the key challenges organizations face, whether it's data readiness, (21:23) costs, or workforce skills, with adopting AI? (21:26) The number one challenge that all businesses, government agencies will be challenged with (21:31) is being data ready. (21:33) You said it, you know, in your question, and you're absolutely right. (21:37) And this goes back to the bias in your data. (21:41) Garbage in, garbage out is a really good way of thinking of this. (21:44) If I don't have good, clean data, if I don't understand my data, if I don't have a process (21:50) around governing my data. (21:51) Or know where it exists. (21:52) Or know where it exists, or it's siloed. (21:54) How can I introduce something whose lifeblood, (21:59) whose source of income, so to speak, is data, right? (22:04) If I don't have all of those things that we just talked about, then AI adoption, (22:10) quite frankly, is not going to be a good adoption. (22:12) So how can IT leaders overcome those challenges to successfully integrate AI into their organizations? (22:19) Call me at Iron Bow. (22:21) Outstanding. (22:22) Yes, I figured out a good plug here for the sales team. (22:28) It's taking a step back. (22:30) First, I would say, identify what your challenges are as an organization. (22:35) Understand what it is that you see that is preventing you from taking the next step. (22:42) So understanding that challenge. (22:44) Once I understand that challenge, I can start to identify, okay, if I've solved this challenge, (22:48) what is the outcome that I'm driving towards, right?
(22:50) I know this sounds super nebulous, this conversation about AI, and now I'm talking about business practices. (22:55) But this is leading to, once I understand my challenge, I understand what I'm driving (22:58) towards as far as the outcome that I'm looking for. (23:01) I can now start to wrap my hands around, okay, now that I know what I'm trying to get to, (23:07) what are the systems that I have in place that would help me achieve that? (23:12) And what do I need to do to fix those systems? (23:14) Quick example, we here at Iron Bow are looking into, and when I say we, Matt Mell and his (23:20) team are looking into, how do we help make order processing better? (23:24) That's one example. (23:25) Another example is, how do I help if I have a sales team that says, hey, I'm going after (23:29) this bid and I need past performances, and they go to someone like Jen O'Brien and want (23:34) past performances? They're both challenged with, well, if I don't understand where that (23:38) data lives, or if I have to search several different places, then it's going to take (23:43) me a while to get there. (23:45) Point is, everything I just described, I haven't even talked about how AI can help (23:49) that problem. (23:50) Really what I'm after is, how can I improve my data systems? (23:54) How can you get more data ready? (23:55) How can I be more data ready? (23:57) Right. (23:57) And I think that's the number one issue that all of our customers face today. (24:04) And going back to what I said earlier with CENTCOM, they were challenged with that (24:08) issue then, they're probably still challenged with it today, but they're not unique. (24:13) There's nothing they're doing wrong. (24:15) It's nothing Iron Bow's doing wrong. (24:16) It's just, we're all so used to doing our thing, we're really good at this thing, but we don't (24:21) look at the big picture to make sure we know where our data lives.
(24:25) So similar to a modernization effort, where you're trying to figure out where all your (24:29) assets reside, when you're dealing with an AI adoption, you need to know where your (24:33) data resides, where it is, how ready you are. (24:37) Okay. (24:38) Absolutely. (24:39) Awesome. (24:39) Hey man, to wrap up, is there one key takeaway about AI that you'd want IT leaders and listeners (24:45) to walk away with? (24:46) Yeah. (24:46) Lean into it. (24:48) Don't be afraid of it. (24:50) Even if it's just as simple as starting to figure out how ChatGPT can impact your life. (24:57) Don't be afraid. (24:57) I mean, going back to our conversation around when, you know, Google came about, when email (25:02) came about, there isn't something that you're immediately going to be able to, you know, (25:07) hold onto, like, this is the benefit that I'm going to get. (25:10) It's going to save me X amount of dollars. (25:11) This is my return on investment. (25:14) We don't know that yet, but that doesn't mean that you should shy away from it. Lean into (25:19) it, be willing to kind of take that step, you know, into your AI journey, and you're (25:24) going to fail, just like everything that we do in life to get good at it. (25:28) You're going to fail a couple of times, but the longer you wait to dip your toes into (25:32) this arena, the further you're putting your business behind in maybe achieving whatever's (25:39) next. (25:40) So if there's one thing that I could encourage the IT leaders: lean into it, get training (25:45) on it, get your folks trained up on it, be prepared. (25:48) It doesn't mean that you have to solve the world's problems overnight, but just be ready (25:52) to start to think more organically and more strategically around AI. (25:58) All right. (25:59) You heard it. (26:00) Jump in. (26:00) Thank you, Nick, for breaking down AI in such a practical way.
(26:03) If you enjoyed this episode, be sure to subscribe to Iron Bow's Tech in Translation podcast (26:07) on your favorite listening platform so you don't miss future conversations. (26:11) And for more insights on AI and digital transformation, head over to ironbow.com. (26:15) Thanks for listening and we'll see you next time.