AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption are complex — this podcast makes them actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
The CEO’s AI Roadmap
Artificial intelligence is reshaping business faster than any technology in history. But while employees are already using AI daily, enterprises are struggling to capture value at scale. In this episode of the AI Proving Ground Podcast, WWT Co-Founder and CEO Jim Kavanaugh explains why executive leadership is the critical factor in moving beyond pilots, how companies should build the right foundations for AI and why waiting on the sidelines may be the biggest risk of all.
More about this week's guest:
Jim Kavanaugh is a visionary and inspiring leader who co-founded World Wide Technology in 1990 and serves as CEO — steering the company from its roots as a small startup into a global technology solution provider that helps organizations conquer the speed and complexity of technology, harness the power of digital transformation, and make a new world happen.
Jim's top pick: A Guide for CEOs to Accelerate AI Excitement and Adoption
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
AI is not just another wave of technology. It's fast becoming the defining force of business in our time. The largest companies on the planet are betting hundreds of billions of dollars that AI will reshape every industry. And yet, for most executives, the question isn't whether AI matters, it's how to harness it before competitors pull ahead. That tension between urgency and uncertainty is where today's conversation begins. Jim Kavanaugh, CEO of World Wide Technology, has a front-row seat to the rapid evolution of enterprise AI. He's seen the enormous investments in infrastructure, the policy debates happening behind closed doors, and the very real frustrations inside companies trying to move from experiments to results. What emerges is a simple truth: leaders can't afford to sit on the sidelines. To unlock the potential of AI, they'll need executive sponsorship, cultural buy-in, and the courage to act before the playbook is fully written. In this episode, we'll explore why the pace of AI adoption is unlike anything we've seen before, why it's both thrilling and daunting, and what it really means to lead an organization through this moment. So stick with us, because what Jim says might guide you on how to position your organization to win, not just today, but over the next decade. Let's dive in.
SPEAKER_02:Finally made the cut.
SPEAKER_00:Yeah, yeah. Finally. Well, welcome. You are about as plugged in as anybody I know into the enterprise AI landscape. So just to start, what are you seeing right now in the market as it relates to AI? It could be adoption, innovation, breakthroughs or even challenges. What are you seeing in the market, and what does that mean for our listening audience out there today?
SPEAKER_02:Yeah, that's a big question. There's so much going on that's literally mind-blowing. I was just up in my office, and between calls and meetings I'll be watching CNBC and different things going on. And as recently as yesterday, you see Jensen coming on with Sam Altman, OpenAI and NVIDIA coming together to talk about this 10-gigawatt facility they're looking at co-investing in and building, which is just massive. It's hard to comprehend the size and scope of what they're talking about here. It's something the world has never seen. So when you ask what's going on and what am I seeing: it's massive. This is the most transformational technology and business shift that humankind has ever seen, and I think we're very much in the very early stages of it. There are so many things going on from an infrastructure standpoint, as I've just mentioned, and this is coming off other massive investments. So you have the infrastructure side, and you have the largest of the large companies in the world, the largest market cap companies in the world, that are behind these investments around AI. You're looking at Google, Microsoft, Amazon, OpenAI, a company that came out of nowhere in the last five years or so, and so many others. When we compare back to the dot-com days and the overhype, there were a lot of companies that were not well funded; they were startups without revenue and without profit. When you look at what's going on today, you've got the biggest of the big, the largest market cap companies. These are multi-trillion-dollar market cap companies that are leaning in. And just when you think they've invested a monumental amount of money, all of a sudden they come back and do more. So the amount of money being invested in AI infrastructure and these AI large language models is so much larger than anybody could have ever expected, and it continues to happen. With all that being said, there's so much going on, and we're really in the early stages, the infancy stages, of the real outputs and innovations that are going to take place because of the capabilities of these large language models.
SPEAKER_00:Yeah. Well, you mentioned so many companies investing in these AI capabilities, from the biggest of the big on down the line. You're a member of the Business Roundtable, which includes a lot of the leading CEOs in the world today. I'm curious, without getting into specifics about what they're saying, as a group, what's their thinking as it relates to AI? What are they getting right? What do you think maybe they're getting wrong?
SPEAKER_02:Well, first I would say that being a part of that is an honor. It's also a great opportunity to get insight from the CEOs of the largest companies in the US and, at times, globally. A lot of that organization is there to help set policy. So if you think about the policy and structure around AI: how is it going to be governed, how is it going to be deployed, what are the rules and regulations? There's a lot of that going on to make sure the US is doing it as efficiently as possible. It's not just chips and technology, it's also power and infrastructure, so there's policy the administration needs to be driving in tandem. Also, how is AI going to be managed and governed in a very responsible way? Things like that are going on, on top of the fact that every CEO in that room is scratching their head, and much more than that, thinking about what they are doing. Are they leaning into AI aggressively enough? Are they making the investments in how they're going to be an AI-led organization, and how are they going to leverage AI into their business, their enterprise and throughout their entire organization? So it's a great opportunity to see a lot of that going on. And there are a lot of questions and a lot of thoughts in that room.
SPEAKER_00:Yeah. Well, you talk about every CEO in that room wanting to lean in and learn from their peers and other organizations about what's happening. You've been bullish on AI since day one, or maybe day zero, or pre-day zero, whatever you might want to call it. And you've been vocal about wanting to be an AI-first company. I'm wondering, in concrete terms, what do you think being an AI-first company means? And does every company have to be an AI-first company, or is there a scale there?
SPEAKER_02:Well, my advice is that every company and CEO should be thinking about how they become an AI-led organization. AI is going to be ubiquitous. You already see it today. Where the most traction has taken place around AI is personal consumption. Practically anybody around the world who has access to a smartphone, to the web and the internet, is most likely tapping into these large language models. The proliferation of these large language models is everywhere, and it is going to keep growing from an individual use case standpoint. I would be incredibly surprised if you, or anybody in any of the companies out there, aren't writing prompts into Gemini, OpenAI's ChatGPT, Perplexity, Meta, you name it. And there are more specialized, bespoke large language models being built that are more industry-based. That is happening today, and the traction taking place there is just going to continue to build and get bigger and better and stronger. Then you have to look inside an organization and at some of the challenges. From a CEO and a leadership perspective, you have to see that this is not going away. This is not a fad. And then it's really thinking about: how are you going to be an AI-led organization? That doesn't mean drop everything you're doing today. It's thinking about how you leverage AI to create operational efficiencies for your employees. How do you build out a platform within your organization that will allow you to innovate faster, to prototype faster, to create better products and services that also have a level of scale to them? If you're the CEO and you're not thinking about these things, and you're not having these discussions with your leadership team, your IT organization, and potentially consulting and advisory firms that specialize in this, like World Wide Technology, I think you're missing the boat, and I think you're going to put the company at risk. So I believe for every company CEO, this needs to be top of mind. If I look at it from a tech standpoint, 25 years ago tech was a back-office operation: just keep the lights on, maybe some email as it was coming into adoption, dial tone. Now every CEO, board and executive team should be thinking about: what's your digital transformation? How am I leveraging generative AI? Where are we today? And how are we doing that securely, thinking about all the aspects from a cyber standpoint?
SPEAKER_00:So it's interesting you mention how on an individual basis we're all using our phones and our devices to leverage AI. I mean, AI is baked into a lot of the apps at this point, and there are certainly apps like ChatGPT or Gemini where you can go out and learn and leverage AI. Do you get the sense that other executives, from the industry at large, are underestimating how much employees are using AI? And just to parlay that into an actual question: so many people are using AI on an individual basis, yet there's an adoption challenge from an enterprise perspective. So what's the disconnect there?
SPEAKER_02:Yeah, I think it's fascinating. I would draw a comparison: there is no stopping this train. How you manage and govern the deployment of AI usage in your company, there are going to be a multitude of different ways to do that. But my view is, just assume every one of your employees is using AI today, one of the large language models. Now, you may put controls in place so they go through your corporate network or can only access certain things, but do assume that every employee, through their own means, is going to these large language models in one way, shape or form. And if you think about it, go back a number of years to the deployment of these little devices we call cell phones, which keep getting smarter and smarter, to the point where you have an IBM mainframe in your pocket now. It wasn't that long ago that a chief technology officer or chief information officer for an organization basically said, if you have that device, that device is never going to be connected to the corporate network.
SPEAKER_00:Yeah.
SPEAKER_02:You're not going to be connecting it. That's going to create too much havoc, too many challenges in regards to security. But the technology was so compelling, there was no stopping that. No, that device is going to be connected to your network. People are going to be using their personal device as an interchangeable device, whether it's for personal use or for business use. So if you think about where the CTO and maybe the chief security officer said, no, that's not being connected, the tech was so compelling. And the impact on usability, both from a personal and a business standpoint, AI is going to dwarf what that comparison looked like. So there's no stopping this train. It's a matter of how you are going to manage it, and then how you govern it and secure it, and these are things that need to be done thoughtfully. The CEOs, the lines of business and the internal IT organization all need to be working in tandem. They need to be brainstorming, collaborating, and thinking about how they're going to leverage AI throughout the entire enterprise. And this goes for large Fortune 50 companies all the way down to startups: thinking about how you do that in a very thoughtful way, in a very innovative way. There's also going to be, I would say, a tone within each organization to determine, depending on the space you're in, how aggressive you want to be. Because in this space, I firmly believe you need to be creating an environment, in the right areas, where you're pushing the envelope. You've got to be testing things. If you're not going to take any chances, I think you're going to put the company at risk. But you need to be doing that thoughtfully, depending on the industry and which area of your business you're rolling some of these things out in. That being said, I believe every organization and every CEO should be building an AI plan. And along with that comes one of the most important things, and I think one of the most challenging things, something that is creating a level of frustration within organizations around the actual rollout of really impactful AI use cases: this is more complex than you as a user going out and writing a prompt about something you're interested in. And, by the way, the large language models are getting smarter and smarter and doing amazing things with the data they can collect. However, to do that internally, you need to aggregate your proprietary data, and you need to do that securely. You need to think about that data across the enterprise: what's the most efficient way to aggregate and organize that data, then govern that data, and then how are you going to actually bring different silos of data and different areas of the business together? That's where enterprise architects come in, from a data standpoint, from an AI standpoint, thinking about use cases where certain data may be commingled and other data is governed differently, like your financial operations. All of those things are creating a level of complexity and a level of frustration, because I think the CEO and the lines of business would like to see the same immediate gratification that you as an individual can get by going out to a large language model.
It's not there yet, so it's going to take time to look at business processes across the board, automation, governance, control. But believe me on this: the outcomes are going to be there, and they're going to be quite compelling. It's just going to take a little time. So organizations are making those investments in the AI infrastructure, whether it's on-prem builds, variations of RAG models, the data structures they're looking at, or the use cases they're building. But there's a basis of integration and automation that still needs to be built to connect the data with a specific operational function that ultimately is going to deliver massive automation and capabilities. And we're hearing things around agents and agentic platforms. All of those things are very real, and they are going to provide massive differentiation, scale, efficiency and capabilities for organizations, but they are taking time to do that.
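The data aggregation, governance and retrieval work Jim describes is essentially the retrieval-augmented generation (RAG) pattern. The sketch below is purely illustrative and is not WWT's implementation: the chunk structure, the keyword-overlap scoring (standing in for a real embedding index) and the role-based access check (standing in for enterprise data governance) are all assumptions chosen to keep the example self-contained.

```python
# Illustrative-only sketch of the RAG pattern discussed above: aggregate
# proprietary documents, retrieve the most relevant chunks a user is allowed
# to see, and assemble a grounded prompt for a large language model.
# The keyword-overlap scoring stands in for a real embedding index, and the
# allowed_roles check stands in for enterprise data governance.

from dataclasses import dataclass

@dataclass
class Chunk:
    source: str         # which internal system the text came from
    text: str
    allowed_roles: set  # hypothetical governance tag: who may see this data

def retrieve(chunks, question, user_role, top_k=3):
    """Rank chunks the user may see by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    visible = [c for c in chunks if user_role in c.allowed_roles]
    scored = sorted(
        visible,
        key=lambda c: len(q_terms & set(c.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, retrieved):
    """Assemble a grounded prompt; the actual model call is out of scope here."""
    context = "\n".join(f"[{c.source}] {c.text}" for c in retrieved)
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    corpus = [
        Chunk("crm", "Customer renewals in Q3 were driven by managed services.", {"sales", "finance"}),
        Chunk("finance", "Gross margin on services improved two points year over year.", {"finance"}),
    ]
    hits = retrieve(corpus, "What drove customer renewals?", user_role="sales")
    print(build_prompt("What drove customer renewals?", hits))
```

In a real deployment the retrieval step would use embeddings over governed data stores and the prompt would be sent to a model, but the shape of the pipeline, aggregate, govern, retrieve, then generate, is the point Jim is making.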
SPEAKER_00:And I think that's one of the biggest frustrations, as you mentioned: it does take time. You're going to need to commit time and resources to making this a reality. Let's just use ourselves as an example, because we have been a very innovative company over the years, at least as long as I've been here, and I know even prior to that. Walk us through our journey with AI a little bit. What did we do that others can learn from to make all that happen: identifying the use cases, talking about guardrails and compliance, and making sure that we're moving as fast as we can, but doing so in a safe way? What did we do that you think would be lessons our audience could learn from here?
SPEAKER_02:Yeah, a number of things. And as you know, I'm actively involved in this process. First, I would say, starting over two-plus years ago: I have my executive meeting every Monday morning here in St. Louis at 7 a.m. with a team of about 15 people, leaders of the organization, and we touch on all the things you would expect, religiously and rigorously, every Monday morning: finance, operations, numbers, year-to-date performance, new hires, people who may have left and why, the cultural and operational things. That has gone on for over 30 years now. Part of that now is an hour added on to focus on nothing but AI: what are we doing in regards to collaborating, brainstorming, and bringing in large groups of our data scientists, our AI consultants, software developers, our internal IT organization, and our leadership team. This is all about being an AI-first, AI-led organization. It's being led by me, by the CEO, and by the line-of-business owners, the leaders of the company, in collaboration with our IT organization, our data scientists and our management consultants. We bring all of those people together, we identify the vision of the company, and then we identify the specific use cases we're going after. We also concurrently look at what enterprise AI architecture we need to be building out internally to enable that. All of those things are moving. Along with that meeting, I'll have multiple meetings throughout the week that drill into specific use cases, whether it's a business use case we're focused on, like building out our own ChatGPT-like assistant that we call Adam; or another specific application, our RFP assistant, which has provided great capability and differentiation for us; or working with our internal IT organization, our data science group and our software development group on how we're building out our back-office data infrastructure, along with potential RAG models, connections to large language models, and our strategy around agents and our agentic architecture. So there are things from a technical standpoint being worked every day, looked at both in a very specific way, how they could impact a use case, and from a macro standpoint across the entire enterprise. And this is a never-ending process. I will tell you, with all that's going on, we have made great progress, and I'm really happy and pleased with the effort and the results we're seeing in a lot of areas, but not even close to being satisfied. So I'm not sure if you're feeling like, oh, that's a little bit of a conflicting, confusing message. Are you happy or not?
Well, at the end of the day, the team is making great progress, but I'm not happy with where we are, because I think there's so much opportunity. And we are sharing what we're doing with our customers and our partners: call it the methodology from the CEO down through the lines of business, our approach to the cadence of how we communicate being an AI-led organization, but also the specifics around how we're working with the line-of-business owners, who need to be thinking about how AI applies to them. They need to be actively involved with these use cases, along with our technology leadership, our data science and software development teams, and how they all come together to work in a very collaborative way. Because these things are changing and morphing on a daily and weekly basis. When a large language model comes out with an upgrade, it may dramatically, in a very positive way, impact the capabilities you have, call it multimodal functionality, as you're building out prototypes and doing different things. These things are happening very quickly, and the teams need to be integrated on a real-time basis to capture them and figure out how you're going to incorporate them into the capabilities you have today. So we're doing a lot, and we're sharing that methodology and approach from a business and leadership standpoint, a communication standpoint, and also from a deep technical standpoint.
SPEAKER_00:At the beginning of your answer, I think you said something illuminating, which was fairly simple: led by the CEO. This whole initiative is led by you, and certainly many others are involved. But that executive sponsorship, that executive leadership, seems to be one of the things that can take those AI use cases, those AI POCs, and push them into production faster. I'm sure you're well familiar with that MIT study where 95% of projects have not been working out. Well, part of the secret sauce of that 5% is executive sponsorship. I wonder, what do you think effective executive sponsorship looks like as it relates to adoption of AI in the enterprise?
SPEAKER_02:Yeah, I think it's really, really important. And I realize that CEOs don't have the time to be a so-called project manager for an IT initiative; that's why we bring people on. But this is much bigger than that. I think it's really important that CEOs lean in to learning. This is a way for them to learn and really understand how fast AI is continuing to morph. If you think about the models today, there are learning models, there are reinforcement models. This is happening in an incredibly compelling way. So for the CEO to provide guidance and support, they need to understand how a lot of this works. Somehow they need to figure out how to allocate a certain amount of their time to this, and bring people in from their organization who can help them stay in the loop. I think that's very important. I also think, when you reflect back on the new MIT study, there are a lot of other studies coming out saying, no, the adoption of use cases is much higher. I would actually have to agree with the MIT study that there is a high level of frustration, because there is a desire for immediate gratification, like an individual user who can go write a prompt to a large language model. That is not the way it's going to work in the enterprise. These things are going to take some time, but I am absolutely, 100 percent convinced, and I see it internally, that you have to take the time to put the work in to get your data right. Data is king for a lot of organizations, and data is very complex and complicated. Building the platform and the infrastructure of your organization will put you in a position where you start to see the flywheel kick into effect. If you don't do it, you're going to put yourself at a real disadvantage. So where some would step back and say, wow, should we really be concerned about the adoption and traction of AI in enterprise organizations because of the MIT study, my answer is no. You need to understand that, yes, this requires a lot of heavy lifting and work, but for the ones that put that in, I am absolutely a believer, and I see it internally, that you will be the ones that create a massive competitive advantage in your ability to automate and scale your organization around operational efficiencies. And you will put yourself in a position to create and innovate new ideas, products, capabilities and services that you would not necessarily have been able to if you were not building out an AI-enabled organization.
SPEAKER_00:Right. Everybody wants to get to that flywheel, where it gets easier and easier to start to prove that value. But identifying and proving ROI seems to be one of those things that has confounded a lot of our clients and a lot of our listeners out there. I'm wondering, from your perspective, how do you think about ROI on AI? And how has your perspective on ROI shifted, if at all, over the last four or five years as gen AI has started to explode?
SPEAKER_02:Yeah, it's a great question. I'm obviously very focused on return on investment. This is one where, again, I think there is a bit of an art and a science to the investment cycle for organizations. I truly believe that if you're going to wait for the perfect scenario and the perfect set of data to validate your business case or return on investment, that kind of analysis is going to paralyze your organization. You're not going to move fast enough. I believe you have to make the investment in bringing in the right people, the right technologists, the right use cases. And the way you're going to figure out and start demonstrating that return on investment is by building these kinds of incubation capabilities internally. Then you work through them, and you start to see where the AI models can really provide differentiation, scale, efficiency and breakthrough capabilities. But you're not going to get there if you're expecting the perfect ROI use case up front. So there's definitely an art and science to this, but I see it internally. There are things like our RFP assistant that we've talked about; it took a lot of work and massaging of the data, and, for lack of a better term, training of the data, because it was hallucinating for some time. But then all of a sudden you have 200-page documents that we're now ingesting into the model, and it's kicking out 80% of a response that may have taken us two to three weeks to build, and it's doing it in three to four minutes. We keep training and massaging the data and the algorithms, and it gets better and better. Then you think about Softchoice, a great company we acquired as part of the World Wide Technology organization. They have an entire business development organization, and all of a sudden we can take that RFP assistant we built and just turn it on by training their business development group to use it and leverage all the intelligence in that model. That really demonstrates the scale and the knowledge and the capabilities they would not otherwise have had access to, because of all the things we're doing from a deep tech standpoint. So these are examples where it's very real, and we feel like we're just scratching the surface relative to the capability we can bring. Then you think about multimodal capabilities, building out engineering drawings, proofs of concept and PowerPoint presentations; all of that is morphing on a day-to-day basis. I've had discussions this week about those things, and about moving beyond the RFP assistant to building AI products in specific areas of our business, around cyber, AI, software development, networking, compute and high-performance architectures. For all of those, we're building bespoke models that leverage the underlying unified data structure, so we're not having to put the same amount of effort into every one of those product capabilities.
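For illustration only, here is a minimal sketch of the kind of workflow the RFP assistant discussion points at: split a long RFP into individual requirements and ask a model for a first-pass draft of each, to be reviewed and edited by a person. The OpenAI client is used only as an example provider; the model name, the splitting heuristic and the prompts are assumptions, and the real assistant is certainly more involved, with retrieval over prior responses and checks for hallucination.

```python
# Minimal, illustrative RFP-assistant workflow: split a long RFP into numbered
# requirements and draft a first-pass answer for each one.
# The OpenAI client is only an example provider; model name, splitting
# heuristic and prompts are assumptions, not WWT's actual implementation.

import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def split_requirements(rfp_text: str) -> list[str]:
    """Naive splitter: treat numbered lines ('1.', '2.', ...) as requirements."""
    parts = re.split(r"\n(?=\d+\.\s)", rfp_text.strip())
    return [p.strip() for p in parts if p.strip()]

def draft_response(requirement: str, company_context: str) -> str:
    """Ask the model for a short first-pass draft that a human will review."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model choice
        messages=[
            {"role": "system", "content": "Draft concise RFP responses. Flag anything you cannot support from the provided context."},
            {"role": "user", "content": f"Company context:\n{company_context}\n\nRequirement:\n{requirement}"},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    rfp = "1. Describe your managed network services.\n2. Describe your AI advisory capabilities."
    for req in split_requirements(rfp):
        print(req, "->", draft_response(req, "We provide enterprise networking and AI consulting services."))
```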
SPEAKER_00:I think the RFP assistant is such a great example that many can learn from. And by the way, if you're listening to this, there's lots of great content on WWT.com about our RFP assistant: how we developed it, what it is, and how it provides value. But Jim, that wasn't a solution that just easily came about. We had struggles with it. From my understanding, at one point we were looking at it thinking, do we move forward or do we not? We decided to push through, and it did provide value. My question is about the other end of it, though: there is a team and people who have to use these tools, and that's not always easy either. This is a change management discussion. With AI, the more you learn about it, the more you start to understand how it's going to affect your job on a personal level, and change is scary. You can decide whether you want to use the RFP assistant as an example, but in general, how do you get people to buy in to a technology that's going to be so disruptive that, in your own mindset, it looks like scary change, even if the hope is that it's going to provide more value for you and your company?
SPEAKER_02:Yeah, it's a real deal. Let's just cut to some of it: the threat of losing your job because of automation and efficiency. When I look at it at World Wide, we all need to lean in, because if we don't lean into AI, we will be at a competitive disadvantage, individually and collectively as an organization. As an organization, we need to push ourselves to be an AI-led organization that is looking for ways to drive automation and leverage the technology, and to do it in a very thoughtful but very aggressive, innovative way. At the same time, every individual in the company, including myself, needs to be a constant and continuous learner. We need to push ourselves to learn. You might think you got out of school, you went through high school, college, a PhD; you're never done learning. You need to continue to push yourself, especially in the world we live in today. And if you do that, I believe you will be in a good space. That's what we're looking for: people who are lifelong learners, who have that thirst. And just like with the RFP assistant, yes, we've created efficiency and scale. In some cases where we've provided scale, we've looked at where we can take some of those talented people, and some of them have become experts on the tool and the capabilities we're bringing. We're using them to help train others: one, how do you use the RFP assistant, and two, how do we help them train other individuals to think about use cases and how these models work? Because to your point, Brian, yeah, at one point we were like, okay, do we need to shoot this project? And sometimes you need to do that; it's just not working. In this case, the platform was hallucinating, we were plugging data in, and things just weren't working. But in the spirit of iterative learning, there was an upgrade to one of the large language models that had a fairly dramatic impact on the capability of the RAG model we were building. These things are iterating on really a daily basis. But you do need to choose. You can't be all things to all people, and sometimes some projects need to be shot to advance the other ones as you see those making great progress. But back to your point, the people side of it is a really, really important one too. Culturally, when I look at World Wide, the one thing that has been a constant over the last 35 years is that technology has changed in so many ways. Like you say, we had the dot-com days, software development, AI, networking, so many things that have been very disruptive. But the one common thread across all of them is that the number one most valuable resource we have is our people. It's our people and our culture. I don't think any of that has changed with the emergence of generative AI. It's actually probably more important, because it's about getting people to understand how important they still are. And I'll even go down the road of software development.
SPEAKER_00:Yeah.
SPEAKER_02:You have people coming out and saying, even Dario, the CEO of Anthropic, that software developers are going to go away. And I understand what he's saying; I think it was a little extreme at one point. But my view is, I look at our developers internally, and we need them more than ever. The complexity of software development is just going to be different today. I'm spending a lot of time with our software development teams, both our internal IT organization and our digital team that goes out and builds state-of-the-art enterprise mobile applications, all the way into the back-office enterprise of financial organizations and QSR, quick-service restaurants, very enterprise, front-end, customer-facing applications. Software development is not going away; it's going to be done differently. These AI platforms that have emerged, like Windsurf and Cursor and GitHub and others, we are intimately involved with, and our innovation ecosystem is collaborating on how you use those platforms to engineer, architect, and write code faster and more efficiently. It's not just a flip of a switch. To do that, it takes very talented engineers, architects and developers to figure out what the right use cases are and where you should apply them. And, by the way, I'm a firm believer that our entire organization should lean in, and we are, in a big way, internally and for customer projects, using these AI-enabled software development platforms I just described to help train up all of our software developers and our top architects. It also expands to our engineers, because they are developers in some way, shape or form, and to our management consultants and our data scientists, bringing them together to figure out the right areas, how you use those tools, and how you do it responsibly. One thing we have found is prototyping. You can prototype on the fly. It used to be: we'll go talk to a customer, the customer paints a picture of what they want to do, and we say, okay, we'll come back in a week and show you a prototype. Now, for the ones that really know how to use these tools, we're prototyping potentially in meetings, right? Showing them what something could look like. So the ability to innovate, brainstorm and prototype is very real. On the quality side, you can use some of these code assistants to do debugging, to drive your quality initiatives, and then to actually write code. But it needs to be done in spaces and places that make sense, because at times, when you let these tools loose writing code and you don't think about downstream effects on the supply chain process, they may create other problems you weren't anticipating because of how fast they can do things. So my overall point is, everybody needs to be thinking about how we work together to train each other about where the puck is going. This is not about eliminating jobs. There will be areas of the business that are, call it, manually done, things that people don't want to do, that we're going to automate. But there is always going to be the need for individuals to come in.
It's just gonna be different, and we all need to be thinking about it that way.
SPEAKER_00:Yeah, absolutely. My next question was along those lines: I know you like to use the analogy of skating to where the puck is going. We're coming up on the bottom of this episode. Where is the puck going right now? Are we going to have people managing teams of AI agents in the future? Are we going to be totally reliant on agents? Based on the conversations you're having with very high-up folks within the industry, where do you see that puck going as it relates to AI and organizations?
SPEAKER_02:Yeah, since I played soccer, I probably need to figure out how I see where the ball is going. But let's ask AI.
unknown:Right.
SPEAKER_02:Right. No, it's a great way to think about where things are going in the future, and how you take where you are today and figure out how you're going to get there and how you're going to get the organization there. I absolutely believe you are going to see organizations where, if you look at today, you have an organization of 50,000 people and maybe 10,000 agents that you're using, depending on how they're being leveraged. As you go forward and build out a true enterprise capability around agentic architectures and platforms, you're going to have thousands and thousands of agents that need to be managed. Those agents are going to do very significant, meaningful and productive things for your organization, but they need to be managed, and it needs to be done very thoughtfully. So when you think about the world of managing a business, you're going to be managing people like we do today, and you're also going to be managing agents, and there need to be controls and governance around what those agents can and cannot do. That's going to be an evolution, because you don't want a massive sprawl within an organization, of people or technology or agents, that isn't handled in a very controlled, organized and scalable way. That being said, going back to the MIT study, there's a level of frustration about the speed of deployment of use cases inside enterprises. That doesn't change my mind at all. That frustration is right now; in a year or two, you're going to start seeing real, demonstrable use cases like we're seeing internally, and you're going to see that happen in a more meaningful way. When that happens, I think you're going to start seeing the scale and the impact of these agentic platforms, the proliferation of agents, and just where generative AI can continue to impact enterprise organizations in an incredibly meaningful way. But you won't get there if you're going to sit there and say, I'm going to take a wait-and-see approach, I'm going to wait until I see this. I think you're putting your organization at risk of being disrupted by competitors if you're not really pushing this. And I go all the way back to our original conversation. If you want any indication of the speed, confidence and commitment behind these large language models and generative AI, just look at the capital: there's never been this kind of money invested in anything in the world. If you go back and look at the largest CapEx-consuming companies in the world 10, 15, 20 years ago, you'd see car manufacturers, utilities and manufacturing organizations. Now look at the amount of money going into these large language models and these large enterprise AI organizations, which, by the way, are some of the most profitable companies in the world; it's dwarfing the CapEx spend in those other areas.
So as that plays out, and enterprise organizations are able to adopt and implement these AI capabilities, the flywheel is going to kick in, and what it actually delivers, I think, is going to be game changing. I'm very excited, and I know we're getting close to time here, but some of the areas I personally think are going to be mind-bogglingly impactful are in healthcare. When you think about some of the investments being made by institutions and healthcare organizations around genomics, protein folding and splicing, cures for cancer, a lot of these things, I think it's incredibly exciting. Then you look at a space we're in, like wearables. Internet of Things was a big thing years ago; it's still a big thing, and it's an even bigger thing now when you think about your own personal well-being. Whether you have an Oura ring or a watch, all the diagnostics that can be collected on your individual body, and your body is so different from everybody else's, from stem cells to genomics to everything, along with continuous glucose monitors, all of that is going to be aggregated, and the ability to provide very custom care is very close as those things come together. Then you think about how large language models will collect data across millions of people to find trends and solve problems we weren't solving before; that is going to change the healthcare industry. There are going to be breakthroughs that I think will be mind-boggling in the next couple of years.
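Picking up Jim's earlier point that thousands of agents will need controls and governance around what they can and cannot do, the sketch below shows one minimal, purely hypothetical way to express that idea in code: a central registry that records each agent's allowed actions and checks every requested action before it runs. The agent names, actions and policy shape are illustrative assumptions, not a description of any specific platform.

```python
# Purely illustrative sketch of agent governance: before any agent acts, check
# the requested action against a central policy registry so agents can only do
# what they have been explicitly allowed to do, with an audit trail.
# Agent names, actions and the policy shape are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_actions: set = field(default_factory=set)          # e.g. {"read_crm", "draft_email"}
    requires_human_approval: set = field(default_factory=set)  # allowed actions that still need sign-off

class AgentRegistry:
    def __init__(self):
        self._policies: dict[str, AgentPolicy] = {}
        self.audit_log: list[tuple[str, str, str]] = []  # (agent, action, decision)

    def register(self, policy: AgentPolicy) -> None:
        self._policies[policy.agent_id] = policy

    def authorize(self, agent_id: str, action: str) -> str:
        """Return 'allow', 'needs_approval' or 'deny', and record the decision."""
        policy = self._policies.get(agent_id)
        if policy is None or action not in policy.allowed_actions:
            decision = "deny"
        elif action in policy.requires_human_approval:
            decision = "needs_approval"
        else:
            decision = "allow"
        self.audit_log.append((agent_id, action, decision))
        return decision

if __name__ == "__main__":
    registry = AgentRegistry()
    registry.register(AgentPolicy(
        "rfp-drafter",
        allowed_actions={"read_rfp", "draft_response", "send_to_customer"},
        requires_human_approval={"send_to_customer"},
    ))
    print(registry.authorize("rfp-drafter", "draft_response"))    # allow
    print(registry.authorize("rfp-drafter", "send_to_customer"))  # needs_approval
    print(registry.authorize("rfp-drafter", "delete_records"))    # deny
```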
SPEAKER_00:Yeah. I mean, it's an exciting future, and there are going to be companies that figure that out. But to your point, that's not going to happen unless they get off the sideline, into the arena, and make it happen with AI. Jim, thanks so much for joining us on the show today. I promise it won't be another 35 episodes until we have you on again, or at least until we extend the invite. So thank you again. My pleasure. It's great to be here. Okay, thanks to Jim for taking time out of his incredibly busy schedule. From this discussion, three key lessons to keep in mind. First, lead from the top. AI that matters doesn't just bubble up; it's sponsored, prioritized, and protected by an executive sponsor. Second, foundations beat fireworks. Real value shows up only after you invest in the unglamorous work: enterprise data architecture, security and governance, and a platform that lets you prototype quickly and scale what works. And third, ROI is an outcome of motion. Incubate targeted use cases tied to operational efficiency or revenue, iterate fast, and let wins compound. Kill what stalls; double down where the flywheel starts to spin. The bottom line: the money, momentum, and models are all already here. The gap is leadership. If you're not actively turning AI from experiments into governed, scalable capability, you're gifting your advantage to someone who is. If you like this episode of the AI Proving Ground podcast, please give us a rating or a review. And if you're not already, don't forget to subscribe on your favorite podcast platform. You can always catch additional episodes or related content on WWT.com. This episode was co-produced by Nas Baker and Kara Kuhn. Our audio and video engineer is John Knoblock. My name is Brian Felt. We'll see you next time.