AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption is complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
The Multi-Cloud Survival Guide
Artificial intelligence has pushed cloud into overdrive. In this episode of the AI Proving Ground Podcast, two of our top cloud experts, Jack French and Todd Barron, reset the approach and detail why cloud is the launchpad but portability is the strategy; how to start greenfield with containers and abstraction; what a real FinOps model for AI looks like (unit economics, tagging, token/GPU visibility); where neo clouds fit versus hyperscalers; how to handle cross-cloud risk and skill gaps; and the governance moves that accelerate rather than restrict innovation.
Support for this episode provided by: Trellix
More about this week's guests:
Jack French brings a depth of experience in public cloud technologies to WWT. As the Senior Director of the Cloud Global Solutions and Architecture team, his responsibilities include leading the Cloud presales, business development and solution development efforts around public cloud. This also includes cloud consumption resell, cost management and marketplace.
Jack's top pick: The Cloud Advantage for AI
Todd Barron is a Technical Solutions Architect at World Wide Technology (WWT), specializing in cloud and AI technologies. With over 34 years of experience, Todd joined WWT in 2023 after serving as a Partner Management Solution Architect at Amazon Web Services (AWS). In his role at WWT, he supports the pre-sales organization and leverages his expertise to help customers navigate the complexities of cloud and AI, particularly focusing on AWS platforms.
Todd's top pick: Enterprise Cloud Transformation: Practical Approaches to Migration and Modernization with WWT & AWS
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
From World Wide Technology, this is the AI Proving Ground Podcast. The cloud was supposed to simplify everything: instant scale, infinite compute, endless innovation. But when AI entered the picture, something changed. The same infrastructure that powered agility now demands unprecedented energy, data, and governance. Companies are scrambling, rethinking architectures, retraining teams, and rebuilding trust in their ability to control cost, risk, and innovation all at once. Today, we're talking with two top cloud experts in the industry, Jack French and Todd Barron, who unpack what happens when AI meets this cloud reality: how organizations can learn from past mistakes, avoid the pitfalls of lift and shift, and design for a future defined by portability, modernization, and constant change. Because in this new era of cloud and AI, speed matters, but so does strategy. So stick with us, because what Jack and Todd say will reshape how you think about building, governing, and scaling AI in the cloud. Let's jump in. Todd, welcome to the podcast. How are you doing? Doing well, thank you. Great. And Jack, welcome to the table.
SPEAKER_03:Thank you. Thanks for having us here.
SPEAKER_02:So we're talking cloud and AI. And uh Jack, I'm gonna start with you. I want to go back to early 2023, maybe even late 2022. ChatGPT had just launched. The Gen AI hype was just about, you know, to go into overdrive. And there was a lot of talk out there about how much this was going to increase cloud spend, put complexity in the cloud. Um, give us a little bit of a walkthrough of how the industry has shifted as it relates to cloud from the moment AI kind of popped onto the scene to where we're at today.
SPEAKER_03:Yeah. Uh that's a good question. So the cloud has been growing by double digits for years, but absolutely over the last few years, we've seen it just explode because of AI and not just because of chatbots, but also because of these huge environments that organizations are building in the cloud to serve their needs outside of conversational AI or chatbots. But yeah, it's been a huge boom for the industry. It's also driven a ton of innovation. So we're seeing new services and capabilities available in the cloud at just an incredible rate.
SPEAKER_02:Yeah. Yeah. Todd, do you think what we anticipated in 2022, 2023, did that hold true or did something change up until now?
SPEAKER_01:Well, I mean, obviously I don't know if anyone could have anticipated the need for, you know, nuclear power that we're seeing out there right now, the need for as much liquid cooling as you're seeing, you know, across your GPUs, and the density that you see in data centers. The good part is that the cloud providers have been looking ahead since they've been in business and had already been developing liquid cooling, had already been working out how do we get the power that we need. So I think that it wasn't as much of a shock as maybe some people think, right? Although it has definitely pushed them onto a much faster timeline to do the work.
unknown:Yeah.
SPEAKER_02:And Todd, where do you think we're at today? Are most organizations thinking of cloud as kind of that launchpad for AI initiatives, or are we still very much in more of a snowflake type of mentality?
SPEAKER_01:Right, good question. And the vast majority of AI workloads are developed cloud first. Typically, due to the agility and the frictionless, uh, entry point into cloud, that is where they're building to start with. Look at most startups, or even the large enterprises that have really embraced AI: they start with cloud, build it, and then from there may figure out the disposition of, okay, does it start to get to the point where I may want to bring this in-house or not?
SPEAKER_03:At Worldwide, I think a few of our first projects have been in the cloud just because we can spin up environments in hours or weeks versus, you know, months. So yeah, right.
SPEAKER_01:I mean, uh, our team has done a lot of things internally for Worldwide that we've started cloud-first initially, as you said, because of the speed to be able to bring it online and to iterate and to do things. Yeah, you see that across the entire industry. And like I said, the vast majority are starting cloud.
SPEAKER_02:Yeah. Well, Jack, Todd mentions, you know, that frictionless experience, but nothing is ever that easy. Um, or at least I hope it's not, otherwise it'll be a quick conversation. But what is some of the complexity that arises when we're talking about enterprise IT teams driving AI initiatives and thinking about those workloads in the cloud?
SPEAKER_03:Yeah, I think there's a few things to think about. One is, um, if you look at the way organizations are adopting cloud today, a lot of them moved lift and shift into the cloud, or they moved faster than I think they would have chosen to, looking back on it now. And so AI presents a really unique opportunity for them to start fresh and look at, okay, now that I've got this new workload that I'm moving into the cloud or that I want to build in the cloud for AI, what is the right way to do this to leverage cloud the way that it should be leveraged? And so architectural design, security, costs, these are all things that they have an opportunity to think of up front right now, instead of, um, build it and then go back and figure out how to tweak those things, which unfortunately we saw with some of the infrastructure that moved over the last, you know, decade of cloud.
SPEAKER_02:Yeah. Todd, uh, Jack mentions kind of that start-fresh mentality. What can we learn from, kind of, you know, pre-gen AI, when, you know, cloud costs were rising and people were struggling to understand exactly how to optimize their cloud spend? Are there lessons to be learned from that period of time that we can apply now?
SPEAKER_01:Yeah, absolutely. And it's an interesting position to be in, because, you know, I heard a comment before that a dumpster fire on-prem is a dumpster fire in the cloud. All right. So a lot of people have learned, like you said, that the move to cloud brought cost increases they didn't exactly expect. It's because they didn't optimize before moving. So a lot of customers are now saying, well, wait a second, let me optimize more before just lifting and shifting. You'll hear lift and shift; well, you really should modernize. Modernize has really bubbled up to what customers are doing now when they're moving things, thinking about AI, but also just the modern data center and the cloud. You really need to think about, am I optimizing the application for what the cloud can do? Right. And that is, I think, one of the biggest differences you see there. And when you're building something new, that gives you the other benefit. Before, it was all about technical debt, and modernizing is a big, scary word for a lot of IT teams: I have to rewrite my applications, I need to go through and modernize the operating system, and there are compatibility issues, licensing issues. Well, if you're building something new, that's no longer a concern. And you can start fresh with, okay, how should I build this now that I have none of that technical debt, right? How should I go about optimizing this specifically for the cloud? And I think that's why it presents a great, um, opportunity. Because now that you're building something for the cloud, you can ensure you're doing it properly.
SPEAKER_02:Yeah, well, maybe dive a little deeper there. Uh, let's put at ease the audience that maybe is scared by that modernization, uh, term. Sure. What is the right way to start if you've got a fresh start, if you're more in the greenfield, uh, space?
SPEAKER_01:So, some of the very tactical decisions: you hear containerization, you know, that is a big one. Uh, you know, even though you're building for cloud and for cloud native, you should still have an abstraction layer, meaning that if I build something for, let's say, AWS, I should use containers that I could then move to Azure or to Google, right? Or to spread my workload across all of them. You want to build abstraction and be cloud first, cloud native, but also think about portability. And just because it's in a particular cloud provider doesn't mean it's not portable. It is very much portable if you design for it. So when you're making your design decisions, think about things like queuing. I mean, it seems very basic, right? Well, are you using the queue systems that are already built by the cloud provider, or are you going and installing your own queuing system on a bunch of servers? Right? Well, if you use one of their services, it is far cheaper and more reliable than it would be doing it yourself. And so when you're looking at making decisions on how do I make it cloud first, look at each component of your stack, each component of your architecture, and figure out, okay, what service should I use? Should I buy something? Should I build something? Should I run my own? You know, make those decisions. And as I mentioned, now that you're building fresh, all of that is up to you. And cloud is easier to use, but it doesn't mean you shouldn't be thoughtful about what you're doing.
SPEAKER_02:Jack, I'm gonna come back to you here in a second. Um, but Todd, what are the benefits? You know, what you just described, starting fresh there, what are the outcomes and benefits that organizations would realize from doing it that way?
SPEAKER_01:Well, you know, one of the things, and I'm sure Jack, you've heard this, right? From our customers, we hear governance a lot, right? AI governance. You need to think about governance. Well, what does that really mean? Right. And, um, that is where, when you're building new, you can put your model down and say, okay, we are going to follow this model going forward. I'm gonna build a FinOps model where I build chargeback and showback into everything we do. When you deploy a workload, it's going to be tagged properly, it's going to be labeled properly. We're going to ensure it goes through a compliance check, right? Uh, and this is where you can put those foundations and guardrails in at the beginning. You're not having to go back over a million applications and figure out what to take care of after the fact. You do it up front. And then, if you have proper labeling, tagging, et cetera, that helps ensure you have visibility into what your costs are, right, into your architecture. It just helps you build a better outcome.
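The tag-at-deploy-time guardrail Todd describes might be sketched roughly like this in Python; the tag policy and resource shape here are illustrative assumptions, not any provider's actual requirements:

```python
from collections import defaultdict

# Illustrative tag policy; a real organization would define its own.
REQUIRED_TAGS = {"team", "app", "environment"}


def validate_tags(resource: dict) -> None:
    """Compliance check at deploy time: reject untagged workloads up front."""
    missing = REQUIRED_TAGS - resource.get("tags", {}).keys()
    if missing:
        raise ValueError(f"resource {resource['id']} missing tags: {sorted(missing)}")


def showback(resources: list[dict]) -> dict[str, float]:
    """Roll monthly cost up by team tag, the visibility chargeback depends on."""
    totals: dict[str, float] = defaultdict(float)
    for r in resources:
        validate_tags(r)
        totals[r["tags"]["team"]] += r["monthly_cost"]
    return dict(totals)
```

Because validation fails before anything deploys, every resource that later shows up on the bill already carries the tags the showback report needs.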
SPEAKER_02:Yeah.
SPEAKER_01:Because you're able to start fresh and think, okay, how should we have done this?
unknown:Right.
SPEAKER_01:What did we learn in the past? How do we apply that going forward?
SPEAKER_03:I think a lot of organizations don't know the ROI some of these AI projects bring to their company either. And so starting with that as a beginning design principle gives a company a quick look at, okay, here's what this is truly costing me, and here's the benefit I'm getting out of it. But without that, you probably don't understand what the actual workload costs you to calculate a true ROI. So right.
SPEAKER_01:And it's not a one-time thing, right? ROI is not just the first time I put the application in the cloud or build the AI tool. It doesn't have to be AI, right? You should build the flywheel of checking against it. Say, for example, you know, new hardware comes out, right? Nvidia, uh, or any of the other providers come out with, you know, a new chip. Do you have processes in place that will run your existing applications against that new technology and give you the weighted cost of, okay, well, here's what it costs, here's the performance, is it what I should go to? Right? Something new comes out, and everyone wants to rush to go acquire it, but do they have the ROI and the analysis built into their systems to know, is it worthwhile? Right. Or how should I utilize it best? You know, if there is something new, there's a reason for that. How do I best put that into my workloads?
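The weighted cost check Todd mentions, running your workload against new hardware and comparing price performance, might look something like this; the numbers and the savings threshold are illustrative assumptions:

```python
def cost_per_unit(hourly_price: float, throughput_per_hour: float) -> float:
    """Cost to do one unit of work (e.g. one training step or 1K inferences)."""
    return hourly_price / throughput_per_hour


def should_migrate(current: tuple[float, float],
                   candidate: tuple[float, float],
                   min_savings: float = 0.10) -> bool:
    """Adopt the new chip only if it beats the incumbent by enough of a
    margin to cover migration effort. Tuples are (hourly_price, throughput)."""
    cur = cost_per_unit(*current)
    cand = cost_per_unit(*candidate)
    return cand <= cur * (1 - min_savings)
```

The threshold is the point: a new chip that is only marginally cheaper per unit of work may not repay the effort of moving to it, which is exactly the analysis the flywheel is meant to automate.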
SPEAKER_02:Yeah. Jack, knowing that things change so quickly in the AI space these days, does that change how organizations think about their FinOps models? How do they account for GPU usage or token utilization or whatever it might be? Does that change the FinOps equation at all?
SPEAKER_03:I think it actually is exactly like Todd was just saying. I mean, if you've properly deployed FinOps in an environment, you have that unit economics, uh, idea of I know exactly what this feature would cost me, or this new way of doing something would cost me, and now I know if it makes sense to roll it out or not. Uh, to Todd's point, the cloud is releasing new services and capabilities regularly. And if you don't have that visibility into what the true cost is of deploying that new service, um, you're just making decisions maybe based on competitive threat, or thinking you're gonna get left behind, instead of real metrics for your business and the features that are coming on.
SPEAKER_02:Yeah. Before we move on to some other, uh, topics, does this place any other complications or challenges on other parts of the IT infrastructure team, like unforeseen pop-ups or anything?
SPEAKER_01:Uh, so you mean, like, along the lines of anything new that they wouldn't have thought of before? Not, I mean, not really. I think that the fundamentals are going to be the same, right? Even when we went to microservices, you know, a couple decades ago, you still needed to understand, well, what does it cost me at the underlying transaction level, to build proper financial models? I'll say proper is a key word, because a lot don't go down to that layer to figure out, okay, well, what was this transaction costing me in my network before? AI adds in new things you maybe weren't tracking before, but it's still the same rules. It's still the same fundamental ideas. Uh, it's just potentially a higher volume. You know, token economics, uh, I'm sure you've heard that used a lot, but it's like, you know, what does a token cost me? Well, uh, in reality, you needed to have been tracking that in the past. It may just bring it to the forefront: we really need to pay attention now that we're using this tool that drives so much processing on the back end. You know, one of the side effects of AI is it's requiring a whole lot more capacity than anyone had before. And that may be the different thing: it's not adding anything different. It's the volume and how much it's adding that is really impacting teams.
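The per-transaction math Todd is pointing at, now applied to tokens, can be sketched as follows; all prices and volumes here are illustrative placeholders, not real provider rates:

```python
def cost_per_request(prompt_tokens: int, completion_tokens: int,
                     price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Unit cost of one AI transaction: input and output tokens priced separately."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k


def monthly_unit_economics(requests_per_day: int, avg_cost: float,
                           revenue_per_request: float) -> dict[str, float]:
    """The same per-transaction rollup teams should already have been doing,
    now applied to token-driven workloads."""
    monthly = requests_per_day * 30
    return {
        "monthly_cost": round(monthly * avg_cost, 2),
        "monthly_margin": round(monthly * (revenue_per_request - avg_cost), 2),
    }
```

Nothing here is AI-specific; it is the same unit economics as any metered transaction, just at a volume that makes ignoring it expensive.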
SPEAKER_02:Yeah. I'm gonna read a couple of statistics here from, uh, a recent Deloitte survey that I was looking at. They surveyed a bunch of IT leaders and found most of them expect major increases in workloads. Um, in traditional public cloud, around 50% said they expected a big uptick there. But also in edge computing, even more so, 62%. And then emerging cloud providers: nearly 90% figured they'd see upticks in cost and utilization there. Is that where we're starting to talk about, like, the neo cloud providers? I'm hoping you could provide some clarity on the difference between the traditional cloud players and what these neo clouds are.
SPEAKER_03:You know, with neo clouds, they're usually very purpose-built. And so it's an industry or a vertical or a use case that, uh, a company is going to start leveraging one of these neo clouds for. Maybe it's, uh, a compliance or governance requirement for an organization that they have to meet, or maybe it's, uh, geo-specific: data has to exist in a certain location, and they can't find that same thing with one of the other public cloud hyperscalers. In some cases, it may even just be vendor lock-in. They want to be more multi-cloud, so they're trying these other neo clouds. But yeah, to answer your question, I think those neo clouds are very industry or use case specific. And, um, if you don't have an entire environment built up in Microsoft, Amazon, Google, with all of the data already existing in there, it makes it pretty easy to go build and test in those neo clouds. But I think we're finding that if you're heavily invested in one of the other major three, it can make it a little bit more difficult to spin up or use one of those other new pop-ups.
SPEAKER_01:Yeah, it's going to come down to the workload that you're using. So it is not one size fits all by any means. Right. You're not going to say, well, I'm going to use one of the neo clouds for everything I do in AI. That would not make any sense. You really need to look at what does my workload need to do and what is the lifecycle of that workload. You know, for example, maybe I can go and train, uh, a model. Let's say I go train a model in a neo cloud. Well, now I need to productionalize the inference that I'm going to build on top of that model. Well, the neo cloud, uh, has limited services to do that. We may put that over into one of the major CSPs for the inference component. Now, that's just one example, and it's definitely not saying this is how you should do it for those particular workloads, but that's just an idea of where you would have to look at: what am I doing initially? What is the lifecycle of this? And then where should it go? And that is why, you know, we recommend to most of our customers a hybrid approach. You're probably going to be multi-cloud, right? Typically, I recommend it if you're at the scale to do that. Multi-cloud is challenging, right? But you need to look at where should I put the AI workload for the task at hand and where it is in the lifecycle. And that is, when you say what is the difference: well, the difference is the neo cloud is very specialized for a particular part and component of AI, depending on where it is in that lifecycle, compared to the CSPs, cloud service providers, that have 200-plus, 400-plus services. They're much more encompassing and can cover a lot more use cases. So maybe you need to narrow down into one specialty item in this particular area: I could use a neo cloud, but then when I get into this part of the lifecycle, I use a CSP, or vice versa. It's just going to depend on what you're doing.
SPEAKER_03:Like, uh, I travel a lot, and all the Marriotts are the same. I know exactly what I'm getting, and they're everywhere in the globe. But maybe in this specific city I need a boutique hotel, and that's gonna fit exactly what I need for that area. That's how I always do it.
SPEAKER_01:Yeah, a special event going on. Okay, well, I'm gonna go to the hotel in that area for the facilities they have, uh, but then I'm gonna go back to doing business as usual in the rest of them.
SPEAKER_00:This episode is supported by Trellix. Trellix offers extended detection and response solutions to adapt and evolve with the threat landscape. Stay ahead of cyber threats with Trellix's intelligence security platform.
SPEAKER_02:I want to touch on, Todd, you know, you mentioned the multi-cloud approach. I also read an article recently where an analyst, um, from Greyhound Research brought up cross-cloud traffic as a potential, uh, security concern. Um, I guess that's when data's hopping from cloud to cloud. I know you two aren't necessarily the security team here, but what types of challenges, uh, arise from a security perspective when you're talking about that multi-cloud approach?
SPEAKER_01:Well, I mean, today's technology with encryption, right? If you're following that, and that's really a basic concept, right, of using secure transmissions, yeah. Uh, you know, until we get to a point where technology challenges that with, you know, quantum, for example, right? But they already have quantum security products and things coming online that you can use to even better, uh, encrypt your traffic. You know, I don't think it brings anything new to you, because your data and what you send should be secure by nature in what you're sending. And what I mean by that is, you know, if you run into a customer that says, well, what about, um, social security numbers, right? How do I keep the social security numbers I have from being stolen in my database? Like, well, why are you sending that? Why do you have it in your database? Well, I don't really need it. Okay, can you one-way hash it? If you're using it for analytical purposes, you don't need the PII to begin with, right? So, you're asking about the transmission, but you need to look at what exactly you're sending that you are concerned about. Let's remove the concern, and there you go: there's no concern. Right? But I don't think that adding multi-cloud or cloud-to-cloud adds anything if you're following security-first practices to begin with. If you're not, yeah, you're opening yourself up, but you shouldn't be doing that.
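Todd's one-way hash suggestion for PII could be sketched like this in Python. One caveat he doesn't spell out: SSNs have a tiny keyspace, so a plain unkeyed hash can be brute-forced; a keyed hash (HMAC) with a secret held outside the data store is the safer sketch. The pepper value below is an obvious placeholder:

```python
import hashlib
import hmac

# Placeholder only; in practice this comes from a key manager, never source code.
PEPPER = b"org-wide-secret"


def pseudonymize(ssn: str) -> str:
    """One-way keyed hash: the same SSN always maps to the same token,
    so joins and distinct counts still work, but the raw value is gone."""
    return hmac.new(PEPPER, ssn.encode(), hashlib.sha256).hexdigest()
```

Analytics that only need to join or count records work on the token exactly as they would on the raw value, while the raw SSN never has to leave the source system.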
SPEAKER_02:Right. Yeah, no, absolutely. Let's go to, uh, the topic of talent. Uh, Jack, from what you've seen from, you know, talking to a lot of organizations, uh, that we deal with, do they have the talent in place right now to succeed with AI? Or, you know, how are they filling that gap?
SPEAKER_03:Well, not even just AI, but multi-cloud in general. So speaking of talent, that's what I was thinking of as you were answering that. A lot of times, the biggest risk of introducing a new cloud provider or something different is that your entire organization may be well trained on one specific CSP. And as soon as you introduce something else, do they understand all the policies and tooling and services and everything in that new world the same way they would know it in the world that they're comfortable with? And if they don't, that presents a pretty big security vulnerability for your organization. So no, I think almost all large enterprise organizations that we're working with are trying to figure out how they can upskill their talent for multi-cloud, for hybrid cloud, for AI, uh, for all these new services that are coming out. So that's a pretty common theme we hear.
SPEAKER_01:Yeah, and that's where, when I mentioned abstraction earlier, that would be key in the multi-cloud world, right? So if you're building your tools and you're building your applications and AI to run on containers, the teams that are doing that don't have to necessarily understand what is happening at the lower levels. Right. So you have a distribution of knowledge and training that may or may not be affected by multi-cloud, right? And the one thing different that AI brings into play is, okay, as I mentioned earlier, things like liquid cooling and things like the, uh, density of your servers. If you're trying to run everything in that stack yourself, you need to train your engineers on how to run it at the physical level, but you also have to train your engineers on how to actually use the applications; that becomes the advantage of cloud. If you're doing it in the cloud, you're focused on the application layer. So from a training perspective, you're focused on how do I train people to improve my business, and less on how do I train them to run servers. Yeah. So it's the distribution of talent, as well as the decisions you make when you're building, as I said, that can have an impact on how much you really need to teach people, if you abstract things in a properly architected way.
SPEAKER_02:Yeah. It feels like, I mean, maybe not even feels like; I think maybe it's happening right now: the big cloud players are part of the winner's circle as it relates to what's happening right now with AI. You think of even, like, Oracle coming out with this $300 billion deal with OpenAI, or Meta, or whatever it might be. Um, give us a little bit of the landscape. You know, maybe Jack, we'll start with you. What's the roadmap for the big cloud providers right now? And what does that mean for the rest of us in terms of where we put workloads, why we put them in certain places, and how we in general move forward with AI and hybrid cloud, public cloud, whatever it may be?
SPEAKER_03:Yeah, I mean, first I think we're gonna see that they're gonna continue to invest heavily in their own data centers and infrastructure. So continuing to build out massive global scale, uh huge environments that organizations can leverage and trying to reduce the amount of time it takes to spin up environments. So chip availability, infrastructure availability, all of that, it looks like they're gonna continue to dump quite a bit of money into and invest in. Uh, second is I think easier ability to access new services. So whether it's um some type of image recognition service or some new AI capability that they want to quickly move out into their set of tools, into those Lego blocks that clients have access to. Uh, they're gonna continue to invest and build those services to make them easier for clients to quickly uh build into their own applications. Um and then, yeah, I think global reach, you mentioned neo clouds, sovereign clouds, like making sure that compliance, security, governance, and that they've got infrastructure and resources available in all the different regions across the globe where they need to is an area they're gonna continue to invest in. But Todd, what else would you?
SPEAKER_01:Yeah, you know, it's interesting, because you look at the lifecycle of the wave of where we're at, where a lot of people say we're kind of at the peak right now of the wave. I don't know how true that is. It may continue to stay up there for a while, right? But the cloud providers have already come out with the initial, uh, you know, like you said, like the ChatGPTs, or, you know, the different tools that sit atop the clouds that people are now utilizing. So now where their focus is really going is, okay, I've built these systems that can handle AI. I have these systems that can handle building out, uh, agents, et cetera. Now let's ensure that what we've done is enterprise grade and can scale. I've already built the initial application, me talking as a CSP, the initial thing people need to use. Now, when they build their own versions of these things, I need to have the scale ready to go so that they can support 100 million, a billion users. Right? And so if you look at the wave of AI, I'd say typically the CSPs are ahead of it, and they've already built the initial piece. Now they're focusing on, okay, how do I get the core to work? How do I get my agentic systems to function? How do I build, uh, you know, the different gateways I need, or the MCPs, the data layer, all the things that will be used by AI. They are now focused on that next piece, which the industry will eventually catch up to, and the cloud providers will then be thinking about the next layer, right? And that's one of the beauties of going to and using clouds: you know they're thinking about what's coming next. So you can focus on building what we need to do today for business value and not necessarily be as concerned about what's going to happen in the IT landscape.
SPEAKER_03:It's great for the industry, right? I mean, if you've got these huge players dumping billions of dollars into innovation and scale, and we'll all benefit from it, regardless of where the workload exists.
SPEAKER_02:Simple question, but why are we going to all benefit from that?
SPEAKER_03:So maybe you're not going to move your workload into the cloud. The capabilities, the services, what they're learning in the cloud with this massive scale and these services that are rolling out will become capabilities available on-prem, in the cloud, in hybrid environments, wherever it may be. Okay. I mean, think of ChatGPT, right? That transformed a whole bunch of generative AI solutions in our industry. Uh, whatever the next thing is, whether it's from an AI startup that's happening in the cloud today or something one of the major CSPs is building for us to have access to, it will drive massive change for all of us. So billions of dollars of innovation is pretty exciting.
SPEAKER_01:Right. I mean, it's available to you, but you're also benefiting from the research and development they're doing. For example, a CSP, uh, building native, um, liquid cooling for all their GPUs. And you think, well, what does that do for me? It just helps them with their heat problem. Well, GPUs, when they're cooler, operate faster. And they're not increasing the price of what it costs you to utilize it. So you're utilizing the same hardware, but you're now getting better price performance, and you didn't have to do anything. You benefit from what they're building in their own data centers, right? And that's what we mean by everyone can benefit: they're building things that you can take advantage of. Now, you don't have to. I'm not gonna force you to take advantage of it, but it's absolutely out there for you to do so.
SPEAKER_02:Yeah. Absolutely. The hyperscalers are blazing a path that will benefit all of us. Let's bring back the neo clouds here. Todd, do you foresee a future in which the hyperscalers acquire some of those neo clouds to bolster their offerings, or will they introduce their own subset of neo clouds? And what does that kind of consolidation mean for the market right now?
SPEAKER_01:Well, yeah, that's interesting, because if you look at mergers and acquisitions involving a neo cloud, when you buy a company, you're paying a premium on more than just what they have. So as a CSP, I would question why you're buying the premium on everything else when you can just go buy the GPUs, or whatever the neo cloud may have, and put them in yourself. Why pay the premium when you don't need it?
SPEAKER_03:Yeah.
SPEAKER_01:Right. So will they buy one? I don't know. I'd be a great investor if I knew that.
SPEAKER_03:Well, there could be some consolidation within the neo clouds themselves. Maybe that's where it would happen, versus the hyperscalers.
SPEAKER_01:Yeah, maybe they join forces to be more competitive. But then again, it still comes down to capacity versus service. Is a CSP competing on the capacity of the neo cloud? Because that's what the neo cloud brings: I have 100,000 GPUs to use, and I'll offer them to you cheaper for this specific specialty workload. Is that really a challenge for the CSPs? They have capacity.
SPEAKER_02:Yeah.
SPEAKER_01:They have the capacity already, so why do they need it?
SPEAKER_02:Yeah. If they're not going to buy the premium, or likely not going to, does that introduce any risk to relying on these neo clouds?
SPEAKER_01:It comes back to architecture, right? It comes back to what I was saying about abstraction. Don't build your tools to only work in one place. Architect them so they can work in multiple locations, and then it really doesn't matter. That's where you want to be: a consumer who can utilize whatever makes the most financial sense. It doesn't matter if it's a neo cloud, it doesn't matter if it's a CSP. And if it doesn't matter as much, it's a much easier decision for you to move.
SPEAKER_03:It's such a good point. Regardless of whether it's a neo cloud or a CSP, build that application or environment so you have portability if you need it. I mean, we don't know what's going to happen to an AWS, a Google, a Microsoft. One of them may get into the industry you're in, and there's a competitive threat there, and you may want to move out of them, the same way something could happen to a neo cloud that means you need to get out quickly. So yeah, that's a good point.
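The abstraction Todd describes can be sketched in a few lines. This is a minimal, hypothetical example, not any real SDK: the provider names, class names, and `complete` method are all illustrative. The idea is that application code depends on one interface, so moving between a CSP and a neo cloud becomes a configuration change rather than a rewrite.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Thin abstraction layer: application code depends on this
    interface, never on a specific cloud or neo cloud SDK."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HyperscalerProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # In practice this would call a CSP's SDK; stubbed for illustration.
        return f"[csp] {prompt}"

class NeoCloudProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # In practice this would call a neo cloud endpoint; stubbed here.
        return f"[neocloud] {prompt}"

# Hypothetical registry: which backend to use is just configuration.
PROVIDERS = {"csp": HyperscalerProvider, "neocloud": NeoCloudProvider}

def get_provider(name: str) -> LLMProvider:
    # Swapping providers is a one-line config change, not an application rewrite.
    return PROVIDERS[name]()
```

The same pattern applies at other layers (storage, queues, container orchestration): the narrower the surface area your code touches, the cheaper the move when the economics change.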
SPEAKER_02:So we started our conversation bringing us up to speed from pre-generative AI through the explosion of hype in 2022 and '23 and what we've seen in '24. What are we going to see in the next year, or even the next three to five years, that's going to put IT teams in a position to have to respond?
SPEAKER_01:Companies are just now figuring out what they can do.
SPEAKER_02:Yeah.
SPEAKER_01:Right, how this thing works and what it is. Once they start grasping it well and turning it into money, you're going to see the transformation of businesses in a lot of areas. And AI has put a spotlight on technical debt. If your company has a lot of legacy systems, a lot of legacy controls, AI is exposing the weakness there. Either you are going to evolve, or you're going to go away.
SPEAKER_02:Yeah.
SPEAKER_01:Right. So as far as what's coming next, companies are going to have to evolve, they're going to have to modernize, and the cool thing is you can use AI to do that. AI can help you modernize. But if you don't, you're not going to exist anymore. Right now AI is this new thing: okay, how do we use it, what are we doing? The people who figure it out are going to evolve very quickly, and if you don't figure it out, you're going to get left behind. I think that's what happens next: an acceleration of evolution. It's forced evolution, is what it is. Does that make sense?
SPEAKER_02:I mean, so much is introduced when you start to talk about an unknown future. Jack, how can organizations or listeners out there right now be more resilient to that change?
SPEAKER_03:Oh, listening to this podcast and reading all of WWT's research reports. There's your mic drop. No, I think that's what every executive is trying to figure out: making sure they've got strong talent, that they're staying on top of the industry, that they've got good relationships with the partners they depend on, so they know the roadmaps and understand what's coming around the corner and can be ready for it. But yeah, that's a challenge everybody faces.
SPEAKER_02:Yeah. Todd, any thoughts on what organizations can do today, or by the end of the year, to best position themselves for a bit of an unknown in 2026 and beyond?
SPEAKER_01:Right. I think the initial reaction from a lot of internal owners at organizations would be: I want to control this. This is a threat. How do I get a handle on the cost? I have shadow AI happening all over the place; how do I deal with this? And I think you need to step back a little and realize you do want to have control of it, but that's different from saying I want to control it specifically. Ask whether you're setting up the foundation for the success of all the users who want to use AI, because they want to improve what they're doing. That's why shadow AI happens. Not because people want to spend a lot of money, but because they want to improve their workplace, how they do their jobs. So enable them to do that. Start setting up development systems that leverage the same code base, for example. Start evaluating: these are the models you can go and use. Otherwise, every time a new model comes out, nobody is sure whether it's secure. Have you told your organization what they can use? Have you enabled them with tool sets you know are secure, governed, and cover all your compliance requirements? Because if you haven't done that, you're asking for shadow AI to continue. So you really need to focus on enabling your citizen AI developers to do what they want to do, which is to improve your business for you. But you have to lay the foundation, and that is the key thing for your governance. What does that mean? It means: have you identified here's the data set you can use, here are the tools we're going to allow? Everyone doing AI or agent development uses this tool, or here's a pick of three you can use. That's what you need to be doing as a leader in your company: figuring out how to say, okay everyone, here's this great set of tools and data. Go make our business better.
SPEAKER_02:Yeah. I know we're coming up on the end of the episode, but Jack, anything based on what Todd just said? He's talking about ways organizations can accelerate AI adoption, in some cases through cloud, but a lot of it had to do with basics. In your experience, what are some common mistakes organizations are making today, as it relates to cloud and AI, that should be easily avoidable?
SPEAKER_03:Well, without even calling it a mistake: three times in the last week I've spent time with organizations that are trying to properly define those exact things. What is the service catalog of resources my team has access to? What are the policies and controls in place, and are they followed? Not just written down somewhere, but is there actual automation built in to enforce the security policies that were put into place? Without those, you spend a lot of time playing whack-a-mole, turning people off, slowing down innovation. But as soon as they're defined and you're actually enforcing them, we find that teams can move much faster. And the same thing is true with AI as with traditional cloud. Properly defined governance, security rules, service catalogs, those sorts of things will help you move much faster. It also builds trust in the executive team: I can let them move fast because I know we're protected.
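Jack's point about enforcing the catalog with automation rather than written policy can be illustrated with a small policy-as-code check. This is a hypothetical sketch: real environments would typically use a policy engine such as OPA or the native cloud policy services, and the service names and tag keys below are invented for illustration.

```python
# Hypothetical service catalog and tagging policy. In practice these would
# live in version control and be evaluated in the provisioning pipeline.
ALLOWED_SERVICES = {"approved-llm-v1", "managed-vector-db", "gpu-batch"}
REQUIRED_TAGS = {"cost-center", "owner", "project"}

def validate_request(service: str, tags: dict) -> list:
    """Return a list of policy violations; an empty list means the
    request passes and provisioning can proceed automatically."""
    violations = []
    if service not in ALLOWED_SERVICES:
        violations.append(f"service '{service}' is not in the catalog")
    missing = REQUIRED_TAGS - tags.keys()
    if missing:
        violations.append(f"missing required tags: {sorted(missing)}")
    return violations
```

The value is less in the check itself than in where it runs: wired into the pipeline, it replaces whack-a-mole reviews with an immediate, consistent answer, and the required tags are what later make FinOps reporting possible.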
SPEAKER_01:Yeah. And yes, what I said is basic for all of AI, but cloud isn't that far removed from it. Whether I built something on-prem or in the cloud, I can still follow the exact same protocols. The question is whether I take advantage of what the cloud is doing for me. I talk about abstraction a lot; well, cloud has a lot of tools where you can pick which container strategy you want to use. You don't necessarily have to go out and buy a lot of licenses to try different technologies; it's already baked into the system. And this is where, as I mentioned earlier with the flywheel of continual testing, having the cloud out there, and multiple cloud providers, lets you set up automated tests that say: there's this new thing from a CSP. Is it valuable to my chain? As long as you have that automation and those systems set up, you have the ability to try a lot of different things. Another thing we haven't really touched on: if there's a different chip you want to try, do you want to go buy that chip, install it, and try it out? Or do you want to go where they already have every one you can think of, ready for you to test?
SPEAKER_02:Yeah. Right.
SPEAKER_01:And that's really the case with AI: you have a much bigger buffet of things you can try, to see if they give you the value you need, without having to go purchase and support them.
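The continual-testing flywheel Todd describes amounts to ranking each new chip or service by unit economics rather than headline specs. A minimal sketch, with invented instance names and benchmark numbers standing in for the results an automated test run would produce:

```python
# Hypothetical benchmark results for candidate accelerators. In a real
# flywheel these numbers would come from automated test runs in each cloud.
CANDIDATES = {
    "csp-gpu-a":  {"tokens_per_sec": 1200, "dollars_per_hour": 4.00},
    "csp-gpu-b":  {"tokens_per_sec": 2100, "dollars_per_hour": 8.00},
    "neocloud-x": {"tokens_per_sec": 1800, "dollars_per_hour": 5.00},
}

def cost_per_million_tokens(tokens_per_sec: float, dollars_per_hour: float) -> float:
    # Unit economics: dollars spent to generate one million tokens.
    tokens_per_hour = tokens_per_sec * 3600
    return dollars_per_hour / tokens_per_hour * 1_000_000

def best_option(candidates: dict) -> str:
    # Rank by price performance, not by raw throughput or sticker price.
    return min(candidates, key=lambda n: cost_per_million_tokens(**candidates[n]))
```

With these made-up numbers, the fastest chip loses: `csp-gpu-b` generates the most tokens per second, but `neocloud-x` produces a million tokens for less money. Run automatically whenever a provider releases something new, a check like this answers "is it valuable to my chain?" with data instead of headlines.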
SPEAKER_02:Right, right. Well, we'll put a pin in that one and maybe bring you back for another episode to dive deeper there. Todd, Jack, thanks so much for joining the podcast today. That was a great conversation. Thanks again.
SPEAKER_03:Absolutely, thank you.
SPEAKER_02:Okay, great conversation with Jack and Todd. Thanks to them for joining. A few key lessons stand out. First, treat AI as greenfield where you can: design for portability with containers and clear abstraction layers, and modernize before, or instead of, lifting and shifting. Second, install a FinOps flywheel: tag everything, know your unit economics for tokens, GPUs, and services, and continuously test new chips and services against cost and performance, not headlines. And third, govern to accelerate: publish a service catalog defining allowed models and data sets, automate policy enforcement, upskill teams for multi-cloud and AI, and enable citizen developers inside safe guardrails. The bottom line: the cloud is a launchpad, but architecture, economics, and governance are the engine. Build for portability, measure relentlessly, and you'll turn AI from an experiment into durable advantage. If you liked this episode of the AI Proving Ground podcast, please give us a rating or a review. And if you're not already, don't forget to subscribe on your favorite podcast platform. You can always catch additional episodes and related content on WWT.com. This episode was co-produced by Nas Baker, Kara Kuhn, and Amy Riddle. Our audio and video engineer is John Knoblock. My name is Brian Felt. We'll see you next time.