AI Proving Ground Podcast

Private AI vs. Cloud: How Enterprise Leaders Can Make Smarter Build-or-Buy Decisions

World Wide Technology

Is your organization ready to own AI, or are you better served by leveraging the speed and scale of the cloud? In this episode of the AI Proving Ground Podcast, WWT High-Performance Architecture Director Jeff Fonke and VP of Advanced Technology Solutions Jeff Wynne break down the toughest question facing IT leaders today: should you build or buy your AI capabilities? From the economics of inference costs to hybrid cloud realities, the two Jeffs share practical strategies on private AI, workload orchestration, data readiness and overcoming the enterprise skills gap.

Support for this episode provided by: Google Cloud

More about this week's guests:

Jeff Fonke is a passionate technology leader who leads new and growth solution areas at World Wide Technology, including the High-Performance Architectures that support AI and data. With over 25 years of experience in the tech industry at WWT, he has a proven track record of building scalable data center architecture solutions within WWT's own IT organization and WWT's Advanced Technology Center, and he leverages those experiences to help customers advance along their journey and simplify the complex.

Jeff F.'s top pick: Avoiding an AI Nightmare: Strategies for Scalable IT Infrastructure

Jeff Wynne is a highly accomplished technology professional with an unwavering commitment to client success. As Vice President of Technical Delivery & Engineering, Jeff is responsible for leading teams that help clients achieve their business goals through technology solutions. With more than 20 years of experience in the industry, Jeff has developed specialized expertise in cloud computing, networking, cybersecurity, and software development. He is a strategic thinker and adept problem-solver who excels at delivering outcomes that exceed client expectations. Above all, Jeff is deeply passionate about building lasting relationships and creating positive impact for clients, which he believes is the true measure of his success.

Jeff W.'s top pick: Modernizing the Point of Sale at Jack in the Box to Drive Efficiency, Insights and Growth

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

Brian:

From World Wide Technology, this is the AI Proving Ground Podcast. Today, AI is no longer a question of if or even when. It's a question of how. For enterprise leaders, the conversation quickly gets to a question of buy versus build, but the choice isn't always that simple. The real challenge is knowing why you're building at all, when to go private, when to leverage the cloud and how to make them work together in a world where your AI bill can spike overnight, your data lives everywhere and technology changes faster than your refresh cycle. So today we'll be talking with two of WWT's own, Jeff Fonke, an AI and high-performance architecture expert, and Jeff Wynne, who helps lead our advanced technology solutions team, about why and when enterprises should go private, stay in the cloud or adopt a hybrid AI strategy. You'll learn the key readiness signals, how to tackle technical debt, where the real skills gaps are and why flexible, data-driven architectures will define the future of enterprise AI. So stick around and get ready to learn.

Brian:

This is the AI Proving Ground Podcast from World Wide Technology: everything AI, all in one place. Let's jump in. Yeah, Jeff Fonke, Jeff Wynne, thank you both for joining the AI Proving Ground Podcast. Somewhere along the way, our talent manager decided to book two Jeffs, so it's going to be a little bit of a balancing act here, but to the two of you, thank you. Thanks for joining. Jeff Fonke, second time, right?

Jeff Fonke:

Second time. Yeah, we did one on the Proving Ground last time, so thanks for having me again.

Brian:

Yeah, Jeff Wynne, a little bit of a rookie here, but I'm sure you'll do great. Definitely a rookie, but I'm excited about it. Yeah, we are talking about, broadly speaking, private AI: how to develop it, how to deploy it. As we've experienced it, a lot of conversations with our clients start with that build-versus-buy mentality, and I don't know if that's exactly the right question. Maybe the right question should be: when is the appropriate time to build a private AI? Jeff Wynne, I'm wondering if you have a concise answer for when we should build versus when we should buy?

Jeff Wynne:

Well, I'd maybe add one more W to that, and it's why. Why are you building this? It's the same premise we've seen around any data center: the mix is really what's optimal for most of our clients. There's some value in looking at it like, I have workloads that need to be in a cloud, but there are some workloads that really have to be private, and how do you want to transfer all of that data back and forth? So the why becomes, to me, one of the primary drivers, and Jeff, I'd love to get your take on this as well. It's going to start and end with: what are you trying to accomplish? What are the workloads that you're placing on there? What are the business outcomes that you're trying to achieve with it? And I think if we can start there, we can start to inform ourselves about private versus public and where that goes.

Jeff Fonke:

Yeah, I love the way that you lead in with the why, because we talk about it as a practical approach. It's meeting them where they're at, and maybe starting off with a SaaS-based solution that doesn't need to run in a private data center, depending on their use case, is the right way to go, right? We want to help customers figure out the right way to go about it. And the why is so important because we can't design a factory, if you will, AI factory being an industry term now, without knowing what they're doing, why they're doing it and what the business outcome they're going after is. So it's really about meeting them where they're at, understanding where they are in the journey and then figuring out how we can help navigate where and why they're doing that type of thing.

Brian:

Yeah, I like the idea of doing it the practical way, and that certainly makes sense. I'm wondering: is one or the other easier, build versus buy, or do they each come with a unique set of complexities that's going to make it an uphill climb no matter what?

Jeff Fonke:

When you look at it, customers can go out and do things very quickly in the public cloud, right? It's a very fast track. But over time, what we've seen is that inference costs can go up as the use case gets more adoption within the organization. So at some point there's a bit of a tipping point where we need to start looking at the private focus and potentially building our own cluster out. And then they come to a point where we have to figure out the best place to run that. Do we have the power and cooling needed in our facilities for the use cases we want, or do we look at a GPU-as-a-service type of deployment inside of what the industry is now calling a neocloud, or an AI factory, gigafactory, that type of thing? Sure.
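The tipping point described here is essentially a break-even calculation: flat per-token cloud pricing against the amortized cost of an owned cluster. Here is a minimal sketch of that math; every dollar figure and rate below is an invented placeholder, not real vendor pricing.

```python
def cloud_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Monthly cloud inference bill at a flat per-million-token rate."""
    return tokens_per_month / 1_000_000 * price_per_million


def private_cost(capex: float, amortization_months: int, opex_per_month: float) -> float:
    """Flat monthly cost of an owned cluster: amortized hardware plus power, colo and staff."""
    return capex / amortization_months + opex_per_month


def break_even_tokens(price_per_million: float, capex: float,
                      amortization_months: int, opex_per_month: float) -> float:
    """Monthly token volume at which cloud and private cost the same."""
    monthly_private = private_cost(capex, amortization_months, opex_per_month)
    return monthly_private / price_per_million * 1_000_000


# Illustrative numbers: $2 per million tokens in the cloud, versus a
# $400k cluster amortized over 36 months with $15k/month operating cost.
volume = break_even_tokens(2.00, 400_000, 36, 15_000)  # roughly 13 billion tokens/month
```

Past that volume the private option wins on pure unit economics, though as the conversation notes, power, cooling and skills belong in the equation too.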

Brian:

Yeah, sure. Last table-setting question, Jeff Wynne. When we think of private AI and dedicated on-prem infrastructure, we're usually thinking of larger enterprise organizations. Is private AI a solution that applies to all kinds of organizations, down through the mid-market, or is it just for the big players?

Jeff Wynne:

I think it's for everybody, and we're starting to see those use cases of, let's call it, a single pod or a micro data center that consumes a smaller amount of power but still gives you a lot of processing units. The more evolved and certainly forward-looking companies recognize this has to be a part of their business going forward, and so they're doing what they can to at least trial it. We saw an advertising company just put up a single pod in a colo space to give themselves some of that privacy and really experiment with use cases around how they can better engage with their clientele, leveraging AI to curate the right things back to their clients. So I think it's absolutely a play there.

Jeff Wynne:

I do want to come back to that question of what's easier and harder. From a cloud perspective, it's certainly easier, and that's the attraction to it. You sign up, you put your credit card out there, or the PO out there, and you can start to work with it almost immediately. And this is the value of what WWT has built: you have that accelerator with an AI environment as well for our clients to go with. The question, though, is whether that ease is worth it, for all the reasons that Jeff Fonke here talked about.

Brian:

Is that ease from an integration perspective? Let's talk about easy.

Jeff Fonke:

Easy can come in a lot of ways, right? We haven't talked about data yet. Where's your data at, and what are your data sovereignty concerns? Is it easy to start in the cloud? Yes, there are use cases, but if your data is not in the cloud, it may not be. And if your organization doesn't have the aptitude, the ability or the resources to move the data and bring it to where the AI compute is, then it's not as easy. So you have to really think about it, and I get back to the practical approach: trying to understand the use cases, understanding why they're doing what they're doing, and then their approach to data. The data is the foundation and lifeblood of all of this, and you really have to make sure they've got a strategy around it. That gets into whether it's easy or not in the public cloud.

Jeff Fonke:

Yes, like you said, Jeff, we see it. You swipe the credit card and go, whether it's in Google, AWS, Microsoft, a co-location facility, a gigafactory, if you will, or a neocloud. It really all depends on the resources they have internally to be able to do these types of things. And then that data piece is so important: where it lives and how they're using it for the use cases they have.

Brian:

Yeah, are we seeing any instances where a client thinks they want private AI, or thinks they want the opposite end of the spectrum, and we're in a position where we have to say, actually, you can get away with doing something in the cloud, or, yes, you do need a private instance? Do organizations understand from the beginning where they need to be running these AI workloads, or what should they be looking out for to figure that out?

Jeff Wynne:

I think there's a lot of storming going on in the marketplace right now, and I would say that not knowing, or at least coming in with an open mindset about where you're at, seems to be the norm. A lot of it is for the reasons that Jeff just talked about: the data estates that you have may be disparate already, so you get a little bit over here.

Jeff Wynne:

A little bit over there. The use cases, getting back to the why aspects, start to dictate some of those things, and as they're formulating their strategy, they're recognizing: I've got a pretty long lead time to get into private AI. I've got to build out some facilities, and I might have to procure equipment and hardware with long lead times of their own. A lot of that's going to factor into it. What we don't want anyone to do is allow speed to market to dictate what the right solution is, and this is the practicality I think we keep referring to: pick those things out, and it may be both. It may be, how do I accelerate right now, and that might be a cloud instance with some data migrations, as Jeff talked about, while leading yourself into a longer strategy because the right spot for their particular use cases is private AI. So a little bit of both in that case. Yeah.

Brian:

And I'm thinking that it's not going to be just one solution for the end of time here. I'm looking behind the cameras here and I see a mixing board. Is that a good analogy: sometimes the knob will go up for cloud use, and sometimes the knob will go down for on-prem, or we might raise it back up? Is that accurate?

Jeff Fonke:

You think? Yeah. The way I look at it, there's a bit of a flywheel and things get adjusted. You can use that analogy of the mixing board. Actually, I think there's a slide floating around with that.

Brian:

Maybe that's how it crept into my head.

Jeff Fonke:

The build versus buy discussion, and build versus buy, can mean multiple things, right? It can be: can I buy a SaaS application that already has all the things I want to do for my primary use case? Maybe it's an AI agent or a chatbot or something like that. The answer is yes. There are dozens, if not hundreds, of use cases now where companies have AI in a box, and they'll run it for you as a SaaS application, if you're okay with your data being consumed in a SaaS-like application.

Jeff Fonke:

But when you're looking at the alternatives, I think you're going to see over time that that may not be everything they need. Between that and what they're doing in the cloud, it's an "and" conversation. They're probably going to have parts of their data sets that live in both places, and being able to navigate those types of hybrid environments is going to be critical moving forward. Because, let's face it, most enterprise customers are in both a public and a private cloud, not all in one or the other. When everything with cloud started a decade ago, we saw that transition, and it's really an "and" conversation. Even the largest enterprises in the world are in both private and public.

Brian:

From an infrastructure standpoint, how does everything we've already touched on, recognizing it as a hybrid environment, affect the infrastructure decisions you're making on-prem? What do you have to do to account for a future where you're going to be making different choices?

Jeff Fonke:

I'll take a swing at that and, Jeff, I'd love your perspective on it. We've talked use cases and the practical approach, that kind of thing, but it really depends on how you're developing and whether you've got the skill sets. I talk about four pillars of high-performance architecture, right: high-performance compute, high-performance networking, storage and then workflow. And workflow is where we're spending the most time helping customers understand how to bring these things to life. When you look at that workflow and the automation needed, it really is very complex, and you need to structure what you need for the long term, designing it in a way that doesn't put you in a position where you're pigeonholed into one or the other.

Jeff Fonke:

And I'll just complete the thought with the software and the tooling that you build these things on. If you do things in a very specific way in a public cloud, that may not be easily ported back on-prem, and there may be some really critical work, whether at the data layer or the software layer, needed to port those things back. NVIDIA does a really good job of doing some things in the public cloud with their DGX Cloud solution; that's a good bridge play to NVIDIA architecture on premises. But there are other ways of doing it with completely open source, there are other tooling choices, and you can consider different platforms. So those are things we help our customers make sure they understand before they go all in one way or another. Right, Jeff?

Jeff Wynne:

No, I think you're stating it well. Maybe to demystify it: it's the same challenge they're going to have today with any other IT technical debt. It's just going to be amplified by the cost and intensity of the high-performance environment. Hanging on to six or seven different AD domains is going to hurt you regardless, but it's really going to hurt you when you suddenly have this massive demand from your workforce and your clientele to start using these HPAs. So what I would suggest, folks, is get after that technical debt as quickly as you can. It's going to make this transformation so much easier.

Jeff Wynne:

To go even more practical: if you already have a data center and you're starting to contemplate bringing in your own private AI, you need space, you need power, you need cooling. Okay, no question, everybody's talked about this consistently with HPA. But where are you going to get that from? You can endure the cost of bringing it all in, or you can start to look at whether your environment is structured correctly. Have you already done those consolidations of all of the data storage? Have you looked at your compute environment? Do you have seven different versions of an application running, or do you have a good structure for where those applications live, and have you optimized for it? The really strong piece is that you probably had this on your roadmap already. This is a great catalyst to get it moving forward so that you're ready to adopt that AI pathway when you're ready for it.

Brian:

Yeah, yeah. Jeff Fonke, you had mentioned workflow as the area where you spend the most time with clients. Why is that?

Jeff Fonke:

Well, there's a lot of ways to do it, right? It's the new DevOps. It's really about automating and doing secure multi-tenancy if you're going to build your own private AI. There are ways and tooling and capabilities that sit on top of that, a lot of different open source capabilities. And you think about GPU as a service: being able, as an IT organization, to serve your organization with that type of deployment, really developing a plan around GPU as a service to serve the data scientists and architecture folks who are working and building these proofs of value for our end customers. There's a lot of complexity in that, right? LLM as a service is another thing, and AI gateways, to be able to change LLMs out on the fly.

Jeff Fonke:

We're doing a lot of POCs on that right now, really helping customers navigate those waters. And that's what I mean about workload orchestration, right? Should I use Run:ai and NVIDIA's Mission Control, along with the BCM that comes with that? Or where do I insert, maybe, an OpenShift platform and build on the automation I'm already doing? And then, what tools should I use from an overarching capability? There are tools like Apollo and Arna and Rafi where we're helping customers navigate and demystify the complexities and what one does versus the other. So that's the one that's really coming up a lot. There's a gap there because it's kind of a new space to live in.
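The AI gateway idea mentioned above, swapping LLMs out on the fly behind one entry point, can be sketched in a few lines. This is a hypothetical illustration of the pattern, not any specific product's API; the backend names and the fake completion functions are invented.

```python
from typing import Callable, Dict, Optional


class LLMGateway:
    """Route completion requests to a swappable backend model."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}
        self._active: Optional[str] = None

    def register(self, name: str, complete: Callable[[str], str]) -> None:
        """Add a backend; the first one registered becomes active."""
        self._backends[name] = complete
        if self._active is None:
            self._active = name

    def switch(self, name: str) -> None:
        """Hot-swap the model: the next request uses the new backend."""
        if name not in self._backends:
            raise KeyError(f"unknown backend: {name}")
        self._active = name

    def complete(self, prompt: str) -> str:
        """Callers never change; only the gateway's routing does."""
        return self._backends[self._active](prompt)


gateway = LLMGateway()
gateway.register("model-a", lambda p: f"[model-a] {p}")
gateway.register("model-b", lambda p: f"[model-b] {p}")
gateway.switch("model-b")  # clients keep calling gateway.complete() unchanged
```

The point of the pattern is that model choice becomes an operational decision behind the gateway rather than a code change in every client.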

Brian:

Yeah, we've also mentioned skill sets a couple of times, and a gap of skills. Where do we see the biggest gap right now, with our clients or the industry in general? I'm thinking that gap is going to be even more in effect because, once you have a solution, you have to have somebody who's going to manage it and tend to it on an ongoing basis. Where do you see the skills gap right now?

Jeff Wynne:

I'll start with this: our hypothesis about where the skills gap would come in has not necessarily played out. What we're finding is that most of our clients have very strong talent at almost that data scientist level. They know what they want to accomplish, and in hindsight that makes sense. If you're an oil and gas company, you need to understand geological formations to identify where you're going to pull your next reserve from. If you're a pharmaceutical company, talking about drug interactions is something you've been doing for decades; you're just up-leveling that conversation. So what we're finding, certainly with our larger clients, is that they've got a really strong understanding of their business, no questions there, and they've started to apply that to those use cases. At the same time, and Jeff kind of mentioned it, there are so many new technologies flooding into the marketplace, with new revisions of the software and new revisions of the hardware coming out as quickly as possible, because the performance cycle is so fast.

Jeff Wynne:

That's putting a ton of pressure, I would say, on the traditional IT teams we're seeing. So, as Jeff mentioned, you put this thing in there and you're like: now what? How do I configure this optimally? If you run into trouble, do you have folks with that experience? And they don't. So a lot of times, where we're finding we can add value to our clients is coming in there, because we see these things dozens and dozens of times. We can help optimize it so that we can unlock their data scientists and unlock those use cases to go after.

Jeff Wynne:

Yeah, and I think there's a second set of that skills gap, probably in the broader organization. What we've observed, even in our own workforce, is that a lot of folks are incredibly interested in it. They're interested in your own homegrown tools, but they're also out there with Claude or Grok or anything else that's sitting out there, and they're trying to learn from that perspective. When they come back into the homemade tools, or the corporate-approved tools, those are different interfaces, so how do we optimize for them to be able to consume that as well? So one of the areas where we're pushing quite a bit, like, here's an investment you need to make, is: what's the center of excellence for your organization? What's your strategy for adoption through the organization?

Jeff Wynne:

And right now it's not a key focus for many, because they have it very defined and very small: here's what we're going to do, here's the group that's going to have it, we're air-gapped from everybody else, this is our private playground. But that's going to change quickly, so I really, really encourage folks to start to think about: where are you going to go, how are you going to enable it, and what is the training path for everyone to come on board with it?

Brian:

And is that because, whatever the use case might be, it's going to naturally expand itself, or there's going to be other use cases that can plug into it and you're going to need skills there and there and there?

Jeff Wynne:

Absolutely. I'm sure we all have stories like this. With my father, it's like: well, you know something, AI, I don't even understand it. It's just a better search function. So his entire world is around an enhanced, I think he uses Google, so an enhanced Gemini search function, right? What he's not doing is querying about his next business. He's not querying: here are my likes and dislikes, bring me a travel plan curated specifically to me. So here's an individual who's going to use it, and we all, and I know I challenge myself with this every day, am I using it as effectively as I can? Now take this out a year, where every aspect of every business we work with is going to have some form of AI. Are your employees using it the best they possibly can? Because if they're just using it the way my father is, as an enhanced search function, we're missing so much value.

Jeff Fonke:

Business value, 100%. It's detrimental to miss that. And I love it, because skills gaps can mean a lot of things. It can mean I need a PhD in computational neuroscience to help me get this project off the ground, and it can mean I need to learn how to prompt more effectively for myself, right? I was just on a call today about prompt engineering. We do a fantastic training class around that, just to get our own internal employee base familiar with chain of thought and reasoning, familiar with how to use an agentic approach, how to prompt with the right mindset. It was very specific. So there are skills gaps within the organization as an employee base, and there's also a skills gap in how to bring AI to life inside an organization most effectively.

Jeff Fonke:

And I think, Jeff, you did a very good job explaining both lenses. What we see is twofold, right? I run an AI practice, so finding the right resources who can help educate, build things and do the right things, who know MLOps, that's a skills gap for our customers, and we help with strategic resourcing in that case, bringing the right resources into an organization. But we've also built learning paths, where we have to get an AI driver's license to be able to do things within our own organization. So it's very broad and it can mean a lot of things, especially as inference continues to scale in organizations and in our personal lives, like your father's example. There's such a wide array, and if you're not using it and prompting every day, then you're probably not learning the techniques that are out there today.
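As one illustration of what a prompting class covers, a chain-of-thought request can be as simple as a template that asks the model to reason in steps before answering. The wording below is an invented example, not WWT's actual training material.

```python
def chain_of_thought_prompt(task: str, max_steps: int = 3) -> str:
    """Wrap a task in a template that asks for stepwise reasoning first."""
    return (
        f"Task: {task}\n"
        f"Think through this in at most {max_steps} numbered steps, "
        "then give a final answer on its own line starting with 'Answer:'."
    )


prompt = chain_of_thought_prompt("Estimate our monthly inference spend")
```

The same template approach extends to agentic prompting: the wrapper changes, the underlying task text does not.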

Speaker 4:

This episode is supported by Google Cloud. We are experiencing one of the most significant shifts in history, where AI is creating entirely new ways to solve problems, engage customers and work more efficiently. Google Cloud is ready to help organizations build a new way forward in an increasingly AI-driven world.

Brian:

I do want to get back a little bit to build versus buy and the hybrid. What are some of the calling cards or signals that would tell an organization: I am ready? Because readiness comes up so much when it comes to AI. How would an organization recognize that it's ready to start building a private solution, or that it's ready to put workloads in the cloud?

Jeff Fonke:

I'll give you one quick example, because it's an easy one. We've had customers come to us with a bill that is massive, saying: we can't keep doing it this way, there's got to be a better way. Yeah, that's the easy one, the tactical one. If you're spending that amount doing it that way, it's a financially very easy equation to say you can do it two other ways and be X amount cheaper.

Jeff Fonke:

I think the other one I would point to is data readiness. If you come into it with an open mind, you've already curated two or three use cases, you know the data that's needed and you've maybe done some data work. You mentioned genomics, or maybe it's oil and gas, or whatever: you've got the seismic data, you already know your business and you know the use case you want to do. That's one where we can say: all right, there's an equation. What are your input parameters, how many tokens per second, and what does the scale look like? We built a sizer in our lab where we can go through some of those things based on use case.

Jeff Fonke:

So I think if you've got a really good angle on your data, we can help you say: this is what we expect it to cost. And we're doing that right now, whether it's working with the neocloud providers we work with or the public cloud. The thing is, everything's a bit of a snowflake, because everybody's got a different cost structure and may be doing certain things differently. But really, to me, it's centered around use case and data, and then ideally having a good idea of what you want to do. As you mentioned earlier, Jeff, we can start very, very small, with just a handful of GPUs, if there are a few things you want to do and your data tolerance isn't there for the public cloud.
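A sizer like the one described boils down to working backward from target throughput. Here is a back-of-the-envelope version, where the per-GPU token rate and the utilization derating are placeholder assumptions rather than benchmarks.

```python
import math


def gpus_needed(target_tokens_per_sec: float,
                tokens_per_sec_per_gpu: float,
                utilization: float = 0.7) -> int:
    """Round up to whole GPUs, derating for realistic sustained utilization."""
    effective = tokens_per_sec_per_gpu * utilization
    return math.ceil(target_tokens_per_sec / effective)


# Example input parameters: 50,000 tokens/sec aggregate, assuming a
# hypothetical 2,500 tokens/sec per GPU at 70% sustained utilization.
count = gpus_needed(50_000, 2_500)  # 29 GPUs
```

A real sizer would also account for model size, context length and batching, but the shape of the calculation is the same: use case in, GPU count out.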

Jeff Wynne:

And maybe I can take that from an operational viewpoint and add to it. To get very practical, the first question is: do you have a strong data center strategy? What's your business continuity strategy? What's your disaster recovery strategy? That's going to inform where you want to place this. Once you identify where you want to place it, whether that's going to a colo or using excess data center space you already have available, now we can start to plan for it.

Jeff Wynne:

And so, taking a lot of the things that Jeff and his team will walk you through, the why, the practicality of it, we can start to size some things up. A handful of GPUs is a much simpler problem to solve for. If we need to go with a SuperPOD, or something even greater than that from a size and GPU perspective, now we really have to optimize that data center. So I'd say that's the second piece we want to walk through: ideally, geographically, we want to lay these things out here, this is where your workforce is, this aligns with your current strategies. Then we work through that analysis of the data center. Do we have the right amount of power? Can we support the weight of this and everything else that's coming in there? And this is where we start to have those next-order conversations: where does the data live, can we put it near the storage arrays, and so on up the stack, having very practical conversations at each layer to make sure they're actually ready to consume this when it goes live.
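The power-and-weight check described here reduces to simple arithmetic against the facility's limits. A sketch, with all per-server and facility figures invented for illustration:

```python
def fits_facility(servers: int,
                  kw_per_server: float,
                  kg_per_server: float,
                  available_kw: float,
                  floor_limit_kg: float) -> dict:
    """Compare a pod's total power draw and weight against facility limits."""
    draw = servers * kw_per_server
    weight = servers * kg_per_server
    return {
        "power_kw": draw,
        "weight_kg": weight,
        "power_ok": draw <= available_kw,
        "weight_ok": weight <= floor_limit_kg,
    }


# Example: 8 GPU servers at roughly 10 kW and 120 kg each, against an
# 80 kW electrical feed and a 1,000 kg floor-loading limit.
check = fits_facility(8, 10.0, 120.0, 80.0, 1_000.0)
```

Cooling capacity and redundancy would get the same treatment in practice: quantify the demand, compare it to what the room can actually deliver.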

Brian:

Right. And moving into the future, how is that balance going to change? I know there are always going to be considerations on where and how to do it, but are there any signals in the market right now that we're going to be running more workloads in the cloud versus on-prem? Does that remain to be seen? Or is that even a question that we're tracking?

Jeff Wynne:

So for me, at least in the conversations I'm having, it remains to be seen, due to a lot of the things we talked about before. Again, the whys continue to evolve and continue to change. Now, my personal gut is that you're going to see a good, healthy mix, starting with the SaaS work, and that's manifesting already: Google's embedding Gemini into all of their work products, and we've seen that with Copilot. Whether it's a CRM or any other type of application, you're going to see embedded AI that's likely to live in the cloud, and it's going to be very meaningful for each one of our clients, I think.

Jeff Wynne:

Then, secondarily, it's going to be around the economics: what it costs to move that data around is going to dictate a lot of the decisions. And last, but certainly not least, security. What's the security and the proprietary nature of each one of these points of data? How much do they want that data to leave the boundaries of their own four walls, and what can they trust and not trust outside of that? We see a lot of different opinions starting to form on that. I think it's loosening a little over the past year, with more allowance for those GPUs to live outside of air-gapped environments, but it's still going to be a key consideration that we're hearing from a lot of our clients.

Brian:

Jeff Fonke, you mentioned a while back that it all starts with data. No earth-shattering news there. But then you talked about how it's all about who gets the economics of moving that data around right. I'm curious: within the AI Proving Ground, which is the namesake of this podcast, a lab environment we have here at WWT for clients and organizations to test out AI solutions, are we seeing anybody, without naming names per se, doing anything innovative with how they tackle the economics of how data moves?

Jeff Fonke:

Yeah, yeah. I want to revisit something real quick first. I completely agree with you on where we're at in the enterprise; I think it's still yet to be seen. Completely agree there. We're still early and it's moving so quickly.

Jeff Fonke:

There are so many things happening so very quickly. Meaning, last year we were talking about retrieval-augmented generation. Now we're talking about A2A, agent-to-agent, and Model Context Protocol and all the things. And that's a good lead into the question you just asked: it's moving so quickly, how do they get their arms around it? Well, I'll tell you that it's moving fast for everybody, even us: the leads on our teams, the architects, the data scientists, and the things we're doing in our AI Proving Ground, and this is the AI Proving Ground podcast. It's an area where we do a lot of testing, a lot of integration work, a lot of thought leadership, thinking through the right ways of doing things. Right now we're testing AI gateways. We're comparing and contrasting various tools, to be able to swap these out, so we can tell our customers with confidence that we've touched and worked with these products and these are the ways they work.

Jeff Fonke:

But to get back to your data question: we're doing a lot of data integrations with SaaS-based applications like Copilot, but we've also built our own agentic frameworks and platforms with Atom AI, and you'll see that if you go to the AI Proving Ground. That has evolved immensely over the last couple of years, and it's now a fully agentic platform that we've integrated with multiple data sources. And the work we do there, we don't just do in a vacuum for our own internal knowledge. We like to share it with our end clients. Not just that, but the demos we build and the lessons we learn, we're trying to apply to those industry verticals. We're not going to be better than GM or Ford at manufacturing automobiles; they know their business. But what we may be able to do is help them specifically with the process by which they bring the use cases to life, with the data they've got and the awesome expertise they have in the manufacturing field, just as an example.

Jeff Fonke:

So the Proving Ground is a place where, yeah, we've got the blinky lights, we've got the storage arrays, and we can talk about the differences between all of that. But when you talk about data mobility and the extract, transform, load (ETL) needed for moving data around, we're doing all of that with our own internal projects, so we have it from both lenses, if you will. We can talk tech all day long; that's been our bread and butter for the last couple of decades. But I would say, when you marry the work our data science teams are doing to the technology we have in the Proving Ground, that's a pretty compelling place to go get information. So yes, those are just some examples of our internal work and the way we're doing it from a technology perspective.

Brian:

Jeff Wynne, Jeff Fonke had mentioned that everything's moving so fast it's so hard to keep up, and that's something we hear from clients time and time again. Everybody just wants to feel like they understand and have their arms around this thing so they can make informed decisions. Not necessarily a tech question here, but I'm curious: how do you find time, or how do you go about making sure that you're feeling up to speed on everything AI?

Jeff Wynne:

I think forums like this, these are critical.

Jeff Wynne:

There are so many new thinkers in this space, and staying abreast of the podcasts and those folks is key. That's, I think, the wonderful time we're in: people want to share this. And for everyone out there thinking, oh my God, I'm falling behind:

Jeff Wynne:

As Jeff stated, we're all falling behind, because that's just the speed of this technology. But at the same time, I think it creates a camaraderie within the industry, for all of us to share what our best knowledge is at that moment. So, to get your hands on it: the simplest thing, you talked about the prompts and those spaces. Sure, we can create an entire library of prompts out there. That's an easy way to enable an entire workforce, a smart way to enable an entire workforce. But we also need to encourage our workforce to go out there and play and get their hands dirty. That's a simple way. And then, as you move up the stack and get into open environments like what we have here, that's going to give you a very visceral experience with it, which I think is going to be better than any of the books, because you can't even publish a book fast enough to keep up with this.

Jeff Wynne:

So those would be the two ways. Number one is just listening to what other thought leaders are saying, and two is getting your hands dirty as much as you can in the space. How about you, Jeff? What are you doing to stay up to speed?

Jeff Fonke:

Yeah, I mean, I learn from the folks that work in the organization, just understanding what we're doing internally, just immersing yourself in it. It's moving, like I mentioned earlier, and you doubled down on that. Somebody in our organization, I think it was Tim Brooks, said a couple of years ago that the worst AI you're going to use is the AI you're using today. And we're fortunate, because we are being pushed from our lines of business and everywhere else to be the best in AI, and I think having that mindset from the top down is super critical for our organization, and for organizations alike that want to adopt it. You know, just look at the kids in school. To me, they need to be using generative AI every day of their life, because they're going to use it in the real world. It shouldn't be used as a mechanism to cheat.

Jeff Fonke:

Of course, obviously we can laugh about that, but that's a lot of administration inside of these organizations. My wife's a teacher, so I get to hear about it a lot. And there are risks there, but there are also risks for our students in not using and leveraging it. As business folks in corporations, whether you're in healthcare, manufacturing, retail or oil and gas, you should be using it to be competitively advantaged in the industry you're in. So yeah, those are just a few thoughts I've got. We're doing so much right now in our AI Proving Ground; that's where my team spends a lot of its time with the teams that run it, and it really has been a valuable place to be able to explore, build and do these demos for clients to see the art of the possible.

Brian:

Yeah, Jeff, I liked how you mentioned that things are moving so fast you may feel like you're behind, but the truth is you're probably not as far behind as you may think. Sure, very big enterprises are doing incredible things, but by and large, you're probably not as far behind as you think you are. Just to end us here real quick, and I'll ask both of you, but Jeff Wynne, I want to start with you: what are the one or two things an organization or an IT leader can do right now to get themselves in position to take advantage of AI, as it relates to buy versus build versus cloud?

Jeff Wynne:

I would say the first piece is: look at the assets that you have right now. What's been sitting out there that you're sweating just a little bit longer than you should have been? How can you clean those things up and get yourself ready? Something as simple as looking at the entire ecosystem, consolidating that storage, getting your data strategies correctly put together: that would be element number one, and that way you can start to have the conversation from a data-centric position and make those calls.

Jeff Wynne:

Number two: look at the people, and how you're enabling them. As Jeff well stated, it's up to each of the leaders to set that example. Are you using AI in your day-to-day? Because when you make that investment, and it's going to be a large investment for your firm, not just in the capital expended but in the transformation it's going to make within the workforce, are your people ready to absorb it? That's what's going to maximize the ROI. So, looking at it from those two lenses: if I have only two, those are the two that I would choose.

Jeff Fonke:

Yeah. "We've always done it that way." "It's never going to work." Those are the most non-innovative words ever spoken. So, doubling down on what you said there: make sure you're constantly looking at different ways of doing things. Obviously, modernize your infrastructure today. If you don't have plans already in place and you're just doing the traditional refresh cycles and waiting it out, well, it takes two years to get power and cooling in, in some cases, depending upon where you live and where your data center's at. So you're going to be behind the eight ball even further if you're not looking at better ways of doing things with fewer resources inside your data center. If you're running spinning disk in your storage array, for example, just a very tactical, pointed example, you've got to get that stuff out of there. You've got to modernize. You should be running all-flash; you should be running a very dense storage platform that takes a lot less power and rack space, to make room for the compute needed. That's the kind of very simple tactical thing to look at. So I'm agreeing with you there; if I had to pick two, I'd probably agree with Jeff on the two he mentioned.

Jeff Fonke:

One other thing: leverage our AI Proving Ground. I don't want this to be a commercial for it, but I would say we've gotten hands-on with a lot of different things. We've written some really compelling articles from the lens of our CEO, our CISO and our CTO as well. So I think there are some really good things there, and those articles are like painting the Golden Gate Bridge: once you finish, you start back over, because there are so many new things that have been released in the last month. It's an iterative thing that we continue to do. So just the research and the things we do at WWT, I think, can open eyes. Not that we're the end-all, be-all, but we work with just about every one of the large OEMs out there, and we're seeing the things they're doing, working hand-in-hand with their products, and bringing people, process, silicon and software all together in one place.

Brian:

Yeah, love that. Take advantage of the information that's out there, whether it's your own news sources or podcasts or, like you mentioned, the WWT research we have on WWT.com. It could be the CTO guide to AI, the CEO guide to AI, the CISO guide to AI, and lots of other guides and research reports we've published. We are running out of time, Jeff and Jeff. I don't think I mixed up any of the names, so kudos to me there. But I appreciate the two of you taking the time to join me here in the studio, and we'll have you back soon. Sounds good. Thank you so much. Appreciate it. Okay, lots of great context from the Jeffs.

Brian:

After our conversation, I walked away with three key lessons. First, start with the why. Technology decisions should be driven by clear business outcomes, not by fear of missing out or speed to market. Understanding why you're deploying AI, and the data, workflows and outcomes it serves, will determine whether private, public or hybrid makes sense. Second, readiness is multilayered. True AI readiness isn't about having GPUs in a rack. It's having clean, consolidated data, a modernized infrastructure and people who know how to use AI effectively, from engineers to everyday employees. Addressing technical debt early will accelerate AI adoption later. And third, hybrid is the new default. The future isn't all cloud or all on-prem; it's a mix. Costs, data sovereignty, security and evolving use cases will keep shifting the balance. So leaders should build flexible architectures, experiment in controlled environments like the AI Proving Ground, and treat AI deployment as an ongoing cycle, not a one-time project.

Brian:

If you liked this episode of the AI Proving Ground podcast, we would love it if you gave us a rating or a review. And if you're not already subscribed, don't forget to subscribe on your favorite podcast platform, or you can always catch additional episodes and related content on WWT.com. This episode was co-produced by Nas Baker and Cara Kuhn. Our audio and video engineer is John Knobloch. My name is Brian Felt. We'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights (World Wide Technology)

WWT Partner Spotlight (World Wide Technology)

WWT Experts (World Wide Technology)

Meet the Chief (World Wide Technology)