AI Proving Ground Podcast

A Practical Playbook for AI: Driving AI Adoption in the Enterprise

World Wide Technology

Many organizations are chasing AI's promise. But who's actually delivering results? In this episode of The AI Proving Ground Podcast, we take you inside WWT's AI Day STL, where real-world success stories meet tactical, boots-on-the-ground guidance for enterprise AI transformation. It's what we call "Practical AI." This episode explores the four phases of AI maturity, why clear use cases are more valuable than code, and how agentic AI and data readiness are reshaping the enterprise landscape.

Speaker 1:

As AI's breakout continues, generative AI feels like it's hit escape velocity in the enterprise, and yet many organizations still feel like they're spinning their wheels. On today's show, we go inside AI Day, where leaders from WWT laid out what they're seeing in the field: the challenges, the success stories and what they call a practical path to AI transformation. We explore what it really takes to move from experimentation to execution: cutting through the hype, aligning stakeholders and building the infrastructure, data strategy and use case clarity that can turn AI promise into practical, business-driving results. What does it really take to move from experimental hype to practical outcomes? This episode is structured into three key sections: practical approaches to accelerating AI results, from John Duren; agentic AI in action, from Mark DeSantis and Harry Covey; and why data readiness is foundational to AI success, featuring Jonathan Gassner and Bill Stanley.

Speaker 1:

This is the AI Proving Ground podcast from World Wide Technology. I'm your host, Brian Felt. Let's get started. We'll start in an unlikely place: a public high school somewhere in the Midwest. Student discipline problems were on the rise, engagement was low, and then something happened.

Speaker 2:

Sonny doesn't have a desk and you won't find his picture in the annual school yearbook. But what Sonny does have is time for these students.

Speaker 1:

That was John Duren, an AI and data solutions expert from WWT. Now, Sonny isn't on the school payroll. He, or maybe it's a she, doesn't grade papers. That's because Sonny isn't a person. Sonny is an AI developed by Sonar Mental Health, a digital companion available 24/7, trained on a data set under the guidance of mental health professionals.

Speaker 2:

Sonny has worked with hundreds of students at this point. He's been able to recognize signs of loneliness, anxiety and even early stages of depression. And he does this silently, without judgment, without delay.

Speaker 1:

Initially there was skepticism, but as time passed, so did that concern.

Speaker 2:

But the results have been incredible, to the point that Sonny is now distributed across 50 school districts, not to replace the human connection but to ensure that no student falls through the cracks when they're looking for that human connection.

Speaker 1:

This, John said, is practical AI.

Speaker 2:

Not hypothetical, not futuristic, just a real approach to a real problem, serving real people at scale.

Speaker 1:

Sonny's story is powerful, heartwarming for sure, but for most organizations trying to implement AI, the journey is a bit more complex. John said customers usually move through four key stages.

Speaker 2:

Exploratory, experimental, operational, transformative.

Speaker 1:

In the early stages there's often a flurry of experimentation: proofs of concept, hackathons, isolated pilots. But without a centralized strategy, it's easy to stall. So what's the most common pitfall?

Speaker 2:

First, and probably the single most important thing that we start every meeting with, is understanding what your use cases are. This is crucially important. Without a great use case, AI projects are doomed to challenges.

Speaker 1:

We're coming off a year where companies are investing billions of dollars into AI, often without a roadmap. A recent Gartner report warned that by the end of 2025, over 70% of Gen AI pilot projects will never make it into production. Why? Because the use case wasn't clear, or wasn't feasible, or didn't matter to the business. And even when the use case is right, there's another challenge: change, which can make things uncomfortable.

Speaker 2:

AI is evolving almost every day, not just every week. In the process of building many of the tools that you'll hear about today from Worldwide, where we've worked with AI, the technology has evolved underneath our product and development three or four times during the development cycle.

Speaker 1:

And if your plans can't flex with that, you're toast.

Speaker 2:

Design knowing that there are going to be challenges. Heck, design knowing there are going to be mistakes. Be willing to accept those, roll with the punches and move on.

Speaker 1:

Next comes the question of talent.

Speaker 2:

I'd be willing to bet that every organization in this room has already underestimated the talent skills gap in your organization.

Speaker 1:

Sound familiar. According to LinkedIn's 2025 Future of Work report, demand for AI-related skills has outpaced supply 6 to 1 in the enterprise sector. John didn't mince words.

Speaker 2:

Those who embrace AI will definitely have a business-enabling competitive advantage. Those who don't, well, I'll leave that to you.

Speaker 1:

His advice invest now and invest early.

Speaker 2:

Focus on training your folks on AI and how it's going to impact your business.

Speaker 1:

So you've got a good use case, you're adapting to change and you're investing in your people. You're on your way, right? Almost, but not if you leave out what might be the most overlooked pillar: security.

Speaker 1:

Security is critically important at every stage: AI for security, security for AI, security of AI, security against deepfakes. In other words, every organization deploying AI is also expanding its threat surface, and with deepfakes, hallucinations and prompt injection making headlines almost weekly this year, there's no such thing as waiting until phase three to think about this. So how do you know when it's all actually working?

Speaker 2:

Across well over a hundred major AI projects with customers over the last few years, we found that there are really three major milestones that have to be crossed before you can actually say, I'm making an impact on my organization. The first of those is when the company, or the organization, begins to make complementary investments in both talent and infrastructure. AI is different. It's different than anything we've done before. When you start deploying workloads, it requires a serious investment in what that's going to look like. What architecture is needed? How do I feed those GPU cards that are so expensive?

Speaker 2:

The second is when you really begin to leverage internal data. We've seen this, and you'll hear a lot about it during the breakout session where we talk about Atom AI. Mark will probably mention it shortly in his part of our presentation, but adding your internal and private data into your AI infrastructure is where you start really seeing the value, and you find the more data you add, the better the results become. Lastly, and this is really the key metric: as companies begin to rapidly approach 25% of their business processes and workflows being fundamentally impacted by AI, we see this stark turn up in the curve of value out of the AI investments and the momentum. The flywheel effect is taking hold, and you're beginning to see the AI projects are driving the next AI project, and so forth.

Speaker 1:

There's a concept in the AI world that explains part of this: data gravity.

Speaker 2:

The amount of data required to truly feed the GPUs is substantial, and that data absolutely has gravity. It's hard to move. You end up bringing your applications to the data instead of trying to move all your data to the applications. Dealing with that is fundamentally an issue we have to focus on, and that's why data readiness becomes a critical part of the practical approach to AI.

Speaker 1:

It makes sense right the more data you generate, the harder it is to move your stack. So your architecture and your AI need to come to the data. It's why data readiness is the foundational layer of practical AI.

Speaker 2:

Every AI project we've been involved in was at least 80% a data project.

Speaker 1:

In the end, practical AI isn't just a theme, it's a mindset. It's about asking not if you can do something, but why, how and to what end. As John said, putting AI into practice is only as effective as the data that fuels it. And that's where many organizations hit a wall, because the real engine behind practical AI isn't just algorithms, it's data readiness. Clean, connected and accessible data is what turns an AI strategy from theoretical to transformational. Without it, even the most advanced models can't deliver real business value.

Speaker 3:

Most AI projects don't fail because the model wasn't smart enough. They fail because the data wasn't ready for AI.

Speaker 1:

That's Jonathan Gassner, a technical solutions architect here at WWT.

Speaker 3:

And it's not your AI model, but it's your data supporting your AI model.

Speaker 1:

And, more often than not, that data is broken in ways companies don't expect. Data silos: data managed by different parts of the organization.

Speaker 4:

There's no data sharing going on. The data processes are ad hoc at best.

Speaker 1:

It's not that the metrics are wrong. They're just calculated from different data, using different definitions, based on different sources.

Speaker 4:

Turns out that all of the metrics are correct, they're all calculated correctly, but they're all using different inputs. They don't all have access to the same data variables to calculate that metric. You have to have that North Star, that guiding light, so that all of your activities, data and technology, are aligned around the same objective.

Speaker 1:

That alignment starts with understanding how your business is structured and which approach to data strategy makes sense for you. Bill Stanley laid out two options: data fabric and data mesh. Each one serves a different kind of business, and understanding the difference could be the key to unlocking real value. Let's look at two examples. First, imagine a regional manufacturing company.

Speaker 4:

They do one thing, and they do it really really well They've been around for 20, 25 years and they've grown organically, not through acquisition.

Speaker 1:

Operations are centralized, but their data is still fragmented, still hard to trust.

Speaker 4:

This is where data fabric comes in. It's centralization, bringing everything together: the data management, the data governance. We're curating that data and serving it up in a metadata-driven fashion.

Speaker 1:

In this model, the organization pulls together all their data across departments into a single source of truth.

Speaker 4:

Governance, the integration of those diverse data sources. They need to automate that right. We want the data available when we need it. We don't want to have to wait for that. And then we want to wrap it in this warm blanket of data security and compliance.

Speaker 1:

Data Fabric allows this kind of company to align everything from HR to finance to operations under a common, trusted foundation. But what if your organization is different, say a global conglomerate?

Speaker 4:

Here we have a large, diversified company with a long history, very distinct business units and data is often siloed by business. They operate in distinct verticals like mobility, energy, finance and construction.

Speaker 1:

In that case, pulling everything into one place, as Data Fabric suggests, might be a little bit unrealistic.

Speaker 4:

Can you imagine trying to bring all that data into one place from all those businesses and provide the management and the governance for that? That's Herculean. That's where Data Mesh comes in. Data Mesh focuses on decentralization: decentralization of data ownership, data stewardship, federated governance.

Speaker 1:

Each business unit owns and governs its own data, but shares it across the enterprise in a governed, standardized way.

Speaker 4:

Here we're going to serve up data as a product. The businesses will be responsible. They're the owners. They will serve that data as a product to the data mesh. It requires a holistic approach. It's a combination of tools, processes and cultural change. It's not a product.

Speaker 1:

And that brings us to the next piece of the puzzle execution. Once your data strategy is clear, you need to build your team.

Speaker 4:

Maybe it's just a team, maybe it is a COE, but there are a couple of key components you should have. You should have an executive sponsor, some sort of leader championing your efforts. And then the business. We have to have the business involved. We have to have representation across the business. That's how we're going to ensure adoption throughout the organization. Now we're ready for execution. We've got our team in place. We have representation from the business. We can ideate on these business ideas and figure out where we have some synergies across the business. And we'll take our first look at the data as we're ideating through this, and we'll figure out: is the data available to do what we want to do? Does it possess the quality that we need? Are there risks associated with using this data in the way we want to use it?

Speaker 1:

From there you build your roadmap, and that's when data engineering enters the story.

Speaker 3:

Overall, you might be feeling like you're doing a bit of data archaeology, dusting around in your source systems looking for those rare data artifacts. The fix? The data engineer. In a nutshell, data engineering wraps its arms around the information and puts it in an easy-to-find place.

Speaker 1:

Data engineers don't just move data around. They enhance it, augmenting it with context, standardizing formats and scrubbing unnecessary information.

Speaker 3:

Our hero comes in, bundles the data up in a nice, neat little package and delivers it for our AI to consume, removing any unnecessary data points and making sure that our AI model stays lean and clean.

Speaker 1:

Data engineering, Jonathan says, is built on three pillars.

Speaker 3:

First, your data architecture: these are the plans, your end-to-end picture for your AI, and, more importantly, this is why the data will move through the plan. Second, data infrastructure: these are your Lego pieces, the building blocks that support the plan, that support your architecture, and this is how the data is moved through it. And lastly, and most importantly, data security. Just like Bill said, this is the warm fuzzy blanket that wraps around everything.

Speaker 1:

If there's a central theme emerging here, it's this: AI doesn't work without the right data, and getting the data right means rethinking not just your tools, but your people, your processes and your purpose. Bill may have said it best.

Speaker 4:

The key takeaway is, for your data hero, it requires a holistic approach.

Speaker 1:

And Jonathan?

Speaker 3:

Without data engineering, your AI will fail silently and confidently, and I'm willing to bet there's a few of us in this room that have experienced this.

Speaker 1:

As more organizations begin to operationalize practical AI, automating workflows, enhancing decision-making and improving efficiency, all fueled by a solid data foundation, a new frontier is already coming into focus, and it's called agentic AI. Unlike traditional AI models that require constant human prompting, agentic AI systems are designed to act with autonomy, continuously learning, reasoning and even collaborating with other agents to achieve goals. It's a major leap forward, and it's forcing enterprise leaders to rethink everything from architecture to accountability. Mark DeSantis opened the session with a story about Atom AI, WWT's internal AI assistant, and how it's evolved from a simple chatbot to something far more powerful.

Speaker 5:

Through our journey here, what we've done is created an AI assistant. This is actually the second iteration of our AI assistant, and the first iteration is probably similar to what you've seen at your own enterprise, which is a RAG-based AI assistant. That's what we started off with last year; it was what was available at the time. RAG is great for many things, but it doesn't do everything that we want an AI assistant to do. RAG makes a great AI chatbot, but it doesn't do tasks.

Speaker 1:

So what does a real AI assistant need to do? For WWT, the answer came in four parts: reflection, tool use, planning and collaboration. Let's break that down a bit. First, reflection. As it may sound, that's the assistant's ability to self-correct.

Speaker 5:

AI makes a lot of mistakes, and so reflection is this technique to use iterations to refine and self-correct.

Speaker 1:

Then comes tool use, the heart of the agentic system. This means AI can actually interact with software tools and not just return text. Want to summarize a report? Pull live data from Salesforce? The AI assistant needs to do more than just talk about it.

Speaker 5:

The idea is that you can add tools into an agent to extend the capabilities of that agent. This could be to query live data, it could be to update data or create data, carry out a task, things like that. These are the actions that LLMs can now help us use throughout our AI assistants.
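As a rough illustration of the tool-use pattern Mark describes, here is a minimal sketch in Python. The tool names, registry and data below are hypothetical illustrations, not WWT's actual implementation: the model emits a structured tool call instead of prose, and the framework dispatches it to a real function.

```python
# Minimal sketch of agent tool use: the model emits a structured
# tool call, and the framework dispatches it to a real function.
# Tool names and data here are hypothetical illustrations.

def query_open_opportunities(account: str) -> list[str]:
    # Stand-in for a live CRM query the agent could invoke.
    fake_crm = {"Acme": ["Renewal Q3", "GPU cluster expansion"]}
    return fake_crm.get(account, [])

def summarize_report(text: str) -> str:
    # Stand-in for a summarization tool: keep the first sentence.
    return text.split(".")[0] + "."

TOOLS = {
    "query_open_opportunities": query_open_opportunities,
    "summarize_report": summarize_report,
}

def dispatch(tool_call: dict):
    """Execute the tool named by the model with its arguments."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# Simulated model output: instead of answering in text,
# the LLM asks the framework to take an action.
call = {"name": "query_open_opportunities", "arguments": {"account": "Acme"}}
print(dispatch(call))  # → ['Renewal Q3', 'GPU cluster expansion']
```

The point of the registry is exactly what the episode describes: the LLM chooses the action, but real code carries it out.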

Speaker 1:

Third is planning, the ability to break a task into steps and execute them in the right order. And finally, multi-agent collaboration, where one agent can pass tasks to another. That's the magic trick that allows Atom to act more like a team of coworkers than a single AI assistant. Behind the scenes, WWT has built over 1,150 agents and counting, but they're not doing it alone.
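The planning and hand-off ideas can be sketched the same way. Everything below is a hypothetical illustration, not Atom AI's architecture: a planner breaks a request into ordered steps, and a router sends each step to the specialist agent registered for it. In a real system the planner would be LLM-driven; here it is hard-coded.

```python
# Hypothetical sketch of planning plus multi-agent hand-off:
# a planner turns a goal into ordered steps, and a router sends
# each step to the specialist agent registered for it.

def research_agent(task: str) -> str:
    return f"notes on {task}"

def writer_agent(task: str) -> str:
    return f"draft covering {task}"

AGENTS = {"research": research_agent, "write": writer_agent}

def plan(goal: str) -> list[tuple[str, str]]:
    # A real planner would be LLM-driven; this one is hard-coded.
    return [("research", goal), ("write", goal)]

def run(goal: str) -> list[str]:
    results = []
    for agent_name, task in plan(goal):
        # Each step is executed by the agent assigned to it,
        # in the order the planner produced.
        results.append(AGENTS[agent_name](task))
    return results

print(run("Q3 market trends"))
# → ['notes on Q3 market trends', 'draft covering Q3 market trends']
```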

Speaker 5:

We've established an AI center of excellence team and that team is really building the foundation but then enabling this as a capability throughout our organization.

Speaker 1:

It's not just about building AI. It's about empowering others to build. If you've been following AI headlines in 2025, you've probably heard the term Model Context Protocol, or MCP. It's a standard that companies like Anthropic, OpenAI and Microsoft have started adopting to make agents interoperable across platforms. And, yes, WWT is already thinking ahead on this.

Speaker 5:

We've seen Microsoft Copilot say that it's going to support MCP as well. So the idea there is that the agents, and in our case I showed you some of the agents that we've built, can be your own custom-built framework or a third-party agent, each of them using whatever model works best for that agent.

Speaker 1:

In other words, a world where your legal agent runs on one model, your customer service agent on another, and they all speak the same language. It's a vision of modular, composable intelligence. But as with any new architecture, there are hard lessons too. Mark pointed out that earlier versions of the assistant sometimes failed, not because the model was wrong, but because the data was missing, incomplete or flat-out inaccessible.

Speaker 5:

AI is a magnifying glass on top of your data issues, right? So if you have data that's old or data that is insecure and open to people that it shouldn't be open to, AI is going to help you find that really quickly.

Speaker 1:

Remember what Bill and Jonathan told us earlier in the day: that no AI transformation can succeed without a foundation of trustworthy, accessible and governed data. This is that principle in action. When AI encounters friction, it almost always leads you back to the same root cause: the data itself. In the early days of deploying Atom AI, users would ask simple questions, like how many labs are on the platform, and the assistant couldn't answer, not because the question was unreasonable but because the legacy architecture had been designed for static documents, not dynamic live systems. RAG-based models would try to find an answer in the documentation, but if that data wasn't indexed or up to date, the AI was just stuck.

Speaker 1:

So WWT had to rethink how it exposed its APIs, how it structured its data layers and how it enabled real-time queries across internal systems. And that was just the beginning. Then came the security curveballs, the kind you only encounter when real users start experimenting. There were cases where people tried to jailbreak the system, using Base64 encoding tricks, obscure translation loops and adversarial prompts to get the assistant to reveal more than it should. In one case, a user asked Atom AI to translate system context from Italian back into English, essentially sidestepping access controls by speaking a different language.

Speaker 5:

We saw one last week where somebody was asking it to translate the system context that we were providing, and in that example it did respond with the system context in Italian.

Speaker 1:

To counter these types of attacks, the team implemented a moderation API, a guardrail that screens queries in real time, especially for external users. They also added loop protection to make sure that assistants didn't spiral into expensive recursive calls. Because, yes, that happened too.
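Guardrails like these can be approximated simply. The checks below are illustrative placeholders, not WWT's moderation API (real moderation would call a dedicated moderation endpoint): a crude screen that flags suspicious Base64-looking payloads, and a depth counter that stops runaway recursive agent calls.

```python
import base64
import re

MAX_DEPTH = 5  # hypothetical cap on recursive agent calls

def looks_like_base64_payload(prompt: str) -> bool:
    # Crude screen: flag long Base64-looking runs, a common trick
    # for smuggling instructions past keyword filters.
    for token in re.findall(r"[A-Za-z0-9+/=]{24,}", prompt):
        try:
            base64.b64decode(token, validate=True)
            return True
        except Exception:
            continue
    return False

def guarded_call(prompt: str, depth: int = 0) -> str:
    if looks_like_base64_payload(prompt):
        return "blocked: suspicious encoded content"
    if depth >= MAX_DEPTH:
        return "stopped: recursion limit reached"
    # ... an actual LLM call would go here; this sketch just echoes.
    return f"ok (depth {depth})"
```

The two checks run before the model is ever invoked, which is the point: the cheapest place to stop a bad query, or a runaway loop, is at the gate.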

Speaker 5:

You're issuing an LLM prompt to get a tool call response and, as it's responding back with the array of tools that you should use, it's hallucinating and giving you thousands of tools. So it's like what happened there.

Speaker 1:

The result? Conversations costing over $35 per prompt, a massive red flag for any system trying to scale responsibly. With Langfuse as their observability layer, they could visualize each agentic loop, trace tool calls, analyze costs per conversation and pinpoint the moment where something broke.

Speaker 5:

So you can go through each step and understand how long it took for that step to complete, which tools were invoked, what parameters were sent off into that tool and what the tool responded with, trying to really give you a picture of the conversation.
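Conceptually, a trace like the one Mark describes is just structured records per step. A toy version, with invented step names and costs and no real Langfuse API, might look like:

```python
# Toy observability trace: record each step of an agentic loop,
# then roll up cost per conversation and flag expensive ones.
# Step names and numbers are invented for illustration.

COST_ALERT_USD = 35.0  # the per-conversation threshold mentioned in the episode

trace = [
    {"step": "plan", "tool": None, "seconds": 0.8, "cost_usd": 0.02},
    {"step": "tool_call", "tool": "crm_query", "seconds": 1.4, "cost_usd": 0.05},
    {"step": "tool_call", "tool": "summarize", "seconds": 2.1, "cost_usd": 0.04},
]

def conversation_cost(steps: list[dict]) -> float:
    # Roll up per-step cost into a per-conversation total.
    return round(sum(s["cost_usd"] for s in steps), 4)

def over_budget(steps: list[dict]) -> bool:
    return conversation_cost(steps) > COST_ALERT_USD

print(conversation_cost(trace), over_budget(trace))  # → 0.11 False
```

A real observability layer adds persistence, visualization and model-level token accounting on top, but the underlying record-per-step shape is the same.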

Speaker 1:

Every misfire, every hallucination and every false assumption. All of it became fuel for improvement. The team used it to build better agents, sharpen the feedback loop and prioritize fixes based on what users were actually experiencing. So, yes, agentic AI is powerful, but it's also fragile, unless you build it with real-world feedback, real-world use cases and real-world costs in mind.

So what did we learn today? First, it all starts with use case clarity. If you don't know what problem you're solving, AI is not going to save you. And even if you do, be ready to adapt. The tech is evolving fast, and so must your approach.

Second, none of this works without your people. Invest in talent, upskill your teams, because the biggest gap in AI today isn't the tools, it's trust, capability and culture.

And third, data is everything. Broken data breaks your AI. Whether you're running a centralized shop or a global enterprise, you need a data strategy that fits and a team that can make it real.

Bottom line: the path to AI maturity is practical, it's purpose-driven and it's powered by data, people and processes.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights (World Wide Technology)

WWT Partner Spotlight (World Wide Technology)

WWT Experts (World Wide Technology)

Meet the Chief (World Wide Technology)