AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

Google Cloud’s New Math for AI

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 58

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 31:36

Enterprise AI just grew up—and the math has changed.

In this episode of the AI Proving Ground Podcast, Francis deSouza, COO of Google Cloud, breaks down why the era of scattered AI pilots is over—and why the winners are moving fast, focused, and top-down.

We unpack how leading enterprises are shifting from “let a thousand flowers bloom” to a tight portfolio of high-impact AI use cases that actually ship, scale, and deliver ROI. Francis explains why data strategy—not model choice—is the real competitive advantage, how agents need secure access to data where it already lives (no massive migrations required), and why AI is quietly rewriting the enterprise attack surface.

The conversation also gets real about people. The next generation of AI-ready companies won’t just hire specialists—they’ll build AI-fluent teams where every employee is bilingual in their domain and AI.

If you’re building for 2026 and beyond, this episode is your signal:
 less hype, fewer experiments, more execution.

More about this week's guest:

Francis deSouza is Chief Operating Officer and President of Security Products at Google Cloud, where he leads operations to scale the business and oversees Google Cloud's global security portfolio, spanning products, threat intelligence, consulting, and governance. He joined Google in January 2025 after three decades as an engineer, technology executive, entrepreneur, and investor. Previously, Francis served as CEO of Illumina and President at Symantec. He has co-founded three companies and serves on the board of Deel. Francis holds BS and MS degrees from MIT and is driven by technology's power to improve lives.

Francis's top pick: The AI Multiplier: Creating a Sustainable Competitive Advantage with Google Cloud COO Francis deSouza

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

SPEAKER_02:

Hey everybody, before we jump in, a quick favor. Here at Worldwide Technology, we believe the future belongs to those who understand how to apply AI responsibly and at scale. And that starts with conversations just like these. If you're finding value in the conversations we've been having on the AI Proving Ground podcast, please take a moment to follow or subscribe wherever you're listening. It helps you stay up to date with every new episode and it helps others discover the show. And if you've got a few spare moments, we'd love if you left us a quick review or rating. Those reviews make a huge difference in helping more people discover the show and learn how to put AI to work inside their organizations. So thanks in advance from all of us here at the show, and with that, let's get started. For Worldwide Technology, this is the AI Proving Ground Podcast. As we settle into 2026, AI is no longer inching forward. It's accelerating. New models land every few weeks. Agent frameworks evolve faster than most companies can evaluate them, and the pressure on CIOs and CTOs is growing louder to drive ROI, mature go-to-market strategies, and simply keep pace. Oddly enough, in many organizations, AI can feel simultaneously everywhere and nowhere. A few leaders have a clearer view of this moment than our guest today, Google Cloud COO Francis D'Stoza. He sits at the intersection of global enterprise, hyperscale infrastructure, and the rapid evolution of the AI stack from chips and models to agents to the platforms that knit them all together. Francis is going to share his insight on why the organizations winning with AI are narrowing their focus, picking a handful of high-impact use cases, and backing them with serious change management. We also dig into hot topics like agents, data readiness, neo clouds, security, and why there's no such thing as an AI strategy without a data strategy. A quick note: Francis actually appeared on an episode toward the end of last year alongside WWT CEO and co-founder Jim Kavanaugh as part of his participation in WWT's Business Innovation Summit. And we absolutely encourage you to go check that episode out. But for now, let's jump in with Francis. Francis, welcome to the AI Proven Ground Podcast. How are you today?

SPEAKER_01:

I'm doing great. Thanks for having me, Brian.

SPEAKER_02:

Excellent. Thank you for making the time. I know you have a super busy schedule. Um, I do want to dive right in. I mean, obviously a lot's going on right now on the AI landscape, whether it's related to adoption, innovation, cloud in your case. Tell me, you know, where do you think we're at right now? What are what are organizations getting right? What are maybe they getting wrong as it relates to driving forward to realize kind of that future that everybody wants?

SPEAKER_01:

You're right. It's an incredibly exciting time to be in tech and in enterprise tech and all driven by what's happening with AI. AI is moving into enterprises, uh, frankly, at a pace that we've never seen before, right? If we were talking 18 months ago, uh, we'd be talking about how companies were in pilots and sort of evaluating AI. And only 18 months later, we have so many companies that are in production using AI in customer-facing scenarios and internal-facing scenarios across so many use cases in the company. And a lot of it is because there is a hard ROI people are getting from rolling it, rolling out AI in their company. And so how is it playing out and what are people getting right or wrong? Well, one of the things we're we're hearing from customers a lot is that companies initially tried the approach of letting a thousand flowers bloom. And right, we're saying, let's just open it up, let people start playing with the frameworks, the models, you know, build their own agents. And while that approach was very helpful in terms of increasing AI literacy in a company, and that's very important, by the way, companies of the future are going to need bilingual workforces where they're where their employees not only understand their domains very well, marketing, sales, coding, but they'll also need to know how they use AI very well. So they'll need fluency in both. So allowing a thousand flowers helped with that. But what it didn't help with was deliver the big ROI that people were looking for. What did that instead was an approach where companies would pick five to seven use cases and then top-down drive adoption. Now, those use cases could vary from things like you know helping the developers in their company be more productive in writing code. It could be call center scenarios to actually help external customers get to resolution more quickly without having to talk to an individual, or it could be internal use cases, like, for example, uh managing the RFP process or contract management or vendor management. And so what companies have told us is where they've really seen the big ROI come is when they pick those five to seven use cases top down and roll it out in their company and invest in the workforce training and the change management associated with that. Where they do, the benefits have been huge. In in places like the security operations centers, for example, companies are reporting very dramatic improvements in their ability to identify, you know, bad actors in their environment, to kick off work off workflows and remediate much more quickly than they could before. Yeah. So that's what we're hearing.

SPEAKER_02:

No, absolutely. I mean, and scaling down those use cases to get to a more focused, purposeful, that you know, you get those out the door, then hopefully you're starting to work towards that flywheel effect where you can then start to apply some of that stuff horizontally. You know, we talk about you know the pox kind of living there and stalling, and you know, everybody's kind of seen, you know, their own reports of what's working or what's not working. Where do you see the difference or the gap between the organizations that are that are, you know, they're not just experimenting anymore, and you probably touched on it a little bit, but they're not just experimenting anymore. They're actually implementing and realizing some of that ROI. Are there common traits there?

SPEAKER_01:

Yeah, one of the things that I'm actually responsible for at Google Cloud is something internally we're calling Google on Google, where we're cataloging the many, many use cases. And look, we've been at this for years now. So we have many, many places in the company where we use AI to improve our internal process. And what we've done then is, you know, quantify the specific benefit, the ROI we're getting from each of those use cases. So now, you know, we when we have the conversation with our customers, we we lay that out for them. And and and so, you know, we can talk to them about here are all the different places we have got the benefit from from AI. And then customers pick, you know, the five, seven use cases, and this is leadership teams, executive teams, are picking those use cases that they want to drive, and that's the approach that yields the most benefit. Now, once you get through the first five or seven, you can expand. And by then you've also expanded the AI literacy in your company. And so now you can enable people to start their rolling out their own agents. We have an approach internally too where you know we have uh different offices run contests on, you know, basically hackathons around the use of agents. And I was in in one of our uh offices in Asia a few weeks ago, and it was interesting that the last two sort of innovation days were won by our people operations team, our HR team. And what was interesting what is exciting is first of all the benefit we were getting from the technologies that they developed using our no-code approach to developing agents. But it was also good to see that we were getting that AI fluency, that our people operations teams were confident enough and capable enough to create the agents that were delivering value to our employees. And that's where we need to get to, which is you the way you get the real benefit and not, you know, not have a thousand POCs that aren't generating the value is pick the use cases, go deep on those while investing in the literacy to enable your company then to more broadly expand the deployment.

SPEAKER_02:

I like that you bring up um agents. Certainly that's an area right now that many organizations are trying to look to dive deeper into, building agents that are gonna, you know, not only meet specific needs, but scale to meet broader needs. Have from your perspective at Google and the teams you're working with, have you had any lessons learned about what you know what really works to develop these agents and make sure they're working appropriately, have the right access, so on and so forth?

SPEAKER_01:

Yeah, there are a few lessons we've learned. And and uh we also are have heard from a lot of our customers who are deploying agents. And so I'll distill the lessons from both. For example, you know, we're working with a company called Color, and what they've done is created their own agent that helps, you know, consumers, you know, navigate the process for breast screening to understand you know eligibility and then and to set them up for breast screening. We have other companies that are using agents to streamline internal processes, contract management, and so on. And there are a few lessons that there's sort of a consistent across both our experience and our customers' experiences. Uh one is that you know, from the beginning, it's important to create an a an agentic platform that you know is compliant, is secure. So you're providing a tool set that is future-proof, right? And it is much better to do that than to let people sort of create you know agents on their platform of choice and then later try and retrofit security or compliance. That doesn't usually work. And so what does work is picking a platform to develop agents that has those things baked in and offered. Now that agent, that agent platform has to have both an ability to allow sophisticated developers, create complex agents, but also allow business users, and so that so the platform must have low-code or no-code approaches to allow business users to easily create agents. So that's one thing, which is pick a platform up front that already has compliance, security, governance built in. So you can easily identify what agents you have, you know, what access controls they have, and and do that up front. The second thing is look, there is no such thing as an AI strategy without a data strategy. And so it's really important to think about an agentic platform that connects into your data sources in your company, which means you need to have a very rich connector story to say how do you access the data where it is in your company, whether it's in the productivity suites like workspace or office or in your business applications, like your financials or your ERP or your CRM. And and you need something that can do that without requiring you to move data or or clean your data because you want to get to value sooner. And so, you know, you need to think about how do you roll, how do you get access to data where it is today in a way that allows you to enable those applications. The then the next thing I'd say is, you know, it's really important that the agents can talk to other agents in the company across different platforms. And so you need an agent architecture that you roll out that's open, that supports not just MCP, but things like A2A, so that you can access the tools in your company, but you can talk to other agents because in the end, you will have agents from multiple platforms in your company. And it's important first that you can identify them, but two, that they can work with each other. And so, you know, again, it's important to pick a platform that is future-proof in terms of supporting these open, you know, these open standards.

SPEAKER_02:

Yeah. It's interesting that you mentioned that, and I love that you, you know, really kind of grounded it in the fact that you need to have a data strategy to accomplish any of this stuff. You know, many of the organizations we work, they don't they don't necessarily talk about how the platform is is the issue or the model is the issue. They they talk about the data is the issue. You touched on it a little bit, but maybe dive a little bit deeper. How do you, or how does Google advise customers on how to approach their data estate or get it ready so that they can move to that value much, much quicker.

SPEAKER_01:

Yeah, I I think you're completely right. I think there's a fundamental truth, which is again, there is no such thing as an AI strategy without a data strategy. And and I that's just essential. Data is the fuel for AI within your enterprise and and and outside. And there are a few things that are essential as part of a data strategy. You know, one is you need to be able to access easily the important data repositories within your company. An agent platform needs to support access without moving the data. Right? So it's important that you're saying, look, I know where the data is today. In some cases, companies have very large data repositories that they're just not gonna be able to move. And so you need to support getting the you know agents access to the data, you know, without moving them. You know, that could be creating a unified data access layer using tools like our BigQuery. A lot of companies use that to say, I'm gonna keep the data where it is, I'm gonna use BigQuery as the way you can create this data lake and give access to data across our company. The next thing is that it's important to realize that you also need then to increase the security posture of the data in your company. You know, I've talked to CIOs who tell me that they were comfortable before in having multiple SharePoint repositories, maybe hundreds of SharePoint repositories in their company without fully knowing if all of them, you know, were secure and that the right access controls existed. And you know, AI agents can discover those. And so it's important to start to really ramp up your cybersecurity posture around your agents and around your models. That you need, you know, we have something called model garden that we allow you to roll around your model. And and and that's important because you you're now creating a new surface area for a tag with your AI infrastructure, whether it's the data and the expanded access to data or the AI models themselves. And so, you know, we're investing at at Google in providing our customers with things like security from prompt injections, for example, you know, making sure you don't let your models be attacked or drift as a result of those. So the next element of a data strategy is an enhanced data security strategy. And think about all the cybersecurity controls you want around that.

SPEAKER_02:

Yeah. What what about how should organizations think about structured versus unstructured data? I that just came into my mind because you mentioned all these SharePoint repositories, and that list can go on and on and on. Is there a nuance or a a specialty there as it relates to treating structured versus unstructured data and how you leverage it?

SPEAKER_01:

You know, I think we'll look back and realize that AI was probably the big unlock on unstructured data. Okay. And especially generative AI. And and you know, we've been able to access and query structured data for a long time. But for most enterprises, the the majority of their data is unstructured. Yeah. You know, it's the emails and the SharePoints and the docs and so on. And that's where the majority of the intelligence is, too. And so what's really powerful about generative AI is that it can understand unstructured data. And so to your point, I think we're finally now getting to a place where we're able to unlock all the intelligence that lived in the unstructured data in our organization. And so it's why we're seeing so much use in industries like healthcare of Gen AI, where they're saying they can use Gen AI to create notes for doctors and and you know to then use the existing notes to kick off processes, whether it's you know, charting or coding or claims processing or prior authorization, all of that was challenged before because of so much unstructured data that came from the doctor-patient interaction. And so I think we're now at at the ab uh able to unlock all of that similarly in scenarios like customer support, you know, being able to, you know, get access to unstructured data, mine the intelligence from that, and create free-form conversations with customers, you're able to deliver so much more value than you could before.

SPEAKER_00:

This episode is supported by Wiz. Wiz provides cloud security solutions to help identify and mitigate risks across cloud environments. Secure your cloud infrastructure with Wiz's comprehensive security platform.

SPEAKER_02:

Yeah. Recognizing that unstructured data can, you know, that that can lead to just another trove of data that's at our disposal. You know, many organizations that that we interact with are wondering, you know, what's the appropriate place to store, to run these workloads, run AI workloads, send this data? You know, do we put it on-prem? Do we put it in the cloud? Is it a hybrid approach? How do how does Google Cloud kind of advise its clients, its customers on, you know, what where the appropriate workload should go?

SPEAKER_01:

Yeah, I think one important principle that we work on is that the world is heterogeneous and most enterprises are heterogeneous. There are companies that are started today or recently that are purely digital native, that are AI first, that have the luxury of being able to architect it, you know, right for today. And if you do that, then clearly you'd start with a cloud-first, AI-first approach, because that's where you get the best access to your data and are able to leverage it the most. Most companies don't have that luxury, and certainly most big companies don't. And so from our approach, it's important to meet customers where they are and make sure that the infrastructure we provide gives it gives the ability to leverage infrastructure as it exists today. You know, gives you a strategy that allows you to access data that exists in data silos across, you know, multi-clouds and on-prem. And so, you know, I talked about the tools we have like BigQuery, for example, and an important use case there is to be able to provide a unified data access layer across a heterogeneous environment. And that's true across all the tools we provide to say, look, even all the way up to cybersecurity, you know, our approach in cybersecurity is to provide multi-cloud cybersecurity. And we recognize that, you know, whether it's a data strategy, an AI strategy, or a security strategy, it won't be as effective if it only supports, you know, one cloud or one environment or one data source. And so that's it's sort of an organizing principle for uh the way we think about development.

SPEAKER_02:

Yeah, well, I mean, let's let's go down that road a little bit more. I love that you mentioned that. And earlier you mentioned MCP, A-to-A. Um, you know, still lots of customers are looking to not necessarily consolidate full force, but they just want flexibility and ability to maneuver for whatever's next, understanding that anything could kind of happen moving forward. How does Google Cloud think about interoperability and offering its customers kind of that flexibility to meet them, as you mentioned, on their own terms?

SPEAKER_01:

It's a great question. And and it's an important one for us. And when I talk to customers, they when I talked about you know what's driving the growth we're seeing in Google Cloud? And you know we're seeing fantastic growth. And then, you know, last quarter, for example, we grew at 34% in in Q3, and and that was up from 27% the quarter before. And and we talked about we're already at a scale, we're over you know,$50 billion in ARR. So at this scale, we're seeing that growth. And when I talk to customers to say why, why what's driving that growth? There are three big reasons they tell at a very top point. You know, at an individual product level, they'll tell us the benefits they see. But when they you know roll it up and they say, why Google Cloud? There are three big reasons. One is they are very excited about the fact that we are the only hyperscaler that offers a f full AI stack. Yeah. You know, the the only hyperscaler that has, you know, chips, that has models, you know, that has agents that we have developed, and we've been working on for over a decade. And the reality is AI is the top of the agenda for almost every enterprise. Most of the conversation. I have, if not all, you know, are about AI and how it's going to impact the applications. And so it's really exciting for customers to be able to talk to a hyperscaler where they can actually talk about where AI is going, where the models are going. That's very important. And they see the innovations we have. So first, you know, reason they they're choosing us is they're saying, look, you are the only hyperscaler that has an AI stack, and we need to know where it's going. And we get that in our conversations with you in a way we don't get from the other hyperscalers. The second reason they talk to us is just the strength of the our hyperscale infrastructure. You know, the infrastructure we provide to customers is the same as the infrastructure we use to run Google search and ads. And so for us, it's essential that that infrastructure is as bulletproof as possible. And that shows up in the stats. If you look at the uptime of Google Cloud versus the other hyperscalers, you know, it's it's it's better. And the reason for that is, again, this is essential for us. You know, for us, seconds of downtime really matter, and and we convey that to or deliver that to our customers too. But the third reason is is what you talked about, which is uh we are the most open of the AI stacks and hyperscalers out there, and we have been, you know, you know, from from the beginning, in the sense that being open in open standards has been part of our ethos. So for example, we have our own chips, the TPUs, and we just announced the seventh generation of our TPUs, which you know, we've been investing over a decade in that infrastructure. But we're also one of the largest partners in the world to NVIDIA and GPUs. And a lot of our customers use GPUs on Google Cloud. Similarly, we have our own models, the Gemini family of models, 2.5 is out and 3.0 is coming. We have our own scientific models, we have our own video and image models with VEO and Imagine. But we also support other models. If you look at our Vertex development platform, we have over 200 models in our model garden. If you want to use Cloud from Enthropic, you can get that in our model garden. If you want to use DeepSeek, you can get that in our model garden. And then similarly with agents, you know, we support, uh we developed and open sourced the A28 protocol, the agent-to-agent protocol for agent interoperability. And so customers appreciate that while we provide cutting-edge technologies at every layer of the stack, we're also open at every layer of the stack. And so there's no lock-in, there's flexibility for our customers that they can pick and choose the technologies at every layer of the stack that they want to run at Google. And so that openness, as you as you pointed to, is very important to our ethos and is very appreciated by our customers, given how quickly things are evolving, especially.

SPEAKER_02:

Yeah. Well, that's kind of where I was going to go next was as things evolve, what do you think openness looks like and means in kind of the next era of AI to the to the best you can predict it right now?

SPEAKER_01:

Yeah, I think we're going to continue to see a push for you know, customers asking for the flexibility to say, again, for platforms that they pick, they want to make sure that the platforms interoperate with other platforms, and and so they can get access to the data that exists wherever it exists in their company. Similarly, they want to make sure that the governance, the security, the compliance controls they have work across the different platforms that they have in their company. Yeah, that's going to be important, and that no part of their infrastructure exists in a silo. They want the flexibility to either move or combine technologies from different stacks and still have the application work. And so I think all those are going to be important principles, even more so perhaps as we're going forward.

SPEAKER_02:

Is that part of that open conversation, a viable path, or what do you see from Google Cloud's perspective with those kind of niche plays?

SPEAKER_01:

Yeah, I think the demand in the market is so large and growing so quickly that there is a sort of a need for and an opportunity for as much data center infrastructure as as we can get for customers now. And I think that's driving a lot of the growth that we're seeing. Especially what we're seeing is sort of AI-focused neo clouds given the demand for AI infrastructure and AI stacks and purpose-built AI stacks. And so that I think is what's driving the neo clouds in the Trevor Burrus.

SPEAKER_02:

Is it kind of like a a breathing organism where it's kind of expanding now and then maybe we'll start to consolidate later on?

SPEAKER_01:

I think the market as a whole is going to grow. And I think there's room for multiple players in the market. Like any market, I don't know that all the players today are going to be the same players that exist five, 10 years from now, but I absolutely believe the market, you know, five years from now is going to be much bigger than the market that exists today. And there'll be a lot of players in that market.

SPEAKER_02:

Yeah. You know, one of the things that a lot of organizations that we talk to are looking for, especially considering how rapidly evolving the AI landscape is moving, is just understanding forecasts and costs and things of that nature. How do you talk to customers about the the best way they can get their hands and wrap their minds around, hey, what is this going to cost moving forward, knowing that things can change at any moment.

SPEAKER_01:

Yeah, you're right. That is an important conversation. And customers have asked for a variety of options. For some customers, they do want a fixed, you know, sort of predictable cost they get every month. And so they're looking for pricing models that support that. You know, either a, you know, let us use as much as as we can until a price point and then stop, or, you know, let give us access, you know, we'll we'll pay a certain amount, but give us access to as much as we need. And in the future, we can sort of pick up if we need the amount that we buy. Some other customers, though, like the flexibility of pay as you go. And they're saying we don't know, you know, where it's going to go. And right now it's it's small. So let's let's roll it out that way as a pay as you go model for a few quarters. And then once we sort of understand, you know, the growth and the baseline, then we can sort of commit to a certain amount. And so given how early we are in the evolution of this market, we're seeing customers ask for all the different flavors depending on you know their profile of you know financials.

SPEAKER_02:

Yeah. I mean, along the cost lines, you know, compute, energy, sustainability, those are also important questions there. How does Google Cloud think about kind of the future of compute, knowing that power is going to be more of a demand, energy be more of a demand, sustainability be the top of mind? What types of conversations are we having there?

SPEAKER_01:

Yeah, sort of a a big a big point in the sense that when we talk to customers, on the one hand, you know, it's clear that the demand for AI, especially compute and storage and data centers, is just going to continue to grow. That the applications that customers are are looking to deploy and use, you know, there's there's a huge appetite for more off them. And they're going to drive more sort of compute intensity. At the same time, you know, we recognize that there's a limitation on the amount of power that's available and and you know budgets. And so from our perspective, you know, there's a lot of investment going in increasing the uh the comp the computational power across our stack. And we just announced our uh seventh generation of TPUs. You know, that from a performance perspective is, you know, ten times better than you know the the the two versions ago and four times better than the last version. But it's not just about compute power, uh, it's also about power consumption. Right. And so for us, we look at it not just at dollars per token, but also power consumption per token. And it's important that we as a company and we as an industry make very significant strides forward. And we're talking, you know, not you know, incremental, but you know, order of magnitude type strides going forward in terms of reducing the power consumption per token, in terms of making the cost per token drive down, because the ramp we've seen in token consumption, even in the last year, has been astronomical.

SPEAKER_02:

Yeah. I want to be respectful of your time because I know you got a flight to catch here. So just last question. Given everything we've touched on, which has been a lot, which is accurate for just what's on the plate of all of these IT teams for many organizations. What should some of the one or two priorities be as we move into 2026 to take advantage of the moment now so that they said that organizations win with AI? It could be with cloud, it could be with infrastructure data. Pick pick your place. What are the priorities?

SPEAKER_01:

I think AI is going to be an essential part of every organization going forward. So there is sort of uh an urgency around, you know, sort of getting familiar with AI that companies need to to and boards need to sort of push for now. Having said that, I think it's important that customers also, that organizations also look for the ROI with their AI investments. And what we have seen is the way to get that ROI is for leadership teams, board executive teams to pick again a handful of use cases that is driven from the top, that are driven from the top down into their organization. You know, what we've seen be successful is if companies pick a couple of use cases that are around productivity, for example, you know, greater output that could be in their engineering teams for coding, that could be in their customer support teams, but then also pick some use cases that are around driving the top line. It could be, for example, you know, new personalized, you know, broad approaches to marketing, you know, and outbound marketing. And then pick a couple of use cases that are about new revenue streams that get generated through Gen AI. And so if you pick a portfolio, you know, five, seven use cases that can be driven by the top-down, that approach yields the highest ROI for organizations. Right. And done right can yield an ROI, you know, even within a year. The third thing I'd say is really focus on the business transformation and the workflow force transformation associated with this. This is not just a technology. This is about, you know, sort of retooling your workforce and and enabling them to be, as I said, bilingual. Everybody in the company needs to be good at their function and understand AI. And that's the workforce that'll carry you into the future. So maybe those are some of the things I'd yeah, I'd encourage people to I love that.

SPEAKER_02:

I mean, I like it the way you put it, that with the bilingual. That's just an easy way to think about it. Uh, Francis, thank you so much for for taking the time and for the partnership that that you provide with us at uh here at Worldwide Technology. So thank you again.

SPEAKER_01:

My pleasure, Brian. Thank you for having me.

SPEAKER_02:

Okay, thanks to Francis for sharing his time today. If there's one key takeaway from this conversation, it's this. The companies that are getting real value from AI aren't actually doing more. They're doing less, but doing it better. They're resisting the urge to chase every new model or agent and instead focusing on a small number of use cases that matter, backing them with the right data, security, and leadership commitment. AI literacy still matters, openness still matters, but without focus, none of it turns into ROI. This episode of the AI Proving Ground podcast was co produced by Nas Baker and Kara Kuhn. Our audio and video engineer is John Knoblock. My name is Brian Felt. Thanks for listening, and we'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights Artwork

WWT Research & Insights

World Wide Technology
WWT Partner Spotlight Artwork

WWT Partner Spotlight

World Wide Technology
WWT Experts Artwork

WWT Experts

World Wide Technology
Meet the Chief Artwork

Meet the Chief

World Wide Technology