AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption are complex — this podcast makes them actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
AI Isn’t a Pilot Anymore: How Cox Scaled It
AI pilots are easy.
Scaling AI across a live network is not.
As organizations move beyond experimentation, the challenge isn’t better models — it’s building AI that operates like infrastructure: reliable, secure, and measurable.
In this episode of the AI Proving Ground Podcast, Matt Schwartz, AVP of AI at Cox Communications, joins WWT’s Greg Schoeny to share how Cox moved AI from isolated pilots into production across a customer-facing network where uptime, trust, and scale matter daily.
You’ll learn:
- Why operating conditions expose bad AI decisions fast
- How to modernize without waiting for perfect data
- The risks of rushing into closed platforms
- Why momentum beats perfection
- How framing AI as augmentation — not replacement — drives adoption
This episode is for CIOs, infrastructure leaders, and platform teams responsible for deploying AI in complex, regulated environments.
The bottom line: AI only delivers value when it behaves like infrastructure.
And infrastructure rewards discipline — not hype.
More about this week's guests:
Greg Schoeny is Senior Vice President of Services and Strategic Solutions at WWT, leading services sales, business development, and architecture for the Global Service Provider (GSP) business. His team delivers outcome-driven solutions across AI, cloud, security, mobility, automation, and advisory services, supporting customers with professional services, managed services, staffing, and strategic consulting.
Matt Schwartz is a technology leader with 20+ years of experience driving digital transformation and product innovation. He specializes in AI strategy, multi-cloud optimization, and building high-performing teams. Matt has led successful product launches, developed IP strategies, and architected IoT solutions that unlock new revenue and operational efficiency, aligning technology with measurable business impact.
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
Waiting Is Losing
SPEAKER_03: Waiting for AI to mature is not a strategy. At enterprise scale, it is a conscious decision to fall behind. If you're working with a leader craving more certainty, remind them that while certainty is comfortable, it's also the fastest way to become obsolete, because that certainty does not exist in a high-velocity AI market. In this episode, Cox Communications' Matt Schwartz and WWT's Greg Schoeny are here to show us that winning with AI does not require perfect data. It requires the guts to learn on the fly, and in the case of Cox, a live network that serves millions of customers spread over dozens of geographies. So stick with us. This is the AI Proving Ground Podcast from World Wide Technology. Today's show is about what happens when you implement AI across a network where reliability, people, and credibility are on the line, and where certainty is far from a guarantee. Let's jump in.
UNKNOWN: All right.
Meet the Operators
From ML to GenAI
SPEAKER_03: Well, Matt, thank you for joining the AI Proving Ground Podcast. How are you today? Hi, thanks for having me. I'm awesome. This is great to join you guys today. Good, good. And Mr. Greg Schoeny, thank you. Yeah, happy to be here. Thank you. Matt, I want to start with you, because Cox is among the biggest innovators in the industry. Take me back to the moment you realized the organization was moving beyond the hype and into rolling up your sleeves, actually having to execute on the strategy.
SPEAKER_02: Yeah, great question. If you think about the size of the network we have, we started with what you would now call traditional AI: machine learning to manage the devices we have, make sure we have reliable uptime for our customers, and take really structured data and get insights and predictive analysis out of it. We established that as a foundation really early on, and didn't get into the generative cycle until just a couple of years ago, because we saw so much value from the traditional side of things. Then we started to dive deep on the generative side in conjunction with traditional ML. So we've been in it for quite a few years, and just in the past six to nine months we're starting to see significant savings and improvements in production that really drive things for the organization.
Where AI Actually Pays Off
SPEAKER_03: Yeah. Matt talks about recognizing that value and wanting to jump in. The spectrum is wide on where that value comes from, but broadly speaking, where in the industry are we seeing AI provide that value right now?
AI That Fixes Tickets
SPEAKER_01: It's a great question, and I would put it in two big buckets. The first is how you can use AI, transformers, and generative AI in general to improve the revenue performance of the business. The service provider industry is highly competitive, and in some cases a fully mature market with share shifting back and forth. So what can you do to better serve your customers? What can you do to use AI to glean better insights, to have your contact center employees interact more effectively, to delight customers? That's all of the revenue performance portion, and potentially even a low-cost way to get in front of a new segment of customers it didn't make sense to serve in the past. The bigger thing we're seeing right now is using AI to save money: to deliver service more effectively, to operate your network operations center, your SOC, your fleet. It's not doing more with fewer people, but it's certainly making the employee base more effective, with things like performance and uptime on the network. And within each of those there are probably 50 or 100 things Matt could talk about, low-hanging fruit to do right now. But those are certainly the two big areas we're looking at.
SPEAKER_02: Yeah, I would largely agree. As a service provider, most people aren't going to call you because they're happy; they want to get to resolution really quickly. So what we're trying to do is either anticipate the call, or, when someone does call, equip our people with information as quickly as possible to get them to resolution. To your point, there's an opportunity to grow market share, and an opportunity to get more personalized with content and really get down to micro-segmentation. But we're seeing a lot of focus on helping our people be more effective at every layer of the stack so they can keep our customers happy and potentially grow the customer base.
Why Adoption Stalls
SPEAKER_03: I'm glad you mentioned people. Adoption of these tools and systems across broad swaths of employees is a challenge all of our customers and clients see, no matter the industry. Has there been any single point in time, or any initiative, that you think really helped move the needle on adoption?
SPEAKER_02: It's an interesting thing, because it's a constant battle within the organization. I think we're still in the cycle of initial educational awareness. As an enterprise, you have this tension between what people see in the consumer market and what's available in the enterprise space. So we've gone so far as to say: here's what you see in the consumer market, and here's the alternative we have approved as a tool for your usage, so people can make that connection. But inevitably a new tool or a new improvement comes out, and you have to take the cycles to educate people on those things. We've really focused on the foundational level: here are the things you should know, and here's how you can equip yourself mentally. But we've also turned it around and said: the tools are there, ask the questions. If you don't know how to use AI, oddly enough, one of the best things to do is ask AI how you can use it. We have a lot of people doing exactly that as an initial phase. We introduce the tools to them, let them run a little wild, and let them explore what they can do.
SPEAKER_03: Yeah. Greg, are we seeing adoption challenges in other pockets of the industry, or have we reached a point where people are willing to jump in full force?
Move Fast. Don't Break Trust.
SPEAKER_01: I think there's interest, but it's paired with trepidation about potentially investing down the wrong road: trying to get the infrastructure up and running, spending a year on a use case that looks promising and isn't. Without turning this into a full infomercial, this is really where WWT comes alongside our largest service provider customers, and our largest customers in general: de-risking getting in the game. Whether it's helping develop the use case, taking something that's working in one segment and replicating it into another, or getting the data organized so you can do something with it. It's all about getting out of the gates quickly, making progress, seeing the progress, and continuing to invest in the right direction rather than the wrong one.
SPEAKER_02: Yeah. I personally looked to WWT really early on for the insights you guys publish: here are the tools on the market today, and here's what we're seeing with each of them. We can have real conversations where you've actually tried this, you've done some of those things. That's been invaluable to us, so that when we go and try something, we're not trying it blindly or without some foundational concepts to discover on our own. It's a collaborative conversation.
Betting on the Right Use Cases
SPEAKER_03: Yeah. How did you all think about use cases in the beginning? Were you flush with ideas and had to really narrow them down? I'm sure you weren't scrambling for ideas, because it was such a ripe opportunity. How do you think about use case prioritization, and what's going to prove to have the most value?
SPEAKER_02: Yeah. Early on, going back to the educational side of things, we wanted to let everybody know what the technology can do and how it can help us. Our goal as an AI organization is almost to make ourselves irrelevant at some point, because we've proliferated it throughout the organization. As a centralized team, it would be very hard for us to come up with use cases covering the whole organization. I'm not a marketing expert; I have a broad technology background. So I focused on that personally, but I can enable the marketing team to understand what the technology can do, and they can start to say: here are the use cases I can build with it. We had some initial ideas, the low-hanging fruit that could provide value to the organization, but then we equipped the rest of the teams to come up with their own. We still have times when we talk with them and say: yes, that's a great idea, or we've talked to the partners and that didn't prove out, so let's steer in a different direction. Equipping the rest of the organization was really how we proliferated more ideas than a centralized team ever could.
Stop Waiting for Perfect Data
SPEAKER_03: Yeah. Let's back up from the use case standpoint. You mentioned working with AI for a number of years before the Gen AI explosion. How were you working with data and your infrastructure? What architectural shifts did you have to make to position yourself to succeed with Gen AI, or at least start to? Was there a lot of work to do there?
Move the Data - or Move the AI?
SPEAKER_02: In any enterprise the size of ours, it's always a journey. It's the garbage-in, garbage-out problem with data. We were lucky in the early phases, because it's very structured data on the network side of things; where we see the most challenges is on the generative side, where you're in a very unstructured space. So we're making moves to make sure we understand where everything is. Cataloging is really critical: where are the data sources that are actually the source of truth? At some point you get drift, and you get a derivative of a derivative of a derivative, where one team depends on the third derivative and another team depends on the source of truth. Cataloging all of that, so that when a use case comes up you can focus in on it, is critically important. As for transformation, it would be really difficult to say, let's do this holistically and transform all our data. Instead, transform the data you need for the use case at hand, and the rest comes along with it, rather than taking on a huge project to pause all work and get the data perfect. Because the minute you do that, the data changes, your use case changes, a new industry use case emerges. Better to focus on what you need for the use case at hand and then evolve it.
SPEAKER_03: Yeah. Greg, from a modernization standpoint, I don't want to assume, but I have yet to meet a client or an organization that thinks they have everything set from a data or infrastructure standpoint. How much modernization has to happen to really start to see the promise of AI that we've talked about?
When Data Starts Lying
SPEAKER_01: Yeah, and Matt hit it from a great perspective. Let's say you're working on a use case for contact center modernization. What data streams are associated with it? Where do they live? Are they secure? Are they fungible, can you actually do something with them? That might be very different from something focused on fleet operations, or revenue management. So get a sense of what kind of data you'll need to execute the use case. The bigger thing we see again and again is that some of the data is in the cloud, maybe in a hyperscaler, some is on-prem, some is in a SIEM. It might be all over the place. I like to think of it as turning the lights on in the room: really understand what the data corpus looks like, and how to begin to get it organized so you can actually do something with it. Then move quickly, prove success, come back into the organization, demonstrate that success, and have something to build on. Keep pulling the thread.
SPEAKER_02: Do you see a lot of people trying to centralize all that data? To your point, there's data in all kinds of systems, all over the place, almost pooling into little data puddles. Do you see more people centralizing it, or bringing the AI close to the puddles, so to speak? What are you seeing across the board?
Fresh Data Wins
SPEAKER_01: Broadly, we do see some movement back to on-prem, for various reasons around building these models and the importance of doing that. It might be in Europe around GDPR, or privacy in the healthcare industry around HIPAA. In the SP industry, it's: if I can get it into one place, I can move very quickly and move forward with it. The other thing I would say is that the big problem set is large enterprises struggling to get everything perfect before they can do something with it, as you said, versus: I've got this puddle over here and this one over there, put those together, get moving, prove success, and move forward. That makes sense.
SPEAKER_03: Yeah, you mentioned a data puddle; it could be a data lake, a data ocean, whatever you want to call it. But you also mentioned structured versus unstructured data. How are you tackling that, knowing the unstructured side is pretty hard to wrap your hands around?
SPEAKER_02: Pushing the puddles together is the best way I can think about it. With structured data it's a little easier; the structure is in the definition. With unstructured data it's more difficult, because you have to think about the drift of the data over time, and about legacy content that's just been kept around. We had some discussions earlier this week where people talked about things that were effectively security through obfuscation, because people forgot they were there. But when you bring in AI, it sees everything, and it will bring some of those things to the forefront. You'll say: oh, we forgot we even had that. So we're looking at going back to what the master source is, and building some nomenclature, some thinking around ranking by experts. If you're the author of a document and I find a version that has been changed by a few other people, I'll go back to the author as the expert. That raises the quality of the data, so instead of cleaning everything up, we let the cream float to the top, so to say, rather than doing all that rigmarole underneath.
SPEAKER_03: Yeah. Cox, or any service provider in general, is sitting on a massive amount of data, probably more so than a lot of other industries. Does that add complexity, or is it just a scale issue? Probably a little bit of both.
Customer Truth > Boardroom Hypotheses
SPEAKER_02: There's complexity, because depending on your policies for retaining that data, you can get a lot of fractionalization of the data, or even some staleness. One of the things we think about is that maybe you should have policies that retire things significantly more aggressively than you historically would, because you don't want stale data influencing your decisions. The scale, I personally see as a differentiator in a lot of senses: we can use it to make big decisions that historically you couldn't because of the scale. The technology lets you take this scale of data that was hard to tackle before and use it in a much more meaningful way than combing through all of it by hand. It feels like ancient history, but that was only two or three years ago.
SPEAKER_03: Yeah. I like that leaning in. Do you think that's par for the course across the industry, or more of a unique view?
Agentic AI Gets Real
SPEAKER_01: I don't know about par for the course, but it's certainly directionally what we're seeing from our largest customers. The thing we haven't really talked about, and Matt, I'd love your perspective: think about the physical real estate footprint of somebody like Cox or a large service provider, in all these markets, touching the edge of the network, the central offices. So much has been invested in training large models and putting together these massive GPU facilities. But think about inferencing at the edge and what you could do as a service provider: you train in one central location, on-prem or in the cloud or some combination, then leverage your physical real estate at the edge of your network to fundamentally slice the network and deliver different services, based on various protocols, to different businesses and subscribers. I really believe that is a profound moment in terms of what the service provider industry is today versus what it could be over the next three to five years. So I'd love your perspective on that.
Build Open or Get Locked In
SPEAKER_02: Yeah, I agree with you. There's a ton of opportunity to push those things to the edge. It's kind of funny: with these transformational technology cycles, everything was on-prem, then everything moved to the cloud, now things are starting to come back on-prem, and the technology is emerging to push from on-prem to the edge and really do things in real time. With the footprint of any large company, there's a ton of opportunity there, to your point. The challenge is going to be finding the differentiated services that are really needed at that edge, because every market is slightly different. Being a service provider for basic services is one thing. Differentiated services with edge intelligence, making different decisions for your own operations, is fairly easily defined in my mind. But if you're going to build an offering that goes out to a market, that's more interesting, because every market is a little different. How do you attack each one? How do you find what's differentiating to them, versus what you think is differentiating? It's a lot of going out into the market and getting data from your potential customer base, so you can see what really changes things for them.
From Cost Savings to Revenue
SPEAKER_01: Yeah. To build on that: the truth lies with your customers. If you're Cox, the truth lies with your subscribers. The danger would be sitting around hypothesizing about the killer use case, and we see a lot of that happening, versus just observing how they interact today. What can you do to improve the signal-to-noise ratio on launching a new service? Then look at NVIDIA, and what Jensen talks about: something transformative, where potentially there's a market that doesn't exist today. It's making a new market. I look at our large service providers, and Cox is a great example: if you have the right innovation, intelligence, rigor, and discipline, it's amazing what the business, the revenue profile, and the customer segments could look like over the next few years. We are truly at a transformative moment in industry: how businesses are operated, what they will look like, how they'll make money, how they'll operate the employee base. It's like nothing I've ever seen before.
SPEAKER_02: I like your comment about hypothesizing. There are always decisions you see in the market that feel like boardroom decisions, not to speak badly of the boardroom. A coworker years ago had a saying that has always stuck with me: NIHITO, nothing important happens in the office. It was a very product-centric mentality. I try to bring that to all of the teams I work with: you may think something's right until you actually talk with a customer, with whoever would potentially use it, and chances are they want something totally different than you think. So get out and talk to people. The interesting thing about this space, specifically in AI, is that your customers are both your external customers and, as an internal technology provider, my internal customers. So I go talk to my fellow coworkers: how can this help you get better at your job, be more effective? It's actually quite fun. It's a lot of conversations, a lot of people with a lot of ideas, and the challenge is then distilling it into something that really drives results.
SPEAKER_00: This episode is supported by Varonis. Varonis provides data security and analytics to protect sensitive information from insider threats and cyber attacks. Secure your data with Varonis' comprehensive protection solutions.
Don't Break the Architecture
SPEAKER_03: Yeah, that's a great approach. It certainly takes some of the guesswork out, which is especially pertinent given how quickly things are changing. Matt, I want to shift a little and ask you about agents and agentic AI. I know you gave a presentation recently about how to think about scaling AI agents across the enterprise. It's certainly a hot topic among many organizations we deal with. So how are you thinking about agents? How is Cox thinking about agents?
Self-Healing Networks
SPEAKER_02: I think you have to make it as open an ecosystem as possible, and think about how you're going to drive the proliferation of agents within your organization. We think of it at a couple of different levels. Fundamentally, agents will talk to agents that talk to agents at some point; there won't always be a human in the loop. Everything you do from that point forward influences how you think about your architecture and your interoperability. There are a lot of great platforms today that provide end-to-end agent capabilities: orchestration, governance, visibility, all those things. The challenge is that they lock you in and prevent you from being open. Some of them poke little holes with enabling technology like MCP or A2A. But agents generate data, and that data is valuable to future agents and to the agents themselves. One of the most overlooked issues is what gets locked into those platforms. If an agent is generating data and it lives only in that platform, you're effectively renting your own information back from the platform rather than using it to help yourself. And further up, if the orchestration of those agents and how they talk is locked into something, it becomes very hard to pivot or bring in other agents that aren't compatible with that platform. So I think about it as opening things up and really enabling proliferation back and forth.
It's kind of funny, because think about microservice architecture and where we shifted years ago: being cloud native, having everything talk together through open API layers and published contracts. The minute agents came into the space, we forgot all of that and said, let's just build them as quickly as possible. There are basic fundamentals from an enterprise architecture perspective where, if you take a step back, the same principles apply. Just because it's an agent rather than a microservice doesn't change that in my mind. You still want them to be open and talking to each other. And if you do that in everything you build, you can really push agents out throughout the whole organization.
SPEAKER_03: Right. So are you starting to get into data sovereignty, ownership of your data, and the value that brings?
AI Doesn't Replace People
SPEAKER_02: The value is that it allows you portability. One of the things I personally espouse is being as open and transparent as possible with whatever platform I'm using and whatever I build, so that we know what's in the data and how we can use it. If we need to move that data at any point, for some reason we don't anticipate today, we can; we're not locked in. I'm trying to do that in the most open way possible, so that we have portability and optionality but still have the power. You're not giving up anything, in my mind. It's probably a little more work to build an open system early on, but that flexibility increases your speed, your innovation, and your ability to push things out once the foundation is set.
SPEAKER_03: Yeah. Greg, he's talking about speed to innovation and thinking of new things. You mentioned early on that a lot of AI use cases right now are about saving money and efficiency, as he's describing. Are we getting into new revenue streams, or how do you think about that?
Freeing Humans for Higher Work
Where to Place Your AI Bets
SPEAKER_01: Yeah, I'd come at it like this. The way we're getting organized at Worldwide, in terms of how we think about AI and the services and value we provide to our customers, is this concept of an AI studio: helping define the use case, figuring out what the customer base looks like, how you're going to compete more effectively, how you're going to service your customers more effectively, and bringing those to life very quickly. Failing fast, looking for pearls of wisdom, building on things that are working, and not spending time on things that are not. The middle piece we talk a lot about is the AI foundry, which is really about getting the data organized in order to go do something with it. And then there's the factory. That's the thing Worldwide has been doing for 30 years: building scaled technology, particularly some of these massive GPU farms as well as inferencing infrastructure. So the easiest way to answer it is that the insertion point at any of our large customers might be different. They might say, hey, I need help standing this up, and then we get in there and they find they actually need help with the use case. Or it might start with a use case and proving value, and then: okay, what supporting infrastructure do I need to deliver against that? But doing it in a non-production environment, doing it in our AI Proving Ground, helping somebody go fast. Our business works well when we show up with an informed point of view. We bring the experts out into the field and we prove success. It's not simple, but at least we know how that works, and it's similar to how we've been adding value for these customers for 35 years, just with different technology. One other thing I will say, and Matt hit on it, is enterprise architecture.
Just because it's AI doesn't mean we start over. Having that enterprise architecture approach, things like human-supervised machine learning, some of these methodologies that have been used for the past 20 years, are still in use. The horsepower under the hood of the car is a lot more powerful now, but it's still the same approach in terms of ITIL and enterprise architecture. I don't know, Matt, those things don't seem to change.
Stop Planning. Start Building.
What Actually Matters
SPEAKER_02: No, and they should and they shouldn't, right? I think we all got excited about this new hype and just kind of threw those principles out. We said, hey, we're going to do it differently. And then you do it differently and find, wow, I really should have done it the way I used to. You touched on this a little, but I want to dig in more: this is an opportunity to experiment a lot faster than we ever have historically. And I'd say most of us have more use cases that did not prove out the way we thought they would than ones that did. So go back to basics: try things out, do POCs or POVs, but stick to the principles. Do things in a non-prod environment, try it out. If it proves value, you scale it up a little and see if you get more value. If it doesn't, you say, hey, we learned something from that, and you steer a little differently. Those are tried-and-true principles we've all had for years, and we seem to have thrown them out and said, hey, this is new cool stuff, we're going to do it differently.
SPEAKER_03: Right, right. Another quick shift, Matt. Every organization Worldwide deals with has a network, or depends on one, and Cox happens to have one of the biggest networks in the world, with lots of critical services depending on it. So how are you applying AI there, for network uptime or just maintenance in general?
SPEAKER_02: Yeah, that goes back to our original old-school AI use case. Early on we put machine learning in to ask: what is happening with the nodes and the data endpoints in our network? How can we identify places that need preventative or proactive work to keep that high level of availability? And how can we use those signals to learn something before it gets too critical, or learn it a lot faster than we used to? A lot of the focus has been on speed. Ideally we'd get to a state where we can anticipate something before it happens; that would be the gold standard. We know that node is going to have issues, so we just go do the proactive work. I think at some point the technology will get us there. For now, it's about how quickly you address those things, and how you start to see early signals that used to be really hard to find because they were deep in the noise of a regular network: peel those out as anomalies and say, hey, we're seeing more anomalies here than we normally would, we should dig into that space a little more. The technology has allowed us to do that. Before, it was a very hard data analysis question; now we can have the agents do that for us.
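[Editor's note] The "peeling anomalies out of the noise" Matt describes can be illustrated with something as simple as a rolling z-score over per-node telemetry. This is a generic sketch with made-up numbers and thresholds, not Cox's actual method:

```python
from collections import deque
from statistics import mean, stdev

def anomaly_flags(samples, window=30, threshold=3.0):
    """Flag samples that deviate strongly from the recent baseline.

    samples: per-interval readings for one node (e.g. error counts).
    Returns one boolean per sample: True where the sample sits more than
    `threshold` standard deviations from the trailing window's mean.
    """
    history = deque(maxlen=window)  # trailing baseline of recent readings
    flags = []
    for x in samples:
        if len(history) >= 5:  # need a minimal baseline before judging
            mu, sigma = mean(history), stdev(history)
            flags.append(sigma > 0 and abs(x - mu) > threshold * sigma)
        else:
            flags.append(False)
        history.append(x)
    return flags

# A steady signal with one spike: only the spike (index 9) gets flagged.
readings = [10, 11, 9, 10, 10, 11, 10, 9, 10, 80, 10, 11]
print([i for i, f in enumerate(anomaly_flags(readings)) if f])  # → [9]
```

Production systems use far richer models, but the principle is the same: establish a per-node baseline, then surface the deviations a human would never spot across millions of endpoints.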
SPEAKER_03: Yeah. Greg, are we starting to see, either within our own environment or with clients, more of that shift from reactive to predictive?
SPEAKER_01: Yeah, I like what Matt said there. And again, I come back to enterprise architecture and leveraging AI. I can only imagine how much AI Cox, or any large service provider, already has in their network, given how they've been operating it for a good 15 years. But think about transformers now, and think about what generative AI has brought. One thing we hear again and again is that large language models can take on massive problem sets, things like drug discovery and building new businesses. The implications are profound. But consider this idea of putting a bunch of small language models into your network and using generative AI to dynamically reroute traffic around congestion issues. Matt said something a moment ago: if nine of these ten things happen, typically something is going to trip. If you can see that earlier, redirect that traffic, and anticipate it ahead of time, that's very powerful. That is AI, and that's the promise of AI. And that is running your network, not necessarily with a lot fewer people, but with some fewer people. There's something from the cyber industry that I've always liked: the more you use AI in a security operations center, a SOC, or a threat intelligence program, it doesn't mean you need fewer people. It means your people have more time for meaningful work. And to me, that's where this is all going. Imagine a field engineer at Cox, or somebody working in a contact center. If you can use AI to take care of a bunch of mundane tasks they no longer have to do, your personnel are going to be happier. You might not need as many people, but maybe you do, and maybe they're working on new things that help drive growth and revenue into the business. To me, that's really the promise.
The passion for learning, though, and getting our people and your people up to speed on this entirely new language, is paramount. If people don't do that, they will be left behind. I'm saying words here, maybe not as much as Matt, that I wasn't saying two years ago. We're having to go out and learn a new language, but it's all based on the principles and fundamentals of networking technology and data centers. Matt, I'd love your perspective on what you're doing, from a competency perspective, to get your people up to speed so they can participate in this new world.
SPEAKER_02: I think we go through the stages of grief, in a sense. People think AI is going to replace them, and one of the unfortunate truths is that it probably won't replace you, but someone using it will. You hear that a lot. So it's about getting people to accept it within their space so they can make themselves better. Frankly, I tell people I don't think it will largely replace people in the workforce, but it will unlock them to do things that are higher value for where they are. If you think about your day to day, I encourage people to just try one thing here or there. The minute you get five hours back because you've automated something, or have something making decisions for you, you'll never look back. You will continue to adopt and adopt and adopt. You get the mundane, error-prone things that a human should probably get out of the way of, and move to the higher-value things the technology doesn't do for us yet. It's not driving an overall strategy for an organization; people are doing that. Once we know where we want to go and define our North Star, then the technology helps get you there. Yes, these things seem magical and creative, and they can tell you whatever you want, but largely they're regurgitating information, so you need to direct them. If you get those mundane, error-prone tasks out of the way, you're in a space where you can double, triple, quadruple your own output, and your team's output even more, because it's a force multiplier rather than an eliminator, in my mind.
SPEAKER_01: Yeah, absolutely. I loved what you said about unlocking potential: for the employee, for the customers, for how you run your business. To me, the great use case there is the art of the possible. What could we do if we had infinite resources? I think about these agents and agentic frameworks: what could you do if you had a thousand newly trained employees that never had to sleep, never went home, and were working 24 hours a day? If you think about what you can accomplish with these agentic frameworks, then what do you do with your employees? How do you make their lives better so they can service the customers in a more meaningful way? You're painting with broad brushstrokes when you start thinking like that.
SPEAKER_02: Yeah, and to your point about never sleeping, there's also the morale side. In a call center, for example, if anybody has sat through one or two or three or a day's worth of calls, that can be mentally draining. As an engineer, I'd work through a bug and it would just kill you: you work on it and work on it and can't quite figure it out, you get frustrated, and then you start making mistakes or forgetting things. When you build an agent, or a network of agents, they don't get offended when you talk to them. They don't get tired, they can work as much as you want them to, and they don't get mentally drained. Taking the part of a job that really hurts the human workforce and moving it onto something unaffected by it allows the human to unlock more of what they want.
SPEAKER_03: Yeah, and we've had episodes before where we talk about that desire to move humans to a higher order of work. It's still a leap of faith, and that's where company culture comes in: a trustworthy relationship between employees, managers, and so on is so valuable. It's an exciting future for sure. We're coming up on the bottom of this episode, so just a bit more of a future-facing question. Greg, we can start with you. Maybe not too far out, because who knows what the future holds, but over the next three, six, maybe twelve months, where should service provider leaders be placing their bets or starting to make moves right now to win next year?
SPEAKER_01: I would say getting the data organized around a use case that is proven. To run your network, maybe you're using traditional AI, human-supervised machine learning; what could you do with generative capabilities on top of that to do it better? To me, that's tangible. It's a place where people can make progress right now. The other piece is maybe less of a technology conversation; it's just competitive. What can you do to compete more effectively? How can you deliver more value to your customers? How can you acquire customers more effectively, retain them, and understand when there's a problem, so they don't have to go through two or three different save desks and get to the wrong person, by which time they're ready to cancel their service? How do you just improve the operation of the business? I believe there is so much that can be done there in three to six months, and we're seeing that move out. And then it's fun to sit and think about the longer term, three to six years: what these networks are going to look like, what Matt's business and our business are going to look like. It's fine to get excited about all that, but the investments and the profitability in the near term fund our ability to think about those things over time.
SPEAKER_02: Yeah, to add on: one of the interesting things is that when most people hear how quickly this is moving, they say, I'm going to wait and see what happens in three to six months. That is the worst thing you can do right now. Yes, we know it's going to change, and it goes back to the fundamental principles: don't lock yourself into something, and find an open way to do things. You should be experimenting more than you've ever experimented in your life, because you can do those cycles so much quicker than before. If you had a concept before, you'd have to create a mock, write some code, and try it out. We could go over to my laptop right now, and if I wanted to create a new app for someone in the field, I could have it done in probably an hour and immediately prove or disprove that random thought. It's the ability to take this huge backlog of ideas and push through it faster than ever. Sitting and waiting because you're concerned about what's going to happen in the future is the worst thing you can do. We all admit it will change, but you can find value now if you just go try those things.
SPEAKER_03: Yeah. The advice: jump right in. We'll end the episode right there, so whoever's out there listening can go out and start doing. To the two of you, thank you so much for taking time out of your busy schedules. I really appreciate it. Hopefully we'll have you on again. I had a blast. Thanks for having me. Thanks, Matt. Thanks, Matt, appreciate it. Okay, thanks to Matt and Greg for joining. What this conversation reinforces is that AI only creates leverage when it's treated like enterprise infrastructure, not a side project. The organizations making progress aren't chasing perfect data or waiting for the market to settle. They're experimenting in contained ways, staying architecturally disciplined, and letting real operational wins fund what comes next. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker and Kara Kuhn. My name is Brian Felt. Thanks for listening. We'll see you next time.