What's Up with Tech?

From Hyperscalers To Neo Clouds: Rethinking Enterprise Networks For AI

Evan Kirstel

Interested in being a guest? Email us at admin@evankirstel.com

Forget the old playbook of carrier hotels and cross-connects. We sit down with Dave Ward, Chief Technology and Product Officer at Lumen, to map the real shift to Cloud 2.0—an AI-driven rearchitecture of how data moves, where compute lives, and how enterprises keep control as speed and scale explode. Dave explains why data movement has become the bottleneck that decides AI ROI, and how distributed on-ramps, 400G to 1.6T connectivity, and network-as-a-service can shrink time-to-first-token while cutting operational drag.

We unpack the rise of neo clouds—GPU-first data centers with new commercial models—and what that means for planning training and inference. Instead of buying vague capacity, teams now rent defined GPU clusters for six to 36 months, often in nontraditional metros with power and cooling to match. That shift demands a new connectivity strategy that bypasses hourglass on-ramps and drives data directly to AI factories and hyperscalers. Dave makes the economics tangible: when moving a petabyte takes hours instead of days, GPUs stay busy, costs drop, and models get to work faster.

Control doesn’t have to disappear as DIY fades. Dave outlines how design, price, order, provision, and assurance can live in one digital platform, giving IT the same topology and policy control without the burden of racks and cross-connects. We cover why many SD-WAN and SASE deployments need deterministic bandwidth channels, how to build a data fabric across 30+ sources, and the practical first steps: inventory workloads, map your data flows, and match bandwidth to business outcomes. If you’re plotting an AI strategy without a network and data plan, you’re leaving value on the table.

If this conversation helps you think more clearly about Cloud 2.0, follow the show, share it with your team, and leave a quick review to help others find it. What's the first workload you'd accelerate with a true high-speed data fabric?

Support the show

More at https://linktr.ee/EvanKirstel

SPEAKER_00:

Hey everybody, Cloud 2.0 is coming fast and it's about to change everything about how the entire internet works. And today we have a great insider thought leader to unpack it for us. Dave from Lumen, how are you? Doing great, Evan. Great to be talking with you again. Good to talk to you again. For those who aren't familiar, you're quite a visionary in our space, if I do say so myself. Perhaps introduce yourself, your journey, your background at Lumen, and how do you describe Lumen these days?

SPEAKER_01:

Sure. So I'm Dave Ward. I'm the Chief Technology and Product Officer at Lumen. I've been here about 18 months, and in that time Lumen has changed quite a bit, not just because of me, but because we have a great team here. Before I came to Lumen, I was the CEO of a network-as-a-service company called PacketFabric. Before that, I spent two 10-year stints at Cisco, the last one as chief architect. In between those stints, I was at Juniper as a fellow, and during my first stint at Cisco I was a Cisco Fellow. So I've been flinging bits and photons for a while now, Evan.

SPEAKER_00:

I love it. And just to dive right in, you said Cloud 2.0 is going to really reshape the internet as we know it in about two or three years. What does that mean? What is Cloud 2.0, and what does it mean in particular for enterprise IT leaders?

SPEAKER_01:

So Cloud 2.0 really is born out of the whole AI movement. And I assume everyone's heard about AI; if you haven't, you've been living under a rock. But look, I'm not here to talk to you about all the glory of AI. There are plenty of people in the industry who want to do that and do it better than I do. I want to talk about a couple of specific aspects of it. Cloud 1.0, which emerged with the hyperscalers, let's say 15 years ago or maybe a little longer, created a certain architecture of the internet and a certain architecture for the way enterprises transformed their IT into hyperscalers, or with SaaS providers, or moved into storage providers or security providers. All of that's well known. But AI is bringing a couple of really interesting pieces with it. One, the amount of data that needs to move for training is basically 100 times larger than it was in Cloud 1.0. Second, over the next three years, between 2025 and, let's say, the end of 2028, there's going to be a four-times increase in the number of data centers in the lower 48 of the US, going from 240 million square feet of data center space to a billion square feet. That's a monumental shift. We haven't seen this type of investment in infrastructure, compute infrastructure, et cetera, ever.

And correspondingly, as we know, these data centers need real estate, they need water, they need power, but they also need fiber, because you've got to move the data in and out of these places. And that data movement I described earlier, associated with AI, means that enterprises must realize that everything we all know about IT, data movement, workloads, where they're going to be, how to plan for that, is being challenged by AI. What's really key about the monumental investment that's happening is that, thankfully, Lumen is earning a lot of those routes, those fiber routes, and we're putting a massive amount of fiber into the ground, over 44 million fiber miles. Hey, that's great. But do we really want to stick with the architecture we had of cloud on-ramps and going to carrier-neutral facilities to buy physical cross-connects? That Cloud 1.0 architecture came out of the 1980s and 1990s telephony of long-distance exchanges. It does not fit where the data centers are being built, number one, and it does not fit the traffic patterns for us to get to those data centers or get to cloud.

So Cloud 2.0, the way I'm describing it, is critical in that it's a fundamental change to the internet architecture. It requires connectivity to those data centers, a brand-new reconstruction of on-ramps, fundamentally different speeds starting at 400 gig and, as soon as we can get there, 1.6 terabit, as well as a rethinking of how the workloads are going to operate, because the commercial nature of Cloud 2.0, with neo clouds and AI factories on the rise, has a fundamentally different economic model than Cloud 1.0 does. And I can go into that later in our conversation if you like.

SPEAKER_00:

Amazing. Well, there's a lot to unpack there: rethinking cloud, data, and networks. If I'm an IT leader or a business leader, what's the first step my enterprise should take to stay competitive in this brave new world?

SPEAKER_01:

Really, it's matching up your CEO's desire to bring AI into the company and to partner on AI, whether that's with clouds or SaaS providers or others, and turning that into a plan and a strategy. To have an AI strategy, you've got to have a cloud strategy. You have to have a data strategy. And what I'm here to talk about is that you have to have a network strategy for how that data is going to move.

So I'll give you a quick example. We've seen in our enterprise customer base that they have, on average, and averages are averages, 36 different data sources. What does that mean? I'm tossing this out for conversation: it could be in S3, it could be in Snowflake, it could be in Wasabi, it could be in a data silo sitting with a data center operator, on and on. It could be on-prem. That is the intellectual property of the enterprise. And how do you build a fabric between those data sources that can then be used by large language models or by AI factories to train, if an enterprise is choosing to do that themselves?

So that cloud strategy, data strategy, and network strategy are so intertwined with the transformation for a CIO or IT leader that each of those decisions needs to be made together. That CIO or IT leader really needs to understand: what is my plan to translate my CEO's desire to be an AI-first, AI-based company into something that is meaningful, adds value, and returns value to the company? It could be efficiency of internal systems, it could be customer interaction and customer service, or it could be straight-up new consumption models that they offer, but it's forming that strategy. We've found the full continuum of enterprises, as you'd expect: those at the tip of the spear going hard at AI, a big giant middle trying to figure this out, and then some folks who are going to wait and see. I really want to challenge the wait-and-see-ers, because wait and see is not a strategy in the AI economy. The strategy is to build.
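
As a rough illustration of the inventory exercise Dave describes, here is a minimal sketch, not anything from Lumen, with hypothetical source names and sizes: list where the data actually lives, then enumerate the flows a data fabric would have to carry to a training site.

```python
# Hypothetical sketch of a data-source inventory and the implied fabric flows.
# Names, locations, and sizes are illustrative only.

from dataclasses import dataclass

@dataclass
class DataSource:
    name: str        # e.g. "S3", "Snowflake", "Wasabi", "on-prem silo"
    location: str    # region, data center, or prem where it sits
    size_tb: float   # rough volume you may need to move for training

sources = [
    DataSource("S3 data lake", "us-east-1", 400),
    DataSource("Snowflake warehouse", "us-west-2", 120),
    DataSource("On-prem archive", "Chicago DC", 800),
]

# Every source that feeds a training cluster implies a flow the fabric must carry.
training_site = "neo cloud GPU cluster (hypothetical)"
flows = [(s.name, training_site, s.size_tb) for s in sources]

for src, dst, tb in flows:
    print(f"{src:22} -> {dst}: ~{tb:.0f} TB to move")
```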

SPEAKER_00:

Well said. You mentioned neo clouds. Tell us about the rise of neo clouds, what they are, and why they've become so important.

SPEAKER_01:

Sure. I'm going to summarize it at a super high level first. Neo clouds are effectively very similar to the colo model we know, but what's different is that you're renting clusters of GPUs, plus CPU and storage, at a time. Frequently you can get a small cluster, let's say 256 GPUs, but those that are AI-heavy are getting up to 16,000 GPUs at a time. So those clusters are really quite large, and there's a full continuum. And these are specific data centers built to house and home GPUs; that's the fundamental difference. They've got the power, they've got water cooling, but they're not being built in traditional locations. They're potentially diversifying into rural areas, as we've seen and as I write about in my blog as well. So a neo cloud is a data center operator that is specifically building facilities to house GPUs, with a corresponding new commercial motion, which is renting clusters of GPUs, CPU, and storage, but we'll just talk about it as GPUs, to an enterprise at a time to do training, to do distributed training, to do inference, and to do other jobs.

And it's critical to understand that the consumption model for GPUs from hyperscalers and GPUs from neo clouds is a different commercial motion and a different connectivity motion as well. The difference in the commercial motion is that at an AI factory, frequently those jobs and the rental of those clusters of GPUs are on a six-month, two-year, or three-year timeframe, whereas the hyperscaler is that ongoing consumption model. So it's really: do you know your workloads well enough that you know exactly what activity you want to do and what size cluster you need, or are you going to go with a hyperscaler and that connectivity model? Which gets me to the point of how these are going to be connected together. Once you get into the AI strategy and you make a choice of who your GPU supplier is going to be and what tools and systems come with it, you then have to marry that with your connectivity choices as well.

Because, simplistically, Evan, moving a petabyte over a 400 gig link compared to a 10 gig link is a 40x improvement in performance. I don't want to be the networking guy who's just explaining speeds and feeds and how multiplication works, but here's the commercial impact. When you want to do an AI workload, when you start renting that GPU cluster and you start moving that data in, it takes about 22 hours to move a petabyte over a 100 gig link, and obviously much, much longer over a 10 gig link. When you're able to reduce that time because you've moved your data in faster, you have a faster time to first token, and you have the most economically efficient use of those GPUs you've been renting. So that's the data part of this and the data movement part of this: where are my sources, how do I create a fabric to them, how do I move that data into my neo cloud or AI factory, my own GPU clusters, or into a hyperscaler, and how do I move it most efficiently? Therefore, all the way back to the top: I need new on-ramps, I need high speed, I need to be connected where I need to go, and I need to be able to control my bandwidth so that I can move that data as fast as possible and use those GPUs economically and efficiently.
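
A quick back-of-the-envelope check on those numbers: this is a minimal sketch, not Lumen tooling, that just computes how long a petabyte takes to move at the link speeds Dave mentions, ignoring protocol overhead.

```python
# Time to move one petabyte at various link speeds, ignoring overhead.

PETABYTE_BITS = 1e15 * 8  # 1 PB expressed in bits

def transfer_hours(link_gbps: float, data_bits: float = PETABYTE_BITS) -> float:
    """Hours to move `data_bits` over a link of `link_gbps` gigabits per second."""
    seconds = data_bits / (link_gbps * 1e9)
    return seconds / 3600

for gbps in (10, 100, 400, 1600):
    print(f"{gbps:>5} Gbps -> {transfer_hours(gbps):6.1f} hours per petabyte")

# Approximate output: 10 Gbps -> 222 h, 100 Gbps -> 22 h,
# 400 Gbps -> 5.6 h, 1600 Gbps -> 1.4 h
```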

SPEAKER_00:

Fantastic. And we're network geeks here, so I'd love to understand best-in-class, leading-edge fiber network connectivity these days. What sort of boundaries are you pushing at Lumen, and where are you headed?

SPEAKER_01:

Yeah, really what I'm pushing to disrupt is creating that Cloud 2.0 agnostic fabric: connecting together data centers, neo clouds, and hyperscalers, putting in wave, Ethernet, IP, and private IP equipment, and creating that fabric to be able to move that data and connect, in real time and on demand, with our digital platform and a network-as-a-service motion, all of those data centers and clouds together. Additionally, we're working very closely with hyperscalers to fundamentally change the on-ramp model. The on-ramp model today is: oh, I want to get to a hyperscaler, then I have to DIY my network to get there. And as much as wait and see isn't a strategy, DIY is absolutely not a strategy either. DIY, in my opinion, is dead, because a NaaS platform allows a customer to design and control their network and move that data automatically, in real time.

So back to the on-ramps. There is no reason that in this country the cloud on-ramps, across all three hyperscalers, sit in only 17 buildings in the US. That is an hourglass design based on 1980s and 1990s long-distance telephony, and it drives me bananas. There's no reason why the architecture of our internet should be bound to these particular buildings. So we're working with the hyperscalers to pre-light new on-ramps distributed around the country, and to neo clouds and to these data center operators, so that I can move my data directly where it needs to go. I don't need to DIY, and I can have this all in real time, on demand. I'm absolutely challenging the construct of those current cloud on-ramps and pushing toward a fully distributed architecture around the internet. That's why Cloud 2.0 is a fundamentally new and different architecture for the network as we know it today, for the cost structure of connecting to cloud and to data centers, and for the way you consume it, which is network as a service.

SPEAKER_00:

Wow, incredible opportunity there. So you said the era of DIY networks is coming to an end. How do you then manage this new era without losing control as an enterprise?

SPEAKER_01:

Well, those are the two most important things I want to preserve for the enterprise: design capability and control capability. The design capability is about the dynamic fabrics I need to build between my data sources, or between the clouds, data centers, and prems that I have. And the control piece is the notion of design, price, order, provision, and assure that we're building into the top of our platform: the ability to have the same amount of control over a network that they had when they built it themselves. But instead of owning, managing, and operating it in a DIY fashion, Lumen can do that, yet the enterprise, the IT professional, has complete control over the design and control over the assurance, the paths, the routing, the topology, and the security of that fabric they built.
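
To make that lifecycle concrete, here is an illustrative sketch only, not Lumen's platform API, that models the design, price, order, provision, and assure steps as an explicit sequence a network service request walks through.

```python
# Hypothetical model of a design -> price -> order -> provision -> assure lifecycle.

from enum import Enum, auto

class Stage(Enum):
    DESIGN = auto()     # enterprise defines topology, paths, and policy
    PRICE = auto()      # platform returns a quote for that design
    ORDER = auto()      # enterprise commits to the design
    PROVISION = auto()  # provider turns up the circuits and fabric
    ASSURE = auto()     # ongoing monitoring against the design intent

ORDERED_STAGES = list(Stage)

def advance(current: Stage) -> Stage:
    """Move a service request to the next lifecycle stage."""
    idx = ORDERED_STAGES.index(current)
    if idx + 1 >= len(ORDERED_STAGES):
        return current  # already in assurance; it stays there
    return ORDERED_STAGES[idx + 1]

stage = Stage.DESIGN
while stage is not Stage.ASSURE:
    print(f"Service request at stage: {stage.name}")
    stage = advance(stage)
print(f"Service request at stage: {stage.name}")
```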

SPEAKER_00:

Well, so critical. And it's not just a technical discussion. You've done the economic analysis of this shift, and there are big changes when it comes to the cost efficiencies that come into play. What do we need to understand about the economic side of the equation?

SPEAKER_01:

Well, let's just talk about on-ramps, because we were just on that and it's right there in front of us. Today, an enterprise needs to build a circuit to a carrier-neutral facility, they need to get a cross-connect, they need to get a rack, they need to put in a router, and then they need another physical cross-connect to the cloud. I want to vaporize that. But I want to do it by adding value, which means having pre-lit on-ramps distributed around the country, real-time and on-demand, pre-lit to the hyperscalers and pre-lit to the data center operators. Take out the cross-connects, take out the racks, take out the routers, and do this with a multi-cloud gateway, which is a physical router itself, but with multi-tenancy and the ability for an enterprise to control it themselves. And by changing the economics of how I get to cloud, one, I believe enterprises can absolutely consume more and join the AI economy faster, and two, they can do it without all the hassles, overhead, and headaches of going through the old Cloud 1.0 way.

SPEAKER_00:

Oh, really exciting. Speaking of Cloud 1.0 and 2.0, there are lots of misconceptions in the press and the media, headlines floating around, lots of questions about AI-driven architectures and where they're headed. But you talk to CIOs on the ground, you're a real practitioner. What are the biggest misconceptions you're seeing out there, and how would you suggest they be set right?

SPEAKER_01:

A couple of different items on that. One, this is the time to invest. There's a lot of legacy network access and legacy network attachment that has run its course, and we need to be able to shift, whether that's from copper attachment to fiber attachment, or potentially to LEO or fixed wireless access. Really, the notion is: get more bandwidth, and then get control of that bandwidth.

SPEAKER_00:

So when you think about a Cloud 2.0-ready enterprise, you have so many thousands of customers. I don't want you to pick a favorite child, but are there any anecdotes or stories, customers you can point to, who are implementing Cloud 2.0 in practice?

SPEAKER_01:

Sure. In particular, those with really large data movement: financial services, media companies, health and pharmaceuticals. Those are the enterprises and segments of the industry that are at the tip of the spear. They're reevaluating their data center model and their cloud model together. They're using or investigating the neo clouds and building AI factories of their own. They're adopting strategies for how to bring AI into their company. Now, this frequently means they need to change their access technology and rebuild their cloud connectivity from a Cloud 1.0 use of bandwidth to a Cloud 2.0 use of bandwidth, where 1.0 could have been only 10 gig or sub-10 gig, because they weren't moving petabytes of data. I mean, just the difference between 10 gig and 100 gig is 222 hours versus 22 hours to move a petabyte. That's a massive amount of time to move that data and build the data fabrics. So the companies we're working with directly are those that are incorporating those uses of bandwidth.

Now, there's one thing I did want to mention. SD-WAN and SASE have been such a large topic in the industry. And in my blog, I also posit that SD-WAN and SASE, as we know the technology today, are not fit for purpose in the AI economy, because they tunnel over the swamp of the internet. There are no bandwidth guarantees, no latency guarantees, and no redundancy guarantees. So we're working here at Lumen, and have announced, that we're building a platform called Berkeley, which can take an access link and create bandwidth channels from prem to cloud, to data centers, to neo clouds, et cetera. So if you have a 100 gig access link, you can now carve up that bandwidth, both on-net to Lumen and off-net, and get that full end-to-end design and control of the network to match your data strategy and your cloud strategy. And for the customers thinking along those lines, taking their SD-WAN and SASE, which is a beautiful IT environment, off the swamp of the internet and pushing it into these bandwidth-, latency-, and redundancy-controlled channels, that is a marriage made in heaven. Because then you get the elegance of SD-WAN and SASE from an IT experience, and one in which you can control your bandwidth and tie it directly to the workloads, whether they be SaaS IT, my ERP, meaning NetSuite or SAP, or my CRM, meaning Salesforce. I can now manage my bandwidth to my workload, to my enterprise needs, and continue to use my IT consoles as I know them today.
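
As a toy illustration of that carve-up idea, here is a hypothetical sketch, not the Berkeley platform itself, of allocating deterministic channels out of a single 100 gig access link, with a check that the allocations never exceed the physical link.

```python
# Hypothetical carve-up of one access link into per-workload bandwidth channels.

ACCESS_LINK_GBPS = 100  # the 100 gig access link from the example

channels = {
    "training data to neo cloud":  60,
    "SD-WAN / SASE overlay":       20,
    "SaaS (CRM, ERP) traffic":     10,
    "backup / replication":        10,
}

allocated = sum(channels.values())
assert allocated <= ACCESS_LINK_GBPS, "channels oversubscribe the access link"

for name, gbps in channels.items():
    print(f"{name:30} {gbps:3d} Gbps guaranteed")
print(f"{'headroom':30} {ACCESS_LINK_GBPS - allocated:3d} Gbps")
```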

SPEAKER_00:

Fantastic. And for those who want detail, you recently put out a white paper, Navigating the Cloud 2.0 Evolution. Folks, I'll include that in the links for this podcast. But Dave, for leaders who are looking to get started, what's the practical first step beyond the white paper to prepare for this brave new world of Cloud 2.0?

SPEAKER_01:

Really, I think to prepare, you've got to know your workloads. You've got to know your data sources, you've got to know the transformation journey you're taking your enterprise on, and then match that up to the bandwidth needs for each of those workloads and transformations, so that your employees can have the best experience, your customers can have the best experience, and you can use all of the tools your CEO is asking you to use to get on the AI journey. Matching those up, creating the strategy and the plays, getting it into budget, and having an architecture for those plays that matches data centers, Cloud 2.0, and neo clouds to data and to network, along with the evolution of how prem connectivity is going to occur, all of those are being challenged. As an IT professional, I'm asking that folks seize that challenge and realize that this is a whole new world out there. It requires an investment of time, of study, and then, of course, of figuring out how my enterprise is going to transform. An IT professional and a CIO now have such a key role in the transformation of an enterprise, and such a key role in achieving both being AI-first and achieving the goals of the company, more than ever before. Because the big pieces, and I know I've said this several times, are cloud strategy, data strategy, network strategy, and now workload strategy. That's where the IT professional and the CIO have such a monumental role in that transformation.
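
As a back-of-the-envelope aid to that "know your workloads, match bandwidth" step, here is a minimal, hypothetical sketch; the workload names and numbers are made up, and the only claim is the arithmetic: given how much data a workload has to move and the window you can afford, it computes the sustained link speed you'd need.

```python
# Match each workload's data volume and transfer window to a required link speed.

def required_gbps(data_tb: float, window_hours: float) -> float:
    """Minimum sustained link speed (Gbps) to move data_tb within window_hours."""
    bits = data_tb * 1e12 * 8
    return bits / (window_hours * 3600) / 1e9

workloads = [
    ("model training corpus", 1000, 24),   # ~1 PB, staged within a day
    ("nightly inference logs", 50, 8),
    ("weekly analytics refresh", 200, 48),
]

for name, tb, hours in workloads:
    print(f"{name:26} {tb:5.0f} TB in {hours:3d} h -> needs ~{required_gbps(tb, hours):6.1f} Gbps")
```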

SPEAKER_00:

Incredible. Well, always illuminating, Dave. Really appreciate your insight and the update on the amazing work you and the team are doing. Congratulations, onwards and upwards. Hey, thanks, Evan, and tune in, I've got more chapters coming. I'm sure you do. Thanks, Dave. Thanks everyone for listening, watching, and sharing this episode. Take care.