AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

AI Can Move Trillions. You Won’t Let It Send an Email.

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 82


AI moves trillions of dollars in milliseconds. So why won’t companies let it send a customer email?

In this episode of the AI Proving Ground Podcast, Erik Anderson, CEO of Singularity University and former Topgolf leader, breaks down the real constraint on enterprise AI adoption: trust.

We talk about the shift to agentic AI, how it's reshaping thought work, and the tension leaders face between productivity gains and the social contract with employees. If AI gives your team 20% of their time back, Erik argues, that's not just cost savings. It's an opportunity to invest in growth, quality, and long-term value.

We also cover:

  •  Why trust becomes the bottleneck before the technology 
  •  How AI agents reshape brand and customer experience 
  •  What leaders should actually do next 

Most companies won’t hit a technical limit with AI. They’ll hit a trust limit first.

More about this week's guest:

Erik Anderson is Founder and CEO of WestRiver Group, a global investment platform focused on the innovation economy. He previously served as Executive Chairman of Topgolf, leading its rise into a global brand, and is now Executive Chairman of Singularity Group. An experienced operator and investor, Erik holds leadership and board roles across technology, energy, and consumer sectors, focused on scaling companies that shape the future.

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions. 

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments. 

AI Moves Trillions. You Won’t Let It Send an Email

SPEAKER_00

The New York Stock Exchange used to have hundreds, if not thousands, of people on the trading floor. Today that number has shrunk considerably. That's because algorithms are doing the heavy lifting: executing trades, moving capital, making decisions in milliseconds. No one's calling that a crisis. But ask a CMO if they'll let AI write and send marketing emails to their customers, and the hesitation is almost immediate. Erik Anderson, today's guest on the AI Proving Ground Podcast, noticed that gap, and he thinks it tells you everything about where enterprise AI adoption actually stands right now. Erik is CEO of Singularity University, an organization built to study and teach exponential technology. He also led Topgolf from startup to global brand, and today he advises boards navigating the agentic AI transition. His framework for the moment we're in is simple: there's awareness of AI, there's experimentation, and then there's adoption. And while awareness has spiked, who hasn't heard of or tinkered with AI at this point, adoption, at least the kind that actually changes how an organization works, is a different story. So today we get into what that gap means for enterprise leaders, what the agentic, AI-first shift is going to demand from boards and people managers alike, and why the companies getting it right aren't the ones cutting fastest. So let's jump in. Not a bad place to record a podcast. We'll get you in and out of here as soon as we can so you can go hit the course. So obviously, I want to shape this conversation around AI and AI adoption in the enterprise, really broadly speaking. Where do you think we are right now in terms of AI adoption? What are organizations and leaders getting right? What are they perhaps swinging and missing on?

Everyone Talks AI. Few Get It Right

SPEAKER_01

Well, almost by definition, we have to be in the early days of AI adoption. It is fascinating: OpenAI has 800 million customers, or something like that. So clearly there's awareness. I make the distinction between awareness, experimentation, and adoption.

SPEAKER_00

Okay.

SPEAKER_01

So I think you would have to be under quite a large rock to have no awareness. I would say the awareness of AI has to be one of the fastest brand rollouts ever, good or bad, however people might see it. The experimentation has to be happening at an incredibly high rate because everything is so digitized: everybody's got phones, everybody's connected, and it's very low cost. So that's all happened. Adoption, though, has barely started, because intellectual capital has always been limited to essentially what's in human brains. Now we're clearly adding some form of intellectual capital that's not just in human brains, which means interfacing with it, using it, partnering with it. So: very early days.

SPEAKER_00

No, absolutely. It's early days, as you mentioned, but there is a lot of progress, a lot of movement going on. From what you gather from the industry, do you think the current crop of Fortune 500s, or the current business landscape in general, is built to survive this era of AI and what the potential holds?

SPEAKER_01

I think so. I'm on the more positive side of that. I certainly understand the dystopian views; at Singularity, where I'm actively involved, this is a very big debate all the time. But I'm pretty confident that the interface and the partnership, using this as a tool, will over time end up being a good thing. I do also believe it'll be incredibly disruptive, and disruptive to new parts of society. Industrial technology typically got rid of labor, which was energy. AI is displacing thought. So people who've never faced that are facing it, and we're hearing it immediately in coding. That's one of the most obvious places: people who thought, well, I've learned how to code, so I'm in a good place. And then, as our friends at Cisco were telling me the other day, a lot of those tasks can be displaced. We've never really faced very much of that.

SPEAKER_00

Yeah. Well, you talk about the internal debate going on within you and your team at Singularity. How has your thought process shifted over the years since the explosion of OpenAI and adoption? Have you shifted at all in your thoughts on AI and the future impact it'll have on business, work, and life in general?

SPEAKER_01

I think it's coming faster, and we were always on the fast side. Ray Kurzweil, who's one of our founders at Singularity, has been pretty accurate: if you go back, even then he said it would be happening around now. So credit to Ray; he's been a little more aggressive on the adoption side. What I've seen and experienced myself is that it's just getting better. And the interface is so natural because it's question and answer. If you think about the human brain, it's a hypothesis-testing machine: it likes to explore, likes to ask questions, all those things. And here you have almost a perfect interface for that, right? To go down the rabbit hole, or wherever you like to go. So I've been impressed with how fast it will get better and how complete it looks like it's going to be able to get.

AI Is Replacing Thought Work First

SPEAKER_00

You sit on a number of boards, and you're certainly active within your own business interests. Where do you see a disconnect, if you see one at all, between how organizations are approaching AI and how it can best serve their interests, versus what's practical right now in this moment?

SPEAKER_01

Well, the tension point I see, and WWT has incredible leaders here, is the agentic world: what does it mean? What does it mean to be AI-first? We see big companies, real companies, and new companies starting with: first get all your agents ready, then go find the people.

SPEAKER_03

Yeah.

SPEAKER_01

I think that transition is really interesting. If you have no base of people and you're just starting something new, it's very interesting what modern AI infrastructure will look like for starting companies.

SPEAKER_02

Right.

SPEAKER_01

Here's that interface, and then, as you guys have talked about so much, the security of all those things. If you're established, you have massive data; if you're starting, you have no data. So how you integrate agents, the agentic world that you guys pioneer and do so much with, is very challenging. And if you don't lean into that and figure out how to trust it, it's pretty interesting. One of the questions I ask people is this: we have a lot of algorithms, if you will, that trade securities. Look at Two Sigma: no human makes the investment decision. So as a thought exercise, I'll ask somebody in marketing, will you let the computer do the creative and just send out emails to your customers? And they say, oh, I don't know, that's a scary thing. Okay, but you'll let it trade 30% of the global stock market, and you won't let it send out an email?

SPEAKER_00

Well, it's a leap of faith, right?

SPEAKER_01

But most people don't come face to face with AI or algorithmic trading when they're using the stock market. Here, everybody is coming face to face with algorithms, machines, AI, whatever you happen to call it, with what that looks and feels like.

SPEAKER_00

Yeah, that's interesting. Do you think that's one of the hurdles? Because adoption, at least among the clients we speak to, comes down to getting their workforce, their people, to lean in: that coming face to face with it. From the trading perspective, that was always something a trader would do for you. It wasn't necessarily always a machine; it was individuals.

SPEAKER_01

Yeah, you can see it at the New York Stock Exchange. We were there; they used to have, I don't know, 800 people on the floor. Now they have like 20. The rest is all done digitally. But that interface, I think managing the leadership environment between, let's say, the shareholders and the application and speed of growth associated with AI on one side, and your social contract with your people on the other, is very interesting. Look at Amazon and Microsoft over the last 18 months or so: they've doubled in value and they've reduced employment. Not just slowed it down, reduced it. That makes for a very difficult trust factor. And Satya Nadella, for whom I have huge respect, has said: I have to figure out the trust thing. Where that shows up practically is: what's the vesting schedule for my options? Four years? That makes no sense if I'm gone in two.

SPEAKER_00

Yeah.

SPEAKER_01

Right. So there's this big negotiation now, and it's a very interesting leadership challenge.

SPEAKER_00

Yeah. What does that future of leadership look like? Is it just transparency? Is it really developing more trust? Or is it an undefined horizon for what the relationship is going to be between executives and the workforce, or people managers on down the line?

AI-First Companies Already Have an Edge

SPEAKER_01

Well, that's uncharted, right? Because remember, we haven't really dealt with this before. We've dealt with it at what we'll call the labor level, and of course there's an infinite amount of intellect applied in labor. But we've not faced it with this other class of work. And again, listen to Cisco; it's very interesting: we're not trying to get rid of people, we're trying to have them do more, which is great. But if you're not hiring people either, then the folks who would have come into this part of the economy have to take this AI next door, into the new AI-first economy. We don't know how much disruption there's going to be. I guess it would show up either in the unemployment rate or in the wage rate, right?

SPEAKER_00

Mm-hmm. Well, you're a future-thinking guy, so what do you see? We don't necessarily know what it's going to be, but what's your best guess right now?

SPEAKER_01

I'm hopeful that we're going to see lots and lots of new opportunities, and a distribution of AI that continues to create a creator economy. You can't underestimate the social or the entertainment economy.

SPEAKER_03

Yeah.

SPEAKER_01

But empowering people with that is going to allow them to create new opportunities. If you looked at a curve of how much human time, especially in the United States, is actually spent on sustaining life, it's been falling precipitously for quite a long time. You don't have to go farm, right? We don't have gardens. I would probably die; we don't know how well I'd do in a garden-rich environment. So we've been on that path where people have more and more free time, and this just accelerates it. That's why you have all the other parts of the economy. So I'm hopeful that there are things that will fill that up.

SPEAKER_00

Yeah. We've talked about developing and fostering trust from a leadership perspective; let's come at it from the other end. What do everyday people, such as myself, or just an employee of a large organization, need to do to be ready to handle this disruption, this change? Is it just continuous education, a passion for learning? How would you advise somebody you're mentoring, or somebody who works for you? What should they do?

SPEAKER_01

Well, again, as some of the speakers you've had here have said: there's absolutely no excuse and no barrier now to learning.

SPEAKER_03

Yeah.

The Trust Problem No One Wants to Admit

SPEAKER_01

Right. So if you're curious, and people are by nature curious, you do have the opportunity to turn that curiosity back on. I think it's very interesting. Look at the apps on your mobile phone: there are some which are Instagram, and there are some which say, hey, you can learn something. Five minutes of learning, three minutes of learning. You can play Tetris for five minutes, or you can interface with this. I'm hopeful that the human brain finds it more interesting to go learn about something than just to be distracted by Tetris. That's never been available before, certainly not with an agent that can actually learn your personality, learn your humor, and follow your questions. So I think being curious about learning, and the opportunity set that comes from that, is available. This makes specialized, personalized learning possible. At Singularity, we talk about this and try to imagine it. I don't know if you have children or what age they are. (Two kids.) Yeah, if you have young kids today and they have an agent that works with them, is always happy, always gets to know them, will always answer any question, I think those agents are going to become some of the most attractive things. And it's not passively watching a screen; it's no longer screen time. I call it curiosity time. There's infinite curiosity, and they're never tired, they're never going to yell at you, they never stop, and they're going to know more than we know as parents. So as I think about it, even from my own experience, I don't know what a curiosity-learning explosion might look like.
But people don't talk very much about that. The press talks about disruption and fear and how to lose. But if you have a bunch of time and you have a personalized tutor, I don't know, you could go a long way with that.

SPEAKER_00

Yeah, well, that is interesting. And that applies to an enterprise setting too, right? There is fear among the employee base, among anybody, about what this means for my job, for my role. Even if you're not worried about losing your job, you may be worried that this is what I've always done, and now it's going to change.

SPEAKER_01

100% changes.

SPEAKER_00

How does an organization navigate those choppy waters until things start to calm down? And I don't know how long that's going to take. But if you were advising a company or a client of yours, how would you have them deal with that? Is it communication?

SPEAKER_01

Well, it has to be communication.

SPEAKER_00

Yeah.

SPEAKER_01

I also think you have to go further. Let's just imagine you and I are in the boardroom. Here we sit; they come to us and we say, okay, let's look across the business. We've got warehouses, we've got robots, we have people, we have all these things, we have this capital. And you imagine how you're allocating or reserving capital, and what you're reserving it for.

SPEAKER_03

Okay.

SPEAKER_01

So when these things come in and they say, okay, we can spend this amount of money and we'll get these robots and we can do all these things, and that would mean we could get rid of 20% of our workforce. I think at the board level you have to say: do I want to get rid of 20% of the workforce, or do I want to warehouse that, like it's excess capacity in my factory?

SPEAKER_03

Mm-hmm. Okay.

SPEAKER_01

Right. So I think you're going to have to think about it. There was never a sense of this before; people always had to believe the people and the intellect were pretty much fully employed.

SPEAKER_02

Right.

SPEAKER_01

And now I think you have capacity. So, do I want to, just because I can? Am I making the use case for those other things just based on the reduction in labor, or am I making the use case on efficiency and quality while preserving that workforce? Maybe it's never been quite as explicit as it is now. With this rate of disruption, if we're at the board level, we're saying: I've got a hundred people, I do all these things, and I really like those hundred people. I know that today I could get rid of twenty.

SPEAKER_03

Yes.

SPEAKER_01

Right. But maybe I really don't want to. So I think it makes you be really thoughtful about the other advantages you're getting from the application and adoption of, in this case, AI.

SPEAKER_00

Do you think that's where organizations are today? Because I feel like so many are just thinking: you hear AI, you think efficiency, you think productivity. You're not thinking from the people side of things. Do you think there needs to be more of that?

AI Just Gave You 20% More Time. Now What?

SPEAKER_01

Oh, yeah, 100%. People are clearly missing the possible explosion of: I have a hundred super-loyal, super-mission-aligned people coupled with this technology. What's the explosion of opportunity there, not the reduction of intellectual capital, loyal and mission-driven intellectual capital? There needs to be more of that conversation. And it should be exciting, because the opportunity of people partnering with AI should create better products, better experiences, and it should allow all of those people to do more. That should be better than saying: I've gotten rid of this intellectual capital, now I have this, but I must have lost something, because they're not the same. We know they're not the same. So I think you come down to this idea of preservation and the allocation of that free time to more ideas and more things we would do. It's what Jim Kavanaugh, your CEO, was talking about: there's no lack of ideas, there's mostly a lack of time. And boards are going to have to be better investors in the possibility of what that free time means, not in its measurable unit input into a cost function.

SPEAKER_00

Yeah, no, absolutely. And that's where trust comes in too, right? If you don't have an environment or a culture of trust, your people are not going to lean in and offer up those ideas, because it would be irrational to do so. I want to shift a little bit here. But it is an interesting thought exercise to understand the dynamics between leadership, executives, middle managers, and people: if I'm told something is good for the company or good for customers, but the net effect is that we hope to get rid of 30 percent of the people, then the people have a right to a logical reaction.

SPEAKER_01

Well, that's: okay, I need to think about that, right? That means one of the three of us.

SPEAKER_00

Yeah. Right, exactly.

SPEAKER_01

I think you would hope for faster growth rather than just reduction.

SPEAKER_00

Yeah. Just a little bit of a pivot here. During your time leading Topgolf, which is fantastic, by the way, thank you, from a consumer of that brand, you famously said something like: we're thinking too much about the tech; we're not the tech, we're the experience. So I wonder if that applies right now in the AI landscape. Are organizations thinking too much about the technology and not enough about the experience?

SPEAKER_01

Oh, sure. But that's natural, because it's new. You have to look at this new thing and go, I don't know; it's like putting a PC on someone's desk. When Topgolf started, we started before Instagram. Was there even a world before Instagram and TikTok? So sure, there's a rotation to any new technology, especially because a lot of your time has to be spent on: I don't know what to do with this. Then people can step back and think, well, how does this actually enhance the experience or the product?

SPEAKER_00

Well, how did you and your team do it? Did you take a purposeful approach from the beginning? How did you get into that experience mindset versus the technology mindset?

SPEAKER_01

Well, this goes back to a funny origin story. I was raised by a nurse, so I have a little talk I do called Raised by a Nurse. The reason that's relevant is that hospitals have always been full of technology and technicians and doctors. But the care and the experience are driven by the nurses, period. People who have a very specific mission in mind: they just care. I didn't realize how foundational that was to being experiential. The experience is 24/7 in a hospital. At four o'clock in the morning, nobody's there; the technology is pretty quiet; it's only experience. The experience you have at 4 a.m., if you're really suffering, defines everything.

SPEAKER_03

Right.

SPEAKER_01

So when you come to Topgolf, or to WWT: the salesperson, all the great places, all the great investments matter, but what defines it is how it feels when someone calls you up. Is it "we're WWT, we're here to help," or "we're WWT, why are you calling me? It's in the FAQs, the frequently asked questions. You could just go over there. That's why we put it there, so you don't bother me. It's a frequent question."

SPEAKER_02

It's called a frequently asked question; you should look there first, right? I'm sure Jim would take great exception to that.

SPEAKER_01

Sure. So you have to realize that's so important: you need to pull all these things together and figure out both how you apply it and how your agents apply it. What does it feel like to talk to an agent, as well as to talk to you? Experience has to be at the front of things. Everything else just supports that.

The Hardest AI Decision in the Boardroom Right Now

SPEAKER_00

But to your point earlier about us now coming face to face with this technology, it's the same way with the experience. The experience used to be delivered via software, a website. Now that experience is digital humans, right? Along with deepfakes, whatever. So what does that mean for the continuing blend of digital, physical, and technology?

SPEAKER_01

You have to control the brand, right? And what's at the heart of brand? First is trust. Nobody should be lying; no entity, AI or human, should be lying to anyone or trying to "sell" them, quote unquote. So you have to design that in, in the combined form and in the individual form. And we have a lot of thoughts about the UI, the user interface. We used to think about how you click around, the mouse, all those things.

SPEAKER_03

Yeah.

SPEAKER_01

It's just going to be another level. The good news is AI will be able to teach itself how to do that based on the parameters, the input, that we give it.

SPEAKER_00

Yeah. We're coming close on time here, and thank you for all the time you've shared already. Give me a best case and a worst case scenario, from either your personal perspective or Singularity's, on what the AI-in-business future looks like.

SPEAKER_01

Well, I think the best case is that we harness it for growth and quality.

SPEAKER_03

Yeah.

SPEAKER_01

And hope, right? I'm working with a company right now, Lumen Bioscience. You always get a pitch about when you're going to be able to do something. But Lumen Bioscience, I'd say, is built on the foundation of the utopian view of the convergence of important technologies: biotechnology, the ability to work with and manipulate genes, in this case to create a therapeutic in algae, which can be very low cost, and the application of AI to protein design, so that the proteins that become the therapeutic are much more expressive. And the net result of that drive would be to make biologic therapeutics available to billions of people.

SPEAKER_03

Yeah.

SPEAKER_01

Like aspirin. Right. So that's a utopian view. It's built on the fact that we've learned how to genetically modify things, and we've learned how to use compute power and AI to make these designs better. As a result, we can provide care and prevention and health, and that extends to everything, globally. That's utopian. On the other hand, if we use it for disinformation, for excess accumulation of power and wealth in any particular place, and we don't use it for growth and optimism, then there could be some unfortunate side effects. Obviously, if someone were to use it to manipulate things, for bio, for example, that sort of intellect applied, there can be some dangerous aspects.

SPEAKER_03

Yeah.

SPEAKER_01

Obviously there are dangerous aspects to it. So that would be the downside.

SPEAKER_00

If we're on a spectrum right now between that utopian and that dystopian future, where are we in terms of where we might end up? Is it 50-50? Are you 60-40 for the utopian side?

SPEAKER_01

I'm ahead on the utopian side, but I'm an optimist. We'll have to pay attention, right? It's clear we'll have to pay attention, and I think people will, but I think we're on the right side of doing that. Just the fact that you have a billion people who can ask questions and learn, that has to be on the utopian side. It's amazing.

SPEAKER_00

Yeah, absolutely. Well, this will be my last question. What should enterprise leaders do now, kind of a priority set heading into the bulk of 2026, to make sure that we don't end up on that dystopian side and that we stay on the utopian path? Is it better use of data? Is it making sure there are safeguards?

SPEAKER_01

Well, safeguards and security are being talked about; you're such a leader in that, we hear it everywhere. For me, it's getting ahead of the interplay and the trust relationship between your existing teams and the allure of: oh, I can get a fast reduction in costs, and that makes me look good. I think that's a bad way to go. I think leadership and boards, in their communication to shareholders if we're looking at the public side, should say: we believe we have a very engaged, thoughtful team, and we're going to balance the volatility in employment. Not just: oh, this is great, now I can get rid of X. That implies the only value a person has is what they're doing today; there's no learning, no value to their growth. So I think you have to be really, really explicit and get ahead of how you communicate what you truly believe. But equally important is the challenge: if everyone gets 20% more time, how do I take that time and learning and redeploy it for growth?

SPEAKER_00

Yeah.

SPEAKER_01

Not not take it out.

Your Customer Is Talking to AI, Not You

SPEAKER_00

No, absolutely. That's a fantastic idea. I think we're early days in that conversation. Well, we'll see. When we have more data points on that, we'll bring you back on the show and we'll discuss. Erik, thank you so much for the time. (Thank you.) Okay, thanks to Erik for joining us today. The idea from this conversation I keep coming back to: boards have always measured people as an input cost, but Erik's framing is different. When your team gets 20% of their time back, that's not a line item to optimize. It's an opportunity account. The companies leaning in to invest that time right now, in growth, curiosity, and what loyal, mission-driven people can do with more capacity, those are the ones worth watching. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker and Kara Kuhn. Our audio and video engineer is John Noblock. My name is Brian Phelps. Thanks for listening. See you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights
World Wide Technology

WWT Partner Spotlight
World Wide Technology

WWT Experts
World Wide Technology

Meet the Chief
World Wide Technology