AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

AI Is Writing Code Faster Than You Can Review It

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 79


AI is writing code faster than most teams can review it. That’s the tension.

Recorded live at NVIDIA GTC, this conversation with Nate McKie gets into what happens when developer speed takes off but security and quality don't.

The middle of the development process is collapsing. Code is cheap. Mistakes aren’t.

So what actually has to change?

We get into AI-native engineering, agentic development and the shift from code generation to code governance. From code review bottlenecks and the “hourglass effect” to model selection, RBAC and secure data access, this is how enterprise teams scale AI without breaking things.

Support for this episode provided by: Thales

More about this week's guest:

Nate McKie is a Senior Executive AI Advisor with more than 25 years of experience in software and automation engineering. He helps organizations translate AI into real business outcomes, advising on strategy across data, infrastructure and applications to drive effective and responsible adoption.

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions. 

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments. 

AI Is Outpacing Your Reviews

SPEAKER_01

AI is now writing code faster than most teams can review it. And depending on how you look at that, that's an opportunity, a risk, or both. Whether your developers are shipping more with Claude, Codex, Copilot, or other coding assistants, the real value now comes from more than just speed. It comes from building the guardrails, review discipline, and workflow control to scale that speed safely. So in this episode of the AI Proving Ground podcast, which we recorded live on the show floor of NVIDIA GTC, we're talking once again with WWT's senior executive AI advisor Nate McKie about how to increase engineering throughput without creating security debt, quality issues, or governance gaps. We'll get into AI-native engineering, code review, security guardrails, model selection, and the shift toward agentic development, including what changes for software teams, enterprise architecture, and even the rise of the citizen developer. So let's jump in. Well, let's start there. You obviously listened closely to Jensen's keynote. What did you think? What did you pick up on? What do people really need to be thinking about, strategically, based on what he said?

SPEAKER_02

Yeah, I mean, it's definitely about the rise of the software application, which is great. From my side and my background, it's exciting to see us getting to this point. There's been so much innovation around the platform, the hardware, the models, with more and more coming out all the time. But honestly, the software that actually takes advantage of all that capability just hasn't been there to a great extent. So it's really cool to see that starting to happen. For us, we're getting excited not only that we can use AI to write software, but that we can use AI to write software about AI, and about how much faster that can go. It's a really cool world, and it's great to see all of that getting featured in his keynote today. And there's lots to talk about.

Autocomplete Is Dead. Agents Aren’t

SPEAKER_01

I'm sure we'll get into it. Absolutely. Well, let's get a little more into AI-native engineering and AI coding assistants. It's really emerged as a primary use case for driving ROI with AI. None of this is necessarily new, but it's been rapidly evolving over the last several years. Maybe catch us up to speed on what's been going on with coding assistants, when they popped into the lexicon of what we're doing here, and where we are today.

SPEAKER_02

I mean, it has been slowly escalating really since GPT-3 came along, where it started being at least reasonable to have AI helping you with those kinds of tasks. And it was one of the first things it was really good at, so even early on it was good ROI to get that in place and start using it. But you've seen continued improvement; it keeps getting better. Obviously there was the rise of the agentic coding tools that would do not just glorified autocomplete but would actually start writing all of your code for you. Vibe coding came into play. But it was really just a few months ago, when the new model from Claude came out, Sonnet 4.6 and everything that's built on that, plus new models from OpenAI and Codex, that it seemed to go right over the crest. It used to be helpful in small doses, but if you tried to use it at scale you'd really struggle. Now it's at the point where it really, really starts to make sense. And not only are people using coding assistants to write code, they're using assistants within something like Claude just to get a task done, even if they've never written code before. I was just using one today to convert an HTML file into PowerPoint, and it will just write up the code you need to do that. I didn't really have to pay much attention to it. So it's really starting to pick up speed and really change the game.

SPEAKER_01

Yeah. Well, you mentioned Claude. I mean, what other tools are out there right now that are viable products for software teams to to leverage? It seems like there's new ones popping on the scene every day.

SPEAKER_02

Yeah, I think the big guys are really starting to maintain a hold: Cursor, Windsurf. GitHub has always been the 800-pound gorilla in the room; it lost a little ground, I feel like, in 2025, but has come roaring back more recently with the agentic capabilities they've provided. Google and Gemini, and now what they're doing with Antigravity. Any of these hyperscalers that probably already have capture with an organization, that probably already have an enterprise license, are offering tools that are really good, and that's going to continue to hook you into wanting to use them. But a lot of our customers are not happy using just one. They feel like some tools are great for one purpose and others are great for another. The engineers are always very particular about what they like to use. So we're seeing multiple ones getting used. I don't know if there's going to be a clear winner that's going to dominate the market. I think we're going to continue to see a healthy number of players, just like we have today.

SPEAKER_01

Well, you probably anticipated I was going to ask which one's the leader right now. But maybe the better question is: how can organizations position themselves so they're utilizing the right tool at the right time? Do you have any type of framework we like to talk about, where we can make sure they're using the right tool for the right task at the right time?

SPEAKER_02

Yeah, it is very dependent on context, like you said. What we've done to try to bring some sanity to this is we've created our own tracking mechanism. We call it our radar tool, and it allows us to go out and rate these tools and understand where they are along certain axes: usability; governance, which is a big one for organizations, like how much control can we really have over it; how much it's using AI for autonomy. So we're grading them in a few places and then tracking what's happening. We've got this changing every time there's a new version. Honestly, every time there's an acquisition, and there's been a fair amount of that in the market, that's a reason for a re-rate on tools we've already looked at. We've already got 50-plus tools in this coding assistant ecosystem that we're tracking with this tool, just to keep up with it. But it's been great for the purpose you were talking about. If someone's looking for something for a particular purpose, maybe not the general way people would normally use these tools, we've got some great information we can search on to figure out what's the right fit for them.

SPEAKER_01

Yeah. And is it just about speed, or are there other outcomes here worth discussing as well? Oh, absolutely.

SPEAKER_02

In fact, if all you do is make your software engineers write code faster, that's not really going to help you out. There's a lot more to the software development lifecycle than just writing code. You've got to think about what happens at the beginning, when you're actually figuring out what you want, how you're going to form this product, and how you're going to specify what's going to go into it. And then also at the end, to make sure that what came out is what you want, that you haven't created security vulnerabilities or quality issues in your software. All of those things still need to happen. Now, the good news is that these tools are actually pretty good at those things as well. There are some great specialized options on the market, tools that let you do code review faster or prototype faster, and those are definitely in place. But you could almost use any of these tools for any part of it, because they're all based on models that are just so good at taking a question like that and converting it into something usable.

SPEAKER_01

Yeah. The first part of your answer starts to get into how the developer's day-to-day is shifting. I've seen you present an interesting graphic, an hourglass figure, about where the bottleneck now sits. Where does the bottleneck sit now that AI can handle a lot of the code generation for us? And what does that mean for the developer's role? What does it look like in the bigger parts of that hourglass?

SPEAKER_02

Yeah, there's a place where it is sitting, and maybe a place where it should be sitting. Sure. I think that code review, review of what's going into place, is probably a little more overlooked than it should be. Some of that extra time we're gaining, we should be spending more in that area, because some of these tools are generating thousands of lines of code in a day, a few minutes, whatever it is. It's hard to sit down, take a look at that, and make sure it all looks good. It's kind of mind-numbing to do. Again, there are tools that will help you, but it's critical that we do it. So the more you can automate that, the more you can put that in place, the more you can have humans sitting down talking together about what was done, taking a look at it, and making sure the intent and the structure and the constraints are all being considered. That's where I'd say the time should be spent. Where it is being spent is probably a little more on the quality side: not being quite careful enough, then realizing after the fact that something went wrong that you have to go clean up. But it has been interesting to hear the stories. It's hard to make a roadmap these days, because you can move so quickly that your three-to-six-month roadmap can get accomplished in maybe a month, and now you're not ready to decide which is the next place to go. There can be a lot of wasted time not knowing the next right path if you're not really thinking far ahead.

SPEAKER_01

Okay, that's an interesting angle that I haven't heard you speak about before. How do organizations guard against that, where you might be caught flat-footed because things are moving so rapidly? Is it just planning cycles? Is that where more strategy comes in?

What Devs Are Quietly Losing

SPEAKER_02

I think the key is the same thing we've always needed to do, which is to think about the future in broad swaths of what needs to be done, and then, as you get closer, start breaking those down, dialing them into exactly what you need and what will be worked on. It's been a little too easy in the past to just think about the next thing. You need to be pushing that a little further ahead, so that when your software starts getting developed faster than it ever has before, you're really ready for what that next phase might be. Again, it takes a little more discipline, takes some forethought, takes getting your executive leaders more involved in the process. They're probably used to having a backlog that's ridiculously long and looking at it maybe once a year. You probably need to get them more engaged, because things are moving a lot faster these days, and it can make a big difference if you can stay on top of that.

SPEAKER_01

You have deep roots in application development. I'm just curious how you're thinking about this overall. Does this get you excited? Do you have any kind of melancholy about what used to be, back in the day? How do you feel about this general shift?

The Rise of AI Slop

SPEAKER_02

You know, I've actually thought about this recently, and I've come to the conclusion that the fun part, to me, was never writing the code. To me it was solving the problem. It was breaking things down, figuring things out; it was accomplishing things, certainly, thinking of something and then seeing it work. The stuff that happened in the middle, personally, I didn't get a lot out of. Now, a lot of people do, and that's totally fine, because it's fun to do that kind of thing as well. And I do know there are engineers who are sad to see the way things are going, because a thing they were good at, whether it's remembering algorithms or idioms or certain little kinds of code, or learning new languages, is kind of going away. But I would ask them all to look at the joy of both problem solving, thinking of an idea for how you could solve an issue, and then seeing it come to life so quickly and being able to know: did it work, did it not work. To me, that's what's exciting and fun about it. Now, what's even more fun is the whole Jevons paradox aspect of this. The paradox is that you'd think, when something you used to have to do takes less time than it did before, that you'll just finish it up and move on to something else. But that's not what happens when the thing you're doing adds value. When it adds value, you tend to do it more rather than less. And I don't think many people have any idea what a world where building software is essentially free is going to look like. I think we will build so much software that it will become almost like documents: as many documents and slide decks as we put out there, that's what it's going to look like, except it's going to be software instead.
Every time you think about a new task you need to do, building software to do that task, even if it's never existed before, will just be natural. And we'll find ourselves doing it constantly. That's exciting, from the perspective of someone who's seen software change people's lives over and over again, to think that's something we're going to get to do as a regular task during our day. That's really cool.

SPEAKER_01

I've talked to you about this before. I'm a writer by trade; I used to be a journalist. And one of the interesting things I see out there as it relates to AI is the idea of AI slop. You can kind of tell when something's been written by AI. Does that bleed through in AI code generation as well? Oh, absolutely.

SPEAKER_02

One of the main keys to using coding assistants well is constraint. If you just go out there and tell it what you want and don't give it a lot of guardrails, or you're not specific, and I think we talked about this in another podcast, if you give it a very general task, it's not going to do a good job with it. You're going to end up constantly having to push it again and again to fix and change things. AI works best when it's given a box to work in. So when you can say: don't do this, don't go pull in new libraries, don't make massive changes to our architecture without talking to me first so we can figure out if that's necessary. The more constraints you give it, the better it's going to work. And if you don't do that, you do end up with slop, because you'll end up with 20 different ways something gets implemented in your code base, or constant repetition within the code base, where instead of trying to reuse something that's out there, it just builds it again. You have to watch out for that kind of thing. The key is giving it not only the instruction about what you want it to do, but the instruction about how to do it well.
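In practice, the "box to work in" is often a project rules file the assistant reads before generating anything. A hypothetical sketch follows; the file name and the rules themselves are illustrative only, since the exact convention varies by tool (CLAUDE.md, .cursorrules, and similar):

```markdown
# Project rules for coding assistants (illustrative example)

- Do not add new third-party dependencies without asking first.
- Do not change the public API or the database schema; propose the change and wait.
- Reuse existing helpers in `src/utils/` before writing new ones.
- Follow the project's error-handling pattern: raise domain errors, never silently return null.
- Keep changes scoped to the files named in the task; list any others you think need touching.
```

A file like this turns the "don't pull in new libraries, don't rework the architecture" guidance into standing instructions, so every session starts inside the same constraints.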

SPEAKER_00

This episode is supported by Thales. Thales delivers data protection and cybersecurity solutions to secure critical information. Trust Thales to safeguard your digital assets with advanced security technologies.

SPEAKER_01

Yeah. What can leaders do right now to make sure their teams are within those guardrails, or within the box you're talking about?

SPEAKER_02

Well, again, we're going to go back to review. The review needs to be happening by your senior engineers, and probably by automation on some level, at whatever level makes sense for you, to make sure that what's coming out is what you want. And you need to give your people time to do that step and hold them accountable to doing it. It's going to be tempting to skip, because it's exhilarating to get that stuff so fast, but they're going to need time to make sure it's done well. So leaders need to be ensuring that whatever processes are in place include that kind of rigor at both the front end and the back end. We call it the hourglass effect on the software development lifecycle: you've shrunk the middle, but instead of thinking everything's going to shrink, you need to apply the mass you took out of the middle to the front and the back, and hold your teams to that.

Tools Aren’t the Problem. Your Team Is

SPEAKER_01

Yeah. What's more important for success with these types of tools: the model, or integration into the workflows teams already have? Or do they balance each other out?

SPEAKER_02

What's more important? That's tough, because both of those are important; it's hard to even compare them. And it's not just a question of which model, it's which model in which part of the process, because not only are models good at different things, some are more expensive than others. You don't necessarily want the absolute best chef in the world to come and cut your vegetables; you're probably okay with someone who has a general idea of how to use a knife. The same applies with models. When you're doing something a little more simplistic, when you're really just implementing a set of instructions, you may not need your model to be that sophisticated. But if you're trying to solve the problem, figure out how to break it into pieces, create a good set of instructions, implement those constraints we were just talking about, that's when you go to your expert and say: help me figure this out and break it down. So think about which model to use in which situation. Some of this is driven by financials, obviously, because the more complex, higher-level models cost you more from a token perspective. But it's also just thinking about how to best solve the problem and make that work. The teams that have figured that out, I think, are the ones using it effectively. But the process question is equally important: making sure you're using the tools with a group of people where everybody agrees on how that works and what needs to be done, because these AI tools are designed to work really well one-on-one. If you want to build your own application with them, they're fantastic for that.
Where it gets really difficult is when you're trying to build something along with a bunch of other people who are also building things with very powerful tools at their disposal. You've got to figure out how to make all of that work together; it's really critical. So the more complex your software is, the bigger it is, or the more vital it is to the enterprise that it works, the more you need to be talking with everyone around the team constantly, comparing notes and making sure things fit together. As you say in every consulting situation: it depends.
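The chef analogy maps naturally onto a cost-aware routing rule: send each task to the cheapest model whose capability tier covers it. A minimal sketch in Python; the model names, per-token prices, and tier assignments here are made-up placeholders, not real pricing or a real product's API:

```python
# Route tasks to models by required capability, preferring the cheapest
# model that is capable enough. All names and numbers are illustrative.

MODELS = {
    "small":  {"cost_per_mtok": 0.25,  "tier": 1},  # mechanical edits
    "medium": {"cost_per_mtok": 3.00,  "tier": 2},  # routine implementation
    "large":  {"cost_per_mtok": 15.00, "tier": 3},  # planning, decomposition
}

TASK_TIERS = {
    "rename_symbol": 1,        # simple, fully specified work
    "implement_spec": 2,       # follow an existing set of instructions
    "design_architecture": 3,  # the "expert chef" problems
}

def pick_model(task: str) -> str:
    """Return the cheapest model whose tier covers the task's needs."""
    needed = TASK_TIERS.get(task, 3)  # unknown tasks go to the strongest model
    eligible = [name for name, info in MODELS.items() if info["tier"] >= needed]
    return min(eligible, key=lambda name: MODELS[name]["cost_per_mtok"])
```

The design choice is the fallback: when a task is unclassified, it escalates to the strongest model rather than risking a cheap model on a hard problem, which matches the "save the expert for the complex problems" point above.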

Let Agents Do the Dirty Work

SPEAKER_01

Yeah, it depends. So that's a little bit about how teams should be working with these tools, and with each other. Let's talk about what they should be working on. Is there anything right now that's out of bounds for these coding tools? What's the right type of work to give these tools versus what should still be human-led? Or is there no gray area anymore?

Citizen Dev Without Guardrails Breaks Things

SPEAKER_02

Yeah, I would say there are some languages they're not great in, if the languages are really new. From the folks who have used them, I've heard that maybe some of the native mobile languages aren't great for these tools yet; they just don't have enough examples to go from. But in general, they're there for just about anything. What should they be working on? I would almost turn it around: what kind of tool should you be using, based on the task. Let's say you've got a task that involves replatforming your code base. Either you're moving the whole code base from one JavaScript library to another, or maybe you're just trying to get it to the next version because the old version is being end-of-lifed. That is a really tedious job. Even with a coding assistant, going through and making that happen is a lot of time and effort. Maybe you should be looking at something more like an autonomous agent tool, like Jules or Devin, that can do that on a broad scale for you, and do it overnight, so you're not having hundreds of engineers spend a bunch of time trying to make it happen. The tools are certainly capable of it, but you should save the more one-on-one tools for the complex problems you want your developers to solve, and find more agentic, automated, or autonomous ways to solve the lower-level problems in your code base.

SPEAKER_01

Yeah. Well, you mentioned lower-level problems. What about low-code, no-code solutions? Where do they fit within this conversation? One of the things I asked you a while back was: does the company want even me, somebody with no coding experience, playing around with these tools? And you said, yeah, likely so.

SPEAKER_02

Yeah, absolutely. Well, if you want to get really futurist about all of this, I'm thinking there will come a time where it won't make much sense anymore for anyone to write a user interface that's meant to be used by a broad group of people. All software is going to become more and more specialized, so that it will either be tuned to you particularly, because you described it and brought it into being and it works well for your process, or to a small team of people who all do roughly the same thing. To that end, a few things need to be in place for that to happen. One is that enterprises need to start thinking about how to make their data and their governance available so that anybody can write software on top of it. You don't want to give someone who doesn't have understanding or experience full access to your database, just raw data out there to manipulate; that could end really badly. Anything worth protecting needs some kind of wrapper around it, whether it's built into your data warehouse or you've written software around it. Beyond that, once you've got those rules in place and you've defined how things get done in the system, at some point it's going to make more sense for everybody to just say: here's my job, here's what's available to me, here's how I want to work to get this done, given the rules and constraints that are there; create an application for me, and we're going to work together to get this job done. So one end is you've got to provide the platform, but the other end is you've got to have people willing to do that. You've got to have them comfortable with it, saying: I can do this even though I've never written software before.
And there are lots of cool ways out there today, not only with something like Copilot Studio, if your organization supports that, but you can go get a free account on Replit or Lovable or something like that, and just try writing something that would help you in your personal life and see how it works. I think you'll be pretty impressed by what you can do without really understanding a whole lot. Because again, when software is cheap to build, we're going to want everybody to do it.
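The "wrapper" idea, never handing raw data to a citizen-built app but putting a policy check in front of every access, can be sketched as a small role-based access layer. Everything here, the roles, tables, and policy, is a hypothetical illustration, not any particular product's API:

```python
# A minimal role-based access wrapper: software (human- or AI-written)
# calls read_table() instead of touching the datastore directly, and a
# policy check runs before any data is returned. Roles, tables, and the
# policy contents are illustrative placeholders.

POLICY = {
    "analyst":  {"orders": {"read"}},
    "engineer": {"orders": {"read"}, "customers": {"read"}},
    "admin":    {"orders": {"read", "write"}, "customers": {"read", "write"}},
}

class AccessDenied(Exception):
    """Raised when a role attempts an action the policy does not allow."""

def check_access(role: str, table: str, action: str) -> None:
    allowed = POLICY.get(role, {}).get(table, set())
    if action not in allowed:
        raise AccessDenied(f"{role} may not {action} {table}")

def read_table(role: str, table: str, datastore: dict) -> list:
    check_access(role, table, "read")  # enforce policy before touching data
    return datastore[table]
```

The point of the pattern is that a citizen developer's generated app, or an autonomous agent, only ever sees the wrapper; whether the rules live here, in the data warehouse, or in an API gateway, the raw tables stay behind a policy it cannot bypass.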

Agents Are Breaking Your Security Model

SPEAKER_01

And I think you got to a little bit of this very early in the episode, when we were talking about what you're seeing here at GTC that's getting you excited. But what are you seeing, what have you heard, that you think is accelerating our motion toward that future, where everybody can get in and there aren't going to be a lot of broad UIs anymore?

SPEAKER_02

Yeah, well, the whole OpenClaw concept has got everybody excited. And it's really great to see NVIDIA stepping up and saying: we see the value of something like this, and we're going to come in and make it work for enterprises. That was key. If listeners aren't familiar with OpenClaw, essentially it does everything your normal LLM would do, except that it works a lot more on its own. It will watch things to see if they change, it will do more research to try to get something done, it will go get the access it needs to whatever system in order to accomplish your goals. So it's a lot more active than what we're used to, where chatbots are a little more passive: you have to tell them you want something, they'll go do it, give you an answer, and then sit and wait. OpenClaw is not going to sit and wait. You give it a job, and it's going to keep doing it, even if it's something it's never really done before, watching your stock prices or whatever it is. And that's incredibly valuable. When people thought of agents originally, I think that's what they were thinking of: something I can set off to go do a job, that will let me know when it needs me, but that will go do those things while I just expect they're getting done. That way of having a software platform to work on is incredibly powerful. And if we're able to put the kind of secure wrapper around it that Jensen was talking about today, if that works and gives us what we need, that's going to be huge. It's going to be yet another way people can interact with AI without having to have a lot of experience writing code.
They know how to use chatbots; it's really limited only by their imagination and by what that software has access to and can actually do. So yeah, I think that's pretty exciting for the advent of how we can start using AI truly in a native way to do just about any job.

SPEAKER_01

Yeah, how does that change the equation from an enterprise architecture, or just an enterprise strategy, mindset? You mentioned security. If you can get that security wrapper right, and you go a little deeper, is it just identity and access and things like that? How does the strategy change overall when you have true agentic capability? It makes them even more critical, right?

SPEAKER_02

It's not just about security by obscurity anymore, the idea that nobody knows where this data is, so nobody's ever going to try to get to it, so we'll secure it on some level but not worry too much about it. Something like OpenClaw is going to go find it, and it's going to go get it whether it should or not. So you really need to think about how to secure your data, how to put those bubbles, those constraints, around it so it's being used properly, so that the roles, the RBAC kinds of nuts-and-bolts things organizations should be doing, are actually happening. And again, if you think about the time we're saving by writing software quickly, and where to spend that time, I don't think the answer is just that everybody takes a break. There are so many of these things we've let slide for so long. We've already seen it with data: organizations have always known they need to pull their data together, clean it up, and put rules around it, but they've never really had the impetus to make it happen. It's the same thing for security. It's time to think about how to really secure this, so that it feels pretty much impenetrable, because tools like OpenClaw are going to find their way in even if the request is innocent, even if it's not malicious. So it's critical to spend some of that extra time thinking about those problems. Yeah.

SaaS Is About to Change. Fast

SPEAKER_01

We're coming up on time here on this episode, but you know, let's just pretend we're here a year from now. What do you think some of the main themes are gonna be? Are we still gonna be talking about some of the same challenges that we're experiencing today, whether it relates to adoption or tooling or anything like that? Or are we gonna move beyond those and we'll have a whole new set of challenges to break through?

SPEAKER_02

I can't help but think we're going to be talking about the rise of the citizen developer and how there's just a lot more of that going on, as well as the way SaaS is changing. Jensen mentioned it today: they're going to become AaaS, agentic as a service. And that's really going to need to be where they go. They have to stop thinking about how to create a user interface that works for everyone, because all that does is create bloat and frustration. They need to open up what their applications do so these tools can access what they need, and individuals can start building what works for them, still dependent on everything the vendor has built and all the criticality of what's underneath it. All of that is worth something. It's not that SaaS is dead; it just needs to adjust. So yeah, I think we'll be talking more about that next year.

This Isn’t About AI. It’s Governance

SPEAKER_01

All right. Well, we'll hold you to it next year when we talk about it. Nate, thank you so much for the time. I know GTC is a busy time for everybody, you certainly as well. So thank you again. A pleasure as always, Brian. Thank you. All right. Anytime. Okay, thanks to Nate for joining. AI may have compressed the act of writing code, but it has raised the importance of judgment, governance, and review. The enterprise value is not in producing more software for its own sake; it's producing the right software, safely, at a pace the business can actually use. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker and Kara Kuhn. Our audio and video engineer is John Nomblock. My name is Brian Felt. Thanks for listening. See you next time.
