AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption is complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
The AI Dev Tool Showdown
Artificial intelligence is reshaping software development faster than any shift in the last decade. In this episode of the AI Proving Ground Podcast, WWT’s Nate McKie and Andrew Brydon break down how AI coding assistants are transforming engineering teams — from productivity gains and data quality improvements to the emerging risks leaders can’t afford to overlook.
They explore which tools actually matter, how to evaluate them without chasing hype, and why agentic AI will redefine the software lifecycle by 2026. The message is clear: great engineering fundamentals still win, but AI now amplifies everything — good and bad.
If you’re an enterprise leader navigating the future of software delivery, this conversation gives you the frameworks, watch-outs, and real-world insight you need to make smart, practical decisions.
Support for this episode provided by: Cloudflare
More about this week's guests:
Nate McKie has loved computers since he was a child. With a father who worked at Radio Shack, he spent countless hours in the store playing on machines and teaching himself the basics of programming. That early fascination carried into adulthood as he pursued a B.S. in Computer Studies and built a career spanning more than 25 years overseeing services in software and automation engineering. Today, he serves as a senior-level AI Advisor. His focus is helping customers understand how to best apply AI technologies to achieve their business goals—whether through hardware, software, or data strategy. Nate is thrilled to use a lifetime of experience building impactful, innovative solutions to guide others in making confident, well-informed decisions on their AI journey.
Nate's top pick: Inside the AI Coding Revolution: Tools, Tradeoffs and Transformation
With more than 25 years of experience designing, building, and delivering technology solutions and over 15 years in leadership, Andrew Brydon brings a passion for driving meaningful outcomes across a range of industries. As Managing Director in the Digital team, Andrew leads a group of Architects, Creative Technologists, and Technical Consultants with a broad range of experience across software analysis and design, architecture, custom build, and digital transformation.
Andrew's top pick: AI Coding Assistants: Enterprise Market Landscape and Tools
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
From World Wide Technology, this is the AI Proving Ground Podcast. For years, conversations about AI in the enterprise have felt theoretical. Big visions, big platforms, lots of promise. But over the past year, something far more practical has happened inside many organizations, and probably yours too. Your software engineers are going all in on AI. Coding assistants have made their way into everyday workflows. Models started writing functions, refactoring legacy code, and generating tests. And almost overnight, software development, the engine of your digital business, began to speed up. Gartner predicts that by 2028, roughly 90% of enterprise software engineers will rely on these tools, and Nate McKie of World Wide Technology thinks that number may already be reality in many teams. Which means the question for IT leaders has shifted from should we use AI for software development to how do we choose the right tools, measure real ROI, and scale this safely. These are the questions our guests work through every day. Nate leads engineering teams that are already seeing meaningful productivity gains. And Andrew Brydon, managing director of WWT's digital practice, oversees product and software delivery in an environment where new tools are appearing faster than most enterprises can evaluate them. So today on the AI Proving Ground podcast, Nate and Andrew help us make sense of this moment. What's real, what's hype, and how enterprise leaders can harness AI-powered software development without getting left behind. So without further delay, let's dive in. Nate, welcome back to the show. How are you today? I'm doing fantastic. Glad to be here. Excellent. We're glad to have you back. And Andrew, welcome. How are you doing today?
SPEAKER_04:I'm doing great. Yeah.
SPEAKER_02:Excited to be here. Awesome. Well, we've got a lot of exciting stuff to talk about today. We're going to be talking about AI as it relates to AI-powered software development, coding assistants, and the future of what things hold there. Nate, I'll start with you. You know, we've heard you mention a couple times that these coding assistants seem to be the quickest, easiest way to generate some ROI on AI, which is a fantastic start for any organization to start getting to that flywheel. And I've read Gartner reports that have said, you know, by 2028, I think 90% of enterprise software engineers are expected to adopt these tools. So clearly we've moved beyond the should we consider AI-powered software development and into the, well, how do we assess these tools, which tools are right, and so on and so forth. So, you know, if AI coding assistants are becoming more universal, how do we start to think about what tools are right for an enterprise setting?
SPEAKER_01:Yeah. And you know, with AI being the fastest adopted tool of all time, generative AI, ChatGPT, um, it's been interesting to see that enterprises have not been on the same curve, right? So you've gotten the downloads and everyone has used it in some fashion, but so many that we encounter just haven't hit it from, you know, what they're doing at work every day. But that really hasn't been true on the software engineering side. Software engineers have latched on to coding assistants very early, and that's continued to grow and be a part of what's going on. And so, as you mentioned, Gartner's saying, you know, 90 plus percent by 2028. And I think that's probably conservative. I think we're already seeing 90 plus percent adoption on some level by software engineers. So those organizations that are out there today and are still waiting around or maybe haven't even thought about it are really starting to fall behind. Because if you think about, you know, what drives this innovation, what's driving AI innovation in general, it's software. And if we can develop software more quickly with the tools that we have, that's just gonna make that innovation go even faster. And so if your engineers are not taking advantage of that, then ultimately you are gonna be left behind in what you're able to do. So I hope that everyone listening out there, if they're listening to this podcast, they're probably already thinking about this if they haven't moved on it already. But if they haven't, it really needs to be high on their priority list. Yeah.
SPEAKER_02:And Andrew, we've talked about it before as well. There's so many of these tools out there on the market. Um, should organizations just be using all of the tools, or how should they start to wrap their heads around which ones are right for their specific environments or teams?
SPEAKER_04:Yeah, I mean, they definitely shouldn't be using all of the tools because I mean, we have teams that spend a good chunk of their time just looking at these tools. And frankly, it's overwhelming. There isn't a week where there isn't a major announcement. Even within the tools, probably most weeks there's a new model, a new combination, a new pricing scheme that comes with the tools. It's a lot. Um, so I think it's important to just think about this in categories. What is the class of what this tool does, and not get too caught up in the details of who's where this week? Because if you're chasing, uh, you know, Google's a great example where Google comes out with a new product and everyone's rushing to it and checking it out. And it is a really good product, it's interesting, but you can't, as an enterprise organization, switch tools that fast. There's a certain amount of learning, even between tools that seem to do the same thing. There are subtle differences in how they configure their rules, how they hold on to memories. Um, and these are all great, interesting things, but you could spend your whole time playing with the tools and understanding them. So we're really taking the approach as an organization of let's have a small group of people who can really lead the charge, share these big messages, look for tools that might be differentiated or might be worth considering. And then how do we have a very mindful way to scale this up across the organization?
SPEAKER_01:Yeah, Andrew, I'm curious. Um, you know, when you have our customers coming to our teams and saying, we see these tools you're using, what should we do? What's out there, what should we adopt? I think you're right. You know, it's hard to just zero in on one thing because they're constantly leapfrogging each other. At the same time, you don't want to sit around and wait and see who the winner is going to be, because you really need to be using these tools today. What advice would you give those customers about how to proceed?
SPEAKER_04:Yeah, I mean, I think as of today, there's a large category of leading products. Uh, so obviously we're big fans of Windsurf. I think Cursor is in that category. I think Claude Code is quickly getting into the mature enterprise product category. Um, and there's a group behind that are like, yeah, this is interesting, they have some experimental features; six months from now, it could be a decision. So I think you have to be comfortable with saying we have to make a right decision for now. We have to pick one that's in that leading class that has the sort of security, stability, licensing, data protections we need. But within that class, don't get too caught up on whether you pick product A, B, or C. And we haven't seen people get in big trouble as long as they're making sensible first decisions. Um, and then have the confidence to stick with it and then have a plan in the background to constantly validate that decision.
SPEAKER_01:Yeah, yeah. It may sound anathema to a lot of IT teams, but we've even seen customers pick like two of them, yeah, you know, and say, especially if they're slightly different categories of tools, let's have a couple that we're gonna let our engineers choose from. And that way we're not gonna fall too far behind. We're making, kind of, you know, laying our bets out in a couple different places.
SPEAKER_02:No, that makes sense. And, you know, Andrew, you gave a little bit of a lay of the land in terms of where we're seeing leaders, where we're seeing, you know, tools that are falling maybe slightly behind. But to your point, these things change constantly. Maybe, Nate, the better way to approach this, um, and you can always go to WWT.com for those out there, we have an assessment where we have a little bit of a map of what tools are doing what, but what's the criteria here? How should organizations think about measuring what's right for their ecosystem? Maybe some bullet points there.
SPEAKER_01:Yeah, absolutely. There's actually a research article that we put out not too long ago that very comprehensively covers a lot of these categories. But you know, just to hit on some of the elements: for one, you're gonna want to look at the maturity of your staff. If you've got a group that's mainly, you know, more junior engineers, and you're sort of used to getting things done by having a volume of people versus a few very experienced people, that's great. You're probably gonna want to provide a tool that is more tailored to helping them do their job versus expecting them to have already thought of everything and exactly what the tool should do. And then use those, right? Recognizing that you're still gonna need that senior help, or at least team help, to review the work that's being done. These tools may seem like magic. They're not magic, they're not generating perfect code in every situation, but that's absolutely one thing you're gonna look at. Another one you might want to look at is, you know, the security of your IP, of what it is that you're creating. Um, if you're using these higher-level agentic tools, they're going to send your code out to the cloud. Now, you can have protected areas where you can make sure it stays within your private cloud, but it's gonna be out in the cloud. It's not gonna be something that you can generally just buy and install into your data center or on a computer sitting by you. So you need to be aware of that. And if that's a problem, there are some options for you. You limit yourself pretty severely in that case, but there are some options that allow you to run on-prem. And so you might want to look at that, or look at the kinds of code that might be going out and say, well, when we're working on this, maybe we won't use these assistants, versus when we're working on this kind of project, we will. So that's a couple. And then also, you know, are you looking for your work to be something that's more creative, something that you're building new, a greenfield kind of application, where you want to give a lot of freedom and capability and need the engineer to be working very closely with the tool? Or is it more just maintenance and refactoring? We've got an older system, we need to move into a more modern platform. Actually, the more it's something that's maintenance or, um, more typical ongoing kind of work, the more you might want to go with something agentic, because the agents are gonna handle that kind of task really well, versus trying to work on something brand new, collaborate with a whole bunch of people, and then figure out how to have something consistent that works for you.
SPEAKER_02:Yeah, Andrew, anything to add there? It almost sounds like there are a lot of readiness considerations here, the idea being that, depending on the level of readiness, you could have two companies using the same tool, but depending on their maturity, they'd get much different results.
SPEAKER_04:Absolutely. Or even, and I think Nate was kind of getting to this, even different parts of the organization that are in different parts of the product life cycle. So one of the things I think about is, you know, we've said this for a long time: we don't segregate our engineers away from our testers, away from our designers. We think about holistic teams, and you almost have to think about a team of AI tools as well. Um, so in a typical product team, we're spending a lot of our time in the visual design phase. Like, what does the UX look like? And really the tools that are strong on that are related but different to the ones that are the best for coding. But what's interesting is they're all starting to bleed over into each other. So what a year ago may have just created you a mock-up using AI, and you're like, this is great, now will produce working code that's maybe suitable for use as a prototype. And so thinking about how to blend those tools together, and a lot of the same things are happening with QA tools. Whereas in the past, you might have thought about what's the right AI QA tool, now you have to think about, well, am I going to have six different tools on my team, or is there enough crossover that maybe I can blend three of them across the team? And then I may have another team in a different part of the organization that's in a completely different place, where they're like, this is a mature product, our main interest is how do we keep everything patched? How do we keep those zero days under control? So they'd be using a very different tool to solve for that problem.
SPEAKER_01:Yeah. And you're reminding me, Andrew, of something I've heard you talk about a lot, which is, um, you're not benefiting yourself by optimizing one part of a process. So if you just make one aspect of what you're doing go really fast, you're just gonna create a bottleneck down the line. So you were mentioning tools not just for software engineers, but for designers, for QA. You really need to be thinking about that full life cycle and making sure that you have got tools in place and optimizations or automation or ways to speed everyone up. Otherwise, you're just gonna end up with a big pile in front of someone's desk and not making any more progress than you were before. Yeah.
SPEAKER_02:Andrew, is that a big barrier that you're seeing kind of in the real world right now?
SPEAKER_04:Yeah, I think it was, and even for us, it was very much a learning experience. So one of our first projects to embrace these tools discovered immediately they had to have their engineers go and help test the product two days a week because they were moving so fast. So, what we did is we leaned into that and said, how can we enable the QA to be using these tools as well? Uh, another great one, and a very classic story in software development at this point, is deployment and automation, what we've kind of called DevOps over the years. But it's no good generating all this code if you can't get it out. Uh, certainly we see customers who, for very good reasons, have manual change review boards and gated processes to get things out the door. All of these things are ripe for automation with AI. We're not going to have enough people to sit there and manually assess the security on the code that's coming out of all these tools. So as soon as you drastically speed up the coding part of this, security and deployment in general is a huge part of the conversation you have to have to say, how do I keep pace with this machine that's moving faster and faster?
SPEAKER_02:Nate, we mentioned ROI at the top of the episode here. As we move and get more mature with these tools, how do you articulate ROI to stakeholders that need to be updated there? Is it shifting, or is it kind of always just productivity, code that gets pushed out?
SPEAKER_01:Yeah, it's interesting because there's been kind of an industry standard across multiple surveys that have talked about productivity of engineers, and it comes out at around 20%. So you consistently see that across these kind of broad surveys. And 20% maybe doesn't sound like a huge amount when you're talking about AI and it feels like, oh, it just does things instantly. Why isn't it, you know, 10 or 20 times? Yeah. But think about a tool that you can give to a whole class of people in your organization and see a 20% productivity increase across the board; there's just not many tools out there like that. So it's actually pretty stunning in the amount of difference that it can make. At the same time, there's ROI that's not obvious. Um, and I know Andrew has some good stories around this, of things that the tool can do that just either weren't possible before or do things at a magnitude that it's hard to imagine, where you really do start to see some of that. Maybe, Andrew, you could share a story or two of what you've seen out there.
SPEAKER_04:Yeah, I mean, sorry, I got a little lost over there. Um, yeah, it's interesting, the unintended consequences of speeding up. So I think for anyone who's read like the DevOps Handbook or all these classic software engineering texts, it's probably not a big surprise to think about deployment. Uh, from my sort of more software background, I wasn't thinking about data analysis as a big ripe area that would be integrated into the software process. So, what we found somewhat accidentally was, give a team tools, give a motivated smart team tools, and they'll figure out really interesting ways to use it. So we had a really messy data analysis problem with one of our customers where millions of records come in; they're e-commerce records, they're messy, they're poorly formed. And we knew we had to eventually write some SQL to transform this stuff as it comes in the door, so our nice shiny new system could understand it. And certainly we have people who are really good at that, who can think through all the corner cases. That was a great example where in this case we gave someone Windsurf. Not the perfect tool, but it was the tool they had. And underneath it had an LLM. And by using that LLM, and using the fact that these tools are actually pretty good at ingesting a lot of data, we were able to very quickly figure out that we'd missed a major corner case. And that in itself was a great finding to go, you know what, the tool got us 95% of the way there, but there's 5% of the data that it actually couldn't handle. And what we discovered is we were missing data. We would have found that out three weeks later if we were doing that manually. So being able to move very quickly has these business consequences as well. The business had to be ready, they had to be able to pull new data. But the upshot of this is, yeah, in two days, this four-week planned process was essentially complete to over 99% accuracy on the data ingestion, for a software team that hadn't set out to solve this problem.
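Editor's note: the episode doesn't walk through the implementation, but the pattern Andrew describes, letting the LLM draft the transform while a deterministic validation pass surfaces the records it can't handle, can be sketched roughly like this. The field names, rules, and input file below are hypothetical assumptions, not details from the customer project.

```python
# Hypothetical sketch, not the customer project: the LLM drafts the SQL
# transform, while a deterministic validation pass flags the records the
# transform can't handle, so missing data surfaces in days instead of weeks.
import csv
from typing import Iterable

REQUIRED_FIELDS = ["order_id", "sku", "quantity", "unit_price"]  # assumed schema

def validate_record(row: dict) -> list[str]:
    """Return a list of problems found in one incoming e-commerce record."""
    problems = [f"missing {field}" for field in REQUIRED_FIELDS
                if not (row.get(field) or "").strip()]
    try:
        if row.get("quantity") and int(row["quantity"]) <= 0:
            problems.append("non-positive quantity")
    except ValueError:
        problems.append("quantity is not an integer")
    return problems

def partition_records(rows: Iterable[dict]) -> tuple[list[dict], list[tuple[dict, list[str]]]]:
    """Split records into clean rows and rejects so corner cases surface early."""
    clean, rejects = [], []
    for row in rows:
        problems = validate_record(row)
        if problems:
            rejects.append((row, problems))
        else:
            clean.append(row)
    return clean, rejects

if __name__ == "__main__":
    # "incoming_orders.csv" is a stand-in for the messy feed described above.
    with open("incoming_orders.csv", newline="") as f:
        clean, rejects = partition_records(csv.DictReader(f))
    print(f"{len(clean)} clean records, {len(rejects)} rejects to review")
```

The point of the reject bucket is exactly the finding in the story: the rows the transform can't handle are the ones that reveal missing data, and they show up on day one rather than three weeks later.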
SPEAKER_01:That's really cool. I love that story. And so, yeah, there's all these magical things that can happen. Um, but I think the problem around the ROI question that a lot of executives run into is they instantly start thinking about how it can translate to staff reduction. Yeah. Right. Um, oh, great, I've got a tool that makes everyone more productive, so I don't need as many of these kinds of people. Um, and you can certainly think that way. And you could probably, you know, substitute these tools and get rid of part of your staff and see, more or less, the same kinds of productivity you were seeing before. But you've got to remember that this tool is available to everyone, right? So if you're doing that, you're essentially doing the same thing as just cutting your staff in a world without coding assistants. If it was acceptable to get less total output from your staff, why didn't you just do that before? You're really going to need these tools and the same staff that you have today to be able to work at the same level as your competitors. So rather than thinking about, you know, how many people can I cut from this group, the question is, what can we do today that we haven't been able to accomplish before? Because that's the real magic of these tools: it lets you do things like what Andrew was describing, dive into problems that looked too hairy or too difficult in the past, um, come up with new features that you hadn't even thought of before, because now you have not just a code completion tool, but something you can actually have a dialogue with that can help you generate ideas and do things more creatively. So there's a real opportunity there of seeing massive productivity. And it's not just around, you know, let's have fewer people do this.
SPEAKER_02:Yeah, we're not necessarily idea constrained, we're execution constrained. Absolutely. I can't take credit for that; I heard Jeetu Patel say it at the Business Innovation Summit. But, um, in any case, it's certainly a risk that an executive team or an organization might take that approach where they're potentially slashing staff. It's a risk, um, something to think about. Nate, what other risks arise when we do use coding assistants? I know in that research report that you and Andrew helped publish, it was things such as, you know, over-reliance or getting too much into vibe coding. Maybe a little bit more there.
SPEAKER_01:Yeah, yeah. I mean, what can definitely happen is, first of all, you know, just kind of a laziness setting in, right? Um, an engineer sees a tool doing all this work that they were having to do before, and it could naturally turn into, you know, tell the system to generate something, oh, it didn't quite do it right, tell it again, oh, tell it to fix those bugs, and your day becomes just mindlessly pressing the try again, try again, try again button versus being creative, thinking about the problem, thinking on a broader scale. The danger is that these coding assistants suffer from the same limitations that generative AI itself does, which is that it tends to be a very one-on-one tool. Um, it's hard to, you know, how many people are running a ChatGPT session where several of them are doing something at once, right? That's not really how the tool is built or made. It's working on what you asked it to do. It's not thinking about the rest of your team. So you need to be, as the engineer, the one that's thinking about the rest of your team, thinking about what are the people beside me going to do? How are they gonna approach this problem? Am I creating something that's consistent with what they're doing, or that's gonna be in conflict with it, that we're gonna have to work through? So you have to be turning your brain on. And you can't just be expecting that the tool is gonna magically do all your work for you and move forward, just like anyone using AI today would do. You know, as a content generator, you can't just have AI generate all your content. That's slop, right? So the same thing applies for coding. You can't create slop, or you're going to end up not being able to accomplish the purpose you were looking for in the first place.
SPEAKER_00:This episode is supported by Cloudflare. Cloudflare enhances internet applications' performance and security, ensuring fast and reliable user experiences. Protect and accelerate your websites with Cloudflare's global network.
SPEAKER_02:Well, I mean, I know there's other risks to get to, but Andrew, how do you combat that laziness? I mean, obviously it's human in the loop, but, um, what about as we introduce agents, and we hear about, you know, software developers or just anybody for that matter managing teams of agents that can do this? You're potentially starting to talk about collaboration. Is that the right path or the wrong path?
SPEAKER_04:I think it's very interesting, and the agents are kind of extending this idea. What I've seen a lot with AI is if you give it something very general, it will do a huge amount of work, it will run out of context, and it will blur the middle. So we've all seen this. If you're in a picture AI and you say, create me a picture of something very explicit and very targeted, it tends to do a pretty good job. If you tell it to go do something very broad with lots of detail, when you look closely at that picture, what you find is it's taking a lot of shortcuts on the detail. Kind of like my kids. It's like, I'll do it at a top level, but if you look really closely at the details, I've kind of got bored and started to rush. I think as we're building with these tools, if you give it something very general, like, hey, go build me an e-commerce platform, you're gonna get something that at a very superficial level looks like an e-commerce platform. And so, really, to me, what's intellectually amazing about doing these things is that a lot of the time you're thinking about how do I chop this up into the right-sized piece that the LLM is gonna be very effective at solving. So I'm not gonna tell it to go build the product, I'm gonna tell it to build a set of endpoints that do this one well-thought-out thing. And then I'm gonna come back in a different session, maybe even a different tool, and I'm gonna say, now build me the best UI that can utilize that API. That's a lot more targeted and it's almost always way more successful. So whether you're thinking about agents or working locally, now I'm thinking about how do I become the orchestrator of this pattern. Maybe I have six agents and it's great that they can do all these things, but now I've got to think about which agents are good at building those APIs, which agents are good at looking into my data and understanding my data structures and how I'm structuring my database, and which ones are going to create me something visually stunning. Um, so a lot of the art and the skill, and frankly, what makes this a lot of fun to do, is now I get to think about how I'm decomposing the problem. But kind of to Nate's point, if your decomposition of the problem is go solve the problem, it's kind of boring. It's not much fun, and eventually it becomes pretty frustrating, because the LLM is not good at that. Um, no one's good at being directed with vague goals. And so you just kind of get into this backwards and forwards of go do something vague, you didn't do what I wanted, try again. But decomposing it well is actually kind of fun and inspiring and a really interesting engineering problem, because I think our best engineers, that's the way they've always thought. So that's why we see our senior people embracing these tools. Oh, that makes sense to me. That's the way I always decompose the problem. And so they tend to be very successful with these tools.
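Editor's note: to make the decomposition idea concrete, here is a minimal sketch, purely illustrative and not from the episode: one vague prompt versus a sequence of scoped tasks, with a placeholder run_agent function standing in for whichever assistant or agent is actually being driven.

```python
# Illustrative only: one vague prompt versus a decomposed sequence of scoped
# tasks. run_agent is a placeholder for whatever coding assistant or agent
# API you are actually driving; it is not a real library call.

def run_agent(task: str) -> str:
    """Placeholder: send one scoped task to an assistant and return its output."""
    raise NotImplementedError("wire this up to your coding assistant of choice")

# The vague version: superficially plausible output, weak on the details.
VAGUE_PROMPT = "Build me an e-commerce platform."

# The decomposed version: each step is a well-bounded problem the model can
# actually solve, and each step builds on the artifact produced by the last.
SCOPED_TASKS = [
    "Design a REST endpoint set for a product catalog: list, search, detail. "
    "Return an OpenAPI spec only.",
    "Implement the catalog endpoints from that OpenAPI spec, with input "
    "validation and pagination.",
    "Write integration tests covering empty results, invalid page sizes, and "
    "missing product IDs.",
    "Build a minimal UI that consumes the catalog API; do not modify the API.",
]

def orchestrate(tasks: list[str]) -> list[str]:
    """The engineer's job shifts to sequencing, then reviewing each scoped result."""
    results = []
    for task in tasks:
        results.append(run_agent(task))  # review each result before moving on
    return results
```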
SPEAKER_02:Yeah. When you're talking about senior developers there, what about junior developers who are coming in, or the future workforce that's coming? I mean, chances are they're learning with these tools already, but there's got to be some risk that they're just not learning maybe some of the fundamentals or some of the stuff that you would learn from a junior-senior kind of relationship.
SPEAKER_01:Yeah, exactly. The way that junior engineers learn and what they learn is going to change dramatically, because the path used to be: understand basic programming, then understand how to do that in a language, then understand what some of the algorithms and libraries around that language can do and how you use them, and then start building systems and thinking about how systems are put together. And it's almost like you cut out that whole part about the language, where you go from I need to understand how software works in general, good, but then I need to understand how big pieces of software are architected and talk to each other. And, you know, just what Andrew was talking about earlier, about how an API makes sense over here, you know, and having another system do this job rather than reproducing it here. Like, you've got to be thinking about those things. And that's gonna take coaching, that's gonna take trial and error, you know, and it's critical for organizations who are releasing these tools to their teams to recognize that there's gonna need to be some more collaborative time between their engineers, especially the senior and junior engineers, to help them see: yeah, okay, yes, you had a task to do, you used this tool to do it. The way you did it does technically work. But here's what is not quite right, here's what doesn't quite fit into our system. Here's where the LLM fooled you into thinking this was a good way to do this, you know? And here's how we really want to make sure that's done. And it becomes so much about, just like Andrew said, how to constrain the task, how to constrain the system, so it's not, you know, reinventing the wheel every time it's doing something. It's using what it already has versus trying to recreate it. So it becomes about understanding where those constraints fit in, how the pieces fit together, how to get your piece to work properly, and how to communicate with the LLM in the right way to do that. So it's a very different learning path than what you would have done, you know, 10 years ago.
SPEAKER_04:Yeah. Yeah. I think we've been leaning into that model on my team, and it's been very interesting, because traditionally I've had very senior people, architects, senior architects, and for the first time in a long time, we're actually bringing in junior people who are AI natives at this point. They were spending their time at college writing websites on the side using AI. Like, we don't have to convince them on the tools, we don't have to tell them they're gonna use the tools. It's just kind of assumed. What's been great is, for the senior people on the team, we've deliberately paired them up. And it's actually been very rewarding for the senior people because they get to have those coaching and mentoring moments. Hey, these are the architectural principles. Even if it did get it right, this is why it got it right. This is the interesting thing it's doing that you're just taking at face value, that it's building that way. Um, so I think the junior people are really appreciating it. I think the senior people are getting a lot out of this way of working.
SPEAKER_02:That's interesting. Not necessarily something I would have thought of off the top of my head. Andrew, how else is that changing the team dynamics, now that you're at a point where you're having juniors come onto the team who have those AI skills natively built in? Is it creating any different ripples that maybe surprise you?
SPEAKER_04:Um, I mean, really, it's been that positive ripple of it's invigorating. It's great. It's great to have people on the team who don't have the bad project experience from 10 years ago and aren't going, oh, we better not do that because I tried it once and it didn't work out. And so it's really good to have people who are coming in. Um, one of the things I love about these tools is it makes trying something cheap. So, you know, we've always talked about proof of concept and prototyping, and it's a great and valuable thing to do. I think we've all fallen into the trap of spending so much time doing it that that's the thing that ends up going to production.
SPEAKER_03:Yeah.
SPEAKER_04:It's like we're gonna go, we're gonna build the POC, and six weeks later you're like, man, we just spent six weeks on this thing, we better ship this. Now we're in a world where I can turn to the team and I can go, wow, you've got some really interesting ideas there. Everyone spend one day and go build the prototype, and then we'll pick which one of those three ideas is the best idea. Um, so it gives you the luxury of time. And it's a weird thing. We talked a little about what this means for team configurations and how everyone has a backlog. But a thing I've noticed is, for the best engineers, it gives them the luxury of thinking time. Um, I think we've all pulled cards or worked that kind of story or feature where you get to the end of it and you're just exhausted. You're like, it took twice as long as we estimated, and it had a pile of bugs, and I finally got it fixed. And all I want to do is get this thing committed and into the build pipeline so I can move on to the next card. And not that we won't have work like that. But what's nice about the AI is often the feeling is, man, that code got written a lot quicker than I expected. Maybe now I can go back and ask the LLM, hey, are there better ways to do that? Or did I miss anything? Or did I miss the corner cases? And when we talk about quality, that's one of the stories we're hearing a lot: it's not that the happy path was necessarily overwhelmingly faster, but I got enough time that I could actually think about the corner cases and get in front of the defects, and, you know, run an informal security audit to go, you know what, I don't really have time to go back and check the permissions on every endpoint. Would you mind just examining my code base to make sure I haven't accidentally leaked the wrong data out? And those are the luxuries that we just don't have when we're all heads down on keyboard every minute of the day. So it's sort of opening up a space for quality here, and I think the best people are taking that opportunity to go, you know what? I can actually add some polish to this thing. And often that's the fun part of working a story.
SPEAKER_02:Well, that's the fun part. I don't mean to take us to the unfun part, but you talked about data leakage. So, Nate, I'm gonna ask you what other security concerns might exist. I know there were several security points in the research paper that you had talked about earlier.
SPEAKER_01:Yeah, I think one of the biggest ones that we continue to hear from customers is in relation to model context protocol, MCP interfaces. Model context protocol, just briefly, is a way for an LLM to talk to another system and be able to go and get what it needs and inject that into its thought process, its answers, how it's concocting whatever the outcome is that you're looking for. It's great. I mean, it's basically just a format, an interface. But what's interesting is that MCPs came about, or saw their main use, basically with coding assistants, because very often as you're creating a system, having a way to go and either query another system that's out there that's got information that you need, or maybe even just trying things out internally to see what your database says about what this data looks like and incorporate that into your response, all of that's tremendously useful. But at the same time, it's a pretty lightweight protocol. It doesn't have a lot of heavy security capabilities built into it. So if you're just sort of wide open, able to talk to any MCP server that's out there with the coding assistants that you're using, you could very easily leak IP, you could leak data, or, in the most devious case, you could poison your own code base or LLM with something that comes back to you that's really not appropriate. So we've seen a lot of customers saying, how do we work with this? And the tool vendors are seeing this, and we're definitely seeing some ways that they are allowing, you know, basically a whitelist of MCP servers as a start, or maybe some ways of evaluating what's out there, or watching the information that's going across the wire and essentially making sure that nothing unsafe is happening. But that's just another factor that you need to take into account with these tools.
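Editor's note: the episode doesn't prescribe an implementation, but the whitelist idea Nate mentions can be sketched simply: a check that only lets a coding assistant register MCP servers a security team has approved. The server names and config shape below are assumptions for illustration, not any vendor's actual format.

```python
# Hypothetical sketch of the MCP allowlist idea described above: before a
# coding assistant is allowed to register an MCP server, check it against a
# list the security team approved. Names and config shape are illustrative.

APPROVED_MCP_SERVERS = {
    "internal-docs",      # read-only access to internal documentation
    "jira-tickets",       # project tracker integration
    "postgres-readonly",  # read-only queries against a non-production replica
}

def filter_mcp_servers(configured: dict[str, dict]) -> dict[str, dict]:
    """Keep only MCP servers on the approved list; report anything rejected."""
    allowed, rejected = {}, []
    for name, cfg in configured.items():
        if name in APPROVED_MCP_SERVERS:
            allowed[name] = cfg
        else:
            rejected.append(name)
    if rejected:
        print(f"Blocked unapproved MCP servers: {', '.join(sorted(rejected))}")
    return allowed

if __name__ == "__main__":
    requested = {
        "internal-docs": {"command": "docs-mcp"},
        "random-web-scraper": {"command": "scraper-mcp"},  # not approved
    }
    print(filter_mcp_servers(requested))
```

In practice the leading assistants are adding their own allowlisting and traffic-inspection controls, as Nate notes; this sketch only shows the shape of the policy check.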
SPEAKER_02:I mean, Andrew, as you see this kind of progress unfold in real time, are there guardrails or frameworks that we've put in place to kind of tackle some of these security issues?
SPEAKER_04:Yeah, it's definitely a challenge. And I think it's another version of one of the things we've talked about: when you go faster, when you have these tools, every part of the pipeline is under pressure. So, you know, maybe a few years ago we would have set up our build pipeline, we would have put some static security tools in that build pipeline, and that would have been enough. I think now we're really, again, turning to AI and saying, what can we do? Is there an agentic LLM that can sit there on every pull request and actually do that security evaluation? Now, obviously, you don't want to have MCP putting a pile of bad code into your code base. But if you are in that situation, it'd be nice to have a different LLM sitting further down the pipeline that can detect it well before it's deployed to a system. So it's a challenge and it's an opportunity at the same time. And I think this system-level thinking becomes more and more important. And as always, security is part of this whole system. Um, you know, robustness, I think, is another one. Um, error tolerance is something that has always been a sort of really advanced-level engineering skill. It's one thing to throw a database transaction into a code base; it's another to think of all the different ways it can fail and what all the non-happy paths are. And I think that's where mindfulness from project leadership comes in at that level, to say this is an opportunity, you have tools that actually understand this stuff. Um, it's fascinating just to go out to Copilot or your LLM of choice and have a conversation about some of these bigger discussions. It's not obvious; we don't think of Copilot as the tool that an engineer is going to go to to figure out how to make these big decisions. But sometimes it's just useful to get out of the IDE and think about these architectural questions in another context. And I think that's where it's nice to think about, I use my family of tools that I have here to help me through this journey.
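Editor's note: as a rough sketch of the pull-request gate Andrew alludes to, not a reference implementation, a CI step might hand the diff to a model with a narrow security prompt and fail the build when the model reports findings. The ask_llm function is a placeholder for whichever model API the pipeline actually uses.

```python
# Illustrative CI gate in the spirit of what Andrew describes: an agent that
# reviews every pull request diff for security issues before it can merge.
# ask_llm is a placeholder, not a real library call.
import subprocess
import sys

SECURITY_PROMPT = (
    "Review this diff for security problems only: injection, missing "
    "authorization checks, secrets in code, unsafe deserialization. "
    "Reply with 'PASS' or a numbered list of findings.\n\n{diff}"
)

def ask_llm(prompt: str) -> str:
    """Placeholder: call your model provider of choice and return its text reply."""
    raise NotImplementedError("wire this up to your model provider")

def review_pull_request(base_ref: str = "origin/main") -> int:
    """Return a non-zero exit code if the model reports findings."""
    diff = subprocess.run(
        ["git", "diff", base_ref, "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    verdict = ask_llm(SECURITY_PROMPT.format(diff=diff))
    if verdict.strip().upper().startswith("PASS"):
        return 0
    print("Security review findings:\n" + verdict)
    return 1  # fail the pipeline so a human looks before merge

if __name__ == "__main__":
    sys.exit(review_pull_request())
```

A gate like this complements, rather than replaces, the static analysis tools already in the pipeline, which is the layered approach the conversation keeps coming back to.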
SPEAKER_02:Yeah, no, absolutely. Well, recognizing that things do change all the time at a rapid pace, and it's getting faster. Nate, how's that scorecard? At the beginning of this episode, we talked about assessing some of these tools. How is the scorecard going to shift as we go into 2026? Is it, you know, agentic AI is gonna change the way we look at these tools, or are there other technologies or other things that we need to be looking out for?
SPEAKER_01:Yeah. Well, I mean, first of all, agents are definitely here to stay. And the power of having not just you sitting and looking at a module at a time and coding through it and getting some help on that, versus an agent that's going and actually creating its own modules or modifying multiple areas of your code practically at once, all of that's gonna be the future. And it's a matter of how do we understand that and make sure that we're constraining it properly, as we were just talking about. But what's interesting, I think, in the era we're moving into, and we're already seeing customers looking at this, is kind of what we said before, that it's not just one tool. We're gonna start to see, and we're already seeing, an ecosystem of tools that are building up around this act of using AI to develop code. Um, whether it's a tool like Coder, a partner we've worked with, which creates virtual environments for engineers to work in, gives them all the tools that they need, and gives them really total freedom and safety from some of these other issues we were just talking about, about the possibility of code leakage and all that. If everything's happening within a virtual environment and you've got your walls, you don't have to worry about that. So, you know, tools like that will be great. There are tools like SonarQube from SonarSource, where it will go after the fact, after you've done your work, and start looking for what are the common areas where, you know, the agents tend to fall down or systems tend to not work very well after you've used these assistants. Let's find those for you and pull them out and make that part of the build process so that we're seeing them right away. Just a couple of examples, but we're gonna see more and more of those kinds of tools that help you overcome some of the limitations as well as enable different parts of your team, so that you're not having that, you know, local optimization we were talking about earlier.
SPEAKER_02:Yeah. And Andrew, certainly a lot of potential with what Nate's talking about here, and, you know, the opportunity is ahead of us. But what types of dynamics will that place on software teams? How are they gonna have to react to those advancements?
SPEAKER_04:Yeah, I mean, maybe the one nuance I'd say is, absolutely, agents as they hit are gonna be additive, not replacing. We're still gonna have software engineers sitting down with these tools, doing that one on one. This is a tool on my laptop. You know, the cloud didn't make laptops go away. People still have computers sitting in front of them. And they're more and more capable every year. It's not that we all moved to Chromebooks that don't have their own capabilities. I think for me it's a very natural progression. And really, like, SonarQube is a great tool, and I've been using it for a long time, even pre-WWT, so more than a decade at this point. And I think it's natural. All these tools we rely on, the security tools that live out on the network: as every part of this becomes AI enabled, everyone else gets pressured to build that in. SonarQube can't keep up with a list of static rules if that's not the way the code's being generated. So I think really for teams, this is going to feel like, oh, finally, the tool set that I'm used to using is catching up with me. You know, some of the deployment platforms, a lot of these tools are becoming full stack because, you know, it's a comment that goes back a long way that you start off with an IDE and eventually it turns into an operating system. And so slowly these IDEs want to be able to deploy the code, and they want to be able to deploy it to their service, living on top of the cloud platform. I think more and more of the stack will absorb these AI capabilities. It's just going to be the expectation in everything. So for me as an engineer, I'm sitting here going, oh, finally, if I'm using an AI tool, I want my build pipeline fully AI enabled so that I don't have to go manually drag that stuff back, or bring it back through an MCP and fix it on my laptop. I just want to see the fix out there in the wild, and then I can go to GitHub or whatever my repo is, look at the pull request and go, thank you, LLM, you solved the problem, you fixed the bug, it all looks good. Um, so yeah, and we sort of said this at the start, software engineering is the start of, I think, something bigger in the workplace. These tools were built by software engineers, so not surprisingly, it was the problem space they knew. So it's not surprising this is the one that's really got a lot of traction and is really feeling transformative at the moment. It's like, hey, a pile of smart software engineers who helped build the idea of LLMs, the first tool they're gonna go after is the one they sit in all day. But I think we're gonna see more and more of this. So we're talking a lot, and I'm hearing this from my team of architects, that, hey, I can design epics at a technical level in the LLM now. So it's not just writing the code, it's actually doing the technical planning. And I think the business planning, we're seeing a lot of that as well. It's not much good just having a pile of engineers who can write code if they don't know what they're doing. So now the people doing the product planning and the business planning are relying on these tools. So it's really all the way from how does this get to people's desktops, how is it maintained, how are those updates done, all the way back to how can we think about great new ideas for products? It affects that whole pipeline. And if you leave part of it out, it sort of doesn't work.
SPEAKER_02:Right. Right. Exactly, right. No, love that. I mean, just to end, maybe, Nat, or Nat, Nate, sorry about that. Uh, Nate, um, building on what Andrew says here, are we gonna see all areas of the organization start to play around with these tools? And if so, what are organizations gonna have to do to respond to that? Or maybe the answer is nothing; they're gonna embrace all this citizen developer type activity.
SPEAKER_01:Yeah, I really do see what's happened with coding assistants as the model for how you as an individual work with AI in the future. Yeah. Um, you know, everybody thinks about the chatbot interface, and that's certainly one that makes sense. But integrated interfaces that take whatever task you're trying to do and integrate AI into that, so that you are doing your task, being advised on how to do your task, having the grunt work and the difficult parts taken out by an AI assistant that's right there with you: that's exactly how we're all gonna eventually do our jobs, especially those people who have something in their head that needs to make it into some other form, right? Um, which is a lot of jobs. So on that kind of aspect, I think this is just setting the pattern and the template for what it looks like. And the software engineers are seeing that before everyone else. But, you know, right now, I think people need to start thinking about, whether you're a software engineer or not, how do I start taking those problems? Just like Andrew said, you know, something's bugging me. I've got a problem to solve, I've got an issue I've got to deal with. Instead of just cranking away on it the way you normally would, how do you start thinking about taking that to an AI format to at least get started, get some ideas, be able to understand what direction you might be able to go? In some cases, you know, as long as you're within your company's policies, maybe building something that will help you do your job better, even if you're not a software engineer, because there's opportunities for the AI to help you do that. So that's just gonna help us all move even faster into the truly AI-enabled organization. Yeah, love it.
SPEAKER_02:Well, we'll end there. Um, we'll have the two of you on again real soon because I'm sure just within a couple of weeks or you know, certainly a month or two, things will change yet again, and we'll have to have this conversation all over again. But this was worthwhile for sure. And thank you to the both of you for joining. Absolutely, thanks a lot. It's a pleasure.
SPEAKER_04:Likewise, thanks.
SPEAKER_02:Okay, what today's conversation makes clear is that AI in software development isn't a future state. It's already reshaping the way work gets done. The leaders aren't the ones chasing every new tool. They're the ones building the readiness, the guardrails, and the team dynamics to use these tools well. The key lesson is AI doesn't replace the fundamentals of good engineering, it amplifies them. The organizations that win will be the ones that pair accelerated tools with thoughtful structure, strong mentorship, and a willingness to rethink how ideas move from concept to code. If you found this episode useful, share it with a colleague and leave us a rating or a review. It helps more people discover the show. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker, Kara Kuhn, Maggie Ryan, and Stephanie Hammond. Our audio and video engineer is John Knoblock. My name is Brian Felt. Thanks for listening, and we'll see you next time.