AI Proving Ground Podcast

Inside the AI Coding Revolution: Tools, Tradeoffs and Transformation

World Wide Technology

As AI innovation intensifies, one domain is already feeling the impact: software development. In this episode, WWT experts Nate McKie and Andrew Athan explore how AI-powered coding assistants are improving developer productivity and reshaping enterprise engineering. From Copilot to agentic tools capable of autonomous code generation, they examine how organizations are navigating this transition, balancing speed with quality and redefining the role of human developers. Whether you're leading a dev team or charting your company’s AI roadmap, this is a must-listen for understanding the real-world implications of AI in engineering.

Support for this episode provided by: Windsurf

Learn more about this week's guests:

Nate McKie's passion for computers started as a child, inspired by his father's work at Radio Shack. With a B.S. in Computer Studies, he has over 25 years of experience in software and automation engineering. Now a senior-level AI Advisor, Nate helps customers leverage AI technologies to achieve their business goals, combining his expertise in hardware, software and data to guide smart, impactful decisions on their AI journey.

Nate's top pick: Unlocking the Power of AI Coding Assistants

Andrew Athan is a Technical Solutions Architect III with deep expertise in high-performance, low-latency, distributed computing, consensus and blockchain and high-frequency trading. With a strong background in networks and software, he specializes in designing and implementing cutting-edge solutions to meet complex technical challenges. Andrew's work focuses on optimizing systems for speed, scalability and reliability in fast-paced environments.

Andrew's top pick: Codeium Windsurf Coding Assistance Demo

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

Speaker 1:

What if every developer on your payroll suddenly worked like two? AI-powered coding assistants are rewriting the math on developer productivity, boosting output by as much as 50 to 60 percent, but the real question for leaders is how to capture the upside without inviting security gaps and culture shock. In today's episode, you'll hear World Wide Technology's Nate McKie and Andrew Athan pull back the curtain on what many are calling the killer app for generative AI. By the time we're done, you'll know whether these assistants are a shortcut, a crutch or the new competitive baseline, and what it will take to stay in control. This is the AI Proving Ground podcast from World Wide Technology, and even if you've never touched a line of code, you'll want to listen closely, because AI coding assistants affect timelines, budgets, security and talent strategy, and this episode breaks down each of those angles in everyday terms so that you can join the conversation and steer it inside your organization. Let's get to it. Nate, Andrew, thank you so much for joining the show today. Yeah, absolutely. Excited.

Speaker 1:

We're talking about AI-powered coding assistants, and certainly AI has been around for a while, but gen AI is still relatively new here. Coding assistants have been around for some time. Nate, I'm curious: you've probably used coding assistants for years. What is the landscape there, and where were they, you know, perhaps in the mid to late 90s, versus where they are today with that actual, true AI-powered function?

Speaker 3:

Yeah, I remember getting excited about them in, you know, the mid-2000s. The idea of autocomplete, basically, is that you're working along and, oh, how do you spell this? Or what's the name of that function? I can't remember. Being able to hit tab and, you know, instantly get that done for you. And that kind of thing has continued to grow and get more sophisticated, but generative AI just took it to a whole new level of what it could do, because not only are you getting the benefit of some names here and there, but generative AI, really understanding what your code is and looks like and where you might be headed, just gives you a whole new ability to save yourself time and energy.

Speaker 1:

Yeah, and Andrew, what are you seeing from AI-powered coding assistants? What are they actually enabling coders to do in this AI age?

Speaker 2:

It's really interesting. You know, like you, Nate, I come from a background of starting with text-based editors that could index your code base and maybe answer some very simple questions, like where is a function, or, you know, what variable name should be completed. Today we're in a spot where the tool you're using as a developer really understands the full context of what you're doing and can answer questions as they pop into your mind. You don't even have to go out to a browser anymore. You pop over into your chat window, which is your coding assistant; it knows probably more than you do about a wide range of development topics and tool sets that you might be using. You ask a quick question and then you can continue with your task.

Speaker 3:

Yeah, it's really the killer app for generative AI right now. I mean, while it's great at coding, with content generation in general there's a lot you have to deal with around hallucinations and being factually correct. When it comes to code, the benefit is that things are sort of black and white: your code works or it doesn't, on a lot of levels. And so the ability of generative AI, which is really good at predictive text, to use the corpus of everything that's been out there on the internet around code and help you do your work, that's what brings it to a whole new level and makes generative AI particularly effective at helping you out.

Speaker 1:

Yeah, and is it just a speed thing right now? I mean, I've seen numbers as high as maybe upper 50s or 60 percent productivity increase. I've seen them down, maybe, in the 30s or 40s. Are we talking about just speed here, and where should we think about that?

Speaker 3:

I mean, there's speed, absolutely, because just having things kind of built out for you and being able to fill them in, rather than spending time typing, that sort of thing certainly happens. But it's also that knowledge Andrew was just talking about. Whereas before, when you're thinking about, all right, how am I going to fix this bug or how am I going to add this feature, the first question you ask is: where do I need to go in the code base to do this? And you might spend, you know, half an hour, an hour even, depending on how large it is, figuring out where to even start and where you should put this. Now you can ask generative AI to recommend where you should go, where the part of the code base is that handles this particular element of what you're trying to do, and have a conversation, versus, you know, remembering the right commands to give it to get where you need to be. It can direct you immediately and then probably give you suggestions for, you know, what you should do to make it work.

Speaker 3:

Another example: I remember, specifically in my career, one night being all by myself in the office, everything dark, everyone gone, trying to figure out why this code was not working, when ultimately it turned out to be a misplaced comma over in a file, you know, that I would never have noticed on my own until I just ran through all these tests. That's the kind of thing generative AI can say: oh, here's your problem, and be able to solve. That's hours of work it would have saved me that particular evening if I had had something like that. So those are the kinds of time savings you can see.

Speaker 2:

Yeah, I agree with that, and I think the way coding assistants become part of your workflow really depends on your persona as a developer as well. You know, a new developer, maybe a novice developer, is going to find a different utility for the coding assistant and might also use a different type of coding assistant. There are those that are integrated with your IDE, and then we find others that are sort of vertically integrated for a particular task. For example, I want to, you know, create an e-commerce website: I go to one of these coding assistants that's really built to spit out a whole-hog, complete e-commerce site, including all of the back end. Whereas if I'm more of a general-purpose coder, I'm going to go and use a coding assistant that's inside the IDE.

Speaker 1:

Yeah, and Nate, I like how you mentioned that this is kind of the killer app right now for gen AI. But it's not one app, it's a bunch of apps. There are a lot of coding assistants out there. What is the market like? Is it confusing because there are new coding assistants popping up every week, or are there different coding assistants for different situations?

Speaker 3:

Yeah, I mean, there's certainly the 800-pound gorilla of Microsoft's GitHub Copilot, which was already the top autocomplete capability but very early on integrated generative AI features into just being able to use it for your code base. Now, the downside is you need to have your code on GitHub. You may or may not want to do that, depending on what you're doing. But, you know, GitHub is a great platform; it's a great place to store, understand and build your code, so for a lot of people that was great.

Speaker 3:

Now there are a lot of competitors out there and, honestly, you can even just use an LLM straight up to do code generation. You don't necessarily need a tool; to some degree, you can help yourself that way. But there have been a lot of companies out there doing great work integrating it with your development environment so that you can easily use it. You can have a side-by-side companion as you're working through your code base, rather than having to jump to another application. The market is broad, there's a lot out there, but there do seem to be a few players rising to the top.

Speaker 2:

Absolutely, and I think it's important to note that coding assistants, when you decompose them, really have several parts.

Speaker 2:

One of them is the back-end AI that's being used to drive the behaviors of the coding assistant.

Speaker 2:

The other elements include what tools are available to that AI in order to help the developer perform tasks, and also what those tools can do for the AI element itself. For example, if the AI element is multimodal and can do things like interpret images or even interpret video, then you might have an element that can monitor your screen so that, if you're building a GUI, it can see how the GUI is behaving, or perhaps render the GUI and see what's wrong. Or you can give it a picture and say: build me this GUI. All of those elements are where you're going to see variability, relative to how the coding assistant is presented to the coder, sometimes within an IDE, sometimes as a website, and whether or not it has access to, you know, your desktop and your code base and is able to create files and directly edit files. Those are all questions that you want to answer when you're choosing which coding assistant you want to use.
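For readers who want to picture what "tools available to that AI" means in practice, here is a minimal sketch using an OpenAI-style function-calling schema; the tool names and fields are hypothetical and are not taken from any specific assistant.

```python
# Sketch: how a coding assistant might describe its "tools" to the back-end model.
# Uses an OpenAI-style function-calling schema; names and fields are hypothetical.
read_file_tool = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Return the contents of a file in the workspace.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Workspace-relative file path."}
            },
            "required": ["path"],
        },
    },
}

capture_screenshot_tool = {
    "type": "function",
    "function": {
        "name": "capture_screenshot",
        "description": "Capture the running GUI so a multimodal model can inspect it.",
        "parameters": {"type": "object", "properties": {}},
    },
}

# The assistant would pass these alongside the chat messages, e.g.
# client.chat.completions.create(model=..., messages=..., tools=[read_file_tool, capture_screenshot_tool])
```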

Speaker 1:

Yeah, I want to dive deeper into enterprise adoption and integration, but real quick, a little bit on the pros and cons of coding assistants. Nate, you wrote an article on WWT.com not too long ago about unlocking the power of these assistants, and in it you had a good section called the hidden gems of coding assistants. Among them, just a couple, were, you know, onboarding software developers and language learning. Tell us a little bit more about what these hidden gems are, or just what it's good at.

Speaker 3:

Yeah, I think these are just the kinds of things that are not obvious when you're thinking about what a coding assistant is and what it might do. Say you've, you know, programmed in Java all your life and now you need to switch over to a new platform: we're trying to figure out how to get this into our React code base, because that's our new standard. Having the ability to have the system help you understand, I know how to do this in Java, I would use this; how do you do the same thing in React? And have it, you know, actually teach you, talk to you while you're doing these things, is mind-blowing. It's amazing to be able to get that kind of assistance.

Speaker 3:

We had an example where we built internally at WWT an application in Python, because, you know, that is the language of AI, right? And so we initially built something in Python to be an AI application for ourselves.

Speaker 3:

But our IT team was like, well, we don't really support Python as one of our standard platforms for deploying applications; we really need this to be in a JavaScript language. And that seems really daunting. It's like, wow, we've got to write this from scratch. Well, it's much easier with these coding assistants to help you along and figure out how you would change this. They can take an initial crack at it, and you can take a look and see if that works. They can help you in writing tests. So it's not only being able to help you save some time; it is truly like having someone alongside you who has the ability to understand the code base, understand multiple languages instantly, and give you advice and thoughts on how something could work, someone you can talk to and converse with. And I don't think people always understand that's what a coding assistant can do until you get it in there and start using it.

Speaker 2:

Yeah, and I would say also, going back to this notion that coding assistants are composed of a UX via which you access the AI: many of them will allow you to point to a different AI, so you can say, okay, I want to use Claude 3.7, or I want to use OpenAI's GPT-4o or o3 as the back end, and you'll find that each of those elicits a different personality when you're using it. So it's kind of interesting on a human level, too. As you interact with the AI, you learn what its strengths and weaknesses are, and I think that's actually a very important element; it's going to really exceed your expectations in some cases.

Speaker 3:

Yeah, we've been talking about, you know, AI coding assistants for a few minutes here and haven't even really addressed the elephant in the room: these agentic coding assistants that are not only, you know, your kind of side-by-side companion while you're working, but are almost like your junior engineer. You tell it what you want done, and it goes out and does it for you. And that has been, you know, pretty revelatory in the industry, because, as Andrew was pointing out, it's going out and making changes for you. It's not just suggesting what kinds of things you could do, but actually going out in your code base and saying, well, let's go make this change over here, and we're going to add this new library here, or we're going to implement this framework.

Speaker 3:

It's amazing what it can do, and it's resulted in a lot of conversation in engineering circles about, you know, is this going to replace engineers? Is this actually a tool we want to continue to use? How good is it? What can it actually do? And I think the jury is still out to some degree, but it's really interesting to think about where some of these tools are already going. I mean, it's not the future; a lot of these things are here. And how do you properly use these tools as you get your job done?

Speaker 2:

I have an opinion about this, having used some of these tools, even in relatively complex tasks. As far as the state of the art today, I would say it's very important to continue to have a human in the loop. First of all, you know, there's a security concern. You want to make sure that what you've asked the AI to do has been implemented correctly and, you know, addresses all of the requirements of your organization. These tools will come to an enterprise relatively unconfigured, and many of them will have the ability to take system prompts or content that will drive their behavior. And you really want to take the time to make sure that you've given those initial prompts to the AI so that it knows exactly how you want it to behave in critical contexts.

Speaker 3:

Right. So, just to illustrate what you're talking about, you can give these tools a system prompt that says: you are this kind of developer, and you're going to follow these kinds of standards. We want to always have only one action happening in a function, or however you normally want your developers to behave. You can describe that to the tool so that it will attempt to follow your rules, so to speak.
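As a rough illustration of the system-prompt idea Nate describes, here is a minimal Python sketch that assumes the OpenAI Python SDK as the back end; most coding assistants expose the same concept through rules or settings files rather than an API call, and the standards listed are only examples.

```python
# Minimal sketch: encoding team coding standards as a system prompt.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and the standards themselves are illustrative.
from openai import OpenAI

SYSTEM_PROMPT = """You are a senior Python developer on our team.
Follow these standards in every suggestion:
- Each function does exactly one thing and stays under ~30 lines.
- Prefer our existing utility modules over writing new helpers.
- Never introduce new third-party dependencies without flagging them.
- Include type hints and a short docstring on every public function."""

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Add retry logic to our HTTP fetch helper."},
    ],
)
print(response.choices[0].message.content)
```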

Speaker 2:

That is a way of causing procedural systems to behave in very procedural ways. But when we look at how an LLM works, it treats everything as text, just as free-form prose. And this is one of the places where we're going to see differences in behavior as we look at which tool I want to use. Take a specific tool, and I'm just going to give an example: something like Windsurf.

Speaker 2:

You know, that firm has spent time making sure that the code presented to the AI has been syntactically analyzed in a way that, when the LLM is presented with that code, it understands where the blocks are. It doesn't purely rely on the general training corpus of the LLM to do that task; they have special RAG indexers that allow the tool to be more effective. And yet, when you ask it to perform certain tasks, it may inadvertently go into a block of code and reformulate that block, even though all you wanted it to do was move it from here to there, or refactor a big file into two smaller files. And that's where, because we're at the bleeding edge with some of these capabilities, you have another example of why it's so important to have a human in the loop.

Speaker 1:

Yeah, I couldn't agree more. A lot of that speaks to adoption of these tools within the enterprise and the things you have to account for. Nate, I'm curious: how did we start? We've always been familiar with pair programming or having that type of help on our end, so maybe there was a bit of a head start, because, like I said, this was one of the first applications of AI that seemed like an obvious benefit.

Speaker 3:

And so we've had teams using these tools. We've evaluated most of the tools, if not all of them, that are out on the market to figure out, you know, what the pros and cons are in different places, and we've settled on a few. Like I said, GitHub Copilot has gotten pretty broad adoption, I'd say, around Worldwide, because of its ease of use and just generally the capabilities it provides. So we've been, you know, on this for, I would say, a couple of years now, using these kinds of tools to figure them out. At the same time, it's not everywhere yet, because there are certainly teams where it isn't; we do a lot of custom software development for customers.

Speaker 3:

We still have customers who aren't totally comfortable with AI seeing their code, because there's, you know, an element of risk here.

Speaker 3:

Even if you've got your code out on GitHub, you've probably already decided, okay, I'm good with my code going out to the cloud. But if you're not there yet and you really want to keep things on-prem, there's been a lot less willingness to adopt.

Speaker 3:

Not knowing, like, is it okay for our code to go out and be evaluated by AI? What are the security risks? So I would say, even with Worldwide, who's been very eager to adopt some of these things and try them out, when it comes to dealing with some of our customers, we want to be respectful of what they're ready to do as well. But it's given us the opportunity to try out a lot of these different technologies and see them work in various situations in the enterprise with teams. I think we would wholeheartedly support the kind of side-by-side coding assistant we were just talking about, where you're getting suggestions as you go. We are moving into the world of the more agentic, code-generation style of coding assistant and trying to figure out the right ways to use that in various situations for our customers so that we can best advise them.

Speaker 2:

Absolutely. You know, I've thought of three jumping-off points for other elements of the conversation based on what you just said. One of them is that it's important to point out that most of these AIs have been trained on corpuses of code that are relatively old, right? I mean, they extend back 30, 40 years, and they tend to contain mostly open source tooling, code that's publicly available. One of the challenges we see with the use of coding assistants is that they're not necessarily completely up to speed on the latest tooling from some of our partners. For example, do they know how to best use NVIDIA NIMs? You know, maybe not, right? And so that's where, again, some of the system prompting and some of this preparation of the context in which you ask the AI a question is so important, so that instead of answering your question by using an open source tool, it might instead use the specific commercial tool that you're looking to use.

Speaker 3:

Yeah, I would also say that applies to languages. There are some languages that have a lot of code out there on the internet, and there are others that don't have as much.

Speaker 2:

Zig, for example, is a really exciting new little language, but you're going to find very little code out there for it. The other element I wanted to mention is that the most capable AIs are going to be the frontier models, right? There are only a few companies in the world that can train those, and super fine-tuning them is also a task that requires a huge amount of compute. That's probably why we're seeing some of the announcements relative to acquisitions and consolidation in the space. Once you have one of these frontier models, it's unlikely that you're going to run it on-prem. So then the question is, as you were saying, do I want my intellectual property finding its way out to, essentially, an inference provider? Because that's the coding assistant companies: a way to think about one of their functions is that they are inference providers, inference in the cloud. So confidential computing is going to be an element in this space that has to develop and continue to develop.

Speaker 2:

Security for AI is going to be incredibly important to the continued development of the space, and what's exciting there is that the mathematicians have figured out ways to address it. For example, when we think about security in the traditional context, we think about things like SSL. Everybody knows what that is, right? I take my text, I encrypt it, I send it over the channel. The problem is that, at the other end, your inference provider has to decrypt that channel. It's going to see your text, meaning it's going to see your code. Now the mathematicians have figured out a way to transform that text so that, to the model, it's equivalent to free-form, open text that it can read.

Speaker 1:

But if a human were to look at it, they would have no idea what it says, right? So, confidential computing solutions like that, and, like you said, coding infrastructure considerations: how did we work with our internal teams to make sure we're accounting for all of those? Or is that an ongoing conversation?

Speaker 3:

Yeah, I mean, it is ongoing. One thing that Worldwide did early on: we developed this AI driver's license training capability that walked each of our employees through how AI works and some of the things you need to keep in mind. It's not a magic black box. There's actual data moving around, going to different places. How do you keep us safe? What kinds of tools would we recommend that you be using? So that was definitely part of it.

Speaker 3:

Another is that we looked at the market and the possibilities there for how we could use these technologies and try to limit some of the risk. Windsurf, for example, when it comes to what they call Windsurf extensions, which is the sort of side-by-side autocomplete style, has an on-prem capability. So, as long as you've got the GPU computing power to install it, you can have your own coding assistant within your firewall, and your code doesn't have to go anywhere. Looking at what the different options are is how we can best advise our customers, both the ones we're working with and building code for, as well as the ones who are looking at these products and trying to figure out which one is best for them, because that's absolutely going to be a consideration and something we want to make sure we're fully versed on before we get out there.

Speaker 3:

But as far as security, this is a little bit of a tangent, but part of what I'm interested and excited about is, as we continue to train these LLMs that are helping us from a code perspective, some of these techniques that we've always tried to teach developers around things like accessibility and security, the ways that you have to actually, you know, build that into your code and think of it from the beginning. These tools can help us do that, can help us maintain that kind of discipline, by saying, hey, if you use this kind of model, this would be more accessible on the front end. Or, if this algorithm or this idiom that you're using within your code has the potential to create vulnerabilities, let's try using this alternative instead. So there are also possibilities for actually increasing security in what you produce and how you do it, if we can start to bring those kinds of capabilities to the forefront as well.
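As one concrete example of the kind of swap a security-aware assistant could suggest, the sketch below contrasts an injection-prone SQL idiom with a parameterized query, using Python's built-in sqlite3 module; the table and data are invented for illustration.

```python
# The kind of swap a security-aware assistant could flag: building SQL with
# string formatting (injection-prone) versus a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"

# Risky idiom: the input is spliced directly into the SQL text.
risky_query = f"SELECT email FROM users WHERE name = '{user_input}'"
print(conn.execute(risky_query).fetchall())  # injection succeeds: returns every row

# Safer alternative: let the driver bind the value as a parameter.
safe_query = "SELECT email FROM users WHERE name = ?"
print(conn.execute(safe_query, (user_input,)).fetchall())  # returns nothing
```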

Speaker 2:

Absolutely. I'll point out an example relative to Windsurf and the capabilities of on-prem versus not on-prem. All of the agentic capabilities of that particular tool actually come from the SaaS component of the tool. The on-prem side is really focused, as you were saying, on providing autocomplete. But it's important to note that autocomplete in the context of AI is not the autocomplete you're used to, right? It's very useful, particularly for an advanced coder; it removes all of that boilerplate stuff you have to write. You know what variables are going to go into that for loop. You know that the body of that for loop or that while loop is going to contain certain things. Typical autocomplete is: hit tab and get the name of a variable. Autocomplete in an AI context is: hit tab and you get the body of the function, and maybe you have to fix a couple of things in a minute, right? And so that can accelerate your tasks quite significantly, even if you don't have access to agentic capabilities.
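A hypothetical illustration of what Andrew means by AI-era autocomplete: the developer writes the signature and a one-line comment, and the assistant proposes the entire body for review. This is a sketch, not output from any particular tool.

```python
# Hypothetical illustration: the developer types the signature and the comment;
# the assistant proposes everything below the comment, and the human reviews it.
from collections import Counter

def top_words(path: str, n: int = 10) -> list[tuple[str, int]]:
    # Count word frequencies in a text file and return the n most common.
    counts: Counter[str] = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            counts.update(word.lower() for word in line.split())
    return counts.most_common(n)
```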

Speaker 3:

This episode is brought to you by Windsurf. Windsurf's AI development tools give your team the power to build faster, work smarter and raise the bar for what great software can be. Turn time saved into product shipped and breakthroughs made.

Speaker 2:

I do want to pivot, though. We've been talking a lot about what these assistants do very well.

Speaker 1:

We've talked about some considerations, but what about the flip side? What is it not ready for? This episode will air sometime in May, and we know things advance quickly, but what are they not ready for right now? I'm thinking of a couple of quotes I've seen: that it has the potential to mask, you know, poor coding, or hide or lead to technical debt, or just continue bad habits. What are some of the things coding assistants are not ready for now or in the next couple of months?

Speaker 3:

The more code that's being generated by the system, the less the developer, the one at the keyboard, is actually understanding exactly what's there and being able to keep it in their head and ensure that they're not introducing some, you know, unintended consequence down the road.

Speaker 3:

So, you know, when you're talking about a for loop, your risk is pretty low.

Speaker 3:

But when you're looking at these agentic tools, and it's going out and doing something exactly as you've asked, but you're not thinking about all the implications of what that means, you could definitely get yourself in trouble and into a situation where you don't even really know how to fix a problem that's come along. And sure, if it's an error that's easy to spot, the AI is going to help you fix it. But if it's a business logic error that comes from multiple dependencies in the system reaching some conclusion, and now the conclusion has changed and you have no idea which piece of that actually caused it, that's where you can really get yourself in trouble. So that's one area, and I can think of several, but that's the one I'm worried about. It's just human nature: once you start trusting what it's doing, the less you're going to inspect it and be aware of it and understand it and take ownership of it, and the more likely it is that things are going to happen that you didn't intend.

Speaker 2:

Yeah. How I would add to that is, let's think of, you know, Hollywood for a second. You think of a movie, I don't know, what was it, Inception or whatever, and the guy's out there manipulating things in the air and the AI is doing all kinds of amazing tasks. The first time you start to use one of these AI coding assistants, you start to think that that's its level of capability. It tricks you very quickly into believing that it's an omnipotent assistant. But it has a limited context window. You know, it cannot see your entire code base all at once.

Speaker 2:

And as you begin to use it and you hit accept, accept, accept on all of its changes, you develop a technical debt to yourself, unless you've taken the time to internalize all of the changes and read, line by line, all of the things it's written (and it takes a human a little bit longer than a machine to, you know, read the text). What picture do you have, what working model do you have in your head of the overall system that's being created, if all you've done is punch through 50 accepts, each of which, seen independently, looks correct, doesn't contain a bug, doesn't contain a security issue? Do you have a clear picture of what your overall system is doing at the end of that? I wouldn't say that's a reason not to use a coding assistant, but it is a reason to use it carefully, right?

Speaker 3:

Yeah, I expect that we're going to change our way of developing systems (oh, 100 percent, right), so that it's not about just generating modules and getting something to work. Our entire way of having a job as a software engineer is going to change, so that we're thinking about what kinds of problems need to be solved and what's the best way to do that. I don't think we even have the structures and the patterns for that yet. But think about the domain-specific language that we call, you know, Python or C++ or whatever it is.

Speaker 2:

It's a way for us to look at a unit of information and understand what's happening there. Is that really going to be the best way to present what a system is doing 20 years from now, or are we going to be presenting concepts, perhaps in a different visual language, which isn't text on a page, right? And this leads to one of the challenges with coding assistants as well. There are firms out there that have done some analytics on what has happened to code bases, what's happening in GitHub, relative to the impact of coding assistants and the code that's going into them. We're seeing a lot more copypasta. Now why is that? And what I mean by copypasta is the same for loop with the same variables repeated multiple times in the code.

Speaker 2:

You know, as a coder, what I would typically do there is write a utility function, and I would call that utility function, and what you would see is just the call over and over again. Now you see that code repeated over and over again, because our presentation layer is still these text files, so we see those things. In the future, I would imagine an AI-mediated presentation of that code would not show me all those repeated for loops at all, so it wouldn't even matter that they're in the code base. Instead, I would see, you know, a reference to that function I would have written out by hand, right? So it's both a challenge and an opportunity, yeah.
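A toy Python example of the "copypasta" pattern Andrew describes, followed by the utility function a coder would normally extract instead; the field names are invented for illustration.

```python
# The "copypasta" pattern: the same normalization loop pasted once per field.
cleaned_names = []
for value in ["  Alice ", "BOB", " carol"]:
    cleaned_names.append(value.strip().lower())

cleaned_cities = []
for value in ["  St. Louis ", "DENVER"]:
    cleaned_cities.append(value.strip().lower())

# The refactored version a coder would typically write: one helper, called everywhere.
def normalize(values: list[str]) -> list[str]:
    """Strip whitespace and lowercase each entry."""
    return [v.strip().lower() for v in values]

cleaned_names = normalize(["  Alice ", "BOB", " carol"])
cleaned_cities = normalize(["  St. Louis ", "DENVER"])
```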

Speaker 3:

Again, you know, when we think about the problems, there are problems with the way it is today, which is why I don't think I would recommend whole teams taking this agentic coding assistant and implementing it across their code bases. You've got, basically, the agentic coding assistants competing with each other, and you see lots of conflicts when you're trying to pull this all together, with no idea how to resolve them. I mean, I can see a lot of issues, but the fact is this is too useful to just say we're not going to be able to make it work. You just have to be careful not to use it as a crutch, right?

Speaker 2:

Particularly, as you were saying before, there's this new language that, you know, I'm now coding in. Let's say I haven't used Rust; I've traditionally been a C++ developer. Now the coding assistant is helping me learn Rust because I've asked it to develop a certain part of the project, and I've seen the example. The next time I run into that, it's important to step back for a second and attempt to write the code myself, so that I can continue to ask it cogent questions, basically, right? Exactly. So, yeah.

Speaker 1:

Yeah, it's interesting, a lot of the parallels. I myself have never written a single line of code, but, you know, I'm a writer by trade: articles, blogs, whatever it might be. There are a lot of parallels with what we're talking about. I rely on AI very much, which can be an impediment to staying sharp as a writer. So I'm going to ask you, Nate: how do you keep your developers sharp so that, until the time comes when AI is writing everything for us, we're able to interject where we need to or understand where we need to pivot this way or that way?

Speaker 3:

Yeah, I think the thing to do now, with the tools as they are, is to lean even harder on the coding rigor and discipline that we've known we've needed since the late 90s: having good test coverage for your code base, creating simple, readable code. If you are going to use these tools, take the time you save and pour it into that discipline, because that's going to force you to understand what's happening in your code base and to make those broad problem-solving kinds of decisions that are really what we need our coders to be doing. We don't need our coders to know 10 programming languages; that's not that helpful. Let the systems do that kind of thing for you, but take ownership of that code.

Speaker 3:

Think about how you would want it to look if you didn't have these assistants. If you were coming in and you needed to understand this code base without an AI helping you, what would it need to look like? How could you rely on it? The exciting thing about these tools is that they give us the time to do that. I think, to some degree, we've abandoned some of those disciplines because of the speed of change and the need to get features out, et cetera. This gives us an opportunity to let the tool do the grunt work and for you to focus on what a good, maintainable, easy-to-change code base might look like.
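A minimal sketch of reinvesting assistant-saved time into the test discipline Nate describes, assuming pytest; the helper under test is the hypothetical normalize function from the refactoring example above, repeated here so the file stands alone.

```python
# Minimal sketch: using saved time to lock down behavior with tests (assumes pytest).
import pytest

def normalize(values: list[str]) -> list[str]:
    """Strip whitespace and lowercase each entry."""
    return [v.strip().lower() for v in values]

def test_normalize_strips_and_lowercases():
    assert normalize(["  Alice ", "BOB"]) == ["alice", "bob"]

def test_normalize_handles_empty_list():
    assert normalize([]) == []

def test_normalize_rejects_non_strings():
    # Documents current behavior so an assistant-made change that alters it is caught.
    with pytest.raises(AttributeError):
        normalize([42])
```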

Speaker 2:

And, yeah, I would build on that by going back to the statement you made earlier: what level of productivity gain are you going to get from a coding assistant? One way to think about that is: sure, you get a 60 percent productivity gain if you are a perfect organization that does all of the things you just mentioned. You have perfect unit test coverage, you've got all of that stuff; then you get that 60 percent improvement and you're done, right? I think we can all agree that most organizations are not perfect, and in the end, you know, the market is driving

Speaker 2:

your time to completion, whatever it is. As a developer, you're cutting corners, right? And so it's important to have executive management understand that issue, so that once the coding assistant is implemented in the organization, they're not expecting to see every project completed 60% faster. Rather, they're expecting that that 60% improvement in productivity is now being used to increase security, or increase unit test coverage, or whatever it is. In other words, the project might still take about as long as it did before to complete, but it's going to be a much higher quality product. Absolutely right.

Speaker 1:

Yeah, that's interesting, because I had a note down here that said speed doesn't always equal quality. Absolutely.

Speaker 2:

So yeah, I mean, you just said it: it may be moving that time into the unit tests to ensure the regressions aren't there, to do all of those other things that maybe would have fallen by the wayside, because if the coding assistant hadn't been there, you wouldn't have been able to do them.

Speaker 3:

I mean, like you mentioned, the assistant's go-to in most cases is going to be just to do the simplest brute-force thing to solve your problem, right? So you need to spend that extra time to make sure it's been done the right way. So, in your example of this little code block getting repeated all over the place, you might say, oh well, if a human had done that, that'd be wasted time, and that's why we wouldn't want them to do that.

Speaker 3:

But there's more at stake, because if that code block ever needed to be changed, if your business changed in some way, now that code block, that assumption you made when you wrote it, needs to be changed. Now you've got to go change it in 10, 20 places in the application to make that feature work, or whatever it is. So there's always benefit to, as much as possible, not repeating yourself in the code base. Take the time to actually examine: have I created a bunch of repetitive code here? Use your tool to help you figure out how to refactor that into a much better state, so that you can be in a place where you can make changes again in the future.

Speaker 2:

Yeah, and that's a good example of learning the AI's personality and learning how to use it best. Because if you think that's going to happen, you prompt it, then you look at what it generated, you factor that into the reusable function, and you let it know that it exists (there are various ways for the tools to allow you to do that), and then it's more likely that it's going to use the function you created rather than repeat that code block for you all over the place.

Speaker 3:

Exactly right.

Speaker 1:

As these tools become more mature, I want to run a statement by you that I'm sure you've probably heard, from the industry's North Star, Jensen over at NVIDIA: that everybody will be a coder in the future. How do you feel about that statement? What does that mean for the future of software?

Speaker 3:

It is another interesting facet of these agentic tools that can do things for you: you just describe what you want and it creates it for you. And I've already heard stories of non-developers using these tools to build themselves tools to do their job, which sounds fantastic, and IT going... I know, exactly, right? Yeah, I mean, I can't imagine how scary that is for the IT department to think about. They're building their own applications. Are they injecting security issues into those? Are they putting something out that customers have access to, where they could create vulnerabilities in our entire network? That is frightening in and of itself.

Speaker 3:

But even if you solve that problem, it kind of reminds me of, you know, the proliferation of Excel worksheets, right? A lot of people use Excel to make their job easier, and you'll find people who have this Excel workbook that they've curated over time that does all this amazing automation. And as a software engineering organization, we've had situations where someone brought us one of these spreadsheets and said, can you turn this into an application? It's something that looks so simple. It's one workbook; how hard could it be? And we come back with, yeah, that's probably going to be eight months and $2 million to do that. So it's the same kind of idea.

Speaker 3:

If you've got this functionality that's just popping up all over the place, what opportunity are you losing to bring that together and do something that helps a broader number of people, not just that one individual? And if one person is doing something in a way that's really effective for them, but another person doesn't know about it and writes their own tool that does it in a much less efficient way, you never find those opportunities. So there's danger there, too, right? And I think that's another area we're going to need to continue to explore and figure out: how do we create the right way to use these tools? Yes, it's awesome that non-developers can create their own tools to solve their own problems and not have to put it on the IT list and wait two years or whatever. That's all great, but how do we avoid the pitfalls as well?

Speaker 2:

I mean, if you take Jensen's thought to its end point, where do you end up? You end up with a question as to what software even is, right? It's an interesting thought process as to what we're really talking about here and where these AI assistants are really leading, relative to achieving acceleration of business within an enterprise. Soon we'll have coding assistants for Excel, right? Maybe that doesn't exist fully today, but, man, that'd be a killer app. Go into this Excel sheet and figure out how to take this thing and do the pivot table the right way. It would be fantastic, right? Exactly. So, yeah, I mean, it's a really interesting thought process.

Speaker 1:

And, at the risk of putting myself out there and being wrong, is that where we get into the buzzword I've heard a lot: vibe coding? Is that kind of where that arena is?

Speaker 3:

Yeah, the vibe coding idea is that I don't have to think; I can just tell the AI what I want and it will create it for me. And what's funny is that, while this is true, and there have been some funny stories of people creating their own video games and releasing them, one person, you know, making six figures on a video game they made with vibe coding, right now it's more of a fad or a meme in what's going on, because the things being created that way are awful.

Speaker 3:

If they're not thinking about it, if they're not putting the effort into considering what am I creating and what are all the facets and how could it work, and they're just trying to get something to work, they end up with a mess, you know, right now, with what you have today.

Speaker 3:

So it describes where we could be. And, you know, I guess it was only a year ago that I saw a demo from OpenAI where they showed how they were just talking to the AI, and the AI could see their screen, and they were working on a web page together. You'd say, well, I'd really like these columns to shrink when I change the borders; can you help me do that? And it would go in and give you the code to do that. That sounds more realistic to me about where we could end up: a conversation, almost like you're talking to someone side by side and working together on where it's going, versus I'm just going to sit back in my recliner and tell the AI what I want.

Speaker 1:

Yeah. Well, Andrew, there's probably a kernel of truth there, though, too. What's the middle ground? How can a software developer utilize the idea of vibe coding to actually push a business forward?

Speaker 2:

Great, great question. I mean, look, the counterpoint to that is, I think, if you go out on the web and you search for Y Combinator and coding assistants, you'll find that they're making statements that their... what do they call it? What's the word I'm looking for? The cohort of companies that they're currently incubating are seeing a significant acceleration in getting their first MVP out the door because they're using a certain amount of, quote-unquote, vibe coding, right. But the key is that there are developers in there who are, you know, watching the machine do its thing, right? It's not pure vibe coding, yeah.

Speaker 3:

Yeah, and, you know, we've also seen reports of that going wrong, where they put it on the internet and immediately it's hacked, maybe because they didn't have the rigor there. So, yeah, I mean, we'll find that middle ground. It is amazing, and, like I said, there's no stopping it; it's going to happen. We just have to figure out the right kinds of discipline and the right way of working with it to get to that promise.

Speaker 1:

Knowing that we're potentially moving towards that future, what are you looking for in a new software engineer that you're bringing onto the team? What types of skills will they require in the future, and how do they feel about that? Are they excited about it? Are they early adopters who are ready to go and eager for whatever they see?

Speaker 3:

Honestly, I think it's what we've always looked for when it came to software engineers. We cared a lot less about what specific language you know or how long you've been working in this framework, and more about how you solve problems and how you think about making these things happen. And it's interesting: I'm on the board of a local university's computer science program. They have an industry board where they bring people in, and, you know, those conversations can get pretty depressing sometimes if you start talking about the future. They're trying to teach students how to get better at, you know, dealing with computers, and at the same time, all these things are happening that make it look like that idea is going to go away.

Speaker 3:

But my point to them has been: yes, let's have them use AI to do these things. Encourage that, don't restrict it; that's where we're headed. But start evaluating them on how they went about solving the problem. How did they use AI? What was their method? How did they determine that it actually did what they wanted it to do? How resilient is it if you ask them to go and change it? Those are the kinds of decisions you're always going to need to make, no matter what happens, and that's what we need to continue to hone.

Speaker 2:

It's important to remain realistic, and any computer scientist will tell you that there's a thing called the halting problem. It's not computable, in general, to look at a piece of code and predict what it's going to do. It's impossible; it's mathematically not possible. So you are not going to ever be in a situation where a computer system can look at a highly complex problem and reduce that to what's going to happen in the future.

Speaker 2:

We, as humans, continue to be excellent at understanding complex system behaviors and synthesizing them from components, and obviously the AI assistants will grow in their capabilities in that area. But I think that's where the coder who is a coder only because they know a language, sure, that might go the way of the dodo, right. But a software architect who understands what the right algorithms are, what the right ways to connect agentic systems are, what the right way to create a service architecture is, which vendors I want to bring into my software architecture in order to achieve certain goals: those are going to continue, I think, to remain in the domain of humans, absolutely.

Speaker 1:

I wonder... one of my favorite recent features of AI is when I can see the AI thinking. I'll prompt it, and then it'll say "thinking," and if you click on "thinking," it'll expand. It'll say, Brian asked for X, Y, Z, which means I have to do one, two, three. Do coding assistants do that, and if so, would it be valuable for coders to understand and see how the machine is thinking?

Speaker 3:

Yes, especially in these agentic tools. They have what's called a planner, where they say, you've asked me to do something; I'm going to come up with a plan. And if you're really seriously using these tools, you're going to want to see what that plan is. You'll say something like, this is what I want; come up with your plan to make this happen and show it to me, and then we'll talk about it before we actually make changes. That's how it should work. And that's been one of the recent revolutions with LLMs: the fact that they can put together a plan, they can think about a process, they can show you what they're thinking about. All of that is incredibly helpful when you're trying to do these things, just to keep you from making a big mess and having to destroy it and start over. So, yeah, absolutely, that's a key part, just like it would be if you were trying to have it write you an article.

Speaker 2:

Yeah.

Speaker 3:

You know, being able to see what the plan is, and let's make sure we're on the same page before we move on. There's also the notion of federated models: we've seen Google recently announce a protocol called Agent2Agent, or A2A.

Speaker 2:

And recently MCP, the Model Context Protocol, has come to the forefront. These are mechanisms that allow multiple AI models to interact with each other, gain additional context, perform tool use, all of that. These are being integrated into AI coding assistants, and where I would like to see that go (not the models themselves, but the use of A2A and MCP) is toward other tooling that not a lot of developers are familiar with or use, because of the complexity of those tools. We have formal mathematical languages and systems in which we can express formally what a system should do, so that we can compare what it does do with what it should do. It's kind of like a linter on steroids, right? Now, using those formal languages to express what code should do is a complex process. Coding assistants could help more of the developer community begin to use those tools, and that's going to hugely increase the quality of code and diminish the error surface we tend to create as we write code.
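As a small taste of the formal tooling Andrew alludes to, the sketch below uses the Z3 solver (the z3-solver Python package) to state what a clamp helper should do and ask whether any input can violate it; the example is illustrative and not tied to any particular coding assistant.

```python
# Sketch of "a linter on steroids": prove a property of clamp() for all inputs
# with the Z3 solver (pip install z3-solver). "unsat" means no counterexample
# exists, a stronger guarantee than a handful of unit tests.
from z3 import Int, If, Solver, Not, And, unsat

x, lo, hi = Int("x"), Int("lo"), Int("hi")

def clamp(v, low, high):
    # Symbolic version of: max(low, min(v, high))
    return If(v < low, low, If(v > high, high, v))

# Specification: the clamped value always lands inside [lo, hi].
spec = And(clamp(x, lo, hi) >= lo, clamp(x, lo, hi) <= hi)

s = Solver()
s.add(lo <= hi)    # precondition: the range is well-formed
s.add(Not(spec))   # search for any input that breaks the specification
print("property holds" if s.check() == unsat else s.model())
```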

Speaker 1:

Well, this has been an excellent conversation. It resonates with me super well; hopefully it does with our listeners. We're running short on time, so I did just want to ask, wrapping up: what should business leaders be doing right now to optimize their workforce's use of AI-powered coding assistants? And then, on the flip side, what should developers be doing to make sure they're optimizing as well?

Speaker 2:

Why don't you start, Andrew? Well, I would say that the first thing they have to do is give them access, and that might mean they have to relax certain constraints relative to what's okay to do in an organization. Give coders tasks that are okay to use in a SaaS environment. You know, if I'm going to use an agentic system that requires me to send some IP outside the door, then let's do it, not necessarily in a toy scenario, but at least in a scenario where we know this IP is relatively less important. The other thing to do is to really create centers of excellence around the use of AI within your organization and create evangelists for this, because there are going to be those in the organization who will pooh-pooh the idea of using a new tool. There are others who are going to be super excited, and we see this every time there's innovation anywhere in technology.

Speaker 2:

Sometimes it takes an evangelist to push that forward and show people why it is that you might want to change your ways. Look, as a coder who has become very, very productive, I have a certain tool chain that I use. I have a certain way of using those tools that I'm used to. If now I have to change from using Emacs to using Windsurf and Visual Studio, I'm going to resist that until I experience it and really see what it can do for me. Yeah, exactly.

Speaker 3:

Yeah, no, I think that's exactly right. You know, we encourage organizations a lot now to find ways to start having their people interact with AI. Because, while you don't necessarily have to go and try to solve your most difficult, hairy problem with AI right now, maybe it's not ready for that, AI is going to incorporate itself into what everyone is doing ultimately. So this is a great way to get a group in your organization using AI on a regular basis and starting to understand it.

Speaker 3:

And, honestly, I see coding assistants right now as a bellwether for what's really working in the AI world. They're the first place you saw agents, and the first place you're seeing things like MCP and A2A, because they have so many tie-ins and it's so easy to prove out: does this work or does it not work? So just keeping an eye on what's going on with those tools is a great way to keep up with what's going on in the AI world.

Speaker 1:

Love that. That's a practical tip for anybody interested in developing their own AI experience and readiness. Well, to the two of you, thanks so much for sharing your time today. Super helpful conversation. I'd love to have you back on the show again sometime soon. Yeah, I'd love to do it. All right, yeah.

Speaker 3:

Thanks a lot, thank you.

Speaker 1:

Okay, we've covered a lot of ground, but three key lessons stand out. First, speed doesn't always equal success unless you reinvest it. Coding assistants can unlock productivity gains, but only if teams plow the saved hours back into tests, documentation and refactoring. Second, guard the crown jewels before you press accept. Security and IP posture should dictate whether you use a cloud model, an on-prem instance or nothing at all. And third, human craftsmanship still plays and wins at this game. The best returns come when developers use assistants as tireless apprentices and rely on fundamentals like readable code, tight architecture and ownership of every line that lands in production. Bottom line: AI coding assistants are a force multiplier, but not an autopilot. Treat them like a junior engineer: give them clear tasks, review their work, and they'll elevate your team instead of steering it off course.

Speaker 1:

If you liked this episode of the AI Proving Ground podcast, please consider sharing it with friends and colleagues, and don't forget to subscribe on your favorite podcast platform or at WWT.com. This episode of the AI Proving Ground podcast was co-produced by Naz Baker, Cara Coon, Mallory Schaffran, Ginny Van Berkham and Stephanie Hammond. Our audio and video engineer is John Knobloch, and my name is Brian Felt. We'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.


WWT Research & Insights

World Wide Technology

WWT Partner Spotlight

World Wide Technology

WWT Experts

World Wide Technology

Meet the Chief

World Wide Technology