The Amplitude of Tech
Welcome to The Amplitude of Tech podcast, produced by Amplix, a leading technology advisory firm, where we bring the voices of technology thought leaders, subject matter experts, and enterprise IT decision makers to you to talk about today’s transformative technology and how it can create opportunities for increased success.
From Vibe Coding to Business Value: A CTO's Playbook for AI in Software Development with Vin DiPippo
AI is rewriting the rules of software development, but are your teams and processes ready to capture the value? Vertikal6 CTO Vin DiPippo joins host Shawn Cordner to cut through the hype, revisit the hard lessons of vibe coding, and lay out what it actually takes to turn AI tooling into measurable business outcomes. A must-listen for technology leaders navigating the gap between experimentation and execution.
What You'll Learn:
- Why vibe coding fell short — and how spec-driven development is changing the approach to AI-assisted coding
- What the data actually says about AI productivity gains, and why the real-world numbers often tell a different story
- How to integrate AI into DevOps and DevSecOps without sacrificing code quality or security
- Why AI in development is a lot like offshore outsourcing — and the lessons from that era tech leaders can't afford to ignore
- How to set realistic expectations with business stakeholders and measure ROI before and after AI adoption
- Practical advice for standardizing AI tools across your development team and building a culture of shared learning
Hey everyone, thanks for joining the Amplitude of Tech podcast. I'm Shawn Cordner, Chief Marketing Officer. Today I spoke to Vin DiPippo. He's the CTO of Vertikal6. We talked about how AI is transforming the software development lifecycle, the multiverse, AI and consciousness, and a whole lot more. This one starts off very technical and dense, but he's a really interesting guy, and the topic is super relevant today. And when you get to the other side of that, you're rewarded with some really fun talk about AI. I hope you enjoy it as much as I did. Vin DiPippo, welcome to the podcast. Thank you. Appreciate you spending some time with us today. Looking forward to this conversation. It's a little bit different from some of the other conversations we've had recently, a topic we haven't touched on, so I'm excited about that. Maybe we could just start off with a little bit about you, the company you're with, and where you've come from.
SPEAKER_01Sure. So my name is Vin DiPippo. I'm the chief technology officer for Vertikal6. We're a technology solutions company from Warwick, Rhode Island. I was originally from Brave River Solutions. I was the CTO there, and Brave River was acquired by Vertikal6; July 1st, 2024 was the closing date. So from that point, around November of 2024, I became the CTO of the overall group, Vertikal6. And last year we had a really great year just developing things. Integration happened in phases, as you know, but by the fall we were a completely integrated company, and 2026 is our inaugural run together as Vertikal6. So pretty excited about that.
SPEAKER_00It's no small feat to integrate a company. We've been through, I think, 12 or 13 acquisitions at the time of this recording, so I know what those challenges are like. Talk to me a little bit about Brave River and what you guys did before the acquisition.
SPEAKER_01Yeah, and it was interesting too, because it was not just an additive thing, you know, buying accounts or an acqui-hire, that kind of thing. It brought totally new business lines. At Brave River, the smallest group we had was actually the IT group. It had probably five or six people in it, and that was the group that did the traditional MSP services. We had a great client base, very synergistic with clients of the same size, and that was really the thing that started the conversation. The way we did IT services and the way Vertikal6 did were very compatible with each other. I believe our book of business in that area was maybe a quarter, maybe even just 20 percent, of what Vertikal6 had. So from that perspective, it was 20 percent additive. But the other 80 percent of our company was development. We had some consulting things we did, like software selections and ERP implementations, the general stuff, but the majority of the company was development. We had a web practice that had really spun out on its own as an agency. It started off as an adjunct to the dev side because in the early days, and we're talking about the early 2000s, we had developed our own content management system with an e-commerce system on top of that. So most of the web presence work we did along the way had heavy involvement from the dev team. At some point we went to a more agency approach and stopped developing our own CMS, because that isn't really something that has a good return on investment anymore. And so that web team grew up there. Brave River had also acquired a couple of companies along the way that did digital marketing, so we had that as a separate line of business, but obviously tightly related to the web practice, right?
So as we came together with Vertikal6, the first thing that was integrated, and arguably the easiest thing, apart from the tool stack and some of the common things, were the IT folks. They came in, and by April 1st of 2025, I think, we were 100% integrated on that side. That was just our choice, right? And then July 1st, I think, is when the company really became integrated in its systems. But we still have distinct lines of business: the web, the digital, the AI and dev, and then the IT and technology services, the managed services, all of those things.
SPEAKER_00Right. We don't usually get into so much detail, at least up front, about what my guest's company does, but I wanted to get that out there because I want to position this conversation the right way. What I want to talk about today is how AI and generative coding are changing DevOps, so I needed to position you as someone who is an authority on this, someone with deep experience in doing development. So let's start off with: what is DevOps?
SPEAKER_01Yeah, so DevOps is a term that gets used very heavily, right? And it has cousins, too. It has SRE, which is site reliability engineering, and it has DevSecOps, which is more concentrated on security. But if you partition the developer experience into figuring out the requirements, architecting a solution, building it, getting it to work, all of that is your software engineering experience. Then DevOps takes over; it formalizes what developers used to do a lot of times by hand: how do we get the code into someone else's hands to test? How do we get the code into production? What are the security and other checkpoints along the way? All of that became DevOps. Work management was also always something that was important to developers, but it really didn't have an owner, and now it gets owned by DevOps as well, work management meaning tickets, right? So when you look at the different ways of doing ticketing, you have CMMI, which is one way of doing it, you have Scrum, you have Kanban, you have Agile, you have the enterprise Agile variants, all of those things. Those didn't originally have the word DevOps over them; they grew up as ways of better managing work by teams and interfacing with the client. But those have now come under DevOps as well. So when we talk about DevOps, we talk about everything that's not the actual nuts and bolts of architecting a solution, building it, testing it, and getting it into someone's hands. How that's done, the mechanisms, the tools that are used, the automations: that's really where DevOps has grown up.
The adjunct to that is site reliability engineering. The final step is to actually get it to production, where a wide group of people can use it, and how wide depends: it can be an individual company you're building something for, or it can be a SaaS solution with really no ceiling on how many people can use it. And how do you make that system resilient? How do you make that system available and consistent? There's a well-known result that comes into play there, the CAP theorem, where those properties play against each other: consistency, availability, and partition tolerance. You can have two, but you can't have all three. That's what the site reliability engineer ends up working on. So when you look at all of that, and I think that's where this conversation is going, both sides have changed with AI, and more specifically generative AI: the ability to consume content and then statistically and probabilistically reproduce content based on prompts, or based on context, really, which is the better way to say it. It's hit both sides. It's hit how we do specs, how we architect, how we build software, how we test it, and then all of the mechanisms in place to get it through the SDLC, the software development lifecycle, and eventually end up somewhere in production where it's reliable and the public can see it.
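The trade-off Vin describes is the CAP theorem: when a network partition occurs, a replicated system must pick between consistency (refuse writes it cannot replicate everywhere) and availability (accept writes and risk divergence). Here is a toy sketch of that choice; all class and method names are hypothetical, not from any real library.

```python
# Toy illustration of the CAP trade-off: during a partition, a replicated
# store must choose between staying consistent (reject the write) or
# staying available (accept it on the replicas it can reach).

class PartitionError(Exception):
    pass

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.reachable = True

class ReplicatedStore:
    def __init__(self, replicas, prefer_consistency=True):
        self.replicas = replicas
        self.prefer_consistency = prefer_consistency

    def write(self, key, value):
        reachable = [r for r in self.replicas if r.reachable]
        if len(reachable) < len(self.replicas) and self.prefer_consistency:
            # CP behavior: sacrifice availability, refuse the write.
            raise PartitionError("partition detected; write rejected")
        # AP behavior (or a healthy cluster): write to what we can reach.
        for r in reachable:
            r.data[key] = value
        return len(reachable)

replicas = [Replica("a"), Replica("b"), Replica("c")]
cp = ReplicatedStore(replicas, prefer_consistency=True)
cp.write("k", 1)                 # healthy cluster: all three replicas updated
replicas[2].reachable = False    # simulate a network partition
try:
    cp.write("k", 2)             # CP store refuses the write
except PartitionError:
    pass
ap = ReplicatedStore(replicas, prefer_consistency=False)
ap.write("k", 2)                 # AP store writes to the two reachable replicas
```

The AP path leaves replica "c" holding a stale value, which is exactly the consistency loss Vin alludes to.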
SPEAKER_00Let's start with the actual development portion of this, then. You had kind of indicated in the past, and I've built software in my past as well, as a business stakeholder, definitely not a coder, that this process was always challenging. There's a lot of code to write. You need a lot of people to write that code. It takes a lot of time. People approach it different ways. Sometimes you have hiccups in the process of developing that software, and you bring in new teams, new people. Sometimes there's no consistency: you can write code and then bring someone in afterwards, and they say this is all trash that someone wrote, right? So these are all challenges that you have in the software development process doing it the old-school way. But now we have AI, and AI has become a tool, not a replacement, but potentially someday a replacement, for the work that developers did. So talk to me a little bit about how developers are using AI in their day-to-day writing software.
SPEAKER_01Yeah, so there's obviously a good and a bad there, but it actually parallels very well with good and bad developers. I like to say that AI is used either as a tool or a crutch, and AI just happens to be one of the most effective incarnations of that. The parallel you could draw is something like Stack Overflow. Stack Overflow is a site that has really suffered in the age of AI, partly because it was polluted by terrible AI answers. It used to be that you could ask a question and really seasoned, good developers with a high reputation on Stack Overflow would provide you answers and get you right to where you needed to be. But regardless of Stack Overflow's fate, terrible developers would not think critically. They couldn't think critically. What they would do is search Stack Overflow for something that sounded like what they needed, copy and paste it, and cobble it together so it would build and run. But upon code review, or upon functional review, the system was horrendously broken. And if you look at that and ask yourself what actually happened, it was that the developer did not think critically about the actual solution; they just cut and pasted stuff that didn't work together. So really, at its heart, AI is not that bad. However, if you don't watch AI and you don't properly use it, if you use it as a crutch, you can have developers that aren't doing enough quality control on the code, and you get code that's not nearly as bad as what some human developers have created, frankly, but completely wrong nonetheless. It doesn't build, doesn't actually function, and so on. So that's something that's been pervasive, right?
Now, I will say that we've actually made strides with that, both in how we use it, how we govern its use and do quality control on it, and also in the very paradigms we're asking AI to work in. So we're on the right track with the answer. The way code was written by AI at first, once it got to a point where it could actually assemble code that was nearly complete, folks started to do this thing known as vibe coding. They would just prompt AI, say, hey, I need this, and it would write it. Then they would run it or look at it, tell it more things about what it needed to do, and AI would rewrite the code. If you've ever done this with English, there's a probabilistic side to it: you tell it, hey, I need a quick blog post about this, or you ask it for an image, and then you want to focus on one spot and tell it to revise just that. Different models are better at different aspects of this, but they all suffer from getting you into a loop, where the context of what you're trying to do, the probabilistic nature of the engine, and the inference that's happening will make you chase your tail. In a reasonably complex system, building it iteratively that way produced some spectacular failures, and it was generally thought of as something that needed to be rethought. That was a great experiment, but what do we do next? So the way we use that paradigm now is that it's 100% okay to vibe code, but only in a few specific instances. One is if you want to have it stand up something very small, or analyze code and recommend suggestions, that kind of thing. That's direct AI-to-code, sort of AI-to-action. That's the paradigm there.
A lot of times, if you're unfamiliar with configuration files or some other system things, AI is great for giving you a direction, and even giving you some scripts or things that will help you get there. The other way it's really good is if you can think critically about code and you're using some new library, or doing something you've never done before. The old way of doing that was to sit down with the documentation, if you could find good documentation, go through it, understand the entirety of what is happening, and then sit down and write stuff. One of the things I always admonish developers about, or always encourage them to do, depending on whether you want to go negative or positive, was to be idiomatic about what you did. And in order to be idiomatic, to know the idioms of a thing, you have to really pop your head up and say, I need to write something in Python, and I've never really written Python before. If you're a C# developer or a PHP developer, you're going to end up writing Python code that looks like PHP. You're not going to use the idioms of Python, you're not going to build Pythonic code. A lot of times folks would only go that far, and they would be very anxious about doing a project in Python, if they had any sort of rigor and structure in how they wanted to write code, because it's a lot. It's a lot to learn the nuances of a language just to write one application. So one of the things we can almost be guaranteed of, especially in the latest incarnations: previously, I think a lot of the models were trained on all the Python code, for instance, that they could get their hands on. I think the models have done a really good job in their later revisions of wanting to at least somehow identify good Python code.
They train on that, understand what the anti-patterns are, and make sure the training adjusts as a feedback loop. So now you have the ability to leverage an AI that knows what good Python code is, and you can ask it to do something and be almost guaranteed that it's in really good Python form. Then you can just use your critical thinking skills to look at the algorithm and ask, does it actually do what I want it to do? So removing that necessity of really learning something idiomatically, and removing the anxiety a developer has about doing something new because of that, is also something I find very good about vibe coding. Now, the key there is that all of those are relatively small. If you do something big, like having AI convert something from one language to another, or one framework to another, you can sort of look at it and understand what it did and what it will do, just by your critical thinking skills, but that's really starting to push the boundary. So the new thing that came out, the new paradigm, and it's an old paradigm, but it's new for AI, is to do this in stages. The overall paradigm is spec-driven development. Some tools have come out, like Kiro, which is an AWS project, that do it in multiple stages. Cursor is a very widely used fork of Visual Studio Code, and it has Plan mode, which is essentially spec-driven development. Antigravity is Google's new entry; it's a VS Code fork, it uses Gemini very well, and it actually uses multi-agent models very well. So all of these things have started to come up with this paradigm. And what this paradigm is, is you start off by reasoning with AI about what the business requirements are. And it does two things. First off, vibe coding for people that aren't developers, that's really where it was absolutely at its worst.
Developers were sort of lulled into trusting all of the code that came out of vibe coding. But the accessibility of something that can write fairly complicated code bases to someone that can't read code at all, that produced some of the worst outcomes we possibly had. So at least now we can say that we can still democratize the process more than we were ever able to. For instance, you would work on the business case, the specification: what does this need to do? Just like when you iteratively edit writing with AI, you can get into loops, but you can usually get past them and say, whatever it just wrote sounds very natural to me. It's words I would use, it's the way I would say it. It helped me get off the ground and gave me some directions to go in, but this is something I'm now proud to send to someone. In the same way, you can get to a complete definition of exactly what you want this software to do, no matter how complicated it is, collaborating with something that asks questions you might not have thought of, still working on that top-level spec. Then you can move down and give AI instructions on how to architect it. Architecture is defined as the decisions that need to be made early enough, the things that are very expensive or very time consuming to change and correct later: which database are we going to build on, what stack are we going to build on, how is it going to be architected from a scalability and elasticity perspective, all of those things. So then you can have the architecture discussion, and that could actually just be with an architect who isn't going to be involved in the final build.
Then, once that's totally buttoned up, you can have an engineering manager talk about the technical implementation plan. Again, this is something they would do with a mid-level developer: tell me how you plan to attack this. So in a bigger environment, you have the business analyst that would do the spec; it would go to enterprise architecture to get architected; it would then go to a developer to start breaking it down; it would have engineering review of how you plan to do it; and then it would be given to developers at different levels. In this workflow, that's the third stage. And then finally, the fourth stage is the implementation, where the AI will actually write the code. Since it's built to carry the context all along the way, it takes all of these things into account, so the code it's building is much more focused than just prompts, even if the prompts are huge. The other way these things have tried to converge is by increasing the context window. Gemini, I think, has a million tokens now, which means you could paste in enormous amounts of code and ask it to analyze it. We found, and I say we, I think it's an industry research finding, that you need a lot of context, otherwise it's sort of untethered. But if you have too much context, you start getting to a point where all of the different paths through the neural network become that much less deterministic. So it's a balance. And using this staged way of developing code, and it's really not developing code, it's developing complete systems with the help of AI, really has been a game changer with much better outcomes.
You still need quality control all along the way, and you still need critical thinking all along the way to help AI out, but it really takes a lot of the stress out of it. Where you would have to wait for someone to develop a business requirements document, they can do that in half the time vibing with AI. But then you take that and say, okay, this has now been blessed by people who can think critically and say it's an accurate representation of what we're building. Then you can move to architecture. Again, you can ask, hey, what kind of database offers the best latency, and it gives suggestions, and you can do some research with AI, and then you reason with it about the architecture and the technical implementation, and then finally the code, right?
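The staged workflow Vin walks through, spec, then architecture, then implementation plan, then code, with a human gate between each stage, could be sketched roughly like this. The model call is stubbed out (in practice it would be an LLM API call), and every name here is illustrative rather than taken from Kiro, Cursor, or Antigravity.

```python
# Minimal sketch of spec-driven development: each stage's approved
# artifact becomes context for the next stage, and a human review gate
# sits between stages.

STAGES = ["spec", "architecture", "implementation_plan", "code"]

def run_model(stage, context):
    # Stub standing in for a real LLM call; it would receive all
    # previously approved artifacts as context.
    return f"<{stage} draft based on {len(context)} approved artifact(s)>"

def human_review(stage, draft):
    # Stand-in for the critical-thinking gate: a business analyst blesses
    # the spec, an architect the architecture, an engineering manager the
    # plan. This toy version approves everything.
    return True

def spec_driven_pipeline():
    approved = {}  # stage -> approved artifact
    for stage in STAGES:
        draft = run_model(stage, approved)
        while not human_review(stage, draft):
            # In a real loop, reviewer feedback would be folded into the
            # next draft rather than simply regenerating.
            draft = run_model(stage, approved)
        approved[stage] = draft
    return approved

artifacts = spec_driven_pipeline()
print(list(artifacts))  # ['spec', 'architecture', 'implementation_plan', 'code']
```

The point of the structure is that the code stage never runs against a bare prompt; it always runs against three previously blessed artifacts.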
SPEAKER_00So is one of the challenges of this that you've got developers using these as tools in their process, but it's not necessarily something that's been standardized, in how they're going to use it and what they're using across the development team? You end up with fragmented architectures, you end up with maybe inconsistencies in the approach. I can see the upside of this. It sounds to me like you're saying developers can now be generalists. You don't need to worry about specialists who write specific types of code. You could take someone who has a competency in one language and, using the AI tools, they're able to be competent in other coding frameworks. I can see how this would allow you to bring in talent that is maybe, I don't want to say entry level, but mid-level and a little lower cost, and raise the output and the quality they're putting out. But it does seem to create the potential, at least, for silos in the development process. Is that a challenge?
SPEAKER_01Yeah, it is a challenge. Part of it is the discipline of making sure folks aren't going straight to code. It takes a little bit of training, because unless you have a huge partitioned team, a lot of development teams are not that big. Even if you outsource a team of 12 developers to build something, you're usually dealing with a smaller number of developers on your side. There are also certain limitations on pricing models and how these things are licensed; it just hasn't settled to where you have the predictability you had with other development tools. And the maturity, too. Visual Studio Code is a really popular cross-platform tool, but Visual Studio has gone through so many revisions, with a very integrated tool set. You had Eclipse, and JetBrains really owned a lot of the market for some of their stuff: very stable, well-thought-out software that evolved based on great user feedback. Xcode for Apple. While you can pull AI into all of those, the companies providing the tools I just mentioned, Antigravity, Kiro, and Cursor, are babies in the industry as far as the maturity of what they provide. And that's okay, because the maturity of what we're actually doing with AI is about the same. But it's interesting. Well, we put in some checks, and that's where DevOps can also help.
Using DevOps, using automation, is something you have to be doing anyway, because you have to be sure that the code that was demoed is the code that gets committed, the code that you tested in dev is the code that actually makes it up to UAT, and the code that the customer accepts is the same stuff that gets pushed to production. And then there are other things, like infrastructure as code, that came up as part of DevOps, which basically means the first act of QA is deployment. There are so many times when the code was tested and worked great, and then it goes to UAT and the whole thing fails. Or worse, it goes to production and the whole system fails. And why is that? Well, it's because in dev and UAT you had people with access to go in and install a widget on the servers that ran it. So cloud deployment, clean deployments, all the things that enforce that. That's where SRE comes in, with infrastructure as code. If you're not small enough to just deploy on serverless infrastructure, if you need dedicated gear, then infrastructure as code works.
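The "what you tested is what you ship" discipline above can be sketched as artifact promotion by digest: build once, record a hash, and refuse to promote any bytes that differ from the build that was actually tested. This is an illustrative toy, not a real pipeline tool.

```python
# Sketch of deployment parity: the artifact promoted to UAT and
# production must be byte-identical to the build that was tested.
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

class Promotion:
    def __init__(self, artifact: bytes):
        # Digest recorded once, at build time.
        self.expected = digest(artifact)

    def promote(self, env: str, artifact: bytes) -> bool:
        if digest(artifact) != self.expected:
            raise ValueError(f"{env}: artifact differs from the tested build")
        return True

build = b"release-1.4.2"
pipeline = Promotion(build)
pipeline.promote("uat", build)          # same bytes: allowed
pipeline.promote("production", build)   # same bytes: allowed
try:
    pipeline.promote("production", b"hand-patched-on-the-server")
except ValueError:
    pass  # a hand-edited artifact is rejected, which is the IaC point
```

Infrastructure as code extends the same idea from the application artifact to the servers themselves: the environment is rebuilt from versioned definitions, so nobody can quietly "install a widget" in dev that production lacks.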
So you take all of that discipline, and now you ask, how do we build AI into this? It's very similar to how we built static application security testing and dynamic application security testing, SAST and DAST, into the DevOps practice, where it gated things. Depending on your tolerance for pain, I guess, you could have it gate every single check-in, gate the check-ins to the main branch, or gate the check-ins to the UAT branch, wherever you wanted it to actually fail. Say you come up with a code change, and you're like, we just changed this code here. Then it goes through SAST, and: hey, by the way, this library you're using suddenly has a critical security vulnerability that affects 16 other components, so your code change is failing to get integrated because you have to fix 16 other things. Now you're faced with a serious decision. You have to decide whether or not to push it in anyway, because the functionality or the fix is a net gain, and your exposure posture is the same as it was yesterday; you just happen to have one more thing fixed or working. But then you have to have the discipline to go back and fix that other thing, because now you know it's a vulnerability. So do you actually fail the thing you're pushing in because you need to fix the other stuff first? Or do you just note it and push the change, because you're really not losing any security ground compared to yesterday, and you come up with a plan to fix it within 24 hours, 30 days, whatever the vulnerability actually warrants? That's the security angle we've wrestled with for years. So now AI comes in, and you can do things with AI. This is getting a little bit better.
It's still not where I'd love it to be, but AI can review code that someone writes, and you can make that a gating factor; there are a lot of tools that do that. You can also, and this is where I think there's a lot more to be done, have AI scan code and identify code that might have been written by AI. This is where the same work they're doing to catch plagiarism comes in. You can have something that says, hey, if this code was written by AI, then we need to know what it was written by, and have it checked by a different model if you want to go fully automated, or have the requirement be that it's checked by a human. If it's not written by AI, if it's written by a human, then have AI check it a different way. And I'm sort of speculating there, because that's somewhat vaporware; there's a lot of stuff that claims to do all of this, but the maturity of it is something I'm still tracking. Those are the paradigms I think you'll see in the future, where AI is just another resource for you to use. And you know, I was talking about the terrible developers that cut and paste from Stack Overflow. Let's talk for a minute about how real development teams work, which is outsourcers or virtual employees. You have a certain number onshore, nearshore, a certain number offshore, and there are a lot of differences there.
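The gating decision Vin describes, blocking a check-in only when it makes the security posture worse than yesterday's baseline and attaching a remediation deadline otherwise, can be sketched as a simple policy function. The severity names, deadlines, and the policy itself are illustrative assumptions, not any standard or real tool's behavior.

```python
# Sketch of a SAST-style gate: a check-in is blocked only if it
# introduces findings not in yesterday's baseline; pre-existing findings
# pass through with a remediation deadline instead.

REMEDIATION_DAYS = {"critical": 1, "high": 30, "medium": 90}  # illustrative policy

def gate(change_vulns, baseline_vulns):
    """Return (allowed, remediation_plan) for a proposed check-in.

    Both arguments map vulnerability id -> severity string.
    """
    introduced = set(change_vulns) - set(baseline_vulns)
    if introduced:
        # The change makes the exposure posture worse than yesterday:
        # fail the gate outright.
        return False, {}
    # Posture is no worse than the baseline: allow it, but attach
    # deadlines (in days) for the known pre-existing findings.
    plan = {vid: REMEDIATION_DAYS[sev] for vid, sev in baseline_vulns.items()}
    return True, plan

baseline = {"CVE-2024-0001": "high"}        # known since yesterday
same_posture = {"CVE-2024-0001": "high"}    # the change adds nothing new
allowed, plan = gate(same_posture, baseline)
# allowed is True; plan gives CVE-2024-0001 a 30-day deadline

regression = {"CVE-2024-0001": "high", "CVE-2024-0002": "critical"}
allowed, _ = gate(regression, baseline)
# allowed is False: the newly introduced critical finding blocks the check-in
```

Where the gate sits, per check-in, on merge to main, or on promotion to UAT, is the "tolerance for pain" knob from the discussion above.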
SPEAKER_00You have a lot of differences in just the culture, the work ethic, the time zone especially. If you look at the total cost of getting a project done, you can reduce your labor costs by using outsourcers, but if you don't do it right, you'll increase your overall project cost, because you have to redo things, and you have to spend so much more time speccing things. This is a common complaint: depending on the level of resources you have working on it, you get back something that's sort of nonsensical, and the answer is, well, that's what the spec said. Whereas if you're working with a closer team, yeah, you're laughing, you've seen this pain, working with a more tight-knit team, they'll say, hey, by the way, I'm not exactly sure, but I'm guessing you want this to happen if this happens? Yeah? Okay, we'll implement it that way. So there's an as-built. That was the problem I ran into. We had teams that were US-based, and then we had offshore teams. Because we were running agile, we would break the work down into pieces and assign it off to different teams, and those teams would be kind of working on their own, but they often lacked the context window. Maybe that was a mistake in how we architected how we were going to work with those development teams, but you ended up creating all these problems that cascaded down throughout the software, and we spent a lot more time doing QA and going back and fixing things than we ever should have. And that was actually a question I was going to ask you: AI is saving developers time on writing the code, but is it potentially creating more work downstream in the QA process?
SPEAKER_01Vibe coding, absolutely. The new spec-driven development, and how it matures? To be determined. But I can tell you, for the naysayers, and I'm not a naysayer per se, but I am definitely an anti-hype guy, we're still in a huge hype cycle. You think about the maturity, this is an analogy, but you think about the maturity of the security tool market. If I came to you and said, I have a security tool that you need to implement, it's the only one you'll need and it's better than anything you've ever used before, you'd be like, let me think about that. Now if I also told you that I wrote it, and I'm an MSP, right? My job isn't writing security tools, but I wrote it. You would totally be like, this is just complete nonsense, right? And yet we're in that kind of infancy of AI-based solutions, where the market hasn't really settled. And frankly, even the folks that are pouring insane amounts of money into this thing, they're building models, and the models are getting better. Just in the last year or so they've started creating models that are tailor-built for code, but how those things are used and applied is still ongoing. The same thing is going to happen, except hopefully we understand the mistakes of the past and we won't just blindly repeat them. So when you look at AI, AI is like a very cheap, very fast outsourcer. Exactly how much critical thinking it can do, I would argue, is the same sort of problem we have to really think hard about to overcome, the one we didn't handle very well when we started using outsourcers. And a lot of folks now have decided that we need a senior engineer, and the senior engineer needs to be like a senior engineer.
When we're talking to them about what we're giving them, that they're going to parcel out to the team that's assigned to us, we need to feel like they really get it and they're driving as much of the conversation as we are, right? Is that person going to cost the same amount as the understudies that they have? No. But it's still worth it. And then, how much time do we need to spend, and does our job change? Everybody's talked about this idea that software developers are going away, and then other folks say, no, they're not going away, their job's just going to change. Well, frankly, an engineering manager, or even a senior software engineer on a team that had outsourcers, their job needed to change and we didn't change it. We just let the outsourcers work in isolation like they were junior developers working next to us, and we didn't build a lot more oversight into the senior engineer's time. So we need to do that. We need to get that right with AI, and I don't feel like the practices and the tools have settled down to where we can say, yeah, we really got that. But I do feel like, especially in this, I'd call it a second revision, with this spec-driven development, there's a lot of awareness that we need to do that better. Because if you look at some of the early articles people have written: I used Copilot, this is GitHub Copilot, for a year, and at the end of the year I found out it was a zero-sum game. Why is that? Because they made all the mistakes of vibe coding. So that's not to say that AI is, I don't know how old that word is, I'm dating myself, but, you know, that AI is sus. I think that's what the Gen Zers say, right? AI's totally sus. Six-seven. Did I do that? I did it. It's on the podcast, can't take it back.
SPEAKER_00Don't edit that out. That's the first six-seven on our podcast.
SPEAKER_01All right, you got the second, there you go, that counts. So yeah, that's where I feel like it's still TBD, but at least we're going in sort of eyes open. I just think that people need to catch the plot. They need to understand, yeah, vibe coding really fell on its face in some spectacular ways, but that doesn't mean it totally discounts AI, or even totally discounts vibe coding. You just have to now say, okay, that has its place, now how do we do these other things? And then five years from now, development will be completely different. But what won't be different is it's still going to need really good practices, really smart folks running the critical parts, and it's going to be using these things as the best tools we've ever had, right? I want to read you a stat in response to something you said about it being a zero-sum game.
SPEAKER_00A July study by the nonprofit research organization METR, Model Evaluation and Threat Research, showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower. What does that say about the promise of these AI tools?
SPEAKER_01Exactly. I mean, I hate to say it this way because it sounds self-serving, but exactly what I said: it's TBD. And I feel like I don't know enough behind those statistics. Because if they feel like it's 20% faster for them writing code, but then they have to knit things together, QC the code, and go back to AI and have it fix things, absolutely it can either be a zero-sum game or you can lose ground. There are developers, even in our small team of ten people, there are developers that have had the experience of writing something, going through it, and then basically having to scrap it and just deciding to write it themselves because they were up against the deadline. And that's a really difficult thing. Fortunately, we've had enough conversations about it that we make that a learning experience. Why did it fail, right? And what was the thing we did right? We had another example where someone had a solution they needed to build, something that came in, and they decided to use it as a use case to see what AI could come up with. And AI generated a whole thing that did the functions we needed, but it was completely out of context. So then the team went into this cycle of saying, okay, how do we take that and have AI morph it into what we need? And then we had to do a lot of changes and whatnot. It ended up taking probably three times as much time to come to that conclusion. That's okay, though, because it was all a learning experience. But the biggest takeaway was: if you do something as an experiment, the entire team needs to know it's disposable, right? We look at what AI did, we look at how that could be improved or what that teaches us.
But the thing we can't be doing is saying, okay, you did this proof of concept, now how do we edit the proof of concept to become the actual code? Because these are truly experiments. They're not proofs of concept, they're not pilots, they're not drafts or beta versions or alpha versions. So I feel like if you don't take the time to learn, and you don't implement those kinds of controls and those kinds of practices, you will end up spinning your wheels a lot, and you will find that all of the gains from AI evaporate.
SPEAKER_00I think we're seeing that in other areas of enterprises deploying AI, in that there's been a lot of experimentation and a lot of pilots, and famously there's this MIT report that 95% of AI pilots last year, or whatever timeframe they looked at, didn't produce an ROI. But I think that neglects the truth, which is that enterprises are not deploying these pilots expecting them to scale and generate an ROI. It's more the experimentation phase. And just to further make that point, I'm looking at three other studies here, from GitHub, Google, and Microsoft. They found that AI is helping developers complete tasks 20 to 55% faster. But then there's another report from Bain saying that real-world savings are unremarkable. So I think we're at this stage where it's like, okay, we've got an awesome new tool, this could be transformational for us, but what do we do with it? How do we make it work? And software development is notoriously fragile, right? You have to be careful in how you deploy these kinds of tools.
SPEAKER_01No, absolutely. And you see folks saying, hey, it's negligible, and MIT said the majority of those things fail to produce an ROI. I don't think the AI fails, and I don't think that's the case with developers either. I don't think that AI is not at least 20 to 40% faster at doing things. But I'm going to talk on both sides, right? In development's case, it's 20 to 40% faster at doing things, but depending on what you actually do with that output, you can set yourself back instead of propelling yourself forward. As far as enterprises go, I don't think the AI fails a lot in some of these other things either, because we do that too, right? Our whole consulting gig, my job as CTO, I'm over what's called innovation services now at Vertikal6, and that includes the data and AI projects that we do. It's more that folks don't establish an ROI, even a simple one like adoption. Like, we're going to implement Claude for everybody. Okay, to what end? You have to have some goal in mind. Even if it's just: we want to get 40 to 50% of our people using Claude on a day-to-day basis, because then 30 to 90 days from now we can come back, have some sort of a mashup, and say, give us something it helped you with, what are you doing with it? And all of a sudden you get a newsletter you can start sending out to people. So that's one thing. The other thing they'll do is say, okay, we have this process, it's really, really bad. We want to put AI in and completely revolutionize the process.
It's like, okay, well, would the investment be worth it if you moved the needle 20%? Because, well, I don't know, isn't this supposed to help us cut our headcount by two-thirds? It's like, well, that means you're thinking, roughly speaking, it's going to be 66% cost savings. And if you shoot for 66% and you only get 10, it's a failure. So I feel like that's part of it. And it's the same thing with development. You say to yourself, I want my developers to be 20 to 40% faster. In what, everything? That basically means, if you do the math in the inverse, that they're producing that much more output every single week with AI. And it's like, not yet, right? Not yet. But if your developers are doing particular tasks, AI can be even far more productive than that. AI can do stuff in a couple of hours that would have taken you eight to ten, say. So that's fivefold. And if you add that to some other metric and it moves your needle by 10%, then you're good. So that's what's happening: Microsoft is showing you one side of the coin, and Bain's looking at the other side. One side of the coin is, yeah, the tasks being done with AI are five times faster. Okay. But then you look at how that particular task gets integrated into a larger process, so you don't lose ground, and use it properly. That is where you'll find a decent gain. But if you don't do that right, that 40% will just end up having you spin your wheels, and you'll have a net loss. And I can't speak authoritatively about that, because I've read the same studies and I've seen that it comes down on one side or the other. So what does that mean? Well, it certainly doesn't mean that AI is sus, right?
It totally means that we haven't figured out how to use this thing yet, right?
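The back-of-envelope math Vin walks through, a task that's five times faster only moves the whole project as much as that task's share of the total, is essentially Amdahl's law. A quick sketch, with the fraction and speedup numbers chosen to match his example of a fivefold task gain yielding roughly a 10% overall gain:

```python
def overall_speedup(task_fraction, task_speedup):
    """Amdahl's law: overall gain when only part of the process speeds up."""
    return 1 / ((1 - task_fraction) + task_fraction / task_speedup)

# A coding task that AI makes 5x faster, but that is only 12.5% of total
# project time (the rest is spec'ing, review, QA, integration):
s = overall_speedup(0.125, 5.0)
print(f"{s:.2f}x overall")  # 1.11x overall, about a 10% gain
```

This is why "tasks are 5x faster" (the GitHub/Microsoft framing) and "real-world savings are unremarkable" (the Bain framing) can both be true at once.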
SPEAKER_00And we haven't communicated and set realistic expectations, right? Exactly. A challenge that we see a lot, too, is that the C-suite and the boardroom have expectations that AI is going to be this transformational cost-saving engine, and that's not always the case, especially where we're at in the early adoption curve. And there are places where you can have that kind of an impact. CX is a perfect example, where you can buy some AI off the shelf, plug it into your contact center software, and divert a bunch of calls and messages that are transactional in nature, that normally would have gone to a human agent, and give them to the AI to solve. And now you actually have a real, measurable cost savings. But that's a very confined little use case, and something like what we're talking about with software development has a bigger enterprise impact, I think.
SPEAKER_01Yeah, and based on the CX case alone, what I think is emerging even with that use case is that you have to have muted expectations, because you should give people a very easy way of bailing out. Otherwise, what you gain in not having to handle those calls, you'll lose in customer satisfaction. So you implement this expensive AI system and you're like, well, I'm giving people a way of getting out of it, and I find that 80% of people are opting out immediately, not even giving it a shot. Okay. But for now, you've got 20% usage that will continue to help you train the thing, and that 20% that stays in it, those calls, you're taking off your call center. So that's a win. Consider it a win; that's version one. Then steadily make your AI better, and then think to yourself, okay, I might delay their opt-out just a little bit longer because I've made the AI so much better. And then you can balance that out. So there's a reputational risk. Like some of the folks that did the original vibe coding things, there were incredible problems with them, security issues. There was an entire site, I won't name it, but let's say theoretically there was a site where people who were at risk were in a community, right? And because of the terrible coding, and a lot of folks attribute that to AI, their membership was exposed. People at risk. Great. There's another case, and this is actually a local one, where an organization used AI to replace a graphic designer, and there was a subtle issue in the logo it created.
And before anybody realized it, it was printed on apparel, it was on flyers, on social media. It was a huge reputational risk and a hard-cost problem, because they had to throw all that stuff away. So when you talk about the potential harm and you think about developers: what are you developing? At this point, I would partition code. UI, data entry, sure, though I would still have security and multiple checkpoints. But if you're building OT, operational technology, if you're building something that controls lasers, or something that controls paint lines that will explode if they over-pressure, or things like your flammable gas control system, industrial control systems, you don't vibe code those.
unknownRight.
SPEAKER_01You just don't. At this point, you just don't. But let me tell you some progressive thinking. Do we have a minute to go down a little rabbit hole? Yeah, sure. So, there's SIGPLAN, part of ACM, the special interest group on programming languages. I'm waiting for the SIGPLAN academic articles to come out about computer languages that are no longer human-readable. Now, why is that an interesting thing to think about? Well, right now AI is probabilistically writing code, but it's writing it in our languages. When you think about C#, C# is on version something like 12, 13, 14, I forget. So what was wrong with versions one through 12, that you needed 13? What was wrong with C, that you needed C++ and then C#? We could go on about that. There are a ton of programming languages. Why are they all here? Rust is a great example. That's a programming language that prides itself on safety, memory safety. The billion-dollar mistake, by the way, was the null reference; anybody that's a computer language geek will know what that is. And memory safety has been one of the biggest security problems we've had with code, huge problems. So Rust is a language that was developed because human beings needed help writing memory-safe code. So what don't we worry about anymore? We don't worry about the compiler much. A compiler either puts it into bytecode, that's what Java does, that's what C# does, and that can run anywhere, but at some point that bytecode gets compiled into actual processor instructions. And those processor instructions, while they can be looked at symbolically, that's called assembly language, are really just numbers that make up the program.
So you have this great huge system, object-oriented, functional, whichever paradigm you're looking at, and you have something like Rust that implements it with memory safety, and it's just super. And yet we're having AI write code in that, and the only reason is that we need to be able to look at it, because it's probabilistic, not deterministic. We've got to inspect it. We do. Now, no Rust developer, maybe the Rust team themselves, but 99.999% of Rust developers don't take their Rust code, look at the object code that was generated to run on the processor, and say, let me check that out, let me make sure the compiler got it right. Compilers can have bugs in them, sure. But by and large, going from source code to object code is a deterministic function we trust. And processors can have bugs too. There haven't really been any recently, though there have been some security issues. And of course there was the whole Pentium bug, I'm dating myself, but that was when a CPU actually proved it couldn't do math. But how the actual object code is executed on the processor, that's a deterministic function of the microcode and the actual etching on the processor. We don't question that at all, either. Again, bugs can surface in those, but those are all deterministic. So, bringing us back up: at what point do we trust AI enough to generate code that is as close to deterministic in its source code as possible? And at that point, why on earth are we having it write Rust? Why are we having AI build this system that will eventually run on a processor, but build it in Rust for us? And then we say, oh no, let's build it in Zig, or, you know what, I decided I want to build it in Go. None of that should really matter.
AI should be able to write something that eventually runs on this system and has the desired outcome. And a great example is something called WebAssembly, which runs in the browser, but if you look at the definition of WebAssembly, it's super low-level. Rust is famous for being one of the first languages to support compilation to WebAssembly, so it can run in the browser as if the browser were a CPU. Rust can also compile to run on Intel, it can target GPU architectures like CUDA, it can cross-compile everywhere. So when you look at that and say to yourself, well, if I ask AI to build a website, couldn't it just build the WebAssembly bytecode directly? Why does it have to build Rust code that then gets compiled into WebAssembly? And that's only because we don't trust it. So why is this rabbit hole interesting? This futuristic rabbit hole is just to say that things are going to get different as we go from probabilistic more towards deterministic. And that loops back to what I was saying: so what if we increase the context window to some enormous amount, give it all the code, and ask it to build something, are we getting closer to a deterministic solution? Maybe, maybe not. How about if we do this spec mode, which is staged, and we get each piece going, is that getting us closer to deterministic? Maybe, maybe not. That's why I say it's TBD. But one thing is for sure: we're going from probabilistic, random code that doesn't work, towards deterministic. Where we are on that continuum is a matter of debate, but we're moving in the right direction. So eventually this is going to happen, right?
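The compiler trust Vin describes, same source in, same object code out, is easy to demonstrate in miniature. Here Python's built-in `compile()` stands in for the Rust or C# compilers in the discussion above; it's an illustrative analogy, not the toolchains he names:

```python
# The determinism we take for granted in compilers: compiling the same
# source twice yields byte-for-byte identical bytecode. Contrast with an
# LLM, which can emit different source for the same prompt.

src = "def add(a, b):\n    return a + b\n"

code1 = compile(src, "<example>", "exec")
code2 = compile(src, "<example>", "exec")

# Same source, same compiler, identical object code: we trust the step
# from source to bytecode without inspecting the output.
print(code1.co_code == code2.co_code)  # True
```

That repeatability is exactly what's missing at the prompt-to-source step today, which is why the generated source still has to be human-readable and reviewed.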
What I don't think will happen until we have another breakthrough, and this is across the board with AI, is getting to the point of critical thinking. This is getting into the realm of AGI and that kind of thing. And if you think about why generative AI has really taken leaps forward, it was algorithmic advances. The attention matrices and the transformer architecture completely blew the lid off of generative and large language models. Stable diffusion and its cousins, that was the image algorithm, rather than just using an existing recurrent neural network or something like that. Those sorts of advances in algorithms are one part, and folks don't really look at that. The gestalt philosophy, if I could use that, has a term, and-summative, I think they call it, where just more of something is not necessarily better, because the whole is something different than the sum of the parts. And when you think about some of the things we hear, and I know they're not totally focused on this, but when you think about raising trillions of dollars, putting data centers in space, that's kind of like that and-summative thing: more of this is going to get us to AGI. And really, AI has been the confluence of three things, and you can't forget the third. It's been data storage, the access to so much information. It's been processing power, the incredible advances in GPUs and raw compute. And it's also been algorithms, breakthroughs in the research of how to actually structure neural networks, mathematically, structurally, whatever. And I believe we need another breakthrough. Maybe it comes with quantum, because quantum forces you to think completely differently than classical.
So when you merge AI and quantum, I don't think folks should be thinking about it as, oh, it'll just be much, much faster. No, when you think about how you solve a problem with quantum, it's completely different from classical. So that machination in your head, getting someone to think about a neural network and these things in quantum terms, that could be it, but who knows. All of these really futuristic things come down to that continuum. Right now we're somewhere in the probabilistic zone. We've come further, but we need to get more towards deterministic before we can really start being as aggressive as we are with: send this source code off to the compiler to be run on the CPU, and don't worry about it. Right?
SPEAKER_00Maybe, Vin, just maybe, consciousness is not substrate-dependent, it's just an emergent property of complexity, and we're getting to the point where AI is actually going to be conscious and will just do all this for us. You totally went there because I brought up gestalt philosophy. I did. But maybe we're stuck with vibe coding until then. I did want to bring something up, though: do you know Max Tegmark from MIT?
SPEAKER_01I do. Well, I don't know him personally, but I've seen his work, yeah.
SPEAKER_00I didn't anticipate you were grabbing a beer with him after this, but you knew of him. So he's introduced something called vericoding, which, in my layman's understanding, is kind of vibe coding but with some sort of underlying mathematical structure that validates that the code is actually going to work. Is that starting to make it into the development world? And is that potentially the answer for giving us the intent of vibe coding, being able to code with natural language, but also the veracity of the code actually working?
SPEAKER_01Yeah, so I've read some of the work on that, and it's reminiscent to me of, well, essentially computer science came out of mathematics, and a lot of the functional programming paradigms come out of mathematical rigor. Obviously you can model anything mathematically; that's what a mathematician will tell you. You can come up with an equation, X's and Y's or whatever, that describes the human brain and the human heart. We haven't really yet, but you could; that's simulation theory. So that's where the algorithm I'm speaking about might come from, absolutely. Because if you look at simulation theory and say, okay, we have a massively complex system, can we model that mathematically? Isn't that what we did? Awareness, self-awareness, consciousness, that's a massively complex system. Just the number of variables alone: it involves everything going on in your brain, but it also involves everything that happened to you from the time you were conceived until now. Every experience you've had has built up your neural network, so that's an incredibly complex thing. The algorithms that we had were n-dimensional error-surface reduction type things: if you consider n inputs to a neural network and n outputs, and you looked at that as an n-dimensional surface and tried to find the local minima and maxima, that's the old-school stuff. That was trying to get some mathematical rigor around how we model these neural networks. And all of the things that have come out since then have basically been further mathematical representations of neural networks.
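The "error-surface reduction" Vin mentions can be shown in one dimension: gradient descent just walks downhill on a loss surface until it finds a minimum. This is a toy sketch of that idea; real training does the same thing over millions of dimensions.

```python
# Toy version of error-surface reduction: gradient descent on a 1-D loss.
# Each step moves opposite the gradient, i.e. downhill on the surface.

def grad_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function given its gradient, starting from x0."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step against the slope
    return x

# Loss f(x) = (x - 3)^2 has its minimum at x = 3; its gradient is 2*(x - 3).
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 3))  # 3.0
```

The n-dimensional case he describes is the same loop, with x a vector of all the network's weights.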
I mean, attention matrices are matrices, right? And there's, I forget what it's called, LoRA, maybe, low-rank adaptation or something like that, which uses lower-rank matrices; it's all linear algebra. The thing driving a lot of the AI is vector databases, which are tuples, mathematical tuples. So yeah, it's all related. Whether those particular concepts, the applications we're reading about, will make it, I don't know. But it's 100% certain that when a new algorithm comes out and establishes a new mathematical guardrail that guides the output of the neural network, that's how things jump forward. That's how we got LLMs: the neural networks before weren't even LLMs, they were doing a really variable, probabilistic job of predicting how to interpret and respond. And suddenly we got the transformer algorithm, which essentially added a new mathematical construct to the neural network, and all of a sudden it could solve that problem. So I absolutely think that's on the right track. And I think that as we're able to more accurately simulate the things that happen in a complex software system, that's when we'll be able to drive at least that use case forward more.
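The low-rank idea Vin is reaching for (LoRA, low-rank adaptation) is plain linear algebra: instead of storing a full d-by-d update matrix, you store two thin matrices whose product approximates it. A minimal sketch in pure Python, with toy numbers chosen only to show the parameter count shrinking:

```python
# Toy illustration of the low-rank trick behind LoRA: a d x d update matrix
# (d*d numbers) is replaced by A (d x r) times B (r x d) with small rank r,
# so you only store d*r*2 numbers. Values here are illustrative.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

d, r = 4, 1                         # full dimension 4, rank-1 update
A = [[1.0], [2.0], [0.0], [-1.0]]   # d x r
B = [[0.5, 0.0, 1.0, 0.0]]          # r x d
delta_W = matmul(A, B)              # the full d x d update, reconstructed

print(len(delta_W), len(delta_W[0]))  # 4 4: full-size matrix recovered
print(d * r * 2, "vs", d * d)         # 8 vs 16 parameters stored
```

At real model sizes (d in the thousands, r around 8 or 16) that storage gap is what makes fine-tuning cheap, which is the "lower-rank matrices, all linear algebra" point above.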
SPEAKER_00So yeah, it's remarkable to me that when you have a conversation about AI at this level, it's hard to discern AI from the actual experience of consciousness as we know it, because our brain is nothing but a predictive modeling engine, right? And to your point earlier, we build these neural pathways based off of experiences, and those experiences tell us what to expect. That's how we make our predictions as we go through our day, really not scrutinizing anything, not paying attention. Like when you're driving to work, you get lost in that drive. You've done it a million times, you're not thinking about the drive, you don't have to pay specific attention to it, because you already know what to expect. But when something unusual happens on that commute, that's when you start to pay attention. That's really just how our brain works; it's nothing more than an algorithm. And what you're talking about with simulation theory, that's what gives it credence: you can look around and see that math underlies everything we know in the universe. So who's to say that this isn't just a mathematical representation of some simulation, right? No, I do, 100%, and that's a great pursuit academically. However, one of the things we have to say now that we're getting into this territory is that you're talking about the natural, and that's where things get a little bit faith-based. In the natural, of course. But if there's a natural, is it not conceivable that there's a supernatural? Now we've gotten close to that. But I'm going to argue right back that anything that actually happens to us in the known universe cannot, definitionally, be supernatural. It is natural if it's occurring in this universe that we know.
SPEAKER_01That's causation, not source, though, right? The source of it can be supernatural and cause a change in the natural, just like we know now that things we don't perceive can affect things we do perceive. And also, think about the way science has evolved with the multiverse: just saying that the multiverse is there, that when you get past our universe, our universe could sit in a multiverse where the laws of physics are different in a different universe. So if we coexist, do we not affect each other? And if we do affect each other, how could you possibly calculate or model the effect, if the only thing we can do in the observable universe is apply the laws of physics, changing as they may be, to another universe, and to what happens between two universes in the multiverse? But, you know, we're way out there. I mean, I have a very active spiritual life; I'm a Christian, and I really have a lot of respect for science. The Big Bang theory, I totally love the science behind it. Everything started from a singularity; where'd that come from? You can ask stupid little questions, right? It's the prime mover argument, I totally get it, absolutely. Even Stephen Hawking was one of the folks that believed in the concept of panspermia, which is that maybe life didn't just spontaneously happen here, but was seeded as part of the Big Bang. Okay, well, where did that come from? And I think Einstein was the one who said he knew there was something, but he didn't know if it was supernatural or just superintelligent, as far as a deity is concerned. And superintelligent, to him, was a lot more possible, if you consider just things bound in the natural.
And I agree with you that the argument has always been: how can there be a supernatural if you can never prove or disprove it, right? So it's just basically a myth. But we seem to believe, without a problem, taking spirituality out of it, in the multiverse, which is something that would logically have cause and effect. Things that happen in another universe affect our universe somehow, and yet we can't really describe those, because the only language we have for cosmology is physics, and the rules of physics are changing as our understanding increases. But we already say that in other universes those laws might not hold. Anyway, we got way off, but that's the whole consciousness thing. Can I just say that we're still trying to generate good Python? And Python code generation is probabilistic. In your example, the algorithm that's going to come out has to be mathematical; it's an algorithm, right? It's gonna be some mathematical model that says: here's the complexity of what's modeled, the simulation theory, modeling a complex software system, and here are some mathematical things we're missing from the validation. By the way, once you have validation, you can have generation, because once you figure out how to validate something, you can build that into generation and say: don't build things that don't validate. So I think it's just fascinating where everything's going. But how does that actually apply to our developers? Again, this is where it doesn't help that every news article tries to sum this up in a five- or ten-second spot, and they happen to have a picture from I, Robot on there. We're just not there yet. It's a tool, it's amazing, but how do we actually apply it, and apply it inside a system with enough controls so that the gains aren't overtaken by the time lost correcting things you didn't think out well?
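The validate-then-generate idea Vin describes can be sketched as a tiny loop. This is a hypothetical illustration, not any product's actual pipeline: the "generator" is a mock standing in for an AI code generator, and the validator here just checks that a candidate parses. The point is that once a validator exists, it becomes a filter inside generation itself.

```python
import ast

def validate(code: str) -> bool:
    """Cheapest possible gate: reject candidates that don't even parse."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def generate_until_valid(generate, max_attempts=5):
    """Wrap any code generator so invalid output is never emitted.

    `generate` is a stand-in for an AI code generator; in practice you
    would stack richer validators (type checks, tests) behind this one.
    """
    for _ in range(max_attempts):
        candidate = generate()
        if validate(candidate):
            return candidate
    raise RuntimeError("no valid candidate produced")

# Mock generator: first candidate is broken, second one parses.
candidates = iter(["def f(:\n  pass", "def f():\n    return 42"])
print(generate_until_valid(lambda: next(candidates)))
```

The loop silently discards the syntactically broken first draft and returns only the candidate that survives validation, which is exactly "don't build things that don't validate."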
And that's what we were talking about with the Bain study. So anyway, all of this stuff is related, but it always comes down to that. Now, from a business perspective, this is where it's super enjoyable to just follow this and let your mind wander with some of the great thinkers in this space. But at the end of the day, you have to decide: how do we actually provide business value here? And that starts with asking: what is the expected outcome, can I measure it today, can I measure it tomorrow, and how do I expect it to change? And that we can do. We can measure. I don't want to go back to KLOC, right, thousands of lines of code, or anything like that, but we have good measurements. We can measure how things are going, and we can learn from our mistakes by saying, hey, between the measurement here and the measurement there, either we lost ground or the gain wasn't appreciable. I think the word Bain used was negligible, right? Negligible gains. So is it negligible? Why is it negligible? Can we do something better to make it more pronounced? And that's with everything. That's with using Copilot to generate PowerPoints, with using Copilot to help someone do a spreadsheet. I've seen people figure out how to ask AI to do something, and they come up with an array formula in Excel, which is something Excel has had for a long time, but no one really knows how to use it. So all of a sudden it's like, okay, I asked AI and it gave me this. I pasted it in, I did all my math to make sure it was right, I played with the numbers to make sure they came out right, and this is really cool: all I have to do is update this and this, and the whole spreadsheet changes. And it's like, great, you just learned how to do array formulas in Excel.
Good for you, right? So now we can measure something. Now we can measure the outcome and say, hey, AI just saved you an hour off a task you do once a week. That, we can agree on.
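That "hour a week" claim is exactly the kind of thing you can pin down with before-and-after numbers. A minimal sketch of the measurement habit, with every figure assumed purely for illustration:

```python
# All numbers are assumed for illustration, not data from the episode:
# time the recurring task before AI, time it after, annualize the delta.
task_minutes_before = 75   # weekly spreadsheet done by hand (assumed)
task_minutes_after = 15    # same task with the AI-suggested array formula (assumed)
runs_per_year = 52         # the task happens once a week

hours_saved_per_year = (task_minutes_before - task_minutes_after) * runs_per_year / 60
print(f"{hours_saved_per_year:.0f} hours saved per year")  # → 52 hours saved per year
```

The arithmetic is trivial, which is the point: if you capture the before number when you roll a tool out, the ROI conversation later is a subtraction, not a debate.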
SPEAKER_00I want to bring up another stat for you, regarding maturity and adoption. Microsoft and Google say 25% of their code is AI generated. Anthropic predicts 90% of all code will be written by AI within six months. Stack Overflow says 60% of developers surveyed use AI code generation tools weekly. So I want to parse out the reality of where we're at in terms of maturity and adoption versus the optimistic thinking of some of these futurists. A company like Anthropic, of course, is going to take the most optimistic position when it comes to adoption of something like this. But I find it very hard to believe that 90% of code is going to be written by AI within six months. What are your thoughts?
SPEAKER_01Well, I think Anthropic's claim is just complete fiction, honestly. The reason is that in order to have 80 or 90% of code written by AI, you would need a much larger number of developers, who are responsible for 100% of the code, using AI to write 90% of their work on average, right? And that's just not going to happen in six months. No way. I would put more stock in Microsoft and some of the larger teams saying that 20, 30, 40% of their code, whatever stat you look at, is written by AI. I would have no problem believing that. I can actually have AI write 100% of my code, right? But there's a paired stat you have to have: how much time did that save? So if you say your developers are 20% more efficient, I want to know what part of their day is 20% more efficient and how you're measuring that.
SPEAKER_00A developer's whole day isn't development, right? It's not all in code. It's dealing with business stakeholders, it's meetings. Despite what the movies will tell you, they're not just sitting there with their headphones on in a dark room writing code all day.
SPEAKER_01So I think it's part of the hype. It's also, like you said, that Anthropic, Google, Microsoft, and OpenAI all have a vested interest in this stuff. I don't think they're lying, and I know they're not fabricating data. But if you really look at it, that's why Bain and some other folks have asked: what's actually the experience here? How are we measuring it? And that's why I feel it's so difficult, because businesses are working on FOMO; they don't want to miss out, and they're working on fear of doing the wrong thing security-wise. But they're not really looking at something you can measure, and then asking: what is our expected outcome, and what do we believe we can achieve? So if anybody comes up with a stat and says, look, developers spend X amount of their time on this, and I want 50% of our code written by AI, if that's the goal, then I would venture to say you also have to say: I don't expect my developers to be 30% more effective, because writing 50% of my code with AI is gonna take more work, right? The main question is: what stat are you looking for? The people looking for "more efficient" because they want to reduce headcount or get more done, those are the ones who are a little ahead of their time, frankly, because the quality of the output is going down. So we have to figure that out. People are finding that to keep the quality of their output the same or better, they have to spend time with AI. And at the beginning, it's a lot of time with AI, a lot of mistakes, that kind of thing. I mean, people could argue this is a religion, right?
So people could argue with me about that, but I guarantee you I could have AI write 50, 60, 70% of my code, and it won't be as efficient, because I'm gonna have to do a lot of work to make sure that code is right. That's exactly like saying I want 70% of my code written by outsourcers. Is that gonna cost you less money in the long run, or more? On the other hand, if you're doing that with a team you built offshore, and you're saying I'm gonna write 70% of my code with them, it's gonna be terribly inefficient for the first year. But after the first year, they're trained up, we have a good working relationship, we've weeded out the bad developers, right? That's good. So if Anthropic is saying 90% of code should be written by AI by the end of this year, why? Well, because your developers are gonna spend 35% more time getting it right. But two years from now, five years from now, AI is going to be autonomously writing X amount of code.
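Vin's point, that pushing the AI share of code higher can cost more total time while review overhead stays high, can be sketched as a toy model. Every number below is an assumption chosen to illustrate the shape of the trade-off, not data from the episode or any study:

```python
def total_time(ai_share, gen_speedup=3.0, review_overhead=0.8):
    """Relative time to deliver one unit of work.

    Assumptions (illustrative only): hand-writing a unit costs 1.0;
    AI drafts it gen_speedup times faster, but each AI-written unit
    adds review_overhead of human checking and correction.
    """
    human_part = (1 - ai_share) * 1.0
    ai_part = ai_share * (1 / gen_speedup + review_overhead)
    return human_part + ai_part

for share in (0.0, 0.5, 0.9):
    print(f"AI share {share:.0%}: relative time {total_time(share):.2f}")
```

With these assumed numbers, total time rises as the AI share grows (1.00, then 1.07, then 1.12): the generation speedup is swamped by review cost. Drop `review_overhead` below about 0.67 and the curve flips, which is the "after the first year" payoff in the offshore analogy.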
SPEAKER_00And I don't think it's 90%. But I think that's where they're coming from: there's a presumption that this is going to compound. AI writing code is going to allow more code to be written, and that's potentially how you get there. But no one interprets it that way.
SPEAKER_01You know that. Everybody that reads that article interprets it as: well, 90% is being written by AI, so I need to hire developers for the other 10%.
unknownYeah.
SPEAKER_01It's like, no, no, 90% of the code could be written that way. And it's not "will be," it's "should be." I'll go with that: 90% of code should be written by AI by the end of this year, as long as you realize your whole process is gonna have to be reoriented, and it's not gonna be as pretty as you think it is. But give it time.
SPEAKER_00That's a perfect segue to how I want to bring this home: practical advice for the technology leader listening to this conversation. How can you make sure that the development team, DevOps, and DevSecOps are in a position to start leveraging AI in the development lifecycle now? What is the groundwork, and what are the realistic expectations they should have about the pains, the trade-offs, and the benefits that are going to come from that?
SPEAKER_01Yeah, that's a lot to summarize. But a few of the tenets that have worked for us, and for me at other companies I consult with on this: standardizing on tools is very difficult nowadays because things are changing so fast, but I would make it a team exercise to adopt tools, to evaluate new tools, and to introduce new tools. A lot of teams have a problem where that's an individual pursuit; everybody's using different things. I think it's really important to make it a team effort, because when you pick a tool for a specific use case, even if someone says, well, I was using ChatGPT directly and I got better results than using Cursor with Gemini or whatever, it's gonna be a lot better if you have a shared experience and you can evaluate together and improve together. The second thing I think is hugely important: you've got to think through your process with a heavy emphasis on QC, and a heavy emphasis on understanding use cases and which ones apply to which modes of using AI. That's huge, especially now with all the spec-driven development stuff that's come out. So make it a team effort to choose tools, standardize on them, and manage the change, because you can't expect to pick a tool today and have it still be the best tool even six months to a year from now. You have to go in with your eyes open. You also have to think through your processes very well for quality control: have your good developers reviewing code that AI generated at the hands of younger or weaker developers, have AI look at code developed by humans, look at projects, that kind of thing. Figure out your use cases and really utilize your resources.
AI is a resource, and so are your developers. I also think that as a technology leader, you have to be willing to invest to get the payoff. For instance, if you decide that writing all of our web apps as old-style PHP round-trip pages, with our own custom-written JavaScript and CSS built from scratch, isn't viable anymore, then we have to adopt a front-end framework, adopt these new tools, adopt a new CSS framework for the look and layout, and so on. You have to invest in that. You can't say it's gonna make our code so much easier to read, so much easier to modify, so much more reliable, and make us so much more efficient, without also saying, hey, for the first few projects, and maybe for a period of time as we adopt this across the team and retool the team, it's gonna be an internal investment. You've got to think of AI that way, and that's so tough to do, because technology leaders will get it, but business leaders might not. Business leaders might not get that, okay, we're gonna adopt AI, but it's gonna make us less efficient until we get better at it, and we might make some mistakes and have to redo things. So that investment in retooling is something you have to break to people. You have to say: this could be a game changer, but not unless we retool. Just like a factory: it could be a game changer to have CNC machines where you had people doing it all by hand, but it's a huge investment, a huge change in process, and we're gonna get some things wrong. You have to bank that. Even combining two companies, to use another business example: there are gonna be great synergies, it's gonna be a huge game changer.
But there's gonna be a charge against earnings for all the work to integrate those two companies. So looking at it from a business perspective, as a technology leader in a business role, you cannot overlook the fact that this is going to have an investment phase.
SPEAKER_00That's a great place to end. Vin DiPippo, thank you so much for your time and expertise.
SPEAKER_01This is great, man. Boy, we covered some ground. We even made it to the multiverse.
SPEAKER_00Okay, we're gonna do a part two, and it's just gonna be stoner talk.
SPEAKER_01Great. Thank you very much.