Develop Yourself

#270 - Coding in the Age of AI: What No One’s Telling You

Brian Jenney

AI is changing coding faster than anyone expected. 

Two years ago, autocomplete felt wild—now people with zero dev experience are shipping apps over a weekend. 

The question isn’t “is AI replacing developers?” It’s “how do you actually use it without wrecking your codebase?” 

In this episode, I’ll break down when to use AI (and when not to), why Retrieval-Augmented Generation (RAG) is the real game-changer, and how you can stay valuable as a new developer in a world full of hype.

Send us a text

Shameless Plugs

🧑‍💻 Join Parsity - Become a full stack AI developer in 6-9 months.

✉️ Got a question you want answered on the pod? Drop it here

Zubin's LinkedIn (ex-lawyer, former Googler, Brian-look-a-like)

Speaker 1:

Welcome to the Develop Yourself podcast, where we teach you everything you need to land your first job as a software developer by learning to develop yourself, your skills, your network and more. I'm Brian, your host. Software development is undergoing a fundamental change. Just two years ago, we were all shocked that autocomplete tools like GitHub, copilot, were finishing functions for us, and now we have dudes like Chet, who works in marketing, vibe coding a SaaS app over the weekend, and, to be completely honest with you, it looks really good. Unfortunately, though, too many software developers, especially newer ones, don't fully realize that the game has changed, and I know a lot of you that follow mainstream media. You probably think it's already over for software developers, right? Okay, let's say it is. You probably think it's already over for software developers, right? Okay, let's say it is. Now what? What do you do? You could hide under a rock, maybe, enter into a trade that's less likely to be automated, perhaps, but what happens when they automate that? Or you could start upskilling yourself, learning a lot more about how things are changing, how to use AI, and then be on the ground floor of this monumental shift. So I wanna walk you through some things that I've learned as a software developer working now at two AI startups, and how things are changing and how I'm trying to navigate this new territory where there really aren't any rules right now. Now, despite AI being a really good tool and basically it being forced upon us at many companies right now, you still have to have some intuition about when to use AI versus when not to use AI, because just because you can doesn't mean you should.

Speaker 1:

Ai can absolutely write your entire web app data pipelines, your backend. It can probably do it really really fast too. None of it will work, but it'll look really, really good. The most concerning thing that I'm seeing with AI code generation tools is that less experienced developers they can't really tell the difference between code that works and code that lasts. Ai agents tend to write really brittle code, meaning it doesn't really maintain itself or it breaks when it comes across different edge cases. It lacks context, like business specific logic and, let's be honest, it just writes too much code and you could say, oh, use rules or write better prompts, and, yes, all these things are true. You can totally write better and better prompts, but these tools are obviously incentivized to write lots of code. You're paying for them. They understand you're a customer. Anybody that uses these tools sees that they write more code than you would possibly need.

Speaker 1:

Remember this fundamental concept Code is a liability. More code equals more surface area for bugs. Also, more code has only ever meant more code, because the more code you write, the more code you have to write to maintain it. That's why often, the most important thing that a software developer can do is delete lines of code. And if you're not a software developer, this may come across as a shock to you. You're like wait, what If you're in a million line long code base, which is not uncommon at all. Some software has been running for 20 plus years and you're maintaining it. It's called legacy software, which is what most of the internet and large companies that you're familiar with run on. So when you're looking at that legacy code that was written by some guy 15 years ago, or even a person five years ago, you're reading over the code, and the more you can take away to make it more streamlined, more efficient, less surface area for bugs, the better. Deleting lines of code is actually much better than adding lots of lines of code, because guess who? You write code for Other humans and, as other humans, we have to read and digest this code, and if you say we'll just have AI do that, you've already lost the plot. Just turn this off and go do something else.

Speaker 1:

I ran into this issue at work where we were writing code that was actually doing pretty well all AI-generated code but at some point it became like a house of cards. The more we stacked on top of it, the more brittle it got and the more it was likely to fall over, until at one point we basically just coded ourselves into a corner and we're like okay, we've been moving really, really fast building this stuff. Now we have a bunch of messy code that's not really usable at this point, and that's fine, because we're past this prototyping stage, which leads me to here are my hard and fast rules for now about when to use AI versus when not to use AI. These are my rules. I'm still figuring out the best ways to do this myself. So this is my opinionated list and my opinionated opinions on when you should versus when you should not use AI. So number one when I have tedious work that I could easily do, but I just don't wanna do like updating types or something in a TypeScript project or breaking up a big component and react into multiple components great for that kind of stuff. Ui updates really, really good for that kind of stuff. You can drag and drop something from Figma or an image and say, hey, do your best to make this saves you a ton of time.

Speaker 1:

Prototyping is where AI really, really shines. If I'm curious what a really big refactor might look like or we have to do a proof of concept just to test something out, I'm not really going to code anything at all. I'm going to have the AI agent do all that for me. I use Claude, I use Cursor Whatever I feel comfortable with that I want to use for that project. I'm just going to build it really quickly. I'm not going to care much about the quality and I can get out in literal minutes right now. Now you may be thinking but wait, haven't you said that AI won't replace developers? If it's doing your work in minutes, that took days. Well, isn't that taking over developer jobs? Prototyping my friend keyword here prototyping If you're doing prototypes nonstop, then I don't really know what kind of work you're doing. Maybe you're working at a really early stage startup and again, this is where AI agents can totally be a game changer. But once you move past the prototyping stage, which we'll talk about shortly. This is when you need to turn it off.

Speaker 1:

Another really good use case is bug discovery. So I use CodeRabbit for code reviews. I often use Cursor to tell me what's happening in code that I'm not super familiar with. I'll ask it to poke holes in my logic. I'll ask it if I missed an edge case. I will use it as a thought partner and be like hey, how would you do this database schema? Or does this make sense? The way I'm phrasing this, I'm trying to tell something to a CEO and making sure I understand my own jargon. Or does this make sense? This code change I've done here? What are some issues with this? How could I make this cleaner? Can I shorten this code in some kind of way? Those are really good use cases too. At least it gives you some validation at the very least. And sometimes it can actually find significant issues and tell you oh, you missed an edge case. You're like, oh, great, glad you found that. And the last use case is CodeWhisperer.

Speaker 1:

I work at a really small company and sometimes I'm looking over the work of our data scientists, who write mostly in Python, and I use AI to verify my understanding, especially of mathematical concepts because a lot of what they do is using mathematical libraries like NumPy and making sure that I understand what's going on in the code before I just give it my stamp of approval, because these things are beyond my simple web developer brain right Now. Here's when I tell my little code monkey it's time for you to pause, right Time for me to take back control and leave you alone for a little bit. Anything mission critical this is number one If the code breaking means that a severe incident might occur or there's a potential to lose money or something that's generally just off limits. A good example might be a data pipeline. If I'm doing some sort of really complex data merging or adding lots of data into tables and SQL or some sort of database, I wanna make sure that's done right. The room for error is much lower. Ui updates you can see easily on a screen if the UI is broken. If data is incorrectly inserted into a database at scale I'm talking millions of rows then you could have a serious issue on your hands. If you're calling some API that costs money and you don't catch some sort of weird condition where you might be hitting it a million times in a row or something dumb like that, then running that on your computer or running that in the cloud could have severe consequences. This is when you just can't trust AI to write the code. You could verify it, you could even have it help you write some of it, but you really need to take high control of your code base, I think, when you're doing things that are mission critical. Again, my opinion.

Speaker 1:

This second case is one that I'm sure a lot of people are going to disagree with me on, but this is my experience Test-driven development. I really thought AI would be way better at this, but I was shocked to see how bad it was and even some sinister stuff that I saw taking place. Maybe I'm being a little hyperbolic, maybe I'm being a little dramatic here, but what I saw was really not cool at all. So the jury's still out here, but here's the thing I saw. Tdd, test-driven development, basically means hey, write tests for this feature, this functionality we're going to implement. The reason why we're going to do that is so we can think through all the scenarios that we want to guard for or write our code for. So we're going to say it should do the following things. This code should check the database for duplicate entries before inserting, or something like that, like really mission critical stuff. So I'm writing tests before I even start writing code.

Speaker 1:

I had AI do this. Here's the thing it just cheated, it just hard-coded values and it made functions and mocked those functions. It made clones of those functions that would return the values that would make the tests pass. Can you imagine if a human did that, if they say I know this doesn't work, but I'm going to fake it so it looks like it works, so the tests pass. The code didn't work at all and I'm thinking who in the hell would ever do this? But I get it. It was given a task, it was given an assignment and says, hey, write these tests, they're not passing. Oh, I'll make them pass. Just a second. And it made them pass by just cheating. So not too cool at all. Pure insanity, right? So now I can't trust the code or the tests. So now I'm starting over from scratch Again, really shocked that this was not actually a slam dunk for AI, but my experience and apparently a few other very seasoned developers that I follow on YouTube.

Speaker 1:

This is not their experience either. If you go to Internet of Bugs, one of my favorite channels on YouTube that discusses software. He actually speaks about this and how bad AI is at writing tests, which really shocked me. My last hard and fast rule for when not to use AI is when I want a simple solution. I don't need a rate limiter for an API for an internal web application used by 10 people. I don't need a readme for every single component. I don't need you to check all the types in the build of an entire app when you build a single component. Why would you do that? Why on earth are you doing that? So the TLDR of this is like AI, treated like an intern. It's really good to get started with, but it's increasingly less useful the more complex things get.

Speaker 1:

I also suspect this is why a lot of more junior developers find this like oh my God, this is mind blowing. This is taking my job Because essentially, it kind of is doing a lot of things that you probably are struggling to do at this point. But remember, being junior is the shortest stage of your career as a developer, and even a junior wouldn't fake tests, would they? Because that, in my opinion, would be a fireable offense One of the few offenses that I could see you actually needing to be fired, for If you knew code wasn't working and yet you tried to merge it and get it out there to hurt people by faking tests. Could you imagine if somebody did that They'd be canned rightfully so? So now let's get into some of the new technologies that are really going to be shaping this new software world. This is again my opinion. This is my strong, strong bet.

Speaker 1:

I was at a conference a couple years ago and the head of Vercel, the company behind Nextjs he had this slide on the stage and it was really interesting. Interesting, it said the ai engineer of the future is a typescript engineer, and people rolled their eyes and twitter jumped on this and I kind of wrote it off too. I'm like, oh, that's kind of interesting, that's a hot take, right, and I might just, yeah, whatever, just some fancy talk at a conference, get people riled up. Then I joined a really small startup full of the smartest people that I've really ever worked for and they worked at all the biggest tech companies, right facebook, facebook, amazon, google, all the big players, all people from top positions in these companies, figma, and yeah, just an incredible team of people. Honestly, I was wondering what the hell I was even doing on this team. And then the entire app that we were building from the data pipelines that ingested hundreds of thousands of documents per day, the API routes, everything was written in TypeScript and I kept coming back to that slide. I'm like, oh my God, we're doing the thing that guy said. I mean that's probably why he's the head of Vercel or why he's like way up there at Vercel, because he's obviously not a dummy, right?

Speaker 1:

We were also using a type of database that I've never heard of, that I've spoken about a few times on this show and I'm sure I'll speak about a few more times in the future Vector database. We were using this to retrieve documents that we had scraped from the web and then we were feeding this to an LLM, a large language model that would use this to construct responses that we would then stream to the front end. Streaming meaning when you see, like text flowing, like in chat, gpt, your text doesn't come in one big block, it like streams to the front end. We were building something very similar and in the background we were using something called RAG retrieval augmented generation. I was a little bit shocked at how simple it was to get this set up and also how new it was to me. I'm like this is fundamentally different than anything I've built in the last 10 years. Right, I'm like this is a pretty massive shift.

Speaker 1:

It felt like when you were writing things in HTML and CSS and then React came out and it was like mind blowing. I felt the same exact way and once I saw it, I just can't unsee how useful it is. It felt like I glimpsed into the future for a little bit, because here we were with some of the smartest people I'd ever met and people that had been on the forefront of AI technology. These weren't people that were just you know, and people that had been on the forefront of AI technology. These weren't people that were just noobs to this thing. I was the noobest person in this group.

Speaker 1:

We're building this for Fortune 100 companies using Retrieval, augmented Generation, nextjs, typescript and Vercel's AI SDK library. This library, in my opinion, should get a lot more attention, because this is the library for working with large language models. It involves chat operations. It involves things like saving messages and history streaming, streaming components all the cool things that you want to do as a developer, working with AI tool calling all sorts of really cool stuff that comes out of the box and it turns out a lot of other companies are doing this. They might not be doing the cool Vercel AI, sdk stuff or doing everything in TypeScript, but they are using RAC. They're using retrieval augmented generation.

Speaker 1:

This is honestly the most practical use case for AI. It's also the most boring and it requires some technical knowledge right? You can't? Vector databases aren't sexy, right? We want headlines. Like you know. Ceo of some company says AI will replace all developers in time for the next quarterly earnings report. It's a much cooler story than you know.

Speaker 1:

Most companies find RAG as a solution that makes sense for AI and actually contributes to their profits or whatever. Boring, right? People want to hear about us getting fired. Can you blame them? And partially, that's exactly why we're going way deep into RAG retrieval, augmented generation because I see this as a core skill, a fundamental thing that's very tangible, that you can actually learn and put into practice immediately. If you don't have full stack developer skills, it's already going to be over your head. So not much has changed. This is just something to add on, rather than replace your core skills.

Speaker 1:

I'm genuinely excited about this. I've talked about it quite a lot and I really hope to see you in Parsity so I can learn this with you. It's gonna be so much fun. Here's the last really interesting thing I've noticed right, there are no rules. You are the expert, or at least you need to be the expert, because here's what happens when you have billions of dollars in investment, economic instability and tech bros who are basically celebrities right now on podcasts, you get hype that we've never seen before.

Speaker 1:

Reality has left the building. Sam Altman said AI replaces developers in six months. Nvidia CEO says it. The newspaper said it. Your mom's telling you, the dude down the street's telling you that software developers don't have a job. It doesn't matter. Reality has left the building. And can you blame people for wanting to see us fired? Let's be honest. We had those day-in-the-life TikToks, with a 24-year-old dude working in big tech making more than it costs to buy a two-story house in the Midwest from doing nothing all day. Perfect storm, right. So we have economic instability, we have high interest rates, we have a disdain in general for the laptop class and people that worked as software developers and, let's be honest, people are getting laid off. It's just all happening Now.

Speaker 1:

Is it because of AI? That part is highly doubtful and there's nothing to prove that that's the case, because the only studies that we've seen have showed the exact opposite MIT 95% of AI projects fail to launch. Cornell developers are 19% slower using AI tools. So what is exactly happening here? The AWS CEO coming out and plainly stating that AI replacing junior developers is the dumbest thing he's ever heard of. So where does reality end and the hype begin? To me, it's obvious, but if you don't work in software or you're really new, then maybe it's not, or maybe I just don't know what I'm talking about. I don't know, but your job is no longer just pumping out code. You have to be able to explain and fight the hype, because higher up people now the CEOs, people that have no business touching code now have opinions on code. They wanna see. Well, why is this taking so long? Because I just put this in chat GBT and it spit it out in two minutes. So now you're having to explain to people what you do as a developer and why it's well beyond just pumping out code, why things like security, accessibility, complicated API integrations, rate limiting, legacy code, maintainability tests, why these things matter, because they honestly don't know.

Speaker 1:

As annoying as this can be. I actually think this is a good thing, because for too long we've told people this fantasy that anyone can code. I often hear people say this. It actually gets on my nerves. People will say things like oh, you know, I kind of failed at school, I'm not doing well at my job, I think I'm just gonna learn how to code. That's like your last ditch effort. I'm like this is not the kind of thing you just kind of nonchalantly enter into. Right, like anyone can learn to code, in the same way that anyone can get a six pack of apps. Like, is it physically possible? Like, yeah, right, to some degree you can have visible apps. Is it going to be likely? Probably not.

Speaker 1:

I think it's time we finally learn how to articulate what we do, the value it creates and why. Things are never as simple as they seem. We've made this job seem way too simple for too long. We've glamorized it. We put out stupid TikTok videos, making people think that this is just the chillest job in the world, and that's just not the case. So I think it's time for us to take back the narrative and also educate people so they know a little bit about, too, how these large language models work under the hood, and if you're a developer who doesn't know how these large language models work under the hood just a little bit and understand things like embeddings and transformers, I really suggest you watch Three Blue One Brown's video series on YouTube so you can understand a little bit more about these and you can then explain that to people who may be wondering why is this button taking two days to build, when I just did it in chat GBT in 10 seconds? That's a question you're going to come up against in this day and age.

Speaker 1:

And finally, maybe, just maybe, don't fall into the doom and gloom narrative or the overly optimistic happy path. Maybe I'm that overly optimistic happy path. Don't believe me, don't believe the news, don't believe anybody. The problem with the internet nowadays is not that everybody has an opinion. It's that the most polarizing ones get the most attention. I've had bootcamp dropouts tell me that my job is at stake. I've had people on Reddit say that I don't know what I'm talking about, that I've never been an engineering manager and that I've been lying about my past and that I've been lying about my past. I've had career coaches who don't know what HTML is explaining to me that companies simply don't need junior developers anymore because AI can do whatever it is that they think that junior developers do. These people may be smarter than me, they may even be better looking, they may be more popular. That doesn't make them right. It also doesn't make me right.

Speaker 1:

I would tell you to try the tools, read the studies, understand what's going on just beneath the surface of the tools you use and then draw your own conclusions with your own original thoughts based on your lived experience, and then maybe you'll agree with me. Maybe you won't, but at least you'll have the knowledge and the confidence to have an opinion right. Good luck out there. Always hope that's helpful. Maybe I'll see you in Parsity so you can learn RAG with me and other people out there doing the cool work to actually be ready for the ground floor of this new wave of software development. See you around. That'll do it for today's episode of the Develop Yourself podcast, if you're serious about switching careers and becoming a software developer and building complex software and want to work directly with me and my team.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.