Develop Yourself

#288 - AI Hype is Ruining Software (A Rant)

Brian Jenney

"We don't really need developers anymore"

"In 2 years, the tools will be so good that no one will be writing code"

"Learn a trade bro"

I've heard all of these phrases in the last week, and it's honestly getting to me. Too often, the people with the loudest microphones have the least experience with the tools we're using.

Besides being a talking head on a podcast, I'm also a father and full-time software developer. I made this episode not only to vent about why AI hype is so dangerous, but also to explain why the future isn't as bleak as the mainstream media would lead us to believe.


Shameless Plugs

Free 5-day email course to go from HTML to AI

Got a question you want answered on the pod? Drop it here

Apply for 1 of 12 spots at Parsity - Learn to build complex software, work with LLMs and launch your career.

AI Bootcamp (NEW) - for software developers who want to be the expert on their team when it comes to integrating AI into web applications.

SPEAKER_00:

Welcome to the Develop Yourself podcast, where we teach you everything you need to land your first job as a software developer by learning to develop yourself, your skills, your network, and more. I'm Brian, your host. So I just want to go on a little bit of a rant here. This is a totally unscripted rant about the state of software and AI. I am honestly getting sick of defending the job of a software developer. I was recently at a small conference, in a room full of people, listening to the people on stage talk about AI. These people are non-technical; they've never coded in their life. And they said something that I'm sure they didn't even think about, but it irked me. They said, well, we really don't need software developers anymore. And I spoke to a young woman afterward, and I could see she was feeling a little bit of that anxiety around AI and what it means for the future of work, because what she's being sold, what a lot of younger people are being sold, and older people too, what we're all being sold right now, is this fantasy that AI is going to take your job.

Now, I can look at statistics, we can look at jobs reports, we can look at different pieces of data that could support either one of these stories. I could turn to trueup.io's job trends page and say, hey, look, the number of software engineers has actually risen throughout the year, and we're basically getting back to pre-pandemic levels of employment. Somebody else could say, well, look at the unemployment rate of CS grads. And then somebody could say, well, look, senior developer jobs have skyrocketed, or AI engineering jobs are skyrocketing, or machine learning engineering jobs are skyrocketing. And then another person could say, oh, but look at the junior jobs, they're all disappearing. And it's like, let's just take a step back for a second here. 
Let's look at this premise, this statement: we don't need software developers, or we don't need as many software developers. Are we all using the same tools here? Are you all using ChatGPT and Claude and Codex and these agents? Are you all using the same thing that I'm using? Because let me tell you, these tools, as cool as they are, do something that I think none of us have really talked about: they're potentially doing way more damage than any productivity gains you might be getting from them. I have yet to see a study, and if you have one, please point me to it. I'd love to see OpenAI or one of these platforms come out with a study that says, this is the percentage by which AI can make you more productive; here's how much you've improved from using AI. Because if I read one more article or one more LinkedIn post from some dude, and it's always a dude, who says, here's how I 3x'd my workflow, here's how I 10x'd my workflow... You didn't 10x, brother. There's zero way you've 10x'd unless you are working on something completely greenfield. Greenfield meaning it's brand new. Yeah, for a side project, I can blow through the initial part of it super duper quick. I can get that done in minutes, maybe an hour. What used to take me a couple of days can honestly take me a couple of hours.

Now, here is where you hit the big old plateau. You're coding along and you want to create a new feature. You want to add something into an app that already has features and functionality, something working that actually provides some sort of value. Now you want to add a feature, and you tell Claude or Cursor, here's the feature I want, and you give it specs and you give it the MD files, and you do all this stuff, and you prompt it, and you say just the right incantation of sentences for it to go out and do the thing. And it doesn't do the thing. Or it kind of does the thing. 
This is where it really lands, right? This is what really happens. It kind of does the thing, but the code's not quite right. It's either not quite working, or it didn't use the library in quite the way you wanted it to. It made more files than you wanted it to. So now you're going back and reviewing the mass of files it created. Maybe it just created three or four files. Maybe it made a bunch of different changes across different files. Now you're going back and reviewing all of those manually. You're changing things here, you're changing things there. You're basically reshaping it into what you would have written yourself, or you're at least making it work. And if you're not just making it work, you're making it less verbose, because these LLMs, these little code agents, have a tendency to write way more code than you want. And then you're going back and pruning it down, and at some point you're thinking, would this have just been faster if I'd done it myself? And also, why am I offloading the most fun part of my job to this dumb assistant? Because now my job has gone from being a code creator, wrestling with problems until I finally figure them out and feel really good, to saying, hey, you little code agent, you go do the job for me, and then I'll review your code. I don't want to review your code and be a project manager to an AI robot, especially when that little AI robot is not quite as good as what I would have produced if left to my own devices. You know what was really good? You know what really boosted my productivity, like a 1.2x? GitHub Copilot. The little autocomplete: completing my functions, writing for loops I didn't want to write, figuring out how to write a function definition, things like that. I could write out some comments and it would just kind of go and finish the function for me. That's what I miss. 
I don't necessarily want a team of agents going ham in a code base and then me having to clean it up. Let me tell you a little anecdote here. I left a company recently where we were working at breakneck speed. And at some point we all kind of knew; we talked about this among the developers. We said, you know, the AI tools were cool at first, but are you still using them the way you used to? And the overwhelming response was no, I can't keep using them at this speed, because we're just producing slop. We're producing junk that is brittle, that will not scale, quote unquote. I don't mean scale in terms of the number of users it can support; I mean scale in terms of how many other developers can work in this code base. AI slop leads to more AI slop. Here's an underrated rule, or law, of code: more code always equals more code. Code is a liability. One of the best things you can do as a developer is actually to remove lines of code, not add more and more and more lines. And what do AI tools like to do? Add more and more and more lines of code.

Now, back to my original beef with that person's statement that we don't really need software developers anymore. This tells me a few things. The idea of what we do as developers has been so trivialized and commoditized that people think you don't really need developers for anything. It's also blissful ignorance. There's a phenomenon called the Dunning-Kruger effect, and I know it's often misquoted or misunderstood, but this is what people mean when they say it: when you don't know a subject well enough, your own ignorance blocks you from seeing reality, so you think you know a lot about the subject precisely because you know so little about it. This is the problem with coding, right? 
For years, now nearly a decade, we've been telling people anybody can code. Anyone, a horse, a man who has never even touched a computer, he can code. He can work at Google and make hundreds of thousands of dollars a year. Anybody, anybody can code. No, not anybody can code. Anybody can code in the same way that anybody can get a ripped six-pack of abs. Is it possible? Absolutely. Is it physically possible? 100%. Is it likely? Depends on where you're starting from. For the kid who grew up using computers and writing code and HTML to mess with his MySpace page, or the nerd with a software engineer father who grew up writing CSS or learning C in his spare time, of course they're going to reach that goal a lot faster than you or I, who did not have those advantages. So yes, is it possible? Yes. Is it likely? Absolutely not.

So that is the first issue, I think. We've really dumbed down and trivialized what we do as developers, and people think it's simply the act of writing code and getting that code onto the web. That's it. And if you're a software developer, you know that's really just a small part of what you do. I mean, it's a pretty large part, but when I really think about it, what do we do? Do we build new products every day? Do we build new features and ship and release them every day? Maybe at a really small startup. But what do larger teams of software developers tend to do? They maintain code, they write tests for the code, they think up new features with product teams to understand what customers might want. They fix legacy issues, they update libraries, they experiment, they explore, they communicate, they talk, they decide how they're going to keep building on and adding to this never-ending project. 
That's why it's not unusual to go into a code base that has tens of thousands of files, millions of lines of code, and has been in existence for maybe a decade, unless you're working somewhere fairly new. And even if you are building really new things, they often have to fit into old systems. And the code part is one piece. There's a whole other piece that people rarely ever talk about: infrastructure. What about security? What about cloud deployments? What about the database layer? These aren't afterthoughts. These aren't things you can just say, oh, AI, just do that. What is my SQL schema going to be? Should I use Mongo? Should I use SQL? Should I use Elasticsearch? Should I use a vector database? You have to have opinions on these things. It's not enough to say, AI, you figure it all out. Where do you deploy it? How do you host it? How do you charge for it? How do you observe spikes in usage, the latency of requests, all these things that don't even seem to be on the radar for most people? And yet they think you don't need developers to handle any of this because an AI tool can just do it.

And I think the worst thing is that online we're being constantly bombarded by new tools. People say, hey, this one AI tool isn't quite good enough, right? Cursor's not quite what you want, or Claude is not quite what you want. So how about this: we have a new tool, and you'll add this onto Claude or this onto Cursor, and this will make it really good. Or you have the tech bros online saying, oh no, no, no, the way you're prompting is wrong, you have to prompt like this. And then you have people saying, oh no, no, you need to use this other formal system to design and document everything before you give it to a large language model. Who the hell knows, right? No one knows. Here is something I've been noodling on lately when it comes to using AI and how all of us are feeling right now: we're all early adopters of AI. 
If you're not using AI, I think you're way off base. I think you should be using AI once you can code. If you can't code, you should not be using AI at all, in my opinion. Not at all. You should not be using it to construct code, because you're offloading the most important and most fun part of learning how to be a software developer. Now, here's the thing, though. If you are a software developer, you should be using AI. But there's an early adopter penalty. There used to be an early adopter advantage, and I still think that's true here, but there's also an early adopter penalty. As we're all learning how to use these tools, adding more tools and more workflows, and debating online, which I think is kind of fun and interesting, we all also have to be aware that nobody freaking knows yet. I've worked at a couple of small companies that worked with some really large vendors out there. I'm not going to name the database providers or the people we've spoken to at some of these companies, but they were the big ones. And we asked them things like: what do you use for agents? Should we use fine-tuning versus RAG? How are you using RAG? What do you think about this re-ranking strategy? And here's the thing: most of them didn't have a hard and fast answer. If you ask Oracle for SQL database schema advice, or somebody about MongoDB schemas or performance, they're going to know exactly what to do. They're going to look at your code base, look at your schemas, look at how to optimize your queries, and they're going to have some very, very strong opinions. There are many full books written on this subject, on how to orchestrate and create the data layer. There's not much when it comes to AI tooling, agents, whether they're good or not, token costs, observability, vector databases, all these things. 
This is totally greenfield, brand new territory. We're all just learning this together. And there's even less out there when it comes to how you write code with AI. We have decades of material and opinions and thought on object-oriented programming, functional programming, different styles and patterns to use, like the Gang of Four, how to construct enterprise architecture and write software in JavaScript or C or Python or Rust or whatever. But we don't really have that for large language models yet. And everyone seems to have their own little twist. And the worst part is, and I don't mean to be Mr. Doom and Gloom here, but this stuff is just eating me up: the worst part about all this is that it's a black box. When you send your code to OpenAI, or Grok if you're crazy, or Anthropic, you're sending it over the wire to some API in the cloud owned by one of these LLM providers. And you're saying, okay, now you're going to figure out the code for me. You have zero clue if that model is the same model that was used yesterday or the day before, or if it's down right now, or if the service is degraded, or if some new training data has come in that you don't know about. These are completely black boxes. You have zero insight into them. This is not open source software, which actually makes up the majority of the libraries we use. Now we're all using something that is completely black-boxed. So when it goes down, you may not know. I did an episode on this fairly recently, about Claude going down, or getting dumber, and it wasn't just all in our heads. People were writing about this on Reddit, and I was feeling it. I was like, is anybody else noticing that Claude just feels worse? And then we found out, oh yeah, it actually did get worse. 
But the only reason we knew about it was because Anthropic decided to actually release that data. What if they hadn't? We would have never known. We would have just moved on to some other model and had it happen again. So I don't have any great epiphanies or takeaways or some nice way to wrap this all up, to be completely honest. This is the reality of where we are in the field of software development. But I will say this, actually. If you are a person learning to code right now, you're not wasting your time. You should be doing what you like doing. If you're a person looking for a get-rich-quick scheme and thinking that coding is going to be it, or cybersecurity, or machine learning, or whatever, I do think you're in the wrong industry. I do think we have some more time before the bottom of this hype cycle falls out. I do think we are in a bubble, but I'm not informed enough on economic issues to speak intelligently about that, so I'm just going to keep my mouth shut when it comes to the financial impact. But it certainly does seem like something is off, right? We need to learn how to trust ourselves as we use these models. I always encourage people to do this, because I read so much junk online from people who I don't really think code for a living. And they have really strong opinions about what we do, the models we use, and what coding is as a profession, and they're telling you, the person who may be early on in their journey, what you should be doing. And I mean, I'm talking about CEOs, I'm talking about heads of engineering organizations, I'm talking about people with really fancy titles who are telling you things that either simply aren't true or are wildly misleading. 
And honestly, that kind of pisses me off and makes me a little nervous, because I have kids myself, and I don't like all this doom and gloom stuff. For one, if it were true, we wouldn't see OpenAI hiring tons of software engineers. We wouldn't see Anthropic hiring tons of software engineers and paying them astronomical salaries. And I don't just mean they're hiring ML people. Just go on their websites and look at who they're hiring: front-end, back-end, full-stack, AI engineers. They're hiring for roles that people were hiring for back in 2013. It's not so different nowadays that, oh my God, the landscape has so dramatically shifted that the skills you knew are completely obsolete. Do you think a massive corporation is just going to replace their team with AI agents overnight? And if the people making these AI tools are not replacing large swaths of their own workforce with those tools, what does that tell you? Does the dude on Twitter or LinkedIn who's 10x'd his workflow know something these companies don't? Maybe he does. Maybe he truly does, and he's the smartest person in the freaking room. Or maybe, just maybe, these people are lying. I don't know. You make up your own mind. Thank you for coming to my rant. I promise I won't do too many more of these in the future. And for all the negative junk I had to say, I'm a huge fan of a lot of the AI tools out there. I use Cursor, I use Claude. I always feel like I have to say that to kind of defend myself so people don't think I'm some anti-AI guy. I'm just getting sick of the people who are turning off an entire generation of programmers at a time when I think we actually need more and better programmers. 
I think this is a Renaissance period right now, where we get rid of some of the bootcamp-ization that has happened over the last 10 years in software, where we regain our sense of self, learn how to actually articulate the value we create, and stop cheapening the profession so much that we say any idiot can do it and you can just get rich in three months by doing this thing. I think that has done a massive disservice to the industry. I think it's ushered in a lot of people who didn't have good intentions, and I think we've set a lot of people up with really bad expectations: one, that this is going to be really easy; two, that they're going to make a lot of money without a lot of effort; and three, it's made the media kind of not like us. I mean, can you blame them? For years, we were on TikTok and Instagram and other social media platforms saying, here's my day in the life. I made a button. I work at Meta and I got $300,000 for moving this button, and now I'm going home at 2 p.m. I got to work at 11 a.m. and I'm going home at 2 p.m. See ya, screw ya, I'm out.

Anyway, you probably didn't find that episode helpful, but I hope you found it maybe entertaining. Maybe you feel a different way. Maybe you feel really strongly about AI. I'd love to hear your takes either way. If you write something on LinkedIn, please tag me. Even if it's mean, I don't care anymore. I want to hear what you think about this kind of thing. Anyway, don't freak out. Learn to code if you want to learn how to code. If you don't want to learn how to code, then don't. But don't freak out and hide under a rock, because that won't do you any good either. See you around. That'll do it for today's episode of the Develop Yourself podcast. If you're serious about switching careers, becoming a software developer, and building complex software, and you want to work directly with me and my team, go to parsity.io. And if you want more information, feel free to schedule a chat by clicking the link in the show notes. See you next week.
