
Develop Yourself
To change careers and land your first job as a Software Engineer, you need more than just great software development skills - you need to develop yourself.
Welcome to the podcast that helps you develop your skills, your habits, your network and more, all in hopes of becoming a thriving Software Engineer.
#235: 2 Skills You Don't Need Anymore In the AI Era and 3 More You're Going to Need
Send a text and I may answer it on next episode (I cannot reply from this service 😢)
The developer landscape is shifting. Fast.
AI isn’t replacing developers—it’s replacing the kind of developer that only knows how to write boilerplate and follow tutorials.
There are some practical skills you're going to want to pick up in the AI era and a few I'm kinda sad to say are becoming less relevant by the day.
I mentioned my unit testing project in this episode - it's still worth checking out IMO.
Grab it here: https://github.com/CodeCoachJS/js_pro_unit_testing
Shameless Plugs
🧠 (NEW) Parsity's The Inner Circle Program - a highly customized roadmap to take you from 0 to hired. For career changers who want to pivot into software.
💼 Zubin's LinkedIn (ex-lawyer, former Google, Brian-look-a-like)
👂🏻Easier Said Than Done Podcast
Already a developer? Check out 👉 Not Another Course
Serious about joining Parsity? Schedule a call with me ☎️
Welcome to the Develop Yourself podcast, where we teach you everything you need to land your first job as a software developer by learning to develop yourself, your skills, your network and more. I'm Brian, your host. I wanna talk about the software developer skills that matter and the ones that don't matter as much in the age of AI. I am not in Camp Doom and Gloom, where everything is gonna be automated and Cursor and vibe coding are going to take over the profession. That's just not the way I see things going, and if you do see things going that way, I would actually think that maybe you haven't written a lot of complex software. I'm just being completely honest with you. Now, I'm sure there's somebody out there that's going to be in the comments or something, saying "I write incredibly complex software and you don't know that AI is taking over every single thing." Sure, I haven't seen it. I don't know anybody of that opinion, but hey, you do you. Anyway, there are some skills we spent years sweating over that are becoming low-value activities. Meanwhile, there are some new skills that are becoming critical, and they're the ones that aren't getting taught as much yet, especially one of them, which I'm gonna talk about in this episode. If you wanna stay relevant, or maybe even dominate in this new era, here's what I think you actually need to know. But before that, let's go over two skills which, I'm kind of sad to say, are becoming low value.
Speaker 1:And this first one really hurts me because I struggled with this. I learned how to do it and then I've taught it to dozens or maybe even hundreds of other developers out there. Writing tests from scratch. Yeah, really hurts. I had a whole course on this. I learned how to write unit tests for React and backend services, and end-to-end tests and integration tests, and I have a lot of opinions on tests. I even have a Medium article, one of my most popular, about writing truth table tests. Anyway, I love tests. I love writing them. I think it's a great thing to know how to do. But learning how to write them from scratch is just not a high-value activity anymore.
Speaker 1:This is exactly what AI was meant to be used for with coding, right? Not for, like, writing all your complex queries and code and establishing patterns. That's a terrible use for a large language model or Cursor. But for writing things like tests, things that can be easily automated and that don't really provide a lot of value beyond making sure your code actually works as you expected? This is a perfect use case for AI, and I'm using it to generate code and generate tests that used to take me an hour, or half a day, or maybe even sometimes a full day to write. I'm doing this in minutes and it will get me 90% of the way there. Oftentimes it will get me 100% of the way there. I love this. Now I can focus on writing the code and have the tests get written pretty much automatically. I can configure something like Cursor to automatically add a test when I add something to a certain file, and a lot of times the code is okay.
Speaker 1:Now, test code doesn't have to be the best. In my opinion, it should still be written well, but it doesn't need to follow the same standards, I feel like, as the rest of your code base. Maybe I'm wrong for saying that, but that's kind of the reality of most of the code bases that I've worked in, where, yeah, your tests are important. They make sure that things that are critical don't break when you update them, but at the same time, they don't need to be written in the same style and with the same rigor as, maybe, writing complex front-end or back-end or full-stack code that actually will be delivered to a customer. Your tests are there for the developers to make sure you haven't broken anything and to make sure your customers don't end up telling you about a bug before you know about it yourself.
Speaker 1:Now, all this being said, you still need to know how to write tests, because if you just take whatever AI generates and think, cool, it works, then, one, you're not really thinking enough about the point of the test. You should be identifying what to test; everything's not worth testing. You shouldn't have 100% test coverage in most cases. I don't think you should have 0% coverage either. You should be somewhere in between. If everything is critical in your application, then 100% of things need to be tested. If you have one really complex flow, like maybe an e-commerce app where people add stuff to a cart and then check out, maybe that's the one thing you really need to test. But you should know what to test and how to test it, and then have AI generate that code. So you're still using your developer skills and your thinking skills, but you're not actually handwriting all these tests and getting really good at the syntax of something like Jest or React Testing Library. I don't think those are high-value skills to learn anymore.
Speaker 1:The next thing I'd say is a low-value activity is writing code from scratch in general, especially things like configuration files. Think of Docker, or scaffolding up a YAML file, or doing anything with a CI/CD pipeline. Basically, stuff where you're writing configurations, or you're looking at a blank canvas thinking, well, where do I start? I need to make a React app. And you're hand coding all that and taking an hour or two or more to do it. Low-value activity. Nowadays you have templates and command line interfaces. You can do something like npx create-next-app and spin up a Next.js app immediately, and then you can use AI, Cursor, ChatGPT, whatever, to add the components and the libraries and different structures for an app, tried and true patterns that you can establish immediately, to just get down to the business of actually writing the code.
Speaker 1:It used to be that you could take a full day doing this. You'd be like, I need a Dockerfile. I need to then get, you know, all the dependencies in package.json. Then I need to integrate Prisma. Then I need to figure out where my components folders will be. Then I need to think of a naming convention. Then I need to think of the routes and all these things. You can just have AI do that within a few minutes, honestly, and most templates nowadays, like with Next.js or even Vite, will just do that for you, on your behalf, and you can use AI to get a little closer to your final product and begin to actually write code that will do the thing you want to do. Whether it's a to-do app (don't do that), or a movie finder (don't do that), or some other thing that you thought of (do that). Then you can actually get down to the fun stuff.
Speaker 1:So what's your real value here if you're having AI scaffold the majority of your app to begin with? You need to understand good versus bad architecture and patterns. Maybe my components should be in a folder here, or maybe over there, or maybe they should follow this pattern. How much type safety do I need in my routes? Should I be using TypeScript or not? Where should my tests live? How should tests be run? What is the Git flow going to be? You should be thinking about all these things and not just offloading them to AI. This is the part where you, as a developer, have to understand what you're building, why it's important and what kind of patterns and flows you wanna have within it, so that you can develop faster and get things out to your audience, or your audience of one, maybe, or just you, quicker and safely.
Speaker 1:And, by the way, if you're still learning how to code, like at the really beginning steps, do not skip writing code from scratch. Repeat: do not skip writing code from scratch. You are going to be so, so screwed when it comes down to debugging this code, which I guarantee will not work on the first try, or, if it does, it won't for long as you add more and more onto AI-generated code. It's like building on a house of cards: very brittle. One small move can destroy it all, and if you don't know how to get yourself unstuck, you are gonna be limited by the number of credits and your internet connection to something like ChatGPT or Cursor, and you're not gonna be a real software engineer. You're gonna be a vibe coder, and that's fine to be, but that does not make you a software engineer, and you quickly will realize that you can't quite do the things you're hoping to do without being completely reliant on AI. So, counterintuitively: use AI when you don't really need it, to go really fast, but when you really need AI, don't use it, so you can finally get to that stage. If that makes sense.
Speaker 1:Hey, I hope you're enjoying this episode. Now, you know that I own an anti-bootcamp with my buddy Zubin, an ex-Google software engineer. If you're interested in not just learning how to code (and you know it's going to take more than three months), and you're serious about making a transition into a career in software, and you want to work with people that have done it before and are currently working at senior-plus levels, join me and Zubin at parsity.io/inner-circle. You can learn all about our philosophy, how we approach learning how to code and switching careers in a much different way, and how we have so much gosh dang success. If you're interested in being one of the few people that works with us this year, go and apply at parsity.io/inner-circle. And now back to the episode. Now, what are the three critical skills that you must master in this age? Number one is something you need to master no matter what: debugging, but specifically debugging AI-generated code.
Speaker 1:Here's the issue: AI-generated code looks clean. It used to be that you'd go to Stack Overflow, copy and paste some code into your text editor, and it would work. For the most part, those were small snippets of code, not really large chunks and blocks of code. With AI tools like Cursor, you can generate pages worth of code, and it looks clean to the untrained eye.
Speaker 1:The new developer superpower is debugging this AI-generated code. It's looking at your local dev tools, whether you're using VS Code or Cursor, looking at your Chrome dev tools, and seeing: does this code work the way I expect? Do I understand what it's doing? Is it following patterns? If it's not following patterns, how can I ask it to redo things and follow patterns? How can I make sure what it's generating is only what I think it's actually doing, and not generating files in other places that I'm not aware of? For example, if you use agent mode in Cursor, it will go ham. It will just go across multiple files in multiple different places and begin writing code everywhere, doing what it thinks it should do on your behalf. You need to know what is necessary and what is not, and prune and clean it so that you only keep the pieces needed to do what you're actually trying to do. Or else you'll be building this house of cards really, really quickly, and a house of cards, as you know, is not stable and it will fall down. If you don't have a solid foundation, you don't have a house to build on.
Speaker 1:Now, debugging goes beyond just your code on your machine. Debugging could be writing console logs or whatever; that's kind of more junior-level debugging. What you need to get good at, in my opinion, is understanding how to debug across services. It's very common to be using three, five, even ten services when you're deploying a web application. You're going to have it hosted on some sort of cloud service provider. You're going to have your database on another cloud service provider. You're going to have your analytics perhaps on another provider, and then you're going to have other services and tools that you may be using to monitor or deploy or whatever with your app, and you need to know how to look at all these different things when bugs pop up.
Speaker 1:Imagine being a vibe coder deploying a Lambda with Serverless on AWS, having your front end on Vercel and your back end hosted somewhere else, like Render.com or Heroku or whatever, or maybe on AWS as well. Imagine trying to debug this and not knowing how it all plays together. You need to understand where the logs are for these different services, how to read them and understand what's going on, and why things might break in the cloud but not on your machine, and the only way to get good at this is to deploy stuff out there and have users use it. You're also going to get a lot of this experience just by literally taking something that works on your machine and deploying it. If you're using something like Next.js, where you have a backend and a frontend all together, I would highly suggest (and we do this in Parsity, by the way) splitting these apart: deploying your backend on one service and your frontend on another, having them connect and talk to each other, and finally having your database on yet another service. Things will break when you put it out there, and that's going to teach you all sorts of things about environment variables, how to debug stuff in one application versus another, and how these things synchronize together to make a usable, working application. So debugging is now more than half your job.
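As one concrete example of the environment-variable pain, here's a small sketch of a fail-fast config check, the kind of thing that turns a mysterious cloud-only bug into one obvious log line at startup. The variable names here are hypothetical examples:

```javascript
// Minimal sketch: fail fast at startup if required configuration is missing,
// so a deployment problem surfaces as one clear error instead of a confusing
// crash deep inside a request handler. Variable names are hypothetical.
function checkRequiredEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return names.map((name) => env[name]);
}

// Example run against a fake environment instead of the real process.env.
const fakeEnv = {
  DATABASE_URL: "postgres://localhost:5432/dev",
  API_BASE_URL: "http://localhost:3000",
};
const [dbUrl] = checkRequiredEnv(["DATABASE_URL", "API_BASE_URL"], fakeEnv);
console.log("config ok:", dbUrl);
```

Run this once at boot on each service (frontend build, backend, worker) and the "it works on my machine but not on Vercel" class of bug announces itself immediately.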
Speaker 1:If I'm being honest, I write most of my code with the help of AI. Debugging is what I spend a lot of my day doing, and if you don't learn how to do this well, you're gonna suffer. Another critical skill, kind of related to this, is developer experience and systems thinking. Speed matters more than ever. Companies expect you to write code faster. They say, hey, you're using the help of this AI tool, we expect code to get shipped even faster than it was. But we're constrained by the processes that we use. If you're a really junior developer, maybe you just started learning to code, you're probably not aware of this, but getting code from your machine out to your little code base with just a git push is super simple.
Speaker 1:The moment you work on a large team, you realize there are all these processes to do stuff. You have a CI/CD pipeline, which basically means a pipeline to check the code and then deploy it out to multiple services. You likely have different environments, like a staging environment, a production environment and maybe a dev environment where you can do even further testing. And beyond that, you probably have really slow processes, like talking to the guy in marketing or the woman in legal about problems with the website and things you need to check with them before getting it out. These processes are slow and they will strangle you and the team. So you want to develop this systems-level thinking and this developer-experience thinking, to suggest places to increase the speed and efficiency of a team, not just to write code. That is a lower-level skill and still important. But what's more important, in my opinion, is thinking: how can we not only write the code fast, but get it out fast, and make sure it's safe and reliable to use? How can we make it so teams can deploy once or many times a day? Think about things like Git branching strategies, automated testing strategies, acceptance criteria. Basically, learn a little bit about what traditional DevOps teams might do.
Speaker 1:You can read a book like The Phoenix Project, which is basically a cheat code for learning some basic DevOps practices. It's actually a fictional book, really cool. I love this book. It's one of my favorite software-related books to read. It's not super dry and boring. It's a fictional book about this dude trying to save a company that's really slow and archaic, and he introduces these processes to make the software team run way faster. Excellent book. I can't recommend it enough. Pick it up and you're going to learn a ton from that alone. And this last skill, this one is incredibly important, and I don't know of anyone really teaching it yet. Most developers are learning AI and using it like autocomplete. Cool, right? Yeah, you want to write code fast and it autocompletes stuff. Very cool. But the real winners here are going to be the engineers who learn how to leverage LLMs in serious applications. They're going to take the same software engineering practices we've been using without AI and apply them to applications built on large language models. Think about cost, for example. Most of these services are not free, and yet we're deploying them and basically looking at our bill at the end of the month to see, well, how much did we spend?
Speaker 1:How do we count tokens and optimize the input and output sizes of what's coming back from these large language models? How do we monitor and log their behavior in production? Large language models are non-deterministic, meaning you can give one the same input and you will not always get the same output. What happens when a developer changes the prompts in one of the code bases you're using to talk to an LLM? They changed the prompt by a few words, and now you're realizing everything is broken. It's not giving you the same responses that you expect.
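On the token-counting question, here's one rough sketch of what budgeting can look like. The four-characters-per-token ratio is a common rule of thumb for English text, not an exact count; real code would use the provider's actual tokenizer:

```javascript
// Rough sketch of token budgeting. Assumes ~4 characters per token, a common
// rule of thumb for English text; production code would use the model
// provider's real tokenizer instead of this estimate.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Trim conversation history from the oldest message until the estimated
// prompt size fits within a token budget, always keeping the latest message.
function fitToBudget(messages, maxTokens) {
  const kept = [...messages];
  while (
    kept.length > 1 &&
    kept.map(estimateTokens).reduce((a, b) => a + b, 0) > maxTokens
  ) {
    kept.shift(); // drop the oldest message first
  }
  return kept;
}

const history = ["old context ".repeat(50), "recent question?"];
console.log(fitToBudget(history, 50).length); // prints 1: the old context was dropped
```

It's crude, but even this level of accounting tells you whether a prompt change just doubled your input size (and your bill) before the invoice does.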
Speaker 1:How do you monitor that kind of change? How do you write tests for this kind of thing? How do you observe performance, drift and hallucinations? These are difficult problems to solve and, as far as I know, there are no clear solutions to a lot of these things yet. There are services like Helicone where you can actually monitor and observe your LLM usage and some of the behavior, but how do you write things like unit tests for large language models? It's not exactly clear. Martin Fowler has a great article on this, which I'm going to link in the notes, but these are things you can begin figuring out for yourself and your team: what makes sense. This is when you're going to have to put on your critical thinking hat. You can't just ask AI to solve this for you. You could get some good direction, but ultimately these are problems we don't have a clear solution for yet, so you are going to have to be one of the people that figures this stuff out.
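One possible approach to testing non-deterministic output, sketched below: instead of asserting exact strings (which break the moment the model varies its wording), assert properties every acceptable response must satisfy, like valid JSON, required fields, and length limits. The response shape and the fake model output here are hypothetical stand-ins, not a real API:

```javascript
// Sketch: property-based checks for non-deterministic LLM output.
// We validate invariants of the response rather than its exact text.
function validateSummaryResponse(raw) {
  const parsed = JSON.parse(raw); // property 1: output must be valid JSON
  const errors = [];
  if (typeof parsed.summary !== "string") errors.push("missing summary string");
  if (typeof parsed.summary === "string" && parsed.summary.length > 280)
    errors.push("summary too long"); // property 2: length limit
  if (!Array.isArray(parsed.tags)) errors.push("tags must be an array"); // property 3
  return { ok: errors.length === 0, errors };
}

// Stand-in for a real (non-deterministic) LLM call; an actual test suite
// would run the validator against recorded or live model outputs.
const fakeModelResponse = JSON.stringify({
  summary: "The team shipped the new checkout flow.",
  tags: ["release", "checkout"],
});

console.log(validateSummaryResponse(fakeModelResponse)); // { ok: true, errors: [] }
```

Run the same validator over many sampled responses and you get a crude drift alarm: when the failure rate creeps up after someone edits a prompt, you find out from your tests instead of your users.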
Speaker 1:This is a huge opportunity for developers who figure this out first, or at least have opinions, because right now we don't know. I'm trying to write some unit tests for a large language model in a project that I'm going to give out to Parsity students, and I'm learning what makes sense and what doesn't, and I really can't wait to share it with you. I might give it out for free. It'll definitely be free for Parsity students. It'll be not so free for everybody else. But my goal with this project and this course is to make you the go-to person in your organization for leveraging large language models in full-stack web applications.
Speaker 1:Very simple, bar none. That's what I want for you, and I know this is possible because this is such a new field that most people aren't even thinking beyond just using something like OpenAI and getting a prompt and a response. That's bare-minimum stuff. That's table stakes. We're going to take this way farther. I can't wait to share it with you. But if you're thinking about these problems, I think you're on the right track, because this is what every company will be doing in the next five years.
Speaker 1:So, just to conclude: in this new era which we are in, you can deny it, you can try to hide your head in the sand all you want, you can try to write all your code manually so you don't lose the skill of writing code. Nobody's gonna really care if you can write 200 lines of boilerplate code by hand. Everybody's gonna care if you can debug, scale and ship this code fast and integrate AI properly. If you're serious about doing this and thriving, and not just surviving, you can't cling to the old way of doing things. You're gonna have to adapt and build these skills. These aren't easy. Most of these skills can't just be learned with AI; they have to be learned through experience. These are the things that are gonna separate the good developers from the great ones and create another digital divide, which is happening faster than I think a lot of us want to admit.
Speaker 1:If you are using AI at work as a software developer, I'd love to hear more about it.
Speaker 1:So please write me at brian at parsity.io or leave a comment in the show notes or whatever, because I'm really curious to see how other people are using this stuff, and I hope you check out some of the resources in the show notes. They're gonna teach you a little bit about how people like Martin Fowler, one of the most respected names in software, are thinking about using large language models and writing tests. And I'm also going to add my unit testing course. Even though I said you don't need to write unit tests from scratch anymore, if you don't know how to write them at all, you definitely need to learn how first, before you just have an AI assistant do it for you. So use those resources. I hope you find them helpful, and I'll see you around. That'll do it for today's episode of the Develop Yourself podcast. If you're serious about switching careers and becoming a software developer, building complex software, and want to work directly with me and my team, go to parsity.io.