
Develop Yourself
To change careers and land your first job as a Software Engineer, you need more than just great software development skills - you need to develop yourself.
Welcome to the podcast that helps you develop your skills, your habits, your network and more, all in hopes of becoming a thriving Software Engineer.
#226 - New Coders Can't Code?
Send a text and I may answer it on next episode (I cannot reply from this service 😢)
The rise of AI coding tools has fundamentally changed how people learn programming, creating a concerning trend: developers who can produce impressive-looking applications without understanding how they work.
What makes a developer truly valuable? Contrary to popular belief, it's not the ability to rapidly generate new code. A revealing LinkedIn survey showed 73% of experienced developers spend most of their time reading existing code rather than writing it. The real value comes from engineering judgment – knowing which approach to take and understanding why certain solutions work better than others.
"The entire act of keystrokes is not what the value in software engineering is."
Instead, it's about developing the mental models that let you reason through complex systems.
"If you don't know what this code is doing, then why is it in the codebase? And why is your name on it?"
Shameless Plugs
🧠 (NEW) Parsity's The Inner Circle Program - a highly customized roadmap to take you from 0 to hired. For career changers who want to pivot into software.
💼 Zubin's LinkedIn (ex-lawyer, former Google, Brian-look-a-like)
👂🏻Easier Said Than Done Podcast
Already a developer? Check out 👉 Not Another Course
Serious about joining Parsity? Schedule a call with me ☎️
Welcome to the Develop Yourself podcast, where we teach you everything you need to land your first job as a software developer by learning to develop yourself, your skills, your network and more. I'm Brian, your host. Today on the Develop Yourself podcast I've got my buddy Zubin. He sent me an article a week or two ago, and it's about new coders and how they can't code. Now, I have a story about a young woman I met about a year ago. She's not in Parsity, so this wouldn't fly in Parsity. But she made a full stack app, right, and it looked really, really good. And I saw it and was like, wow, this is nuts. And I was helping her debug something.
Speaker 1:And it was with React, using a really simple API in React, and I said, oh, you have a bug here, I think I found it right here. And she's like, what? I'm like, we're using this API incorrectly. And she's like, what? And she didn't know anything, didn't know React at even a basic level, and I was kind of shocked. I was pretty impressed that she was able to build a kind of working full stack app, and full stack meaning, you know, it's like 20, 30 files. Because how big is the stuff that you're working on, by comparison? How many files are in your average codebase?
Speaker 2:I mean, when you walk into a company, we're talking about hundreds of files in a single codebase. Typically several hundred thousand lines of code per codebase. Several hundred thousand, yeah. And there's several of those, depending on how your architecture is done.
Speaker 1:And so her project was, you know, considerably smaller, right? And still, at this point, ChatGPT was of no use. Hey, I really hope you're enjoying this episode. Now, as you may know, I've joined forces with an ex-Google engineer, Zubin Pratap, who's also an ex-lawyer and learned to code at the age of 37. He and I have very similar stories, and we've combined forces to create a highly customized and personalized coaching mentorship program for career changers who are serious about breaking into tech. If you know that the outdated coding bootcamp model won't work for you, you're serious, and you realize that this transformation will take time, we want to speak with you.
Speaker 1:Our program is not easy, it's not short, but it's highly effective. If you're a listener of this show, you're likely the kind of person we want to work with. If you're ready to apply, click the link in the show notes and you'll be talking to Zubin or me on the phone about the program. Now, back to the episode. She didn't even know what the problem was or how to solve it, and I was like, oh, you don't know anything. And I'm thinking, you could fool an employer with this pretty easily if they didn't know how to ask the right questions. But you wouldn't last very long in the job.
Speaker 2:No, but you know what. So, three to four points on that, right. I mean, even in the Parsity Inner Circle program that we do, we've seen in the last six months or so that people who submit their, you know, baselining projects and their baselining code, there's a higher quality to that code, because it kind of looks like it's good, yeah, right. It looks like it's better than what you'd expect from someone joining the program.
Speaker 2:Let me put it that way, yes, okay. I wouldn't say it's good, but it's better than what you'd expect from somebody who says, hey, I've only been coding three or four months, and you know, I'm joining the program, I've done a couple of courses, I need help. And then, when you interrogate them on the code, they have no idea. Yeah, absolutely no idea why it's there. They know the basics, okay, this is a REST endpoint or whatever it is. But okay, why have you handled it this way? No clue. Why have you got this try catch here?
Speaker 2:No clue, right. And it's not being able to understand why you've done what you've done that makes it dangerous. That's one. Second thing is, really, at the end of the day, it comes down to most people who are not coders misunderstanding what being a coder is. They think it's about writing new lines of code. Why? Because that's how the education system is set up. It's about create this file, create this component, create that thing.
Speaker 2:I did a survey on LinkedIn, maybe a year and a half ago now, and I said, for all devs who've got more than one year of experience (that's how I phrased it), this survey is for you. Do you spend most of your time writing code, or do you spend most of your time reading code? And 73% said they spend most of their time reading code. And that's kind of how it is for most jobs, not all. Keep in mind, 27% said no, they spend most of their time writing code. So roughly a quarter spend most of their time writing code, but nearly three quarters are not. They're spending most of their time reading code that other people have written. Because that's what engineering is. You walk into a company, there are thousands of files, exactly like we talked about, and you have to know how to navigate. And if you don't know how to navigate, and you don't know how to reason your way through a codebase that you've never seen before or aren't familiar with, AI can help you a little bit on a file-by-file basis, but it can't trace the happy path and the sad path through the code. It just can't do that well yet, right? Maybe in the future it will. So that's point number two. And point number three is that the entire act of keystrokes is not where the value in software engineering is, right. So what is the AI doing? It's generating the code, it's doing some amount of the research for you, it's doing a lot of the thinking for you, it's taking out some of the trial and error, and it's spitting out lines of code. Great, very convenient. It's like a jetpack for experienced developers. So this is what I talked about on my podcast, the Easier Said Than Done podcast.
Speaker 2:I talked about who Andrej Karpathy is. I showed his GitHub, I've linked to his GitHub, et cetera. This is a very experienced coder. Oh yeah, this is a guy who's been doing deep learning, reinforcement learning, machine learning and AI for 20-something years, okay. And he can vibe code his way through, the same way that, after 20 years of playing music, I can listen to a song and say, I think I understand what chords are going into the song and why they've done it this way. But give a beginner a new piece of music to improvise with, and it's going to sound awful, right, because they haven't developed the intuitions yet. So vibe coding is basically improv kind of coding. It's as you go. And, like you said, it's very clear it is great for weekend projects. A lot of it is throwaway code. And there's a big difference, which I think we've talked about on this podcast as well.
Speaker 2:I know I've talked about it on mine. There's a difference between development, which is the process of writing code, and engineering, which is the process of building complex systems using code and other tools that glue it all together. Engineering is a much vaster sort of scope. So I think newbies are going to do themselves a great disservice, because they misunderstand what it is to be a coder, and so they emphasize the wrong part. That's not what real-life coding is about, because I spend most of my time reading code and debugging code, and then you do code reviews and stuff like that, right. And if you can't reason through it, you're in trouble. If you don't know why the code is there, you're in trouble.
Speaker 1:I'm one of those 27% that write more code than I read at this point. I'm in a really small startup, so I'm just like-.
Speaker 2:Yes, startups are the only exception.
Speaker 1:And I depend on AI to help me generate code faster. But I was doing some work just a couple of days ago, on Friday, end of the week, and there was a bug that somebody in product found, and they said, this is, you know, kind of embarrassing us in demos, it's not looking great. And this was a really tricky bug. You had to understand the entire system that we had built to track down this really strange bug. It's not something you could just dump into ChatGPT and ask, why is this not working? If you did, it would say, oh, it's this, which would be the completely wrong answer.
Speaker 1:So me and the head of engineering spent probably a few hours overall debugging this and what it resulted in was one single line of code changed and that single line of code fixed the bug. And I'm sure you know stories like this, where you've read about like somebody finds a line of code or some small function and refactors it or deletes it and that ends up saving tens of thousands of hours in compute costs or something like that. And that's because, like you said, they know the system and they understood how to debug it and then find something really valuable. Now, lines of code is probably the dumbest metric to judge a developer on.
Speaker 1:But yeah, new coders don't know this stuff. So I hope this illuminates what your actual worth is as a developer, why you're likely going to be overpaid for what you do. Ultimately, it's not because you're cranking out tons of memorized code, it's because you can actually write code that makes sense. Because I don't even know what I feel on this yet: what is the balance between using something like ChatGPT and writing code manually? It is a fine balance, and I'm unsure sometimes of what to tell people. How much should you use it? When should you use it? How much is too much?
Speaker 2:So, again, we do advise the students in Parsity, as you know, on how to do this, but that's because they're still at the learning phase, right? So there's a different rule that I use for the learning phase. For the first, you know, 400 to 600 hours of coding, I would not let the AI do the writing for them; I'd have them write it out themselves. And I've covered this in my podcast, because with every keystroke you have to pay attention to what you're typing. If you're not copying and pasting, then you're paying attention, and when you're paying attention, it flushes out all the things you don't understand, just by going keystroke by keystroke, right? So if I copy and paste a large slab of, I don't know, Greek text or something, and let's say I spoke some Greek, I'm much more likely to miss the words that I don't know the meaning of. But if I write out each word of that passage myself, handwrite it, I will notice when I come across a word that I don't know the meaning of, right? And so then I'm forced to look it up, and that is how learning is done.
Speaker 2:For developers, like at work and stuff, I have a slightly different rule about this and again I've talked about this on my podcast.
Speaker 2:But ultimately you're accountable for the code, so you have to treat the AI like a member of your team that you're supervising, whose code you're going to review, and whose code you're responsible for. And you have to also coach them to make them better. You have to treat them as though you're the manager, they're a direct report, they're on your team, they're writing the code for you. So that means you have to give really clear instructions, you have to supervise, you have to review. And, most importantly, my third rule on this for developers is: if the thing does something that you cannot do yourself already, or that you didn't believe you could do on your own, then you're probably at the edge, where you want to try and understand what it is before you copy and paste it. But I use AI a lot of the time to do things that I'm dead sure I could figure out on my own. It would probably take me maybe six times, ten times longer, sure, because I may have to research a couple of things.
Speaker 2:I may have to click on a bunch of tabs. That's fine, but I know what needs to be done. My mental model is clear. I have a very solid hypothesis of how this is going to look, what features and attributes it ought to have, what kind of tests it ought to have, and all I'm doing is getting the AI to do the typing for me. Yes, that's it.
Speaker 1:I really like that. That's nuanced and really helpful for a lot of people out there. Also, by the way, if you go to parsity.io/inner-circle, or just go to parsity.io, you should see a pop-up there, and you can grab Zubin's cheat sheet for using AI, for using large language models, for coding. That'll probably be really helpful if you're having trouble. That'll be in the show notes as well.
Speaker 1:Yeah, I like what you're saying, because that's how I'm using it too. Honestly, you know, not to brag, but it's rare that it's going to spit out something in TypeScript where I'm like, what is this doing? This is magical. But when I was really junior, before LLMs, I was working at a company and I copied some code from Stack Overflow, and I put it in the codebase and it worked. And I was like, hey, this works, this is cool. It blew up later, or something changed that then broke it, and the manager at the time was like, what does this code do? I'm like, oh. And he's like, you wrote it. Now, this is the other thing: all the code you write as a developer is visible to the whole team, and it's tagged with your name and the day you wrote it. So there's no hiding. It's fully traceable.
Speaker 1:So he's like, well, I know you wrote this. He's like, can you explain it? He wasn't a front-end guy, he was a back-end guy, and I couldn't explain it to him. He's like, if you don't know what this code is doing, then why is it in the codebase?
Speaker 2:And why is your name on it?
Speaker 1:Super embarrassing, really awkward, and I just had to say, oh, I copied it off Stack Overflow. He's like, okay, yeah, we all do that. But if you don't know it, and I don't know it, then our users are basically having to be our debuggers at this point. And yeah, totally. And imagine this at scale, because, I mean, in the article that you shared with me, the guy was saying, hey, remember when we all had to go to Stack Overflow? Sometimes, maybe just because I'm a little older, I still go to Stack Overflow, because I don't fully trust the LLM. I see the kind of junk it writes, and I'm sure you do too. I think all coders that have coded for a while can see, like, this is okay code, or sometimes it's good, but a lot of times it's completely wrong, and I'm like, there's zero way this should be here.
Speaker 2:Dude. And here's the weird thing. I talk about this in, I think, my more recent episode, 35 or 36, of the Easier Said Than Done podcast. I actually do it on the screen, because, you know, mine's video. I do it on the screen, where I ask the LLM something, and then it gives me an answer which is actually right. I asked it something fairly basic, I don't remember exactly what. Oh yeah, it was about Golang channels, you know.
Speaker 2:And what I did is I cut and paste some code from Go by Example, which is like a canonical reference site for Go. It's not the official docs, but it's a canonical reference. I copied their standard beginning, here's what a channel is in Go. And for those who don't understand what a channel is, it's totally okay, it's not really relevant. And it gave me the right answer. Okay, cool. So the LLM gave me the right answer, which I would expect given how basic it was. And then I replied and said, I don't think that's right, I think you're using workers in the wrong way. Oh yeah. And it apologized and changed its mind.
Speaker 1:Yes.
Speaker 2:And then it made it slightly wrong, and I said, I still don't think that's quite right. And it completely reneged on its position.
Speaker 1:Yeah.
Speaker 2:Right.
Speaker 1:I see this too. It's strange, it's terrible.
Speaker 2:And so how dangerous is that? If you don't have a fundamental intuition about what's right or wrong, Brian, how can you know when to challenge the AI? Exactly like, how can you know when to challenge a direct report or someone on your team if you don't know enough to intelligently challenge them, right? And that's a major risk, because you don't know when it's hallucinating and when it's not, and you don't know whether the code it suggests is right or wrong. And worse, I think a lot of newbies, because they don't understand the process of trial and error involved in the craftsmanship of writing code, that there is a lot of trial and error, constant iteration of it, they'll cut and paste big slabs of code, things break, and they have no idea how to fix it, right?
Speaker 2:And the human brain clearly works with trial and error. Look at how kids learn to walk, look at how someone learns to drive. There's a constant trial and error feedback loop until it starts to make sense. And that's where all the learning is, by the way. That's where you learn all the things not to do, which is as important as the things you should do, and then you learn all that, and the mental model starts to become complete. And I'm really worried that people don't stub their toes enough.
Speaker 2:And that's what I'm worried is going to happen with the really, really young developers: they're going to get to the point where, you know, they want to try and be mid-level to senior engineers, but they haven't had enough trial and error to have good judgment. And the more you evolve as an engineer, the more it becomes about judgment, right? Because there are multiple opinions, there are multiple approaches. There's not necessarily one right, perfect approach that's canonical. If you don't have judgment, how are you going to get to the next level in your career? You're always going to be competing with the AI that has about as little judgment as you do.
Speaker 2:Oh yeah or even less.
Speaker 1:I think, yeah, it does kind of freak me out sometimes, to be honest, as I see computer science students and bootcamp grads that go to your traditional bootcamp, and they're used to this feedback loop of: I turn in an assignment, I get graded, I get the good grade I want, and I get my little certificate. That completely doesn't prepare you for real life as a software engineer, but it's really tempting to do that, and LLMs help you do it really quickly. And there's a lack of knowledge I see about basically how an LLM works. This is a statistical model, essentially. It's going to give you, depending on its temperature, what it thinks is the most statistically relevant answer. So people look at it like, this is a fact. I'm like, this is not a fact, this is what's likely correct. And, like you said, if you challenge it even a little bit, most models will just say, oh, my bad.
Speaker 1:You're right, var x can equal y.
Speaker 2:That's fine, and it's there in the name. It's called generative AI, right? It's generating responses. And if you look at OpenAI and the other sort of APIs, they talk about the completions API. It's trying to complete conversations through prediction. It's not retrieving knowledge. I know that's how we shorthand refer to it, but all it's doing, like you said, is trying to complete a chain of tokens by predicting the statistically most likely next token. That's all it's doing. Look, it's going to get a lot better, there's no doubt. Yeah, I'm sure it will.
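The temperature point here can be made concrete with a toy sketch. The token names and scores below are invented for illustration; real models sample over vocabularies of tens of thousands of tokens, but the mechanism is the same: a softmax turns raw scores into probabilities, and temperature controls how sharp that distribution is.

```typescript
// Toy next-token distribution: the model assigns raw scores (logits) to
// candidate tokens, then samples from the resulting probabilities.
// Nothing here checks whether the top token is factually correct.
const logits: Record<string, number> = {
  correct: 2.0,
  plausible: 1.5,
  wrong: 0.5,
};

// Softmax with temperature: low temperature sharpens the distribution
// (more deterministic), high temperature flattens it (more random).
function softmax(
  scores: Record<string, number>,
  temperature: number
): Record<string, number> {
  const scaled = Object.entries(scores).map(
    ([token, s]) => [token, Math.exp(s / temperature)] as const
  );
  const total = scaled.reduce((sum, [, v]) => sum + v, 0);
  return Object.fromEntries(scaled.map(([token, v]) => [token, v / total]));
}

const sharp = softmax(logits, 0.5); // "correct" dominates (~0.71)
const flat = softmax(logits, 2.0);  // spread out; top token only ~0.44
```

Either way the model is predicting a plausible next token, never verifying it, which is part of why a confident challenge can flip its answer.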
Speaker 1:I can't wait for it to get better. I, for one, welcome how good it will get. Though maybe I'm a little cynical, just because the more I work with it, the more I'm like, this is really cool, and it's amazing, the geniuses that can craft this. But we've seen some really funny stuff on Twitter. Like Supabase, which is a really popular SQL database service. It's the open-source Firebase, right?
Speaker 1:Yeah, right. And so they called out some vibe coder online and said, hey, just as a heads up, you've exposed your API key in every single one of your calls, which essentially made his database completely insecure. If he was taking things like credit card information or any personally identifiable information, you're toast, you're screwed. Another person said they'd vibe-coded their way through a project and then said, I can't do any more because it's gotten past 30 files. 30 files, nothing. Yet another one got hacked, because he also didn't understand security. I'm like, this is great, like you said, for making weekend projects. But you deploy these things to the web, put a paywall in front of them, charge money and take people's credit card information? God help us. Oh my God. I think this is going to create a large wave of cleanup work for us all in the future. A few people are predicting this. I laughed it off at first, but I think this could be reality, 1000%.
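The exposed-key mistake described here can be sketched in miniature. Everything below is hypothetical (the names, the env variable, the URL); the point is only the pattern: anything shipped to the browser is readable by every visitor, so privileged keys belong on the server, read from the environment.

```typescript
// ❌ Anti-pattern: a privileged key embedded in client-side code.
// Every user can read this bundle in dev tools, so the key is public.
const leakyClientConfig = {
  apiUrl: "https://db.example.com",
  serviceKey: "sk-live-SUPER-SECRET", // never ship this to the browser
};

// ✅ The key stays on the server and is read from the environment.
// The browser talks to your server; only the server talks to the database.
function buildServerHeaders(
  env: Record<string, string | undefined>
): Record<string, string> {
  const key = env["DB_SERVICE_KEY"]; // hypothetical variable name
  if (!key) {
    throw new Error("DB_SERVICE_KEY is not set on the server");
  }
  return { Authorization: `Bearer ${key}` };
}
```

A vibe-coded app fails this check silently: it works in the demo, and the leak only shows up when someone opens the network tab.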
Speaker 2:And who's going to have to clean it up? It's going to be someone far more senior who may have to get back on the tools, or it's going to be the folks who vibe coded their way through it and now have to figure out how to actually do it right. And then they have to do this learning in the middle of a busy workplace environment, and it's not easy. It's a shortcut with a nasty trade-off. I mean, you and I, we've talked about this. We use AI as a jetpack. We know how to do it; it's just getting us there faster, much faster. Fantastic. It's those who don't know how to do it, who get the AI to do what they don't know how to do, that I worry about. Because then, what are you needed for? And this is the other thing about interviews too.
Speaker 1:I was just going to ask that. I was really curious what you think about interviews because I got an interview story. But yeah, what do you think about AI in interviews?
Speaker 2:Oh man, I think all companies are going to have to evolve their strategies. So when I was at Google, for example, it was very data structures and algorithms oriented interviews, which is not what we end up using a lot in real life, right? But it's a proxy for your ability to reason in the abstract and convert abstract ideas into concrete code, so it tests those two things, even if it's not the kind of thing you're doing on the job. And so, you know, I got through the DSA round, but I think in my second quarter at Google, I got stuck on something, and most other people didn't know the answer, because it was all internal code and internal Google technology that you don't know on the outside. You can't even Google for an answer. That's the irony. You can't Google for an answer on some of the Google stuff, because it's not there.
Speaker 1:It's not indexed by the public.
Speaker 2:And so there's an internal version of Stack Overflow that you'd go and post questions to, and if you're lucky, someone would see it, and if you're lucky, someone would know the answer. And we're talking about a few tens of thousands of developers instead of the 50 million on the internet, right? So the odds get smaller and smaller, and so you have to get really good at debugging. And so, for interviews, I think if people are going to use this to cheat, they're going to get caught out. You don't want to be in the situation I was watching on Cobra Kai last night, right?
Speaker 2:There's one episode where this girl cheats her way into the big global tournament, and she gets her butt kicked at the tournament because she didn't deserve to be there. And she recognizes that later on, you know, and somebody else who deserved to be there didn't get there; she got there through cheating. And so, in that sense, if you use AI to get your way through an interview but you don't have the skills, you are cheating, and you will get your butt kicked on the job. You will get performance managed out. That's just how it's going to be.
Speaker 1:I've seen that happen. Again, not a Parsity student; I've talked to a lot of people over the years. This is a person I did mentor, though, and he lied his way into a big tech job at a company I won't name. He was fired within three months. Performance managed out. There's another tool out there I'm not going to name either, because I think what they're doing is pretty immoral.
Speaker 1:Sketchy, yeah. It's really smart, though, I'll be honest. It's an AI tool, undetectable in a browser, that can help you beat coding interviews in real time, the data structures and algorithms kind, and it's pretty amazing looking. But I'm like, what do people think happens after you pass the interview?
Speaker 2:Like, that's it. Yeah, but also companies evolve; they'll just figure it out.
Speaker 1:I was in an interview just last week, actually. I interview a lot, by the way. It's kind of a practice of mine; I interview at least two times a year just to stay sharp, to make sure I know what I'm doing. And then I interview a bit here and there. I do a lot of interviews.
Speaker 1:So I was doing one recently and I was using Cursor, and I said, hey, just a heads up, I'm using Cursor. Do I need to use Visual Studio Code and turn off Copilot or anything? And the CTO who was interviewing me said, don't worry about it. I'm like, oh, cool. And so I'm like, well, this is going to generate a lot of the code. He's like, that's fine, just do what you would normally do. And I'm like, okay, I'm going to take your word for it. And so I did, and he was overall happy with the code I produced.
Speaker 1:But here's the thing. When it gave me a response that I didn't want to use, he asked me, hey, why didn't you use that response the LLM gave you? And I said, yeah, I didn't really like the way it was approaching this, because it's using a Map, and I think we could probably use an object in this case. Really, it's like, you know, six of one, half a dozen of the other; I just feel more comfortable using it this way. He's like, okay.
Speaker 1:Then he asked me to explain why, and we got more into a conversation about that. And then the LLM produced a very wrong answer, and I said, yeah, that's not going to work at all. He said, why won't that work? I'm like, well, this is super inefficient the way it's doing this; I'm trying to use a more performant data structure here to trace something. And that was really interesting. I was like, oh, this is a cool way of doing interviews. But I imagine that if I were a lot more junior, I couldn't defend myself against the LLM, and that would be tough.
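The Map-versus-object judgment call from this story, in miniature. This is a made-up word-count task, not the actual interview problem; both structures solve it, which is exactly why the choice comes down to judgment rather than correctness.

```typescript
// Counting words with a plain object and with a Map: same result,
// different ergonomics. For string keys it's mostly "six of one".
const words = ["read", "code", "read"];

const countsObj: Record<string, number> = {};
const countsMap = new Map<string, number>();

for (const w of words) {
  countsObj[w] = (countsObj[w] ?? 0) + 1;
  countsMap.set(w, (countsMap.get(w) ?? 0) + 1);
}

// Map gives you .size, guaranteed insertion order, and non-string keys;
// a plain object serializes straight to JSON. The "right" pick depends
// on what the surrounding code needs.
```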
Speaker 2:Absolutely. And this is the heart of what we were talking about earlier, about the trial and error and the judgment, right? Not using a Map, using an object, suspecting that something's going to lead to a dead end, that's pure judgment. And all of those decisions in a codebase are invisible. In a codebase, you never see all the decisions that were made. You only see the end result of several decisions in lines of code. And so the actual learning is in all the stuff that was considered and discarded, compared to what was chosen and included. That's the learning, right? And most people never get that unless they do trial and error. You just can't; it's not possible.
Speaker 1:Yeah, you can't cheat time in the saddle, and I'm really glad that's one of the things we promote a lot within the Parsity Inner Circle: there is no shortcut here. If you want to vibe code your way through a weekend project, that's cool, but that doesn't make you a coder. You can still write code, the same way I'm not an author just because I write, you know, rage bait on Twitter. It's like, yeah, we can still use the language, you can make fun little stuff. But any last words for especially early-career coders out there, when it comes to using AI correctly?
Speaker 2:Yeah. So, I mean, when you think about it, AI is an extension of tutorial hell in a weird way, right? Because what it does is it gives you the end result, and you hope it's right. And people stuck in tutorial hell get stuck a lot because things are outdated, which means the tutorial is no longer correct. It may have been correct four years ago, but it's no longer correct. And the AI presents the same problem, in that you don't know if it's right or wrong, but you're blindly following along. So we've kind of amplified an old problem, which is thinking that coding is about being able to blindly write lines of code. But how you build stuff is by knowing how to read code, how to independently assess the items that need to go into it, and how to make judgment calls and decisions about that, about designs and things like that. So for those folks who see AI as a super helpful tool: it is, but it is extremely double-edged, and the other edge is extremely dangerous. It is not a substitute for direct, trial-and-error-based learning. I would look at it as a fancy autocomplete on your phone. That's kind of what it is, basically an autocomplete function. Think of it that way when you're learning, and use it like that. This is what we tell students in Parsity, especially the...
Speaker 2:Those in the first half of the program, who are still learning a lot of the fundamentals of programming. In the first half, as you know, we tell them: use it for conceptual triangulation, right? So in that cheat sheet, if people sign up, they'll see that. Use it for conceptual triangulation, for conceptual understanding. Do not use it for the coding yet. In the second half of the program we say, okay, now you've learned enough to be able to make judgment calls about the coding stuff. Now you can use it for coding as well.
Speaker 2:But you're the supervisor. Think of it as giving you a pull request. You've got to decide whether you want to merge this into your codebase, because that's what's happening, right? That's the mindset you need to have. So there are two levels, and you have to know where in the journey you are. Most people will probably overestimate where in the journey they are, and then fall prey to prematurely including stuff in their code. There are huge risks, we've talked about that, and it won't fly at the workplace. That's why, in the Parsity Inner Circle program, we break it down into such detail.
Speaker 1:Yeah. We want to build software engineers, not vibe coders or code monkeys.
Speaker 2:Yeah, that's not what we're into.
Speaker 1:Super. And thank you so much. I'm going to put the article you shared with me in the show notes, and I can't wait to talk to you again.
Speaker 2:Yeah, man, Thank you for having me. It's always a real pleasure when you and I you know, vibe podcast our way through topics. It's great.
Speaker 1:Appreciate it, man have a good one.
Speaker 2:All right, take it easy.
Speaker 1:That'll do it for today's episode of the Develop Yourself podcast. If you're serious about switching careers, becoming a software developer and building complex software, and you want to work directly with me and my team, go to parsity.io. And if you want more information, feel free to schedule a chat by clicking the link in the show notes. See you next week.