
DataTopics Unplugged: All Things Data, AI & Tech
Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.
Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!
#81 AI Code Assistants: The Good, The Bad & The Overhyped, plus Python’s UV Glow-Up & Postman’s Existential Crisis
This week, we dive into the latest in AI-assisted coding, software quality, and the ongoing debate on whether LLMs will replace developers—or just make their lives easier:
- My LLM Codegen workflow atm: A deep dive into using LLMs for coding, including structured workflows, tool recommendations, and the fine line between automation and chaos.
- Cline & Cursor: Exploring VSCode extensions and AI-powered coding tools that aim to supercharge development—but are they game-changers or just fancy autocomplete?
- To avoid being replaced by LLMs, do what they can’t: A thought-provoking take on the future of programming, the value of human intuition, and how to stay ahead in an AI-driven world.
- The wired brain: Why we should stop using glowing-brain stock images to talk about AI—and what that says about how we understand machine intelligence.
- A year of uv: Reflecting on a year of uv, the rising star of Python package managers. Should you switch? Maybe. Probably.
- Posting: A look at a fun GitHub project that brings a Postman-style API client to the terminal.
- Software Quality: AI may generate code, but does it generate good code? A discussion on testing, maintainability, and avoiding spaghetti.
- movingWithTheTimes: A bit of programmer humor to lighten the mood—because tech discussions need memes too.
You have taste in a way that's meaningful to software people.
Speaker 2:Hello, I'm Bill Gates. I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust here Rust.
Speaker 1:This almost makes me happy that I didn't become a supermodel.
Speaker 2:Kubernetes. Well, I'm sorry guys, I don't know what's going on.
Speaker 3:Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here. Rust.
Speaker 2:Rust Data Topics. Welcome to the Data Topics.
Speaker 3:Welcome to the Data Topics Podcast. Hello, welcome to the Data Topics Podcast. Today is February 24th of 2025. My name is Murilo. I'll be hosting you today, joined, as always, by my faithful sidekick Bart. Hey, hey. And keeping us on our toes, Alex behind the screen there. Hello. Hey Alex, how are you? Yeah, how are we doing?
Speaker 3:Good, yeah, nothing special. It has gotten warmer, but still, this weather is a bit crazy, I feel like. Yesterday I think it was really warm, but three days ago it was really cold, like zero.
Speaker 2:Yeah, it's a bit crazy, jumping from dry to rain.
Speaker 3:That's definitely not good for the environment, you know, like the plants and stuff, because I see some trees sprouting, but then everything freezes again. How are you doing, Bart? I'm doing good. Yeah, we don't have a lot of news topics this week actually, so it'll be a bit of a different episode, but we still have stuff to talk about. Maybe we can start with LLM codegen. Do you use LLMs to generate code these days, Bart?
Speaker 2:I do, I do actually, quite a bit.
Speaker 3:I've talked about it before, right? Yeah, indeed. Well, I think there are degrees to it, right? There's having LLMs do the writing, where you just prompt and stuff, and there's also the autocomplete, which looks more like a very smart linter. So yeah, I think people are still navigating how to go about these things. But I see here "My LLM codegen workflow atm". This is from Harper Reed, this is his blog, and he's talking about what he does. What is this about, Bart?
Speaker 2:It's a link I saw passing by on Hacker News, and it's indeed from Harper Reed. I didn't know Harper Reed, but if you look at the about page, just the picture, this is someone that you can trust. Let's see. Oh, can I just zoom in?
Speaker 3:And you just need to click About. Oh, there we go, thanks. Oh yeah, he knows his stuff. He's at a beach, though, you know, it's a good combo indeed. Feels like he has a background on, like he's on a Teams call and he just has a background. But yeah, I trust this guy, I trust him too. What is his codegen workflow?
Speaker 2:So I went through it, and I think it's more or less how I would explain it myself as well, although I would use different tools, but that's a bit of a matter of taste. He explains that you have two types of use cases, and I think in both cases this is more advanced than what you were alluding to, the fancy autocomplete. It's more like really using the LLM for, I don't know, 60% of the code that you generate. And there are two approaches: you have greenfield, where you have nothing and need to create something, and you have legacy, where there is an existing code base. Okay.
Speaker 2:Not legacy as in old, but like there is an existing code base.
Speaker 3:Okay. So legacy here is not something very old and unmaintained, it's just something that already exists and you're trying to modify somehow.
Speaker 2:Yeah, okay. So for the greenfield part, he splits it into a number of steps. The first step is idea honing: you use a conversational LLM to basically start defining the idea of what this application or code base should be able to do.
Speaker 2:You start speccing it out. You can also add to that by asking the LLM to ask you follow-up questions, so it doesn't just depend on what you volunteer, like "I answered this and this is enough", but is also a bit critical. Okay. He says he saves it in a spec.md file, okay.
Speaker 2:And then he starts a planning phase, where you take the spec and use a reasoning model to go from that spec file to what all needs to be created. Okay, build a plan to create this, basically. And he tries to make it very concrete, in the sense that this plan needs to be translated into to-dos.
Speaker 2:So he actually creates a todo.md. Okay, so really something very concrete that you can then use again in the next phase. And then from that to-do, you go to the execution, where you basically ask an LLM that is well suited for generating code across multiple files to start building the skeleton for that.
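The three phases just described, spec, plan, execute, can be sketched in a few lines. This is a rough illustration, not Harper Reed's actual tooling: `ask_llm` is a stub standing in for a real model call (OpenAI, Anthropic, whatever you prefer), and the idea, prompts, and generated contents are made up; only the spec.md/todo.md file names come from the post.

```python
# Sketch of the spec -> plan -> code flow. ask_llm is a placeholder:
# swap in a real chat-completion call. The canned answers below just
# make the flow runnable for illustration.

import tempfile
from pathlib import Path

def ask_llm(prompt: str) -> str:
    # Stub for a real model call; routes on the prompt wording.
    if "follow-up questions" in prompt:
        return "Spec: a CLI tool that counts words in a text file."
    if "to-do list" in prompt:
        return "- [ ] parse CLI args\n- [ ] read the file\n- [ ] count words"
    return "print('skeleton goes here')"

def greenfield(idea: str, workdir: Path) -> None:
    # Step 1, idea honing: a conversational model refines the idea
    # into a spec, ideally by asking you follow-up questions.
    spec = ask_llm(f"Refine this idea into a spec, ask me follow-up questions: {idea}")
    (workdir / "spec.md").write_text(spec)

    # Step 2, planning: a reasoning model turns the spec into concrete to-dos.
    todo = ask_llm(f"Turn this spec into a to-do list:\n{spec}")
    (workdir / "todo.md").write_text(todo)

    # Step 3, execution: a codegen model builds the skeleton from the to-dos.
    code = ask_llm(f"Implement these to-dos as a code skeleton:\n{todo}")
    (workdir / "main.py").write_text(code)

workdir = Path(tempfile.mkdtemp())
greenfield("a word counter", workdir)
print(sorted(p.name for p in workdir.iterdir()))  # ['main.py', 'spec.md', 'todo.md']
```

The point of persisting spec.md and todo.md between steps is that each phase can use a different model, and the intermediate artifacts stay reviewable by a human.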
Speaker 3:Okay, cool. So there are three steps listed here, right? The idea honing, for which he uses GPT-4o or o3.
Speaker 2:And then step two, planning, where he also uses the reasoning models, right, so o1, o3 or R1. And then for the execution, I think he has Claude. You mentioned the models he uses, but there are also the types of tools: for the execution, building the initial files, he uses Aider, which is this command line tool to build out, to generate files, basically. But I think you have a lot of options for each of these steps.
Speaker 2:In terms of tooling and models, you have a lot of different options. I think today a safe bet for all of them is still very much Sonnet 3.5. Yeah, yeah. And then he goes on to what he calls the legacy part, which is more: there is an existing code base where you want to work in iterations, you want to do incremental stuff, where your LLM needs to have a good context of what the project is that you're building. And he proposes a number of tools that he uses to do this. I'm personally a fan of Cline, C-L-I-N-E, which is a VS Code plugin, but again, you have multiple options. I think the important thing there is that, in order to write relevant code and to iterate, the tool needs to have access to the full existing code base and not just to a single file.
Speaker 3:So Cline, basically. I think this is the one, right, that I have here on the screen, the marketplace page. It's basically a VS Code extension that will parse all the files in your project, or what does it do exactly?
Speaker 2:It depends a bit. It's very granular when it comes to giving access to files for whatever you want to do. You can say: you have access to read this and this and this file, you can write this and this and this file. You can also say: you have access to everything. So you can specify this a bit, but indeed, it's easy for Cline to build a relevant context for the changes that you want to make, and it will also apply the changes for you in a very managed manner. And I like that you can choose which underlying model you want to use. I typically use it with Claude, with an Anthropic API key.
Speaker 3:So your setup is: you have a Claude API key, and then you have these tools that plug into the API keys that you have. Yeah, okay.
Speaker 2:And then he mentions a few challenges, and that's actually why I think those are interesting to discuss. He mentions skiing: he says he's "over his skis" when talking about coding with LLMs, in the sense that you have this danger, with something like Cline, if you give it all access, that it makes very big changes and it's hard to understand what the fuck all happened just now.
Speaker 3:Because a lot of stuff changes at once. Yeah, exactly. I see.
Speaker 2:Also, if you do this greenfield approach, suddenly you go from nothing to ten different files where there is logic. Like, what happened, right? Everything seems to work, okay, nice, but if something suddenly doesn't work, there's going to be this big change that is hard to understand.
Speaker 3:But I think it's the same point: you need to get a bit comfortable with it, maybe for greenfield at least. Being comfortable that you're going to have a whole bunch of code that you now have to stop and understand, or just accept that it's there, and choose a bit how you tackle that, of course. I think if you take it all the way and you say "I don't really care", then you need to focus on testing.
Speaker 2:What is the testing framework that you want to build? I see, okay. So I think that feeling of being over my skis with LLMs is an interesting one to discuss, and I think what it links to very much is his remark on this.
Speaker 2:This is very much a single player mode. I think in single player mode you can get very far, but there are today very few of these tools that allow you to work well in teams. I think the tooling for that is not really mature at this point.
Speaker 3:And how would that look for you? When you're saying teams, what exactly? Because I'm having a bit of a hard time visualizing what an LLM for teams would be in this case.
Speaker 2:So, typically, let's say you are in greenfield. What you would do with a team is say: someone is going to focus on UI/UX, someone is going to focus on that page, on that component, someone is going to focus on that backend component. And now, with this prompt, something generates everything, but the end responsibility of maintaining that, in an actual larger setting, not just a hobby project, will be spread over a team. I see.
Speaker 2:And how do you do these things, right? Going from nothing to 60% in greenfield, this is something new, right?
Speaker 3:yeah, yeah For sure. For sure.
Speaker 2:And that is, I think, an interesting aspect. Now, I don't have any clear solutions there, but people that use these tools heavily today are very big proponents of the productivity gains. But in real life, you're not in single player mode. Yeah, true. And I think that is something that we still need to figure out a bit. True, true, true. Well, I thought it was an interesting one. And you said you have a similar approach?
Speaker 3:For me, some of these things resonate a lot. I'm definitely not as structured when it comes to the brainstorming and the to-do. I feel like I do some brainstorming, but I don't ask it to put things in a to-do. I just kind of say: I think I need to do this, this and this.
Speaker 2:But have you done a lot of these greenfield things? No. I think if it's more working on legacy code, you're not going to do that. But if you structure it like this, if you use Lovable or Bolt or something to build an application, this will really speed you up. Because otherwise you're going to start with a very small, minimally descriptive prompt, it will generate something, you need to change that, and change it again, because after two iterations you notice that this is missing, and this is missing.
Speaker 3:So it really helps to spend a little bit of time on this phase. But I agree, I agree. I think for me, mentioning this is more of: ah, this is a good idea, because it's more structured. Like, I use LLMs, but I keep more in my brain. Yeah, wait, and we discussed this before.
Speaker 3:I don't think that's a good thing, right? So to kind of say: okay, now write the to-do tasks, this, this, this and this. I think it would help me in organizing as well, and also in validating that the steps are what I wanted to do, and all these things. I think the last paragraph is also an interesting one.
Speaker 2:It's about haterade. But this is something that you see a lot: you have either the extreme tech bros that say the only thing existing a year from now is AI, yeah, or the other extreme: LLMs, they're terrible at everything. Yeah, it's a very polarizing view towards this, right?
Speaker 3:It's either I love it or it's trash.
Speaker 2:Yeah, and I don't follow either of those extreme stances, to be honest. But I personally do very much believe, and I see it in a project I'm involved in, that it adds a lot of value if you know the tools well. Right, and the tools are still very young and there are definitely challenges. We talked about the team aspect, and I think we've touched upon before: what impact does it have on the education of someone that is just starting in software engineering and is using these tools? Will they understand what is going on?
Speaker 2:There are a lot of challenges, but I think it's too easy to say they're terrible at everything, right? I mean?
Speaker 3:But I think also, these people that are saying that, well, first of all, are they actually turning their backs on LLMs? Because I don't think it's controversial to say that if you do, you're going to fall behind, at least in terms of productivity, right? So I wouldn't advise anyone to really turn their back. You can be skeptical where you can, but the earlier you adopt these things, the earlier you can be productive. I think it's the way forward. Well, you mentioned Cline and you mentioned your workflow. I don't follow exactly this, but there are a lot of things that resonate with me, like the brainstorming phase: how do I tackle this? Sometimes I have a problem where I know I want to parallelize things, but I'm not sure how I want to do it.
Speaker 3:In Python, I want to add a queue, you know, things that don't come up very often in my day to day, but I know they're possible. So, for example, I was trying to do text to speech and I wanted to stream the input in, for OpenAI. You can have a streaming output for the audio, but you cannot stream the text in, right? So I was like: okay, maybe I can split it into sentences, pass them in, and play the audio as it comes. So I basically make a whole bunch of calls, and I was like: okay, maybe I want to have a queue, so it takes sentences in and adds them as tasks, and it works through them as they come.
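The sentence-queue idea described here can be sketched in a few lines. This is an illustration, not the actual code from the episode: `synthesize` is a made-up stand-in for a real text-to-speech call (for example, to the OpenAI audio API), and the sentence splitting is deliberately naive.

```python
# Sketch: split text into sentences, push them onto a queue, and have
# a worker "play" each synthesized chunk as it arrives, so playback
# can start before the full text has been processed.

import queue
import re
import threading

def synthesize(sentence: str) -> bytes:
    # Placeholder: a real implementation would call a TTS API
    # and return audio bytes.
    return sentence.encode()

def speak(text: str) -> list:
    q = queue.Queue()
    played = []

    def worker():
        while True:
            sentence = q.get()
            if sentence is None:  # sentinel: producer is done
                break
            played.append(synthesize(sentence))  # "play" the chunk

    t = threading.Thread(target=worker)
    t.start()
    # Producer: naive split on sentence-ending punctuation; each
    # sentence is enqueued as soon as it is ready.
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        q.put(sentence)
    q.put(None)
    t.join()
    return played

chunks = speak("Hello there. How are you? All good.")
print(len(chunks))  # 3
```

The queue decouples the producer (sentence splitting) from the consumer (synthesis and playback), which is what makes the "play audio as it comes" behavior possible.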
Speaker 3:I kind of had it, and I could test it, you know. Yeah, so also brainstorming, just to see: does this make sense, does this not make sense, am I on the wrong path here? I think it definitely helps. So I have definitely used things like this, but seeing it done in such a structured way, I think it's really nice. One thing that I do use, I don't use Cline, but I do use Cursor, which is a VS Code fork, and one of the things that they have is this agents thing. So, for example, you can do this inline in the file, but you can also have it as almost like a chat kind of thing, which you see here on the screen. This is, I think, a competitor to Cline.
Speaker 2:I think so, like it's very similar, I think so.
Speaker 3:And I think what they do, it's open source, right, I think. Whenever you want to edit something in Cursor, it gives you this diff thing. It tries to make it easier for you to see what's changed, and you can just accept or decline. Actually, I think this is very helpful. For example, one way I use this: I write my Python code, my spaghetti code, and then I have pre-commit hooks that will fail, of course, because it's spaghetti. And then I'll just copy-paste the errors, put them in this chat and say: hey, fix these errors. And then it will give me this diff, and I can go diff by diff and say: okay, this makes sense, or this doesn't.
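For what it's worth, the accept/decline view described here is built on an ordinary unified diff between your file and the proposed edit, the same thing you can produce with Python's `difflib`. The before/after snippets below are invented for the example; this is not Cursor's actual implementation.

```python
# A tiny illustration of the diff such tools present: compare the
# original code to an LLM-proposed fix and render a unified diff.

import difflib

original = "def add(a, b):\n    return a+b\n"
proposed = "def add(a: int, b: int) -> int:\n    return a + b\n"

diff = "\n".join(
    difflib.unified_diff(
        original.splitlines(),
        proposed.splitlines(),
        fromfile="before.py",
        tofile="after.py",
        lineterm="",
    )
)
print(diff)
```

Reviewing hunk by hunk, as described above, is exactly accepting or rejecting the `-`/`+` pairs in this output.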
Speaker 3:Docstrings, a lot of the time, I'm not a big fan of, because they're very non-informative, they just kind of repeat the function, right. So then I usually change those things. But even in those cases, I still feel like I gained some productivity. True. So cool, there are different ways you can take this. But one question that I received last week, and I wanted to bounce this back to you: someone asked me, what is the best free GPT for coding? And I'm not proud, looking back on the answer that I gave, because I said DeepSeek, because you can host it yourself. I was thinking like Ollama on your machine, and then you can plug it in and do it like that.
Speaker 3:But they were looking for something more like ChatGPT, like a chat UI, I mean. Yeah, exactly, ChatGPT, the actual chat, not the GPT models. And I guess there are other ones, DeepSeek has one, and all these things. But I didn't think of ChatGPT at first. Well, one, because, yeah, it's free, but they still collect your data and all these things, right? So, depending on what you want to put there, you have to be a bit more mindful. But I guess the main reason I didn't think of ChatGPT is because to me ChatGPT equates more to Google than to a coding GPT. And when I think of coding GPTs now, I think of something that lives in my IDE, something that autocompletes stuff for me, that can refactor my code. But I was wondering if you share the sentiment. When you think of a free GPT for coding, if someone asked you this, what would you answer?
Speaker 3:To be honest, I would have to check it out. I don't know what the free tiers are on the major chat-UI-based ones. But to me, really, you can ask the GPT to write some code and copy-paste stuff, but would you call that a GPT for coding? Because you can also google stuff and copy-paste code from Stack Overflow. For me, ChatGPT, the UI, is more like a Google replacement than a GPT for coding. At least that's how I see it.
Speaker 2:Yeah, I don't necessarily agree with that. With DeepSeek, for example, you can generate decent code, which you don't get from a Google search, right? True. I mean, you can definitely do something there without it being in your IDE. Yeah, but would you say, like DeepSeek, if you go in the UI.
Speaker 3:Wouldn't it be more appropriate to say that it's a general purpose thing?
Speaker 2:It's not for coding? I don't know what the exact question was that you got. Yeah, that's not the question. I don't think there are a lot of free options when it comes to coding in your IDE. I think all the major providers will ask you to pay. I don't think they have a free tier on their API tokens, unless I'm mistaken.
Speaker 3:For.
Speaker 2:ChatGPT.
Speaker 3:If you go to chatgpt.com.
Speaker 2:I think so. If you use, for example, Cursor or Cline, if you use this in your IDE, you typically need an API token, and you typically need to pay for it. Gemini very briefly had a short-lived free version where you could get an API token, but I think most of them are paid. So then you need to default to something running locally on your laptop, which is not impossible, but it's not going to give you great performance, I guess.
Speaker 3:Yeah, because that's also what I started thinking more about, but I feel like I missed the question entirely.
Speaker 2:To me, if you get a question like that, the question would be: what do you need to solve? Yeah, right. Because if the person wants to become way more productive, then I think the focus needs to be not necessarily on how do I do it in a free way, but on how do I do it in an affordable way.
Speaker 3:Right, because, yeah, true, true. No, I agree with that. For me, what was surprising is just the realization that the way I'm thinking about these things is very different from other people. And I wanted to ask you as well, because I'm not sure if I'm framing or equating things differently. Because I'm thinking of tools, right? Google is a tool to do this, then there's Perplexity, then there's ChatGPT, and I'm kind of bucketing them in the same place. But I'm not sure if I'm doing a disservice by looking at things this way. But now, you talked about generating code. You mentioned it's very new that today you get, I don't know, 20 files, just like bam, right, and now you have to understand. Boom, bam, boom, boom.
Speaker 3:Anyways, one thing I came across was this article from Sean Goedecke, I'm not sure where he's from. Anyways: "To avoid being replaced by LLMs, do what they can't". Have you seen this article? I'm not sure if it was trendy.
Speaker 2:The headline? No, I haven't seen it, but the headline sounds a bit like fear-mongering.
Speaker 3:Yeah, I think it is a bit. But basically, this guy splits LLM usage into three parts: short-term, medium-term and long-term. He says in the short term: learn a bit of AI, use it, get what advantage you can from AI tooling; understand the technical principles behind language models so you can participate in the growing quantity of AI work. I'm not sure how much I agree with that second statement, to be honest, because knowing the basics is good, but I don't know how many people are going to be building new foundational models. And he says: acquire status, since it certainly seems like more junior roles will be replaced first. Okay, for me, yeah, maybe, but you could always say this, right? Medium and senior roles are more valuable, in theory at least.
Speaker 3:Then he says, in the medium term, lean into legacy code. And this is his view, so I'm curious to hear what you think. He says that LLMs are good at problems that are technically difficult, to some extent mathematically difficult, but well defined and well scoped, where the solutions are trivially verifiable and the total volume of code involved is very low. So instead of looking at where the AI falls short, let's look at its strengths, and according to him, these are the strengths of LLMs today in terms of coding. Yeah, okay. So then the opposite of this would be: problems that are ill-defined, with poorly scoped solutions that are difficult to verify, and where the total volume of code involved is massive.
Speaker 2:And when you look at this, for me, the first part, these are easier projects.
Speaker 3:The second part, these are harder projects. Yeah, right. And I also feel like you can still use LLMs for the second part.
Speaker 2:In my opinion, right? Yeah, you can. Because what he's saying there is that the total volume of code involved is massive, and of course these projects are out there, but for most of the massive projects that you will actually work on, in reality, the maintenance of such a code base is distributed among teams. Yeah. And the actual amount of code that you yourself need to have in your own personal context is not the full code base, right? Yeah, I agree.
Speaker 2:And you could make that parallel to an LLM as well: it doesn't need to focus on the full code base to make relevant changes. And his point there about solutions that are difficult to verify, I mean, if that's difficult for an LLM, it's just as difficult for a human, right? Indeed, I agree. So nothing very surprising, I mean. And again, I think this sounds more like bad management.
Speaker 2:Like, if you have solutions that are difficult to verify, should you not spend time on making them easier to verify? Like introducing testing in places, these types of things. And problems that are ill-defined and poorly scoped: should you not better define the scope? Yeah, I'm wondering, isn't this more of a project management issue? Yeah, you can work on this issue.
Speaker 3:True, true, true. Also, the massive volumes of code.
Speaker 2:He also kind of talks about RAG and about context length and all these things. And he actually says, in his follow-up sentence, that the ill-defined, poorly scoped, difficult-to-verify work, in his view, is describing legacy code. Yeah, largely established code bases. And I would even dare to say that an LLM with a sufficiently big context might get up to speed.
Speaker 3:Quote unquote, quicker than someone that's completely new to the code base. But that I agree with, actually. And I mentioned Cline, that it can actually crawl through things, and you can also take that knowledge for learning. I think arguably I would rely more on LLMs there, to get up to speed and try things, than in the other case, the top part, let's say. I think you can tackle both, very much.
Speaker 2:It depends on what type of tools you are using. And I think what still holds today is that, even if you use the best tools out there, it's your own experience that determines whether you can use these things in an efficient way.
Speaker 3:True, true, true. I think in this post he's really talking about fully replacing software developers with LLMs. But again, like you said, he says LLMs are good at the first kind of problem, and to me that's the first step, right? You have to be good at problems that are well defined to get better at problems that are not as well defined, because the first thing you need to do is make them well defined, right?
Speaker 2:But I think it's hard to see, because this technology is going so fast, what it would actually do in terms of replacing, right? Isn't it just a change in the type of activities that you're involved in? Let's take a very simple thing: a UI, and you need a new button with some logic, or you need to change the logic under an existing button, or you need to do a minimal layout change.
Speaker 2:I think you can argue there: if you don't need to write code, if you can just prompt it and it works, let's assume it works at some point, that it's good enough to do these things, it's not a bad thing, right? I agree. But people need to be much better trained in: how do you define the scope, what are the specifications of this application?
Speaker 3:Like, people need to be much better trained at these types of things, because in the end that's actually more important. I think so. And even identifying when these things are not mature, right? Like, yeah, you have a definition, but it's too wide, this can be better scoped, right? I think that's the thing.
Speaker 3:I even heard, on another post, that they were saying that most of the programming happens before you write code. But again, it depends a bit on how you define programming. But I think it's kind of what you're saying, right? And let's be honest, we're not there.
Speaker 2:Like, nothing is fully automated today, right? I think this is very much thinking about where we at some point end up, but today you can't. I had an example the other day, it was with Bolt. You ask it to generate some utility functions, and even though I have a folder and files with utility functions, it creates a new folder with overlapping functionality, because for some reason it didn't have that in the context. And for these types of things, if you ignore this, if you don't know what is going on, it becomes very, very hard to maintain the code base. This is not an autonomous thing. Today, you very much need to describe: what are you expecting?
Speaker 3:yeah, true, what are your specs? True, true, true.
Speaker 2:So it's almost like you still need the development knowledge to say, this is exactly what I want, for it to do a good job. Because if you're just too vague, if you don't know anything about programming and you just ask, you're going to get something, but it's not going to be good. And I think, honestly, the major concern a lot of people are raising is: what if this becomes good enough that the LLM just says, I need these specs, they need to look like this, this is the functionality? And if the LLM is good enough to generate all that, yeah, sure.
Speaker 2:But how can you be critical of what is happening, how can you be critical about maintainability, and how can you be trained on these things if you've never seen them? Yeah, I see what you're saying. I think that's unsolved.
Speaker 3:But I also think that even just coming to the LLM and saying, I need these specs, is already, for me, a better place than we are today. Because you're already thinking that you need specs. You're probably saying, you need an, I don't know, contract, right? You understand that you can organize the code in different ways, so that even if this thing is refactored, it's not going to touch everything else. You know, to me that already feels more mature than what we see a lot of the time, right?
Speaker 2:And you already see that now. If you're in the Lovable or the StackBlitz Discord servers, StackBlitz being the company behind Bolt, you see a lot of questions coming up around usage of these tools, people saying, oh, this doesn't work, or the authentication doesn't work, or they get these errors, where it's clearly, you don't understand what is going on, you just need a more specific prompt. But they don't understand what is going on because they don't have any programming experience. You already see it happening there.
Speaker 3:Yeah, yeah, indeed. It's interesting, I think it kind of goes round and round, because it looks like the knowledge about programming is now dispensable, but if you really take a closer look, it's really not, right? Um, and I think we touched a bit on this as well. He says, in the long term, take responsibility. There's a slide from IBM: "A computer can never be held accountable. Therefore a computer must never make a management decision."
Speaker 3:Uh, and I don't think this is very controversial, but I think he's saying, you're still going to need, at least in a company, at least one developer that will take accountability and will direct LLMs and say, this is what needs to be done, this is what is not to be done. Which, to me, rhymes a lot with your experience with Bolt, right? You say, do this, and then, okay, this is not good. Because you can also see a junior developer do the same: build this thing, and then, instead of trying to understand that there was already a utility function, they just create a new one with overlapping functionality. Actually, this is not what I want, right? And I think it's a more experienced view to say, I don't want to create new dependencies and have duplication, because I want this to be clean, I want this to be isolated.
Speaker 3:I want to know where the code is organized, and all these things. So there you go. Those are, according to him, the ways to avoid being replaced by LLMs: do what they can't. Some things I think are pretty commonplace, some things are a bit more insightful.
Speaker 2:I would say, um, and I think the core of the message is: if there is something new coming up in the space that you're actively working in, try to learn what it is and use it.
Speaker 3:Yeah, I agree, but I think it would be more helpful to think of these things as tools. Right, because, like you said, you learn it and use it. Yeah, right. But AI today is still very much anthropomorphized.
Speaker 2:That's my fault, um, yeah, it's going to be the agent that runs your life. Yeah, yeah, it's true.
Speaker 3:Actually, and unintentionally, this is something that I had also put here: "The wired brain: how to not talk about an AI-powered future". This is from Ines Montani, from Explosion. She's one of the authors of spaCy. Okay, so she does a lot of talks.
Speaker 3:Well, I don't know how much she does now, but I know she's done a lot of talks, and sometimes she likes to talk a bit about creating her slides and all these things. So this post is about using the traditional AI images, you know, the brain.
Speaker 3:It's an old post as well. Yeah, it is, 2017. Yeah, but she did talk about it, she did reference this, because actually I found this post in another, recent post. Okay, but I thought it was an interesting one, because she talks about, yeah, you can see, I'm allergic to the images that you're showing here.
Speaker 3:Yeah, but that's exactly what she's saying: don't do it. So, for people that are just listening, these are basically the brain-on-chips kind of images, right, the robots. If you look up AI or machine learning on stock image sites and all these things, that's probably what you're going to find. The TL;DR is basically: just don't do it. If you're going to talk about AI in healthcare, then it's better to bring a healthcare image than these very generic things. She also says that this promotes anthropomorphization, looking at AIs as humans, which doesn't help either. Um, and then she talks about "imagineering", and this is what I thought was kind of interesting.
Speaker 3:So these are images from the 1900s of how people imagined life in the 2000s would be. And there's a robot sweeper, I guess, basically a robot vacuum, exactly. Um, and then you had firefighters with wings that could put out fires. Oh nice, we still don't have the wings. We still don't have the wings, but she argues that... I would prefer the wings above the AI, though. That would be cool, if I could just choose for myself. Yeah, like, do you want access to Claude, or do you want a pair of wings?
Speaker 3:That would be cool. The wings would be, yeah, I would go for the wings. I would go for the wings as well.
Speaker 3:I think it would improve my quality of life, you know. Anyways, um, she does also say that this is not that far from what we see today, in the sense that we have the problem of keeping floors clean and we have robot vacuum cleaners, and then there are drones that can also help put out fires, right? So that's the problem being solved here. And she also talks about how we cannot really imagine what the future is, because we're really trying to imagine the future in terms of what we have today, and we're trying to imitate human behavior to solve these problems, right? Another example: she said that back in the 1950s there was a job called a knocker-upper, which was basically someone that would come and knock on your window or your door at a certain time to wake you up. And she says, yeah, nowadays this job is obsolete, right, there are alarm clocks for it. So this job was replaced, but it wasn't replaced by a window-knocking machine, right? And I feel like, when we talk about AI, and I think that's the point she's bringing here, AI has a PR problem.
Speaker 3:Whenever we talk about AI, we think of a little intelligent being. We also assume that if it can do things that we consider hard today, like painting in a Van Gogh style or something like that, then subconsciously we assume that it can do everything that's easier than that, because we equate it to a human, right? So if an LLM can write code, then it can also speak English, because we learn to speak English before we write code. And if it can speak English, then it can probably read poetry, or whatever, you know. It creates a cascade, layer upon layer, where we think that because it's on this top layer, everything below is also conquered, which is not true. So, in the end, she basically says: the way we communicate powerfully shapes our perception of the world. So let's be careful when we talk about AI and equate it to humans. And I stopped using the fucking wired-brain illustration. I agree.
Speaker 2:I don't necessarily agree with the statement that there's a PR problem. It depends a bit on what you define as a PR problem, because I think the most hyped thing ever is AI. Yeah. Maybe the challenge, and I think this goes for all hypes, is that what is actually under the hype, and understanding what is under the hype, gets lost a bit in the hype. Yes. But, um, say more about that.
Speaker 2:I think there is such a big hype that even people that have nothing to do with technology think AI will take over our jobs, yeah, true, without really understanding where we are today, or what is likely going to be the future.
Speaker 3:But I think that's the thing. People think it's going to take their jobs because they see an AI coding, and they think, coding is more difficult than X, and if I do X and it can do coding, then it can probably do X too, because coding is harder. Because to them it's a person, not a tool. No one says a calculator is smart, or that calculators are going to take jobs because they can do arithmetic that humans can't do as fast, because we really see the calculator as a tool. And I think the words we use, like agents, reasoning, artificial intelligence, all really support that image.
Speaker 3:Yeah, right. Yeah, I feel like, uh, I never liked the terminology, but it is what we are stuck with today. Um, now, more on the techie things. You know UV, right, Bart? I know UV. We mentioned hype; UV has also been very hyped for a while.
Speaker 2:Uh, yeah, maybe you need to explain what UV is.
Speaker 3:UV, um, for people that are in the Python space, you can think of it like a Poetry competitor, a Hatchling competitor. Basically, it's a package manager. A pip competitor. A pip competitor, indeed.
Speaker 2:A virtualenv competitor.
Speaker 3:Yeah, it's a tool that bundles a lot of stuff. It can manage your virtual environments, your dependencies. It can package your code. It can manage your Python versions and your Python tooling, so it's even a pipx alternative. It's from a company called Astral that also created Ruff, so it's tooling written in Rust. So, do you use UV, or no? I do, yeah. I think I use UV as well, but I'm also hesitant to suggest it, because Python packaging has historically changed all the time.
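For listeners following along at home, a rough sketch of what "bundling all of that" looks like in practice. These subcommands come from uv's documentation, though exact flags can differ between versions:

```shell
# Create a virtual environment in .venv (virtualenv replacement)
uv venv

# Add a dependency to pyproject.toml and install it (Poetry-style)
uv add requests

# pip-style installs also work
uv pip install requests

# Install and manage Python interpreters themselves
uv python install 3.12

# Run a tool in an isolated, throwaway environment (pipx-style)
uvx ruff check .
```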
Speaker 3:Like Poetry, then PDM, then Hatch, and all these things. I think so, and that's the thing, that's why I wanted to bring it up now. Uh, this is an article from, who is this? I don't know who this guy is, I forgot his name, sorry about that. It's BiteCode, so bitecode.dev: "A year of uv: pros, cons, and should you migrate". And, yeah, the subtitle: yes, probably. So I was already surprised that it's been a year. This is from February 15th, so it's already been a year of UV, and it looks like it's still ahead of the other competitors. Has it only been a year?
Speaker 2:That's according to him. Yes, it was only released a year ago. Oh wow, it feels much longer.
Speaker 3:Right, but I think, like, there's no skepticism still. I feel like people are still very confident about UV. Most people are still saying UV should be your default choice, right, which wasn't the case for previous package managers, in my opinion.
Speaker 3:I think it depends a little bit on the bubble that you're in, but, potentially. So, basically, he says he waited a bit before bringing his opinions, because he wanted to see what using UV would feel like, and all these things. I'll go to the conclusions in a bit, but I'll just mention one thing before that which I thought was interesting. He said that the way they tackled UV, they really tried to first replace pip, virtual environments, and pip-tools, and, according to the author, this reflected a respect for the community and an attempt to adhere to the community standards. It was a bit like, I'm trying to do something for Python, rather than doing whatever I want. And I think even the inline dependency management that he brought up is maybe in that spirit, right? So he also praised UV in that sense: there's a lot of stuff there, and they really tried to take it all in and be the package manager for Python, not what they wanted it to be.
Speaker 3:The article also talks about everything that UV can do for you, like installing Python versions. He also shares a few tricks. For example, if you do uv run with Jupyter, you can basically launch a Jupyter notebook without having to install it in your dependencies. So he shares a few tricks here and there, and also talks about when it fails. He talks about the cache problem: because UV tries to be very fast in a lot of different ways, whenever it pulls a dependency it will cache it for you, and he says the cache actually took 20 gigs of his MacBook.
Speaker 3:But in the conclusions, when should you not use UV? He says there are basically five situations. You should not use UV when you have a legacy project where using UV to resolve dependencies will not work, or you cannot afford the mess of migrating. So basically, you have a project that works some other way and you don't want to change it. Or you're in a corporate environment that will not let you use it. Or you don't trust it yet, because it's not a stable version, and I think this is still true, UV is technically not a 1.0 yet, and also because it's part of Astral, which is a for-profit company, and you don't know what their commercial offering will be.
Speaker 3:He also mentions: if you need a specific Python version that UV doesn't have, because you can download Python versions with UV, but it only goes back a certain amount of time. And if you think the CLI is too big of a showstopper for the team, so the team is very familiar with other tools and switching is a hurdle. Of all these arguments, to be honest, I think, yeah, legacy I understand, if you don't have standards, right, because a lot of these other tools also follow the Python standards, so you could still migrate and keep using this stuff. Corporate environment, okay. Not trusting it, I get the argument, but I don't feel like it's a big deal, because it's open source, right? Even if they close-source it, someone would just have a fork of it.
Speaker 2:I think it really depends on what you're building. If you're building something greenfield that you'll probably refactor completely a year from now, you don't really care that there is no stable version; there's a big enough community behind UV, and it's good enough now. If you're working on a legacy code base that has been there for 10 years and needs to be active for the coming 20, you're probably not going to do this for a tool that has only existed for a year, right?
Speaker 2:Yeah, I mean, yeah, that's true. To me, that is really the question: does it matter at this point? Maybe it brings me back to the, uh, the doors, what was it, one-way doors versus two-way doors. Yes.
Speaker 3:what is that?
Speaker 2:like uh, I think it comes from amazon, right, I?
Speaker 2:think from jeff pacer yeah so if you uh like this is a choice that, uh, you go to the door, can you, can you take, uh you, can you go back through the door to undo it? Basically that's the thing. And if you're in a greenfield situation and you're going to refactor and you don't know yet what you're building completely anyway, it's easy to go back and change a package manager. If this is something that is super core to your company and needs to be stable down the line, going back incurs a lot of risk. You can doubt whether or not that is a two-way door, then right yeah, but the way door becomes very expensive yeah, but because uv is still a tool in the end.
Speaker 3:Right, and for example, one thing is that it changes your pyproject.toml. But you can still read that with other tools, right? You can say it like that.
Speaker 2:That's indeed the easy thing. But if you're working on this with a team of, let's say, 40 people, they all have their developer environments. It's more than just a file that it generates, right? It creates virtual environments, it has an impact on your CI. It's a big job to change everything if you're working on a large-scale application, right?
Speaker 3:Yes, but I think, well, it is a change, but to me it doesn't feel like as expensive a change as saying we're going to use Polars instead of pandas, for example, or we're going to use this database or that database. Because in the end, you still have virtual environments, there's still a .venv folder there. The pyproject.toml has nothing that is UV-specific, for now at least, right? You have the dependencies, the dev dependencies, you have groups; these are all PEP standards, right? Yeah, I agree that maybe your workflows change if you use the tool in CI, or the lock file, for that matter. So there are a few things that change, but I feel like in the end it's really just a tool that adheres to standards.
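A minimal, hypothetical pyproject.toml illustrates the point: everything below comes from the packaging PEPs (PEP 621 metadata, PEP 735 dependency groups) rather than from UV itself, so any standards-following tool can read it:

```toml
[project]
name = "example-app"        # hypothetical project, PEP 621 metadata
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "requests>=2.31",
]

[dependency-groups]         # PEP 735 groups, e.g. dev dependencies
dev = [
    "pytest>=8.0",
    "ruff",
]
```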
Speaker 2:So I think that's for you, because you're very savvy on these things.
Speaker 3:And that's the thing that I was also wondering about, because I also proposed using UV on another project, and they were using Poetry, and they said, well, maybe we shouldn't migrate to UV, because the idea is that everyone can just jump in and use the tools that we have.
Speaker 3:And in my head it's like, yeah, but in the end we have virtual environments; it's just that Poetry puts them in a different place. The concepts are all the same. Yeah, but for you, because you're very savvy. But this requires education of a team.
Speaker 2:Uh, I'm not saying you shouldn't do it, I'm just thinking about what the impact is.
Speaker 2:Like, is this a one-way or two-way door? Yeah. And, but, no, this is maybe a different situation from the one-way versus two-way door, because the question is more like, what is the long-term viability of a project like UV? Yes, right. If at some point Astral does something shitty with it, with the commercial side, and you want to move away, that is expensive. But every change incurs a cost, and then you need to evaluate: do we believe more in UV versus Poetry? And there are good arguments for that.
Speaker 2:I think you can find a lot of arguments for that. There's also an argument to say you want to explore the new evolutions, because it's important to stay a bit ahead of the curve. Um, but it's not as simple as changing a piece of code on your laptop. There's a team involved, people involved that are not typically looking at these types of things. You see this just as a tool, but they need to be educated.
Speaker 2:But then there's also the other question: if you've been using Poetry for a long time but you still have a hard time switching out of it, I think it's probably because Poetry abstracts a lot of things and maybe you don't understand what's happening under the hood. It doesn't even need to be Poetry, it can be something as simple as pip. It's just a change, right? Yeah, it's what you're saying.
Speaker 3:It's a change of whatever, yeah. And I also agree that there's a change in the way of working, even with using LLMs. I talked to some people, and they're like, yeah, it doesn't really fit in my flow of coding. And I understand that changing the way you do things has a cost, yeah, right. But, uh, I don't know.
Speaker 3:I mean, also, if we had talked about this a year ago, or six or eight months ago, I would also say, maybe let's wait a bit. Like, I was saying we should switch to Rye right before UV came out. And then UV came out: okay, now we switch again to this. And it's also a bit weird to always have to switch; it always feels like you're constantly learning a new workflow, right? I also understand that. But I really feel like today, maybe there's the argument of Astral being a for-profit company, but even then, I don't see a lot of good arguments to not use UV, even if it's just to build a community, so we're all using the same thing, we all agree this is the way to go, to adopt the PEP standards, and this and that. Would you ever, if someone's starting a project and asks what they should use, would you ever not say UV, and when?
Speaker 2:if you're starting a new project yeah, today I would use uv but like if someone a year ago would have said something else, and two years ago would have said something else, or maybe a year from now we'll say something else.
Speaker 3:You know, that is where we're at with package management and would you say that it was as a mistake to say UV today, if next year you say something different?
Speaker 2:for a new project. No, I think that's okay. Okay, I think what would be, what would change is like if something like the Python Software Foundation would adopt UV. Oh yeah, I think that would be From the moment that there is like it's officially part of the Python Software Foundation, it becomes also easy to bring this argument like we need to switch legacy stuff to this. It's worth the value because there will be support going forward yeah, from something that has existed way, way, way longer than Astral right.
Speaker 3:Yeah, I agree. No, but I definitely agree. I think I'll be happy if it goes under the PSF, the Python Software Foundation. You wouldn't?
Speaker 2:Yeah, I would, but I don't think it would happen. I don't think it would happen either.
Speaker 3:Maybe another side note: this is written in Rust. Do you think that poses a problem at all?
Speaker 2:In what sense? That Python is written in C, so the core developers are C-savvy?
Speaker 3:You could say that there's a, um, I don't know. Okay, I don't think there's a big problem, to be honest.
Speaker 2:But at this stage, definitely not. Yeah, yeah, okay, cool.
Speaker 3:So do you agree with this: always try UV first, and if that doesn't work, which is very rare, go back to what you used before? Yeah, for a new project, yeah. Okay, I think we can agree on that then as well.
Speaker 3:Um, what else? Maybe, uh, a bit more on the tech corner, maybe a library actually? So when you make REST API requests, there was Postman and I think Postman. There's some restrictions now, or something. I think Postman was very popular as a tool to make requests and save environments and save credentials and organize your requests to test things Right. But if I recall correctly and maybe correct me if I'm wrong Postman now changed the license or something, so you couldn't just use this freely as you could before, maybe for work environments or something. It was like Docker deal. I don't know. I don't know. I don't know. What do you use today if you need to test REST API calls?
Speaker 2:It changes a bit around.
Speaker 3:Oh really.
Speaker 2:To be honest, I've used CLI stuff, I've used Postman and alternatives, I've used Chrome extensions, so I don't really have a.
Speaker 3:Interesting, because someone.
Speaker 2:I personally tend to gravitate towards UI stuff.
Speaker 3:And what do you use for UI stuff?
Speaker 2:Postman Well, I've used Postman, but it's not Postman I'm using currently. I forgot the name.
Speaker 3:I can this one.
Speaker 2:Because I remember Bruno. No, no, I I've seen bruno, yeah, but it's not bruno why don't you use bruno? Maybe you just don't like the dog I don't use like it, no, no, it's not something.
Speaker 3:Uh, this is not something I'm doing every day okay I'm also not evaluating it every day yeah, I was talking to a colleague and he was really advocating for postman, um, because you can save, like, different directories, you can save different variables and all these things, and he was really pro and it's very good for postman, is very good for team-based development, so you can very easily share like types of things that you want to test against and this kind of stuff, but for me, indeed, it doesn't come up as often that I was really like, oh yeah, now use postman, and now let's figure out the ui and let's see how to do this, because a lot of the times I tried to use these tools like bruno or postman, but in the end I just kind of went back to curl because I was just doing this like it's very ad hoc yeah, it depends really what you're doing.
Speaker 2:yeah, but I mean, if you're building a backend server that is exposed to an api, you're probably testing against your local API server. A lot True this type of things, right True.
Speaker 3:This also came up when I was looking from Darren Burns. I think he's from the Textual team. So that's Python 2E from Textual, which is a power of HTTP client that lives in your terminal.
Speaker 3:So kind of like a 2E thing, so I thought it looked nice. I think in the end it's just you can also have environments and variables, but everything is on files, so you can still save very easily and share these things as well. So, yeah, I thought it looked cool and I think this is a type of application that maybe makes sense to stay in the terminal, because, for me at least, I use curl, which is already in the terminal. So maybe if I want something a bit more fancy, then I can still use a twee for this, even though, to be honest, not sure I will use it until I actually need it. Yeah, right, and I don't even know when I would need it yeah, looks, looks, cool looks cool right, I'm uh.
Speaker 2:To me this is a very personal thing, so the these you have a lot of these two. You also have a very cool one to do, uh, to do sql queries against the data with, like basically like a 2e version of dbeaver, and I always try them out and I think it works very well, like if, if, like you're using this 50 of the time, yeah, but for me it's like I use this like intensively for a day and then I don't use the week. It's more like that, and like when you're then not in a ui setting but more in a terminal setting, like in a terminal, like the shortcuts are always different and you need like half the time you need to spend learning these key shortcuts, yeah, and like it's not worth it, like the trade-off is not worth it to me, like if you know them you're gonna be way more efficient and you will know them if you use them a lot.
Speaker 3:Yeah, exactly, yeah, yeah, no, I see what you're saying. I see what you're saying. I think two is are kind of nice, but at the same time I feel like sometimes I think people build stuff on the terminal just to build stuff on the terminal, like it wouldn't be but, I, mean nothing against it, but I'm wondering still if there's a.
Speaker 3:This type of application should always be in the terminal because it's better. But yeah, but, yeah, maybe. What else? Do we have time for one more else? Do we have time for one more? I think we have time for one more, maybe. This is on software quality. Uh, it's a bit short. Well, it's from a summary from a paper, but I'm not going to go over the whole. Well, the whole thing is just describing Um, when we talk about software quality, I do sometimes wonder how do you measure quality?
Speaker 3:Like, especially now that I'm? It's hard. As I get more, yeah, as I get more experience, I feel like people look at me more to kind of say, okay, this is good, this is not good, which we do, which we not do. And then there was a paper from google trying to describe what software quality is interesting. Um, and maybe I'll go so I have an image here on the screen that says process quality arrow to the right code quality arrow to the right system, quality arrow to the right and product quality. I'm going to go a bit backwards. So, according to the paper, they say product quality is like if you build a product right, like you have. I don't know something what your users think like when you make a change, is this a good change? You change the ui, you have a new recommender algorithm. Do the users like it or not?
Speaker 3:Um, hard to measure sometimes. Right, you can do a lot of a b testing, but it's not very easy. It's a bit outside your control, but I'd argue it's the only thing that's relevant indeed, and that's that's why I wanted to start. Right like this is the important thing. So it's like, ideally, this is what you should know, but it's very hard to really get a feel and you know, but that's that's the idea. Then you have system quality, which they mentioned here defect rate, reliability, performance, security and privacy. So how often do things break? How often things are not operational? How often do you need to go there and do a hot fix or something? Um, which you can measure way more easily? But at the same time, it doesn't happen very often, right, it's very sporadic. So how can you test for these things? How can you make sure that this change will decrease this failure rate, all these things?
Speaker 3:Then they have code quality, which is what a lot of people think of software quality, like they think of maintainability, complexity, testability, readability, yeah, which a bit subjective still, but you can argue that these things are best practices, these things have followed the standards and all these things. And then on the left, the process quality. So they actually talk about how do you write code right? When you write code, do you have tests? Do you have CI? Do you have pre-commit hooks? Do you have? What's the strategy? How do you distribute work right? Do you have peer reviews?
Speaker 3:And this thing is like the easiest thing to measure, right. Like how much time does this pull request stay hanging? And all these things. And what I thought it was interesting here as well, that they, according to the research, improving. Even if you improve any of these things right, but like, even if the process quality you enforce more, the review process, the checks and all these things, there are trends or correlations that lead to everything down the line. So for me at least, when I read this, it kind of felt like this is ideal, but it's not easy for us to measure these things. But even if you just control the things you can, which is the quality of the process in itself, everything will also improve, everything tends to improve. You're not convinced.
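As a concrete example of the "process quality" levers mentioned here, a minimal pre-commit configuration might look like the following; the hook repositories and ids are real, but the pinned versions are illustrative:

```yaml
# .pre-commit-config.yaml: checks that run automatically before each commit
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.9.6            # illustrative pin
    hooks:
      - id: ruff           # lint
      - id: ruff-format    # enforce formatting
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0            # illustrative pin
    hooks:
      - id: end-of-file-fixer
      - id: check-yaml
```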
Speaker 2:I think it's a very fine balance between these things. Say more. I think you could spend a lot of time on code reviews. You can spend a lot of time on maintainability, on making code less complex, on making it more testable, on making it more readable, on making it more comprehensible, and not add a single value for your users.
Speaker 3:Yeah, that's true.
Speaker 2:I think it is very dangerous to look at any of these things in isolation.
Speaker 3:That's true.
Speaker 2:Or to go too far. Take code reviews as an example: I think we've all been in situations where there is a code review process just because there is a code review process, and the process is that a second engineer needs to look at it and approve it. And what happens in practice? "I'm going to send it to you, just approve it, and off to prod."
Speaker 2:Yeah, we've all been in these situations. So I think all of these things are important, and it's an interesting way to slice it: process, code, system and product quality. But to me, by far the most important one is product quality. And don't look at them in isolation.
Speaker 3:I think all of them are important, but it's a danger to focus too much on one of them. Well, yeah, I think I agree, and I think engineers especially tend to focus very, very much on the parts before user experience. Do you think engineers have a tendency toward tunnel vision, that it's harder for engineers to take a step back and look at the broader picture, like why are we doing these things?
Speaker 2:but I think that is why you work as a team and, like, what you need is in the team is also people that really look at this from a from a product management point of view, like what is it that you're actually building? Like you don't just want to build the best mobile app, you want to make the most useful mobile app right yeah, true that is, I think, anything. That is something that you also, when you build teams like you, need that diversity as well to be represented correctly.
Speaker 3:I agree.
Speaker 2:And there, of course, it interlinks very much, because if you have your process, your code and your system quality in place and you want to add a new feature, it's going to go quickly.
Speaker 3:So it's very much linked, right? Yeah, indeed.
Speaker 2:it needs to be in balance.
Speaker 3:And I also wonder: if you add metrics for these things, does the act of adding a metric to code quality, for example, defeat the purpose a bit? Because then you start focusing on the code quality and not on the thing that actually matters. I feel like some metrics are interesting to keep an eye on, but when they become the goal, they become bad metrics, because the metrics can look really good while you're focusing on the wrong thing.
Speaker 2:I think you see this a lot with open source projects, and I'm talking about an extreme here, of course, but typically people don't open source until the test coverage is 100%. Yeah, indeed. You build your useful tests first, for your core functionality, the things you definitely need to test.
Speaker 2:That probably puts you at something like 60 to 80 percent. But then you're going to optimize, because everybody's going to see it and you want it to look high quality. So you assess which lines are not covered yet and write some easy tests just to cover them. Yeah, that's true.
Speaker 3:Which shows that maybe that time should have gone to user experience instead, for example. But code coverage is a very good example, because you can have 100% code coverage and still have shitty tests, exactly, right? So, yeah, true, I fully agree. Maybe also on that, one thing this triggered for me, and this is more about side projects, the extra things I want to do that are not the big chunk of my day: I'm starting to switch my mind.
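To make the coverage point concrete: here is a toy illustration (invented for this writeup, not from the episode's sources) of how a test can earn 100% line coverage without checking anything, next to one that exercises the same lines and actually pins the behaviour down:

```python
def shipping_cost(weight_kg, express=False):
    """Toy business rule: flat rate plus a per-kilo charge, doubled for express."""
    base = 5.0 + 2.0 * weight_kg
    return base * 2 if express else base

def test_coverage_only():
    # Executes every line, so a coverage report shows 100%,
    # but asserts nothing: any bug in the pricing would pass.
    shipping_cost(3)
    shipping_cost(3, express=True)

def test_meaningful():
    # Covers the exact same lines, and actually verifies the behaviour.
    assert shipping_cost(3) == 11.0
    assert shipping_cost(3, express=True) == 22.0
```

Both tests look identical to a coverage tool; only the second one would catch a regression.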
Speaker 3:When I started my career, I know I wrote a lot of shitty code, and the idea was: okay, I need to write better code. I still do, but I'll get to that. Then you get more experience, and sometimes you write a piece of code, a project, and you think: man, this looks nice, this is what I wanted to do, and you're really happy. And then I find myself wanting all the code I write to be like that, and then I don't go anywhere.
Speaker 2:Yeah, you see what I'm saying.
Speaker 3:So, also going back: maybe it's fine to just hack things up, it's fine to write spaghetti code that just works.
Speaker 3:Again, I think there's a bit of a difference, and I think we talked about this with LLMs as well. I think it's okay to write shitty code, but with different eyes on it. It's okay to write a shitty function when you know what the function does, you don't have a whole bunch of dependencies, and you're not setting global variables. It's a different type of shitty code, and you know that if you need to rewrite it, you can just go ahead and rewrite it, because you know where it sits. And I think that's quite okay, right? Especially because it's much better to write shitty code and make progress on a project, then go back and refactor it, than to say no, everything I write needs to be perfect, and then not go anywhere. And I think the same goes for the unit tests and the code coverage you mentioned.
Speaker 2:Yeah, for personal projects I definitely have the same view. Of the ten ideas you have, probably 90% are never going to get anywhere, so it's better to just hack it together, see if it sticks, see if it's valuable, and then improve on that. Because if you put your standards too high, you're probably never going to get to completion.
Speaker 3:Yeah, true. And I'm also wondering, but I think it's also a personal thing.
Speaker 2:I want to try the ten things even if nine of them don't pan out, and you can only do that if you hack things together. Whereas if you say: this is my goal, my singular goal, and I want to do it in a perfect way...
Speaker 3:I mean, it's just a different way of attacking things, right? Yeah. But even now I'm wondering: even for work, for projects in a team, maybe it's okay to hack some things up there too, but still write them in a way that you can go back and refactor easily. There are some decisions where you need to stop and say: okay, if I make this change here, if I hard-code this variable here, it's going to be very hard for someone to find this needle in the haystack later. But some things are fine. Yeah, okay, maybe there's a better way, I don't need this triple-nested for loop, but it works for now, I know what the function does, and I can break it down later. That's okay spaghetti code, right?
Speaker 3:And I'm starting to reflect a bit more on this, because sometimes, even while reviewing code, I look at something and think: oh, I wouldn't do that, this feels very hacky, I can think of five cleaner ways to write it. But maybe it's okay, because I still know what it does, and I know I could refactor this one function without touching everything else.
Speaker 2:But of course, there's a bit of an educational component to it as well. It's one thing if you know it can be improved and you know the ways to improve it.
Speaker 3:That's fine; it's different from hacking it together because that's the only way you know how. Yeah, indeed. And it goes back a bit to what I said: in the beginning I was hacking things up, and now I feel like I know better, but now I'm going back to hacking things up. That doesn't mean I'm back at the starting point, right? Because the eyes you have on it are different. So, yeah, cool. Maybe one last quick thing I thought was funny. I'm not sure if you saw it; I shared it on our internal Slack as well. It was "C# for Gen Z". No? Alex, I think you're going to like this.
Speaker 3:You can get into coding after this. So it's a tweet, on Twitter, or X now: "As we encourage the younger generation to try out programming, most of our keywords in C# have been permanently renamed. This should increase the number of younger game developers in the next few years." So instead of "public float risk", it's "highkey float risk". Instead of "private bool", it's "lowkey facts", so bool becomes facts. Instead of try/catch, you have "fuck around" and "find out". "If" is "vibe check", "return" becomes "it's giving", "true" is "no cap" and "false" is "cap". An exception is a T, as in tea, so instead of Debug.LogError you "shout out" and "spill" the T. And instead of "throw", "yeet". I thought it was pretty cool.
Speaker 2:Do you get this? I think this is something that Gen X would find really funny, and Gen Z would find very cringe. I don't know, let's check.
Speaker 3:What do you think? What say you, Alex?
Speaker 1:I mean, I don't use slang, so for me, I don't know. Okay, I just thought it was funny, I guess. Okay, maybe, maybe not.
Speaker 3:This looks like a bad joke, you think so? I thought it was okay. Okay, wow, wow, okay. Someone also made a comment that they should replace the semicolon with "fr", like "for real, for real". Okay, maybe it was just me then. All right, we can edit this out, Alex, it's okay.
Speaker 2:It's funny, Murilo, it's funny.
Speaker 3:Thanks, Bart. I think I'm okay too. I think we had one more thing, but we can keep it for another time, so we don't go over our usual length. Anything else you wanted to bring up? Any shout-outs? Anything? No, I don't think so. Anything from you, Alex? You sure? Nothing from the news you want to cover? Okay.
Speaker 2:Can you do a Gen Z closing? Goodbye. Or some slang you can throw in? No, I think Murilo's better at that.
Speaker 3:But am I technically Gen Z or no?
Speaker 2:From what year are you?
Speaker 3:I think Gen Z starts in 1997, so I think I am, then. And you're from? Okay, you're from 2000, yeah. Okay, a Gen Z closing... no, I'm going to ask ChatGPT for this. No, I don't know. And you just called me a dad for laughing at this joke. I feel very shy now, I feel very insecure, so maybe we just call it like this, okay.
Speaker 2:We'll come back to this another time.
Speaker 3:Yes, I'll prep something, I'll do some market research, see what the kids are using these days, and then I'll come back to it. Thanks a lot, everyone, for listening. See you next time. Bye. Bye-bye.
Speaker 2:You have taste in a way that's meaningful to software people. Hello, I'm Bill Gates. I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong.
Speaker 1:I'm reminded, incidentally, of Rust here, rust, rust, rush. This almost makes me happy that I didn't become a supermodel.
Speaker 2:Kubernetes, boy. I'm sorry guys, I don't know what's going on.
Speaker 3:Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here.
Speaker 2:Rust Rust Data topics. Welcome to the data. Welcome to the data topics podcast.