Code with Jason
298 - AI-Assisted Rails Upgrades with Ernesto Tagwerker
In this episode I talk with Ernesto Tagwerker about using AI for Rails upgrades, AI as an unblocking tool rather than just a speeder-upper, and the dangers of AI-generated "speculative code" that adds liability without value.
Links:
A Paper Newsletter For Developers
SPEAKER_01Hey, it's Jason, host of the Code with Jason podcast. You're a developer. You like to listen to podcasts. You're listening to one right now. Maybe you like to read blogs and subscribe to email newsletters and stuff like that. Keep in touch. Email newsletters are a really nice way to keep on top of what's going on in the programming world. Except they're actually not. I don't know about you, but the last thing that I want to do after a long day of staring at the screen is sit there and stare at the screen some more. That's why I started a different kind of newsletter. It's a snail mail programming newsletter. That's right, I send an actual envelope in the mail containing a paper newsletter that you can hold in your hands. You can read it on your living room couch, at your kitchen table, in your bed, or in someone else's bed. And when they say, What are you doing in my bed? You can say, I'm reading Jason's newsletter. What does it look like? You might wonder what you might find in this snail mail programming newsletter. You can read about all kinds of programming topics like object-oriented programming, testing, DevOps, AI. Most of it's pretty technology agnostic. You can also read about other non-programming topics like philosophy, evolutionary theory, business, marketing, economics, psychology, music, cooking, history, geology, language, culture, robotics, and farming. The name of the newsletter is Nonsense Monthly. Here's what some of my readers are saying about it. Helmut Kobler from Los Angeles says thanks much for sending the newsletter. I got it about a week ago and read it on my sofa. It was a totally different experience than reading it on my computer or iPad. It felt more relaxed, more meaningful, something special and out of the ordinary. I'm sure that's what you were going for, so just wanted to let you know that you succeeded. Looking forward to more. 
Drew Bragg from Philadelphia says Nonsense Monthly is the only newsletter I deliberately set aside time to read. I read a lot of great newsletters, but there's just something about receiving a piece of mail, physically opening it, and sitting down to read it on paper that is just so awesome. Feels like a lost luxury. Something about holding a physical piece of paper that just feels good. Thank you for this. Can't wait for the next one. Dear listener, if you would like to get letters in the mail from yours truly every month, you can go sign up at nonsensemonthly.com. That's nonsensemonthly.com. I'll say it one more time. Nonsensemonthly.com. And now without further ado, here is today's episode. Hey, today I'm here with Ernesto Tagwerker. Ernesto, welcome.
SPEAKER_00Hey Jason, great to be here and chat with you again.
SPEAKER_01Yeah, uh you're coming close to setting a record, I think, of most times appeared on the Code with Jason podcast. Um I'll have to go back and count. Um, but for anybody who hasn't heard you on the numerous uh podcasts that you've been on before, um, can you tell us a bit about yourself?
SPEAKER_00Yeah, um I'm founder and CTO at fastruby.io. Um we do a lot of work with technical debt and Rails upgrades and performance optimization. And FastRuby itself is a productized service by Ombu Labs, and that's our consulting side, and we do a lot more work with AI custom models for our clients. And uh I'm happy to talk about anything related to like technical debt remediation and the intersection with AI and agents and all that.
Why AI Fits Rails Upgrades
SPEAKER_01Yeah, and today I was interested to talk with you about the ways that you're using AI in connection with your Rails upgrade work. Um that's something you and I have talked about offline before. Um and it makes a lot of sense to me because Rails upgrades, it's the kind of thing, correct me if I'm wrong, but there's a certain amount of grunt work to it and a certain amount of probably just like researching and finding out, finding out like, okay, what's this difference? What do I need to do exactly? Blah, blah, blah, ripe for AI automation. Um, but anyway, tell me about that.
SPEAKER_00Yeah, yeah. There are a lot of things in an upgrade project that are you know very repetitive, and for that, AI is great because you can automate part of it, or you can develop skills for Claude or something like that that will help you uh do some of the repetitive work. And then there's also like the other side where you know projects will like monkey patch Rails and extend like private APIs and stuff like that. And then you kind of have to go dig deeper and try to figure out why is this Active Record callback different? And why am I seeing this failure in CI? Um, so yeah, there's a lot of debugging and investigation when it comes to an upgrade project. And what we do is work a lot with monoliths that have been around for more than 10 years. So a lot has happened in those 10 years, and we come in and we try to figure out how to get them to like Rails 8 and Ruby you know 3.4 or 3.5.
SPEAKER_01Okay. And how did you start? Um, like okay, so at some point in the past, there was zero AI involved in your upgrade projects because AI didn't exist in the way it does now. Um when you first started applying AI, what were some of those ways that you applied it, if you remember?
Building The AI Upgrade Roadmap
SPEAKER_00Yeah, we've always had this drive to like automate part of the process. We wrote a gem called Rails Upgrade many, many years ago to basically help us with the strong parameters migration. And you know, a lot of the work could be done with scripts and transformations. Um but then when AI came and you know became like so popular, we saw it as uh well, actually as a risk. You know, 80% of our revenue comes from upgrades. So someone could develop like an AI agent that basically replaces fastruby.io. So about two, three years ago, we decided to like turn that into an opportunity and start building our own AI tooling to be the service that does use AI to make the upgrades easier. Um and about two months ago, we launched our AI enhanced Rails Upgrade Roadmap, um, which basically takes uh project source code and does a bunch of calculations behind the scenes, uses a lot of data sources to say, okay, this project has 17 deprecation warnings that you're gonna have to fix for the next jump, but also for future jumps. And it gives you a plan. If you have an application that's running like Rails 2.3, and you want to take it all the way to Rails 8, you can plug in your GitHub repo, submit it, and then you're gonna get an action plan that's tailored to your application. And that's one of the tools that we just released, and we have more coming down the pipeline uh to make it easier for you know clients that might not be ready to hire us, but you know, they might want to do it themselves. So we want to be you know driving innovation there and coming up with like more skills for Claude to make it easier for teams to upgrade.
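A roadmap like the one described here needs raw inputs such as deprecation counts. As a rough, hypothetical sketch of that first step (not FastRuby's actual tooling), you can tally the `DEPRECATION WARNING:` lines Rails prints during a test run:

```ruby
# Hypothetical sketch: tally unique Rails deprecation warnings from a
# captured test-suite log, the kind of raw input an upgrade roadmap
# tool might aggregate before planning version jumps.

def tally_deprecations(log_text)
  counts = Hash.new(0)
  log_text.each_line do |line|
    next unless line.include?("DEPRECATION WARNING:")
    # Keep only the message itself, dropping the call-site suffix.
    message = line.split("DEPRECATION WARNING:").last.strip
                  .sub(/\(called from.*\z/, "")
                  .strip
    counts[message] += 1
  end
  counts.sort_by { |_msg, n| -n }.to_h
end

log = <<~LOG
  DEPRECATION WARNING: update_attributes is deprecated (called from app/models/user.rb:10)
  DEPRECATION WARNING: update_attributes is deprecated (called from app/models/post.rb:22)
  DEPRECATION WARNING: Dangerous query method used with .order (called from app/models/user.rb:30)
LOG

tally_deprecations(log).each { |msg, n| puts "#{n}x #{msg}" }
```

In practice a real tool would also attribute each warning to the Rails version that removes the deprecated behavior, which is what turns a flat count into a per-jump plan.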
SPEAKER_01And can you refresh my memory a little bit? Like the pre-AI format of it. Would you do some kind of like discovery process and put together a roadmap and stuff like that? Which which components of the process were already there before AI? Is this is this kind of automating and augmenting something that was already there or adding something new or both?
SPEAKER_00Yeah, it's kind of like gluing it all together. Um, you know, there are so many different resources out there. Like you could use um Rails Diff, which basically shows you the diff between versions of Rails. Um there's RailsBump.org, an open source project that was started many, many years ago. Uh it was called Ready for Rails, like Ready for Rails 4. Uh and then Manuel, this engineer in Germany, took it over and integrated it with like GitHub Actions and stuff. And then you know, he kind of gave it to us because he was like done maintaining it, and now we're running that. And RailsBump basically takes your Gemfile.lock and it looks at all the dependencies you have and it tells you like which versions are not going to be compatible with your future versions of Rails that you need to upgrade to. Um and then you know, there are like RuboCop cops that you could use for guiding your upgrade project. So there's definitely a lot of tooling, or there was a lot of tooling even before AI. And now with AI, I think uh you know it's definitely like useful, but I don't think you can like submit your entire application to ChatGPT and be like, give me an action plan that is like accurate to my project and my dependencies. Uh not saying you know, you might be able to do something like that, but then there are also like you know privacy implications, and I don't know if you want to be feeding all that data to ChatGPT, or you want to be very careful if you do that.
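The Gemfile.lock analysis described here starts from data that Bundler itself can parse. This is a hypothetical illustration of that first step only (not RailsBump's actual implementation): list each locked gem and its exact version, the pairs a compatibility checker would then test against a target Rails release.

```ruby
require "bundler"

# Parse a Gemfile.lock and return { gem_name => locked_version }.
# The lockfile below is a minimal made-up example.

LOCKFILE = <<~LOCK
  GEM
    remote: https://rubygems.org/
    specs:
      rack (2.2.8)
      rake (13.0.6)

  PLATFORMS
    ruby

  DEPENDENCIES
    rack
    rake

  BUNDLED WITH
     2.4.10
LOCK

def locked_gems(lockfile_text)
  Bundler::LockfileParser.new(lockfile_text).specs.to_h do |spec|
    [spec.name, spec.version.to_s]
  end
end

locked_gems(LOCKFILE).each { |name, version| puts "#{name} #{version}" }
```

The hard part RailsBump adds on top is the compatibility database itself: knowing, for each gem, which version ranges work with which Rails versions.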
SPEAKER_01And I imagine that the upgrade process certainly still involves a non-trivial amount of human judgment. Would I be right in that?
Pre‑AI Tooling And Limits
SPEAKER_00Yeah. Yeah, there you know, there are some gotchas. Like um we usually try to go like one minor version at a time. Uh but you know, Rails 5.2 to 6.1, we would usually say, let's just do one jump. No reason to do 5.2 to 6.0 and then to 6.1, because for big projects that's like a lot of QA time, and um just go one jump there. Um but I think most of the human judgment in an upgrade project is basically like estimating, like how long is it going to take you to upgrade from X to Y? And we still get a lot of questions about that. Companies still need to get budgets approved for those projects, so that's when they come to us. And one of the things we used to do before the AI roadmap is um basically do everything manually, get access to the source code, get access to the dependencies and forks and all that, and then basically compare a generic action plan to our client's source code and dependencies, and then pick and choose which action steps we needed to include in the plan. And then at the same time, we needed to add like a blind estimation process based on our you know 50,000 hours of doing upgrades for clients. We would look at all the data and all the metadata to say, okay, this is gonna take between five and seven developer weeks. Right now we don't have the estimates in the AI enhanced roadmap, and I don't think we would want to add them, because it's just so tricky to estimate, and I wouldn't give that to an AI right now.
Human Judgment And Estimation
SPEAKER_01Right. Um okay, so there are so many um aspects of AI that are so interesting to think about. Um and one is people view AI as being a speeder-upper. Um like I can do the same thing in half the time or an eighth of the time or something like that. Um but I don't exactly look at it like that. Like sometimes that's the case, certainly. Um, but most of the time the way that AI speeds me up is like there are just certain like categories of work that I kind of just don't do anymore. Um like for example, I've been doing some side projects recently in Python and in Rust. Um my Python knowledge is not particularly deep. Um I could write a Python program from scratch with no AI, but anything non-trivial, I'd have to go look up a lot of syntax. Um and then Rust, I don't know at all. I couldn't even do a hello world in Rust, but with Claude Code, I've written a whole complex program in Rust, and I never touched the code at all. Um by the way, this is a throwaway experimental project. I would never do that for a production project. Um anyway, I don't exactly look at it as being a speeder-upper, it's just like uh it takes away these things that were barriers before and stuff like that. Um so I'm curious how you look at it. I imagine it's not exactly just doing the same exact work faster. I imagine it's different somehow, but I'm not sure how. Can you speak to that a bit?
SPEAKER_00Yeah, and I love that you bring this topic up because there's a lot of hype there, you know, and someone who's like, yeah, it's gonna be five times faster, they're probably full of shit. I'm sorry, pardon my French, but if you come out and say, Oh yeah, our teams are five times faster, I would ask, like, yeah, what data are you using to come up with that? Well, and just what does that mean?
SPEAKER_01What what does five times faster mean to them?
AI As Unblocker, Not 5x Speed
SPEAKER_00Yeah, yeah. And I see, you know, there are studies pushing back against the hype, and it's like, yeah, it is making us faster, it is speeding us up, but it's not 5x, it might be like closer to like 1.5 or 2x. Um, what I do, my main use case for AI, or Claude or Cursor uh or whatever, ChatGPT, is to get me unblocked. You know, sometimes when I find something and I have no idea or no motivation to do the thing, I'll just ask ChatGPT like, oh, I need to do this thing and I don't know anything about it. Can you just point me in the right direction? And then I find that you know it's fixing my blockers faster. Instead of me Googling and plugging in keywords when I don't even know the right keywords, uh ChatGPT will usually give me a better idea of where to start.
SPEAKER_01Yeah, I want to unpack that a little bit. Um, you said the thing about motivation or something. Um, that is definitely something for me. So like I've referred to AI before as like a mental lubricant, or maybe even mental WD-40, which is not a lubricant, it's like an unsticker, you know. Um so when there's a task, sometimes, you know, I'm just in like a mental fog or I'm like tired or whatever. And I don't know about you, but sometimes I'll just like look at something somebody wrote in Slack and I reread it three times, and I'm just like, yeah, I'm looking at the words, but they have no meaning to me, and I just like can't think. Um and so I do this a lot. This is a pattern I use a lot now. I will take an entire Slack conversation and paste it into um AI and just say, like, hey, what are these people talking about? And like, you see this conversation, I'm Jason. So like help me figure out what I need to do. Um like I don't even have the brain power to formulate a smart question, so I just like paste in the context and I'm just like, help me. It's as vague as that. And then it helps me get a little bit unblocked, and it's like, oh, okay, I see that they're talking about this, so maybe my next step can be this, and it helps me move forward a little bit. Does any of that resonate with you at all?
SPEAKER_00Yeah, yeah, for sure. And you know, we're constantly bombarded with notifications and we're constantly distracted, and sometimes you're trying to do some code, and then you get a text message from your wife or you know, an alert from Slack where you know it's like an emergency, but it doesn't really pertain to you. Like you don't really need to look into that. What I will do sometimes when I'm like struggling to focus is I'll feed whatever error I found in the console to ChatGPT or Copilot or whatever code editor I'm using and be like, oh, I did this thing and I got this error. And then that will give me like a couple ideas of like, oh, it could be this, you know, Postgres is not running, so you need to like start Postgres uh or something like that. And it's like, oh yeah, duh, that's obvious, and maybe it would have taken me like two minutes to find that, but if you just plug it in, in seconds it's like okay, cool, here, try these things and this will fix it.
Low RPM Problem Solving
SPEAKER_01Yeah, it's interesting. It's an application of an idea that I've been using for decades at this point, which is like um always try to operate at the lowest RPMs possible. So you know, the analogy there, of course, is like uh you can drive a car in first gear and have the pedal to the metal and only go like 40 miles an hour or whatever. That's not realistic. You wouldn't go 40 miles an hour in first gear, but anyway, side note, okay. Sorry. Side note to my side note. One time I went to Las Vegas and I did that thing where you can drive a Ferrari, and it was insane. Well okay, I don't remember which time, I've done that several times. Anyway, um in like second gear you could go like 40, third gear you could go like seventy. It was insane. Yeah. Um anyway, uh what was I even saying? Oh yeah, operating at low RPMs. Okay. So it's when I like get an error message or something, what I've noticed a lot of people do is like they'll scrutinize the error message and be like, okay, what is this? What does it mean? And I'm like, dude, like don't think about it. Just copy paste it into Google and see what it says. Like obviously read it once in case it's something obvious, but if it's not something obvious, don't think about it, just paste it into Google and use Google as the mental lubricant. Um and 90% of the time that'll get you through it. Same exact idea with AI. When there's something, error message, whatever, read it once to see if it's something obvious. If it's not, just paste it into AI. You don't even need to type anything more, you just paste it and hit enter, and that's all. Um and 90% of the time it'll get you past that. So again, AI is not always about speeding me up. Sometimes it's about doing the same work at lower RPMs. That way I don't run out of juice before the day ends. Um I can keep actually working the entire day.
SPEAKER_00Yeah. Yeah, I see it also as uh rubber ducking on steroids, you know. Uh just the process of feeding the prompt to the AI is like, you know, useful to you because you're trying to connect the dots, and then maybe the answer comes to you even before you send the prompt. But if it doesn't, then the prompt will probably get you there faster than if you were just um you know rubber ducking 100%.
SPEAKER_01Yeah, and I gotta say this also, like AI is so much more powerful when it's combined with like common sense and stuff like that. Um I've said this before, AI is kind of like a multiplier. Like, take somebody who's already really effective, and AI will make them like uh ten times as effective. But take somebody who's not very effective, give them AI, and it might not really make a difference. Um so there's things like, when you're trying to fix a bug, um AI does not obviate the need to clearly articulate exactly what the bug is. It's crazy how often people will see a bug and then start to try to fix the bug without even having articulated or understood what the problem actually is. And so like using AI has to be coupled with these like common sense practices that you would do anyway. That doesn't have anything to do with Rails upgrades or anything, but um side note that applies to pretty much anything.
Rubber Ducking On Steroids
SPEAKER_00Yeah, no, and we see this a lot on the Ombu Labs side. We are working with teams to basically get them to use AI tooling effectively and securely, you know. Sometimes uh teams, you know, are not leveraging those tools as much as they should. And part of it is that they're not wiring them correctly. And I'd love to talk about this. And you might uh you know eventually want to add like an MCP server to uh Saturn CI. You know, um I know that every service out there, like New Relic and Honeybadger, all the services that we use in production have or will have soon an MCP server that can feed data back to your um models or like your AI agents. Um I think the common mistake that I see is that teams start using Cursor or Claude, but they don't set it up correctly. So one of the things we recently published is an assessment for these teams: basically come to us and we'll give you an idea of where you're falling short. We're calling this a maturity model for engineering teams. And we can look at all the tooling and all the practices your engineering teams are using and tell you where you're falling short. It's like, oh well, you have RuboCop and you have Reek and you have RubyCritic for this Rails code base, and you have code quality standards and complexity standards that you hold your humans accountable to when they miss that mark. But have you configured Claude to feed off this data? Do you have agents that feed off APM data in production, or do you have agents running that will look at CI and will consume like the failure that happened in your feature branch and give you a suggestion right away in like minutes? You're like, oh, I wrote this feature, I submitted the pull request, and I realize it broke something in the accounting module. Then CI can quickly report back to an agent, and then that agent, like Copilot, could come in and say, hey, this is great, but it's gonna break this other feature.
So here's a suggestion to make it work for your feature and for the existing features. Um have you thought about that for Saturn CI?
Wiring AI To Your Toolchain
SPEAKER_01Uh I've definitely thought about that. Um, but as you said off the air, it's kind of a long way off. There's a lot of more basic stuff that's gonna make sense to do before that, um, including especially uh sales and marketing. Uh as of this recording, I have zero customers, so I'm not gonna add anything like that before I get some customers using it. Um but you touched on something which I find very interesting, and we're working on this at my job at Cisco Meraki right now. Um and that's kind of the idea, tell me if this is what you're getting at at all, um, it's something like the idea of applying AI to ops. Um, like for example, something that we're working on is ingesting all of our deployment-related data, such as here's when certain deployments went out and stuff like that, here's when certain PRs were merged, that kind of stuff. Then you can answer questions like, hey, is this particular change in production right now, or is it not, that kind of thing. Um and what you mentioned with like ingesting APM data and stuff like that, especially because all these things can be connected to one another, and then you have a really powerful ability to answer questions. And of course, if you put all that stuff in a database, you can have an LLM translate a natural language question into a SQL query and then get your answer. Is that the kind of thing you were touching on? Or does that overlap with what you were thinking about?
SPEAKER_00Yeah, yeah, and that's a really powerful idea to have an agent like answer questions about your code base. Um we're not quite there yet, uh, but we definitely are helping teams set up their tooling so they can answer questions like that. Um I can think of a use case that you could ask this chatbot or this agent, uh, yeah, is this change in production? But you could also ask it to, you know, if it's properly wired, feeding off the APM new relic or something like that, you could say, Oh, I see uh performance regression in the accounting module. Uh now our response times are like 20% worse. Uh what changed in you know, since that performance degradation started happening? And then that chatbot could be like, oh, actually all these changes were shipped between this date and this date. And maybe these changes could impact that performance. Um yeah, that that would be super powerful if if you guys get to that point.
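A common safeguard when letting an LLM translate questions like these into SQL is to validate each generated query before executing it. Here is a minimal hypothetical sketch of such a guardrail; the rules and names are illustrative, not from any product mentioned in this conversation:

```ruby
# Allow only single read-only SELECT (or WITH) statements before
# handing LLM-generated SQL to the database.

FORBIDDEN = /\b(insert|update|delete|drop|alter|truncate|grant|create)\b/i

def safe_llm_sql?(sql)
  statement = sql.strip.chomp(";")
  return false if statement.include?(";")            # no multi-statement payloads
  return false unless statement.match?(/\A(select|with)\b/i)
  return false if statement.match?(FORBIDDEN)        # no write/DDL keywords
  true
end

puts safe_llm_sql?("SELECT COUNT(*) FROM test_runs WHERE created_at > NOW() - INTERVAL '24 hours'")  # => true
puts safe_llm_sql?("DROP TABLE test_runs")                                                           # => false
puts safe_llm_sql?("SELECT 1; DELETE FROM users")                                                    # => false
```

A keyword filter like this is only one layer; running the query through a read-only database role is the more robust complement.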
Agents, APM, And Production Signals
SPEAKER_01Yeah. Yeah, I think it's just a generally good pattern of like taking any database and layering an LLM on top of it so that you can ask questions of your data in natural language. Speaking of Saturn CI, I've done that with Saturn CI uh to ask questions like how many uh test runs were run in the last 24 hours or something like that. Um and it's really interesting. It's one of those magical computer moments, you know, to be able to just ask the question, and then it gives you the answer and it like shows you how it got the answer, and it's like, oh yeah, that's actually correct. That's amazing that it figured that out. Um okay, so you mentioned, I think you mentioned, uh, MCP. This is something that I personally am behind on. I don't understand it as well as I would like to. I've asked a couple other guests in the past, like, hey, what the heck is MCP exactly, and like how are you using it and stuff like that? Um, and I'd like to ask you the same thing. Um how are you using MCP in your work? And I don't want to put you on the spot, but if you're able to kind of explain what it is and stuff like that, just anything you're able to share.
SPEAKER_00Yeah, again, I'm not an AI expert. I'm also still learning as I go, but the way I see it, it's just a protocol. You know, MCP is a protocol, and then you can implement it in any way you want, but you know, following the protocol.
SPEAKER_01Model context protocol, right?
What MCP Is And Why It Matters
SPEAKER_00Right. And then um basically LLMs can feed off this data and use it in their context. Um so that's as far as I know, and I understand like the potential behind that, but I also understand like the security implications there, you know, because sometimes uh what we've seen is that some clients want to use AI and they want to integrate everything, but not everyone should have access to everything, right? So then it becomes a question of like, can you build agents that are only accessed by DevOps or only by engineering teams, and it won't leak sensitive information. Um, and sometimes that's where we come in and we try to help our clients figure out like, yes, this is great, and you want to start using AI, and you want to have an MCP server, and you want to have an agent that integrates with all these different MCP servers to solve a specific problem. Uh, but you have to be very careful about who's going to be using this agent, because you know, if you're gonna have all your infrastructure using AI, you don't want to leak data from like the sales department to like the engineering department, with data that they should not be seeing. So I think it's super powerful, but it also raises the question of like security and privacy and making sure that people only see what they're supposed to see.
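For a concrete sense of the protocol being described: MCP is built on JSON-RPC, and a server advertises its capabilities by answering a `tools/list` request with tool descriptors (a name, a description, and a JSON Schema for the inputs). The specific tool below, a CI-failure lookup, is hypothetical:

```ruby
require "json"

# A made-up tool descriptor of the shape an MCP server returns from
# tools/list: the client's LLM reads the name, description, and input
# schema to decide when and how to call the tool.

tool = {
  name: "get_latest_ci_failure",
  description: "Return the most recent failed CI run for a branch",
  inputSchema: {
    type: "object",
    properties: {
      branch: { type: "string", description: "Git branch name" }
    },
    required: ["branch"]
  }
}

response = { jsonrpc: "2.0", id: 1, result: { tools: [tool] } }
puts JSON.pretty_generate(response)
```

The access-control concerns Ernesto raises live outside this descriptor: the protocol says how tools are described and called, not who is allowed to call them, so that enforcement is up to the server and its deployment.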
SPEAKER_01Yeah, I'm curious. Have you experienced or witnessed any like bad AI moments? Like, I don't want to say necessarily something like disastrous, but like something where it's like, oh wow, we this is a good reminder, we need to be careful with this. Have you seen anything like that?
Security, Access, And Risk
SPEAKER_00Um no, I haven't seen a lot. I mean, that I can talk about publicly, uh, but I have seen um infrastructures that are not properly configured and access that's not properly set up. So then when we get clients that want to build something on top of that, we raise questions about, hey, have you looked at the roles and the departments and the access of the data in the current infrastructure? And then that kind of is a blocker for us to work with a client, because before we build an AI agent for them, or a model, a predictive model, or something like that, we need to ask those questions, because we need to be like responsible about the work we do, and we don't want to build on top of something that's already not properly set up. Um on the upgrade side, I could tell you many stories about big corporations, you know, public companies that are running really, really old versions of Rails in production. But you know, of course, because of NDAs, I can't tell you names, but I can tell you there are many companies out there that are running really, really old versions of Rails and Ruby. And if any of your listeners are in that situation, I would recommend they look at Rails LTS, which is a secure old version of Rails that this company in Germany maintains and backports security patches to, and please pay them their yearly fee to use those versions so that you don't get you know hacked or ransomed or whatever, so you don't get some bad actor exploiting a known Rails security hole.
Detecting AI‑Written Code Smells
SPEAKER_01Um I had an interesting experience with AI. Uh all right, I'm gonna obfuscate this, because I shouldn't be too specific. Um but there's a developer who I'm familiar with who I don't trust all that much. Um and I was looking at a PR that he created, and I was looking at the code and I'm like, hmm, maybe you wrote this, but I think a lot of this was probably written by AI, but I couldn't be sure. Um and it was an interesting moment, because it's like, if you look at a PR and it's obviously created by AI, that's one thing. If you look at it and it's obviously written by a human, that's another. But if you're looking at code from, again, a person you don't really trust, and it looks like it was written by AI, I don't want to approve that, because I can't make a safe assumption that this person has vetted this code, you know? If I'm looking at a PR of yours, Ernesto, I'm gonna naturally assume that like, okay, I don't understand why this particular part is here, but I think it's a pretty safe bet that Ernesto didn't just recklessly put some random stuff in here. Um so I'm gonna not worry about that. Um on the other hand, if there's something that's totally AI generated, I'm gonna scrutinize literally everything. Um but when it comes from a person and some of it might be AI generated, it creates kind of a tricky situation. Um have you thought about that, or have you seen anything like that?
SPEAKER_00Yeah, it's funny that you mention that, because this reminded me of one of our clients that basically reached out to us to add like a senior Rails engineer to a team of non-Rails engineers generating Rails code. So they could tell that there was something off there. And that's the problem sometimes. You know, I started with Java, I love Java, but if you have like a Java engineer use Claude or an AI tool to generate code for Rails, and then you start building features like that, then the code sometimes starts to look like non-Rails code. It's Ruby, it works on Rails, but it's not really the way you know convention over configuration is supposed to work. It works, but it does feel fishy and it does sometimes make it hard to maintain, and it's also like hard to explain. Like, yes, this is Rails code, but that's not usually the way a senior Rails engineer would write it. And why are you doing it? And it's like, oh, okay, because you don't have a ton of experience with Rails, but you do have a ton of experience with Java, and this is how in Java it would look. Um so what this client did is they brought us in to basically guide this team into you know writing more Rails-y code, even using AI. So I think that's the danger. It's like, oh, we're using Ruby, we're using Rails, we're using AI, and then you're creating an application that is gonna work, but then when someone else comes along, a human uh that has a ton of Rails experience, they're gonna be a little bit confused, and they're gonna be like, everything is in the wrong place. It works, but it's not the way a Rails application usually is written. So there is that real danger, and this is like a real story from a client that I think we all can learn from.
SPEAKER_01: Yeah, interesting. I've seen that same phenomenon where somebody comes from a different community and writes Ruby code, and it has a little bit of a funny flavor to it, because they're taking the culture or style from that other community and applying it to this language, where those things maybe don't make complete sense, or they go against the style conventions, the culture, that kind of stuff. And with AI-generated code, I'm sure you've seen this: my number one frustration with AI-generated code is that it writes a lot of what I call speculative code. I think of programming a lot in terms of bets. For example, I'm refactoring this piece of code because I'm making a bet that after I make the investment to clean up this area of code, it's likely to pay off at some point in the future and have a positive return on investment. I know that out of every ten bets I make, I'll probably lose six of them and win four of them, but I'm betting that my entire investment portfolio, on balance, is going to have a positive return on investment, even though I'm not winning every single bet. And there's a big difference between investment and speculation. Speculation is when you're just like, well, I'm guessing that maybe the value of this thing will go up in the future, so I'm going to buy this NFT or whatever. And notoriously, speculation is very risky, and you can lose a whole bunch of money that way. So speculative code is code that's written without a good justification. Stephen Baker calls it getting a case of the might-as-wells. You're in an area of code and you're like, well, I might as well add this because maybe we'll need it someday. That's speculative coding.
And code written by AI is just rife with these. I don't know, a conditional like, if this variable is set, then do such-and-such. It's like, well, okay, but that variable is never, ever going to be unset, and we have no reason to believe it ever would be. So this code is all liability and no asset, and it's just so frustrating. If you have AI write any non-trivial amount of code, you're going to have to strip so much of the speculative code out of there. And to me, it just seems like such a nightmare to have to review PRs from people who are using AI generation as their main way of writing code. I don't have a super coherent point with that, but have you noticed that phenomenon yourself?
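A hypothetical Ruby sketch of the kind of speculative conditional described here (all names are invented for illustration, not from any real codebase): a guard against a condition that, given the actual callers, can never occur.

```ruby
# Hypothetical illustration of "speculative code": the first version
# hedges against a nil that the calling code never produces.

# Speculative version: the else branch is all liability, no asset.
def greeting_speculative(user)
  if user && user.name
    "Hello, #{user.name}"
  else
    "Hello, guest" # dead branch: every caller passes a named user
  end
end

# Trimmed version: states the actual contract and nothing more.
def greeting(user)
  "Hello, #{user.name}"
end

User = Struct.new(:name)
puts greeting(User.new("Ernesto"))
```

The trimmed version is shorter, but the real win is that it documents reality: a reader no longer has to wonder under what circumstances `user` could be missing.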
SPEAKER_00: Yeah, I think that's what makes me believe that humans are always going to be part of this. AI is going to get even better and more present, but it's just so hard to have all the context. Sometimes you get context from the business, and it's not going to be in a Jira story or a bug report or an epic, because it would blow up the description of that story. But when you're at a company and you're building a feature, you have a ton of context, and the longer you're at the company, the more context you have. Say they tell you, we want you to add this feature to export to JSON. You ask the right questions: okay, Jason, what about XML? Is that going to be a thing in the future? What about CSV? And they're like, no, no, just JSON. Maybe the product manager just tells you that to keep the scope small, but at the same time, you do know that this company works with Microsoft or some other corporation that loves XML. So then you're like, okay, I'm going to write this code so that there's a formatter, and later, if I want a different formatter, I just have a clear contract in this class called whatever-formatter-export. It'll do JSON first, but it's eventually easily extensible to XML. So to sum it up, I think the context of a story or a bug report is definitely useful for AI, and it's going to help solve that problem. But the way it's solved is going to be very much tied to the human driving the AI tool and the context they have.
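The formatter contract Ernesto describes could look something like this in Ruby (class and method names here are assumptions for the sketch, not any real client's code): ship JSON only, but leave an obvious seam where an XML formatter could slot in later.

```ruby
require "json"

# Hypothetical sketch of a "clear contract": every formatter answers
# one message, #export, and callers don't care which format they get.
class JsonFormatter
  def export(records)
    JSON.generate(records)
  end
end

# If XML ever becomes a real requirement, a new class honors the same
# contract and no caller has to change:
#
#   class XmlFormatter
#     def export(records) ... end
#   end

def export_report(records, formatter: JsonFormatter.new)
  formatter.export(records)
end

puts export_report([{ id: 1, name: "Ada" }])
```

Note the difference from speculative code: nothing here implements XML before it's needed. The design just avoids baking JSON assumptions into the callers, which is a much cheaper bet.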
SPEAKER_01: Okay, we're getting close to time, so I want to wrap up soon. But before we go, can you tell us again... even as long as I've known you, and as many times as we've talked, I have trouble keeping straight OmbuLabs and FastRuby and all that stuff. Can you share what those are again? And if somebody is interested in working with you, what's the way that makes the most sense to engage with you on that? How should they contact you? What's going to happen if they contact you? How does that relationship begin? Anything you want to share on that kind of stuff?
SPEAKER_00: Oh yeah, sure. Thank you. If anybody's interested in Ruby, Rails, or tech debt remediation beyond upgrades, they can just go to fastruby.io and send us a message there, or follow us on Mastodon or Bluesky or LinkedIn. We're always sharing interesting content for anybody who wants to DIY their tech debt remediation. And if anybody's interested in AI and the ways engineering teams are using AI, go to ombulabs.ai; that's where we publish most of our AI-related content. Some of it might include Ruby, but I think we talk a little bit more there about Python and Kubernetes and machine learning models. And again, they can follow us on social media, and we'd be happy to connect and see how we can help.
SPEAKER_01: All right. Well, we'll put all that stuff in the show notes. And Ernesto, thanks so much for coming on the show.
SPEAKER_00: Yeah, thanks so much. I'm happy that we got to connect and catch up. Definitely. Thank you.