Code with Jason

253 - Dave Thomas, Author of The Pragmatic Programmer and Sin City Ruby 2025 Keynote Speaker

Jason Swett

In this podcast episode I talk with Dave Thomas, co-author of The Pragmatic Programmer and Sin City Ruby 2025 keynote speaker, who discusses his upcoming book Simplicity and how software development has become unnecessarily complex. Dave and I explore how developers can regain control by questioning established practices, trusting their intuition when code feels overly complicated, and experimenting with simpler approaches rather than blindly following industry trends.

Speaker 1:

Life hasn't been the same since the pandemic. Instead of working at an office around people all day, like we used to, most of us now work remotely from home, in isolation and solitude. We sit and we stare at the cold blue light of our computer screens, toiling away at our meaningless work, hour after hour, day after day, month after month, year after year. Sometimes you wonder how you made it this far without blowing your fucking brains out. If only there were something happening out there, something in the real world that you could be a part of, something fun, something exciting, something borderline illegal, something that gives you a sense of belonging and companionship, something that helps you get back that zest for life that you have forgotten how to feel because you haven't felt it in so long. Well, ladies and gentlemen, hold on to your fucking asses. What I'm about to share with you is going to change your life. So listen up.

Speaker 1:

I, Jason Swett, host of the Code with Jason podcast, am putting on a very special event. What makes this event special? Perhaps the most special thing about this event is its small size. It's a tiny conference, strictly limited to 100 attendees, including speakers. This means you'll have a chance to meet pretty much all the other attendees at the conference, including the speakers. The other special thing about this conference is that it's held in Las Vegas. This year it's going to be at the MGM Grand, and you'll be right in the middle of everything on the Las Vegas strip.

Speaker 1:

You got bars, restaurants, guys dressed up like Michael Jackson. What other conference do you know of, dear listener, where you can waltz into a fancy restaurant wearing shorts and a t-shirt, order a quadruple cheeseburger and a strawberry daiquiri at 7:30 a.m., and light up a cigarette right at your table? Well, good luck, because there isn't one. Now, as if all that isn't enough, the last thing I want to share with you is the speakers. And remember, dear listener, at this conference you won't just see the speakers up on the stage, you'll be in the same room with them, breathing the same air. Here's who's coming: Irina Nazarova, Freedom Dumlao, Prathmasiva, Fito von Zastrow, Ellen Rydal Hoover and me. There you have it, dear listener. To get tickets to Sin City Ruby 2025, which takes place April 10th and 11th at the MGM Grand in Las Vegas, go to sincityruby.com. Now on to the episode. Hey, today I'm here with Dave Thomas. Dave, welcome.

Speaker 2:

Well, thanks for having me.

Speaker 1:

So, dave, you're working on a new book called Simplicity. Maybe you could tell us about that, and tell us a bit about yourself as well.

Speaker 2:

Sure, let's start with me, just because it's less interesting. So I've been programming for probably about 50 years now, which is kind of scary, and I'm still programming every day, which is the fun part. So along the way I've done a few things. I've been a consultant, run my own business, blah, blah, blah, and written a few books, The Pragmatic Programmer being one of them. And I think for you guys, probably the more significant thing is, I came across Ruby in 1998, I think it was, and fell in love with it back then. And so once we finished The Pragmatic Programmer, I pretty much immediately jumped into trying to write something about Ruby, just because I wanted to get the word out.

Speaker 2:

Programming Ruby came along in 2002, I guess, 2001 or 2002, and I mean it did okay.

Speaker 2:

And then Rails came along and suddenly, like everything else in the Ruby ecosystem, it just took off.

Speaker 2:

So one of the fun things about the Ruby book was, at the time, the standard libraries, the built-in libraries, were not really that well documented.

Speaker 2:

So I went through and I spent a happy six months reading through the source code and then actually documenting all of the functions in the standard library, and that was like an appendix, which was half the book, of all the function calls. And when I finished that, I then spent a month backporting all of those into the Ruby source code, and I wrote RDoc, which then extracted them back out again so you can type ri whatever and get the documentation. Which is why, I mean, it's been changed a lot since then, but even so, if you go back to a lot of the documentation, you'll still find wombats and things as example strings in there, because that tends to be my go-to dummy string. So I've had a long, long and very happy relationship with Ruby, and first met Matz in 2002, I guess; we invited him over for the very first RubyConf. And I saw him again a couple of months ago and he's the same Matz, which is amazing.

Speaker 1:

Where did you happen to see him?

Speaker 2:

We were in Sarajevo. No, yeah, it was Sarajevo.

Speaker 1:

Okay, for one of those European Ruby conferences.

Speaker 2:

It was the yeah, yoruko.

Speaker 1:

Got it okay.

Speaker 2:

Or EuRuKo, or however it's pronounced. Yeah, yeah, it was actually quite fun. At the end we did a kind of panel, not even a panel. We just basically sat in armchairs and chatted, José Valim and Matz and me, and I basically just sat there and just listened to them. So that was good fun. So, yeah, that's been that. In 2001, I got involved in the Manifesto for Agile Software Development. I helped write it, but then at that point it actually became clear, even back then, that it was going to get industrialized, I guess, and so I wasn't that interested in getting involved in that. And it has been. I still think it's been a benefit to everybody. I think the community is better off after it than before it. And certainly if you look at the various figures on project success rates and things like that, Agile projects have tripled their success rate compared to what they were in 2000. Compare a project in 2000 to an Agile project today: the Agile project is three times more likely to succeed. So that's pretty stunning.

Speaker 1:

And it's interesting the way that Agile has been construed. There's a lot of baggage that comes with it. If you say you're doing Agile, there's a lot that people imagine comes along as part of that. But like, if you think back to the original Agile manifesto, if you were to print that out, how many pages do you think that would be?

Speaker 2:

Well, the actual essence of it would be a quarter of a page, right?

Speaker 2:

It's not terribly prescriptive in detail. It's absolutely not prescriptive, in fact. It's expressed as a set of four values, and everything else is just collateral, really. The four values express agility. And a value, to my mind, is something that helps you make a decision, something that helps you make a choice. I'm faced with this. What should I do? Well, I go to my value system, I go to my beliefs, and they help me make a decision. They never tell you what to do. They help you, right?

Speaker 1:

I think of it like a North Star. Yeah, okay, I have this certain North Star. Am I going to go east or west, or north or south? Well, my North Star is to the north of me, so I'm going to go north. You make these big decisions one time, and the big decision that you made once helps you make these smaller day-to-day decisions.

Speaker 2:

Yeah, except you've got to be really careful when you talk about it in that way, because a North Star implies that there is the one true direction. You're allowed to deviate from the path if there's, like, a building in the way or something, but there's still a one true direction. And I would say that the equivalent North Star for agility would not be that we're heading north, but that we're heading for a place where we can grow crops, or whatever it might be. So it's contextual. It's always going to be contextual.

Speaker 1:

Yeah, I think I get what you're saying. It's like you have to be mindful of what you're trying to achieve and do things that help you achieve it, not just blindly follow some rules for the sake of following the rules.

Speaker 2:

Yeah. I mean, what I see a lot of is, like, the first generation says, okay, that's our North Star, that's the way we're going. And they all start trudging off north, and, you know, 30 years later the next generation is born and they're continuing it. And then, you know, six or seven generations later, they're still trudging up towards the north. And people say, why are we doing it? And they say, well, that's what we've always done, that's what our ancestors did. Our parents and our grandparents, they all went to the north. Why are we doing it? Because that's what we do. And that's what's happened to the Agile movement.

Speaker 1:

You know, I feel like this opens up a really deep and broadly applicable lesson. It applies to Agile and it applies to almost everything else in programming. I've been thinking lately about the question of, when we encounter a proposed programming principle, how do we decide whether it's any good? And something that I see that bothers me is people hear certain programming principles espoused by very experienced, prominent people in the industry and they say, okay, because this principle is suggested by a large number of experienced and prominent people in the programming industry, therefore it must be good. And, incidentally, that will often lead you to the right answer, but only incidentally, because that's not a valid chain of reasoning. And so I have my own ideas about how we decide whether any particular programming principle is good or not. But before I share my thoughts, I'm curious what your thoughts are on that.

Speaker 2:

Sure, I suspect they're the same as yours, but I think it comes down to two separate things. The first is there is a lot of freedom if you can avoid making irreversible decisions. So when you're faced with something where, you know, guru number one says, oh, you need to do it this way, then you may say, okay, that looks interesting. Am I going to be stuck with that and not be able to back out again? If you are, then the decision chain is very different to if you're not. So is it a question of, do I use REXML or Nokogiri? Who cares? But do I use Rails? Well, once you've dug in that deep, you're not going to get back out again quickly, you know. So that's one thing: try to make your decisions in such a way that you can reverse them, because that way the cost of being wrong is a lot less.

Speaker 2:

The second thing I'd say is received wisdom is really just gossip written with capital letters, you know. The only way, I think, to evaluate it is to take it in your context, in your particular environment, with your particular set of skills, etc., etc. What works really, really well for someone over there may not work at all for you. Right, and not because the thing is wrong, it's just because it's not applicable to what you're doing. So I think that the most important thing to do is to look at what people are saying. If you see it coming from enough people, people that you respect, then you maybe think, huh, maybe there's something there. But you evaluate it for yourself.

Speaker 2:

You don't just sit there and say, oh, I've got to use React because everybody says React's best, or whatever it might be. You know, and the way to do that is the way you do everything, and that is you try it, you experiment. You don't actually just make the jump and say, hey guys, today we're switching. You sit there and you find some little toy project that you've always wanted to do but don't really care about, and you try it with that, and you form your own opinion. You don't just sit there and take stone tablets handed down by experts, quite a few of whom have never programmed or haven't programmed in the last 10 years and don't really know what they're talking about.

Speaker 1:

Yeah, speaking of stone tablets, if I, when I was a kid, went off of what older, more experienced, authoritative people told me, I would come away with the conclusion that God is real, and that's just factually incorrect. Sorry to anybody I'm offending by that statement, but it's not valid epistemology.

Speaker 2:

Okay, I don't necessarily disagree with what you're saying. But that is a really good example of a situation where what's true for you may not be true for someone else. Right? In that it doesn't, in a way, matter if there is or is not. Oh God, that's getting deep, isn't it? It doesn't matter if there is or there is not. It's what it means to the people who either believe it or don't believe it.

Speaker 1:

That's interesting. Okay, I want to make a meta comment about this conversation. There are certain deep principles that apply to everything, and lately I've been going deeper and deeper, and I feel like I've gone almost all the way to the bottom, and then I've come back up again and taken these deep ideas to programming. So I started outside of programming, went deeper and deeper, and then came back and applied them to programming, and it made me think about so much inside of programming in a different way. And so I think these questions about, like, is God real and stuff like that are actually very applicable to programming. It's just not maybe immediately obvious how. But what you said is really interesting, about what's true for you and true for somebody else and maybe it doesn't matter, and stuff like that. There is one single reality that is the same reality for everybody. And, first of all, are we even on the same page with that? No? Interesting.

Speaker 2:

I'm not saying that there is not a substrate on which we all live. Clearly there is right, but the only evidence we have for reality is what we sense and then how we interpret what we sense. So, for example, you might say that color is red, but there are tribes in Africa who, because they don't have words for that particular color, don't actually recognize that color. Right, there's a whole bunch of that.

Speaker 2:

The Himba people in Africa have incredible difficulty separating colors that to you are wildly different, like green and red. But they can also differentiate shades of blue that you could never differentiate, right? And the belief is very strongly that it's because they don't have words for the colors. And if you don't have words for them when you're growing up, then, in the way that you're taught that's red, that's green, that's blue, they're not, and so they don't distinguish those colors. So there's maybe an objective set of frequencies that we have chosen to call red, but that doesn't necessarily mean that everybody will sense it the same way, and therefore everybody's individual perception of that is going to be different. So I don't think it makes sense to talk about absolutes when we have no way of expressing that absolute.

Speaker 1:

Interesting. So I think there's an important distinction to be made here. So everybody's experience of their sensory inputs could be not the same. You know, I recently learned that the term for this is qualia. Like, colors are qualia, and other sensations that we can't describe, like the feeling of warmth and cold and stuff like that, that's all qualia. But just because one person might interpret, for example, a certain color differently than someone else interprets that same color, that's a separate issue from physical reality, because the spectrum of radiation that enters our eyes is the exact same. We can know that. We just can't know how that radiation is represented inside our minds.

Speaker 2:

How do we know that? How do we know that the radiation is the same for everybody?

Speaker 1:

Oh, I don't know.

Speaker 2:

Yeah.

Speaker 1:

I'm sure that we do know that. I'm sure that's something that's known. I just don't personally know it, yeah.

Speaker 2:

I mean, that's the whole point. Is that, yeah, we're all sure, but there's no basis for it. In the same way that we're all sure that a pound of sugar weighs a pound. Yeah, but Einstein said, sorry, not true, you know.

Speaker 1:

Um, what do you mean by that?

Speaker 2:

I mean that we have a physical reality which is limited to our experience. The fact that mass changes depending on the velocity of something relative to you is not something we have ever experienced as individual human beings, right? So that is not part of our conscious understanding of reality. The fact that time changes, mass changes, and all that kind of relativistic stuff. Similarly, the idea that you cannot divide something indefinitely is very, very counterintuitive to our understanding of what reality is. We've never come across something that you can't just keep splitting and splitting and splitting. But you can't. The idea that there is a smallest unit of mass or time? Ridiculous, right? That doesn't make any sense. So there's a whole range of experiences that we don't have, so we cannot possibly say that we objectively know reality. All we can ever do is say, okay.

Speaker 2:

I think this actually comes back, in a very indirect kind of way, to your original question, which is, how do I know whether I should try something or not? Right? And it's exactly the same way we understand, or we appreciate, the world we live in. What do kids do? What does a one-year-old do? They try things, they break things, they touch things, they see what happens, and from that they build an intuition of how the world works. And that, I think, is the process that we should all be using, and that is experiment. Try it, see what happens. If it does what you thought it would do, then it gets a bit of reinforcement. If it doesn't, you've learned something different. But it's all experiential, it's all contextual.

Speaker 1:

This is interesting. There's so much to unpack here. Not to get too crazy, but I think we unavoidably have to bring into the picture what is knowledge, and can we know anything for sure? My understanding is that, no, we can't know anything for sure. There is objective reality, there's one single objective reality, but there's no such thing as completely, 100% certain knowledge. I'm curious, do we view that the same or different?

Speaker 2:

No, there is no objective reality. Okay, here's the thing. There is a reality, right, in that we are living in a substrate of some kind. But okay, let's take the classic one, right? Do we know if we're living in a simulation or not? No. It's probably the case that we can't tell, and there are some people that believe it's ridiculously likely that we are. But if we can't tell, then what happens to your objective reality? Is the objective reality the rules of physics, or is it the programming that some 13-year-old wrote in their bedroom that makes you you?

Speaker 1:

Hmm, is that an argument against there being a single objective reality?

Speaker 2:

Well, yeah, because if you happen to be running in Gary's simulation and, you know, another set of you is running in Fred's simulation, they are different realities.

Speaker 1:

Hmm, okay. Yeah, I don't know if this is exactly the same thing that you're referring to, but there's that idea of solipsism, where the only thing that exists is one mind. Like, for example, it could be that the only thing that exists is my mind, and I think I'm talking to a person right now, but it's really just a dream or an illusion or something like that, and I'm the only thing that exists.

Speaker 1:

Or it could be that you're the only thing that exists. Or, dear listener, you could be the only thing that exists. Or it could be that there are, I don't know, a Dave universe and a Jason universe, and you have a reality that's true for you, Dave, and I have a different reality that's true for me, but they aren't the same universe and different things are different. Does that overlap with what you're saying?

Speaker 2:

No. Well, I don't know. Actually, I don't know. I don't think so. I think we all share a common understanding of the way things work. It's not perfect, but we are fundamentally... you can imagine that, you know, we're playing some game where we don't know the rules, and over the generations we're slowly beginning to discover what those rules are, and those ideas then become shared and become innate and form part of our bigger understanding that comes next. But clearly that is not universal. There are many people, increasingly large numbers of people nowadays, who deny what you would call objective evidence, scientific evidence, because it doesn't agree with their beliefs. And so I believe that our common understanding of reality is changing. Reality itself, if there is such a thing, doesn't change, but I think our interpretation of it does, and so what would be common sense for a previous generation may well not be for another generation.

Speaker 1:

I definitely agree with that.

Speaker 2:

But that implies then that their reality is different, through an act of will, to your reality. But at the same time, your reality is based on belief just as much as theirs is. It's just that you believe your belief is more rational because it has, like, the scientific method behind it, or whatever it might be, whereas their belief is based on different principles. But who's to say that the scientific method is right? I mean, it's been remarkably successful, it's been very, very accurate. But at the same time, Newtonian mechanics was remarkably successful, until you try and launch GPS satellites, you know. Success doesn't mean it's right, it just means it's a better approximation.

Speaker 1:

Exactly, and that comes back to my proposition that there's no such thing as certain knowledge. You know, when Newtonian physics was all that we had and Einstein hadn't come along yet, that seemed like the right answer. And then Einstein showed that it wasn't the right answer, and we might discover that everything that we think now, you know, surely, surely 2025 is not the year we got everything right. Surely, at some point in the future we'll find out that we've been mistaken about certain things and some of our theories will be superseded by better ones. But there is an incremental gain. You know, newton's theories were better than what came before. Einstein's were better than what came before. It's not like we're just jumping around wildly.

Speaker 2:

No, absolutely not. And part of the reason for that is that we're looking for... in the old days, it used to be felt that the universe had to be simple, right? Because it's just a lump of matter and energy, and, you know, how can that be complex? It's just going to be a lump.

Speaker 2:

And so the rule that we had whenever we were looking at new theories, whether it was a math proof or something about physics, is, is it elegant? Right? Does it reduce down to a set of equations or beliefs, whatever it is, that are just beautiful? Do they make you feel good reading them? And things like Maxwell's equations were just four equations that described everything, and that was just mind-blowing to people. That was great. And that's the kind of approach that we're taking nowadays as well.

Speaker 2:

We're still trying to find the simplicity, and yet we've gone to a standard model with 12 or 16 separate components to it. We've got, well, string theory has kind of died down, but string theory, which says the universe is made up of n-dimensional, 11 or 10 or however you count it, oscillating things that no one can quite describe. And now we're getting messier and messier and messier. Maybe the truth is that the universe is not tidy, that there isn't a symmetry, that there isn't beauty in the way it's organized. Maybe underneath it all is one big hack, which, to some extent, to me argues in favor of the simulation idea. You know, again, our belief, our human belief, is that there has to be a simple explanation for things. There must be a reason, and maybe there isn't. So maybe that's another reason that we can't have... I mean, obviously we can't have absolute knowledge, but that's another reason, I think, that we can't even approximate absolute knowledge: maybe the information content of the universe cannot be compressed down into a small set of rules.

Speaker 1:

Yeah, I have no strong opinion about that. Reality is often quite complex, even when we wish that it would be simple. Okay, just to take stock of where we are, and that's fine: we're not in agreement that there's a single objective reality, but it sounds like we are in agreement that there's no such thing as absolutely certain knowledge. Now I want to talk about models a little bit. Everybody in their mind has a model of reality. You know, there's reality, and then there's your model of reality. And to me, the more faithful your model of reality is to actual reality, the more successful you're going to be in interfacing with reality. If my model of reality says that I can walk through walls, then when I attempt to walk through a wall, I'm not going to be successful, because my model of reality is inaccurate.

Speaker 2:

But if your model of reality is that you can't walk through a wall because of van der Waals forces, then that's not going to help you paint it.

Speaker 1:

Sorry, I didn't catch that one.

Speaker 2:

Okay, I think you've got to be careful. I think the whole point of a model is that it gives you something tractable to deal with. It's an abstraction. So you could model a car as being the thermodynamics of the combustion cycle and the physics of the lubrication system and the chemistry of the ignition, all that kind of stuff, and that's a perfectly valid model. And if you're an auto engineer, that's the model you apply. If you're a driver, it's, I push this pedal and it goes forward. And that's the model you have.

Speaker 2:

So I'm not always convinced that the most detailed model is the one you need to apply, and I don't think the most detailed model necessarily gives you more control. And if you think about it, well, there's that Borges story about the country that had a map that was the same size as the country, and gradually it fell into disrepair because it was stupid, basically. If you're a general planning a battle, you have maybe a sheet of, well, in the old days, a sheet of paper, a map that would tell you all you needed to know at the level that you're interested in: what the terrain is, what the rivers and the valleys and the streams and everything else are. You didn't have every single blade of grass listed in your model.

Speaker 1:

Yeah, okay, so you're making some good points. So I want to interrupt and maybe take back some of what I said and revise a little bit. So I think what I said was you'll be successful in interacting with reality to the degree that your model of reality is faithful to actual reality. Faithful is maybe not the right word and not the right level of precision and stuff like that.

Speaker 2:

You're obviously totally right. Again, faithful implies that you actually have an objective reality.

Speaker 1:

Obviously totally right, faithful again implies that you actually have an objective reality, and obviously I'm speaking from the perspective that there is Right. But faithfulness isn't exactly the thing. It's more like usefulness.

Speaker 2:

That's exactly right, yep yeah.

Speaker 1:

And in order to be useful, it does have to be accurate to a certain degree, but only for the relevant purposes. So, exactly like you said, a map is going to be useful and the map has to be accurate, but it doesn't necessarily have to be extremely precise and it certainly doesn't have to be extremely detailed and, in fact, if it's too detailed, that could make it worse for its purpose than a less detailed map.

Speaker 2:

Exactly, exactly, yeah. And I think, if you look at successful people, the successful people are actually the people that have come up with simpler models than everybody else has, which gives them a better intuitive understanding of how things are going to work. So a successful entrepreneur, for example, has a model of their target consumers and the market and everything else which has cut away all of the detail and just gives them a really simple picture of what it is they need to do to succeed.

Speaker 1:

Yeah, it's no more complicated than necessary. It can be wielded handily by the mind.

Speaker 2:

And even actually less complicated than is strictly necessary. I think, because most people who are like really successful at making things happen will tell you that if they don't make mistakes, they're not doing it right. They have to be right a certain percentage of the time, but they're also going to be wrong because their model is not 100% accurate. It can't be because it's taken away all of the kind of detail and they just have a very high level view of I push this button and that happens, and along the way, their model meets actual reality, if it exists and, um, because of that, they have to revise it a bit. But I think the thing that lets people like that succeed is the ability to strip away detail and not necessarily to have more of it.

Speaker 1:

I think that makes sense, because if you have a model that is so complex that you can't hold it in your mind and work with it easily and conveniently, and you have to go and read something to remind yourself how everything works, or something like that, it's not going to be a very useful model.

Speaker 2:

Exactly, exactly right.

Speaker 1:

I want to talk now about the nature of knowledge. So we talked about how there's no such thing as absolutely certain knowledge. I myself, you know, have changed positions on that. I used to believe that it was possible to have certain knowledge, because it's like, come on, we know that we live on this round planet. It's round, not flat. We objectively know that, and we know it for absolute certain. But it's not exactly that we know it for absolute certain. It's just that it has not been proven wrong. And for the record, I don't think it will ever be proven wrong. I think it is the objectively correct way that things are. We just can't ever prove it. We can only fail to prove it wrong. And so how is knowledge generated? I read these couple of books recently which changed my view on these things. One was The Fabric of Reality by David Deutsch, and the other was The Beginning of Infinity by David Deutsch, an author I've been talking about a lot on this podcast.

Speaker 1:

But knowledge is generated through conjecture and criticism. First we have to make a guess, and then we subject the guess to criticism, and if the guess survives all the criticism that we can throw at it, then we can keep that theory, we can keep that guess, and we can tentatively accept it as true. Just like we did with Newton's laws, you know: okay, these serve us better, they are better models than the models that we had before. So we can throw out the old models where they contradict Newton's laws, because Newton's laws are better, and we can tentatively accept these as true. Obviously, then Einstein came along, and now we tentatively accept Einstein's laws as true. And so to bring this back around to programming, I think that's how programming principles work also: we guess at what good programming principles are, and then we subject those principles to criticism, and the ones that don't survive criticism get thrown out, and the ones that do, we tentatively keep.

Speaker 2:

Wouldn't that be nice?

Speaker 1:

Um, yes, in theory you're absolutely right. Okay, that implies that maybe not in reality?

Speaker 2:

In the kind of knowledge acquisition that you were talking about, you make observations, you form a hypothesis, and then you test that hypothesis, and if the hypothesis is not refuted, then you actually sort of up the possibility that the hypothesis is not incorrect. So you measure, and then you use the result to say, yeah, that's stood the test of time, right? When it comes to programming, we don't do that. There is no laboratory anywhere that runs decent tests to say object orientation is better than functional programming, or whatever it might be, right? And so what we're left with is the same as the flat-earthers.

Speaker 2:

We have a whole bunch of people who passionately believe one thing or another and who do not take an experimental or experiential view when it comes to trying to find better ways to do things. It's almost a religious view. You know, I had an interesting experience about a year ago. I'm on a crusade to stop people creating single-method classes in Ruby, the kind of class that has one public method called call.

Speaker 1:

Yes, exactly.

Speaker 2:

Okay. You know, I wrote this thing saying, come on, guys, that's a function, that's all that is. It's a function, so stick it in a module. And the example I had was module something-or-other, extend self, def whatever it was, right? And I got people dumping on me: you can't use extend self, right, that's not what it's for. And it's like, what?

Speaker 2:

You know, no, modules are so you can mix them into classes. You know, and that's, like, a religion now, and that's, like, the way it should be. And, you know, of course we should be using all these ridiculous patterns where I have to decorate a function with, like, an initializer and a class and attributes and all this kind of stuff. I don't, but it's become a recognized good way of doing things, you know, and that just drives me up the wall. And to some extent, I think, uncharitably, that developers kind of like excuses not to have to do the work. And so they look for: hey, guess what, if I want to write a three-line function, I'm going to have to write 20 lines of class around it, and that way I'm going to delay finding out whether or not my three lines of function were actually correct. Or I have to write 107 test cases. No, you don't. But it delays having to actually do the work. And that is probably a bit cruel, a bit cynical.
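
(As a rough illustration of what Dave is describing, here is a minimal sketch in Ruby. The single-method "service object" and the module-function alternative below use invented names, GreetUser and Greetings; they are not from his post, just an example of the shape he's arguing against and the shape he prefers.)

    # A typical single-method "service object" (hypothetical example):
    class GreetUser
      def initialize(name)
        @name = name
      end

      def call
        "Hello, #{@name}!"
      end
    end

    GreetUser.new("Wombat").call   # => "Hello, Wombat!"

    # The alternative Dave describes: a plain function in a module, with
    # extend self so the method can be called directly on the module.
    module Greetings
      extend self

      def greet(name)
        "Hello, #{name}!"
      end
    end

    Greetings.greet("Wombat")      # => "Hello, Wombat!"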

Speaker 1:

No, I don't think so. And to turn your uncharitable view into a more charitable one: I think you said programmers are looking for ways to not have to do the work. I think it's just that people are looking for ways to not have to do the work.

Speaker 2:

You're right, you're absolutely right.

Speaker 1:

It's just an aspect of human nature, so we don't have to criticize developers specifically. We're just lazy humans.

Speaker 2:

Yeah, no, you're absolutely right, absolutely right there. But the sad thing about that is, as with anything else, that actually makes it harder in the long term, you know.

Speaker 1:

Yeah. Well, the name by which I know these things is service objects.

Speaker 2:

Oh yeah.

Speaker 1:

And I think it's one of the more tragic phenomena to come into the Ruby world. And I think at least part of the explanation is cargo culting.

Speaker 1:

People see that other people are doing this and people are teaching this and they're saying this is a good way of doing things, and so they rather uncritically accept this way of doing things. And the way I view this is as an epistemological failure. They don't have good ways of telling what's true and not true.

Speaker 2:

Well, they do, they just choose not to use them.

Speaker 1:

Hmm, I'm not so sure. I don't know that they do know how to know what's true and not true.

Speaker 2:

Okay, so everybody says I should use a service object. But I as an individual look at that and think, why, right? What does the service object buy me? Well, everybody says... no, what does it buy me? So I as an individual think to myself, it doesn't look to me like I actually need a class for this, it just looks like a function. So why don't I go out on a limb and actually just try writing it as a pure function in a module and see if the world ends? And if it doesn't, if my code works just as well as it would have done with all the extra decoration, but I can do it without, then I've learned something.

Speaker 2:

So I do have the means to find that out, but it takes two things to do that: it takes courage to go against the trend, and it takes learning to listen to that angel on your shoulder. It's very, very easy to code on autopilot and to do things the way you've always done them, and to some extent, AI-based coding makes that even worse, because not only are you doing it the way you've always done it, but now you're doing it the way 8 million other people have always done it. But if, instead, you learn to listen to that little voice that says, what the hell are you doing? Why are you doing that? Does that make sense? And then listen to that and then critically think, okay, is this good, or can I test to see if there's a better way?

Speaker 1:

What I'm trying to teach people is these deep principles of epistemology that can be used for anything, because it's a tough sell. I feel like there are ideas that you need to get people to buy at multiple layers. Like, I'm pretty sure I agree with everything you just said about service objects, but I've tried to have conversations with people, and I've been specifically hired to teach other developers, and this is part of what I've tried to teach them. But what I found in a lot of cases is that we don't even have a shared vocabulary. We don't have remotely a similar model of reality, we're not speaking the same language whatsoever, and so to even engage in some kind of a debate is not even possible, because we're not speaking the same fundamental language. And so forget about winning them over to my view of the thing.

Speaker 1:

We don't even have the same model of reality at all. And so I'm trying to go way down deep and ask the question, how do we know whether any particular programming principle is any good? Like service objects, for example. And I find it very interesting what you said, and I can understand where you're coming from, you know. Wouldn't it be nice if we could come up with hypotheses for what good programming principles are and then subject them to criticism and kind of tidily winnow the good from the bad and stuff like that? But in reality it doesn't work that way.

Speaker 2:

Well, you've also got to remember that there is no good or bad, right? There's no winnowing the good from the bad. Everything is contextual here. So, okay, let's take my favorite straw man, and that's design patterns. All right, so the Design Patterns book was written in the 90s, basically for C++, because C++ was a horrible language and it did not give you things that you would need to be able to express stuff, you know, like the command pattern and everything. Or it did, but it wasn't, like, easy. You had to actually think about indirection through classes and all that kind of stuff. And so they published this book of design patterns and it got an incredible amount of traction, partly because it was based on ideas from architecture that implied that architects could mechanically construct designs by applying a set of patterns. And so Alexander's design patterns book would say, you know, a room must have windows on two sides, or whatever it might be. And so that was really, really exciting for people: hey, look, we now have these building blocks, and we can apply them and build software using them.

Speaker 2:

And the sad fact is that if they had chosen to do design patterns in Lisp, it would have been a totally separate, different set of patterns. The patterns they chose were purely there because C++ had holes in it, basically. If you try and convert the original Gang of Four design patterns into, say, Ruby, then, I can't remember, I think it's 18 of them just disappear. You don't even need them. Same if you go to JavaScript. But there are other patterns in Ruby, things that would be useful to be able to have a name for in Ruby. So in that respect, there's not right or wrong. There's nothing wrong with a strategy pattern if you need it, but you don't necessarily need it. And so it really saddens my heart when I see people writing Ruby code with all of these ridiculous patterns implemented in it.

Speaker 2:

The one I hate the most is people that actually use the state pattern in Ruby. It's not a state pattern, it's a hash, and the actual state machine is a function called reduce, and that's all you need. But no, they write 50 lines of this and 60 lines of that to implement the state pattern. Don't do that. It is always contextual, right? And that's actually what makes it hard to test and hard to compare things, because you really have to do it, and it's not just the language you're using, it's the environment you're doing it in, it's how much experience you've got, it's what your other team members know. All of that kind of stuff plays into the context. So, okay, I'm about to switch into shill mode just for a second.
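
(For readers who want to see the hash-plus-reduce idea Dave mentions, here is a minimal sketch in Ruby. The traffic-light states, the :timer event, and the run method are invented for the example; they aren't his code.)

    # The state machine is just data: [current_state, event] => next_state.
    TRANSITIONS = {
      [:red,    :timer] => :green,
      [:green,  :timer] => :yellow,
      [:yellow, :timer] => :red
    }.freeze

    # "Running" the machine is a reduce over the incoming events.
    def run(initial_state, events)
      events.reduce(initial_state) do |state, event|
        TRANSITIONS.fetch([state, event], state)  # unknown events leave the state alone
      end
    end

    run(:green, [:timer, :timer])  # => :red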

Speaker 2:

But it's actually relevant, I think. So I just finished this book called Simplicity, and it turns out I actually had to write it three times. Because the first time, I tried to do it kind of in the style of The Pragmatic Programmer and tried to be kind of like telling stories that said you should do this and you should do that, and I realized that that didn't work, because what is simple for me is not simple for other people, and vice versa, you know. So there were no rules I could state. You know, like in The Pragmatic Programmer we could say decoupling is good, but there are no real underlying rules for simplicity. So then I rewrote it again in a kind of more, I don't know what the word would be, second-person style, where I tried to draw the reader in and find parallels between our experiences and then teach them that way.

Speaker 2:

It didn't work at all, and my son told me that I shouldn't try any of that stuff. I should write the book in the first person and talk about all the screw-ups that I've made by making things too complex, and then what I did to tackle them, and then make the point to people: don't do what I do. My particular solutions are solutions that I applied in my context, at that time, in that language, blah, blah, blah. What I'm not doing is trying to teach you how to be simple, how to make things simple. What I'm trying to do is to teach you, or to show you, the steps that I went through, with the hope that you will also be able to do those steps and come to a different solution in your particular context. And I think that's what is missing from an awful lot of rhetoric at the moment.

Speaker 1:

Tell me how right or wrong this sounds to you. It sounds to me like, rather than giving the reader a prescription to follow in all situations, you were seeking to equip the reader with a way of finding their own good solutions that work for them in their own contexts.

Speaker 2:

Exactly right, yep, and that's a kind of three-step process. I think the first step is that thing I talked about earlier and that is trying to develop that spidey sense, that intuition that maybe this is more complicated than it should be. And the way to develop that is practice. As with every intuition, all an intuition is is an experience. You've had a lot and your brain has learned to tell the patterns. Intuition is is an experience. You've had a lot and your brain has learned to tell the patterns. You know, um, and you gain that by by doing, by practicing, and by observing. You know, so you practice, um.

Speaker 2:

My favorite story on this is, um, there's a guy called, what's his name?

Speaker 2:

Gallwey, I can't remember. He wrote a book called The Inner Game of Tennis, and it was all about how people get in their own way when they're trying to play a sport, tennis in this case. And I actually saw, there was a BBC documentary on this in the 70s, and he had a student who had trouble serving, and he brought the student onto the court with a laundry basket full of tennis balls, and on the other side of the court, on the middle of the T, you know, in between the two service boxes.

Speaker 2:

He put a chair, and he said to this person, do not try and hit the chair. All I want you to do is to lob the ball up in the air, whack it with the racket, and then tell me out loud: is it in front of the chair, behind the chair, to the left of the chair, to the right of the chair? Right? And do that until your arm falls off. So this woman went through most of the basket of tennis balls, going left and to the back, right and to the front, et cetera, et cetera. Then, when she was down to like 20 balls or so, he said, okay, now hit the chair. And she did, every single time.

Speaker 1:

Wow.

Speaker 2:

And what he was doing was he was developing her subconscious ability to correct what she was doing. You know, because you don't hit a ball by consciously saying move this muscle, move that muscle, calculate this differential equation right. You hit a ball because you know how to hit a ball, and so what he had to do was to get into her subconscious the information it needed on how to hit a ball, and to do that he made her generate feedback into her subconscious. By saying it out loud, what you're doing is you're actually involving two, or actually three, separate senses. You have your sight, you can see where the ball went, but then you speak it and then you listen to yourself speaking it, and so your brain gets three times the reinforcement that we'd normally get and, as a result, you develop this intuition.

Speaker 1:

Yeah, you're paying a lot more attention to the feedback.

Speaker 2:

You're deliberately paying attention to the feedback. You're exactly right.

Speaker 1:

Yeah, and it sounds to me like what the coach was helping the player do is develop a model.

Speaker 2:

Exactly right, but it's not an explicit model, it's not a model that they could articulate. It's the same model that you have when you're driving a car. You can drive from here to the store and not actually remember doing it, because 90% of it is handled below the surface. I mean, it's really funny. When I was teaching one of my sons to drive, we were doing some highway driving and I said, careful, that guy's going to pull out. And he said, what? And the guy was just sitting in the other lane, and I said, that guy's going to pull out. And he did. And he said, how did you know that? And I said, I haven't the faintest idea, you know, but I've been driving for a while.

Speaker 1:

You can just tell that guy's going to pull out. Yeah, I read this really interesting story about a firefighter. He was in a burning building with some other firefighters, and at some point in time he yelled to everybody else, we've got to get out. And then they did, and immediately afterward the building collapsed, and they were like, how did you know that was going to happen? And he's like, I don't know.

Speaker 2:

Yeah. Oh, if you want to read, there's a really, really interesting book by a guy called Gavin de Becker, called The Gift of Fear. He used to be, I think, Special Forces, and now he runs a personal protection company, and he protects politicians and movie stars and rich people and everything else. And he wrote this book, for people who couldn't afford his people to protect them, on how to look after yourself. And his big point is that he's done research on this and it's valid.

Speaker 2:

A ridiculously high percentage of the time, people who get mugged knew ahead of time that they were going to get mugged and didn't do anything about it. They walk down the street, they get this really uneasy feeling, and they go, oh, I'm a sophisticated human being, I shouldn't listen to that kind of stuff. And then they get beaten up. The gift of fear is learning to listen to that inner voice, because that inner voice is the vast majority of your brain continually pattern matching everything going on around it. You know, and it is way more capable than your conscious brain of noticing those patterns that mean something's about to happen.

Speaker 1:

Yeah, it's really interesting. There have been a number of times in the last, I don't know, 10 years or something, where I've made a decision where my gut told me one thing but my rational mind told me something else, and I went with my rational mind, and I really regretted it, and I should have gone with my gut. And I don't think the takeaway is, like, always go with your gut. But I do think the takeaway is, like, listen to your gut, take your gut into account. Because, you know, our neocortex is relatively new and it's not in control of everything. Our old brain controls hunger and fear and all sorts of other things, and it doesn't necessarily have a conversation with the neocortex.

Speaker 2:

Well, it does, but it doesn't use words, right? It's really interesting. When your sub-brain wants to talk to your consciousness, it does it by making you feel queasy or making you feel nervous, because it can control your internal systems, right? And so it will do things that are very clear indicators that something is wrong, but they can't say what, you know. So when I'm coding, if I find myself getting angry, or if I find myself standing up a lot, that's actually a common one with me: I'll be coding away and then suddenly I have to stand. You know, and I've learned that that means I'm doing something wrong. That my inner pair programmer is trying to get my attention.

Speaker 1:

Yeah, I've learned a similar thing. Like, there was this one occasion I remember really well, about 20 years ago, when I was coding on something. It had been going well for a while, and then it started to go poorly, and I'm like, god damn it, this fucking program, and I got mad at the program. And then, after a few minutes of being mad at the program, I realized that my frustration was totally misdirected. I'm like, wait a second, it's not that this program sucks. I wrote bad code. It was me that created the problem that I'm mad about now, and the solution to the problem is to clean up the code.

Speaker 1:

And so now, whenever I find myself getting frustrated with a program, I realize that the thing I should really be frustrated with is my behavior or choices or whatever.

Speaker 1:

And then I can kind of pause and assess and maybe back up, start over more carefully or something like that. And usually these frustrations happen when my behavior deviates from what I know to be good practices, when I get lazy and, you know, I stop writing tests or something like that, and I just want a cheap win. And so I throw spaghetti at the wall, make some code change, manually test it. Sometimes you get lucky and it works, but more often than not you don't get lucky and it doesn't work. Then you do that like 10 times in a row and it fails 10 times and you get frustrated, and it's like, hang on a second, let me just stop. Take a second to write a test, and then it'll be easy, instead of making these gambles. Anyway, that frustration is a clue.

Speaker 2:

Yeah, and one of the things you said in there, I think, is the most important word, and that was pause. Right, when you get into that state, I find the best thing I can do is take the dog for a walk. Because what would have happened before that is, I would have been winding myself up and focusing more and more intently on whatever it is that's going wrong. And that typically isn't the problem, right? Typically it lies somewhere else. And I find that if I pause, go walk the dog, or if it's towards the end of the day, I just stop work. And I don't know, how often does this happen to you, right? You have some really frustrating problem. At the end of the day, you have no idea what's going wrong. You go to bed. Next morning, you wake up and you know what the answer is.

Speaker 1:

And that happens to you, yeah.

Speaker 2:

Okay, so where's that coming from? Clearly you're not consciously thinking about it in your sleep. That's because in your sleep your subconscious is still doing all that pattern matching, and it says, oh, that's probably a wrong variable name, or whatever it might be. Right?

Speaker 1:

Yeah, that's a fascinating phenomenon. I don't know how it works. I want to go back to something you said earlier. I made the claim that there are better and worse programming principles, and it sounds like you don't agree with that claim.

Speaker 1:

I want to add an argument behind that claim. I think that the quality of a proposed programming principle can be measured via explanations. If a programming principle has a good explanation behind it, that's better than a programming principle that doesn't. For example, I think it's objectively better to program using testing than to not, and I can back that up with a large amount of very detailed explanations for why it's better. But usually people who argue against automated testing don't have explanations behind it, other than, I've never done it and my life has turned out great, or some variation of that. So I think the explanation component is a big part of it.

Speaker 2:

What constitutes an explanation? Give me just one, of why testing is good.

Speaker 1:

Yeah, good question, because there's no option for the software not to be tested. There's only a question of how it gets tested. It gets tested either by the developer at the time of coding, or by the developer's boss when they're doing some QA checks or something like that, or a QA person, or it gets tested by the users in production. It always gets tested somehow. The only question is who does it and how much it costs, how much damage the defect does and how much it costs to fix it.

Speaker 2:

So if you were doing an exploratory rapid prototype for a customer who understood that what they were getting was a rag doll they could play with, would you still be compelled to write tests?

Speaker 1:

Not necessarily.

Speaker 2:

So then, testing is not a universally good thing to do.

Speaker 1:

Hmm, well, sure, but that wasn't exactly my argument. My argument is...

Speaker 2:

But if you say that testing is better than not testing, without qualifying that, it cannot be true.

Speaker 1:

Okay, then I'll say in general testing is better than not testing.

Speaker 2:

Ah, the old fallback: "in general."

Speaker 1:

Yeah, yeah, I mean, maybe. There are exceptions to everything, you know.

Speaker 2:

Yeah, I think part of the problem is that I am not averse to testing. I do it myself.

Speaker 2:

But what I'm averse to is testing because it's a religion, and I think the Ruby community has got that disease worse than most. I think sometime in the late first decade of the century it got to the point where people were saying, well, if it hasn't got tests, I'm not going to use it, or whatever, and people were getting really arrogant. They produced these ridiculous ideas like BDD and frameworks where you actually ended up writing more code in your BDD matchers than you did in your application, and that was based on the totally false premise that if you could write your tests in a language the user could understand, then the user would validate what you were doing. Never, ever happened. And yet still to this day there are people out there doing BDD, writing mantras that no one ever reads.
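As a rough sketch of what "more code in the matchers than in the application" can look like, here's a hedged Ruby example; the shipping_cost method, the matcher name, and the RSpec usage are invented for illustration, not from the episode.

    require "rspec/autorun"

    # The entire "application" here is one small method.
    def shipping_cost(weight_kg)
      weight_kg > 10 ? 20 : 5
    end

    # A BDD-flavored custom matcher: already longer than the method it checks.
    RSpec::Matchers.define :charge_flat_rate_shipping_of do |expected|
      match do |weight|
        shipping_cost(weight) == expected
      end
      failure_message do |weight|
        "expected #{weight} kg to ship for #{expected}, got #{shipping_cost(weight)}"
      end
    end

    RSpec.describe "shipping" do
      it "charges the light rate for small parcels" do
        expect(2).to charge_flat_rate_shipping_of(5)
      end

      # The same kind of check as a plain one-line expectation:
      it "charges the heavy rate for large parcels" do
        expect(shipping_cost(12)).to eq(20)
      end
    end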

Speaker 1:

Okay, so the idea that testing is good, sure, but that doesn't mean that more testing is better. I agree. Easily 90% of, I'll say, Rails developers, because that's what I have experience with, are doing testing not just poorly but extremely poorly. It's unbelievable.

Speaker 2:

Yep, yep. And the same goes for test-first. I mean, test-first is, I think, an interesting approach to take if you know what you're doing, but if you don't, it just leads you deeper and deeper down a rabbit hole.

Speaker 1:

You know, that's a really interesting and important aspect of testing. I've been talking lately about the red-green-refactor loop. It's almost universally the first thing that's presented in TDD education.

Speaker 1:

But I think it's really not the place to start. I've thought about what I might replace it with, and I came up with something that I call specify, encode, fulfill. First you come up with your specification: what is it that I want to do? Then I take that specification and encode it in an automated test, and then I fulfill that specification by writing the code to satisfy that test. And the really important distinction I want to make is that sometimes the way you come up with the specification is by coding. People say, yeah, TDD is great sometimes, but if you don't have a clear idea of what you want to do, then it doesn't work.
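To make the specify, encode, fulfill sequence concrete, here's a minimal Ruby sketch; the discount example and the Minitest usage are illustrative assumptions, not something described in the episode.

    require "minitest/autorun"

    # Specify: "orders of $100 or more get 10% off."
    # Encode: capture that specification as an automated test.
    class DiscountTest < Minitest::Test
      def test_orders_of_100_or_more_get_ten_percent_off
        assert_in_delta 90.0, discounted_total(100.0)
      end

      def test_smaller_orders_pay_full_price
        assert_in_delta 50.0, discounted_total(50.0)
      end
    end

    # Fulfill: write the code that satisfies the encoded specification.
    def discounted_total(amount)
      amount >= 100 ? amount * 0.9 : amount
    end

In the "specification by coding" case described above, the only change is that a throwaway version of discounted_total gets sketched first, before the test is written.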

Speaker 1:

I'm like, no, no, that can still be TDD. It's just that the way you're coming up with the test you're going to write is by coding.

Speaker 2:

I mean, what you're really talking about there is effectively many prototypes. Yeah, I would certainly go along with that. Do you then throw away that original code that you wrote and rewrite it, having written the test, or do you carry it forward?

Speaker 1:

Sorry, you cut out. Can you say that again?

Speaker 2:

Sure. Okay, so you've written a bit of code to try the idea out before you write the test, and then you write the test. Do you then rewrite that code from scratch, or do you carry forward your prototype?

Speaker 1:

I don't have a single approach that I take every single time, and I'm trying to think of what I normally do. I think normally I throw out what I've done and start over, but don't quote me on that, because I'm not sure that's what I really do most of the time. I mean, I think that's the sensible thing to do.

Speaker 1:

I think people get too attached to their code; they hold it too dearly. People think that the one and only purpose of code is to create a work product. That is one purpose of it, but it's also just a medium for thinking, and throwing away code doesn't mean it's a waste, any more than throwing away a piece of paper means that what you wrote on it was a waste. Writing is a medium for thinking too, and sometimes you need to write down some notes, then crumple up the piece of paper, throw it away, and write something different. You write drafts and then throw out the drafts and write something different. Same with code. People are reluctant to do that, but I think they shouldn't be.

Speaker 2:

You're absolutely right, 100% right, and particularly for two reasons. First of all, back in the old days, before we had AI, the second time you wrote a piece of code it would typically go ten times faster. What do I mean by that? Okay, so you write some really hacky function because you don't know exactly what it's supposed to do or how it's supposed to do it, right? That's your kind of prototype before the test. Then you can write the test, if you must, and then you go back and write the code again that actually implements that function. Writing that code the second time goes way, way faster than it did the first time, partly because you know what you're doing: you have the competence, you've made a whole bunch of those little micro-decisions that you no longer have to make. It just flows. And not only does it flow, but typically, in my experience, that second time around it's better code. You can apply more subtle ideas to it, because you've already solved the hard problem.

Speaker 1:

Yeah, that's definitely my experience too. Man, I'm really enjoying this conversation, and I feel like we could talk for hours and hours more. I fear that my computer can't handle much more recording, and I don't want to risk a crash and losing the whole episode, so we should probably wrap up soon. But I'd definitely like you to tell us a bit more about your book and share where people can find it.

Speaker 2:

Okay, thank you. Just talking about throwing stuff away: over the weekend I made one of the hardest decisions I've ever made writing a book, and I threw away the first 30 pages of the Simplicity book and replaced them with two pages. So it doesn't just apply to coding. Simplicity is simply a book where I try to help people regain control. The agile movement has lost; it's been commoditized, it's been corporatized, and so individual developers now don't really get the option to exhibit agility. So I've been trying to find ways developers can take back control, not necessarily of the world, just of what they do day to day. And looking at the state of software, it's basically just too damn complicated right now.

Speaker 2:

The stuff we do is just over-the-top complicated, and it needn't be. It's complicated because we make it complicated, because we follow what other people tell us to do. We use the brand new shiny, sparkly frameworks or whatever else; we use techniques that we're told are good, all of that kind of stuff. So the book is an attempt to teach people to look more cynically, perhaps, or at least objectively, at the things they do, and to perform experiments: modify something and see what impact that has, and if it works, then okay, try it again, modify it again, find out what happens.

Speaker 2:

I mean, one of my big things is meetings and, in particular, ritual meetings like stand-ups. Right, we do that because we're told we have to, because that's agile. It's anything but, and there are many different ways of getting exactly the same benefits without the ridiculous cost of all of these meetings. It's just a question of recognizing that this is way over the top for what I need, and then coming up with different ways of achieving the same result. That's really all the book is about: developing that intuition, and then having the courage and the discipline to experiment and see what happens if you change it.

Speaker 1:

And Dave, where can people go to learn more about this book?

Speaker 2:

Oh, well, right now it's in beta, and it will be until sometime in the summer, just because the publishing industry is so slow. You can get it in PDF and EPUB formats from pragprog.com; just look for Simplicity.

Speaker 1:

Okay, and by the way, not the other day, last year, I was working with a younger programmer, and I was talking about, you know, programmers that everybody knows about, for example Kent Beck, and the guy was like, who's Kent Beck? I'm used to assuming that everybody knows everything that I do, and I assume that everybody knows about The Pragmatic Programmer. But for anyone who's not aware yet, are there any of your past works that you want to share?

Speaker 2:

Well, it's funny. The Pragmatic Programmer is 25 years old, more actually, for the original version. We updated it in 2019 for the 20th anniversary edition, but I'm still really quite proud of that book. It's a language- and technology-agnostic look at what we do as programmers, and it's written as a series of short, one- to maybe five- or eight-page pieces. I think we call them recipes or something, I can't remember now. Tips, that's it, they're called tips. Each one just addresses one particular thing, and none of this is original, none of it. A lot of people can look at it and say, well, it's just common sense, but the reality is it's common sense that most people know but don't do. So yeah, it's been an interesting ride. It's 25 years old and it's still in the top 10 among Amazon programming books, which is kind of scary.

Speaker 1:

Yeah, I noticed that. When my book came out, Professional Rails Testing, I don't have a copy within arm's reach, but that's the book I put out in October of 2024, on the first day I was proud to see that it went up to within a couple of positions of The Pragmatic Programmer, just ever so briefly, and then it went way back down again. But I was like, oh, The Pragmatic Programmer is number, I don't know, maybe it's number one, I don't remember.

Speaker 2:

It's quite high up there. In Amazon's testing category it's been number one for a couple of months, and I have no idea why it's in testing.

Speaker 1:

Interesting. Yeah, that must be gratifying, that it's still so popular so many years later.

Speaker 2:

It is, it really is. It's just annoying that we didn't publish it ourselves, but you know, who knows.

Speaker 1:

As is life.

Speaker 2:

As is life, yeah.

Speaker 1:

Well, Dave, I've really enjoyed this, and thanks so much for coming on the show.

Speaker 2:

Sure. Maybe we can solve the other half of the universe's problems some other time.

Speaker 1:

Indeed. All right.

Speaker 2:

Well, a total pleasure. Thank you, Jason.

Speaker 1:

Thank you, thank you.