Code with Jason
294 - The Dubious Idea of Code Reuse with Dave Thomas
In this episode I talk with Dave Thomas about why code reuse is overrated, the economics of programming principles, and why we can't empirically test whether practices work—we have to scrutinize the arguments behind them. Dave also discusses his new book Simplicity and his "developer without portfolio" concept.
Hey, it's Jason, host of the Code with Jason podcast. You're a developer. You like to listen to podcasts. You're listening to one right now. Maybe you like to read blogs and subscribe to email newsletters and stuff like that. Keep in touch. Email newsletters are a really nice way to keep on top of what's going on in the programming world. Except they're actually not. I don't know about you, but the last thing that I want to do after a long day of staring at the screen is sit there and stare at the screen some more. That's why I started a different kind of newsletter. It's a snail mail programming newsletter. That's right. I send an actual envelope in the mail containing a paper newsletter that you can hold in your hands. You can read it on your living room couch, at your kitchen table, in your bed, or in someone else's bed. And when they say, What are you doing in my bed? You can say, I'm reading Jason's newsletter. What does it look like? You might wonder what you might find in this snail mail programming newsletter. You can read about all kinds of programming topics like object-oriented programming, testing, DevOps, AI. Most of it's pretty technology agnostic. You can also read about other non-programming topics like philosophy, evolutionary theory, business, marketing, economics, psychology, music, cooking, history, geology, language, culture, robotics, and farming. The name of the newsletter is Nonsense Monthly. Here's what some of my readers are saying about it. Helmut Kobler from Los Angeles says thanks much for sending the newsletter. I got it about a week ago and read it on my sofa. It was a totally different experience than reading it on my computer or iPad. It felt more relaxed, more meaningful, something special and out of the ordinary. I'm sure that's what you were going for, so just wanted to let you know that you succeeded. Looking forward to more. 
Drew Bragg from Philadelphia says Nonsense Monthly is the only newsletter I deliberately set aside time to read. I read a lot of great newsletters, but there's just something about receiving a piece of mail, physically opening it, and sitting down to read it on paper that is just so awesome. Feels like a lost luxury. Chris Sonnier from Dickinson, Texas says, just finished reading my first nonsense monthly snail mail newsletter and truly enjoyed it. Something about holding a physical piece of paper that just feels good. Thank you for this. Can't wait for the next one. Dear listener, if you would like to get letters in the mail from yours truly every month, you can go sign up at nonsense monthly dot com. That's nonsensemonthly dot com. I'll say it one more time nonsense monthly dot com. And now without further ado, here is today's episode.
SPEAKER_02:Again, you make it sound like a chore.
SPEAKER_00:I'm sorry, um, yeah, you were on the show not all that long ago, some months ago, and it was a wonderful episode, in my opinion. That was my first time talking with you, and I felt like we really hit it off. That was very nice, and here we are again. And Dave, you told me right as we got on this call, before we hit record, that you tweeted something that got some engagement. What was that?
SPEAKER_02:Yeah, so I was going through, reviewing one of the books that we're about to publish, and there was a quote. Let me see, actually, see if I can dig up the quote, because it's probably worth it. Hang on.
SPEAKER_00:Uh and while Dave is looking for that, um, it's it's something to do with abstraction.
SPEAKER_02:Actually, okay, I'm not gonna go search Twitter. So, yeah, basically it's a book about how to do quote proper unquote functional programming in Elixir. And it started off by saying that reuse was fictional and that most abstractions have rotted before they get finished. And I thought that was actually quite an interesting, deliberately provocative, but interesting kind of take on a kind of fetishism that's going on at the moment in software development. And so I just tweeted that as a kind of throwaway. And it's probably now become my most engaged tweet. I have Bob Martin weighing in, I got DHH weighing in. Oh wow, and yeah, Bob Martin is kind of like, uh, as you'd expect. DHH was kind of interesting. He almost tried to moderate between the two of us. But yeah, I've had the whole range, from "you go girl" to "you idiot."
SPEAKER_00:I love it. Yeah, I look at it in a similar way. People are real big on code reuse, but I think reuse is extremely overrated.
SPEAKER_02:Yeah, I mean, the reality is it's twofold. First of all, what do we actually mean by reuse? What is it that we're actually aiming to be able to do? And what price do we pay in order to achieve that? I think that if you're writing a library or a framework, or you know, if you're on a team and you're writing code that does something that lots of people want to be able to do, then clearly the job is to make it reusable, and that's fine. But if we're just sitting there writing code, the chances of the particular thing that we are writing needing to be reused are pretty much zero, you know? So why should we be bending the code and putting extra effort into making it reusable, particularly when that effort actually ends up making the code quite often harder to read or harder to reason about? So I think chasing reuse as an abstract concept is just plain dumb.
SPEAKER_00:Yes, uh I would agree completely.
SPEAKER_02:Um well that's a pretty quick podcast.
SPEAKER_00:Okay, so, as I so often do, I'm gonna get philosophical for a second, but I think it's really important. My personal rationale behind my belief that paying this price to make code more reusable is not a very good idea stems from my economic view of programming. What I mean by that is, we talk about, and you I think kind of quoted this word earlier, the proper way to do programming, the right way and the wrong way, this is a good idea, this is a bad idea. But I think that itself is not a good way to frame programming principles. It's more like costs and benefits. For example, let's take something totally different, like automated tests. Every test you write is a bet. Or you could call it an investment that may or may not have a positive return. And it might not be knowable whether you'll ever get a return on your investment for any individual test. I'm fairly certain that you'll never know for the vast majority of your tests. But you can know that in aggregate, the general policy of making all those little bets is gonna have an enormous positive return. And so it's not about each individual investment giving a positive return, it's about your entire investment portfolio giving a positive return. And so, coming back to code reuse, when I generalize a class early on to try to make it reusable, I'm making a bet. I'm paying a price to make it reusable, because it's gonna take more time to change that code from being single purpose to reusable. And I'm of course incurring the opportunity cost of not doing something else with my time. And I'm betting that on average that policy is gonna have a positive return, if I'm the kind of person who believes that. But my personal view, and I think I'm right about this, is that that's not a good betting strategy. On average you're gonna lose.
On average, you're gonna spend this extra time to make something reusable, and you're not gonna get that payback.
SPEAKER_02:I love the analogy. I really do. I think that's perfect. But unlike betting, where you're dealing with random events, here we're actually dealing with something we can measure. So again, not short term, but long term, there's actually a statistical way of looking at it too, which is: okay, on this particular project, I am going to design everything to be reused, and I'm gonna measure the cost of doing that, in terms of, you know, if I did this, it's gonna add an extra X percent to whatever I do. And then on the next project, I'm not. On the next project, I'm gonna make it as simple and as concrete as possible, and I'm gonna see, did I achieve the savings that I was estimating on the previous project of the cost of doing reuse? Okay, so that's part one of the equation. Then part two of the equation is, a year from now, I say, how much did that extra effort of reuse pay back when it came to making changes to my code? Right? Was there a net positive or a net negative? And the same for the code where I didn't aim for reuse: how much extra time did I end up spending because I hadn't baked reuse into this code? And two is not exactly a good sample, but a dozen might be. And so we can actually sit there and do the math and come up with an optimal point, assuming that all of our projects are roughly similar. We can come up with a point to say, okay, yeah, this much reuse built in is actually worthwhile. And this much is not. So it's kind of like you're playing a game of chance where, by playing, you actually get to work out what the odds are.
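Dave's two-part measurement boils down to simple arithmetic. Here's a toy sketch in Python; all the hour figures are invented for illustration, not data from any real project:

```python
# Toy model of Dave's experiment: for each project, record the extra
# up-front cost of designing for reuse, and the hours that investment
# saved (or failed to save) when changes arrived a year later.
# All numbers are made up purely for illustration.
projects = [
    {"reuse_cost": 40, "later_savings": 15},  # reuse never materialized
    {"reuse_cost": 35, "later_savings": 90},  # reuse paid off big
    {"reuse_cost": 50, "later_savings": 10},
    {"reuse_cost": 30, "later_savings": 25},
]

# Net return in hours: positive means designing for reuse paid off overall.
net = sum(p["later_savings"] - p["reuse_cost"] for p in projects)
print(f"Net return across {len(projects)} projects: {net} hours")
```

As Dave says, four data points is not a good sample, but a dozen roughly comparable projects would start to reveal the odds.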
SPEAKER_00:Yeah, this touches on such a can of worms, such a fundamental problem in programming. The problem being, in my opinion, which seems to maybe differ from your opinion based on what you just said, that empirical measurement of the quality of programming principles is impossible, because there are too many confounding variables in any study you could do.
SPEAKER_02:Yeah, I don't necessarily disagree. I think you're never gonna be able to isolate a variable, I don't think. But I think that you can say that some variables are dominant and some are less relevant to the overall picture. And so you can design your experiments to highlight, I guess, the dominant features and to play down the other ones. And you're not going to get an exact number, you're never gonna come up with a percentage or something, but I think you can still get a good sense of, you know, whether or not it works. I mean, take testing as a good example, right? At a first cut, the simple thing to do would be, on one project do your normal testing, on the next project do zero testing, and see what happens. All right, you're not gonna come up with a number, this was 12% better, but you are gonna know, my god, this was an absolute disaster, or you're gonna learn, hey, guess what? The world didn't end.
SPEAKER_00:I think that's dangerous. And the reason why is, let's take two examples. One where a very smart, experienced programmer implements a program using no tests. And then somebody else who's maybe smart, but they don't have any testing experience, and they try to do TDD for their project. And maybe even they work somewhere where the environment is not particularly supportive of TDD or testing at all. That second developer is gonna be so much less productive. And I've seen this so many times, not just hypothetically, but actually happen, where people experience a much lower level of productivity when they try to use testing than without testing, and they attribute that negative difference to testing, erroneously. But the problem isn't with testing, the problem is just that it's slow when you're first learning, and if you're in an environment where you have all kinds of headwinds against testing, then it's not gonna go well, and it's really easy for people to draw the wrong conclusion. So that's why I say that's dangerous.
SPEAKER_02:It is, yeah, it is dangerous. But here's the thing, two things. First of all, one of the problems that we have in software development is that we have people like me who go around saying, well, I try really hard not to do this, but go around saying this is the way to do it, right? Use objects, use functions, use this, use that, do it this way, do it that way. And every single one of those people is wrong, because there are no absolute ways to do things, right? Everything is contextual: it's contextual on your environment, it's contextual on your team, it's contextual on your experience. So whereas not writing tests might be a perfectly good strategy for one person, it could be a disaster strategy for another person. How do you know? You know by trying it. And that's the only way to know.
SPEAKER_00:I have a different argument.
SPEAKER_02:Okay, but that's fine. I mean, that's not unreasonable. But I think the point here is that for your junior developer, the feeling that not writing tests makes them faster is probably true in the moment, but that's not the test. That's not the feedback. The feedback is, a year from now, what has it done to your code? Right? With no tests, do you continue to see that benefit, or are you now spending far more time fixing stuff because you don't have the regression tests, or because there are bugs that crept in that you didn't notice because you didn't have tests? That's the feedback loop. It's not how do you feel when you're writing the code, it's how do you feel long term.
SPEAKER_00:And I just have to mention as a side comment: so many people who try testing and decide it's not for them, the thing that they find they don't like isn't actually testing, it's shitty, stupid testing. So much educational material out there teaches testing in a way that doesn't make very much sense and is not going to be very helpful. And so they try it and they're like, well, this sucks. And they're right, because it does suck if they're doing it in a way that doesn't make any sense. So that's another one of the unfortunate headwinds in the community for learning TDD: so much of the mis-education.
SPEAKER_02:Yeah. I mean, if you go back to the test-first school, purists will have you write a failing test before you've actually written any code, you know? Like a test to make sure the class exists. Oh, it fails. I'll have to create the class, right? What kind of brain-dead benefit can you possibly imagine that would bring, apart from loading your test code base down with thousands and thousands of lines of totally useless code, which not only is useless but actually gets in the way should you choose to do something like refactor and change the name of something. Right? So all of this, the idea that somehow you're being good because it's painful, it's kind of like eat your vegetables, right? You don't want to do it, but you know it's the right thing to do. Bullshit, right? What you should be doing is whatever is benefiting the overall, you know, fight against entropy of the world, right? And that's not necessarily localized. It could well be you're doing things now because a year from now that will pay off. But don't just sit there and blindly follow any strategy, any path that just tells you always do this, right? Instead, think of everything you do as being, well, how is this helping? And if it's not helping, how do I show it's not helping?
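The degenerate "test that the class exists" step Dave criticizes, next to a test that actually asserts behavior, might look like this in Python (`Invoice` and its tax calculation are hypothetical, not from the book under discussion):

```python
class Invoice:
    """Hypothetical class used only to illustrate the two kinds of test."""

    def __init__(self, amount):
        self.amount = amount

    def total_with_tax(self, rate):
        return self.amount * (1 + rate)


def test_class_exists():
    # The degenerate test-first step: it asserts nothing about behavior
    # and breaks the moment the class is renamed during a refactor.
    assert Invoice is not None


def test_total_with_tax():
    # A test that encodes actual behavior, so it can catch real regressions.
    assert abs(Invoice(100).total_with_tax(0.2) - 120.0) < 1e-9


test_class_exists()
test_total_with_tax()
```

Both pass, but only the second one is earning its keep a year later.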
SPEAKER_00:Right. I totally agree. And to bring it back to something we were talking about earlier: basically, the question that I'm thinking about is, how do we know if any particular programming principle is any good? I asked that on Twitter one time, and I thought the responses I got were interesting. I think more than one response said something like "data," which sounds smart and rational, but is really not the way that anybody decides to buy a programming principle or not. And like we said earlier, there's very little actual meaningful data out there where somebody has done a longitudinal study to try to empirically test principles. What does that tell you?
SPEAKER_02:What does that tell you about principles?
SPEAKER_00:Let me, if I may, answer that. I'm sorry.
SPEAKER_02:Yeah, okay. Yeah, come on.
SPEAKER_00:Um I think the way that people decide whether they buy principles or not, and I think this is the right way, is um by scrutinizing the explanations and arguments behind those principles.
SPEAKER_02:May I speak?
SPEAKER_00:Please do.
SPEAKER_02:Excellent. So first of all, I think that the idea of deciding what's good and bad by scrutinizing the arguments is a fantastic idea. But, as modern discourse has shown, it is somewhat lacking in practice, in that we appear to be incapable of discussing anything rationally and dispassionately anymore. So although you would imagine that the presence of social media has greatly elevated our consciousness when it comes to what's good and what's bad, the actual reality is the exact opposite. It's polarizing, and it just drives people into their own opinions harder and harder. So although I agree with you in theory, I think practice has shown that that's not necessarily the case.
SPEAKER_00:Well, I think that's painting with a little too broad of a brush, but I'll let you continue.
SPEAKER_02:Of course it is. The other question, though, that's really interesting to me is: given that researchers are constantly looking for things to research, why are there no longitudinal studies? And I'd argue it's because no two software projects are comparable. Or very few are comparable. There are too many variables. You can't, you know, pin down everything apart from one variable and test it. So I think it's very hard to come up with studies that actually measure anything meaningful. For example, is it good to use long variable names or short variable names? Right? You would think that's something you could actually come up with an answer for. But in reality, I can't think of a single way of controlling for that test. Because everything is different, you know?
SPEAKER_00:And you know, there's no way to test something between two production projects in a contrived way. You can't make a fake production project, because then by definition it's not a production project, and no one would ever sabotage their own production project by flouting some rule just for the sake of an experiment.
SPEAKER_02:Right. I mean, you could arguably say, okay, I'm gonna look at existing projects and I am going to somehow categorize them into this team used short variable names, this team used long variable names, and then I will look at the downstream consequences. But I can think of half a dozen different factors that have already put the kibosh on you there. For example, are you in an industry where there are well-known acronyms for things, in which case you would tend to have shorter variable names than you would otherwise, but they'd be just as meaningful? Are you in a scientific environment, in which case you are very likely to have very short variable names? Are you a bunch of Fortran programmers, in which case all your variables are going to be I, J, and K? You know, there's a whole raft of factors that would undermine trying to make a decent experiment out of that. I don't know. I'm not even 100% sure what I'm arguing here, apart from the fact that I'm not convinced that we are yet in a position. Okay, let's say it that way. If we were a proper engineering discipline, then we would have the framework in place to be able to run those experiments. Because in an engineering discipline, we would be able to control the variables, we would have baseline data against which we could test, and, you know, we would have objective measures that we could apply. And I think everybody would like us to become an engineering discipline. We're not yet. We're nowhere close yet. It was quite interesting, I was actually talking to somebody yesterday about the idea of, you know, how mature is software development?
And they were saying, well, I think you could draw the parallel with the automotive industry, for example. You know, the automotive industry is clearly engineering, and they're doing all of the things: they're measuring, they're looking at all the various factors, and safety, and material science, and all this kind of stuff. So I said, okay, so how old is the engineering industry? Uh, sorry, the car industry. And they said, oh, probably about 120 years. So I said, okay, now look at the car industry 60 years in, in the 1960s, for example, right, which is where we are now in software development. That was the era of Ralph Nader, Unsafe at Any Speed. It was the era where people were getting their chests impaled on their steering columns whenever they braked. No one wore a seatbelt. It was the era where we were putting fins on the backs of cars, you know, for no discernible reason apart from the fact it made them look like spaceships. That's where we are right now. We're at the point in software development that the automotive industry was at in the 1960s. And I would love us to move forward, and we are moving forward, but we're not moving forward by saying we are going to mandate this or do that. We are moving forward simply because each year we get a bit better, you know. And I think that's the best way we can move forward now.
SPEAKER_00:Something that I'm trying to do, not that I'm trying very hard to do it, but something I'm trying to do, is move us forward a little bit in a certain fundamental way. And I'll try to explain it like this. Okay, so let's take a random sample of a moment in time, let's say the year 1300 or something like that, and then take another moment in time, the year 1800, 500 years later. In between those two times, there was an enormous leap forward in science. In fact, basically, before 1300 we didn't have science, and sometime between 1300 and 1800, we went from not having science to having science. And what gave us science, I think, was that we got better philosophy. And I think what the programming industry needs is better philosophy, and I think that is one of the things that will move us forward in a fundamental way.
SPEAKER_02:I a hundred percent agree, a hundred percent agree. One of my sons did the great books program at St. John's College, where in the first year they learn ancient Greek and then they read the philosophers in Greek, and then basically throughout the four-year course, they go through what, in the opinion of the college, were the most significant books of everything, right? So they read philosophy, they read literature, they read science, just going forward. And I was recently packing up and came across his books from, I can't remember, either the third or the fourth year, about electromagnetism. The college prepared for them extracts from various books, and one of them was by an English guy, I can't remember his name, who was experimenting with electricity. And they had reached a point where they knew that if you got, what was it, zinc and copper, and you interleaved it with paper and soaked the whole lot in salt water, you would get electric charge, or electric potential. And this guy's paper describes his scientific method, where he would construct a cell of, I think it was 30 layers of this, and he would test it by putting his tongue across the wires. And if it hurt, then his battery was successful. And then he would put electrodes into salt water and see what happened. And he would get bubbles, and after a while, I don't know how long it took, he determined that one of the sets of bubbles was hydrogen, and he noticed that he got corrosion against the other electrode, because of the oxygen. And if he swapped the battery connection around, it reversed: he got hydrogen off the other electrode, and the corrosion on the first one went away. And the remainder of the paper was him trying to describe why this was happening.
And he had basically no idea of what was happening, but he came up with a whole bunch of really quite interesting, fanciful ideas, you know. The thing about the paper is that 20 years later, somebody else read it and said, oh, that's interesting, and went away and repeated the experiment, and then tried to work out what the mechanism might actually be, and came up with the idea of charge moving, you know, charged particles and everything else. And then that got taken up and moved further and further. And within, I think it was 80 years, we had Maxwell, you know. And so the principle here was that it wasn't necessarily trying to solve the problem, it was simply observing the problem and documenting the problem and thinking about it. And then the combined effect of many, many people doing that is what led to Maxwell's equations and us being able to talk like this. So I think in software we are not at the Maxwell stage yet, nowhere close to it. And so of the philosophical things we need to address, I think one of them is definitely the idea of scientific method, and the idea of experimenting and not being afraid to experiment. You don't have to optimize everything. Sometimes it's good to fail and to know why. And then the second thing is, we need to find a way of documenting and describing what we do, so that some future people will be able to look at that and go, ah, wait a minute, and draw it all together.
SPEAKER_00:Yeah, yeah, I agree. And the little bit that I hope I can contribute is this idea, or I'll call it an assertion, because I can't really call it a fact. How do I characterize it? Oh yeah: this assertion that we can't judge programming principles based on empirical experiment. We have to go off of the explanations and the arguments behind them. If I can be successful in getting people to buy that idea, that would make me happy, because I think that is like a cornerstone on which other things can rest and make our debates more productive.
SPEAKER_02:But what are the underlying truths that those arguments are based on? Right? If you can only judge an idea by the ideas that support it, what supports those ideas?
SPEAKER_00:Huh. Well, I don't know if I'm saying that exactly. Maybe I am, but I'm not sure that you can only... what was it? Only support an idea. Say what you said again about empiricism. Yeah, I don't think the merit of a programming principle can be tested empirically. Its merit has to be judged based on its explanation. Like, supposedly this principle has merit. Why? There has to be an explanation.
SPEAKER_02:Okay, so let's... I mean, I'm sorry, I don't want to come across as aggressive here. I just want to explore this. Okay, so don't take this as an attack.
SPEAKER_01:Uh-huh.
SPEAKER_02:Um so do that for testing.
SPEAKER_00:Testing?
SPEAKER_02:Yeah, tell me why testing. Give me a give me uh the the reasons why testing is a good idea.
SPEAKER_00:Yeah, all right. So let's see how well I can do just on the spot here.
SPEAKER_02:Um choose another one if it's easier.
SPEAKER_00:No, no, testing is a great one because I'm kind of I have a reputation as a testing guy, so this is a very appropriate one for me.
SPEAKER_02:Um, just to interrupt here. When people call you a testing person, they don't mean it about coding. It's the other kind of testing they're talking about. Hmm. Yeah. You test them. Oh, right. I test their patience.
SPEAKER_00:Ignore me. Ignore me. Okay, so in order to answer that question, I have to at least briefly answer the question, what is testing? Because that's kind of a prerequisite, I think. To me, testing is this. Forget about red-green-refactor for a minute. I have a different intro that I like to use, which is what I like to call specify, encode, fulfill. First you decide on the specifications of what you want to make, then you encode those specifications in an automated test, and then you fulfill the specifications that you encoded by making those tests pass. So to me, that's the general idea behind it. And there's more than one argument behind this. I'll give as many as I can until you tell me to stop or I run out of things to say. One is that your work goes better if you decide what to do before you do it than if you just sit down and start coding in a stream-of-consciousness kind of way. As an industry, we have had an allergic reaction to waterfall development and big design up front, but I think our reaction has been an overreaction, and now we do no design up front, and a programmer will have a vague idea and then just immediately start typing. And to me that's often very wasteful, and it pays off to spend at least a little bit of time in advance thinking about what you want to accomplish. Because, I mean, how can you accomplish something unless you know what you want to accomplish? So that's one argument. I'll just pause there.
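Jason's specify, encode, fulfill loop can be sketched in a few lines of Python (`slugify` is a hypothetical example function, not something from the episode):

```python
# Specify: "slugify turns a title into a lowercase, hyphen-separated slug."
# Encode: capture that specification as an executable test.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   Everywhere ") == "spaces-everywhere"


# Fulfill: write just enough code to make the encoded specification pass.
def slugify(title):
    return "-".join(title.lower().split())


test_slugify()  # the encoded specification is now fulfilled
```

The point of the ordering is that the decision about what the code should do is made, and written down, before any implementation typing starts.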
SPEAKER_02:Okay. I think that's very, very valid. I like your specify, encode, fulfill loop. And I agree 100%, more than 100%, that you should know what you're trying to achieve before you start doing it. On average. I mean, that's not always the case, but obviously that's the average case. Where I think you've pulled the rhetorical wool over people's eyes there, though, is the assumption that you know what you want to do. And I think at one level it is obviously to be hoped that you do. At a practical level, though, often that's not the case. And often what we're doing is exploring. And when we're exploring, the test that we're doing is not testing to see if what we've got is a correct implementation. What we're testing is whether or not we're actually asking for the right thing. Yeah, it's like, I don't know, choice of an algorithm, choice of a framework. Uh, sorry about that, I have a very old dog who's knocking things over. Things like, is this gonna work as a UI or not? Right? Should I color the negative numbers red? I mean, all of these things where we don't know until we've tried it. In which case the test is not the functionality, the test is the choice of functionality.
SPEAKER_00:The test is not the functionality, the test is the choice of functionality. What do you mean by that?
SPEAKER_02:In those, in the... well, I don't know the correct way to do this, or I don't know if there is a good way to do this, or I don't know if my ideas are going to work, or I don't know if the customer's going to like this. So it's not a question of whether this thing that I'm doing is correct, it's a question of whether or not this is the correct thing to do. There's a really great example of that. Have you come across Ron Jeffries' Sudoku debacle? Ron Jeffries came up with the idea of documenting himself writing a test-first Sudoku solver. And he got, I don't know, maybe four weeks into this, worried about how to represent the board and how to display the board, and lots and lots and lots of tests, and basically stopped, because he didn't really have any further way to go. He didn't actually know how to solve the Sudoku. And... oh, my brain just died. I can't remember the guy's name.
SPEAKER_00:You're thinking of a different guy from Ron Jeffries?
SPEAKER_02:Yeah, yeah. Someone else came in and basically said, okay, Ron, you're not going to solve this test-first, right? What you have to do is understand that this is, I think it was probably a minimax solution or something. There are algorithms to solve this, and what you're doing is this. And the guy then just wrote the solution in a couple of hundred lines of whatever it was, and it was done. And that was a really powerful example of: in test-first, you can test all you want, but if you don't experiment and don't explore, you're not going to get to the end, you're not going to get the thing finished. So I think that you're right in a certain context, but there are definitely contexts in which the base assumption, which is that you can specify what you want, just doesn't hold.
SPEAKER_00:Yeah, that's interesting. By the way, I want to make a meta comment about the nature of the discourse that we're having right now. We're having a civil argument in which I think both of us are prepared to concede points to the other.
SPEAKER_02:Um I I'm not arguing, I'm learning, honestly.
SPEAKER_00:I'm using argue in a different sense. Like, we're presenting arguments to each other. We're not trying to strong-arm each other; we're doing an exploration together, which is something that I wish a lot more people would do. Anyway, with the Sudoku thing, that is an excellent point. And to me, tell me if this is consistent with your interpretation of it, it seems like maybe Ron Jeffries skipped the specify step of specify, encode, fulfill, because he didn't figure out in advance, like you said, what he needed to do.
SPEAKER_02:Well, maybe, but I think in his head, his spec was: here is a partially filled Sudoku board, and the result of the program will be a completely filled Sudoku board, right? So for him, that would be the specification, that would be the test, if you like. He would be building up to the triumphant execution of that particular test. And I think for him, and for many of the TDD folks, the step between is labeled somewhat dismissively as "that's just a matter of implementation."
SPEAKER_01:Hmm.
SPEAKER_02:You know, and I think uh no carry on.
SPEAKER_00:I was gonna ask you: if somebody put you in a room and said, Dave, implement a Sudoku solver, you can't come out of this room until you're done, but you can use the internet, whatever resources you want, how would you go about approaching that problem?
SPEAKER_02:Well, it depends on how honest you want me to be. I would say: Google "Sudoku solver program" and see how other people have done it. If I wasn't allowed to do that, then... wait, let's say you are allowed to do that. Right. In fact, I have done that, just as a matter of curiosity, to see what the various approaches are. And it turns out it's actually a... oh no, I'm sorry, I'm thinking of nonograms. Nonograms are a non-trivial problem; Sudoku is a bit easier. But it's an optimization problem, a well-understood optimization problem. And the typical approach is a kind of outside-in... what's the word I'm looking for? Can't remember. It's a deterministic algorithm to solve it. If I wasn't allowed to use the internet, then I think my approach would be to look at a smaller board and by hand try to work out what I was doing to solve it.
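For what it's worth, the well-understood deterministic approach Dave is reaching for is usually plain backtracking search over the empty cells. He doesn't name the algorithm, so that's my assumption; here's a minimal sketch on the smaller 4x4 board he suggests, with 0 marking empty cells:

```ruby
# Returns true if n can legally go at (row, col): no clash in the
# row, the column, or the enclosing box.
def valid?(board, row, col, n)
  size = board.length
  box = Integer.sqrt(size)
  return false if board[row].include?(n)
  return false if board.any? { |r| r[col] == n }
  br, bc = row - row % box, col - col % box
  (br...br + box).each do |r|
    (bc...bc + box).each { |c| return false if board[r][c] == n }
  end
  true
end

# Backtracking: fill the first empty cell with each legal candidate,
# recurse; undo and try the next candidate on failure.
def solve(board)
  size = board.length
  (0...size).each do |row|
    (0...size).each do |col|
      next unless board[row][col].zero?
      (1..size).each do |n|
        next unless valid?(board, row, col, n)
        board[row][col] = n
        return true if solve(board)
        board[row][col] = 0 # backtrack
      end
      return false # no candidate fits: dead end
    end
  end
  true # no empty cells left: solved
end

puzzle = [
  [1, 0, 0, 0],
  [0, 0, 3, 0],
  [0, 4, 0, 0],
  [0, 0, 0, 2]
]
solve(puzzle)
puzzle.each { |row| puts row.join(" ") }
# => 1 3 2 4
#    4 2 3 1
#    2 4 1 3
#    3 1 4 2
```

The same code handles a 9x9 board unchanged; the spec ("a filled board consistent with the clues") says nothing about this search, which is exactly the gap the two are discussing.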
SPEAKER_00:Yeah, exactly. So I think in both those cases you're doing the exact same thing, which is you're figuring out what you need to do.
SPEAKER_02:What I need to do, yeah. But that's not the same as a spec, right? Because the spec is: solve this problem. And what I'm doing is more like a functional spec, which is: how do I solve the problem?
SPEAKER_00:Well, there's different layers of specification. And obviously, TDD is not magical, and solutions won't emerge before your eyes. It's not a shortcut that allows you to not think very hard about hard problems, if that's what you're dealing with. I'm just going based off what you told me about the Ron Jeffries approach, but it seems like his approach was hollow, in that he had the shell of it and he had the methodology, but the actual meat of it, the solution, was missing. And there's no shortcut around that. If you want to TDD that, then you have to know what your specifications are to a sufficient degree of specificity. Otherwise, you're just going to have a board and the UI of it working without the actual algorithm solved.
SPEAKER_02:I think you're right. I think to some extent we're arguing about the meaning of specification. And I think where we differ is how deep a specification goes. In my mind, a specification is, at its most naive: given this input, you'll produce this output. And what happens in the middle is a black box. That, to my mind, is a specification. Would you agree or not? Okay, right. So what happens inside the black box is not in the domain of the specification, except for various non-functional things: it has to do it in 12 seconds, or it has to be in green, or whatever. That kind of specification does not take you towards a particular implementation. It is implementation-independent. And the pure testing folks, and maybe I'm wrong about this, but the pure testing folks tell us that we're not allowed to poke inside the box. All we can look at is inputs and outputs. And so in that respect, the spec is not going to tell us or guide us towards an implementation. We're going to have to do something different to get the implementation. Now, in a typical business case, you know, the total at the bottom of the invoice has to be the sum of the line items. Okay, that does not take rocket science to work out an implementation for that particular spec. And 99% of the stuff that we deal with every day is at that level, where the implementation of a particular spec is, well, duh, you know. But when you bump up against something which is not that simple, a mere test is not going to help you.
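Dave's invoice example as a pure input-output spec, where the check knows nothing about how the total is computed. The function and field names here are invented for illustration:

```ruby
# Black-box spec: given these line items (input), the invoice total
# (output) must equal the sum of the line-item amounts. The assertion
# never looks inside invoice_total; any implementation that satisfies
# the input-output pair is acceptable.
def invoice_total(line_items)
  line_items.sum { |item| item[:amount_cents] }
end

items = [
  { description: "widget", amount_cents: 1999 },
  { description: "gadget", amount_cents: 4550 }
]

raise "spec violated" unless invoice_total(items) == 6549
puts "input-output spec satisfied"
```

This is the 99% case Dave describes: the spec and the implementation are almost the same sentence. The Sudoku solver is the other 1%, where the input-output pair says nothing about the search in the middle.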
SPEAKER_01:Hmm.
SPEAKER_00:Okay, so I basically agree with everything you're saying. I would just add some things to it. But I also want to ask: the testing purists say that you should only test the inputs and outputs, and your tests are supposed to be ignorant of what's in between. It's a black box. Why do they believe that?
SPEAKER_02:I don't know, they're idiots.
SPEAKER_00:I love it. I think why they believe that is because they want to keep the tests sufficiently loosely coupled from the implementation.
SPEAKER_02:Yeah, and I was being fatuous, but yes, I think that's exactly right. But the reality is that that also assumes the implementations are atomic, in that they are a simple mapping of an input to an output. And in reality, like I say, 90% of the time that's true, but quite often it's also not true. And when it's not true, TDD fails us. So, in the spirit of compromise, a while back I produced a set of macros for Elixir, because I got really frustrated with this idea that you couldn't test private functions. I wrote a little macro that said: these functions are private, except when I'm in a test environment, and then expose them. Because when I'm writing complicated code, I want to be able to test my internal stuff, because that's where the bugs are, you know. I don't want to wait till I've finished the entire palace of code to discover I have a bug, you know? So I want to be able to test all my internal implementation. And so I wrote this. The vehemence with which people told me I was wrong, that you shouldn't do that... I came away thinking I must write code very, very differently to the rest of the world. But the idea of black-box testing, to me, is valid, it's fine, but at the level of a systems-level test, an acceptance-level test. If I'm building a complicated piece of code, I want to be able to test every single damn thing I can from the bottom up, and not just do the given-inputs-produce-the-given-outputs.
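Dave's actual Elixir macros aren't shown here, but the same idea can be sketched in Ruby, where `send` bypasses method visibility so a test can exercise internals directly. The class and method names are invented:

```ruby
# A class with a public entry point and a private internal step.
class PriceCalculator
  def quote(items)
    items.sum { |item| discounted(item) }
  end

  private

  # The internal step worth testing in isolation while the code is
  # under construction -- "because that's where the bugs are".
  # Integer cents and an integer percentage avoid float rounding.
  def discounted(item)
    item[:price_cents] * (100 - item.fetch(:discount_pct, 0)) / 100
  end
end

calc = PriceCalculator.new

# Black-box check through the public interface:
raise unless calc.quote([{ price_cents: 1000 }]) == 1000

# White-box check of the private step, via send, which ignores
# visibility -- the Ruby analogue of exposing privates under test:
raise unless calc.send(:discounted, { price_cents: 2000, discount_pct: 10 }) == 1800
puts "public and private checks pass"
```

The trade-off Jason raises next applies exactly here: the `send` check couples the test to `discounted`'s name and signature, so renaming the private method breaks a test even though no observable behavior changed.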
SPEAKER_00:Yeah, and it's it's so telling that people had an emotional reaction, it sounds like. Oh, yeah. And they told you that you were wrong. Um you know why?
SPEAKER_02:It's because it's because it's people want to be believers. They want to be told what's right, what's wrong. They want to be told how to do things. And there is nobody more passionate than somebody who has received wisdom.
SPEAKER_00:I I was wondering how that sentence was gonna end, and I auto-completed it in my head as there's no one more passionate than someone who holds an irrational belief.
SPEAKER_02:Ah, maybe. I don't think that's necessary.
SPEAKER_00:I think I think I think your statement's probably better.
SPEAKER_02:There is no one more passionate than someone who holds a belief. How about that?
SPEAKER_00:Well, it's it's like I don't know. To me, it's like the people who believe the craziest things that are the least supported by facts and arguments and stuff like that are the most passionate about those beliefs.
SPEAKER_02:Because they're invested in them. Yeah. Because they have to put energy into, you know, I believe the the world is flat. Right? I still don't know whether these people are serious or there's a big joke. But you know, I believe the world is flat. And you go, well, what about this? And they go, oh, well, that can be explained by, and they have some very long, convoluted and largely wrong explanation. But clearly that took a lot of effort over you know, many months of thought to come up with some rationale for it. And so the more they have to defend a crazy idea, the more energy they put into it, the more investment they have in it, and the more they're gonna fight for it, you know?
SPEAKER_00:Yeah. And so, I'll put it this way: your approach of testing those private methods, that's not something that I myself would probably ever do. But I'm certainly not going to say that you're wrong. All I'm going to say is that, in my opinion, you're losing a certain benefit. You're trading one thing for another. You're trading that ability to fine-grain test your private methods for the cost that, when you look at your private methods in your application code, you can no longer automatically know that you can refactor them as much as you want.
SPEAKER_02:Oh, sorry. Phase two of my approach is that when I'm done, I delete most of my tests.
SPEAKER_00:Interesting.
SPEAKER_02:Because tests on a completed project that aren't effectively regression tests are just dead weight that stop you refactoring.
SPEAKER_00:I buy that. Yeah, tests that don't have value as regression tests don't have lasting value. If that's what you're saying, then I buy that.
SPEAKER_02:I'm saying they actually had negative value. Like, I came back to a 2006 Rails project that had I can't remember how many thousand tests. And I tried to migrate it up from Rails, whatever it was, 3 to Rails 7, I think, at the time. And the code hadn't changed, it's just that the environment around it changed. And I started thinking, okay, there must be like a dozen key things that, if I fixed them, suddenly all these tests would pass. Nope. It was like one test at a time, and it just looked like a totally impossible thing. So in the end, I deleted the tests. All of them. No, that's not true, I kept some, but most of the tests I deleted. And then I re-implemented, purely on the input-output basis, and only for things that I was nervous about. And then gradually built up the code base that way. You know, I think the idea that tests are sacred is a bit like saying in the Rails world that migrations are sacred, right? They're not. They're a tool that gets you from A to B. When you're finished, get rid of them.
SPEAKER_00:Yeah, and this again is the danger of good/bad, black-and-white thinking. "We do this because it's a best practice." No, that's not a good reason for doing something. You should do something only if you have good arguments for doing it. And if the context of the situation makes a good argument for not doing that thing, then... you know, take deleting most of your tests. If you believe that tests are sacred, then that would be a sin. But tests aren't sacred, they're just a means to an end. And so if you're behaving from an argument-and-explanation-based perspective, rather than a belief-received-from-authority way of thinking, then you're going to be led to more advantageous behaviors.
SPEAKER_02:In fact, that actually I I I love the um authority that you injected there, because I think that is actually the crux of the problem, and that is we mistakenly believe that someone who's written a book or someone who gives talks or whatever else is an authority, and they have answers that we should use. In reality, there is only one authority on a project, and that's that blank screen that's looking at you when you start to type. The authority is your context, the authority is what you're trying to do, and that's what you should be listening to, not to not to talking heads and authors and speakers. You should be looking at what your code is telling you, what your users are telling you, what your IDE is telling you. That's the context in which you're supposed to be responding.
SPEAKER_00:Totally agree. There was something I was intending to comment on way back that I want to bring up again, regarding what is a specification, and I don't exactly disagree, I would just add something. One of the more useful concepts I've included in my conceptual toolkit over the last few years is the idea of conceptual hierarchy in any system. I often say that a software system isn't made out of code, exactly; it's made out of ideas, and the code is just a manifestation of those ideas. And anywhere in a system, independently of the system, in the domain, like in the real world, you can overlay a hierarchy of abstractions, where you can say: here's a town, here's a hospital, here's some doctors, here's one specific surgery. I don't know if that's a great example, but hopefully you get what I mean by a hierarchy of abstractions. And I definitely think this applies to tests, where in that Sudoku example, the outer layer in that hierarchy was being applied, it's like, I want a Sudoku solver, but the middle of that hierarchy was missing. And I am a believer in the idea of having your tests loosely coupled from the implementation, because it's obviously undesirable to make some kind of superficial change to your code and it breaks your tests, and then you just have to go and make that same change to your tests. To me, that's bad. So I'm a believer in having them loosely coupled, but to me, there's nesting to it, where you have your very coarse-grained tests, where there's a coarse-grained black box, but then it's totally fine to have tests that target a lower level, and they are more tightly coupled, but you're making a conscious decision that you're okay with that level of tight coupling for that finer grain of test. That's how I look at that part.
SPEAKER_02:I think that's exactly what I'm saying: the authority there is your choice, your opinion on what you're comfortable with, what you believe is necessary, right? Ultimately you're responsible, right? You're the one writing the code, and so you get to choose the best way that works for you in your environment, and the rest of the world gets to trust that you are making those decisions. You're a professional.
SPEAKER_00:Wait, can you say that again? I missed it.
SPEAKER_02:So you, and your team potentially, but you are the only person that has the full context of what you're doing, yeah? Let's just take a simple one-person project: you are the person that has all of the context. You are being paid to develop that context, maintain that context, and deliver based on it. So you are the person that makes the decision on what works best in order for you to be able to do that. And if that involves writing tests on private methods, because that's your decision, then there ain't nobody that can tell you you're wrong. Right? You are the owner of that universe, and you get to make the rules. If you decide this public method is so drop-dead simple that I'm not going to waste my time writing a test for it, that's your call too. You know? All of this is contextual. There are no universal rules, apart from that one.
SPEAKER_00:Yeah, I've said something before... oh no, this is just a famous thing, I didn't come up with it at all: every rule has an exception, except this one.
SPEAKER_02:Right. Yeah. And it's true. So I think that part of being a professional developer is taking that responsibility and not offloading it onto some third-party guru. "Bob Martin says I should do this, therefore I should do this." No. Read what he says, decide whether or not it's applicable to you, use it, modify it, and more importantly, verify that it's actually doing what you hoped it would do. And then adapt as you go along.
SPEAKER_00:Yeah, totally agree. And I want to comment, because I was put in mind of it by Bob Martin: there's a lot of negativity around his books. People are saying don't read his books. I want to comment that I think it's a good idea for people to read books they don't agree with, and expose yourself to what you think are bad ideas. And there's at least two things that could happen. One is that you could expose yourself to these bad ideas and strengthen your thinking around those ideas: okay, why exactly is this not a good idea? Or you might find something in there that's actually a good idea, and it might surprise you: wow, I thought I totally disagreed with everything this person says, but it turns out there's actually something worthwhile in here.
SPEAKER_02:I agree 100%. I mean, one of the things I think is the most long-term damaging to humanity, and I know that sounds dire, but I really genuinely mean it, is the idea of canceling people based on some aspect of what they've done or said. It's incredibly damaging to the overall progress of humanity. An idea is not necessarily tied to an ideology, and it's perfectly possible to have really good ideas and be a total jerk at the same time.
SPEAKER_00:Oh yeah. Not that I'm saying Bob Martin is a jerk.
SPEAKER_02:No, no, no, I'm not talking about him, but he does raise his own cloud of anti-ideologues, or whatever the word is. I think, though, in general, if you are afraid to explore contrary ideas, what that really means is you are incredibly insecure in your own.
SPEAKER_00:Yeah.
SPEAKER_02:And you should take that you should take that as a warning.
SPEAKER_00:Totally agree. Um, and there's a quote that I really like. What is it? He who knows only his own side of an argument knows little of that.
SPEAKER_02:Yeah, exactly. Yeah. I mean, I have never yet got into a conversation like this where I haven't come out the other end thinking slightly differently. You know?
SPEAKER_00:Yeah, and I hope the same is true for myself. And I think it's a really good thing to be able to discuss important fundamental topics with someone you disagree with, without it necessarily getting heated and emotional and stuff like that. Not that we're touching on anything super heavy here, but I do have conversations like that sometimes, and there's something to be said for it. Like, I had a series of conversations with a really religious guy once, and he was presenting his religious arguments, and I'm an atheist, but we were still able to talk, and I like to think that I came away a little bit more educated from that.
SPEAKER_02:I mean, that's a really good example. I live in Texas, and I have learned not to get into religion particularly deeply with people. But my wife's cousin used to be a priest, Roman Catholic, but left because he believed that Catholicism was getting too liberal. And he is one of the smartest people I know. He's one of these people where, if you go on a long car trip and start talking about religion, it is fun, because he does not take it as an attack on anything; he takes it as a challenge to come up with responses. And so all these questions you've always wanted to ask, because things don't make any sense to you, you ask him, and you end up with an hour-long conversation about that. It's absolutely fascinating. And that is the spirit in which discourse should work: you have your view, I have my view, I don't necessarily want to convert you to my view, and I don't want you to try to convert me to yours, but I would like to learn, and convert myself, or parts of myself, based on what you say. And I think if people can get over this zero-sum game that everybody believes is the way things have to be, get away from that and say every conversation should be a positive-sum game, where both people come away slightly enriched from where they went in. You're not trying to make somebody lose, you're trying to make yourself understand.
SPEAKER_00:Yeah, I totally agree. I'm running a little bit low on time, sadly. I definitely feel like I could talk with you for hours and hours more, and I'm really enjoying this. But before we go, I want to give you a chance to mention anything you want to mention. I know that you have a new book, and you tweeted that you're available for hire. Anything you want to touch on.
SPEAKER_02:Well, that was it, basically. Yeah, I've got a new book called Simplicity, which is kind of my take on why we're making things overly complicated and why we don't have to. It touches on quite a lot of what we talked about, but in a more concrete way. It's all to do with: you don't necessarily have to follow ceremony; you have to be able to work out what ceremony works best for you. And then, yeah, for the first time in, I can't work it out, I think it's like 35 years or something stupid, but for the first time in a very long time, I am actually available for a job. So I'm talking to some people at the moment, so depending on when this airs, I may not be looking anymore. And it's kind of difficult, because I'm not 100% sure I know what it is I want. In my soul, in my heart, I'm a developer. But at the same time, I think the value I give to an organization is definitely not sitting there writing code for them. So I think what I'm really looking for is more the kind of role I call, in Simplicity, "developer without portfolio": a person that goes around and basically binds things together, talks to people, looks at concepts, and points out where there's overlap, where there are gaps, and basically makes things run more efficiently with less ceremony. So that's probably the kind of thing I'm looking for.
SPEAKER_00:And how should people get in touch with you if they're interested in talking about that?
SPEAKER_02:Dave at pragdave.me.
SPEAKER_00:All right, well, we'll put those things in the show notes. And Dave, thanks so much for coming on the show.
SPEAKER_02:Hey, it's totally my pleasure. I really enjoy talking to you every time.