Agile Book Club

When Will It Be Done by Daniel Vacanti

September 01, 2019 Justyna Pindel and Paul Klipp Season 1 Episode 7

In this episode, Justyna and Paul talk about Daniel Vacanti's new book, When Will It Be Done. Plus a little rant about math.

Show Notes:

Buy the book: https://leanpub.com/whenwillitbedone

Buy his last book, Actionable Agile Metrics: https://www.amazon.com/Actionable-Agile-Metrics-Predictability-Introduction/dp/098643633X/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=1566213056&sr=8-1

A bit about that S-curve effect that Paul mentioned:

Dark matter, failure demand and S-Curve – project planning with Little’s Law and adding a project buffer to anticipate risks

http://www.ontheagilepath.net/2015/08/dark-matter-failure-demand-and-s-curve-project-planning-with-littles-law-and-adding-a-project-buffer-to-anticipate-risks.html

Give me feedback, please: paul@wawelhill.com


Speaker 1:

Welcome to the Agile Book Club, with your hosts, Justyna and Paul [inaudible].

Speaker 2:

Good morning. Yes, Justyna, how are you doing?

Speaker 3:

I'm doing amazing. It's a beautiful day in sunny Krakow. I wish our listeners could see that.

Speaker 2:

Yes. Well, I wish I could see that, but we're sitting here in a tiny little room in an empty office because we love what we do. And we've got something exciting to talk about today. Spoiler alert... well, you're listening to this anyway, so it's not really a spoiler, because this is what you're here for. But spoiler alert: I really liked this book.

Speaker 3:

Yes, I love the book. And after watching some conference talks by Daniel Vacanti, I love him completely. I love him.

Speaker 2:

Yeah. So since we both enjoyed the book, let's just go straight into our elevator pitches. How would you pitch this book to somebody who you thought might be a good candidate to read it?

Speaker 3:

Okay. So I would pitch it to everyone who at least once in their life was asked when something will be done. It doesn't matter if you are a designer, a quality assurance person, a developer, a product manager, an entrepreneur: I bet someone asked you for an estimate, and if you didn't know what to answer, please read this book. Or if you said with confidence that you would deliver something on the 1st of January and then of course you didn't, please read this book too, because it will help you to make better estimations.

Speaker 2:

[inaudible] Absolutely. The title of the book is When Will It Be Done? And that's exactly what the book is about: it answers the question, when will it be done? Now, the answer may not always be satisfactory, and because the answer may not always be satisfactory, the book also has to go into how to make the answer satisfactory. So my elevator pitch is a little bit different than yours. I have long recommended Daniel Vacanti's first book, Actionable Agile Metrics, as a starting point for anybody beginning their journey to agility. And the reason for that is that I have seen a lot of companies begin their agile journey with an agile process, and there's just so much cargo cult behavior in our industry. I think we've all seen examples in which somebody says, I hear that agile will give us faster throughput and higher predictability, so we're going to adopt process X. And then they go and implement all of the ceremonies and roles of process X, but it doesn't give them what they want. It doesn't give them improved predictability or higher throughput.

My thinking, and the reason why I recommend any of Vacanti's books to people as a starting point, is that if what you want is improved throughput and improved predictability, then you should start with that. You should understand what it is that influences your throughput and your predictability. Now, there are other reasons to be agile. Obviously, having higher throughput and predictability without higher quality or without fitness for purpose is going to be problematic. But in so many cases you've got companies whose fitness for purpose is reasonably good, because they're profitable, they've been building a product for a long time, and they've got happy customers. Their biggest problem, and the reason why they adopt agile, is competitiveness. They want to be able to deliver as fast and as consistently as their smaller competitors who are nipping at their heels. So their biggest issue is predictability. And if you want predictability, you should start with predictability.

So this is my elevator pitch: I would recommend this book to anybody starting on an agile journey, because if you really understand the few simple metrics described in this book, and you understand what you have to do in order to make those numbers look the way you want them to, you will necessarily create an agile process which is optimized for your organization. But if you start with an agile process, if you start with Scrum or Kanban or whatnot, and you just mimic the behaviors, then it's entirely possible that you may end up doing the process correctly and not achieving the benefits that you want from agility.

Speaker 3:

Yup, good. That's true. And I can confirm that you always refer to Daniel Vacanti's books and recommend them to everyone; you've been doing that with me for several years. And I was afraid to read his books because I felt like I wouldn't understand everything, that it would be a little bit too hard, because I don't have an advanced degree in math. But actually his books are so pleasant to read for people like me that you don't have to be afraid and resist. He gives a load of real-life examples that show and explain the math, so we can really learn a lot. Yeah, exactly.

Speaker 2:

To the extent that there is any complicated... I wouldn't even say complicated. To the extent that there's any math beyond addition and subtraction, such as, for example, the discussions of Little's Law, the point of those discussions isn't the math. It's the underlying principles and the relationships. So a person who's afraid of math should not be afraid of Daniel Vacanti's books. Now, that said, I would like to say something. Can I rant a little bit? A quick rant on math. Okay, 35 seconds. A 35-second rant on math. Okay, so it's this: there are people who love math and people who don't love math. There are people who study math and people who don't study math. The latter category of people is much, much larger. If you're working in a knowledge management discipline, if you're working in an engineering discipline, if you are a human being, then math is one of those things that distinguishes us from just about every other animal. There are a lot of really intelligent animals, some of them as intelligent or more intelligent than we are, but nobody else on earth can model their world, from the simplest to the most complex aspects of their universe, using simply numbers, like humans can. And the beautiful thing about math is that every little bit you learn just uncovers new opportunities for application and for understanding and appreciating your world. It's like what Richard Feynman said about a flower. Somebody once asked him, how can you appreciate a flower with your head so high in the clouds of all your mathematics and your science and your theory? And his answer was that understanding the chemical and biological processes going on in that flower only makes it more beautiful. And I would say the same thing about the kind of processes that we manage in knowledge work: the deeper our understanding is, the more we can appreciate them. And you don't have to be an expert in math, you don't have to have an advanced degree in statistics, in order to increase your understanding of statistics a little bit. And every little bit pays off huge dividends. So Daniel Vacanti's books are a great place to start, because they introduce the bare essentials of what you need in order to understand flow mechanics and flow mathematics. But they only scratch the surface, and there is so much joy to be found in acknowledging that you, a human, are capable of modeling these systems and understanding the models by just learning a little bit of math. Exactly. Yeah. That was way over 35 seconds. Am I going to have to edit that down? Yes. Or just apologize. Just apologize. I'm so sorry, people.

Speaker 3:

Okay. So then I can jump to my, uh, first takeaway.

Speaker 2:

It was[inaudible]

Speaker 3:

It's kind of simple but powerful: stop believing in averages. We tend to believe in averages, and we make such optimistic estimations based on them. Daniel used a very powerful example of a military mission by the United States, where the risk management committee had to decide how many helicopters they had to send to save hostages. The basic assumption was that six of them had to be successful in order to save them, and there was a 75 percent chance that each of them would get to the point. So they decided how many to send based on that, and unfortunately the whole mission just failed. And what Daniel did in his book is show how they could have approached that hard situation in a different way, by using probabilistic forecasting. I found these examples powerful and very useful for my way of thinking about averages. So I don't know what your opinion is here, Paul, but I'm sure that you agree, or have even more to say.
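A minimal sketch, in Python, of the kind of probabilistic reasoning Justyna is describing. The specific numbers here (eight helicopters launched, six needed, an assumed 75% reliability each) are illustrative assumptions, not figures quoted from the book:

from math import comb

# Hypothetical numbers for illustration: eight helicopters launched,
# at least six must arrive, each with an assumed 75% chance of making it.
n, need, p = 8, 6, 0.75

# "Average" thinking: expected arrivals = n * p = 6.0, so on paper the plan looks fine.
expected_arrivals = n * p

# Probabilistic thinking: the chance that at least `need` of the n helicopters arrive,
# computed from the binomial distribution, which is what actually decides the mission.
p_mission_success = sum(
    comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(need, n + 1)
)

print(f"Expected arrivals: {expected_arrivals:.1f}")          # 6.0
print(f"P(at least {need} arrive): {p_mission_success:.2f}")  # roughly 0.68

With those assumed numbers, planning on the average says six helicopters will make it, while the probabilistic view says there is roughly a one-in-three chance the mission fails before it starts.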

Speaker 2:

Oh, absolutely. In my last engagement I was working with a fintech company, and their standard mechanism for reporting was average throughput projected into the future. All teams were required to submit, on a regular basis, a diagram that showed average throughput projected forward, with key milestones. Smart. So that's reliable. Well, it is reliable. It is absolutely reliable... half the time. Yes. So one of the first things that I did, after what was probably a snarky conversation at some point (not a great way to start an engagement), was to do the same thing, because everyone was doing it, but I did one thing differently. Instead of labeling my line "average throughput," I labeled it "50% confidence interval." In doing so, I turned it into a reasonable approximation of a forecast. Basically, what I was saying is that there was a 50% chance that these milestones would be hit by this date or earlier. So it includes the two parts of a forecast. You see how I'm segueing into my first takeaway: a forecast should always have two parts, a range and a probability. Instead of saying that the average throughput predicts that milestone six will be hit on June 4th, by changing the name of that line to "50% confidence interval" I turned that prediction into a forecast, which essentially said there's a 50% chance that that milestone will be hit by June 4th or earlier. So there's the range and there's the probability.

And needless to say, people started asking questions just from this minor little change in the legend of the graph. What do you mean, 50% confidence interval? And I said, well, if you use the average, then 50% of the time it'll be right and 50% of the time you'll be wrong. And that can lead to other questions, like: so how likely is it that we're going to be earlier? Well, there's a 50% chance we're going to be earlier, but not much earlier, because of our lead time distribution. We've got a fat, long-tailed lead time distribution, so half the time we'll be earlier than average, but not much earlier, and half the time we'll be later than average, and usually much later. And that led to other conversations: are you comfortable with being right half the time, and the other half of the time being not just wrong, but very, very wrong? And that led to conversations like, well, what is a reasonable confidence interval? What does it look like if we use an 85% or 95% confidence interval? Are we okay with a six-month span between 50% and 95%, or do we need to tighten that up? And if we need to tighten that up, how do we do it? And that is essentially what this book is all about. Yes. And one of my favorite quotations from this book, actually also from this chapter, is that plans based on averages fail. Absolutely. So yes, averages are not useful in our world.
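A minimal sketch, in Python, of the percentile-style forecast Paul is describing; the cycle-time numbers below are invented for illustration:

import numpy as np

# Invented history: cycle time in days for recently completed work items.
cycle_times = np.array([3, 4, 4, 5, 6, 6, 7, 8, 9, 11, 12, 14, 18, 25, 41])

# Average-based statement: a single number with no probability attached.
print(f"Average cycle time: {cycle_times.mean():.1f} days")

# Forecast-style statements: a range plus a probability.
for pct in (50, 85, 95):
    days = np.percentile(cycle_times, pct)
    print(f"{pct}% of items finish within {days:.0f} days")

The gap between the 50th and 95th percentiles is exactly the conversation starter Paul mentions: the wider it is, the less predictable the system.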
And the other takeaway... I had a lot of takeaways from this book, because it's so rich, so full of keen observations, and the author is critical of many of the common practices in our industry. So another one that jumped out at me, because I've been participating in some conversations about sizing and splitting stories quite recently: we love to do this. We like to split stories, and there are a few schools of thought about it. One, which any Kanban person has come across at some point, is that in order to maximize predictability in Kanban, all work items have to be roughly the same size, which is absolutely untrue and practically impossible. Amongst other things, it's impossible to estimate things well enough to get them all the same size, but it's also not terribly helpful. And other schools of thought say you should break things down into the smallest possible size, or the smallest value-adding size, or what have you.

Speaker 3:

And it's so funny, by the way,

Speaker 2:

but I've found, while I don't agree with it 100% in all cases, Daniel Vacanti's observation to be really useful, which is this: if your forecast tells you, for example, that there's an 85% chance of any given thing being done within 12 days of starting, then the smallest an item needs to be in order to fulfill your expectations is 12 days. So if the engineering team is reasonably certain that something can flow through the system within your time span of comfort, and that's 12 days, then it's broken down enough. Which also reminded me of the Posit Science story, because when Janice was working at Posit Science, they had moved from three-week sprints to a flow-based approach. So they were comfortable with three weeks, and they ended up dispensing with estimation and replacing it with just a gut check at the beginning. When the engineering team grabbed a piece of work, if they were reasonably confident they could do it in three weeks or less, they just did it. It was only if they thought that it might take more than three weeks that they bounced it back to the planners for discussion: do we do it anyway, or do we break it down? So it's a really nice, simple heuristic. [inaudible]
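A rough sketch of that gut-check rule in Python; the sample data, the comfort window, and the choice of the 85th percentile are all invented for illustration:

import numpy as np

# Invented cycle-time history (days), with the comfort window taken at the 85th percentile.
cycle_times = [3, 4, 5, 5, 6, 7, 8, 9, 11, 12, 14, 18]
comfort_days = np.percentile(cycle_times, 85)   # about 13 days for this made-up data

def bounce_back_for_splitting(gut_estimate_days: float) -> bool:
    """Send an item back for discussion or splitting only if the team doubts it fits the window."""
    return gut_estimate_days > comfort_days

print(bounce_back_for_splitting(5))    # False: just start it
print(bounce_back_for_splitting(30))   # True: discuss whether to split it or do it anyway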

Speaker 3:

During your description of this takeaway, I was thinking how popular a book would be about how to size your work, how to get all your work split into the same size. That would be terrible, but I think it could be pretty popular right now.

Speaker 2:

Oh, careful, careful, careful, because there are books on sizing. There's a whole book on how to break down user stories, and it was written by somebody I have a lot of respect for. We may even review it at some point.

Speaker 3:

Okay, so I take that back and I'll just jump into my takeaway, which is: be careful what data set you use for your predictions, because it really carries the past of your data. I mean, if you would like to have an estimation of your team's performance in March and you base it on the data from December, you have to take into consideration that it was the holiday season, that a lot of people were not working, that there was, you know, Christmas spirit and so on. Those two data sets may not match each other very well. The other thing that you might take into consideration is team size. What if you have a data set from the history of your team when there were 20 people, and now you have only two people? Do you really trust this data? Do you really want to make predictions based on it? So Daniel Vacanti in his book really makes a great point about being careful with the data set you use and being able to trust your data, and he points out that all those agile tools might not be the best way to do it, because if you don't understand your colorful charts and you put them in your reports, you can be proud of them, but you have no idea how they were generated, and you have zero confidence in what you are talking about. And I've seen it many times that people were actually questioning the physical boards and questioning the manual collecting of data. But Paul always made a great point that at least you trust your data, and it's not so time-consuming to prepare and gather, because you just collect two data points: the start and the finish. So I think that if I were in the middle of that conversation now, I would not only direct that person to Paul, but also to his book and this chapter, to make a stronger point about being careful with the data that we're using for our forecasting.

Speaker 2:

[inaudible] And those are really two points, both of which are really important. The first, which is that you should choose a data set which is the closest to what you feel the future is going to look like, is one of the areas in which I think he could have expounded a bit more, because he says a couple of times in this book that choosing that data set is an art rather than a science. But there are some tools, some heuristics, some observations and patterns which are useful. One that he doesn't mention is that lead time tends to follow an S-curve during the life of a project. It tends to be slower at the beginning, as people are discovering ways of working and becoming familiar with the environment, with each other, with the product, and what have you. And it tends also to slow down at the end: if you're doing, say, a big release, then there are additional transaction costs and such, and also little things that are really difficult to predict, like work that was accepted when it was done being put under higher scrutiny when it's actually about to go live, so you tend to get a few more change requests closer to the end of a project as well. If you have this S-curve and you're aware that this phenomenon exists, then you know better than to use your first month's data to project very far into the future, because you can expect to see some improvements; and if you've got a ten-month project, you know better in month nine than to expect the last month of the project to look like the previous six. You should expect some slowdowns. So those kinds of observations and patterns could have been useful to include here.

Speaker 3:

That's really useful. Do you have another book that covers it?

Speaker 2:

No, I learned that from a person. Oh, okay. I'm afraid I'm not even familiar with what the name of that phenomenon is. [inaudible] I've always just called it a kind of S-curve effect, but it bears some investigation. If I can find the name of the phenomenon and some better description of it, I'll include it in the show notes. Okay. But also, the bit about tools. I myself built a Kanban tool at one point, it was called Kanbanery, and we had to make so many decisions about what to do with data that I tried to make them transparent. So there was a document, for example, in our cycle time report, which described exactly what we did with the cycle time data, how we handled situations like human error. For example, what do you do when somebody in your electronic tool takes a work item, moves it into an in-process state, and then moves it back out? We had to take decisions like: well, if this happens within 20 minutes, or within such-and-such a percentage of the recent average lead times, then we assume it's a mistake and we don't count that data. But maybe it wasn't a mistake in this case. All of that stuff is being done in the background, and if you don't know what's going on, you'll trust it only until the moment somebody points to your average cycle time trend and says, yeah, but I think that's skewed because of human error, and then it instantly becomes useless. So yes, I do advocate for tracking your metrics manually to the extent possible.

Speaker 3:

Yes, and now I see one more thing, because when you use an agile tool and you look at your data,

Speaker 2:

yeah.

Speaker 3:

you tend to say: yes, but on average... You don't remember? I think two weeks ago we had a training, and one of the people presenting his data had some human errors in it, and he kept referring to the average. So now, if I were in the middle of that conversation, I could immediately, you know, make a different argument: that it's actually a flawed average. So yes, thank you

Speaker 2:

very much. You made us smarter. And also, human error doesn't impact the data in a Gaussian fashion. The errors tend to either be very small, like pulling in the wrong thing and then quickly moving it back, or very large, like forgetting to mark a work item done for four weeks. So they're not normally distributed in any case. And that's another one of my takeaways: I really liked his observation that it's most useful to think of your process's cycle time as a shape rather than as a number. He doesn't dwell that much on cycle time histograms in this book; he spends more time on the cumulative flow diagram and on scatterplots. But while a scatterplot will show you just how stable your system is, I find the cycle time histograms to be much more useful for determining, number one, whether your process is predictable, and for seeing whether your changes are improving it. Anytime you have a fat-tailed or long-tailed lead time histogram, you have the possibility of being horribly wrong in your predictions: the fatter and longer your tail is, the further off your worst cases are going to be from your average cases, and the less predictable the system is. But, if I can just digress for a moment, it is possible to offer a prediction for a system which is very chaotic. The problem with that prediction is that it's unsatisfying. If you say, for example, that on average any given item that enters the process is delivered in seven days, but a 95% confidence interval would be more like three and a half years, then that forecast may be perfectly accurate, absolutely statistically valid, but it is not useful. And you can see that on a lead time histogram, in that case as a very, very long tail. Anytime you have a fat or long tail, your improvement initiatives should be aimed at shaving the tail, and you can see those improvements visually by looking at changes to your lead time histogram. So I think it's the shape of the lead time histogram that people should be thinking about and working to change when they think about managing their lead times.
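A small sketch of why a long tail makes the average so misleading, using a lognormal distribution as a stand-in for a fat-tailed lead time distribution; all the parameters are invented:

import numpy as np

rng = np.random.default_rng(42)

# Invented fat-tailed "lead times": a lognormal distribution standing in for real data.
lead_times = rng.lognormal(mean=1.5, sigma=1.0, size=10_000)

print(f"Mean lead time:  {lead_times.mean():.1f} days")
print(f"50th percentile: {np.percentile(lead_times, 50):.1f} days")
print(f"95th percentile: {np.percentile(lead_times, 95):.1f} days")
# The single average hides the tail; the shape of the histogram
# (np.histogram(lead_times), or a plot of it) is what shows how far
# the worst cases sit from the typical ones.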

Speaker 3:

Yes, lesson learned. So the other thing, which I maybe didn't so much learn as get more ideas about how to improve, was about cycle time. He pointed out that if your cycle time is 20 days but only three days are the active time that you're actually working on the task, you shouldn't put your time, effort and energy into working faster during those three days, because there's not a lot you can gain there. You should put your focus somewhere else: on the waiting time. And he actually lists the types of actions you can take in order to reduce the waiting time. The first thing you can do is control your work in process. Then you can reduce the time a work item is blocked, you can look at the dependencies, you can look at why it's actually waiting 17 days, or you can just review and change the poor policies that might be making your cycle time longer. I really think that some people are not aware of that. It doesn't sound like rocket science, but we really do tend to try to work faster and faster, which leads to overburdening and also sub-optimi... oh Jesus, can you help me? How do you say it? Sub-optimization. Nailed it. Yes. Thank you. So yes, that was [inaudible].
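A back-of-the-envelope check, in Python, of why the waiting time is where the leverage is, using the 20-days and 3-days numbers from Justyna's example (the flow-efficiency label is a common shorthand for this ratio, not necessarily the book's wording):

# Numbers from the example: 20-day cycle time, of which only 3 days are active work.
cycle_time_days = 20
active_days = 3
waiting_days = cycle_time_days - active_days        # 17 days spent waiting

flow_efficiency = active_days / cycle_time_days
print(f"Flow efficiency: {flow_efficiency:.0%}")     # 15%

# Working twice as fast on the active part saves only 1.5 days...
print(f"Cycle time if active work is halved:  {cycle_time_days - active_days / 2:.1f} days")
# ...while halving the waiting time saves 8.5 days.
print(f"Cycle time if waiting time is halved: {cycle_time_days - waiting_days / 2:.1f} days")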

Speaker 2:

Yeah, indeed. And that ties into one of my takeaways, which is that a lot of people say that the most important meeting or ritual in an agile process is the retrospective, because that's where you improve. And he has a lot to say about retrospectives, and I want to get to that, but he says that he thinks the most important meeting or ritual in an agile process is the daily standup, but only if it's done right. And this is something that you've heard me talk about at length during all of my Kanban trainings: the way so many people were taught to do a standup, and the reasons so many people were taught that they should do a standup, are not useless, but reasonably low maturity. If the team comes together to do a standup in order to make sure that they're all on the same page about what everyone's doing, and they do it by answering the common questions you hear, which are what have you accomplished since yesterday, what are you going to do today, and what is blocking you or keeping you from doing your best work, then they're not using that meeting to improve their process. They're only using the retrospective to improve the process, which means process improvements are done in big batches, periodically. And one of the things that I always try to do with my standups, whether I'm in one as a scrum master or as an agile coach or in whatever capacity I attend a standup: I try not to ask questions. In fact, I try not to speak, but I try to guide people into reading the board, if they've got a physical board with a visible workflow, from the right to the left, so focusing first on what has had the most time invested in it, and looking only for what's interesting, what's worth talking about, which is new discoveries, surprises, problems. And sometimes there might be nothing to talk about. If everybody is doing their job and they understand the work and they understand what's going on, there might not be much to talk about. But what I really like to see is when people are focusing on the work in process, not the work they're doing. I've said a number of times, hopefully not on this podcast, but maybe I'm repeating myself, that I feel like my time as a team-level coach is done when, during the standup, people aren't saying: I'm working on this, I'm working on this, I'm working on this. When instead people are saying: I'm finishing this up and I'll probably be done in the next hour or two, and then somebody else is saying: well, in that case I'm not going to start anything now, I can see that this is going to be ready soon and I want to be ready to take it. When they start focusing on keeping things moving and minimizing the queues, and specifically when they stop being so concerned with being busy, when people are concerned about being ready for the work when it's ready for them rather than with being busy, that's when the daily standups are serving their function optimally, for delivering a continuous, smooth flow of value with predictability and minimal cycle times.

Speaker 3:

Yeah, and I believe that. I think it would be really interesting to get to know a company's culture just by looking at what their standups look like. Because if it's a show of everyone saying what they've done, it means they might not feel, you know, comfortable saying that they are doing nothing, that they are idle.

Speaker 2:

[inaudible] And in fact, I think I've observed this before, but it bears repeating: any time every member of a team is busy doing their job, that's a guarantee that that team is working inefficiently. Because if everyone's busy, then nobody's ready for work when it's ready for them. Yeah. So the work isn't flowing if people are busy. [inaudible] I wanted to come back to retrospectives, because he also said something really useful about retrospectives which I've observed myself. There are so many schools of thought about retrospectives, but I personally find data-driven retrospectives to be the most satisfying. And when I go into an organization that's accustomed to doing retrospectives but not accustomed to me, and I bring data to a retrospective, the feedback that I get is universally positive. People love to see what's really happening, and to talk about actually making a difference, not in how they feel, but in how they're delivering. So a retrospective that includes data, that includes, say: over the course of the previous measurement period we saw a 15% decrease in cycle time, which can be traced back to that improvement we made in the way we were dealing with that other team on which we have dependencies, but I think there's room to improve it further; that is much more useful than just saying: well, let's review the last X days. What were the high points? What were the low points? How did people feel at the high points? How did they feel at the low points? What do you think are the things we should keep doing and the things we should stop doing? It's just so much more satisfying to be dealing with actual data.

Speaker 3:

Yeah, and I think you can also overcome the resistance to retrospectives, because people don't feel that they are just talking, talking, talking without an end point; they can really see it. Yeah. Okay. So now: re-forecasting. Re-forecast, re-forecast, yes. It's in the first chapter, the example of the terrible Hurricane Sandy, which brought one of the biggest, I think, in the United States... The second largest in my lifetime. Yes. Oh, you've lived so long. So far the largest was Katrina, and I lived in Texas,

Speaker 2:

Texas and Louisiana. But I very clearly remember both of those hurricanes.

Speaker 3:

Oh, okay. So, as you know for sure, Sandy touched 24 states and brought a lot of, how do you say it, catastrophe; let's put it like that. But the reason Daniel refers to it is the mechanism of weather forecasting, and how forecasters adjust the forecast to the new information that comes in. When they first saw this small formation forming, they thought it was not going to hit the United States, and then, after hours and then days, they gathered more information and built a more accurate forecast. And this is something that I think we don't do a lot of in software development: we really stick to our first estimation, because that's the plan that we have to follow, and we're aware of the new information that's arriving, but we simply don't care about it, because we already built our estimation; we just try to adapt at the end. And one of the other examples he used to make this point even stronger was professional poker players: each time there's a new card on the board, they have to adjust their forecast, their estimation of the likelihood that it will bring them a win or a loss. So I think that we should be more flexible, more actually agile in adjusting to the new information in our work environment, than we are.

Speaker 2:

Absolutely. But when we're talking about information, this brings me to another point which I found interesting, and which I'm going to be a little bit critical of here. I hope Daniel will forgive me, or I suspect he'll understand. And that is the question of how much data you need in order to begin making some forecasts. He cites what he calls the rule of five, which I first heard introduced as Hubbard's estimation principle, and it states that five randomly selected data points will give you roughly a 93-and-a-half percent confidence interval. This is something that I'm going to have to educate myself on a bit; I want to understand the mathematics behind it a bit better. But as I understand it, that is only applicable to a Gaussian distribution, and we're not dealing with Gaussian distributions here. The way that Alexei [inaudible], I hope I'm pronouncing his name correctly, I'll have to check with him and make sure I don't make a mistake again, but the way he explained it is that when you have a Weibull distribution, especially a fat-tailed Weibull distribution, which is what most of us are dealing with in knowledge work... In a purely random Gaussian distribution, because of the central limit theorem, the average of a randomly selected set of data points will be very close to the average of the entire set. But in a non-Gaussian distribution, like a logarithmic distribution or a Weibull distribution, the better estimate of the average is the largest number in the randomly selected set. So I think that bears mentioning: if you have five data points, then you have enough to start getting some decent forecasts, but you can't use the central limit theorem to assume that the average of those five data points is similar to the average of your set. It's safer to assume that the largest of those data points is a rough approximation of the average of your set. Well, absolutely. Well, I don't want our conversation with Daniel to just be all praise and adulation; I want to dig into a bit of this stuff. So I'm kind of hoping I make a few mistakes here that he can correct. I'm assuming that I'm as susceptible to the Dunning-Kruger effect as anyone else, so I always love it when things that I'm reasonably confident about get shattered, and he's just the person to do it.

The other thing that I took a bit of issue with, while we're on this subject, is that he's very critical of classes of service, and he made that point with the Titanic story. Oh yes, of course, the Titanic story. He tells... and a note to our listeners: we're just talking about the lessons that we learned from this book, and you might not really appreciate just how many fabulous stories there are in here. You will learn things about rescue missions, and about software companies, and about the Titanic, and about the Manhattan Project. There are so many fabulous stories in this book that it just makes for a wonderful read. But indeed, in the Titanic story he was talking specifically about one class of service, which is expedite. Yes. And there are lots of reasons to be critical of expediting in general. If you have an expedite lane, then you're going to use it, and the more you use it, the more it damages your predictability.
But in this case, he's talking about how anytime you have different rules for prioritizing work in the process, you're going to be introducing variability into your system. And I wanted to challenge that. Not that specifically, but: classes of service can be used in ways that increase variability if they increase waiting times. If you've got some items that you pull out of first-in-first-out order while they're in process, then you're artificially increasing the waiting times of some items, which increases variability. But if you use classes of service only as a way of prioritizing the way work enters the system, I believe you can improve predictability. And the reason for that is, number one, let's say you have a class of service which is associated with fixed-date deliverables. If you know the confidence interval at which you're comfortable pulling fixed-date items, so you know, for example, that you deliver in 20 days with a 95% confidence interval and you're comfortable with that, then that can tell you when to begin work on a fixed-date item: for example, 20 or 21 days before you need it. And in that way it can help to avoid fixed-date items becoming emergencies, and emergencies disrupt your predictability. The other reason is if you've got different kinds of work which lead to multimodal distributions. You might have some big pieces of work, which are generally new features, and some small pieces of work, which are generally bug fixes. If these bug fixes or feature work are coming in bursts, they can add unpredictability to the system. But if you use these as classes of service for pulling, you might decide that you're going to work at an 80/20 ratio, so anytime you're pulling, you want 80% of the work pulled into the system to be new features and 20% to be bugs. That can increase the stability of your system, by making sure that you've got a consistent ratio of big tasks and small tasks at all times. So I think that, done right, classes of service can be very useful, but he's absolutely right that having different ways of handling work in process can add variability and lower predictability in a system.

Is this the time to move on to the favorite quotations, or do I have more time for takeaways? Let's see, is there anything that I really, really didn't want to miss... One: we didn't talk about Monte Carlo simulations at all. Yeah. I think it's worth mentioning for our listeners that this book divides forecasting into two major subject areas. One is answering the question: when will that item be done? And that's a different question than: when will the project be done, or when will this batch of items that we're releasing together be done? All this talk that we've been doing about lead time distributions and lead time histograms and lead time scatterplots and such is aimed at answering the question of when a particular item will be done once it enters the process. To answer the question of when a whole batch of items will be done, Daniel Vacanti recommends, and I also recommend, using Monte Carlo simulations. And I think this book does a fabulous job of describing what they are and why they work and how to use them. So that's one thing. And there's one other observation I want to share, because I've been a fan of metrics for a very long time.
I've been a fan of mathematics in knowledge work for a very long time. So I can say, without too much hubris, that I learned very little from this book, because I've read a lot of books on this topic. But I want to share something that I absolutely did learn, that I didn't know before, and I thank him for that, because I thought it was fabulous. I've always understood the concept of flow debt, but he has in this book a mechanism to determine, simply, whether or not you're likely to have a lot of flow debt in your system. And that is: if there is a discrepancy between the actual average lead time over a period and the approximate average lead time as predicted by your cumulative flow diagram, that's an indicator of the existence of flow debt in the system. And that was brand new to me. I love it. So if you know how to read a cumulative flow diagram, you know the things that you can see: the vertical distance between the line where work enters the system and the line where work leaves the system is the amount of work in progress at any given point in time, and if you look horizontally, what you're seeing is the approximate average cycle time. I often use that as a visual example of the obvious relationship between work in progress and average lead times. And the other thing, of course, that we look for is consistency in the arrival rate and the delivery rate, because if your arrival rate is higher than your delivery rate, then your average cycle times are going to keep approaching infinity, because your waiting times are going to get longer and longer and longer. But what I didn't realize is that this average cycle time could be different: the approximate average cycle time as predicted by your cumulative flow diagram, which is that horizontal distance between when something enters and when it leaves the system, is not necessarily going to be closely aligned with the actual average cycle time of that particular set of items when they're actually done. And that's an interesting thing to look at.
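A minimal sketch, in Python, of the kind of Monte Carlo simulation being described for the "when will this batch be done?" question; the throughput history, the item count, and the trial count are all invented for illustration:

import numpy as np

rng = np.random.default_rng(7)

# Invented history: items completed per working day over the last 30 days.
daily_throughput = np.array([0, 2, 1, 0, 3, 1, 2, 0, 1, 4, 2, 1, 0, 2, 3,
                             1, 0, 2, 1, 1, 3, 0, 2, 1, 2, 0, 1, 3, 2, 1])

remaining_items = 40   # items left in the batch we want to forecast
trials = 10_000

days_needed = []
for _ in range(trials):
    done, days = 0, 0
    # Replay a possible future by sampling past days at random until the batch is finished.
    while done < remaining_items:
        done += rng.choice(daily_throughput)
        days += 1
    days_needed.append(days)

# Report the result as a range plus a probability, not a single date.
for pct in (50, 85, 95):
    print(f"{pct}% chance of finishing within {np.percentile(days_needed, pct):.0f} working days")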

Speaker 3:

So I'm very happy that you also had your aha moment. I had some more of them, because I didn't read the first book. But yes, actually, I also enjoyed the part about flow debt. Yep.

Speaker 2:

So now, yeah, that's all I had. If you want, we can get into our favorite quotations.

Speaker 3:

Yes. And I will use my time to talk about two of them. The first one, I think, should be written on a huge wall of any company that is going through an agile transformation, and it goes: the essence of agile is the ability to make progress with imperfect information, coupled with the ability to adapt quickly when better information comes to light. I think it should be right there on the wall when you enter the office, for everyone to see and to try to understand that this is what it means.

Speaker 2:

That's a very useful observation. I've been struggling with this personally, because there are so many definitions of agile, and all of them are inadequate, I think, because any definition of agile that does not cover the scope of the agile manifesto is shortsighted. And while I love what Daniel Vacanti has to say about forecasting and predictability, and it's not that these other things are not important to him, I'm sure they absolutely are, this is a book about forecasting and predictability. It's not a book about delighting customers; it's not a book about creating humane work environments; it's not a book about getting the best out of knowledge workers. There are so many aspects of what agile is that it's really easy to focus too much on any one of them. So while I do think that that's a great observation and an essential component of agile, it sounds too much like the reason that a lot of companies adopt agility and agile processes in the first place, and I think it's shortsighted if the other aspects of what it is to be agile are overlooked as a result. [inaudible] Okay. And then yours. Alright. My first one is super short, but I just like it because I love the way it sounds: the most likely outcome is not very likely.

Speaker 3:

and it's cool.

Speaker 2:

I don't know if that needs an explanation. The most likely outcome is not very likely. It's simply another criticism of averages: anytime you have a range of outcomes, especially a large range of outcomes... He uses the example of rolling a pair of dice. I think we all know that rolling a pair of dice yields random outcomes. The distribution of those outcomes becomes regular over enough rolls, and the highest-likelihood, most common outcome is seven. But despite the fact that seven is the most likely outcome, most of the time when you roll a pair of dice, you don't get seven. [inaudible] I just like the way it sounds: the most likely outcome is not very likely.
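A quick check of the dice claim in Python; nothing here comes from the book beyond the claim itself:

from itertools import product
from collections import Counter

# All 36 equally likely outcomes of rolling two six-sided dice.
sums = Counter(a + b for a, b in product(range(1, 7), repeat=2))

most_likely, ways = sums.most_common(1)[0]
print(f"Most likely sum: {most_likely} ({ways}/36, about {ways / 36:.0%} of rolls)")
print(f"Chance of NOT rolling a {most_likely}: {1 - ways / 36:.0%}")

Seven wins with 6 of the 36 combinations, about 17% of rolls, so roughly 83% of the time the most likely outcome does not happen.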

Speaker 3:

Actually, I was thinking that the next time we play Monopoly I will say that, and it will be sad. Yeah.

Speaker 2:

Well, I mean, if you have to guess a single number, guess seven.

Speaker 3:

Yes, true. Okay, so the last one from me: to think probabilistically means to acknowledge that there is more than one possible outcome.

Speaker 2:

Absolutely. No explanation needed. No explanation needed. Yes: probabilistic thinking means accepting the fact that what you think is going to happen is just one of many possible outcomes. If you don't mind, there are a few more I'd like to share. This is a really long one, but I like the way he talks about story points, because a lot of people talk about story points. I talk about story points, and I'm one of those defenders who falls into the trap that he describes, which is that I started out using Scrum, and we used story points, and I never found them useful for prediction. But I did enjoy the conversations; I did find story points very useful as a tool for an engineering conversation about risk and complexity. So when people attack story points in general, I'm one of those people who says: yes, but they can be really good for laying bare a fundamental disagreement or misunderstanding about risk. When you're having that conversation in your estimation session and somebody estimates something as a one and somebody else estimates it as an eight, then obviously one of those people knows something the other person doesn't, and then you can have a conversation, which is fair. But his counter to that is: if you're one of those people who say that that's all story points are good for, then ask yourself this: why are you tracking them in burndown charts? Why do you base your sprint forecasts on the number of story points you can get done? Now, I've fallen into that trap, because I stopped using story points for estimating, but I remember how much I liked the conversations. But indeed, if your answer is that the reason we use story points is to have a conversation about risk, then it's fair to ask: if that's the case, why are you tracking them? Why are they on your burndown chart? If you acknowledge that they're useless for forecasting, why don't you throw them away after having the conversation?

So, what will we read for the next month? All right. We've got a few options on our list. I'm still interested in reading The Culture Game. As an anthropologist myself, I often find interesting insights when people who don't come from a social science background talk about culture. I'll admit, I often also find it very frustrating, but when somebody else starts dabbling in your field, if you can lay aside your professional arrogance, sometimes they see things that generations of anthropologists haven't. So I'm curious about that one. Also, I don't know the author, Daniel Mezick, but I do know he's incredibly active on social media, and I know a lot of people who do know him, so I think the possibility that he'd be willing to talk to us is probably pretty high, because he is such an active person in getting his message out there. I'm still very interested in Rolling Rocks Downhill, which, amazingly, I haven't read, despite the fact that everyone says it's a brilliant book. But there's also somebody else we've been thinking about reading: we've been talking for some time about reading Mike Burrows' Agendashift, but he's just published a brand new book that I'm really curious about, called Right to Left, and it sounds right in our wheelhouse. So let's vote. I say Right to Left. Okay. Yeah, I'm with you. I think the fact that The Culture Game and Rolling Rocks Downhill have both been out there for quite a while makes them less urgent to talk about.
So of the three, I'm very keen to read our friend Mike's brand new book. So for the next month we'll be talking about Right to Left by Mike Burrows. We hope you've enjoyed this episode of the Agile Book Club podcast, but please give us feedback. We're still new at this. We're still open to adjusting our format, open to adjusting the techniques and technology that we're using, anything about the way we structure our podcasts or the way in which we interact with each other. If I'm talking too much... and if you don't like the way that Justyna laughs, you can just get lost right now. You're not welcome. This isn't for you. Sorry. No. Okay. But anyway, if you think I talk too much, please tell me, because I need to hear more of that. Everyone in my life would agree that I need to hear more of that. Or if there's anything you'd like us to do differently, please let us know, because we're trying to make this as good as we possibly can and we love getting feedback. As your wife says, you think when you talk. When you open your mouth, some... ah, yes, I have my best ideas while, I like to say, pontificating; some people would say rambling. But in any case, please give us feedback. We're still trying to improve this podcast and make it as useful for you as we possibly can, because, well, we love you. You're taking your time to make what we're doing meaningful, and without you, we would just be two people spending a beautiful Saturday afternoon sitting in a closet, which would be sad. So thank you so much for listening. We love you. And happy forecasting. Happy forecasting.
