Agile Book Club

Interview with Daniel Vacanti

September 15, 2019 Justyna Pindel and Paul Klipp Season 1 Episode 8

Justyna and Paul talk math with Daniel Vacanti, author of When Will It Be Done?

Support the podcast: http://justbuymeacoffee.com

Get the book: https://leanpub.com/whenwillitbedone

Dan mentions Annie Duke's Book - Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts, which you can get here: https://www.amazon.com/Thinking-Bets-Making-Smarter-Decisions/dp/0735216371/ref=sr_1_1?keywords=Thinking+in+Bets&qid=1568461310&s=gateway&sr=8-1





Speaker 1:

Welcome to the Agile Book Club. Here are your hosts, Justyna and Paul.

Speaker 2:

I'm joining the call.

Speaker 3:

I'm recording. When Will It Be Done? is the name of the book by Daniel Vacanti. Probabilistic...

Speaker 2:

Probabilistic, probabilistic. I get such a chuckle out of that word. Probabilistic, probabilistic, improve your probability. [inaudible]

Speaker 3:

Hello and welcome to the Agile Book Club podcast. My name's Paul Klipp and I'm here with Justyna Pindel. Today we're going to be interviewing Daniel Vacanti, the author of When Will It Be Done? It's delightful to have you with us today. We both very much enjoyed reading your latest book. When we decided to read When Will It Be Done?, I remember one of the things I was excited about was that I was hoping this would be the book that doesn't scare people away. The explanation I gave at the time was a story about Stephen Hawking's publisher. When he was publishing, goodness, what is it, A Brief History of Time? The publisher said, you're going to lose half of your audience for every mathematical formula you include in the book, and that's the reason why A Brief History of Time has no math in it, despite the fact that it's a whole book about math. And I expressed the hope to Justyna, when we decided to read When Will It Be Done?, that this was going to be the Actionable Agile Metrics that doesn't scare away people who are afraid of math. Was that in your mind at all when you were writing this book?

Speaker 4:

A little bit, yes. First of all, that's very kind of you to say, but yes, it was. The question was: how do we make these concepts as accessible as possible, but also as tangible and as useful as possible? I didn't necessarily want to, for lack of a better word, dumb it down to the point of being useless, or just me babbling on for 300 pages. So yeah, I think it's fair to say there was a little bit of that going on in my head for sure.

Speaker 2:

And actually that worked pretty well for me, because for the past few years Paul was constantly recommending your first book to me, and I was too scared to read it because I was afraid I wouldn't understand it well enough. Then he told me, you know, he's just published a new book that I think is for people who are scared of math. That was the first time I read something written by you, and I have to say, I fell in love. I fell in love with all the statistics and math. So thank you very much. And Paul, thank you for your lovely introduction. I think it could help a lot of people like me.

Speaker 4:

Yeah, well, thanks very much for saying that. Honestly, if I had my choice, I think it would be my preference that people read When Will It Be Done? first and then treat the Actionable Agile book as more of a reference to it. In many ways, When Will It Be Done? should have been the book I wrote first, but I kind of had to get the metrics stuff out of my system, which I think is why I started with Actionable Agile. But yeah, if everybody could be introduced to it the way you were, with When Will It Be Done? first and Actionable Agile second, I think that's actually a much better order for sure. But to your point about two different ways to model these things, or to introduce these topics, however we want to say it: for the second book especially, the thing I'm hoping people walk away with is that it's really not about the math. I know a lot of people like to get hung up on the math and on the intricacies of probability and statistics, but that's really not what it's about. What I'm really hoping people understand is that it's the small decisions you make every day, in terms of how you're going to work, that are the most important. They seem unimportant, but as long as you get those things right, the math really takes care of itself. It's pretty simple, pretty straightforward. It's only when you're not doing those things, when you're not making the right decisions, that people feel like they need all of this complicated math and all of this complicated modeling. You really don't need it. You can fix it by just fixing your process.

Speaker 3:

Indeed. Yeah, I like the way you make the point that the most important thing to understand about Little's Law is not the law itself, but the conditions that are required for it to be valid and useful. Those conditions are exactly the same things we try to build into our systems when we want stable and dependable systems, and they're so eloquently stated in Little's original paper. This is something you've got a lot of experience with, more than probably almost anyone else I know, because of the work you've done creating the ActionableAgile analytics tool. That's the name of it, right?
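
A note for anyone new to the term: Little's Law, in the flow-metrics form discussed here, relates three long-run averages (this is the standard statement of the law, not a quote from the book):

    Average Cycle Time = Average Work In Progress / Average Throughput

So a team carrying 20 items in progress while finishing 2 items per day averages roughly 10 days of cycle time. The conditions Paul refers to are, roughly, that arrivals and departures balance out over the period measured, that items which enter the process eventually leave it finished rather than abandoned, and that WIP stays reasonably stable.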

Speaker 4:

Correct, yes.

Speaker 3:

I've worked with some teams that use it in coordination with Jira, and the visualizations are just fabulous. But I imagine you've had a lot of opportunities to collaborate with people who are using it, and I'd like to hear your thoughts on the pros and cons and the challenges of collecting data automatically using online systems versus collecting and manipulating your own data manually using spreadsheets.

Speaker 4:

Okay, so I think I heard four or five questions in there; I'll try to answer all of them. The first one, in terms of the pros and cons: we'll start with the pros, start with the positive. Usually when I present people their own data in that tool, they've never seen their data visualized that way before, and so it's a very, very powerful message. Generally speaking, a lot of teams have kind of an intuitive feeling about how their team or their process is performing, but once you put the data in front of them, the data itself really doesn't lie. I mean, it kind of does, but it really doesn't. It just makes the conversation so much more powerful. To me that's the biggest pro: it really opens people's eyes to some of this stuff. As a con, and it's a big one, and one that I am still struggling to overcome: it requires, I don't want to say a massive amount of education, but certainly a nontrivial amount of education in terms of what these metrics, what these analytics, are really telling us, how to interpret them, and how to make good decisions based on what they're telling you. In complete honesty, I have not cracked that nut yet. Because the agile community, and I'm going to sound down on the agile community, I don't mean to, but the agile community is so steeped in things like velocity, story points, and how we feel things are going, that it's really hard to shift them off those red herrings, if you will, about how teams are actually performing and how they can improve, and toward looking at it from a more objective, customer-oriented view of the world. So that's the big pro and con: the pro is they've never seen their data like this before and it's wildly powerful for them, but then, number two, you have to educate them in terms of how to use it. So I'll transition to the second question I think you asked, which was doing it automatically via some tool like Jira versus doing it manually. The thing about automatic collection is that, like anything, it's garbage in, garbage out. If you're using a tool like Jira, that requires discipline around making sure Jira is updated in a reasonable manner, in a close to real-time manner, because I can't tell you how many times I work with teams and it's: well, yeah, we're working on this thing but we forgot to create the Jira ticket, or we forgot to move the ticket to in progress, or we forgot to close the ticket, or whatever. All those little things add up to data that you can't trust. Plus, if you get into things like moving items back and forth, or cancelling things midway through the sprint, now you're fairly reliant on how Jira handles those cases and how Jira feeds you the information about those cases.
And so you have to do some fancy algorithmic stuff to account for those types of things, which is why, honestly, if you were to ask me, I personally prefer the manual collection of data. That's how I started, and that's how I did it for three years. But the big con with that is that it's such a barrier to entry. So many people will say: I don't want to have to track this data manually, I don't want to have to put it into a spreadsheet, can't we just grab this from Jira? And that's, I think, my biggest lesson from building the ActionableAgile tool: the second there is some hindrance to the path of least resistance, people won't do it. If a team is using Jira and they can't get their data seamlessly out of Jira, they simply won't do it. And it's unfortunate, because then, like I said, we're beholden to those tools to provide us data in a meaningful way. So I'm sorry, I rambled there for quite a bit, but it's your fault because you asked five questions.
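
To make the manual option concrete, here is a minimal sketch of the kind of record Dan describes keeping by hand: a start date and a finished date per work item, from which cycle times fall out directly. The item names and dates are invented for illustration; this is not code from the book or the tool.

    from datetime import date

    # Hypothetical manually collected records: one row per finished work item.
    # The only discipline needed is noting when work truly started and finished.
    completed_items = [
        {"id": "A-101", "started": date(2019, 8, 1), "finished": date(2019, 8, 6)},
        {"id": "A-102", "started": date(2019, 8, 2), "finished": date(2019, 8, 12)},
        {"id": "A-103", "started": date(2019, 8, 5), "finished": date(2019, 8, 9)},
    ]

    # Cycle time per item, counted inclusively in whole days (one common convention).
    cycle_times = [(item["finished"] - item["started"]).days + 1 for item in completed_items]
    print(cycle_times)  # [6, 11, 5]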

Speaker 3:

Indeed, and I'm so sorry about that. I want to see whether or not your experience matches my own, which is that there's a very common agile anti-pattern in which the data gathering and processing is done by a single individual or a small group of management types, and they're the ones responsible for process improvement, but all of the data generation is done by the engineers. When you've got a disconnect between the people who are generating the data and the people who are using the data, it's very difficult to build that discipline. But when you bring the data into your retrospectives, and the team of engineers are actually the people using the data to drive their own improvements, it's much easier for them to understand why it's important to practice discipline with their tools. They start thinking about it much more carefully, because they don't want to corrupt the data that they're looking forward to getting results out of. Have you found the same thing?

Speaker 4:

I couldn't agree more. I've got a whole bunch of things I want to say to that, but to your listeners: just rewind and listen to everything Paul just said, over and over and over again, because that's exactly right. The only thing I want to add is that once the team understands that the data is for them and for their improvement, and not for anybody else in terms of predictability and things like that, although, you know, it kind of is, that's when you get that more meaningful adoption. But yes, one hundred percent everything you just said.

Speaker 2:

Okay. So the other question is: how much data do we need to start building meaningful forecasts? I know the answer from your book, but I think a lot of our listeners are afraid that they will have to do very tedious work collecting different data and that it will take hours and hours, so I would like them to hear your explanation.

Speaker 4:

Yeah, I think a lot of people think you need a ton of data, and that's where most teams are surprised: you'd be surprised how little data you need to get started, especially if you've put in place some of the basic things we've talked about around the Little's Law assumptions. It certainly depends on the type of forecasting you're doing, but in terms of, say, cycle time, you really only need potentially ten, eleven, twelve data points, and once you get to around twenty you're pretty good. It's kind of the same thing with Monte Carlo. And by the way, one more thing I want to add: when I go into organizations, it's very rare that they don't have enough data. It's usually much more the case that they've got way too much data, and now we need to decide what's the meaningful data and what's not. That's where I was going to go next: when making these forecasts, when we're talking about predictability, trying to choose the historical data that's going to most closely mimic the future we're trying to predict is usually the much harder problem than not having enough data and needing to go get more. So the moral of the story is: you need a lot less data than you think you do. It's really more about deciding which of that historical data you're going to use.
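
As a rough illustration of the Monte Carlo approach Dan mentions, here is a minimal sketch, not the ActionableAgile implementation, with made-up throughput numbers: resample recent weekly throughput until a backlog of remaining items is exhausted, repeat many times, and read the answer off as percentiles rather than a single date.

    import random

    # Hypothetical recent history: items finished per week over the last ten weeks.
    weekly_throughput = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]

    remaining_items = 30   # work left to deliver
    trials = 10_000        # number of simulated futures

    def weeks_to_finish(history, remaining):
        # Simulate one possible future by resampling past weekly throughput.
        done, weeks = 0, 0
        while done < remaining:
            done += random.choice(history)
            weeks += 1
        return weeks

    results = sorted(weeks_to_finish(weekly_throughput, remaining_items) for _ in range(trials))

    # Probabilistic answers: "X% chance of finishing within N weeks."
    for pct in (50, 85, 95):
        print(f"{pct}% chance of finishing within {results[int(trials * pct / 100) - 1]} weeks")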

Speaker 2:

Okay. So the following question: how do you improve the quality of the collected data so that you can finally say, I trust it, it's not garbage, it's not just colorful charts that look good in my presentation?

Speaker 4:

Yeah, wow, that's a great question. If you have an answer to that, I'd love to know it, because I think we could make a lot of money if we could crack it.

Speaker 2:

Okay, let's work on that together.

Speaker 4:

But yeah, I think it gets back down to, like I said, having the team, or whoever is producing and using the data, understand those little things that they're doing every day, the little actions they're taking every single day, and the long-term impact those have on their data. Things like: hey, we started working on this item; did we actually pull it into In Progress on our Jira board? We stopped working on this item; did we actually close it in Jira? We've got these twenty things in progress; what's the order in which we're going to pull them through the process? As long as teams are getting those types of decisions right, and "right" is kind of the wrong word there, but as long as teams are talking about those things and are generally guided by them, then we're in a position to trust the data more. How to trust data, and how to understand what's signal and what's noise, is probably a whole other podcast on its own. So, very good question. Like I said, when you guys get the answer, please let me know, because I don't have a good answer for it.

Speaker 3:

Yeah. Something similar I wanted to follow up on: you were talking about how, when you go into an engagement, generally there's no shortage of data; the trick is figuring out which data is most representative of the future, and of the portion of the future they're interested in. I think a nice way of looking at Justyna's question is: how do you determine which data from the past is the most relevant? Because, and you must see this all the time, you take a representative sample of data and there's always an explanation: oh, well, Joe was sick during that week and he's one of our key engineers, or, that's all well and good, but there was a conference during that month and that threw things off. I bring this up because it's very difficult to find any month on any team in which nobody is sick and nothing is happening; there's always some explanation. What sort of things do you look for in order to determine that a particular batch of historical data is relevant to the future?

Speaker 4:

Yeah. There are two general heuristics, two general rules of thumb, that I follow when talking about data, and I think a lot of people know the first one, but I'm not sure they know the second. The first one, which I think most people believe, is that in general more data is better than less data, and that's technically true. But there's a caveat to it, which is the second rule of thumb: more recent data is usually better than less recent, or older, data. We don't necessarily want to be making forecasts for 2020 based on data from 2014, right? But even the more recent data is, in and of itself, problematic, because if we're going into planning for, say, January, we don't necessarily want to use December's data, because of the Christmas holidays and potentially even the New Year's holidays and things like that. It gets a little more problematic still in the States, because you say, okay, well, let's go back to November and use this year's November data; well, in November we have the Thanksgiving holidays here in the States. So now we go back to early November, or October, and it's like, well, is October's data really reasonable for forecasting January? I don't know. Honestly, this is where it gets to be, in my mind anyway, more art than science. We just kind of have to make the best guess. But to your point, yeah, there are always going to be objections: oh, well, this guy was sick, or these people were on vacation, or, hey, we were in the middle of an office move when all this stuff happened. Generally speaking, if you have enough data in that representative sample, then all of those things, all of that variability, what I would call variability, should be accounted for, and it should give you a reasonable forecast of the future.
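
One way to act on those two rules of thumb, sketched under my own assumptions rather than taken from the interview: keep plenty of data, but window it to the most recent, most future-like period before reading off cycle time percentiles.

    from datetime import date, timedelta

    # Hypothetical completed items: (finished date, cycle time in days).
    history = [
        (date(2019, 6, 3), 4), (date(2019, 6, 20), 9), (date(2019, 7, 2), 6),
        (date(2019, 7, 15), 12), (date(2019, 8, 1), 5), (date(2019, 8, 20), 7),
        (date(2019, 9, 2), 8), (date(2019, 9, 10), 3),
    ]

    def recent_cycle_times(records, as_of, window_days=90):
        # Rule of thumb: prefer recent data, but keep enough of it to be meaningful.
        cutoff = as_of - timedelta(days=window_days)
        return sorted(ct for finished, ct in records if finished >= cutoff)

    def percentile(sorted_values, pct):
        # Nearest-rank percentile; crude, but enough for this sketch.
        index = max(0, int(round(pct / 100 * len(sorted_values))) - 1)
        return sorted_values[index]

    recent = recent_cycle_times(history, as_of=date(2019, 9, 15))
    print(f"85th percentile cycle time: {percentile(recent, 85)} days")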

Speaker 2:

Okay. Since the moment I read your book, and actually I've been talking about this book constantly for the last two months, I've also been wondering how to build a forecasting culture in a company, so that it doesn't apply only to one team but to all teams, to the whole organization.

Speaker 4:

Yeah. Even though, in full transparency, you guys gave me a hint beforehand as to some of the questions you might ask, you still ask really hard questions. So it's hard to come up with a good answer in terms of introducing this forecasting culture. In almost every company I've been into, if you talk to management at the executive level, the senior level, the thing they are craving is predictability. They will almost always take better predictability over getting more stuff done. I don't know if your experience has been the same or not, but most of the executives I talk to would rather have predictability than more stuff, most of the time. So from that top level, predictability is a big deal. From the bottom up, people are less concerned about predictability, just because, I think, they're closer to the work. They know all the stuff that can go wrong, and they believe it's really, really hard to be predictable, and that's fairly true, because there are a lot of things going on that can really screw things up. And, by the way, most teams hate the whole estimation process, because now we've got to take the whole team offline for hours or days to try to estimate all this stuff, and we all know our estimates are going to be wrong anyway, so why are we doing this? So the way I like to introduce it is: what if we could get more accurate answers with a lot less effort? I think that's usually what resonates with teams: by collecting the data that I talk about in both of my books, you can get much more accurate answers with, like I said, a lot less effort. And if we can do that, why wouldn't we? Hey, if estimating in story points and doing all that stuff works for you, I'm not going to sit here and say stop doing it. What I am going to suggest is that there might be some things out there you can try where you get as good or better answers, it takes a lot less time, and it allows you to focus on the things that are much more important, which is delivering customer value rather than sitting around talking. That's generally my approach. And once a developer sees that, what do you mean I don't have to spend hours and hours just sitting around estimating and talking about stuff, and I can actually start writing code? Once they understand that, they love it. That's what they want to do.

Speaker 3:

Yeah. The real challenge there is that gap in the middle management, typically. Have you heard the observation, and I'm sure it probably has a name in psychology, that people are more comfortable being wrong and certain? They'll leap to a conclusion, even without enough data, rather than wait for the data, just because they'd rather be wrong than uncertain. And there's a huge difference in a person's comfort between "Sharon told me it would be ready by September 12th; I don't believe her, but she said it would be" and "I know for a fact that it will be ready between September 15th and next January 3rd." That's too big a gap of uncertainty. There's a difference between having a number you can recite and cling to until you can't anymore, and knowing that your process is unpredictable and exactly how unpredictable it is. How do you close that gap and get people focused on trying to increase the quality of their forecasts, as opposed to simply clinging to those little melting icebergs of temporary certainty?

Speaker 4:

Again, great question, and again, if you guys can figure this out, I would be first in line to sign up for whatever answer you come up with, because you're exactly right. Earlier, when I talked about accuracy of forecasting, and this is where I have to be careful, in my mind, when I say accuracy of a forecast, I'm still thinking about it probabilistically. I'm saying we're getting more accurate, but we're getting more accurate from a probabilistic perspective. And I think what you're getting at is that people have a hard time living in that probabilistic world. They want determinism, and, like you said, even if they're wrong. That is the chasm we have to cross, so to speak: when we're talking about forecasting, when we're talking about predicting the future, the future is full of uncertainty, and the second uncertainty is involved, that demands, that dictates, a probabilistic approach. You've got to throw determinism out the window, and most people don't want to do that. They think we should be able to determine it: hey, we should know this story takes 7.3 days, or that this release of 120 stories is going to be done exactly by November 1st. We should know that. Well, that's impossible to know. It's just impossible. So getting people to throw determinism out and start embracing that uncertainty, embracing the fact that we need to start thinking probabilistically, that is the problem we need to solve. One of the books that I love to throw around to help people do that is Annie Duke's recent book, Thinking in Bets: getting people to understand that decision making is really about placing bets. You can place a good bet and have a bad outcome, just like you can place a bad bet and have a good outcome. Just because the outcome was good or bad doesn't necessarily mean it was a good or bad bet, right? I don't know how to get people to understand that; that's what I'm going to spend the next several years trying to figure out.

Speaker 3:

I think that's a really strong case for top-down transformation, because at the bottom you've got predominantly engineers, who understand these things; it's not too difficult to convince engineers that their estimates are rubbish. And at the top you've got people who actually understand the costs of mistakes and the cost of being wrong, people who have the power to say "you're not going to get what you want" or "you're going to get what you want, and I guarantee it," and their reputations are on the line. But in the middle you've got this layer of middle management whose job is predominantly to try to make people happy upstairs without knowing why, and to try to manage chaos that they don't fully understand beneath them. So if you've got a top-down approach, I think the only way to really change that culture is to get the people in the middle of the organization to understand what the actual business risks they're trying to manage are. Because the cost of a deterministic forecast, for one, there's no such thing, the cost of living in a deterministic world when you're making promises that have huge costs associated with them is just too high. And senior leadership understands this so much better than middle management does, because that's the world they live in. But anyway...

Speaker 2:

Ah, there's no question there, so I'll just leave it at that.

Speaker 4:

Yeah, yeah. What you're highlighting is that there's definitely asymmetric information there, right? Like I said, the people at the top understand risk, and the people at the bottom, I think, understand more of the complexity and variability that goes along with that risk. How do we bridge that gap so that maybe both sides have a similar understanding of each other's information? Whether that's top-down or bottom-up, honestly, I don't necessarily have a preference, but I understand the allure of both sides of that debate. But you're right; I guess the point is that there is asymmetric information there, and how do we solve that problem?

Speaker 2:

Okay. So, just to keep up the standard of surprising questions, I have one more. What are the forecasting practices that you think are useful but didn't include in the book?

Speaker 4:

Wow. Hmm. At the risk of sounding arrogant or aloof, I can't think of one. And the reason I say that, and I don't want to sound like I know everything and that everything you could possibly want to know is in the book, that's absolutely not the case, but what I'd really like to get across is that in terms of forecasting, in terms of predictability, it's really all about keeping it simple. I'd much rather have teams focus on the very basic things. Things like: hey, pay attention to work item age. Honestly, if teams did nothing else, and all they did was pay attention to age, and actually did something about it when the age told them to,

Speaker 5:

Huh.

Speaker 4:

that takes them almost all the way to where they need to be. Couple that with some limiting of work in progress and you're pretty much there. That's the formula I give people when I go in: let's keep it really simple; we're going to pay attention to age, and we're going to limit work in progress. Yes, there are some other techniques we might layer in once we have those two things solved, but let's focus on those two things first. To me it's more about coming at it from a process perspective, or a process coaching perspective. If we do those two things, the forecasting stuff will take care of itself. So I guess that's why I'm struggling to answer your question; I can't think of anything off the top of my head. Because you'll hear things like, oh, well, we need to do fancy curve fitting, we need to understand the probability distribution of our cycle time or throughput data, and we need to fit it to a curve using these shape parameters. And no, no, no. Stop. If anybody starts telling you that stuff, just stop, because that is, in my opinion, completely the wrong thing to do. Let's focus on the basic blocking and tackling, if I can use an American metaphor, of our day-to-day process, and not worry about the fancy mathematical stuff.
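
A minimal sketch of the "pay attention to work item age" habit, with invented item names, dates, and thresholds rather than anything from the book: compare the age of each in-progress item against a cycle time percentile from finished work, and flag anything that has already been in progress longer than most finished items ever took.

    from datetime import date

    # Hypothetical board state: in-progress items and the date each was started.
    in_progress = {
        "PAY-17": date(2019, 9, 1),
        "PAY-21": date(2019, 9, 12),
        "PAY-23": date(2019, 9, 14),
    }

    # Cycle times (days) of recently finished items; the 85th percentile is a common reference.
    finished_cycle_times = sorted([3, 4, 4, 5, 6, 7, 8, 9, 11, 14])
    p85 = finished_cycle_times[int(0.85 * len(finished_cycle_times)) - 1]

    today = date(2019, 9, 15)
    for item, started in in_progress.items():
        age = (today - started).days
        # An item older than the 85th percentile of past cycle times deserves a conversation
        # now, not after it finally finishes and shows up as an outlier on a chart.
        flag = "  <-- older than 85% of finished work" if age > p85 else ""
        print(f"{item}: {age} days in progress{flag}")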

Speaker 3:

Well, that all works very well when you're dealing with a predominantly autonomous team. But in most large organizations, in most of the places I've worked, the primary causes of delay have been external to the team: shared services, infrastructure support teams, that sort of thing. I was challenged in one of my recent trainings when I was listing the myriad possibilities; I've got one talk that just lists forty or fifty things a team can do when they feel like they don't have control over their environment. And somebody raised a valid criticism that so many of those things just involve shifting the uncertainty. For example, if one team out of twenty negotiates a tight SLA with a shared service, all that really does is increase the unpredictability of the other nineteen teams. Do you have any tips or techniques for trying to balance predictability across a complex system that has multiple dependencies?

Speaker 4:

Yeah, yeah. Again, I'm just floored by the quality of these questions, guys. Just as a quick segue: this whole idea of empiricism is at the front of my mind recently, because that's the next series of things I'm going to be talking about, empiricism, and how empiricism isn't necessarily all it's cracked up to be. It is, but it isn't. So if somebody were to come to me and say, you know what, our problem is not necessarily us as a team, it's all these external dependencies we're suffering from, shared services or third-party vendors or whatever it may be, the thing I like to say is: number one, from a process optimization perspective, the way we need to look at the world is that there are things we can control and things we can't control. Before we start going after the things we can't control, we need to make sure that we are controlling, as much as possible, the things we can control. So that's the very first thing I'd want to understand when I hear, wow, these other dependencies are killing us. That may be true, but let's make sure that's the case, and let's make sure there aren't other things you're supposed to be controlling that you're not controlling. And by the way, the stuff you can control should account for the majority of the variability in your process; this is the whole common cause versus special cause variation discussion, which maybe, if you guys invite me back, we can do another talk on. Once we have that, now we can start collecting data on the stuff we can't control, because now we know that the variability we're suffering from truly is because of this external dependency, and we can quantify how much it's really costing us in terms of extra time, extra effort, extra money, whatever, and we can come at it more objectively. Maybe it's worth the investment to fix that problem, and maybe it's not, but until we have that data we simply don't know; it's just a hypothesis. If we can verify that it is that external stuff causing us the problems, well, [inaudible] gives us some very, very handy tools for how to fix those problems, and at that point we're crossing over into the risk management and risk mitigation realm, which, again, I wish I had a ton more time to go into; there's a whole bunch of stuff we could talk about there. So I'm sorry, I really kind of waved my hands at that one, but I'm afraid of going too deep, because we could probably spend an hour or more on that. Does my answer make sense? I don't know.

Speaker 2:

Yes, it makes sense, and I would just stop you there, don't go any further, because I still have two questions that I really, really want to ask. The first one: I had a really shocking moment while reading your book. It was when I read about classes of service and the Titanic story, and then I watched your talk on that topic as well. I wanted to ask you, and hear your explanation: why do you think using classes of service is not the best way to prioritize work and group the work items that we have?

Speaker 4:

Yeah. You guys are just pushing all my buttons today. That's fine, that's fair, you get to do that. So, just for people listening to this who maybe have never heard the term class of service before: the way I define class of service is, once an item has started, and to me that's really key, because class of service doesn't make much sense while an item is still sitting in the backlog, before it has started; once an item has started, a class of service is the rules, or the policies, around how you treat certain items versus how you treat certain other items, specifically, but not limited to, the order in which you're pulling something through the process. A classic example of a class of service is something called an expedite. People will tell you, well, we've got this item in progress, it's an expedite, so that means whenever anybody frees up, or whenever that item comes into the process, we have to stop what we're doing and go work on that expedite and work it through the process; we have to forget about everything else to work on that. So that's class of service in a nutshell. The reason I don't like it, in twenty-five words or less, is that, generally speaking, class of service serves to make your process less predictable overall, not more predictable. Most people think the justification for classes of service is: oh, we're going to introduce these classes of service so we can make our process more predictable. Well, the truth is the exact opposite. What I like to tell people is that what we really want to do is model our process so that the process itself is behaving as optimally as it can, independent of these classes of service. Because once you have that, what you will see is that introducing these classes of service, like expedite or fixed date or whatever, those policies will actually serve to make your process less predictable rather than more predictable. And again, it's one of these empirical things, right? It feels like, hey, if we expedite these things, they're going faster and we can make them more predictable, but what you'll find out is that you're actually not making things better; you're making things worse. So again, I'd love to spend a whole hour on that topic if I could.

Speaker 2:

Yes, yes, but I'm happy with your answer, and I have, I think, the last question of today's podcast, which is something I stole from Paul. He told me that if I don't ask you this, I won't be welcome in the office anymore.

Speaker 4:

Okay.

Speaker 2:

Tell me, why are people scared of math? I have my own reasons why I was scared of math, but I'm just interested in your point of view.

Speaker 4:

Yeah, you know, I was actually just having that conversation with some people this morning. Again, I don't know. My theory is that it probably goes back to, at least in America, I guess, I can't speak to Europe or the rest of the world, but in America, how we're taught math. We're taught math by rote. You're taught the mechanics of math, but you're not really taught the application: hey, why are we solving for x or y, why are we differentiating this equation? You're never really taught that. So people graduate high school and, yeah, maybe they know a little bit about algebra, obviously they know arithmetic and things like that, but nobody really knows how to apply it. And so I think that's... ah, I don't know, I really don't have a good answer. What are your thoughts? I'd love to hear your philosophy on this, your opinion on this.

Speaker 3:

Well, I was trained in the American system, and I was always good at math but never enjoyed it until geometry. And I've got to say, my son is learning geometry in Poland now and I'm very disappointed, because geometry was the moment in the American system in which I began to appreciate what math was. Up until then it was just solving problems, but in geometry we spent an entire year of studies just proving why we know a thing to be true. The vast majority of the work was designing these proofs, working backwards in order to create logical arguments for why we could extrapolate certain universal truths from one statement to another: if this is true about this statement, it's also true about this thing, and these are the reasons why. That was the moment I began to realize that what we were doing was creating models. I explained it to my son this way, when he was first learning arithmetic; he must have been four or five years old at the time. I told him that if he honestly wants to know what happens if he has three apples and he gives Michał two apples, how many apples he gets to eat, he would have to go to the store and buy three apples, bring them home, put one of them on the table, then take two of them, go across the street, ring the bell, go up the stairs, give the two apples to Michał and explain why he's giving them to him, then go down the stairs and back across the street, go home, open the door, go into the kitchen, and count the apples on the table to know what happened. Math is a much easier way of doing that, and that's just a simple example. I think another thing that really brought it home for me was a book I read almost a decade ago, and if I can find the title of it I'll put it in the show notes. It showed how you could use, goodness, what's it called, a geometrical approach to modeling systems using grids; it basically solved common mathematical problems using three different styles of mathematics. So instead of just saying, well, this is what we know about triangles because of the Pythagorean theorem, it would say, well, here's another way you can explain the same thing, and here's another way you can explain and illustrate the same thing. And that's what turned it on for me. That's what made it exciting for me: the idea of being able to model a system and then perform experiments on the system without actually spending my own money.

Speaker 4:

Yeah.

Speaker 3:

And there's not nearly enough of that taught in school. I would much rather that, beyond basic arithmetic and algebra, students were given real-life problems to solve and the encouragement to find solutions, ideally problems that aren't easily solved any other way than by modeling the system.

Speaker 2:

And I think that a lot of people are scared of math because they don't want to look stupid. I remember, two years ago, we chose the main topic of our ACE conference to be related to math and such. We invited Troy, and we actually had a lot of talks on this topic, but we didn't market it as being about math and forecasting and predictability. We didn't want to tell people, because we were afraid they'd be scared that it might be too hard for them and they wouldn't understand, and so they wouldn't spend their money on a two-day conference ticket to sit and look stupid, or pretend that they understand. So that's my suspicion. And what I can say about the Polish education system: I was taught at school that math is the queen of all the sciences and that the smartest people understand math. So there was this kind of barrier in the Polish education system, as I experienced it, that was telling you: if you're not super smart, you'd better not start; it's not for you, you won't understand. And I think that stops some people. Yeah.

Speaker 4:

Yeah, yeah. If I can just pile on a couple more sentences, because this is obviously something that's pretty close to my heart, and I agree completely with what you guys have said: along the lines of people not wanting to feel stupid, what I'm more afraid of is not people who don't like math, but people who purport a solution "because math," right? That was the problem with story points. Honestly, the Fibonacci sequence and story points: oh, because we're doing math, that makes it right, when that was probably one of the stupidest applications of math anyone could ever think of. And, by the way, it's not even the real Fibonacci sequence, right? Or these people right now saying, let's fit cycle time data to a Weibull curve, or however you say it. That's just a poor application of math, and I think it makes people feel stupid, because they don't understand why you would ever use a Fibonacci sequence for story points. Well, the real answer is: you wouldn't. Why would you ever fit your data to a Weibull curve? Well, the real answer is: you wouldn't. And that's why you don't understand; it's not because you're stupid, it's because it's just not a good application. So yeah, I think there's too much of the "because math" in agile: hey, if we do math, we must be right. And that's just wrong. So if I can, maybe we'll just end it on that. I don't know.

Speaker 3:

Wait, you know, in all fairness, you can't trademark the Fibonacci sequence.

Speaker 4:

We should leave it at that. Yup, exactly.

Speaker 3:

It has been absolutely delightful talking to you today, Daniel, and we really need to get you out to Kraków one of these days.

Speaker 2:

Oh yes, yes, yes. Go for it.

Speaker 4:

I can't wait. Yeah, I know, I was supposed to come out to the ACE conference a couple of years ago and I got sick, and I was just heartbroken. So I guess we will find an excuse for me to get over there; I would love to meet you guys in person. And by the way, the pleasure was all mine for being here today. I am really grateful for the opportunity to talk to you guys. I really enjoyed it, and I hope we get to do it again sometime soon.

Speaker 2:

And thank you very much for really introducing me to math and forecasting and everything, because I think that without your book I would have stayed scared for some time, until I was just forced to do it.

Speaker 4:

Right. Well, thank you very much. Honestly, if we just won over one person, then that's good enough for me.

Speaker 3:

I think you've won over a lot more than that. Thank you so much.

Speaker 4:

Hey, thanks. Thanks guys. Thank you. Bye. Bye.

Speaker 3:

Well, that's it for this episode of the Agile Book Club. Join us in two weeks, when Justyna and I discuss Mike Burrows' new book, Right to Left.