Testing Peers
Testing Peers is a community-driven initiative built by testers, for testers. We are a not-for-profit collective focused on supporting each other across software testing, quality, leadership, and engineering. This group is peer-led, values-driven, and passionate about shaping a more thoughtful, collaborative testing culture.
The Testing Peers podcast is now expanding beyond its original four hosts, David Maynard, Chris Armstrong, Russell Craxford and Simon Prior, striving to represent the voices of a diverse and thriving community.
Our inaugural in-person conference, #PeersCon, launched in Nottingham in March 2024, returning for #PeersCon25, with #PeersCon26 already scheduled - further solidifying Testing Peers as a not-for-profit, by testers, for testers initiative.
Testing Peers
Flow, Friction, and Value
Hello friends, and welcome to another episode of the Testing Peers podcast.
In this episode, Chris, Dan, David, and Russell come together for a wide-ranging conversation about flow, what it really means, and why it matters far beyond speed or delivery metrics. The discussion starts with some light New Year banter before quickly moving into systems thinking, value, and the often unseen friction that slows organisations down.
The group explore flow as something that exists across people, processes, and systems, not just CI/CD pipelines. Using plumbing analogies, real-world examples, and a healthy dose of scepticism about simplistic metrics, they unpack why optimising individual components rarely improves outcomes if the wider system is ignored.
A recurring theme is the idea that quality is about the removal of unnecessary friction, and that debt shows up in many forms, not just code. Documentation, onboarding, learning mechanisms, and organisational processes all contribute to how effectively value moves through a system.
The conversation also touches on how difficult flow is to measure meaningfully. While metrics like DORA can tell part of the story, they often focus on speed rather than outcomes, impact, or sustainability. The hosts discuss the importance of qualitative signals, trending over time, and understanding what good actually looks like in a given context.
A significant part of the episode focuses on the human side of flow, including onboarding, learning, feedback loops, and psychological safety. The group reflect on how better onboarding and clearer purpose can help people contribute sooner, feel more connected to their work, and understand the impact of what they do.
From a testing perspective, the discussion highlights how testers already have many of the skills needed to assess flow at an organisational level. Curiosity, critical thinking, risk awareness, and communication all play a role in identifying friction, asking difficult questions, and helping teams improve. At the same time, the hosts are careful not to position testers as uniquely gifted, recognising that good systems thinking comes from diverse roles working together.
The episode closes with reflections on trust, credibility, and the role of testers as trusted advisors. Being listened to is not about job titles or tools, but about doing the work, understanding the system, and backing up insights with evidence and experience.
Links and references
- DORA metrics: https://dora.dev/guides/dora-metrics/
- The Phoenix Project: https://itrevolution.com/product/the-phoenix-project/
- Theory of Constraints: https://www.leanproduction.com/theory-of-constraints/
- Stu Crocker on quality as the removal of unnecessary friction
- Post Office Horizon IT Scandal: https://clarotesting.wordpress.com/the-post-office-horizon-it-scandal/
#PeersCon26 tickets are now live for just £30.
And as always, we are looking for sponsors to help make this event the success it has been for the last two years; get in touch if you are interested.
Twitter (https://twitter.com/testingpeers)
LinkedIn (https://www.linkedin.com/company/testing-peers)
Instagram (https://www.instagram.com/testingpeers/)
Facebook (https://www.facebook.com/TestingPeers)
Right. Hello friends. Welcome to another episode of the Testing Peers podcast. My name is Chris Armstrong, and tonight, today, this morning, whenever you're listening, I am joined by the one, David Maynard. Hello. The two, Russell Craxford.
Why I
and the three, Dan Ashby.
Hello.
He said goodbye before his hello.
It's all very well and sensible here today. Today we're gonna be speaking about flow, optimizing flow, providing value, showing value, all those good things. Good luck to whoever's making the title for this episode. But before we dive into that one, Dan, you've got us a little bit of banter. I do.
Introduction
Yeah. And since it's new year, I wanna hear about any new year resolutions that you have.
Oh, new year resolutions every year.
I can give you mine. Mine's relatively easy, one that I've been working on, which is just to get healthier. It's not got a very defined outcome, but it is to end the year healthier than I start the year. So lose weight, be stronger, whatever it is, just be slightly better than I was the year before.
Positive trends,
Uh, mine is very similar. Although, yeah, I've sort of given up a little bit on running, but I've got a half marathon in March, so I need to put some training in for that. And also I keep going on about the NICs, and I did do quite well, I've done quite well just before Christmas.
Uh, but I've hurt my back, so I've had a couple of weeks off, so I need to try and get back into that. Uh, so yeah, it's getting healthy and, um, yeah, hopefully avoiding injuries in both running and general fitness.
I think, uh, every year I try and start a, a nice trend where I get to read at least a chapter of a book for pleasure every single day.
And every year I get a certain way through and then I drop off. So probably what I'd like to be able to do is a bit of both. I'd like to write more regularly and I'd like to read more regularly. and again, on both counts for pleasure. There's stuff we do for work. We write bits and bobs all the damn time, but I wanna write a bit more for pleasure.
It's been a while since I've done that. You know, I spent a lot of previous years writing for purposes of, of brand awareness or, or, Hey, look at me, gimme a job. Whatever I needed to get done at the time, I'd quite like just to be like talking about this is the thing that I find interesting. Let's have a bit of a chat.
Or by you.
Sounds fabulous. Yeah. Uh, mine is actually a bit of all of yours. So, uh, yeah, the health one, Russell, like, I think that's a common one every year for me, and it's an aspiration more than anything. But also, like, I've found my writing mojo as well, Chris, so get back into that.
I'd like to try and do more side projects from a development perspective, like building a bit of a profile website, for example, or seeing through some of these sort of app ideas that I've got. It's easier with these AI tools. So doing a bit of vibe coding, maybe. That'd be good.
And of course you're looking forward to being program chair, not program chair, program committee, for PeersCon 26, and it'll be your first time at the event, won't it?
It will be, yeah. Yeah. I took a bit of a break from testing conferences for a while, trying to get into the dev ones and, uh, remain active in agile ones, but I'm so excited about this. Like, I'm really excited about it 'cause it's been a while since a testing conference and I've kind of missed that a bit.
Understanding Flow
You know, that's the way I think we all felt a bit of that in pandemic times as well, didn't we? When we didn't get to go to anything? We could talk about all these sorts of fun things going on, but there is something that I proposed as a topic to dive into today. It's a lot about what I've been thinking about, and, you know, we talk about testers and quality, so we're gonna just put that semantic bit in a little corner over here to talk more about the things that we can do, or as testers we might be drawn to do, to optimize flow.
Look for ways to show that we're providing value in the workplace rather than just being the folks at the very end reporting on numbers of things. So maybe leaning more into qualitative items, preventative mitigation, risk-informed areas. I've planted a lot of seeds there. Um, but it's something that I've been really chewing on quite a lot recently, and thinking that there's a lot of value add in the instincts that we have, the places that we find ourselves as testers. Um, so that's kind of my opening gambit. I had a couple of nods from people, so I feel like there's something there... in silence.
The Plumbing Analogy
Well, I've got a lot that I could, I was wanting to hear you first.
No, no, no.
We all thought that. Therefore, no one spoke.
Yeah, flow's a really interesting one, right? Because in today's world, a lot of people talk about speed, like we must move faster. And, um, a lot of companies tend to think of that as do more, like actually take on more.
But the way that I think about flow is kinda like a plumbing system, for example, right? So you're not just trying to, you can't just force more water through the pipes, right? You need to refactor the design of the pipes, think about the actual width of the pipes themselves, you know, in a working context.
So that equates to, like, your processes, the quality of your processes, so not just the speed that you're moving at, but even how iterative you are, like breaking things down into smaller and smaller chunks, reducing those long feedback loops to make them much shorter, right? So things like pairing and ensembling help with improving flow, even though it doesn't really feel like they should, because you're all working on one thing at one time, perhaps, with ensembling.
So it is a little bit, uh, like less intuitive in that sense. But flow is a, a sort of grander sort of scale of things when you're thinking about not just moving faster. So it's a huge topic, like loads that we can break into.
I always like an analogy and get that pipe analogy. I'd like to take that one step further.
'cause actually it isn't just about the size of a single pipe. It's the number of pipes and it's the way that the pipes are then convoluted in order to come back together. So it's a bit like system architecture almost, you know, breaking up, and also, if you have a set amount of water and you have too many pipes, then actually the flow through each particular pipe is greatly reduced.
You might have a trickle, uh, in all the pipes, and so therefore actually your cadence is reduced because of it. Um, and yeah, so I do like that idea and obviously it feeds absolutely into flow.
Do you want me to take it even further?
Go on.
I could, which is, it's not just where the pipes go, it's the output of the pipes.
'cause if they go into something that can't take the load, take the flow of all the work and all the rest of it, they become the bottleneck that stops the flow. Um, so you've gotta actually organize the whole system, not just maximize your pipes and then have something at the end that can't cope with it.
'cause you can flood, say, a room or whatever it is. So you've got to make sure you think of the system rather than just optimizing all the individual components.
It's as if someone named these things pipelines for a reason sometimes. But the pipelines are more than, I think we've already touched on it, more than just CI/CD pipelines, right?
This is like a throughput of work, of process, of appropriateness, appropriate tools for the appropriate places. And a different context is gonna require a different tool, a different bend, a different width, a different, um, velocity, a different level of pressure being applied.
And, you know, I quite like the, um, oft quoted by me, uh, Stu Crocker's idea of quality being the removal of unnecessary friction. Like, you wouldn't always want your pipes to be free flowing in, you know, relentless form if it wasn't appropriate. And I quite like that sort of level of appropriateness, and necessary and unnecessary, being brought into that throughput. And just to deviate, sadly, away from that particular analogy: I like to think about when we've got things that we need to fix, but things are continuing to work for a while, we're accruing debt. And one bugbear that just gets me, and I just wanna get it off my chest, so I'm just gonna say it quickly, is when people talk about tech debt, they think you're just talking about tech stack related debt.
Think about different things, don't they? It depends who you ask. It's entirely...
Architecture. Yeah. But very often I've sat there in a forum, we've got tech debt, and people go, oh, well, do you know, do we need to add more memory? I'm like, well, no. The tech debt here is maybe what you could call document debt or contract debt or process debt or something like that.
And they just look at you and go, well, that's not a technical problem. And you're like, it is a technical problem. You've just... anyway, I just wanted to get that off my chest. We can go back to the, uh, the friction points now.
I think they're really interesting points as well. Like, Russell, we can layer on what you said as well, right? In terms of, like, the cleanliness of the pipes: dirty pipes, right, produce dirty output. So that's one aspect.
Oh, maintenance.
Oh, tell me about it. But then, Chris, you mentioned as well, like, beyond CI/CD pipelines, right? Uh, my head immediately doesn't go to pipelines. Everyone in tech, I imagine most people, would go to pipelines, but I was actually going to organizational systems.
So thinking about even learning mechanisms, right? Really clunky or non-existent learning mechanisms make it harder for the team to actually grow, which makes it harder for them to improve the output of their work, right, and improve the flow. So every single aspect of the organization is one of these pipes.
Right?
Yeah, I think you're very right. When we're talking about pipelines in this sense, we were using the analogy just to talk about getting something from A to B, wasn't it really, um, outcome, versus necessarily a CI/CD pipeline. But you're right, also, CI/CD is the big pipeline that us within technology often refer to.
But again, different people consider different things in the CI/CD pipeline, um, from release, some include it, some wouldn't, you know, continuous deployment, continuous delivery. There's all these different sorts of variations of all these things, but ultimately, I guess if we think about it as value add, it's what gets in the hands of the customer from somewhere.
What you add to the value of the company, the organization is usually ultimately from buying a product, selling a product, giving a product away, whatever it is, and it's all the organizational aspects from training, development, feeling psychologically safe to actually writing software that deliver that.
In our case, it could be manufacturing, it could be one of the other fields out there. But yeah, it's all these things that together create value. Uh, and it's interesting, 'cause if you think about it, that includes everything from the hiring and the onboarding onwards; every part of the process actually has an important part to play in the whole system.
Scary.
I was also gonna say that another thing that's important, um, as well as the sort of training, is the onboarding: how quickly you can add value to a particular project, by the quality of the documentation or whatever on the actual project, um, for new people coming onto a particular project. Uh, the other thing I was gonna say was that I think a lot of companies consider or only look at the output of particular projects, or the actual thing, rather than the outcomes.
And actually that's where test comes in, is concentrating on that outcome. What does success look like? Rather than just actually producing the software or product or whatever the project is trying to do.
Measuring Flow
Yeah, I was about to say something very similar, David, which was kind of around the metrics around it.
All right. We see things like the DORA metrics that, uh, kind of talk about the time aspect of it, still coming back to speed, right? But is there anything that you can actually think of that measures flow, not just speed?
Depends, depends on what you mean by flow. Exactly what I was gonna say.
But what we've been talking about is flow, right?
Because, because it's tricky though, isn't it? Because flow is a number of items that are quantifiable, and that's what we're going to be, um, concentrating on naturally, because quantifiable things are by their very nature easy to count and therefore easy to report on.
So it could be the number of work items that move from A to B to C and how quickly that happens. How long something is static. Exactly, how long something is static, um, how responsive something might be when you call upon it. You know, we've got mean time to response, mean time to resolution, things that exist.
Um, well known sorts of measures that we have. The difficulty that we have comes in that other aspect of things that are less easy to immediately quantify. Um, things that, you know, like I touched upon, are a bit more qualitative, and maybe they're a bit more disparate depending on where you sit in different projects and things, where it's thematically similar, but there's a different context, there's a different, um, pathway through to stuff that means it isn't immediately the same.
I was trying to look at things about saying, like, you can always trend various things over time if you can work out what good looks like and bad, and attribute something akin to an index to something, just so you can sort of work out how those things are. I don't think it's an exact science, but you have to sort of start somewhere.
'cause if you don't work out what good looks like, and what it is currently, then how you are trending over time on those things, you're not gonna be able to gauge if it's better or worse realistically. And sometimes it's not something you can just count, which is really frustrating, because we're almost always asked to just count the things.
Got to boil it down to an OKR, Chris. It's got to have that key result that's measurable. Right. I'm joking, but
No, you are and you aren't, I guess, is the point, isn't it? That's what makes it humorous. Um, what flow is, is hard to define, isn't it? It's a multitude of things. It's not one measure. It's probably two or three or four things together.
It's things like we've already mentioned, like work in progress. Um, but it's also things like output. So you need both of those things together to kind of understand these things. And it's kind of, um, understanding, I guess, the change flow rate. So I guess if you go to the DORA metrics, it's seeing how fast you can do something and how reliably you can do it.
'cause it's all great having great flow, but if it's overflowing, or you're piping foul water rather than clean water out, it's not good flow. So you actually start measuring not just what you output, but the value of it. Um, if it's manufacturing, for example, you might measure the failure rate, as in how much of what you're producing actually goes in the bin. So that's waste. So it's generally, obviously, trying to measure the fact that it's good, what comes out, the rate at which it comes out, and the time it takes from start to finish to come out, whatever the system. Those are generally, I think, the three key measures in my head at least.
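The hosts keep this at the level of principles, but for anyone who wants to see what trending those three rough measures could look like in practice, here is a minimal, hypothetical sketch in Python. Nothing in it comes from the episode: the WorkItem fields, the flow_snapshot function, and the sample data are invented purely for illustration, and a real team would pull these numbers from their own tracker rather than hard-coding them.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class WorkItem:
    started: datetime             # when work on the item actually began
    finished: Optional[datetime]  # None means it is still in progress
    defective: bool = False       # did the output "go in the bin" / come back as a failure?

def flow_snapshot(items: list[WorkItem], period_days: float) -> dict:
    """Rough versions of the three measures discussed above: the rate at which
    things come out, the time from start to finish, and whether what comes out is good."""
    done = [i for i in items if i.finished is not None]
    cycle_times = [(i.finished - i.started).total_seconds() / 86400 for i in done]
    return {
        "throughput_per_week": len(done) / (period_days / 7),
        "avg_cycle_time_days": sum(cycle_times) / len(cycle_times) if cycle_times else None,
        "failure_rate": sum(i.defective for i in done) / len(done) if done else None,
        "work_in_progress": sum(1 for i in items if i.finished is None),
    }

# Example with invented data covering a two-week window.
items = [
    WorkItem(datetime(2025, 1, 6), datetime(2025, 1, 9)),
    WorkItem(datetime(2025, 1, 7), datetime(2025, 1, 14), defective=True),
    WorkItem(datetime(2025, 1, 13), None),  # still flowing through the pipes
]
print(flow_snapshot(items, period_days=14))
```

As the conversation goes on to say, numbers like these only tell part of the story; they are most useful trended over time and read alongside qualitative signals.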
It's almost always got a delta, hasn't it? Like it's not just an A to B. There's a delta, a qualifier, a piece in the middle that adds richness to the story beyond just the number. Because, you know, I've had questions about x number of test cases written. I'm like, okay, but should we have done them? What was that measure, written?
A billion. It's great.
That trigger, like, we went through that process that said, should we have done this? Yes. Yeah. Then I want to measure it: did we do it? If we just did it without measuring that, that's not useful.
Yeah, I agree with everything you're saying. I want to kind of take it to a meta level though, where I'm thinking about,
I don't want Facebook involved.
Oh, I know. You know. Oh, oh, sorry.
Engineer Experience
Like, um, rather than thinking about it in terms of products, I'm thinking about it in terms of, um, an engineer experience or a tester experience, right? So there's a bit of a lifecycle here as well. Like, I know that for products we think of a software development lifecycle; we go through various activities to build, test, design, whatever. But if we think about, uh, a tester experience lifecycle, right? There's the recruitment process, then the onboarding, right? Then there's all the day to day stuff that happens that isn't in an SDLC, right? So your ceremonies, um, that you have within your team, the communication, the collaboration, the decision making, the support that you give others, the support that you do on your products even, might not be in an SDLC. Like on call, right? And then there is an end to the lifecycle, which is sort of the offboarding part. And in between you've got the growth part as well, right? The learning that I mentioned earlier. This is what I meant by organizational systems, right? Because there's flow all the way throughout this as well.
And yeah, you can measure it in terms of like throughput and um, rates of change and all this kinda stuff as well. And speed of course is obviously gonna be part of that as well. Like no one wants a lengthy onboarding process, right, or, or interview process even right at the start, if it takes too long. Um, but thinking about other ways to measure this, it's really challenging from an experience perspective.
It, it matters though, right? Like it really matters. And from, from that product perspective even it matters to, to your customers, your clients, and your users, you know?
Even with AI where it is, humans are a key factor of all these systems, and working together, collaboration and teamwork, is still crucial. So,
mm-hmm.
As you say, the onboarding to the offboarding, the human life cycle for want of a better word, yep, is crucial. And I guess measuring that is harder, but I guess we try that with feedback loops again: one-to-ones, um, you know, skip-level meetings, uh, feedback loops, retrospectives. Um, with onboarding you often get asked for feedback as you're going about it.
Um, generally, wherever I've worked, the onboarding documentation is always... poor, I'm being polite. Um, so it's kind of like, the next person that comes to it, review it, edit it, and improve it. Like, everyone iterate on it. And actually it's part of the learning experience, is you help set the culture, which is: you see something broken, say something; something's not as good as it could be, contribute, do it.
Yep. Make it better. And it doesn't solve it, but it generally highlights what's broken so you can see it, and actually it gives you a chance to feed back as a leader or as a peer, but it also gives someone the opportunity to see what they're doing having an impact from day one.
Mm-hmm. And you start, sort of, where you see there's a problem in the flow, for want of a better name for it, contributing to trying to make it more effective, more efficient. And it just starts, as I said, from that early life cycle, and it creates cultures of caring, of feedback being important, of knowing you can make a difference.
And I think that's kind of, um... how do you measure that? I don't know, but probably by speaking to that person at the end and saying, how have you found this experience? What would you do differently? Could we have made this better? And being critical of yourself, I guess.
Impact & Culture
So can I jump in there?
'cause I know that you are an advocate and you're probably gonna go into it, 'cause I am also an advocate, about the impact. You know, I think that the really key thing for measuring individual contribution is their impact. And you can put a monetary value, to a certain extent, on impact.
You can look at impact on other people within the team, you can go impact on the project, you can go absolutely, you know, really big on the impact. And I think that that's the sort of most measurable thing that can be measured by each individual.
Yeah. I was gonna take it a step further as well, and the impact that you make on that sort of engineer or tester experience, right?
You should actually see the metrics change on your product as well, because what I was trying to say earlier was that the quality of all of that stuff, right, raises the performance of the people to produce higher quality on the actual products that they're building and testing. So in reality, could we look at those product metrics, the quality metrics, right, and see if we've improved flow? Then the product metrics should change for the better.
Being devil's advocate, which I hate being, but I always am. Um, you might just see, though, that what happens is that the throughput timing is shortened. So, for example, the people get up to speed faster, not that they've become better. They get to the level, should we say, um, at a faster rate.
Rather than necessarily changing your throughput or changing the quality of your product or the rest of it, you might not necessarily see a reduction in bug rate or something, you might not see code quality go up and so on. But what you might see is a lower error rate from an individual to start with, or something like that.
It might be hard to see. It depends. Like everything in IT, it just depends. But I can, in my head, foresee, if it's the onboarding in particular we're talking about more than anything else, you should see that be about almost getting to maximum efficiency. God, it sounds horrible talking about people like that.
Um, but getting efficient, like getting on board, understanding it, feeling part of the team, getting to being great quickly rather than slowly. Whereas yes, it could have an impact on the general overall quality of a product, the product as a whole, regardless of its quality or anything else. But I foresee more the onboarding as getting someone to the pace at which they could continue to deliver, sort of thing.
And then after that... but if you've trained them right, I see your point, which is you've trained them, you've enabled them, and hopefully then that will continue on. But actually that's what you would hope people were already doing, if you know what I mean. If that was the culture, yeah, you are getting them to embed into the existing culture, ish. So I'm not sure whether you'd see complete quality or complete product changes.
You probably wouldn't necessarily impact the general flow, in my head at least, but it would impact the flow of how fast you can onboard, get people up to speed, get them into the race, that sort of side of things. But it'd be interesting, I'm not sure.
well, where, where my head was going, um, was that if you take onboarding, for example, right?
If you improve the onboarding processes, and I don't just mean like getting through the checklists and reading the documents, but I'm talking about onboarding in terms of relationship building, inclusion, all this kinda stuff, right? It generally makes people happier, right? And happier people do produce higher quality output.
Right? That's where my head was at. Like, if you can improve... not just thinking about improving morale, but like we've spoken in the past about how do you actually make a workplace joyful, right? One way of doing that is improving things like flow and processes and all this kinda stuff, right? And if people are more joyful in the work that they're doing, they protect, they don't pretend anything.
They actually potentially are able to produce better work.
There's more to it than that though, isn't there? Because it's one thing enjoying what you're doing. There's another, which is an angle I see quite often actually, a bit more where people are being forced down workflows and paths. You know, it's not just AI things, but respect, being heard.
Value & Purpose
And, um, we talked a bit about value earlier, but also people knowing the value of their work, channeling their inner Backstreet Boys, so they're told why. You know, they want to know why they're doing something, for whom, what good looks like in that case. Because I think if you understand the rationale behind the work that you're doing, why it's important, where the impact lies, you know where there are dependencies and stuff
that you are gonna unlock, and that gives more value to the work that you are doing as well. 'cause we don't want to feel like we're wasting our time. We don't want to feel like I'm just doing this thing to get through it. You want to feel like you're doing something of value. You know, we mentioned things like continuous improvement and stuff.
We want to feel like we're doing better each day. People generally want to feel like what they're doing has a purpose, right? Like we exist for a reason, not just to get by. And I think a lot of that comes through. It's more than just communication, but it's understanding a bit more of that bigger picture, which is where we come back to systems thinking. It's knowing where you fit within a whole system, why what you are doing is important to that whole bit.
What the impact of you not doing it would be. Like the number of times I've sat there and someone said, we've gotta do this thing, and you're like, what happens if we don't do it? Do you think anything's gonna change? That's a really interesting and powerful question to consider in those instances, because, you know, leaks do happen.
Coming back to our pipe analogy, see, I did that. And, you know, you might not notice for an incredibly long time, and the damage that could be done can form rotting and ground damp and stuff. If you don't pay attention, if you aren't looking at the right things, if you're not investigating, there can be a lot of rot at the core that is not uncovered
if there isn't someone looking at the big picture, if there aren't these sorts of audits and reviews and housekeeping and things across the whole system.
I was thinking Horizon. That was one where rocks got in and caused problems that were kind of ignored and pushed under and so on, and look at the cost of that in the long run.
Genuinely the reason why I refuse to wear the old Cambridge United shirt with Fujitsu across the front.
But for the context of our international, uh, listeners, the Fujitsu Horizon scandal is to do with our Post Office system, uh, costing millions and being covered up for many years.
Improving Flow
So what would we do to improve flow?
We, you know, we're talking about obviously making better experiences for people. We're talking about the holistic side of things. We're not just talking about making a process better for software delivery. We've been talking about actually making processes better for humans, um, experiences better, making them more joyful, um, things like that.
I know there's lots of, you know, we've mentioned the DORA metrics. There's, um, The Phoenix Project and things like that, the Theory of Constraints. There's lots of different things about flow which can apply to people, to processes, as well as the technical processes. But is there anything that kind of springs to our imaginations, things that we like to think about when we're looking at how we improve the flow of people, systems, software, podcasts, you name it?
I mean, feedback we've talked about a little bit there, right? But if you think about mission critical things that exist in life, say for example fire alarm systems. Those are things that are routinely checked. They are deeply looked into, they check end-to-end systems, they make sure that comms work, they make sure that entire processes are there. Because when you are fatigued, when you are tired, when you are maybe burnt out, the things that you have as a routine, you just know exactly where you need to be.
Those sorts of things are things you can do without having to engage your brain. It's why sports teams make you run through certain plays to the nth degree. It's how bands can play when they're exhausted, because they do it over and over again. They practice things routinely. Servicing, and finding out what your most important needs are, what your failure recovery processes are, and actually doing audits, investing time in: is this the right thing?
Will this still work if...? Investing time in those processes and then doing them over and over again. And back to my own point, 'cause apparently that's what I'm doing now: telling people why it's important, showing them, making that visible so that people can see we do this thing, we check this thing, we regularly do it.
We share our reports and things. That's why it's important. It's why it's important your car is MOT'd. It's why it's important you go for health checks with your body, 'cause if you leave something for too long, it'll be too late.
The thing I was gonna say about yours, I think the fire alarm is quite a good example, but, um, the problem with the fire alarm is that people revert to type.
You know, so many times when there is a fire alarm at work, people still don't use the fire exits. They don't go for the fastest route. They go down the route that they are used to going down.
They don't know why it's important. Exactly. They haven't had that reinforced, and that's because humans don't do as they're told, as a rule, but neither does software.
Let's be honest. That's why Happy Path doesn't work. But I think reinforcing that value, a bit of education goes a really long way. This is why this thing is important.
Sort of going back to the whole flow thing, sometimes you can get blindsided by knowing what the sort of escape routes or the side paths are.
And you know, again, with software testing, we look for the happy route, we look for the simple route. Sometimes we don't actually look for those edge cases that will mean that there are problems going down, uh, the path.
Testing Skills
I actually think it could be a bit simpler. Like, when I'm thinking about testers and testing skills, my very simplistic view on testing is it just being assessing quality.
Then if we look at flow, we look at poor flow being bugs, like things that bug us, right? Then we can apply the exact testing skills that we apply to products to processes, to organizational systems. So we already have the skills to be able to assess and diagnose problems, right? So then it's just about carrying that forward to trigger the improvements, right?
Yeah, makes a lot of sense. I'm gonna say the one thing that testers have learned over time, I think, is that we're humans and the users are humans. The people in all these systems and processes are humans. So what would a random human do? And that's a fantastic test for anything.
And, uh, you know, HR systems: putting a human into them, rather than a legal boffin machine thing, to read and understand what's meant to be done, I think is a fantastic thing, because they go, well, what do you mean by that? Why'd you do that? That's ambiguous, that's this. And a tester's mind of questioning things that are unclear or could be construed in many ways is a fantastic way of, um, looking at systems and looking at whether they go right or wrong.
Because the one thing that all systems have in common, even the automated mechanical ones, is humans are involved somewhere. Humans are often fixing them, even if they're not building them, um, or creating them. But humans have generally been involved. Even the AI has been built by humans. There is a human fallibility in everything that we do, and just because it...
Well, if it can go wrong, it will go wrong. Those sorts of sentiments. But testers bring in that kind of attitude, for want of a better word, and are very good at analyzing, critically analyzing, and looking at, um, flow, looking at system flow, looking at process flow and going, well, what if, what if, what if?
And raising questions that need answers. But, and I'm gonna put a downside to it as well, which is, if you ever tried writing a process or system to account for every potential downfall, it's an impossibility. Uh, language is so ambiguous. It's really hard. It's infinite, isn't it? It is. It's a hard game.
So intent is a very valuable thing. And we as testers have to learn where to draw the line, I guess, sometimes, and think about, um, risk, I guess is probably the right way. Probability, risk.
Risk, rather than... risk is a good one. Probability is a good one. Um, impact, like you said, is the other one.
Like, 'cause things can go wrong, but
if someone's gonna, you know, misconstrue something that could let them release something that could take out the entire system, you probably need some more safety nets around it, um, more improvements of that system. But something like whether, um, a letter gets sent out with a full stop on it or missing is less important, if you see what I mean.
Unless that change of punctuation is gonna make you pay someone 10 million pounds rather than 10 pounds or something. But you know, you've gotta be careful sometimes.
It's just, sometimes, you know, don't be reckless, but yeah,
you've got to go where it makes sense.
But a lot of that comes with being appropriately informed about constraints, dependencies, value, upstream and so on.
Testers' inquisitiveness does lead us to being able to view things more holistically, more systems thinking possibly than other people, just because we're nosy and curious and interested.
I'm gonna... can I be devil's advocate again?
Twice in one episode?
I know. Just part of me gets like, the narrative that we are saying just worries me slightly.
Because we're not the only ones that are inquisitive out there. We're not the only people that actually have a lot of this mindset, and I firmly believe that a lot of developers have this mindset. A lot of people have these sort of things. We're, yeah, we value it as a profession, probably more, but actually it's quite, it's quite out there, but.
We go in with those skills, I think, is what Dan was kinda talking about. We refine those skills quite heavily and so on, but we're not the only ones. I would say some, you know, business analysts I work with are fantastic at that. Some of the system architects I've worked with are great at thinking about full system pictures, but they need people of different types, backgrounds, diversities to help complete the picture.
We all need assistance, I guess is the point, but I don't want to. I don't wanna sing our hymns and praises too much, I guess is my point. Because we are one of a team, we're one of many, and a lot of us have similar-ish skills, are looking at systems and problems. But yeah, we are definitely more big picture than we are minute.
Even the four of us appraise systems and apply systems differently, and we all identify as testers. Yeah, you put the four of us in the exact same job, we would do a very different job to each other.
Exactly. But that's the human factor again, isn't it?
Yep. I, I agree with everything that you've said there, Russell, like everybody applies the same sort of lateral critical thinking skills.
Right. I think the difference is coming from a perspective, though, where, like, engineers I've worked with, like some of the best engineers as well, it's always coming from that builder's mindset. Like, brilliant lateral thinking skills. There's a hundred different ways that they could build this.
And they're aware of all these different ways, and critical thinking skills from a construction perspective, right? Because they're in builder's mode, whereas testers often come from that risk, pessimistic, destructive view. So finding problems, uh, almost an insurer's point of view.
How much does it cost me if it goes wrong?
Yeah. Yeah. But the other skill that I was gonna mention is communication, right? It's like testers have had to be resilient in a way where they're forced to have better communication skills, even to log bugs, right? So having those communication skills, I'm thinking how that might help. Like, it's not just about digging into processes, it's about doing some investigation and speaking to people and finding out how they feel about processes,
systems, and all that sort of stuff as well, their experiences. So that relationship-building side of being a tester, often if you're one tester in a team of engineers.
It's a lot of things though, isn't it? Because I think, and I'm conscious that David is looking at his watch, but there is a piece here that I wanted to just quickly touch on.
Building Trust
One of the key things we want to strive for as testers is being a trusted advisor. Like, when we say things, people trust that there is value in there, that we've appraised systems, that we've investigated, that we've looked at things, and that when we're speaking about something, people will listen to you.
That isn't something where you just flick a switch and you've got it. That's earned by doing: systems thinking, doing good work, appraising systems well, diving deeply, showing that you understand technical systems and architecture and all of that, so that when you speak to that, people respect you, they listen to you, and you are acting as a trusted advisor in that space.
And I think that that is really important, because I think a lot of people think that they're good communicators, and, you know, a lot of people are good communicators, but sometimes there isn't so much gravitas. There's not so much grounding. There's not so much data to inform the rationale behind
what you're talking about. Sometimes it can be like a finger in the air and go, oh, you know, it's just a bit of this, that, and the other, but you're not backing it up. And I think we're also in an AI-informed age where people will put stuff into something quite quickly, probably not review it, or give it a light pass, ask for a quick, you know, summary done for us.
And you lose credibility when you aren't known as someone that people can trust, that is detailed. And you know, the best testers sometimes are the ones that encourage the developers to do a better job, because they know that you'll find something, and so they're more likely to listen to you. Like, the best tests are the ones that, when they fail, you're like, oh crap, something's gone wrong, because you trust that the information that test has given you about a failure
comes with credibility. It's actually giving me information of, oh crap, something's gone wrong, rather than, no, it's just a rubbish framework, it's a rubbish test, it doesn't really matter, it's just noise.
Closing
Thank you, Chris. And I'm not just looking at my watch, I'm thinking of the wonderful listeners' watches as well.
They can pause at any point to pick up later, you know?
Yeah, exactly. This has been a fantastic, fantastic conversation that we've had. So thank you so much to Chris, Dan, and Russell. Uh, it's been an enjoyable time, and I hope everyone else listening has been inspired, uh, by the conversation. If you do want to reach out to us, please contact us, uh, via email.
Contact us at testingpeers.com, or you can contact us through LinkedIn if you want to. Um, also, please don't forget, uh, PeersCon. So please book your tickets. From the time that this episode comes out, in eight weeks' time it will all be over, uh, on the 12th of March. We will be in Nottingham for PeersCon three, and please do buy your tickets.
All details are available on testingpeerscon.com. Tickets are still available. Uh, the program will either be released or very close to being released, so please do check us out and come and see us in the flesh, as it were. That leaves me to say thank you very much for listening, and, uh, Happy New Year, and we'll catch you again soon.
Alright, for now, it's goodbye from the Testing Peers. Goodbye.