Test Case Scenario

Business Resilience Test Strategies for 2025

Sauce Labs


Is your testing strategy ready for 2025?

In this episode of Test Case Scenario, Jason Baum is joined by Maaret Pyhäjärvi, Principal Test Consultant at CGI, along with Diego Molina and Titus Fortner from Sauce Labs, to discuss the evolving landscape of quality assurance and business resilience. The panel delivers insights into the biggest challenges and opportunities for testing teams in 2025, from AI-assisted automation to the growing importance of accessibility testing.

You’ll also hear insights on balancing efficiency with velocity, shifting team roles, and how organizations can adapt their QA strategies to keep pace with rapid innovation.

Join us as we discuss:
(00:00) Introduction
(02:52) The role of AI in modern testing

(03:59) Accessibility testing and the EAA deadline

(05:29) Prioritizing efficiency over speed in 2025

(07:28) Who owns quality? The debate continues

(08:36) AI as a tool for exploratory testing

(10:41) Integrating AI into test automation

(12:04) How testers are adapting to AI-generated code

(17:53) Managing risk and compliance across industries

(21:39) Continuous delivery vs. quality trade-offs

(25:24) Measuring success: what really matters

(30:31) AI’s impact on collaboration and team dynamics

(35:46) Balancing risk tolerance and release velocity

(38:48) The industry shift toward AI-assisted test strategy

(44:11) Testing best practices across global markets

We’d love to hear from you! Share your thoughts in the comments below or at community-hub@saucelabs.com.

SUBSCRIBE and visit us at https://saucelabs.com/community to dig into the power of testing in software development.

Sign up for a free account and start running tests today at https://saucelabs.com/. 

▶ Sauce YouTube channel:  / saucelabs  

💡 LinkedIn:  / sauce-labs  

🐦 X: / saucelabs

Jason Baum [00:00:00]:

This is Test Case Scenario with me, your host, Jason Baum. This podcast is the definitive hub for knowledge and stories in the software testing and development communities. If you're new to the channel, hit the subscribe button and let's dive straight into the episode. Hey everybody. Today the topic is business resilience in 2025, and I am Jason Baum. I'm the Senior Director of Developer Relations for Sauce Labs. I've been here for a few years now. And I am a lifelong community person, I guess, is how I'd describe myself, for more years than I care to admit.


Jason Baum [00:00:50]:

And so I'm very excited to (a) be at Sauce Labs, (b) be with you today, and (c) be able to host this panel, because I think this is something that's top of mind as we head into this new year. We're now several weeks in, so hopefully we've already thought about some of these things, but maybe today you'll learn some things you weren't thinking about, which is always good. We'll just go round robin here and introduce everyone. So I'll kick it off: Maaret, would you like to introduce yourself first?


Maaret Pyhäjärvi [00:01:18]:

Hello, everyone. My name is Maaret Pyhäjärvi. Hard name to pronounce, especially for Americans, so I'm not going to challenge my colleagues here too much. Since last June, I've worked for CGI. I work in consulting on testing services, trying to build up more of our competencies and scale them up. All of that usual stuff that you do when you work with a lot of different customers.


Maaret Pyhäjärvi [00:01:45]:

I'm so happy to be here today with great colleagues. I know most of these folks from the Selenium community that we volunteer in together, and I just absolutely love being here. So thanks for having me.


Jason Baum [00:01:58]:

Thanks so much, Maaret. Titus, you're next on the slide.


Titus Fortner [00:02:02]:

Hi, I'm Titus Fortner. I describe myself as an open source developer. I work at Sauce Labs. I've been here for eight years and change now and essentially I get to help people, whether it's customers, whether it's community, whether it's training, whether it's just one-on-one.


Jason Baum [00:02:19]:

All right, thank you. And Diego.


Diego Molina [00:02:21]:

Hey everyone. Thank you for being here. I'm Diego. I'm also an open source developer. My main motivation is to help people be successful in their automation efforts. That has driven me to be one of the main contributors to the Selenium project, and it's also how I met people like Titus (actually before joining Sauce Labs), Maaret while working on the project, and Jason while at Sauce Labs. So happy to be here, and looking forward to sharing some interesting things with you all.


Jason Baum [00:02:52]:

Absolutely. So I kind of teed off the topic earlier with a new year, new opportunities. What's coming for you all in 2025? What should we be thinking about? What are some of the top things on your mind as we head into the new year? I'll throw that question to Maaret first and then let's open up a dialogue.


Maaret Pyhäjärvi [00:03:14]:

I've been framing all of my efforts around this idea that the future is already here; it's just not very equally divided. And there's a long, long list of things that I've done in my previous places of work over the last 30 years that I'm not doing in all of the places I work with right now. So scaling, equalizing, and democratizing some of that testing knowledge, that's definitely the opportunity coming my way, and there's a long list of details we can probably go into with that.


Diego Molina [00:03:46]:

Yeah.


Jason Baum [00:03:47]:

What about you, Titus?


Titus Fortner [00:03:49]:

Yeah. That's the William Gibson quote: the future is already here, it's just not evenly distributed. We've seen a lot of different trends, different directions.


Jason Baum [00:03:57]:

Diego, what about you?


Diego Molina [00:03:59]:

I think there are many opportunities this year, more around getting new skills, because I think in the past we were concerned a lot by, you know, what AI was going to give us, and there was just a lot of marketing around that. In the last six to 12 months, we have seen more concrete things. So I believe it's giving us the opportunity to learn things that matter more, in the sense of getting more knowledge to understand what AI is giving us, to be able to see whether what I'm getting from whatever AI tool I'm using is actually useful or not, and to not just be in the position of accepting whatever we're getting from any new tool we use. So from my point of view, the opportunity is to learn new skills, or solidify the skills we have, to better understand what is going on around us.


Jason Baum [00:04:54]:

One thing I've been thinking about for 2025 is accessibility testing, especially in the European Union with the EAA, the European Accessibility Act, which comes into effect in June. That's another big one that I think companies should start on if you haven't already, which, gosh, I hope you have, because time's nearly up. And we've done webinars on that in the past. So now we have a poll for you. I wasn't expecting 80% to be on option one: improve efficiency of our test program.
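Since accessibility testing comes up here as a concrete 2025 priority, a minimal sketch may help readers picture what one automated accessibility rule looks like in practice. The class name and sample markup below are illustrative inventions, not anything from the episode; the toy checker flags images missing alt text, one of the most common WCAG failures. Real EAA readiness requires full audit tooling (for example axe-core), not a single rule like this.

```python
# Toy accessibility rule: find <img> tags with no alt attribute.
# Illustrative only; a real audit covers many WCAG success criteria.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag that lacks an alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.violations.append(attr_map.get("src", "<unknown>"))

page = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="chart.png">
</body></html>
"""

checker = MissingAltChecker()
checker.feed(page)
print(checker.violations)  # → ['chart.png']
```

A check like this is cheap enough to run in CI on every pull request, which is one way teams turn an accessibility deadline into an everyday habit rather than a one-off audit.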


Jason Baum [00:05:29]:

What do you guys think of that?


Maaret Pyhäjärvi [00:05:31]:

That was my vote, definitely.


Jason Baum [00:05:33]:

Was it?


Maaret Pyhäjärvi [00:05:34]:

Yeah, that was mine. All the practices and choices that we make, any one of us can make them, but they have a huge impact at the overall scale on how we do things. That's the theme of my year, at least: figuring out better choices.


Jason Baum [00:05:52]:

What about you, Titus, or Diego? Were you surprised by that?


Titus Fortner [00:05:56]:

I think it's interesting that we've spent the past decade or so focused on growth and faster velocity, and I think what we're seeing is people just trying to catch up, in a lot of ways, with the promises we've already made or what's out there. So the idea is: let's just take what we're doing and make it more efficient. We're not trying to increase velocity, we're just trying to maintain what we've been trying to do already, and the best way to do that is to improve efficiency and focus.


Diego Molina [00:06:30]:

To me, this was not a surprise, because usually January is a time when you reflect on what you want to do and what you have done. The way I see it is that every couple of years we see a hype around something. Many years ago, visual testing was the new thing, then we walked through whatever blockchain was, and now it's AI. So people are always trying to catch up on something, and then around this time we reevaluate: okay, with the things I have, how can I be more efficient? Is there something I can add to the mix to make things more efficient, or should I remove things? So I think this is a regular thing that happens every year, and the topic just changes because there are new tools or new ingredients we're putting in now.


Diego Molina [00:07:17]:

So I was not surprised, but I'm happy to see that people are still in the mindset of: okay, we need to not rush, and then figure out how to be more efficient to some extent.


Jason Baum [00:07:28]:

I guess I was a little surprised that managing risk and compliance wasn't a little higher on that list. I kind of want to build on what you were saying, Diego, because I really like that perspective: it's a new year, we can think about making ourselves more efficient. But how do we make sure that's not just a New Year's resolution? I don't know if you all make New Year's resolutions, but is making your test program more efficient just a resolution, so that we end up in service to acceleration anyway? I guess my question is, how do we not fall into that trap, and how do we prioritize this in our test strategy for 2025 so we don't fall into it?


Titus Fortner [00:08:12]:

I think the best engineers and the best testers that I know are the ones constantly looking to improve efficiency, make things better, and figure out how to get more done with less. I think the fact that we're putting that as our priority just indicates how much everyone's struggling with the complexities of testing and test automation at the pace we're expected to deliver.


Maaret Pyhäjärvi [00:08:36]:

New Year's resolutions and big plans so often turn out to be just plans, and plans don't do any of the change yet. So picking something that, when you're reflecting, you recognize as relevant and worth doing, getting it through, and working with other people on sharing some of that stuff is probably a better way of doing it. But I've been trying to think a lot about this strategy idea in the sense that it's the ideas that drive us, and I've been trying to figure out what ideas currently drive me. They usually come from disagreements with other people. So maybe we can also pitch into that at some point.


Diego Molina [00:09:19]:

Yeah. What I tend to think is that a new habit, or retaking an old habit, usually works better when you do it with more than just yourself, maybe with someone else. And in this context, if you're doing this as a team, if you're reevaluating what you want to do as a strategy and you do that as a team, it works better than you going off on your own and then deciding to make changes for the whole team or the whole department. So as we start unveiling a bit more of our thoughts about test strategies in 2025 in general, my thought is that if this is done as a team effort, it usually pays off in the long run. When you do things together, you tend to go farther.


Diego Molina [00:10:05]:

Having said that, I think in general the reevaluation works much better when you do it as a team.


Jason Baum [00:10:11]:

Definitely. If there's one thing I've learned talking to businesses about test strategy, it's that everyone's strategy feels different. Everyone has a different approach. With some of the things you touched on at the beginning of our conversation, things you're looking forward to or new things you've identified for 2025, how would you start planning your strategy for those?


Maaret Pyhäjärvi [00:10:41]:

At least for me, one of the items I started this year with was getting more of the different projects we're doing to use AI for testing purposes. So, creating a slide deck that we go through, coverage-wise, across many different organizations, so that we've had the same conversations with all of the customers we're working with. I thought it was kind of funny that I ended up framing it as coverage; I'm so much of a tester that everything is coverage to me in many ways. So AI, and the conversations about whether we can use it, that's definitely been my go-to. And the other one, well, you mentioned the accessibility stuff. It comes not only from the EU legislation; local legislation is also being reinterpreted, at least here in Finland. So again, that's local stuff from my country, but everyone probably needs to take a look at the local things going on in their own market, and when working with various markets like


Maaret Pyhäjärvi [00:11:43]:

global companies tend to do, there are a lot of hints we can share as a community. So those two are kind of my go-to things now.


Jason Baum [00:11:51]:

Yeah. And I mentioned the European Union, but GDPR, for example, also affects anyone doing business with the EU, so it has impact outside the EU too. Diego, what about you?


Diego Molina [00:12:04]:

Yeah, in my personal case, what I see is this: in previous years I've been more on the skeptical side of AI, because the focus in how we've been using AI in test automation has been to replace someone, right? To replace the code that something or someone is writing. And I think that was wrong, because in general I tend to think about what happens in the long term. Who's going to maintain that code? Is that code written with good practices? Can people actually work with it and understand it? I was thinking about what happens in the after scenario. But now I'm moving more toward believing in the tools we have these days, not to generate code, but more to assist us, maybe writing code, but also to better understand the test cases we're generating, to get more ideas for exploratory testing, for how we could approach this or that problem, and for what strategies we could create in a given scenario. So I think those types of uses are more suitable, from my perspective, when we want to use AI in testing.


Diego Molina [00:13:23]:

So I'm starting to switch from the skeptical side to the believer side, because I'm seeing tools that help testers move forward, rather than putting them in this scenario where we're going to replace testers with tools. I think this has been the only industry that has been trying to place that label, that we're going to replace them, instead of saying we're going to help them be more effective and deliver more value.


Maaret Pyhäjärvi [00:13:51]:

On framing testers, especially exploratory testing: over the years, I've come to think of applications as our external imagination. They make us more creative; when we look at the application, we are more creative. Then I realized that I am my developers' external imagination. They look at me and they do better testing, because they imagine what I would tell them to do. And it's been surprising how well that works, as long as I keep silent and just let them look at me while the external imagination works. And AI, for me at least in testing, is exactly the same.


Maaret Pyhäjärvi [00:14:27]:

It's an external imagination. When you hate the document it generates enough, you're going to create a better document. When your energy isn't on the mechanics of writing, you can maybe focus on the content a little more. You can produce both the summaries and the extensions; you can blow it out into long form, and you can question things. You can use your energy in a different place. I've already been doing this for the last year with my personal notes.


Maaret Pyhäjärvi [00:14:57]:

I use a RAG-style approach: I enter some of my notes and ask, what am I not thinking about today? So again, this framing of external imagination, of AI helping us rather than trying to automate for us, has been very helpful, at least for me.
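The retrieval half of the RAG-style notes workflow Maaret describes can be sketched in a few lines. This is a toy illustration only: it scores notes by plain word overlap with a prompt, where a real setup would use embeddings plus an actual language model to answer "what am I not thinking about today?" grounded in the retrieved notes. The sample notes and function names are my own inventions.

```python
# Toy retrieval step of a RAG-style personal-notes assistant:
# rank stored notes by word overlap with the question.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def retrieve(notes, query, k=2):
    """Return up to k notes sharing the most words with the query."""
    q = Counter(tokenize(query))
    scored = [(sum((Counter(tokenize(n)) & q).values()), n) for n in notes]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [n for score, n in scored[:k] if score > 0]

notes = [
    "Accessibility testing deadline is June, plan WCAG audits",
    "Exploratory testing charters for the payments team",
    "Renew conference talk proposal",
]
# Most relevant notes first; these would be fed to an LLM as context.
print(retrieve(notes, "what accessibility testing work am I forgetting?"))
```

The point of the pattern is the division of labor: retrieval keeps the model grounded in your own notes, while the model supplies the "external imagination" that surfaces what you might be missing.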


Titus Fortner [00:15:14]:

It's a great exploratory tool. Having conversations with it like a chatbot, conversations about: am I testing the right things? Am I missing something? Is this code well structured? Anyone using AI will look at what it generates by itself and realize that it's not sufficient. So we really need to be focused on how we're working with it. In 2025, it's not a matter of whether you're using AI; it's a matter of how you're using it and figuring out how to integrate it into your processes. It's not a magic wand. It's not going to fix everything. And I think that's what we saw in the DORA report last year that people are looking at: oh, this is going to change everything.


Titus Fortner [00:15:57]:

And it's like, well, yes, but you have to figure that out. It's the process, it's the pieces that go into it. And I think what Maaret was saying about an external map of how you're thinking about something really matters, and allowing an AI to help you think through things and be more creative is going to be very powerful.


Jason Baum [00:16:16]:

Yeah. There were two reports that came out around the same time, the DORA report, and then Harness did the State of Software Delivery report. I'll read out just a few stats from that one, and it's interesting because it kind of paints the picture you're all describing. Over 95% of developers believe AI tools can help reduce burnout. But 59% of developers report that AI tools introduce deployment errors at least half the time. 67% of developers are spending more time debugging AI-generated code. 68% of developers note an increase in time spent resolving AI-related security vulnerabilities. And 92% of respondents acknowledge that AI tools are expanding the scope of bad code that needs debugging. So there you go, right there in the stats. Diego, what do you think of that?


Diego Molina [00:17:10]:

That makes a lot of sense. I think there was a study, I forget which university it was done at, where a teacher split the classroom in three. The first group had to read a book and implement a given thing in C. The second group had the opportunity to Google things, Stack Overflow and so on. And the third group had the opportunity to use an AI tool to generate the code. When they were asked to explain the code, the first group was able to debug it, even fix things, and explain properly what was happening. The second group, to a major extent, was also able to do the same. The third group had no clue how to fix a bug in that code.


Diego Molina [00:17:53]:

So that's the challenge we're getting with code generated by AI. And that's why I think, going back to my first statement, this year is a great opportunity to upskill on the technical side, so we can better understand the code AI is giving us. There is much more to say about that, but in general, we need to be able to understand whether what we're getting as output is actually valuable or not, and ideally whether it's maintainable in the long term.


Maaret Pyhäjärvi [00:18:23]:

But doesn't that kind of sound like exploratory testing, when you're comparing your developer intent to whatever got generated and you need to figure out if those two match? So in a way, these tools are taking all of us into the exploratory testing side in many ways.


Jason Baum [00:18:40]:

That's really interesting. Yeah. Why don't we go to the next question, because I feel like it's complementary to this one: how will these changes influence team roles and responsibilities in 2025?


Titus Fortner [00:18:53]:

One of the trends we're seeing is that developers are increasingly expected to be responsible for the quality of the code they're writing, without separate departments or separate individuals or separate roles, and that's making a big impact on overall quality. I think a lot of people are turning to AI to try to bridge the gap of: I'm a developer, I don't think about how to break things, let's just use an AI to figure all of that out. And I don't know, I feel like we're running an industry experiment on whether that's going to work and how badly it's going to bite us. That's going to be one of the things I think we'll see this year.


Maaret Pyhäjärvi [00:19:34]:

There's been this big shift toward everyone owning quality: teams owning quality, testers being called developers inside the team so that they're one of the developers who own quality. That's definitely still going to continue into this year. But I'm wondering whether the AI tools we were just talking about will actually change the team roles in some way, or whether it's just the content of the work for the same roles that will somehow shift.


Diego Molina [00:20:09]:

Yeah. I'm also wondering how that will change. For example, in the Selenium team, I think Titus was the one who added that plugin: we have an AI tool that reviews pull requests, and to some extent the tool actually gives good suggestions and points out things that could be improved. But you also need to pay attention, because the person who sent the pull request probably has some code that was partly generated by an AI tool. So now you need to spend more time reviewing things than actually building things. I wonder how people will react to those changes in their roles: they're not spending more time automating or creating things now; they need to spend good time reviewing things to make sure the quality is still there.


Titus Fortner [00:21:00]:

Yeah. Now that it's a lot easier to generate the content, evaluating the content becomes the limiting factor. I think we all look at those PR analyses that get generated and realize we can't take all of them at face value: this actually is wrong, the way they did this is not correct. You have to know enough about what good code looks like, about maintainability, about the conventions the project uses, to figure out the right way to add things. So I do think there's going to be a lot more focus on making sure that what's been generated is good, rather than just trying to generate it yourself.


Jason Baum [00:21:39]:

The thing I think about quite a bit now is a question you touched on, Maaret: ownership of quality. I've heard a different answer pretty much every time I ask who owns quality in your organization, and many times the answer I hear, you're all going to know this one, is that it's everyone's responsibility. I'd love to get your take on that answer. And in 2025, given the problems and challenges introduced by some of the things we've already talked about on this call, do you see that ownership changing?


Maaret Pyhäjärvi [00:22:25]:

It's not an easy answer, let's put it that way, or an easy question in any way. I'm very much a believer in the idea that everyone can own quality and we don't need to appoint a single owner. At the same time, I feel that testing is too important and too relevant in all things to be left just to testers, but it's also too relevant to be left without testers. So this quality coaching, someone holding space for testing, holding space for quality conversations, figuring out how we distribute different kinds of quality-related actions across the lifecycle of building and maintaining software, that's what's happening. So is that ownership? I don't think that's the owner, because ownership ultimately sits with whoever holds the money or whoever makes the changes.


Maaret Pyhäjärvi [00:23:15]:

Those are the real eventual owners. And I would hope that we can really learn this year to do this more collaboratively, so that we don't have to appoint a single point of failure for these kinds of things.


Titus Fortner [00:23:29]:

Yeah, I think it's really easy to say everyone owns quality, but what really matters is the incentive structure each individual team has. Does the manager prioritize it? Are devs going to be rewarded for velocity or for quality? Those decisions happen on a team-by-team basis. So if you don't have some external person responsible for thinking about overall quality, some teams are going to slip through the cracks, because they're not incentivizing the right things. And then you get into Conway's law: the different roles and groups and silos that are all integrated in reality but not integrated in how things are developed. Who's thinking about the big-picture things? Who's thinking about the disaster scenarios that could tank your company's stock, because someone wasn't thinking through: if this goes wrong and that goes wrong and this goes wrong, what's going to happen? So even if everyone is responsible for quality, I still think you need someone thinking it through from a strategic, company-wide, product-wide perspective.


Maaret Pyhäjärvi [00:24:40]:

Yeah, you definitely need to split the responsibilities across various teams, because nowadays there's very rarely a product built by a single team, so no single team gets to own it anyway. Maybe we should pay more attention to how we distribute the responsibilities to the various teams: when do we create separate teams with dedicated quality-related responsibilities, and when do we actually avoid having those so-called testing teams separately? That's been the big theme we've been trying to figure out for the last, I'd say, 10 years, and I don't think we're done with it. The whole center-of-excellence question, yes or no, will we go there, all of that conversation is still a hot potato in many ways.


Jason Baum [00:25:24]:

Yeah. At the beginning of the call we did a poll, and if I count this panel, close to 90% of people on this webinar said they want to improve efficiency in 2025 over velocity. My question, then, is: who made that decision? Internally, who is the one that gets tasked with it, so that it's now part of their metric of success?


Diego Molina [00:25:51]:

Yeah.


Titus Fortner [00:25:52]:

Are you incentivized to do that?


Jason Baum [00:25:53]:

Right.


Titus Fortner [00:25:54]:

But it's also the thing that a lot of testers can control, right? We can't control overall velocity, but we can control how efficient our piece is. It's just a matter of whether that's incentivized, and to what extent, across teams.


Maaret Pyhäjärvi [00:26:08]:

I still wanted to throw this out: can you really separate velocity and quality from one another? Can you really go fast if it's like walking on a minefield? At least I've never managed to go very fast when I have to avoid all of the traps of, oh, I reported this bug already, or somebody did, and now I'm not supposed to mention it. I actually need all kinds of positive things in my life when I'm working on projects like that, things that help me avoid all of the known problems, for example. So in many ways, I think we need to find ways of integrating and putting these two thoughts together.


Diego Molina [00:26:45]:

I agree with the idea of owning quality; everyone should do that. But I was also thinking of some anecdotes friends have told me, and a couple I lived through myself. Even though we praise that idea, what we often see in different scenarios and companies is that when someone sees that quality is being compromised and says we cannot do a release, we cannot push, we cannot do whatever has to be done to move forward because quality is compromised, people react as if they were offended: you're not letting my code move forward, you're not letting me move forward. I've seen that attitude toward the person making the decision. And I wish that, with all these new tools we're getting, we'd have more visibility and clarity on why it's important to pause, fix what has to be fixed, and then move forward, and a bit more respect for the person making the call, or for a collective call.


Diego Molina [00:27:51]:

Because I've seen situations where there has to be a bug fix or a hotfix, and for some reason it cannot happen, and people really get mad that they cannot move forward, which I think is understandable. But there is a lack of solidarity in these situations. So my wish is that these roles change in the sense that we're all reviewing, we're all paying attention to quality, and that this builds a better sense that quality actually belongs to everyone, instead of saying it's everyone's responsibility but not actually living it. That's what I was thinking while you were all talking about that.


Titus Fortner [00:28:39]:

I want to respond to something Maaret was saying about whether we can decouple quality from velocity, because I think that's one of the core issues we're facing, and a lot of people do decouple them. We've seen a CEO recently step down because they released a mobile product on a timeline before the quality was ensured. So some of this is decision-making at the top: what are we going to focus on, quality or features or velocity or timetables? I think that's part of what the industry is struggling with: what's enough, and what are we prioritizing? And it depends on the industry too, right? Twitter, or sorry, X, can get away with a lot more bugs because the users will forgive them on a temporary basis. Your bank cannot get away with that. So there's also going to be a different appetite for how much velocity you want to prioritize over quality. I guess there's a balance there, instead of it necessarily being directly proportional.


Maaret Pyhäjärvi [00:29:39]:

But the way I think of it is more that you would be faster at delivering on the right schedule if you always had those two walk hand in hand. Over the years I've been late on projects; I've been on projects where we had to delay the schedule. I've also been on projects where we've actually been on schedule. And the determining factor in being on schedule, and at least being able to put it into production and have it work there without a catastrophic failure, tends to be that we've incrementally built quality in and we have more control over it. And we've been building that idea across the overall industry: incremental development, early involvement of our customers because testers are bad proxies for them, all of these ideas we've been building for years.


Maaret Pyhäjärvi [00:30:31]:

So in a way, I feel like we need to rephrase that conversation happening on velocity over quality, because what we're looking at now is the outcome. It appears as if they were looking at only the velocity. The schedule is all that they cared about. But did they get the thing that they expected on the schedule? I don't think they did.


Titus Fortner [00:30:51]:

But I don't know if the industry is necessarily pushing in that direction. It's interesting, because when we were talking about this panel earlier, we had a conversation about the differences between how US companies focus on things and how Europe and other countries think about testing and velocity. And the American companies are very much move fast, break things, especially in tech, and I'm not sure we're necessarily seeing more appetite to slow down and get quality. The push is still velocity. It's interesting: when I give talks in the US, I'll talk about continuous delivery and everyone nods along. In Europe, for a long time, and I think it's changing a little bit, I'll talk about continuous delivery and everyone's like, that's not what our focus is. So I'm wondering to a certain extent how much these differences are about where different regions are in their thinking.


Jason Baum [00:31:41]:

I think it's also dependent on what you said earlier. It's very vertical driven as well.


Maaret Pyhäjärvi [00:31:46]:

There's a lot of capabilities that you need to include in your organization before you can do continuous delivery, especially with quality. I think there's a lot of different interpretations out there on how we build this.


Jason Baum [00:31:58]:

Awesome. Let's get into the stack. So what tools or frameworks should your team implement to be successful in 2025?


Titus Fortner [00:32:07]:

Selenium.


Jason Baum [00:32:09]:

Care to expand?


Titus Fortner [00:32:13]:

Seeing as all the panelists are on the Selenium team, I figured I'd throw that out there as a joke. But obviously it really depends on who makes up your team and what things we're prioritizing. A lot of this is also going to be what AI tools come out of the mix, how much we're going to rely on those things, and who the makeup is. We're seeing teams that are doing centers of excellence, focusing on Selenium and Appium and building complex things with those low-level tools. And I think we see developers that don't have time for that testing thing just trying to get by with the easiest, fastest thing they can throw out there and then forget about. And often that ends up being tools like Playwright or Cypress.


Maaret Pyhäjärvi [00:33:02]:

I'm kind of always framing this back to: what's the programming language that we should be working in? I care a lot less about the tool that we end up using, and I care a lot more about the programming language. We should not use specialized testing programming languages; we should use general-purpose programming languages. And if we use specialized languages, they're probably going to be more visual, but I'm not sure I would like to work in those visual languages as much as the trends seem to indicate that someone wants to. What are then the lower-level tools underneath that? Is it going to be Selenium or Playwright? Most of my teams, even though I do volunteer with the Selenium project, are using Playwright, and a lot of that comes from the fact that there are exploratory and manual approaches to compensate for the things that we usually feel like we're missing on the bugs, because someone is going to spend time using that application as well. So we've been prioritizing it so that we don't necessarily care about the real browsers as such. But it depends entirely on the team. It depends on what layer of this whole stack we're on.


Maaret Pyhäjärvi [00:34:11]:

The lower we are, the more likely we're going to need all of the real browsers; and the higher up we are on the stack, building just the simple applications, the less we care. So for me it's more about understanding what you have as choices, then following the new players coming into the field and mapping that all somehow together.


Titus Fortner [00:34:33]:

Do you think that language is going to be less relevant with more AI usage? Because from a Selenium development perspective, I'm writing code in Java and .NET and Python and Ruby and, well, JavaScript to a certain extent. And I find I can write the code in Java or Ruby and then ask, what is the right syntax for Python? You can copy-paste and see that it works pretty easily. It used to be all of the Stack Overflow workflow: figure out the general question, how do I ask it on Stack Overflow, get a general answer, then apply it to my specific case. If it's just a matter of syntax between Ruby and Python, you can get away with just using AI to translate that, in a way that maybe makes the language matter less.


Maaret Pyhäjärvi [00:35:20]:

I don't think it's just syntax. I think it's also the idiomatic use of languages, what good code looks like, what the principles behind that language are. And also with AI, it's definitely re-implementing things that, you know, you could get as a plugin, for example, and then you get people re-implementing those into locally built testing frameworks, because AI said this is how you could build it. Yes, you can build it that way, but you could also use somebody else's already-built tool that you can just integrate into your ecosystem. So for me, the way I think of language is that what I'm emphasizing is the idea of collaboration. Developers don't touch Robot Framework if they're working in Java, Python, C#, whatever. They are not happy to touch Robot Framework. That's a Finnish thing to say.


Maaret Pyhäjärvi [00:36:11]:

Very much so, because it's a locally oriented tool, nowadays also available globally. The language creates this kind of boundary between the two groups: the so-called developers and the developers who write the specialized language. So when I say language is going to be a problem, I think it's more on the side of: we'd like our people to talk to each other, and we'd like the language to stop being a barrier in that conversation.


Diego Molina [00:36:37]:

Yeah, I agree on that. Because when I was part of teams doing either development or testing, in both cases coding, when we were using different languages it was more like: this is your code and that's my code, so if there is a bug there, you have to figure out how to fix it. But when we started using the same language for specific projects or specific scenarios, there was no excuse. You could just move across the code base freely. And I think what you're saying, Titus, is a special case, because in Selenium every single feature we implement in the library has to come in five different languages. So I think that's a valid use case of AI, because in my case I can do it kind of decently in Java and then I can ask it to help with Ruby, and we can extrapolate from there. But yeah, in general I think a shared language is what makes the collaboration much easier.


Diego Molina [00:37:35]:

So when people are asking, what language should I use? Ask: what language is your team using? That's a better way of answering that question.


Titus Fortner [00:37:42]:

Well, that's another thing, though. You're both talking about collaboration. Is collaboration increasing or decreasing in 2025? Post-pandemic, are we coming back into the office so that we can collaborate, or are we doubling down on "we're remote and we don't need as much"? Because I think one of the things we saw coming out of the pandemic is there was much less collaboration, there was much less mentoring of junior developers. Is AI going to allow us to collaborate less, because we can each collaborate with an AI instead of needing to collaborate with each other to the same extent?


Maaret Pyhäjärvi [00:38:17]:

Oh, that's a future I really don't want to experience. Collaboration happens in the pull requests only? That's not one that I'd like to see.


Titus Fortner [00:38:26]:

But that's true for a lot of companies: that is where the collaboration happens.


Jason Baum [00:38:30]:

It's the same question being asked for writing in general. Writers who used to send their work to other writers: hey, punch this up. Now you don't have to; you just send it to AI and get instant results. It's a scary thought, isn't it? Or maybe not, if you don't want to collaborate with other people.


Maaret Pyhäjärvi [00:38:48]:

You know, I share my screen over the Internet all the time, and I do a lot of pairing and ensemble programming and all of that stuff where you're very collaborative. And I think it's actually better remotely, because you don't have someone in your close physical space, and you can actually accommodate for the size of the screen you have and all of the font sizes. I can have my font size and you can have yours, and all of that works nicely together. I think we're getting lovely tools, but we still have some work to do on the practice of bringing some of that real human-to-human learning connection into everyday life at the office.


Jason Baum [00:39:29]:

What I've just learned from this panel, Maaret: if we meet in person, I'm going to give you plenty of personal space when we say hello and greet. So let's move on to questions. We'll open it up here. I hope you've been enjoying this so far. This group is so great to get together, and I love being a fly on the wall when friends are talking about topics I'm interested in and hearing what they have to say. How can we think about measuring success going into the new year?


Maaret Pyhäjärvi [00:40:04]:

Success is still, in many ways, something from a personal point of view. You need to know what you wanted to achieve before you can measure success around that. So you'd probably go around figuring out if you have something in common in your goals, and then you'd set up the agreed experiment and the "how do we recognize if we did it" kind of conversation. That would probably be my go-to on how I would measure success. There are many personal successes where I'm just happy, you know, I got to do this. But the real impact comes from getting a group together and agreeing on something that is bigger than an individual.


Jason Baum [00:40:41]:

What about you guys? Titus, Diego?


Diego Molina [00:40:44]:

Yeah, I'd go with almost the same answer. Because even when you are testing or validating or checking, whatever name you want to use, it really depends on what the group sees as: this is something valuable that we can push out. And if this is out and it's useful for someone, given our parameters, context and so on, this is a success already. So I think that's pretty important when you are actually going to release something: have that metric, know what success means for you, what "this is achieved and done" means for you. Because you might have a feature that is maybe 60% done, but if this is good enough, perhaps it is already a success for you, because you're getting feedback from your end users. So I would go with the same.


Titus Fortner [00:41:34]:

Our goal is to be more efficient in our testing. What does that mean? You can't easily put a number on efficiency. We put numbers on what our pass rate and failure rate are, we put numbers on coverage, and we should be putting numbers on how long it takes to evaluate the results. I've been giving the pitch recently that your test suite hasn't finished running until you've evaluated all the failures. You should be tracking that.


Titus Fortner [00:42:07]:

I think that's what I would like to see us focusing on: how do we reduce that time? How do we get actual useful information faster? I think if we were thinking about our success that way, it would make a big improvement in test automation.
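As an aside for readers, the metric Titus describes, counting a suite as "finished" only once every failure has been evaluated, could be tracked with a very small script. This is an illustrative sketch only, not a Sauce Labs or Selenium feature; the `TestResult` fields and the sample timings are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    runtime_s: float        # wall-clock time the test took to run
    triage_s: float = 0.0   # time a human spent evaluating the failure

def feedback_time(results: list[TestResult]) -> dict:
    """Report feedback time: the suite isn't 'done' until failures are triaged."""
    run = sum(r.runtime_s for r in results)
    triage = sum(r.triage_s for r in results if not r.passed)
    return {
        "pass_rate": sum(r.passed for r in results) / len(results),
        "run_time_s": run,
        "triage_time_s": triage,
        "total_feedback_s": run + triage,  # the number worth reducing
    }

results = [
    TestResult("login", True, 3.2),
    TestResult("checkout", False, 5.0, triage_s=600.0),  # 10 min to diagnose
    TestResult("search", True, 2.1),
]
print(feedback_time(results))
```

The point the sketch makes concrete: a suite that runs in seconds but needs ten minutes of human triage per flaky failure has a total feedback time dominated by triage, which is exactly the part most dashboards don't measure.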


Maaret Pyhäjärvi [00:42:23]:

Specifically from the features perspective, I really like the idea of having a new feature send telemetry in production and seeing from the telemetry that it's actually doing whatever it was supposed to. In the Selenium project, when we introduced telemetry last year and started to realize how many people are actually using it, that was a really valuable moment. So: you grow your telemetry, you grow your features, you grow your measurements of whether we are adding new value, and then you question whether the value is really there. So not just a number, but more of a quality aspect to it as well.


Jason Baum [00:43:01]:

Yeah, I'll just double down on setting goals that can be measured at the beginning. That really goes back to the first thing we were talking about. And since we're going into the new year and setting these resolutions: you'll feel much more successful if, rather than saying "I'm going to go to the gym in the new year," it's "I'm going to go to the gym five days a week and hit a minimum step goal of 7,500 steps," or whatever it may be. That's not mine, I promise. But when you set those specific goals, it's much easier to see whether or not you've achieved your success. I think we have time for one more question. We kind of talked about this before, but what's the best practice for managing testing in different geographical regions, such as the US and EU? We sort of touched on this earlier.


Titus Fortner [00:43:50]:

Yeah, we talked about this a lot more beforehand, I think. Now we're focused on AI.


Jason Baum [00:43:55]:

Yeah, I know we got way into AI.


Maaret Pyhäjärvi [00:43:57]:

What is the best practice really on this one? The lowest common denominator: whichever is giving us either the most constraints or the most freedom, and then we choose from that.


Titus Fortner [00:44:11]:

I think it's really a risk tolerance, right? And it's going to vary by industry. What we think is good enough and who's deciding what's good enough. And what process do you have in place to ensure that you're hitting good enough.


Maaret Pyhäjärvi [00:44:25]:

I just wanted to mention that I'm a little uncomfortable with the concept of risk tolerance, because risk is something that, if you close your eyes, you don't see coming. So you can't really do that in advance too well. I'm looking for practices that could be more proactive than just figuring out what our appetite for risk is. Maybe it's because I look at things from the perspective that the competencies for finding these problems are not always in place in all organizations. The results of testing are still a problem in many organizations. Unless we address those results of testing, tolerating or controlling or making decisions on that risk is actually not going to work too well in practice.


Diego Molina [00:45:12]:

The context is so important when we're talking about that, because there is someone who's going to give a talk at the Selenium conference about how they work in the medical industry, and they're going to explain how they developed their test automation approach, and their testing approach in general. When she was explaining the structure of her talk to me and drafting some slides, it was so interesting to see the cycles they have. There is no two-week sprint, and there are so many different ways of measuring what success is. And it doesn't mean releasing a half-done feature is acceptable; it's not, in that industry. In that sense it's so interesting how the context and the risk tolerance shape how the approach has to be, depending on the industry and the context you're in. So if you're going to the conference, have a look at that one, or the video afterwards, because it's super interesting. You might still think that waterfall is something that doesn't exist, but it very much does exist in several places, and it's by regulation in most of them.


Titus Fortner [00:46:20]:

There's just fewer blog articles being written about it right now.


Jason Baum [00:46:23]:

Great, thank you. Thanks everybody. We are so up against time now.


Titus Fortner [00:46:28]:

I was going to argue with Maaret more.


Jason Baum [00:46:31]:

I just want to say thank you so much to this panel, to Maaret, to Titus, to Diego, and to you for listening along. And hopefully you, you got something from this webinar. And we're really looking forward to our next time getting together for another future Test Case Scenario live. Thanks again, everybody, and we'll see you next time.


Jason Baum [00:47:13]:

Thank you for joining us on Test Case Scenario. Share your thoughts in the comments. We'll make sure to respond to each and every single one. Don't forget to subscribe and hit that notification bell to keep in touch. If you missed our last episode, it's popping up on your screen right now. Go click it. Until next time on Test Case Scenario.

