Test Case Scenario
Join us every other week on "Test Case Scenario" presented by Sauce Labs, where our expert panel dives into the exciting and ever-changing landscape of technology, pop culture, and business. Host Jason Baum, Director of Community at Sauce Labs, will lead the discussion with our esteemed recurring panelists: Marcus Merrell, VP of Technology Strategy; Nikolay Advolodkin, Senior Developer Advocate; and Evelyn Coleman, Manager of Implementation Engineering. Get ready to uncover the impact of continuous testing in this thrilling exploration of the tech world!
Building Quality as a Shared Responsibility with Rémy Gronencheld
Is your QA strategy keeping up with the speed of innovation?
In this episode of Test Case Scenario, Jason Baum and Marcus Merrell are joined by Rémy Gronencheld, Quality Assurance Manager at BlaBlaCar, to explore how the global carpooling platform scaled from manual testing to seamless automation. Rémy dives into BlaBlaCar’s journey—moving from Mac minis and manual regressions to a robust, scalable test automation suite with Sauce Labs.
You’ll also hear how BlaBlaCar’s “Q as a Service” model is redefining quality as a shared responsibility across teams.
Join us as we discuss:
(00:00) Introduction
(02:07) Carpooling, community, and connections
(05:47) The early days of testing: manual processes and Mac minis
(08:52) Transitioning from manual testing to automation with Sauce Labs
(10:08) Scaling automation and improving test coverage
(12:17) How Sauce Labs transformed testing workflows and reduced maintenance
(17:15) Tackling global challenges
(21:22) Why quality goes beyond the app and into the car experience
(22:58) Shifting left, shifting right, and finding the right balance for QA
(25:10) Enabling shared responsibility for quality
(28:06) Using data, metrics, and SLOs to track quality and performance
(32:39) Building a culture of quality
(35:43) Supporting teams with QA expertise
(37:04) Adapting QA strategies to team needs and product maturity
(37:37) How BlaBlaCar’s testing journey saves time and builds trust
(38:24) The future of quality at BlaBlaCar
We’d love to hear from you! Share your thoughts in the comments below or at community-hub@saucelabs.com.
SUBSCRIBE and visit us at https://saucelabs.com/community to dig into the power of testing in software development.
Sign up for a free account and start running tests today at https://saucelabs.com/.
▶ Sauce YouTube channel: / saucelabs
💡 LinkedIn: / sauce-labs
🐦 X: / saucelabs
Jason Baum [00:00:00]:
This is Test Case Scenario with me, your host, Jason Baum. This podcast is the definitive hub for knowledge and stories in the software testing and development communities. If you're new to the channel, hit the subscribe button and let's dive straight into the episode. Hey, everybody. Welcome to another episode of Test Case Scenario. I am your host, Jason Baum, and with me, as always, Marcus Merrell. And we're very lucky on today's episode to be joined by Remy Gronencheld. Remy, thank you so much for coming on.
Jason Baum [00:00:43]:
And why don't you say a few things about yourself, give your little bio.
Remy Gronencheld [00:00:48]:
Hello. Hello, everyone. Yes, it's quite simple. I work at a French company called BlaBlaCar. We are a company that does a lot of things with cars, obviously, but we also have buses and trains, and carpooling is actually our main activity. So making sure that people can travel together in the same car, or by other means of transport, so that we bring people together and they can travel in the safest and best way possible. Basically, I'm the manager of the quality assurance team in the company, and I've been doing QA and testing-related activities for about ten years. I'm part of a department that is in charge of all the infrastructure, so we call it the foundation teams, which includes database management, all the infrastructure, all the security topics, and it's kind of a transversal support team to ensure that the business is working as expected in terms of tech.
Remy Gronencheld [00:02:07]:
Really excited to have these great discussions with you about testing, QA, and strategy in general.
Jason Baum [00:02:15]:
Awesome. Yes, we're really excited to have you on. And BlaBlaCar is a really fascinating business. It really is. Last year, Marcus, do you remember having the conversation around Valentine's Day? Actually, earlier this year, we had a conversation about testing dating sites, dating apps. And in many ways, we kind of talked about the difference.
Jason Baum [00:02:41]:
What did we compare a dating app to? It was like a rideshare, like Uber. And we're like, there are so many similarities when you're testing it or thinking about it from the back end. And I would imagine that BlaBlaCar, as I look at it, is kind of similar, right? You're kind of trying to find a match to commute with. First, why don't you tell us a little bit about BlaBlaCar? And is that a fair comparison from a backend standpoint, or is it completely different?
Remy Gronencheld [00:03:10]:
We are not there to make couples. But it's true that, in a sense, it's a matching activity. It's a marketplace at some point. On a dating app, you have a marketplace of people; here we have a marketplace with, on one side, the person who owns a means of transport and wants to go from point A to point B, and on the other side, the person who doesn't own a means of transport and wants to go to the same place. So it's kind of that. It's matching people on criteria, which are not physical criteria or anything like that, but can be, I don't know, I like your car, or you're going to the right place. There are a lot of possibilities. You can also add to your profile some things that you like, your hobbies and so on.
Remy Gronencheld [00:04:00]:
So it's also a way to talk to people and not be alone on the road, because we have a lot of people who do carpooling not really to make money, because it's not about that. What you want to do with carpooling is really to share costs, not necessarily to make money. People just use it to meet people, to have new experiences, and to not be alone on the road, because sometimes you have, I don't know, a six-hour drive, and if you're alone it's kind of dangerous and you get bored. But if you have other people to talk to, you can put on some songs, have some more activity, and it's more refreshing; you have more things to do on the road. In terms of needing to match things, I guess we are quite close.
Jason Baum [00:04:56]:
Yeah. Yeah. I could see the similarities, obviously the differences.
Jason Baum [00:04:59]:
It's cool.
Jason Baum [00:05:00]:
It reminds me, growing up it was all about carpooling. All we did was carpool. I feel like I don't hear about carpooling nearly as much as I did; it's kind of a lost thing, at least here in the States. But it's kind of cool to take that concept and sort of commercialize it, or rather get it out there so that people can have that experience more, because it is important. It's a cool way to meet people and share that time on the road together.
Jason Baum [00:05:33]:
So how was testing done at BlaBlaCar? Take us back a little bit, perhaps before your time as a Sauce Labs customer. How were you testing?
Remy Gronencheld [00:05:47]:
It was quite manual, I would say, when I arrived at BlaBlaCar. It's been almost eight years that I've been at BlaBlaCar, so back then it was quite a young company. When I arrived, a lot of things were done manually. You had embedded QAs inside what we called at that time tribes, on the Spotify model. So we had tribes of people with squads, and you had QAs dedicated to some of them, really embedded in the team, with QA as a last resort: okay, I finished my job, I'm handing over to the QA. I don't want to say traditional, because it depends on the country and the situation, but I would say it was the old-school kind of QA: it's part of the process to have one line of people doing tests and saying yes or no, you can go to prod. So it was a bit like that. And on top of that, we didn't have any automation, no automated tests. So obviously you need more people in order to validate manually.
Remy Gronencheld [00:06:57]:
So it was a different period. At that time we had a team of QA engineers whose job was to create a new framework in order to automate, and so we were slowly starting down the road to automation. At the beginning we had a setup that was really, I want to say, rustic. What I mean by that: we had one Mac mini sitting in a data center somewhere, and we were logging on to it via some kind of VM that we needed to have on a computer in the office. Really fun. And when we wanted to launch an automated test, we had to configure our own emulators and simulators, we had to do the updates ourselves. We had a lot of problems with maintenance, a lot of problems with the way we were working.
Remy Gronencheld [00:07:57]:
And so we had to build from scratch our whole parallelization model, our model to record, to take screenshots, and all that stuff. We needed to create and maintain it all by hand, which was not easy for us and took a lot of time, to be honest. And all that time is time you are not testing, which is what we want to be doing most of the time. So yeah, it was something pretty big that was starting to be built, but it hadn't reached a point where it was mature enough or scalable enough. And at one point the question was: should we continue investing in more Mac minis? Should we do more of that, maybe with people dedicated to it around the clock? And the alternative was to find a tool and export this complexity to the ones who know it best.
Remy Gronencheld [00:08:52]:
And so obviously, top of mind, we had Sauce Labs on our radar. It was really kind of a no-brainer as the tool, especially at that stage. And today I still believe Sauce Labs was the best company to provide this service. So we started with that, with a new plan, learning how to work with Sauce Connect, how to really integrate all those bricks. That was a lifesaver for us, because we removed, I think, about four hours a week of maintenance, completely removed, allowing us to dedicate that time to creating more tests, automating more, and saving more time. Maybe I have a few figures on that. We are kind of mobile first, because when you're on the road, most of the time you need your mobile to communicate and so on. So our mobile applications are really critical for us, and it's important to have them up and running and working well.
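The integration Rémy describes, running mobile tests on Sauce Labs devices and reaching internal environments through a Sauce Connect tunnel, is usually driven by a capabilities payload. A minimal sketch of how a team might assemble one; the app path, device pattern, and tunnel name here are illustrative, not BlaBlaCar's actual values:

```python
# Sketch: build W3C capabilities for an Appium session on Sauce Labs.
# All concrete values (app, device, tunnel) are made up for illustration.

def build_sauce_caps(app_path, device, test_name, tunnel_name=None):
    """Return a capabilities dict for a Sauce Labs mobile session."""
    sauce_options = {
        "name": test_name,          # label shown in the Sauce Labs dashboard
        "appiumVersion": "latest",
    }
    if tunnel_name:
        # Route device traffic through a Sauce Connect tunnel so tests
        # can reach a staging environment behind the firewall.
        sauce_options["tunnelName"] = tunnel_name
    return {
        "platformName": "Android",
        "appium:app": app_path,
        "appium:deviceName": device,
        "appium:automationName": "UiAutomator2",
        "sauce:options": sauce_options,
    }

# In a real suite these caps would be handed to the Appium client against
# the Sauce endpoint, with credentials taken from environment variables.
caps = build_sauce_caps("storage:filename=app.apk", "Google Pixel.*",
                        "regression-login", tunnel_name="staging-tunnel")
print(caps["sauce:options"])
```

The point of centralizing this in one helper is the maintenance saving Rémy mentions: device and OS choices live in one place instead of being duplicated across tests.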
Remy Gronencheld [00:10:08]:
By the time I arrived, we were doing mobile submission, that is, all the regression testing you do before sending a new release to production. During that meeting there were about 15 devs and QAs, a mix of people working together on manually validating a scope of about 50 or 60 test cases, which, with all the complexity of the app, is not that much in the end. And we spent between two and three hours doing that, with all those people. So it was quite heavy for us, and it was taking up the time of a lot of developers who were not used to testing. Thanks to Sauce Labs, we managed to scale our automation, because we enabled more automation to run at the same time, and in that sense we also enabled better coverage of our application. We included that in our continuous testing, because it was a way for us to not just wait for the mobile submission to run tests, but to have the automated tests running nightly, daily, with some kind of structure around that.
Remy Gronencheld [00:11:29]:
And by using those capabilities we started to cover more and more tests, so we grew the activity, grew the number of tests. Sauce Labs was there to support our growth, because we could expand without it being costly for us. We didn't have to restructure everything or re-channel everything. It was already there: the platform was working, people could access it, we had access to videos, we had access to screenshots, it was all built in. We could even debug the app directly using the live mode, where you launch a test and can take over and just work on it to understand. So, pretty neat for us. We built a lot of tooling around the tool so that we can get the results and fetch different information, logs and things like that.
Remy Gronencheld [00:12:17]:
So today it's really embedded in our Slack, in our tooling, and we are using it and leveraging it in the right way. Thanks to that, we were able to really scale the activity. And the result is that today we manage to have a mobile submission that is no longer tested manually. Well, today that's not quite the reality; we are still doing a fair amount of manual testing, I would say, but we are starting to believe we can reach a point where we could fully automate the regression test suite and fully rely on automated mobile submissions. So going from three hours every week to nothing, actually, because it's already done. We have coverage that is way better, because today we're talking about running 40 end-to-end tests that target the application directly, as a user would. So it's pretty strong in terms of trust. It allows us to be really confident when we ship a new version of the application.
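The "embedded in our Slack" tooling Rémy mentions typically has a small formatting layer between the test platform's results and the chat channel. A toy sketch of that layer; the job fields and channel name are invented for illustration, not the actual Sauce Labs REST schema or BlaBlaCar's setup:

```python
# Toy formatter: turn a test-job summary into a Slack message payload.
# Field names ("passed", "duration_s", ...) are illustrative only.

def job_to_slack_message(job):
    """Build a Slack chat message dict from a test-job summary."""
    passed = job.get("passed", False)
    emoji = ":white_check_mark:" if passed else ":x:"
    status = "passed" if passed else "FAILED"
    text = (f"{emoji} {job['name']} {status} on {job['device']} "
            f"({job['duration_s']}s) - video: {job['video_url']}")
    return {"channel": "#qa-mobile", "text": text}

msg = job_to_slack_message({
    "name": "regression-booking",
    "passed": False,
    "device": "Samsung Galaxy S23",
    "duration_s": 84,
    "video_url": "https://app.saucelabs.com/tests/abc123",
})
print(msg["text"])
```

Posting the dict to a chat API is the easy part; the value is that failures arrive in the channel with the video link attached, so anyone on the team can triage without opening the dashboard.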
Remy Gronencheld [00:13:29]:
If those tests are passing, we know that we're on the right track and we can go to prod. It's really something that helps us in the long run. And today we are scaling the activity. Thanks to the support, we know that we don't have to worry about a new version of the app, because Sauce Labs is always on top of its game: updating to the right OSes, having the right devices whenever we need them. We know that we can count on them. The support is great. You remove some worries that we used to have with a local setup. So, quite a huge change for us compared to where we were.
Jason Baum [00:14:09]:
It's been a long time since those Mac mini days; you've come quite far. And I'm surprised you guys didn't want to handle your own infrastructure and grid. I mean, I'm shocked by that.
Marcus Merrell [00:14:23]:
I'm curious, going down a level to the tactics of testing an app like that. I'm imagining you've got a lot going on, in that there are people on two sides of the request and the response to create the match. And then there's a GPS consideration about location finding. Do you do real-time mapping like Uber, or is it really just to set up the ride and then let people communicate with each other?
Remy Gronencheld [00:14:47]:
Well, "blah blah" in French, or maybe in some countries, means to talk, like doing some blah blah, it's talking. And that's kind of the spirit of the company: we prefer people to exchange with each other. We have a messaging system and things like that for people to communicate. But today we don't necessarily track them in real time. That's something we could implement, but today it's not a concern for us. We want to really focus on the experience and the way things work in the car.
Remy Gronencheld [00:15:20]:
Because actually the funny part of our application is that most of the business is not in the application. The application is a means to meet people. So it should be fast, it should be quick, and you should be on it to do what you want to do, which is travel and meet people. That's why it's different. Today we are not really using geolocation, leveraging it as a feature. Most of the complexity is in the way we match those people. We have a lot of internal tooling to define when it's appropriate for people to make detours to pick up other people along the way, and things like that. So we are mostly working on that, mostly on the departure and arrival.
Remy Gronencheld [00:16:09]:
And based on that, you define what the best match is in terms of convenience, timing, and so on.
Marcus Merrell [00:16:15]:
It seems like you'd have to deal with situations where people have a lousy Internet connection on one side of the equation but not on the other. Do you have to simulate bad cell phone connections and things like that, or do you just work through it?
Remy Gronencheld [00:16:28]:
We operate in a lot of countries, for example Brazil, Mexico, and India. So obviously we have to deal with a lot of different networks and things like that. It's multi-country, so our testing is not just about language; it's also about configuration, and about the phones. Because in some countries you will have the top-notch phone, but our population is everybody, actually. So they can have the latest phone, or they can have one that can barely handle an app. We need to take that into account. And that's why having at our disposal a huge range of devices, and emulators and simulators, in order to do those tests helps a lot.
Remy Gronencheld [00:17:15]:
So yeah, it's part of the constraints, and it's true that we need to work at a tech level on ways to make sure that even with a low connection you can still access data offline, and things like that. There is a lot of work around those features to enable people to still know where they're going, still know how to contact their driver, and so on.
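One concrete way the low-connection concern Rémy raises shows up in client and test code is retry with exponential backoff around network calls. A self-contained toy sketch; the flaky endpoint is simulated in-process, and the function names are invented for illustration:

```python
import time

def retry_with_backoff(fn, attempts=4, base_delay=0.01):
    """Call fn(); on a connection error, wait exponentially longer and retry."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries, surface the failure
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky endpoint: fails twice, then succeeds, mimicking a
# spotty mobile network.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated poor connection")
    return {"trip": "Paris -> Lyon", "seats": 2}

result = retry_with_backoff(flaky_fetch)
print(result, "after", calls["n"], "attempts")
```

In an end-to-end suite, the same idea is often applied the other way around: deliberately injecting latency or dropped requests to verify the app degrades gracefully, as Marcus's question suggests.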
Jason Baum [00:17:36]:
BlaBlaCar is kind of maturing as you've evolved with the test automation and end-to-end coverage. Where do you see quality living now? Is that evolving along with it? Are you adopting a more DevOps approach, shifting left, or has that not changed? Can you expand on that a little?
Remy Gronencheld [00:18:00]:
Yes, of course. Our first introduction of automated tests was mostly to reduce regression testing. That's really important, and we still want to reduce as much as possible the re-testing of things, like waiting five hours to know whether or not we are good. That was the priority. Today we want to move into what we call experience testing. We want to change a bit from a simple yes or no on business criteria, which is done really well by a machine. What I call the extra quality that we want to get is: actually take the application in your hands and use it as a BlaBlaCar member would. And to put ourselves in that situation, we have one day off offered by the company, which we call the Vis-a-member day. On this day we are allowed to use the product. It's a way for us to live-test the application, and the only expectation the company has is that we share our feedback, that we actually use the platform and note the differences.
Remy Gronencheld [00:19:17]:
So it's part of the culture to have people caring about the product, using it, and working with it. That's something we like to nurture, and we want to go further this way, to make sure that we can use the product hands-on and really be in a situation of: okay, is it working or not? Not just: does it match the expectations we put on paper? Because reality is always really different from paper. It doesn't prevent us from covering all the security requirements and things like that, which are more, I would say, structured and normalized. But what I call the extra quality, what makes quality, is kind of the wow effect, when people say: okay, yes, this app is doing something that really brings me value, and this app works like that. And because we have a multicultural population in our application, we need to make sure that we can put ourselves in the place of those people. And for that, we open up the team as much as possible.
Remy Gronencheld [00:20:27]:
We want to make testing easy for people, and the QA team is in this position of enabling people, any people, to test. For example, we have the community relations team, who handle support for any problem people have on the road. We try to have them as part of the validation. We want their opinion; we ask them how it goes, what the feeling about the product is. We get a lot of feedback, and this is where we get most of the value and where we can bring quality. It's when you don't just limit yourself to saying, okay, it's doing what we want, but ask: is it really working for us as an experience, in the real world? As I said, you could have the best app in the world, but in our business, everything happens in the car.
Remy Gronencheld [00:21:22]:
So at some point, you could have an app that is perfect, and something still goes wrong in the car. You need to embed quality even further than just in the app. You need processes that handle quality, and you need to make it part of the company culture. It's kind of a shift left, actually, but the ultimate shift left, because we go all the way up to the company itself.
Jason Baum [00:21:53]:
You've described what could be called a center-of-excellence approach to quality. How would you say that differs from the standard shift-left DevOps approach?
Remy Gronencheld [00:22:10]:
I do believe those are words or expressions that relate to known frameworks. In a sense, we could say that the way we operate is also DevOps, because we are concentrating on enabling people, working on expertise, and defining that. There are a lot of similarities between what we are doing and what can be called DevOps. What I want is something that works for my company; that's what's important to me. Maybe I will disappoint you in responding to the question, because I don't really want to be put in that box. At BlaBlaCar, we built a model that is called Q as a Service. We are still building it along the way, and it takes some things from each approach.
Remy Gronencheld [00:22:58]:
We wanted to shift left, because we want to talk with the PMs at the beginning of the project; we understand that. But we're also shifting right a lot, because we want to understand what's in production, to be the member and work with them. And in the middle we have the automation. We want to automate as much as possible, so that all the work we did on the specification we see directly as a result, in the easiest way possible, by bringing in as many people as we can in the company to give feedback along the way and make sure we're going in the right direction. So it's definitely a center of expertise in the sense that we want to bring people to quality. Quality should not be an issue at BlaBlaCar; testing should be seamless, so that anyone can participate and we get as much feedback as we can along the way.
Jason Baum [00:23:56]:
I kind of set you up there, because you're here with friends, friends who agree with you on this topic. I actually wrote an article, "Shift Left is Dead," about shift-left testing. I don't mean that it's all the way dead, but the intent was that it can't mean we're eliminating QA completely from the process, which I think is unfortunately what shift left often leads to. And in actuality, I think you're more shift-left than traditional shift-left testing usually gets. As advisors early in the process, before you even get to the room, it's the conversation that happens at inception. That's where true shift left, if DevOps were done correctly, would actually start, and you guys are sort of there, while also covering the back. And don't forget shift right.
Jason Baum [00:24:56]:
Because I think traditional DevOps was almost, not ignoring it, but it was like, no, we have to shift further left. But what happens to the right? Are we ignoring the right? We can't ignore the right. What are the advantages of the approach you've taken?
Remy Gronencheld [00:25:10]:
Just to get back to the shift right, shift left question?
Jason Baum [00:25:14]:
Yeah, yeah.
Remy Gronencheld [00:25:15]:
I believe that shift left is everything that is not yet real; it's in the mind of the company, something being built. This is where you can open up a lot of directions, but it's also where you can build something that is completely wrong. When you're on the shift right, you get to reality, and you cannot ignore reality when you're doing something like that. You need to make sure that what you think, what you want to build, explore, and do, will have the right impact in the end. That's really how I see it.
Remy Gronencheld [00:25:49]:
And in the middle is the positioning that you can have. Depending on your company, depending on your product, I do believe you need to find the right balance. For us, we have an application where we can take a bit of risk. We are not shipping rockets to the moon; it's not something where we say, okay, we need to be perfect on every criterion, so we are allowed to take some risks. That allows us to have this positioning and to let people be more included in the testing, because we can have non-expert testers doing the job. And by doing that, you ensure that quality is, again, ubiquitous, because that's part of the definition. Our positioning actually enables more shift left, because if you are not directly in the delivery, you can see what's happening in prod and you can see what's happening before.
Remy Gronencheld [00:26:52]:
So it's a positioning that allows you to be more flexible and to adapt based on the need. That's why we call it Q as a Service, because it's really: where are we needed the most? Where can we bring the most value to the company as experts in testing, in validation, in quality? I guess that's the strength of the model. Obviously it has some drawbacks, because some people love to have a QA team that knows everything and has the answer to everything. I do believe that's quite comfortable for people. But in the end, is it scalable, is it good for the company itself? I'm not sure, because you crystallize the information, you crystallize the ownership of quality, and I do believe that's not the right way to go if you want to grow a quality business.
Marcus Merrell [00:27:42]:
I'm curious whether or not you have incorporated into your automated or manual testing the testing of the user journey and the analytics I imagine you're collecting to understand how people use the app. Because I've worked on some apps before where that kind of stuff breaks and people don't know it for quite a while, because it's not part of testing until you deliberately start testing it. So I'm curious whether you have any motion there.
Remy Gronencheld [00:28:06]:
Yes. We have this strong automated suite, and I do believe the key in that is having the right coverage. BlaBlaCar is a company that has been doing this for a while: we have a lot of analytics, a lot of tracking in the app. We are not exactly tracking quality, although we do that too; at some point we are tracking business metrics, saying, okay, on this flow we have a drop at this page, at this moment, so we can understand that, and we have a full data team working on this along the way. It's also part of how we decide, as QA, where we put our effort. Because at some point we see in the metrics that there is an area with some drops, some gray area where we don't know what's happening.
Remy Gronencheld [00:29:00]:
So we focus on that, and we can rearrange the way we are testing to incorporate those areas even more. A lot of data is followed by the PMs and the data team, and there is a lot of activity around that. On top of that, we also have a stronger emphasis on SLOs. We have a good framework that was created by the team that owns the whole observability stack at BlaBlaCar, and they implemented product feature SLOs, which are directly linked to the product itself. We define a flow, a subflow, and based on that we define indicators, SLIs and so on, so that we can set expectations for those flows in terms of availability, stability, and sometimes quality. We have rolled out this framework to the whole tech team, so any team, based on the flows they own in their scope, can say, for example: on the publication flow, to publish a new offer for a trip, you have an SLO saying this flow should be available at least, say, 50% of the time.
Remy Gronencheld [00:30:15]:
The real target is much higher than that, of course, but we have some leverage there, and today it's part of the full ecosystem of visibility and observability that we have on our app. So it's not only NPS; we do measure the NPS, we measure the flows and the data on them, but we also measure the technical part of how the product should work. And on top of that, we have the automated tests telling you, on this critical flow, on this critical business area: are we fine, are we confident with what the application is doing? So yes, we have a lot of things around that.
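The flow-level SLOs Rémy describes boil down to comparing a measured indicator (the SLI) against a target. A minimal sketch of that arithmetic; the request log, the success rule, and the 99% objective are made-up illustrations, not BlaBlaCar's actual definitions:

```python
# Toy SLI/SLO check for a product flow (e.g. "publication").
# The success criterion and objective values are illustrative.

def availability_sli(events):
    """SLI: fraction of requests on a flow that succeeded (non-5xx)."""
    total = len(events)
    ok = sum(1 for e in events if e["status"] < 500)
    return ok / total if total else 1.0

def slo_met(events, objective=0.99):
    """True if the measured SLI meets the objective."""
    return availability_sli(events) >= objective

# Made-up request log: 98 successes, 2 server errors.
log = [{"status": 200}] * 98 + [{"status": 503}] * 2
sli = availability_sli(log)
print(f"availability {sli:.2%}, 99% SLO met: {slo_met(log, 0.99)}")
```

The same shape generalizes to the stability and quality expectations Rémy mentions: pick a per-event success rule, aggregate over a window, and compare against the objective each team set for the flows it owns.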
Marcus Merrell [00:30:58]:
That's great. It's really powerful stuff when you can deliver that kind of thing confidently to make sure it continues to give you all the signal that you need.
Jason Baum [00:31:05]:
One of the questions I like to ask, because the answer is always different and I just love asking it for that reason: who owns quality, then, at BlaBlaCar?
Remy Gronencheld [00:31:19]:
I would really be happy to say that it's everyone. It's a really good question, because today I would say there is not one person or team that owns quality, which is good, though I'm not sure it means quality is truly owned today. It's a spread ownership, I would say: every team has their ownership at their level, but we don't have someone on top saying, okay, I own this part of quality. It's kind of a company objective, actually, to watch and make sure that what we are doing has quality. So I would say quality is a shared responsibility between the tech team and the product team, because they are the ones building it.
Remy Gronencheld [00:32:06]:
But I will go even further: it's a team and company thing. Because at some point it's not only tech, it's not only product, it's also the people in marketing, the people doing the commercial work. All those people working along the way, they are the ones owning quality, because if one of them is not doing something with quality, there is a hole in the process and it cannot work.
Jason Baum [00:32:39]:
So does doing it that way create a culture of quality?
Remy Gronencheld [00:32:43]:
Yes, exactly. Before, actually, when we were more embedded in the teams, we had some behavior where people were maybe less aware that they should care about what's happening. So we would end up starting a test where everything was wrong, because nobody had tested anything before it arrived with us, and the attitude was: okay, it's your job to make sure that we are good. Today we don't see that. Today we see people who want to do what is good for the product, for the company. So they come to us and say: okay, I want to make this good, but I don't know how to do it, so I need your help. Whereas before it was: okay, I'm doing this stuff, I hope it's good, I will send it to the one who can tell me whether or not it's good. So in a sense, by removing your presence, you increase the need for quality.
Remy Gronencheld [00:33:38]:
Obviously you cannot do it blindly, and that's why we have all these indicators and so on. What we do as a center of expertise is also to measure and to have visibility, to say: okay, this team, we need to come and work with them at some point; or this team, they are good, so we can leave them a bit more free on this topic. By stepping back and removing your presence there, you start to see a lot of good behaviors, a lot of proactivity, ownership, and things like that. What you can do to support them is to answer their questions and back them in their objective to move towards quality. Whereas before it was: you're the one, make it good and make sure that we don't have issues. And it's in that sense, I guess, that we can measure the importance of this approach.
Remy Gronencheld [00:34:34]:
Obviously it's not something that happens from one day to the next, so we need to be careful in the way we do it. Some teams are not ready to take the big step, because they lack confidence, they lack knowledge, and maybe they lack tooling. Based on that, that's why the center of expertise needs to go and work with them. We have several processes, actually, that we use to diffuse these best practices at BlaBlaCar. We have the concept of classrooms, which is a slot available every week: people can bring questions or topics to us, and we discuss them in hands-on meetings where we crack a problem or answer a question that a team has. Another process we have, we call immersions, meaning that for one month we put one QA in your team, and he or she will help you develop the process, automate a bit, and do it. Thanks to those two elements, we maintain visibility and we can take specific, I would say, action with the teams that need it the most.
Remy Gronencheld [00:35:43]:
So if you step back, you need to be ready to support and to enable people to own their own quality. You cannot just leave them to their own devices.
Jason Baum [00:35:56]:
I will say I love that answer. I love everything about this. We're so on the same page, I think. Do you find that it's alleviating pressure for developers who are testing more by doing it this way?
Remy Gronencheld [00:36:08]:
I don't know, because I had a lot of complaints, so I…
Jason Baum [00:36:12]:
That's going to happen anyway.
Remy Gronencheld [00:36:14]:
Yeah, it's true. Anyway, most of the complaints actually came from the fact that, at the beginning, they were spoiled: we were there, we were doing the stuff for them. When you have had something from the beginning and then you remove it, obviously there are a lot of complaints. So it's kind of marginal, in a sense. In the end they are quite happy that they can save some time. And actually we demonstrated that at some point they do save time, because they don't need to interface with someone: they are able to test, they go to prod, and we save on time to market while keeping the same quality. And why is that? Also because we can leverage an automation suite that is strong enough to ensure that we don't have regressions. So you can go there if you have the necessary tooling to support your growth.
Remy Gronencheld [00:37:04]:
Also, actually, at BlaBlaCar we have a sub-product that is not doing long-distance carpooling but short daily commutes from work to home. It's a product that is developed and working, but on their side they don't have automation, so it's difficult for us not to be there, actually. So we have one QA who is embedded in this team and works with them along the way. That's also why we adapt our way of working based on the needs of the product, the team, and things like that.
Jason Baum [00:37:37]:
Thank you so much, Rémy, for being on today and for talking to us. This was awesome, this was fascinating. I'm a big fan of BlaBlaCar. I didn't know too much about BlaBlaCar before, and then I did a lot of reading. It's a very cool app, the work you guys are doing is fantastic, and I love your approach to testing. I'm glad that you're doing so well in your automation journey, and it sounds like you're going to take it even further. So, Rémy Gronencheld, thank you so much for coming on Test Case Scenario. Really appreciate it, and thank you for listening to this episode. We will see you next time on Test Case Scenario.
Jason Baum [00:38:24]:
Thank you for joining us on Test Case Scenario. Share your thoughts in the comments; we'll make sure to respond to each and every single one. Don't forget to subscribe and hit that notification bell to keep in touch. If you missed our last episode, it's popping up on your screen right now, so click it. Until next time on Test Case Scenario.