Your Work Friends | Fresh Insights on the Now and Next of Work

Code Red: Zapier's Approach to Move from AI Fear to AI Integration w/ Zapier's Brandon Sammut

Francesca Ranieri Season 2

“With AI you can delegate a bunch of the work, but you cannot delegate the accountability.”

This week on Your Work Friends, we sit down with Zapier’s Chief People & AI Transformation Officer Brandon Sammut to break down how Zapier hit 97% AI adoption without traditional L&D, why they declared an internal “Code Red,” and how culture—not tools—is the real engine of AI transformation.

We get into Zapier’s AI fluency rubric for new hires, their “clear the lane” discipline that freed HR to actually innovate, and the role of AI Automation Engineers reshaping how teams work. If you’re stuck in pilot purgatory, wrestling with tool sprawl, or trying to get your leaders hands-on with AI, this episode gives you the moves.

What you’ll learn:

  • Why Zapier called “Code Red” on AI early
  • The habits behind their 97% adoption rate
  • How transparency + psychological safety supercharge experiments
  • The AI fluency bar every new hire must meet
  • One golden rule: delegate work, not accountability

If you’re a CHRO, CPO, or operator trying to make AI real inside your org—not just a slide—this one’s your playbook.


About Brandon Sammut
Chief People & AI Transformation Officer at Zapier, leading the organization’s AI adoption, AI fluency standards, and automation strategy. Follow him on LinkedIn for playbooks and real-world experiments:  https://www.linkedin.com/in/brandon-sammut-8147b76


#YourWorkFriends #Zapier #AIAtWork #AITransformation #FutureOfWork #HRLeaders #PeopleStrategy #TalentStrategy #Automation #GenerativeAI #Leadership #WorkplaceCulture

Disclaimer: This podcast is for informational purposes only and should not be considered professional advice. We are not responsible for any losses, damages, or liabilities that may arise from the use of this podcast. The views expressed in this podcast may not be those of the host or the management.

Thanks for listening!

Hey! We love new friends! Connect with us!

SPEAKER_00:

And one of those principles at Zapier comes from our CTO, Bryan. And I'll never forget the first time he said it because it was so helpful and so sticky. It was like impossible to forget. He said, hey, with AI, you can delegate a bunch of the work, but you cannot delegate the accountability. Full stop, period.

SPEAKER_03:

Hey, this is Your Work Friends. I'm Mel Plett and I'm Francesca Ranieri. And we break down the now and next of work so you stay ahead. Francesca, how are you today? I had to go to traffic school because I got caught speeding.

SPEAKER_04:

Can I tell you? A little PSA, something I learned at traffic school in Oregon. And I'm sure this is very true for most states. It is illegal for you to be even holding your cell phone. I'm not even talking about texting or checking your phone. I'm talking about just holding your cell phone. Guess how much the ticket is if you get caught holding your cell phone in Oregon.

SPEAKER_03:

Connecticut has a law like that. So I'm going to go out on a limb and say $1,500.

SPEAKER_04:

No, you're supposed to undershoot it. So it sounds like mine's like, oh yeah, go ahead. $120. No, it is not. It's a thousand. It's $1,500.

SPEAKER_03:

It's $1,000 in Oregon. It's $1,500 in Connecticut. I don't know, I'm just assuming, because Connecticut's expensive. But I do know Connecticut just also passed a similar law. And I'm like, how the hell am I supposed to turn on my fucking GPS? Sorry for that. My GPS. Like, what if I need to change direction? That's a lot of money, man. That's a lot of money. A lot of money. But I appreciate it, because texting and driving is still an issue. I was on 95 and the person in front of me was like they were drunk. They were not even looking at the road, texting while they were driving on the highway. So I can appreciate it. I'm a fan. Like, I'm voice-to-text, all that good stuff. But just holding your phone.

SPEAKER_04:

Yeah, just holding your phone. Per driving school, texting and driving is the equivalent of driving the full length of a football field blind. I believe that. Yeah. Are you a better driver? I'm actually a lot more cautious. It was a good course. I was like, I appreciate this course. And they're like, okay, get out of here. Anyway. Yeah. He's like, hey, Evel Knievel, do you know how fast you're going? I'm like, nope. He's like, where are you going? I'm like, just home. Like, what?

SPEAKER_03:

Yeah. I just really want to be in my sweatpants. Thank you. Well, we just got off such a delightful conversation. We've been following Zapier for the last few years because they've really been leading the way in terms of automation and AI transformation. And we connected with Brandon Sammut, who is their Chief People and AI Transformation Officer. And his team is just joyful to follow on LinkedIn. If you are not following them, you absolutely should, because they are putting out playbook after playbook on how to be successful with this. And we were talking with him about Zapier's call to action for AI fluency and adoption within their own organization, their code red. Francesca, what do you think about this conversation?

SPEAKER_04:

Every organization, you've either been a part of a tech integration or you've led a tech integration. And adoption's always just a super pain in the ass. And Zapier's at 97% adoption. By the way, they did that all without any formal learning and development. And so we were like, how in the hell? So I loved this conversation. Brandon broke down exactly how he did it, what you need to consider, and how big a play culture is in it. I left feeling like Brandon's a good egg, and I want this type of CPO in every company. With that...

SPEAKER_02:

Here's Brandon.

SPEAKER_03:

I love what you put out. I love that it's from all levels of the organization. Everyone's involved, everyone's bought in. And then I read about code red. Two questions here. What was, or is, code red? And what triggered code red at Zapier?

SPEAKER_00:

Yeah. So this goes back to March of 2023, two and a half years ago, about six months after ChatGPT and the 3.5 model came out. And we had plenty of folks at Zapier who had been keeping up with all the models and building with them and what have you. But 3.5 was pretty capable. It was a big step up. And for Zapier, the code red moment wasn't just about a new model coming out. It was also about the pace of progress with the models. You start drawing that improvement curve as the little dots get plotted, as it were, and you're like, these are going to get better fast. And for a company like Zapier, whose mission has always been to make automation work for everyone, you see a massive opportunity. AI can make automation more powerful, but it can also make it easier to use. And you don't always see that kind of business opportunity, right? Where the thing you wake up to do every day, you get to do it better and also make it easier for the humans doing the work. So that's captivating. That's a great opportunity. But you might be asking, then why a code red? Why not just pixie dust and rainbows and let's go? Because the other side of the coin is that same technology, generative AI, can also really challenge a business like Zapier if you don't seize the moment. And I think this is true for a lot of organizations right now. I don't think every organization had that imperative two and a half years ago, but nowadays many more are shoulder to shoulder with companies like Zapier, saying, hey, I don't know if we've ever had a bigger opportunity. But also, the ball's back up in the air. However good we thought we were, however dominant we thought we were, we've got to go play for it all over again. In any business that has the privilege of being around for more than a hot minute, you have these cycles.

SPEAKER_02:

Yeah.

SPEAKER_00:

And Google's had it, Meta's had it, every conceivable organization, all public sector organizations have the same phenomenon. But it can still be disorienting for teams, because you have a season where you feel like you're on top of the world, and then the snow globe gets shaken up and you're like, I can't even see two feet in front of me. I don't know what to do. So I have a lot of empathy for that. We definitely navigated that at Zapier. But that was the inspiration for the code red. And if I reflect on it two and a half years later, I have two reflections. The first one is it wasn't totally popular when we published it. There were folks who were like, this is literally alarmist, calm down. Or, I think I see what you're saying, but now I'm nervous and I don't like feeling nervous. And clearly we don't have all the answers, which makes me more nervous. But two and a half years later, I'm glad we didn't wait for consensus, because a consensus was never going to come. We've gotten much more clear on our opportunity and how we're gonna go do it. And we've gotten much more aligned as a team over the last two and a half years about the importance of that and the role that each of us needs to play in it. But the way we were able to get to that alignment wasn't by waiting for consensus. It was by being clear and then using that as an anchor point to build alignment. And it's hard to do. There's a saying: time will tell. We may be wrong about this, but we are not confused. We might be wrong, but we are not confused. And I think for leadership teams, especially in a moment like this, time will tell what we get right and what we get wrong. But the thing we can absolutely influence, and that I think we're really accountable for with our teams, is: are we being clear?

SPEAKER_03:

I love that saying, because it's like the permission to be wrong. We're all experimenting, but you're very clear on the path and the goal for the org. Like, that's super clear. And looking back, I saw the note about the 97% adoption rate. You went from 65%, to 77%, to 89%, to 97% adoption now. What were those repeatable moves that actually helped you move those numbers within the organization, even with the skeptics?

SPEAKER_00:

Two things in particular. One is anchoring a lot of that day-to-day use on problems that teams already have. AI is a method, right? It's a tool, it's a technology, it's a way of having impact. It's not the impact itself. And if we want folks to pick something up and invest in learning a new thing or what have you, it's just like for a lot of us: what's the motivation? Rather than tell folks you have to do this because I say so, why not show why it's important? Leaders need to have a point of view. We need to do our own homework. I actually think the first ingredient was that we insisted our exec team got hands on keyboard and did a lot of that initial wayfinding with the team. Because we started seeing, like, oh my gosh, here's an opportunity to remove toil from my job. And if it can do that for me, it can do that for these people who do similar work on my team. Or, here's a thing that AI is not yet very good at, so I'm going to share that to maybe save folks some time elsewhere, or maybe they can figure it out better than I can, challenge-accepted kind of stuff. So I think leading by example, show-me-greater-than-tell-me type stuff, that was helpful. And then creating a couple of really easy, standardized places where folks could do two things. One, ask questions. Even to this day, we have a Slack channel we started two and a half years ago called AI Help Desk. And big or small, it doesn't matter what the question is. It could be around tool use or governance, or it could be, I just can't get this thing to work. Whatever it is, you drop it in there and you can expect a speedy and helpful response, often from peers. So a lot of peer-to-peer sharing and support in that group. And the second happy path we set up is just a single, simple place for folks to share what they're learning. That could look like a demo or a retro, whatever the case may be. Something we're working to get better at is chronicling or archiving the best of the best, so that someone who joins the company and says, I want to be great at this, or, here's a typical issue I had in this job at my last company, and I bet Zapier has a way to do it better, what is it? Like, how quickly can we pattern-match our best-of-the-best AI workflows to jobs to be done across the company? I think it's a very common coordination or extension problem that teams are having right now. But to put a bow on it, we just had to get hands on keyboard and show our work, including the things we don't know. It's a great way to model experimentation, but it's also a great way to model psychological safety, right? Our execs are in the AI Help Desk channel asking for help too.

SPEAKER_03:

Yeah, I love that. I just recently got off a call with someone who was talking about, in this moment, going back to the speech by Teddy Roosevelt, the man in the arena, and just having the willingness to try. And so what I love about what you guys are doing is this joint willingness to try and try together and fail together and learn together. It sounds really powerful. I read something about having this attitude of a default to transparency, which is clear with the examples you just shared. So that's shaping how experiments are being shared and scaled in people operations. When you think about Slack channels and things like that, how is the transparency throughout the organization helping drive things forward?

SPEAKER_00:

Yeah, Zapier has a value that predates me at Zapier called default to transparency. And sometimes people ask, where'd that come from? I asked about it too when I was getting to know Wade, our founder-CEO, and the team before I even joined. And it turns out transparency is a really important ingredient, especially for a highly distributed team. You don't have this received wisdom from the water cooler or happening upon someone in the hallway. Now, I will tell you, most organizations these days of any reasonable size are distributed. Zapier also happens to be remote, but that's not really the point. It's the distribution of the team that makes transparency so important. At my last company, for example, another great company, it was mostly an office-based culture, but we had 13 offices all over the world, all these different time zones. That was a distributed team too. Same problems and opportunities that we have at Zapier, even though everyone at Zapier works from home or wherever. So transparency is just important for making sure everyone is on the same page. But it also helps, Mel, like you were saying, with other cultural ingredients that turn out to be really important. You want to have a culture of psychological safety and experimentation, a culture of being able to give and receive constructive feedback. Those all require various forms of transparency as well. The easy part now is actually the logistics. No company at this point is short of pretty powerful, cheap tools to actually deliver on transparency and keep everyone on the same page, even if the team is very spread out.

SPEAKER_03:

Is it fair to say transparency, this core value that existed even before the AI craze, is actually a key driver of the adoption rate you've had inside the organization? Just that core value.

SPEAKER_00:

It is. And you just made me think, Mel, of a bit of an analog to something that folks are talking about with AI right now. There's a lot of talk around context windows, really just meaning how much memory or knowledge a model or a workflow can hold in order to help make great decisions or get work done really well. It's one of the biggest conundrums or problems to be solved in AI right now, the context window. The same thing's true for humans. Just like AI's ability to do work really well or make great decisions is largely dependent on the information it has at its disposal, the same thing's true for humans too. It's always been that way. And I think that's part of what transparency helps with, right? It's widening the context window and making sure that it is similarly robust for all people, not just some people. Not just people who happen to be working in the headquarters, not just senior leaders. Every human in the company should be coming to work to do something really important for the organization every day. And if you really believe that, then you get very interested in making sure everyone has the information they need to be brilliant at their job.

SPEAKER_02:

Yeah, I couldn't agree more.

SPEAKER_03:

I think I read somewhere that AI fluency is now a requirement for all new hires with this four-level rubric. What does that rubric look like? And what does a pass look like in interviews? How do you measure people against that, just knowing it's still early days for everyone? And how do you level people up once they get into the organization?

SPEAKER_00:

Ooh, what a great question. Okay, so why do we do this? The reason is twofold. The first reason is that AI fluency is just very important for being successful at this company. And while we do a lot of value-added training during onboarding, there is some baseline fluency that folks simply must have. Now, what do we mean by AI fluency? We mean the ability to use AI to be unusually effective in the work that you're going to do at Zapier. What's interesting, if you look at the AI fluency rubric, which we've open-sourced, is that it's not all prompting skills and evals and traditional AI-builder technical skills. It's really two-thirds about craft and judgment, curiosity, strength of experimentation, and so on. These are timeless competencies. And where folks wash out of our AI fluency assessment, it's at least as often because they never got particularly good at those things, which, by the way, were really good qualities to have way before gen AI, as it is because they're lacking in prompting skills or some of these AI-builder hard skills. By the way, a lot of those AI-builder hard skills, those methodologies or best practices, are changing pretty quickly too. So we just have to have a good muscle inside the company for keeping up with all of that. But what we absolutely must have for everyone who joins the company are some of those timeless mindsets and capabilities, which, again, aren't specific to gen AI but are actually very helpful for building and using it. Let me share one other thing. There are four tiers, like you mentioned, to that framework. The first one is literally called unacceptable. We like to be clear at Zapier. It's as much about mindset as it is about anything related to skill. The folks who typically qualify as unacceptable have an almost ideological opposition to the use of AI, which is interesting. We still see people apply to jobs at Zapier and say, no AI, oh my goodness. And it's like, this is very important for us, so we just can't have that. But that's part of what's helpful about this screen, right? Hey, we can't work together. Now, we can ask hard questions about AI. In fact, we need intellectual honesty about thoughtful use of AI. We want that at the company. We don't just tolerate it. We need it. So that's not a fail. That's actually an add to the team, to be able to think critically. But dogmatic opposition to the use of AI, period, is incompatible with what our customers are counting on us for most and what we're trying to do in the world. The other three tiers are capable, adoptive, and, at the highest, transformative. For most roles at Zapier, you can be hired in at any of those three levels. There are certain roles in engineering and a couple of other places where adoptive, that second-to-highest tier, is the floor or the baseline. But for most roles, the floor is capable. And capable is really all about: I experiment with AI. I can share examples of how I've used it. I may not have a lot of examples yet of how it's measurably improving my or my team's work, but I can share example after example of what I'm trying and what I'm learning and what it's making me think. And I can also share a little bit about how I make decisions about what types of problems I try to solve with AI. We're not looking for a perfect answer, and we're not even looking for folks to perfectly agree with some of our frameworks on this. It's just that we're looking for folks who've really given this some thought and have both a critical eye and a sense of possibility.
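
For listeners who want to adapt something like this in their own org, here is a minimal sketch in Python of how a four-tier screen could be represented. The tier names (unacceptable, capable, adoptive, transformative) and the higher floor for engineering come straight from Brandon's description; the role families, field names, and scoring logic are hypothetical stand-ins, not Zapier's actual rubric.

    from enum import IntEnum

    class AIFluencyTier(IntEnum):
        """The four tiers Brandon names, lowest to highest."""
        UNACCEPTABLE = 0    # dogmatic opposition to using AI at all
        CAPABLE = 1         # experiments with AI, shares examples and reasoning
        ADOPTIVE = 2        # measurably improving own or team's work with AI
        TRANSFORMATIVE = 3  # reimagining workflows end to end

    # Hypothetical hiring floors per role family. Brandon notes engineering
    # (and a couple of other areas) require adoptive; most roles, capable.
    ROLE_FLOORS = {
        "engineering": AIFluencyTier.ADOPTIVE,
        "default": AIFluencyTier.CAPABLE,
    }

    def passes_screen(role_family: str, assessed_tier: AIFluencyTier) -> bool:
        """Return True if a candidate meets or exceeds the floor for the role."""
        floor = ROLE_FLOORS.get(role_family, ROLE_FLOORS["default"])
        return assessed_tier >= floor

    if __name__ == "__main__":
        print(passes_screen("engineering", AIFluencyTier.CAPABLE))  # False
        print(passes_screen("recruiting", AIFluencyTier.CAPABLE))   # True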

SPEAKER_03:

I think it's really important that you have that as a knockout, because, as you said, it's about what you're trying to accomplish in the world as a business. So it's important to measure people against that. But what stands out to me, and we're gonna share the rubric in the show notes so our listeners can access it too, maybe for inspiration in their own orgs, is that you're doubling down on those human capabilities you talked about. These were all important before AI, they're gonna be important way down the road, and they even align with what the World Economic Forum has said are the most important capabilities by 2030. So we'll share that out as well.

SPEAKER_04:

Out of curiosity, are you letting people just go and have agency? And are they bringing their own AI to work? Do you have firewalled AI? What are you all working with? Like how much experimentation do people have?

unknown:

Yeah.

SPEAKER_00:

Experimentation in a sandbox. And part of our accountability is to define the sandbox. To give you a couple of specific examples, any and all use of AI inside the company has to be on enterprise-grade models where, for example, none of the data that we put in can be trained on by the providers of the models, and so on. You know what's interesting, too? When procuring a lot of these AI tools, a lot of the procurement guardrails or screens that companies like Zapier have had for years apply here too. Right?

SPEAKER_04:

You want to know about PII data, you know, like the usual problems.

SPEAKER_00:

Exactly. So, you know, as long as organizations have strong data governance and procurement practices in place, we build on top of those. That's not to say there aren't a couple of eccentricities with AI tools that are worth layering on top, but 80% of it is built on the strength of your existing governance and data privacy programs. By the way, though, that does mean that if those are shaky, bringing a bunch of AI models and tools in can meaningfully increase your risk profile. One of the things that we recommend with a lot of our customers is to do a bit of a pressure test when you're early in your AI transformation journey. At the governance level, like we're talking about right now. At the operational level: do we have really strong standards of what excellence looks like? Because AI is meant to help us be excellent. So if we don't have a really crisp, measurable point of view on what great software development looks like, or what great recruiting looks like, or whatever the case may be, then we're building towards no particular end. So governance, operations, and then culture. We talked about some of this earlier, right, Mel and Francesca? Organizations that want to do something big with AI but don't have high degrees of psychological safety, don't have particularly great cultures of experimentation, and aren't yet cultures where people can give or get constructive feedback in every conceivable direction are gonna have a really hard time with this, even if they choose the best models, write big checks, and have great governance policies.

SPEAKER_03:

Do you think it starts with culture? What do you focus on first? Is it the governance or is culture really the baseline that has to be addressed to make sure this is successful?

SPEAKER_00:

I think it's hard to make a lot of meaningful progress without culture. Culture is necessary but not sufficient. All these ingredients are necessary but not sufficient. You need them together. But yeah, my mental model would start with the cultural components. When we were embarking on this, we retuned our engagement surveys to measure some of these ingredients we've talked about more explicitly. And like any organization that's more than 20 people, the answer to those questions around culture of experimentation, culture of psychological safety, is that it depends.

SPEAKER_02:

Yeah.

SPEAKER_00:

Which teams, which geographies, which this, that, and the next thing. The averages are unhelpful. Understanding which parts of the organization or segments of the team are strong in some of these cultural indicators, and which are thinner and in need of shoring up or improvement, that's what we really wanted to understand. And I think teams can do that as they also strengthen their governance policies, their data privacy posture, and what have you. But what I feel really strongly about is that you can't do the culture part later. It may not be the only thing you do in your first motion or your first big wave of investment, but it has to be part of the first wave.

SPEAKER_04:

Something I'm struck by with so many of the people we're talking to, CPOs, CHROs: they're just coming out of COVID, they're trying to reset, and here comes AI. And a lot of folks have maybe under-resourced teams; they're a bit lean. So I'm curious what it looked like from the HR organization's angle when you did a code red. What was the setup? What was the charter? What was the budget? Big, long sob story. How did you get this done through HR?

SPEAKER_00:

Yeah, I'll start by saying we're not done yet, but there are a couple of things I'd say were helpful, and also necessary. The first one is, like you were saying, Francesca, the Zapier people team already had a lot to do when this opportunity cropped up. And I'll tell you, for the first year and a half of that AI journey, so spring of '23 all the way through the end of 2024, the people team, like the rest of Zapier, made meaningful gains in AI fluency, started using more automation and AI to improve everyday work, and started to see some interesting productivity and quality gains. That's all good on the adoption part of the curve. But when it comes to transformation, complete reimagination with two, three, five, 10x improvements in efficiency or quality, and employee experience, by the way, which is part of what we get to play for with all this, to make the jobs for humans better: I didn't come out of last year, 2024, feeling like we were on a path to unlock that in 2025. And talking with our people LT, I was like, part of the reason is what you mentioned, Francesca: everyone's got a day job. The team was not lacking in will, and it wasn't lacking in skill, to take on some of this bigger redesign-work-with-AI opportunity. It was really just focus. Like we talked about earlier, what's the thing you believe? Focus, do fewer things better. And my takeaway at the end of 2024 was, I'm not leading this team by example in that area right now. So in the first half of 2025, as a team, we ran a project we called Clear the Lane. And here's how it worked: we made a spreadsheet of every single people practice, product, and policy that we are accountable for at the company. And I asked the team, I said, hey, by the end of June, so halfway through 2025, we will meaningfully further automate, de-scope, or retire entirely at least 50% of the rows on the spreadsheet. And I'll tell you something: every team, including our own product teams that face the customer, is building stuff. We do a lot of building. We don't always do as good a job, or spend as much of our attention, on pruning, on taking stock. Something that was super effective two years ago may just not be quite as relevant as it was then. Things change. But there are very human reasons why we're not as good at the pruning. As humans, we develop pride in and affinity for the things that we build. And this is a good thing, right? Being proud of the thing that has our name next to it. And sometimes, on Maslow's hierarchy, right at the base of the pyramid, it's also just about job security. One thing that really helped in getting the people team at Zapier rowing together on this clear-the-lane effort was being really clear that we were asking ourselves to let go of some things so that we could pick up this. And we had no shortage of reimagine-work-with-AI-and-automation opportunities on our backlog, so we had the "this." Across almost every part of the people team, I just asked folks: if you had this time back, what would you do with it as it relates to our opportunity with AI? And folks would say, I would do this, and I would do this, but I don't have time to do it. Okay. That was really helpful, because I think it is a lot for leaders to ask folks, both from a pride and a sense-of-job-security point of view, to just start retiring or shredding a bunch of their current accountabilities without having a vision for what we're gonna pick up and be working on instead.
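
Since Clear the Lane is at heart a spreadsheet exercise, here is a minimal sketch, in Python, of how a team could track progress against Brandon's 50% target. The column name and file path are hypothetical; the three actions (automate, de-scope, retire) and the threshold come from the episode.

    import csv
    from collections import Counter

    # Actions that count toward the Clear the Lane target; "keep" does not.
    CLEARED = {"automate", "de-scope", "retire"}

    def clear_the_lane_progress(path: str) -> float:
        """Return the fraction of rows marked automate, de-scope, or retire."""
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))  # expects a "decision" column
        if not rows:
            return 0.0
        tally = Counter(row["decision"].strip().lower() for row in rows)
        return sum(tally[action] for action in CLEARED) / len(rows)

    if __name__ == "__main__":
        progress = clear_the_lane_progress("people_practices.csv")
        print(f"{progress:.0%} of rows cleared -- target is at least 50%")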

SPEAKER_04:

I find it actually really refreshing though, because I am such a big believer in less is more. And I don't frequently see people who don't prune, who don't edit, be able to do big transformation work. Most HR teams aren't getting flush with new budget necessarily, so you have to make space for it. I think it comes from the top to say we are gonna make space for it, we are gonna make priorities. It doesn't just happen. How do you think your team felt about it?

SPEAKER_00:

As a leader of a team, you never know for sure, right? There's always some fog between what you're seeing and hearing and how people truly feel. But the best I can do is ask folks as we're doing the work, ask folks in skip-levels, and then let what's actually happening be the ultimate barometer. Because when you're undertaking something like this clear-the-lane effort, you can feel, just by the pace of progress and folks' engagement with it, the ratio of friction versus forward inertia, wind at your back versus wind at your face. And the overall sensation in the six-month period when we were doing this was largely wind at our backs. So on one hand, part of my critique of my own leadership is that I wish I had done it earlier, to some degree, but maybe not that much earlier, if that makes sense. Because I think if we had come out swinging in spring of 2023, when we were just starting to wrap our heads around the opportunity, it may have been too soon. As a team, we wouldn't have had as crisp a vision for what we thought was possible and what we would do with the time we would get back if we started pruning back some of our current portfolio.

SPEAKER_04:

Yeah. Certainly a good hygiene to get into, not only as a team but as an individual too. Like, always asking yourself, with what I'm doing, what can I give up in order to make space for more transformational things?

SPEAKER_00:

Absolutely. It's also a lot better when the team does it with itself rather than the CEO or the board coming around at some point and asking you to do it as part of some exercise. We branded it Clear the Lane, but if you think about it, all teams, including HR teams, do this over and over again every few years. And the thing it got me thinking about is: do we want to be in charge of our own destiny, or are we gonna wait until someone asks us to do it?

SPEAKER_04:

That's the question to be asking, for sure. Super smart. Zapier named its first AI automation engineer for HR. What is up with that?

SPEAKER_00:

Oh, this is one of my favorite things that's happened this year on this topic. Like anything with AI, we insist on building against an area of need, not just for its own sake. And with our own AI efforts on the people team, remember, we'd done Clear the Lane. We'd created a little bit of elbow room to take on some of these bigger opportunities. And now the question is, okay, let's build. Like a lot of organizations, on the people team we had some pretty skillful builders. And we had some folks who had great ideas and great craft, who knew what the pain points were, but didn't have the builder skills to take an idea into production. And this is a very common pattern we saw across the company. So we asked a member of our existing team who was among our most skillful builders to take on this new job: AI automation engineer for HR. And what do they do? They do one thing, and it produces two benefits. There's this concept called the forward-deployed engineer, which is someone who really knows how to build stuff; they come alongside teams who wake up doing a given type of work every day, and they help them see the new way of doing it and help them build towards it. Our model of this, the AI automation engineer, is to teach people how to fish. So Emily, our AI automation engineer for HR, does co-builds with teams: talent acquisition, total rewards, people operations. And as she's doing that, and this is the second benefit, she's building their AI fluency through hands-on work. In fact, provocatively, Zapier does effectively no traditional L&D or classroom-based learning as it relates to AI at all. I think that's debatable, by the way; there are other ways to think about that. It's a very hands-on approach. And that's her job. Now, increasingly, in her particular flavor of that job, she also does this with some of our customers, which is a whole other story for another day. But now you have someone at Zapier who was hired as a very talented L&D specialist, who now is learning all of these other parts of the HR shop, is helping customers do the same, is learning about how account teams work, and is part of our sales and customer onboarding motion. And it just goes to show you that people are capable of so much. Because what makes someone like Emily great as an AI automation engineer is that she has three superpowers. One, craft: she has worked in and around HR teams long enough to know a lot of the general things we're trying to do in the world and what great looks like. Two, builder skills: she can help translate some of those opportunities, that definition of excellence, into automation and AI workflows. And three, and this one's maybe especially obvious for Emily because she came up from within L&D at Zapier, she's a great teacher and coach. So when she's doing these co-builds, it's incredible, just lovely, because she's got the skills, but more importantly, she really is doing capacity building, leaning on some of her L&D expertise. So that's the AI automation engineer concept. We have three more of them coming online in different departments by the end of the year. And different companies are going to call them different things. But I think this particular talent investment is a really nice accelerator, because you end up building things that produce business impact, but you do it in a way that raises the AI fluency floor for the team.
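
To make the co-build idea concrete: a lot of what an AI automation engineer ships on a platform like Zapier is small glue logic between workflow steps. Here is a minimal standalone Python sketch of a keyword-triage step an HR team might co-build; the routing rules, team names, and field names are entirely hypothetical. In a Code by Zapier Python step, the incoming fields would arrive via the input_data dict and results would be returned via output; this version uses a plain function so it runs anywhere.

    # Hypothetical keyword routes for an HR help-desk triage step.
    ROUTES = {
        "benefits": ("401k", "insurance", "pto", "leave"),
        "payroll": ("paycheck", "salary", "w2", "tax"),
        "recruiting": ("interview", "offer", "candidate"),
    }

    def route_question(question: str) -> dict:
        """Route an HR help-desk question to an owning team by keyword."""
        q = question.lower()
        for team, keywords in ROUTES.items():
            if any(k in q for k in keywords):
                return {"team": team, "question": question}
        return {"team": "people-ops", "question": question}  # default owner

    if __name__ == "__main__":
        print(route_question("When does my insurance enrollment close?"))
        # -> {'team': 'benefits', 'question': '...'}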

SPEAKER_04:

Love it. I also love it from a career perspective: someone like Emily is getting rev-gen experience, being on the customer side and internal, and all of a sudden we see the evolution of careers looking almost more like a portfolio internally. It's fantastic. I love seeing it.

SPEAKER_03:

Something that really stands out to me in Zapier's story, that's so powerful, is that you've created this threshold of establishing good habits and hygiene to be successful, for any organization. A lot of people are experimenting right now. What advice would you give to an organization that is stuck in what we're calling pilot purgatory? They're just stuck in that testing phase. What would you share with them? What guidance would you give?

SPEAKER_00:

You made me think of two things, Mel. The first is, sometimes we see pilots that are just not successful; I'll take those off the table for a minute. Because when I hear pilot purgatory, I hear something that has not definitively been proven not to work, but is somehow not converting into a production-grade system that's producing impact. The first place I look is: what's the problem we were trying to solve with this? Because sometimes the answer is it wasn't a big enough or urgent enough problem. It was a nice-to-have, not a must-have. Ideally, our AI pilots are grounded in a problem to solve that is almost a burning platform, if not the burning platform. That's thing number one. I like to go right to the root of what we were trying to do here and how existential it is for the organization. The second one on pilot purgatory actually comes back to that notion of the AI automation engineer. Sometimes folks just aren't equipped to convert a pilot into a production-grade workflow or system, and they need help getting it over the line.

SPEAKER_03:

It made me think, too, about that article that came out that was super inflammatory: 95% of AI efforts are failing. And I'm like, how much of that is the tool? Or, to your point, are they solving for the wrong problem?

SPEAKER_00:

Oh, you bet. That was late summer. And then two or three months later, Wharton, which for three years now has co-authored an AI efficacy study, showed actually really meaningful gains and outcomes through the use of AI. And you look at that and you're like, how could both of those things be true? But I actually think those two findings are totally compatible with each other. The MIT study was looking at success rate. The Wharton study was speaking largely to the impact of the things that are working. And first of all, if 95% are failing this year, maybe next year it's 85%. I don't think it's ever going to flip. Otherwise, I don't think we're being ambitious enough. I don't want to see a 95% success rate if we're pushing the frontier. But what was neat about seeing those two studies together this year is that one's focused on success rate, and the other's focused on, basically, magnitude of impact. And if we're an organization today, even with that 95% fail rate, I'm interested in what we can learn about how to have a better conversion rate for success. But I'm really interested in the so-what for the 5% that succeeded.

SPEAKER_03:

Right.

SPEAKER_00:

Because if the 5% that succeeded, as our CEO Wade will say about things like this, changed the color of the sky for our customers or for the company, I pretty quickly forget about the 95%. But it gets back to what we were talking about earlier: when we're making these investments, let's play for things that, if they are successful, would be a really big deal.

SPEAKER_03:

So one of the things we're always super sensitive about is tech sprawl, tool sprawl, having too much. Like, now you have a bunch of Frankentools, potentially. And with people building, you're building up builders as people join. So I guess: what's your one rule, your one golden rule, that everyone follows to avoid the sprawl with tools across the organization?

SPEAKER_00:

Oh, it's so interesting. I wish we had one on tool sprawl. We have one overarching principle for AI use, which actually relates back to this topic and is quite helpful, and which gets to a bigger point around governance and AI policy. The terms and conditions, the specifics of AI use guidelines and what have you, are very important, but they're not a substitute for one or two really big ideas that everyone has committed to memory and actively uses to make decisions about how they use AI. And one of those principles at Zapier comes from our CTO, Bryan. I'll never forget the first time he said it, because it was so helpful and so sticky. It was impossible to forget. He said, hey, with AI, you can delegate a bunch of the work, but you cannot delegate the accountability. Full stop, period. And you just think about how much water a principle like that can carry for a team in making these everyday decisions, because no policy, no matter how great it is at a given point in time, can cover all the situations that folks are going to encounter. So, what does that have to do with tool sprawl? With tool sprawl, first of all, a lot of empathy. There is a lot out there. It is gobsmacking and impossible to keep up with. We accept tool sprawl where it's in service of experimentation, and we did accept a fair amount of messiness at the beginning. Everything, though, had to go through the same high-rigor procurement process. So sprawl could happen as an outcome of saying yes, but the yes still had to come through the same guardrails and screens. We are starting to consolidate. The type of sprawl we're experiencing right now, and we haven't totally figured this out yet, is similar to what we talked about earlier: workflow sprawl. There is a season of experimentation, which is really helpful. I'll give you an example. Take a recruiting team. Let's say you have 10 recruiters in your organization, and for a time, based on their craft and their builder skills and their hypotheses, they pick five big typical pain points or sources of toil for recruiters, and they all try different ways of solving them with the tools they have available. A season of experimentation, great. Some of these solutions are better than others. The experimentation season is important, but at some point, once you have your best of breed, every day after that it is meaningfully messy and inefficient for 10 recruiters to have six different ways of solving the same problem when we know, if we really took a beat, that there's one best way to do it as we understand it right now. That narrowing is hard, and it's hard for us too. This type of workflow consolidation, establishing and standardizing our golden paths, is actually part of our scorecard for 2026.

SPEAKER_04:

Can we talk about fear for a second?

SPEAKER_00:

How much time do you have?

SPEAKER_04:

Listen, Mel and I think about this all the time, just because we do a lot of future-of-work stuff and we talk to a lot of futurists. You hear everything from "the biggest company in 10 years is going to be 50 people and we're all going to be farmers" to "AI is going to, yes, take some jobs, but create many more." And I'm curious, as someone who has, in my mind, been on the forefront of AI adoption and AI transformation in an organization: what is your hunch? What is your gut? Are you finding there's a lot of replacement, or that it's evolving and morphing? If I had to ask you, where is the workforce going?

SPEAKER_00:

Yeah, I don't know. And I think anyone who tells us that they do is kidding themselves. I think about this a lot. I want to know. I cannot know. And what we talk about with our team is: this is an important question, so what can we influence within it? For one, we have control over our workforce strategy. When Zapier realizes productivity gains, what do we do? Our customer support team, for example, over the last 18 months or so, has doubled its productivity by cutting in half the amount of time it takes to respond to a customer support ticket. Does the customer support team at Zapier have half as many people today as it did two years ago?

unknown:

No.

SPEAKER_00:

But what is true? One, the jobs are a little different. The ratio of frontline support people to technical operations people, that's changed. There are some new jobs and new ratios of jobs on the team. But the bigger one, in addition to that, is that the team is now able to do things that we had dreamed of being able to do, that were never possible in terms of bandwidth or finances before. The specific example would be: as the team was able to use more automation and AI to respond to tickets faster and unlock some bandwidth, they're now participating in some of our account expansion. This is good for customers, it's good for Zapier the business, and it's a new set of skills and experience for the team. That's part of the opportunity that leadership teams have, to make these decisions. That's what we can control. I don't know the macroeconomic outlook for five years from now or what have you. I do have one other little hot take on this, though. No company, no management team is going to solve this by itself; this is gonna have to be a policy solve, especially in a world where there's some type of macro net job loss. The most influential CEOs in the world aren't gonna solve this problem. I was a political science major, and I started my work in the public sector. I believe in the importance of governments and policy to help engineer a good path forward in places where, with a lot of tech disruption, there are just all these gaps between what individual companies or organizations can do by themselves.

SPEAKER_04:

Yeah, super fair. Here's a question for you about what you'd tell an individual employee, or what you'd tell a manager. What would be the 30-second script you would give a manager who might have employees that are really freaked out about using AI, or have a fear of being irrelevant or losing their job? What's the rah-rah there? What's the 30-second good speech to give somebody?

SPEAKER_00:

Oh, yeah. It's a human moment. And the most important thing is that we can talk about it. So that's thing number one. Thing number two is that the way the work we do gets done is changing. And the best way to stay not just relevant, but maybe even to take how well we do it up a couple notches, is to lean into the moment, not lean away. This is a row-with-the-current, not row-against-the-current, type of moment. The other thing I would show them is that we have a framework for impact with AI at Zapier. It starts with efficiency, saving time and money, but there are two other pieces that are just as important. We want to save some time or money where we can. We also want to improve the quality of the work that we do. We spend a lot of time at work; we want to be good at the things that we do. We want the work that we do to have quality. And the third part of that impact framework is employee experience. We are going to change the way we get work done, but part of how we scope what that looks like, and it's in writing, it's how we architect this stuff, is to improve the experience of the humans doing the work while we improve the efficiency and quality of the work too. That's the answer. That's what we play for. And we will show, with our deeds, not just with our words, that this is true. We have some living examples of this at the company, and that helps. It's a virtuous cycle. Companies need to get their initial quick wins that demonstrate their intentions, and then I think a lot of this gets more straightforward. That being said, I still have concerns too, right? But the people at Zapier, and at so many of our organizations, are really thoughtful, and critiques, if they're constructive, are a really important part of the conversation. So if anything, I just want our heads in the game, and organizations that are being very earnest and thoughtful about this to be in the mix. Because, again, it's a technology. The ratio of used-for-good versus used-for-something-other-than-good is going to be determined by everyday decisions made by organizations all over the world. And we want to be part of that.

SPEAKER_04:

You give me hope, Brandon.

SPEAKER_02:

I appreciate that.

SPEAKER_03:

Brandon, we like to do what we call rapid round. You can respond in one word, one sentence, whatever you feel comfortable with. But this is just for us to get to know a little bit more about you. Are you okay if we dive right in?

SPEAKER_00:

Yeah, let's do it now.

SPEAKER_03:

All right, perfect. It's 2030. What's work going to look like?

SPEAKER_00:

Better, hopefully. But only if we make it so.

SPEAKER_03:

Yeah, I think you're right. All right. What's one thing about corporate culture you just want to see die already?

SPEAKER_00:

Oh gosh. People saying different things depending on who's in the room.

SPEAKER_03:

I can't take it. What's the greatest opportunity that you think most organizations are missing out on?

SPEAKER_00:

Oh, do fewer things better. Focus.

SPEAKER_03:

Yeah, the whole say-no-to-do-more thing. What music are you listening to right now? What's on your playlist?

SPEAKER_00:

Oh, it's all about KPop Demon Hunters over here with my nine-year-old and six-year-old. Yeah, with the dance moves, naturally. Yeah.

SPEAKER_03:

Very nice. Very nice. Any TikToks? You doing any TikToks with them on it? Or no?

SPEAKER_00:

Hold me back. Yeah.

SPEAKER_04:

What do you think about KPop Demon Hunters? Do you like it? Did you like the movie?

SPEAKER_00:

Oh, cultural phenomenon. Yeah. There's a lot there, from the art to the music, the whole deal. And it's got some sass, right?

SPEAKER_04:

Yeah. Yeah. Yeah. I like Derpy the best. The cat, the crow. It's my favorite. I love that. Yeah.

SPEAKER_03:

What are you reading? It could be an audiobook. It could be a journal. It could be anything. What's got your attention these days?

SPEAKER_00:

Yeah, so I'm almost done. I'm going to finish it this weekend. I am reading The Goldfinch by Donna Tartt. Highly recommended. Just freaking captivating. And for a book that good, it's also relatively new. I think it's only 12 years old or so.

SPEAKER_02:

Very nice. Very nice. Okay. Who do you really admire?

SPEAKER_00:

Rather than share a specific person, I really admire people who work really hard and find joy in the work, even when it's hard. I grew up in and around the Detroit area, and a lot of my family has worked, and in some cases still does work, in the auto industry. That's just a hard-working culture, where folks still to this day have a lot of pride and a lot of craft in the work. And it shows. And when I see that at work, being with my own team, that stands out too. It's just a fun environment to be around.

SPEAKER_03:

Yeah, there's nothing like liking what you do and just feeling energized by it.

SPEAKER_00:

Novel. I know. I wish it were more common.

SPEAKER_03:

Yeah, same. What's one piece of advice? Maybe something you received or just something you've learned along the way that you wish everyone would know.

SPEAKER_00:

One of the best pieces of received wisdom I ever got was: when you wake up in the morning and you look at your day, to the extent that you logistically can, do the hardest thing first.

SPEAKER_03:

Yeah, smart. Otherwise, it's a can that always gets kicked. You're like limited.

SPEAKER_00:

Isn't that the truth? We've all done it, me included.

SPEAKER_03:

Yeah, it's the same. What's one thing you know today that you wish your team knew a year ago that you can share with anyone else who might just be starting out?

SPEAKER_00:

My optimism for how quickly folks can pick up some of these new ways of working is meaningfully greater today than it was a year ago, just because I've seen so many more examples of it. And I'm also seeing what it means for people: how much more capable, powerful, and proud of your work you can be when you can do it in some of the ways that we're starting to see. So that would be my parting thought.

SPEAKER_02:

Thank you for your time today.

SPEAKER_03:

Appreciate you. Brandon, how can they stay in touch with you? What's the best way to follow you? Perfect. All right.

SPEAKER_02:

Thanks for having me. Thanks, Brandon. Thank you.

SPEAKER_03:

This episode was produced, edited, and all things by us: myself, Mel Plett, and Francesca Ranieri. Our music is by Pink Zebra. And if you loved this conversation and you want to contribute your thoughts, please do. You can visit us at yourworkfriends.com, but you can also join us over on LinkedIn; join us on the socials. And if you liked this and you've benefited from this episode, and you think someone else can benefit from it, please rate and subscribe. We'd really appreciate it. That helps keep us going. Take care, friends. Bye, friends.