Code with Jason

285 - Michael Ferranti, Chief Marketing Officer at Unleash

Jason Swett



In this episode I talk with Michael Ferranti from Unleash about feature flags, trunk-based development, and why DevOps metrics alone aren't sufficient. We discuss FeatureOps—focusing on customer outcomes rather than just code delivery—plus the "three voices" (engineering, business, customer) and AI's role in accelerating feedback loops.

Links:

A Snail-Mail Newsletter For Developers

SPEAKER_00

Hey, it's Jason, host of the Code with Jason podcast. You're a developer. You like to listen to podcasts. You're listening to one right now. Maybe you like to read blogs and subscribe to email newsletters and stuff like that to keep in touch. Email newsletters are a really nice way to keep on top of what's going on in the programming world. Except they're actually not. I don't know about you, but the last thing that I want to do after a long day of staring at a screen is sit there and stare at the screen some more. That's why I started a different kind of newsletter. It's a snail-mail programming newsletter. That's right, I send an actual envelope in the mail containing a paper newsletter that you can hold in your hands. You can read it on your living room couch, at your kitchen table, in your bed, or in someone else's bed. And when they say, "What are you doing in my bed?" you can say, "I'm reading Jason's newsletter." So what might you find in this snail-mail programming newsletter? You can read about all kinds of programming topics like object-oriented programming, testing, DevOps, and AI. Most of it's pretty technology agnostic. You can also read about other non-programming topics like philosophy, evolutionary theory, business, marketing, economics, psychology, music, cooking, history, geology, language, culture, robotics, and farming. The name of the newsletter is Nonsense Monthly. Here's what some of my readers are saying about it. Helmut Kobler from Los Angeles says, "Thanks much for sending the newsletter. I got it about a week ago and read it on my sofa. It was a totally different experience than reading it on my computer or iPad. It felt more relaxed, more meaningful, something special and out of the ordinary. I'm sure that's what you were going for. So just wanted to let you know that you succeeded. Looking forward to more."
Drew Bragg from Philadelphia says, "Nonsense Monthly is the only newsletter I deliberately set aside time to read. I read a lot of great newsletters, but there's just something about receiving a piece of mail, physically opening it, and sitting down to read it on paper that is just so awesome. It feels like a lost luxury." Chris Donnier from Dickinson, Texas, says, "Just finished reading my first Nonsense Monthly snail-mail newsletter and truly enjoyed it. Something about holding a physical piece of paper that just feels good. Thank you for this. Can't wait for the next one." Dear listener, if you would like to get letters in the mail from yours truly every month, you can go sign up at nonsensemonthly.com. That's nonsensemonthly.com. I'll say it one more time: nonsensemonthly.com. And now, without further ado, here is today's episode. Hey, today I'm here with Michael Ferranti. Michael, welcome.

SPEAKER_01

Hey Jason, nice to be here.

SPEAKER_00

Nice to have you here. So you're from a company called Unleash. Tell us about Unleash and tell us about yourself.

Feature Flags 101 And Why They Matter

SPEAKER_01

Great. Well, really excited to be here. I guess some background on me first, because before people care about Unleash, they might care about the person telling them about it and whether they should tune in or tune out. I've been a product leader in enterprise software, DevOps tools, infrastructure, code, that whole ecosystem, for well over a decade now in a variety of roles. My current company, Unleash, is a developer tool platform. Our mission is to make life easier for developers, and the particular way we do that is through a popular open source feature management platform called Unleash. Feature management, for those who don't know, and you may have heard the term feature flags, is basically the ability to control what appears in production via remote toggles and configurations. So instead of, "Hey, I have a new version of my application and I want to deploy it, so I'm going to do a code deployment via CI/CD," which takes some time, because code actually moves from over here to over there and then it's available on some server somewhere, with feature flags you can flip a toggle, and code that's already running on a production server but latent, not exposed to any user, becomes exposed to users. That's particularly interesting for a number of reasons. One, for your developers listening in: one of the banes of many developers' existence is merge conflicts. It's like, okay, I've been working on this branch for a week, it's awesome, I'm ready to open my PR and merge this thing. And it's like, oops, now you've got to fix all of these merge conflicts. That happens when there's drift between the trunk and the code that you've been working on.
What teams have decided is a much better way to do it is to merge back into trunk as often and as quickly as possible. So instead of sitting on a branch while trunk moves ahead of me, I just push the code, the not-yet-working code, by the way, that I've been working on. I commit it and push it up to trunk. Now, what that means is that at any given time there's a bunch of code on trunk that's not in a working state. You might say, "Well, that sounds like a really bad idea, Michael. Maybe you should think about a different job." Well, the beauty of it is that all of that code, when it's wrapped in a feature flag, can be off. It is running in production; however, it's not exposed to anybody. But at the point where I'm like, "Hey, actually, I think I have a pretty good working prototype of this, I want to test it out," I can flip that flag. I can make it available to myself and my teammates internally, to beta users, to users who are only in, say, New York City, to people who are only on a certain plan type, and I can gradually roll out the change. So instead of doing a lot of synthetic testing with a variety of techniques, I can actually test this code in production in a controlled way. If it doesn't work, I can roll it back, fix it, and roll it back out. That's generally what feature management allows you to do. And it has a bunch of other implications that I think are really interesting, which we'll get to later in the podcast. But that's the high-level view.
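To make that concrete, here's a minimal sketch of how a deterministic gradual-rollout flag check can work. This is not Unleash's actual implementation; the flag name, the hashing scheme, and the helper functions are all invented for illustration. The key property is that the same user always gets the same answer for the same flag, so widening the rollout from 10% to 50% only ever adds users.

```python
import hashlib

def is_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into 0-99 and compare against
    the rollout percentage. Hashing (flag, user) together means a
    given user lands in a different bucket for each flag."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

def checkout(user_id: str) -> str:
    # The new flow is merged to trunk and running in production,
    # but stays dark until the flag lets this user through.
    if is_enabled("new-checkout", user_id, rollout_percent=10):
        return "new checkout flow"
    return "old checkout flow"
```

Targeting rules like "only users in New York City" or "only on a certain plan type" would layer extra predicates on top of the same check.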

Continuous Integration Without The Pain

SPEAKER_00

Yeah, that's great. And that touches on some really important ideas. I'm going to restate some of what you said in my own words. Anytime you have a big change, whether that's a big deployment or, like you said, a feature branch you've been working on for a week so there's a big difference between your work and what's on the master branch, there's a risk of something going wrong. The more stuff there is, the more opportunities there are for something to go wrong. And that's where the idea of continuous integration comes from. Instead of integrating infrequently, you integrate more frequently. Continuous integration as a term is maybe a little misleading, because you can't literally integrate continuously; it's just integrating at intervals that are closer to continuous. And what does that mean? Maybe it means integrating several times a day instead of once a week: very short-lived branches off of master that only live for a couple hours, or at most maybe a day or two. Some teams even commit directly to master, although frankly, I have yet to see that actually work. I'm very curious about it, but I haven't seen anybody do it successfully. Anyway: keeping the delta small between the master branch and whatever you're working on. I often encounter an objection when I suggest that people do this. You probably know what this objection is, Michael. It's: what if I'm working on something big that can't be done in just a few hours, or even a day? Do you encounter that objection too?

SPEAKER_01

Yeah, yeah, exactly. I think it's a fallacy. The fallacy in that line of thinking is that the more important the thing I'm working on is, the less I should use this approach. As in: sure, that makes sense if the risk of my work breaking something, because it's unfinished or untested or whatever, is really low; then I can use this continuous integration approach. "But I can't do that. I work at a bank. The stuff we work on is really important. We could lose people's money, and therefore I need to integrate less."

SPEAKER_00

That's fascinating. I've never encountered that, and it's so interesting because that's precisely the opposite. Risk mitigation is exactly why you integrate more frequently.

Risk, Compliance, And Enterprise Realities

SPEAKER_01

Yeah, exactly. Tell that to, and I love my friends in the banking industry, but tell that to a product manager or release manager at a big bank, where it's like, "Jason, I get it. I've read the O'Reilly books too. However, we have a chief compliance officer who says we have to do things a certain way." This is where "the future is here, it's just not evenly distributed yet" comes from. Which is to say, you're exactly right: because the cost of failure is so high, you want to integrate more frequently. And yet there are barriers in place at the companies we all rely on, our banks, our insurers, our healthcare providers, companies that absolutely do not want to fail in production, and yet those are the ones least likely to work this way. Now, things have come a long way in the last decade. We work with a lot of very large organizations that do develop this way. But to be frank, it's a transformative process for them, which means 100% of their code is not being delivered this way. They typically start with, let's say you're a 150-year-old bank, the mobile application, making sure that an experience like mobile check deposit works really well. That's a brand new application. Typically, you're not going to take your journaling system that's running on COBOL and all of a sudden move it to trunk-based development. But that objection is something we hear all the time. And it's actually one of the main reasons we advocate for feature management broadly, as a way of working.
Like version control: version control is great. GitHub wants you to use their version control system, Bitbucket theirs, GitLab theirs, but everybody at all those companies is going to make an argument first and foremost for why version control at all. Most people are like, please, you don't have to use ours, but you have to use something, because reasons. We believe feature management is one of those arguments, like CI/CD, like version control, like two-factor authentication. It's something engineering teams should embrace, not because it's good for Unleash personally, but because it's better for you, it's better for your company, and it's better for me as a consumer. Because if I'm relying on your service, I hope you have the ability to disable something that's causing issues in production without paging an SRE to come in and SSH into a server to reboot something.
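That last point is the kill-switch use of a flag. A sketch of the idea, with the flag name, the in-memory flag store, and the downstream call all invented for illustration (in a real setup the flag state would come from a feature-management service such as Unleash rather than a dict):

```python
# Hypothetical flag store; stands in for a remote feature-management service.
FLAGS = {"recommendations-enabled": True}

def fetch_recommendations(product_id: str) -> list:
    # Stand-in for a call to a potentially flaky downstream service.
    return ["rec-1", "rec-2"]

def product_page(product_id: str) -> dict:
    page = {"id": product_id, "title": f"Product {product_id}"}
    # If the recommendations service starts failing in production,
    # flipping this flag off removes the call path immediately:
    # no redeploy, no SSH-ing into servers, no paging an SRE.
    if FLAGS.get("recommendations-enabled", False):
        page["recommendations"] = fetch_recommendations(product_id)
    return page
```

The page degrades gracefully (it simply omits recommendations) instead of erroring, which is what makes the toggle safe to flip under pressure.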

SPEAKER_00

Yeah. If I may interject something: Unleash is a tool, but feature flagging is a principle, just like Git is a tool but source control is a principle. And these are sound principles independently of what tools you might use.

From DevOps To FeatureOps

SPEAKER_01

Exactly. And recently we've been noticing something our customers are doing, and we've started to talk about it as well. It's this concept we now call FeatureOps. It's interesting: DevOps is obviously a thing, and it's a set of principles and practices that go beyond coding and how to deliver software. It's also cultural practices, how teams interact with each other, how many pizzas you need to feed the group, right? Two-pizza teams is a principle of DevOps. And what we noticed, what our customers noticed, is that there was this massive move to DevOps in, let's call it the early 2010s: the birth of Docker, the birth of Kubernetes, the real maturity of CI/CD, and then the infrastructure platforms of the cloud providers that let you do all of these things in a fully automated manner. The APIs and the scalability of the systems were there. And DevOps came along and replaced the old way of doing things, which was more sysadmin-oriented. The challenge with that model is that it's still very internally focused. And, well, you said the show is not too serious and we can go off the rails a little, so let me take a detour. Amazon.com was famously customer obsessed, and they became the company they are because of that obsession with customers. That means you look outside of yourself, not inside of yourself, to determine what the market wants, what people desire. And then there's the whole other chain of, well, that led to AWS as an infrastructure platform. But in my view it's the same thing.
It's this customer obsession that enabled them to build the products and services that made them one of the iconic businesses of a generation. It's all about external focus. Customer obsession is about external focus, not internal focus. DevOps, in our customers' view, and what has become our view, is a very internally focused set of processes and mechanisms, because it's about how we deliver code most efficiently. Now, code represents something that customers interact with and desire, but it's not the thing itself. Not to throw my parents under the bus, but I'll use them as an example of a generation: when you ask them what's happening on their iPhone or iPad, they'll talk in terms of the functionality, not in terms of the code. Now, you and I both know that without code that started on a local machine somewhere, got pushed to a server, probably Amazon or Google, through some set of automated techniques, got deployed, with networking and ingress and all the rest, you don't actually end up with that experience. But for the end user, it's really about the experience: is this application doing what I want? Did the design change in some way? The value of that application, in other words, is wrapped up in the functionality it provides, not in the code that delivers that functionality. And by focusing on the delivery pipelines with DevOps, you end up in a situation where an executive can say to the mobile team, "We've spent $50 million on this DevOps transformation. Can anybody tell me why our mobile app still sucks?"
"We've spent all this money and done all these things, I've been aware of all these projects, and I hear we're deploying more frequently and faster, our mean time to recovery is down, and yet the thing our customers interact with, our mobile app, isn't as good as our competitors' mobile app." FeatureOps flips a lot of this on its head. It says: let's focus on the functionality in end users' hands, the things customers care about, optimize for that, and then work backwards into the software engineering practices that let that process move a lot faster. So you end up with this concept of FeatureOps, which focuses on lifecycle management of capabilities: not just thinking about a project plan in terms of shipping code, but actually getting it deployed to all of your users as quickly as possible, getting feedback, and taking that back into the software engineering process, continuously. So where we talk about CI/CD, we have a concept of CI/CD/CR: continuous integration, continuous deployment, continuous release. We're continuously releasing capabilities to customers, getting feedback from the people who actually pay the bills, and then improving. Then there are concepts like full-stack experimentation. This is getting to be a long answer, and maybe we can come back to some of these concepts, but basically, in many DevOps-driven setups you either have things that are simple to test on the front end, like "let's change this button color and see if conversions increase," or things like "let's make this change and then look at Grafana and see what our memory consumption is like." You have these two ends of the spectrum, and oftentimes they don't communicate with each other.
What FeatureOps says is that there's only one end-customer experience. The redesigned UI needs to drive customer value: people need to like it, they need to be able to use it, they need to be able to get their tasks done. At the same time, from an engineering perspective, we need to make sure it's a low-latency experience, that error rates aren't higher, that it's not consuming too many resources. And to know whether something is better or worse based on the code changes you've made, you need to measure all of those things. The key point is that it's about optimizing the functionality that comes out at the end of the software development process, versus optimizing the code that is the endpoint of the CI/CD pipeline.

Designing For Users, Not Requests

SPEAKER_00

Yeah. Okay, so you've touched on so many pain points and pet peeves of mine. The biggest, highest-level gripe I have with software and how it's made is not considering the big picture, not considering the value that's delivered to the user. I used to do consulting for technical leaders, and one thing I always advised people to do is actually get together in person with their customers. Not on a call: literally in person, because it's very different. And the reason is that so much feature development is compromised from the start, because before any coding is even done, the ideas are off the mark. They're off the mark because the people coming up with the feature ideas don't really understand what it is the user needs. What most places do is listen to user feedback and go, "Okay, users are saying this, so that means we need to build this." And it's like: maybe, but also maybe not. I think the job of a product designer, whatever your actual title is, maybe you're a developer or a product manager, but you're playing the role of a designer, is to understand your customers' world and understand the job they need to do. And then you be the designer. Don't abdicate your responsibility to be the designer and let the user be the designer. Because what you end up with when you do that is the user giving you all these requests, and you just tack on all these features without an overall cohesive design for the product. It's plainly obvious that most products are built that way.
Whereas, to take the classic example of maybe the greatest product designer ever, Steve Jobs: everything Apple did while he was in charge had a cohesive design. It didn't feel like something slapped together based on user requests. It was deeply thought out, it was pleasant to use, and you could do the things you wanted to do. But as obvious as that sounds, very few people actually do it. And I think that's one of the big reasons why so much software is unfortunately such low quality.

SPEAKER_01

Yeah, I completely agree with that. It is a little bit counterintuitive, because you do want to listen to user feedback and have it inform what you build and what you design. But that's not the same thing as asking an arbitrary user what you should design. Right.

SPEAKER_00

It's like: I can ask my kids what they want to have for dinner, but I should decide what we have for dinner. If they say candy, I shouldn't just say, "Okay, candy it is." I'll take their preferences into account, but then I'm going to make the decision.

Feedback Loops, Rollouts, And Experiments

Balancing Three Voices: Eng, Business, Customer

SPEAKER_01

Yeah. And I think you need a structured way of learning. If you say, "Okay, Jason, I agree with that, but how do I structure a process that lets me move around that learning loop as quickly as possible?", the feedback loops are the thing we believe you should optimize for. So a product owner, an engineer, whoever, guided by empathy for the user and by real-world experience as that persona. That's one of the reasons I love working for dev tool companies: everybody's writing software. It's not always a perfect proxy for the end user, because the role in which you develop software can differ, but you have an intuition for why it should work this way or that way. You also want to pull in user feedback. If you learn you're doing things well, you want to do more of that. If you learn there's friction, that you're not doing things well, you want to change that. This is one of the reasons we believe two things. One: controlled, continuous feature releases are super important. Controlled, because you want to get the early versions of that software into the hands of people who are really going to provide rich feedback, and that's probably not 100% of your user base. Just think about the product adoption lifecycle: you've got your laggards, who you probably don't care about from an innovation perspective, and your early adopters, who you do, but they're not the same persona as the early majority. So you want to pick who you're going to spend time with.
Two: you want to be able to get something into their hands where, instead of just having a conversation about it, you actually watch them use it. Does it solve their problem? How do you measure that? Then you get that information back and loop around. Feature management is a great way to enable very frequent releases of functionality to end users: learn from them, take that back, and iterate over and over again. And then there's experimentation. The first part is just the modus operandi, in my view, of modern software development based on these concepts of feature management and FeatureOps. But then you can add experimentation on top of that, where you have a very concrete hypothesis: if we do X, Y will happen. Let's test that. You've probably seen that when you're interviewing someone, behavioral interviews are better than hypothetical ones. It's better to ask, "Tell me about a time this happened and how you acted" than "If this were to happen, how would you act?", because one tells you what the person actually did versus what they think they would do, and those often diverge wildly. Same thing with customer interviews versus actually watching someone use your product: those can diverge. Experiments let you get a hands-on answer. But as I mentioned earlier, the limitation of a lot of experimentation is what you can measure. And the reality is that in complex systems, success might look like an improved user experience by some metric, but not at the cost of some critical engineering variable, like infrastructure cost.
Imagine you could give everybody a lightning-fast experience, but it required a dedicated database for every user. Well, that's just not financially possible; we can't build a business model that allows for it. So we're not going to call that experiment a success simply because conversions increased by some amount. You actually have to balance these things, learning what "better" looks like. We describe it as three voices. One is the voice of engineering. We're on a code podcast, so: engineering is really, really important, and you have a seat at the table. Is it performant? Is it buggy? What's the infrastructure cost? All the things you might measure in something like Grafana. That's an important part of making sure those metrics aren't going the wrong direction. Another is the voice of the business. Is this making us more money? Are our users more engaged? Whatever the business model is, you have some objective from a business perspective that you want to see. And the third is the voice of the customer. You might be making more money as a company, but the customer might hate it in the short term. You can be in a position where the interests of the business and the interests of the user diverge, and long term that creates sustainability problems for you as a business. Short term you might not see it, because users are stuck and there might not be an alternative, but over time they're looking around, and eventually they might churn completely.

SPEAKER_00

Yeah, I think a great example of that: I was watching a movie on Amazon Prime last night, and now they have these really annoying ads at the beginning and injected into the middle of the movie. Short-term gain, obviously. But I suspect that in the long term, if a competitor delivers an equally good service without the ads, Amazon is going to lose those customers to that competitor. So that's an example of something that's good for the business but bad for the customer, and so ultimately probably bad for the business.

SPEAKER_01

Yeah, exactly. So when you're thinking about what you were talking about, how do I design a product and a set of experiences, we believe you should be thinking about three things: voice of engineering, voice of business, voice of customer. And design experiments that let you measure those things over time and bring that feedback back into the product development process. This is a core staple of this concept of FeatureOps. Again, DevOps wouldn't be asking the question, "Do our users like our product?" I wrote some code, I automatically deployed it into my Kubernetes cluster, now it's available: that's where the DevOps story ends. It's not that that isn't important; it's exceptionally important, and I hope all the products I rely on use these automated ways to get code into production. But it doesn't tell you whether you're making improvements from a user's perspective or a business perspective. You have to ask these other questions, and that's where FeatureOps picks up. And this full-stack experimentation, hypothesis-driven development that takes these three voices into consideration, is really, really key.
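One way to picture the three-voices idea as a decision rule (the metric names and thresholds here are invented for illustration, not anything Unleash prescribes): an experiment variant only "wins" if it moves a business metric without regressing the engineering guardrails or customer sentiment.

```python
from dataclasses import dataclass

@dataclass
class VariantResult:
    p95_latency_ms: float       # voice of engineering
    error_rate: float           # voice of engineering
    conversion_lift_pct: float  # voice of the business
    csat_delta: float           # voice of the customer

def variant_wins(r: VariantResult) -> bool:
    """A redesign must improve a business goal while no voice regresses:
    engineering guardrails hold and customers are no less happy."""
    engineering_ok = r.p95_latency_ms <= 300 and r.error_rate <= 0.01
    customer_ok = r.csat_delta >= 0
    business_win = r.conversion_lift_pct > 0
    return engineering_ok and customer_ok and business_win
```

So a variant that lifts conversions 4% but doubles the error rate, or one that's fast and profitable but drops customer satisfaction, is not called a success.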

SPEAKER_00

I do have to say something, just for the record. There's something I view a little differently. With DevOps, my interpretation of the end goal is to deliver value to the user, as opposed to the cycle ending with deployment or whatever it may be. There's this book I've been reading called The Goal, and I think some of the original proponents of DevOps were heavily inspired by it. Dear listener, if you're not familiar with The Goal: it's a novel in format, but it's a business book. It's centered around a manufacturing plant and its operations, and they're in really rough shape. In fact, the big boss says, "Hey, if you guys don't turn things around and stop losing money, I'm going to have to shut the plant down in three months." So the manager of the plant finds this really smart guy, who acts as a consultant and helps the manager turn the plant around. The consultant asks the manager, "What's the goal of this whole thing? What are you trying to do?" At first he comes up with kind of myopic answers, like, "Oh, we're trying to be as efficient as we possibly can," or "produce as many items as we possibly can." And the guy's like, no, those things aren't it. Then he finally lands on it: "Oh, the goal is to make money." And the consultant says, "Yeah, that's the goal." So that's the absolute biggest-picture thing, of course: making money. It's the exact same in software: the goal is to make money, and then there are nested goals inside of that that are subservient to the bigger-picture goals. But that's my personal interpretation of DevOps, as encompassing all of that.

DevOps Metrics Are Necessary, Not Sufficient

SPEAKER_01

Yeah, I appreciate the perspective and I agree with it, but because it's more interesting for people going on their run if we disagree a little bit, I'll come back at you, with all due respect, to say: I agree, and yet if I tried to operationalize DevOps and ask, okay, how do we know if we're doing DevOps well or not? I think most people would say we're probably not gonna look at, you know, the revenue of JPMorgan Chase. If I'm the CIO of JPMorgan Chase, I'm gonna have other metrics, because the revenue number is bound up in so many other things. And I think the best metrics to measure DevOps success would probably be the DORA metrics. DORA stands for DevOps Research and Assessment. Google has really championed it, both DevOps as well as the DORA metrics, and they are: deployment frequency, which is how frequently we're deploying; lead time for changes, which is how long it takes for us to have an idea and then actually get it into production; change failure rate, how often that process fails; and the fourth is mean time to recovery, so if something does fail, how long it takes us to get it back. And if you say, okay, that's how we optimize DevOps, if these metrics are improving, our DevOps practice is improving, you can be in a situation where your deployment frequency and your lead time for changes and your mean time to recovery are all good, and yet you're completely missing the mark in terms of what the customer actually wants, and thus what's going to drive revenue.
Clearly, it would be hard for me to imagine a successful modern company that was failing on all of these things and yet had a long-term sustainable business. So they're necessary conditions for success in the modern world, but our argument is that they're not sufficient. And again, in the spirit of making this interesting for our listeners: you can overload the term DevOps with anything that you believe is a valuable end goal for a company. If I were advocating for a DevOps budget, I would totally do that. I would make the argument: really, Mr. Boss, give us more money to invest in DevOps, because ultimately we're gonna drive revenue, we're gonna save you costs, we're gonna do the things that we're ultimately in the business for. However, as an engineering leader, I would say, okay, optimizing DevOps metrics is probably not the best way of going about optimizing, for instance, long-term user experience. I think there's a different set of metrics there. And FeatureOps, in our view, gets you closer to that, because the things you operate on are closer to the end customer. Instead of operating on code, which is an abstraction on top of what the customer experiences, you're actually operating on the thing that the customer ultimately does interact with. That could be the product or service, and it extends even to APIs and things like that, back-end services that customers ultimately depend on. It's just that in that case the end user is different; it might be another API versus, you know, Jill or John. So yeah, that's my take on how they're related, but a little bit different.
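To make the four DORA metrics concrete, here is a rough sketch of how a team might compute them from deployment records. The record layout and the sample data are hypothetical, purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records:
# (committed_at, deployed_at, caused_failure, recovered_at)
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 17), False, None),
    (datetime(2024, 1, 2, 10), datetime(2024, 1, 3, 12), True, datetime(2024, 1, 3, 13)),
    (datetime(2024, 1, 4, 8), datetime(2024, 1, 4, 11), False, None),
]
days_observed = 4

# Deployment frequency: deploys per day over the observation window.
deploy_frequency = len(deployments) / days_observed

# Lead time for changes: average commit-to-deploy duration.
lead_times = [deployed - committed for committed, deployed, _, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that caused a failure.
failures = [d for d in deployments if d[2]]
change_failure_rate = len(failures) / len(deployments)

# Mean time to recovery: average failure-to-recovery duration.
mttr = sum((recovered - deployed for _, deployed, _, recovered in failures),
           timedelta()) / len(failures)

print(deploy_frequency)     # 0.75 deploys per day
print(avg_lead_time)
print(change_failure_rate)
print(mttr)                 # 1:00:00
```

Real tooling would pull this from CI/CD and incident systems rather than hand-built tuples, but the definitions are exactly these four ratios and averages.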

SPEAKER_00

Yeah, that's actually a good point, and maybe I'll change my position a little bit based on that, because I don't disagree with anything you just said. It all makes a lot of sense to me. And the thing about hitting all the DORA metrics, which by the way I think are really good measurements of how a team is doing with regard to software delivery: I agree that it's necessary but not sufficient. Because what are you delivering? You could be hitting all those DORA metrics and doing a great job, but if what you're delivering is not good, then the whole picture is not good. So these things we talked about, like understanding your customer and building things that are actually good for them, that's something additional. And even though I still think one should interpret all of the DevOps principles as being in service of the highest-level goals of the business, the DevOps principles are not prescriptive as to how to design your software such that it's good for the customer. It's more about delivery.

AI As An Accelerant, Not A Shortcut

SPEAKER_01

Yeah, I completely agree with that. I'm gonna switch gears a little bit, and sorry if I'm taking over, but we talked about something in the brief that we had before: what about AI? I think as a podcaster you probably get fined if you don't talk about AI on every one of your shows, at least for a few minutes. It seems to be the rule nowadays. But it's also something that the customers I talk to are trying to figure out. I had a conversation recently with Wayfair, who's one of our customers. I mention it because it's probably a company that many of your listeners are familiar with, a big e-commerce retailer in the US. They're in a market that's called a red ocean: very, very competitive, margins are very low, and their competitors are some of the biggest and most sophisticated companies in the world, you know, Amazon, Walmart, et cetera. So Wayfair is there trying to hold their own, and they have to be really, really good at the user experience part in order to win in this market, because they don't have the economies of scale of their biggest competitors. What they do have is a better understanding of their users and the user experience. They can move faster, they can experiment faster. That's where the rubber hits the road on a lot of this stuff: how do I compete in that world? And that's where I think the DevOps stuff is a competitive advantage. It's not that I'm dismissing the DORA metrics or anything like that; they do help. It's just that there are these other things.
And so the anecdote I wanted to relay from that recent conversation with Wayfair is that they're thinking about how to get that next level of advantage, and they're looking a lot at AI right now as a way to accelerate the cycle. It's not that the cycle has changed. In terms of, for instance, lead time for changes, it's still idea, code, commit, deploy, and now the customer can click a new button and add something to their shopping cart or whatever. That hasn't changed. They want to accelerate it, and they want to use AI to go around that loop faster. They also want to go faster around the next loop: okay, now that the new experience is there, is it improving conversions? Do our engineering metrics continue to look healthy? They want to go around that loop faster too, and they're adding AI as an accelerant for this. And I guess the thing I want to discuss with you is that, in our view, there are certain engineering best practices that will just continue to make sense. They made sense before AI, and they make sense after AI. In fact, if you aren't doing those things and you're using AI, it's like driving around in a race car without a helmet and seatbelt: you're just gonna go flying out the window. Those would be things we talked about a little bit, like source control and CI/CD. Feature management is one of those things, because one of the patterns we've seen emerge is that velocity increases when you sprinkle AI on top of things. That could be AI-assisted coding, you know, Copilot-type things. I'm not gonna iterate through all the examples.
I think people know at this point what that means, all the way to agentic development, which is here, but it's also a little bit hand-wavy at this point in terms of, can you really have an agent doing these things truly autonomously? I'm sure there are some places where the answer is yes, and I'm sure there are places where the answer is, hell no. However, AI is being applied to the software delivery process, and just as before, you would not want those agents to skip source control. When did this change appear? I don't know. You want git blame for a reason. Wrapping all of that stuff in feature flags also makes a ton of sense from a company like Wayfair's perspective, because one of the use cases for feature flags is a kill switch. So it's like, okay, I have this code, it's been Copilot-assisted or agentically developed, and maybe it's great, maybe it's not. Only one way to find out: let's roll it out to a subset of users and test it. If it's working great, let's move to 50%, then 100%. And if it doesn't work, we can roll it back instantly without requiring an additional software deployment. I guess the point I'm making is that right now feels really anxiety-inducing from an engineer's perspective, because so much stuff has changed. But one of the things I like to think about is how much hasn't changed: the best practices that apply to software development continue to apply to software development using AI, and those control mechanisms we've come to rely on, we just need even more of them, because velocity has increased so much. I was just curious about your thoughts.
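The kill-switch and percentage-rollout pattern described here can be sketched in a few lines. This is a hand-rolled illustration, not the actual Unleash SDK; a real feature management system evaluates flags against centrally managed configuration, but the shape of the idea is the same:

```python
import hashlib

# Hypothetical in-memory flag store. In a real system this configuration
# lives on a feature management server and updates without a redeploy.
flags = {
    "new-checkout": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    flag = flags.get(flag_name)
    if flag is None or not flag["enabled"]:
        # Kill switch: flipping "enabled" off disables the feature
        # for everyone, instantly, with no deployment.
        return False
    # Hash the user ID so each user lands in a stable bucket from 0 to 99,
    # making the rollout percentage sticky per user.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

# Start at 10% of users, widen to 50% if the metrics look good...
flags["new-checkout"]["rollout_percent"] = 50
# ...or kill it instantly if it doesn't work out:
flags["new-checkout"]["enabled"] = False
assert is_enabled("new-checkout", "user-123") is False
```

The point of the hash is stickiness: the same user always gets the same answer at a given percentage, so widening from 10% to 50% only ever adds users to the treatment group.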

Timeless Practices: Tests, Flags, Controls

SPEAKER_00

Yeah. This is something I tell people all the time, because I think people conflate where the technology is at any particular moment with timeless principles that are independent of the current state of the technology. What you said about version control is a great example: that's gonna be a good idea independent of AI. And here's maybe an even more relevant one: automated testing. People might think that things have changed, that it doesn't make as much sense to write tests when you can tell the AI to write the code and then write some tests for it, or even the other way around. But the same principles still apply. One of the biggest reasons for having tests is so that you can make broad changes, or really any change, and still have reasonably justified confidence that you're not gonna break anything. And even if the AI could write absolutely perfect code on the first try, that doesn't mean the benefit of having tests goes away. It's still worth doing. So again, it's important to me to think very carefully about what happens to be true right now versus what's always going to be true, no matter how good or bad the AI technology is.
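The point about tests can be made concrete with a small sketch. The function and test here are hypothetical; the idea is that the same test lets you accept a human rewrite or an AI rewrite with the same justified confidence:

```python
# A hypothetical function that might later be rewritten by an AI assistant.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# The test doesn't care who or what wrote the implementation.
# If an AI regenerates apply_discount tomorrow, running this is how
# you know the behavior you depend on survived the rewrite.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.0, 100) == 0.0

test_apply_discount()
```

In other words, the test encodes the intended behavior independently of any one implementation, which is exactly the property that stays valuable as code generation gets cheaper.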

Making The Business Case For Engineering

SPEAKER_01

Yeah, for sure. And I think this is a golden opportunity for engineering leaders, because let's face it, times are tough right now for engineering leaders. They're being asked to do more and more, and yet budgets are being cut and they're having to defend purchases of what are just fundamental building blocks. One of the things we're helping our customers with is this: okay, the CFO might not understand anything about what you do on a day-to-day basis, but what they do want is to cut costs, or increase revenue, or whatever CFOs want to do. And to do those things, we need to write software. And writing software, whether it's written by humans or by agents, requires certain tooling and processes. So you can tie what you're trying to accomplish, your mission, to that. Say I want to move everything to automated testing. I've been wanting to do this for years, and I haven't been able to; no one's listened to me. But now what you can say is: well, if you want us to do twice as much with half the people, we have to invest in these things, because we're not gonna be able to get the value of the AI investment without solving these fundamental pieces over here. You can almost think about it as tech debt, in the sense that these are gaps in our process that we need to pay down before we can really start taking advantage of the AI.
And I think there are a lot of places where some of these technologies, which don't themselves directly contribute to revenue or make everybody a 10x developer, get deprioritized. But if you can tie them to the bigger initiative, then all of a sudden it's like, okay, so you're saying that if we invest in these areas over here, we can double our output in a given year? It's like, well, yeah, I think we can safely do that if we have the mechanisms in place to scale it. So in a way, it's a good time to be an engineer, because you can get access to all of the tools that make your life easier and make you more efficient, because you can explain, in a way they understand, how those tools contribute to what your CEO or CFO is ultimately interested in.

SPEAKER_00

There's one last question I want to ask you, Michael, before we have to wrap up, which is: is there anywhere you want to send people to check out Unleash, or you, or anything else?

SPEAKER_01

Yeah, I would send people to our website. It's getunleash.io. Well, that's how it sounds, anyway; for developers, I should spell it: it's G-E-T, not G-I-T. Get Unleash. And we're actually having our annual user conference next week. I mentioned Wayfair; Wayfair is gonna be talking. We've got other customers like Prudential, Lenovo, and Mercadona, which is the largest retailer in Spain, gonna be talking. We've got a great AI company called ASAP that's gonna be talking. And they're just really great engineering talks from people you'd like to have a beer with, people who have been building large-scale software systems, for decades in some cases, sharing lessons learned about how they work, build software, and build things that people want. So I would encourage people to take a listen and sign up for that. And if you're listening to this podcast, like, two weeks from now and you missed it, then come to our YouTube channel; all the talks will be recorded and you can watch them there.

SPEAKER_00

All right. Well, Michael, thanks so much for coming on the show.

SPEAKER_01

Yeah, my pleasure.