On Theme: Design Systems in Depth
Exploring how successful design systems get built, maintained, and deliver impact. Design systems is having a major reinvention moment, and I want to share what's working from design system practitioners out there forging the way. Expect aha moments, actionable insights, thoughtful discussions, and spicy takes from accomplished design system practitioners. Hosted by Elyse Holladay.
Design system quality has a business case now (and it's ... AI?), with Murphy Trueman
Murphy Trueman joins me to dig into something most design system teams already know but haven't wanted to say out loud: your design system isn't ready for AI. Ambiguous tokens, undocumented rules, and a lack of parity that only works because experienced humans know how to use the system... we've been getting away with it for years. AI just made the bill come due. We discuss what it actually means to treat your design system like a semantic API, how to think about governance, and why the fixes AI demands are ones we probably should have made years ago. Plus, what it means to allow more roles (and LLMs) to build with the system, and what teams can do right now to get their house in order.
Elyse: This is On Theme, Design Systems in Depth, and I'm Elyse Holladay. Together, we'll be exploring how successful design systems get built, maintained, and deliver impact. Design systems is due for a major reinvention moment, and I want to share what's working from design system practitioners out there forging the way. Let's dive into the show. Today's guest on the show is Murphy Trueman. Murphy Trueman is a product and design systems leader, and in my opinion, one of the most spot-on thinkers in design systems right now. She's been blogging prolifically about tokens, component architecture, and the organizational dynamics that make or break design systems, especially in this new AI era. I've been trying to get Murphy on the show for a while, and I'm so excited to finally have her because I wanna talk about everything that she's been writing about. Obviously we cannot do that because this episode would be 12 hours long. So we're gonna try to focus! Today we are gonna dive into a couple of Murphy's core ideas that I think are really relevant right now: that a design system is an API, and this idea of what AI readiness really means for design systems. Murphy, welcome to the podcast.
Murphy Trueman: Thank you so much for having me.
Elyse: So your writing keeps coming back to the idea that a design system is an API, or rather, your design system already is an API, and it's usually not a very good one. Can you explain what you mean by that, and what a design system that treats itself as a semantic API actually looks like in practice?
Murphy Trueman: Yeah, so an API defines how systems communicate and it sets clear expectations, whether that be inputs, outputs, rules, that sort of thing. Great APIs are consistent, self-documenting, and predictable, and I think as design system practitioners, that's naturally something we gravitate towards. Interestingly, design systems have always functioned as APIs. A component with props, expected inputs, and documented behavior is naturally an API definition. And the question is really whether it's a good one or a bad one. A bad API has unclear inputs, undocumented side effects, inconsistent naming, and relies on implicit knowledge to use it correctly, um, and most design systems I've worked on have fit that description. They work because the humans consuming them have years of context. They've sat in meetings, seen the design files evolve, and absorbed the unwritten rules. And once you strip that away and hand the system to a new hire or an AI agent, um, the gaps become immediately obvious. I've spent a bunch of time in agency environments working across different organizations, and discoverability was always the most interesting challenge, figuring out how to make things findable in each organization's context, with designers and engineers who have completely different mental models of what the system is even for. Designers are often thinking about visual consistency and component reuse, whereas engineers are thinking about prop contracts and predictable behavior. They aren't the same mental model. I find that most systems don't bridge that gap explicitly, they just hope people will figure it out. Treating the system as a semantic API is really just taking discoverability seriously enough to encode it into the structure. That work didn't start with AI, but it is the same instinct, encoding intent so the right person or the right tool can find the right thing. And the benefit is that it helps everyone.
Clearer mental models for human team members, clearer outputs for AI tooling. It serves both audiences.
Elyse: Yeah, I wanna hear more about that, because discoverability is, in my opinion, one of the key issues we have with design systems and design system adoption. There's adoption in the sense of, you know, is the package installed in the product across all these different teams. But real adoption means usage. And usage means understanding what the system has, what it can offer you, why you might choose one component pattern over the other. So tell me more about how that works, or, like, what are the parts of discoverability, uh, if a design system is a semantic API?
Murphy Trueman: I think it comes down to a few things. Um, components need to communicate their role in the interface, not just their visual treatment. So, purpose-driven classification rather than appearance-based naming. When AI tools consume tokens, they read the naming and interpret it. Structured semantic tokens produce decent AI-generated code, but ambiguous tokens with inconsistent naming produce terrible output. That's not just the name, but the component's API itself. Props should carry meaning, not just visual interactions. So tokens need to be named semantically. So when an agent reads appearance="danger", it understands that that signals a high-stakes action, it's not just red styling. color-border-critical carries intent, rather than cherry-500, which is really just a value with no context. So whether through naming conventions, variant systems, or structured props, the key is encoding intent. And then there's what I'd call composable contracts, so components that describe their relationships explicitly. A modal that requires modalHeader, modalBody, and modalActions creates a clear contract. And that's where the API metaphor stops being a metaphor and becomes a practical architectural decision. The move is from implicit knowledge to explicit contracts. Human designers can intuit that a button with rounded corners and a blue background is probably a primary action, uh, but agents can't reliably make that leap. They don't intuit behavior, they infer it from structure, and that's what makes naming, composition, and documentation essential infrastructure for intelligent tools.
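Murphy's composable-contract idea can be sketched in code. The following TypeScript is a minimal illustration, not any real system's API; every name here (Modal, ModalHeader, appearance) is hypothetical:

```typescript
// Sketch of a "composable contract": the Modal type only accepts its
// named parts, so both humans and agents can read the required structure
// from the types instead of guessing from appearance.

type Appearance = "default" | "danger"; // semantic intent, not a color

interface ModalHeader { kind: "modal-header"; title: string }
interface ModalBody   { kind: "modal-body"; content: string }
interface ModalAction { kind: "modal-action"; label: string; appearance: Appearance }

// The contract: exactly one header, one body, and one or two actions.
interface Modal {
  header: ModalHeader;
  body: ModalBody;
  actions: [ModalAction] | [ModalAction, ModalAction];
}

function renderModal(m: Modal): string {
  const actions = m.actions.map(a => `[${a.label}:${a.appearance}]`).join(" ");
  return `${m.header.title} | ${m.body.content} | ${actions}`;
}

const confirmDelete: Modal = {
  header: { kind: "modal-header", title: "Delete file?" },
  body:   { kind: "modal-body", content: "This cannot be undone." },
  actions: [
    { kind: "modal-action", label: "Cancel", appearance: "default" },
    { kind: "modal-action", label: "Delete", appearance: "danger" },
  ],
};

console.log(renderModal(confirmDelete));
```

The point is that a modal with zero actions or three actions fails to typecheck: the relationship rule lives in the system itself rather than in someone's head.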
Elyse: There's one blog post, and I don't remember the name of it, but you talk specifically about this relationship contract, like modalHeader, modalBody, you know, the modal can have at least one button or two buttons, that has to be a button of this type. And I think that's a pattern API that I don't think I've ever seen anybody implement. The other thing that I love that you're talking about is the idea that working with LLMs is exposing issues with our process, with design system tooling, that we didn't really have to think about before. You touched on an engineer or designer, maybe they can figure out which component, oh, do I need a box or a container or a wrapper, kind of by trial and error, because they understand intent. We know, because we can see with our eyes, that a blue rounded button is a primary action, and when you would use that primary action. We talk a lot about context with LLMs, everything's about context, but when you're building a product, you write a prompt, you don't put that kind of context in. You don't say, and then you use the blue rounded button as the primary action. You just assume that the system knows that. Defining that with all the things you're talking about, with semantic names, but also with props that are, you know, meaningful, and then this composable contract idea, I think is really exciting. So tell me, I know we touched on naming, but tell me more about some of the places in design systems that you feel like are getting exposed as an issue with AI that we didn't really have to deal with before.
Murphy Trueman: I think when every consumer of your system was a human designer or developer, naming was almost a preference. You could call a component pretty much anything, and in theory it was fine, because the person using it could look at a screenshot or read the Storybook example and figure out what it did. I keep coming back to something Dan Mall said at a token workshop, which was, "it doesn't matter what you agree on, it just matters that you agree." That's kind of fueled my approach for years. But naming now feels more like a functional requirement. An AI agent can look at a rendered component, but it's inferring purpose from appearance, the same way a new hire might guess what something does based on how it looks. And names that encode purpose remove that guesswork. The agent doesn't have to interpret, it just reads. And teams that got away with appearance-based naming for years are now finding that suddenly that's a real obstacle they need to tackle.
Elyse: So you're saying that the kinds of component names we used to use, like InfoBanner or CardBase, are terrible because they describe appearance, not purpose. Something like InfoBanner is interesting because maybe that's purpose, maybe that's appearance. Walk us through how you would evaluate if a component name is doing enough now, or if you need to change it.
Murphy Trueman: Yeah. A name is the first piece of information any consumer gets about what a component does. So a designer scanning a library, a developer searching Storybook, or an AI agent selecting a component for a layer. Before anything else, they see the name. And they make a judgment call about whether this component is relevant to their problem based on that name alone. And I think the difference is that a human will click through to check if their judgment is right, whereas an AI will just use it.
Elyse: One hopes that a human would.
Murphy Trueman: Yeah. Yeah. When I'm evaluating whether a name is doing enough work, I run through a few questions, which would be like: would this name make sense to someone who's never seen the system before? Does it describe the job the component does, or the shape it takes? And will the name survive a redesign? And I find the last one catches a lot. Blue button works fine until a rebrand changes the primary color to green. Three column layout becomes a liability when we make a design responsive, moving from three columns to two on smaller screens. And function-based names survive those changes. Primary button remains accurate regardless of color. I also flag names that are generic to the point of meaningless, like we were talking about earlier with Box, Container, Wrapper, Base. These tell you nothing about what the component actually does. Um, a human encountering three components all called some variation of container will eventually figure that out through trial and error. An agent selecting from those same three has no basis for choosing correctly. They're just flipping a coin, basically. A few years ago I was working on a component called Notification that had accumulated a dozen different behaviors. And a lot of new hires would treat that as a Banner. Engineering would treat it as an Alert, and testing would refer to it as a Toast. And no one was technically wrong, but the name stopped being accurate somewhere, and nobody really noticed when. But now consider an agent encountering that component: it reads Notification, infers a single clear purpose, and uses it for that purpose only. The other behaviors are kind of invisible.
Elyse: I think we have one of those, that Alert Banner Notification Toast. We have that exact same problem. So how would you refactor that? How would you rewrite or rename that?
Murphy Trueman: Um, I think I'd ask what job the component does in the interface. So if it's an InfoBanner, what information, what are we informing the user about? Um, if it's a system status message, let's just call it a system status. If it's an onboarding prompt, call it an onboarding prompt. If it's a feature announcement, maybe it's a feature highlight. You're answering what does this do before what does this look like. And I think that answer needs to be complete enough that an agent reading only the name can make the right decision without needing additional context. Practically, I wouldn't necessarily say you rename it and call that done. You'd need to consider, like, aliasing the old name so that existing references don't break. You'd need to deprecate it with a clear timeline, log the rationale somewhere accessible so the decisions don't get disputed months down the line. Um, and I think, probably most importantly, involve more than just the design system team. Because if the people that are using that component can't explain what it does, the name isn't working, regardless of what you call it.
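The aliasing-and-deprecation step Murphy describes might look like this in practice. This is a hedged sketch; SystemStatus and Notification are hypothetical names, and the rendering is simplified to strings:

```typescript
// Rename without breaking consumers: the new purpose-named component
// ships alongside a deprecated alias of the old name, which delegates
// to it and warns on use.

interface StatusProps { message: string; appearance: "info" | "critical" }

export function SystemStatus(props: StatusProps): string {
  return `<div role="status" data-appearance="${props.appearance}">${props.message}</div>`;
}

/** @deprecated Use SystemStatus instead; removal planned for the next major version. */
export function Notification(props: StatusProps): string {
  console.warn("Notification is deprecated; use SystemStatus.");
  return SystemStatus(props);
}
```

Existing call sites keep working, the deprecation shows up in editors via the JSDoc tag, and the warning gives the team a migration signal before the alias is removed on the stated timeline.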
Elyse: There's something I'm really interested in here, and that's, system status, onboarding prompt, feature highlight, that might be three or four components that are almost identical. Maybe they look very, very similar. Before, we would say, oh, that's InfoBanner or that's Notification, and we use it for showing system status and onboarding and highlighting a new feature. But the switch to generating code with an agent means that maybe one of the things we need to do is actually have four of the same component. I'm saying the same component in air quotes, like, it looks almost identical, it has maybe the same color options, but you actually have various names that the agent can then read that are more semantic and specific. And I think that goes against the grain, or the training, that a lot of design system practitioners have really been used to, of, like, dry it up, make it higher level, like, bring that up so we don't have all of these copies of things. I don't know where I'm going with that, other than it just really strikes me as a very different way of thinking about components and patterns than we have been. But it also seems extremely practical for the new world of working with LLMs.
Murphy Trueman: Yeah, I think it's tricky because you are kind of unlearning a lot of behavior where you'll bake as much flexibility into anything you're creating as possible. But I think now we are going back to this, like, what is the base component and what can we wrap around the base component so that it fits all of these purposes in a way an AI can understand how to use it. It reminds me of the way you would build components in Figma, having the base component and then building instances off that, rather than baking that into your variant structure. And then we went through this whole overhaul of libraries because we could edit multiple things at once. Feels like we're going back to that place now.
Elyse: Yeah. Okay, so what else? Uh, naming obviously encompasses a lot. We've talked about component names, we talked a little bit about token names, I'm sure tokens will come up again, and, like, this contract, this relationship. But what are some other things that we've kind of taken for granted that are maybe getting exposed a little bit right now?
Murphy Trueman: I feel like this is a point that you and I both see quite closely eye to eye on, um, but the gap between Figma and code would be another one. That gap has always existed and I think teams have been really good at working around it. A designer knows that their Figma file doesn't perfectly match what engineering has built. A developer knows that, like, spacing.large maps to spacing-large in the codebase, even though the names don't match. And we've all become really good at translating our own inconsistencies. Tools like Figma's MCP or South's Figma Console MCP read both sides of that gap simultaneously. So when an agent sees a Figma component called Button/Primary and a React component called PrimaryButton with a variant prop, it can't reliably tell that those are the same thing. The naming conventions there are structured really differently, and that translation work that humans did automatically becomes a technical problem. Um, implicit knowledge is another one. Every design system runs on a layer of undocumented rules, things like, don't use that component for navigation, it's only for content areas, or, that variant exists but we're about to remove it. That's all context that lives inside someone's head. And it gets passed around in Slack threads and pairing sessions, but doesn't really make it out of people's heads.
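The Figma-to-code naming gap Murphy mentions can be closed with an explicit translation rule rather than human guesswork. This is an assumed convention for illustration, not how any particular MCP tool actually works:

```typescript
// Minimal explicit bridge between Figma-style token/component paths and
// code-side names, so tooling doesn't have to guess that "spacing.large"
// and "--spacing-large" refer to the same thing.

function figmaPathToCssVar(path: string): string {
  // "spacing.large"  -> "--spacing-large"
  // "Button/Primary" -> "--button-primary"
  return "--" + path.split("/").join("-").split(".").join("-").toLowerCase();
}

console.log(figmaPathToCssVar("spacing.large"));  // "--spacing-large"
console.log(figmaPathToCssVar("Button/Primary")); // "--button-primary"
```

Even a rule this small turns the mapping from tribal knowledge into something an agent (or a new hire) can apply mechanically in both directions.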
Elyse: I'm laughing because that thing lives in my head.
Murphy Trueman: Yeah.
Elyse: I feel like the LLMs just, they take everything at complete face value. There's no... you can't provide that kind of information. It's either, like, deprecated or we're using it.
Murphy Trueman: Yeah. And that's kind of a dream for someone like me that loves to write everything down, but otherwise that's a nightmare. But yeah, AI tools read your documentation literally. If a component is documented as available, the agent will use it, even if every human on the team knows that it's problematic. There's another thing that I don't think is talked about enough, which is composition. So how components relate to each other, which ones can be nested, which combinations produce accessible outcomes. Uh, human designers can make those judgements through experience. You know that a particular nesting combination is gonna break screen reader announcement order, or create a focus trap, because you've seen it happen. And an agent that has no knowledge of those constraints can make it look plausible, but it is really another question as to whether it actually holds up. And I think most systems don't have that information, uh, documented anywhere. I think the system worked fine for human consumers. Humans are incredibly good at compensating for missing information. But AI exposes every place where you were relying on that compensation, instead of building the information into the system itself.
Elyse: Yeah, I know of very few, or maybe no, design systems that really have that kind of thing deeply encoded. That's very, very sophisticated. And I think, you know, for a long time it was, like you're saying, it wasn't really necessary. Like, it would've been nice, but we could get away with it because we had all of this human compensation happening. But as I think about these examples you're giving, this would help everybody on the team. Like, forget AI agents. I have these kinds of conversations with the designers and engineers all of the time. Like, no, we only use that for content; no, you can't put that in the, that's not what the side nav is for, like, that actually doesn't go there. And I think we're gonna have to start figuring out how to encode this kind of stuff in a really sophisticated way. I think one of the places that I hear about this a lot is around tokens. We seem as an industry to think about tokens as the only way we encode information into the design system. Where everything's a token, every decision that the system ever made is encoded into a token. And I think you're talking about something more, like API schemas and things like that. But let's talk about tokens a little bit, because I'm convinced that most teams have super over-engineered their token architecture and it's, like, more confusing than useful. What do you think a good token setup actually looks like in the AI future?
Murphy Trueman: I think currently tokens are where teams are most reliably over-engineering themselves into a corner. The three-tier model gets presented as the destination, and I think teams interpret that as meaning, we need all three tiers on day one, because that's what a diagram showed us. The question of whether we actually need this tier yet is asked a lot less often than I think it should be. Most token architectures tend to fail because they're too ambitious, um, teams will add a component token tier before they have reason to. They create semantic tokens for one-off use cases that should probably reference just the primitive token directly. And that can often result in ending up with hundreds of tokens that nobody can navigate because every decision was abstracted. I think primitives and semantics are enough for most teams. You can add the component tier when you have a real demonstrated need, but probably not before. Where this gets interesting right now is that AI tools are reading those token files directly, and the quality of that token naming cascades through everything downstream. Strong semantic naming produces accurate AI-generated code, which then produces correct implementations and passes tests. Weak naming produces hallucinated references and broken implementations. background-error tells an agent what the token communicates to users, while red-500 just tells it what color to render. The agent can work with both, but I think the quality of what it builds from each is dramatically different. And one weak token name is a minor issue, but when we scale that to 50 weak token names across a complete layout, the agent is making a bunch of different decisions and inferences that a human would need to go back and correct. The errors aren't often isolated. And the thing I'd push people on is governance over architecture. So who approves new tokens? Who decides when we need to deprecate them?
Who consolidates when three tokens resolve to the same value? Tokens will accumulate regardless of how clean that architecture is. And that accumulation is what will make a system feel over-engineered. Naturally, that is harder for AI tools to work with, because the more noise in that token set, the more opportunities for an agent to pick the wrong one.
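The two-tier setup Murphy recommends (primitives plus semantics, with no component tier until there's demonstrated need) can be sketched as a small token file. All token names here are illustrative:

```typescript
// Two-tier token sketch: primitives hold raw values; semantics encode
// intent by referencing primitives. A consumer (human or agent) only
// ever reaches for the semantic layer.

const primitives = {
  "cherry-500": "#d61f2c",
  "gray-100": "#f4f4f5",
} as const;

const semantics = {
  "color-border-critical": "cherry-500",
  "color-background-subtle": "gray-100",
} as const;

// Resolve a semantic token to its raw value through the primitive layer.
function resolve(token: keyof typeof semantics): string {
  return primitives[semantics[token]];
}

console.log(resolve("color-border-critical")); // "#d61f2c"
```

Because the semantic names are the only public surface, a rebrand only touches the primitive values, and the intent-bearing names the agent reads stay stable.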
Elyse: And for humans too. I mean, I think that's part of the reason that having all of these token layers, primitive, semantic, component, makes it feel so complicated, because you're like, well, I just need a color gray that's like this. Which one do I pick from which layer? And I think that's a place that often is over-rigid. You were using the example of the background-error semantic versus, like, red-500. And there are times when sometimes you just need red, but the only thing you have available is error, or, you know, vice versa. And so you're like, well, what do I pick? And then you start doing, like, weird things. But I digress. Um, I wanna talk about governance, because governance is, if it's not already, about to be a huge, huge topic. Because when we generate code, we generate designs, with LLMs, that is opening a huge can of worms, not just around who gets to, you know, decide on a token name, like presumably that's the design system team still. But it's like, who gets to have ideas? Who gets to ship stuff? Who gets to say, this is a thing that we're gonna do, like, this is a design we're gonna make. We have design-with-code tools that are making it possible for PMs and non-designers to build real interfaces using the design system components. And I think that that's really exciting. But before, we had, or at least we thought we had, governance around things like, who gets to make the decision of, is this worth building, is this the right UX? And I actually think we didn't maybe have as much rigor there as we thought we did, but maybe we had a sort of stay-in-your-lane mentality. Like, designers make a certain kind of decision, engineers make a certain kind of decision, PMs make a certain kind of decision. But the reality is, engineers have always been making design decisions. And the kinds of designs a designer makes have technical implications. So you could say that designers are making technical decisions.
PM feature choices have design and technical implications too. We're blowing all of that up by collapsing these roles a little bit. So I would love to hear what a governance model could look like that can resolve some of these questions, how you're thinking about that. Because design system teams became kind of a rejection machine for a while, you know, like, no, you can't do that. No, you can't ship this. No, you can't make a new component. And I think it would be really easy to go back to that and just say no, no, no to anything that people are doing. What might a governance model for the future look like?
Murphy Trueman: I think you need to separate two questions. One is, can an AI or a non-designer build something that's technically correct? And then two, should it ship? And I think those two things need different answers. The one that's more technical, I think the system can handle, which is linting, validation, accessibility, and token compliance. You can automate that, um, you can get it out of that review process entirely, so nobody's wasting time in a meeting debating whether or not components are used correctly. But I think the should-this-ship question is a lot more difficult to answer. And it's the one that does need humans involved. A PM can assemble something that ticks every technical box and still solves the wrong problem, or solves it in a way that might confuse users, and that's a design judgment call, and it doesn't go away just because tooling got easier. I think what you ideally want to avoid is making every piece of output go through the same review process, regardless of what it is. Someone trying to prototype an idea to test with users doesn't really need the same amount of scrutiny as someone pushing something to production. And figuring out those thresholds early, when nobody is emotionally invested in a specific outcome, is really important. These conversations are harder when someone has already built something; they're already emotionally attached to it. Getting the thresholds agreed on before there's a specific piece of work attached means that nobody's defending their output, they're just agreeing on what that process is. I think that is what stops governance from becoming a rejection machine: having those proportionate gates, not a single gate that everything has to pass through. Because the moment your governance starts feeling like an obstacle, people will find a way to get around it. And that window closes very quickly once teams start shipping.
AI does make that faster, so the model has to be something that people want to work with, not something they tolerate until they find a shortcut.
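The automatable half of the governance split Murphy describes, the "technically correct" check, might look like a simple compliance lint. The rule and the token list here are assumptions for illustration, not a real linter:

```typescript
// Sketch of automated token compliance: flag raw color values and
// unknown tokens so human review time goes to the "should this ship?"
// question instead. Allowed-token list is hypothetical.

const allowedTokens = new Set(["color-border-critical", "color-background-subtle"]);

function checkTokenCompliance(styles: Record<string, string>): string[] {
  const violations: string[] = [];
  for (const [prop, value] of Object.entries(styles)) {
    if (/^#|^rgb\(/.test(value)) {
      violations.push(`${prop}: raw value "${value}"; use a semantic token`);
    } else if (!allowedTokens.has(value)) {
      violations.push(`${prop}: unknown token "${value}"`);
    }
  }
  return violations;
}

console.log(checkTokenCompliance({
  borderColor: "#ff0000",                  // raw hex -> flagged
  background: "color-background-subtle",   // known semantic token -> allowed
}));
```

A check like this can run in CI on every generated component, so the proportionate human gate only sees output that already passes the mechanical rules.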
Elyse: Yeah, I feel like we're dealing with so many of these things. Okay, great, you made this beautiful-looking, accurate-looking design or prototype. Is that even the right UX? And we've been having these conversations with PMs and with designers, and what does it look like to bring that back to the design team and go through some sort of design review? So I feel like this is already, I'm already seeing this happen, and I suspect that a lot of teams really are, and if they're not, they will be.
Murphy Trueman: Yeah.
Elyse: On the design system side, there's been this rule: if it's used three times, put it in the system. So we were talking a minute ago maybe about building a product UI, but then there's also, like, building new components and working within the design system itself. If your teams who are using AI keep generating the same combination of components, the same kind of, like, usage, how do you evaluate whether that becomes a real pattern or a component that goes back into the system? Do you do that differently if it's LLM-generated than if it's, like, human designers saying, here's a pattern that I'm designing with? How are you distinguishing between, oh, this is a useful pattern or component, versus the AI keeps suggesting this UI and everybody is just hitting yes on their chat prompt?
Murphy Trueman: Yeah, I think it's interesting. I don't have a clear answer for that at the moment and I dunno that anyone does. I think—
Elyse: No.
Murphy Trueman: Um, I certainly don't. I think being honest about that is a lot more useful than pretending we have it figured out. I think at its core, the problem is really distinguishing signal from echo. So AI tools are pattern matchers. And if a particular three-component combination appeared in a few early implementations and the agent learned to default to it, it's going to suggest that to other teams, who will accept it because it looks reasonable, which then reinforces the pattern, which then makes the agent suggest it more confidently next time. I'd say, like, frequency of appearance isn't the same as quality.
Elyse: Yeah. I'm thinking also about how LLMs tend to spit out UI that is the aggregate of everything on the internet, and it can make patterns that are very common, but maybe aren't necessarily the best for your particular product.
Murphy Trueman: Yeah, and I think the evaluation criteria needs to be different for AI-surfaced patterns. When a person proposes a pattern, you can ask what problem were you solving, and get context. With an AI-surfaced pattern, you kind of need to ask different questions. And that might be like: is this showing up across genuinely different teams and contexts, or is it the same pattern propagating? Are these teams satisfied with the outcomes, or are they just accepting the suggestion without interrogating it? Does it introduce problems that individual teams wouldn't notice, like accessibility issues, performance implications, that sort of thing? The contribution process itself needs adapting too. Traditional contribution assumes that someone noticed a pattern and brought it forward. AI-generated patterns don't come through the front door. They just kind of accumulate in the codebase until someone looks. And before you can even evaluate whether something should be official, you need visibility into what's being generated and repeated across teams. And I don't think that's something most teams have yet. But the teams that can, or will, handle this best will be the ones that are serious about decision rights before the question becomes urgent. These are new problems and the frameworks will need to be built as we encounter them.
ElyseI love that, and I think that's one of the reasons I feel like it's an exciting time in design systems. But I've never been excited about design systems because of components or tokens or even the visual consistency story. Like for me it's a, a system mindset thing, a shared language so that we have efficiency, and shared ways of working, but also room to evolve. This feels like a real sea change in how we manage design systems. We had a certain way of doing things, a certain level of implicit rules that we could live by, discoverability, foundations, naming, like the level that we had to explicitly state or could just imply, like just doesn't fly anymore. Many, many teams, maybe all design system teams are dealing with AI in their workflow without having had the right foundations in place first. Or maybe it's better to say that the rigor around foundations, documentation, component relationships, composition, has to be much greater than it ever has been. So it's not like we didn't have good foundations. It's just the level of rigor now, is, the demands are greater. For a design system team that finds itself in this position, which maybe is all of us, you're feeling that pain of AI readiness. Maybe your outputs are crappy. You have some confusing token names. You don't really have some things explicitly documented. You're thinking about pattern relational schemas. What are, in your mind, some of the concrete things that that team should be focused on, technically and organizationally, to get their house in order for this new world?
Murphy TruemanYeah, I think there's five key things I'd focus on. Um, the one I would start with before any of the technical work is, knowing where you stand. Most teams have a mental model of their system state that is a lot more optimistic than the reality. And the gap between the mental model and reality is trouble when those AI tools start consuming the system and producing output based on what's documented rather than what's actually true. It's not a cleanup project, it's not a migration plan, but an honest account of what is reliable and what isn't. And ideally, that should be written down somewhere that is accessible. The next would be, looking at fixing your communication, before your components. So a single entry point for requests, bug reports, questions. Most governance problems start with unclear communication. And I think when people don't know how to ask for help, they stop asking and just start building workarounds. Another would be making your semantic intent readable. So taking your 10 most used components and making sure each one has a name that communicates purpose, props that communicate intent, and enough documentation that an agent could make the right decision. And I think if we start there and then begin expanding, there'll be a lot more success. Another would also be establishing decision rights. So who approves new components? Who decides when to deprecate? Who resolves conflicts? Write it down while the stakes are low, and then those decisions are considerably easier to make, once they're not attached to live conflict with a deadline. Lastly, I think would be, measuring satisfaction alongside usage. Usage metrics can tell you what's happening, but not why. And regular check-ins with consuming teams will surface that friction, that a lot of analytics that we've historically relied on will miss.
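[Editor's note: Murphy's "make your semantic intent readable" point can be sketched in code. The following is a hypothetical TypeScript example, not from any real design system; the names (`ButtonIntent`, `describeButton`) are illustrative. The idea is that props should encode purpose, so an agent can reason from structure instead of intuiting meaning from visual styling.]

```typescript
// Hypothetical sketch: a component API where props communicate intent.
// An agent consuming this system can read purpose from the type itself,
// rather than guessing from appearance ("blue background, rounded corners").

type ButtonIntent = "primary" | "secondary" | "destructive";

interface ButtonProps {
  intent: ButtonIntent; // purpose, not appearance
  label: string;
  disabled?: boolean;
}

// Usage rules live in the system, not in someone's head or a Slack thread.
const intentRules: Record<ButtonIntent, string> = {
  primary: "the main action on this screen; use at most once per view",
  secondary: "a supporting action; may appear multiple times",
  destructive: "an irreversible action; require confirmation",
};

// Anything (a doc generator, a linter, an LLM) can surface the rule.
function describeButton(props: ButtonProps): string {
  return `${props.label}: ${intentRules[props.intent]}`;
}

// Contrast: appearance-only props like { color: "blue", rounded: true }
// say nothing about when the component should be used.
```

The design choice here is the one the episode keeps circling: moving implicit knowledge ("don't use that component for navigation") into explicit, machine-readable contracts.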
ElyseThat was just like an incredibly concise playbook of the things that teams, including myself, need to be doing right now. I wanna read this blog post. I think you should release like a playbook guide. I think that's amazing. And like we said before, I think we also have to be careful not to go back to gatekeeping. Governance is not the same thing as, the design system team is the only decision maker, and we're the only people who can say yes and no. I think this is a real time of expansion and exploration and, uh, my VP of Engineering called it the Cambrian Explosion. Like people are excited about the creativity and the ownership and the things that they can now do, the way that they can get their ideas out, especially in roles that haven't traditionally gotten to stretch their legs in this way. But it does open up that can of worms, right? Like I think it's so cool that our PMs can get their ideas out like this, there's real benefit to alignment, and there's also like this can of worms of, is this a good UX? Are you making these design decisions with the same kind of knowledge that a designer with multiple years of experience has? So I love the idea of establishing decision rights as part of the governance process, like what do I have to do to see if that's a good thing to ship? So I love that so much. Alright. Murphy, we are at the best question on the podcast, it is spicy take time. Close us out, and tell us your spicy take on design systems right now.
Murphy TruemanAI readiness is just design system quality. We rebranded fixing our mess because nobody would fund it otherwise.
ElyseOh, I'm, I'm, this is so spicy. I love it.
Murphy TruemanI know. I'm scared.
ElyseNo, you are spot on.
Murphy TruemanThe naming problems, the implicit knowledge, the undocumented rules, those have been failing new hires and contractors for years. We just didn't care enough to fix them until a machine started making the same mistakes in its output, and that output became visible to leadership.
ElyseThis really hits because, you know, I was saying before that maybe we could get away with maybe some mushiness in the foundations or docs because we could kind of trust designers and engineers to come ask or figure it out. But obviously having that rigor would've been hugely beneficial for all of us this whole time. And it is depressingly ironic that we are maybe finally spending the effort because of the push to use AI. I think you're spot on that the output being visible to leadership is really key. I think it's different when you personally are using a tool and you feel like the output of the tool is failing you, versus seeing your team produce something and you have this, like, nagging sense that it's not exactly right, but you don't know why. And it makes you much more interested in investing in, like, getting it right.
Murphy TruemanYeah, exactly. I think the nagging sense is the worst outcome, because it's not actionable. When something's obviously broken, you can fix it. When something's slightly off in a way that nobody can articulate, nobody can push back. And that's how the system degrades.
ElyseSo spot on, and that's how design degrades. And we go, well, I guess we're, we gotta ship it. It's urgent. And then, here we are. Yeah, so, so much to do. So many ideas from all of this. Murphy, thank you so much for coming on the podcast and for all your thinking and your writing. I encourage everybody listening to go binge read all of Murphy's writing at blog.murphytrueman.com, all linked in the show notes. Enjoy having your mind expanded, and I hope that you got some really actionable takeaways from this episode. I know I did. So Murphy, thank you so much.
Murphy TruemanThank you so much for having me. It's been a lot of fun.
ElyseThanks for listening to On Theme. If you like what you're hearing, please subscribe now on your favorite podcast platform and at DesignSystemsOnTheme.com to stay in the loop. See you next episode!