Just Curious: Applied AI for Value Creation
Just Curious is the podcast of Pluris, a platform connecting investors and operators with the world’s leading applied AI experts. Each episode turns AI from buzzword to bottom line through sharp case studies and practical conversations. We explore how AI is used to grow revenue, expand margins, improve operations, and create measurable value inside real businesses.
Learn more at checkpluris.com
Justin Massa & Jason Rubinstein - Remix Partners | How to Operationalize AI: 32 Kickstarts, One Consistent Pattern
Justin Massa and Jason Rubinstein are the co-founders of Remix Partners, an AI strategy consultancy that has now run 32+ kickstarts with small and medium-sized businesses across sectors and geographies. Their perspective is grounded in what actually works — not what looks good in a slide deck — and this conversation is full of practical frameworks for PE investors and operators trying to move AI from an experiment into the business.
You'll Learn:
- Why executive engagement is the single non-negotiable success factor in any AI rollout, and what it actually looks like in practice
- The difference between augment and automate — and why 95% of consulting firms stop at automation when the real leverage is in augmentation
- How to use a three-tier deployment framework (Experiment / Pilot / Implement) to prioritize where AI goes next, and why you need to revisit it every quarter
- What "agent-legible" means — restructuring business information so agents can navigate it efficiently — and a simple mental model for where to start
- A detailed manufacturing case study: how a 70-year-old custom parts company cut its RFQ turnaround time nearly in half using a Claude Code workbench built in 3.5 weeks
Chapters:
00:00 – Intro
00:33 – Welcome & what Justin and Jason hope listeners take away
01:41 – Who are Justin & Jason / Remix Partners origin
03:19 – Jason's background and why they're doing this together
04:36 – Reflecting on the past year: what changed in the market
07:21 – What's driving the shift in appetite for AI
11:50 – Going wide vs. going deep on AI opportunities
12:55 – Data on AI adoption: where companies actually are
14:42 – Why small businesses are now advantaged over enterprise
15:39 – Operationalizing AI: the Remix kickstart approach
18:21 – Executive engagement as the #1 success factor
19:13 – Show and tell culture: banning secret cyborgs
21:33 – How the approach scales from SMBs to Fortune 25
23:56 – The leader's mandate: making time for AI
25:03 – What went wrong with other consultants
26:48 – The augment vs. automate framework
28:07 – Getting from automation to augmentation
30:33 – The jagged frontier and workforce implications
31:55 – How to navigate capabilities without a static view
33:35 – Deployment zones: Experiment / Pilot / Implement
35:29 – How zones shift quarterly / software stack decisions
37:50 – Manufacturing case study: the setup
40:26 – What they built (workbench in Claude Code, RFQ automation)
42:54 – Who was involved across the organization
44:27 – Impact: clock speed and capacity nearly halved
46:00 – Engineers' time reallocation; divorcing revenue from headcount
47:33 – Agent legibility: what it means and why it matters
49:38 – Making your business agent-legible: the practical how
52:54 – This year we edit documents, we don't create them
53:32 – Where to start for a 50-person company
56:16 – Plug into your core work application and play
58:07 – What happens to companies that don't experiment now
59:11 – Playing to Win in an AI era: differentiation AND low cost
01:02:33 – Closing
- Watch on YouTube: https:/
Just Curious is the content and podcast brand of Pluris, helping leading operators and investors cut through a noisy AI market and find the right partners — from strategy and assessments to full-stack AI development. Our network of applied AI experts spans generative and agentic AI, workflow automation, and custom AI-powered software, helping teams turn curiosity into measurable value.
Explore more interviews and connect with experts at https://www.checkpluris.com
Subscribe to our newsletter at https://just-curious-ai.beehiiv.com
Justin Massa and Jason Rubinstein are the co-founders of Remix Partners, a firm helping companies turn AI from scattered experiments into real operational growth engines. Today we'll talk about how organizations can operationalize AI inside real workflows and why some of the biggest opportunities are hiding in the most mundane or obvious processes. I spoke with Justin about a year ago when he was just starting the business that would become Remix Partners. Now he's joined by his partner, Jason, and I'm excited to reconnect and hear how things have evolved. Justin and Jason, welcome to Just Curious.
SPEAKER_02: Thanks for having us, man. Good to be back.
SPEAKER_01: To start off, what do you hope listeners walk away with from this conversation?
SPEAKER_00: You know, not to be put on the spot, I think first of all it's really: where is generative AI today through the eyes of a small business? Where can people start? What can they accomplish using these wildly powerful tools? But Justin, what would you add to that?
SPEAKER_02: Yeah, I'll build on what you're saying, Jason. I feel like I ask this question ten times a week: so, where are you in your AI journey? And it's definitely a journey. You don't get to leap to the end; you've got to start at the beginning. So I would love for people who are listening to come away with what the natural, or the right, next step is in their journey, wherever they happen to be in that journey today.
SPEAKER_01: Yeah. Where are you all in your AI journey? So who are Jason and Justin, and what is Remix Partners?
SPEAKER_02: Yeah. Remix Partners is a small consultancy we started just seven months ago now, I think. It's wild how quickly the time flies. It builds off of a business I started a year ago, when I first came on, Stu. So we're doing one of the things I started doing quite some time ago, which is helping small and medium-sized businesses get started with generative AI. When I was on a year ago, I had done it five or six times; earlier this week, we kicked off with client number 32 for that same offering. So we've just had a lot of opportunities to work with a lot of different small and medium-sized businesses across sectors and geographies, and we've learned a ton. We're also now doing some additional things. We're helping our clients build small implementation projects. If it's a couple of weeks, to use some language we'll get into later, if it's building a jig, we'll totally help our clients do that. And we've just launched a new advisory subscription offering called Pulse, essentially to help our clients stay on the cutting edge after they get to the end of their kickstart engagement. To your other question, in terms of where we are on our journey: I think most of last year we were really chatting with AI, and it was great, and I think we got good outcomes. The end of last year and the beginning of this year has been about welcoming the agents into the business. One of them is sitting behind me on this table right here; that's Kirby. And the exciting thing we're now trying to figure out is just how much of the business we can automate that isn't us talking to another human being. I think we have some very audacious goals there. Jason, please build.
SPEAKER_00: Yeah. And as we think about our partnership and where we both come from, why are we doing this? Justin and I met a bunch of years ago in the Chicago tech community. He was the CEO of a company and I was one of his mentors. We have a really interesting Venn diagram, at least we think it's interesting. I went on to work at a bunch of public and private companies, in product and strategy roles many times over, and I have the scars and trophies to prove it. And Justin went on to an awesome career, most recently at a big consulting firm, IDEO. So I think we bring with us the scars and trophies, and the best practices of having built stuff and helped other people build things. That sort of experience in the trenches really has us, and me especially, very excited that we're able to walk into any business and any industry, even a Fortune 25 that we can't name, and bring the practicality of these tools to the table. So that's a little more about us and why we're doing this.
SPEAKER_01: Yeah, and I'm excited to unpack a lot of that with you, share some examples of your work, and go deep into agents and what that means for you and your clients. I'd also love to reflect on the last year. As I noted, we sat down last spring, when I was starting this as a project and Justin was starting what became Remix Partners. When you look back on the past year, how has the landscape changed? What's surprised you, both positively and negatively?
SPEAKER_02: It's a great question, and I'll speak about it from two perspectives. One is just talking to small and medium-sized business owners, leaders, and executives. A year ago, the majority of people I talked to were curious, excited, and interested in generative AI. But if I'm being totally honest, I don't think most leaders I talked to a year ago were able to accurately describe just how big a concern this was going to be in their business 12 months later. A year ago I would have conversations with folks, and business was good and we were winning projects, but a lot of folks said, not right now, maybe later, let's stay in touch. Fast forward, and I feel like all of those people have gotten in touch in the last couple of weeks, in a way that is exciting and a little overwhelming. If you go back to last fall, there were stories about models not doing well. I feel like a lot of the world is finally waking up to the fact that, no, we are on an exponential improvement curve and you really need to take this stuff seriously. So the external urgency in the market has been a really notable change over the last year. That's the first thing. The second thing, on the capability side: I was a bit of a skeptic, I would say, of the linear AI workflow builders that we called agents. And for me, I don't think it was really until the combination of Claude Code, Opus 4.5, the skills architecture, and then having a lot of time to experiment at the end of last year, that my perspective became: last year was agent-ish.
This year is going to be agentic, and it's going to look very different from those linear workflow builders. And I'm really excited about the capability. Last year there was a lot of buzz, and it was people trying to get the AI to do the thing it really quite couldn't do yet, and it was hard. It can do that stuff now. There are implications of that, which we'll probably get into, but it can do that stuff now, and that's just super exciting.
SPEAKER_01: What do you think has changed in terms of the appetite? Is it multiple news cycles about AI's impact on software? Is it that organizations have become more familiar with how the technology can be used, maybe through chat and so on? What's changed in terms of the appetite?
SPEAKER_02: I think there are a few things. One is that the personal AI a lot of people use outside of work has gotten significantly better, and they're frustrated at the lack of that same quality in the tools they use inside of work, so they're trying to figure out a better path. I think a lot of people became secret cyborgs over the past 12 months and started doing things that their bosses didn't know how to understand. And look, it's created tension in workplaces; we've encountered a lot of this tension. But there's been an explosion of these secret cyborgs, and executive leadership is now asking: how are you doing this? What's going on? How are you racing ahead of the rest of the team? And then the third thing: somewhere around last summer or last fall, you would start a session with a generative AI model, and if it went a little sideways, your only path was to kill it and start over. There was no resurrecting the session; the context was clouded, it had gone sideways, it was starting to drift. It seems like now you can rescue most sessions. You can pull them back onto the pathway and realign them with the human intent. I've seen people say we went from a world of compounding problems to compounding solutions, in just the raw capability of the models, sometime last year. A lot of people have felt this, but they don't then know what to do about it in the business, and that's where we come in. So to your question, I don't think it's one thing. It's a lot of little things, and the tidal wave is continuing to build.
And I don't think it's crested yet. My sense is it's not going to crest until later. But yeah, Jason, how do you think about this?
SPEAKER_00: Yeah, so to add on to that: Justin and I reconnected in May at a venture conference, one at which he was speaking and where I have a friendly relationship as well. And I would say it was right around that time, in the spring, that the discussion within companies started to change from, "oh, I have this thing called Copilot or Gemini that I have for free, not sure what to do with it, but it's there." So there was a foundation that people at many companies had access to, whether they realized it or not. But then the second thing that happened was that the classic Gartner hype cycle went into full force. All of a sudden there are articles, and the leaders of the big LLM companies are starting to talk about what's possible now. And then we saw all the fundraising activity and these outrageously exciting valuations to complement it. So that's where I think the hype cycle really started to get on people's radars. But it became justified, one could argue, by the capabilities and the actual outcomes and things people could do with these models. It was an interesting storm of activity, which has now completely transformed the way the world, or the civilized world, I'll say with some caution, is thinking about how to take advantage of this technology.
SPEAKER_01: Yeah. Are you seeing changes in that expectation? I feel like the initial approach was a deployment strategy: here's your copilot, figure out what to do with it, versus let's go look for problems. Are you starting to see leaders and management teams more appreciative of a problem-focused approach? Or is that still a big part of what you have to help them with: reframing how you approach AI, from using it for its own sake to finding a problem it can solve?
SPEAKER_02: I like this question, because I think there's a nuance I might layer in. A year ago, people would see the technology, get interested, and have a single use case they were very passionate about, and that might be the thing they individually explored. Last summer and last fall, a lot of those same leaders would invest in building out a single AI-based solution in their business, pretty far and pretty deeply. And I think what we're finding out, in fact we kicked off earlier this week with a client that's in exactly this situation, is that they had made a pretty big investment in a single AI solution, got pretty far down that path, and realized: wait a second, am I going deep when I should be going wide? Is this the right thing I should be working on? Are there five other opportunities I should pursue? So the evolution I've observed is that a lot of leaders have gone from "this is a cool technology, let me use it in this way" to "this is a cool technology, let me figure out all of the ways I need to use it in my business."
SPEAKER_00: Yeah. And building on what Justin just said, if we look at some of the data that's out there, never mind that the US is behind most other countries (I'll use the word civilized again) in terms of AI use, maybe 10 to 20 percent of companies in the US are actually using AI day-to-day, and I mean generative AI. If you double-click on that, depending on which of the data sources you believe, one to maybe two percent are coding, in other words, using tools like Claude Code. And of that one to two percent, the vast majority are probably folks who are actually software engineers. If we accept those ranges as accurate, that tells you a lot about where companies and people are, which is basically that the chat capabilities of these models have replaced traditional Google search or other search engines: oh, I'm going to get something different. Then there's Cowork, which was developed over a matter of weeks as an experiment and has taken off. And then there's Claude Code. So now we're seeing three very different entry points into the same capabilities, and it's really early days. Companies are trying to figure out, beyond just doing one or two cool one-off workflows, what can I do that's going to stick? And that's where we're seeing such tremendous opportunity, especially for smaller businesses, who now have access to the same powerful technology that, frankly, many big public companies won't use because they have these big enterprise contracts. So for the first time, the smaller companies are actually highly advantaged, as opposed to being perpetually disadvantaged, which has been their story for decades.
SPEAKER_02: A real quick story. I was out to breakfast a couple of weeks ago, maybe about a month ago now, with a buddy of mine who runs data science and AI at a large publicly traded company. And we're talking, and he says, just this past week we rolled out Microsoft Copilot with GPT-4o to the whole company. And I was like, hey man, why'd you roll out 4o as your AI? And he puts his head in his hands and says, I know, I know, we're so far behind. But my executive leaders are all patting themselves on the back for doing this. And I was like, oof, man, the gap is getting real.
SPEAKER_01: Yeah. I guess it's better than nothing.
SPEAKER_02: Better than nothing. I mean, I'm glad they're doing something. But I may have made a joke to him about how I hope he likes his horse-drawn carriage when I drive by in my car.
SPEAKER_01: Yeah. Let's talk about operationalizing AI. Companies are increasingly interested in using it for value creation, but lots of organizations spend a bunch of time discussing strategy and never operationalize anything. Why do you think that happens? And what do you do to help them move from strategy to experimentation to operationalization?
SPEAKER_02: Yeah. Jason, want to take this one, or do you want me to jump in? Go for it, I'll add. So I think, and this is an evolution of how I talked about some things a year ago and of the patterns we've observed, one of the big challenges is that there are both technology solutions you can just buy off the shelf, and other flavors of consulting that want to come in and essentially tell you: you will need to buy from us all of the things you need in your business. They're trying to build a cultural dependency. We take a very different approach. Our goal is to help you figure out where the line of your company's ability to self-serve sits. Philosophically, we think every company should self-serve with regard to generative AI all the way up to whatever their line is. So in our engagements, we start off with a theoretical grounding of what is going on, and give people the mental models to understand all of this. We identify a bunch of opportunities, and then, as we tell them, we're going to push you into the deep end of the pool and not teach you how to swim. Because what we have found is that people need friction with the technology in order to form the right set of mental models to actually work with it. So the first leg of our engagements is going to be a little frustrating. This is you falling off the bike and getting back on, because you have to develop a sense of balance. Then we get together about midway through, do some training, and actually teach people how to swim, so to speak. We talk about context engineering, prompt management, memory, metacognition, all that kind of stuff. And then we say: all right, now go back out.
And the second wave of building is to figure out your company's collective level of where you can self-serve. At the end of the engagement, we look at everything and say: here are all of the things you can and should self-serve, and here's how you build that inside of your business, and I'll come back to that in a second. And here's the stuff you're going to need help with, and we'll figure out how to get you that help. Of the stuff that companies can self-serve, there are a few patterns we've observed that are really critical to this going well, and it's striking to me how clear these patterns are. The first one is that executive engagement is critical. If the CEO and executive leadership team think they can hand this off to someone else, they are sorely mistaken. They themselves should be spending a lot of time on it, and they need to very publicly be out in front of their company celebrating and shining a light on what's working. If CEOs don't do that, this does not go well, period. Thirty-two times now, this is the single biggest pattern: the executive leadership team not just giving it lip service, but actually getting involved, in the show-and-tell Slack channel, commenting on things, showing their teams what they themselves are doing with it. The second one is that you must have a show-and-tell expectation. Secret cyborgs are banished; everyone talks about what they are doing. I was telling somebody this over breakfast today: peer pressure is a really effective management tool, and everyone wants to show off to the boss. So if the team knows that leadership is paying attention, the team is going to show their absolute best work and push each other to do even better things. Those are the two biggest things that jump out at me. Jason, please build; I know you see some more things.
SPEAKER_00: Yeah. In terms of some of the other "how," Stu, how we work with companies on this: we also have some secret sauce. We've spent time over the past several months building some really cool, highly proprietary technology that does a lot of complex analysis around the company, the industry they're in, the models, and what we've learned through the engagement, making sure we're listening to every syllable that's pronounced along the way and analyzing it, but analyzing it in the right way. So when we get to the point of wrapping up our traditional kickstart, as we call it, we have really powerful insights to give them. Honestly, people tell us we should add a zero to what we charge, and we're not going to tell you what we charge, but we're trying to deliver crazy value versus the traditional big consulting firms, who, in our biased estimation, are really trying to figure out how to survive in this environment. We bring extremely powerful insights to them in a roadmap, and they get to take action on it. They've gotten their hands dirty; as Justin said, we've taught them to swim, we haven't just sent them links to great swimming videos. And lastly, I'll stop here: given the management backgrounds that Justin and I have, it's as much about culturally helping the company figure out, beyond a cleverly written roadmap, how to get humans who are too busy to make time for these tools. Whether it's the Friday Amazon gift card giveaway for the coolest jig, all the way to companies that have restructured their entire OKRs and comp structure around it, we've seen everything in between, and we help the CEO and leadership team put that in place.
SPEAKER_01: How does that approach evolve depending on the size of the client? Things like CEO involvement and an active show-and-tell Slack channel feel like they would really work for a company with 10 or 50 or maybe 100 people. But when you get into 500 people, or a Fortune 25 client that you can't name, the CEO, for one, has a lot going on. She may not be actively involved in what's happening in the business unit you're working with. So how does that differ depending on the size of the company?
SPEAKER_00: Yeah, I'll jump on that one. This is an illustration of how our business has grown beyond the original thesis Justin launched his initial consulting practice with. It's really about the champion and the stakeholder being committed. For the unnameable Fortune 25, the two VPs who sponsored the work each have multi-billion-dollar P&Ls, so in terms of the faith a major corporation has placed in them, they are more than CEO level. The engagement was with the top AI-focused people at that company, and we felt very happy about the work we did, and they did too. Honestly, we're still in their Slack channel and we'll weigh in once in a while; we haven't been kicked out, which I think is a good indicator. So whether it's a Fortune 25, a middle-market company, or your friendly neighborhood SMB, it's really about that stakeholder and champion who has the authority to empower and unlock this capability for their teams, and who isn't just saying "you need to talk to Jason and Justin," but is saying: this is important to the business, and I'm giving you permission to do this work, to play with these tools, and to become experts. Justin, what would you add to that?
SPEAKER_02: I want to emphasize what you're saying there, Jason. To me, it's all about having someone who is empowered enough in an organization to set a mandate and then, bluntly, free up space and time in people's obligations and calendars. If you're a leader who can do those two things, whether you're the CEO of a small business, where that's very easy, or a VP in a very large enterprise, that is the magical thing. The single biggest problem we encounter, from small business to large enterprise, is "I don't have time to do the thing that saves me time." We hear that multiple times a week, which sounds ridiculous to say; I think we heard it multiple times today. So a leader who can make space and make time is critical for this to go well. And the good news is the leader doesn't have to be the shit umbrella for months, making space and hiding their team. Usually within about two weeks, it becomes very obvious this is a positive investment. But you need that initial catalyst: someone in a position of authority who can say, no, no, no, I know there's a lot on your plate, this is the most important thing, and I'm paying attention.
SPEAKER_01: Yeah. When you walk into a company that's already hired consultants but hasn't gotten a lot of traction with AI, what typically went wrong?
SPEAKER_00: That's a great question. In fact, we're starting to work with a client that has a major consulting firm engaged, and they knew there was a big gap. So they brought us in, call us an AI sidecar, with one or two fewer zeros in terms of our fees, which is totally fine by us. What we've heard anecdotally is, number one, it was too high-level. They were sold wickedly expensive PowerPoint slides, but they weren't given the hands-on knowledge for how to take action on them. And secondly, the consultants didn't have that operational experience; they hadn't tinkered with it in the same way, so they just weren't as credible. We're really approaching it one part old school, one part new school: let's take the best of human-centered design and the best practices Justin and I have employed over the years, distill that, and bring it into an engagement, so that we can speak strategy, but when you're talking about a tool like gen AI, we bring the most current thinking and the most current approaches for how to play with that stuff to the table. The big consulting firms, so far we haven't heard evidence that they really get that. But Justin, anything you'd add?
SPEAKER_02: I would say, look, we've heard more than our fair share of stories where it was obvious the big consultancy was learning on the client's dime during the project. I spent a lot of my career in consulting, and I've certainly done that myself on a few projects, but with generative AI the dynamic is fundamentally different. It is moving so fast that the consultant cannot learn on the client's dime in the midst of the project. A beginner's mindset here is not an advantage. The other thing I would say is that we really think about two verbs with every company. What are you going to augment? That is, where are you going to combine human and machine and exaggerate your competitive differentiation? And what are you going to automate? What's just a cost of doing business that AI can disrupt, where you want to take as much cost out of the process as possible? If I'm being brutally honest, I think 95% of the other consulting firms out there find the automate opportunities and simply stop looking. And I'm not saying automation isn't important. Every business needs to be automating what it can to take costs out of the business right now, for all the reasons every business leader knows. But it is not the most important thing to think about. The most important thing to think about is how you can use generative AI to exaggerate your competitive differentiation. And unfortunately, that is not most of the consulting work I'm observing right now, especially not when we get called in to help somebody.
SPEAKER_01If I'm a CEO and I'm interested in automation, but also augmentation, what helps me get from automation to augmentation? What approach, philosophy, framework, or reframing will unlock augmentation and not stop me once I've just automated?
SPEAKER_02For us, it's all about starting with what your business's competitive differentiation is. Let's start right there. We do this at the end of our first workshop; we had an hour-long discussion about it earlier this week with one of our clients in a kickoff. We've got to understand: why do you win with your customers? Why do you win relative to your competitors? Why do your clients keep coming back to you? What is the thing that you do better than anybody else, that you have a right to win around? When we identify those things, there's then basically a question: can generative AI do that? If it can, if it is at very high risk of AI disruption, then you basically have two choices. You can either automate it and take as much cost out of the business as possible (in a two-by-two here, you kind of move it down), or you can move it over to augment and combine it with humans in some way. So really it's about three things. One, an honest conversation about what your business really has a right to win around. Two, an honest and accurate understanding of whether generative AI can do that thing. Sometimes there might be an eval or a benchmark; many times there's not, and companies have to go and test actively for themselves. And then three, based on what they find: is there a way to take AI capabilities, combine them with humans, and sprint way ahead of the jagged frontier? Or is there no pathway to do that, and you just have to take all the cost out of it? And I think, Stu, a very frank discussion about actual capabilities, with actual tests, is really important here, because it kills me, but I observe a lot of wishful thinking when it comes to this dynamic. There are a lot of people who tell themselves stories that AI can't do things, or they tried it six months ago and it couldn't do it, and they haven't tried again.
And that's a real danger point given the growth in capabilities right now. In the vertical part of an exponential, you cannot hold a static understanding of capabilities. I think that's where a lot of executive leaders go wrong. They tried it six months ago, it couldn't do a thing, and they now assume, oh, of course it's plateaued, it's never gonna be able to do that thing. Which is completely wrong.
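The augment/automate two-by-two Justin describes can be sketched as a simple decision rule. This is a paraphrase of the framework as stated in the conversation, not Remix's actual tooling; the function name, labels, and example activities are invented for illustration.

```python
# A sketch of the augment/automate two-by-two described above. The decision
# rules paraphrase the conversation; names and labels are illustrative only.
def classify(is_differentiator: bool, ai_capable: bool) -> str:
    """Place a business activity in the two-by-two."""
    if is_differentiator and ai_capable:
        # Your right-to-win is at high risk of AI disruption: either automate
        # it for cost, or (the bigger lever) augment human + AI and sprint
        # ahead of the jagged frontier.
        return "choose: automate for cost, or augment to extend the lead"
    if is_differentiator:
        # AI can't do it yet: augment anyway, and retest capability often.
        return "augment: combine human and machine, retest often"
    if ai_capable:
        # Cost of doing business that AI can disrupt.
        return "automate: take the cost out"
    return "leave alone for now, revisit next quarter"

for activity, diff, capable in [
    ("custom part pricing", True, True),
    ("client relationships", True, False),
    ("invoice data entry", False, True),
]:
    print(f"{activity}: {classify(diff, capable)}")
```

The point of writing it down this way is the second input: `ai_capable` must come from your own tests, not from intuition, and it changes every few months.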
SPEAKER_00Yeah. And the underlying tension that we're seeing, which we all read about every week, is: are companies looking at Gen AI as another reason to cut jobs, or to do silent job cutting, which is attrition plus just not rehiring the roles? Or, for the larger companies that have quote-unquote classes of new employees, what are the profiles of those people? We have multiple clients that are in creative services businesses, or businesses where humans are absolutely required. That's where it's almost the most exciting, or the most tension, because they have this existential fear that what they're doing is gonna be replaced by the machine. And that's where what Justin talked about becomes even more urgent, because we have to help them stay at their core in terms of what their differentiation is, but not be blind to the capabilities that are here versus what's expected to come. It's fascinating and interesting to be a part of that.
SPEAKER_01I want to go back to something Justin said about what AI is good at and what it's not good at. You both use the phrase "the jagged frontier" to describe AI: that it's incredibly good at some things and surprisingly bad at others. How should leaders learn to navigate that boundary?
SPEAKER_02I'll put it really succinctly: leaders should not believe that they can intuit the edges of AI capabilities. In fact, I can't, and I've been doing this full time for a year and a half. Jason can't, and he's been doing this full time for eight months. None of us can. You have to test them. You have to actually go out and engage at the edges of the jagged frontier to understand, in very precise ways relative to every business, exactly what it can and cannot do. And the benefit of doing this is that you then get to reapply the test. You test it, you find the jagged frontier, you test it again in a few months, and again a few months after that. Getting into that rhythm matters. I cannot tell you how many people we meet who believe their experience, their depth in their field, gives them the ability to make educated guesses about capabilities, what the models can and can't do, and then operate their business from there. I'm sure you hear the terms, the adoption gap or the capabilities overhang. It's extreme right now because most people think they understand the limitations, tell themselves a story about them, and never actually test. AI can do many of these things way better than the humans could. They just didn't happen to look.
SPEAKER_00Yeah, but there's an interesting danger to it, right? Because let's not forget who's building this stuff: it's a lot of engineers, as it should be. And let's not forget who's writing the benchmarks: also a lot of engineers. So the benchmarks we read about where the jagged frontier is, especially GDPval and others, or METR looking at where the models are performing, one has to look at those through a lens of: let's assume the data is correct, but, to Justin's point, practically speaking, what does that actually mean at a company today? That's where we work very hard to say, hey, here's where the world is going, and we've got all the charts to prove it to you. But now let's get down to what it actually means for you in this moment. And that's the part that's hard.
SPEAKER_02Yeah. One of the things that helps our clients think through this is something we've recently developed called deployment zones. For anything that you want to change in your business: should this be something you just experiment with? That's the lowest level. AI is kind of okay at it, it makes a lot of errors, but it's good enough that you should actually experiment today. There's a step up from that, which is pilot: AI is good at it maybe 40 to 50% of the time. You're definitely gonna need the human in the loop, but start piloting, because it's getting better very fast. On most of these metrics, it's doubling every six months right now. And then you have your implement zone: look, it is good at this 75%-plus of the time. Yes, you still need a human in the loop, but don't overthink it; just go figure out how to implement this stuff in your business right now. So some of the newer stuff we're doing is, on a quarterly basis, revising roadmaps and saying: here are your experiment priorities, here are your pilot priorities, and here's the stuff to just go implement.
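The three-tier framework can be sketched as a tiny classifier over measured success rates. The thresholds below are pulled from the rough figures in the conversation (pilot at roughly 40 to 50%, implement at 75%-plus); the exact cutoffs, the dataclass, and the example use cases are all illustrative assumptions, not Remix's actual tool.

```python
from dataclasses import dataclass

# Rough thresholds from the conversation: "pilot" when AI succeeds maybe
# 40-50% of the time, "implement" at 75%+. Real cutoffs are a judgment call
# per business; these numbers are illustrative only.
EXPERIMENT_MAX = 0.40
IMPLEMENT_MIN = 0.75

@dataclass
class UseCase:
    name: str
    measured_success_rate: float  # from your own tests, not a published benchmark

def deployment_zone(uc: UseCase) -> str:
    """Assign a use case to the Experiment / Pilot / Implement zone."""
    if uc.measured_success_rate >= IMPLEMENT_MIN:
        return "implement"   # good 75%+ of the time: keep the human, but go now
    if uc.measured_success_rate >= EXPERIMENT_MAX:
        return "pilot"       # 40-75%: human firmly in the loop, improving fast
    return "experiment"      # error-prone today; retest on the next quarterly pass

# Hypothetical quarterly roadmap: re-run this as capabilities improve and
# watch items migrate from experiment toward implement.
roadmap = [
    UseCase("RFQ drafting", 0.80),
    UseCase("BOM generation", 0.45),
    UseCase("custom drawings", 0.20),
]
for uc in sorted(roadmap, key=lambda u: u.measured_success_rate, reverse=True):
    print(f"{uc.name}: {deployment_zone(uc)}")
```

Revisiting the same list each quarter is the whole point: a use case classified "experiment" in January may be "pilot" by April without anything in your business changing.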
unknownYeah.
SPEAKER_01And you're seeing things move from one zone to the next as the models get better.
SPEAKER_00Yeah. And then the inevitable conversation happens, which is: all right, now I know how to build some cool workflows. I can even do a jig, whether that's the dance or the carpentry tool. But how do I think about my software stack? What kind of licensing terms should I sign up for now? Should I get the big enterprise tool that has AI included? Should I get the enterprise tool that is now implementing AI in a very forward way and pushing it on me every time I open the app? Should I sign up for the AI-native app that's a thin layer sitting on top of the models? Should I DIY? Beyond the fun of experimentation, and making that experimentation result in real ROI, which is what we do in our consultancy, it inevitably gets to that other conversation. There's this really cool roadmap that we've used our insights and sophisticated tools to help generate, but then it's a practical question: okay, what do I do? I have all this software, I've got all these people coming at me. In fact, one client is a major law firm. They had a failed consulting engagement that put them into a very expensive piece of software, and it didn't work. So we're doing a hybrid offering for them: all the strategy work that we normally do, but they already know they need a different tech stack. It's an outlier compared to what we normally do, but as we think about this year, companies that are AI forward have to be thinking about SaaS versus non-SaaS, if you will. And they need a partner to help them objectively think through all those choices.
SPEAKER_01I'd love to share an example of your work. I know you shared a case study with me, and I'd love to walk through it. Maybe you can start with the company: what type of company came to you, and what was the problem? Then we can go from there in terms of how you worked with them to resolve it and ultimately drive ROI.
SPEAKER_02Yeah, so this one's got two parts. I'll give you the setup, and then I'll let you ask me questions about it. A year ago, a manufacturing company based in California came to us. The CEO had gotten really excited about the potential of generative AI and knew he wanted to leverage it. He's a forward-thinking CEO looking to modernize the business in lots of ways; it's been around for 70 years at this point. Generative AI was one of the many modernization levers he wanted to pull. So he reached out to us. I think he was our sixth or seventh kickstart, so this was quite some time ago, and he had a really good experience. In the second half of last year, his team did a ton of work. Probably six or seven people in the office really leaned in hard. They built themselves a ton of Gems; it's pretty amazing what they did for themselves. And by the end of last year, they kind of hit the wall of what they could do on their own, a combination of some of the limitations of the technology itself and the fact that they're not software engineers, so there were some things they just weren't able to do. To give you a really specific example: they've got 40,000 SKUs in their catalog. They're a custom manufacturer and do lots of highly specific work: we'll make five of something for you, then we'll make 5,000 of them. So they've got this massive catalog. The way they design their product is lots of custom specs that come into the business. They use this very old software program, and it spits some things out.
Somebody manually types that into another system, it advances the bill of materials, it does all the fancy stuff, and then they get a response to an RFQ at the end. The whole process is super inefficient. They have a target of 48-hour turns on RFQs that they almost never were able to hit. It's endless back and forth with the customer: oh, I forgot to ask you this, I forgot to ask you that, all that kind of stuff. So they came to us basically saying, look, we've built as much as we can with Gemini, but we can't deal with 40,000 examples. Can you unlock what happens next? So that's the setup. What did we do? The way to think about it is we first said, all right, what are the core Legos that you need in the business? And then, what can we make with this initial set of Legos we've constructed for you? This is gonna sound really silly, but one of the first things we did was take the output of one of these super old pieces of software they have to use in the business and turn it into JSON. All of a sudden, that unlocked so many different use cases, because now it's not a thing somebody's got to print out and type in and double-check; we can just take this artifact and use it in a bunch of different ways. We worked on this for about three and a half weeks.
We built out what we basically think of as a workbench, based inside of Claude Code, for this client. It can receive an RFQ, compare it to all of the prior parts this business has ever made, generate a draft bill of materials, notify people on the team, go from the draft bill of materials to an actual price that they would return to the customer, and then take a first pass at the actual drawings that someone on the manufacturing line is going to use to make the part before it goes out the door. This was project number one, in three and a half weeks, and it's very scrappy. We didn't do it for every single type of product they make. We took a few categories and followed them all the way across the workbench, all the stages from quote to production start, and figured out that this actually works. Now we're thinking about what to do next. Do we go very deep in one segment and do it for all the product types? Do we layer more Legos across the top? But the thing that's been really fun and exciting, and exciting for our client (we talked with him yesterday), is that he can start to see the vision of how, over time, as we continue to build up this kit of Legos, there might be some major software subscriptions in his business that he's not gonna have to keep anymore. Look, it's very early days, and some of this is also gonna rely on the models improving over time. But we're both very excited about his business's ability to almost surf the edge of capability improvement, given this kit-of-parts approach we've been taking.
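The "turn the old software's output into JSON" Lego can be illustrated with a toy parser. The transcript doesn't say what the legacy output looks like, so everything below, the fixed-width layout, field names, and sample lines, is invented for illustration; a real legacy report would need its own parsing rules.

```python
import json

# Hypothetical fixed-width layout for a legacy report line. The transcript
# only says the old software's output was converted to JSON; these field
# names and column offsets are invented examples.
LEGACY_LINE_LAYOUT = [      # (field, start, end) character offsets
    ("part_no", 0, 10),
    ("description", 10, 34),
    ("qty", 34, 40),
    ("unit_cost", 40, 50),
]

def parse_legacy_report(text: str) -> list[dict]:
    """Turn fixed-width report lines into a list of dicts (JSON-ready)."""
    rows = []
    for line in text.splitlines():
        if not line.strip():
            continue
        row = {f: line[a:b].strip() for f, a, b in LEGACY_LINE_LAYOUT}
        row["qty"] = int(row["qty"])            # structured, not retyped by hand
        row["unit_cost"] = float(row["unit_cost"])
        rows.append(row)
    return rows

# Build two sample lines with the exact widths the layout expects.
report = (
    f"{'PN-00417':<10}{'M6 hex bolt, zinc':<24}{500:>6}{0.085:>10}\n"
    f"{'PN-09112':<10}{'custom bracket, 5mm alu':<24}{5:>6}{12.40:>10}\n"
)
print(json.dumps(parse_legacy_report(report), indent=2))
```

Once the output is JSON, every downstream step, catalog comparison, draft BOM, pricing, can consume the same artifact instead of a printout that someone retypes.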
SPEAKER_01Who did you involve from the organization in the design and scoping of this solution?
SPEAKER_02Yeah. As with most of our engagements, we're working directly with the CEO, who's also the owner here. For what it's worth, we almost always insist on working with the full executive leadership team, because we just find a lot of value in that. But we've also interviewed a bunch of engineers on the team, and folks who have some of the gritty, not-super-fun, moving-information-around-the-office jobs: people who are literally typing some of these things in when they get printed out. We got a ton of examples from them. And then we had the really fun moment of having this workbench produce things that the CEO then had to take to engineers to grade: hey, is this plausible? Is it hallucinating? How close is this to what a human could do? And we're passing; the engineers are giving us their sign of approval so far and feel really good about it. There's lots more work to do, and it's early days, but I think it's gone well because we've been able to engage with everyone from executive leadership all the way down to frontline staff, understand their needs, and then build this kit of Legos that can meet lots of people's needs. We've got everything from here's how you could deploy this to the website as a conversational interface that gathers enough information to start the RFQ, all the way to here's how you could use it to take a first draft of the drawing that the person on the line is actually gonna use.
SPEAKER_01And in terms of the impact on this manufacturing business, what did they see? Reduced time per order, reduced engineering time? What did that look like?
SPEAKER_02I would say there are two champion impacts: clock speed and person-time per project. The overall clock speed on RFQs is getting much, much faster, in large part because they don't have all that back and forth going on. The second thing is that the number of RFQs any one engineer or any one person on their team can hold at any one time is also going up, because it takes less person-time for any individual request, and they can do the entire process in less clock time. So it's a bit of both. Before, they were equally challenged on both of those things: there were ways they could get things out the door fast, but it still took a lot of people time. They've been able to cut both almost in half already. Not for everything, just for the kinds of parts we've been looking at, but we can keep expanding across their entire, really wide catalog. They do everything from manufacturing things that are this big to things that would be bigger than the room I'm in; it's a really interesting category they play in. We started with a very specific subset to make sure everything works, and now we have to do that iterative looping to keep expanding the breadth of the catalog.
SPEAKER_01Yeah. And in terms of the engineers' time: they're spending less time responding to quotes, and they can presumably respond to more quotes than before, but they're probably also freed up for higher-value work. Where are they reallocating?
SPEAKER_02I'll give you slightly different metrics, too. In this instance, the business is in the fortunate position of experiencing a lot of demand right now. So his question is: do I automate it, or do I hire another human being? So far, he's been able to stave off hiring additional humans in the office, and that's really his goal. It's a privately held business; he's accountable to himself, and he's got kids he wants to bring into the business in the next few years. So he isn't interested in automating everything down to just one person. He wants a business that looks like this one, but if he can divorce the relationship between revenue growth and headcount growth, he's very excited about that. His vision is: maybe I never have to net-hire another head in the business moving forward, but in a few years I could double or triple revenue. We'll see if we can help him get there; there are a lot of things he's got to do beyond just AI automation. To say it bluntly, this is the thing I love about working with small and medium-sized businesses, especially those that are privately held: they are not thinking about what they're going to do next quarter. They're really thinking about the legacy they're gonna build with this business. And so many of them want the same thing: can I stay about the same size, but grow revenue not directly tied to headcount? That's just an exciting space to play in for Jason and me.
SPEAKER_01Yeah. Let's transition the conversation to agents. You've been increasingly focused on helping organizations make their businesses agent legible. Tell me about what that means.
SPEAKER_02Yeah. So last year, I think there was this explosion of linear workflow builders that would use AI at some point in the process, things like n8n and Make.com and Relay. There were a million of them, and I'm not being critical of them; they're super useful and helpful. But in hindsight, I now understand all of those tools as essentially making generative AI run the obstacle course that is a business built for humans. And what happens when we do that? We burn a ton of tokens on tool usage: generative AI figuring out how to get to my repository, or going into my Google Docs, finding a document, opening it up, and reading it. None of those tokens actually advance what I'm trying to do. They're just taking up space in the context window. So they're costing me tokens, they're slowing down the overall process because the AI has to engage with a whole bunch of different things, and they're taking valuable context-window space off task from the actual cognitive work I wanted done. There are these three harms that come from it. So what we've been thinking about a lot is: instead of taking the business as built for humans and making our AI run that obstacle course, how can we radically change the structure of the information that is the business itself, so that it is agent legible from the beginning? And the idea, which is so simple to say and ridiculously hard in practice, is this: take everything your business does, describe it in a set of well-organized, consistent-language markdown files, put it into a nice directory structure, and have clear process diagrams. That's it. That's literally all you have to do. And that is so easy to say, and wow, is it hard for almost any business.
We're in the process of doing this for ourselves: we're moving to markdown, we're changing things around. We've got a couple of parts of the business that are fully agent legible, and it is mind-blowing some of the stuff you can do. You can just say, go look over there, and it comes back: great, I found everything I needed. It is a whole different experience, and a bit of a nightmare to figure out how you go from point A to point B. We're doing that for ourselves now, and there are a lot of companies that I think need a ton of help; I have a hunch this is gonna be a lot of what our business looks like later this year, and especially into next year. There are just so many advantages when you make your business agent legible at a really native level; you unlock so much more capability. We've experienced it, and we want to help everybody else experience it too. Jason, please build on this.
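What "well-organized, consistent-language markdown files in a nice directory structure" might look like can be shown with a toy scaffold. The directory names, file names, and contents below are invented examples of the pattern, not a standard Remix prescribes.

```python
from pathlib import Path

# A toy "agent-legible" skeleton: every process, role, and term described in
# a predictable markdown location an agent can navigate by reading files.
# All names and contents here are illustrative, not a prescribed layout.
STRUCTURE = {
    "company/README.md": "# Acme Co\nWhat we do, who our customers are, how we win.\n",
    "processes/rfq-intake.md": (
        "# RFQ intake\n\n"
        "1. Customer request arrives via email or web form.\n"
        "2. Specs are extracted and checked against the catalog.\n"
        "3. Draft bill of materials goes to an engineer for review.\n"
    ),
    "roles/estimator.md": "# Estimator\nOwns pricing. Escalates non-standard parts.\n",
    "glossary.md": "# Glossary\n- **RFQ**: request for quote\n- **BOM**: bill of materials\n",
}

def scaffold(root: Path) -> None:
    """Write the markdown skeleton so an agent can find answers by path."""
    for rel, body in STRUCTURE.items():
        path = root / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(body, encoding="utf-8")

scaffold(Path("agent_legible_demo"))
for p in sorted(Path("agent_legible_demo").rglob("*.md")):
    print(p)
```

The payoff described in the conversation comes from the predictability: an agent told "go look in processes/" spends its context window on the content, not on hunting for it.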
SPEAKER_00Yeah, and Justin, as always, you've painted a wonderful view. Now, there's a journey involved, and it starts with something really simple: record everything you say and do, and put it into a folder that your friendly neighborhood Gen AI can see. Just that alone is transformational. But then it becomes, depending on how you think about data and how comfortable you are with your internal file sharing and so forth: make the folders available that you want to make available versus the ones that you don't. Make sure you give very clear permissions for what can be looked at and what can't, and what access to calendar versus email you're gonna allow. So being agent legible means unlocking everything you're comfortable unlocking, bolting down what you're not, and then making sure you give very clear instruction, because the agents don't get it right from day one. We heard a story the other day: there was a YouTube video floating around where a researcher from one of the big AI labs was watching in real time as the same model she was working on was literally deleting her hard drive, and she had to run to her computer and unplug it from the wall to get it to stop. And this is someone who builds this stuff. So becoming agent legible: start with the basics and work your way up. Get to a point where you can say, hey, now we're gonna deliver files in markdown. Why? Because that's the simplest format. That's the HTML of Gen AI. That's the place to get to. But there's a journey to go on to get there, and don't be afraid to ask the hard questions of the Gen AI: have it explain what it's doing, what it thinks it wants to do, and why.
SPEAKER_02Yeah. This year, Stu, every document that we've created in the business, Gen AI has started. We don't create documents anymore; we edit documents. The first two weeks, that was miserable. It was so much less efficient than me just creating the documents. By about week three, it was about the same time. Now, for this entire last month, it is so much faster it's ridiculous. But you've got to be willing to put in those cycles in the beginning. It's like a junior employee: you've got to give it the feedback, give it the coaching, invest the time. And then all of a sudden it can perform well, and your whole business is different.
SPEAKER_01Yeah. When it comes to that journey, understanding every company is different, can you provide a little more color on what that looks like? For example, you mentioned talking about what you do every day and turning that into a markdown document, or mapping out what the business does. There's a lot there. What am I talking about with my day? Am I just walking through everything I do? What's best practice for, say, the leader of a 50-person manufacturing company? Where do I start?
SPEAKER_02This may sound a little weird, but here's what I would recommend: imagine you are about to onboard a new employee, but you have a funny constraint, which is that you are never gonna be able to talk to this person synchronously. You've got to figure out a way to onboard this new person into a role in the business solely through a set of documentation. You're never gonna get to have a conversation; they've got to be able to read a manual and go exclusively off of that manual. When people start, job descriptions and existing employee onboarding material are amazing places to begin. You've probably got some existing software in the business that you can use AI to help you make process diagrams from; that's another wonderful place to start. We like to tell people: use an AI model, turn on voice transcription, record your screen while clicking through it, and give a director's commentary on what it is and how things work together. I don't think there's any one way to do this, Stu. But to me, the mental model people should have is: this is basically a new employee, and they are ridiculously book smart and also kind of weirdly street gullible. So how am I gonna onboard this person into my business when the only way I can ever communicate with them is sending them documents that they're gonna read on their own? I know that's kind of weird, but that's how I would think about it. If you work from that constraint, and there's a lot of devil in the details here, like some of the stuff Jason and I are doing now, but if you start from that relatively high level, you're gonna end up in a pretty good place. What I will warn people is that this is a lot more work than you think it is.
There are a lot of things, especially in small businesses, that are way less process and way more hierarchy and individual judgment calls. That's the stuff where you really have to spend time and work through it. But starting from that idea, let me take one process and richly describe it, is a great place for people to get going.
SPEAKER_00Yeah. The other thing to add on to what Justin said, and I think that's an awesome example, Justin, is that every business, outside of email, calendar, and messaging, typically has a core application the business runs on. Without going through all the options, what I would say is: plug your AI into that application. For the big AIs, there are all sorts of connectors out there; heck, you can even have your agents write a connector, and they will do that for you. Plug your work application, or two if you have more than one, into your Gen AI and start to interrogate it. So, building on what I advised before about recording everything: plug into your most critical work application and view it as this incredible data source where you can start to ask questions, and your LLM will remember what you asked and get smarter and smarter, as it's supposed to, about what you're asking it to do and what you're asking it to do again. Just see what happens, because you'll be kind of amazed at what you'll see. So those are the three things: what Justin said about onboarding that new person to your company, record everything, and plug into your core work application and just play around a bit.
SPEAKER_01What's gonna happen with companies who aren't doing that experimentation today? What do they look like in two years?
SPEAKER_00They're gonna spend a fortune to hire people like us, or the big consulting firms, or their competitors. Justin can talk about software factories: at some point, software is gonna be built and delivered and serviced automatically. I think what's gonna happen is you're gonna see the next generation of startups get created by agents, and you're gonna start to see, forget SaaSpocalypse, true disruption. Right now, it's Wall Street disruption. Just wait. So for companies that don't start to experiment and develop a vocabulary, I think they are really disadvantaging themselves for the future. But Justin, please add to that.
SPEAKER_02Yeah. So, on the shelf behind me here is a copy of the book Playing to Win, the classic strategy book. The canonical wisdom in Playing to Win is that businesses should compete on either differentiation or low cost, but never both, because historically, companies that compete on both lose market position and diminish their returns. And the reason you don't compete on both is that the capabilities to be low cost are very different from the capabilities to be differentiated, and it's been hard for businesses to hold both sets of capabilities. My point of view is that in an AI paradigm, every business must simultaneously compete on both. You must be differentiated while simultaneously being low price. And so my concern is that for businesses that have historically won because of differentiation, experimenting with how to cut cost out of what they do is anathema to how they understand their business. If they don't start actively thinking about it now, then a year from now, when their low-cost competitor can play the differentiation game as well as they can, or when some brand-new entrant is both low cost and differentiated in ways that none of the incumbents can handle, they are going to be in an incredibly uncomfortable competitive situation. Businesses that put this off are, a year or two from now, going to be in a world of pain, because they are going to have a cost basis informing the price of their offers that the rest of the market is going to radically undercut and be better at.
SPEAKER_00Yeah. And think about college graduates, community college graduates, grad school grads. They were blindsided over the past year. Having a college graduate in my house, from a prominent university, I can tell you firsthand: they are blindsided. Half of them don't have jobs, because the universities didn't know how to react as fast as the market developed over the past 12 months. So to your question, Stu, about what will happen to those companies: these college students are now starting to lean in. Companies are getting smarter, and the ones that are AI forward are requiring candidates to write prompts in their job interviews. All the resumes are analyzed by the same tools and will start to look and sound the same. But watching someone solve a problem in 20 minutes, real time, with a prompt, that's new. So I'd say the companies that aren't anticipating the implications across the workforce are really doing themselves a disservice, because the next generation of workers is already here, and they're the ones who are actually going to transform the business from the inside, hopefully with the support of the CEO and the executive team. The people having some of the biggest impact are people with nothing to lose, fresh out of school, who love this technology because it is so fun. They have nothing to lose by taking on the company's biggest problems. And that's what's really going to transform these organizations.
SPEAKER_01That's a great place to end. Gentlemen, this was super fun. We'll have to do it again. Very much appreciate you two coming by Just Curious.
SPEAKER_02Let's make it happen. It's been a blast, man. It's weird to think what's changed over the last year. And if we do this again, hopefully in less than a year, but if we do this again next year, it's going to be wild.
SPEAKER_01Yeah. I'm gonna put it in my calendar. All right, see you guys. Thank you.