
ExperiMENTAL | Smarter Marketing Starts Here
For B2C marketers, founders, and analysts who are tired of surface-level advice and ready to cut through the noise with smarter, data-informed decisions. Host Sundar Swaminathan, former Head of Brand Data Science at Uber and creator of the ExperiMENTAL newsletter, shares real-world insights, ROI breakdowns, and growth strategies from leaders at companies like Uber, Google, and Faire.
Each episode helps you move from guesswork to grounded strategy so you can drive impact, prove marketing value, and lead with confidence.
If you're ready to think critically, test boldly, and grow with clarity, ExperiMENTAL is your bi-weekly dose of thought-provoking, data-savvy marketing wisdom.
New episodes every second Thursday on Apple Podcasts, Spotify, and YouTube.
Assumption Audit: The Step Most Marketers Skip | ExperiMENTAL Ep. 6
Most marketing failures don’t stem from bad creative, poor media buys, or weak strategy. They start earlier, at the assumption level. In this episode of ExperiMENTAL, Sundar sits down with Oliver Raskin, Head of Research at Miro and former insights lead at Meta and Uber, to break down why assumptions kill great ideas and how marketers can audit them before launching campaigns. They explore the difference between optimism and magical thinking, why misalignment festers in kickoff meetings, and how to rewire processes to avoid costly mistakes. If you’ve ever wondered why “obvious” campaigns flop, this episode is a must-listen.
Key Takeaways
• Research is part inspo, part reality check, part insurance policy
• Most marketing starts by solutioning instead of defining the real problem
• Assumptions around audience, channels, and timing go unchecked
• Kickoffs are the hidden leverage point in campaign success
• "Pay now or pay later" is the cost of skipping alignment
• Research isn’t slow, bad process is
• Clarity of goal always surfaces hidden disagreements
• Synthetic audiences and qualitative at scale are changing research
• Surveys are dying, and that’s a good thing
• Messaging must map clearly from assertion to belief
Best Moments
00:02:20. “This role is part inspo, part reality check, and part like insurance policy.”
00:04:52. “Each piece that you’ve made a wrong assumption on chips away at your likelihood of being successful.”
00:07:18. “These are people problems, not data problems.”
00:09:10. “You relitigate these decisions over and over again because they probably needed to have been litigated properly once.”
00:11:12. “We’re already solutioning at this point.”
00:17:05. “Be explicit about all the things that need to be true in order for that outcome to be reached.”
00:28:47. “I’ll put you on the right trail. You’ll have to do a lot of iteration to get up the mountain, but at least you’re climbing the right mountain.”
00:41:20. “It’s the why, what, and how of your pyramid.”
00:54:55. “Take the time to make your assumptions explicit. It might feel uncomfortable, but it’s worth it.”
🎧 ExperiMENTAL is hosted by Sundar Swaminathan, Head of Data Science at Bounce and former Uber leader. This show is your behind-the-scenes look at how top marketers and data scientists make smarter decisions.
🧠 Expect unfiltered conversations, mental models, and case studies that help you cut waste, build conviction, and grow your B2C business.
📬 Subscribe to the ExperiMENTAL newsletter for deep dives and frameworks: https://experimental.beehiiv.com/
This podcast has been brought to you by APodcastGeek. https://www.apodcastgeek.com
A lot of that masks, or really is a symptom of, a lack of alignment on why we're doing it, and essentially a lot of the assumptions that are baked into that. We're debating the solution, but what we should really be doing is debating the problem. There's always this bias towards being optimistic, but that can mutate into magical thinking, and that's where the seeds of the project's demise may have already been sown, because we're already solutioning at this point. And this isn't, by the way, limited to marketing, this notion that we haven't really unpacked the problem. And by unpack the problem, I really mean...
Welcome to ExperiMENTAL, the podcast that cuts through the noise to bring you actionable insights in B2C growth, marketing, and data science. I'm your host, Sundar, former Head of Brand Data Science at Uber and the mind behind the ExperiMENTAL newsletter. Join me as I talk with industry leaders who have driven growth at companies like Uber, Spotify, and Netflix. We'll uncover the experiments, failures, and breakthroughs that lead to real results. Now let's get experimental.
Hey, Oliver, welcome to the podcast. Thank you so much for joining me today.
Hey, it's good to be here.
It's great to have you. You've got an incredible background in research across some of the biggest consumer companies, so I think we have a lot to jump into. First off, I'd love to just start with: what is research and insights as a function? How does that fit into most companies?
That's a great question. I think of research and insights as part inspo, part reality check, and part de-risking, I suppose, and I can get into any of those. But those are the major benefits, I think, that research and insights delivers for any organization, and then the way it shows up depends on the situation.
Okay, cool. So you've worked at some of the largest consumer companies leading their research efforts, and I think it's really valuable to have a research perspective on this because it's a rare one. First off, what do you see as the role of research and insights at a company?
That's a great question. This may sound a little bit glib, but I think about this role as being part inspo, for product or for creative, part reality check, and then part, like, insurance policy.
Yeah. Okay. I think we can get into all three of those. Let's start with insurance policy and reality check, because I think those are pretty interesting ways to frame research. When we were chatting before, you mentioned that you have to be explicit about unpacking hidden assumptions, and that research can do that. First, let's start with: what are these hidden assumptions that marketing teams or product teams are making?
Well, maybe I'll stick with marketing for the moment; it's such a massive space. The thing that I've observed is, well, first of all, as a researcher, the job, I think, selects for folks who are observers. We really like to watch things unfold and try to make sense of what's going on. And one of the things that I've noticed consistently through my career is that, in the room, as we're trying to figure out what it is that we're doing, or to get alignment,
there's also a lot of debate happening around what we're doing. But a lot of that masks, not masks, it's the symptom of, a lack of alignment on why we're doing it, and essentially a lot of the assumptions that are baked into that. So I think that's my first observation, and what clued me in that there's probably something to this: we're debating the solution, but what we should really be doing is debating the problem, to some extent.
In terms of your question, what are some of those assumptions that are going in that are hidden? Gosh, there are so many. From "why is this the goal that we've set for ourselves?" to, thinking specifically about marketing, "are these the right channels to make the impact?" Is this the right audience that we need to be going after? Can we make this entire program ROI positive? Are people able to take action on the thing we're asking them to take action on? Is the value proposition compelling enough? Is the value of the product strong enough? All of these different things have to line up in order to reach your successful outcome. And those assumptions are going to be true to some degree, but when you add them all up, each piece that you've made a wrong assumption on chips away at your likelihood of being successful at whatever it is you set out to achieve.
Do you have an example you're able to share about a time when you've been able to unlock these, and what the end result was versus what would have happened otherwise?
A lot of these tend to be things that you discover in retrospect, in a retro or just in hindsight in some way, whether or not you've conducted a formal retro. But for example, and I want to be pretty general here, in a past role, when we set the goal for adoption, there were a bunch of assumptions around how big the addressable market was, what the adoption rate was going to be, what the timeline for a change to occur would be, and also what all the interactions and group dynamics were that needed to take place, which were largely assumed to work themselves out. And naturally, every one of those assumptions was violated to some degree, and they all compounded, meaning you ended up with marketing campaigns that fell short of their ROI ambitions, and ultimately the business result was missed too.
Yeah, it's interesting, because a lot of your experience is at companies like Meta and Uber, and now Miro, that are incredibly data-driven. So there's a ton of data, and arguably a ton of resources. In my mind there shouldn't be these assumptions. So where do you find that these assumptions come from? How do they enter? Where's the origin of these assumptions?
It's less about the origin of the assumptions, and more that those assumptions have never really been made explicit to begin with. I've been thinking about how that happens, because your point is well taken: I've worked in companies that have access to an incredible amount of information and an incredible potential to be very thorough in mapping and testing one's assumptions, but it still doesn't happen as consistently and as thoroughly as, in my opinion, it should.
I think there are a bunch of things that come together, but they're really people problems. One that seems to crop up a lot is that there's just, I guess, a cultural bias to action. It feels uncomfortable to not get into it, and so individuals don't want to be the one who says, "well, wait, what about...?" Everybody wants to get into it and get moving. There's probably PTSD around analysis paralysis, and you don't want to be that guy. I also think there's a very healthy cultural norm towards being optimistic, because taking on hard problems requires you to imagine that something is possible even when it's hard. If you indulge too much in all the challenges that are going to come up, sometimes you say, well, why would I even get started? So there's always this bias towards being optimistic, but that can mutate into magical thinking, and that's, I think, where the problem is. And then there's this idea, I'm trying to remember the name of the term, that somebody must have thought this through already, so I'm not going to be the one to say, "did you think about this?"
Is that kind of like the bystander effect?
Yeah, that's exactly it. The idea that it's somebody else's problem, or somebody else must have taken care of it, so I won't. And it's often in these social moments, right? We're at a project kickoff, or the thing is already leaving the station and you've kind of missed your moment, and only later do you realize it. And this is why I think you want to invest more on this front end: when these assumptions have not been made explicit and people have not gotten on board, they inevitably crop up later and slow everything down again. That's this idea that you relitigate these decisions over and over again because they probably needed to have been litigated properly once. So you're going to pay now or pay later. And I think that's why it happens. It's not for lack of information; it's the social dynamics around trying to get something going, and your own self-doubt over whether or not you've got enough to get moving.
Yeah, and that's a really interesting thing to think about: so many of these companies, quote unquote, made it because of their bias for action, but they reach a point where it's arguably better for them to slow down a bit and be a little more deliberate. It sounds like a process thing that could be improved. Walk me through an average example of what the normal process looks like, especially for a marketing brief or marketing campaign, and then how you make it better by unlocking these assumptions.
Normal will sound a little bit dysfunctional, but maybe you'll tell me if this sounds familiar or not; I would love to get your perspective as well. Let's start with this: many projects don't start solution-neutral. They tend to be started by somebody who has a particular functional discipline or expertise.
I call it a hammers-looking-for-nails kind of problem, where there's already a presumption that the project requires a particular form of solution. To be more concrete: we start the project with "so we're going to do an ad campaign," right? Where I would argue you want to take a few steps back from "so we're going to do an ad campaign" and say, "so we need to move [insert business outcome] by some degree." You might also want to unpack why that's the goal, but let's just start there. In many projects we start with "so we're going to do an ad campaign," and now, with that as ground truth, we begin to execute everything else. In my opinion, that's where the seeds of the project's demise may have already been sown, because we're already solutioning at this point. And this isn't, by the way, limited to marketing, this notion that we haven't really unpacked the problem. When I say unpack the problem, I really mean be explicit about all of the things we are assuming that are going to drive us to make decisions to move something forward, whatever it is. I'm speaking very generically here.
So it'll start with that, and then we start making it. And at that point you say, "and by the way, our audience is this." When we say audience, in my experience there's a very broad spectrum in the precision with which that audience has been defined. There are channels that have already been predetermined: "we're going to do an ad campaign, and it's going to be a big television campaign that runs nationally." And the answer to why, for all of those, may have been a top-down mandate, or an assumption because that's what we've always done, or it just feels right. Maybe those are good calls, maybe they're not, but the assumptions that went into them have never been made explicit. And since we can't see them, we can't test them in some form. We can talk about what I mean by test, because it doesn't always mean run an experiment or do market research; it just means let's at least apply some intellectual rigor, expose them to daylight, and make sure we all feel like they hold up.
So that's how a project gets started, I would say. And then we're making plans that are closer to project planning, dependency mapping, and so forth. But we're really just getting into executional, logistical planning, which is all super important, but it's not necessarily strategic, and it's not necessarily where a lot of the critical decisions had to be made. You've already presumed a bunch of those decisions; now we're talking about execution. So we've kind of put the train on the tracks and now we're executing, but there were probably some important conversations to have prior to that.
Now take it to the other point: in your mind, knowing you have to balance speed with being a little bit more deliberate, what does a better process look like to you?
Yeah.
One of the things that I'm loving that we're doing at Miro now, because I don't think any company is immune to this, for all the reasons I described, is being very intentional around project kickoffs. In my opinion, the kickoff, and even the pre-kickoff, I would say, are probably the most important steps you can have, which is essentially laying out the theory of the case. So getting very clear on what the business outcome is, but clear with precision. Rather than, and I'll give a generic example, in conversation we might say something like "we need to get customers to think about Miro in this new way." And you're like, wow, that sounds like a great ambition, really important, and I understand generally what you're saying, but it doesn't give us grounding to truly execute on it, because there are a lot of implications in how you sharpen that statement. When we say customers, do you mean current customers, prospective customers, which segments? Is it everybody, or are there particular segments that are more important than others? Because those might have implications for channel selection downstream. I think you probably take my point. So you start by getting really precise and sweating the words. I believe in sweating the words everywhere, because the precision with which you define a problem, or a solution, or an assumption for that matter, can have real impact downstream. So get really precise on your goal, and unpack why it is what it is.
Then you have to be explicit about the assumptions you are making about whatever it is you're putting into that kickoff, whether it's the channels you're assuming, or the timeline that you believe this needs, or that you're signing up for. Essentially, be explicit about all the things that need to have happened, or all the things that need to be true, in order for that outcome to be reached.
These are things that I've always done instinctively, and then later I discovered that very large, very smart organizations have been doing the same thing in far more codified ways. One of them is this idea of starting with the outcome and working backwards. Amazon does this; I believe it's the PR FAQ, where you start by painting the picture, and then you work backwards through all of the things that would have needed to happen in order for that desired future to be true. I really like this because people tend to work forwards and then ignore the stuff in the middle. I think it's easier to start with the outcome that everybody is excited about and then, bit by bit, work backwards through all of the nitty-gritty things that may be difficult to think about, maybe not quite as interesting, or that feel hard. You're at least starting with this really desirable end state that captures the imagination and gets the team engaged on the outcome. And by the way, if nobody's engaged and excited to make that reality true, then you're also in trouble. That's another good signal: if you're setting a goal that's not truly aspirational and nobody believes in the vision, that's its own set of problems.
But assuming you do, starting with something that's exciting, that everybody wants to see happen, and then working backwards can be a really valuable way to unpack all the dependencies and the assumptions that underlie them, to make them real. So in an ideal world, you've done this as part of your kickoff.
Yeah. I was going to say, that alone feels like a really great first step that many companies miss: just, what is the objective, what is the goal? You often see this in briefs that are not aligned. In your view, is that alone one of the most impactful ways to force assumptions out, just by saying, hey, let's align on what we want to do this for?
Yeah. I find that something as simple as "this is what we're trying to do" has been really useful in goal-setting processes at Miro; it's a place where many things tend to get fleshed out. There's a lot of very healthy debate on the goal: why this goal versus another goal, and how those things deliver on whatever ultimate outcome we're trying to drive that lives higher up. So naturally OKRs and so forth should be laddering up, and I think it's a place where many healthy tensions get worked out. So yeah, absolutely. It reminds me of another point, which is that companies, probably again because of this bias to action, in my opinion need to hold more space for this kind of conversation and this kind of planning process to happen, to normalize it, to codify it, to make it okay. Was it Einstein who said something like, "I would spend 95% of my time trying to define the problem and 5% solving it"? That may be one of those classic misattributed sayings, but I think it sounds right, because once you've really unpacked it and gotten everybody on board, so many other things go so much more quickly.
Yeah. And just so you know, this is the exact same thing that happens in data science. Aligning on the priority and what the actual outcome of the analysis should be makes the analysis significantly more laser-focused. So it's interesting to see this pattern repeat again and again across functions. I want to quickly ask, because you've been brought into multiple companies now that clearly see the value of this function: do you feel like they're actually leveraging research and insights to the best of their ability? Or is this something they just say but don't do well? What's your take on the effectiveness of research and insights at these organizations?
I think it's variable. It's like a muscle that you develop. I've been in companies that have made the transition from, I would call it, research-averse to agnostic to addicted. Addicted is not a good place either, and averse is not a good place, but there's a place where you develop a healthy relationship with the role of insights. And again, I believe deeply in this idea that it's part inspo, part reality check, and part insurance policy. It's variable. The biggest challenge I've found with insights is not a dismissal of insights. It's, and I think research owns this problem as much as the organization does, more so even, that we can be a bottleneck.
We can slow things down at moments when the organization is ready to go and needs to go. So we have to find ways to, I had a colleague who would talk about this idea of, intersect the decision. You can't miss it. You have to be very realistic about what your window is to impact a decision, and recognize that the business isn't going to stop and wait for you. So take your shot the moment you've got it. And sometimes, to torture a sports metaphor, you're going to have to let some pitches go by, because you can't get them all. Sometimes the first question you have to ask yourself is, can I actually help in the window I have available, or should I wait for the next pitch?
Yeah. Sorry, keep going, that's a great analogy.
Yeah, I know my wife hates it when I torture sports metaphors, because the truth is I'm not really an avid sports fan, but sports does have fantastic metaphors. So moving fast there is very important; I think we have to meet the business there. That's one. Two is, and this is another problem that I believe research and insights owns as much as the business does, the business doesn't necessarily know how to reconcile conflicting signals. You hear one thing from here, you see another signal that says a different thing over there, a third thing over here, and your gut tells you a fourth thing. Ultimately you need to make a call under pressure and move forward. What do you do with all of that information? Sometimes the easiest thing is to ignore all of it and go with your gut, because at the end of the day a call needs to be made. And as researchers we can tend to forget that ultimately we don't live and die on our own recommendations. I'd be very curious whether a researcher would be willing to sign their job up for the outcomes of a finding or a recommendation; I'd be willing to wager that not too many would. But the reality is the people that you are supporting can be in those positions. So give a little grace to the organization that has to make hard calls and live with them.
So yeah, I would say it's variable across organizations. We just did research showing that organizations continue to invest in insights and analytics and consider data and insights to be a critical competency for their success, and most of them feel like they're not doing enough and want to be doing more. It's really about the how. And by the way, I think AI is transforming this entire space, just like everything else.
Oh, I'm going to ask you to pause there, because we're going to get to that in a bit; I would love to hear your take on it. Two questions, so I don't forget them. What does research-addicted look like? And then also, a bit around the insurance policy, or de-risking. I think there's an assumption that de-risking means zero risk, but as we know it means less risk. How would you approach framing what the research is doing to the risk?
All right, let me take the first one, which was what does research-addicted look like. Research-addicted looks like you're afraid to make a move without testing, without validating, without comprehensive research.
And some of that ends up being what I guess we would call CYA kinds of work. And it kills me, because at the end of the day what I want to see is a business moving fast, and if I'm the reason a business is moving slowly... The biggest enemy, at the end of the day, is losing speed, becoming irrelevant, and getting bogged down. The organization needs momentum to keep moving, and not only that, you're in an impatient market with aggressive competitors, so speed is really important. Addiction is when you end up spending more and more time trying to get risk down to zero, sometimes organizational risk, sometimes personal risk; that's where the CYA comes from. And if the outcomes aren't any better or different, then unfortunately you have no idea in the moment whether or not you've removed risk, or whether it was a good call to invest in that research. These things tend to be recognized in hindsight, after a lot of things have taken place.
So thinking about risk: how do I talk about de-risking with my partners? Well, there are two things. One is that ultimately the best sources of data are not the kinds of work I tend to do, which is primary research and discovery and validation work; those aren't generally behavioral experiments. The best sources of data are experimental in nature. You've got to run it, try it, see what actually happens in the real world with real money, with real circumstances, with a highly powered test. I was very spoiled working at Meta, where you had the ability to deploy highly powered, precise tests and get data back very quickly, and that data ended up being very predictive of how things worked at scale. A lot of what we do in market research sits on a spectrum from highly speculative and qualitative to more precise, but at the end of the day the level of accuracy and precision is certainly less.
But the problem you can have is running all of these experiments off of a fundamentally flawed premise. You end up with, what's the term, a local optimum, a local maxima. Basically, you've discovered the best of a bunch of bad choices, when it turns out the good choices are someplace else. Research can help put you onto the route where you're climbing a higher peak than the one you're currently climbing, if I'm going to use a new metaphor. So part of this is saying, look, I'm going to de-risk this by ensuring that we don't spend a bunch of energy experimenting and burning cycles on something that ultimately is not going to get us where we need to be. I'll put you on the right trail. You're going to have to do a lot of iteration and experimentation to get all the way up the mountain, but at least you know you're climbing the right mountain, or at least you're on the right face to get to the highest peak, whereas you might have ended up someplace else if you hadn't done this research. That's one area.
Another one, and I find this to be quite self-evident once we've done some work with a client, especially in messaging and in the way customers react to things, is that we're often not the customer ourselves.
Also, the customer is bringing with them so much experience and context that we don't necessarily have, and those gaps make it very difficult for us to predict how they will respond to something: to our messaging, to our value prop, to the creative we used to deliver that message. Simply taking the time to get feedback can be incredibly illuminating, and more often than not you will discover, "oh, I was wrong about that," or "I didn't see that coming."
Yeah. So it goes back to assumptions again.
Yeah, absolutely. There are so many kinds of risk we could talk about. But I was thinking about assumption mapping. Are you familiar with the idea of assumption mapping from the product world? It's quite inspired by Teresa Torres, a product coach who has been a champion of this idea of continuous product discovery in the product space. She talks a lot about various forms of assumption mapping, mapping the forms of risk, and about assumptions around desirability, viability, and, I think it's, feasibility. Desirability is the idea that we're making assumptions about whether people want this thing. Feasibility is, could we build it? Do we have the technology? Does the metal exist that we want to make this out of? Do we have the right scientists? All that stuff. And viability is, does this create more value than it costs us in some way or another, most often, is this thing going to be ROI positive? I think that can apply in marketing too.
But then, thinking it through, you want to unpack more things: market and audience assumptions, behavior assumptions, the way people make choices, their ability to buy or their ability to adopt. Decision and purchase cycles are really important, because you might be signing up to move a number or make something happen in six months, and it turns out the purchase cycle for whatever we're trying to drive is a year. If you want to move things in six months and the purchase takes a year to happen, well, you're already set up to fail from day zero. Then you've got brand assumptions: do we have permission to serve this space? Do we have a reputation that will carry us from here to there? Are we trusted, those kinds of things? Are we even heard of, are we even known, in these circles? We talked about channel a little bit, and then messaging is the one that's quite near and dear to my heart: is what we're saying relevant, resonant, and credible? You can violate those assumptions at every step of the way, and you can see how they add up to not being able to deliver on the outcome.
There's a lot to unpack there, but something you brought up that I thought was very interesting, and I did want to get to, is the role of AI in all of this. Actually, sorry, pause again, we'll do the AI question later. So, unpacking this a little more: you've brought up messaging, and in a conversation before this we were talking about message and premise testing. Let's go a little deeper on that. What is message and premise testing?
How is it different from creative testing or other types of testing?
Yeah, awesome, one of my favorite topics. So we tend to do a lot of creative testing, copy testing, all that stuff. I feel like these are all part of the same chambered nautilus of the same problem, which is that we're often leaping past all the foundational things that set you up to be successful or not, and then only intervening when everything's pretty baked, which is a bad time to intervene. Copy and creative are the way you articulate and deliver the message, the argument, let's call it an argument for lack of a better term, to the customer. You do that by dressing it up in a story, by using imagery and semiotics and music and all of that. But that's the delivery device; it's the sugar that makes the medicine go down. Because we're all busy, and we're trying to intercept somebody in the middle of watching a football game. You can't just say, "please ingest my messaging framework now." It wouldn't work. So you have to do it in the form of a Super Bowl ad or whatever. That's the stuff we're testing.
But if all the things that went into that Super Bowl ad are fundamentally flawed, the most you're going to get out of that creative, if you're lucky, is some attention and a brand impression. They'll go, "I saw something, and it was for Geico." And sometimes that's the only goal of the campaign, to refresh memory and create mental availability, to make sure people know Geico exists, and job done. But most of the time you're trying to communicate something about the brand, or about the product being offered, to persuade people to think differently, to buy something, to take some sort of action. So this testing, message testing, all of these things are quite related; it's about making sure that you're starting from a sound foundation.
Just before we got on this call, I wanted to understand where some of these ideas came from, and they go way, way back. I'd heard this term bandied about and didn't know where it came from originally: there's a book called The Pyramid Principle, written by a woman named Barbara Minto, though maybe there are many parents to this idea. She was a partner at, I believe, McKinsey, I think in the 70s, and I think she was the first female partner at McKinsey. She had this whole protocol for structuring arguments, and consequently communicating ideas, based on this pyramid idea. You start with an assertion, and then you've got supporting arguments underneath it, and underneath that is evidence. It's not an uncommon hierarchy, and you'll see this idea everywhere, but at one point it was actually new. I think she credits maybe Socrates, actually, so maybe he's the parent of this entire idea of rhetoric. But her point was you need to start with an assertion, or a premise, same thing, right?
An assertion. And that assertion is supported by some arguments, and those arguments are supported by evidence, which makes them believable. That's the general idea. She has this premise that you must structure it this way, and that each level of the argument ladders up. It forces you to be really intentional about creating coherence at each level of the argument, and then supporting each piece with evidence and so forth. When you do all of that, it makes for a more structured and compelling point. And this is what you see in marketing, right? This is where you see "what is our value prop," and then you might have messaging pillars, and underneath those you might hear reasons to believe, or proof points. Same idea; we've just translated the words into marketing speak. We have a big idea, it's supported by some smaller assertions, and those are supported by some evidence, or claims.
The challenge we have is that a lot of times we haven't even taken the time to make that set of claims. So the creative itself, or the brief, if you think about it that way, may not even be based on any coherent set of ideas, and we also don't necessarily know that those ideas all hold up. So one thing you can do, and I find we get an awful lot of benefit from doing this, is testing at each level, though testing isn't quite the right word here: getting feedback on, pressure-testing, this message hierarchy, ideally with your target customer. And by the way, that's where getting really specific on who your customer is becomes important. If you have ambiguity there, everything downstream gets really hard to do, because even your research space becomes difficult to nail down. If somebody says, "we want to make customers think differently," you're like, well, great, but who am I going to go interview? Who am I going to validate this with? I need some constraint to focus on.
So start with the assertion. First of all, test that. You go, "if I told you that I could dot dot dot, do you care? Does that sound like a problem you have? Would you take five minutes to listen to the rest?" Those kinds of questions. So you're asking: do you care, is it relevant, is it relevant and resonant? At the end of the day, if the answer is "I don't know what you're talking about and I don't care," everything else is pointless. You don't have a fundamental idea that people care about; they don't care whether it's true or not, because they simply don't care. So start there and get clear on the assertion. Then you have to understand whether or not the supporting arguments logically deliver the benefit above. There's the idea of what the outcome is, and then there's how I am going to deliver that benefit, because as we go down the pyramid we're getting into increasing degrees of specificity, and they have to hold up the thing at the top. So you're then saying, okay, what if I told you this, this, and this? And people will say, those things sound good, but I don't get how, if you deliver on those, it gives me this outcome. I often find that to be the challenge.
They'll say, that sounds good, and that sounds good, and that sounds good, mom-and-apple-pie kinds of benefits, but I can't make the leap that if you deliver me those, I would get the outcome you're promising. And then below that you have evidence; that's the how. So the idea is the why, what, and how of your pyramid, and, no, I think I'm mixing up the hierarchy here. The labels we ascribe to them aren't as important to me as the idea that we're holding up this bigger premise at the top with increasing degrees of proof, or at least increasing specificity of argument, obviously tailored towards your target audience.
And what I've often found is that the root challenge is this: there's an assertion of a problem, actually, we're making a claim of a benefit, which presumes a problem. Sometimes that problem is made explicit in the benefit, sometimes it's not. You don't always have to, because if you've gotten it right, that's shared context with the audience; it's a dog whistle, right? The audience who has the problem hears the benefit and goes, "oh, that's the solution to my problem," if you've done it well. But oftentimes, the moment they hear the solution they go, "I'm interested," and then skepticism kicks in: "sounds great, how are you going to deliver that to me?" You say, well, here are the supporting arguments, and then... "I don't get it." The challenge is that there's what I would call a root cause linkage that needs to sit in between: between the problem that the top-level value proposition is addressing and the benefits that these pillars deliver.
Let me think of an example to make this more concrete; I don't want to get bogged down while I'm thinking about it, but we tend to make this magical leap, and it's a bit hand-wavy. I'll just use Miro as an example. The core benefits we have as a product are that we enable teams to work together more flexibly, more seamlessly, more collaboratively, with more energy and less friction, all of these things. So we'll talk about a lot of specific examples of how we do that, and then we'll make a claim at the top. One of the areas where we've repositioned, reposition is not the right word, where we've sharpened our positioning, is around being a workspace for innovation. And we've made these leaps between all of these core benefits, which everybody agrees we deliver and they want, and this outcome of innovation, which people also want, without making the connection to the root cause challenges around innovation that our product actually solves for quite well. So people are left floating in between, trying to figure out how you get from there to here, and that weakens the overall argument, even though many of these elements are quite compelling individually and collectively. The whole thing was not laddering up. So that's the secret: get really explicit.
And in your testing, this is what you'll find out: things that were perhaps well understood internally, that we thought didn't need to be made explicit, do indeed need to be articulated at some level of the messaging hierarchy and then, consequently, communicated in our marketing itself.
Yeah. So it's really more an extension of how we don't necessarily need new products or new solutions; it's delivering the solutions we have, connected to the outcomes a customer would want, in the right format, which sometimes seems to be missing, to your point earlier. That's really cool. There's a lot to unpack there, but I know we're approaching time, so I wanted to do a quick pivot to your perspective on AI and research, and also just what the future of research and insights holds, and we'll keep it to 2025; you don't have to go beyond 2025.
Okay. Let's start with my perspective on AI and research. I'm a tech nerd and an optimist in general. I'm fascinated and blown away by the pace of development and change, and I would say that anybody taking a Luddite perspective on this is going to have a terribly rude awakening. I actually told my team as much. I said, here's the truth: if you see a stampede of horses running at you, you have two choices. One is to close your eyes and hope you don't get run over; the other is to figure out some way to grab one of them, get on its back, and ride. I'm certainly taking the latter approach, because, one, I don't believe I'm lucky enough to not get trampled in the process, and two, these are pretty amazing horses, and I'm astounded at how it's helping me every day in ways I hadn't imagined.
The biggest change I'm seeing: it's certainly going to transform, and is already transforming, what we do. The first thing is there's a lot of rhetoric around replacement: with AI you can avoid X, Y, Z and replace expensive and slow A, B, C. I do think a lot of low-value and slow forms of customer feedback are going to be obsoleted, and I say good riddance. My prediction, and I hope this is what happens, is that we see surveying disrupted and replaced with qualitative at scale. I would call that big, certainly.
So do you think that's through chat, or what's the interaction model?
I think there are probably a million modalities for those interactions to take place. But surveys were a concession to the fact that, in order to get consumer or public feedback at scale, you had to reduce the data exchange down to something very, very simple so it could be administered and processed. So much fidelity is lost and so much noise is introduced in that process that in many ways I think it's obscene that we would continue to have this be the modality. And in many ways customers vote with their feet, because they don't like taking surveys, and many of them simply don't take them, as a consequence. This is especially prevalent in B2B, where we're dealing with audiences that are very busy and expect to get paid for their time, rightfully so. It also attracts a tremendous amount of fraud; when the incentives become high, that's where fraud shows up disproportionately, and very sophisticated fraud. So we have to meet customers where they are and engage them on their own terms.
Because customers are willing to provide feedback; they just don't want to take surveys. So that's one area. And just to finish that thought: you can now do these same things at scale, with high quality, with high-dimensional data collection. I can have an interview with you, a chat, we could transcribe this conversation and make sense of it. There are a billion things that can happen, and now we have the capacity to collect all of that at scale and to make sense of it at scale, where we never could before. So, surveys: goodbye, please. It's not happening in 2025, but I would say the days are numbered, and as much as I enjoy the process, I'm ready for it to be done.
Then there's this notion of synthetic audiences, which I think is very interesting. The idea here is that in many cases we know an awful lot about customers, we've spent a lot of energy understanding their mindsets and their language and so on, and yet we do very little to reuse that information. Think about any business and how much money they've invested over the years in collecting surveys, conducting qualitative research, doing sales calls, all this stuff. Most of that data, let's call it data at this point, because it hasn't been turned into knowledge, is sitting untapped. You can use all of that information, and that's what's happening now, to train bespoke models to become sort of proxy personas that you can interrogate in different ways. So you can say, let's make a persona of the last 200 chief product officers we did sales calls with, and then we can show that amalgam persona our messaging and ask, what do you think? Critique it, or build on it, and so on. It will give you pretty good feedback, and it's only going to get better. That's one example.
There are also companies doing something that I feel might be an intermediate step; they call them synthetic panels, or shadow panels. The idea is that we train these models on all of this information and then use them to simulate individuals in a survey dataset, who respond to your questions as if they were actual respondents, based on what the models know about how individuals have responded historically. Then you can execute research, quote unquote, against this panel, quote unquote. In my mind it's a bit like the early moving pictures, where you would put the camera up and everybody would act out a stage play, but we're still doing a play. It feels like we're shortcutting and speeding up something that has outlived its time. But I understand the premise, and the point still stands: the ability to execute, to get feedback, to use this as a sounding board quickly, is so valuable. Going back to my earlier point about why companies don't always get the most out of research, it's that it's too slow. If you can speed it up, you can build this muscle into the process more than you otherwise could.
And in my experience so far, and I think this will quickly change, AI doesn't deliver the nuance. It tends to give you sort of squishy, well-rounded things. It's not going to give you the sharp insight. It's not going to give you the creative unlock.
It's not going to give you the non-obvious truth. But a lot of the time the bigger challenge we have is not a lack of brilliance; it's that we've moved too quickly and missed some of the fundamentals. So if we can at least knock off 75% of the obvious problems, the things we have blind spots to because we're moving fast, because we have human cognitive biases, because we're too close to it, then that objectivity the AI can provide, saying "you missed this" and "you forgot about that," is incredibly valuable, and it can do it quickly. That gives us the ability to invest the time we bought back into the stuff that's going to give us the extra 20% that makes all the difference.
Wow, there's a lot to unpack there. Well, this has been incredible. Research and insights, I will say, is in my opinion one of the most undervalued functions, and it's a voice I don't hear often enough, so this is great. It's so cool to get access to your knowledge. I think we could go on for a few more hours about so many things you've got to share. But yeah, Oliver, I don't know if there's anything else you want to mention before we start to wrap up.
I mean, gosh, I would say if anybody takes anything away from this conversation, it would just be: take the time to make your assumptions explicit, and challenge your colleagues to do it. It might feel uncomfortable at first, but there's that saying, a stitch in time saves nine, and I think in this instance it holds true for a reason. And honestly, I think it's pretty fun to unpack some of these things. Otherwise you might find yourself signing up for a mission that you have no hope of ever achieving.
Yeah, I think that might even be the title of this episode: Oliver Raskin, Master of Metaphors.
You would not be the first person who has made that observation.
I love it. Well, Oliver, thank you so much. Thank you for your time. Glad we got to chat. This has just been really great, and there's so much to take away. I hope you all enjoyed this as much as I did.
Yeah, absolutely. I appreciate the time.
Thanks for tuning in to ExperiMENTAL. If today's insights sparked new ideas or made you feel like a smarter marketer, consider leaving a review on your preferred podcast platform; it really helps support the show. For more in-depth discussions and resources, visit experimental.beehiiv.com. Until next time, be curious and stay experimental.