Growth Activated | The B2B Marketing Leadership Podcast

Go-to-Market Science: How to Build an Experimentation Engine That Actually Moves the Needle with Sarah Renner

Mandy Walker · Season 1 · Episode 34



#34 - Your team is running tests. But are they running the right ones — and do they actually know what they're trying to learn? Most B2B marketing teams test without a clear hypothesis, report on campaigns without asking why results were what they were, and scale pilots that were never designed to produce conclusive reads.

Sarah Renner, VP of Marketing Strategy and Analytics at MarketBridge, has spent 20+ years at the intersection of data strategy and go-to-market experimentation — helping B2B companies replace opinion-driven decisions with evidence-based growth models. In this episode, she shares the go-to-market science framework her team has built, why most B2B experimentation fails before it starts, and what it actually takes to build a testing program that produces results you can act on.

What we cover:

  • The 3-part go-to-market science framework: data-backed, hypothesis-driven, and reproducible — and where most teams fall down
  • Why "did this campaign work?" is not a hypothesis — and what real hypothesis-driven testing looks like
  • The experimentation factory: how to pick the right tests for B2B's long sales cycles and small transaction volumes
  • Incrementality testing in practice — including a real-world example using a podcast as the marketing channel
  • Why your test result will almost never translate 1:1 at scale — and how to set C-suite expectations before you roll out
  • What's actually working in B2B marketing right now: quality over volume, offline custom audiences, and why you should be skeptical of any platform that won't show you the data
  • AI's role (and limits) in marketing analytics — and the free data science packages worth knowing about

Chapter Markers:

  • (00:00) Sarah's Background and MarketBridge
  • (05:30) Go-to-Market Science: The 3-Part Framework
  • (20:30) Building Your Experimentation Factory
  • (36:00) Scaling Results and Managing Expectations
  • (44:00) What's Working in B2B Marketing Right Now

Connect with Sarah Renner:

  • LinkedIn: https://www.linkedin.com/in/saraherenner/
  • MarketBridge: marketbridge.com/consulting

If this episode shifted how you think about experimentation, measurement, or go-to-market strategy, share it with a marketing or revenue leader on your team. And follow Growth Activated on Apple Podcasts or Spotify so you never miss an episode.

Growth Activated is produced by Mandy Hornaday.

Lead Like a CMO - Group Coaching Lab: Join the Waitlist

Let’s Keep the Conversation Going!
Loved this episode? Connect with me for more insights on B2B marketing leadership and strategies to grow your business.

🌐 Visit my website: growthactivated.com
🔗 Connect with me on LinkedIn: Mandy Walker
🔗 Get Your Free Marketing Planning Guide Today!

Don’t forget to subscribe to Growth Activated and share this episode with fellow marketing leaders. Let’s activate growth—together!

[GENERAL SHOW INTRO — PRE-RECORDED]


Mandy Hornaday:  Welcome to Growth Activated. I'm Mandy Hornaday, your host with 15 years of experience leading marketing teams ranging from small startups to large service organizations. I've built high-performing teams of all sizes and have seen firsthand how fast the landscape is evolving, making marketing leadership more complex than ever. Today, I help marketing leaders elevate their strategies, lead with confidence, and build careers they love. If you're ready to drive impact and unlock growth for yourself and your company, you're in the right place. Let's get started.


[EPISODE-SPECIFIC INTRO — PRE-WRITTEN]


Mandy Hornaday:  What if your marketing team could run experiments like a science lab and finally prove what's working instead of guessing? Today's guest says most B2B teams are testing the wrong things — or worse, testing without a hypothesis at all.


Mandy Hornaday:  Hey everyone, welcome back to Growth Activated. I'm your host, Mandy Hornaday, and today I'm joined by Sarah Renner, VP of Marketing Strategy and Analytics at MarketBridge — and one of the sharpest go-to-market truth seekers in B2B marketing that I've met. Sarah has spent 20+ years sitting at the intersection of data strategy and experimentation, helping companies replace opinion-driven decisions with evidence-based growth models that actually scale.


Mandy Hornaday:  In today's episode, we'll cover the science of go-to-market truth seeking — why teams default to assumptions over data, and how Sarah's three-part model reveals what's actually working. We'll get into building an experimentation factory: how to pick the right tests, avoid inconclusive results, and stop running experiments that can't move the business. And we'll look at how top performers win in 2026 — the shift toward sharper audiences, human-first messaging, and data-driven media buys that beat platform algorithms.


Mandy Hornaday:  If you've ever felt like your reporting is too shallow, your tests aren't conclusive, or your budget decisions still involve too much guessing — this episode will rewire how you think about measurement, experimentation, and strategic planning. Let's get into it.


[INTERVIEW BEGINS]


Mandy Hornaday:  Hey, Sarah, welcome to Growth Activated. We're so excited to have you here today.


Sarah Renner:  Hi, thank you so much for having me, Mandy. I'm really excited for this conversation.


Mandy Hornaday:  Me too. I have been looking forward to it all week. Before we dive in, let's start with your background. Walk us through what you're doing today and some of the key milestones that have led you to this place.


Sarah Renner:  I'm VP of Marketing Strategy and Analytics at MarketBridge. I've been here for about six and a half years. We specialize in go-to-market measurement — everything from marketing, sales, and product usage to supporting companies through go-to-market transformations. I've played on both sides of the house, as my title suggests, across strategy and analytics. My background is varied — I've been in EdTech, B2B FinTech, and other consulting companies, but I've been in marketing for over 20 years, doing everything from lead gen to demand gen consulting, marketing automation, Salesforce, all of it.


Mandy Hornaday:  Amazing. I feel like it's rare to find people who stay at an agency after they've been in-house. I usually meet people who spent time in agencies and then once they went in-house said, "I'm never going back." I did it the other way — I'm at an agency now, having built my whole career in-house. What has that experience been like for you?


Sarah Renner:  I was consulting, then went back in-house, and now I'm back to consulting. For me, especially at MarketBridge, it's the smartest group of people I've ever worked with. That's really what makes the difference — I like to be surrounded by people who are curious, interested in their work, and want to be A players. That's not always the case in every internal organization.


Mandy Hornaday:  Absolutely. And I think one of the great things about consulting is the exposure to so many different organizations — the learning happens a lot faster.


Sarah Renner:  It absolutely does. I love the variety, the new challenges, learning about a new client and their industry, and coming up with solutions that will actually make an impact. Whereas internally, you can sometimes be hamstrung by internal processes or politics. And there is a certain credibility that comes with being an external consultant — you have the collective experience, you're making a recommendation, and your client is paying for it, so hopefully they'll actually implement it.


Mandy Hornaday:  I know one of the things you're really invested in — you talked about it as the go-to-market truth seekers, this idea of experimentation factories, really getting to the crux of what's working and what's not. Walk us through that methodology of truth seeking.


Sarah Renner:  We coined the term go-to-market science. Think about it as three things: data-backed, hypothesis-driven, and reproducible.


Sarah Renner:  Data-backed means it all lives in the data. So many times when we're brought in, there's an internal story the company has been telling about what has and hasn't worked — and frequently the data doesn't agree with that. Companies come to believe a certain channel works or a certain conference works, and it may not. Everything we do is focused on the data.


Sarah Renner:  I think the biggest place people fall down is the hypothesis-driven part. We should have a theory about what's happening in the market and be trying to prove or disprove it. But frequently teams just say, "we're going to do the analysis — did this campaign work or not?" And I'd argue that's not a hypothesis. Why do you think it worked? Why were the results what they were? What were the market factors at the time? Which accounts did or didn't engage? There are lots of different hypotheses to form before looking at results and deciding strategy going forward.


Mandy Hornaday:  When you think about the hypothesis layer — are you also seeing teams form a hypothesis that something won't work and try it anyway? Or is the hypothesis usually that something will work and you're validating that?


Sarah Renner:  I think often there's not even a hypothesis. It's: we need to test something, let's go figure out what a test could be. As opposed to: these are the key things we want to learn, we think our customers behave in this way, we think this channel is or isn't working. Often it's just, we need to create a test, or we need to report out on what the campaign did. And without those questions going in, you end up with a very surface-level understanding of what happened rather than probing deeper to truly understand the prospect mindset or why one account converted and another didn't.


Mandy Hornaday:  I love that. It brings to mind A/B testing subject lines — a lot of times you just throw out two options. What are you really trying to understand if you don't have a hypothesis?


Sarah Renner:  Exactly. And the next level of thinking is: does that even matter? Maybe on subject lines it does, but if people aren't clicking on your email, maybe it doesn't. What will the business actually do with that answer? We're often under pressure to test, but if we find out this subject line is slightly better than that one — are we going to change our entire strategy for that? Maybe. But maybe it's such a small portion of our marketing budget or lead impact that it's not worth testing because it won't move the bottom line and isn't aligned to our strategic priorities. Always think through both the hypothesis and whether this will actually matter to the business.


Mandy Hornaday:  That reminds me of something I've learned as a leader: be thoughtful about when you ask people for feedback. If you're not going to use it, or it's not going to change your approach, don't go through the motions.


Sarah Renner:  Right. Or if your mind's already made up, don't ask for input. A lot of CEOs could use that advice.


Mandy Hornaday:  Okay, so data-backed and hypothesis-driven — what's the third?


Sarah Renner:  Reproducible. On the analytics side, that looks like: is the data centralized? Is the code, the spreadsheet, or however you're doing it in a centralized repository? Ideally with version control, so any other analyst could reproduce the exact result you got. On the strategy side, it's about documenting the thinking, the data, and the reasoning behind a recommendation so that others can see the chain of thought and understand all the inputs. Where teams often fall down is someone does an analysis, maybe does a competitive analysis or thinks through a new strategy — and then they leave. You're left with a PowerPoint or a single data point no one can trace back. You have no idea what they considered or what they might have missed.


Mandy Hornaday:  How often should we be retesting our hypotheses? If we've tested something, felt confident in the hypothesis, and proved it — how often do you revisit that?


Sarah Renner:  I'd start in a couple of places. Is there any internal data suggesting it might no longer be correct? Are other things working, and maybe we don't need to retest right now? Otherwise, I'd say every couple of years — and more frequently depending on macroeconomic factors. When COVID happened, everything was scrambled. The current economic climate, with tariffs shifting and things like the government shutdown, is another example. Buyer behavior has changed a lot even just in the past six months. Things that were true before may not be true now.


Mandy Hornaday:  Anything else you'd add on treating go-to-market as a science? What are the biggest gaps you see in how B2B marketing leaders are — or aren't — thinking about this?


Sarah Renner:  There's always some level of intuition we have to trust. But too often we over-rely on it. As our chief analytics officer likes to say: "me-search" — my experience, therefore that must be everyone's experience. That's often not the case. If we think more holistically, everything should be data-backed. Otherwise we fall into the trap of telling internal stories about what worked, regardless of what the data says — because someone presented on it six years ago, or because one experience became gospel. Meanwhile, companies are missing opportunities by making assumptions based on intuition or a handful of instances rather than all the data they actually have. And most companies have a ton of data — it's just messy or siloed.


Mandy Hornaday:  What would you advise marketing leaders who feel like they don't have a strong data set to work from? I was talking to a coaching client recently who just walked into a new organization and said, "they don't have any data for me to work with." Where should they start?


Sarah Renner:  You probably have more data than you think. It may not be centralized or easily accessible, but you're sending emails — open rates, click rates exist. You're posting on social — impressions and engagement exist. If you're doing digital or direct mail, analytics exist. You probably have website analytics. The data is there; it's just siloed. Bringing it together would be the first step. Too many companies look at every channel in isolation and never connect the story. I know that's hard work. But it's especially worthwhile now, because if you ever want to leverage AI, you need clean, structured data in one location. AI is not going to magically pull it together from Google Analytics, your CRM, Salesforce, and your email marketing tool in 16 different places. That future state isn't coming.


Mandy Hornaday:  Totally. Even just looking at movement within a target segment across channels — not just how a lead gen campaign performed, but how your messaging landed across social, content, everything together. B2B is complicated because there are so many influencers within an account, all potentially having different experiences with your company.


Sarah Renner:  Exactly. It's complicated — I'm not discounting that — but it's still worthwhile to bring it together into a holistic picture. You want to understand: did Mandy see this display ad, then click on this email, and then schedule a demo? Those might be related. Is there a way to connect them so you can see how the overall account is experiencing your company?


Mandy Hornaday:  Let's dive into the tactics of experimentation. For teams that aren't testing today, or are testing but not really grounded in what they want to learn — where would you encourage them to start?


Sarah Renner:  Start with strategy, and bring your analytics partners to the table. What are we trying to learn? What are our hypotheses? What would actually make a difference? Start with your strategic priorities and areas of focus — what do you think might be going on that you'd want to test — and then partner with analytics to understand what's actually possible and what makes sense to test.


Sarah Renner:  In B2B, we have fewer sales and longer sales cycles than on the B2C side. So think carefully about which metrics you can actually impact with a test. If you're always trying to run tests that impact closed revenue, you're going to be waiting a long time to get a read — and you may not even get a statistically significant result given the small number of transactions. Instead, think about the leading indicators: demo requests, video views, key content page visits. Higher-volume actions that are strong signals you should hand someone off to sales. Those are the things you can test against. Analytics can then help you run a power analysis — figuring out how much you need to spend, or how many results you need, to get a statistically significant difference between test and control.


Sarah Renner:  And for folks who get nervous: a lot of people say you can't do testing in B2B because of small transaction volumes and long cycles. Using leading indicators addresses that. The other anxiety is holdouts — the idea of a control group that receives nothing. In B2B, that's a hard sell when you've got long sales cycles and active pipeline. What you can do instead — and this is where analytics can help — is not make it all or nothing. Give one group 50% more spend on a specific channel, and another group 25% less spend. You've got a bigger difference to measure, without telling your CEO or sales team you're simply not targeting a segment.


Mandy Hornaday:  Walk me through a real-life incrementality test. Let's use a podcast as the example — if a B2B marketing leader wanted to understand whether their podcast was helping, or which promotional channels were actually driving growth on it, how would that work?


Sarah Renner:  Let's say you already have a set of marketing channels running — display ads, online video, paid search — and you want to know how much of that is directly driving podcast growth versus word of mouth and organic. You don't want to turn off paid for a segment because you're trying to grow. So instead, divide your audience in half. Increase spend for one half by 50%, decrease it for the other by 50%. Both groups are still getting something — you're not cutting off growth — but the test group is getting meaningfully more. You'd expect that group to grow at a higher rate: more impressions, more reach, more penetration. Then compare the two groups: the test group that got more spend drove X downloads versus the control group that got less. From that difference, you can extrapolate the incrementality of your paid marketing — and calculate the cost per net new download from the group that got more spend.


Mandy Hornaday:  So ideally you'd test one variable at a time to isolate what actually moved the needle?


Sarah Renner:  In a perfect world, yes. But in a complex B2B environment with long sales cycles, there are ways to run multiple tests simultaneously. It gets complicated — bring in your analytics partners — but depending on your goals and audience size, you can segment or flight tests so you're getting reads at the same time rather than running them sequentially and waiting six months each.


Mandy Hornaday:  When you say analytics partners — is that typically someone on the marketing team, or within the broader business?


Sarah Renner:  Ideally it's someone on the marketing team. But there are so many different setups. If your marketing analytics person is mainly answering "how many" questions — how many leads, what channels did they come from — they may not have the expertise for this kind of work. In that case, it might be someone in RevOps or finance who understands levers and testing logic. You may need to educate them on the marketing side, but they might bring the analytical rigor. And sometimes you just have to go external. Causation modeling and running multiple simultaneous tests at scale often require expertise companies simply don't have internally.


Mandy Hornaday:  It sounds like data science.


Sarah Renner:  It is. It gets very complicated, very fast. There are packages out there that help, but it does take real expertise. The worst case scenario is you run a test you think you'll be able to read and the result is inconclusive — you can't say one way or another if there was an impact. That's a loss on all sides. You want to at least know it worked or it didn't. "Ambiguous" is not a useful result. That's where your analytics team should be able to say: you need to spend this much, run it for this long, and you can expect this lift in your target metric — so that if you don't get that, you can confidently say the channel is not incremental or the campaign didn't work.
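For a concrete sense of what a "conclusive read" means, here is a hypothetical sketch (standard library only, example numbers invented) of a two-proportion z-test on a leading indicator such as demo-request rate:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_t, n_t, conv_c, n_c):
    """Two-sided z-test comparing conversion rates between test and control arms.
    conv_* are conversion counts; n_* are accounts per arm."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    p_pool = (conv_t + conv_c) / (n_t + n_c)      # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z_stat = (p_t - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z_stat)))
    return z_stat, p_value

# Hypothetical: 3,000 accounts per arm, 3% vs 2% demo-request rate.
z, p_value = two_proportion_z(90, 3000, 60, 3000)
# z ≈ 2.48, p ≈ 0.013: a significant lift
```

Run the same test with half the volume and the p-value climbs above 0.05 — the "ambiguous" outcome Sarah warns about, which is exactly what an upfront power analysis is meant to prevent.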


Mandy Hornaday:  How do you figure out the right starting investment? When I'm piloting something I'll think: is this a $3K pilot? $5K? $25K? And honestly sometimes it's just a gut feeling. How would you encourage people to think about that more rigorously?


Sarah Renner:  It's probably always more than you want it to be — because you'd rather know if something worked. What I've seen over and over is people shortchange the test and then say, "awareness tactics don't work" or "that channel doesn't work." That's how internal mythology gets created. The reality is they just didn't spend enough to make an impact.


Sarah Renner:  Analytics can help here. Take the demo request example: look at your historical volume. If you're dividing the country in half and you need 40 more demos in the test group over three months, but you're historically only getting 10 demos per quarter total, that's going to require enormous spend — and may not be feasible. In that case, choose a different metric or a different test. But if you're getting 60 demos per quarter and you need 40 more in the test group, that's more achievable. It may still be a large investment — which brings you back to: is this a must-know question? Will the answer shift our budget significantly? If yes, maybe the big bet is worth it. If not, find a more cost-effective test.
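Sarah's feasibility check can be expressed as a one-line calculation, using her own numbers from the example above (simplified by comparing the needed lift against the total historical baseline):

```python
def lift_required(baseline_per_quarter, needed_extra):
    """Relative lift the test must produce over the historical baseline
    for the read to be achievable. A rough gut check, not a power analysis."""
    return needed_extra / baseline_per_quarter

# 10 demos/quarter baseline, 40 extra needed → 400% lift:
# almost certainly infeasible — pick a different metric or a different test.
low_volume = lift_required(10, 40)    # 4.0

# 60 demos/quarter baseline, 40 extra needed → ~67% lift:
# large, but plausibly testable with meaningful incremental spend.
higher_volume = lift_required(60, 40)  # ≈ 0.667
```

If the required lift is a multiple of your baseline rather than a fraction of it, that is the signal to move down-funnel metrics up a level, as Sarah suggests, or redesign the test entirely.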


Mandy Hornaday:  On the flip side: the law of diminishing returns. Let's say your test shows paid advertising is killing it and you think you can 10x the results by scaling the spend. That almost never happens. How do you think about at what point returns start to flatten?


Sarah Renner:  It'll almost never be a one-to-one increase when you scale. A test that showed a 10% lift might produce a 2% lift nationally — and that 2% can still be huge. There are so many other factors in the buyer journey when you scale nationally or internationally. And part of it is the nature of proper testing: we're intentionally overpowering the test, spending more than we would nationally to get a clean read. We're not going to sustain 50% more spend on a channel just because the test worked. If you do see positive results, I'd recommend moving that budget up in 10% increments nationally and watching results carefully. Flipping the switch all at once is a big bet — and if you're not seeing the same 10% lift immediately, your board and investors are going to be asking questions fast.


Mandy Hornaday:  Such a useful thing to have in our back pocket when advising the C-suite — being able to say, "a 2% or 3% lift is the right expectation, and here's what that means in dollars."


Sarah Renner:  Exactly. Quantify it. Two percent may still be a huge impact to the organization. Nobody's going to turn that down if you frame it right. Don't go in saying "we expect 10%" and then deliver 2%. Set the expectation at 2%, celebrate the win.


Mandy Hornaday:  Any other dos or don'ts for setting yourself up for experimentation success?


Sarah Renner:  The biggest thing is: understand the buyer journey before you start testing. If you don't know what your buyer journey looks like or which channels are typically involved, that's actually a better starting point. You need to know how people are experiencing your brand — which indicators lead to a sale, how an account is interacting with you across channels, how different people within a company are engaging. Understanding that will generate a lot of the questions and hypotheses that lead to good tests.


Mandy Hornaday:  For teams that don't yet have the data to understand their buyer journey — how do you feel about customer interviews or surveys as a starting point?


Sarah Renner:  Both. And I know that's a little infuriating. If you're starting from scratch, getting your data right is going to take time — start down that path now. But while you're doing that, also do buyer journey research: qualitative and quantitative. The reason I say both is that people often don't know their own intrinsic reasons for doing things. Buyer journey research can generate great insights and lead you to hypotheses worth testing, but it can be misleading in aggregate. I don't remember the first time I saw a company's logo or why I became aware of them. What the data will tell you definitively is where someone interacted with you, what account behavior looked like, and what your sales team should actually be doing about it. Get both going simultaneously.


Mandy Hornaday:  When you think about tools — software, AI — what are the best-kept secrets for running experiments and analyzing results?


Sarah Renner:  From a data perspective: I'd be cautious about outsourcing everything to a CRM or CDP tool. Those have set customization options and aren't as flexible as building internally. And data storage is cheap — CRM and CDP tools are not. If you want to do ABM attribution or eventually build a marketing mix model, you'll want granular data you own and can analyze freely. Building internally gives you the flexibility to layer on AI tools later based on what works best for you rather than being locked into whatever your CRM offers.


Sarah Renner:  On AI specifically: I don't think AI is good at analysis. It's not great at showing its work or reasoning through data. Where AI can be genuinely useful is helping data scientists and analysts code faster — it accelerates that work meaningfully. But for test analysis done in a rigorous data science-backed way, I'd be skeptical of most AI tools. There are free packages worth knowing about — Meta has Robyn, there's CausalImpact, and a couple of other options data scientists can use for test reads. You do need a data science background to use them, but they're powerful.


Mandy Hornaday:  It's obviously 2026 planning time. What's actually working right now in B2B marketing? What are your clients getting lift from?


Sarah Renner:  Quality. Everyone is overwhelmed — there's just too much out there. We're talking a lot internally and with clients about quality interactions in marketing. On the brand side, that means real, emotional messages that resonate with people rather than quick promotional or call-to-action content. That tends to have a much bigger long-term impact.


Sarah Renner:  The other thing that's working is thinking critically about the audience you're targeting. In digital advertising especially, there's so much that's black box — platforms will optimize your campaign for you, but you don't really know who they're targeting. The companies we see having real success are building their custom audiences offline, before they get into any platform. Work with a data vendor to build a lookalike audience based on your actual data. That will outperform category interests and in-platform targeting options nine times out of ten. It sounds like going back to basics, but it works.


Sarah Renner:  Placement quality matters too. There are so many cheap impressions out there and people have gotten hooked on volume — cheap CPCs, cheap CPMs. But if it's not driving actual impact, it's not actually cheap. And if you're working with a platform, vendor, or DSP that won't give you the data to see what's performing, ask yourself why they don't want you to have that data. It usually means the answer isn't in your favor.


Mandy Hornaday:  So: human interaction and emotional brand messaging, a very sharp and intentional target audience built on your own data, and be prepared to pay more for quality results. All good stuff.


Sarah Renner:  Exactly. If a lead is going to convert, paying a little more for it is always going to be worth it.


Mandy Hornaday:  Well, Sarah, thank you so much. This has been such a fun conversation. For our audience who wants to learn more or explore working with your team — how do they reach you?


Sarah Renner:  I'm Sarah Renner on LinkedIn — you should be able to find me. And marketbridge.com/consulting for more on what we do. I'd love to have a conversation.


Mandy Hornaday:  Awesome. Thank you so much, Sarah. I appreciate the time today.


Sarah Renner:  Thank you so much.


[OUTRO]


Mandy Hornaday:  Thanks so much for tuning into this episode of Growth Activated. I hope this conversation sparked new ideas, challenged your thinking, and gave you practical tools to help elevate your impact as a marketing leader. If it did, I would love for you to pass it along to a friend or a colleague in B2B marketing. The more we grow together, the more we raise the bar for what marketing leadership can look like. And as always, in the meantime, keep activating growth for yourself and your company. See you next time.