Numbers and Narratives

Ibby Interviews Sean: How to Run Effective Tests

Sean Collins and Ibby Syed

Unlock the mysteries of data-driven marketing as we, Ibby Syed and Sean Collins, share our personal experiences from Peloton to Bilt Rewards, revealing how a deep dive into customer data can elevate your business's customer experience. 

This episode is a treasure trove of marketing insights, from the art of objective testing in marketing strategies to the delicate dance between short-term tactics and the long-term vitality of your business. We dissect Revel's daring attempt to re-engage customers and how it taught us that sometimes, it's not the price but the understanding of customer needs that wins the day. 

Rounding out our discussion, we scrutinize the perennial debate between relying on historical sales data and conducting fresh market tests. We invite you to consider an alternative approach that could redefine your next year's marketing playbook, as we examine the impact of different product lines on diverse customer segments. 

Speaker 1:

Hi, I'm Ibby Syed. I'm Sean Collins. And this is Numbers and Narratives. Sean, how's it going? What's up, Ibby? Everyone that's listening: this episode is going to take a little bit of a different approach than what you're used to. Instead of us interviewing a third-party guest, I'm going to be interviewing Sean. We're also going to spend some time talking about what we want this podcast to be, how the idea came about, and introducing ourselves a little bit.

Speaker 2:

Yeah, so who the hell are you, Ibby?

Speaker 1:

My name is Ibby Syed. I'm the founder of a company called Cotera. Before that, my last job, I ran data science for retention, growth, and CX at Peloton. A lot of what I did there was figure out how to use data to better predict upsell, better predict churn, and better predict folks who were having a CX problem. When I left, I started a software company alongside my co-founder, Tom Firth, to help businesses of all kinds better understand their customers: figure out where they are in the customer journey, provide personalized experiences across marketing channels, and understand their pains and concerns and how to fix them.

Speaker 1:

Some of the folks you've heard on the podcast are actually either current customers or partners of Cotera. We obviously didn't start this podcast to talk about that, but they ended up being part of my network and Sean's network, and a lot of them just happen to be very knowledgeable about how to use data to make decisions at scale, so we thought they'd be good podcast guests. Sean, what about you? What are you doing? Give us your life story, give us your background.

Speaker 2:

Yeah, so it was April of '89. I am the VP of growth marketing at Bilt Rewards. Bilt is primarily known as a B2C rewards and loyalty program for renters. The idea is that for the longest time, the American dream has theoretically been owning a home and building up wealth through that. But for our generation, buying a home is sort of out of reach and a lot of people are stuck renting. You get a roof over your head, but there's a lot that you don't get from renting. I say theoretically because there are also people who rent, have incredible places, and just don't want to own a home, and that's completely fine too.

Speaker 2:

So we built a rewards program for renters. You get points on your rent, and you can use those points toward your rent or toward a down payment on a house. The program has really expanded, so now you can earn points with partner restaurants, with rideshare through Lyft, with fitness partners, and through a lot of other ways. We have hundreds of transfer partners on the travel side. You can use your points to book fitness classes, to get a curated art collection, all sorts of stuff. It's wild. And, you know, I'm a customer, you're a customer.

Speaker 1:

I'm going to talk about Bilt for a hot sec after you're done here.

Speaker 2:

Yeah, it's so great. You're spending thousands of dollars a month on rent, and now you can use those rent payments to also pay for a trip, or part of a trip, and that's pretty life-changing. It's also fun because we have a B2B component as well, where we partner with multifamily and some single-family home rental managers and give them a rewards program they can use to build a greater resident experience. So we also get to do some B2B marketing over here, which is a lot of fun.

Speaker 1:

Yeah. I was actually a customer before Sean and I met, and Bilt is a product I use every single day. It's an app I log into very regularly. There are two reasons why I like it. One, the points system is absolutely bananas. I have two other credit cards, because I run a business.

Speaker 1:

Between the things I naturally buy throughout the day and the business, I probably have a higher credit spend than the average consumer. I have to go out to dinner quite a bit and take clients out, and Bilt is the credit card I use for a lot of those extracurricular business activities, and it's awesome. I have credit cards that I pay for that don't give me as much value as Bilt does. The home ownership piece is also super cool. I have a friend from childhood, I'm not going to name-drop her, but she recently purchased a home, and she told me she used some of her Bilt points toward the down payment on that house, which is absolutely bonkers. That's awesome. I live in New York and I'm a founder, so my income is not such that I can purchase a home right now. There's no homeownership for us.

Speaker 1:

Yeah. But she used her Bilt points. I think she'd been a customer for two years and had been saving them up for quite a while, and she said it made a pretty significant dent in her down payment. I hope that's correct. The other part of it is just, on a personal level, the points are great. I kind of don't want to say this on the, well, you don't have that many listeners, so I'll say it anyway: I haven't paid for a hotel in over a year.

Speaker 1:

The deals that you guys have with various travel partners are so good. I've weirdly become loyal to Hyatt as a hotel chain because of Bilt, because the deals are so good. I'm super happy. I love the product. I think the mission is cool. You rarely see the alignment of a financial product and something that helps in some other way, like the home-buying side, so that's super, super cool.

Speaker 2:

Yes, it's incredible. We have an amazing team over here, yeah.

Speaker 1:

Really cool folks. The other couple of things that I think are interesting to mention: Sean is actually part of the group of folks that we eventually want to gather a listening base from. That's kind of an awkward way of saying it, but fine. He's got a ton of experience on the marketing side. I guess you're doing B2B now; have you done B2B marketing at other points in your career, Sean, or is this kind of the first go of it?

Speaker 2:

Yeah, my first job out of the army, when I co-founded that company, was all B2B.

Speaker 1:

Right, right. So Sean has done B2B marketing and B2C marketing, both on the e-commerce side and on the mobile application side. Do you see Bilt similarly to how you saw Revel? Are you still trying to drive similar interactions? Or is Bilt fundamentally different, where you see it as financial marketing and you saw Revel as consumer app marketing?

Speaker 2:

Yeah, I think there is a distinction. When I started, you had to have the app to get real value; our web interface was pretty limited, primarily a marketing page. Sorry, which one are you talking about here, Bilt? Bilt, yeah. Sorry, great. With Revel, you truly had to have the app. The website was just a website to market with, for some brand recognition and, you know, ads on Google and stuff like that. The app was the way you experienced the product.

Speaker 2:

For those who don't know: Revel, when I got hired there, was a moped company, electric mopeds. You could use the app to unlock a moped, ride where you want to go, and just leave the bike there, and someone else would pick it up for the next ride. They have since shut down the mopeds. They've launched an all-electric-vehicle rideshare service, similar to an Uber or Lyft, but it's all EVs and all their drivers are employees instead of gig workers, and they also have a bunch of EV charging infrastructure. But you needed the app: that's how you unlocked the bikes, that's how you locked the bikes, all of that.

Speaker 2:

With Bilt, when I joined, you needed the app to be able to do much of anything. We have worked really hard to build near parity, so now there are just a few features that we restrict to the app, primarily for security purposes, but you can do almost everything on web as well. The app is really just there to enhance the user experience and make things easier. I would personally rather be in an app than have to log onto a web browser, and that's kind of the thought: it's there to make the experience easier for our users.

Speaker 1:

Yeah, it's a good experience. And, Sean, how did the idea for this podcast, and us, start? We've known each other for a while now.

Speaker 2:

Yeah, this is the magic of what happens when you respond to cold outreach. Ibby hit me up, absolutely cold outreach, on LinkedIn, I think. I used to have a rule that I would respond to one piece of cold outreach a month, and Ibby had the best outreach, so I said, yeah, let's chat. We grabbed a coffee and had a really fun conversation. It actually started with you giving me a little bit of career advice, because I was thinking about how to get better at data, and you introduced me to a couple of people. We talked about Cotera, you gave us a demo, and we signed on as clients of yours for a while. Then, through mutual interest, we hung out a couple of times outside of that work, had some really fun conversations over some beers, and decided we should just have those conversations not over beer. And that was this podcast.

Speaker 1:

Yeah. And what's the goal that we're trying to achieve? So far we've recorded seven episodes; I think this is episode eight, and at the time of recording, two have yet to be released. So far you've heard folks from across various industries, mostly consumer. Selena is software, but most of the other folks we've had on are in marketing, largely retention/lifecycle and customer experience, which we both believe go hand in hand. If you're talking to one, there's a lot that the one group can learn from the other.

Speaker 2:

And yeah, now it's just us.

Speaker 2:

So, Sean, when we originally decided to do this, we talked about marketers, on the lifecycle side in particular, who, whether it's because they're bogged down with day-to-day tasks, don't know where to start, or don't have the resources, watch a lot of what they want to do end up on a wish list that never actually gets done. And that means they aren't growing as much, their businesses aren't growing as much, and their customers and users are having less amazing experiences.

Speaker 2:

So we thought it'd be great to take these conversations we've had about retention, customer engagement, data analysis, automation, and personalization, talk to people who are doing incredible jobs and have great stories, give everyone a little bit of inspiration on things they can try, and put it into a format with actionable takeaways. We usually try to frame things as a crawl-walk-run, or a light-medium-advanced, whatever: giving you something you could do right now, today, if you wanted to, and then saying, if you really want to go all in on this, here's what that could look like, and here's the software, or the type of program you'd be looking to build, or something like that.

Speaker 1:

Yeah, that's exactly right. The other thing I'd mention is that both of us, due to who we are as people and the places where we've worked, tend to err on the side of actionability. In the data world you have tons and tons of smart folks, but it's a largely academic field. If you hire a data scientist, the job posting will often require a PhD, and that tends to bring in folks who are a little more on the academic side. But I think you and I both believe that action is everything, and so focusing on actionability is super, super important. That's why we started the podcast, and that's what we want to do. In that vein, we thought it would be fun to record the rest of this episode with me interviewing Sean on some things that apply to various types of marketers.

Speaker 1:

I think this advice generally applies to B2B, B2C, and e-commerce, and I think we're going to be focusing on testing. So, Sean, taking your years of wisdom across various verticals of marketing, what is some unified advice you can give to the listeners? What's your first bit of advice? And we'll dive into that.

Speaker 2:

Yeah. So we were talking about what we'd want to do for this, and we're going to try this format out. If you like it, please let us know; if this is a format that's enjoyable, we'll certainly continue to have guests on, but we thought this could be a little bit of fun too. When I was jotting down notes, trying to break testing down to some core principles, the first thing that I think people absolutely need to do is leave no room for second-guessing. What I mean by that is: in my experience, whenever people want a test run, there is usually a treatment that they want to win. It's very rare that someone is just like, let's see what happens, either one of these could work. There's usually one that people believe in: this is going to win, I believe this, or I want this. We need to just test and prove that this is the better solution.

Speaker 1:

Right, right. Testing oftentimes is driven by executive action, right? Your job is to, you know, win customers back, or reduce churn, or have a successful product launch. It's always tied to some sort of business outcome. Or at least it should be; we'll get to that later too.

Speaker 1:

But what ends up happening a lot of the time is that when you're testing something, it's kind of on you. You're told, as the channel person, hey, make this a successful launch. So you're running a test to support a product launch, you obviously want that to go well, and it's tough when it doesn't. There are ramifications to working on a campaign, testing between different things, and wanting to show progress, and it's frustrating when that doesn't work.

Speaker 2:

Yeah, it absolutely is. And maybe this wasn't even the first thing, maybe I should have started with some other stuff, but there will be sub-bullets to this. In my opinion, when you're running a test, you aren't necessarily trying to find the perfect, quote-unquote winning variant right away. The first rounds of testing are about identifying and confirming whether the variable you're testing is actually the lever that can move the needle for you.

Speaker 1:

Okay, break that down for me a little bit. First off, do you have an example?

Speaker 2:

Yeah, I think the best example is the Revel one that we were talking about the other day.

Speaker 1:

Yeah, so this is a crazy story. Tell the listeners the story.

Speaker 2:

So another thing you have to understand, I believe, is that internally, organizations will tell themselves a narrative, or have a hypothesis, and without ever testing it, repeat it until it becomes fact, even though it has never been tested or validated. So Revel, when I first got there: it was mopeds, and in New York City in particular, that means in the winter months no one's riding a moped. So all the users you acquired, engaged, and retained from spring through early fall stop being users for a few months, and that means that when it starts to become spring again, you have to reacquire all those users all over again.

Speaker 1:

Yeah, a lot of seasonal businesses that sell products that are for a season are saying "yes" right now. "Yeah, 100%, yes, that is me." It sucks to try to get people back at the start of every season. If you're selling swimwear, every April it's like, oh my God, here's the next slog. Now I have to convince people to buy beachwear from us again rather than somewhere else.

Speaker 2:

There are companies that ride or die on the holidays, right?

Speaker 2:

Oh yeah, holidays are huge, massive. So over time we had retained a bunch of users who, every year, would of course start riding again in the spring. It was a really fun product. It was a great way to get around New York, way faster than calling a car or taking the subway. And unlike walking or biking: A, it's faster; B, you don't show up pouring sweat in the New York City summers. It was a great experience.

Speaker 1:

Yeah. And for people that don't live here, one weird thing about New York is that it's really hard to get across Manhattan. It's also very hard to get from Brooklyn to Queens, for various reasons, most of them related to the subway. Oftentimes, if you have a job in Queens but you live in Brooklyn, you have to first go into Manhattan to catch a train back out to Queens. I think the G is the only line that goes between the two.

Speaker 2:

I think with everything else you would have to, to some level, go back through Manhattan, or back through a different neighborhood in Brooklyn, over to Queens.

Speaker 1:

Yeah, or use the bus in a weird way. So being on two wheels, biking or a Revel, is oftentimes effectively the fastest way of making those trips. And the most fun. But when you're biking you pour sweat, and so taking a Revel was, I think, one of the best ways of getting across. Super fun.

Speaker 2:

So a bunch of users we'd acquire one year would go dormant over the winter, then come back and start riding again the next year. But every year there was a bunch of fall-off: people who were acquired and then didn't resume riding the next spring. Over the years before I got to Revel, that accumulated into a pretty large churned base. And the thought that always came up, because a lot of the feedback we got from user surveys was around pricing, was that the number one blocker, the number one cause of churn or lack of re-engagement the following year, was price. That was the working hypothesis, and it had sort of become de facto fact.

Speaker 2:

So we took that as the root hypothesis that we wanted to go prove or disprove: that price was the primary lever. I think a lot of times we try to use testing to validate that a new treatment or a new feature is better, but it can also be a great way to identify the right problem area and have that kick off iteration, design, and research. So we were using testing to say, okay, let's find out unequivocally if price is actually the biggest lever that will get these people back on the mopeds. And if it is, then we can go figure out how to make pricing models that are more attractive to these users. There were a bunch of ideas on how we could do that, but before we do all that work, let's just see.

Speaker 1:

Just so I understand: folks are obviously coming back for the spring and the summer after winter, and you're trying to reacquire them. A lot of the reason you want to run this test is to see if the thing that's stopping them from getting back on the mopeds is price. So you want to run a large-scale test focused on price, to re-engage users who were previously dormant and effectively hadn't been on a moped since the last time the weather was nice.

Speaker 2:

Yeah, and in some cases they had missed an entire year, since they had kind of churned. But that's exactly right.

Speaker 2:

So I had a really awesome boss at the time who emphasized this point to me: if we don't go big enough here, people will second-guess and say, well, the issue was you only gave 10% off, or $10 off, or one free ride, so the incentive wasn't large enough, and we should run this test again with a slightly larger one. And yeah, it would really suck to put all this work in and then have the result be, as we present it, "you just didn't go big enough." So, okay, let's just remove that as an option. We are going to go so big that there is literally no way you can argue with what our test proved or disproved. Because, again, we're not trying to optimize; we are not trying to build our long-term win-back or reactivation strategy here. What we're trying to understand is: is this the single variable that will get a massive number of people back on the bikes?

Speaker 1:

And this is oftentimes the thing that's important for startups: does this fundamental thesis that we have actually make sense? The thing you're trying to figure out is how the world sees us. Not, hey, is one product better than the other, but rather, is this foundational thing we all agree on, in this case that price is the barrier, actually true? That is a really, really powerful thing to have the answer to.

Speaker 2:

Yeah, 100%. I don't really want to do another aside, but I'll do a quick one. I used to run testing at an e-commerce agency. When we were kicking off a project with a new client, say someone signs on for CRO, conversion rate optimization, they want results fast, they want to see tests in market fast, but you haven't had time yet to do a whole bunch of research and analysis. And there's a lot of work that goes into running a test well: building out a whole suite of hypotheses, design, building the alternate versions. So the fastest way we could get tests up and start getting insights wasn't any of that. While we were doing our research and our batches of designs and our first bits of analysis, I would just start.

Speaker 2:

I would run A/B tests that just removed modules on the website. Don't change anything: I would either just reorder things or completely delete a module from the homepage or from a key page. All we're trying to understand there is: is this module significantly impacting user behavior and purchase intent? Starting there helped us say, okay, these three modules we just removed didn't really impact conversion rate, so they aren't worth testing and optimizing for a while, but this one had a significant impact, and so this is where we're going to spend a bunch of time trying to get it right, because that's going to have trickle-down effects.
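
[Editor's note: to make the module-removal idea concrete, here is a minimal sketch of the significance check such a test implies. All numbers are made up for illustration, and a real CRO tool would handle assignment and logging for you.]

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did removing the module change conversion?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se                                 # |z| > ~1.96 means p < 0.05

# Arm A shows the homepage module, arm B deletes it. Made-up counts.
z = two_proportion_z(conv_a=540, n_a=12_000,   # module shown: 4.5% conversion
                     conv_b=525, n_b=12_000)   # module removed: ~4.4% conversion
print(f"z = {z:.2f}")
# |z| is well under 1.96 here: deleting the module didn't move conversion,
# so it's probably not the lever worth optimizing first.
```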

Speaker 2:

So, same idea. Now we're at Revel, debating how to go big enough. Okay, do we give them 24 hours free? That's not really going to be buzzy enough. So we went with, I can't remember the exact number, I'm pretty sure it was a hundred thousand dollars, maybe it was ten thousand, but it was a number that was outrageous. Basically, we were sending out the campaign and giving you a fixed window in which to use those ride credits, but you could ride continuously for free for days.

Speaker 1:

Yeah, so just so I understand the message. I was a Revel user, so was this the pop-up notification that would come up on my phone? I never got one of these, which I'm upset about.

Speaker 2:

It was an email, because if you weren't engaged, we're assuming you're not opening the app, right? It was an email with a subject line of, let's say it's $100,000, maybe I'm wrong, maybe it was $10,000, maybe $15,000, it really doesn't matter, something like "$100,000 in ride credit through Sunday." And you're going to open that email. Let's be real: if you get an email from a company whose product you used to use a lot, and they say you have $100,000 for the next three days, you're going to open it.

Speaker 1:

To give some context on how much that is: the average ride, if you were to take like a 10- or 15-minute ride, was what, like a dollar a minute, like $15?

Speaker 2:

Yeah, that's about right.

Speaker 1:

Yeah. So it's a lot of money. It's not like every ride costs a hundred bucks, or even a thousand. It's significant. It's basically free riding.

Speaker 2:

It's cheaper than riding an Uber or Lyft, so it was perpetual rental for a weekend if you wanted it. And the engagement was high. There were so many people wanting to know: was this a typo? It just seemed too crazy. I was telling Ibby before the call started: this was my first big mobile app campaign, and I did not anticipate such a high engagement rate from a churned user base, so I did not rate-limit my message well enough. Within like three minutes of sending, the email had crashed the app.
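
[Editor's note: the usual fix for this failure mode is to drip a big send out in batches rather than firing it all at once. A rough sketch of the idea, where `send_email` stands in for whatever ESP client you actually use:]

```python
import time

def send_in_batches(recipients, send_email, batch_size=500, pause_seconds=60):
    """Rate-limit a large campaign: send a batch, then pause, so the traffic
    the email drives (app opens, ride bookings) arrives as a ramp, not a spike."""
    for i in range(0, len(recipients), batch_size):
        for address in recipients[i:i + batch_size]:
            send_email(address)        # stand-in for your ESP's send call
        time.sleep(pause_seconds)      # let downstream systems absorb the load

# Usage sketch: send_in_batches(churned_users, esp_client.send)
```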

Speaker 2:

The team had to get up and spin up more servers to support all the traffic we were driving, because everyone was blown away and thought this was some sort of joke. It was not. But what we found was that certainly some people rode, but not nearly as many as you would think took advantage of this thing. So you can say: listen, it doesn't matter what size promotion we give to these folks, they're not coming back to ride mopeds. Maybe they moved, maybe they bought a car, who knows what happened. But we can stop trying to figure out the right promotion to get people back on a moped, because we gave them effectively unlimited free rides and they didn't take them.

Speaker 1:

So what do you do as a marketer from there? Is it, okay, we're just not going to try to re-engage churned customers again, and we'll focus on optimizing the ones we have? What decisions do you make after that? I'm putting you on the spot here.

Speaker 2:

Yeah. So then you go into research mode, right? Because this hypothesis had been thrown around without data for long enough that it was assumed to be true, and we had now basically proven that the facts of the world as we knew them were incorrect. So we had to come up with alternate explanations for why this was happening. We went on a big streak of user research: talking to churned users, talking to active users, talking to at-risk users. And we started looking back historically to figure out what the indicators were for the cohorts and riders that ended up churning versus the ones that came back, and whether there were things we could do to proactively identify them.

Speaker 2:

And then, you know, some of the other things we've talked about. I guess that episode hasn't been released yet, but by the time you're hearing this you will probably have heard it: the idea of what you're doing between purchases. That is the key to lifecycle marketing: how do you engage your users when they're not in a buying cycle? So we spent a lot of time thinking about what we could do during the non-moped season.

Speaker 2:

And luckily, I was getting there right when we were launching the cars. So we already had a solution we were launching with, where instead of riding in a Lyft or an Uber, you could ride in a Revel. They were all Teslas at the time, so it was just a really sick experience, and we got a ton of engagement on that, which I think massively helped. But we spent a ton of time researching, reviewing data, and trying to figure out what else we could do, what levers we could pull. We knew price wasn't one of them, or at least wasn't the one worth spending time on.

Speaker 1:

Right, right. And I'm going to say this explicitly: of the Sean Collins pillars of successful marketing, we hit on the first one, which is effectively, leave no room for second-guessing. That is a massive test to run. I think your LinkedIn profile says something that I really love, which is, how would we do this if we were insane?

Speaker 2:

Yeah, something like that. I think it's, how would we do this if we were crazy?

Speaker 1:

Yeah, how would we do this if we were crazy? Leave no room for second-guessing; there's only one possible answer to this. And I think it hits on another pillar too. I assume there were some confounding variables. Obviously price was the first thing, but did you analyze anything else? Did running further analyses give you insights you might not have had otherwise? Oh, you actually said this already: you ended up focusing on product education afterward and improving those metrics.

Speaker 2:

Yeah, we redid our onboarding.

Speaker 2:

As a result of the research we did, we did some really fun things that I've carried on and that I think are a great tactic.

Speaker 2:

In the onboarding, I sent an email, a plain-text email, where the reply-to went directly to my inbox. I said, I'm your ride concierge, or something like that. And I would genuinely get a couple dozen responses a day, and I would personally respond to them. We'd just start a conversation, and I would do everything I could to make sure they got set up for success. But I also got to hear all the questions and all the concerns they had, directly from them. To me, that was one of the most insightful things. User research is phenomenal, but user research when someone knows they're on a panel or knows they're being interviewed is not as raw and honest as someone talking to support. So you should try to turn your support teams into insight centers, and you should look for touchpoints where you can interact with your users or customers as often as possible, in the least formal way possible.

Speaker 1:

Yeah, talking to users, understanding what they're saying. We've heard this time and time again. The most recent example I can go back to, and I'm going to plug the podcast again, is Rebecca's episode. She talks about how a lot of what they do at Coterie is look at the actual interactions coming in from customers and figure out, hey, how can we open up a dialogue: both to fix the problem, obviously you want to make sure you're fixing the problem, but also just to gather a state of the world. How is our business actually running? How are our users actually doing?

Speaker 1:

For the business that I run, it's easy: we don't have thousands of customers, so I can have a personal relationship with each one. I know how often they're logging on, I know what they're using it for, I know their use case, I know how to get them successful on it. But if you're running a business that's talking to tens of thousands or millions of users, it's very difficult to figure out how any one of them, or even a group of them, feels about the product at any given point, and what their pain points are. So that's a super cool way of doing it: you message them and say, hey, I'm here for you, this mailbox is actually manned by a real human being, and I will do my job to fix your problem. Obviously, if you're running Instagram or something and you'd get a bajillion messages, it doesn't work at every scale of business, but I would say it works for most.

Speaker 2:

Yeah, and there are ways you can slim it down, right? This is the 21st century. You can have an email that goes to 1%, or half a percent, of your users; you don't need it to go to every single person. You can figure out key inflection points, key moments where you want a chance to talk to someone: at this point in time, or who just did X action, or who hasn't done Y action, and set up a message. Or you can do it way less automated and just actually send someone an email from your work Gmail. I guess maybe you have a work Outlook or something, but let's be real, you have a work Gmail if you're listening to this podcast. So just send them an email. Why not? What's the worst that happens? Someone doesn't answer you.
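
[Editor's note: a rough sketch of the trigger Sean describes, picking out users who just did X but haven't done Y, so a human-answered email can go to a small slice of them. The field names and the sampling are illustrative assumptions, not any particular tool's API.]

```python
from datetime import datetime, timedelta

def personal_outreach_list(users, now, window_days=7, sample_rate=0.01):
    """Users who did action X recently but never did action Y --
    a key inflection point worth a plain-text, reply-to-a-human email."""
    cutoff = now - timedelta(days=window_days)
    eligible = [u for u in users
                if u["did_x_at"] is not None
                and u["did_x_at"] >= cutoff
                and u["did_y_at"] is None]
    step = max(1, round(1 / sample_rate))   # crude slice so one inbox can keep up
    return eligible[::step]

# Illustrative records; in practice this comes from your warehouse or CDP.
users = [{"email": "a@example.com", "did_x_at": datetime(2024, 5, 6), "did_y_at": None},
         {"email": "b@example.com", "did_x_at": datetime(2024, 5, 2),
          "did_y_at": datetime(2024, 5, 3)}]
print(personal_outreach_list(users, now=datetime(2024, 5, 8), sample_rate=1.0))
```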

Speaker 1:

Yeah. We've talked about this a lot: with a lot of folks who work in marketing, there's always this search for perfection, and I think action is always better than inaction. It's, oh, what's the perfect email we can send? How do we write this the right way? Obviously, if you're working for a gigantic company with millions of eyes on it, maybe you should sweat it that much. But for most businesses: test quickly, figure out what question you want answered, and ask, how do I get the answer to this question in the best and fastest way possible? And how do I make sure the answer isn't being confounded by some extra variable, that it's not due to some external reason? How do I test the single thing I'm trying to answer and get the answer to it?

Speaker 2:

I think that's the perfect segue into point two. When you're running a test, you need to define what you're trying to optimize for, but you also need to analyze against other metrics as well, because things can have second- and third-order effects; they can have an effect you didn't expect. I'll give another example, and I'm going to be a little less detailed on this one because we're still running some tests around it. At Bilt, there was an engagement metric we wanted to improve. We had a few hypotheses about what might be causing it to be not exactly where we wanted it, and we put together a few different treatments. Even within this one treatment concept, we came up with a couple of variants with different incentive bonus structures.

Speaker 2:

So again, we were trying to get one particular metric to move. We ran the tests, and there were two variants that were far and away the best. The one that moved that original metric the most actually didn't have great long-term effects: it gave us a short-term win with a long-term return to normalcy, so basically a temporary blip of success at a high cost to us. The second-best one didn't move the needle on the primary metric quite as well as the first, but it had positive implications for long-term engagement. So even though it didn't quote-unquote win against the metric we were trying to optimize for, that one engagement point, it was far and away the better variant: it cost us less, it had a positive impact on the primary metric, and that impact was long-term rather than very, very short-term.

Speaker 1:

So you chose the long-term effect over the short-term effect.

Speaker 2:

We did.

Speaker 1:

Is that mainly driven by, how do you analyze that? I assume you're just looking at the financial reward, right? Hey, we just make more money in the long term. Say it's a reacquisition campaign, because this is what's been going on in my brain recently: we reacquire 75% of the customers in test B compared to test A, however, those customers end up staying twice as long. Ergo, the net dollar amount we make from the overall campaign is higher.
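
[Editor's note: running Ibby's hypothetical numbers, with an assumed, made-up revenue per retained customer per month, shows why the "losing" variant can win:]

```python
value_per_customer_month = 20        # assumption: revenue per retained customer per month

a_customers, a_months = 1000, 3      # test A: more winbacks, shorter stay
b_customers, b_months = 750, 6       # test B: 75% of the winbacks, twice the lifetime

value_a = a_customers * a_months * value_per_customer_month   # 60,000
value_b = b_customers * b_months * value_per_customer_month   # 90,000
print(value_b / value_a)  # 1.5: B nets 50% more despite reacquiring fewer customers
```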

Speaker 2:

I think that's one way of looking at it; that's a pretty e-commerce way to think about it. But let's go with that. Maybe the short-term win came from a huge discount, so you barely made any margin on those sales. Or maybe it ended up cannibalizing future sales, right? These could be people who were going to come back anyway, and the great promo just made them buy faster. The purchase they were going to make in a month at full price happened now instead, and since their standard cadence was once a month, their next purchase is now 45 days out or whatever. So there can be deleterious effects on the long-term side if you just optimize for a single short-term win.

Speaker 1:

It's kind of interesting: sometimes companies will choose to boost the short-term effect instead of the long-term effect. If you go back and listen to the Rome episode, he gives the example of how a lot of e-commerce brands will try to acquire a cohort of customers at a much cheaper CAC. CAC is customer acquisition cost: if it costs me $1,000 to run a campaign and 1,000 people convert, then my customer acquisition cost for that campaign is effectively $1 per customer. You want to acquire as many customers for as few dollars as possible. But oftentimes, if you do things to reduce CAC, the customers you acquire are not good for you in the long term, and you end up making less from them over time.
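
[Editor's note: the arithmetic here is simple enough to write down. A sketch with Ibby's numbers, plus made-up LTVs to show the trap of optimizing CAC alone:]

```python
# CAC exactly as defined above: campaign spend / customers converted.
campaign_cost = 1_000
converted = 1_000
cac = campaign_cost / converted                  # $1.00 per customer

# The catch: cheaply acquired cohorts can be worth less. Illustrative LTVs.
ltv_cheap_cohort, ltv_normal_cohort = 0.80, 5.00

print(f"CAC = ${cac:.2f}")
print("cheap cohort pays back?", ltv_cheap_cohort > cac)    # False: you lose money
print("normal cohort pays back?", ltv_normal_cohort > cac)  # True
```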

Speaker 1:

But oftentimes companies will make that choice on purpose. A good example: I've worked for public companies, and toward the end of the quarter there's obviously pressure to boost your numbers in any capacity, even if it might not be best for the business's long-term health. If you want to minimize your losses, or get customers to come back and increase MAUs or DAUs, then even if option A is worse than option B in the long term but better in the short term, people will still choose option A. And honestly, there's nothing inherently wrong with that.

Speaker 2:

I mean, there are other reasons too, right? A, you need to show growth, and that's fine; that's just the way the world works. B, you can talk about halo effects. If it's an e-commerce product, something people can wear or will post about or showcase or tell their friends about, having more of your product out there can be valuable, depending on your strategy. If you're a luxury brand, maybe you don't want more out there, because that sort of devalues the exclusivity.

Speaker 2:

But generally speaking, more brand awareness is good, and so getting tons of people wearing your shirt, or having everyone opening your app, can be a good thing.

Speaker 2:

You could say, hey, we're going to acquire a ton of one-time purchasers, or short-time users, or people who are only going to be on the free program and never convert to the paid subscription. But in that cohort, there are still going to be people who are repeat purchasers, people who do upgrade. And you could also say, hey, now we're getting insights, we're getting more data to understand who retains versus who doesn't. We're putting the opportunity in the hands of our lifecycle team, our email team, or our product team, and we're optimistic we'll be able to get more people to stay, use our product, or repeat purchase, because we think we're getting better and smarter and now we have bigger test audiences. There are other reasons to do it too. It's not necessarily a bad thing.

Speaker 1:

That makes a ton of sense. Going back to the original point: you've got to focus on more than just one metric. You've got to look at all of the effects that might come from the test you're running.

Speaker 2:

Yeah, and revisit your test cohorts for the same reason we just talked about: looking at other metrics to understand the full impact. You should revisit the lifetime value, the engagement, the purchasing behavior, maybe three months, six months, a year after a test is completed, and compare that against the other treatment or the control, because it gives you another look at what the actual long-term effects are. That does get complicated: if you're testing a lot, those people have since been exposed to a whole bunch of other tests. But depending on how you're doing your audience selection, hopefully those later tests are evenly spread across each original cohort. That's why you hire data experts and don't just use a basic ESP's random 50% sampling; you actually build audiences and do things in a controlled, disciplined manner. But look at the long-term effects, because they can surprise you.
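
[Editor's note: a minimal sketch of what "revisiting a cohort" can look like in practice: join six-months-later outcomes back onto the original test assignment and compare by variant. The table and column names are assumptions; a warehouse query works the same way.]

```python
import pandas as pd

# Who saw what in the original test (frozen at assignment time).
assignments = pd.DataFrame({"user_id": [1, 2, 3, 4],
                            "variant": ["control", "treatment", "control", "treatment"]})

# Outcomes pulled six months after the test ended.
later = pd.DataFrame({"user_id": [1, 2, 3, 4],
                      "revenue_6mo": [40, 95, 55, 80],
                      "active_days_6mo": [3, 22, 6, 18]})

# Compare long-term value by ORIGINAL variant, not just the launch-week metric.
followup = assignments.merge(later, on="user_id")
print(followup.groupby("variant")[["revenue_6mo", "active_days_6mo"]].mean())
```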

Speaker 1:

Right, I don't think there's another way of analyzing it. Going back to your previous scenario, you have to go back to those people and understand long-term engagement. Otherwise, what are you doing?

Speaker 1:

Yeah, if you don't do this, going back to Rome's CAC-to-LTV point, trying to acquire customers at a low CAC: you have to figure out how those customers do in the long term. Because it might just be, wow, yes, we acquired these customers for less, but we got absolutely screwed when we looked at their long-term effects.

Speaker 2:

We made less money overall, yeah. And there's a professor at Wharton, Peter Fader, who's a data science guy but all about marketing, all about LTV, and he's written a bunch of books on this. He says your retention curve is going to level off, so you do need to constantly be getting new customers. And I think it's important to look at the long-term effects and account for the fact that, yeah, some people have an LTV, or a lifetime even, of three years, five years, but most of your customers are going to have a one-purchase lifetime.

Speaker 1:

And you're always going to have that 60%. How do you optimize for the 30% that will keep purchasing from you? How do you make that experience great for them? How do you make the experience great for everyone? But specifically, how do you find the group that performs well, and how do you extend from them?

Speaker 2:

Anyway, last one. This one isn't even that smart; it's just something that annoys me, so maybe people don't even need to listen to this part.

Speaker 1:

Things that annoy you often make good conversation.

Speaker 2:

So, email. Email is my primary channel, the one that I'm passionate about. Everyone loves to say, we can just A/B test it, it'll be easy to test that. And yeah, theoretically it is easy to create a different variant of an email with a different subject line or different body copy; that's not hard. Sure, we can totally just test that. However, I get very annoyed, because I don't think most of those should count as tests.

Speaker 2:

You aren't coming in with a real hypothesis, and one single email is not representative of that change, right? If you test two different subject lines, even with a hypothesis, say, we're going to see what happens if we include the person's first name versus the exact same subject line without it, okay, you have a hypothesis that including a name in a subject line gets more opens. Great. But one send doesn't validate that; you'd need to see it hold across many sends before you've learned anything. Especially if you're doing it like, hey, we're going to have four subject lines, we're going to let the AI in the ESP decide the winning subject line after 30% of the sends, and then the remaining 70% will get the one that had the best engagement. Great, no problem with that whatsoever. But let's not call that testing, because you're not learning anything. You're just putting out different variants and trying to get the most opens. Nothing wrong with that; just don't say we're testing.
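
[Editor's note: for clarity, here's a rough sketch of the ESP mechanic Sean is describing: spray variants across an explore slice, then send the engagement winner to everyone else. `send` and `opens_for` are hypothetical hooks into your ESP. The point of the sketch is that nothing here tests a hypothesis; it just maximizes opens for this one send.]

```python
import random

def send_with_winner_selection(audience, variants, send, opens_for, explore_share=0.30):
    """Pick a subject-line 'winner' for this send only: round-robin the variants
    across 30% of the list, then send the best performer to the remaining 70%."""
    random.shuffle(audience)
    split = int(len(audience) * explore_share)
    explore, rest = audience[:split], audience[split:]

    for i, user in enumerate(explore):
        send(user, variants[i % len(variants)])   # even spread across variants

    winner = max(variants, key=opens_for)         # open rate measured on the explore slice
    for user in rest:
        send(user, winner)                        # exploit: no hypothesis validated
```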

Speaker 1:

No, that's a good one. At that point, what is it fair to call that?

Speaker 2:

Optimizing variants? Yeah, we probably do need a better name for it. And I mean, why not do that, right? You have the tooling. Otherwise you're just guessing that one is better, so why not take a couple of swings at it?

Speaker 1:

If you're sending an email trying to get people to buy pants, say you're Gap, and you're trying to figure out what performs better in the summertime: is it fair to say an experiment would be, my hypothesis is that swimwear performs better than athleisure in the summertime, and that counts as an actual experiment? Versus, once you actually know that swimwear performs better than athleisure, you're then trying to optimize and get as much value out of that as possible, and that's the optimization of variants. Is that correct, or am I wrong there? You're the marketer.

Speaker 2:

Yeah, I think you're right. Although, generally speaking, I would say, hey, we don't need to test that; we could just look at past sales data and see. There are other ways to get that information, so why waste a testing opportunity on something you can answer through plain data analysis? But a better test would be: okay, we're going to have two groups that are statistically similar audiences, and for a period of time we're going to primarily talk about swimwear to one group and primarily talk about athleisure to the other, and we're going to look at the short-term and long-term sales effects on these people and use that to inform the following year's marketing strategy. I think that's completely valid.
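
[Editor's note: one piece of the "controlled and disciplined" setup Sean mentions is making the two groups statistically similar and stable for the whole season. A common way to do that, sketched here as an illustration rather than anyone's actual system, is a deterministic hash split:]

```python
import hashlib

def assign_group(user_id: str, groups=("swimwear", "athleisure"), salt="summer-test"):
    """Deterministic ~50/50 split: hashing the user id gives an assignment that's
    stable all season and independent of any property of the user."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return groups[int(digest, 16) % len(groups)]

# The same user always lands in the same group, so you can measure both the
# short-term sales effect and the following year's behavior against one treatment.
print(assign_group("user-123"), assign_group("user-123"))  # identical every run
```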

Speaker 1:

Everyone, thanks so much for listening. Let us know if you like this format. We're very, very curious to get your feedback.

Speaker 2:

For the number of you that listened to the entire episode: if you have thoughts or questions, if you want to be on the podcast, or if you have someone you want us to reach out to to be on the podcast, you can email me at sean@numbersandnarratives.co.