Growing Ecommerce – The Retail Growth Podcast

Performance Max: Navigating the Labyrinth of Mystery Placements and Estimations

August 15, 2023 Smarter Ecommerce Season 2 Episode 15
Growing Ecommerce – The Retail Growth Podcast
Performance Max: Navigating the Labyrinth of Mystery Placements and Estimations
Show Notes Transcript Chapter Markers

What if I told you that the elusive world of Performance Max campaigns could be demystified? That's right, after an exhaustive year-long investigation, analyzing over 3000 campaigns, I've begun to crack open the black box of Performance Max. Together with the help of data experts, we will shed light on the hidden facets of these campaigns and guide you through the labyrinth of mystery placements and estimations. We’ll even unpack the unique challenges faced by multi-brand retailers and reveal the surprising trend of PerformanceMax leaning heavily into mobile traffic.

Curious about how many campaigns are perfect for you? Wondering how different catalogs and products might affect your segmentation? We’ll answer that and more. We'll delve into popular frameworks around product segmentation and the most favored bid strategies. We'll reveal why maximizing revenue or conversion value is the most prevalent strategy. But that's not all. We're also going to dive deep into the PerformanceMax efficiency, analyzing how these campaigns are meeting the target return on ad spend, and what exactly happens when campaigns exceed the ROAS target.

Now, you might be thinking, can Performance Max truly be trusted? How can we verify its effectiveness? Well, we have got you covered. We'll discuss the potential pitfalls and verification process of Performance Max campaigns. By sharing insights from multi-brand retailers who have used Performance Max, we aim to help you make informed decisions about this campaign type. So, buckle up for an episode filled with rich insights, compelling analysis, and valuable takeaways. Let's crack the Performance Max code together!

Speaker 1:

Welcome to growing e-commerce. I'm your host, mike Ryan of Smarter e-commerce, also known as Mac. Today, I'm delivering one of my world famous monologues. I'll give you some more introduction in a second, but, in a nutshell, I've been gathering lots of data about performance max campaigns for over a year now, and sharing it here and there. I decided, though, it's time to do a roundup.

Speaker 1:

So, whether you're a marketing director or a practitioner in the Pmax jungle every day, or an agency partner, I encourage you to listen in as I attempt to shed some light on this famously black box campaign type. It's a complicated product in its own special way, and I really try to be fair and balanced on this. Alright, let's get into it. So let's start this episode by breaking the fourth wall a little bit. I'm going to talk directly to you, dear listener, and tell you something about this podcast. Two of the top three most popular episodes on the show have been really deep dive, very specific episodes about performance max, and the topic of performance max campaigns comes up now and again, but it's actually been about a year since there was like a deep dive into performance max, and, in the meanwhile, one thing that I've been working on a lot is digging around in performance max, in the data there, because slowly but surely, smartery Commerce has acquired a lot of performance max data and slowly but surely I've looked at a lot of that data and so that's why I want to take this episode and dig into that a bit, share with you, kind of bring together one place like I don't know if you happen to follow me on Twitter or LinkedIn or something, you might have seen a post here or a thread there, but this information is all scattered about and maybe you've seen it, maybe you haven't, but I want to collect some of that in one place in this long form content here on this podcast and share that with you. And basically the way I imagine this going, I want to talk about two mainstreams. 
I want to talk about some kind of general channel characteristics of performance max and then I also want to talk about some some performance insights about performance max, the way that it actually performs in the way that most of us understand, which would be volume and efficiency or revenue, and row us, and then maybe at the end, depending on how structured I've been or not, maybe I'll throw in there some kind of open questions or things that I don't know yet, that I'd like to look at in the future, or maybe those things just kind of come up along the way and there's nothing left there. We'll see when we get there. So let's just go on that journey together then, if you dare, if you're willing.

Speaker 1:

Now I want to start with a story that I heard when I was a kid, or actually a parable. So what is a parable? Isn't that like a story with a lesson? Or is that a fable? I don't know, but this is the parable parable of the blind man and an elephant. I don't know if they're still teaching that in school today, but let me explain it. Or, you know, most of the folks listening is our old timers like me. Anyway, let me explain that parable in case you haven't heard it. Also, maybe they don't teach this in schools in Europe or wherever.

Speaker 1:

It's a very simple story actually. There is a group of blind men and they have never encountered or heard of an elephant before, and so they find this elephant and each one of them is touching a different part of the elephant and you know they're trying to understand this thing, categorize it, figure out what it is. One guy's touching the tail and he says this is a rope. Another guy's touching a leg and he says no, that's definitely a tree. Another one is another is touching the trunk you know this long nose of the elephant saying this thing's a snake. Another one's touching the side or the flank of the elephant and saying it's solid wall. So the problem is that each person has their own small, limited experience of what this unfathomable creature is and they're not really communicating with each other. They're insisting about their own experience being correct and they end up getting into a fight because they can't agree on what this thing is that they're all experiencing. Now, if they had only spoken to each other, maybe they could have kind of synthesized all these different views and learn something about what this creature is. So why would I tell you a story like that?

Speaker 1:

Because I think of this story sometimes when I'm thinking about performance max campaigns. Because performance max, you know it's this black box campaign type. There are different reasons why it is that way. There are technological realities like this is a AI campaign type of AI plays a prominent role here and we can't always know what's going on inside of an AI system, so that that's a reason why it must be a black box in a certain extent. There are other product decisions that Google have made where they've decided to exclude or limit certain types of insights or controls that used to be available in the past, and so it's a black box in that way too.

Speaker 1:

But in the end we have this very large object in front of us that we all want to understand better than we do, and that's one reason why I dig around in this data. I I think of that elephant and the blind men and I say, okay, here's this huge thing in front. Maybe I can't grasp the entirety of it, but if I can take a snapshot here and a snapshot there, maybe I can learn some things. And if I'm a little bit smarter than those guys and I let the data points speak to each other, then and I kind of try to stitch that together Maybe I can start to understand what this thing is. So, as I mentioned, we've kind of slowly but surely accumulated a large data pool here at Smarter E Commerce of PerformanceMax data. It started out as dozens of campaigns and then hundreds, and now we've got over 3000 PerformanceMax campaigns where we have access to that data. And so you know, I always have to be respectful. I want to be respectful of people's data and privacy. But what I feel pretty comfortable doing is taking that data and looking at it in the aggregate, in a way that's not going to expose the data of any one advertiser, but rather look at general trends and benchmarks in a way. I hesitate to use the word benchmark. It feels too much like kind of a single source of truth or two absolute. So yeah, but I like to take this aggregated data and try to figure things out.

Speaker 1:

One of those, really one of the first things I looked at here early on, was the question of how much of PerformanceMax is shopping and how much of it is this other stuff. Because, of course, performancemax it's a single campaign type where, depending on how you look at it, you can access the entire funnel or customer journey. Or maybe a more practical way of thinking of it is that you can access all of Google's ad inventory from a single place. And what I found back then was and it hasn't changed, I tracked this over time was that PerformanceMax is basically shopping. And this was a little bit surprising to me early on, because I used to say early on, I used to say that I think PerformanceMax is not just smart shopping plus or the next evolution of smart shopping, and this is sort of a yes or no answer.

Speaker 1:

The reality is, if we look at the numbers, we see that 89% on average 89% of spend in Max is associated with shopping or dynamic remarketing. 89% of it is associated with the product feed, let's put it like that, and this hasn't changed a lot over time. We know that more and more advertisers are actually implementing a feed-only approach, so this can even drive the number higher. But there's a range of values in there, like if you take all the campaigns and look at the middle, 50% of them 50% of them are going to have a cost share on that feed-based inventory, between like 76% to 95%. But then you also do have another 25% of data that's actually crunched into the top there, between 95% and 100%. So that definitely speaks to me of this whole. When you see 25% of data in that tiny range, that's the popularity of feed-only right there. And then you see another 25% of the data that lives in this other space, where there's more asset-based inventory.

Speaker 1:

So all this display inventory YouTube, gmail, discover all those things which brings us round to the next topic, like of that sort of non-shopping and non-dynamic remarketing, the non-feed-based inventory, what's going on there? And there is this so-called Performance Max Placements report in the UI. It's pretty frustrating. It's part of me wishes that they would have just either done more there or just left it out altogether. It's almost just agitating because it only tells you the impressions that you get in these other kinds of placements, and those include mobile applications, web pages, web display, and then this very special little thing called Google-owned and operated, which will include the YouTube inventory and everything else. And yeah, on average we see that about half 53% of the impressions are located in web display. About a third a 31% are in this Google-owned and operated soup with Gmail and everything in there, and then the remainder 16% is on mobile applications and each of those again, there's a range of different outcomes that are possible, but that's if we look at the averages across a lot of campaigns. That's what we're seeing Now.

Speaker 1:

Mike Rhodes actually I should also shout out Tobias here, I think he did kind of the initial thinking through of how to do this and then Mike Rhodes wrote a really great script and helped popularize this approach. The two of them and others have started digging deeper into the black box and into those mystery placements and so on, and they do this through a series of kind of clever A-calls or gackle Google AdWords query language calls and reverse calculations and estimations and they try to tell you not just impressions about costs and breaking down your video share and stuff like that. So that's very cool. You should check that stuff out. I feel like Google at this point. I don't know why they don't open up the black box more, because we all have access to like a free script that helps us report that anyway. But something that's in my backlog is analyzing our entire dataset and using that approach and the thing is that it's way too much data for a script. It just breaks First off, it times out and it's just too much stuff in there. So we need to do that via API. So if you notice all that stuff, I'm just don't know where. I'm not going to go down that run any further and there it out too much. But we are working on producing some kind of further benchmarks there based on that approach from Tobias and Mike. So when we're thinking about kind of the overall characteristics of performance max, those are two big ones that we discussed already how much of this is shopping? How much of it is not shopping. And then what is going on in that non-shopping black box, where I hope to shed some more light in the future. But definitely check out the script.

Speaker 1:

As I mentioned, another huge topic is this whole brand non-brand thing, and we had an episode a while back where I spoke with Kirk Williams and Ben Kruger both smart people about this kind of issue in some detail and it's something that, to be honest, like in our segment, in our market segment, is a little bit less relevant because we work with a lot of multi-brand retailers who have a different view on brand non-brand than a brand or manufacturer will. As a retailer, you're very happy to get in on that brand traffic that's out there. That's like a key kind of categorical facet of your inventory. Brand is absolutely central there. So I guess the question always comes up is about the incrementality of brand traffic and is it warm traffic or not, and is it polluting your overall performance? So some of these multi-brand retailers their own brand will be large enough that they start to have those concerns too, and then, of course, it goes full circle and becomes relevant.

Speaker 1:

But these days there are options for handling this topic. There's brand exclusions, which is the new kind of proper feature from Google where they are offering you some ability to control this and it's sort of on their terms it functions differently. The negative keywords that would be the other main way, which those are not. It's possible to apply negative keywords to performance max campaigns, but it's not desired by Google's product team. Let's describe it like that.

Speaker 1:

But another dimension to this of trying to figure out what's going on with brand is that you need to be able to see your search terms to find out in the end how much traffic was related to brand or non-brand. And that's again another area where Google has improved. They've offered the brand exclusion controls and they've also increased the usability of search term reporting and Google adds really positive development from them and I'm happy about that and I'm working on a way to take that new report and try to understand in a more aggregated way again the characteristic like on average, how much brand traffic are we seeing out there for accounts with strong own brands? But in the past I've done some spot checking there and, for example, a brand that will remain anonymous here we saw that they had 43% of their revenue was associated with brand traffic and you know that's a lot, because let's imagine that Google ads performance max is accounting for a pretty large amount of their overall web traffic and their overall online revenue, and then you see that almost half of that is coming from from branded PMAC traffic, and questions arise about the incrementality of that. Is this really bringing us cold traffic, new customers? Are we just retargeting, remarketing? And in the case of one of these advertisers, you know they have categorical PMAC campaigns, so they had one for, like, jackets, one for boots, one for ring gear, and you'd see that among these, the branded revenue share is different like 29% on boots and 51% more than half on jackets. So that's definitely a topic for them to be aware of. And then you need to build that into your strategy and reflect that. Like do I care about this issue in the first place? Where do I stand on this sort of philosophically, strategically, what do I want to do about this topic, if anything? But what we do know is that, left to its own devices, performance max will seek out that branded traffic. 
So you've got to decide where the line is for you.

Speaker 1:

So one of the yeah, let's say the last for now, the last kind of channel characteristic data point that I want to share with you is about the device mix on performance max, and I'm going to compare that to standard shopping here. So we've seen that that shopping and performance max this channel is just getting more and more mobile heavy every year, basically, and in fact I recently did like a long look back and saw that it was in, I think, q2 of 2016,. That's with shopping clicks, shopping inventory, where mobile overtook other devices as the majority source of clicks. So it's years back now and the trend has only intensified in the meanwhile. But what I think is interesting about performance max is that it leans even more heavily into mobile traffic.

Speaker 1:

So if we look at the impressions per device on shopping, we'd have like a 68% share of impressions and for performance max it's up to 72%. Now that comes at the expense of computers. Basically, in performance max we see a little bit less computer you know desktop and laptop. We see a little less impressions there. They're quite the same on tablets.

Speaker 1:

And then sort of unique piece of a device for performance max relative to shopping would be TV screens, and I don't know I was expecting to see more here, but so far Pmax doesn't seem to be serving a lot of impressions on TV screens. It was less than 1% of the total impressions, point zero, 6% to be exact. I still think that that number could change, likely could change in the future. I'm not sure. Maybe these connected TVs and smart TVs are too thoroughly in the domain of Google's programmatic offering or I'm not. I'm not really that deep into that area there, but I could certainly imagine that Pmax is going over time, going to serve more TV inventory. It seems like a growth area for them for sure. So this new sort of spin off campaign that's coming up to Manjen, you know, it's sort of like a spin off of Pmax, sort of a replacement of discover, from what I can tell, and maybe that looks very YouTube heavy and maybe we're going to see a lot more TV impressions from that demand and campaign type in the future too.

Speaker 1:

So those were kind of the channel characteristics that I wanted to talk about and thinking out loud here. But before we get into like the performance characteristics, maybe there's one other kind of characteristic we need to talk about a couple of characteristics about advertisers and how they're setting up their campaigns, because two things pop popping in my mind there. One would be the amount of segmentation that we're seeing and how that's changed over time. And then another would be like what's going on with the bid strategies and use, how people are optimizing these campaigns? So let's start with the segmentation. This is a touch older now.

Speaker 1:

I ran this analysis back in April. I could run it updated, but basically I looked at April 2023 against April 2022, when Pmax was a much newer campaign type. Back then, of course, still right in the midst of its adoption curve as it was, people were migrating. Asking me, yeah, migrating smart shopping over to performance mix. But back then we saw that just about 60% of advertisers were running a single Pmax campaign and then about the next third. So at that point most of them were running like two to five campaigns in that range. And now, in 2023, it's basically flipped. We're seeing that about a third of advertisers are running a single campaign and then the majority are running two to five campaigns, and there are some advertisers running six to 10 campaigns, even greater than 10 campaigns. We do see that and I think there can be cause for more campaigns. Like greater than 10 seems a bit iffy to me, but the factor for this, like how many campaigns is right for you, how much segmentation is basically going to depend on how fast it is.

Speaker 1:

Enlarge your inventory is, or let me describe it like this like you know, you could have a few hundred or a few thousand products not the hugest amount of product but if they're all so different from each other, then there could be grounds or reason for segmenting them. And on the other hand, like you could I just reminds me of a real case, a few years back actually, but you know, we worked with an advertiser who had, I think, over a million individual or discrete items, but they were t shirts and they were basically just combinator. You know, as a mass print company, when you think of those companies where you can get, you know you go on, they've got a million different novelty t shirts or whatever. They had different prints, different colors, different sizes and it was yielding this huge amount of unique IDs, but there it was kind of superficially different. They were actually very homogeneous. They had similar margins, similar price points and so on, and there wasn't actually that much reason to to highly segment an account like that.

Speaker 1:

So it's kind of this, this balance of yeah, like the, the size of of your catalog and also the amount of difference that exists in there If there's a lot of different categories, different brands, different margins whatever the case might be different price points, different audiences that these things appeal to. This will dictate the amount of segmentation that is right for you. You need to look for meaningful difference and segment where there's meaningful difference. That would be my top piece of advice. And you know, there's also some very popular frameworks around product segmentation related to, like this volume efficiency matrix, where there's whatever you want call them ghosts, zombies, leaders, these kind of these kind of products, and then hero products, and maybe I'll talk about that in more detail another time or write about that another time. But I would just encourage you. That's a great starting point, but I would encourage you to think not just along those two dimensions of volume and efficiency, which are partly already covered by the campaign itself, and look for, like, a more multi dimensional approach and look for things that are important to your business and that the channel is not aware of, bringing that off channel data. So you know, I'm glad to see that advertisers are segmenting a bit more than they were a year ago. I think that's important. I think that speaks to more sophisticated strategies and campaign builds. But you know it needs to be. It's a very strategic decision in any given account, yeah, so I want to share that.

Speaker 1:

And the next thing that I want to talk to you then about would be what we're seeing in terms of a bid strategy deployment. So in regards to that, bear in mind there are effectively four bid strategies that are possible in performance. I mean there's actually there are two where you can maximize your conversion value or you can maximize your conversions, but then within each of those you can choose to optionally set an efficiency target, either a target cost per acquisition if you're maximizing conversions, or a target row as if you're maximizing conversion value or revenue. And I think that when you have an efficiency target in place it changes the dynamic enough that to me they're kind of discreet bid strategies. They're just kind of nested hierarchically under the other two. But anyway, if we look at those, like 95% of our advertisers we see are maximizing revenue or maximizing conversion value. That's by far the most popular parent strategy which makes sense. I mean that's as it should be. Basically, maximize conversions can make sense in a legion scenario, but even then you might try to find a way to bring value based bidding in there. But then of that 95% that are maximizing revenue or conversion value. We then again see that a majority of those have a target row in place. So if we think of the four-bit strategies kind of independently of each other, it boils down to this 78% of advertisers of campaigns I should really say 78% of campaigns are running target row, as another 17% are running max conversion value with no efficiency target, and then the remainder are a max conversion bidding. The tiny, tiny man was was a target CPA in place. So target row is is by far the dominant strategy and performance max.

Speaker 1:

The question then becomes if it works, and the short answer would be yes. Yes, it does work, at least when you look at the in platform numbers. If you take the reporting that that is available to you, that Google is optimizing on like, everything looks good. I think there are questions and that come up in terms of like, does this look the same on your back end? We increasingly see advertisers are. They don't even really care about the rows and their ads account. It's just sort of a number they're. They're actually measuring the return on ad spend somewhere else in a different system, and then, like, the rows in ads is just a proxy. Oh, okay, I need a real rows of like five. So then in ads that means I have to set a row of six to get that, that kind of a setup which, if I were a product manager at Google, I would be concerned about this. But, to keep it simple, I said it works and it does. You know, what we're seeing is that with performance max, 47% of the time the actual row is is on target, which I define here as plus or minus 10% of the target rows. So about half the time it's matching the goal, which is exactly where you want it. Another 37% of the time it's exceeding that goal, which sounds great oh, we'll talk about that more in a second, maybe and then only 16% of the time it's not reaching the goal.

Speaker 1:

Now I have to say right away that our data skews up market. We're working with larger retailers who have larger budgets and larger amounts of conversion volume, and that does become important, and I think we'll touch on that in a second too. But one thing that was important to me, like I didn't want to just take a snapshot here. So I ran a time series going back to May 2022, through the end of May 2023, when I did this analysis a few months ago now, but I'm confident the picture remains the same and it was consistently the case that performance max like for the median advertiser and all our market segment it was, it was out outstripping the target actually most of the time.

Speaker 1:

Now, as I mentioned our market segment, you know to be clear, we're talking about multi brand retailers in Europe who are larger in size. It is is an imperfect sample. It's going to have its biases. I specifically went and looked at smaller accounts and smaller campaigns where they had less than 30 monthly conversions and there's a problem there. The actual row is is pretty consistently below the row's target. So you know actually a timer recording.

Speaker 1:

Just this morning I did a brand new analysis because, like, where's the break even point there? Or maybe I'm using I don't want to misuse that term or abuse that term but at what point do you have enough data that smart bidding starts to work? Because I mean, I'm not even really sure if there's an official answer from Google anymore. Shame on me for not knowing that, but I see answers on the market ranging from like 15 conversions per month to no. It takes 100 conversions per month and that's a big difference, also because that will be proportionately reflected in the amount of budget that you need. You need to have budget for 100 conversions per month and you know if you're in a high AOV and low volume category, you might even be more so concerns about that.

Speaker 1:

So my way of looking at this was to take conversion bins, like from fractional conversions to 30 conversions per month, 30 to 60, 60, 90, and then larger bins upward to greater than 1000 monthly conversions, and then look at like okay, the campaigns that fit in this. And I looked at six months worth of data. It's like, on average, like 2,300 campaigns per month that have target rows bidding in place. So I can I can tell if they're above or below target or not and you know, in the end they create like 14,000 data points. We take the six months and the thousands of campaigns, so it's a lot of data that went in there and and I look yeah, all right, per conversion bin above target, on target, below target. What's going on here and what I found was that this low bin is is more than half the time below target. A very narrow percentage of the time are they on target and then above a third of the time they're actually exceeding the target, but it's very so you can have a small volume campaign that is above target. But you know the question is about volatility, about predictability and actually as you increase the amount of conversion volume, you know you get into that 30 to 60 bin or 60 to 90 bin. What we see is that the amount of times Pmax is above target it stays pretty constant at about a third of the time, but the number of times that it's below target gets quite small and basically that that nice on target middle grows more and more. So basically, more campaigns are able to deliver on target and fewer campaigns are delivering below target as you have more monthly conversions. So it's very clear that Pmax loves that data, wants that data, needs that data.

Speaker 1:

Now let's rewind for a sec. Let's go back. I said a minute ago that about one in three campaigns seem to be exceeding the target rowist. What are the implications of that? Is that a good thing? I think it's. If you're not meeting your rowist target, people complain there's eyes on this campaign, what's going on? Hey, this is not profitable for us. People have a very clear reaction to that. It's a rational one. It makes sense. The question is should we be high-fiving and popping open the champagne if we're exceeding the return on ad spend target, because I would say that's not that rational. Yet it feels we're biased to respond that way. We're like, oh that's great, I'm beating my target, that's awesome.

Speaker 1:

The question becomes there there's a trade-off on the other side. The campaign is arguably over-delivering efficiency, which means that it could be missing out on revenue. You could be leaving money on the table. You've set an efficiency target and ultimately that's where you want to spend. How do you decide if you're over-delivering on efficiency or if you're missing out on revenue? Well, it's pretty hard. When I'm trying to aggregate this data, I can't just benchmark that either. Not directly, because if we're talking about checking out what percentage of campaigns are above or below target, that's easy, because people are setting an efficiency target in the UI and I can report that, but no one's setting a revenue target. In fact, that's not the way that the platform works. The platform TROAS, targetroas is a subset of maximized conversion value that's uncapped. It's going to deliver as much conversion value or revenue as it possibly can.

Speaker 1:

But I thought about this. I was thinking okay, what are some proxy metrics for headroom? Is there still headroom left in a given campaign? Two things that came to mind. One of them is impression share. This is not as readily available for performance max as for standard campaigns, but actually you can dig it out of auction insights reporting that shows you all of the impressions you're eligible to receive. How many did you get? Then, implicitly, the rest of them you could have a bit higher, or something like this. There's some action you could have taken to get the rest of those impressions. Yeah, if you're over-delivering on efficiency, then maybe that's a factor there.

Speaker 1:

Then the other thing I looked at is budget depletion. It's like, okay, you have a certain budget, you want to spend that, or at least you're willing to spend it. If it's not being spent, why? If this campaign is performing so well, why doesn't it use all that budget? So I hope those make sense as proxies.
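The two proxies described here can be sketched as a small calculation per campaign. This is a minimal illustration, not a Google Ads API schema: the field names, inputs, and thresholds are assumptions for the sake of the example.

```python
# Minimal sketch of the two headroom proxies: impression share
# (recoverable from auction insights) and budget depletion.
# All inputs and thresholds are illustrative assumptions.

def headroom_flags(impressions_won, eligible_impressions,
                   spend, budget,
                   is_threshold=0.9, depletion_threshold=0.9):
    """Return simple headroom signals for one campaign."""
    impression_share = impressions_won / eligible_impressions
    budget_depletion = spend / budget
    return {
        "impression_share": impression_share,
        "budget_depletion": budget_depletion,
        # Headroom exists if we left impressions or budget unused.
        "impression_headroom": impression_share < is_threshold,
        "budget_headroom": budget_depletion < depletion_threshold,
    }

flags = headroom_flags(impressions_won=45_000, eligible_impressions=60_000,
                       spend=800.0, budget=1_000.0)
print(flags)
```

A campaign that beats its ROAS target while both flags come back true is the interesting case: efficient, but arguably leaving demand unclaimed.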

Speaker 1:

If you can imagine a campaign that's performing super efficiently, but you see there's headroom left in the budget you have available and in the impressions you could have gotten, then you start to see what I'm talking about here. It was pretty interesting, because I looked at this for campaigns that were above target, at target, and below target. Impression share was not correlated with these at all; it was basically the same across those three populations. The one that was correlated was budget. For me at least, that was surprising, though maybe it shouldn't have been, because in my head I think, okay, the more you spend, there's a law of diminishing returns that kicks in, a dose-response curve, this sort of thing, where at a certain point each ad dollar you spend brings a little less revenue than the last one. It's not going to be as efficient. So you might expect that campaigns where the budget is highly depleted are not performing that efficiently. But it came around from the other side: the campaigns that were above the return on ad spend target were actually consuming more budget than the campaigns that were below target.
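The diminishing-returns intuition above can be made concrete with a toy response curve. The square-root shape here is a pure assumption for illustration, not fitted to any real account; it just demonstrates how average ROAS can stay healthy while the marginal return on each extra dollar falls.

```python
# Toy illustration of diminishing returns: under a concave
# (square-root) revenue response curve, each additional ad dollar
# returns less than the last. The curve is an assumption, not data.
import math

def revenue(spend, scale=100.0):
    # Concave response: revenue grows, but ever more slowly.
    return scale * math.sqrt(spend)

for spend in (100, 400, 900):
    avg_roas = revenue(spend) / spend
    marginal = revenue(spend + 1) - revenue(spend)  # value of the next dollar
    print(f"spend={spend:>4}  avg ROAS={avg_roas:.2f}  marginal revenue={marginal:.2f}")
```

Note how the average ROAS reported in the platform lags the marginal reality: the campaign can look well above target in aggregate even as the last dollars spent earn far less.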

Speaker 1:

And it kind of makes sense, actually, what Google is doing here. What the algorithm is doing is making sure it can spend efficiently and then scaling into that. It's finding a pocket of demand of some kind and scaling into it. What that pocket of demand looks like could vary and could be problematic; we'll talk about that. But it's fundamentally a conservative strategy, a low-risk strategy: make sure you can safely scale into demand. It sounds good, right? It's kind of what you'd want. But when you think about the consequences, it could mean that Performance Max is more likely to play it safe and remarket, or more likely to play it safe and just spend a ton of cash on your best sellers, which are already selling fine through organic channels.

Speaker 1:

This could play out in a number of different ways. What I like to imagine here is a tree on a high cliff in thin, cold air, this scraggly old gray tree, not a lot of leaves on it, and you think that thing's ugly, right? Or kind of beautiful in some way. It's ugly, but it's strong. As people say, not everything that glitters is gold. Well, also, not everything that is gold glitters. But this is a little abstract.

Speaker 1:

Let me bring it around to the point. Performance Max campaigns look great in the platform, but is this highly incremental growth? You know, incremental growth is not easy. Incremental growth is expensive. It's not high volume, it's not necessarily scalable. It's hard to find pockets of incremental demand; you have to work for it. And that's, I think, the challenge with the fact that Performance Max does so well: your job is to check on the back end, to test, and to make sure this is actually incremental growth for you. Because otherwise, yeah, it looks amazing in the platform, but it's not particularly doing you any favors. So I hope that makes sense, and you know what, I think that's about most of the stuff I want to discuss with you right now.

Speaker 1:

Let's try to bring this round to a close. I mean, we started out talking about that elephant, and I don't know if we're any the wiser. Well, we have all these different pieces and parts of PMax, so let's recap quickly. We found that the brand revenue is high, like many suspect. We found that the placements are rather opaque: there's a lot of Shopping in there, but then there's this other mysterious stuff, and we can only go on Google's promise that they're doing this right moment, right audience, right offer kind of thing and displaying the perfect ad, and so on. Other people are afraid that they're just stuffing inventory in there. We're not necessarily going to be able to answer that question definitively.

Speaker 1:

I think where I'd leave it is what I was just mentioning, this idea of testing. It's something we've invested a lot in, something we take a lot more seriously than we did 18 months ago, let me say it like that, though testing was always important to us. Something I really appreciate from Google is that they've got this free Python library called Matched Markets. Definitely have a look for it. Maybe we can throw that in the show notes as well.

Speaker 1:

We'll link to that, because it helps you create fair geographic comparisons and it helps you with your whole testing setup. What I like about it is that it has that data science power from Google. It has the methodology from Google, it's backed up by their research, and you can view the research papers related to it. You get all that rigor and all the power of Google, the stuff you want from them, the benefits, but you can also see every line of code and create your own test with 100% control. You're fully in command of the testing environment, and Google is agnostic to it.
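To make the geo-testing idea concrete, here is a hand-rolled sketch of the underlying intuition: compare revenue in test regions (where the campaign runs) against matched control regions (where it doesn't), scaling by the control trend. This is the bare difference-in-differences intuition, not the actual time-based regression methodology of Google's `matched_markets` library, and all numbers are made up.

```python
# Sketch of the geo-experiment intuition: estimate incremental revenue
# in test geos beyond what the control geos' trend would predict.
# Illustrative only; not the matched_markets library's methodology.

def incremental_lift(test_pre, test_post, ctrl_pre, ctrl_post):
    """Estimate revenue lift in test geos beyond the control trend."""
    control_trend = ctrl_post / ctrl_pre            # how control geos moved
    expected_test_post = test_pre * control_trend   # counterfactual for test geos
    return test_post - expected_test_post

lift = incremental_lift(test_pre=50_000, test_post=62_000,
                        ctrl_pre=48_000, ctrl_post=52_800)
print(f"incremental revenue ≈ {lift:,.0f}")
```

A real geo experiment adds a lot on top of this (market matching, time-series regression, confidence intervals), which is exactly what the library provides; this sketch only shows why fair geographic comparisons matter.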

Speaker 1:

They're not aware that you're running a test, and I do like that element of independence and control when it comes to testing. Because there are in-app, or in-UI, testing options from Google. Google will help you do your testing directly, of course, and they make it simple: push of a button, you don't need data science resources, you don't need anything. But I do have to wonder about that testing environment they've created, and I just like to have that extra level of control. So if you're ever looking for help with testing, just reach out. And if you have ideas for data insights about Performance Max, benchmarks, anything you'd like to see, just reach out; I'm always happy to talk about any of that stuff with you. And so, yeah, I think I'll wrap it up there.

Speaker 1:

You know, Performance Max looks great on the surface, and I hope it is as great as it looks on the back end too, but I think that deserves to be measured, thought about, confirmed. It's like that old saying: trust, but verify. So let's take that approach toward PMax, toward this huge elephant. Let's trust but verify. Thanks for listening to Growing Ecommerce, and if you enjoyed this podcast, please consider sharing it with coworkers, friends, or within your professional network. We really appreciate it. This podcast is produced by Smarter Ecommerce, also known as smec. To learn more, visit smarterecommerce.com.

Understanding Performance Max Campaigns
Analysis of Performance Max Campaign Characteristics
Campaign Segmentation and Bid Strategy Deployment
Performance Max Efficiency and Revenue Analysis
Measuring and Verifying Performance Max