ClearTech Loop: In the Know, On the Move

AI, Cost, Control, and Relevance with Margaret Dawson, CMO, SUSE

ClearTech Research / Jo Peterson Season 1 Episode 38

AI is getting embedded into everything: security workflows, engineering workflows, and customer-facing products, often faster than organizations can govern it. The upside is real. The risk is in the assumptions leaders are making while they chase speed. 

In this episode of ClearTech Loop, Jo Peterson sits down with Margaret Dawson, Chief Marketing Officer at SUSE, for a slightly different conversation than our usual guest mix. Same three questions, but through a CMO lens shaped by decades of enterprise buying cycles. 

Margaret breaks down how security leaders can use generative AI to move beyond tools and tech, what it actually takes to embed security and privacy without slowing innovation, and how CMOs should talk about AI without getting trapped in AI washing. The throughline is practical: productivity gains are real, but cost, control, and credibility are not automatic. 

Subscribe to ClearTech Loop on LinkedIn:
https://www.linkedin.com/newsletters/7346174860760416256/  

Key Quotes 
“The mistake that we’re making as leaders is we are assuming that the integration of AI is an automatic reduction in cost.” — Margaret Dawson 

“Until all of a sudden, the CFO got a million dollar cloud bill.” — Margaret Dawson 

“Consumers get smarter, very, very fast, especially B2B tech customers, and they start to know what’s real and what’s not.” — Margaret Dawson 

“Being very specific on what it is doing for your product, for that customer, tying it back to the business outcome.” — Margaret Dawson 

“People trust their peers more than any vendor or anyone else.” — Margaret Dawson 

Three Big Ideas from This Episode 

  1. AI adoption fails when leaders treat it like automatic margin 
    The board level narrative is tempting, but dangerous. AI can improve productivity, but assuming it instantly reduces cost is how organizations create governance debt and business continuity risk. 
  2. Speed is possible, but only with guardrails 
    Embedding security and privacy is not what slows innovation. Confusion, rework, and incidents slow innovation. The answer is clear boundaries, clear accountability, and controls that keep pace with adoption. 
  3. In marketing, relevance and proof beat hype 
    Buyers calibrate fast. AI messaging has to be specific, tied to outcomes, and backed by customer evidence. Credibility is built peer to peer, not through louder claims. 

Resources Mentioned 

🎧 Listen: In Buzzsprout Player
Watch on YouTube: https://www.youtube.com/@ClearTechResearch/playlist
📰 Subscribe to the Newsletter:
https://www.linkedin.com/newsletters/7346174860760416256/  

Jo Peterson:

Hey everyone, thank you so much for joining ClearTech Loop. We're on the move and in the know. I'm Jo Peterson, the vice president of cloud security for Clarify360 and the chief analyst at ClearTech Research. And I am here today with a real rock star, Margaret Dawson. I set you up, Margaret, we should have had, like, incoming music, or fireworks going off or something, right? In case you don't know Margaret, she is the CMO of SUSE, where she works to drive SUSE's global marketing strategy to enhance brand visibility and market presence. Margaret is a tech veteran. She's had tenure at companies like Apptio, Red Hat, HPE, Microsoft, to name a few. And in case you're not familiar with what SUSE is doing in the AI space, they are actually integrating AI into their product offerings through SUSE AI, an enterprise-grade platform designed for running private generative AI workloads securely and flexibly, both on premises and in the cloud. Some of the key features, so dig in, guys: a curated AI library, built-in zero-trust security, come on now, and integrated observability tools to monitor AI models, GPUs, and other components. So if you've got a rainy Saturday afternoon and nothing else going on, dig in.

Margaret Dawson:

I love that you brought up observability, because I was actually in the observability space for a very cool company called Chronosphere before I came to SUSE, and I noticed that integrating observability monitoring for things like AI, or even Kubernetes, is very hard. We have full-stack observability for, you know, everything around logs, metrics, and traces. But understanding what is happening with your AI workload, is it hallucinating, is it delivering the right things at the right times, that is a very hard thing to monitor. And I love that we integrated that as we built out our stack and added more observability capability. So I do think that is a key differentiator.
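Margaret's point, that AI-quality signals like hallucination checks should ride alongside the classic logs, metrics, and traces, can be sketched as a tiny telemetry counter. This is a minimal illustration only; the class and method names (`AIWorkloadTelemetry`, `suspect_rate`) are hypothetical and not any SUSE or Chronosphere API.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class AIWorkloadTelemetry:
    """Tracks per-model response counts and quality flags, the kind of
    signal an observability pipeline would emit as a metric alongside
    logs and traces."""
    totals: dict = field(default_factory=lambda: defaultdict(int))
    flagged: dict = field(default_factory=lambda: defaultdict(int))

    def record(self, model: str, flagged_as_suspect: bool) -> None:
        # Count every response; separately count ones a checker flagged.
        self.totals[model] += 1
        if flagged_as_suspect:
            self.flagged[model] += 1

    def suspect_rate(self, model: str) -> float:
        # Fraction of responses flagged as possible hallucinations.
        total = self.totals[model]
        return self.flagged[model] / total if total else 0.0

telemetry = AIWorkloadTelemetry()
telemetry.record("support-bot", flagged_as_suspect=False)
telemetry.record("support-bot", flagged_as_suspect=True)
print(telemetry.suspect_rate("support-bot"))  # 0.5
```

In a real stack this rate would be exported as a metric and alerted on, the same way you would alert on error rates for any other workload.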

Jo Peterson:

It is and, you know, and to go back to what you're saying with containers, AI sometimes behaves like containers, because it can be ephemeral, right?

Margaret Dawson:

Agreed, 100% in fact, I like to say AI workloads are containerized workloads because we're not going to build an AI workload, you know, on some legacy on prem thing like that would be silly, silly, silly. So I think there is a lot of connection and intersection between how we are building AI workloads, how we are leveraging LLM models, and the cloud native world, and all the things that we've talked about with container security, cloud security, they're not separate conversations. They're related.

Jo Peterson:

That's an interesting line you've drawn there. Yeah. So if you're new to the podcast, we ask three questions in real time, because we want to get thought leadership from folks like Margaret. So here's the first one, Margaret: how can cybersecurity professionals leverage generative AI to break out of that traditional tools-and-tech mindset and drive more innovative thinking and execution in their security programs?

Margaret Dawson:

Yeah, that doesn't feel like a softball. That feels like a really big question. But one of the things we were talking about before we went live, I want to come back to, because as cybersecurity professionals look at all the capabilities that AI is bringing, whether it's agentic or generative, or all the models that are available to them, I think we want to make sure not to throw out the baby with the bathwater, so to speak, right? There are foundational security elements that we have always followed that we want to stay true to. And when it comes to agentic AI, the first thing I would start with is access control rules. If I put my developer hat on, I'm sure this is something everyone's thinking about: how can I use AI to help me code faster, better, more securely, right? They're becoming kind of your favorite developer colleague or assistant. But you're not going to give every developer access to your source code or to some of the foundational things around the code they are writing. So when I think of a security professional, how are you partnering with your engineering leadership to develop those rules of the road? Go back to basics: give it a persona, give it a level, right? What are the access control rules? What does it and doesn't it need to have access to? What is it allowed to act on? A lot of people think that it can literally take over for a human. I'd be very careful to keep a human in the loop before you hit go, right? We've seen a lot of things, even by non-AI workers, where they push the button and, well, that just brought down the entire application. So maybe you don't want AI to do that until you're sure it should be pushing the go button, right, the act button, or the, you know, easy button, whatever you want to say. 
So that's where I would start: embrace the fact that everyone in your company is going to be coming to you with requests for AI. If I were sitting here running IT security right now, I'd assume that every single day you're going to get a new request. While you will have to look at things in a new way and probably develop some new rules of the road, start with what you already have. What does that AI need access to? What is the business outcome you are trying to achieve with it? How can we have zero trust? Because remember, just like with a human, you're not going to trust anything until you can prove otherwise. So assume zero trust until you can prove otherwise, right? Start with where you've always started, and then develop specific things based on the application of that agentic AI. Or, if marketing is using your generative AI tools, where is that content going? Is it all going back to the mothership? That's the other thing I would say: start with what they have access to, and where the information goes, whether it's coding, copywriting, or whatever is being created with that AI tool, and then be ready to be agile and keep reporting on that. But I think the last thing we need is a breach because you allowed an agentic worker to do something that you would never let a human worker do.
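Margaret's rules of the road, deny by default, explicit access grants, and a human in the loop before the agent "hits go", can be sketched in a few lines. Every name here (the agents, the scopes, the `authorize` helper) is a hypothetical illustration under those assumptions, not any particular product's API.

```python
# Zero-trust scopes: each agent gets only what it is explicitly granted.
# Agent names and action strings are made-up examples.
ALLOWED_SCOPES = {
    "code-review-agent": {"read:pr_diff", "comment:pr"},
    "release-agent": {"deploy:prod"},
}

# High-impact actions that additionally need human sign-off before running.
HUMAN_GATED = {"deploy:prod", "write:source"}

def authorize(agent: str, action: str, human_approved: bool = False) -> bool:
    # Deny by default: anything not explicitly granted is refused.
    if action not in ALLOWED_SCOPES.get(agent, set()):
        return False
    # Keep a human in the loop before the agent pushes the "go" button.
    if action in HUMAN_GATED and not human_approved:
        return False
    return True

print(authorize("code-review-agent", "comment:pr"))                    # True
print(authorize("code-review-agent", "write:source"))                  # False
print(authorize("release-agent", "deploy:prod"))                       # False
print(authorize("release-agent", "deploy:prod", human_approved=True))  # True
```

The shape mirrors what she describes for human workers: start from nothing, grant narrowly, and gate the actions that could take down an application behind an explicit approval.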

Jo Peterson:

Right, right. So let's talk about balance for a minute. We all want to innovate, and we all want to be fast with that innovation. So how can organizations strike that balance and embed security and privacy into their AI model development without slowing down that innovation?

Margaret Dawson:

I'm going to start with a data point, because I think it's interesting. When I look at research we've done, and at third-party analyst research, the number one concern of executives, and I don't care if you talk to a CIO or a CEO, is still cybersecurity, yep, right. AI comes in somewhere second, third, fourth. Honestly, the second one is often cost reduction. And I think the mistake that we're making as leaders is we are assuming that the integration of AI is an automatic reduction in cost. We've somehow brought those together, and boards are almost perpetuating this mythology that, oh my gosh, you could get rid of half your workforce if you just integrated AI in a better way, right? Or you could cut your cost of production by some ridiculous amount if you just integrated AI. So I think we still need to stand tall as security professionals and say, look, we're not going to do anything that puts the business at risk. Because I will tell you, the cost of breaking your business continuity, of having a data breach, is still much greater than any benefit you're going to get from AI. And with that, I've totally forgotten the question that you asked, because I got so excited.

Jo Peterson:

It's good, it's good. You're saying all the things we're thinking about, and that's right.

Margaret Dawson:

But I think you can still get the productivity. So what I would say is, here's where I would always start: I treat AI almost like my buddy, right? Okay, here's the code I am writing today; I am going to have you review the code first. I've talked to a lot of developers that have done just this: review this code, tell me what could be better. And I will tell you, every time it does come up with things. You could have improved the quality of the code, or you could have had a shortcut, or you could have actually added a capability that maybe you didn't think of. So I think there's absolutely benefit in the augmentation. Step one: review, review, improve, and you're still the quality check of that. Next step is, okay, I want to create this, could you write code in Python for this bit, and see what it does when you're letting it start from zero, right? And most companies are giving the agents access to training materials that allow them to know how to code in that company's way. Everyone has their style of coding, the languages they use, the tools they use, the libraries they use, the container-based images they use, whatever it is. We all have things that we do in our company differently from others. But like I said, they don't need access to deeper source code. They don't need access, necessarily, to the entire application code base. They don't need access to other things, right? And then, as you're training them, you may give them more and more responsibility. But I think at this stage, we should still be the ones to review the final code, to review the final product, before we take action or before we do things. It absolutely will improve productivity; I just don't think it replaces humans at the pace or cost efficiencies that we think at this time.
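The workflow Margaret describes, have the AI review first while a human stays the final quality check, can be sketched as a two-step gate. The `ai_review` function is a stub standing in for a real model call, and its heuristics and all names here are illustrative assumptions, not a real review tool.

```python
def ai_review(code: str) -> list:
    # Stub for an AI assistant's code review; a real implementation
    # would call an LLM. These checks are toy examples only.
    findings = []
    if "eval(" in code:
        findings.append("avoid eval() on untrusted input")
    if "TODO" in code:
        findings.append("unresolved TODO left in code")
    return findings

def review_then_ship(code: str, human_approves) -> bool:
    # Step 1: the AI reviews and surfaces suggested improvements.
    findings = ai_review(code)
    # Step 2: a human sees the findings and makes the ship/no-ship call.
    return human_approves(findings)

# A cautious human reviewer: ship only when nothing was flagged.
shipped = review_then_ship("result = eval(user_input)", lambda f: len(f) == 0)
print(shipped)  # False
```

The design choice is the one she argues for: the model augments the review, but the approval path never bypasses the human.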

Jo Peterson:

And you made such an insightful parallel a moment or two ago. It brought me back to the days of cloud, when we were all adopting cloud, and everybody sort of drew that line, wrongly so, between cloud and cost reduction, right? Cloud was about agility, and it was about getting to market quicker, and it was about a lot of wonderful things, but it wasn't about cost reduction.

Margaret Dawson:

That is such a great point. I had forgotten that early public cloud era. You and I remember, but maybe people that didn't live through the world of virtualization, when there wasn't cloud, don't: it took six weeks to spin up a VM, and all of a sudden you could go to AWS and spin up a cluster, right? It was just like, oh my gosh, this is miraculous, I can do my job in one day. Until all of a sudden the CFO got a million-dollar cloud bill, and they were like, what the heck is happening over there, and why do I have a million-dollar cloud bill? Or you all of a sudden ended up with 20,000 credit card expenses that added up to a million dollars. And then all of a sudden it was, okay, wait a minute. But we did make that mistake, and I think that is the perfect parallel, both from a cost and productivity standpoint and a security standpoint. Because, you and I were talking earlier, years ago, I mean, maybe it wasn't that many years ago, it feels like forever ago, I did this talk called "network security is cloud security": what are the fundamentals, and how do you adapt that to cloud? Same thing with cost, right? I think AI has huge benefit, and we should be excited about it, but we need to make sure we don't over-pivot and assume it's going to solve all of our problems and immediately increase our EBITDA margin by, you know, 10 points or whatever else, right? Like, God forbid AI all of a sudden has people saying rule of 60; I think rule of 50 is hard enough, for those of you that live in EBITDA land. But I think that's a great way to compare it, definitely.

Jo Peterson:

So, a little bit of a twist on this one, as a CMO, because we're all consumers: how should you and other CMOs be thinking about AI adoption in their companies in terms of brand?

Margaret Dawson:

Foundationally, whether we like it or not, if your customer does not see your product being relevant in the AI world, you're going to lose market share. You have to be relevant, just like with other technological evolutions. I think where companies are tripping over themselves is there's a lot of AI washing going on, just like there was cloud washing, and then there was hybrid washing, and there was whatever-else washing. Consumers get smarter, very, very fast, especially B2B tech customers, and they start to know what's real and what's not. And so as you roll out your AI capabilities, as you roll out your AI messaging, being very specific on what it is doing for your product, for that customer, tying it back to the business outcome, right? Like, what is the actual value of using your product? Is it greater security? Is it faster productivity? And then, as soon as you can, get those case studies out there. From a marketing perspective, this is always our top priority: people trust their peers more than any vendor or anyone else. So the sooner you can have people saying, we've used this AI platform, this AI capability, this whatever-it-is, and this is what it did for us, that starts to make the biggest difference in the world. So I would say, be as honest as you can, Jo. I believe in ethical marketing and honest marketing. We all try to do our messaging, but be really careful not to AI-wash to the extent that it's going to bite you in the butt later, because it will, right? Because at some point that catches up, and it just takes a couple of people saying, you know, they said this, but it's not what it does. And I think we're in that period of, what does Gartner call it, not the trough of disillusionment, but the part before that. I think that cycle is accelerated with AI. But we have to do it, and we do have to move fast. 
I mean, from an engineering perspective, you do not have the time to test a million different things. You've got to figure out pretty quickly how to incorporate some type of AI capability, even if it's just an agent in your knowledge base helping people find information faster. You just don't have the ability to wait. And from a marketing perspective, if you don't have an agent on your website at this point, you're already behind. If you haven't figured out your AI visibility and how you're showing up in AIO and GEO, in addition to traditional search, you're already behind. We're already moving down that path. But for me, it's also about helping people really understand the differences. You know, we have a platform that you described beautifully; I should hire you to do our marketing. The capabilities around that are all built on the foundations of the platforms we already have: our Rancher container management platform, our SUSE Linux, and then other capabilities that we've integrated, as you mentioned. That is for people to build AI workloads and have infrastructure that's optimized for an AI workload. That is very different from saying I have AI-ready automation, or AI-infused automation, in my platform that is helping that solution itself function better, regardless of how you are using it. And we have both of those things. Sometimes I think it's easy to confuse a capability within a solution versus something made for an AI workload versus an actual LLM or another piece of technology that's in that AI. So I think specificity matters, and I think it's on all of us, in marketing and in product, to really make sure it's clear: what is this, how does it fit into your AI world, and what's the benefit it's going to bring? And then have those proof points go along with it.

Jo Peterson:

Wow. See, I threw you a curve ball and you just rose to the challenge.

Margaret Dawson:

You asked me about things I like. I was ready for that one.

Jo Peterson:

Nice, nice. Well, this was so fun. Thank you so much for making time to visit with us. Y'all. I hope you got something out of it, too. I definitely did, so we'll catch you next time.

Margaret Dawson:

Thanks so much.