What's Up with Tech?

AI and Marketing Synergy: Ethical Considerations and Innovative Practices

Evan Kirstel

Interested in being a guest? Email us at admin@evankirstel.com

Discover how AI is transforming the landscape of marketing with insights from Tim Marklein, the visionary founder and CEO of Big Valley Marketing. We dive headfirst into the challenges traditional marketers face while adapting to the ever-evolving technologies and how Tim's team of experts is leading the charge with innovative solutions. Celebrated for their remarkable growth and recognized by the Inc 5000 list, Big Valley Marketing is at the forefront of blending AI with marketing strategies. Our discussion also explores the ethical considerations of AI's role in communication, from universities using ChatGPT for public statements to the delicate balance of maintaining a human touch in an increasingly automated world.

Join us as we unravel the ethical dimensions of AI in the creative process, particularly in crafting content and impactful marketing campaigns. With thought-provoking discussions on AI-generated ads and the broader implications for events like the Super Bowl, we weigh the significance of transparency and responsible AI use. Through personal anecdotes, we reflect on the transformative power of generative AI tools, enhancing productivity and efficiency akin to having an expanded team. As we look to the future, it's with optimism and enthusiasm, eager to explore the exciting intersection of technology and marketing, and the new engagement avenues it opens.


Support the show

More at https://linktr.ee/EvanKirstel

Speaker 1:

Hey everybody, fascinating guest and topic today, diving into the impact of AI in marketing and business with a true thought leader and innovator, Tim. How are you?

Speaker 2:

I'm good. I'm good. I'll try to live up to that introduction.

Speaker 1:

Well, I think you will, and it's a really fascinating topic. We really haven't talked about this on the show before. Maybe introduce yourself as founder and CEO of Big Valley Marketing, a company many of us know. But for those who don't, what was the big idea behind Big Valley? And tell us about the journey.

Speaker 2:

Yeah, well, so I started the firm 11 years ago. I've been doing marketing communications throughout my career, you know, I say 25-plus years; I've just stopped counting after that.

Speaker 2:

For a while I've seen this problem, in at least the tech world if not beyond, where a lot of the firms on the marketing services front, whether it's an ad agency, PR agency, digital shop, et cetera, have not been well set up to meet the needs of today's technology marketers. So we've built out what we call a team of experts to provide what we think is better service, more efficient and effective marketing, and to help them across a variety of disciplines. We've been in business 11 years and really started to hit our growth stride about five years ago, which helped us become one of America's fastest-growing private companies, according to the Inc. 5000 list, two years in a row. So we feel like we've got some good momentum.

Speaker 1:

Fantastic. So lots of hot topics here: AI disclosure, bridging the trust gap, which is at historic lows, ethical responsibilities around AI and content creation, and on and on. Where do we start? How do we frame the discussion when it comes to the use of AI in marketing, in tech in particular?

Speaker 2:

I don't know, maybe I'll start with just an anecdote, which is what got us pursuing this topic. So I was at a research conference last March. Obviously, you know, OpenAI launched ChatGPT, what was it, November of '22? So this conference was March of '24, and everybody was talking about AI and what the impact of AI would be. The conference I was at was really about public relations research.

Speaker 2:

A stupid example, but there had been a shooting at Michigan State, and a lot of other colleges were trying to provide commentary or their perspective on the shooting. One university in particular had ChatGPT write their statement about the Michigan State shooting, and then disclosed that ChatGPT wrote the statement. So you would think, they're disclosing it, that's a good thing, right? But it's such an inappropriate use of AI, because people are looking for, hey, what does the university really think? ChatGPT could reasonably tell you, well, here's what other universities are saying, or here are the themes that people bring up. But to actually have ChatGPT write the statement and then say that it wrote the statement, inherently, people are not going to trust that. And that just got me thinking, okay, we've got some really warped things going on. That's kind of an extreme and, to me, a bad example, but it really speaks to, well, what is the appropriate use of AI?

Speaker 2:

I mean, generative AI is the new game in town, but just AI more generally: what is the appropriate use? What is the responsible use? I think there's been a lot of discussion about responsible use, but not as much about appropriate use. And for us as a marketing and communications firm, we were also trying to dig into, well, what is the responsibility of marketers and communicators on this front? We've definitely seen a lot of AI ethics discussion by technologists, but we haven't seen as much leadership or advocacy from business, marketing, or communications leaders outside of the technology leaders. So we decided to dig in and really understand: what are the issues, what are people saying about disclosure and transparency, and what norms are starting to form there?

Speaker 1:

Really interesting topic, and that was an unfortunate anecdote. But every day there are more anecdotes, so I guess at some point it's no longer anecdotal. The Fantastic Four movie poster, probably generated by AI, fans are a little bit upset with that. The Google Super Bowl ad had to...

Speaker 2:

Yeah, where, you know, it was a dad whose daughter was really a big fan of, I forget which track and field athlete, but the daughter wanted to write a letter to the athlete, and the dad in the commercial was talking about Gemini helping write the letter.

Speaker 2:

And to me that is another, just higher-profile example of, I think, inappropriate use. If you're trying to improve the writing, that's one thing, but I would imagine the athlete would prefer to hear from the 11-year-old or 8-year-old or whatever in her own words, as opposed to hearing what Gemini kind of wrote for that person. And I can just imagine that there were people at Google who thought, hey, this is a great use case to show, and I'm like, no, that's not really the most appropriate use of generative AI. If you're trying to improve writing, I know there are a lot of appropriate uses. But they were trying to tap into the emotional heartstrings and tie in with the Olympics, which I get. To me it's an example where, if we're just looking at this from a technology lens, we're missing a lot of the human dimensions that need to be factored in more closely.

Speaker 1:

Yeah, so well said. So your solution is to openly disclose the use of AI, kind of close this trust gap. How would that work exactly? And, you know, do you think that's the ultimate solution?

Speaker 2:

Well, so when we started this, I was of two minds, and I've heard various people lay it out this way. Let's just take the use of AI in a writing context, whether it's Grammarly, whether it's ChatGPT, whatever tools people might use. Do you need to disclose that you're using it? Part of what we saw, and part of what I've believed, is that on some level, we don't disclose that we used Google Search to help us research things to inform a blog post or a LinkedIn post or whatever else we might be writing. We also don't disclose that we used Microsoft Word's spellcheck in the process. So why would we disclose that we used ChatGPT for some of the same kinds of things? I think there's maybe a different expectation that people have about AI, because it's new and people are trying to figure it out, that maybe you need to disclose it more. On the flip side, I think there are legitimate reasons why people would want disclosure, because of what I call the chain of custody of the ideas or the information. There are concerns with AI about plagiarism, about misinformation and other issues, where I think people would want to know, hey, did you use AI to help you write this?

Speaker 2:

So I saw a tension there, and realistic, I guess legitimate, perspectives on both sides. So we dug into it to find out, well, what are people defining as policies? There are a lot from individual organizations, and a lot that have been published by associations that represent various job functions or industries, and it's kind of all over the map. What seems to be common across all of them is an assumption that transparency breeds trust. And that's where we started to find a problem, because a number of studies have been published about whether people trust AI, and, you know, I'm rounding to 80%, but roughly 80% of people don't trust AI, across a number of surveys.

Speaker 2:

Sometimes it might be 82%, 74%, whatever. But if you're saying, hey, we used generative AI in the process of this in order to breed trust, it doesn't actually breed trust. It breeds mistrust at the outset. So I don't think it's just a matter of, hey, disclose that you used AI, either in this writing or in this product. I think you have to get deeper than the simple act of disclosure and get into a little bit of the how it's being used, but also the why it's being used and the who is using it. This is where one of the key findings from the study we did comes in: most of the current guidelines focused on the what and the how of AI disclosure, as in, hey, we used ChatGPT for this, or we used Perplexity for this, or, on the how side, a watermark on an image, or some other things. To me, those are okay coming from technology companies; yeah, they would focus on the what and how. But I think where the disclosure matters more is the why, which gets to...

Speaker 2:

Was this the appropriate use of AI? Like, yeah, I can understand why you'd use that from an editing standpoint, to improve your writing. But also the responsible use: was this a responsible use of AI? Which ultimately gets to the who. So who is using it, and was the AI's output filtered by that person? You know, if Evan used this to help him with his writing, and you're saying, hey, I used this for editing help, I think people go, okay, cool, that's fine. If I used this to generate my own opinion about something, I think people would look at that and say, wait, no, I want to know what you think. I don't

Speaker 2:

want to know what ChatGPT thinks. So I think getting much clearer on the who and the why really speaks to the heart of where disclosure can be helpful, as opposed to distracting or harmful.

Speaker 1:

Well, really interesting take. So, on a positive note, do you have any positive anecdotes of companies disclosing their use of AI, or best practices, that sort of thing?

Speaker 2:

It's a little bit early to tell. The other thing that we found is that, well, there are two dimensions to most of the guidelines. One is they're what we would call first-wave efforts, so they're drawing primarily from what an organization has already published as their ethics, brand, values, or professional guidelines. It's like, oh, now we're dealing with this new thing; how do we adapt how we've always thought about our values or ethics to it?

Speaker 2:

At that point, there's very little learning about generative AI in practice. So I feel like there's a second or third wave that's going to have to happen: okay, now that we've actually used generative AI, had some back and forth, and learned from that, how will those disclosure guidelines evolve? There are some good examples out there, and we've included some of them in our report, but there's definitely a second or third wave needed to bring them closer to practical experience. The other thing we found is that there's very little empirical research on this. Most of it is based on, oh, here's our belief system or our opinions, and so here's what our AI guidelines will be. But there hasn't been a lot to really know what actually resonates with customers, or, if you're in the political realm, what actually resonates with citizens. Again, there hasn't been practical experience with it. So we need more practical experience, but I think there also need to be more empirical studies.

Speaker 2:

As an example, for some of the Super Bowl ads that have been created or generated partially with AI, does that bother people? Does that matter to people? Are they like, hey, it's in the creative process, so it's totally fine; if it's a good ad, it's a good ad, if it's funny, it's funny, I don't care what's behind it? Or do people mistrust it just because AI was involved? And obviously the companies doing Super Bowl ads haven't published the studies, focus groups, or surveys they may have done to test that.

Speaker 2:

But I think we need to get to a place where more of that is shared so we can really learn from it, because I don't think it's black and white. It's not, oh, either we trust it or we don't. Again, it goes back to: is this appropriate use? Is this responsible use? If it's in the context of creative idea generation, hey, that's fine. Same if it's, hey, I'm trying to research a topic and get to some insights fast. But if it's, hey, I'm basing material decisions on this data, which may be flawed or distorted or biased, okay, there should be a higher bar for that.

Speaker 1:

Yeah, well said. So let's take this down a notch to our personal points of view a bit. I've seen gen AI as a boon; it's 10x'd my productivity and output. Just an anecdote here: this recording, this video, will be edited by AI, turned into a blog automagically, and into social posts, and it's pretty phenomenal. I feel like I have a team of ten versus two, and I'm sure you're doing things on your end internally to scale and leverage AI's benefits. But what does that mean for our customers and beyond? You've heard stories of clients being quite unhappy to find out their deliverables were done by AI, and there's a lot of shenanigans happening too. How do we take a more ethical approach as content creators, marketers, agencies, et cetera?

Speaker 2:

You know, so many questions, so many elements there. But I think the core of it is how you're using it, and how a lot of people are using it, I would say, is for automation and productivity. What's interesting there is that you've probably had to invest a lot of time to really learn how to use the tools, in the back and forth, and you learn through prompts and other things how to use them better. I think where the real automation and productivity benefits are going to come, not just for gen AI but other AI, is when it gets built more into the workflow tools and applications that we use for other things. So instead of, oh, I'm going to ChatGPT to do this, the Grammarly use case to me is maybe a more productive one going forward, because we're using it in the context of a very specific workflow. We have a client whose AI basically helps companies with RFP management, RFI management, and all kinds of other strategic responses, and AI is a big part of what they're building into their software. There are huge automation and productivity benefits there.

Speaker 2:

On one level, hey, we've got these 50 different documents where the answer might exist; how do we collate and sift through them to surface the right answer? Or, hey, we've got a 10-page white paper that describes how something works, but we need a two-paragraph description of it in an RFP response. So can you take the 10-page version and turn it into a two-paragraph summary? That, to me, is a place where generative AI is actually very productive and good (a sketch of this workflow follows below). But that's different from the point of view a company has about responsible use of AI. That goes to the belief systems, the values that a company has, and that's where you need to do the work. You brought up the example of clients, and I think this is where it goes into really being thoughtful about, well, what is the relationship, and what are

Speaker 2:

the expectations in that relationship. For us, as a consulting firm working with clients, what do our clients expect? I would say, hey, if we're using AI in our work process, they would expect to know that it's there, and they would have questions we'd have to align on: hey, is that responsible use? Is that appropriate use?
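As an illustration, here is a minimal sketch of the kind of summarization workflow Tim describes, condensing a long document into a two-paragraph RFP answer. It assumes the OpenAI Python SDK; the model name, prompt wording, and file path are illustrative placeholders, not the client's actual product.

```python
# Hypothetical sketch of the RFP use case described above: condensing a
# ~10-page white paper into a two-paragraph summary with a general-purpose
# LLM API. Model, prompt, and file path are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_for_rfp(white_paper_text: str) -> str:
    """Turn a long product description into a two-paragraph RFP answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": "You condense technical marketing documents for "
                           "RFP responses. Answer in exactly two paragraphs "
                           "of plain prose.",
            },
            {"role": "user", "content": white_paper_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("white_paper.txt") as f:  # placeholder input document
        print(summarize_for_rfp(f.read()))
```

In keeping with the disclosure discussion earlier in the conversation, output like this would typically be reviewed and edited by a person before it goes into a response.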

Speaker 2:

I think between a company and its customers it's a little bit harder, because if you're a B2B company, let's say you've got 100, 500, 1,000 customers, and if you're a consumer-facing business, you have millions of customers, so the expectations are going to vary quite a bit. What we've recommended in the report, whether it's about disclosure, transparency, or other things related to AI ethics, is ultimately to be transparent where it makes sense, and in a sensible way, but also to build a feedback loop into that so you're having this back and forth, and to be adaptable, not knee-jerk responsive, but adaptable to that feedback. And this is also going to be a moving target, because the understanding and beliefs based on people's experience with AI today are going to be different six months from now, 12 months from now, or six years from now. So how do you evolve as that understanding grows?

Speaker 1:

Yeah, really interesting points. And on the business side there are different drivers as well. The C-suite wants to do more with less in this year of efficiency we're in. What are you hearing from them on some of the challenges and opportunities? What are they looking for as they budget this year, for example?

Speaker 2:

We're definitely seeing that, yeah, the doing-more-with-less intersecting with, oh, AI is an opportunity to do more with less on the marketing front. I feel like there's a danger there.

Speaker 2:

Well, the do-more-with-less thing is dangerous anyway, because you can't cut your way to growth, and every company we work with is trying to grow. If you're getting the impulse from a CFO or somebody else to do more with less, the reality is you can't. If your company grew 10 percent this year and you're trying to grow 20 percent next year, and you invest the same amount or less, how are you going to drive a bigger result with less investment? I don't think AI necessarily changes that equation; those are just the business dynamics that exist there. But to speak more specifically to AI and the productivity and automation benefits, I do think those are very real, and you've seen as much as I have the discussion within each industry and job function about what kind of impact it will have.

Speaker 2:

I don't think a lot of what, you know, workers like us do can be automated.

Speaker 2:

I mean, you can automate some things, but you can't automate other things, and I do think over time it's going to place a premium on certain skills.

Speaker 2:

There will be a premium on original thinking, a premium on ideation, a premium on, just as an example for you, the things you've learned over the course of 20 to 30 years that play into the questions you ask, or the knowledge base you have. I think those are going to be more valuable in the future, not less. But there are easy things where it's, hey, yeah, automate that, let's use the tools to help there. So I think the real power comes from the combination of the people and the technology together, and by job function, by industry, that's all going to be a bit of a moving target. But I think the companies and individuals that will be most successful are the ones who look at the intersection of the human and the technology. Human ingenuity, how we use the tools, is as important as the tools themselves.

Speaker 1:

Yeah, I got it. So, just as we wrap up here, what are you excited about as we, you know, head into 2025? Any travel or events or otherwise that you're looking forward to?

Speaker 2:

Yeah, well, the travel part maybe has me a bit distracted; I'm always keen to go to some fun new places. I've got a senior in high school, so we've got a college trip coming up in a month, that's the one in front of me, but hopefully we'll find a sunny climate to go to a little later in the year. On the work front, I don't know.

Speaker 2:

To me, this discussion and just what we see with AI reinforces how exciting the technology industry always is. It goes in waves: hey, mobile was the new thing, and that led to a lot of excitement and innovation, and then it became, okay, that's baked into the cake. Then social media emerged, with a whole wave of innovation and everything else, and now AI is the newest shiny object. I think this one's obviously going to be here to stay and have impact in a lot of different ways, but to me it's what makes our job exciting. There's all kinds of change, and the implications of that change, the ramifications for business and people and workflow and everything else, are always evolving. We tend to be much more on the B2B or enterprise side of things, so the consumer stuff gets a lot of headlines, but I think the impact on businesses is actually even deeper. So we're excited about that, because it creates change and opportunity for everybody.

Speaker 2:

And actually, one thing for me is the DeepSeek news over the past few weeks.

Speaker 2:

What's interesting there to me is just how much it's changing the mindset about how efficiently you can build AI models and everything else. Hey, if you can do it for 6 million bucks instead of 100 million bucks, putting aside the geopolitical dimensions, I think that's good for innovation overall. It means more and more companies will be able to adopt AI, and more and more competition, and you've seen it too over the course of time: competition is good, innovation is good. I do think we also need more discussion, and more business and marketing people weighing in, about how to do this in a way where it's not just technology for technology's sake, where we also look at the ethical dimensions, the human dimensions, of all this innovation. That's happening in pockets, but it definitely needs to happen more, and we'll continue to advocate for that.

Speaker 1:

Well said, and here's to that. What an exciting time to be in tech and in marketing. Thanks so much, Tim. Really enjoyed the conversation. Absolutely, thanks for the time. Yeah, thank you, and thanks everyone for listening and watching and sharing. Take care.