
Ops Cast
Ops Cast, by MarketingOps.com, is a podcast for Marketing Operations Pros by Marketing Ops Pros. Hosted by Michael Hartmann, Mike Rizzo & Naomi Liu
How To Find Use Cases to Use AI to Automate Ops Tasks with Tarun Arora
Feeling swamped by marketing operations busy work?
AI-powered automation can help you reclaim your time — but knowing what’s real versus hype isn’t easy.
In this episode, Tarun Arora, a marketing tech veteran and founder of RevCrew, explains how AI goes beyond traditional rule-based automation by handling tasks that require human judgment. He shares how "agentic AI" systems act like virtual team members — making decisions, managing your tools, and only checking in when needed.
You'll hear real-world examples, from inbox management and campaign optimization to audience selection, showing how AI can eliminate busy work and free you up for more strategic projects.
Tarun also offers practical advice on where to start: focus on your biggest needs first, test real use cases, and remember — this is just mile one.
Episode Brought to You By MO Pros
The #1 Community for Marketing Operations Professionals
Hello everyone, welcome to another episode of OpsCast, brought to you by MarketingOps.com, powered by the MO Pros out there. I am your host, Michael Hartmann, flying solo today. Joining me today to discuss how to find ways to leverage AI and automation for ops tasks is Tarun Arora. Tarun has over two decades of leadership and management experience in marketing technology, operations and analytics at high-growth technology companies like Workiva, Rimini Street, New Relic and Cisco Systems. He has worked as a practitioner, both in-house and as a consultant. He recently launched RevCrew, a company focused on AI solutions for marketing operations to eliminate the busy work and increase productivity and growth. In addition to that, Tarun regularly coaches marketing and operations professionals and helps them with their careers. So, Tarun, thanks for joining me today. Thanks for having me, Michael.
Speaker 1:Yeah, it's good to have another coach, and this is going to be a fun topic, because where I've gotten to in my journey of trying to integrate AI into my life is I really want it to help me with the mundane stuff. I know we'll probably talk about more than that, but I want this to be something where people walk away with practical stuff they can take away. But before we get into that part of our conversation, it's always interesting to hear people's career stories, how they ended up in marketing operations. I personally like to hear about pivotal moments or key people who maybe had an outsized impact on your career trajectory. So maybe you can walk through your career in a little more detail and then we'll kick off into the rest of the conversation after that.
Speaker 2:Yeah, sure, Michael. So you know, honestly, I never knew that I was going to end up in marketing operations. I've been in the marketing and go-to-market space for quite some time, I think over 20 years now. I started my career in engineering, and back in the day there were no SaaS applications, so we were actually building applications for sending out emails, databases for hosting contacts and accounts, customer data platforms and all that kind of stuff, right. So I did that for many years. And then I switched to product management.
Speaker 2:I just wanted to understand the marketing business more. I was always fascinated by marketing but, being on the engineering side, I never saw the business side of it. So I wanted to understand that better, and I got an opportunity to move to the product management side. I did that, and it was fascinating for a few years, and that was the first time I got exposed to SiriusDecisions waterfalls and all that. I was doing it on the engineering side, but I didn't know these things were called the demand waterfall and demand unit waterfall and all those cool things, right. So I did product management for a few years, and then finally I moved to the business side, into operations. For the last few years I've been leading teams on the operations side, dealing with marketing technology, marketing data, analytics and campaign operations, everything that comes under the marketing operations umbrella. Terrific.
Speaker 2:So I think you asked about the pivotal moments. Yeah, so one of the pivotal moments was definitely when I moved from the engineering side to product management. That opened the world of marketing to me, because for the first time I was directly talking to the business people and trying to understand what marketing is, how you actually translate that into systems and data and analytics, and how to use that to optimize marketing. So that was definitely one big pivotal moment in my career.
Speaker 2:I think the second one has been more recent, when I first actually used ChatGPT. I was taken aback. I was like, oh my God, this is revolutionary, right? I mean, this can change everything. Since then I have been fascinated by this, and quite recently I launched my own company. It's called RevCrew, and what I'm trying to do is eliminate the busy work that we have in marketing operations and automate this stuff so we can actually focus on things that matter.
Speaker 1:Yeah, I think I've told this many times, but I was slow to adopt AI, in particular things like ChatGPT, but I've started really going to it as a first place in many, many cases. But actually this brings up an interesting thing for me. We hear these terms, right: AI, automation, LLM, generative AI, all these things. It feels like sometimes they're used interchangeably, or at least interpreted as being the same or close to the same, but they probably are not. What's your take on what is AI versus automation and some of these other things? How do you see them as either the same or different or related? What's your working definition?
Speaker 2:I think there are just so many terms out there right now, it's overwhelming. There's so much noise, it's hard to discern the signal from the noise. But to me, automation we've been doing for a long time, right. Automation is when you take a process that you've been doing manually, a well-defined process, and you systemize it. Whatever approach you use, rule-based systems, deterministic systems, RPA, you automate the process so that it can be done repeatedly in a consistent manner without the need for a human, for the most part. Now, AI is again a big umbrella, and most people think of it as only GenAI, but AI has been around for quite some time.
Speaker 2:I mean, GenAI is more recent, right, with the LLMs. But AI includes things like machine learning, which we've been doing for quite some years, the regression models and classification models, clustering. It also includes deep learning, the kind of thing used in self-driving cars and all that kind of stuff. But GenAI is definitely the newest kid on the block, and it's exciting. So to me, we can use AI as part of automation to let the process reason by itself and take independent decisions. You can use LLMs as part of your automation so that they can understand language better and make decisions whenever needed, and you can even automate the processes which we were not able to automate before, which needed some human judgment, some reasoning and thinking before you take the next action.
Speaker 2:But with LLMs, and especially with the reasoning models, we are able to use them as part of automation, where they can be more self-reliant and do things on their own. I think the step up on that is agentic AI, where you're not just using LLMs, you're giving your agents tools to complete a task. They can make decisions on their own, they can reason about things, and they have a set of tools to take actions with, so they can not just tell you what the next step is, they can actually go do it, and then do the next step after that, and the next step after that, and be independent.
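To make that concrete, here is a minimal sketch of the kind of reason-act loop Tarun is describing, in plain Python. Everything here is a hypothetical placeholder: `scripted_llm` stands in for a real LLM call, and the two "tools" just return canned strings; this is an illustration of the pattern, not any specific product's API.

```python
# A minimal agentic loop: the model picks a tool, the tool runs, the
# result is fed back, and the loop repeats until the model says DONE.

def scripted_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; replies with a scripted decision."""
    if "enrichment data" not in prompt:
        return "enrich_lead|jane@example.com"   # first step: research the lead
    return "DONE|Lead researched and summarized."

def enrich_lead(email: str) -> str:
    """Hypothetical tool: look up a contact in an enrichment service."""
    return f"(enrichment data for {email})"

def update_crm(note: str) -> str:
    """Hypothetical tool: write a note back to the CRM."""
    return "ok"

TOOLS = {"enrich_lead": enrich_lead, "update_crm": update_crm}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Reason, pick a tool, act, feed the observation back, until done."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = scripted_llm(
            "\n".join(history) + "\nReply 'tool|argument' or 'DONE|answer'."
        )
        name, _, arg = decision.partition("|")
        if name == "DONE":
            return arg
        result = TOOLS[name](arg)                     # act with the chosen tool
        history.append(f"{name}({arg}) -> {result}")  # observation for next step
    return "stopped: step limit reached"

print(run_agent("Research the new lead jane@example.com"))
```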
Speaker 1:Yeah, so maybe this is an example I can think of. At one point in my career I built what I guess I would call a lightweight attribution model, basically using, in this case, Alteryx, sort of a lightweight ETL thing where I would pull data from multiple sources and combine it together, mix and match. But every time an unexpected value showed up in the source data, I had to go tweak the model, if you will. And that's what I think of: it was automated until there was something unexpected, in which case I had to intervene. What I think you're describing is you could have that same kind of automation, but rather than it stopping and me intervening, it might be able to reason its way to, oh, this new value is like these other ones, I'm going to categorize it the same way and keep going. Is it something like that?
Speaker 2:Yes, for sure. I think it can take care of outliers and things. What you're describing, we've been doing all our lives, right, going in to fix something where the automation just stalls. I think we now have the capability where it can actually take care of these kinds of outliers, the things that stall a normal automation process.
Speaker 1:Yeah, and just from my own experience, the biggest use cases I've had with ChatGPT or Perplexity, and I still want to try Grok, involve sort of a complicated question, right? It's actually not just one question, so it's not a simple there-is-an-answer kind of thing. I want to give it some context, but more or less it's doing research on my behalf. I'm giving it some direction, but then it's doing research that otherwise I would spend probably hours, if not days, doing on my own. That's where I've really found the value. So I can imagine having something like that in the middle of, say, I'm an SDR, a lead has bubbled up, I'm supposed to follow up, and it could automatically do some research on the prospect.
Speaker 2:Absolutely, absolutely. I mean, if you think of the steps that an SDR would take when they get a lead, they would go out on LinkedIn and the internet and research that lead, try to understand the company, maybe look at their LinkedIn posts and try to understand what their pain points are. And after doing this research, they would pick up the phone and talk to them, because they have all this context. Now an agent can do all of this. When a lead comes in, it can go to different systems and research this lead for you, both at the account and the personal level, and give you a summary in your CRM, which is available right with the lead in real time. Then, when an SDR wants to make a call, they've got all that context with them.
Speaker 2:Now, with AI SDRs, the calling can also be automated, but that's a different discussion, because there are people who like that and a lot of people who are opposed to it. But to your point, yes, a lot of the research and the steps that somebody like an SDR would take to find the context before calling somebody can be done automatically.
Speaker 1:So that makes sense. So the short version, I think you already said it, right? AI is another tool in the automation toolkit.
Speaker 2:Absolutely, absolutely. I think it's just another tool in our toolkit, and we can decide if we want to use it or not.
Speaker 1:Right, right, and there are always times when it makes sense and probably times when it might be overkill or too expensive. You brought up agents too, and this is one I'm still trying to wrap my head around. You briefly touched on what an AI agent would be like. Can you go a little deeper on what it does? And I mean both from a professional and personal standpoint; I'm curious how one would use it.
Speaker 2:Yeah, so you can think of agents with maybe a good analogy. Let's say you have a new hire, right? What do we normally do? A new hire comes in, we train them on our business knowledge and processes, right?
Speaker 2:Yeah, and we give them a set of tools. We give them logins, a set of internal tools that we use, you know, our wikis, and a certain set of external tools that we use. And then, using the business knowledge and the set of tools, they can do the job that they were hired to do.
Speaker 2:Now, only in this case it's not a human, it's an AI agent that is doing it. So you have this software system, which is built out of LLMs and other systems, and once you train it on your business knowledge and processes and give it a certain set of tools, you may have to give it a login to your email, a login to your wiki, and access to external tools like ZoomInfo, your CRM, and stuff like that, this becomes like your new hire. It has the business context, it has a set of tools, and now it can independently execute. It can reason, it can take independent decisions, it can use the tools to get things done.
Speaker 2:Now, there might be cases, just like a new hire or even an existing employee, where they're doing something and they get stuck, or they need approvals, or they need somebody to look over their work. You can always have what's called a human in the loop, where the agent just messages a person saying, hey, this is where I am, do you approve this? Should I go ahead or not? And then it carries on with its job. So if you think of it this way, it's basically somebody who understands your business knowledge and the context of your business, has the set of tools, has the brain, the thinking and reasoning ability, to do the job independently. I think that's probably a good way to describe an agent.
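A small hedged sketch of that human-in-the-loop checkpoint: routine steps run straight through, risky steps pause and ask a person first. The `notify_approver` function below is a hypothetical stand-in for whatever channel you would actually use (Slack, email); here it just reads from the console.

```python
# Human-in-the-loop gate: the agent only checks in when a step is risky.

from typing import Callable

def notify_approver(summary: str) -> bool:
    """Placeholder: in practice, post to Slack and wait for a reply."""
    answer = input(f"Approve this step? {summary} [y/n] ")
    return answer.strip().lower() == "y"

def guarded_step(action: Callable[[], str], summary: str,
                 needs_approval: bool) -> str:
    if needs_approval and not notify_approver(summary):
        return "skipped: approver declined"   # agent reports back and moves on
    return action()

# Example: suppressing 2,000 contacts is risky enough to warrant a check-in.
result = guarded_step(lambda: "2,000 contacts suppressed",
                      "Suppress 2,000 bounced contacts in the CRM",
                      needs_approval=True)
print(result)
```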
Speaker 1:So one of the, maybe this is not fully an agent, but it feels close, I know at least a handful of tools that are AI-based that will help manage your inbox, right? It'll scan through, maybe recategorize things, kind of learn your typical tone of voice, and it will actually draft responses if a response is needed. I mean, is that kind of an agent too, something like that, or is it something a little bit different?
Speaker 2:I think, again, there's a lot of terminology out there and a lot of noise, right? And honestly, there's nothing right or wrong in calling that an agent versus not. But the industry definition that's emerging is that an agentic system is something that not only has the business knowledge and the context, and is trained on your business processes or your brand or your tone and voice, but also has access to a set of tools and is able to take independent decisions about using those tools.
Speaker 2:And it can decide what tool to use. Maybe it has 10 tools, and it can decide that, hey, based on the input, now I have to go update the CRM. Or, based on the input, a lead has come in, maybe I have to go out to the internet and research these points, the guidance that we've given it, and then I'm going to use a tool that logs into the CRM, and I'm going to write that context and research into the CRM. So it's using both your business knowledge and a set of tools to take independent decisions to complete a job.
Speaker 1:Okay, makes sense. And it sounds like that's what you're doing with your new venture. But you know, I mentioned I feel like I'm a semi-late adopter, I don't think I'm the last one, and I'm finding a lot of use for the LLMs particularly, but I feel like I'm just scratching the surface. So when you think about how to identify the kinds of things, or use cases, whatever terminology you want to use, where AI and automation could be applied, and I guess maybe keep it to the domain of marketing and marketing operations, how do you do that? What's your approach?
Speaker 2:Yeah, I think the first thing is, like any technology, we need to use it to solve a business need. AI, like any other technology, is a tool that we use to solve a business problem, so I think we need to start from there. The way I look at it is that there are hundreds and hundreds of use cases you can apply AI to. Even if you just talk about marketing or go-to-market or sales, there are thousands of things you could apply it to. Now, where do you start? If you take AI out of the equation, if you were trying to do something with technology, you would look at your most immediate problems, where technology can make an impact and you probably get the biggest bang for your buck. We should apply the same logic here, in spite of all the noise and the FOMO out there: what's my immediate business need that I'm trying to solve? In some companies that might be growth: hey, we are not growing as we want to be or need to be. So that's probably the place to start. You can think about growth use cases: lead generation, how can I increase my conversions, how can I do better lead scoring, how can I do better account scoring, how can I fine-tune and optimize my campaigns. You use AI for that.
Speaker 2:Now, let's say your problem right now is that you guys are drowning in busy work. There's so much busy work that productivity is low and the strategic projects are not getting done. That's a place where AI should be used for automation of that kind of busy work or those processes. It could be your data hygiene, it could be your list uploads, it could be your reporting processes, task management, even project management, things like that. And if you're trying to cut costs, like, hey, we don't have the budgets, you can do things like AI content creation, so you don't have to hire that many content writers. You can use it to...
Speaker 1:For our audience: he paused because I was shaking my head, kind of questioning that statement about content creation. And in general, it's interesting to me, because I think that was the biggest use case people talked about with ChatGPT, and all the content creators were worried about it. I actually think it's true that it can be used for content creation, but in certain cases I don't think it's great. At least, I haven't seen it truly do innovative, creative new content from scratch. It's really good at repurposing content.
Speaker 2:Yeah, I totally agree with you. I've been trying to post more regularly on LinkedIn lately, and one thing I've found is that, while most people say it's great for creating the first draft, I've actually found it's better if you give it a draft and ask it to work from that. I haven't found it great at creating a first draft, because that first draft sounds too machine-like, too cheesy, a little bit cringy. So what I do is give it a small draft, then ask it to improve that, and iterate on it. But I think you're right.
Speaker 1:What's interesting, because I've had good experience with drafting things like emails, in some cases posts, or helping other people with things that are unrelated to what we do. But what I've done in some cases is start with a little bit of a draft, or at least, here's what I'm thinking, like bullet points, but I tend to give it a huge amount of context. And this is what I've learned: it's significantly different than what I would do with a typical Google search or something like that. I give it all this context, and then I refine it if it doesn't quite get what I think I want out of it.
Speaker 2:Yes, totally right. But I think what I meant from a cutting-cost perspective is it can at least take you 50% there, whether you're using it for your first draft or for generating emails, or you're iterating on it. It's not at a point where it can just be your independent content writer, honestly, but it can take you some of the way there and help you go faster.
Speaker 1:Yeah, yeah. So here's an interesting one that I've been thinking about as you talked. You've probably gone through this too.
Speaker 1:We're often asked how to prioritize all the possible things we could do in any given day or week within operations, and my reaction to those kinds of requests is always that stack ranking is going to fail, and if you do some sort of low-medium-high, it's also going to fail, because it's missing the nuance of what's the benefit versus what's the level of effort or cost. There's a trade-off, so there are at least two dimensions to it, broadly cost and benefit. It feels like maybe there's a third dimension when you think about the things that could be automated, whether or not they leverage AI, which is: how repeatable are they? And maybe another component, which I think you got at, is how time-intensive they are, especially if the time-intensive part of the current process is also relatively low value.
Speaker 1:Maybe it's important, but there's not a lot of value in it. Privacy compliance is a big one to me: monitoring an inbox. If you're sending out emails and you're getting actual replies, you need to be watching for people who say "unsubscribe me." There's a compliance component there. And it's a huge time sink if you're sending any kind of volume out, because you're getting automatic replies, out-of-office and so on. You have to sift through all of that, and it's a manual process at most places, right?
Speaker 1:There are a few tools out there for that, and I've tried some, but it feels like the kind of thing that's important without a huge amount of value in it: there's a risk associated with it, it's time-consuming, but it's repeatable, especially if you could give it some broad rules that an AI agent could interpret.
Speaker 2:Yeah, you're totally right, and this is actually one of the automations we're looking at with my new venture. This is one of the busy jobs that people in marketing operations do, and there is compliance risk to it, because a lot of times, depending on the volume of emails you send out and the replies you get back, I have seen that people just stop managing these inboxes. Who wants to look at 500 emails a day, or a week, right? And once you stop doing that, of course you're facing the compliance issues if you're not opting people out, but you're also losing opportunities to enrich your database from the alternate contacts in out-of-office replies, and you're missing high-intent leads that have actually asked you a product question. Sometimes you even see that somebody has asked for a demo, somebody said, can I talk to sales or can I have a demo, and it's buried in your inbox and nobody's even looking at it.
Speaker 2:So I think there is value, Michael, even in this kind of work. There are high-intent leads buried in your inboxes. There are compliance issues. There is a dollar value to your database enrichment, because you're not letting your database go stale, and you don't have to buy new contacts if you can just scrape contacts out of these returned emails. So there's definitely value in this from a dollar perspective as well. But to your general point, you're totally right that stack ranking might not work; the prioritization has to happen based on, hey, what's the impact of doing something and what's the level of effort in doing it. And of course there are some quick fixes, where you put in a little effort or a little money and you can solve the problem pretty easily.
Speaker 2:Those are definitely at the top, where you say, okay, I can solve these very easily, let me do that. And then I'm going to look at the other cases, the ones which are important.
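Tying the inbox example to the rules-versus-judgment split discussed earlier, a hedged sketch of reply triage might look like the following. The hint lists and the `llm_classify` wrapper are hypothetical illustrations, not a specific product's logic: deterministic rules catch the compliance-critical cases cheaply, and the LLM handles the judgment calls.

```python
# Reply-inbox triage: rules first for compliance, LLM for the rest.

UNSUB_HINTS = ("unsubscribe", "remove me", "opt out")
OOO_HINTS = ("out of office", "auto-reply", "automatic reply")

def llm_classify(text: str) -> str:
    """Placeholder: ask an LLM for one of 'high_intent' or 'other'."""
    return "high_intent" if "demo" in text else "other"

def triage_reply(body: str) -> str:
    text = body.lower()
    if any(h in text for h in UNSUB_HINTS):
        return "unsubscribe"      # must be honored: compliance risk
    if any(h in text for h in OOO_HINTS):
        return "out_of_office"    # mine for alternate contacts
    return llm_classify(text)     # e.g. a demo request buried in a reply

print(triage_reply("Hi, could I get a demo for my team?"))  # high_intent
```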
Speaker 1:Yeah, when I've done that before, I always end up with a quadrant, right? Relatively low effort, relatively high return: those are no-brainers, you've just got to go do them. Then you've got the other end of it, high effort, relatively low return: those sit in your queue forever, unless you absolutely have nothing else to do, which never happens, or unless someone with a title, or some sort of regulatory compliance requirement, says you have to do it. There are exceptions.
Speaker 1:But the other two quadrants are the hard ones to figure out, right? And if you add in this element of repeatability, like if you're trying to replace an existing process, there's also the question of how repeatable it is, and how much of our team's time it consumes that could otherwise be used for things of higher value. That's the framework I have in my head. So maybe one more thing. How do you go about it? Say you identify one, two, three, half a dozen things that you could automate in some way. How do you decide when, or if, you should apply, I hate to even call it traditional, but traditional rules-based automation versus something that's AI-enabled along the way?
Speaker 2:Yeah. So I think a very clear distinction is whether you're looking for a deterministic output, something that can be solved by if-then-else, or rules, right?
Speaker 2:You don't need an LLM in there, and most of the automations that we've built are like that: if this happens, then do this; if you get this kind of email, do that. Or take lead routing: if you get a lead from this region for this product, route it to this person. That's simple rule-based automation. Now, more AI-based automations would be, let's say, list enrichment, things like classifying your job titles into personas. All of us in marketing ops have built for this, I think it's one of our favorite use cases, it's been debated forever, and we've built rule-based things for it: wildcard searches, keyword-based rules, if job title is this, job level is this.
Speaker 2:Oh yeah, normalizing job titles, fun stuff, creating personas out of that, right. But an LLM can do it very easily. Of course, you can give it certain guardrails based on your business, but it is able to do a much better job of converting these job titles into personas. So far we have automated this process, but we've always found outliers, right?
Speaker 2:We've always had to go back and keep adding stuff or changing stuff in there. But now, with the LLMs, it can do a much better job of making that kind of determination, because this cannot be 100% rule-based. Something like that is a great use case for automating with an LLM, I think.
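A small contrast sketch of the distinction Tarun draws here. The routing table, addresses, persona labels, and `call_llm` wrapper are all hypothetical illustrations: deterministic lead routing stays rule-based, while messy title-to-persona mapping goes to an LLM with a fixed label set as a guardrail.

```python
# Rule-based where the output is deterministic; LLM where judgment helps.

ROUTING = {
    ("emea", "product-a"): "alice@example.com",
    ("amer", "product-a"): "bob@example.com",
}

def route_lead(region: str, product: str) -> str:
    # If/then logic with a deterministic output: no LLM needed here.
    return ROUTING.get((region.lower(), product.lower()), "queue@example.com")

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call with business guardrails."""
    return "marketing_ops"

def persona_for_title(job_title: str) -> str:
    # Free-text titles ('Sr. Mgr, Mktg Ops EMEA') leak past wildcard and
    # keyword rules; an LLM generalizes better within a fixed label set.
    return call_llm("Map this title to one of "
                    "[marketing_ops, demand_gen, leadership, other]: "
                    + job_title)

print(route_lead("EMEA", "product-a"))            # alice@example.com
print(persona_for_title("Sr. Mgr, Mktg Ops EMEA"))  # marketing_ops
```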
Speaker 1:I'm going to throw a curveball at you here. On that one in particular, it feels like a combination of having an LLM that could handle the unexpected scenarios, the outliers, like you said. Could it also generate some sort of confidence level on that as well? Because I think that combination would be really valuable.
Speaker 2:Actually, it's funny, I was talking to somebody just yesterday about generating a confidence score. It definitely can. And it's based on your business context: what do you want these personas to be? For one company, a certain title might be a certain persona; for somebody else, it could be a different persona. So you can definitely generate a confidence score on that. And then, based on that, if the confidence is low, you can have the system not do it automatically and get a human in the loop, versus if the confidence is high, it just does its thing.
Speaker 1:Yeah, or you could just publish the confidence level and the normalized value, and then if someone wants to use it, they can make a choice.
Speaker 2:For sure yes.
Speaker 1:Yeah.
Speaker 2:For sure.
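A sketch of the pattern the two of them just described, combining both options: apply automatically above a threshold, and always publish the score so downstream users can make their own call. The stub classifier and the 0.8 threshold are hypothetical illustrative choices.

```python
# Confidence-gated classification: auto-apply when confident, flag
# for human review when not, and publish the score either way.

def classify_with_confidence(title: str) -> tuple[str, float]:
    """Placeholder: in practice, ask the LLM for a label plus a 0-1 score."""
    return ("demand_gen", 0.62)

def apply_persona(title: str, threshold: float = 0.8) -> dict:
    persona, confidence = classify_with_confidence(title)
    return {
        "persona": persona,
        "confidence": confidence,                 # always published, so
        "needs_review": confidence < threshold,   # consumers can decide
    }

print(apply_persona("Sr. Mgr, Mktg Ops EMEA"))
# {'persona': 'demand_gen', 'confidence': 0.62, 'needs_review': True}
```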
Speaker 1:Okay, that's interesting. That deterministic part is a good one. All right, we talked about monitoring an inbox. What are some other common challenges you've heard from others in marketing or revenue ops that seem ripe for AI and automation?
Speaker 2:Yeah, I think one of the biggest challenges I hear, and again it's from an operations perspective, there are a number of things you can solve across marketing, but more from an operations perspective, is the challenge around planning a campaign, selecting audiences, and campaign creation and execution as well. This is such a drawn-out process that I think it's ripe for some kind of intelligent automation. Michael, being in ops, you've probably seen how drawn out it can be to select an audience and how many back-and-forths are needed: hey, if I select all the VPs of marketing in this region, what is the count? And then if I add this filter, or if I remove this filter, what is the count? And it goes on and on, right? Same for campaign creation, where you continuously tweak the copy and things like that. I think this entire process can have intelligent automation to do it better and be more self-service for the campaign managers and the demand gen people as well; there's a sketch of that count loop after this thought.
Speaker 2:One of the other things I always hear is that the campaign managers or the demand gen people, once a campaign has gone out, don't get real-time insights about how the campaign is doing unless they actually ask somebody. A lot of times they're Slacking us ops people: hey, how many registrations do I have? Or they have some kind of dashboard, or if somebody is more Salesforce-savvy, they can go into Salesforce and look at the campaign. But it's again something they have to keep chasing. Now, if something is intelligent enough to tell them, hey, your campaign is performing well or not performing well, and maybe this is the way to optimize it, that's where intelligence in this process can come in. Say you're doing a webinar: it's been two weeks since you launched it, and based on the registrations you've got so far, you're not going to meet your numbers, so maybe you want to send out another email, maybe you want to target another set of people, things like that.
Speaker 2:There are, I think, a number of cases around personalization, data analysis and insights, and all that kind of stuff that can be done. But this is something closer to operations, where day in and day out we are into campaign planning, audience building, campaign creation and execution, providing insights about the campaigns, and then answering questions about, hey, what's working, what's not working, is this offer resonating or not, is this copy resonating or not? There's a lot of scope here for automation, and not just automation, but intelligent automation.
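To illustrate the audience back-and-forth Tarun mentions, here is a hedged sketch of a self-service count helper: each filter tweak returns a count instantly instead of a Slack round trip to ops. The in-memory contact list is made-up sample data standing in for a real CRM query.

```python
# Self-service audience counts: tweak a filter, get the count back.

CONTACTS = [
    {"title": "VP Marketing", "region": "EMEA"},
    {"title": "VP Marketing", "region": "AMER"},
    {"title": "Director Demand Gen", "region": "EMEA"},
]

def audience_count(**filters: str) -> int:
    """Count contacts matching every filter, e.g. region='EMEA'."""
    return sum(
        all(c.get(k) == v for k, v in filters.items()) for c in CONTACTS
    )

print(audience_count(title="VP Marketing"))                 # 2
print(audience_count(title="VP Marketing", region="EMEA"))  # 1
```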
Speaker 1:I love the idea of, call it, early signals from a tactic, because, and maybe you and I talked about this before, I think of one specific example where a person was running a webinar, everything launched, and really nobody was asking for reporting or anything until about two weeks before the webinar. Then somebody asked, and it turned out there were zero registrants in the webinar platform. Now, the good news was people had registered, and it was captured in, in that case I think it was Eloqua or Marketo, one of the two. The data just hadn't flowed over to the webinar platform.
Speaker 1:The downside was that all the very personalized stuff that would come out of the webinar platform, with the specific links and all that, hadn't been going out. So it would have been nice to have had something there.
Speaker 1:Just, hey, I know that as part of launching a webinar, one of the things that's going to happen is there's going to be, whether it's an agent or some sort of automated thing, something that starts giving me insights into how it's performing.
Speaker 1:Right, all the promotion, maybe as compared to expectations or goals, or as compared to similar kinds of tactics to that kind of audience. I think that would have been really valuable. It would have been an early signal: oh, there's a problem. In this case it was more of an operational, system-level problem, not a performance-of-the-campaign problem per se, but any of those kinds of things would have been helpful. And I see a lot of teams that are moving so fast: they get one thing launched, they go to the next one, and they're not really taking advantage of that data. Then you take it a step further: I'm going to launch a campaign, these are its characteristics, and it goes, here's how I suggest you do your segmentation, and here's what you should expect, actually helping you generate the expectations for the new one.
Speaker 2:Yeah, absolutely. And there's so much scope in helping you plan campaigns based on their historical performance, where it can even suggest, hey, based on your goals and your historical performance, this is what you need, maybe these are the numbers you need to target, and these are your historical conversion rates and so on.
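As a rough illustration of the early-signal idea from the webinar story, a pacing check might look like the sketch below. The straight-line expectation and the 20% tolerance are arbitrary illustrative assumptions, not a recommended model.

```python
# Early-signal pacing check: compare registrations so far against a
# naive straight-line path to the goal, and alert if badly behind.

from datetime import date

def pacing_alert(goal: int, current: int, launch: date, event: date,
                 today: date) -> str | None:
    elapsed, total = (today - launch).days, (event - launch).days
    if elapsed <= 0 or total <= 0:
        return None                       # too early (or bad dates) to judge
    expected = goal * elapsed / total     # straight-line expectation
    if current < 0.8 * expected:          # 20% tolerance before alerting
        return (f"Behind pace: {current} registrations vs ~{expected:.0f} "
                "expected. Consider another send or a new segment.")
    return None

# Two weeks into a six-week promotion, 40 of a 200-registration goal:
print(pacing_alert(200, 40, date(2025, 5, 1), date(2025, 6, 12),
                   today=date(2025, 5, 15)))
```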
Speaker 1:Yeah, interesting. I'm super excited about all these potential things, because it feels like the stuff we've talked about has, at the end of the day, required human capital and time that was limited, and now you've got something that could potentially replace some of that. Not all of it, because I still think there's a need for human intervention, insight, whatever you want to call it; it's not totally there yet, it feels like. Although I'm probably just not aware of cases where it actually is a little smarter than the humans, doesn't have the bias, maybe. But anyway, you're launching your venture, or have launched it. Are you expecting there to be more commercial kinds of solutions solving some of these more operational, it feels a little bit tactical, but not quite, kind of hybrid things? Or do you think it's going to be people building bespoke solutions for their particular entity, or some hybrid of those?
Speaker 2:Yeah, I think it's always going to be a hybrid. There are definitely going to be commercial solutions that address problems, and there are going to be innovations that come out in commercial solutions. And I think there's still going to be a place for custom, bespoke solutions, where your process is something really different, or the industry you work in is, let's say, heavily regulated, and a commercial solution, at least one built specifically for that industry, is not available.
Speaker 2:In that case, you may have to build a custom solution. So I think it's definitely going to be hybrid, and with tools like Zapier, Make, n8n and hundreds of other AI platforms out there, I think people will also start building these automations at scale within their organizations. So I think there's a place for both.
Speaker 1:Yeah, okay. And it seems like stuff like Clay is also big right now.
Speaker 2:Yeah, yeah. Clay's impressive.
Speaker 1:Yeah, I don't want to do an ad for them, but I played around with it a little, and within a few minutes I was able to generate something that I don't think I would have been able to do on my own, anything close to automated, and doing it manually would have taken months. It's pretty crazy.
Speaker 1:Well, this is awesome. We've covered a lot of ground, but is there anything we didn't cover that you want to make sure our audience hears about?
Speaker 2:Yeah, I think one thing I tell everyone, and one thing I generally see, is that there is a lot of FOMO out there. People feel that if they're not doing it, or not doing it in a very big way right now, they're missing out on something. Honestly, I think this is just mile one. This is just getting started. So the right way is probably to learn it, experiment with it, apply it to certain use cases, see what works and what doesn't work, and then go from there. I think that's important, because people are getting overwhelmed, and they're also having this fear of missing out, which probably is not right at this point in the technology curve. And the second thing I see is that people are thinking of AI as some kind of magic wand, and not thinking of it as just another technology.
Speaker 2:I mean, I agree that it's revolutionary; it's not just another technology, it's definitely something that can change a lot of things. But at the end of the day, it's a technology, and like any other technology, it's not a magic wand. It needs work, it needs development, it needs testing, it needs experimentation, it needs deployment, and even after deployment you have to maintain it. These LLMs, the technology, are evolving so fast, so rapidly, on a daily basis, that you have to maintain these applications. So I think people are ignoring, or at least not looking at, the hard work that's needed to actually implement these technologies, and just thinking, hey, we put AI on this and it's going to solve the world's problems. That's not the case. It's just another technology that can be put to use, but it's hard work, like any other technology.
Speaker 1:Yeah, it's interesting, because I think the combination of those two really describes my own experience, which was, I think, the first couple of times I tried, I'll just keep it to ChatGPT, I was unimpressed. It was like, oh, I don't get why everyone's so excited about this. And I think because of that, whether I was a skeptic or it just felt like it wasn't worth the effort, I held back. But at the same time, I kept hearing so many people talk about it: maybe I'm missing something.
Speaker 1:And it wasn't until, I can't remember exactly, there was probably a particular thing I'm thinking of, where I was struggling with trying to do something.
Speaker 1:I thought, maybe this is the kind of scenario where it would help, and it did. And that was the catalyst for me to think about what was different about that. It was really that it wasn't just a simple question; it was something more complicated, and it needed a little context. And then I saw some other people with examples of the kinds of prompts they were doing, and I realized, oh, you actually can't just go ask it a random question; it doesn't have any context about you. It would be like asking somebody on the street, right? But up until that point, the idea of even thinking, should I try using ChatGPT or one of these tools to help with this problem, was way down the list of things I might try. It's now getting much closer to the very top of that list: can this help me with this problem? And I think that's been the shift for me.
Speaker 1:When I think about it, it's happening much earlier in my thought process.
Speaker 2:Yeah, absolutely. And going forward, I think it's going to be top of the list of things to try before you try anything else. One thing I try to do is that whenever, during the day, I think of something that I have to do, I force myself to first go to ChatGPT and see if it can do it, and nine times out of ten it's actually able to give me a new perspective, if I'm brainstorming or researching something, and also give me direction. So I think it's awesome, and I think, slowly, that will be the first place people go to start anything.
Speaker 1:Yeah, I've done that with at least two of my kids on very different things. One was college choice, and one was a warm-up routine for a track meet. Two very different things, both of which I have a little bit of knowledge about, but not all the knowledge I needed to help them. In both cases, it generated something within a few minutes that was absolutely useful.
Speaker 2:Yeah.
Speaker 1:Crazy, absolutely. So it's going to be interesting. I'm involved with an advisory board for an engineering school, and this has become a topic on that front as well, because there's the obvious question: should students be allowed to use these things? But there's also the idea that kids coming in, probably in the not-too-distant future, are going to arrive already having their own education agents coming with them. It's weird to think about. It's funny, I think my oldest son actually knows more about this than I do.
Speaker 2:No, absolutely. And I think there's no point in keeping students away from it. It's a tool just like any other tool, for them to use and learn from, to increase their productivity and their knowledge.
Speaker 1:It drove me crazy when my kids told me they weren't allowed to use Wikipedia as a source for a research paper. I was like, I get it, there's crap out there, but there's also good stuff. You've got to learn how to decipher what's real, what's useful and what's not, what you trust and what you don't trust. The same goes for this.
Speaker 1:Well, Tarun, thank you so much. If folks want to learn more about what you're doing, or hear more of your perspective on all this, what's the best way for them to do that?
Speaker 2:Yeah, sure. The best way, I guess, is to reach me on LinkedIn. Just DM me, send me a connection request. I would love to connect and talk more about this.
Speaker 1:So LinkedIn is the best place. All right. Well, I can attest he asks good questions and he likes to learn from others. So again, Tarun, thank you so much, appreciate it. Thanks again to our audience for continuing to support us and giving us ideas and suggestions. If you have an idea or suggestion for a topic or a guest, or you want to be a guest, feel free to reach out to Naomi, Mike or me, and we would be happy to talk to you about that. Till next time. Bye, everybody.
Speaker 2:All right. Thank you so much, Michael.