
The Digital Project Manager
The Digital Project Manager is the home of digital project management inspiration, how-to guides, tips, tricks, tools, funnies, and jobs. We provide project management guidance for the digital wild west where demanding stakeholders, tiny budgets and stupid deadlines reign supreme.
Turning AI Experiments Into Agency Impact: Transforming Curiosity Into Measurable Results
Experimenting with AI is exciting—but how do you make the leap from tinkering to transforming agency operations at scale? In this conversation, Galen Low brings together Melissa Morris (Agency Authority), Kelly Vega (VML), and Harv Nagra (Scoro) to talk about how agencies can carve out space for experimentation, align AI use to business goals, and actually implement the good ideas that emerge.
The panel shares stories of saving hours on PM tasks, setting up accountability frameworks, and creating safe spaces for knowledge-sharing. They also surface the tough stuff—fear of job replacement, cultural resistance, governance challenges—and how to navigate it with clarity and empathy.
Resources from this episode:
- Join DPM Membership
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Melissa, Kelly, and Harv on LinkedIn
- Check out Agency Authority, VML, Scoro, and The Handbook: Agency Ops podcast
What is the AI holy grail that agency operators are seeking out by giving their people time and space to experiment with AI?
Kelly Vega: I would usually spend about 30% of my week doing what now takes me less than 5%, no question.
Harv Nagra: This was a couple of years ago. During our monthly all-hands, we would create space for anybody to volunteer to kind of present some of their experiments, sharing that knowledge with other people.
Melissa Morris: Don't throw it out to the team: hey, we're gonna do X, Y, and Z, let's make that happen. When everybody's accountable, no one's accountable. Treat it like a client project or it's just not going to get done.
Kelly Vega: We actually made a standard agreement that we would have everything transcribed, and everyone was just like, please, yes, accountability. That's great.
Galen Low: All right. Today's session is all about taking your agency from a state of, like, tinkering with AI into a state of truly benefiting from AI-enhanced operations at scale. So let's maybe meet our panelists. Melissa Morris, who is the founder of Agency Authority and also, like, a prolific LinkedIn video poster. You were posting so much video. It's all great. It's all valuable. I have to ask, Melissa, in addition to you posting, like, high-value agency-related content almost every day, I've also seen your name appear on podcasts alongside names like Sharon Tarrick, Robert McPhee. What is your regimen for keeping your energy so infectious everywhere that you show up?
Melissa Morris: Yeah, I drink a whole lot of caffeine, Galen. No, that's a joke. I think a couple of things. I've been doing this a long time. I really feel like I deeply understand the agency owners I work with. I'm super excited to support them, so the extrovert in me is very happy to go on and share lots of good information and hang out with them. And I also have a really great team that supports me, so we have a really strong content management system and workflow in place, so they make it super easy for me to just pop on videos and talk and hang out, and then peace out while they handle all the hard stuff for me.
Galen Low: I love that. Who should I pick on next? Maybe I'll pick on Ms. Kelly Vega, who is a Program Director at VML and also a viral TikTok comedian. Kelly, you recently reunited with, like, a household name account at a familiar household name agency after taking a little bit of a road trip into a completely different niche and universe. Does it feel like you've woken up from a dream in a parallel universe? Like, what's different about how things worked at your agency before versus now?
Kelly Vega: Big time perspective. Sometimes, you know, whether you see it as, like, the grass is greener on the other side, it wasn't necessarily greener or not, it was just a different experience. And when that experience came to a head, I'm like, okay, what's next? And what was next was going back, which I've never done before. I've never boomeranged. So it's been wonderful. I mean, I focus on agency operations and delivery for large accounts. I came in with a fresh perspective and experience under my belt to speak to it, so I'm really happy and excited to be here talking about AI and agency operations.
Galen Low: It's the joy of agency stuff too. It's, like, you know, the variety, right? So you can kind of go and cut across verticals and cut across industries, come back with lessons that you've learned in, like, adjacent spaces, and bring in that perspective.
Kelly Vega: Yes. And seeing that at a small scale, at a larger scale, and kind of having that, I mean, so much to be said, it's cool.
Galen Low: I'm gonna leverage some of those perspectives today. So thank you for being a part of this. And last but not least is Mr. Harv Nagra, Head of Brand Communications at Scoro and host of The Handbook: Agency Ops podcast. Harv, you recently revealed to me that you actually had your acting debut in a video case study when you were a Scoro client. Now you're actually at Scoro using your agency ops background and your Oscar-worthy acting chops for a new LinkedIn video series called Ops Quickies. So the big question for me is: how long until we see The Handbook as a major motion picture, or at least maybe more video content from you?
Harv Nagra: Well, I think it's in production already, actually. Horror story, you know, evil client team not doing their timesheets and ChatGPT shutting down. So I think you're gonna love it.
Galen Low: I would watch that. Thriller. Absolutely.
Harv Nagra: Yeah. Waking Nightmare. I'm based in London in the UK, but originally from Vancouver, Canada. So excited to see some of you from BC over there.
Galen Low: Yeah, Harv and I share a hometown. I was originally from Vancouver, BC as well. We did not know each other in our time in Vancouver, but I'm glad we're connected now and I can rope you into doing a panel discussion online from the UK. Alright, let me tee this up. I'm hearing from a lot of agency folks that their agencies are asking their staff to dedicate, like, two to four hours a week to mandatory AI experimentation and exploration. While I don't necessarily find this surprising given, like, AI's promised potential, I will say that it was just a heartbeat ago when there just wasn't enough room to take two to four hours per week and still hit an 80% utilization rate. But while agency leaders are likely hoping that these experiments and explorations will lead to, like, innovative breakthroughs that will 10x their business, the reality is that tinkering with AI won't necessarily get them there. Experiments need frameworks. They need criteria for success. And beyond that, successful experiments need to be, like, mobilized into an implementation stage in order to benefit the organization at large. So really the question is, what's the best way to go from informal AI experiments to agency-wide, AI-supported operational enhancements, and what is at stake if agency folks aren't able to get there? Paint a picture. You know, we're talking about AI, we're talking about growing an agency or any type of organization, but sometimes it's unclear what that destination looks like. So I thought I'd ask this panel. What is the AI holy grail that agency operators are seeking out by giving their people time and space to experiment with AI? Like, is it just about giving staff exposure to the technology, or is it about accelerating into this sort of hybrid human-AI business model? And if it's the latter, what does that even look like? Kelly, you wanna kick us off?
Kelly Vega: I'd love to. Yeah. I think that really the time that people are being encouraged to spend investigating AI and looking into it, I mean, from a PM and ops perspective, it's all about efficiency. It's all about taking admin and making that more efficient, taking away the analysis paralysis. You know, when you are spinning your wheels on meeting recaps and who said what, there are transcriptions that you can put into a recap that you should read through, and should make sure everyone's name is spelled right, and should make sure that there's not a line that says someone was frustrated when you didn't necessarily wanna note that. But to take away a lot of that overhead work that you're otherwise spending time on: I would usually spend about 30% of my week doing what now takes me less than 5%. No question. So I think that is a big default focus, because that's where my head is, that operations process, standardizing process. I could go on about many examples with that, but that's where my head goes. That's where I've seen a lot of PMs and digital PMs go.
Galen Low: I like the human-in-the-loop stuff, right? It's not zero. But also, the other perspective that I hadn't really thought about is how painful it actually is for your bosses, and your bosses' bosses, to watch their top performers do excellent meeting minutes and notes, with names spelled right. They're like, what I really appreciate about you and your talents is not that, right? I want you to be doing other things. So it's actually not just painful for the people who have to do some of that administrative work; it could actually be painful for supportive leaders who want you not spending 30% of your time doing that, because you're better at something else.
Kelly Vega: Exactly. It supports delivery. It certainly doesn't replace the talent or the strategy that is necessary, and that your brain otherwise needs the space to focus on and can focus on. My burnout is less because I am simply not using that energy toward things that are admin and not productive toward a strategy.
Galen Low: I like that the holy grail is, like, not having that overwhelm that breeds shortcuts, or spreading oneself thin. Melissa, you work with a lot of agencies to look at their operations. I'm imagining that you've got a pretty good perspective on this as well, in terms of, like, what the people you're talking to are holding up as their sort of, you know, holy grail, their vision for the future. What are some of the things that you hear?
Melissa Morris: Yeah, I think, you know, quite similar to what Kelly said, and that's on a very just, like, practical day-to-day, what does this look like? But I think to go a step above that, what they're really looking for is increased profitability. I think agencies can sometimes end up with these really thin margins, particularly creative types of agencies and industries. Those are notorious for getting out of scope, and three rounds of revisions turns into 10 rounds of revisions, turns into, you know, then the client changes direction and we're back to the drawing board, right? So I think there is always a desire for agencies to be coming back to: where can we simplify, where can we streamline, and where can we build in some breathing room, in our margins, in our timelines? And I think it is a struggle in some of the creative spaces. So where can we really double down on administrative-type things like meeting notes or creating slide decks, or whatever that might look like for your agency, to maybe give us a little bit more margin and give us a little more breathing room, where we for years have just been struggling to, like, get the reins around it.
Galen Low: I love that. In one of our past events, I think it was in July, you know, we were focusing on creative projects. And the thing that kind of struck me was that, like, it doesn't have to be this uniform thing. Maybe for a project manager, someone leading a project, you know, focusing on delivery or on operations, it's that day-to-day operational efficiency. But from a project perspective, for the teams, there is a time and a place where you do want a lot of time and human energy spent to, like, create stuff. And then if you look at resizing images, there's a moment where, okay, well, I'd rather not have my best designers and creative folks resizing images or doing, like, localized copy for a campaign. There are areas where, again, it's just painful to watch my best people do these things, and maybe AI can help. I like that it's not necessarily just, like, cool, dial up AI to 110 across the board. It's finding the spots where it's gonna make that difference.
Kelly Vega: Yeah, I mean, building a Jira ticket is something that seems like it could be admin-y, like a meeting recap or summarizing an email. But really there's a lot of technical stuff that can go in there, and if you're taking a long-winded response from a developer, someone who's more tech-minded, you can take that run-on sentence and plop it in and be like, hey, I need a Jira ticket out of this. And then you're reading through and you're like, okay, actually, that part stays. And if you're including, like, the platform which it's on or whatever, you're gonna get AI to beef that up, and then you slim it down and you react to it and then make it fit into your ticket. There are gonna be labels they'll suggest that you don't need, all of that. But, like, that has saved so much time for PMs when it's like, make this Jira ticket. Sometimes they just freeze, or it doesn't get made for a day, just 'cause they have to have time to think about it.
Galen Low: I'm that guy, yeah. It takes a few days to create a Jira ticket, and if only I could get a start sooner. But I'm also that person who's like, that's the important human bit for me as well. You know, I'd better have my ducks in a row to share with my team, like, what we should be doing, 'cause that has knock-on effects if I get that wrong. But also, yeah, there's an inordinate amount of time that we underestimate of, like, clicking buttons, copying and pasting, finding information in different tools. If that can get streamlined, like, that is a wonderful thing. I thought maybe I'd zoom out from there and sort of make this real, because I think for folks who might be hearing about this for the first time, maybe that's not how their organization works. They're like, oh, I'd love to have, like, AI recess every day. That sounds fun, but what has that looked like in some of the organizations that you've been talking to? Harv, I know you talk to a lot of folks, you know, in your role at Scoro. Do you hear a lot about this?
Harv Nagra: Yeah, actually, well, one thing that came to mind, this was a couple of years ago when I was at my past agency. It was just at our monthly all-hands. It wasn't, kind of, mandatory time we were putting in, but we were encouraging people to, like, play around with tools, right? And so during our monthly all-hands, we would create space, or have a little space in that session, for anybody to volunteer to kind of present some of their experiments or what they'd learned. So that is being exposed to everyone. The other week I was at a breakfast event for AI in the marketing space: marketing companies, agencies, and so on. And something I heard somebody say that was really nice is, I can't remember what they were calling it, like a task force, where people have volunteered to join this working group in the business. And on a monthly basis they get together and they have to present something. So that could be some really interesting and relevant AI-based news, or an experiment that they've run. And the fact that it happens on a monthly basis and you have to share something means it gets done, rather than just being this thing that maybe I'll get to and you never prioritize, right? And again, the benefit of this is that you're sharing that knowledge with other people. Where I work now at Scoro, there's a few things that we have going on. We've got an AI Slack channel, so again, we're sharing kind of the news and experiments we're doing, and we've just recently done an AI skills survey across the business as well to find out what people are using. I mean, we have very strict usage rules, so we already know what people are using, because it's been vetted for data protection. That's what I'd kind of recommend, some of those kinds of things, to just make sure there's a regular cadence of learning, presenting, and knowledge sharing.
Galen Low: It's such a cool, like, analysis tool, in a very optimistic way. I've seen some that seem very cold and sterile, like you might lose your job if you answer the question wrong. This is driving education, which I think is the big piece. And I think, you know, the theme of sharing information is huge. I like that forcing function. You know, it might not be for everybody. I'm sure there are people who'd, culturally, be rolling their eyes, like, oh my gosh, I have to do this, like, AI thing and then present about it. What am I, in grade school? But I think also it's the forcing function to share knowledge, because if we're not sharing this knowledge, nothing can scale, nothing can, sort of, disseminate. It's easy for someone to just, like, accumulate all these ways of working that are excellent, but only they do them, and therefore the impact to projects is, like, this big instead of that big. It comes with its own challenges after that, but we'll get into that. Does anyone else have, like, an example of how an organization or an agency is structuring this sort of experimentation?
Kelly Vega: Yeah. We actually have a proprietary tool. It's very robust, and even when you go in, it's spliced out with, like, creative needs, operational needs, strategic needs, and within there, whether you're looking up set prompts or just types of files or what have you that need to be created. So given the robustness, we'll say, it's funny how often I'll just go to the chat function that I've been using, you know, elsewhere forever. Not to say the robustness isn't nice when you need something specific, but that more so happens when you just stumble across it. Yeah, I'll use the chat function simply. So having that proprietary tool kind of naturally has this default of, like, this is yours to use. If you have feedback, let us know here. And it's something our clients are aware of and use. And so really it's less about a certain time that's allotted encouraging us to experiment with the AI, and more so it's kind of up to us and encouraged, like, show us how you do it successfully when that happens. So any task that I'll have, I mean, even going through a Jira export of time spent, right? Jira's robust reporting functionality, great. But if I'm looking for something so specific that I just don't have time to dig for and mess with filters on, I'm taking that export, as long as everything with security and all the guidelines is being followed, and with that I can be like, hey, tell me this about this, or how is this trending? Or, you know, it helps me just start thinking, and then I'm back in Jira looking at how I can optimize. So it's not just giving me these answers of what to do, but it's helping me. So anyways, I'm getting more into the use of it, but it's just so encouraged, given that it's a default tool now for us. It's just a matter of how we're using it, not if we're using it.
Galen Low: Do you have an opportunity to, like, input into that proprietary tool? Is it just training on all the stuff that people are doing in their day-to-day, or is there, like, time spent going, here's how I do a thing, I'm gonna upload it, or, you know, I'm gonna train this model on it?
Kelly Vega: I do wonder. I mean, it's associated to my account, and so with all of the behavior and the actions that I'm giving it, I have noticed the messaging of summarizing is looking more and more like my voice, so to speak, right? But I still don't use that as copy-paste into my emails. I'm still using, like, the bulleted things that are provided or processes outlined, and then I'm finessing it from there and making it my own. And there are adjustments needed. I would say, gosh, if I were to quantify it, 10 to 30% I'm adjusting, of recaps, or linking things out, right? Showing that I do care about the recap I'm providing you, and you're not just getting something spit out with the same emojis in their spots and the em dashes, right? Like, okay, so I digress.
Galen Low: Or the em dash.
Kelly Vega: Right. And I, like, I legitimately would use them. Now I can't use them, because people would be like, okay. Like, well, first, we all know we're using AI. Second, caught: em dash.
Galen Low: Let me go in and add some typos to my email before I send it.
Kelly Vega: Yeah, exactly, kind of.
Galen Low: The utopia is, it's too clean for humans to believe, so we need to mess it up a little bit.
Kelly Vega: Misspell "definitely."
Galen Low: Every time. I wonder if maybe we arc that into the thing that I've kind of been hinting at. I call it tinkering; hopefully I'm not offending anyone by saying that. But there is this sort of, like, getting familiar with, you know, the technology and its capabilities. There's the helping yourself improve your own personal productivity. And personally, I applaud the fact that, you know, an organization like an agency, which is, like, built on projects and billable hours, is willing to invest non-billable time for folks to, you know, experiment with AI. But I mean, arguably even the best AI experiments might just sit on some shelf and gather dust, like, you know, all those hackathons that we've done over the past decade or so. It requires some structure, it requires some formality, actually. It's not just the tinkering. But to my ops folks, how can agencies set up a structure that, like, vets great ideas and then, like, bakes them into the strategy and then moves them into implementation, so that they can be part of the fabric of the business, not just, like, a tool that one person uses over here in this corner? How can these projects avoid becoming that dreaded, like, internal project that's always the lowest priority versus client work? Almost shippable. It was great. We spent, you know, all of our experimentation time building this great tool, but every time we try and, like, roll it out, we get that big project from that client. So it's just gonna sit there and do nothing. How do you guard against that?
Melissa Morris: So this is something I talk a lot about with our clients, because obviously in the ops space, you know, ops projects and activities, whether it's creating SOPs, working on your project management tool, whatever that looks like, it can be not the fun stuff sometimes, right? And inevitably, when there's client work, that's the stuff that gets pushed to the side. So when speaking with my agency owners, I'm always reminding them: if you have an internal project that needs to get done and get across the finish line, then you need to resource it just as you would a client project. Don't, you know, throw it out to the team: hey, we're gonna do X, Y, and Z, let's make that happen, guys. Because when everybody's accountable, no one's accountable. I think we all know that, right? And then also, don't give it to somebody who we already know is on a big deadline, who's already at capacity with their own client deliverables, because inevitably it's just not going to get done. So how can we treat it like a client project? Build in real milestone moments. Is there a creative brief? What are we looking to solve, right? I think it's also easy to say, hey, go figure out how we can use AI to make the agency bigger, faster, stronger. Well, that's a tall order. That's very ambiguous. But to say, right, to use Kelly's example, hey, writing Jira tickets is a tremendous lift. It takes a ton of time. Sometimes I get it and I'm just like, I need a minute to even sit on it. How can we lean on AI to help us with that? So let's get real specific about what we're trying to accomplish. Also specific about what the expectation is for that. I could spend the next three months saying, I'm looking into it, I'm looking into it, I'm looking into it. What are my first steps? Like, what do I want looking into it to look like? When do I want that by? I want you to spend 10 hours. I want you to spend it looking at these types of tools. This is the budget you have.
Also, don't, you know, tell me you found a great tool, but it's gonna cost us $800 a month, right? Whatever that looks like. So really build in some parameters and let the person know, this is my expectation while you're off experimenting. And then you roll it out just in the way you would a client deliverable. Show up at that weekly standup, that weekly huddle. Tell me the update. Tell me the roadblocks, tell me the challenges. And when you resource it and treat it like you do client delivery, it gets completed. Like client delivery gets completed.
Galen Low: I like that. And you mentioned resourcing, but also in my head I'm going, like, put a dollar sign next to it. Sometimes it comes down to the business casing, because fundamentally my assumption is most of these agencies and other organizations are investing, literally investing, time to improve their business, to improve operations, to improve quality of life for their employees. So there needs to be a return. And I like that notion that, like, even at the experimentation level, there should be a hypothesis. And I like Carrie's note in the chat, which is that AI implementation must be aligned to business goals and objectives. Because I think that's the other thing. We were talking the other day internally about our experience in past lives with the ideas drop box, right? Just, like, put your project idea in the drop box, and depending on your organizational culture, your team culture, you'll either get lots of ideas that are very aligned, and you're like, cool, yeah, it'd be great if we could climb over that ledge and increase our margin by this much, or, you know, increase our operational efficiency, reduce the time it takes to do a thing. Or you could have, you know, the ideas that are just so outta left field that, like, nobody wants to look at the drop box anymore, right? So there's actually work to be done to culturally align everyone toward the broader goals. The experiments themselves, and the time you're spending, need to have some kind of hypothesis or goal. And I like that idea that you're kind of prioritizing it like you would a portfolio of projects anyway. It's like, where is our strongest return going to be? That's the one we should resource and invest in, and treat it like a real project and put a dollar sign next to it, so that we don't go, oh, but that client has a $20,000 project we could just take on tomorrow.
Yeah, but we could have a $4 million return on this project that's actually only taking us, you know, like $30,000 worth of resources. Then it's kind of got that framing, and that, like, motivation to actually get it done and not sit on the shelf.
Melissa Morris: Yeah, and just to kind of add to that a little bit: how relevant is it, and start to prioritize, right? Like, that return on investment. I can say, hey, look, we can create a slide deck in five minutes from this transcription. Look at how awesome that is. Okay, well, we create one slide deck every 18 months. We're not gonna roll this out, train the team, like, throw a parade. Like, that's amazing, but at the end of the day, I don't care, right? When I have to make a slide deck in another 18 months, I'll take a couple hours, I'll knock out the slide deck, and then I'll move on. So calibrating, right? Like, the input and resources in versus what is my ROI out.
Galen Low: I like that sort of prioritization.
Kelly Vega: I think there's something to be said, too, about the buy-in of the team, right? Like, I think that excitement from some doesn't always mean complete, like, agreement or buy-in from everyone. So explain those business goals and, whatever it be, the KPI or the efficiencies that you're showing. As long as people understand that, it can move fast once there's buy-in. And people are like, oh, okay. Oh, okay. And now it's more of a priority.
Galen Low: Like, that's the change management piece. I also wonder if there's, like, in my head I'm thinking, is there nuance? And, you know, I'm open to anyone's thoughts on this, but, you know, when you do that project that you've never done before, you actually don't know how long it's gonna take. And a lot of this AI stuff is kind of new. So there's nuance, like, with the folks who might be resistant. They're like, great, yeah, you know, hire me to train my replacement, thank you. Or they're just like, this is so silly. Like, I don't know how to do this. Who knows how long it will take to actually operationalize this? So, like, how can organizations, agencies in particular, navigate that so that the projects actually, you know, fit within the sort of constraints that we've mapped them out to fit within? And is it okay if they don't? Like, when we're thinking about the best kind of, you know, scope creep, or, like, the change request that comes in from that internal project to be like, actually, you know what, this is taking longer. I don't even know what my question is, really, but is it as simple as just, like, having another project? Or does AI kind of create a sort of nuance or inflection that might actually be a lot more complicated than just your standard-fare, bread-and-butter project that you do for clients?
Harv Nagra: I think, you know, no matter what you're doing, having a kind of guess or an estimate of what you think it could take, time-boxing something, is always just really valuable, right? You don't have 40 hours a week for somebody to be doing these experiments. You maybe have a couple hours per week, and so saying, like, we're gonna dedicate 10 hours to this over the course of three weeks, and then we can assess. And if it doesn't come to a conclusion, then you can decide: are we gonna continue this? Are we gonna park it? Or are we gonna kill it, because it's too complicated or it's not turning out the way we think it is? So I think that's a good way of just kind of time-boxing, I guess is the point, and coming back and assessing if it's getting somewhere that's useful, right? That's what I would do, anyway.
Galen Low: I like it also because it's like a microcosm of how, like, round-based funding would work anyway. It's like, we'll give you a bit of money to get this far, then we'll give you more money if you get that far. And if you don't, then we might not give you that much more money, or we might not give you any money at all. But, like, yeah, what are those sort of milestones that allow us to iterate towards value? But, you know, I guess shrunken down. Hopefully less shark-tanky, but still the ability to sort of pause and go, is this heading in the right direction? And the culture to say, yeah, maybe we've put a whole bunch of money into this so far, but it's really not working, and it's okay for this to fail, so let's cut it off now before we bleed out more.
Kelly Vega: With AI projects and initiatives, there are instances where just because you can doesn't mean you should. Many instances. I mean, I think more so we hear some of those coming from requests from the client side, from clients who may not know all the best practices or where it's applicable and whatnot. I think that's a very important recognition, at some point, to be like, okay, just because we can, should we, given either the effort that's left or what it's not doing so far? Assess.
Galen Low:I'm like, should I take this to devil's advocate zone? And I think I will, because in some ways I'm like, all right, let's review. So we're experimenting with things, we kind of need to be able to measure them. Implementation might require us to have a plan and resource it like a project, but also it should be iterative. And then the devil's advocate in me says, well, maybe we do what probably most organizations want to do today: why don't we skip all that planning stuff? We're gonna be iterative anyways. Let's just take all the good ideas that we came up with week over week, and let's just start the first half step of each one of them. That should be fine, right? Because it's iterative, we can measure, we're not investing a lot, and then we can organize. But why wouldn't we just iterate on everything all at once? Like, planning is for pre-AI, you know what I mean? That's the olden days. Like, that's a black and white photo. Why might that be a good or not good idea in your mind?
Melissa Morris:Yeah, I'll jump in for a second. So I think anytime we're rolling out something new, whether it's AI or not, we need to understand that there are varying degrees of buy-in from the team, as Kelly mentioned: tech ability, comfort. And so I think we have to be careful to not leave some people hanging. We do this a lot when we're rolling out new time tracking, project management tools, CRMs, whatever that looks like. We have a very dedicated plan for when we're looping the team in. Do they understand, high level, why we're doing this? What is our training plan? And then what does support look like for people who need additional handholding? Because inevitably you have people who are very into tech, they're really pumped about AI, and they're ready to lean all the way in on that. And then you have others who are more apprehensive. Maybe they're just not as tech inclined in that way. Maybe they just don't really see the value. And then I kind of wanted to circle back just real quick to something you said before too, 'cause I think it's worth mentioning and important: the fear some people may be having. Am I creating the resources? Am I showing them how to replace me with an AI bot? I'm getting nervous about that. Or are they going to now ask me to do twice as much work because they think I can do it twice as fast? Or now I'm going part-time instead of full-time because this can be handled without me. So I think there's definitely a different charge around AI than maybe we've seen with some other tech advancements, where people may have a real fear about job security. And so I think being extra careful to be very clear about, this is what we're doing, this is why we're rolling it out, here's the training and the support, and having a space for team members to share any thoughts or concerns, is gonna be really important.
Galen Low:I love that. Yeah, it's like, that's the thing that can't be rushed, I guess, or at least in my opinion. We could technically start rolling out a hundred new ideas tomorrow. What does that do to our brains as humans? What does that do to our emotions and the way we're able to conduct ourselves? How do we keep tabs on all 100 goals and KPIs that we're all trying to achieve, that somehow, you know, go towards some North Star? Maybe, or maybe it doesn't, and that bit is probably where a lot of these implementations are falling over.
Kelly Vega:I mean, I don't right now have a PM team reporting to me, but if I did, from a PM ops perspective I might say: juniors, I want you to work on how to use AI for things like the recaps and come up with a standardized prompt. What's the best prompt for after backlog refinement? What's the best prompt for after sprint planning? All of that, and have the juniors do that. And then I'm having the mid-levels look at our documentation in Confluence, whatever is secure enough to upload, high importance: start uploading some of those processes and see how we can standardize some of our process documentation. And if that's more of a tech lead thing, great. And then your senior levels, you're like, okay, look at 2026. I know maybe you're not the account managers or the sales team, but from your PM perspective, how can you strategize for 2026, whatever that means, the initiatives you have, the different projects? And it can get bigger and bigger. So I think you can have multiple initiatives going at once. Now, that's just me in my PM ops world and what I would come up with off the top of my head. So I think there is something to that, because if there's one big initiative, everyone's like, whoa, AI, and then other people aren't really pushing, 'cause they're like, well, they're pushing and I don't really care, whatever it is. But yeah, so I digress. I think that there could be value to that.
Galen Low:I like the right sizing of the experiment. It's like, don't get the new hire intern to rewrite how we do payroll. I think there's that. But the other thing you touched on, and I do wanna circle back around to ops, or maybe I am doing that now: in an operations role, you need project managers to lead these projects. And going back to what Melissa was saying, these are projects, and in fact, to what Harv is saying, we're gonna iterate through them and literally need to make decisions. It's a program or a portfolio of projects or initiatives that need leadership. And maybe that's as valuable as client work. Is that the next stage of how we're investing in this? Where I would be able to say, yeah, that's okay. My PM team, they're usually running, let's just say, four projects at a time. One of them is gonna be an internal project, and Kelly in ops is gonna be the sponsor here, right? She's our client, and she's kind of overseeing how this all rolls out, aligns with their goals, becomes SOPs, and that's what that internal initiative looks like. Versus what I was kind of devil's advocating about, where it's like, everyone just do stuff, it'll be fine, just do it and start, because "it's better than standing still" isn't always true.
Kelly Vega:Yeah, I was gonna say, a deliverable from that that you could have is: screen record yourself making a ticket from scratch, your whole screen where you have to click around everywhere, and then screen record yourself taking that same information and doing it with chat. And now you have a deliverable for whatever you wanna say. But also, to your point, Melissa, it can get to be a slippery slope when they're like, oh great, well then do more, here's two more projects. So you have to protect that other space to be like, well, no, now I have two hours heads down to focus on our strategy for this, or to help churn tickets with the client, or whatever it is. You're holding that space for the other work, but with more of it now, so that you don't have to be as scrambled and it can be higher quality.
Galen Low:Also that trust thing. Right? The building the trust to be like. Rerecording of you doing your job. I promise I won't judge it. I'm not gonna judge it. I judging it. Yeah, we're judging it. Yeah, Sam, we're going. But we want it to be more of it. It's like, yeah, there's so many layers of things to totally ice through. To get people to even participate in the project the way that they would if it was just a regular project that didn't have to do with their job or new technology or.
Kelly Vega:And perhaps that's more along the lines of having approval to use an AI tool, right? If some aren't on board, or they're skeptical for good reason, whatever it is. So, education.
Galen Low:You mentioned, like, data security, compliance, and stuff like that. Harv, you mentioned that at Scoro there's certain tools you can use because they've been sort of vetted into the overall governance program around AI tools. I guess maybe my question is: is that or is it not one of the things that folks seem to be either sidestepping, missing, or fully taking into account when they're rolling out their internal AI processes? Slash, you know, what is the right level of governance at a certain stage? Data privacy, compliance, all that.
Harv Nagra:Totally. I think it comes down to the organization and kind of the maturity around that, right? Over the past couple years, since this whole AI race started, I think there has been a lot of experimentation, and there was a lot of concern, especially in the early days, about people uploading client data and stuff like that. I'm sure that was happening. I'm sure that happens now. At least now we've got kind of clear guidelines for our businesses to say, this is what you're allowed to do and not. Like, shadow IT has always been a problem that ops and finance people have had to worry about: people just finding tools that they want to use, downloading them, installing them, signing up for a subscription. And the thing is, one, you end up having too many things that you're paying for that you might not have control over. And number two, not everyone's benefiting from those platforms, right, and sharing that knowledge and opportunity. So I think that's why it's worth having those kind of controls in place, from the operational point of view, to say, this is how we select products and make sure that they're safe, and this is what our guidance is on what you can and can't use that platform for. So I think that's super, super important.
Melissa Morris:And I would add to that, I think we've been talking a lot about it from the context of within our own agency or within our own business, but I think having some clarity for your clients, and a line of visibility on how you are using it, when relevant and appropriate, matters too. I'm just thinking of a company we're working with. They have conversations that are quite sensitive, to the point where, if brought into a court of law, suddenly there's a transcript or a recording of a meeting that never happened or shouldn't have happened. So there's definitely certain situations where you would want to be very careful and make sure your client knows. And I mean, this is standard practice about recording and such too. But if I take that recording and go back and make it a transcription and save it in my Google Doc, 'cause it's really easy for me to go back and reference what we talked about, make sure they know that and that it's okay. And just know what sort of documentation you're keeping and what implications that may or may not have down the road.
Kelly Vega:We actually made a standard agreement that we would have everything transcribed, or that anyone on these calls had the ability to transcribe or record at any point. And everyone was just like, please, yes, accountability, that's great. 'Cause of just the interruption of asking, is this okay? Now, it's not for everyone, but that has been simply nice.
Galen Low:Yeah, 'cause you've worked in a lot of heavily regulated industries as well, where I imagine some of those clients would be like, no, thank you. Did it surprise you when they were like, yeah, please do that?
Kelly Vega:Actually, for me, it was more the other way around, where the preference was to have it recorded, simply for accountability reasons. Because something could so easily be associated wrong, or when it came to, gosh, so many technicalities with regulations and permissions and whatnot. In my experience, it was actually defaulted to.
Galen Low:That's fair. It's the opposite: because of regulation, we do want to have a record of this thing having happened.
Kelly Vega:Yes. Now, I will say, at the top of a meeting where I simply want to be more casual, not because of accountability things, I'll say, I'm not gonna record this one; if you feel like it's needed, absolutely, any time. But sometimes I know that having that on can change the air of a conversation, so be mindful of that. Like, I'm not doing that with my one-on-ones with my PMs. I'm not going full, you know, that's good, we're gathering requirements, the red button's on.
Galen Low:Love it. A big thank you to our panelists for volunteering their time today. I know we hang out all the time, but this has been so much fun. Also, just getting you three in a room together has been loads of fun. Thank you so much for your insights.
Kelly Vega:Yeah, thanks for having me.
Melissa Morris:Thank you.