Live Chat with Jen Weaver

How to Use AI to Audit, Scale, and Train Your Team With Confidence | Episode 17

Jen Weaver Season 1 Episode 17

AI isn't something you turn on like flipping a switch. Chances are, your team is already using it. That chatbot pilot you ran and the sentiment analysis humming along in your stack? Those are AI at work. The challenge isn't adoption; it's making AI work for you instead of against you.

This week, I’m joined by Jen McCorkle, a data-driven leader with 25+ years of experience in revenue generation and analytics, for a grounded, tactical conversation about how AI is reshaping customer support from the inside out and what you can do to lead the change. We unpack the alphabet soup (LLMs, GPT, ML, agentic AI), share practical use cases, and get real about the risks, biases, and blind spots that come with these tools.

If you’re ready to stop reacting to AI and start directing it, you need to give this episode a listen.

What we get into:

  • Why your AI tool might be hallucinating… and how to spot it
  • The problem with sentiment analysis (and what to do instead)
  • How to start learning prompt engineering without taking a class
  • Guardrails, audits, and smart pilots before you scale
  • The rise of AI governance, and how support leaders can get ahead

Whether AI feels like magic, mayhem, or just another Monday, this episode will give you the clarity to stop reacting and the confidence to start directing.

Keep Exploring:

📬 Subscribe for weekly tactical tips → Get Weekly Tactical CX and Support Ops Tips

🔍 Follow Jen McCorkle on LinkedIn for more insights → Jen McCorkle on LinkedIn

🎙 Keep listening → More Episodes of Live Chat with Jen Weaver

🗣 Follow Jen for more CX conversations → Jen Weaver on LinkedIn

🤖 Sponsored by Supportman: https://supportman.io


Episode Time Stamps:

0:00 Why AI Feels So Overwhelming

3:17 Generative vs. Agentic AI Explained

6:50 AI Use Cases That Actually Work

10:45 Real-Time Agent Support with AI

13:30 The Flaws in Sentiment Analysis

17:04 Hallucinations, Bias & Bad Data

20:30 Building AI Literacy on Your Team

23:40 Always Pilot Before You Scale

27:15 Human QA for AI Systems

32:42 Will AI Replace Support Teams?



Speaker 1:

Don't let it fake its empathy. If the chatbot says, "I am so sorry you've gone through that, I can understand," I'm thinking, no, you don't. You've just been programmed to say that. I'm looking for that empathy in the human. I don't necessarily want the empathy in an AI.

Speaker 2:

Welcome to Live Chat with Jen Weaver, the podcast where top support professionals unpack the tools and tactics behind exceptional customer experiences. In this episode, finally, we're demystifying AI for support leaders. My guest, Jen McCorkle, takes us beyond the buzzwords and into practical, no-fluff strategies for harnessing AI on your support team. She is an amazing support leader with tons of experience in data and AI. We talk about how to start smart with chatbots, how to keep AI tools honest, and how to build the skills your agents need in this AI world we're in now. If you've felt lost in the AI alphabet soup, or wondered how to integrate AI without losing the human touch, whether that's in your contact center, in your support queue, or in your own workflow, this episode is for you.

Speaker 2:

Before we get started, though: our QA tool, Supportman, is what makes this podcast possible, so if you're listening to this podcast, head over to the YouTube link in the show notes to get a glimpse. Supportman sends real-time QA from Intercom to Slack, with daily threads, weekly charts, and done-for-you AI-powered conversation evaluations. It makes it so much easier to QA Intercom conversations right where your team is already spending their day: in Slack. All right, on to today's episode. As we all know, AI is in the headlines all the time right now. We haven't done a ton of episodes on AI, but today I'm here with Jen McCorkle to share some baseline information for customer support leaders who are awash in this new world. Would you go ahead and let me know a little bit more about what you do, how you came to this point in your career, and how people can get in touch with you?

Speaker 1:

Yeah, absolutely. Thanks, Jen, and it's a pleasure to be on the podcast with you. I'm really excited to share some information about AI and what our customer support teams can do to really leverage it. I am a data-driven leadership expert, and I help organizations and people get comfortable with AI and data to make better decisions. Leadership is not just about the gut and the human and the emotional element; it's a lot of using information to tell a story and make decisions. That information can be quantitative, with data and numbers, or qualitative, which is more subjective, about the human-element side of leadership. So I work with leaders across all industries and all functions inside a corporation to get ahead.

Speaker 2:

That's fantastic. I'm so glad you're here. We have a lot of content that you've prepared, a lot of really interesting stuff to talk about, so I want to just jump right into it. Can you just give us kind of an intro to why AI matters, amidst all the noise about it? What does this mean for customer support leaders?

Speaker 1:

Yeah, there is a lot of noise, and what we say with data is: if data isn't helping you make a decision, it is noise. AI is the same thing. If AI isn't helping you, it's noise. And AI is a toolbox. It's not a brain; it's really a tool that we use. With all of this noise, it's really hard to get a picture of what it can do for you. And I think a lot of people, when they say AI, are really talking about generative AI, or ChatGPT kinds of things that generate predictive text. So that's what it is. GPT is generative predictive text.

Speaker 1:

You remember a long time ago, when you first started texting on your phone and it would start to pop up the words? That's generative predictive text.

Speaker 2:

But that wasn't based on AI necessarily.

Speaker 1:

Oh, yeah, absolutely.

Speaker 2:

Oh, okay, I had no idea. Yeah, okay. So when you say AI, what's the simplest way to explain the GPTs, the LLMs, all the alphabet soup, for support teams that maybe aren't super technical?

Speaker 1:

That's a great question, because AI is artificial intelligence and it comes in many, many flavors. So we talk about GPT, which is generative predictive text. It's basically a large language model that's been trained on enormous amounts of data to help the AI algorithm understand how to think like a human, and that's what AI does: how do you think like a human? So, some of the flavors. We hear "agentic AI" a lot. It's a really big buzzword right now.

Speaker 1:

And agentic AI means tools that act. So this is things like triggering workflows, auto-closing tickets, automated ticket routing based on intent. Those are agentic things: they act. Whereas generative AI, or GPT, generates content, and what we've seen is we've moved from "predict the next word in my text on my phone" to being able to write articles and books and generate art and video and audio. It's moved so quickly; it's like every six months we're making a major jump in what AI is capable of doing.
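For readers who want to see the "predict the next word" idea in motion, here is a toy Python sketch (all data and names invented for illustration): a bigram counter that suggests the most frequent word seen after the previous one, which is roughly what a phone keyboard does and what LLMs do at vastly larger scale with far more context.

```python
# Toy next-word predictor: count which word tends to follow which,
# then suggest the most frequent follower. Illustration only.

from collections import Counter, defaultdict

corpus = (
    "thank you for calling how can i help you today "
    "thank you for your patience how can i help"
).split()

follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def suggest(word: str) -> str:
    """Return the most common word seen after `word` in the corpus."""
    options = follows[word.lower()]
    return options.most_common(1)[0][0] if options else "?"

print(suggest("thank"))  # -> "you"
print(suggest("how"))    # -> "can"
```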

Speaker 2:

Some of these things are kind of in the future, maybe with the voice, but what we're seeing now with AI in customer support is definitely chatbots. I've also heard more buzz about how AI can help us internally with our operations on support teams. What are some of the use cases you've seen that work really well?

Speaker 1:

Obviously, there's the agentic side, the routing of tickets and so forth. And there's generative AI being able to write scripts or FAQs, generating the kinds of things that would have taken hours and hours of a human's time: going through and listening to all of these calls, figuring out the most common questions or concerns we're getting, what the best response was, and what people understood the best.

Speaker 1:

That's how AI can really speed that up and generate that: taking massive amounts of data and being able to say, here's the summary of what we found and the next best response. Next best action is another thing for agents: as an agent is on a call, popping up right on the screen, in the tool, "this is your next best action to take." For example, I've worked with call centers in telecom, and one of the next best actions we had been working on was: as you're talking with Jen Weaver and she's telling you a little bit about her telecom needs, what's the next best promotion that would be perfect for her, to help her save money, drive higher customer satisfaction, and get her adopting all of the different products and services we offer?
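As a rough illustration of that next-best-action pattern, here is a minimal rule-based sketch in Python. Real systems use trained models on far richer data; every field, promotion, and score below is hypothetical.

```python
# Hypothetical "next best action" sketch: score candidate promotions
# against what we know about the customer mid-call, surface the top one.

from dataclasses import dataclass

@dataclass
class CustomerContext:
    tier: str               # e.g. "basic", "premium"
    monthly_spend: float
    products: set[str]      # products the customer already has

# Each rule returns a relevance score for this customer (higher = better).
PROMOTIONS = [
    ("Bundle internet + mobile",
     lambda c: 2.0 if "mobile" not in c.products else 0.0),
    ("Loyalty discount",
     lambda c: 1.5 if c.monthly_spend > 100 else 0.5),
    ("Premium upgrade",
     lambda c: 1.0 if c.tier == "basic" else 0.0),
]

def next_best_action(customer: CustomerContext) -> str:
    """Return the highest-scoring promotion to pop up on the agent's screen."""
    name, _ = max(PROMOTIONS, key=lambda p: p[1](customer))
    return name

if __name__ == "__main__":
    caller = CustomerContext(tier="basic", monthly_spend=120.0,
                             products={"internet"})
    print(next_best_action(caller))  # -> "Bundle internet + mobile"
```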

Speaker 2:

So that's an AI thing: being able to pop that up for an agent so it's right there. And as somebody who's been on chat and phone as the customer support agent, and as a customer, it's very hard to think through that kind of data. What tier is the customer at? What are their needs? What is all this data? All while I'm talking to them. I don't have that many streams in my brain, so being able to have something compute that for me does make a lot of sense. But then the AI, whatever it is, is not speaking to the customer. It's presenting it to me, the human, and making me better.

Speaker 1:

Absolutely, and you know, jen, I've been in data and analytics roles since 1996. I'm coming up on 30 years of a career in analytics and data, I know right, and we were doing artificial intelligence and machine learning back in the 90s in colleges. This has been around for decades and I think one of the things that I would stress for anybody who's watching that might be in the analytics team, the BI team, the business intelligence team, any of the data science teams is go sit and watch your agents, do your write-alongs with your agents and watch them all on the phone. What has surprised me the most, and especially very, very large data centers or very, very large call centers, is you've got agents with five, six things going on on two different screens and they're writing this up while they're checking an address and they're doing these different things and it is amazing the kind of multitasking these agents are doing yeah, and as a customer, I'm not seeing that, and so I'm just thinking there's such a delay, why is it taking them so long, exactly?

Speaker 1:

And, as analytics people, we don't always understand why. We put this algorithm together, we're popping up this next best action, we're giving you what you want; why aren't you using it? And when you go sit down and watch, you're like: oh, this needs to be much simpler than I made it.

Speaker 2:

I think a lot of call center employees definitely feel like the tools and support they receive are not tailored the way they would be if someone had watched them actually work.

Speaker 1:

Yeah, exactly. So let's talk a little bit about TLAs, three-letter acronyms. I started my career at IBM, and everything was a TLA there. But let's talk about that, because I think it's important, as somebody in a support role, or somebody who's not technical, to be able to identify what kind of AI somebody is talking about. So we talked about agentic, and we talked about GPT, generative predictive text; the most popular ones right now are ChatGPT and Claude. What else are we looking at? NLP. There are actually two different kinds of NLP: there's natural language processing, and then there's neuro-linguistic programming. Two different things, same acronym.

Speaker 2:

Great, that makes it easier.

Speaker 1:

So, about that. I remember when I was working with a company that scales educational content and materials, working with the marketing team, and I said something about NLP, and this person says, you know, I actually have a master's degree in NLP. I'm like, you do? She was the email content specialist, and in her job, neuro-linguistic programming was very important. Then we realized it's funny: we had two NLPs that were very, very different. But natural language processing is what we use to help AI understand things and speak like a human. This is understanding what "my order didn't arrive" means, or a voice system recognizing "I want to cancel my subscription" and routing it to the right place.
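To make the intent-recognition idea concrete, here is a toy sketch that routes an utterance by keyword matching. Production NLP uses trained language models rather than keyword lists; the intents, phrases, and queue names here are invented.

```python
# Toy intent detection and routing: map an utterance to an intent,
# then to a queue. Real systems use trained models, not keyword lists.

INTENT_KEYWORDS = {
    "cancel_subscription": ["cancel my subscription", "stop my plan"],
    "missing_order":       ["didn't arrive", "did not arrive",
                            "where is my order"],
}

ROUTES = {
    "cancel_subscription": "retention_queue",
    "missing_order":       "shipping_queue",
    "unknown":             "general_queue",
}

def detect_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, phrases in INTENT_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"

def route(utterance: str) -> str:
    return ROUTES[detect_intent(utterance)]

assert route("Hi, my order didn't arrive yet") == "shipping_queue"
assert route("I want to cancel my subscription") == "retention_queue"
```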

Speaker 1:

So that's how we use NLP to make GPTs think and understand like a human. LLMs are large language models: massive amounts of content that GPTs are trained on. What I think is interesting is that with ChatGPT, when OpenAI was training the model, they were adding things like books and websites and lots of different content to train it. But it was also trained on Wikipedia and Reddit, because those are among the largest sources of human-generated content, and it's important that it learns how to talk like a human. That's why they used Reddit and Wikipedia: to understand how the everyday human would write and type and talk.

Speaker 2:

Yeah, with not-perfect grammar, with some inconsistent capitalization, and just the syntax, probably things we don't even think about as human beings. It's probably impossible to separate the syntax, the grammar, the way people write, from the content, so you're feeding it both the content and the style, and then it's using that information. Is that related to what we talk about as hallucinations? It's a term I've heard. Maybe we're getting off track?

Speaker 1:

No, hallucinations and bias are definitely there. And think about it as a human: we hallucinate things sometimes, by having a missed memory, you know, "I didn't remember it that way," and I think I'm correct until I find out I'm not correct.

Speaker 2:

Yeah, or if I'm confident that I know a fact: until that's challenged, I'm still confident about it, even if it's incorrect.

Speaker 1:

Right, and that's what we talk about with hallucinations: sometimes AI, ChatGPT, generates things that don't exist. For example, I did some data visualizations for a company that I'm consulting with, and I fed them into ChatGPT to see how well it could summarize the visualizations I created and generate content, like an article or a summary for an executive. And as I was checking the data, every single data point was wrong. Every single data point that it saw on the chart was wrong. Where I had things like 34%, it put 39. I don't know if it's just the way it processed the image in its little brain, or if it was trying to subtract things. I have no idea where it got it. So you have to be very careful.

Speaker 2:

What did you do with that? Did you go in and manually edit those?

Speaker 1:

I highlighted everything and showed it to the client: this is why we have job security. You can't just feed it in, and that's what I was finding: people will feed it in and say, this is great, it's a great summary, and they never checked it. So think of your AI tools, your collaborative tools that support you, as somebody who just started on the job. You've just got to double-check everything until you're really, really confident that it's doing it right.
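That "treat it like a new hire and double-check everything" habit can be made mechanical. A minimal sketch, with made-up numbers echoing the 34-versus-39 example: diff the values an AI summary claims against the source data behind the chart, and flag every mismatch for a human.

```python
# Audit AI-extracted numbers against source data; flag every discrepancy.
# All values here are invented for illustration.

source_data = {"Q1 renewals": 34.0, "Q2 renewals": 41.0, "Q3 renewals": 28.0}

# Numbers a model "read" off the chart image (hypothetical, one error).
ai_extracted = {"Q1 renewals": 39.0, "Q2 renewals": 41.0, "Q3 renewals": 28.0}

def audit(source: dict[str, float], claimed: dict[str, float],
          tol: float = 0.01):
    """Yield (metric, source_value, claimed_value) for every discrepancy."""
    for metric, truth in source.items():
        value = claimed.get(metric)
        if value is None or abs(value - truth) > tol:
            yield metric, truth, value

for metric, truth, value in audit(source_data, ai_extracted):
    print(f"MISMATCH {metric}: source={truth}, AI said {value}")
# -> MISMATCH Q1 renewals: source=34.0, AI said 39.0
```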

Speaker 1:

Oh, the other acronym I wanted to talk to you about is ML, which stands for machine learning. Machine learning is how we program AI to think like a human: training models, testing models, doing the predictive analytics, creating these algorithms and so forth. So we talk about AI and ML, machine learning; we talk about AI as ChatGPT; and we talk about AI as agentic AI. So when somebody says AI, it's important to ask them: what kind of AI are you talking about?

Speaker 2:

LLM, GPT, ML: different kinds of AI.

Speaker 1:

Yeah. For me, as a data scientist myself, when I talk about AI and ML, I'm talking about creating predictive algorithms that will predict customer churn, or algorithms that will identify agents who are likely to close tickets versus not close tickets, or upsell versus not upsell. When I think about it, I'm thinking about an algorithm I'm developing that's going to predict something we can then take an action on. So it's a prescriptive kind of output.
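As a hedged sketch of the kind of churn-prediction algorithm being described, assuming scikit-learn is available: fit a simple model on past customers and score an active one. The features and data are toy inputs, not a production recipe.

```python
# Toy churn model: train on past customers, score current ones so the
# team can act on the riskiest. Invented features and data.

from sklearn.linear_model import LogisticRegression

# Features per customer: [tickets_last_90d, months_tenure, csat_avg]
X_train = [
    [5, 3, 2.1],   # many tickets, short tenure, low CSAT
    [0, 36, 4.8],
    [3, 6, 3.0],
    [1, 24, 4.5],
    [6, 2, 1.9],
    [0, 48, 4.9],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = churned

model = LogisticRegression().fit(X_train, y_train)

# Score an active customer; a high probability might trigger a retention offer.
risk = model.predict_proba([[4, 4, 2.5]])[0][1]
print(f"churn risk: {risk:.0%}")
```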

Speaker 2:

Yeah, and what I'm hearing you say repeatedly is that it's about taking actions.

Speaker 1:

If it's helping you make a decision, if it's helping you take an action, that's where it's really important. And that's the intersection of what I do with data-driven leadership: it's not just about the data and the analytics, but also the AI, because we use AI every day to get information. How do I take my data and my analytics and everything I'm doing in AI and blend it together so that I am the strongest leader I can be? How can I future-proof my career by using these tools collaboratively?

Speaker 2:

Yeah, I would love to dig into that more. Of all that alphabet soup, of all the tools that are available, what flavor do you see as the low-hanging fruit that any support organization or support leader can adopt pretty quickly?

Speaker 1:

Yeah, I'd be surprised if people aren't using chatbots. I use one on my website, and I'm a solopreneur, a one-person company; I use it a lot just to automate those quick things that come in that you can quickly answer. It's that low-hanging fruit, that tier one kind of stuff like password resets, and using your chatbots to do those kinds of things.

Speaker 2:

And that has an immediate ROI. When we implemented a chatbot, it took, I don't know, like 70% of our conversations, which for our support team was an immediate relief: hours we're now putting toward more valuable work.

Speaker 1:

Yeah. Knowledge bases and FAQs: using your AI tools to research what kinds of conversations your agents are having, how we can put together the best knowledge base and the best FAQ for our customers, and getting that back onto the website.

Speaker 2:

What are some big fails or big successes that teams you've worked with have run into, maybe because of the way the AI was set up, or maybe as a lesson for the rest of us?

Speaker 1:

Yeah, that's a really great question. Thinking about it in terms of call centers: one of my, I shouldn't say favorite, but one of those things that makes you go "hmm" a little bit, is sentiment analysis. When you do a sentiment analysis, what's happening is that AI or machine learning algorithms are looking at the transcript words to see where there are positive, negative, or neutral words, and then, based on a scoring algorithm, giving things negative scores or positive scores and looking for phrases and words. It then comes up with a sentiment number, and that sentiment number translates into positive, negative, or neutral. I think a lot of agents have seen that: was this a positive sentiment or a negative sentiment? They see the positive, negative, and neutral, but they don't always understand how that was generated, how it was actually derived in the algorithm. And as we were working on these things, we would get results back where we would report out, as an analytics team, that an agent had particularly low sentiment, and then supervisors would go listen to the calls and find out: this is fine. What's going on?

Speaker 1:

So as we started looking through it, we found two things that were really interesting. Sentiment analysis is currently trained on transcribed information, so it's not getting tone and inflection. If you made a sarcastic comment, it might come off as very negative in the call, when it was actually something you were using to build rapport quickly with your customer. So it doesn't do tone and inflection. But what was really interesting was the word "cancel." At that same telecom company I worked with, "cancel" comes in as a negative.

Speaker 2:

"Cancel my subscription. I don't like you anymore."

Speaker 1:

Yeah, exactly. If you're working with sales and retention analytics and somebody says "cancel," that's a negative thing. But for troubleshooting and repair, "cancel" can be a good thing. Say somebody had an appointment for a tech to come out to their home, and the agent was able to resolve the problem on the call: "Would you like me to cancel your appointment, since I fixed your issue?" "Cancel" was coming up as negative even though it was a positive customer outcome. A lot of these tools aren't strong enough to bifurcate the algorithm and say, if it's this kind of call, think of sentiment this way, and if it's that kind of call, think of it that way. A lot of these tools are what we call black boxes.
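The "cancel" failure is easy to reproduce with a toy lexicon scorer, and it also suggests the mitigation hinted at here: score words differently by call type. The lexicon, weights, and call types below are invented for illustration.

```python
# Toy lexicon sentiment scorer that reproduces the "cancel" failure:
# word-level scoring can't tell bad-news cancel from good-news cancel.

LEXICON = {"sorry": -1, "problem": -1, "cancel": -2, "fixed": 2, "great": 2}

def naive_sentiment(transcript: str) -> int:
    words = transcript.lower().replace("?", "").replace(".", "").split()
    return sum(LEXICON.get(w, 0) for w in words)

line = "I fixed your issue, would you like me to cancel your appointment?"
print(naive_sentiment(line))  # +2 (fixed) - 2 (cancel) = 0, neutral at best

# One mitigation: weight "cancel" by call type, since the same word means
# different things in retention calls versus troubleshooting calls.
def contextual_sentiment(transcript: str, call_type: str) -> int:
    score = naive_sentiment(transcript)
    if call_type == "troubleshooting" and "cancel" in transcript.lower():
        score += 2  # neutralize the penalty; cancelling a truck roll is good
    return score

print(contextual_sentiment(line, "troubleshooting"))  # 2
```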

Speaker 2:

If you hear somebody say "black box" in relation to AI, it means something's happening behind the scenes that you don't know and nobody knows, right? Even the developers of the tool put stuff in and get stuff out, and in between, no one knows what's going on. That makes total sense. So, we were talking about inputs and outputs. Have you seen support teams input bad data, train it on bad content? What does that look like, and how do we avoid that?

Speaker 1:

When I started my career at IBM, I was working with outsourcing clients, and one of the things we would outsource was help desks and technical support for their customers. Very large companies would have IBM take over their desktop support and help desks so that their customers were getting better support. We worked with those agents back in those days, and we would add these forms so they could ask the customer questions. What we found is a very common agent behavior: they pick the first thing on the dropdown because they've just got to go faster, or push to get to the next call, right? And it's funny, because it happens today. In 30 years, that agent behavior hasn't changed.

Speaker 2:

When you're measured on the number of calls you do per hour, it disincentivizes you from creating good data.

Speaker 1:

That's correct, and that's why it's really important, as a data team, to understand all of those kinds of pressures coming in and how they can skew the data, and to set things up correctly. One of the companies I was working with previously was getting ready to start doing AI-generated text and notes from the call, so that the agent didn't have to do post-call work to enter them.

Speaker 1:

It was a more accurate way of getting the right information, not having to rely on the agent, and it put the agent where they really are the most valuable: connecting with customers and building those relationships. So yeah, it's a garbage-in, garbage-out thing. If you have garbage coming into your machine learning algorithm, you'll have garbage coming out.

Speaker 2:

What's a guardrail that you would recommend a support team use to help prevent an AI agent, or even an internal ops tool, from hallucinating the value away?

Speaker 1:

I think, especially for support teams, as AI starts to replace those tier one, low-hanging, repeatable tasks: in cases where we see those things being automated away from tier one support, repurpose that time into having agents check and validate the outputs from the algorithms. If the customer says this and the chatbot recommends this action, get those logs and make sure they're being checked by a human, to ensure that the decision your AI is making is the right decision.
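A minimal sketch of that guardrail, with a hypothetical log schema: sample a slice of the chatbot's decision log each day and queue it for human review. The sampling rate and floor are arbitrary starting points.

```python
# Sample chatbot decision logs for human QA. Log fields are hypothetical;
# the point is the human-in-the-loop review, not the schema.

import random

chatbot_logs = [
    {"id": 101, "customer_said": "my order didn't arrive",
     "bot_action": "issued_refund"},
    {"id": 102, "customer_said": "how do I reset my password",
     "bot_action": "sent_reset_link"},
    # ... one entry per automated decision
]

def sample_for_review(logs: list[dict], rate: float = 0.1,
                      floor: int = 5) -> list[dict]:
    """Pick max(floor, rate * volume) random decisions for human QA."""
    k = min(len(logs), max(floor, int(len(logs) * rate)))
    return random.sample(logs, k)

for entry in sample_for_review(chatbot_logs):
    print(f"REVIEW #{entry['id']}: "
          f"'{entry['customer_said']}' -> {entry['bot_action']}")
```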

Speaker 2:

So this brings up a whole new career, for AI babysitters basically, and I don't think we have all those roles and titles defined. I think it's emerging, and maybe I'm getting ahead of myself a little, but we could talk about how a customer support leader future-proofs their career so they're building the AI skills they need to take on those roles.

Speaker 1:

And this applies to everybody: those of us who are not using AI will be replaced by those who are. I don't think AI is going to replace people; I think it is going to reshape our roles. As you said, these tools are still becoming part of our culture. We're redefining what this all means and how it all looks. And again, I'm speaking from what I know today; this could all be different tomorrow. Those simple, low-level tasks can be automated, but that's where we can then have our agents going through and checking the outcomes.

Speaker 2:

Customers can tell when bots are faking empathy, so maybe we should build bots to be bots and not try to pretend, because more and more customers are aware of when they're talking to a bot. Also, customer and public attitudes about AI are shifting rapidly: some people are beginning to trust it more, some are beginning to trust it less, so customers are coming at this with varied attitudes. How do we prevent ourselves from overtrusting the AI to just do whatever it's going to do? How do we calibrate those chatbots, those tools, for a changing customer perspective?

Speaker 1:

I think you hit the nail on the head. The first thing is: don't let it fake its empathy. If the chatbot says, "I am so sorry you've gone through that, I can understand," I'm thinking, no, you don't. You've just been programmed to say that.

Speaker 2:

Right, and that's annoying and unnecessary.

Speaker 1:

Yeah, it would be like talking to an agent who said, in a flat voice, "I am so sorry this happened to you, let me fix your problem." There's no empathy in that. I'm looking for that empathy in the human; I don't necessarily want the empathy in an AI. You're not trying to be a person, you're an AI. Just be an AI.

Speaker 1:

The biggest thing is understanding how something is trained and knowing where it stops being like a human. By default it just tells me I've got a great idea; when I ask it to find my blind spots, it will, but you have to make sure that you are prompting it to do that. So, thinking about setting up a chatbot from a customer support perspective: make sure you're being transparent that this is a chatbot, here's what I can do, here's what I can't do. And give people the opportunity to opt out of that chatbot experience and talk to a human. Our aging communities hate AI and computers. They want to pick up the phone, dial the number, and talk to a person.

Speaker 2:

I can't imagine wanting to call a person as a millennial, but my parents want to.

Speaker 1:

Yeah. My grandmother is 90 now and doesn't get on the phone anymore, but as of a few years ago, she would get very, very angry when she had to go through the IVR, the automated system, and she would not use a chatbot to save her life. So giving people the opportunity to opt out of your chatbot and get to a customer service agent is, I think, really important, and so is understanding what our customer wants. Sometimes we automate because we're thinking about how it's going to save us time and reduce our time per call and our first-call response and all that kind of stuff. We're thinking about it from the metrics perspective, the KPI perspective, but we're not thinking about what our customer wants. And that's really the most important thing: when we talk about CX, we forget what the customer wants.

Speaker 2:

Yeah. And I think, especially in the world of tech, we're very accustomed to working with not just AI but various computers and tools and IVRs, and we lose touch a little bit with the average customer, who maybe doesn't sit at a computer all day long and maybe has very little trust or proficiency. Not even just older folks; I tend to think everybody works with computers, right? But coming back to reality, not everyone does.

Speaker 1:

Not everybody does. That's very true. And that's where governance comes in. What's really important is that you, as a customer support team, have governance put in place.

Speaker 2:

So tell me more about that. Do you mean, like, in the future my title might be "customer AI governance technician" or something like that?

Speaker 1:

Yeah, specialist: somebody who's part of that QA process. How do you do QA and QC for your chatbots and your AI responses? Having that human in the loop to make sure they're catching those errors. As somebody who's programmed AI, I love when somebody tells me that something's doing something wrong, because I've set it up the best way I can, and the more information I get from the humans about where it's failing, the better I can be as a developer at having it catch up.

Speaker 2:

If I wanted to prepare myself for the future: if I'm a support leader and maybe I want to move into AI governance, are there courses you recommend, or ways to become more AI literate and move in that direction?

Speaker 1:

Seems like there's a course for everything out there, and honestly, you don't need them all. I think it's about really understanding the tools you're using in your support groups, understanding how they're being implemented and how they're being scaled in your organization, and then maybe it's less about taking the class and more about learning how to ask. Ask for things like: I'd like to work with you on tuning your AI algorithms or tuning your AI models; I want to be a part of the QA process. That's the hands-on kind of stuff.

Speaker 2:

That's really helpful, because I've also heard this advice, and I think it's good advice, unrelated to AI: if your team uses Zendesk, become a Zendesk expert, right? Get whatever certification there is. If your team uses Intercom, really dig into Fin and how that works. It's sort of a variation on that old saying my mom used to say: do what you can, where you are, with what you have. You can learn a lot from learning a specific tool.

Speaker 1:

Totally, and these tools are changing every day, so get used to using them. Get a ChatGPT account or a Claude account and ask it. Prompt engineering is a big thing. Tell it: here's what my job is, and I'm worried about what's going to happen to my job. Give me five ways I can future-proof my role as a tier one support person, or a tier three, or a supervisor, or a leader. Help me identify things I can do today to start taking action.

Speaker 2:

That's great, I love that. And you used a term not long ago that was brand new to me: prompt engineering. I've kind of waded into this world, but can you tell me a little bit more about what that is?

Speaker 1:

Prompt engineering is how you ask AI to do something without introducing your own bias.

Speaker 2:

So prompt engineering is a technique you can learn, where I'm giving it prompts that challenge my own biases?

Speaker 1:

Do you remember we talked about garbage in, garbage out a little while ago, about how, if you feed your algorithm bad data, you get bad algorithms out? This is what prompt engineering is about. If you give it a very simple prompt, I don't want to call it a garbage prompt, but a prompt that isn't really what you want, it's going to give you what you asked for. Prompt engineering is going a level deeper. So I'm going to be very transparent and kind of vulnerable right now.

Speaker 1:

I am horrible at shopping for clothes. I don't like shopping. I walk into the store and I look around and I'm like, okay, there are all of these things, and I don't like it, can't do it. So if I asked my ChatGPT tool to recommend some outfits that would look nice on screen, in a certain price range, in my color palette, I like blues and greens and blacks, right, it's going to give you a whole list. But is that really what you want?

Speaker 1:

So when I did my prompt, and I literally, seriously, actually did this, I told it: I hate shopping, I don't know what anything is called, and I don't accessorize. I prioritize comfort over fashion or fit. I want things that look nice, that stand up well to being washed, that will last. Okay, now I'm up to like eight things I'm telling it: here's what I want it to project about me, help me be confident, and I need you to tell me where everything is and what it's called. Now, it's interesting, because it gave me this stuff and I'm like, this is fantastic, I love it. And every link I clicked on was wrong.

Speaker 2:

I've had that experience too.

Speaker 1:

But I asked it to describe the piece of clothing it was recommending, so that I could go do my own searches, and that helped me so much.
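To make the technique concrete, here is the same request written twice, once vague and once engineered, in the spirit of the shopping example. The wording and constraints are illustrative, not a recipe.

```python
# Vague prompt vs. engineered prompt: the second spells out constraints,
# context, and output format instead of leaving the model to guess.

vague_prompt = "Write a reply to a customer whose order is late."

engineered_prompt = """You are a support agent for an online store.
Write a reply to a customer whose order is 5 days late.

Constraints:
- Do not apologize more than once, and do not fake empathy.
- Offer exactly two options: reship or refund.
- Keep it under 120 words, plain language, no jargon.
- End by asking which option they prefer.

Context: the delay was caused by a carrier issue, not by us."""

# Either string would go to your LLM of choice; the second one tells the
# model what "good" looks like.
print(engineered_prompt)
```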

Speaker 2:

Yeah, you got specific, and it was useful for translating and giving you things to search; it's the actual linking that fails. I did this with shoes not long ago, and none of the links were right; they were all out of stock. Like, what? It doesn't give you the right links, but it gives you information you can use. That's really interesting. Back to the big question that's on my mind, and hopefully people are thinking about this: will AI replace us as support people?

Speaker 1:

I think it's going to replace some of the work we do. I think AI is replacing 70 to 80% of common FAQ tickets; chatbots are handling those, or customers can log in and do things on their own. Routine order-status checking: you don't need to call an agent to find out where your thing is. It can send you your tracking updates, or you can log in and check. Chatbots are perfect for that. And we've already talked about things like password resets: anything that's really, really simple can be automated. But there are places where the human element is essential.

Speaker 1:

As we talked a little bit about with relationship-building: if a device fails, if something is not working, AI can't navigate that or build the trust. If something stopped working or something upset a customer, what about, and I mean this never, never happens in the CS world, somebody who calls in angry because they've called five times? AI is not going to be able to handle that customer who's had it, who called in five times and isn't getting their resolution. It'll probably just tick that person off a little bit more.
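That hand-off can be an explicit guardrail rather than a hope. A small sketch with invented thresholds and phrases: escalate to a human as soon as the contact count or the customer's language suggests the bot is out of its depth.

```python
# Escalation guardrail: stop letting the bot handle repeat or angry
# contacts. Thresholds and marker phrases are invented for illustration.

def should_escalate(contact_count: int, message: str) -> bool:
    angry_markers = ("fifth time", "still not fixed", "ridiculous", "fed up")
    text = message.lower()
    return contact_count >= 3 or any(m in text for m in angry_markers)

assert should_escalate(5, "This is the fifth time I've called!")
assert not should_escalate(1, "How do I reset my password?")
```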

Speaker 1:

Then there's tier three level support, the stuff where you really need a person to troubleshoot. I was talking with a developer, and somebody had asked ChatGPT to write code, and the code was beautiful. It was perfect, it was laid out correctly. When they actually implemented it, it didn't work. None of the code worked. So I think that's where we see a lot of these things: it looks like it can do the work, but you really need the human to be able to troubleshoot it, implement it, fix it. You also have cases, which probably never happen to you, where somebody says something, but that's not what they meant. As an agent, you can tease those things out: is this really what you meant? Is this really what you were looking for?

Speaker 2:

Like if there's a typo in a customer's reply. All along they've been talking about one thing, and then they have one reply that says "I do not want a refund," but the "not" was an accidental typo: they do want a refund. I can gather from context, from the rest of the conversation, that they do, and check in about that. Yeah, and complex emotional problems.

Speaker 1:

People are still going to need to do that. And then, obviously, we talked a little bit about agents needing to supervise AI and handle those trust moments, being part of that loop where humans are moving up the value chain and letting AI do the stuff lower down. We're moving up the value chain to where we can provide better value, handling these complicated, highly charged, highly emotional kinds of interactions with our customers.

Speaker 2:

If you were advising a support leader setting up AI governance today, right now, what are the first three steps on your checklist?

Speaker 1:

So if I'm setting up AI and I'm getting ready to scale it or adopt it inside of my organization, the first thing I want to do is ask for transparency from the vendors creating these tools, or from your internal IT teams or development teams that are creating them. Ask for transparency: how is this being trained? What data is it being trained on? Sometimes we don't know what we don't know, and there might be a better data set to train it on. Ask those kinds of questions, and present it as: I want to help; I want to make this the best tool we could possibly use to automate things, make our customers happy, and keep our agents happy. Ask vendors for ways to audit the decision logs and for explainability: how can I explain how this is working to other people in our organization? Then guardrails: set up those guardrails for CX, around company policies. Sometimes it gets policies wrong, especially if you've had two or three different policy changes; sometimes the AI will go get the last one. So make sure that things like warranty and refund policies are the most up-to-date, and do that kind of check.
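The "most up-to-date policy" check can be automated. A tiny sketch against a hypothetical policy registry: before trusting an AI answer that cites a policy, confirm it cites the newest version on record.

```python
# Policy recency check: flag AI answers that cite an outdated policy
# version. The registry and versions are invented for illustration.

from datetime import date

POLICY_VERSIONS = {  # hypothetical policy registry
    "refund_policy": [
        {"version": 2, "effective": date(2023, 1, 1)},
        {"version": 3, "effective": date(2024, 6, 1)},  # current
    ],
}

def is_current(policy: str, cited_version: int) -> bool:
    latest = max(POLICY_VERSIONS[policy], key=lambda v: v["effective"])
    return cited_version == latest["version"]

# An AI answer citing refund_policy v2 should be flagged, not shipped.
assert not is_current("refund_policy", cited_version=2)
assert is_current("refund_policy", cited_version=3)
```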

Speaker 1:

Build AI literacy for your agents: help your agents understand how to use AI on the front lines and how to collaborate with their AI tools. When should I use AI-directed content? When should I use ChatGPT or a general tool? Teach them how LLMs, large language models, work, so they understand the risks and don't blindly trust the outputs. And then pilot first, then scale. I think a lot of times we make the mistake, because this tool comes in and the vendor says it's going to fix every problem for us, of implementing it at scale. Start with pilots: certain sites or certain kinds of calls. If you're a multi-site organization, or a group that gets multiple kinds of calls coming in, start with one of those segments of call center agents to ensure that it's working correctly. Pilots are perfect for that. And involve your agents in the outputs and the tuning and the QA process that we talked about; help them get involved as well.
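"Pilot before you scale" can be enforced in the routing itself. A minimal sketch with an invented pilot segment: only one site and one call type go through the new AI flow, and everything else stays on the existing path until the pilot proves out.

```python
# Pilot routing: send only the pilot segment through the new AI tool,
# keep everything else on the existing flow. Segments are hypothetical.

PILOT_SITES = {"austin"}
PILOT_CALL_TYPES = {"password_reset"}

def use_ai_tool(site: str, call_type: str) -> bool:
    return site in PILOT_SITES and call_type in PILOT_CALL_TYPES

def handle_call(site: str, call_type: str) -> str:
    if use_ai_tool(site, call_type):
        return "ai_pilot_flow"   # log these for the QA loop above
    return "existing_flow"

assert handle_call("austin", "password_reset") == "ai_pilot_flow"
assert handle_call("denver", "password_reset") == "existing_flow"
```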

Speaker 2:

It seems like that's a great opportunity to use the percentage of agent time that's not in the queue, because it's good not to overload agents or specialists with too much queue time, and they can do some AI training and AI governance a little on the side. Cool, great. Well, is there anything else? I think we've covered so much. I almost feel like this could be two episodes.

Speaker 1:

Yeah, you know, I think the biggest thing, the last thing I would say to the CS and CX leaders who might be listening to this, is: AI is already in your support stack. You just need to learn to lead with it.

Speaker 2:

Yeah, I love that. Learn to lead with it. Fantastic. Well, thank you for being here.

Speaker 1:

This was so much fun. Thank you for having me. Thanks!

Speaker 2:

Huge thanks to Jen for being here and educating us on AI. I hope you're leaving with clear, actionable steps to bring AI into your support operations, or to better manage the AI tools that are already being implemented, without sacrificing empathy or accuracy. If you enjoyed this conversation, please subscribe to catch the next episode wherever you're listening or watching. And if you know another support leader facing AI overwhelm, please pass this along. Thanks for listening, and we'll see you next time.
