Shh… IT Happens

AI Burnout Syndrome

Eddie Clark Season 1 Episode 2



Artificial intelligence is moving fast. Businesses are adopting AI tools everywhere, yet many leaders are discovering an unexpected problem. Instead of saving time, AI can create confusion, extra work, and even employee burnout. In this episode, the team breaks down two growing concerns for business leaders: the risk of advertising influencing AI results and the emerging concept of AI Burnout Syndrome. When companies rush to implement AI without clear processes or expectations, employees often end up doing "work about work": reviewing AI outputs, fixing mistakes, and learning new tools faster than organizations can manage them.

The conversation explores what smart organizations should do instead. You will hear practical advice on setting realistic expectations for AI, protecting trust in digital information, and using automation to support human creativity rather than replace it. The hosts also discuss why many AI projects fail, how leaders can prevent burnout, and where emerging tools like AI voice agents may actually create real value. If your organization is experimenting with AI or feeling overwhelmed by the hype, this episode will help you think more strategically about how technology should support your business and your people.

Key Takeaways

  • AI tools can increase workload instead of reducing it. Research cited in the discussion shows that 77 percent of employees report AI increasing their workload because they must manage prompts, review outputs, and correct mistakes.
  • Most AI initiatives fail due to poor processes and messy data. If companies do not standardize workflows and organize their data first, AI cannot produce reliable results and often makes existing inefficiencies worse.
  • AI should free people to think, not push them to produce more volume. The real value of AI comes when it removes repetitive tasks so employees can focus on creativity, critical thinking, and decision making.
  • Ad-driven AI could erode trust in digital information. If AI results begin prioritizing advertisers over accuracy, business leaders may struggle to trust the insights they rely on to make decisions.
  • Leadership strategy matters more than the technology itself. Organizations need clear expectations for how AI will be used; without a plan, teams become overwhelmed by too many tools and unclear goals.
  • AI requires both human oversight and proper training. Employees must understand when to use AI tools, how to prompt them effectively, and how to validate the results they produce.
  • Burnout often comes from poor AI implementation, not the technology itself. When leaders treat AI as a productivity multiplier instead of a support tool, employees experience cognitive overload and constant pressure to produce more.
  • Voice-based AI agents may become a practical next step for automation. Emerging tools that combine AI with voice systems could streamline workflows such as customer interactions, support tickets, and scheduling.


SPEAKER_01

Hey everybody, welcome to Shh… IT Happens, the podcast where business leaders learn what really goes on behind the scenes in IT and how to turn technology into a profit center instead of just overhead. I'm your host, Veronica Sands, and each episode I'm joined by Eddie Clark of Solve IT and John O'Wiler of Century Technology Solutions. No tech jargon, no scare tactics, just real stories, real lessons, and practical insights business owners can actually use. Gentle warning: there may be strong opinions. Let's get into it. All right. So let's talk about what's happening in the world of IT that business leaders should care about. The news has been pretty active lately regarding IT and AI and its impact. So, John, what current topic in the news piques your interest right now?

SPEAKER_02

Yeah, this is a little bit for everybody, not just business owners, but consumer level too. Back in October, November, OpenAI hired somebody for the sole purpose of starting to do an ad platform within ChatGPT. And now in January, they published some stuff on it, and it's starting to go live pretty soon. And if you watched the Super Bowl, there were some really big Super Bowl ads where Anthropic really put a stick to OpenAI about doing ads in their content. And I just think something to discuss is the ethics of it, like what to watch out for. Because if you look at it, you really have some big players. You've got OpenAI with ChatGPT and their public and private models. You've got Anthropic Claude, you've got Google Gemini, and those are the three biggest model creators. You do have Copilot, but Copilot runs on Claude and OpenAI algorithms. So they haven't really developed their own; they're running on those in the back end. And you do have some other third parties like Perplexity and stuff like that that are much smaller. But Gemini, if you think about it, Google has always been an ads platform. They've been an ads platform for 20 years. And if anybody was going to start ads in an AI space, you'd think it'd be Google. But even they, knowing that most of their revenue is coming from ads, are like, nope, we're not doing that. And if you watch that Super Bowl ad or read some of the articles, Anthropic has a very big ethical issue with it. And I think if you look at something like Facebook before ads versus now that there are ads in Facebook, and granted, it's been years since Facebook started introducing ads, but if you look at the type of content and the type of usership and just the way things happen, I'm really worried that adding ads into something like ChatGPT results could really affect the way that we think about information or the way information should be used.
If the answers I'm getting out of my AI are not actually based off of knowledge, and they're based off of whoever bid the most money or wanted to pay to influence me, I don't necessarily like that. You know, what I like about encyclopedias and dictionaries and even like Wikipedia is you go and you get some information, and it's not influenced by money, really. If you bought an encyclopedia or you bought the Webster's dictionary, you can get the definition. Now, yes, OpenAI has come out with, oh, here's how we're gonna do this kind of ethically or whatever, but I really worry that's a slippery slope, that the next thing you know, there are slight changes in what's coming back to you based on who paid more money or based on who was involved in that ad platform and stuff. And I just don't really like that. So I figured we'd talk about that for a little bit, and I'd love your opinion and what you think about it.

SPEAKER_00

So, yeah, I think back to what Sam Altman, of all people, said: when a platform's getting into financial issues, they automatically reach out to ad space, and that's a signifier there. And I don't necessarily trust the results as much. Like when I do a Google search, just using that as an example, you can now see where, okay, this one is a paid result. I always skip those; I don't trust those anyway. So is what we're gonna end up with here something more like that kind of search on steroids? How can you trust a digital assistant to represent your needs and go out and gather information for you based on what it finds out there if it's influenced by ad dollars? I don't see that making a lot of sense.

SPEAKER_02

Yeah, and I'm glad you brought up the financial point too, because OpenAI originally was a nonprofit, then they switched to for-profit. They had a lot of backing from Microsoft and Elon Musk, and Elon got mad at them when they switched to for-profit and went out and created Grok as a competitor. Microsoft has been openly saying they're gonna back down their investment in OpenAI. And I think you're right, I think it is a financial problem. But when somebody has a financial problem, they're willing to do anything to stay alive. And I just don't like that. It doesn't sit right with me.

SPEAKER_00

I guess it depends on the usage. You know, there are paid versions of these things, and if those paid versions keep that out of the realm of the results you get, okay, great. They've got to make money right now. So it makes sense to me that they would need to do something about that. Whether or not this is the right solution for them or for the industry is really in question for me. If, as a standard user using the freebie version, I'm going out and I'm just going to do my resume or create a letter or something, and it shows up with "brought to you by Oreo cookies" in the middle of it, that's not the kind of result that I'm going to be looking for.

SPEAKER_01

I will absolutely check something out if Oreo sponsors it. I will absolutely buy it. I don't even care what it is. But that is a fantastic point, though. I mean, with what you're reading, you always consider the source, right? How do I know that what I'm reading is factual and not swayed or brought to you by whatever, whomever?

SPEAKER_00

Yeah. Gotta be able to trust the results. And when you start monetizing, it seems to me like that's just taking away the level of trust that an end user can have in the results.

SPEAKER_02

Yep. And I think it's a slippery slope. And ChatGPT has several levels of buying, right? You have the $20 version, they have, what, the $150, $200 version, the full professional enterprise thing. And I'm sure that right now they're keeping it out and stuff, but at a certain point, the way these algorithms work, it's influencing the data, the training data, or it's influencing the popularity. Like if you think of TikTok or YouTube Shorts or something like that, it's always based on what's more popular, what people have watched for longer and faster and more and shared more, right? And I'm worried: how do you keep that out of the paid versions and the free version if it's the same model? Like if in the free version this one area has been highlighted thousands more times because it was popular and kind of viral, how are we sure that's not influencing the paid models? Even if it's not saying sponsored by or brought to you by, what if that activity alone is adding more relevance to the training? How can you tell me it's not? Because isn't reinforcement learning one of the key aspects of how ChatGPT is trained?

SPEAKER_00

Yeah, and AI is so pervasive now. Every single product we use or come across now has an AI version, and they get you hooked on that first, and then they start wanting to charge you for it. Oh, well, we're giving you the first six months free. It's kind of like, you know, your neighborhood drug dealer. Take the first hit for free, and then they get you hooked on the product and want to start charging you for it. And I just don't know. The jury's out on that for me.

SPEAKER_02

I don't have a problem paying for some of the AI if it's really beneficial and really helping me and my workflow, or helping efficiency, or I'm getting value out of it. What I want to know for sure is that what I'm paying for isn't influenced in a certain way. You want your AI to be pure in its knowledge, right? There's so much talk about training AI models, and that's the whole argument of private AI models versus public AI models, because you can take OpenAI's 5.0 or 5.2 model and train it on your own data and host it yourself and run it yourself, right? And it's not the public ChatGPT data. There's already a lot of debate about the way companies are looking at that, and companies are doing their own private models because of that. And now I think this just clouds it, whether it's paid for or not paid for, because you can't guarantee me that the data is clean anymore.

SPEAKER_00

Yeah.

SPEAKER_02

At least in my opinion. I don't know. Maybe they'll prove me wrong somehow, but it's kind of crazy that, like I said in the beginning, someone like Google is like, nope, we're not doing this. And they are an ads company and have been for 20 years and made billions upon billions of dollars off of ad revenue. And they're like, no, we're not doing this. That says something to some extent.

SPEAKER_01

Yeah, it does. Absolutely. And especially with how prevalent AI has been in the news lately, if we're speaking to the business owners, the decision makers who are listening to us right now, are they supposed to be excited, nervous, wait and see, hold your breath with something like this with ads?

SPEAKER_02

I think we should approach it with a little bit of caution and nervousness. Now, if they're the ones selling the ads and they're getting benefit off of it, obviously they're gonna be happy about that. But I think for data in general and the way the world works and the way the internet works in general, I think you should be cautious. I think you should be scared of it. I mean, here's the truth: I grew up playing Mario on the original Nintendo. And nowadays, if Mario were a video game where every time you crushed something, every time you died or got to the end of the level, there was a 30-second ad and stuff, it would not be near as popular as it was when I was a kid. It just wouldn't be. And unfortunately, that's the way the internet has gone. And there was a glimmer of hope for AI to not necessarily go that direction, to actually be a source of truth again. Because there was a time when ads really weren't a thing on the internet, and we're talking years ago, the 90s and stuff, but data was more pure. People could publish on forums and publish websites and stuff, and what you found was either someone's opinion or fact; it wasn't influenced by a lot of the dollars and ad spend that it is now. True.

SPEAKER_01

I think a lot of people too are going, well, I paid for this service, and now I have to watch ads on top of that. I mean, you see that with Hulu, Netflix, stuff like that, where it's just like, oh, great, I'm paying for the thing that I'm paying for by paying for it, you know. It's quadruple dipping.

SPEAKER_02

It's a slippery slope. You know, OpenAI is saying, oh, we're only gonna do it this way for now. Or they're not saying for now, I'm saying for now, but oh, we're only gonna do it this way on the free tier. And then next thing you know, a year or two later, it's like, we're gonna go ahead and introduce this into these other areas too, and we're gonna continue introducing it into other areas. And I think you're right, Eddie: as long as they're burning billions of dollars more than they're making, they're gonna continue to have to do something. And with Microsoft pulling out and things like that, I don't see an end to it. I'm just scared what that does to all the other systems that are using ChatGPT, and then what that does for the other models that could follow them.

SPEAKER_00

Yeah, the only thing about this topic that surprises me is that it didn't start sooner. I read recently, I think it was a New York Times article, that investors are starting to lose their confidence in this AI initiative, some call it a bubble, because it's not paying off fast enough. And so here we are: we've still got a pretty long development cycle before return on investment. And it's not just the development side, it's the end user side. And like I said before, it's permeating everything. There's an AI for this and an AI for that. It really can be overwhelming. Yeah, I agree.

SPEAKER_01

All right, awesome. Thank you so much, John, for that little news update. I think AI is not going away in our world or in the tech industry, and it's definitely because of that not going away on our podcast. So we will definitely be revisiting this soon, I'm sure. But I did want to move on to our hot takes and cool fixes. It looks like we have a topic this month. We're talking about AI burnout specifically. So, Eddie, please tell us more about AI burnout and what exactly it is.

SPEAKER_00

Yeah, sure. So there's a new term called AI burnout syndrome. And I just mentioned that AI is in everything and how overwhelming it can become. It's pretty much identified by work exhaustion, which is driven by the rapid adoption and mismanagement of AI tools in the workplace. So I read an article in Forbes that AI can often become a stress multiplier for people because there are a lot of factors driving that. But um, let's see, they estimate 77% of employees are reporting that AI tools have increased their workloads, not made their life easier. And that is leading to a 45% higher rate of burnout among frequent AI users compared to non-users. And the things that are driving these statistics are like workload expansions, doing work about work. Right? You know, you got to come up with processes, you got to do all of these things to make your work work better for you. AI allows for faster output, but that can lead to employers increasing their expectations in terms of performance of their employees. More output, more output, more output. Do more with fewer people, with fewer humans. You hear that a lot. We got to cut overhead. So AI can automate tedious tasks, but it still requires a significant amount of human oversight. Prompting, you got to get your prompts right so that the output is coming out the way that you want it. You've got to validate the results, and all of this leads to cognitive overload. Another one of the problems is there's a constant need for upskilling, having to learn how to use the AI tools at your disposal. We hear a lot about security. Oh, we don't want to expose company data that shouldn't be exposed, right? But nobody's really thinking about what is a methodical approach to implementing AI? What do we hope to get out of it? How do we avoid that cognitive overload so that we're using as humans the better part of our brain power that AI can't do, but delegating the tasks that AI can do in a way that gets us those productivity gains?

SPEAKER_01

That makes a lot of sense. And I do want to actually just follow up with a quick question because I also want to get John's input. But Eddie, as far as when we're talking about AI burnout, we're speaking primarily to employees and staff and stuff like that. So, what should a business owner look out for as far as AI burnout? What's this evidence or the breadcrumb trail that business owners should be following to stay on top of something like this?

SPEAKER_00

Well, it's pretty much the same as it always has been. Are people burning the midnight oil? Do your employees feel as though they're never really catching up? And the way to stay on top of this is through your one-on-one meetings with your team, making sure they're comfortable. Setting goals is a very positive way to figure out what we can accomplish with this tool. Because essentially, AI is just a tool. So, what can we accomplish, and what should our expectations be? What are our limitations? When is it time to set AI aside and just do the work, have the human do the work? So figuring out what those things are is, I think, crucial to preventing burnout.

SPEAKER_01

Absolutely. You got a hammer and a nail, but somebody still has to swing the hammer, at least where we are today. We don't know what tomorrow holds, but there's still a human aspect of it that definitely needs our attention. John, what do you see here? What are some of your thoughts?

SPEAKER_02

Originally it was the three C's, and then it went to the five C's, but there's a guy named Jason Lowell out of Utah. He was in the private sector, worked for some of the big master distributors, things like that. But now he's a professor at the University of Utah and teaches AI stuff. And he talks about how critical thinking, creativity, several other things like that, are, at least right now, always going to be with the human. And that's what the work should focus on. The idea is that AI should take out some busy work for you, repetitive tasks for you, things like that, which allows you to be more critical-thinking, more creative, more cognitive, a few things like that. I don't remember all five of his five C's, but I will Google it later now that this has come up. But it's a very smart approach, and I think he's right to some extent. But he's also had a lot of time to research this, being a professor at a university and also coming from the corporate sector, where he was part of teams that sold this type of stuff for several years. He's got an interesting insight. And so I think what your goals around your AI implementation should really be is kind of like what Eddie said: you need to have a plan about what the expectations are. But if it's not freeing your people up to be more creative or more cognitive and to be able to do some of those more critical-thinking things, then you are using it wrong. I saw a great LinkedIn post, and I posted on it actually, where someone was like, I want AI to do the dishes and fold my laundry and stuff like that. But the point was, I want it to do that so I can focus on, I think they said, art and creative writing. And this was a person that was an artist, right?
And my comment to them was, you should check out the X1 Neo, because that is something that in the next six months will do that for you, that you can buy right now. No paid sponsorship here or anything. But if you're thinking about it that way, of what can I do not to free up my employees to be more productive, but to free up my employees to be more creative and more critical-thinking on the things they're working on, to be more the directors than the cogs, then I think you're doing it right. If you're putting in AI systems that can help with some of that busy work, some of those tasks, maybe some of the analysis of things, and then you as the human are able to sit back and make educated decisions because of that, or be able to direct things differently because of that, then that's more of a success than, you know, what you're saying, Eddie, where okay, you got this done in five minutes, now do it again, now do it again, now do it again. That's honestly just another repetitive task with AI. So something that may have taken me an hour or two before, yes, I can do it in five minutes, but if my boss is pushing for me to now do a hundred more of that because I can do it in five minutes, that doesn't really help your employees.

SPEAKER_00

Yeah, that's a really great point. I think it's important for leaders to understand that you still need to prioritize human connection.

SPEAKER_01

Yep.

SPEAKER_00

You need to protect time for team collaboration and in-person interaction, even if it's on Teams or Zoom or whatever. That is really vital. And perhaps some sort of realignment of management expectations is in order. Focus on using AI to reduce workload, sure, but don't use it just to increase volume and pace. That's where people start to get burned out.

SPEAKER_01

That's a tale as old as time, honestly, Eddie. Because, no, we're not introducing a new concept here. Connect with your employees, make sure they're using their tools and using them properly. That's been true since the dawn of time. So it's a new game, but the rules are the same.

SPEAKER_02

I think something to add to that is clarity. Being unclear is unkind, to some extent, and I don't think there's a lot of clarity around these AI projects. And where I get that from is there's a statistic from the end of last year, out of MIT, that says 90% of all AI proofs of concept fail. And they give some really good reasons why they fail, but a lot of it has to do with the underlying organization. Like, where does all your data live? Is your data even really accessible by humans, let alone AI? What are you giving it to make these decisions? Do you even have your processes well defined? The truth is, if you walk into a room of 20 people and they're all doing it slightly differently, you can't have AI fix that. You need to have that process defined, how things are working and clarity on what that is, and then introduce something to automate that or to help with that decision-making process or to help with that workflow. But if everybody is doing something slightly differently, why are they all doing something slightly differently? Which way is the best way at that point? How do you determine that? And if you as a company can't answer some of that, you're probably not ready to try and introduce AI into that environment.

SPEAKER_00

Right. I mean, let's face it, like with any system: junk in, junk out. I talk to so many business owners and business leaders who are looking at this as, oh, this is a silver bullet, oh, it's going to do all this. No. If you don't have solid processes down, to your point, John, if you don't invest your time into building something the proper way, with human-first connection and understanding what can go right and what can go wrong with that, you're going to exponentially make things better or worse.

SPEAKER_02

I just thought of something else, too: a lot of people experiencing this with AI may not have the proper training, or they may not have trained the system. So it's on both sides of the fence: the AI may not have been properly trained or have the right knowledge, but sometimes the human doesn't have the right knowledge about AI or how to use it. And I do find some companies where they've introduced a bunch of AI bots or something like that, and people don't know when to use which one, or why, and stuff of that nature. That's its own problem. But the other problem that I see is, okay, because they're using it wrong, or it wasn't set up right, or the prompting isn't right, or whatever some of the other reasons you said, now I've got this AI that's doing stuff, but it doesn't do it exactly how I want it. So now I have to spend this time correcting it, and going back and correcting it, and then going back and correcting it. And I think that's a big problem too that causes burnout. Like you said, it's not a silver bullet. You do need to go through iterations of it to get those types of things out of your processes. But if your employees are facing that problem, where you bought some AI tools and did not really set them up right and did not really train your staff right, then yes, productivity could be increased, but it also could be decreased or negatively affected: maybe the AI did this faster, but now I'm spending 20 minutes cleaning it up, and it's just really annoying. And I think that would cause burnout as well.

SPEAKER_00

Yeah, and you kind of alluded to a point about the ever-changing environment. There's a constant learning curve there, because the engines are changing, where they're plugged in is changing, and the applications are changing constantly. And then there's the overarching fear of job insecurity for a lot of people in the workforce. So, again, that human-first approach of reassuring people: look, AI is not here to replace you. You hear about that in the news a lot because, in the news, they're just trying to get viewership, clickbait, what have you. But that is what permeates our culture right now: this fear of, I'm gonna lose my job to AI. Well, it's up to leadership to allay those fears. And it doesn't help that 96% of C-suite leaders expect AI to improve productivity, but 77% of employees report that AI has actually increased their workload. So until this gap gets addressed, we're gonna see more and more of our employees experiencing AI burnout, and just burnout in general. If you're burning the candle at both ends with an always-on culture, that's gonna burn you out. So setting boundaries is really important. I tell my team all the time, I feel it's really important to maintain a work-life balance. I'm not always the best example of that, but it's something I think that needs to be expressed and needs to be followed up on. We incent our clients financially to not want us to work after hours, and that works out pretty well for us.

SPEAKER_02

Yeah, after hours charges, holiday charges, definitely.

SPEAKER_00

That's right.

SPEAKER_02

What I was gonna say is, Microsoft has a good way of defining this. I was at a conference last year sometime, and it's also in some of their marketing, but this is where they got the name Copilot from. Yes, they've trademarked it, but copilot was a term before that, if you think about flying and stuff. They believe every person should have an assistant, aka a copilot, and then every process or every department should have an agent that's working those processes, and that's how they see it going. Now, does that mean it's replacing the humans? No, you should be working alongside them. Microsoft has pretty good content on that that you can find out there, and it would be interesting for most business users or business leaders to try and understand what Microsoft is trying to tell them about the way you should use these things.

SPEAKER_00

They really do. In my certification training at the Microsoft campus in LA, that was what they were hitting on the most, and this was a couple years ago. And that is exactly spot on, because Copilot, the paid version, has access to whatever data lives in your Microsoft application ecosystem that you have access to. But it's vitally important to get the security right, it's vitally important to understand what the capabilities are, and, from a leadership perspective, to set out expectations for what this technology or this tool is going to do for the organization. And again, focusing on the human aspects of oversight, of training, of validation, and whatnot.

SPEAKER_02

I'll give one other thought on this, and it's Star Trek versus Star Wars, right? So if you think about Star Trek, you had Commander Data, and you had this ship that could literally do everything itself, but yet the humans still made the calls on all the important things. They still pressed the button, they still told the computer to do things, right? Even though the computer could have done all that stuff itself, they were still the ones saying, hey, let's do this, let's go warp nine, shields up, stuff of that nature. However, if you watch certain episodes, you know that the ship could do all that without the commands of the humans, right? And so I still think it's important for humans to be in the loop on that and to be in charge of some of that. If you go to the Star Wars side, though, it was a lot more chaotic. If you didn't have the Force, people were poor and they had problems and stuff. And so at a certain point, you've got to think about what society you want to live in. Do you want to live in the Star Wars society, where if you didn't have the Force, the war just ran over you and you were part of a bunch of poor colonies out in space? Or do you want to be more on the Star Trek side, where everything you can do is enhanced by the systems at hand, but the systems at hand aren't in control, they aren't rolling over you, they're made for the greater good of everybody?

SPEAKER_01

So Star Wars is where automation goes wrong. 100%. That's a fantastic example.

SPEAKER_02

100%. The Death Star was definitely a method of death. And in Star Trek, you have some people outside of the Federation that may have a different mentality. But if you look at the way the Federation operates, if you're a Star Trek fan and understand it, it was very computerized; everything could have been done by computers. In fact, there was almost no reason for the humans to be needed. You even had the Borg, right? The Borg could do everything. But the major critical thinking, the major creativity, the major thought processes were all still done by the humans.

SPEAKER_00

Well, there was one series where the ship actually achieved consciousness. But, you know, I'm geeking out here because I'm a Trekkie.

SPEAKER_02

Yeah, but that's an outlier example. If you look at the main Star Wars, I mean, sorry, the main Star Trek lineage of Next Generation and the original Star Trek, and even Voyager and some of those. Well, we're getting really geeky now. And Deep Space Nine. They all kind of had that same premise. Yeah.

SPEAKER_01

Those are all really incredible points, too, because we're dealing with something that we don't yet understand. And even if we do understand it, what it is today isn't going to be the same as what it is tomorrow. So I think one of the takeaways on AI burnout is that the human component still matters. Observe your tools, observe your people. Make sure you've got the right person sitting in the right chair, the right tool for the job. And honestly, speaking of tools, that leads me into our final segment. I want to talk about tools, and John, I hear you've got a great one. We've got Tool Time: not necessarily pretty or colorful, but real tech that helps businesses actually run better. So, John, chat me up. What's one tool or product you're really excited about right now, and what problem does it solve?

SPEAKER_02

It's actually a group; there are two or three products. You have Retell, you have Vapi, PolyAI is another one, Infravoice is another one. They're all voice agents that you can tie into a phone system or use through different channels, and a lot of people are using them in phone systems. What I feel is important about these is, let's face it, sometimes you talk to AI and you know it's AI and it sucks, right?

SPEAKER_00

However, it tells me all the time.

SPEAKER_02

Yeah, however, some of these systems actually do very well. And it's something I've been playing with, because voice interaction with AI, whether through a telephone or other means, is becoming more and more important. If you think about it, a lot of people started with Alexa or Google Home or Siri, which were voice activated, and I think people are more natural with voice. So I think some of these products can tie into your workflows. A lot of people think of AI as just chatbots, and while that may or may not be totally true, there are a lot of other AI systems out there, like visual and audio. When most people interact with ChatGPT, Claude, or Copilot, it's really through text, typing things out. And I think we're getting to the point now where you're starting to see people layer on voice more. Not that it wasn't always there; it was just more of an outlier, and you had to really have your stuff together to do voice. But I think you're at a point now, like you have Hard Rock, which came out with Laric, built on PolyAI. That's an interactive persona you can call and talk to, and you can reach her a few other ways as well. Laric admits up front that she's an AI, and she can tell you all about dinner times and reservations and all sorts of things from the different systems Hard Rock has.
I've personally liked some of the stuff with Vapi, which is similar. What I like about Vapi is that it can tie into your phone system very easily through SIP, which is technical jargon for the protocol the newer Voice over IP phone systems use. It can tie in very easily as an extension to almost any phone system, whereas with some of the others, like Retell, you have to actually have a public phone number to call into it, though you can also talk to it through other means, such as an app on a computer. So I just wanted to bring some of that to light. I think it's some of the next evolution we're going to see, and I think it's the next area where there actually could be some strong investment that makes sense. Voice agents can speed things up, because people type a lot slower than they talk, and people usually read a lot slower than they can listen. So if you can speed things up with voice agents, and if the voice agents aren't annoying because you've set them up and programmed them well, I think it really will save time and also help at times when people aren't working, like the after-hours example you gave earlier with your team. If I could have a basic AI voice agent that people could call into that says, hey, look, I'm whatever AI, everybody at SolveIT is busy right now, or everybody at Sentry is busy right now, can I help you with some things? If it can answer the questions, great. If not, it forwards them on. But if it could take in basic data to open a ticket, or determine how critical an issue is and whether it should actually wake up one of your techs at three o'clock in the morning or just go into the ticketing system for the next day, how powerful is that for companies like ours?
But take that to something like restaurant reservations: you just want to check your reservation real quick, maybe change the time by 15 minutes. The truth is, AI can do that. And if you do it through a voice agent, it becomes very easy to start that communication. So I think there's a lot of opportunity for it.

SPEAKER_01

What are some things people should look out for when shopping for these tools? And not necessarily "look out" as in gotchas, but what should someone consider when evaluating tools like this?

SPEAKER_02

Well, actually, we had a client recently that had a misstep with this. They chose one of the systems that requires public phone numbers, and, without really talking to us first, they ended up forwarding calls from their phone system out to this other system, which then forwarded back into their phone system, which might then forward back out to the AI agent again. The next thing you know, they started getting bills for a lot of money, because all of that traffic was going over the public telephone network. A call would go from their system to a normal phone number across the public network, then the AI assistant might forward it back to the phone system to get to somebody else, and then it would forward back over the public phone network again. So they started getting billed for all that usage, and it got very expensive. So I would say you need to talk to your phone company, and maybe to some experts, about making sure the architecture is right for the phone system you're using, or for the other systems you want to use. Because, like I said, you don't have to use the phone. It could be an app that lives on a computer, or something like an Alexa plugin. There's a lot of opportunity there, but this is probably not something you shop for alone. This is probably something where you need somebody to talk through your goals with and show you what's out there, because there are a ton of companies in this space, and not all of them are the same. Their marketing and sales departments, of course, will tell you, yes, we can do everything in the world, right?

SPEAKER_00

Very expensive, yeah. But start with the end in mind, bring in an expert, and let the expert help you determine what products and services are the best fit for your needs and your budget.

SPEAKER_01

It sounds like you need to make sure you engage people who understand the nuts and bolts of what you're doing, and no one person can necessarily do all of that. You've got the folks who are trying to sell you the tool, and their job is to sell you the tool. They can tell you all the amazing things it does, but they may not be overly transparent about long-term costs or training. Then you need to engage your IT people, someone who understands your infrastructure. It could be a great tool, but maybe not great for you, or maybe you only need certain features of it. So get a real 360-degree view and get everybody involved. I can't tell you how many times we've said, let's get a call together, let's huddle up: your guys, our guys, their guys, and any random guy off the street who may know something about SIP. Let's just talk about it, because you need more than one brain there.

SPEAKER_00

Yeah, absolutely. Determine the stakeholders, get them all in a room, even a virtual one, and just start spitballing initially and then pull in your experts after you figure out what it is you want to accomplish.

SPEAKER_01

Absolutely. All right, great tools, John. Thank you. That's it for episode two. That wraps up this episode of Shh… IT Happens. Remember, most IT problems don't start with crashes or hacks; they start with small business decisions that quietly add up over time. The goal of the show is to help you spot those moments earlier, ask better questions, and use technology to support your business's growth instead of slowing it down. If you want help applying what you heard today, or you're dealing with something that feels like it might turn into a bigger issue, you can reach out to Eddie at SolveIT or John at Sentry Technology Solutions. Thanks for listening. Until then, make IT happen.