AI in Action
Welcome to AI in Action, where we explore the latest in artificial intelligence and what it means for your business. Each episode delivers sharp insights, breaking news, and real-world strategies to help you prepare for AI and put it to work.
AI in Action is brought to you by Fast Slow Motion. Our team helps growing businesses put AI to work with practical, scalable solutions. To learn more about how we can help you implement AI in your business, visit fastslowmotion.com/ai.
How AI Eliminates Data Silos in Support Teams
Support teams often struggle with scattered data across multiple systems. In this episode, Eric Housh and Zack Terry walk through how a global enterprise unified its data to power an AI agent that delivers faster, more informed answers.
They break down the architecture behind Salesforce Data Cloud and Agentforce, along with the role of semantic search, unstructured data, and retrieval systems in improving support efficiency and onboarding.
Prefer to watch? Find the full episode on our website: https://loom.ly/WofMnys
AI in Action is brought to you by Fast Slow Motion. Our team helps growing businesses put AI to work with practical, scalable solutions. To learn more about how we can help you implement AI in your business, visit fastslowmotion.com/ai.
SPEAKER_01: Welcome back to AI in Action. I'm Eric Housh, joined by Zack Terry, Director of AI here at Fast Slow Motion. Today we're diving into a high-level implementation for a global enterprise data security company. These are the folks who protect sensitive information for the biggest brands in the world. When you're in the business of security, your support team has to be top tier. Zack, they came to us with a vision for what I'd call the holy grail of support: a truly unified internal AI agent.
SPEAKER_00: That's right, Eric. This was much more than a basic chatbot, and we'll get into all the components that had to come together to create this agent and what it could do. Really, it was about giving the support reps a kind of super brain that could reach into all of these different data sources, some inside the CRM and several outside it, and provide a single unified response. Their engineers could ask questions, get answers, and stop having to work out where everything lives for any given question or problem a customer might be having, and then proactively go get those solutions using this tool.
SPEAKER_01: Yeah, it sounds like when the support engineers ingested these problems, they had to go several different places to find the solution. Not really fast, not really effective, and probably not very consistent either.
SPEAKER_00: Yeah, it's a problem a lot of businesses have: as you grow, you absorb more tools, and different departments use different tools for different things. You can imagine a company like this, a technology company building solutions that help ensure security for their customers. They're managing a code base, they're managing an entire knowledge base, and in this case specifically, multiple knowledge bases. This wasn't just one knowledge base in Salesforce. It was up to three, two of which weren't even on the platform. So imagine the day-to-day of, say, a new support rep just trying to get up to speed and understand where everything is and where to go to find answers. They may not even know where to start, let alone get into the right tool and then search for and find that solution. This was really about consolidating all of those separate sources of data. It's a highly technical environment, so the solution exists somewhere, but finding it may take some digging. The goal here was to make it easier to dig, not to keep digging ourselves into a hole. It was about proactively finding those answers. And the solution was unifying all of those data sources in a way that Agentforce, an AI agent, could actually go retrieve answers from every one of them.
SPEAKER_01: Yeah, I can just imagine those poor support reps. When they get a problem, they're opening up five different systems, five different tabs, and searching through all of them. It sounds like a friction-laden process.
SPEAKER_00: Exactly. And the potential here was really moving from searching around in the dark to answering. The way we described it, we didn't want to build just a search engine, or even a search-and-answer engine. We wanted something that could synthesize the information from the ticket they were working, understand the unique problem the customer was facing, and in some cases go out, find the technical solution, and surface it directly to the support rep. They didn't need the institutional knowledge or need to know where to look; the agent understood where to search, retrieved the solution, and gave it to them, along with some other capabilities we baked into the agent. So it's more than just getting an answer. The goal was to take them from what we'd call a 50-50 shot, and it might even be lower than that, because you may have to search a bunch of different sources before you find the one you think will solve the problem, to more like 90%. We've talked a lot about these AI tools: they're not 100%, and they really aren't designed to be. But if I can go from a 50-50 shot to a 90% probability of finding the right answer, or even a confident response that we don't have the answer, that we weren't able to find it, maybe it's a novel problem we need to document going forward so we can leverage it in this agentic solution later, that's really the goal. We want to make this easy. We want to go from a manual process where you may not even know where the answers live to something that, 90% of the time, finds a good answer that solves whatever problem a customer is facing.
SPEAKER_01: Well, let's get into the cool nerdy stuff. To make that vision work, you moved into a deep integration with Salesforce, Data Cloud, and Agentforce. How did the architecture bring that vision to life?
SPEAKER_00: Nerdy is the right word. So this is a lot of different data sources connected with Data 360. We've talked about this before: Data 360 used to be called Data Cloud, and you'll probably hear us use both terms interchangeably. It's Salesforce's tool for connecting to and ingesting data from external sources, and it can do the same for your internal CRM data, what they call the internal pipeline. So say we have a bunch of tickets and the communication related to those tickets: emails, and the internal case feed where you're chatting, sometimes with the customer, sometimes just internally, but it's all communication about that ticket. Then say you have a knowledge base in Salesforce, plus a separate knowledge base outside of Salesforce. And because you're working with an engineering team, maybe you're using something like Jira to track feature releases, bugs, and problems. You can see how this compounds into a bunch of different sources. The goal with Data Cloud was twofold. First, harmonize all of that data: get it into a central location so that Agentforce, the AI tool we're building on the platform, can reference all of it. Second, enable something called semantic search. We've talked about this a little before, but semantic search is the ability to ask a question or provide a statement, what we call an utterance, and have the AI understand not just the keywords you're providing but the underlying meaning behind those words.
It can then search through those sources, the unstructured data like email threads, case feeds, and knowledge base articles, and line the meaning of your question up with the meaning of what's in those sources. The way that works in Data Cloud is with a vector database that embeds all of that information into a multidimensional space. It's technical and a little complicated, but with Data Cloud it's largely a matter of flipping a switch. That's not to oversimplify the process: you still have to plan, map the data correctly, and connect the right sources, and all of that is complex. But a huge benefit of Data Cloud is that it can apply a vector database to all of that unstructured data, which makes it relatively easy to enable semantic search. So that's the first piece, multi-source harmony: getting Data Cloud set up to ingest all of these different sources, including the historical dialogue around a ticket or case. Then it's configuring Agentforce on top of that. From a technical perspective, we configure search indexes on top of those data sources. Those search indexes are used to create what are called retrievers, the mechanisms by which Agentforce takes your utterance, searches all of that documentation and unstructured data for semantically similar content, and hopefully arrives at a solution from the results. With Agentforce, the Atlas reasoning engine takes your question and determines: am I asking about a previous case or ticket? A historical conversation? A technical brief or knowledge article, an established solution where we have guidelines?
Or do I not know at all and need to search all of those sources at once? So it can decide whether to search efficiently, because it knows we're talking about a historical case, or to broaden the search to include not only case history but also specific knowledge articles and the like. Those are the first two steps: getting Data Cloud configured, ingesting the data, setting up the indexes and retrievers, and hooking all of that into Agentforce, then deploying it in a user-facing way inside Salesforce. In Salesforce agent terminology, it's an employee agent. If you're not familiar with that term, it just means that when you log into Salesforce, you can click a button, a chat window slides over, and you can start interacting with the agent. That's what we built. So when the support reps are working tickets inside Salesforce, they can start asking the agent questions. It has the context of whichever case or ticket they're working, it understands the specific issue surfaced on that case, and it can perform those searches for them. Beyond that, Data Cloud and Agentforce have a built-in citation system: when you ask a question and the answer is grounded in your business data, whether that's case history or knowledge from an internal or external knowledge base, you get links back to the actual source. And we talk about this all the time: it's always important to verify. Even though these responses are less likely to hallucinate because they're grounded in your business data, the citations mean that you, the person working with the AI, can go verify the information that's been provided.
So if you get a chunk of text that says, this is how we should solve this problem, you also get a link that says, this is where I pulled that solution from. You can click into that and verify it before you send a response back to the customer.
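The retrieval flow Zack describes, embedding unstructured records, matching an utterance against them by meaning, and returning results with a citation back to the source, can be sketched in a few lines of Python. This is purely illustrative: the `embed` function is a toy bag-of-words stand-in (Data Cloud uses a learned embedding model), and the source names and URLs are invented.

```python
from dataclasses import dataclass
import math

def embed(text: str, dims: int = 512) -> list[float]:
    # Toy stand-in for a real embedding model: hash each token into a
    # bucket, then normalize to unit length so the dot product below
    # is a cosine similarity. Real semantic search uses learned vectors.
    vec = [0.0] * dims
    for token in text.lower().split():
        vec[hash(token) % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

@dataclass
class SourceDoc:
    source: str  # e.g. "Salesforce Knowledge", "Jira", "case feed"
    url: str     # citation link surfaced to the support rep
    text: str

def retrieve(utterance: str, index: list[SourceDoc], top_k: int = 3):
    """Rank indexed documents by similarity to the utterance and return
    (score, doc) pairs, so every answer carries a verifiable citation."""
    q = embed(utterance)
    scored = [
        (sum(a * b for a, b in zip(q, embed(doc.text))), doc)
        for doc in index
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]
```

A rep's question like "customer reports error 4031 during tls handshake" would then rank a knowledge article mentioning that error above unrelated Jira tickets, and the returned `url` is what lets the human in the loop verify before replying.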
SPEAKER_01: So we basically turned that AI into a super lead engineer who's read every ticket the company has ever closed. But, and you said it there at the end, always keeping that human in the loop to make sure we're delivering correct, concise information.
SPEAKER_00: Super important. And yes, it is sort of taking the role of an engineer, in that it can access the documentation and provide answers grounded in it. But I'd say it doesn't replace the importance of an engineer. Someone who's been working these support cases for a long time may not actually be writing the code, but they have to be familiar with the features being shipped and how they work together. When I get something really technical, say an error code directly from a customer, I have to know what it means and be able to search on it. What this provides is a way for knowledgeable employees and newer employees to be on the same footing. Instead of having that institutional knowledge locked up in years of experience, as long as it's documented, as long as the process lives somewhere connected to the data sources the AI can access, a brand new junior employee who's just been onboarded can get the same information out of the knowledge base as a senior engineer. Now, of course, a senior engineer will know how to ask questions a little better, or maybe a lot better, because they've been doing it for a long time and are intimately familiar with the company's processes and knowledge. But it speeds up that onboarding process quite a bit.
SPEAKER_01: So speeding up onboarding is obviously measurable, a massive impact for a company. What are some other measurable results that this kind of power in the hands of a support team can deliver?
SPEAKER_00: I think you can distill it into a term called knowledge velocity. In general, it's speeding up the entire process. Onboarding is a great use case because it lets new engineers get up to speed quickly and answer questions they otherwise couldn't, because they have this assistant that can go find the answers. But it can also do things like surface a hidden fix. What I mean is, maybe your history of solved tickets contains something similar to the problem you're solving, but it was never documented. That's huge, because should it have been documented? Yes, absolutely. But we all know the reality, especially with a very large volume of tickets: as the complexity of the organization grows, it becomes harder and harder to consistently document these results. Because this tool is hooked into not only the entire history of logged tickets and cases but also the communication around them, I might find an answer to something I'm working on today that we solved two years ago but never documented as an official knowledge article or knowledge source. That's a huge benefit. We talk about this a lot: much of the business value is stored in unstructured data, things like call transcripts. In a service use case it's usually an email thread, or an internal chat or conversation in something like Slack or Google Chat.
And when you connect those together, you can tap not only into the official documentation at the business level but also into the informal conversations where a lot of the problem solving actually happens. I know that when I'm trying to solve a difficult problem, I'm usually not the only one doing it. I tend to be having conversations in chat with other key team members to help solve it, and unless I turn that conversation into formal documentation, it's probably just going to get lost. I may never see it again. To be clear, this doesn't excuse a business from establishing those documentation processes; that's usually something we have to put in place, and we still should. But in the meantime, you can get real value by searching that information and finding those hidden fixes that may not exist in formal documentation. I think that really speeds up operational readiness. It lets you combine all of those different sources, formal, informal, structured, unstructured, and then you have an AI that can search those results and proactively provide solutions based on both documentation and informal conversations.
SPEAKER_01: So, Zack, for the CEOs and IT leaders in our audience, what's the big-picture lesson here?
SPEAKER_00: We beat this drum a lot, so we'll beat it again: context. It's all about context, and that's super important. The AI is only going to be as smart as the data it has access to, and it can only articulate processes that have been documented. So the number one thing is to focus on that foundation, on making sure you're capturing data. And when we talk about those unstructured, informal conversations, yes, this tool can help with that, but the right long-term fix is to formalize a way to document solutions so you don't have to connect eight different sources. Maybe you cut that in half because you have a formalized process for creating the documentation for what your business establishes as an official fix. That's the other danger of connecting a lot of this informal information: how do we know it's actually the solution if we haven't peer reviewed and documented it? Even if we can search that information, which is helpful, the right process is to have an ongoing way to document it, which all comes back to getting that data foundation solid and set up in a way that provides the official right answers. So context is king, and a good data foundation is super, super important. Then there's a term we heard when Marc Benioff was talking about the updates they're making to Slack. They're creating a sort of Slack CRM, and he called it a single pane of glass: Slack becomes the single pane of glass where you interact with everything. You pull all the data from your CRM, and you have AI assistants that can go access that data.
If we think about this solution, it's not the same as moving everything over to Slack, but we are giving the team a single agent with access to all the information they need. Now, I don't know that a single agent is the right solution for everyone, and this may well evolve into multiple agents with more defined, specific use cases. But for now, this solution gives the entire support team an intelligent assistant that can go find information for them, and that might evolve into something more bespoke over time. So that's the second thing, the single pane of glass: the value of the AI increases every time you use it and add new data to the system. When you ingest all of those sources and keep them connected, any change you make in one of those other systems is reflected back into the unified source, which is huge. The third thing is building trust through transparency. If you want the experts in your company using AI, make sure the AI can show its work and provide citations so you can trust but verify. Actually, I don't know that I'd even say trust; I'd say always verify. We've got to keep that human in the loop and verify everything. If you create pathways that make that easier, such as ensuring that retrieved results carry citations, then the people working with that AI system can relatively quickly verify an answer, or say, hey, this is not correct, and go back and keep working with it. Really, anything we can do to make it easier to verify the information and keep that human in the loop in an efficient way is going to be great.
Because otherwise, if we didn't provide a citation, there are a couple of problems. If you're newer, you may not know the answer is wrong, and the only way to verify it is either to bring it to somebody more senior and ask what they think, or to have an established citation to the documentation that shows you whether it's correct. So that, I think, is also a really important side of building solutions like these.
SPEAKER_01: Just like high school algebra. You've always got to show the work, right?
SPEAKER_00: That's right. You've got to show those proofs.
SPEAKER_01: Let's wrap things up for the leaders this week. What are three practical steps anyone listening can take this week to kick off a project like this within their enterprise?
SPEAKER_00: Definitely. Now, I'll say this probably isn't right for everyone. This is for organizations that know they have data silos, disparate data sources, and need something to unify them and then something to intelligently leverage that data. If that's your scenario, the first thing to do is map out those data silos. Where does all of your data live? This example is specific to a support team, but it can apply to any department. Using the support team as the example: what data sources does your support team currently need to access throughout their work stream? Are they going into Slack, Google Chat, or Teams, depending on what you're using? Do you have a system like Jira or something else tracking tickets? Are you using Salesforce as your CRM to track cases? Are you connecting those cases to tickets in some way? Do you have an internal wiki, knowledge articles, or a knowledge base? Think about all of the areas a team interacts with and map them out; that gives you the landscape you're working with. The second piece is to start auditing that "dark data." Ask your team: how much of our best troubleshooting is stuck in old email threads, stuck in chat conversations, or stuck in a senior support engineer's head because that person has been there so long they hold all of that knowledge? Does that knowledge just live in an employee's head, or has it been properly documented in a way an AI system can actually leverage? And the third thing is to think through a retrieval use case. Once you've mapped your knowledge, understand where that unstructured dark data lives, and have a plan to unify it, think about how to actually retrieve that information.
So think about the questions the team might ask and where the data to answer them might live. That really helps when you go to plan an Agentforce agent: it informs how you structure the prompt templates to retrieve that information, and how you structure the actions, instructions, and topics you might put in place. Going in with a plan like that will streamline the process significantly, because otherwise it's a lot of trial and error.
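One lightweight way to capture that planning step, anticipated questions mapped to the silos a retriever would need, is a simple inventory you can iterate on before building anything. The questions and source names below are invented placeholders, not anything Data Cloud or Agentforce prescribes.

```python
# Hypothetical planning inventory: each anticipated support question
# mapped to the data silo(s) a retriever would need to search.
RETRIEVAL_PLAN: dict[str, list[str]] = {
    "What fixed this error code before?":  ["case history", "case feed emails"],
    "Is there a documented workaround?":   ["Salesforce Knowledge", "external wiki"],
    "Is this a known bug in the product?": ["Jira tickets"],
}

def sources_for(question: str) -> list[str]:
    """Return the silos to search for a question; fall back to every
    known silo when the question doesn't match a planned pattern,
    mirroring the 'broaden the search' behavior described above."""
    all_sources = sorted({s for srcs in RETRIEVAL_PLAN.values() for s in srcs})
    return RETRIEVAL_PLAN.get(question, all_sources)
```

Even a table this small makes the eventual agent configuration easier to reason about: the keys suggest topics, and the values tell you which search indexes and retrievers each topic needs.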
SPEAKER_01: The thing I love about this project, Zack, is that it proves the future of enterprise support isn't just about faster answers. It's about more informed answers. And when you unify the data, you're really empowering the people. I have to imagine that for those support reps it's a quality-of-life thing; they're going to enjoy their job more because it's a lot easier to execute quickly and execute correctly. So for anybody out there listening, if this is resonating with you and you want to bridge the gap between your data silos and your team's potential, we'd love to have a conversation: fastslowmotion.com/ai. Zack, bring us home.
SPEAKER_00: All right. Just remember that institutional knowledge is probably your greatest asset. Identify where it lives and figure out whether the foundation is there. Once it is, think about how you can bring AI into the room to actually retrieve those answers and streamline that process for you.
SPEAKER_01: Well said, my friend. See you guys next time.