What's Up with Tech?
Tech Transformation with Evan Kirstel: A podcast exploring the latest trends and innovations in the tech industry and how businesses can leverage them for growth. We dive into the world of B2B, discussing strategies and trends and sharing insights from industry leaders!
With over three decades in telecom and IT, I've mastered the art of transforming social media into a dynamic platform for audience engagement, community building, and establishing thought leadership. My approach isn't about personal brand promotion but about delivering educational and informative content to cultivate a sustainable, long-term business presence. I am the leading content creator in areas like Enterprise AI, UCaaS, CPaaS, CCaaS, Cloud, Telecom, 5G and more!
How Agentic AI Turns Calls Into Continuous Learning
Interested in being a guest? Email us at admin@evankirstel.com
Customers don’t call to chat with a bot—they want outcomes. In my chat with Jim Palmer, Dialpad's Chief AI Officer, we dive into how Dialpad’s AI-first platform turns real-time voice, video, and digital interactions into agentic workflows that act on behalf of users, learn from every result, and hand off seamlessly to humans when it counts. No black boxes. No bolt-ons. Just a unified stack where speech recognition accuracy, in-domain adaptation, and transparent evaluations compound into trust you can measure.
We talk through the nuts and bolts of building AI where it belongs: inside the media layer, not tacked on after the fact. That foundation unlocks higher-quality transcripts, reliable summaries, and precise suggestions that downstream models can use without drowning in errors. From there, in-domain training becomes the engine for better answers: when support questions share patterns across industries, the system adapts faster and resolves issues with fewer escalations. And when a case does escalate, the agent sees the distilled context—intent, key points, attempted steps, and recommended next actions—so the conversation keeps moving.
The flywheel is the real breakthrough. As new issues surface in a rolling window of conversations, the platform quantifies impact and proposes automations. Humans review, approve, and refine. The system executes, measures completion and containment, and folds learnings back into models. Layer on governance—clear data policies, observability, and rigorous evaluation—and you get AI that leaders can trust and teams love to use. We also share how internal hackathons and applied research push practical features over the line, keeping innovation and execution in lockstep.
If you care about customer experience, agent performance, and measurable ROI from AI, this conversation brings a clear blueprint. Subscribe, share with a colleague who owns CX or support, and leave a review with your biggest automation challenge—we might feature it next.
More at https://linktr.ee/EvanKirstel
SPEAKER_00: Hey everybody, super excited today to talk with Dialpad about reinventing customer experience, and more, with agentic AI. Jim, how are you? I'm doing well, Evan. Thank you for having me on the show. Well, thanks for being here. Really excited for this chat. For those new to Dialpad, how do you define yourself, describe yourself, these days?
SPEAKER_01: Well, I myself am Chief AI Officer, overseeing all of our AI efforts here at Dialpad. Dialpad is really one of the first truly AI-native, AI-first communications platforms and contact centers, so we've been investing in this AI integration for quite some time. I've been doing AI in the communications and conversational intelligence space for 11 years, having started TalkIQ quite a while ago. It feels a little awkward saying it was over a decade ago, but being pioneers in the AI space is still amazing, because we're measuring things in minutes, not years. I co-founded a startup called TalkIQ in conversational intelligence, and we were acquired over seven years ago by Dialpad to bring AI in and integrate it natively. So: an AI-powered, native communications platform that unifies every customer and employee conversation in one single pane of glass, bringing together all of the channels, digital and voice. My favorite pitch, the way to connect the dots here, is what I originally put in our pitch deck many, many years ago: bringing that last offline data set online. It's basically bringing voice online, and bringing it online at scale, because we have so many conversations. Voice is such a strong component when we're talking B2B or B2C, just talking to our customers. So there's so much opportunity, as we've seen, because the market has grown pretty significantly. I think we hit the nail on the head when it came to an amazing application of AI. That's my long version for you, Evan.
SPEAKER_00: Fantastic. Well, your enthusiasm is infectious. And you had a big agentic AI launch in October; I can't believe we're in November. What was it all about, and what was the impact you're trying to make there?
SPEAKER_01: It was really to introduce our agentic AI story, to show the vision for what the next era is going to be for customer experience. It's not just another chatbot (everybody loves or hates chatbots); it's so much more. It's building on our AI-powered automation that can act on behalf of the user, learn, and improve continuously, working side by side with the employees. And if we get the chance to talk about differentiation and where I think we've got a significant advantage, it's that we're not just throwing AI out there and hoping for the best. We are building this to grow with your business, grow with your customer needs, and also really try to solve that cold-start problem. There's been a significant amount of effort in the industry to put automation on top of AI, or agentic AI, and then build all of these controls around it to try to control your destiny when building an AI-powered communications platform, or anything using agentic AI. What we're doing is building it up to have the advantage of being not only accurate, fast, cheap, and so on, but able to grow with your business as well.
SPEAKER_00: Fantastic. And you've made an extraordinary investment over the past number of years. We won't go over the numbers, but how do you think about AI-first versus bolting AI on later? How does that change what's possible with agentic AI?
SPEAKER_01: I think if you bolt it on later, there's a limiting factor. Is it controlling hallucinations? Is it limiting the complexity of the agentic workflows you can complete? I'm not going to say it's bad in any way, and it does buy time to invest in the future, but you still have to almost start from scratch if you want to bring that in-house, if you want to train. What I'm starting to see built around a lot of the AI are things like guardrails, governance, AI parenting: how are folks able to manage something they can't fully control? Determinism is another one (I hate to use the nerd term here): is it deterministic or non-deterministic? It's really hard to build up infrastructure around something where you don't really know how it's going to change. So there's an inherent risk to the bolt-on strategy. If you're smart about it, you can do the bolt-on strategy and invest in-house in parallel, and I'm seeing a lot of that happening in the space. But that's a big advantage for us: having been investing in the AI, moving in parallel, and getting this to market in the right way at the right time with the right value. Another term we use a lot on the technical front is observability. You need to be able to observe it. Is it successful or not? How are the evaluations working? With a lot of the bolt-on strategies, you're ultimately leaving your customers to be the ones who evaluate it. Either they love it or they hate it, and they might love it for some time, then it stops working, and then they might churn.
But if you're building up the infrastructure around it, you can not only observe completions and containment and all of the standard things, but also evaluate it, because so many interactions and use cases, and even the underlying AI, will change. That investment in evaluations is really the massive one. I'm answering your question and posing a whole bunch of other questions (yeah, there's a smile), so we could dig in on a number of those things. But we really made the smartest decision in making the right investment. I could go into a lot more detail, and we've published papers at academic conferences. Actually, there's a conference happening right now where we've published and had accepted papers before. It's another really beautiful thing, because we're slowly seeing this merging of academia and industry, especially with machine learning and AI, where as a large-scale company with an applied science division building AI, we're able to contribute back. In our latest round, we had a couple of accepted papers talking exactly about that.
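To make the observability point concrete, here is a minimal sketch of the kind of completion and containment metrics Jim describes. This is my illustration, not Dialpad's actual telemetry; the field names and metric definitions are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One customer interaction handled by an AI workflow (illustrative shape)."""
    resolved_by_ai: bool   # workflow completed the task without a human
    escalated: bool        # handed off to a human agent
    task_completed: bool   # the customer's goal was met, by AI or human

def containment_rate(interactions: list[Interaction]) -> float:
    """Share of interactions fully handled by the AI (never escalated)."""
    if not interactions:
        return 0.0
    contained = sum(1 for i in interactions if i.resolved_by_ai and not i.escalated)
    return contained / len(interactions)

def completion_rate(interactions: list[Interaction]) -> float:
    """Share of interactions where the customer's goal was met at all."""
    if not interactions:
        return 0.0
    return sum(1 for i in interactions if i.task_completed) / len(interactions)

if __name__ == "__main__":
    log = [
        Interaction(resolved_by_ai=True,  escalated=False, task_completed=True),
        Interaction(resolved_by_ai=False, escalated=True,  task_completed=True),
        Interaction(resolved_by_ai=False, escalated=True,  task_completed=False),
        Interaction(resolved_by_ai=True,  escalated=False, task_completed=True),
    ]
    print(f"containment: {containment_rate(log):.2f}")  # 0.50
    print(f"completion:  {completion_rate(log):.2f}")   # 0.75
```

The point of tracking both numbers: containment tells you how often the automation finished the job alone, while completion tells you whether customers got outcomes at all, including after a handoff.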
SPEAKER_00: All right, I'll pause there, because that's amazing innovation. I had no idea what you were doing behind the scenes; I'm sure customers and partners would be intrigued to hear more. And you talk about your agentic AI handling any channel, any direction. Maybe give us a peek under the hood, as it were. What kind of architecture, what model design, how is that all possible?
SPEAKER_01: Well, how much time do we have to talk about the model architecture? You know, it's less about the model architecture and more about it being fully integrated. Looking under the hood, we are a centralized communications platform: digital, video, voice, all going through essentially the same channels. We have access to that real-time media, as it were, which is where we've been able to build and integrate all of our AI directly. Even what we've had in the past with our speech recognition, our transcription engine: it has to be as accurate as possible for our customers' use cases, because anything downstream, like anything we might want to send off to generative AI or LLMs, needs to be as close to 100% accurate as possible. Having that direct integration into the media, being able to scale up with our customers' needs, and ultimately being able to use the essence of the conversations our customers are having lets us train more accurate models for their use cases. I'm circling around something that's a huge advantage for us, and it comes down to in-domain adaptation. What I mean by that, let me back up a little: we have a lot of business use cases with customers who might be totally different companies in different parts of America, or even international, but a lot of their customers are asking the same questions, like in a customer support use case. When you see the similarity in what is spoken, written, or texted, it's easier to use that in-domain data to increase the accuracy of AI, or generative AI, for those specific use cases.
So it's almost a slam dunk, and this is what we've been proving again and again with a lot of our published work: the specific patterns in a business context, the in-domain data, are really important for getting the most accuracy. I'll probably keep coming back to that as one of the big differentiators.
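As a toy illustration of why in-domain similarity helps (my sketch; Dialpad's actual adaptation pipeline is not public), support questions from different tenants can be grouped by surface similarity, and the resulting clusters become the in-domain examples a model is adapted on:

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Token-overlap similarity between two questions."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def cluster_questions(questions: list[str], threshold: float = 0.25) -> list[list[str]]:
    """Greedy single-pass clustering: a question joins the first cluster
    whose accumulated vocabulary it overlaps enough with, else starts a new one."""
    clusters: list[dict] = []
    for q in questions:
        toks = set(q.lower().split())
        for c in clusters:
            if jaccard(toks, c["tokens"]) >= threshold:
                c["members"].append(q)
                c["tokens"] |= toks
                break
        else:
            clusters.append({"tokens": set(toks), "members": [q]})
    return [c["members"] for c in clusters]

if __name__ == "__main__":
    qs = [
        "how do i reset my password",
        "i need to reset my password please",
        "where is my invoice",
        "can you resend my invoice",
    ]
    for group in cluster_questions(qs):
        print(group)  # password questions group together; invoice questions group together
```

A production system would use embeddings rather than token overlap, but the mechanism is the same: when many tenants' customers ask near-duplicates of the same question, that shared cluster is high-value training signal.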
SPEAKER_00: Well, it's an important theme. And you've said for many years that customer service isn't about automation, it's about evolution, and this is clearly a big evolutionary leap. But describe that agentic loop between humans and AI. What's the balance? What's your philosophy there? How do you keep that loop productive and flowing?
SPEAKER_01: Let me start off by saying that in the industry, it's still so disconnected. You'll have an existing customer relations setup: you might have a CRM platform, a telephony infrastructure, multiple systems to help you manage your business and your customer relationships. What we're seeing now, especially with this full agentic loop, is that we're able to learn from what is being said in a lot of the conversations, because that changes from time to time. Yes, I said a minute ago that you'll see a lot of the same thing, but it's also a sliding window. There might be a new issue that a lot of your customers are calling your call center about. It's that continuous rolling window, this continuous loop, that lets us build features inside of Dialpad, for instance, where we can support a full call center. Then we can start to see: hey, there's a new question arising in a lot of the customer support calls, something we haven't seen before, haven't built an insight off of before, and don't have any agentic automation for. Say you launch a new product, something is wrong with it, and you start getting a lot of inbound calls; then you have to train your agents to answer those questions. There's that whole human-to-human question: how do I get my agents the right information? What we're building is something that semi-automates at first and evolves with the support and the human experience. We can provide those insights: here's something new we haven't seen, on a large number of calls, with a low predicted CSAT value, meaning not-happy customers. And here are some insights on how we might help you automate that.
But we're going to get someone in there to review that, not fully automated all the way, but provided with all the steps in this feedback loop. My favorite analogy is that it's now turning into a flywheel. It's not only a feedback loop; it's a feedback loop that's gaining momentum, and you can really accelerate your business. That's my favorite part. Before the last year or two, it was really hard to show the potential for an AI feedback loop. You've got the builders and practitioners talking about reinforcement learning, and that's still really hard: you need to continuously train the machine learning and AI based on what the human interaction is doing. After all, that's the essence of AI, right? You're training a machine to do what a human does, embedding all of that inherent bias, and if you introduce something new it hasn't been trained on, it's less likely to be as accurate as you'd like. That's the beautiful part about this feedback loop: we can provide insights for something none of the models have been trained on yet, bring that back into the system, and start to build those automations. And there's another layer to this, and I like to use a wedding cake, not an onion, for this analogy, because you just keep building on it, and I like wedding cakes a lot more than onions. The beautiful part is that now we get to do those automations on behalf of our customers, and their customers or users, and continuously learn from that. We can bring it back into the models, back into the customer experience, and provide the observability, the metrics, to prove that this is actually successful.
And there's this whole transition into positive, outcome-based billing instead of just billing by minutes: people need to really trust the outcomes before they fully buy into paying on an outcome basis. So it's full circle; they know what happened, and they can build trust. And this is on both ends: our users driving, say, a contact center, and their customers as well. I'll admit it's been pretty amazing, even as a light user, taking a step back and seeing some of the things already happening in the field, talking to voice-powered AI and seeing real positive results. It's been really exciting helping promote that. It doesn't feel like I'm just talking to a chatbot that can't answer my questions; there are meaningful outcomes, meaningful resolutions, even for complex workflows. So there's the flywheel and the whole feedback loop. It's such an easier story to tell in my position now, because you can say: hey, look, here's the lift, here's the ROI, here it is completing the task, and here's why you can trust it. And trust is probably the biggest breakthrough for us, the builders and practitioners of AI.
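One turn of the flywheel described above, a rolling window surfaces new issues, a human reviewer approves proposed automations, and the approvals fold back into the system, might be sketched like this. The topic labels, thresholds, and function names are assumptions for illustration, not Dialpad's implementation:

```python
from collections import Counter

def emerging_issues(window_topics: list[str], automated: set[str],
                    min_volume: int = 3) -> list[str]:
    """Topics that recur in the rolling window but have no automation yet."""
    counts = Counter(window_topics)
    return [t for t, n in counts.most_common()
            if n >= min_volume and t not in automated]

def flywheel_step(window_topics: list[str], automated: set[str], approve) -> list[str]:
    """One turn of the loop: surface issues, gate on human review, fold back."""
    proposals = emerging_issues(window_topics, automated)
    approved = [t for t in proposals if approve(t)]  # human review gate
    automated |= set(approved)                       # learnings fold back in
    return approved

if __name__ == "__main__":
    window = ["billing error", "billing error", "billing error",
              "reset password",
              "new device pairing", "new device pairing",
              "new device pairing", "new device pairing"]
    automated = {"reset password"}
    # Pretend the reviewer approves everything surfaced this turn.
    newly = flywheel_step(window, automated, approve=lambda t: True)
    print(newly)      # most frequent un-automated topics first
    print(automated)  # now includes the newly approved automations
```

The momentum Jim describes comes from the last line of `flywheel_step`: each approved automation shrinks the set of issues the next window can surface, so human review effort concentrates on what is genuinely new.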
SPEAKER_00: Wow, such important takeaways, amazing insight there. And yet, as the day-to-day users we all are, we see customer experience broken, where you don't have seamless handoffs between AI and human agents. How can we get to the level of the vision you're describing, that synchronicity between AI and the human agents? And you have thousands of customers; are there any anecdotes, stories, or real-world use cases or success stories you could share?
SPEAKER_01: I can't share explicit names just yet. Stay tuned; that's the best I can say. But the anecdotes come back to the trust, to showing the actual resolution: this agentic automation can solve this issue with a high level of accuracy. And I think you almost set me up for a pitch here, because this is exactly what we're trying to do. We're not trying to replace the agents, the people using, say, Dialpad's communications platform with their customers. We're trying to give them that seamless handoff, because the automations aren't going to handle every single case; they're not 100% accurate. That seamless handoff is pivotal. It's a beautiful thing to have the agentic workflows handle multi-step, even complex workflows and always have an escape route, and then to share all of that context, plus insights on top of that context: not only what the customer said or complained about or asked. When it's diverted to an actual agent, yes, it's seamless; it's inside the Dialpad application, inside our platform. But the most important part is what meaningful context, from what the customer has been fighting the chatbot about, gets passed on to the actual agent. It's not just the full transcript of what was said, or the full script of what they texted or emailed. Yes, you can get to the explicit detail, but here are the insights, the summary, the important points, the action items. Maybe we'll extract those highlights as well as offering the insights.
And if you're talking about a complex workflow, there are really unique things where you can say: the bot, if you want to call it that, or the agentic workflow, missed this opportunity, or it escalated to a human because we didn't consider it accurate enough to do this one step in the process. But here's what we believe it could have been, or here's the other option. It's almost like you're bringing the agent along for the journey, even though they weren't there, and offering them what the potential next steps could be, which they can easily override. And then there are the integrations downstream, because the real point with agentic is the automation: what are we doing on behalf of the user, and what systems do we need to make calls to? Tool calling is the semi-technical term we use in the industry. So having this really seamless agent-to-customer handoff in the platform is really important. And I'm really excited about a lot of other tie-ins for even more complex workflows, because it's not always a low-touch, high-volume call center use case. The agent might need to call back, and when you think about all the other data acquisition for other agentic use cases, agentic can help so much: helping the agent get done what they need to get done, those repetitive tasks, the semi-automation, and so on. So that's seamless handoff. The other part is that the agent can effectively help us build that reinforcement learning. They can tell us what works and what doesn't, what was right and what was wrong.
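A handoff payload of the kind described here, carrying intent, a summary, attempted steps, and an overridable suggested next action rather than the raw transcript, could look roughly like this. The field names and threshold are my assumptions, not Dialpad's schema:

```python
from dataclasses import dataclass

@dataclass
class HandoffContext:
    """Distilled context passed to a human agent on escalation (illustrative)."""
    intent: str                 # what the customer is trying to do
    summary: str                # condensed account, not the raw transcript
    attempted_steps: list[str]  # what the automation already tried
    action_items: list[str]     # open items extracted from the conversation
    suggested_next_action: str  # overridable recommendation for the agent
    confidence: float           # why the workflow chose to escalate
    transcript_ref: str = ""    # pointer to the full record if needed

def should_escalate(ctx: HandoffContext, threshold: float = 0.8) -> bool:
    """Escape route: hand off whenever the workflow's confidence drops."""
    return ctx.confidence < threshold

if __name__ == "__main__":
    ctx = HandoffContext(
        intent="cancel duplicate order",
        summary="Customer charged twice for one order; bot verified identity.",
        attempted_steps=["verified identity", "located both charges"],
        action_items=["refund duplicate charge"],
        suggested_next_action="issue refund for the second charge",
        confidence=0.55,
    )
    print(should_escalate(ctx))  # True: below the 0.8 threshold
```

The design choice mirrors what Jim emphasizes: the agent receives the distilled state of the conversation and a suggestion they can override, with the full transcript available behind a reference rather than dumped into their lap.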
So there's a huge, really unique opportunity for us to use that data to continue to adapt, and to have our models, our systems, and our agentic workflows get better over time. That's the real power. I have a lot more to say on that one, but let me pause real quick.
SPEAKER_00: No, it's a fascinating vision, and it's amazing to see you execute it. Just a final question on you and your role as Chief AI Officer, a relatively new title. How do you see yourself fitting into the overall Dialpad mission? It looks like you're having to balance innovation, all the research you're doing, with execution. What's the right balancing act, in your perspective?
SPEAKER_01: The position is really starting to take a couple of different forms. I like to think of it like the classic CTO, especially at technology companies. In my eyes there were two types of CTO: those who were very future-focused, pure innovation, driving tiger teams to push the innovation, and the applied CTOs, who were all about efficiency in the current product and the build and development. And then some did it all. I like to think of the chief AI officer similarly; there are more than two options, but the number one thing I think is common across every chief AI officer I've met is governance. What are you doing with your data? What can you do with your data? Are you being responsible with that data? The rest of it is really a blend, a smearing, of innovation and pure build, scale, go. For me, it's all of those things. We have our applied science team, a great team: 22 PhDs, 17 patents, 26 published, accepted papers. We've been building this up over time, and we've got a substantial amount of data to leverage: we've already processed over 11 billion minutes of business conversations and generated hundreds of millions of generative AI completions, if you want to call them that, user-facing features people can't live without. The future is very bright for scaling that and continuing to build out all that functionality.
But coming back to it, my position is driving innovation and fostering it, while keeping it grounded, because it can't just be a full-blown research project with no timeline that ends when the grant money runs out. You're time-boxing it: here's our hypothesis, here's how we can run the experiment in a very narrow window of time, and you keep steering in the right direction. The whole idea of applied science: I'm 100% behind it. Then there's the governance, the data governance and the systems, and knowing what our customers need to know from that: how do we govern the data, how do we use it, how do we protect it? All of it is that complete story. And then there's interfacing with the rest of our build: what does the user experience look like, how do we create that feedback loop we were talking about earlier, data collection, and making sure that my team and the company can show our customers it's getting better and better. Not because a third party is making it better for us, but because it's getting better for your explicit use cases, and we can show those evaluations and tests to prove it.
SPEAKER_00: Wow, great philosophy, great approach. We're at the tail end of the busy travel season; you were probably at Gartner Symposium. What's next in the run-up to year end? Will you be on terra firma, in the office? Any other events coming up, anything exciting in the next month or two?
SPEAKER_01: Besides us putting so much effort into the agentic story, which is a given at this point, it's how many breakthroughs are happening, not only weekly but daily, even within our team. It's mind-numbingly exciting; I'm practically shaking all the time because it's so exciting. The best part is having an amazing balance: there are things we are building that we know we need to build, and getting them all in just the right order. From an execution and building standpoint, I'm beyond excited, especially for what's coming up in January, to be continued. And it's also where the market is going, where our customers are. I don't have to sell a dream anymore. The AI is not a dream; it is real, and we are helping to keep making it real and really saving time, pun intended. So I'm excited about a great many things. The other one is our hackathon; it's been a while. We have our own internal hackathon, and it's one of the most fun things at every company I've ever worked at. Hackathons are the best for the engineers, but it's really the most creative time, where we can say: here's this one piece of the stack, we just add this one feature, put it in just the right spot. We've had so many breakthroughs, so many feature adds, where we literally build it and everyone goes: oh, there it is, I see it, it's functioning, and it's one merge away from getting into our production systems and deployed to our customers. Every single time we do our hackathons, we have amazing features coming out that our customers love.
SPEAKER_00: Amazing. Well, I've been a fan and a user of Dialpad for, gosh, probably a couple of decades. So it's amazing to see you take this to a whole other level, a whole other plane. And your enthusiasm is amazing; I kind of get the same dopamine hit from the LLMs and the tools and the apps and the platforms. It's an exciting time. Thanks so much for joining and sharing a little bit of the vision.
SPEAKER_01: Thanks, Evan, and thanks for being a user and a customer. This is fantastic. Keep the feedback coming, and I'm really excited to take us to the next level.
SPEAKER_00: Will do. And thanks, everyone, for listening, watching, and sharing. Also, check out our companion TV show, Tech Impact TV, on Bloomberg and Fox Business. Thanks, Jim. Thanks, everyone.