Evolving the Enterprise
Welcome to 'Evolving the Enterprise,' a podcast that brings together thought leaders from the worlds of data, automation, AI, integration, and more. Join SnapLogic’s Chief Marketing Officer, Dayle Hall, as we delve into captivating stories of enterprise technology successes and failures through lively discussions with industry-leading executives and experts. Together, we'll explore the real-world challenges and opportunities that companies face as they reshape the future of work.
The Enterprise AI Balancing Act: Strategy, Culture, and Value
In this episode of Evolving the Enterprise, SnapLogic CMO Dayle Hall sits down with Amit Dingare, Chief Artificial Intelligence Officer at PRGX Global. Amit unpacks what it really takes to move beyond AI hype and create measurable business value. He shares how enterprises can align AI with business strategy, avoid common pitfalls, and build the cultural foundations needed for long-term success.
From practical frameworks like the AI “Sharpe Ratio” for prioritizing use cases, to lessons learned about leadership buy-in, data readiness, and internal PR, Amit brings clarity to a space often clouded by buzzwords. He also looks ahead to the promise of agentic AI, where systems don’t just advise but take real action, unlocking the next level of enterprise productivity.
Dayle Hall:
Hi, welcome to our latest episode of Evolving the Enterprise. I'm your host, Dayle Hall, the CMO at SnapLogic. Today, we're looking at how to move beyond some of the AI hype and deliver measurable business value. I know everyone is out there right now thinking about, okay, there's a lot of AI hype, but how do I actually deliver more value to my business, make sure that we have the right innovation and technology to really help the company. That's what it's all about. Yes, some of this is cool, but we're here to do a job for our enterprises and our businesses. That's what we're going to discuss today.
Our guest is Amit Dingare. He is the Chief Artificial Intelligence Officer at PRGX Global, where he leads AI strategy, execution, and innovation. He helps organizations identify the right opportunities, overcome data challenges, and turn AI into something tangible that businesses can really track, that they can get value from, and that sticks throughout the organization.
Amit, welcome to the show.
Amit Dingare:
Thanks, Dayle.
Dayle Hall:
Yeah, of course. Let's just start with a little bit on your background, how you came into this position, because obviously the rise of the Chief AI Officer has been meteoric, particularly in the last couple of years, but it's been around for a while. So tell us a little bit about yourself and how you got into this position.
Amit Dingare:
Yeah. First of all, I work for PRGX Global. It's a recovery audit company. I joined the company about 18 months ago as Chief AI Officer with the intention of bringing value using AI. But I've been in this field of artificial intelligence for a long time, even when it was not called AI. I used to work in statistics at one point, then I worked in data science, and then everything got rebranded as AI. So here I am. I've been doing it for nearly two decades. I started my career as a demand forecaster in retail, and then slowly made my way into machine learning, deep learning, and now gen AI is all the rage. That's what gave this field the AI title.
It's been a fruitful journey so far, very good in terms of creating value using numbers. If you like that, this is really a good field, apart from investment banking. Maybe not. I'm not sure. But yeah, that's my thing. I have worked in various industries: consulting, retail, CPG, manufacturing, pharma. So I have seen it across the board.
Dayle Hall:
Yeah. Given you've done a lot of data science, machine learning, you have a very broad but a very specific background to this area. Just to kick off, what has been the most surprising thing you've seen in the last few years? What have you seen that just made the hairs on your arm just stand up because you're just so excited about some of the things that you've seen?
Amit Dingare:
Yeah, I think the scope of things that AI can touch has just broadened significantly in the last, I would say, three to four years. We have been creating predictive analytics for a long time using machine learning and deep learning, and we have a lot of experience in that. But the closeness to human capabilities is something that has really been seen only in the last three to four years. And that's what's exciting, and at the same time challenging, because it has also increased expectations of what AI can do. So you have to manage the expectations. There is also a lot of hype that comes with that. So you have to manage that hype, dim it down, and really shine a light on what AI can do for your stakeholders, your board members, and your senior leadership.
Dayle Hall:
Yeah. I think with our customers specifically, we've seen them do some incredible things. Some of the things that we're doing are actually really simple, because SnapLogic does data integration, app integration, all those kinds of things. Just having a simple tool where customers can describe the pipelines they need, the kind someone would have had to build by hand five years ago. You see the faces of some of these IT people; they're like, wow, this is amazing. And yes, there's a lot of technology helping that work in the background, but it's just a very simple thing. But the experience is significant. Obviously, we're seeing tons of other value outside of that.
From your perspective, because you've seen this development, what do you think the biggest misconception right now is around AI in the enterprise? Where are people potentially misunderstanding what AI is and how it can help?
Amit Dingare:
There are several misconceptions, but the biggest one, I would say, is treating AI as a technology problem, or a technology solution, however you look at it. I think that's the biggest misconception. For you to get value using AI, it's a cultural shift. It's a mindset shift that's required. It touches all aspects of the enterprise, so change management is a very important dimension that comes into the picture as a result. Disconnecting AI from your business strategy is also a mistake. AI is just a part of your business strategy. There is no such thing as an AI strategy; it's a business strategy that eventually turns into your AI outcomes.
Dayle Hall:
Yeah. Do you think a lot of the enterprises that at least you talk to, that misconception, do you hear that a lot when you talk to other clients or customers or partners? Are the enterprises really pulling it in as part of their business strategy, or are they looking to solve specific use cases? How are they approaching bringing in AI to their enterprise?
Amit Dingare:
It varies by customer. There are some customers that have been doing it for a while, have learned their lessons, and have figured out that you really cannot treat it as a siloed, isolated problem the way you did in the past. And then there are many others. It's the Pareto principle: maybe 20% have realized that this is not a separate thing from your business strategy, but the majority are still treating it as a separate thing. As a result, you hear things like, oh, I'm not seeing value from AI. Yes, because some of the things that are required are things you're not doing correctly.
Dayle Hall:
So in both scenarios, listening to this podcast, what would you advise someone as to how to approach AI initiatives or use cases initially? Is there a methodology or is there a way that you've seen it be more successful from your experience? How do you approach bringing AI initially into the enterprise?
Amit Dingare:
I'm a big fan of the balanced scorecard approach, where you have a scorecard at the enterprise level that tracks your KPIs. And then you start connecting each of those KPIs and see where AI can make a difference. AI then becomes part of achieving those balanced scorecard KPIs. You start from the top, with your business KPIs, and identify which of those KPIs AI can influence or add value to. And then you go from there to see, well, for each of my business lines, how do I create value for each of those KPIs?
That went really well for me in my last three jobs. I'm a big fan as a result. And what that leads to is this natural connection between your business strategy and your AI strategy. There is no separation there.
Dayle Hall:
Yeah. You said you've done that over the last three companies. Do you have an example of what a good KPI is? I think anyone listening to this, I'm sure they would like to have a balanced scorecard. They would like to have KPIs that show the value of these projects that they're going to embark on. What is a good example of a good KPI in your experience, maybe from a project that you've worked on?
Amit Dingare:
There are definitely separate KPIs by industry that come into the picture. I can give a very good example of a KPI that one of my previous employers used. It was in manufacturing, and the goal was to increase safety. Their goal was to have less than a certain percentage of accidents, and there are different classifications of accidents that happen in an industrial facility. We took those and asked, well, where can AI help? One KPI they had was average away-from-work time: how many days, on average, an employee at a given client site is away from work as a result of an industrial accident. That number should go down. We took that as a KPI and asked, well, how can AI help?
And then we built computer vision and various sensor-based monitoring, tied them together in an AI system that could create alerts and avoid accidents in a manufacturing plant, and really showed the difference technology can make in that area. So that's what starting from the top and creating tangible projects to deliver value looks like in a typical setting.
Dayle Hall:
Yeah, no, I like that. It's a great example of something tangible, that you can use AI to help deliver value, but you have to put it to something measurable and trackable. And I think the people that are seeing the most value are the ones that tie it back to a business problem or something they're trying to solve within their organization. I think that's very telling, and that's probably the best way for people to look at it.
How do you determine at this point, though, what is the best thing to work on? Because I think enterprises are always trying to improve certain things. Again, they're trying to solve business problems, deliver better products and services to their customers. They've got internal challenges. There's probably a lot of parts of an organization, an enterprise these days that are looking to solve certain problems and looking at AI. How would you advise someone with multiple potential use cases? How do you prioritize?
Amit Dingare:
One thing I have really learned is that, in most cases, people have a framework of creating a two-by-two of value versus effort. You take use cases and see where they stand on that two-by-two. But this particular two-by-two typically ignores the probability of success. One thing that I have developed, and actually wrote a post on LinkedIn about, is what I call the AI Sharpe Ratio. In finance, when you think about making investments, you also think about the risk of those investments. Applying that same methodology to AI investments is what I call the AI Sharpe Ratio. What you do is you look at your value, but then you adjust for the probability of success, or the risk of implementing it, however you want to call it. And then you arrive at a metric that allows you to compare all these use cases.
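The AI Sharpe Ratio Amit describes can be sketched in a few lines. The specific readiness factors, their independent multiplication, and the example numbers below are illustrative assumptions, not details from the episode:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    value: float         # estimated business value, in any consistent unit
    data_ready: float    # 0..1: is the data complete and clean enough?
    change_ready: float  # 0..1: is change management and buy-in in place?
    tech_ready: float    # 0..1: is the technical capability proven?

def probability_of_success(uc: UseCase) -> float:
    # One simple choice: treat the readiness factors as independent
    # and multiply them into a single probability of success.
    return uc.data_ready * uc.change_ready * uc.tech_ready

def ai_sharpe_ratio(uc: UseCase) -> float:
    # Risk-adjusted value: raw value scaled by the probability of success.
    return uc.value * probability_of_success(uc)

cases = [
    UseCase("modest-value, low-risk use case", 5.0, 0.9, 0.8, 0.9),
    UseCase("high-value, high-risk use case", 9.0, 0.5, 0.4, 0.6),
]

# Rank use cases by risk-adjusted value rather than raw value.
for uc in sorted(cases, key=ai_sharpe_ratio, reverse=True):
    print(f"{uc.name}: {ai_sharpe_ratio(uc):.2f}")
```

Here the flashier use case has the higher raw value (9.0 versus 5.0), but after adjusting for risk it scores 1.08 against 3.24, so the modest one would launch first.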
Now, of course, the risk measurement is not perfect. You need to make some assumptions there. But having a framework like that gives you the ability to compare several use cases in parallel. And when you think about the risk of implementation, or probability of success, you take into account various factors. For example, is the data ready? That's a big challenge. Also, in answer to your earlier question about the challenges and misconceptions that exist in business, one misconception I have seen is people thinking that AI is like this magic wand that's going to solve all the problems.
Dayle Hall:
Oh, come on, Amit. Don't tell me that. Everyone thinks that. It's going to change everything for us. Come on.
Amit Dingare:
I think it's not there yet. Maybe in 10 years you won't need to worry about data problems, but not right now. So is your data ready? Is your change management in place? The assessment takes all of those factors into account and creates a risk rating for the use case. And then you adjust your value for the risk. You may see that in some cases, while the value of the project is quite high, for the various risk factors I just mentioned, the use case is not ready for launch at this point. So that's how I prioritize, and I have found that to be very successful so far.
Dayle Hall:
Yeah, that's a good way of thinking about it. I think there is this concept of everyone wants to show their leadership teams that they're using it. Even me, to some extent, I put a bounty with my own team to say, hey, if you can find a way to use AI, show measurable value, you get a cash bonus. It's not the amount of the bonus that really matters. It's the fact that we want people thinking about ways to solve problems. Again, go back to business value or go back to your own production, getting faster at what you do or taking some of the manual stuff out. Yeah, I think that's a great way of looking at it.
As you mentioned, tying it back, a clear value to the business, that has to be the best way of looking at it, right?
Amit Dingare:
Yeah. And the measurement of value, I think it's not always easy to do, especially when you are starting a use case. I typically have this notion of plus or minus two standard deviations. You don't need to have a perfect number. When you start the use case, you probably start with a range, which is broad. As long as you are consistent across all use cases, that's all that matters.
And then as you progress, say you complete 25% of the use case, you start refining that value metric. You narrow down the range from two standard deviations to maybe one standard deviation. And hopefully, as you go towards the end, you have a good understanding of the value by the end of the use case. Again, it's coming from my statistical background, but it really helps me to think about and visualize the tracking of use cases and how to measure the business value along the journey.
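The narrowing value band Amit describes can be sketched as a ±k standard deviation range that tightens as the use case progresses. The numbers below are purely illustrative, not figures from the episode:

```python
def value_band(estimate: float, stdev: float, k: float) -> tuple[float, float]:
    """Return a (low, high) value range of +/- k standard deviations."""
    return (estimate - k * stdev, estimate + k * stdev)

# At kickoff: a rough estimate with a wide +/-2 standard deviation band.
print(value_band(estimate=2.0, stdev=0.5, k=2))   # (1.0, 3.0)

# At roughly 25% complete: a refined estimate, tightened to +/-1.
print(value_band(estimate=2.25, stdev=0.5, k=1))  # (1.75, 2.75)
```

The point is consistency: every use case reports the same kind of band, and the band narrows as milestones land, converging on a defensible value number by the end.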
Dayle Hall:
Yeah. You don't have to name any names, but do you have an example of where this hasn't worked out, call it a cautionary tale of how someone may have tried to use AI? It doesn't have to be a massive failure, but as we're trying to chase these great initiatives, and I don't want to call them shiny objects, but as we're trying to get value, do you have an example or some guidance of what to be cautious of? Okay, tie it to KPIs, great. Show measurable business value, perfect. But what happens when this goes wrong?
Amit Dingare:
Yeah, those cautionary tales are too many to tell. I'll choose one. I actually worked on one use case around productivity: how do we improve the productivity of a particular process we had in a manufacturing setting? We did all the legwork beforehand. We felt the business was ready, the data was in good enough condition for us to use, so we checked all the boxes and started the use case.
What we made a wrong assumption about was the actual leadership buy-in for that use case. We had certain assumptions about those leaders. The larger leadership present in this business said, oh yeah, yeah, we are all in for it. But behind the scenes, somehow we had not managed to achieve their buy-in. So we did the use case, did the pilot, showed the success. We all sang Kumbaya, had a pretty nice launch party for the use case. This was in a manufacturing plant.
The moment we actually deployed it and all went away, we started seeing that usage of the use case was not good. We had built a model that would make certain recommendations, and if people used those recommendations, the amount of time they had to spend on the task would go down. But people were just not using it. We tracked this information behind the scenes, and we realized that somehow the team was not empowered to start using this model. For various reasons, people felt it might lead to unemployment, a lot of job loss, and whatnot, all the typical human concerns.
But then we went back, and what we saw is that the usage was quite different across shifts. This particular manufacturing plant had three shifts, and the usage in one shift was much better than in the other two. We looked at that, and we found that the team leader for that particular shift was a big tech supporter. He himself was learning AI and data science by taking courses. So we said, well, we need to use his enthusiasm to get support.
What we started doing is we doubled down. We put more effort into supporting him and making him successful. As a result, when that particular shift started using our model, we saw the reduction in the KPI we were actually trying to measure, the effort required. And then we used this particular person's clout in the company to go and have a larger presentation where we invited the entire department. He came and showed, this is how we were able to see the performance improvement because of the AI model. When he became our champion, we started seeing better usage and better adoption. It was something that went really badly at first, and then we turned it around.
Dayle Hall:
Yeah, that's good. Now, you mentioned the term champion for the project. And this is something that I think about a lot. As projects get proposed or opportunities come up for certain use cases, how important is it to have that leader? It could be a leader in the business, it could be within IT, it could just be a champion for the project. Where does that person sit?
And then in terms of how you manage ongoing projects, how are enterprises, customers, how are they setting up? Is it like a cross-functional- I've heard of task forces and steering committees and all these kinds of things. What have you seen that's working, and how would you advise people to manage these ongoing projects?
Amit Dingare:
Yeah, it's a good question. Having this champion is a very important thing for the success of the project. If I look at my past and compare the projects where I saw success versus where I didn't, I can see that having the right champion was a big factor. So nowadays I put in enough effort to find this champion early on.
There are certain criteria you need to apply. One is that this person needs to be in a high enough position to influence decisions, and needs to have respect and clout within the organization. And then you start empowering them to make them successful. As a result, you get the business success you're hoping for from the projects.
I really practice agile methodology and mentality in running AI projects. I'm not a big fan of having too many committees because you run into too many cooks in the kitchen problem.
Dayle Hall:
Yeah. And then nothing gets done.
Amit Dingare:
Nothing gets done. Exactly. I feel that you typically need to have an operating committee, which is your project team and the immediate business stakeholders, where this champion sits. And then you need to have an SLT, which meets every so often, because keeping the SLT informed and aware of your success is very important.
As you do that, I feel that you can manage all the branding, all the PR around your projects really well. And typically these champions become part of the SLT presentations as well. So you are now providing a forum for this person to talk to the senior leadership, and that really helps to get traction for the project.
Dayle Hall:
You said something there that just pinged something in me. You said, manage the PR on the project. That's an interesting concept, because obviously when we think of PR, we're thinking the media or externally, what we're doing as a brand, those kinds of things. That's an interesting concept.
How important is internal PR, internal promotion, internal communication, around some of these big initiatives? How important is that for the success of the project as a whole?
Amit Dingare:
Again, this goes into the whole change management area. Not all of those learnings came the easy way. Some of them I learned the hard way.
Managing the PR of the project, celebrating the wins is a very important aspect for changing the culture, the mindset of the organization as a whole. You start having these showcases where you celebrate your success. You talk to the larger company. You almost want to have an adoption group within your company that people can come and talk about what worked, what didn't work, and just bring the human nature of these projects in front of everyone. That really helps to get the buy-in from people.
I am a big fan of having this PR within the company. You want to provide the right avenues for people to show successes. And the moment you start doing that, the moment you start providing these forums to the champions, you start seeing good traction for your projects. You start seeing a change in the organizational culture over a period of time.
Dayle Hall:
Yeah, no, I like that. Again, there are oftentimes where we launch internal initiatives, and I don't know if some of the technology innovations always get the same kind of publicity internally, but with AI and the value it can bring, I think that is going to be a growing part of what we should be thinking about internally. It's good. I like that. And I think anyone listening to this should be thinking about, yes, we want to show success, and it's great to link it to business value, but let's celebrate the wins internally. As you said, it's just as important a part as the value we could bring externally.
Let's talk a little bit about the data, because to get to AI's true value, one of the key things is the data you're using internally, potentially external data, and how important that is. At SnapLogic, that's one of the things we talk to our customers about all the time, making sure they get access to the right data. It's kind of the basis of our entire platform.
But I think one of the things that I would be interested to hear from you, what are the issues you originally see, particularly on the data side, to make sure that AI can actually be successful? What problems are you seeing? What would you recommend to someone to kind of have a baseline for their data?
Amit Dingare:
Yeah, the data issues, we could spend the entire podcast just talking about them. But there is, again, one misconception people have: I have data for certain things, therefore we can do AI use cases. It's a misconception because AI use cases require data at a certain granularity that you may or may not have. They require information that you may not be capturing. Completeness of data is a very important aspect, and there are different notions people have when they talk about data.
People say, I'm tracking all my consumer sentiment really well. But are you tracking all of their feedback, and are you tracking their actual comments? Sometimes people are tracking just the thumbs-up, thumbs-down signals. Stuff like that can actually impact the success of your use cases. So completeness of data is very important.
Quality of data is the second part. You may have data, and perhaps it's complete, but there is so much garbage in it that models cannot really learn from that data. I've seen that a lot of times: all kinds of junk data coming from various sources and impacting things, and nobody has looked at those sources for a long time.
Dayle Hall:
Yeah. I'm going to ask a question now, and I realize that this is a very nebulous question, but what are the things that you would at least recommend to customers? Where do they start with their data to make sure that it is at least as ready as it can be? Do they focus on that before they even start with the AI initiatives? What would you recommend?
Amit Dingare:
It's like the chicken-and-egg problem, because sometimes you don't know what kind of data you require for your use cases. I have evolved my position on this. Back in the day, when I worked on something like this, I would say, let's have a data governance committee and look at all the data that we have. And I realized that when you start a governance initiative like that, it's very hard to connect it to business value. Basically, I've seen many of these governance initiatives that people ran just never reach any outcomes. They remain disconnected from the actual business outcomes.
My current position is that the data assessment should be part of your AI use case readiness. I mentioned this risk assessment, and as a part of that risk identification, you look at the data and understand its readiness at that point in time. And then you start bridging those gaps. If you identify certain gaps in the data, you start bridging them, and that may take some time. In that case, you may deprioritize that use case, come back to it after some time, and focus on something else in the meantime.
What I saw is that when you do that in a use-case-centric fashion, there's always a connection. You never lose that connection to the business outcome. And therefore those initiatives actually start bearing fruit, even the governance initiatives that are specific to a use case.
Dayle Hall:
Yeah, I like that. That's a good way of thinking about it.
We talked about measuring against longer-term KPIs and business value. What's your thought on quick wins? Again, this comes back to the communication internally. You probably want to show some immediate success, or at least some progress towards that KPI. How do you stay agile enough, and how do you make sure you have things to talk about, rather than saying, this is a big project, it's going to take a year to do, and people potentially lose interest? How do you keep the organization focused and excited, potentially by thinking about these quick wins?
Amit Dingare:
I think this is, again, a very important factor of change management. To your point, our attention spans have become smaller and smaller over a period of time. We can blame TikTok for that.
Dayle Hall:
Yes, I'll blame TikTok for every conversation I have with my kids these days for attention span.
Amit Dingare:
People are expecting results faster and faster. And look, you have to change accordingly. Ultimately, if your goal is to succeed in an enterprise setting and create tangible value, then you have to adjust to the needs of the time. What I have found to be working is that I always focus on quick wins. Even in that two-by-two I mentioned, value versus effort, I would say focus on quick wins, which are low effort, high value, if you can find those use cases. Low effort, low value is still okay. But you have to make sure that you provide the outcomes quickly enough.
My typical rule of thumb, this thing that I tell my team, is that 14 weeks is the period: you need to have some demonstrable outcome in 14 weeks. Not every large project can fit in 14 weeks, so what you do is slice the project in such a way that you are producing value every 14 weeks. And why 14? Well, that's something I tested over a range of durations, and that's what I found to be the most tangible. That may change as people's attention spans become even shorter, but at this point, I feel people have enough attention for 14 weeks. If you go beyond that, you may start losing their interest.
Dayle Hall:
Yeah. At least that gives you a full quarter- most of us operate on quarters, so that gives you a full quarter to potentially get things done. It doesn't mean people are checking in every four weeks. I understand that with your experience, I like the concept of the 14 weeks.
When we talk about internal communications and PR and all those kinds of things, do you have guidance on how to roll this up to the executive team? Do the executive team really care about just the KPIs, or do they want to see project progress? What kind of things have you seen that the executive team are interested in hearing, and is that the same thing? Is that still on a 14-week cadence? What's your experience there?
Amit Dingare:
There is no one answer to that. I think it varies by different companies. I've worked in companies where they said they didn't want to know the status throughout the project. In some cases, they were like, oh, leave me alone and come back to me when you have results. So you have to just understand the pulse of the organization.
What I typically do is have my operating committee meeting, say, every week. That's when the team meets. But then we typically present to the SLT on a monthly basis. By that point, enough progress has been made in that one month to have some demonstrable outcomes for them and start showing the realization of value in that period, but it's not too frequent. Within the 14-week phased approach I mentioned, I typically have around three SLT meetings. That's the frequency I found to work for them to really understand and get a pulse on the projects. But again, you have to understand the pulse of your SLT, your senior leadership, and then adjust your approach accordingly.
Dayle Hall:
Yeah. Anyone listening is probably thinking, my executive team is going to want to know on a weekly basis.
I read a lot about some of the demands from executive teams. Some customers post on LinkedIn. There's just comments of the CEO, potentially, not our CEO, but just other CEOs, just asking, like, what are we doing around AI? I haven't experienced that specifically, but I do know that it's becoming a bigger topic at executive level.
My question for you is, how much of the AI initiatives are being driven from the executive level? Were they just looking to be more productive, or potentially take costs out of the business? We know that's a big part of AI. How much has been driven from the executive team with those goals in mind, and how much has been driven from within the organization, the people that are using it on a daily basis?
Amit Dingare:
I feel that AI projects won't succeed unless there is senior leadership buy-in. There has to be some level of buy-in from the senior leadership. And that's where positions such as mine are emerging: to form that connection to the AI initiatives. That doesn't mean they need to be in the weeds for every single project. Typically, going back to my earlier mention of the balanced scorecard, you understand the business KPIs, you tie your AI road map to those balanced scorecard KPIs, and then you start rolling out these AI initiatives within different business units, different areas of your business.
And then you have these committee meetings, where you start showing the success of those. I feel that that is a very good way to keep the top-down mandate on these use cases from the SLT while, at the same time, not taking too much of their time to drive the AI initiative. They basically set the strategy, and they are informed. If you think about the RACI, at this point they are informed, but the responsibility and accountability have now shifted to the business units to own those initiatives and drive them to success. I've found that to be working really well.
Dayle Hall:
Yeah, that's good. But look, as we come towards a close, one of the things I like to ask all the guests, because again, it's such a big part of our daily lives, personally and professionally, with the change and what's coming with AI, is there something that you are most excited about? It could be now, it could be 12 months, could be a couple of years time, whether it's personal, professional. As you see the possibilities and you're in this technology every day, and you look at these possibilities, what are you most excited about seeing and the potential that we have around AI, generative AI, all that concept?
Amit Dingare:
If you ask me, one thing that I'm really excited about is the whole emergence of agentic AI. I know there is a lot of hype and a lot of misconception about this area. But I think for people to realize tangible value, they have to move from AI being a passive advisor to AI taking real actions. And that's when you start seeing the tangible business outcomes from it.
It's easier said than done. There are still a lot of challenges in getting the best outcomes from agentic AI workflows, but I'm really bullish on it. We have done some work, and our early results are really promising in that area. That can be very transformative. It will help organizations move to the next level, where some of these mundane tasks can be done by AI agents, and people can start focusing on more strategic work, on more human connections, more relationship building, the kind of stuff where, of course, AI probably won't match a human. I feel like that will change the dynamics of organizations. I'm really watching how it evolves over time.
Dayle Hall:
Yeah, no, me too. We're doing a lot of the agentic work internally. I'm excited because what I see is just basic things. If you're in a sales organization, for example, we built a sales agent that helps you pull data from multiple sources about the prospects you're trying to target. It's such a simple thing, but it's such a hard job being in sales, being an SDR. You know what I mean? It's hard to stand out. Everyone gets thousands of emails and calls, but we can use this sales agent to make sure that when we do reach out to prospects, at least it's relevant. We've had AI helpers before, but now what we're reaching out with is something relevant, something we know they care about. At that point, are you going to take a call if you read something that sounds relevant, where someone's clearly done the research? Of course you are. It's those little things. And we're using it across marketing to help us be more efficient, of course. Those kinds of things, I think, will have a lasting effect on an enterprise.
If I could use AI to actually help manage my kids’ and mine and my wife's calendar much better and pull all that together, I would like an agent for that, too, because that drives me insane.
Amit Dingare:
That's a bit of an uphill ask.
Dayle Hall:
That's right. I don't think AI is that smart yet, no.
Amit Dingare:
Not yet.
Dayle Hall:
I really appreciate your time today. If the listeners want to learn more about what you're doing and what your company's doing, how do they connect with you?
Amit Dingare:
I'm on LinkedIn, very active. People can find my name and connect with me. I'm always open to new connections. I do post regularly on LinkedIn, so they can definitely follow my posts and comment and interact. I think that's the best way to connect with me.
Dayle Hall:
Great. I appreciate your time. Thanks for being on the podcast.
Amit Dingare:
Thanks, Dayle.
Dayle Hall:
Thanks, everyone. That's the end of another episode. I think we had a very rich discussion today, a lot of it around communication and how to set up and start some of these AI initiatives. We talked about bringing it back to business value and tracking KPIs, and how important it can be to have a champion. We went all the way through to data: how do you make sure your data's ready, and how important is that in the process.
So thanks for joining us on this episode, and we'll see you on the next one.