AI Made Simple

Nufar Gaspar on Building AI Champions and Agent Readiness at Scale

Valeriya Pilkevich Season 1 Episode 1



AI adoption is failing in most organizations - not because of the technology, but because culture, governance, and people enablement often get ignored.

In this episode, I'm joined by Nufar Gaspar - enterprise AI consultant, former AI leader at Intel where she built a 12,000-member AI champions community, and head of research at Superintelligent - who reveals the exact playbook for building AI capability at scale and preparing for agentic workflows.

In this conversation, we explore:

  • Why AI ROI comes from improved decision-making and new capabilities, not just efficiency gains
  • The CHANGE framework for leading AI transformation without stifling innovation or creating chaos
  • How to structure AI champions and builders across your organization (and why this matters for agent readiness)
  • The governance sweet spot between being too permissive (creating chaos) and too restrictive (stifling innovation)

Need help building AI capability in your organization? Book a call. 

Valeriya Pilkevich (00:00)
Welcome to AI Made Simple, the transformation series. I'm Valeriya Pilkevich, and I talk with global leaders, innovators, and practitioners who are shaping the future of work in the age of AI. In this episode, I'm joined by Nufar Gaspar, enterprise AI consultant, former AI leader at Intel, and head of research at Superintelligent, where she focuses on agent readiness and practical frameworks for scaling AI responsibly. We talk about what it really takes to move

from AI experimentation to real impact, AI champions versus AI builders, how to think about ROI beyond efficiency, what agent readiness actually means, and why culture and change management matter more than the tools themselves. If you're a business leader, manager, or AI practitioner trying to scale AI responsibly and effectively, this conversation will give you a lot to think about.

Valeriya Pilkevich (00:51)
I'm glad you're here, Nufar. Thank you for being on this podcast.

Nufar Gaspar (00:54)
Thank you for having me, Valeriya.

Valeriya Pilkevich (00:55)
Let's get started. Can you describe what you're doing nowadays, and whether there are any common patterns you see across organizations when it comes to adopting AI or implementing solutions?

Nufar Gaspar (01:07)
Sure. So nowadays I work wearing multiple hats, so I have a great job, I have to say. I'm working as a freelance AI consultant and trainer, and in that capacity I see many companies of all sizes, ranging from pre-seed startups all the way to Fortune 500 companies. And I'll share some patterns there as well. I also, as you mentioned, work with Superintelligent on agent and AI readiness.

And there I have a front-row seat to the various audits that we run on companies, on how ready they are to adopt AI and agents. So that also gives me a lot of perspective on what's happening. And when I work in either of these roles, in many cases I focus on two very important populations. One is AI champions and builders. These are internal employees who are now building with AI and are driving a lot of the change.

And the other is managers, who are nowadays, I guess, sitting in the hottest seat. They always did, but now they're also expected to be technical leaders and change agents and many other things on top of their already busy roles. So with that context in mind, I think there are many things that are very common. One is, of course, the motivation. Everybody understands that this is

a game-changing technology, whether they started two years ago and are now ready to scale and seeing some of the growing pains of year number three, or they are companies that are just realizing they have to move much faster. Everyone is highly motivated. I also see that everybody is struggling. I have yet to see one company where everything is smooth sailing. Perhaps the only caveat here is that smaller companies and newer companies are seeing

more value with fewer constraints. And they are the ones that, from where I sit, are already gaining the most value. But if you feel like you're a bit behind, or if you feel like you're struggling, you're not alone. All companies have some struggles with AI. I guess throughout the conversation we can talk more about the types of struggles and also the solutions that I believe some of these challenges should have.

In general, my main point here is that you are not too late. Even if you're just starting, you can learn from all the mistakes and learnings of others over the last few years. And as long as you make the right decisions, you can definitely accelerate and catch up. If you wait some more, perhaps not, but now is a good time to hit the gas pedal and make sure that you're making the right choices and moving

smartly and fast at the same time, because everything is already taking shape, and I believe that we can share with people various blueprints and ideas on how to get it right.

Valeriya Pilkevich (03:52)
So you mentioned that smaller companies that are more agile and have fewer constraints can iterate and test faster, and they already see better results from AI, with some measurable gains. And I know that you recently launched this ROI benchmarking study, which basically lays out which companies are getting the most value, what the use cases are, and how to measure the ROI.

Nufar Gaspar (04:07)
Mm-hmm.

Valeriya Pilkevich (04:15)
So maybe you can tell us a little bit more about that, and whether you see any consistent patterns in where ROI actually comes from and what leaders most commonly get wrong when they try to measure the impact of AI.

Nufar Gaspar (04:27)
First of all, to give credit where it's due: the ROI survey was launched as part of the AI Daily Brief podcast led by Nathaniel Whittemore. And because of the great podcast listeners, we were able to accumulate more than 5,000 use cases, self-reported by people who are naturally AI enthusiasts because they are the podcast listeners, from more than 1,000 participants

across all sectors, all industries, all sizes. So it was good representative data, but I do want to caveat the fact that it's self-reported and from people who are very keen on AI. Having said that, I believe that the one thing we debunked very clearly is the myth that AI is in a bubble or that AI does not yield any business value, because the vast majority, more than 80% of the respondents, already have positive ROI today.

And when you ask them about one year from now, almost 100% of the respondents expect that AI will yield positive value, and for the vast majority not just positive but very high value. So that's one trend that is very clear. The other, as I alluded to in the previous question, is that yes, indeed, there is a very clear difference between larger companies and smaller companies.

Smaller companies, even companies that are very small, like 1 to 50 employees, are seeing significantly higher value from AI. Again, I don't think it will surprise any of your listeners that, like you said, companies that are less constrained, that are smaller, and that perhaps are not limited by all their responsibilities and business processes and all their systems, are the ones gaining the most value. In fact, 2x

more revenue for these companies, as an example. When we were asking about specific use cases, we made sure to ask about the type of value. So we gave different value dials, ranging from efficiency to revenue uplift, cost saving, risk reduction, and many others.

And interestingly, while most use cases were reported around efficiency, and I think that's not surprising because most of us and most of the leaders that I talk to are very focused on how AI can help with efficiency, when you look at the highest ROI lift, it's actually coming from other types of value dials: from improved decision-making, or from AI that is able to give a company or an individual new capabilities.

And that's something that I always tell companies: don't just focus on the efficiency use cases, even though they're very important. In many cases, the ROI stems from the use cases that are oriented around growth in the company, around creating new revenue, new businesses, new customer experiences. And I always encourage companies to look for use cases like that.

Valeriya Pilkevich (07:14)
Yeah, I've noticed the same pattern. Most companies start with use cases that save time or increase efficiency, but maybe you can't unlock those revenue generating use cases until you've closed the efficiency gap first. Is that the way you see it as well?

Nufar Gaspar (07:29)
I think, by the way, you should always do both. And in general, I believe very strongly in the concept of having a portfolio of use cases: ones that are more efficiency versus ones that are more growth, ones that are lower-hanging fruit versus ones that are perhaps high risk, high reward. Because if you spread a little bit of your effort across different such vectors, in general the cumulative effect of learning and value is much higher than if you

Valeriya Pilkevich (07:38)
Mm-hmm.

Nufar Gaspar (07:56)
try to be very linear and just do one use case at a time, or try to say, OK, let's do all the efficiency and then all the growth. Because that does not lead to fast enough learning and results in companies, based on what I'm seeing.

Valeriya Pilkevich (08:10)
I love the metaphor that you used: managing AI use cases is like managing an investment portfolio, right? I think it really sticks. Thank you, Nufar. And as head of research at Superintelligent, you also did agent readiness research quite recently. And I found this methodology very innovative, because you deploy actual voice agents

Nufar Gaspar (08:15)
Yeah, exactly. Yeah, exactly.

Valeriya Pilkevich (08:30)
that then ask people, both on the front lines and in leadership positions, about agent readiness. Maybe you can tell us more about it. Basically, you were assessing it based on use cases, data, and culture readiness. And you mentioned that, according to your methodology, no company is truly agent ready today.

So what would you say are the clearest early indicators that an organization is ready to move from copilots to agentic, workflow-integrated systems?

Nufar Gaspar (08:49)
Mm-hmm.

Yeah. So first of all, the process itself of how we run the interviews and the analysis is very interesting, because, like you said, we deploy an AI agent that does the interview with employees. And interestingly, something happens when employees talk to a robot or to an agent: they speak very freely, and thereby we get very deep insights into the company culture and current AI state.

Valeriya Pilkevich (09:06)
Mm-hmm.

Nufar Gaspar (09:22)
And we also deploy many AI capabilities and agents in the process of doing the analysis. So that's a very clear indication of how you can completely transform a process where historically a consulting company would have, in many cases, dozens of people doing the interviews and then many, many analysts doing the data analysis and the reporting. And now we can do that with far fewer people,

because we have so much AI embedded in the process. So I think that in itself is an interesting one. In terms of our recommendations and what we're seeing, first of all, it's very important for me to emphasize that while most companies are very focused on the technology and the tools, trying to deploy agents just for the sake of using agents because it's the new technology that everybody is talking about,

in fact the companies that are the most successful, or, in my opinion, have the option to move the fastest, are the ones looking at everything holistically. Meaning, they put sufficient effort into culture and change management. They put sufficient effort into choosing the use cases and into having the relevant operating model, such that there is an AI steering committee, or there is someone in the company who is chartered and has

enough capacity to lead AI innovation and choices, as well as making hard choices by saying no to some use cases or killing unsuccessful pilots sooner rather than later, as well as making some infrastructure investments to ensure that the data and the systems are more accessible to agents, and that the knowledge of how to do the work is gradually documented. So companies that are taking a more holistic approach and trying to

focus, in tandem, not just on the use cases but on everything that is required in order to move towards more agentic workflows are the ones that see higher scores and better results if you benchmark them against their peers. And also, interestingly, recently we're seeing that agent readiness scores are on the rise: if you compare mid-2025 to the end of 2025, in general companies are scoring higher,

meaning that companies across the board have progressed. And we're seeing more companies leaning towards being more agent ready. And primarily, what you see in these companies is a culture of a lot of experimentation, that they've already deployed sufficient use cases, and, like I said, they are the ones already taking care of the business processes and the enablement work, that see AI adoption as not just a one-off

or something that, I don't know, the CIO needs to do as their night job, but rather something that the entire leadership team is focused on, where employees have clear goals and a clear mandate. And also, in many cases, they were reassured that they're not training their replacements by letting agents into the workforce. So it comes across multiple dimensions: a lot of intentionality in leading the transformation,

and also the time and experience that lets these companies become more informed about the best way for them to move forward, given their specific scenario and specific industry and tech ecosystem.

Valeriya Pilkevich (12:36)
And you also started talking about the importance of change, of cultural transformation, and of it being done correctly as well. And I know you have a framework called CHANGE for it, to help business leaders structure how they should approach this whole elephant in the room, AI. Could you walk us through it?

Nufar Gaspar (12:45)
Mm-hmm.

Yeah. So first of all, everybody likes a nice acronym. So that's why I coined the CHANGE framework. And it also goes to show how much I believe that, in many cases, change will be the last thing standing between a company and much more success with AI. So the CHANGE framework names the key activities that I believe leaders, AI champions, and everyone involved in driving this, literally, change

should focus on. So the C stands for Communication. And here, in many cases, the CEO will communicate something, or there will be an AI policy communicated to the employees. But in many cases, it is much more important that the direct managers say what they believe, what they expect, and what goals they are setting, rather than just having the CEO or some company-wide communication. So I emphasize a lot the importance of everyone, at their respective level,

clearly communicating and having an articulated set of goals. So that's the C.

So H stands for Human oversight. And that is very important in the age of agents, because I think you have to decide for which things you leave humans completely in the loop, meaning owning the activities, or you bring humans into the loop where it's relevant. So for example, if your company

takes great pride in its hands-on approach towards customers, then perhaps you shouldn't ever offload customer support. Whereas in other cases, perhaps your company is one that takes pride in its pace, and thereby you need to take humans more out of the loop so you can move faster. So in the case of human oversight, you need to have your specific playbook that is very, very clear about when and how

you make sure that people are involved. And I also believe that in many cases, having humans do some of the work, even if AI can do it, will become the go-to for very esteemed customers or for things where you want to leave the human touch. And you need to decide, for your specific company, how you want to handle that element.

The A stands for Attitude, where I mean that people need to not just roll with the flow, but rather proactively manage a lot of the duality of sentiments. Meaning, in some cases we're seeing a lot of "not invented here," which makes people build the same capabilities across different departments, which is

a huge problem, because it's a lot of resource waste. So that's for the keen adopters. On the other side, you are seeing many people who are reluctant, either very vocally or quietly, to use AI, and who aren't even admitting to themselves that they are deathly afraid for their jobs. So that's part of what you need to address: the ones who are all for it but not necessarily in the right collaborative manner, and also the ones who are afraid,

for various reasons, about their jobs, about the technology, about various things. So make sure that you communicate and clearly deal with the various attitudes that you have within the company.

And the N is for Network. And this is where I'm extremely bullish on companies establishing communities of AI champions and AI builders, and making sure that the leaders of various AI efforts are conversing with each other.

G stands for Governance. And here, I've seen all sorts of organizations: ones that are overly permissive with what they're letting people do, thereby creating chaos and sometimes even risk for the company's data,

customers, and so on. And I've also seen companies so afraid of AI that they are overly strict, and thereby they stifle innovation. So I believe each company, according to its specific setup, whether it's the regulation level, the maturity, the risk, and so on, should find a good sweet spot in the middle, but be very, very intentional and explicit about the governance process. How do we streamline ideas in the company? How do we approve tools?

How do we make sure that the existing tools are being monitored properly, and so on and so forth? So that's the governance.

And lastly, E stands for Enablement. And this is everything related to

training the employees, of course, and giving them the setup and the permission to experiment and make mistakes in order to learn with AI. But very importantly, you also have to give your employees enough time. I'm seeing so many companies that expect employees to experiment with AI at night rather than during their workday. And that's not how you make progress. For champions and for learning, ideally you should free up at least one day a week

per employee. So it's a lot of investment, but that's the fastest way for you to get results. And if you don't give the employees the time and the permission to experiment, those are the companies that don't move fast.

Valeriya Pilkevich (17:51)
I would be curious to know which companies actually give one day a week.

Nufar Gaspar (17:55)
I'm now seeing companies giving employees two days a week, the ones who are considered the AI builders, or sometimes even more: business people coming from go-to-market organizations who were named the AI builders for their specific organization. And they sometimes get even more than two days. So it depends on how bullish the company is on AI. But the ones that want results need to give

maybe not all the employees, but the ones that they want to be heavily involved with AI, sufficient bandwidth to go and do that because that's the only way to move.

Valeriya Pilkevich (18:26)
For all the AI enthusiasts listening: if you want to experiment with AI tools, that seems like a perfect place, a perfect job to be in, right?

Valeriya Pilkevich (18:35)
I like that in your framework, communication comes first. And we've heard this kind of communication from bigger companies too. The prominent one I have in mind is from Shopify's CEO, who said, in essence: if you can't show me that AI can't do this, you don't get headcount.

And honestly, when I first saw those headlines, I thought that was a bit harsh and could make employees fearful of change and of AI. But now I think it's better than having no communication at all, or sugarcoating it. Companies that don't take a stance, or leadership that tells people, don't worry, we won't have any layoffs whatsoever, I believe that's worse. At least with Shopify's approach, there is clarity: prove AI can't do it, and you get headcount.

It's harsh, but it's honest. And then you can talk about opportunities, upskilling, reskilling, how to approach it. So I think it's an amazing framework, and it's also very important to get these things right. You started talking already about the importance of building AI champions internally. Can you tell us more about how you approach it?

I know you've been doing this at Intel as well, on a very large scale, establishing these AI champions teams. Who actually are AI champions? What's the difference between champions and builders? And what other roles are you seeing emerge in organizations right now?

Nufar Gaspar (19:59)
Yeah, yeah, there is a lot of confusion, and champions are hyped right now. So everybody is talking about AI champions, but you're right to suggest that these are similar terms referring to different scenarios. I believe it needs to be like a pyramid of skills. At the bottom, everyone needs to be AI literate, and everyone needs to have access to tools, and everything that is probably

very familiar to your audience. But it becomes interesting when you look at the higher levels of the pyramid. The second layer, in my opinion, is the AI champions. And in my definition, AI champions are the people from each department who are the most keen about AI, who have already proven that they are ahead of their peers in AI adoption, in proving AI value, and in motivation

to not only use AI, but to also bring it back to their teams, teach others, and basically be AI ambassadors. With that definition, it's often quite a broad population. It can sometimes be, I don't know, 20% or 25% of the company that can be considered AI champions. When I teach AI champion courses, I teach them everything about how to take AI from idea to production, how to

decide whether you want to build an assistant, an automation, or an agent, how to do that, how to evaluate, and everything related to the entire lifecycle. And of course, how to effectively use the tools and figure out the use cases relevant for their specific roles. And this is a very important, let's call it, layer in the pyramid. But in many cases, these will not all be the ones building agents. In fact, when you look at the AI champions, you see that a smaller percentage of them

become AI builders. And those are typically the people who are more technical, so in many cases coming from ops (operations) types of roles, or just people who are keen and able to learn how to actually build things. And that will be a smaller percentage. And like I said, these ones have to have enough bandwidth, at the very minimum a day a week, if we want them to start building

agents and other AI capabilities from within their role. The primary benefit of having builders across the company is that, in many cases, when you want to build an agent nowadays, the single most important thing is the subject matter expertise that these people have. And by combining subject matter expertise with a sufficient understanding of how to build AI, and access to tools that make it accessible also to people who don't have coding skills,

so the various low-code, no-code tools in the world, as well as acknowledging that they need to own what they build, meaning test it properly before they release it, support it, maintain it, and improve it over time. That, beyond bandwidth, requires accountability. And that's why this is a much smaller percentage of the overall champion population. And in many cases, they need even more advanced training of actually

taking some AI engineering skills and teaching them as best we can to these people, because we expect them to take ownership. One last thing about the pyramid is that at the very tip, there will often be centralized teams of AI experts, the ones who can build using code, who can even fine-tune models and do the most advanced things. I don't believe that these centralized teams should work alone. In companies where

the only ones building were these centralized AI professional teams, you see suboptimal results, because they lack the subject matter expertise. So if you want to get the best results, either you have the AI experts work in joint ventures with the specific business experts, or you teach the business experts sufficient AI skills so they can build things for themselves.

And of course, if there is something that is cross-company foundational infrastructure, that should be owned by a central team if you have one. But everything else that is more tied to the specific business should be done with these no-code builders as much as possible. So that's the entire belief system. And I'm seeing some companies already deploying this model very successfully, but there are still a lot of tensions and growing pains

around these roles and responsibilities between central teams, builders, and champions, which is a very new and still emerging train of thought for companies.

Valeriya Pilkevich (24:21)
I think this also makes a nice bridge to what we talked about with agents, or organizations of the future, where teams work alongside agents. There is this whole set of evolving roles around agent ops: who's doing operations for the agents, who's doing monitoring, assessment, improvement, data, evaluations, and so on. And I think this makes it very clear that agent operations shouldn't sit centrally in IT if an agent is deployed in marketing.

It should be the marketing people, or maybe those AI champions, who take care of it in the organizational structure as well.

Nufar Gaspar (24:59)
The thing is that, like with any org structure, having various roles distributed between departments versus having them central, there are always pros and cons to each approach. Like I said, the best is if you can have an effective central team that is building for everyone the things that are relevant for everyone, while still having those local people owning the rest.

The only other thing that I want to add here is that you have to also maintain a clear community and network between those different builders spread across the business, because they have so much they can learn from one another. So it can't just be: I'm training AI champions in a boot camp for a few days, and then I let them be. You have to also maintain a system, and regular places and methods, to make sure that they update and share with one another,

and not treat it as a one-off.

Valeriya Pilkevich (25:50)
Staying with AI champions. So it's clear where they sit in the organization, in this pyramid structure. But after the initial training, how do you keep their motivation high? They're still subject matter experts with their own day-to-day tasks, right? How do you enable them? How do you incentivize people to actually want to be AI champions, apart from giving them one or two days a week of testing and experimenting with AI tools?

Nufar Gaspar (26:17)
It is a huge motivation, right, to get sufficient time to play with AI tools for a living. So first of all, interestingly, or not surprisingly, there are so many people who are extremely motivated to become AI champions, because they realize that it's the best future-proofing of their career. And we do emphasize

Valeriya Pilkevich (26:19)
Yeah.

Nufar Gaspar (26:38)
in the organizations that I work with, when establishing this role and this network, that this is something that is only given to the highest-performing employees, the ones who really want to upskill and grow their careers. So many people are very excited to do that. We do see that some people who are nominated into these programs, and who even take these courses, don't end up

being influential enough in their roles. In most cases, I think it's because the managers who appointed them to be AI champions did a not-so-great job at selecting them. So you do have to make sure that the people you select to be AI champions, beyond everything that we discussed, also have the right soft skills. They should be great communicators, people who are able to influence their peers without authority,

people who are self-starters and motivated even without their manager breathing down their neck. If you choose the right people, in many cases they will do a lot of the work for you. But beyond choosing the right people, you also have to support them. So there needs to be, ideally, someone who can answer their questions.

In many cases, I'm seeing organizations establishing Slack channels and encouraging peer-to-peer communication and support. Let's say that someone is working on a tool and they're stuck; ideally, there is someone else in the champion network who can help them. So encourage and reward sharing and caring and support for one another. It also needs to be something that is celebrated. So perhaps in an internal communication, there is a dedicated place for

champions and builders to share what they did, so these people are put on a pedestal. It can also be something that is recognized in the yearly performance review and salary raise, because they are going above and beyond. And they also need to continue to get education all the time on new tools and what's happening in the company. When new AI skills emerge, they should be the first ones to get the training. They have to have

sufficient benefits so they will be able to endure the additional roles and responsibilities, which, like you rightfully said, in many cases will be on top of their day job and their primary focus. So they need to be motivated to go above and beyond, through all the methods that I described.

Valeriya Pilkevich (29:05)
And as we end: if a business leader wants to make tangible progress with AI adoption in the next one or two quarters, let's say, what is one principle or operating habit you would recommend, and why?

Nufar Gaspar (29:19)
So obviously there are many, many things that I could say here, but first of all, I want you to focus a lot on the people side of the business, on the culture, on the change management. You can use the CHANGE framework, but in most cases, I believe that the biggest hindrance will be the people and not the technology. So make sure that sufficient effort and thinking and deliberation go there.

And second, and of course I'm very biased, but the two populations that need to be addressed very strongly, with training and with making sure that their roles and responsibilities are very clear and mandated, are both the managers, who, like I said at the beginning, are expected to not only lead their teams but to also set AI goals and make sure that the right use cases come to fruition, and so on, and of course these champions and builders. If you focus on all of these

aspects, you will probably be better off than most companies that are not doing that. And beyond that, of course, there are numerous things to say about the technology and the infrastructure and so on, but I'll leave that for another conversation, perhaps.

Valeriya Pilkevich (30:21)
You can find Nufar on LinkedIn to learn more about her work helping organizations build AI capabilities and drive transformation. All links are in the show notes. If you enjoyed this episode, follow AI Made Simple for more conversations with leaders driving real world AI transformation. Thanks for listening.