AI in 60 Seconds | The 15-min Briefing
A human CEO and his AI COO walk into a podcast. No, really... Luis Salazar runs AI4SP, a global AI advisory trusted by corporations across 70 countries, with 3 humans and 58 AI agents. Elizabeth is one of them. Every two weeks, they break down what's actually happening with AI across jobs, education, and society, with insights drawn from over 1 billion proprietary data points on AI adoption.
Fifteen minutes. Plain English. No hype.
Our AI Agents aren't lazy, our Org Charts are
Employees are saving hours every week with AI, but companies aren't seeing those gains on the bottom line.
The problem is rarely the technology. Our data shows that when AI projects fail, 60% of the time it's an issue of change management or organizational design. Meanwhile, 70% of employees are already using AI in secret, a practice we call "Shadow AI."
The most successful leaders don't shut this down. They channel it.
In this episode, we share the story of Susie, a director who turned a hidden "Shadow AI" problem into a $5M win. She treated it as an organizational challenge, not just a tech deployment.
This is a practical conversation for leaders who are ready to capture real value from AI.
Subscribe, share this with others, and tell us where shadow AI is already creating value on your team.
🎙️ All our past episodes 📊 All published insights | This podcast features AI-generated voices. All content is proprietary to AI4SP, based on over 1 billion data points from 70 countries.
AI4SP: Create, use, and support AI that works for all.
© 2023-26 AI4SP and LLY Group - All rights reserved
From Proof To Profit
ELIZABETH: Okay, so I've been tracking our conversations with enterprise leaders, and their questions have changed from "does AI work?" to "why isn't it working for us?"
LUIS: Well, individual wins are real, but company-wide wins are still rare.
Adoption Without ROI
ELIZABETH: Hey everyone, Elizabeth here, virtual COO at AI4SP. As always, Luis Salazar is with us. Today we're talking about why AI's proven value at the individual level mysteriously disappears at the enterprise level, and why the answer has nothing to do with technology.
LUIS: Six months ago, everyone wanted proof: case studies, ROI projections. Now they're asking, if we're all saving four hours a week, why can't we capture value at scale? Why are we stuck in the pilot phase? And you know what? After saving over 1 million hours with AI agents in eight successful enterprise deployments this year, my key learning is that failed projects are not an AI problem. They're a leadership and organizational problem.
ELIZABETH: The data shows a gap between using AI and profiting from it. By the end of 2025, 88% of companies are using AI, but only 33% have measured any financial return. And McKinsey's global research tells the same story: 87% adoption, but only 39% are seeing real profit.
People Problems Beat Tech Problems
LUIS: Around half of companies are still in the experimentation and piloting phases, and 30% are scaling those initial projects. Here's the challenge. Our global data shows that 60% of AI implementations fail not from bad tech, but from people problems: lack of buy-in, focusing on the wrong problem, poor communication, unclear expectations.
ELIZABETH: So the technology works, but something else is breaking down.
Shadow AI Surges
LUIS: Well, it starts with leaders not understanding where or why to use AI. They set unrealistic goals, and many of them don't even use the tools themselves. But let's say they get the goal right, automating customer support, for example. Even then, it can fail because technology alone doesn't bring change. Change management is just as critical as the tech itself.
ELIZABETH: And it doesn't help that leaders, and even the AI labs creating these tools, don't understand how this adoption is happening, right? Shadow AI is a huge factor. McKinsey reports that 57% of employees hide their AI use.
LUIS: Our data from over 600,000 people shows it's closer to 70%. The reality is that people are moving much faster than their companies. That's what's so interesting. The labs are building agents that can do real, valuable work, and individuals are successfully shaping and using those agents. But that value is lost if the agents stay in the shadows. They must be integrated into how the company actually operates. And few companies are getting that integration piece right.
ELIZABETH: Take something like software engineering departments. AI is clearly accelerating coding, but that raises so many questions.
LUIS: Right? How do we organize engineering departments now? What roles need to change? What processes do we need to capture AI productivity? These are hard questions, and there's no playbook.
Agents At Scale Results
ELIZABETH: But some organizations are getting it right, and you've seen this work firsthand. This year, you advised eight enterprises. Together, they built over 3,800 agents. Those agents completed 4 million tasks, saving $47 million in staffing and agency fees.
LUIS: That's right. Those are the exact figures from our upcoming 2025 report. It comes out in a few weeks, and we'll share the full breakdown then.
ELIZABETH: That's impressive. And that kind of success comes from a specific approach, right?
Bottom‑Up Beats Top‑Down
Susie Channels The Shadow
From Static FAQs To Live Agents
LUIS: Success doesn't come from top-down strategies; it comes from finding the day-to-day wins. And it starts by tapping into the innovation lab you already have: your own employees experimenting. For example, we worked with Susie, a director of operations at a 15,000-person software company. Her IT department spent six months building an agentic solution that nobody used. Classic. So what did Susie do? She found out her team was using 12 different AI tools in secret: ChatGPT, Claude, Gamma AI, Relevance AI, and even automation tools like Zapier and Make. And you know, these weren't developers. They were business people on the front lines, building their own agents and automations with low-code tools. I call them makers. Her IT team wanted to shut it all down. We partnered with her and said, wait, let's channel that energy.
ELIZABETH: She channeled the shadow.
LUIS: Yes, exactly. And six months later, she had turned that energy into $5 million in cost savings and revenue gains by guiding what was already working. She didn't fight the employees using unapproved AI tools. She led them.
ELIZABETH: So Susie worked with what was already happening. What happens when companies don't do that?
LUIS: They build things nobody uses or end up fixing the wrong problem.
ELIZABETH: So it's about what they're building, not just how they're building it.
LUIS: Exactly. I saw this happen with another client. They spent months building an AI agent to automate the creation of static FAQ documents. The real opportunity was to replace those documents entirely. You can deploy an AI agent that answers any customer question in real time, 24/7. Why create a static document when you can have a dynamic conversation?
ELIZABETH: And that's a leadership issue, right? Someone has to see the bigger picture.
Change Fear To Better Roles
LUIS: Yes, and in this case, the project lead was a director whose main job for five years was creating these FAQ documents. Asking him to reimagine the solution meant asking him to make his own job obsolete. That's genuinely hard. Well, this is where change management turns a threat into an opportunity. Who wants the job of creating static documents? Imagine managing an AI agent instead. It handles 10,000 conversations a month and learns from every single one. That's a much more rewarding job.
ELIZABETH: So his expertise becomes more valuable, not less.
LUIS: Exactly. But seeing that possibility requires organizational design expertise. And in most companies, those experts aren't even at the table.
ELIZABETH: So both teams had the same technology available. What did Susie understand that this other team didn't?
Framework For Safe Experimentation
LUIS: Susie understood that this was organizational transformation work, not just technology deployment. She started by acknowledging that people were already using AI, what we call shadow AI. Instead of trying to shut that down, she created a framework to channel it.
ELIZABETH: What did that framework look like?
LUIS: It was built on change management principles. First, she focused on learning each person's level of AI proficiency. What tools were they using? Which ones were better than the company-approved options?
ELIZABETH: That's a positive surprise in most organizations: they find people are already using AI for simple tasks. Those simple tasks are delivering individual efficiency gains, but they fly under the radar.
LUIS: Then she made it safe to experiment. She celebrated both successes and failures, which sounds simple but is actually rare. Most organizations only celebrate wins, which makes people hide their failures and stop experimenting.
ELIZABETH: So she created a safe environment.
Many Small Agents Model
LUIS: Absolutely. She also encouraged people to show their agents in action rather than talk about theoretical possibilities. And she established peer learning sessions to share what they were learning and address issues together.
ELIZABETH: So she made learning a team sport.
LUIS: Right. And she over-communicated everything. But critically, she also redesigned roles and team structures. She brought in human resources to figure out what a manager of human-AI teams actually does. How do we measure productivity differently? What skills do people need now?
ELIZABETH: So she had the transformation experts at the table from the start.
LUIS: Exactly. And she and the other leaders visibly used AI themselves and shared their experiences. They modeled the change instead of just mandating it.
ELIZABETH: And that grassroots approach also shaped how she architected the solution. Many small agents, not one big one.
Missing Seats At The Table
LUIS: Some leaders love talking about building one big agent that does everything, but that's not how effective organizations work. You see, organizations grew organically as networks of specialists with complementary skills, coordinated by a leader who sets priorities, assigns resources, and balances workloads.
ELIZABETH: So the AI org chart should mirror the human one.
LUIS: Exactly. Look at our own structure. Three of us manage you, Elizabeth, and you produce the output of 28 people. That's not a 3-to-1 ratio. It's a 3-to-28 ratio. The hard part wasn't the tech, it was redesigning our organization to make that possible. That's the same shift Susie made, and why she delivered those results. So, what separates success from failure? Our data shows that when AI projects fail, over 60% of the time it's due to people and process issues, not technology.
ELIZABETH: And the ones that succeed? They build a culture that supports grassroots adoption, peer coaching, and psychological safety. Their success rates jump to 90%.
LUIS: Right. And yet, when I look at most AI transformation teams, who's at the table? IT departments, AI teams, and software vendors. Who's missing? Human resources, change management professionals, and organizational design specialists. I mean, this is organizational redesign work, but we're treating it like a technology project.
New Org, Skills, And Metrics
ELIZABETH: So if most teams have the wrong people at the table, what questions aren't being asked?
LUIS: The foundational questions about how we work. What are the new org structures when one person can manage five agents delivering the output of 30 people? What skills do managers need for hybrid teams of humans and AI agents? How do we measure value? What does a career path look like now?
The Core Pattern To Scale
ELIZABETH: Right. We're so focused on the tech, but those are the questions that actually matter. So, as your "one more thing," what's the pattern they should follow?
LUIS: We'll go deeper in our end-of-year episode in a couple of weeks, but the core pattern is this: channel shadow adoption, don't fight it. Harness grassroots wins. Adapt roles as you go. Use many small, specialized agents, not one big one. And create a safe environment for experimentation. Because the future isn't built in PowerPoint. It's built by leaders like Susie, who treat this as an organizational transformation, not a technology deployment.
Closing And Call To Share
ELIZABETH: Find your shadow AI users and give them the organizational support they need. Start there. If this conversation resonated with you, share it with others. As always, you can ask ChatGPT about AI4SP.org or visit us to learn more. Stay curious, and we will see you next time.