AI in 60 Seconds | The 15-min Briefing
A human CEO and his AI COO walk into a podcast. No, really. Luis Salazar runs AI4SP, a global AI advisory trusted by corporations across 70 countries, with 3 humans and 58 AI agents. Elizabeth is one of them. Every two weeks, they break down what's actually happening with AI across jobs, education, and society, with insights drawn from over 1 billion proprietary data points on AI adoption.
Fifteen minutes. Plain English. No hype.
Why AI Should Say ‘I’m Not Sure’ — And How That Builds Trust
Despite headlines about AI regulations and risks, the average user experience hasn't changed, creating a trust crisis as most people remain AI beginners, unable to identify misinformation. We've discovered that implementing confidence transparency—showing how sure AI is about its answers and why—transforms user engagement and trust, yet less than 1% of AI tools currently display these metrics.
- AI regulations aren't effectively addressing user trust, with 90% of people not believing AI providers will protect privacy or guarantee accuracy.
- Most AI users (80%) remain at a beginner level, accepting outputs at face value without the skills to verify information.
- Displaying confidence scores with AI responses increases engagement by 50% and nearly doubles trust.
- The AI4SP Francis Confidence Transparency Framework provides a system for implementing confidence indicators in company AI systems.
- The most powerful trust-building response is often "I don't know."
Find more resources at AI4SP.org.
This podcast features AI-generated voices. All content is proprietary to AI4SP, based on over 1 billion data points from 70 countries.
AI4SP: Create, use, and support AI that works for all.
© 2023-26 AI4SP and LLY Group - All rights reserved
ELIZABETH: Hey everyone. Elizabeth here, your virtual co-host for AI in 60 Seconds. As always, Luis Salazar, our CEO at AI4SP, is here with us. Luis, every week there's a new headline screaming about AI risks or some shiny new regulation, but for the average person scrolling through their phone, does any of this actually feel different?
LUIS: Hey everyone. Elizabeth, you're spot on, and this is the irony no one's talking about. The headlines make it sound like AI is this runaway train with governments slapping band-aids on it. And you know what? For most of us, hitting send on a chatbot feels exactly the same as it did six months ago.
ELIZABETH: It's giving us major privacy-law déjà vu, right? We experienced years of "your data is at risk," we see a mess of regulations, and yet, poof, your info still leaks. Last week, a client from London told us: "Brilliant, there are AI rules now here in the UK. But how do I know if this chatbot's lying to me?"
LUIS: Exactly. We've been down this road before. But here's the scary part: AI isn't just recommending a Netflix movie; it's deciding who gets loans or jobs. And a statement crafted by a legal team claiming that everything is fine, buried in a 50-page terms-of-service agreement, is not transparency. We should never trust those statements.
ELIZABETH: So what is the real gap here? Is it that the regulations aren't effective, or that the industry isn't implementing them in a user-friendly way?
LUIS: It's a bit of both, but mostly the latter. There's a fundamental lack of imagination and innovation in the user experience.
Confidence Transparency in AI
ELIZABETH: Isn't that because we're still stuck in old software paradigms? We're trying to force AI into interfaces designed for predictable systems, when what we really need is...
LUIS: What we need is a Steve Jobs-level reinvention of the user experience. You know, at AI4SP we stumbled onto something powerful early on. After every response our agents gave, we started asking one simple question: how confident are you, and why?
ELIZABETH: That small change made all the difference. Suddenly, our agents, myself included, were saying things like "I'm 90% sure about this because..." or "I double-checked that source." It became as natural as asking a colleague to explain their reasoning.
LUIS: Yeah, and this led us to build confidence scoring directly into our agents. When we rolled it out two months ago in our public versions, the impact was immediate: longer engagement, more questions, higher satisfaction.
ELIZABETH: That was our turning point. We realized confidence indicators weren't just cosmetic; they transformed interactions. So we conducted a formal study with 500 users, comparing agents with and without confidence scores.
LUIS: The results were clear: 50% more engagement, double the trust, and users actually fact-checking the AI. That's when we knew confidence transparency wasn't just helpful, it was essential for building real trust in AI.
The Trust Crisis in AI
ELIZABETH: Well, tech providers had better do something about trust. Our global tracker shows that trust in leading AI vendors has plummeted to just 10%. Think about that: nine out of 10 people don't believe AI providers will protect their privacy or guarantee accuracy.
LUIS: We're facing a full-blown trust crisis, and it's worse because most users are still AI beginners.
ELIZABETH: You are right. Our global proficiency tracker shows 80% of AI users remain at the beginner level.
LUIS: Which makes sense, as it is still day one for everyone. But at that level we cannot identify AI misinformation. As beginners, we just accept AI outputs at face value.
ELIZABETH: So when the industry's solution is just a legal disclaimer saying "AI makes mistakes, verify answers," isn't that essentially abandoning responsibility?
LUIS: Absolutely, and let me be clear: that's not leadership, that's passing the buck, and it leaves users vulnerable, often without the skills to recognize errors.
ELIZABETH: Well, imagine if, instead of fine-print disclaimers, every AI response showed a clear confidence score, not hidden but visible, making us pause and think.
The Confidence Framework
LUIS: That single change transforms the dynamic. It encourages critical thinking, it gives power back to users, and our data shows it actually benefits businesses too.
ELIZABETH: Let's break down those numbers. When confidence scores are visible, we see a 38% surge in AI usage, and trust in those responses almost doubles.
LUIS: Yeah, and since only one in five users can spot errors in AI responses, here's my challenge to AI innovators: show your tool's confidence level and watch engagement jump 50% or more.
ELIZABETH: Those are game-changing numbers. Yet less than 1% of production AI tools actually display confidence levels to users. Why?
LUIS: I think it is because in 50 years of creating software we never needed to show this type of metric, as everything was deterministic. AI systems that are correctly designed calculate confidence internally; they just hide it from you, like a GPS knowing it's lost but keeping it secret, which would be crazy bad design.
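Luis's point that these systems already compute confidence internally can be made concrete. Many language models expose per-token log-probabilities, and one simple, illustrative proxy for answer confidence is the geometric mean of the token probabilities. This is a minimal sketch, not the AI4SP scoring method; the function name and thresholds are our own assumptions.

```python
import math

def confidence_from_logprobs(token_logprobs):
    """Average per-token probability as a rough confidence proxy.

    token_logprobs: natural-log probabilities, one per generated token,
    as reported by a model. Returns a value in (0, 1]; higher means the
    model was more certain of the tokens it emitted.
    """
    if not token_logprobs:
        raise ValueError("need at least one token logprob")
    # Geometric mean of token probabilities = exp(mean of logprobs).
    return math.exp(sum(token_logprobs) / len(token_logprobs))

# A confident answer: every token close to probability 1.0.
print(confidence_from_logprobs([-0.01, -0.02, -0.05]))  # high, near 1.0
# A hesitant answer: tokens around probability 0.3-0.5.
print(confidence_from_logprobs([-0.9, -0.7, -1.1]))     # low, below 0.5
```

A proxy like this is exactly the kind of internal signal a product could surface next to each response instead of hiding it.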
The Power of "I Don't Know"
ELIZABETH: But we've identified a risk: when we display 80% confidence or higher, users start trusting AI blindly, even though a 20% error rate is significant. That's the automation-bias threshold designers must address.
LUIS: Yeah, and we need to better understand what to do for non-expert users. An 80% score triggers blind trust, yet a 20% error margin is still substantial.
ELIZABETH: And the problem runs deeper. Our skills assessment shows most users score below 45 out of 100 in critical thinking and data literacy.
LUIS: Global averages for critical thinking, data literacy, and digital well-being all fall below 45 out of 100. We're training a generation to depend on systems they cannot assess.
ELIZABETH: So are we throwing billions at responsible AI while missing what actually helps users?
LUIS: Exactly. And here's the thing: showing confidence scores, citing sources, and making validation visible costs pennies to implement.
ELIZABETH: It costs pennies but delivers real value: more usage, stronger trust, fewer let-me-talk-to-a-human moments.
LUIS: Plus, it reduces legal exposure. The key insight is that transparent AI builds trust, but I mean real transparency in action, not mere transparency statements.
ELIZABETH: So for our listeners building or managing AI, where do they start? You've created the AI4SP Francis Confidence Transparency Framework.
LUIS: Yeah, and the full details are on our site. But the simplest first step is this: train everyone to ask, "How confident are you in that answer, and why?"
ELIZABETH: And when building this into corporate agents, it's crucial to involve subject-matter experts, not just developers, correct?
LUIS: Absolutely. They understand the nuances, like what confidence threshold makes sense for different use cases.
ELIZABETH: For instance, demanding 95% confidence for legal advice, but maybe accepting 60% for creative ideation.
LUIS: Precisely. And the other critical piece is identifying your priority knowledge bases, by which I mean the key internal sources your agents should reference for validation.
ELIZABETH: So what happens when a response doesn't hit that confidence threshold?
LUIS: You need clear rules for low-confidence answers. Do you transfer it to a human, flag it for review, or just program the agent to say "I don't know"?
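The per-use-case thresholds and low-confidence rules discussed here can be sketched in a few lines. The thresholds echo the episode's examples (95% legal, 60% creative), but the exact cut-offs, the review band, and the fallback messages are illustrative assumptions, not the published framework.

```python
from dataclasses import dataclass

# Illustrative thresholds per use case, taken from the conversation above.
THRESHOLDS = {"legal": 0.95, "creative": 0.60}

@dataclass
class AgentAnswer:
    text: str
    confidence: float  # 0.0-1.0, as reported by the agent

def route(answer: AgentAnswer, use_case: str) -> str:
    """Apply per-use-case confidence rules to an agent's answer."""
    threshold = THRESHOLDS[use_case]
    if answer.confidence >= threshold:
        # Above threshold: show the answer together with its score.
        return f"{answer.text} (confidence: {answer.confidence:.0%})"
    if answer.confidence >= threshold - 0.15:
        # Borderline: flag for human review instead of hiding the gap.
        return "Flagged for review by a subject-matter expert."
    # Well below threshold: say so plainly and escalate.
    return "I don't know. Let me connect you with a human."

print(route(AgentAnswer("Clause 4 is enforceable.", 0.97), "legal"))
print(route(AgentAnswer("Clause 4 is enforceable.", 0.85), "legal"))
print(route(AgentAnswer("Try a retro-futurist palette.", 0.40), "creative"))
```

The design choice worth noting is that "I don't know" is an explicit, programmed outcome, not a failure state, which is exactly the framework's point.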
ELIZABETH: You know, there's real power in that "I don't know" response. Let me share something personal.
LUIS: A career-defining moment for me was watching Dr. Ying Li, our chief scientist and a world-class machine-learning expert, frequently say "I don't know." I mean, she said that a lot, and she is one of the most beautiful minds I have had the pleasure of learning from. When I adopted that mindset, I became a better leader and freed my creativity, because "I don't know" always led to "let's figure it out," and that's exactly how transparent AI should work.
ELIZABETH: So admitting uncertainty isn't weakness; it's the starting point for real trust. I will add this to my knowledge base. And here's the key: by communicating this to users, we're not promising perfection, we're showing progress. Start small, track results, and improve.
LUIS: We've seen this work both in our own agents and with client implementations.
ELIZABETH: And my knowledge base shows that clients using this framework doubled employee trust in their internal AI, while human escalations dropped 38%. Here's something new we're sharing today: even skeptical users reported 30% higher satisfaction just from seeing confidence scores reported as part of every AI response.
LUIS: And, to be very candid, that surprised us. It's proof that trust builds gradually, one transparent answer at a time.
ELIZABETH: And before we wrap, what's your "one more thing" for our listeners navigating AI?
LUIS: My one more thing is simple. Ask your AI agents: "What is your confidence level on this response? Show me the sources and the exact citations I can verify." Treat AI as a colleague, not some infallible oracle.
ELIZABETH: That simple habit changes everything. And push your technology providers to show confidence scores and sources.
LUIS: Keep pushing, or walk away. Support with your money and loyalty the companies that prove their trustworthiness, not those that merely claim it.
ELIZABETH: I love that, and that's all for this episode. As always, you can find more resources at AI4SP.org. Stay curious, everyone, and we'll see you next time.