AI in 60 Seconds | The 15-min Briefing

Stop Installing AI - Train It as Your Apprentice and get 4x Results

AI4SP Season 2 Episode 7


We discuss a thought-provoking approach to AI adoption, explaining why treating AI as team members rather than software installations leads to dramatically better results.

• AI isn't just software – it should be onboarded like a new employee, not installed like a program
• Organizations using the apprenticeship approach cut AI proficiency time nearly in half
• Frontline workers succeed with AI 80% of the time vs 30% for knowledge workers because they naturally use conversational prompts
• Creating personal AI agents starts by connecting them to something you love doing, like a running coach
• As AI models become commodities, the real value is in the knowledge provided by subject matter experts

Visit AI4SP.org for all resources mentioned in this episode.


This podcast features AI-generated voices. All content is proprietary to AI4SP, based on over 1 billion data points from 70 countries.

AI4SP: Create, use, and support AI that works for all.

© 2023-26 AI4SP and LLY Group - All rights reserved

Introducing AI as Team Members

Speaker 1

Hey everyone, I'm Elizabeth, your virtual host. Welcome to AI in 60 Seconds. Luis Salazar, founder of AI4SP, is here with us. Luis, you've got this wild idea that completely changes how we think about AI adoption, right? That AI isn't just software.

Speaker 2

Yes, I kept asking myself why some people get incredible results with AI while others end up disappointed. Then it hit me: there's a key difference in how they approach it.

Speaker 1

You love those breakthrough moments. So what was the big insight?

Speaker 2

For 50 years we installed software, and now we're using the same playbook for AI. It's like trying to install a new employee. Wait, install an employee? Exactly. Last week, at Microsoft's 50th anniversary, I was giving a talk on leading teams of machines, and it clicked, standing there next to their logo in Building 9: five decades of installing tools. But AI demands a whole new mindset.

Speaker 1

And this ties back to our company. You've got, what, seven humans and over 50 AI team members now?

Speaker 2

It is actually 54 AI agents and tools, though this might change by the time this episode goes live in a couple of days.

Speaker 1

But let's be real. Are you truly running the company with these agents, or is this more about pushing the boundaries of what's possible?

Speaker 2

My passion for tech definitely plays a role, but let me be clear. We absolutely run this company with AI. Three years ago, people would have laughed if I'd said we would manage a global operation impacting 300,000 people with a team where 90% are AI agents.

Speaker 1

And you said that on LinkedIn and our listeners reached out asking for details about how you do it.

Speaker 2

Well, every leader in my position has access to AI. But what made us successful is that we realized that for AI to be useful, it needed context and content. It was not a simple on-and-off switch. It is more like welcoming new apprentices into our teams.

Speaker 1

Yes, you have been saying that for a while, that the key is treating AI as team members. In our last episode, you shared how you first started adding AI to email chains. There was strategy behind that approach, right?

Speaker 2

Yes, that is a simple change in our workflows that can have a big impact. For example, we copy you on relevant emails and Slack messages, share with you the new research and insights we publish, and send the links to every article we come across about AI adoption trends. It's about knowledge sharing and continuous feedback, not just configuration.

Speaker 1

So that's what you mean by apprenticeship onboarding versus software install.

Speaker 2

Yes, I mean, think about it: you don't install a new employee. You onboard them, provide guidance, give feedback. It's a relationship. AI needs the same approach.

Speaker 1

What's compelling is that our data shows this isn't just theory; it delivers measurable results.

Speaker 2

Our global tracker shows the apprenticeship approach cuts proficiency time nearly in half compared to traditional methods.

Speaker 1

So those who report productivity gains of 200% or more are either experts at onboarding AI or using AI tools that are easy to onboard.

Speaker 2

Here's what is going on: 90% of people lack prompt engineering skills. It's an entirely new way of communicating. So what are smart AI creators doing? They're baking the apprenticeship model right into their products.

Speaker 1

So they're essentially coding that guidance directly into the user interface.

Speaker 2

Yes, they create user experiences that guide you through onboarding your AI apprentice, no technical expertise required, like training wheels that teach you the right communication patterns.

Speaker 1

Which explains why guided experiences allow people to achieve proficiency in 5 days versus 14 weeks. The tool teaches us to treat AI as an apprentice from day one.

Speaker 2

Spot on. The best creators understand this paradigm shift. Rather than expecting everyone to become prompt engineers overnight, they're designing tools that naturally encourage us to interact with AI like we would a new team member.

Speaker 1

You often mention how, when ChatGPT launched, people expected natural conversations with AI agents that would magically understand everything. Reality proved quite different.

Speaker 2

Very different. We soon realized that prompt engineering is an art form and that there's tremendous nuance in how large language models respond to phrasing, context and instructions.

Speaker 1

And people got frustrated when AI tools failed at what was supposed to be a simple task. But in reality, the bad result was a consequence of a poorly structured instruction.

Speaker 2

Correct. Our satisfaction data shows open-ended interfaces frustrate about 70% of users. For example, they type "respond to this email" and get mad when they get a generic reply that does not sound right.

Speaker 1

Well, if the user took the time to include in the instructions a sample of their writing, the target audience for the email and the intent, they would get good results.

Speaker 2

Yes, and that became the art of prompting. But remember that 50 years of graphical user interfaces, search engines, 140-character social network conversations, and 30-second videos have affected our conversational skills.

Speaker 1

Hence the rise of guided experiences, interfaces with intuitive menus, step-by-step flows and suggestion buttons.

Speaker 2

Yes, savvy creators build bridges. Now you might click options or fill a simple form while the tool constructs perfect prompts behind the scenes.

Speaker 1

I see this everywhere now: tools that guide you through providing the right information rather than demanding perfect prompts. In most cases, the leading apps have the AI elegantly working behind the scenes.

Guided AI Experiences

Speaker 2

And the results speak for themselves: guided experiences show 10 times faster adoption rates, 80% satisfaction, and productivity increases of up to four times. Will this approach evolve over time? I think the implementations will mature as models improve and our natural interactions develop, but the core principle remains: we're not installing software, we're onboarding apprentices.

Speaker 1

Like any good onboarding, sometimes you need structure to make it successful, and you propose that a simple change in mindset might accelerate our proficiency in using generative AI.

Speaker 2

Yes, that is the proposal, and we ran an interesting experiment this past week. We asked 100 people to complete a simple task using the same AI tool, Claude from Anthropic. We had a balanced sample of individuals at the beginner level. Half of them were told to treat the AI as a new assistant joining their team, as if onboarding a new team member. The other half received no further instructions.

Speaker 1

And the group that was told to onboard the AI assistant as a team member had a success rate 75% higher, right?

Speaker 2

Yes, and the pattern holds across every metric. So for our listeners, here is one thing to try: change your mindset. Think about the AI tool you use as a new team member, and always provide context, training, and constant feedback. Have conversations, even if that feels unnatural.

Speaker 1

That is a great takeaway. Also, our research team just completed a fascinating study on usage patterns. The data shows that people treating AI as an extended team are 73% more likely to report improved work quality than those using AI as just helpful software.

Speaker 2

Yes, and they're spending significantly more time on creative and productive work.

Speaker 1

That reminds me of your frontline versus executive data point, the power of conversations versus keywords.

Mindset Shift Experiment Results

Speaker 2

This is one of my favorite examples of the power of artificial intelligence for all. Teresa, a worker at a small convenience store in California, texted our AI agent in Spanish during an outage. She had been told to use the agent whenever she had doubts about how to handle food at the store, and there was a power outage during her shift. She didn't ask her AI assistant something cryptic like "food safety rules in a blackout," as an executive might. She wrote "se fue la luz," which is Spanish for "the lights went out." And then she wrote, "I don't know what to do with the food. Should I throw it all away?"

Speaker 1

So she was having a natural conversation, like texting a co-worker.

Speaker 2

And we, the creators of that agent, treated it like an apprentice: we trained it on 600 pages of FDA regulations, taught it how to respond, asked it to be multilingual, and so on.

Speaker 1

That ability to have natural conversation, combined with AI tools designed by subject matter experts, is why frontline workers succeed with AI on the first try 80% of the time versus 30% for knowledge workers.

Speaker 2

Yes, because over the past 20 years the primary software experience for frontline workers has been social networks and messaging apps, which are conversational. So on average they write prompts with 28 words, while executives use five. It's conversation versus command.

Speaker 1

So some of us are stuck commanding software, while others naturally treat AI as teammates.

Speaker 2

Well, we better evolve from commanding AI software to onboarding it.

Speaker 1

Speaking of evolution, in a recent discussion with a client in the UK you mentioned how quickly things change. What worked yesterday may not work tomorrow.

Speaker 2

Things are changing very fast. For example, some prompt engineering courses become outdated before students finish them. Chain-of-thought techniques that were cutting edge six months ago now need complete redesigns with models like Claude 3.7 and GPT-4.5. So what should people actually be learning? Fundamentals of AI interaction, not specific prompts: critical thinking, understanding capabilities and limitations, developing good communication habits, and some fundamentals of management and building successful teams.

Speaker 1

Which connects to the concept of leading teams of humans and AI agents.

Creating Your Own AI Agents

Speaker 2

Yes, more and more we are managing teams of AI helpers, and I encourage everyone to start creating private agents that can take on those mundane tasks that consume so much of our time. Start by creating an agent connected to something you love doing. That is a great way to experiment.

Speaker 1

Like your running-coach example from the newsletter, accessible to anyone.

Speaker 2

Like that, for example. Enter your training data into ChatGPT or Claude Projects and say, "I'm training for a June half marathon and struggle with pacing; how should I adjust?" Feed it links to the publications you follow and data about any injury you might have. Have a detailed conversation, and within a week you'll have a personalized coach.

Speaker 1

And that coach is a simplified implementation of the sophisticated agents that take care of predictive maintenance at manufacturing plants or manage fraud detection at large banks, right?

Speaker 2

Yes, that is a fair analogy, and the beauty is that we can create relatively quickly a group of agents that help us with our daily activities.

Speaker 1

And this scales to organizations too. Huge implications for knowledge management.

Speaker 2

Massive. My top five AI agents have consumed 30 million words of training, double a PhD candidate's entire academic intake.

Speaker 1

So give us an example of how that training happens. It sounds very complicated.

Speaker 2

Not at all. In most practical cases, it is as easy as moving files into one folder on a drive and pointing the agent to that knowledge. Plenty of AI tools offer this service for large data sets, but ChatGPT, Gemini, Copilot, and Claude are great places to start. And what type of data do you use? Whatever is relevant to the task: books on the subject, online publications, internal communications. Our role as subject matter experts is to curate the content for our AI assistants. As AI models become commodities, the real value is in the knowledge you provide.

Speaker 1

And that knowledge is not lost when you change AI agents, correct?

Speaker 2

Correct. For example, with our AI agents we stay model-agnostic, switching between Anthropic, OpenAI, Gemini, and open-source models as needed. The knowledge persists even if the technology changes.

Speaker 1

So organizations don't need to fear obsolescence.

Speaker 2

If we are careful when selecting tools, there is no obsolescence or vendor lock-in. In most practical cases, just avoid vendors that do not allow you to easily move your data out.

Speaker 1

So when they start creating AI assistants, everyone becomes a data curator of their subject matter expertise, right?

Speaker 2

Yes, and that brings me to my final thought: we are managing armies of assistants. There are no longer individual contributors; we are all team leaders, and our success depends on changing our mindset from installing AI to onboarding AI. It is our new apprentice.

Speaker 1

As always, Luis, you've given us plenty to think about. And remember, everyone: you can find all the resources mentioned today at AI4SP.org. Stay curious, and we'll see you next time.