Digital Transformation Playbook

Seven Practical Lessons from Companies Making AI Work

Kieran Gilmurray

Forget the exaggerated promises and theoretical applications of AI. This AI-generated episode dives deep into the tangible, measurable results that seven leading companies are achieving right now with artificial intelligence in their daily operations.

TLDR:

  • Implementing AI requires an experimental, iterative mindset—it's a fundamental paradigm shift, not just installing software
  • Morgan Stanley achieved 98% daily AI adoption among advisors after rigorous testing, dramatically improving document access and client relationships
  • Indeed boosted job application starts by 20% by using AI to explain why specific positions match job seekers' profiles
  • Klarna reduced customer service resolution times from 11 to 2 minutes while maintaining satisfaction, projecting $40M annual profit improvement
  • Customizing AI models produces significant gains—Lowe's saw 20% improvement in product tagging accuracy through fine-tuning
  • BBVA empowered 125,000 employees with AI access, resulting in 2,900 custom GPTs created in just five months
  • Removing developer bottlenecks accelerates innovation—MercadoLibre built a unified AI platform that catalogues 100x more items
  • Setting bold automation goals pays off, as demonstrated by OpenAI's internal system handling hundreds of thousands of tasks monthly

The transformation is real. Morgan Stanley has 98% of their advisors using AI tools daily, dramatically shifting their time from document searches to client relationships. Indeed increased job application starts by 20% through personalized AI-powered explanations. Klarna slashed customer service resolution times from 11 minutes to just 2 minutes while maintaining satisfaction levels, projecting a staggering $40 million annual profit improvement.

What separates these success stories from the countless stalled AI initiatives elsewhere? We extract seven critical lessons that apply across industries: start with rigorous evaluations that prove value before scaling; embed AI directly into your products to enhance customer experiences; begin early since AI value compounds over time through iteration; customize models on your specific data for dramatic accuracy improvements; put AI tools in the hands of domain experts (BBVA saw employees create 2,900 custom GPTs in just five months); unblock your developers through unified platforms; and set bold automation goals from the beginning.

The most important insight? Implementing AI isn't merely about installing new software—it represents a fundamental paradigm shift requiring an experimental, iterative mindset. The companies seeing the greatest returns approach AI as a continuous feedback loop rather than a one-time deployment, using each interaction to improve their systems. Could your organization be the next success story? Listen now to discover how to move beyond the hype and create measurable AI impact in your business.

Support the show


Contact my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

Speaker 1:

Welcome to the Deep Dive. Today, we're really getting into how artificial intelligence is actually being used successfully inside companies. Forget the hype for a second.

Speaker 2:

Yeah, we're looking at real world stuff.

Speaker 1:

Exactly. We've got some great stories, lessons learned, from seven companies, really, who are kind of leading the way. Think of this as your, you know, practical guide to enterprise AI: what works, what doesn't.

Speaker 2:

Right, and these insights come from digging into what these seven specific companies did. We're focusing on the actionable bits, the things you can actually learn from, not just theory. So the mission today? Basically to pull out the most useful, maybe even the most surprising, takeaways on how to actually make AI work in a business setting. It's about understanding the key ideas and hopefully avoiding some common traps.

Speaker 1:

Okay, let's kick things off. One thing that really jumps out is that bringing AI into a company, it's not just installing new software, is it?

Speaker 2:

No, not at all. The source calls it a new paradigm and that feels right. It's a fundamental shift. You need this experimental, iterative mindset. You try something, you see what happens, you adjust.

Speaker 1:

It's a different way of working really.

Speaker 2:

Definitely, and we're seeing this shift make a big difference in three main areas, according to the analysis. First is workforce performance people getting more done, better quality, faster.

Speaker 1:

Okay, that makes sense.

Speaker 2:

Second, automating the routine stuff, you know, the tasks that just eat up time, freeing people up for well more valuable work, the higher level thinking. Exactly. And third, actually powering the products themselves, making customer experiences feel more personal, more responsive.

Speaker 1:

Interesting and OpenAI itself how they work kind of mirrors this right with their different teams.

Speaker 2:

Yeah, they've got research applied, deployment teams, this constant loop. They learn from how customers are actually using the AI and then they feed that learning straight back into improving the tech.

Speaker 1:

It's very iterative so deployment isn't the end point.

Speaker 2:

It's part of the learning cycle, precisely. That continuous feedback is crucial, which actually leads us perfectly into the first big lesson: start with evals.

Speaker 1:

Evals? Evaluations, right. So test things properly.

Speaker 2:

Exactly. It's about having a systematic process. You need to measure how well the AI model actually performs for the specific thing you want it to do, not just in general, but for your use case.

Speaker 1:

Right, because how else do you know if it's actually helping? Morgan Stanley is a great example here, isn't it?

Speaker 2:

Oh, absolutely. They're in finance, very relationship driven. So naturally they were asking okay, how can AI really help us here?

Speaker 1:

Yeah, seems like a valid question for their business.

Speaker 2:

So their approach was intense evaluations for every potential AI application, no exceptions.

Speaker 1:

And these weren't just quick checks, were they? I mean, they looked at things like translation.

Speaker 2:

Right, not just accuracy, but the quality of the translation. Did it sound natural?

Speaker 1:

And summarizing documents.

Speaker 2:

Again. Accuracy, sure, but also relevance, coherence. Did the summary actually make sense and capture the key points? They even compared the AI's output to what their own expert advisors would do.

Speaker 1:

Wow, that's thorough. So comparing it against their best people.

Speaker 2:

Exactly and look at the results. After doing all that careful testing, they got the confidence to roll it out wider.

Speaker 1:

And the adoption rate is huge now.

Speaker 2:

It is. Something like 98% of their advisors use these OpenAI tools daily. Daily. And think about finding information: document access went from 20% to 80%.

Speaker 1:

That's a massive jump, and finding stuff is much faster.

Speaker 2:

The bottom line? Advisors spend way more time actually talking to clients, building those relationships.

Speaker 1:

Which is exactly what they wanted. I saw a quote from Caitlin Elliott there, mentioning how positive the feedback was, how much faster they could follow up with clients: hours instead of days sometimes.

Speaker 2:

That's it. Proving the value upfront with data built that essential trust with the advisors. And these evals, just to be clear, can measure accuracy, relevance, even compliance if that's relevant. Safety is always key. What you measure depends entirely on what you're trying to achieve with that specific application. It ensures the AI is effective, but also reliable and safe.
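The eval loop the speakers describe can be sketched in a few lines of Python. This is only an illustration, not Morgan Stanley's actual harness: the scoring function, the test cases, and the `toy_summarize` stand-in model are all hypothetical, and a real harness would call an LLM and use richer metrics (coherence, compliance, comparisons against expert output).

```python
# Minimal eval harness sketch: score a model's summaries against
# expert-chosen key points. All names and cases here are illustrative.

def keyword_coverage(candidate: str, reference_keywords: list[str]) -> float:
    """Fraction of expert-chosen key points that appear in the candidate."""
    text = candidate.lower()
    hits = sum(1 for kw in reference_keywords if kw.lower() in text)
    return hits / len(reference_keywords) if reference_keywords else 0.0

def run_eval(summarize, cases, threshold=0.8):
    """Run `summarize` over test cases; return per-case scores and pass/fail."""
    scores = []
    for case in cases:
        summary = summarize(case["document"])
        scores.append(keyword_coverage(summary, case["key_points"]))
    mean = sum(scores) / len(scores)
    return {"scores": scores, "mean": mean, "passed": mean >= threshold}

# Stand-in "model" for demonstration; a real harness would call an LLM here.
def toy_summarize(doc: str) -> str:
    return doc[:80]

cases = [
    {"document": "Revenue grew 12% on strong retail demand.",
     "key_points": ["revenue", "12%"]},
    {"document": "Margins fell due to higher logistics costs.",
     "key_points": ["margins", "logistics"]},
]
result = run_eval(toy_summarize, cases)
```

The point of even a toy harness like this is the gate: nothing rolls out wider until the measured score clears a threshold you chose for that specific use case.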

Speaker 1:

Okay, lesson one test rigorously, prove the value. Got it? So lesson two embed AI into your products. This sounds like moving AI from behind the scenes to front and center with the customer.

Speaker 2:

You've got it. Embed it right into the services you offer. It can automate a lot of the drudge work, freeing up your team, sure, but the bigger thing is personalization. AI can process just vast amounts of data, right? So it can help tailor the experience, make it feel more relevant, more human, ironically, because it understands the customer better.

Speaker 1:

Okay, like it knows what I need before I even ask.

Speaker 2:

Sort of, yeah. Look at Indeed, the job site. Huge platform.

Speaker 1:

Everyone knows Indeed.

Speaker 2:

They're using GPT-4o mini, the smaller, efficient model, to improve how they match jobs to people.

Speaker 1:

How does that work in practice?

Speaker 2:

It's not just showing a list of jobs anymore. They're focusing on these "why" statements.

Speaker 1:

Why?

Speaker 2:

Exactly. Using AI, they analyze your data and the job description, and then they explain why this specific job is a good fit for you. In emails, in messages.

Speaker 1:

Ah, so it connects the dots with the job seeker.

Speaker 2:

Precisely. Their "invite to apply" feature does this too, highlighting where your profile matches the job spec. The key is that personalized explanation, the "why". It really boosts engagement.
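A "why this job fits you" statement is ultimately a prompting pattern: assemble the overlap between the candidate's profile and the job spec, then ask the model to explain it. Here's a minimal sketch; the field names, prompt wording, and example data are entirely hypothetical, not Indeed's actual pipeline.

```python
# Sketch of building a "why this job fits you" prompt from profile and
# job data. A fine-tuned model would receive this as its user message.

def build_why_prompt(profile: dict, job: dict) -> str:
    # Find skills shared by the candidate and the job spec.
    overlap = sorted(set(profile["skills"]) & set(job["required_skills"]))
    return (
        f"Candidate skills: {', '.join(profile['skills'])}\n"
        f"Job: {job['title']} requiring {', '.join(job['required_skills'])}\n"
        f"Matching skills: {', '.join(overlap)}\n"
        "In two sentences, explain why this job is a good fit, "
        "citing the matching skills explicitly."
    )

profile = {"skills": ["python", "sql", "etl"]}
job = {"title": "Data Engineer", "required_skills": ["python", "sql", "spark"]}
prompt = build_why_prompt(profile, job)
```

Surfacing the computed overlap in the prompt, rather than asking the model to find it, keeps the explanation grounded in data the platform can verify.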

Speaker 1:

And does it work? Do they see results?

Speaker 2:

Oh yeah, pretty significant ones. In their tests they saw a 20% jump in people starting applications, and a 13% increase in what they call downstream success: basically, more people actually getting hired.

Speaker 1:

That's fantastic, especially considering their scale. How many messages are we talking about?

Speaker 2:

Over 20 million messages like that a month, and they have what, 350 million visitors monthly?

Speaker 1:

Yeah.

Speaker 2:

So those percentages mean a huge number of better matches potentially.

Speaker 1:

Incredible and they manage this efficiently.

Speaker 2:

Yes, that was key. They worked with OpenAI to fine-tune a smaller model that gave similar results but used 60% fewer tokens. Much more efficient at scale. Their CEO, Chris Hyams, mentioned this opens up real revenue opportunities because the experience is just better for everyone involved.

Speaker 1:

So embedding AI connects people better, makes things more relevant. Okay, that makes sense. Lesson three, then: start now and invest early. Sounds pretty direct.

Speaker 2:

It is. The core idea is simple: AI value compounds. It builds over time. The more you iterate, the more you learn, the better it gets. So the sooner you start experimenting and investing, the bigger the payoff down the road. Like compound interest for innovation? Kind of, yeah. Klarna is a great case study here, the payments and shopping company.

Speaker 1:

Right, I know Klarna.

Speaker 2:

They developed an AI assistant for customer service, and pretty quickly it was handling two-thirds of their service chats. Two-thirds, that's huge. It is, the equivalent of hundreds of human agents. And resolution times dropped dramatically, from about 11 minutes down to just two.

Speaker 1:

Wow, and the financial impact.

Speaker 2:

Projected $40 million profit improvement annually. And, crucially, customer satisfaction stayed just as high as with human agents.

Speaker 1:

But that didn't happen instantly, right.

Speaker 2:

Absolutely not. That's the point. It came from constant testing, learning from every chat, refining the AI over time. That's the iteration effect.

Speaker 1:

And it's not just customer service at Klarna is it. Ai uses broader.

Speaker 2:

Much broader. Around 90% of their employees use AI tools daily now. 90%, yeah. So the whole company is getting comfortable with it, using it to speed up projects and improve customer experience in other ways. Their CEO, Sebastian Siemiatkowski, talked about this leading to better customer outcomes, more interesting work for employees, and better returns for investors. It's a win-win-win.

Speaker 1:

So early investment plus widespread adoption equals continuous improvement, makes sense, which leads us, I guess, to customizing these things. Lesson four customize and fine-tune your models.

Speaker 2:

Exactly. Off-the-shelf AI is good, but tailoring it is often where the real magic happens. The companies getting the biggest wins are usually the ones investing time to customize and train models on their own data. OpenAI has even put effort into their API to make this easier.

Speaker 1:

Okay, who's a good example of this?

Speaker 2:

Lowe's, the home improvement retailer. Think about their challenge: a massive product range, data coming from all sorts of suppliers, often inconsistent or incomplete.

Speaker 1:

Right. Trying to describe everything accurately must be tough.

Speaker 2:

And understanding how people actually search for things online. It's complex, so standard AI might struggle with all that specific product knowledge and customer behavior.

Speaker 1:

So what did Lowe's do?

Speaker 2:

They fine-tuned OpenAI models using their own product information. The result: a 20% improvement in how accurately they could tag products. 20% better tagging is significant for search results. And a 60% improvement in detecting errors in the product data itself. Nishant Gupta from Lowe's was pretty excited, calling it a major step forward for their online experience.

Speaker 1:

So fine-tuning is like tailoring.

Speaker 2:

That's a perfect analogy. A general model is like a suit off the rack: pretty good. Fine-tuning is like getting that suit tailored perfectly to you. It just fits better for your specific needs.

Speaker 1:

And the benefits are clearer accuracy.

Speaker 2:

Yep, because it learns your data.

Speaker 1:

Better understanding of industry jargon.

Speaker 2:

Domain expertise exactly.

Speaker 1:

Consistent brand voice.

Speaker 2:

Crucial for branding.

Speaker 1:

And just faster results overall, less fixing needed.

Speaker 2:

Right, less manual editing. The source also mentioned OpenAI's vision fine-tuning. Imagine applying this to analyzing images for e-commerce, or medical scans, or even autonomous driving: tailoring visual understanding.
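Mechanically, fine-tuning like Lowe's did starts with preparing training examples: for OpenAI models, a JSONL file where each line is a short chat transcript ending in the desired answer. A minimal sketch follows; the product descriptions, tags, and file name are made up for illustration, and the commented-out API call at the end shows roughly where a real job would be launched.

```python
import json

# Sketch: prepare a chat-format JSONL training file for fine-tuning a
# model on product tagging. The example data is illustrative only.

examples = [
    {"description": "18V cordless drill with two batteries",
     "tags": "power tools, drills, cordless"},
    {"description": "Semi-gloss interior latex paint, 1 gallon",
     "tags": "paint, interior, latex"},
]

def to_chat_example(ex: dict) -> dict:
    """One training record: system instruction, user input, ideal answer."""
    return {"messages": [
        {"role": "system", "content": "Tag the product description."},
        {"role": "user", "content": ex["description"]},
        {"role": "assistant", "content": ex["tags"]},
    ]}

with open("tagging_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(to_chat_example(ex)) + "\n")

# A job would then be launched roughly like this (requires an API key,
# and in practice far more than two examples):
# from openai import OpenAI
# client = OpenAI()
# training_file = client.files.create(
#     file=open("tagging_train.jsonl", "rb"), purpose="fine-tune")
# client.fine_tuning.jobs.create(
#     training_file=training_file.id, model="gpt-4o-mini-2024-07-18")
```

The hard part is rarely the API call; it's curating enough clean, representative input/output pairs from your own data, which is exactly where a retailer's product catalogue becomes the asset.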

Speaker 1:

Okay, so customization is key for optimal results. Then, lesson five Get AI in the hands of experts. This means the employees doing the actual work.

Speaker 2:

Precisely the people on the front lines, the ones closest to the processes, often have the best ideas for how AI can help. They know the pain points intimately. Trying to design solutions from the top down often misses practical realities.

Speaker 1:

It makes sense Give the tools to the people who know the job. Who's doing this well?

Speaker 2:

BBVA, the global bank. They took a bold step: gave 125,000 employees access to ChatGPT Enterprise.

Speaker 1:

Wow, that's a lot of people. How did they manage the risk?

Speaker 2:

Carefully. They involved legal, compliance, and IT security right from the start to ensure responsible use. Set up guardrails.

Speaker 1:

And are employees actually building things?

Speaker 2:

Yes. Elena Alfaro, who leads AI adoption there, said the platform is really easy to use. People can quickly create their own custom GPTs to solve specific problems they face, cutting down prototype time significantly.

Speaker 1:

How many have they built?

Speaker 2:

In just five months, employees created over 2,900 custom GPTs.

Speaker 1:

What kind of things are they used for?

Speaker 2:

All sorts. The credit risk team uses them for faster, more accurate creditworthiness checks. Legal built one that answers about 40,000 policy questions a year. Customer service is automating sentiment analysis on feedback.

Speaker 1:

So it's spreading organically through different departments.

Speaker 2:

Exactly. Marketing, risk, operations. They see it as investing in their people, amplifying their skills. And that deep research feature is a big one too. ChatGPT Enterprise can pull together info from hundreds of sources and create these really in-depth reports. BBVA found it saves about four hours per complex research task on average, putting PhD-level research power in the hands of experts.

Speaker 1:

Empowering the experts really unlocks potential. Okay, so that leads to lesson six Unblock your developers. This sounds like a resource issue.

Speaker 2:

It often is. Developer teams are frequently stretched thin, and that becomes a major bottleneck for rolling out new innovations, especially AI applications.

Speaker 1:

So ideas get stuck in the queue.

Speaker 2:

Pretty much. MercadoLibre, the big Latin American e-commerce and fintech company, tackled this head on. They worked with OpenAI to build a unified platform called Verdi.

Speaker 1:

Verdi? What does it do?

Speaker 2:

It's essentially a central workbench for their 17,000 developers. Powered by GPT-4o and GPT-4o mini, it speeds up building and deploying AI apps.

Speaker 1:

How?

Speaker 2:

It cleverly integrates language models, Python code, and APIs, all accessible via natural language, so developers can build faster and more consistently without getting lost in the weeds of the underlying code. Plus security, guardrails, routing: it's all built in.

Speaker 1:

So it streamlines the whole process for developers. What impact did it have?

Speaker 2:

Dramatic acceleration, they said. Specific examples: they improved inventory management using the vision capabilities to tag products, cataloguing 100 times more items. They boosted fraud detection accuracy to nearly 99% by analyzing millions of listings daily. They automated translation of product descriptions for different dialects, generated review summaries that increased orders, personalized notifications.

Speaker 1:

So a whole range of improvements across the business, driven by making it easier for developers.

Speaker 2:

Yes, and they plan to use Verdi even more in logistics and other areas. Their tech SVP, Sebastian Barrios, said it lowers the cognitive load for developers and just lets the whole organization innovate faster.

Speaker 1:

Makes sense. Remove the friction, speed things up. Okay, final lesson, lesson seven set bold automation goals, using OpenAI itself as the example.

Speaker 2:

Right. If anyone should be good at automating internally with AI, it's them. They looked at their own support teams.

Speaker 1:

And found inefficiencies.

Speaker 2:

They saw people spending a lot of time on necessary but repetitive tasks, digging through different systems, understanding customer context, writing standard replies, taking simple actions.

Speaker 1:

Stuff that could potentially be automated.

Speaker 2:

Exactly. So they built an internal automation platform that sits on top of their existing tools. The whole goal: automate the routine tasks, speed up getting insights, and take action on customer issues.

Speaker 1:

What was the first thing they automated?

Speaker 2:

Customer responses and actions, right within Gmail. The platform could instantly pull up customer data, find relevant knowledge base articles, draft a reply, or trigger an action like updating an account.

Speaker 1:

And the result.

Speaker 2:

Big efficiency gains: faster responses, teams more focused on the customer, not the process. It now handles hundreds of thousands of tasks monthly and is spreading to other teams. The key was setting ambitious goals from the start, not just accepting the status quo.

Speaker 1:

So aim high with automation. Okay, these seven lessons paint a pretty clear picture.

Speaker 2:

They really do, and the takeaway seems to be that, while the specific AI uses vary, the successful approaches have common themes: that open, experimental mindset we talked about; rigorous evaluations focusing on safety; starting with projects that give a good return relatively easily; and then just iterating, learning, and expanding.

Speaker 1:

And the results are tangible, aren't they? Faster processes, better accuracy, more personalized customer experiences and, importantly, more rewarding work for the employees themselves.

Speaker 2:

Absolutely, and we're seeing this trend now of integrating AI workflows with other tools, using agents to handle more complex multi-step tasks.

Speaker 1:

Like OpenAI's operator concept.

Speaker 2:

Exactly. An agent that can navigate the web, use apps, gather data autonomously, almost like a human assistant, without needing complex custom integrations. Think about automating software testing or updating different systems automatically. That's where things seem to be heading.

Speaker 1:

Very interesting. Now, before we finish, we absolutely have to mention security and privacy. That's always a huge concern with enterprise data.

Speaker 2:

Critical, and the source reassures on this. Key points: the enterprise owns its data, and it's not used to train OpenAI's general models. They offer enterprise-grade compliance, SOC 2 Type 2, CSA STAR Level 1. Encryption, granular access controls, flexible data retention settings. Enterprises need that control and assurance.

Speaker 1:

Essential for trust absolutely.

Speaker 2:

It really is. Building that trust is fundamental.

Speaker 1:

Okay. So wrapping this up, this deep dive really shows enterprise AI isn't magic, it's a journey. It's about experimenting, evaluating carefully and embedding AI smartly where it adds real value, empowering people both inside the company and the customers they serve.

Speaker 2:

Customer interaction could be transformed if you adopted this kind of iterative AI approach. If you empower the people who know the work best, what could be the first small eval you might run to just start exploring?

Speaker 1:

That's a great question to ponder. And if you do want to dig deeper into any of this, OpenAI for business, those specific case studies, ChatGPT Enterprise features, safety details, the API, check out the resources listed in our source material.

Speaker 2:

Plenty more detail there.

Speaker 1:

Definitely Well. Thanks so much for joining us on this deep dive today.

Speaker 2:

My pleasure.

People on this episode