The Impact Team Gulf

AI Execution - The Uncomfortable Truth

The Impact Team Gulf Season 10 Episode 5

In this episode, Mark Rothwell-Brooks is joined by Debra Bilikha to explore why organisations — especially banks and financial institutions across the GCC — are struggling to move from AI enthusiasm to AI execution. While pilots and demos are everywhere, enterprise-scale deployment remains rare. The core issue isn't AI itself, but a lack of AI readiness.

The discussion highlights the biggest barriers to implementation: chaotic data landscapes, missing governance frameworks, shadow AI, weak monitoring, and the absence of meaningful outcome-based measurement. AI exposes underlying organisational weaknesses, particularly in data quality and accessibility, which account for the majority of delivery effort.

Mark:

Welcome to the Impact Team Gulf Podcast. I'm Mark Rothwell-Brooks. Today we're diving into one of the biggest shifts hitting organizations, especially banks and financial institutions, across the GCC: the move from AI excitement to AI execution. Now everyone wants AI, everyone's running pilots, but the real question is, why is it still so hard to get AI working at scale? So from chaotic data landscapes and weak governance to shadow AI, monitoring challenges, and the uncomfortable truth about measuring actual return on investment, we're going to break down the real barriers leaders are facing as they try to modernize their organizations. This isn't hype; this is the practical, unfiltered reality of what it takes to implement AI inside a regulated, high-stakes environment, and what needs to change for banks to truly capture value. So let's get into it.

Let's start with something that's becoming increasingly clear. Organizations don't actually have an AI problem, they have an AI readiness problem. Buying an AI tool is easy, but creating an environment where that tool drives genuine business value is very difficult. Across the GCC, we're seeing the same pattern: great pilots, great demos, lots of PowerPoints. But when you ask, where is the scaled enterprise deployment? things get very quiet. And the reason is simple. Most organizations haven't built the operating model required for AI at scale. They haven't redesigned workflows, they haven't fixed the underlying data problems, and they haven't built the necessary governance framework to ensure AI can be deployed safely and consistently. AI exposes all the weaknesses that were already there.

And if we're honest, data is where most AI initiatives go to die. It's not glamorous, it's not strategic, it's not something you can talk about on stage at a conference, but it is the foundation for everything. In most banks, the data landscape looks like this: core platforms full of legacy data models, business units storing data in incompatible formats, critical documents trapped in PDFs, emails, or scanned images, no clear catalogue of what data actually exists, and no easy way to access it across teams. And here's the uncomfortable truth: 70 to 80% of AI delivery is simply getting the data ready. The model itself is the easy part. So when an organization says, we want generative AI, the real question is: do you have data that's reliable, accurate, classified, and accessible in real time? And most don't. And that's why value is slow without a strong data foundation. Debra Bilikha runs her own AI consulting practice.
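As a rough illustration of that readiness gap, here's a minimal sketch of the kind of data-readiness check a team might run before any model work. The catalogue set, field names, and sample data are hypothetical, and a real pipeline would profile every source system rather than a single extract.

```python
# A minimal sketch of a pre-project data-readiness check, assuming a pandas
# DataFrame per source extract. CATALOGUED_FIELDS and the sample data are
# hypothetical; real pipelines would profile every source system.
import pandas as pd

CATALOGUED_FIELDS = {"customer_id", "onboarding_date", "risk_rating"}

def readiness_report(df: pd.DataFrame) -> dict:
    """Quick profile: completeness, duplication, and catalogue coverage."""
    completeness = 1.0 - df.isna().mean().mean()        # share of non-null cells
    duplication = df.duplicated().mean()                # share of fully duplicated rows
    uncatalogued = set(df.columns) - CATALOGUED_FIELDS  # fields nobody has classified
    return {
        "completeness": round(float(completeness), 3),
        "duplicate_rows": round(float(duplication), 3),
        "uncatalogued_fields": sorted(uncatalogued),
    }

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2],
        "onboarding_date": ["2024-01-03", None, None],
        "notes": ["scanned PDF", "email thread", "email thread"],
    })
    print(readiness_report(sample))
```

Even a crude report like this surfaces the problems Mark describes: missing values, duplicated records, and fields no one has classified.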

Debra:

You don't really have any chance of leveraging the benefits of AI, because obviously if you've got rubbish data, unstructured data, then you get rubbish AI, and then you start to see hallucinations in your AI implementations. So putting together a data strategy is a very, very key piece of the puzzle.

Mark:

So where is the new front line of AI risk? And this is where things get even more interesting, especially in the financial sector. AI governance is not just an extension of IT governance, it's a completely different discipline. You need to understand where the models are used, what data they're trained on, how you validate the output, who signs it off, how you comply with the regulators, and how you stop unapproved AI usage. Because shadow AI is real. Employees, often with the best intentions, are using unapproved AI tools to solve daily problems. And I've talked about this in a previous podcast. They're uploading documents, they're feeding in customer information, and organizations are completely unaware. Regulators across the GCC are already asking hard questions. How did the model reach this conclusion? What controls exist around the training data? How are you monitoring bias, drift, erroneous conclusions? AI governance is no longer an optional framework. It's becoming the new cybersecurity.
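To make those governance questions concrete, here's a minimal sketch of a model inventory record, the kind of register where shadow AI shows up as unknowns and gaps. The field names and the deployability rule are illustrative assumptions, not a regulatory standard.

```python
# A minimal sketch of a model inventory record answering the governance
# questions above. Field names and the deployability rule are illustrative
# assumptions, not a regulatory standard. Requires Python 3.10+.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    name: str
    business_use: str                 # where the model is used
    training_data: str                # what it was trained on
    validation_method: str            # how the output is validated
    approved_by: str | None = None    # who signed it off
    last_reviewed: date | None = None

    def is_deployable(self) -> bool:
        """Unapproved or unreviewed models stay out of production."""
        return self.approved_by is not None and self.last_reviewed is not None

registry = [
    ModelRecord("credit-scoring-v3", "retail lending", "2019-2023 loan book",
                "annual back-test", approved_by="Model Risk Committee",
                last_reviewed=date(2025, 3, 1)),
    # Shadow AI shows up in a register like this as unknowns and gaps
    ModelRecord("team-chatbot-side-usage", "unknown", "unknown", "none"),
]

for model in registry:
    print(f"{model.name}: deployable = {model.is_deployable()}")
```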

Debra:

The experience that I see is that everybody just says, I want a tool, I want to build it, and I want to go to market. And sometimes when they implement some of these AI solutions, at the point of production, when you start to look at these real AI governance frameworks, you go: you don't meet this regulation, you don't meet this, you don't meet this, and you can't go live. So it's really important to understand the governance frameworks and the regulatory environment you're working within before you actually start implementing your AI solution.

Mark:

One of the most misunderstood aspects of AI is monitoring. People assume once the model is deployed, the job is done. And that couldn't be further from the truth. Models degrade, data changes, customer behavior evolves, fraud patterns shift. And if you don't monitor your AI models continuously, you won't detect bias creeping in. You won't catch performance dropping. You won't spot hallucinations early, and you won't satisfy regulatory expectations.
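One common way to catch that kind of degradation is the Population Stability Index, which compares the live score distribution against the training-time baseline. Here's a minimal sketch; the 0.25 threshold is a conventional rule of thumb, not a regulatory value, and the score distributions are simulated.

```python
# A minimal sketch of drift detection with the Population Stability Index
# (PSI), one common way to catch the degradation described above.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a live score distribution against the training baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparsely populated bins
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.50, 0.10, 10_000)  # model scores at validation time
live = rng.normal(0.58, 0.12, 10_000)      # customer behaviour has shifted

print(f"PSI = {psi(baseline, live):.3f}")  # above 0.25 is commonly treated as material drift
```

Run continuously rather than quarterly, a check like this is what turns "deploy and forget" into real oversight.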

Debra:

Once that's in place, then we can start to say, okay, well, now we know what your North Star is, now we know what your business outcomes are, now we know what your data landscape is. And if we've structured that, you know, we know who's governing the data, all of that good stuff, then we can start to say, okay, we can then start building some automated workflows, improve your processes, and so on. And then we can layer or overlay AI on top of it. So in a nutshell, AI is the last thing that comes in.

Mark:

Banks, especially, need real-time monitoring. Not quarterly reviews, not annual validation, but continuous oversight. And with the increased use of LLMs, you need additional layers of protection: output filters, prompt governance, hallucination detection, and LLM firewalls like the solutions emerging in the market, Contextual being an example.

So let's talk honestly about measurement. Now every vendor will tell you their AI tool increases productivity by 20, 30, 40%. But the real question is, did it improve the outcomes that matter? For example, did it shorten onboarding cycles? Did it reduce audit findings? Did it decrease operational cost or risk? Did it make regulatory submissions more accurate? Did it reduce your exposure? These are all metrics that matter. But most organizations don't measure them. They measure activity instead of outcomes. And this is why so many AI programs struggle to justify return on investment: because there's no clear link between the tool and the impact. If you want AI to scale, you need to define which metrics matter from day one. Not as an afterthought, not at the end of the project, but day one.

Here's the part people rarely admit. The biggest barriers to AI adoption aren't technical at all. They're cultural. Employees worry AI will replace them, not helped by the media coverage of AI. Managers worry that AI will expose inefficiencies. Leaders want AI but can't articulate what problem it should actually solve. Many organizations launch AI initiatives with no clear owner, no defined accountability, and no consistent decision-making. And without cultural alignment, AI becomes just another IT tool. Boys with their toys, not an organizational capability. The organizations that succeed treat AI like a change program, not a technology upgrade. And I've said before, digital transformation, I believe, is done; AI transformation is where it's at. These organizations are investing in training, they're building cross-functional teams, they're aligning incentives, and they create a culture where experimenting with AI is safe.
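Picking up the LLM protection layers from a moment ago, here's a minimal sketch of an output filter: block sensitive patterns and apply a crude grounding check against the retrieved source text. The patterns and the overlap heuristic are illustrative assumptions; commercial LLM firewalls use far more sophisticated checks.

```python
# A minimal sketch of an LLM output filter. BLOCKED_PATTERNS and the word-
# overlap grounding heuristic are illustrative assumptions, not how any
# particular commercial LLM firewall works.
import re

BLOCKED_PATTERNS = [r"\b\d{16}\b"]  # e.g. raw card numbers must never leave

def passes_guardrails(answer: str, source_context: str) -> bool:
    """Reject output that leaks blocked data or is not grounded in the context."""
    if any(re.search(p, answer) for p in BLOCKED_PATTERNS):
        return False
    answer_terms = set(answer.lower().split())
    context_terms = set(source_context.lower().split())
    grounding = len(answer_terms & context_terms) / max(len(answer_terms), 1)
    return grounding >= 0.5  # crude proxy for "supported by the source"

context = "The onboarding SLA for retail customers is five working days."
print(passes_guardrails("The SLA is five working days.", context))          # True
print(passes_guardrails("Your card 4111111111111111 is active.", context))  # False
```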

Debra:

A lot of organizations struggle to articulate what the business outcomes are. And it goes back to the same problem. If you don't do that in the beginning, yes, you're going to have a very, very nice, shiny tool. And then you get to the end and you're like, well, why did I do this? What's it really achieving for me? So what does good look like?
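"What does good look like" can be pinned down as outcome metrics defined on day one, as Mark argued earlier. Here's a minimal sketch; the metric names, baselines, and targets are hypothetical examples.

```python
# A minimal sketch of outcome-based measurement defined on day one, as the
# discussion urges. Metric names, baselines, and targets are hypothetical.
from dataclasses import dataclass

@dataclass
class OutcomeMetric:
    name: str
    baseline: float
    target: float
    current: float

    def achieved(self) -> bool:
        # Lower is better for these example metrics (days, findings)
        return self.current <= self.target

day_one_metrics = [
    OutcomeMetric("onboarding cycle (days)", baseline=12.0, target=7.0, current=8.5),
    OutcomeMetric("audit findings per quarter", baseline=9.0, target=4.0, current=3.0),
]

for m in day_one_metrics:
    status = "met" if m.achieved() else "not yet"
    print(f"{m.name}: {m.current} vs target {m.target} ({status})")
```

The point is not the code but the discipline: each metric has a baseline before the AI tool arrives, so the link between the tool and the impact is measurable rather than asserted.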

Mark:

When we look at the banks and organizations that are generally pulling ahead, they're doing a few things right. They've established clear AI governance frameworks. They've classified and cleaned their high-value data sets. They've built secure access controls for LLMs, and they've redesigned workflows for AI rather than bolting AI onto legacy processes that are inherently inefficient. And they've put monitoring in place, real monitoring. They're delivering value fast, safely, and at an enterprise scale that's measurable. These organizations aren't just talking about AI, they're operationalizing it.

So the message is clear. AI will change everything, but only for organizations that fix the foundations first: data, governance, monitoring, measurement. These aren't side topics, they're the main event. If you're a bank, a regulator, or any organization trying to navigate the complexity of AI adoption, whether it's governance frameworks, LLM safety, shadow AI, or secure enterprise deployment, the team here at the Impact Team Gulf is working with institutions across the region to shape that journey. So reach out if you'd like to have that conversation. And until next time, stay curious, stay critical, and keep building the future.