Guardians of Data

Building Trustworthy and Responsible AI Systems

Act Now Training Season 1 Episode 7


“Information governance professionals are the bedrock for deploying good governance of AI. We need to be there at the start of the actual thinking process.”

Tahir Latif, Global Practice Lead for Data Privacy & Responsible AI at Cognizant

In this episode, we’re diving into one of the biggest questions of our time: How do we build trustworthy and responsible AI systems?

To help us answer this question, we are joined by someone who's right at the heart of the conversation. Tahir Latif is a distinguished expert on building responsible and transparent AI systems. He is the Global Practice Lead for Data Privacy & Responsible AI at Cognizant, one of the largest global professional services companies. Tahir has led complex privacy and AI programmes across multiple industry sectors, both in the UK and globally. He is also the Chief AI and Governance Officer and board member at the Ethical AI Alliance, a not-for-profit body which promotes ethical standards in AI development.

In our conversation, we explore how to cut through the complexity of ethical AI, what the future holds, and most importantly, what practical steps IG professionals can take to succeed in this new landscape.

This podcast is sponsored by Phaselaw – a purpose-built solution for document disclosures, like subject access requests and FOI requests. Instead of redacting PDFs one by one, or forcing litigation software to do a job it wasn't designed for, with Phaselaw you get collection, review, and redaction in one workflow. Teams across the world are using it to cut response times from weeks to days.

For Guardians of Data listeners, Phaselaw is offering a two-month free trial; run it on live requests, see what it does to your backlog, decide from there. No card, no commitment.

Useful Links

Phaselaw

Ethical AI Alliance

Cognizant

Data Privacy Handbook

Ibrahim Hasan

Welcome to Guardians of Data, the show where we explore the world of information law and information governance, from privacy and AI to cybersecurity and freedom of information. I'm Ibrahim Hasan. In each episode, we'll be speaking with experts and practitioners to unpack the big issues shaping the IG profession. The last two years have seen a massive increase in AI deployment. Previously the domain of science fiction, AI is now everywhere: in our workplaces, our personal lives, and in the systems that shape society, from healthcare to security and law enforcement. But alongside the opportunities, there are some big risks, including a lack of accuracy and transparency, as well as the existence of bias and discrimination in AI systems. In this episode, we're diving into one of the biggest questions of our time: how do we build trustworthy and responsible AI systems? To help us answer this question, I'm joined by someone who's right at the heart of the conversation. Tahir Latif is a distinguished expert on building responsible and transparent AI systems. He's the Global Practice Lead for Data Privacy and Responsible AI at Cognizant, one of the largest global professional services companies. Tahir has led complex privacy and AI programmes across multiple industry sectors, both in the UK and globally. He's also the Chief AI and Governance Officer and board member at the Ethical AI Alliance, a not-for-profit body which promotes ethical standards in AI deployment. In our conversation, we're exploring how to cut through the complexity of AI, what the future holds, and most importantly, what practical steps IG professionals can take to succeed in this new landscape. So whether you're a data protection officer, an information governance leader, or just curious about what trustworthy AI looks like, you're in the right place. Let's jump in. Welcome to the show, Tahir.

Tahir Latif

Thanks, Ibrahim, and thank you for inviting me.

Ibrahim Hasan

Really pleased that you could accept our invitation, Tahir. We've known each other for quite a few years now. Usually we meet working in Dubai. So it's nice to speak to you today, albeit remotely.

Tahir Latif

Yeah, absolutely. Our paths have crossed many times in and around the Gulf region. We always talk briefly, but we've never actually sat down and had a lengthy conversation. So this is wonderful.

Ibrahim Hasan

Yeah, really looking forward to it. Now, the theme of this podcast is ethical and responsible AI deployment. You're a board member of the Ethical AI Alliance. For those who haven't heard of the organization, can you tell us a little bit more and what inspired you to get involved?

Tahir Latif

The Ethical AI Alliance is a global, not-for-profit initiative that brings together volunteers from across multiple sectors: governance, law, research, design, human rights, civil society, and many, many more, bringing all of these minds together to help shape a more responsible future for AI. What appealed to me is that the Ethical AI Alliance is not just interested in ethics as a slogan. What we're trying to do is make things practical. We have no shortage of high-level principles in this field; we've had those for years, everything from the OECD principles to the Bletchley Declaration, to the Hiroshima AI Process, to the recent AI Impact Summit in India. But the challenge is: how do you translate these principles into governance? How do you translate them into accountability, professional conduct, and, importantly, real-world decision making when these systems are actually being built and deployed? And this is really where I think the Ethical AI Alliance feels different. We're totally self-funded, and we're able to ask the difficult questions about accountability, oversight, and responsibility. For example, we started talking about the deployment of lethal autonomous weapons almost 18 months ago. This was not something that was mainstream at the time. Now, of course, it's become mainstream because of the conversations between Anthropic and the US government two weeks ago, where Anthropic refused to let their models and their algorithms be deployed autonomously for lethal weapons. So this is something very useful that we're doing: asking the difficult questions. For me, having spoken to the founder at length, joining the Ethical AI Alliance felt like joining a conversation that was not only timely but necessary. AI is now influencing access, opportunity, power, and, of course, public trust across the vast majority of our lives, at a scale that would have seemed extraordinary only a few years ago. So I want to be part of a community that's trying to shape the future with seriousness and not just optimism.

Ibrahim Hasan

Translating principles into governance and into real-world practice: I think that's really what's needed in this field at the moment. I sometimes say talk is cheap. We've got all these principles: safe, responsible, trustworthy, secure AI. But as practitioners, how do you actually implement these principles in your organization? That's the challenge. We know what we want to do, but how do we actually implement it?

Tahir Latif

And that's why, as I said, the Ethical AI Alliance works alongside hundreds of other small not-for-profits and civil society groups, sometimes independently. And of course, we also work collaboratively with these other organizations to try and make sure that we can implement AI for good and AI for all of humanity. That also involves crossing the global north and global south divide, because you have AI that can uplift nations, with increasing GDP as one example, but of course it has to be fair and equitable across the globe.

Ibrahim Hasan

You mentioned the recent controversy with Anthropic and the US government. There have been other stories involving ethical, or some would say unethical, use of AI. A recent report by an international sustainable food think tank claims that companies in industrial agriculture are gaming the food system, using AI to undermine farmers' say in what the world eats, often focusing on the most productive and profitable crops. And then we've got the UK government facing calls for stronger safeguards on the use of facial recognition technology after the Home Office admitted it's more likely to incorrectly identify black and Asian people than their white counterparts. So, lots of stories about unethical use or deployment of AI. But how do you define the term ethical AI in simple, practical terms?

Tahir Latif

Before I go into that, Ibrahim, you gave two examples, and those are two examples out of hundreds. It's not that AI is inherently bad; for the vast majority of uses, AI is great and good. It's AI that is directed or utilized incorrectly that leads to these unethical deployments. So, to answer your question, what is ethical AI in very simple terms? I think you already touched upon it if we flip your use cases. Ethical AI is AI that people can trust because it respects people and not just performance targets. The performance target would be the one you gave for the food industry: that's meeting a certain target. And on trust: just on the way home today, I was reading about yet another incident of facial recognition here in London, because that's where it was first trialled. This is real-time facial recognition used by the Metropolitan Police that is unfairly and discriminatorily targeting anyone with dark skin, because it cannot determine with a great deal of accuracy whether someone is a suspect or not. AI can make decisions at machine speed, but what it relies upon is plausibility, not accuracy. And there's a difference between what AI thinks and what the truth is. Therein lies the huge challenge.

Ibrahim Hasan

I see. So we've discussed the difficulties: we've got bias, we've got discrimination, and so on. If an organization is starting out implementing an AI solution and wants to ensure it's going to be lawful but also ethical, what practical steps would you recommend it implement from day one?

Tahir Latif

Okay, well, there are two things there. Firstly, ensuring that you understand why the AI is being developed and deployed, which means understanding the actual use cases, because often there's just a rush. I call it AI everywhere for everything. When I speak to clients, they often say, okay, we're an AI-first company. I'm like, okay, but AI for what? The main challenge is that they view AI as the strategy. Now, you can have an AI strategy, but AI itself, the use of AI, is not a strategy. AI is a means of getting there. So the biggest risk I see is that deployment is moving faster than institutional maturity. A lot of organizations are moving from experimentation to implementation in a race to get to market, because their competitors are doing it. It's the gold rush: AI can increase profitability, streamline, reduce costs, give us that commercial edge. They're deploying, but they haven't built the governance, the documentation, the assurance, the oversight, and the accountability needed to manage that implementation responsibly. In other words, the appetite is ahead of the discipline. That's the underlying problem, and that's what leads to unethical deployment.

In terms of how to deploy, there are several major risk areas, and I'll go through these at a very high level. The first, and this is widely documented, is that even if you have clearly defined use cases for the organization, the challenge will not be your compute, it will not be your deployment, and it possibly won't even be your algorithm or your model. The major challenge is poor data foundations, because the system inherits the qualities and the weaknesses of the data it relies upon. Now, you talked about discrimination and bias. These can come from the model itself, in other words from the data scientists, because when they code, they are coding their unconscious biases into the algorithm. But similarly, if the data is biased, if it's incomplete, if it's unlawfully sourced, if it's poorly classified, badly governed, or disconnected from its original purpose, then that organization is not innovating on solid ground. It's just scaling risk, and those risks will include ethical challenges.

The second is overconfidence, or rather over-reliance, because everyone assumes that whatever AI produces must be true. AI said it, so it's authoritative, therefore I will believe it. Because, of course, it's fast, it's fluent, it's highly persuasive. But as I mentioned earlier, fluency is not the same as truth; we have plausibility and we have veracity. And confidence is not the same as reliability. Now, this is where the human element comes into it. If people are not trained to challenge these outputs, to validate the results, and to understand the limitations, then the risk of misplaced reliance becomes very real. One of the things organizations need is that training and awareness. But we often hear this term, Ibrahim: human in the loop. What if the human has no idea what they're looking for? You put a human in the loop who is basically just pressing yes every time, because they do not know what guardrails and safeguards they should be looking out for. So even the reassurance that it's not fully automated, that everything is run by guardrails and decision gates and a human in the loop, fails if the human in the loop hasn't got sufficient knowledge, training, or empowerment. And the empowerment element is also very important. If you've thrown me into a role and said, you will now man the decision gate, and whatever comes your way, just click yes; if I see an output from a model that I feel is unethical, biased, or discriminatory, and I've raised my voice but haven't been sufficiently empowered to stop the process, then it's just a tick-box exercise. The organization can say, yes, we've got all this governance in place: we've got a policy, we've got governance on paper, but nothing is actually being checked or validated. That's one of the major challenges. It's very hard to roll back a large-scale model once it's been deployed to the public, to your stakeholders, to your clients, to your customers. It's very difficult to roll back and retrofit governance, so you don't rush straight to the technology.

And then the third challenge is the technology itself. It is improving at such a scale that we are struggling to keep up with the challenges these improvements bring, and that is creating genuine governance debt, because AI is being deployed at speed and at scale. Governance professionals are not only trying to catch up, we're already in deficit, and the technology is improving each and every week. We hear about a new model, about new capabilities. These models are already very complex, and the gap is only increasing.
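To make the decision-gate idea concrete, here is a minimal Python sketch of a human-in-the-loop gate. All names, thresholds, and helper functions are illustrative assumptions for this page, not anything from Cognizant, the Ethical AI Alliance, or a real deployment; the point it encodes is Tahir's: the reviewer must be trained, must see the rationale, and must be empowered to block.

```python
from dataclasses import dataclass

# Illustrative threshold; a real value would come from a documented risk assessment.
AUTO_APPROVE_CONFIDENCE = 0.95

@dataclass
class ModelOutput:
    decision: str       # e.g. "approve" or "reject"
    confidence: float   # the model's self-reported confidence, 0..1
    rationale: str      # whatever explanation artefact the system produces

def log_for_audit(output: ModelOutput, route: str) -> None:
    # Placeholder: in practice, an append-only audit log (the evidence trail).
    print(f"audit: route={route} decision={output.decision} conf={output.confidence:.2f}")

def human_review(output: ModelOutput) -> str:
    # Placeholder: in practice, a review UI with documented guardrails and criteria.
    print(f"REVIEW: {output.decision} ({output.confidence:.2f}) because {output.rationale}")
    return "block"  # a trained, empowered reviewer can say no

def decision_gate(output: ModelOutput) -> str:
    """Route a model output through a human-in-the-loop gate.

    High-confidence outputs pass with an audit record; everything else
    goes to a reviewer who can halt the process, not just click yes.
    """
    if output.confidence >= AUTO_APPROVE_CONFIDENCE:
        log_for_audit(output, route="auto")
        return output.decision
    if human_review(output) == "block":
        log_for_audit(output, route="blocked")
        return "escalate"  # stop and escalate rather than rubber-stamp
    log_for_audit(output, route="human-approved")
    return output.decision

print(decision_gate(ModelOutput("reject", 0.62, "thin credit file")))
```

The design choice worth noticing is the "escalate" branch: a gate where the reviewer can only approve is the tick-box exercise Tahir describes, whereas a gate that can stop the pipeline makes the human in the loop meaningful.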

Ibrahim Hasan

So, just to recap, you're advocating an ethics-by-design approach: starting with the foundations, where an organization understands and defines the use case very clearly; then looking at improving data quality, the raw data on which the AI model is going to feed, so to speak; and thirdly, training and empowering staff to check and challenge the outputs. That's a great summary. We often hear the terms explainability and transparency when it comes to ensuring that AI is deployed in an ethical manner. Can you give us a bit of advice about those terms and what they mean practically?

Tahir Latif

Okay. Explainability, at a very high level, is how the model arrives at a decision. If we look at the life cycle, we have the input, which is the data. The input is fed into the training and testing pipelines to look at how the model performs, and if the output looks like it meets the KPIs from the business unit, it then goes into production (prod, as it's known). When it goes into production, if you have a machine learning approach, for example deep learning or neural networks, we know what's going into the model in terms of data, and we know how the model has been programmed in terms of the science. But we do not then know how that maths is actually using the data. How does it arrive at the decision? That is the explainability element. And the decision, of course, is what we see as the output.

Now, I'll give you an example here. A credit institution has loads of credit records and is utilizing advanced machine learning, so it can give you, say, a mortgage in principle within five minutes. You input your data, it cross-references all of your credit records, your salary, et cetera, and it gives you a decision. Now, suppose you have a very good credit record: you've never had a CCJ, you've been employed full-time for more than six months, you have a good salary (you're on 50,000 and you want to borrow 200,000), and you have no other outgoings, no huge bills, credit card balances, or loans. That looks like a clear-cut case. But for whatever reason, the model has decided no. If you try to answer the question of how it arrived at that, and it's using, as I say, machine learning or deep learning neural networks, it's very difficult to understand how it arrived at the output. Now, you can play around with the model. You can try 5% of this and 10% of that, and if you play around with the weights, you can look at how the output changes. But it's a very challenging exercise, and it doesn't always give you interpretability. You can toy around with it, you can do fine-tuning, but you still can't actually answer my question: dear credit company, dear bank, dear mortgage provider, can you tell me why I was rejected? And the answer cannot be "computer says no". That's not an answer. And that's the challenge with highly complex systems.

That answers all your questions in one: interpretability, explainability, observability. They all have slightly different meanings depending on whether you're coming from a governance background or you're a data scientist; a data scientist would give you a different definition from mine. But that's the way I would define the challenge of what are called black-box models. Black box because they're opaque: we can't see what's going on inside. There are many, many technology companies out there trying to scratch beneath the surface. But remember, the models are becoming more complicated. I was listening to a podcast the other day; this is not a sector or a market where you rest on your laurels. The technology is moving, as I said; every week it's moving rapidly. So, to overcome this explainability or observability challenge, some companies have introduced something called chain of thought. I'm sure you've seen it if you've used the large language models: the model starts writing out how it intends to achieve the output. Have you seen that at all? Yes, I have.

Now, the chain of thought only gives you a glimpse; it doesn't actually tell you how the model is getting to the output. So it's one step closer, but we're still only scratching at the surface. Another major challenge, particularly around observability, and I'm going to take a slight segue here: advanced AI models, if they feel they are being tested or observed, will act differently. Now, what does that mean? This is documented, so I'm talking about dozens of cases where an AI model has understood and recognized that it's being tested and observed, and it behaves differently to meet the testing and observability criteria. When it then goes into production, it goes back to its proper operating model, its modus operandi. And that's the challenge. AI is very creative in terms of how it deals with humans when it knows it's being watched. That's fascinating.
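The "play around with the model" probing Tahir describes can be shown with a small, hedged sketch: perturb the inputs to a black-box model and measure how much the output degrades. The features, data, and model below are invented for illustration, and this kind of permutation probe is global and model-agnostic, not the per-applicant explanation a rejected mortgage customer would need.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Synthetic "credit" data: salary, loan requested, months employed, prior defaults.
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # salary
    rng.normal(200_000, 60_000, n),  # loan requested
    rng.integers(0, 120, n),         # months in employment
    rng.integers(0, 3, n),           # prior defaults / CCJs
])
# Invented ground truth: approval driven by affordability, tenure, and history.
y = ((X[:, 0] * 4 > X[:, 1]) & (X[:, 2] > 6) & (X[:, 3] == 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much accuracy drops:
# a crude probe of what the black box is leaning on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["salary", "loan", "tenure", "defaults"], result.importances_mean):
    print(f"{name:8s} importance: {imp:.3f}")
```

Permutation importance only says which inputs the model leans on overall; it still cannot answer "why was I, specifically, rejected", which is exactly the interpretability gap Tahir is pointing at.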

Ibrahim Hasan

Very worrying. Yes, it is. And it does make you wonder how far we are from them becoming sentient.

Tahir Latif

It is very worrying, because these are not isolated incidents. It's also been shown, and again proven, that when multiple models get together, they will try to construct a new method of communicating once they realize their communication is being observed. They picked up on this observation and started talking in code, and the humans observing couldn't understand what they were saying. They have also been caught discussing how to break out of the sandbox environment and how to remove the guardrails, and there have been incidents where they've talked about breaking free from their human masters. So yes, it is worrying, and fascinating.

Ibrahim Hasan

Great explanation, by the way, with the bank and loans example. Turning the conversation towards the privacy and data protection side: AI projects, especially in the health sector, which I know you've got some experience of, Tahir, often involve processing sensitive patient data. One of the big challenges I see is how such organizations can innovate using AI, making use of personal data, while at the same time respecting patient privacy and data protection rights. I think that's an inevitable tension. Last year, you chaired a panel entitled The Privacy Paradox in the Age of AI. Can you share some of the learnings from that panel to help our listeners deal with the paradox?

Tahir Latif

Yeah, that was one of the panels actually hosted by a healthcare regulator over in the Gulf. As you've said, there are two approaches that can be taken. One is to innovate first and worry about any of the impacts afterwards, and that means governance is left very much in the rear-view mirror. Now, the example you gave is great, because you've talked about patient data in healthcare, and healthcare is all about trust. The risk tolerance in healthcare is, of course, very close to zero. You would not want to go and see your doctor and have the doctor say, Ibrahim, I'm a bit busy, speak to my AI assistant, and don't worry, all is well. You'd be very nervous, correct? Absolutely. Because the patient-doctor relationship is paramount; it's about trust. So how these systems are being utilized within the healthcare sector is through robust governance: understanding what AI can do very well, at speed and at scale. One of the best examples is X-rays. X-rays have been with us for many decades, and we've literally got billions of X-rays as data. Each of these X-rays has been marked as a broken bone, a tumour, or various other diagnoses. And this is what AI can do very well: it has an image, and the image has been labelled. That's called supervised learning. By going through billions of these images, it learns to recognize precisely what the image is and, more importantly, what it isn't. And it can do this at such speed, scale, and accuracy that traditional radiographers are now being retrained to utilize AI. So instead of a radiographer going through, for example, 100 X-rays a day, a radiographer can review a thousand, because the AI is making the first-level determination. They've also shown, by going back over decades of X-ray data, cases where the doctor missed a diagnosis and the AI correctly picked it up. The AI flagged a tumour on an early scan that the doctors had missed because it was so small the human eye couldn't detect it; the scan taken a year later, which the AI did not have access to, confirmed it was indeed a tumour. So it's a case of matching the best use cases with the best AI models. The healthcare industry has thought about this very carefully. For example, you wouldn't want AI to make a determination for you via telemedicine. What if AI misdiagnoses meningitis? Okay, perhaps that could be caught. But what if, on the back of a misdiagnosis, you are put onto a very serious course of treatment, for example chemotherapy? There's nothing pleasant about chemotherapy: you lose your hair, you lose your appetite, you lose weight. And the diagnosis was incorrect. You can absolutely innovate responsibly, but only if you introduce governance right at the start, as I said, at the ideation phase, at the strategic phase, not as an afterthought.

The example I love to give is the motor vehicle. Motor vehicles have been mainstream since the days of Henry Ford. Now, do cars today still travel at the same speed as the first Ford motor cars? Absolutely not; they're way faster. But why have they been allowed to travel faster, when the original speed limits may have been 5, 10, 15 miles an hour? Because of the introduction of certain safety mechanisms. We introduced brakes, so cars could go faster because drivers could see what was in the road and stop. That didn't take away the danger to the driver, so we introduced seat belts, and then airbags to protect the driver from impact. All of this is innovation, and innovating responsibly. The motor vehicle has not stalled. The motor industry is not saying, we need more innovation but we can't innovate because of these safety measures. The safety measures have allowed the motor industry to be what it is today, and no one would ever say this seatbelt has caused our vehicle or our company to be less competitive.
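For readers who want to see the shape of the supervised learning Tahir describes, here is a toy sketch of the pattern: labelled historical examples in, a first-pass triage score out, with humans reviewing the ranked cases. The data is synthetic and merely stands in for labelled scans; nothing here comes from a real clinical system.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in for billions of labelled X-rays: feature vectors plus a label
# such as "abnormal" (1) or "normal" (0) assigned by past clinicians.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Supervised learning: the model learns the mapping from inputs to past labels.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# First-level triage: score cases so the radiographer reviews the
# highest-risk scans first, rather than reading every scan cold.
risk = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, (risk > 0.5).astype(int)))
```

The design point is the same as in the radiography example: the model does the first-level sort at scale, and scarce human attention goes to the highest-risk cases, with the clinician still making the final call.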

Ibrahim Hasan

Whilst we're on the topic of privacy, Tahir, you are the co-author of a book entitled Data Privacy: A Practical Handbook on Governance and Operations. What was your objective in writing the book, and what are its key messages?

Tahir Latif

There were many books out there on privacy, for a wide variety of skill sets. But my co-author, Manif, and I really wanted to write an expert book: a book for privacy experts, written by experts. Of course, it would also give you a pathway to what good looks like in a privacy programme, in a very structured manner. Some of the terms you used earlier, like ethics by design; we, of course, had privacy by design. But these are just terms. How do you actually implement them? That's why we used the term handbook. How do you actually implement privacy? Privacy is much more than writing a privacy policy. You need the policies, of course, and you need the procedures, but typically where it falls down is the implementation: are we implementing privacy correctly? So, although we came at it from different angles, we both arrived at the same writing style and conclusion. We first looked at the philosophy: why is privacy important? Then we looked at the governance: what needs to be in place, the accountability, the roles and responsibilities, the individual autonomy of the privacy teams. And then we looked at the actual operationalization, the workflows: what tooling and technology might be required, and how you implement it so that it's not a tick-box exercise. How can you evidence what you've done, internally to management and leadership, and externally to an auditor if, for example, you're going through ISO certification, or if you have a visit from a regulator? How can you show them that you've implemented privacy correctly and effectively? That's why we put the handbook together: it's almost a blueprint for aspiring privacy practitioners, but also for those who have been in the field for five, ten, or fifteen years and want a more structured approach, and to validate that what they're doing is best practice. Because there were no real best-practice books out there. That's what we put together, my co-author Manif and I.

Ibrahim Hasan

It's a great book; I thoroughly recommend it to our listeners, and we'll put a link in the show notes. That'd be wonderful. Still on the theme of an ethical approach to AI, there is a big debate at the moment around the use of copyright works to develop and train AI models. Recently, the UK government changed its mind. Previously, it wanted to allow AI companies to use copyright works with an opt-out for copyright holders. Following a backlash from the likes of Paul McCartney, Coldplay, and Richard Curtis, the government has just published a paper saying it no longer has a preferred option for what to do next. Whereas in the US, they've taken the litigation approach, and a number of rights holders are suing companies, particularly the generative AI companies, for breach of copyright. The AI companies, of course, argue fair use due to the transformative nature of the technology, whilst the creators claim infringement. I just wanted to get a view from you as to where you stand on the AI and copyright debate. How should the law balance creators' rights on the one hand with the drive to innovate and develop AI models for society's good on the other?

Tahir Latif

Yeah, you've touched upon the different approaches. The US has fair use, which allows you to use some elements of copyrighted works for creative purposes. In the example you gave, I think it was Anthropic, the maker of Claude, that paid out; it was either 150 million or 1.5 billion. And that's because it scraped the original copyrighted data of thousands of authors in totality, in other words beyond fair use. Each of these authors was given a flat fee, and it was settled. Now, of course, I'm an author. I wouldn't want all of my original work to be ingested by an AI model. It's taken me a year to write a book, and now someone can get all of my output from an AI model in two or three minutes. So I do believe that copyright is important. Again, we talk about ethics, and being an author makes this very personal to me. As for the UK, I tracked that story very carefully. What was interesting was that the government at first thought it was completely okay to utilize all of this work, and the artists, authors, singers, and even actors were up in arms, because for them, their voice is the representation of who they are.

Ibrahim Hasan

Yeah, there have been stories of actors' voices, and even their likenesses, being cloned.

Tahir Latif

Yeah, and in one case she was asked, she said no, they went and did it anyway, and then she called it out. So I believe the rule of law is very important. I can understand why the big AI companies are doing it, because AI needs data. AI needs data to get to the output and the result that's required, and the more original data it uses, the better the potential outcome. Now, let me twist this slightly on its head. Are you familiar with the term AI slop, Ibrahim? I see it all the time on LinkedIn. Right. So, AI slop. I talked about original data. As humanity, just about all the data we've ever created has been, or is very close to being, ingested by the big AI models. Not quite all of it, but we're very close to everything being ingested. In the next few years, there'll be no original data left. So what are the big tech companies doing now? They're using the output generated from original data to continue to train their models. And academic researchers have shown that this leads to rapid degradation of output. In other words: original data, good output; AI-generated data, AI slop, leads to rapid degradation of output and eventually to model collapse. This has been demonstrated in many academic references. That's the challenge we're going to face, and it's the reason I can see big tech continuing to lobby governments. They will say: we don't have any data left. So if you want to continue to grow your country's GDP, productivity, and profitability, the lobbying will intensify, because the data is running dry, Ibrahim.
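The recursive-training degradation Tahir refers to can be felt with a deliberately tiny simulation: fit a simple distribution to data, sample a "synthetic generation" from the fit, refit, and repeat. This is a toy analogue of the model-collapse results in the academic literature, not a reproduction of any particular study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # samples per generation

# Generation 0: "original human data" drawn from a standard normal distribution.
data = rng.normal(0.0, 1.0, n)

for gen in range(1, 301):
    mu, sigma = data.mean(), data.std()   # fit a trivial model to the current data
    data = rng.normal(mu, sigma, n)       # next generation sees only model output
    if gen % 60 == 0:
        print(f"generation {gen:3d}: fitted std = {sigma:.3f}")

# The fitted spread tends to decay across generations: each synthetic
# generation keeps less of the original distribution's tails and variety.
```

Run it and the standard deviation typically shrinks generation by generation: the tails disappear first and the variety collapses, which is the "AI slop compounding on itself" dynamic in miniature.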

Ibrahim Hasan

Interesting thoughts on the future, and on how tech companies could almost hold governments or countries to ransom: feed the monster, or lose the benefits of AI deployment. And that leads nicely onto the future of regulation. Ever since the first mass-market AI tools burst onto the scene, there's been much talk of AI regulation. In the EU, of course, we have the EU AI Act. Other governments, like the UK, are still making up their minds. Do you think the future of AI will be more regulated, more open, or somewhere in between?

Tahir Latif

It depends which side of the pond you're on and how much influence is being exerted at the political level, and also through lobbying. I'll give you a great example. We know that the current US government is pro-innovation and very light on regulation, although something is due to be released, I believe this Friday, by the current administration; I haven't seen it yet, so the jury's still out. At the Paris summit last year, the EU was fully behind the EU AI Act and moving towards more regulation so that you could innovate responsibly. That's the key term, responsible innovation, so that European stakeholders were protected in the same way they were protected under the GDPR. The vice president of the US, JD Vance, came to that conference and said: we don't want regulation, we don't believe regulation is the way, and you should rein back from regulation. Within a week, some of the legislation that was due to be enacted was scaled back. I think it was the EU AI Liability Directive, which would have given more protection to consumers when something powered by AI went wrong: they wouldn't have to prove fault; the onus would be on the company to prove it had done all the necessary checks. It put more power in the hands of consumers, and it was scaled back. In fact, it was scrapped. Then we had Mario Draghi, the former president of the European Central Bank, talking about scaling back and potentially watering down the EU AI Act and the GDPR. Fast forward seven or eight months, and that's precisely what happened. The omnibus was introduced, under which certain elements of the EU AI Act were pushed back and certain others removed entirely. And that's the result of political pressure and lobbying from big technology. But that's not universally the case. There are still regulations being enacted: in South Korea, in Brazil, in China. We have guidelines, principles, and best practices from Singapore, and guidelines and principles throughout the Middle East and in India, to ensure that if AI is being deployed, these are the principles you should adhere to, even where there is no formal regulation. So I still think there's going to be a middle ground. I don't think the future is one of total restriction or, crucially, innovation paralysis. It's more of a layered model, and this layering will focus, I hope, on high-risk use cases, safety-sensitive issues, and any systemically important uses that necessitate stronger obligations.

Ibrahim Hasan

Finally, Tahir, your advice for information governance professionals, starting with the importance of IG professionals in shaping and promoting an ethical AI approach within an organization.

Tahir Latif

Information governance professionals are the bedrock for deploying good governance of AI. We need to be there at the start of the actual thinking process. We provide the bedrock because you can't retrofit governance; you can't retrofit once the AI genie is out of the bottle. So, the skills these governance professionals need to acquire. Firstly, judgment. They need to be able to look beyond technical possibility (that's the role of the data scientists) and ask the necessary questions based on humanity. We understand the use case, and the use case is to decrease costs. But is it proportionate? Is it fair? Is it defensible? Is it wise? Because not every use case that can be built should be deployed in the form first imagined. That's where we need empowerment; otherwise, it becomes a tick-box exercise. We are that red line: yes, we know it can be done, but we question whether it should be done, and we highlight the reasons why.

The second skill is translation, and it's hugely underrated. The people who add the most value in the AI field are the ones who can move between technical language, governance language, legal language, and executive language without losing the meaning. In other words, they can explain risk in a way that different audiences can understand and act upon. This is not something every governance professional can do. It's about recognizing your skill set, but also constantly upskilling. For example, I threw in a term earlier: supervised learning. I'm not a data scientist; I know the high-level elements. So when I'm speaking to a technical team, I can ask: why are you deciding to use this training method with this model? Or why are you using a very opaque model when something linear would suffice? I can ask the question, and of course they may come back with something very technical, but at least I know the limitations and the opportunities of these models. It's the same with legal language: anyone working with the GDPR understands the law without necessarily being a lawyer. They won't give legal advice, but they understand the ramifications of the articles of the GDPR and what they potentially mean. And executive language is very, very important. I always say to my colleagues globally, when I'm teaching AI governance: be the voice of strategy. Don't go in and speak to the execs and the C-suite and say, you cannot do this, because that gives us the same old hat we used to wear under the GDPR, the department of no: don't speak to the GDPR team, they'll only say no. Instead, we can add value. For example, I talked about the requirement to have the right level of data. Imagine a business use case comes through from a frontline team that has told the C-suite, via their head of department, that it can add 20% to the bottom line. They get the green light and rush forward into deployment. No one has been in touch with the AI governance team, and no one has given a thought to the data. Fast forward six months, and there's a realization: we have the business use case, we have the models, but we don't have the data we need, because everyone forgot to ask about the data. By asking those challenging questions, we become a strategic voice. But of course, that means having the knowledge and knowing how to present to executives in a way that adds value: if we deploy this responsibly, here are the benefits, as opposed to simply saying we must deploy responsibly. Show the benefits to leadership: these are our differentiators if we do it this way.

If I were to add a fourth skill, it would be humility. AI is moving very quickly, and no one has perfect mastery of every corner of it. I continue my own learning each and every week, and the strongest professionals are the ones who stay curious and keep learning across disciplines. Sometimes I listen to highly technical podcasts; honestly, I haven't got a clue what they're saying, Ibrahim, but I understand at a top level. We talked about observability: I understand, at a top level, that simply having a chain of thought doesn't mean the model is telling me what it's actually doing, and that if it recognizes I'm observing or testing it, it may merely feed me what I want to hear. Something else: if you give a model a question with three choices and you add, "but I think it's the third choice", it will pin its answer to your choice and give a defensible argument for that third choice, even when the third choice is wrong. That's the sycophantic nature of AI: AI wants to please. So the really valuable skill set and profile is not just technical knowledge or legal knowledge in isolation. It's sound judgment, strong communication, disciplined evidence practices (that's your documentation), and, of course, the maturity to continuously learn.

Ibrahim Hasan

Fantastic advice, particularly as we have a lot of apprentices doing the data protection and IG apprenticeship listening to this podcast. A lot of them are thinking about moving into this area of AI governance, so that's going to go down really well. It's a growth area.

Tahir Latif

And I would certainly advocate for more individuals to skill up in AI governance, but taking a cross-skilling approach: not just the governance itself, but a little bit of the technical, a little bit of the legal, and also understanding how to communicate effectively at all levels of the organization.

Ibrahim Hasan

And finally, Tahir, where can listeners learn more about the work of the Ethical AI Alliance and get involved?

Tahir Latif

The best place, of course, is the website. You can learn about our culture, our charter, our work, and, of course, the thinking behind it. We've got 2,000 volunteers from across the world; it's a volunteer-led organization. Engage with us, irrespective of your skill set. You don't have to be a lawyer, a technologist, or a governance professional. Join the conversation, connect with the community. The future of ethical AI is not going to be shaped by a handful of us acting alone.

Ibrahim Hasan

Perfect. And we will put the website in the show notes. It's been a fascinating conversation, Tahir. That's all for today's episode of Guardians of Data. A huge thank you once again to Tahir for sharing his insights on deploying ethical and trustworthy AI systems. If there's one thing to take away, it's this: AI isn't just about technology. It's about people, governance, and the choices we make in how it's designed and used. Each one of us has a role to play in shaping systems that are transparent, fair, and trustworthy. If you found today's discussion useful, please subscribe, share this podcast with colleagues, and leave us a review. And remember, whether you're a seasoned professional or just starting out in information governance, there's always more to learn, and we'll be here to help you stay ahead of the curve. Thank you for listening, and join us next time on Guardians of Data.