BPP TechSphere
The Governance Trinity: Why Data Protection is the Ultimate AI Enabler with Veronica Ayitey
In this episode of TechSphere, host Idris Fabiyi sits down with Veronica Ayitey, Head of Data Protection at BPP. As businesses rush into the "Agentic Shift" and data becomes more valuable than oil, Veronica explains how to navigate the "Governance Trinity": the critical intersection of Tech, Information Security, and Data Protection.
What we unpack in this episode:
- InfoSec vs. Data Protection: Why Information Security protects the house while Data Protection safeguards what's inside it.
- The "Baked Cake" AI Dilemma: We explore the friction between the GDPR’s "Right to Erasure" and large language models. Can an AI ever truly "forget" your data?
- The Hidden Cost of "Free" Tools: Why your business data is the currency paying for "free" AI, and how to navigate third-party vendor risks.
- Privacy by Design: How to embed data protection at the start of your AI buildout so you don't have to face the regulator later.
Key Quote: "Privacy isn't what slows a business down; it's the reason why a business grows... Treat your data with the same care that you treat your vision." - Veronica Ayitey
About our Guest:
Veronica Ayitey is the Head of Data Protection at BPP (Lyceum Education Group). She specialises in turning complex privacy law into clear, actionable guidance that allows innovators and businesses to build cutting-edge tech while maintaining the highest standards of data security and consumer trust.
I'm Idris Fabiyi, Head of Technology and Innovation at BPP University, Estio Training, Firebrand Training & host of the TechSphere podcast. I'm on a mission to demystify complex technology and make it accessible for businesses and learners.
Follow me on LinkedIn: Idris Fabiyi
My Medium.com profile: Read my Articles
Get in Touch: idrisfabiyi@bpp.com
Idris: Hi, welcome to this episode of BPP TechSphere. I'm your host, Idris Fabiyi, and in this episode we're joined by Veronica Ayitey, who is the Head of Data Protection at BPP, now known as the Lyceum Education Group. So, Veronica, tell us a bit about yourself.
Veronica: Thanks for having me, Idris. As you already stated, I'm the Head of Data Protection at BPP, now part of Lyceum Education Group, and my role is essentially to drive data protection compliance across our growing group of family brands. Part of this is turning complex privacy law into clear guidance that helps innovators like yourself to protect the data that we have and to innovate safely. I'm sure we'll be going into that in much more depth during this podcast.
Idris: Absolutely. This has been a long time coming, and it's something I was really interested in, because I feel there's some type of trinity when it comes to business. We've got the people who like to get in with the tech stuff, who like to develop things and make things, which is normal. Then we've also got information security, and then data protection, and I know the three have to work together. If I'm being honest, I sometimes see data protection as the elephant in the room that no one wants to name: I'm going to do something, and data protection is going to say no. Could you speak to how data protection should be seen?
Veronica: Yeah, you've been honest, and it is something people do say; there's perhaps a reputational thing when it comes to data protection, but I'm hopefully one of the people here to change that perception. If we look at today's truth, data has surpassed oil in terms of value, so I always say that data is the new gold. But unlike the gold we used to have sitting in vaults, data is digital, renewable and endlessly reusable, so the more data we have, the more growth potential we, or any business, have. My mantra is to create cultures where data privacy enables business growth: the opposite of what you said earlier. Something that enables business growth and doesn't block it, because privacy isn't what slows a business down; it's quite clearly the reason a business grows. We'll come to that in more detail during the podcast, but I think the reason data protection is so important is that when there's trust around how a business uses data, when customers, clients, and in our case learners trust our brand, they'll give us more data, and more data equals more growth and more revenue.
Idris: Absolutely, cool. So what you're saying is that data protection is a new security system for the gold that we have.
Veronica: Exactly, exactly. That's a great way of putting it. You wouldn't leave your gold lying around in an unlocked, unsecured vault, right? Our role as data protection practitioners is to show people how to keep that gold secure, how to use it compliantly, and also to help it grow in value. We help the business avoid reputational damage and loss of consumer trust. We've all seen, over the past few years, different companies have data incidents and then watch their stock prices fall, lose consumers, and face fines from authorities like the ICO. With the speed at which data and technology are moving, it's more important now than ever to have a dedicated data protection function; having one isn't optional anymore, it's essential. Teams like ours do a broad range of things to support businesses, from something as mundane as showing people how to password-protect a spreadsheet, and the circumstances when they should, to assessing the impact of new AI tools or new third-party vendors. So we're almost a business's sounding board: we don't make the decisions for you, but we show you how to make compliant decisions. I think we're moving away from being that blocker and towards being a business supporter, a business enabler.
Idris: Awesome. I'm not sure if that was a slight admission to blocking innovation, but I'll let you confirm or deny. I do think about our audience, and you just mentioned an authority called the ICO. Could you speak to what the ICO is, and why they're punishing people?
Veronica: Well, the ICO, which is now rebranding as the Information Commission, is essentially the watchdog for data protection in the UK. They watch over companies to ensure they abide by the UK GDPR, and where a company falls afoul of the regulation, or there are complaints about the way it's handling people's data, the ICO can step in and take enforcement action. That can range from fines to correcting the way your company handles its data. So it's very important to try as much as possible not to fall onto their radar, and to follow the guidance they produce. They produce really helpful guidance for businesses and for individuals, as well as making sure everybody is following the GDPR and doing what they're supposed to do.
Idris: Thanks for the clarity, and don't be surprised if I ask you some silly questions; I always think there's no such thing as a silly question. Cool, this is really helpful. So, information security versus data protection: could you tell me about this trinity we have going on? Could you explain the difference between information security and data protection, and where the two meet?
Veronica: Yes. We work quite closely in synergy, and there's a lot of crossover between what we do. The easiest way to describe it is that information security is protecting the entire house. They're looking at the systems, the networks, the technical measures in place to protect the house, so their world is full of things like access controls, encryption, firewalls, and making sure the right people can or cannot get into the right systems. Data protection, on the other hand, is more about protecting what's inside the house: the sensitive stuff, the gold. We're looking at personal data and regulated information and how it's collected, stored, shared, and deleted, and we're also looking at privacy laws like the GDPR, at consent to share that data, at how long data is kept, and at making sure the organization uses it responsibly. So there is a lot of crossover and synergy in what the two functions do.
Idris: Have you got any examples of incidents that might have taken place that warrant the need for this trinity to exist?
Veronica: Yes, certainly. If there's a third-party incident where an external actor is trying to obtain access to our domain, or something like a phishing attempt, then we'll work together with information security to assess and conduct any remediation following the investigation of what's happened. Information security will look at the technical measures in place and whether anyone was able to get access: the how, what, when and why.
Idris: The damage, you mean?
Veronica: Yeah, the damage, exactly. And then we in data protection will be more interested in whether personal data has been accessed during that incident.
Idris: Interesting. So, obviously I'm going to talk about AI. What does AI do in this situation? How does AI play a part? I imagine it makes your job significantly harder?
Veronica: Yes, I would say AI certainly does amplify that risk. What we're now beginning to see is that the risk landscape is changing, and as AI systems proliferate, we're seeing more and more sophisticated attacks in this space. If you look at phishing, for example, AI can now learn behaviours to make phishing attempts more realistic. So when it comes to things like that, information security has a big role to play in plugging any security gaps.
Idris: You just mentioned phishing, and I want to elaborate for our listeners who aren't sure what phishing actually is. Phishing is when a malicious actor, a bad actor, attempts to trick you into believing that they are a legitimate entity. For example, you might shop or bank with Barclays, and you might get a password reset email for your Barclays account. That phishing email would look like a legitimate email from Barclays; if you're not careful, if you haven't spotted the signs, it looks like the real thing, and it will request you to authenticate to the "Barclays" website, probably telling you that you weren't able to log in. Some phishing attacks are sophisticated enough to use your multi-factor authentication and gain access to your information that way. So, if anything, a note to our listeners to always be mindful of any messages they get in their personal emails as well as their professional emails, because information security and data protection are always at risk from those kinds of things. Now, what's really interesting: the use of third-party tools. This is one where I know BPP, now Lyceum Education Group, takes a very tough stance, and we're seen as leaders for these reasons. So when you say third parties, what kind of dangers are out there when it comes to the use of third-party tools in your data protection domain?
Veronica: So yes, there are a number. Third-party tools can of course be great tools to use, but we have to do the correct vendor due diligence to ensure their systems are secure, and that the systems we're putting our data into actually have the correct technical and organizational measures to protect that data. Where we're leaders in that space, whether producing new tools ourselves or using other people's, those tools obviously have to go through the correct vendor due diligence, and that's where information security and data protection work together to assess whether the measures are adequate to protect the data we're going to put into them. You'll have seen in the press that there have been numerous incidents over the past few years that were actually caused through the supply chain, by third-party suppliers. If we're putting our data into a tool and that data is going to be processed by that tool, we have to make sure the tool, and the organization behind it, has the correct measures in place to protect that data. That is really where the risk lies, and where we look at how to minimise or mitigate it.
Idris: Fantastic. And again, just for deeper insight: is this where you turn into the department of enabling rather than the department of no? You're saying there are third-party tools that might actually fit the bill, so to speak, but this is why it's really important that an organization has good communication and good ownership over this type of thing. Is that what we're saying? That the correct thing to do is always to report tools in and have you do the necessary due diligence?
Veronica: Exactly, exactly. Where something is going to have access to our systems, or we're going to be putting personal data into it, the best thing is to have it go through our formal due diligence process. Then we can do that assessment, and we can even negotiate with the supplier to make sure they have sufficient measures in place to protect that data.
Idris: You know, I did a podcast a while ago where we spoke about the famous cookie banners, and one of the points we got across was that we do pay for all these "free" tools. It might not seem as though we pay, because they do a fantastic job, but the danger is that we pay with our personal data, and when we're on work systems, we pay with business data. That's almost a catch you could fall foul of without knowing, really.
Veronica: Exactly, that's a really good point. Free tools have to sustain themselves somehow, so often your data is the currency they use to sustain themselves. Whereas if you go for the business or enterprise tier, it will come with more explicit contractual guarantees, and you can have conversations with the supplier about how the data is used. If it's an AI system, you might be able to negotiate the terms to make sure your data isn't used to train their model. Generally, with the enterprise version you'll have more control and oversight over how the data is being used.
Idris: That's interesting. I didn't actually know that, and I'm sure there's a large section of our audience who didn't know that the paid-for tiers, in essence, give you a bit more protection over your data. Really interesting. Have you got any examples of instances where data obtained by a free tool got out of hand or led to something a bit more serious?
Veronica: Not internally, but externally, definitely, in the press. I think we all saw that big leak last year where ChatGPT, the free version, had an indexing leak. People had been having sensitive conversations with the free version about a variety of things, legal matters, personal information, and conversations they had shared somehow ended up indexed on Google, so there was an index for people to click on and inadvertently read those conversations. Although OpenAI quickly removed the feature, thousands of chats had already appeared in search results. So that is one such example.
Idris: Goodness. So, Veronica: GDPR. That's something I'm pretty confident everyone is aware of, but if they're not, could you explain what GDPR is in a nutshell?
Veronica: In a nutshell, it's the regulation that tells organizations what they can and cannot do with people's data, and it gives individuals greater rights and control around how their data is used and accessed. It really gives them autonomy over how their personal data is used. Around the world, the GDPR is referenced as the gold standard.
Idris: Interesting, right. And that leads me on to a question. With all this talk about AI training on our data, what control do we have under GDPR? If our data has been ingested into an AI model, what control do we have? Do we have any?
Veronica: Well, it's a good point, because we're now entering one of the biggest frictions in modern technology. Advanced AI systems, especially those using deep learning, don't really explain themselves: data goes in, data comes out, and in the middle there's this tangled, opaque space that we call the black box. Even developers sometimes struggle to explain what's happening inside it. And that conflicts with the GDPR in a way, because the GDPR is all about transparency, accountability, and giving individuals control over how their data is used, as I said before. Under GDPR, people are supposed to have the right to understand how decisions are made about them, to access their data, and to have it erased, which sounds pretty straightforward until you ask a model that's already been trained on your data to delete that data. It's like baking a cake and then asking for the egg to be removed. The GDPR gives you rights to erasure and to be forgotten, to have your data deleted. But once that cake is already baked, how are you erased from it? How are you forgotten? Can the system forget you? So it does create that tension.
Idris: Well, that's super interesting, and it's a great analogy, it really is. So are we saying we're entering uncharted territory here? That when it comes to the right to be forgotten, it can't really happen with a model? Have there been any legal instances of this?
SPEAKER_00Um so we're now seeing the cases come into court on these topics, especially where we're seeing models that have been trained um using data that we might say is you know copyrighted or proprietary. So that it's now coming into fruition. So we're not really sure how you know the courts are going to decide in this area. So, for example, Getty has had an unspeakable. successful case. I think you'll have seen Getty Images that take the pictures at all the celebrities. Yeah. And so they've been um having an ongoing court battle with um an AI model that they claim has trained um using the copyrighted Getty images logo. And so they've been having an ongoing battle and it it's not been quite successful as of yet. So we're really yet to see how um these sorts of cases are going to be decided. So it is a new and uncharted territory we're entering into. So as we were saying before it's easier to um get it right in the beginning than to remediate later. If your data doesn't go into these systems then you know you wouldn't be asking for erasure you wouldn't be asking to get the data back. So it's about really taking ownership of how we use data and being careful with how we use data with these new tools.
Idris: Well, that's a really good example. In fact, I had heard about it: the Stability AI one. It's something that has cropped up several times when it comes to AI, basically because of the way these models train, especially on images: they use a large variety of images to create their own version. So when you ask for an apple, for example, the model knows what an apple looks like, but its apple might be different from all the other apples out there, and when you see its apple, there's the question of where it came from. It's super interesting. So, are there examples of things that you do well that other businesses could emulate or take a note from?
Veronica: Yeah, I think we're quite forward-thinking in this space. If we're talking AI in particular, there is an AI governance committee in place at BPP, and on that committee we also have our chief legal officer. So at the outset, when we want to review a new AI tool, we'll already be asking the correct questions: what data goes into it, what access does the tool have, is there a human in the loop, how can we mitigate bias, and what steps do we need to put in place to tell individuals how their data is going to be used. We're able to look at all those key questions right at the development stage, before deployment, to make sure those transparency obligations are met. And in general, when we're using any third-party tools, as I mentioned, we have a really stringent due diligence process that information security and data protection work on together, and we'll go through with a fine-tooth comb what data is accessed and where it's going. The benefit of me also being a lawyer is that I can look at what contractual safeguards are in place if we're having someone process our data. So I think in this space we're really, really forward-thinking, and we've done quite a lot in terms of our AI governance and our tech governance.
Idris: And do learners get any benefits?
Veronica: 100%, and there are quite a number of AI initiatives to support learners being developed internally right now. That's another forward-thinking thing: we're not just utilizing other people's tools, we're also developing our own, and the technical teams have the benefit of data protection supporting them while they develop. Those tools are really powerful, but students have to trust putting their data into them, so we're able to reach the perfect balance: improve the learning experience, but also be compliant and transparent about how their data is being used. We're giving learners real control in that space, so I'd like to say we definitely treat privacy and data protection as part of the design, never an afterthought. You get the best of both worlds with us: smarter education, but also strong rights protection.
Idris: Can I ask a silly question about the right to be forgotten? When you spoke about GDPR, is it true that that's different from asking to be deleted from a system?
Veronica: So essentially there is the right to erasure and the right to be forgotten, but technically they're almost the same. The key difference, I would say, is that when we talk about the right to be forgotten, people are usually thinking about delisting their personal data on platforms like Google, whereas a right to erasure request is more a request to have personal data deleted. If you want Google to delist you from their search, you'd normally make a right to be forgotten request on their page and ask them to delist certain searches that would lead to you. But if you want to be erased, for example you write to a company asking it to erase the personal details it holds about you, then you make a right to erasure request, and they will delete the data unless a certain exemption applies, for instance if they need it for a legal reason. Either way, they will respond and let you know what they can and cannot do.
Idris: That is super helpful, because it also answers a few other questions I had. Is it true that, because of GDPR, you can approach any company that stores information about you and ask for that information to be, well, deleted?
Right, so if that's true, does one need to know the difference between delisting and erasure, or forgotten versus the two? Or when you ask, is it just something that they would decide?
Veronica: So the important thing, I think... is the phrase "content rather than form", or "form rather than content"?
Whatever the phrase is, companies should look to the substance of what that person wants. It's the same with a data subject access request: you can ask a company to provide you with the personal data they have about you, but you don't have to say "I'm making a data subject access request". You can just say "I want a copy of the data you have about me", and they'll provide it to you. A company cannot say, because you've not mentioned erasure or forgotten or DSAR, "I'm not going to action your request". It's about the substance of what the person is trying to do, and about you, as an organization, being as helpful as possible in helping the individual exercise those rights. You can even go back to them for clarification if you're unclear; there's lots of ICO guidance saying you can ask for further clarification when someone makes a request.
Idris: Thank you, that's super helpful, and I know it's going to be helpful for our listeners. Shall we move it on slightly? Because again, for those who work in businesses, or own businesses or departments, I want some value for them as well. When you've got a workforce and this shiny new AI tool, how do we train staff? What's the best way, what's the best approach? Because I assume that's something your department has to take on.
Veronica: Yeah, so I think it starts with having a really strong data protection culture, which is something I believe we do very well. It's part and parcel of having that culture that people will use tools with care when they're really up to date with data protection and how to protect people's data. As an example, we've run training and awareness programmes and we've done lunch-and-learns where we talk about hot topics to do with data, and things like new tools and AI are discussed in those sessions. We have clear policies, and it's important to have clear policies around the use of AI, the use of technology, and data protection in general. Then, on the practical side, give people secure alternatives: I think in your previous podcasts you mentioned using Copilot, and obviously we're using Copilot, so people have alternatives and don't have to use the free tools. Internally there's also a gen AI course people can book themselves onto. So there's a variety of things we do, and the practical steps organizations can take are to educate staff about the risks of third-party tools, as we've discussed today, your data being out there in the atmosphere without knowing what security measures are in place to protect it; to provide them with alternatives, as I mentioned; to explain the implications of not using those tools safely and correctly; and to regularly review and update your policies, looking at how people are using those tools and what the uptake is. And then the best thing of all, I think, is to involve different business units to build that data protection culture. You'll talk to the tech teams, the innovation teams, information security, data protection, and the people on the ground doing the development work, and then everybody feels they have a vested interest in having a strong data protection culture.
Idris: Amazing, really appreciate that, thank you. You've just given some amazing examples of what we do, but if you could give some advice to any leader who's listening now about adopting AI without the correct governance around it, what would that advice be?
Veronica: I would say firstly, of course, AI has huge benefits, that's undeniable, so we're definitely not AI-phobic; we're forward-thinking and I understand the huge benefits. But the benefits will only outweigh the risks if you treat your data with the same care that you treat your vision. So set your use case, look at your guardrails and your boundaries, and involve the appropriate stakeholders from the outset: don't wait until deployment to bring in your data protection team or your AI governance. And if you don't have the resource, not every company does, AI governance is quite a new function, then you can have champions across the business, or you can involve information security and data protection and maybe have it as an add-on to their function. That way you always embed privacy by design from the outset. Secondly, just understand how you're using people's data and how you can meet your accountability obligations. Don't think of good governance as a hurdle; it's better to get things right at the outset than to have to remediate later.
Idris: Awesome. Something I like to do to round off our podcasts is a bit of a quick-fire round. Can we go for it?
Veronica: Okay.
Idris: Nice and easy: tea or coffee?
Veronica: Coffee, coffee.
Idris: Excellent. And in school, what was your favourite subject?
Veronica: English.
Idris: Interesting, given you're a lawyer by trade.
Veronica: Kind of. I originally wanted to teach English.
Idris: And if you could give a 16-year-old Veronica some advice, not just career advice, what would that be?
Veronica: It would be: don't worry so much, everything's going to work out in the end.
Idris: And would you like to finish on any notes or comments?
Veronica: Just to go back to what we said at the beginning of the pod: remember that our data is gold. So when you're innovating, keep that at the forefront of your mind, and think about how you're safeguarding your gold and keeping it in your vault.
Idris: Awesome. Thank you, Veronica.
Veronica: Thank you, thank you.