The Digital Project Manager

A Privacy-First AI Strategy: What It Looks Like and Why It Matters

Galen Low

What if regulation wasn’t a blocker to AI transformation—but a strategic advantage? Galen sits down with Lauren Wallace—former Chief Legal Officer at RadarFirst and a veteran in legal, product, and AI governance—to explore how regulated industries can harness their existing compliance muscle to lead responsibly in the AI era.

They get into the practicalities of building privacy-first AI strategies, setting clear ethical baselines, and creating internal momentum across cross-functional teams. If you’re navigating digital transformation in a high-stakes, high-compliance environment, this episode delivers grounded advice and hard-won insights you can act on.

Lauren Wallace:

Privacy is a fundamental human right. It is a baseline expectation that allows us to live our lives in a self-determined manner. In the U.S., historically, we've treated the value of that personal information as belonging to the company that collected it. In the EU, privacy is a fundamental human right, and the right of determination around the use of your personal information belongs to the data subject—the human.

Galen Low:

Is regulation the red tape that will hold regulated industries back in their AI transformation?

Lauren Wallace:

I think heavily regulated businesses actually have an advantage in designing and implementing compliant AI programs. They're already baked into existing regulatory frameworks. About a year and a half ago, we launched a series of monthly lunch and learns on ethical AI. We talked about transparency, we talked about accountability. This is the most fun part. Everybody loves war stories, right? I pulled out examples, shocking, hot-off-the-presses things that have implicated these principles of transparency or bias mitigation.

Galen Low:

Welcome to The Digital Project Manager podcast—the show that helps delivery leaders work smarter, deliver faster, and lead better in the age of AI. I'm Galen, and every week we dive into real-world strategies, new tools, proven frameworks, and the occasional war story from the project front lines. Whether you're steering massive transformation projects, wrangling AI workflows, or just trying to keep the chaos under control, you're in the right place. Let's get into it. Today we are lifting the lid on AI transformation in regulated industries and how a privacy-first approach to AI can actually accelerate innovation, drive cross-functional collaboration, and pave a smooth path towards sustained impact. With me today is Lauren Wallace, strategic advisor and former Chief Legal Officer at RadarFirst. Lauren has an extensive background spanning legal, business development, and executive roles at brands like Apple, Microsoft, and Nike, as well as venture capital and private equity backed startups. She's known for her practical and accessible guidance to legal, product, marketing, and development teams around the responsible use of AI. And she's a force of nature when it comes to navigating compliance in regulated environments. Lauren, thanks so much for being with me here today!

Lauren Wallace:

Thanks for the invitation. I'm excited to be here.

Galen Low:

I'm excited to be here too. We were just jamming in the green room. I wish I'd recorded that bit. I'm very excited about this. We've got a good energy match. We see eye to eye in some ways, and in some ways we may not. I'm really fascinated by your background because you've got this sort of mix of fast-paced startup culture and the arguably less fast pace of regulated industries like, for example, financial services. I think we can zig and zag today, but here's the roadmap that I've sketched out for us. To start us off, I wanted to get one of those big burning questions out of the way, that pressing but somehow paradoxical question that everyone wants to know the answer to. But then I'd like to zoom out from that and maybe just talk about three things. Firstly, I wanted to talk about what a privacy-first AI strategy actually looks like at the component level and what executives in regulated industries need to put into place to achieve it, as well as maybe what the benefits are. Then I'd like to explore some examples of what a privacy by design approach looks like in practice and how senior department leaders bringing their company's AI vision to life can set themselves up to avoid painful issues in the future. And then lastly, I'd like to explore how the competitive landscape will look after five years of organizations of various sizes having gained momentum on their AI strategies. That was a big mouthful. How does that sound to you?

Lauren Wallace:

Well, except for the five-year perspective part, Galen, I think we can cover a lot of ground here.

Galen Low:

Alright. I mean, you know, we'll get our crystal balls out and, yeah, we can go 3, 5, 10. We'll play. I thought I would start off by asking you one big hairy question, but I'm gonna tee it up first. When we think of heavily regulated industries like financial services or healthcare or energy or telecom, et cetera, most of us think of limitations that force organizations to move slowly. And then the follow-on thought from there is that AI transformation in regulated industries will also move slowly, leaving these industries in the dark ages while the rest of the world accelerates into their AI strategies. So my big hairy question is: is regulation the red tape that will hold regulated industries back in their AI transformation? Or is it the foundation that might benefit everyone in the end, or something else entirely?

Lauren Wallace:

Yeah, I'm gonna go with something else entirely. Well, you know, somewhere in the middle on the options you've described. I would say I think heavily regulated businesses actually have an advantage in designing and implementing compliant AI programs. And that's because the principles that underlie AI governance, like transparency, accountability, bias monitoring and prevention, these are essential elements of an AI governance program, and they're already baked into existing regulatory frameworks. And that's a bigger scope, I think, than some people necessarily realize, 'cause it includes things like GDPR, which, regardless of what industry you're in, you're probably pretty familiar with. But if you are in the banking industry, you're subject to fair lending laws. If you hire people for your business, you're subject to equal opportunity laws. There are a host of other civil rights and consumer protection frameworks that in particular may include prohibitions or restrictions around using algorithmic decision making. These rules, like the model management rules that the banks have used for years, have been in effect for ages. It's called algorithmic decision making, and it easily translates into, you know, what we're looking at with all these AI tools. So these big, heavily regulated institutions in some of the sectors that you described, they already have robust existing compliance programs for those frameworks. They have resources assigned for governance of these frameworks. They have controls and they have tooling designed to ensure compliance with these frameworks. So I'm not saying it's an easy thing to bolt AI governance onto that, but at least you're starting from someplace when you want to add AI compliance. I've worked with a lot of these big, heavily regulated institutions on their AI compliance programs and, you know, it runs the gamut. Some of 'em are kind of coming in cold, or they have an institutional allergy to AI and you kind of have to get around that. But there are plenty of them that are very sophisticated. They have these existing programs and they're just looking at, how do we expand this? How do we bring on the talent and the expertise that we need? How do we bring in our community in a new way so that they can participate in AI compliance at the larger scale that we're working at? They've also, and I know a lot about this, been subject to a very complex patchwork of security incident notice requirements. Now, my community, my RadarFirst people out there, I see you. I love that you're using RadarFirst for this. That product, which I'm a giant fan of, has about 500 global privacy and security incident notification rules on the books that you can test your incidents against. I think people need to understand that it doesn't matter where the incident arises, whether it arises from a misdirected fax or from an AI that you've enabled to use personal information. An incident is an incident. If personal information is involved, you are subject to these incident notification rules. So you gotta know that and not think that you can kind of set AI incidents to the side. You certainly can't just, like, pin the blame on your AI. We've seen that. That doesn't play out very well. So you need a process to manage incidents, but now you need to enhance it. You are not just starting from scratch.
That was a very long answer to your short question, but I actually think the bigger challenge is for the mid-size businesses. They may be facing these regulatory headwinds for the first time when they implement AI into their workflows, and then where do they get started? They may have never had tools where they could analyze data at scale, which these new, and some of them relatively inexpensive and fairly accessible, AI products might enable them to do. But now all of a sudden they're running new processes. They're using information in ways that they haven't before and maybe exposing that information to a host of new risks. So for those folks starting the governance process from scratch, I say, you've got your work cut out for you. That was my long answer to the short question.

Galen Low:

I was like, waiting for the hopeful bit afterwards. You've got your work cut out for you, but.

Lauren Wallace:

Let's go to the hopeful bit, then. Because I think you can always start with the basic principles of privacy by design and then spread those principles across to protection, not just of personal information, but protection of the company's proprietary information, 'cause you've got a whole bunch of new risks here when you put just company information into ChatGPT or whatever. You wanna assess and bolster your security posture. We have a stunning new attack surface with AI, so you gotta make sure you've got your security dialed. And most of all, and this is what I hope we'll spend most of our conversation talking about today, is getting your internal conversation going to really understand what you want to do with AI, what you think you can do with AI, and what is the real ROI that you hope to achieve. As we enter that conversation in our communities, I like to separate it into kind of two vectors. One is, what are your internal uses of AI, for productivity, for replacing tools that you might currently have, for just functional enhancements? And two, what are your product development use cases?

Galen Low:

Mm-hmm.

Lauren Wallace:

And there's very different considerations for each of those.

Galen Low:

I definitely want to get into that. I'm glad you mentioned the muscle, and I'm glad you mentioned GDPR, because I come from the digital world, and there are two things in my world where we found ourselves chasing our tails in terms of regulation. One was GDPR; the other one before that was accessibility, right, with the ADA, and we were just running around like chickens with our heads cut off because we were like, wait a minute, this sounds so legal. We don't have a process for this. We don't even have the beginnings of a conversation about regulation and fines and reporting incidents, and even just data visibility. We didn't have all of these things and we were really scrambling. And there are some folks who came out ahead, especially on the accessibility side. I found that it became a requirement and we needed to figure it out fast. The agencies that did help their clients build out, you know, web solutions that were accessible, experiences in the digital world that are accessible, they got a leg up. But it was such a scramble because we were looking at it the other way, which you mentioned, right? Which is like, yeah, let's just try all the tools. Let's move fast. Who cares? This is digital, this is technology. We're not bound by all of these rules. And suddenly we got a taste of that. We're like, oh, okay.

Lauren Wallace:

Guess what? Rules. Yeah, so many rules, and so many rules that are designed to express complementary principles but may exercise them in very different ways. So if you have GDPR versus the CCPA, the California act that was followed up by the CPRA, they come from the same good place, but they implement very differently. And so how do you as the digital project manager try to make sure, and I'll tell you, this is what a lot of my customers, my clients do, is just try to find that high watermark. Try to find what's consistent across the regulatory environments that you're operating in, and treat everyone as if they were subject to that same high, principled approach. Trying to run along the bottom and do the least that you can do in every case might seem like it's gonna save you some regulatory exposure, but it's gonna cost you so much more in implementation and in headspace when you're trying to go do interesting things. So just pick the best way that you can do it and do it. You mentioned accessibility. This is a subject that is very near and dear to my heart, and I'm working with some other groups on that, and I would love to come back and have another conversation with you about accessibility in AI, because I think we're making a lot of assumptions that guidelines like WCAG, where you're just kind of checking the box on these attributes of your website, are suitable when you're dealing with a whole different kind of cognitive load that people are using, or not using, or may not have access to. When they're using AI either in a direct interaction, like in a consumer-facing tool like Copilot or something, or when they're interacting with AI in the physical world. So when we look at people with disabilities who have been really empowered by things like Alexa and HomeKit to set up their homes or their environments in ways that are so much better and more convenient and easy to use, these things are being moderated by AI in the background, and so you don't see it, you don't tell it what to do, but again, your operating environment assumptions have been made for you by someone else. And this is incredibly powerful, and people deserve a lot more from this. So I'm excited to have a whole separate conversation about accessibility with you.

Galen Low:

I would love that. Equity has been coming up and equal access has been coming up a lot in these conversations, and I think you're right about the sort of high water level. I also worked on some projects in accessibility, web accessibility with the WCAG stuff, where it definitely was a box-ticking exercise, and it definitely was a couple weeks before they were due to be penalized, and it definitely was sort of walking along the bottom. But it actually added even more risk, because to a lot of folks it might sound like, ah, but you know, Lauren, aren't we over-engineering if we're doing this, if we're engineering to the highest level of regulation, you know, the common points? But I've actually seen it. I know that scraping the bottom and just doing the minimum is actually more expensive in a lot of ways. You come back to it later on and you spin up all these other initiatives to improve it. It's just putting off the inevitable.

Lauren Wallace:

Yeah. You don't wanna have to go in and do point fixes later on. And also, I think another thing about selecting this high watermark from a regulatory perspective: it's going to encourage an ethical approach to the work that you do. So even where you don't have the regulatory obligations spelled out somewhere that you can point to, chapter and verse, you might say, well, we still think this is the best way to do it. I think that's important just because we wanna be good corporate citizens and we wanna be able to tell our kids that we didn't, you know, do terrible things in our jobs. Also because you don't know going in precisely where the challenges to you might come from. It might not come from a state regulator or a federal regulator or a European regulator. It might come from a class action. It might come from some entrepreneurial legal team somewhere that's determined there's an opportunity here. It might come from your competitors. And so simply being able to point to the fact that you met some kind of notice requirement in some regulation somewhere, that's not enough. You need to show that as an organization, you are operating in accordance with the ethical guidelines that you had in your organization. And I know that's one of the things we wanna talk about today is achieving buy-in. You know, as a project manager, that's your day, right? Getting people on the same page. But how do you communicate about that page you want people to get on? What are the tools that you have available to you? What are the words? What's the vocabulary?

Galen Low:

I love that. I wonder if we could dive in there, because the note I had here was: I've seen you speak on the topic of developing a privacy-first AI transformation strategy. The thing you're saying about just being ethical, and sort of communicating that so everyone's on the same page, that's also resonating with me. I wondered if maybe you could break down for my listeners the core components of what a privacy-first approach to AI transformation looks like, and maybe some of the benefits. But even if we wanna cast wider than just privacy, because I think privacy is part of that ethical approach, it's still part of the fabric there. So if you wanted to zoom out a little, I'd welcome it as well.

Lauren Wallace:

Well, I'm gonna zoom in first and, you know, get a little personal, a little bit controversial maybe. To me, privacy is a fundamental human right. It's a baseline expectation that allows us to live our lives in a self-determined manner. And organizations and states should cautiously infringe on that right. Or should they? They shouldn't, right? But if they're gonna, be cautious about it, please. Where I'll get a little controversial is looking at sort of the global landscape for privacy. The EU is the bellwether for treating privacy as a fundamental human right and really specifically articulating what they mean by that in GDPR, and then by extension in the EU AI Act, which relies really heavily on GDPR to say, okay, let's start with privacy and then let's go from there. That's the DNA of the EU: privacy is a fundamental human right, and the right of determination around the use of your personal information belongs to the data subject, the human. In the US, historically, we've treated the value of that personal information as belonging to the company that collected it, belonging to the company that uses it, because they extract value from it, and so surely if they extracted the value it must belong to them. That's where I'll get a little controversial. It's less controversial to say that there are other jurisdictions around the globe where the value of the personal information, in fact the specific content of the personal information, is deemed to belong to the state, because the state is going to use it in the best interests of the data subject. So these are three very different baseline theories of who owns and should get the value of the personal information. So, coming around from that, I think when you look at your company's ethical guidelines, you say, okay, where do we fall in this mix? This is where you start. It's not with a list of principles like fairness, accountability, bias mitigation. These are good, these are great words, and they can help you build the template that you're all gonna fill out together and share with your company and say, these represent our values. But at the core, do we as an organization believe that we serve by extracting value from personal information? Or do we believe that we serve by helping our customers use personal information for the benefit of the data subject? And you're allowed to pick, you know, in the US you're allowed to pick those things. I wanna help those companies that wanna serve the interests of the data subject.

Galen Low:

Right. Yeah. You had mentioned the European model being the sort of bellwether model for individual privacy as a right. It's funny when you say that, because if I zoom it out, as a North American, the experience for me is that, yeah, I've kind of accepted the fact that my data is like currency. It's transactional. When you said, oh, because they've extracted the value, they own it, I was like, yeah, it makes sense. In my head I was like, oh yeah, I didn't pay money. Because in some ways the consumer in me is like, okay, well if I don't have to give you money and it's quote unquote free, then that's fine. And I think that's where, and maybe I'm zooming out too far, but that's where the end user starts allowing an organization to be less cautious with their data, because I'm like, sure, take my email address and then enrich the data. Like, I work in media, right? So I've seen some of the backend of some of this stuff, and some of it's better than money in terms of value. But the ownership thing, I was like, wow, okay, yes. And that's the thing that always stuck with me with GDPR: people taking control of their own personal information and having agency over it, but then the gap of, who's got time? Like, I barely have time to delete photos outta my camera roll. I'm not going and figuring out where my data lives and writing requests to be like, could you please, you know, delete this data permanently from your server. People are doing that, it just wasn't me. And I was like, okay. We almost allow it in some ways.

Lauren Wallace:

Kudos to organizations like NOYB, which stands for None Of Your Business, which is Max Schrems' organization. Did you know that? NOYB, None Of Your Business. Because they can exercise that right at scale. For us as individuals, it's excruciatingly difficult to exercise this right individually. Now, a lot of the states, and this is I think something that's moving forward with AI regulation that has been kind of missing from privacy regulation, is the obligation to inform and then to require consent, and that consent must be informed. Consent in the privacy world has been implied in so many of our transactions. And when you go and, you know, you click through that cookie banner or whatever form in the product that you're using, always be mindful of this baseline maxim in privacy. I'm sure you've heard it a million times, but if the product is free, then you're the product.

Galen Low:

Yes. Yeah.

Lauren Wallace:

But it's very easy to get seduced by the functionality that is being promised to you, and sort of think, well, it's just me. It's just me sitting over here in my space in Portland, Oregon. How much could my information personally be worth? But in the aggregate, and in the enriched environment you're talking about, and this is one of the emerging privacy risks that exists in AI because of the capability of enrichment and the inferences that can be drawn from enriched data: the information that you provided to this one vendor in this one instance, in order to buy this one thing, may, standing alone, not have a ton of value, pennies maybe to somebody, but aggregated and enriched and resold to third parties that can use it for other things, now your dossier's data value is much higher. And how can you, as the data subject, go into each of those? You can't. That's not available to you. You can't remediate that. So again, my heart goes out to the organizations that want to be ethical in the first place and have deep awareness of the data that they have, where they got it, whether they had the consent to use it in the first place. Are they using it in a manner that exceeds the scope of that initial consent? And could they defend somehow that this expanded use is consistent with the original intention? Was the data subject informed, at the time that they were supplying this, about what their consent was gonna be used for? Again, these are not things you wanna have to go back into your legacy data set for. Now, you go out and acquire a company, you buy 'em for their data, right? You're not buying 'em for anything else. This is a little tangent, but I was thinking about this the other day. There's a big story in the news recently. I'm not gonna name names, but a big bank had bought a startup. That startup provided services to people who were seeking financial aid, students who were seeking financial aid, and the big bank bought this startup because the startup said that they had 4 million names of people who had bought their student loan services. And the bank obviously saw this as a feeder of information about individuals that they could use to then target those individuals as prospective customers for the bank. The customer acquisition cost to the bank is about $150 or so, depending, per successful customer acquisition, somebody who goes ahead and opens a checking account, right? So they were willing to pay $175 million for these 4 million names. That's about $40 to $45 a name. So you can see the delta, 40-something to 150 or so; they thought it was worth about a hundred dollars per person to go in and buy a hydrated, vetted list of prospects for their banking services. Well, then they found out those 4 million names were fake. They'd been synthetically generated, you know, using some kind of AI program. And as soon as they actually went and tested those names, they discovered that they were all dead ends. And so that's $175 million in this project that they had done. They sued the company, they backed outta the acquisition. You know, maybe they got the money back, I don't know. I thought, that's so interesting. They bought this startup not for its technology, not for its people, but solely for the names on the list. And given the public information about the acquisition, about the lawsuit, we could see the attributed value very clearly of each of those names. The company was worth nothing if those 4 million names weren't accurate.
So take that into your own life and think, okay, what if me or one of my kids was in that data value pipeline from a very early age, and now up to the age where you might be opening a checking account or getting a mortgage from this bank? So that transactional value that you see when you think, sure, click to buy, accept, whatever, my data in this moment doesn't feel like it's worth very much. But if you look at your lifetime value, well, I'll tell you, this big bank thought you were worth at least a hundred bucks. So would you do something different with that hundred dollars than, you know, that decision that you just made?
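
For readers who want to sanity-check the numbers in that story, here is a rough back-of-the-envelope sketch. The figures are the rounded amounts mentioned in the episode, not the actual deal terms, so treat it purely as an illustration of how a per-name value gets implied.

```python
# Illustrative sketch of the per-name valuation math from the story above.
# Figures are the rounded numbers mentioned in the episode, not actual deal terms.

purchase_price = 175_000_000       # what the bank reportedly paid for the startup
names_acquired = 4_000_000         # claimed size of the prospect list
customer_acquisition_cost = 150    # rough cost to acquire one banking customer

price_per_name = purchase_price / names_acquired                       # ~$43.75 per name
implied_margin_per_name = customer_acquisition_cost - price_per_name   # ~$106 per name

print(f"Price paid per name: ${price_per_name:,.2f}")
print(f"Implied savings vs. normal acquisition cost: ${implied_margin_per_name:,.2f}")

# If the names turn out to be synthetic, the list's value collapses to zero
# and the entire purchase price is at risk.
valid_names = 0
realized_value = valid_names * customer_acquisition_cost
print(f"Value of a fabricated list: ${realized_value}")
```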

Galen Low:

There's so much there, because it's such an analog to what you were saying earlier, that a lot of folks are thinking about AI and AI transformation in their workplace as the thing they interface with and the skill that they need to develop to do their job. It's not always evident, the thing you just described, which is what is in that black box of what happens. And usually we think of it as, well, we don't know how some of these models work, this machine learning, they are fixing their own code and we don't fundamentally know why some of this AI works. But that's not the black box we mean here. The black box we mean here is actually the stuff you don't see after you hit submit, or after you submit that prompt. Where's it all going? How is it being aggregated? Is it transferring hands? And going back to that model, right, where you sit down and you decide, okay, you know what, we are not the EU model. We definitely feel like we get value from our customers' data. Let's start there. There's still so much work to be done, I guess, to understand then where all this data goes, what we are responsible for, and what ethically we are responsible for as an organization. I think that's an interesting question that, I mean, I don't know if we need to answer today, but we could try.

Lauren Wallace:

It's kind of a big question. There are so many big questions when we step into this realm as citizens, as parents, as consumers, as corporate citizens, there are just so many big questions. And I find that it's a pretty giant ocean to boil, and I'm just standing over here on the edge of it with a big lighter trying to boil that ocean. It's too much. So for me, I like to start with a use case. What is the actual thing you're trying to achieve with this one actual application and this one data set that you might feed into this application? What is the ROI you really hope to extract from that? Are you saving money? Are you making money? Are you gonna be able to add customers? And then start to break it down from there. In compliance circles, we talk a lot about tone at the top, which I think is a very valuable construct: how do our executives, our leadership, our board talk about their values, and how do they send those values down into the organization? But that only gets you so far. Where we're operating at the project level, you gotta break it down again and say, okay, what are we trying to achieve? What's the use case that we're after? What is the actual ROI? How do the ethical principles, whether from our leadership or from the regulations that we're subject to, align to the things that we're actually gonna do? And if we don't know, because of that black box, is it safe to proceed? And I've said no, in my capacity as Chief Legal Officer at RadarFirst. I'll tell you what I mean. We looked at hundreds of vendors over the course of the year. I actually did a little readout on it. There was a pretty high proportion that we declined. We said, we don't have enough information, or they haven't posted a trust center that we can really dig into and get the information that we need, or they're using underlying models that they're not disclosing to us. We did some research and discovered that maybe they're also using something over here and, you know, that's not a tool that we've vetted. So I think feeling confident saying no is absolutely fine some of the time, and that's where it helps to have the ethical framework where you say, okay, one of our ethical principles is accountability. Let's pick that one. Well, if I don't know the answer to one of these questions in this kind of straightforward analysis of what this product is supposed to do for us, who's gonna be accountable for it? I can't just turn to the vendor and say, well, I didn't really understand what you were doing when I bought it from you. Accountability is a tough principle. It's very aspirational, and it's a good aspiration. But really, who do we mean when we say accountable? Do we mean our board is accountable? Do we mean that our senior leadership is accountable? Do we mean that individuals on the team are personally accountable? And I think where you can start, again from the project perspective, use-case oriented, ROI oriented, is setting up teams that are accountable to each other. And when you're setting up this team for a project that's gonna use AI, I think it's so important to bring in your whole community. Clearly you have product at the table, you have engineering at the table, you might have security at the table, but you've gotta have, let's say, customer success at the table. Does CS have scripts so that they can respond to customer questions about this functionality?
Do they understand the functionality well enough that, in the absence of the script, they could at least have the vocabulary to surface the question to someone else? Product marketing, do you have documentation that thoroughly explains, that demonstrates you understand, what this product is gonna do for you? Marketing, when you go out and talk to your community, do you know whether your customer community is in that allergic-to-AI category or, at the other end, the very-eager-to-try-new-things category? How are you gonna tone and tune your messaging so that when you talk about new features or functionality that you're bringing out that are AI enabled, you're meeting them where they are? Legal, of course you've got legal at the table, I gotta say that. Importantly though, back to the customers, you may have customers in your portfolio that prohibit the use of AI on the products they buy from you. That can be buried in the contract somewhere, and it might not say AI; it might say algorithmic decision making or using models. You might have a 10-year-old contract that prohibits this use, so you need to get legal at the table too, and then also look at it from a regulatory perspective. But now you're all around the table. Now is when you develop the accountability to each other and say, I asked you and I did not hear back on this.

Galen Low:

I'm glad you went there, because as I was thinking about it, it's a complicated question, right? Accountability. Like, who? And I was like, okay, where are we gonna take this? And it's kind of like, we hold each other accountable internally because we are a team. The question I actually had crafted here, which I think we've kind of covered, is that something like privacy is only as strong as the weakest link in a lot of cases, right? So you get a team together, and if there's one person on the team that's like, yeah, it doesn't matter, I'm just gonna, whatever, upload this data set to a public LLM, or we'll just skirt by this because we can get away with it, we've gotta deliver by Friday, so let's just get it done, we'll circle back. But that sort of culture of holding one another accountable for that value, I think, is really strong.

Lauren Wallace:

And here's what I love about privacy as a guiding principle that can play out in so many different ways. Unlike so many of the other considerations that we take into account when we're looking at development or product design or whatever, privacy is personal. Do a quick survey of your organization sometime. Ask everyone whether they or someone that they care about has been the victim of a personal information breach. I mean, on the one hand, we all get letters from all of our proprietors all the time saying, oh, by the way, you know, you should set up your Experian credit report monitoring. But if you've ever personally had your data misused, and apparently for the benefit of somebody else, it is a literal nightmare, and I've been through it myself personally. It's not why I became a privacy lawyer, but it's not not why I became a privacy lawyer, either.

Galen Low:

Fair enough. Yeah, I hear you. I hear you.

Lauren Wallace:

And I was talking to someone recently, this was just a week or so ago, who had recently gone through this whole process of trying to rebuild their life after their bank account was hacked, I think, and then all kinds of bad things happened from that. So when you are trying to encourage accountability and care for each other in this project context, when you can bring privacy in and see how really personal it is. Do parents really want their children's information to be available to anybody, you know, with a passcode? And for the lifecycle that we talked about when we looked at that banking example, really looking ahead and thinking, well, this information that I put in here, it's about my child or about my home or something. This might be very valuable to someone five, ten years from now. Can I stop that now?

Galen Low:

I wonder if we could take this whole vertical slice and look at it maybe from a project or product perspective. You mentioned tone at the top, and yes, it's important, but also it's not the be-all end-all. So if we started just beneath that, you know, developing a product that's going to use AI or algorithmic processing in some way. And there are a couple pieces that you had in there, and, you know, tell me if I'm right or wrong about this, but I'm thinking of the education piece, right? You mentioned having the lexicon, or even having the empathy to have that thought of, it's just our customer's data versus that could have been my kids' data. And then the having everyone at the table piece. I know some folks listening are like, we can't have everyone at the table for everything, Lauren, that's gonna be so expensive. Might work for you in financial services, but, you know, we're scrappy. All the way down to, if somebody breaches that value on the team, what does it look like for that team to start to repair it? Those are kind of the three points I wanted to look at.

Lauren Wallace:

Let me look at it a little differently. I wanna think for a second, because you have intentional misuse of personal information, which is a terrible thing and a criminal offense in some places, but you also have inadvertent and unintentional use of PI. And that, I think, is something that we can be very accountable about defending against. Most of these tools that we go buy do have configurations that you can enable that will protect your PI, or, again, your corporate confidential information, let's kind of bundle those together for this purpose. There are defensive things that you can do in the first place, first at the baseline security layer, but also with these products as you bring them on board and start to use them. Well, let's first of all make sure we do those things. Let's avoid the inadvertent misuse. So let's sweep that out, and now we get to the core issue you're talking about, which is the intentional, or maybe careless or reckless...

Galen Low:

Right, yes.

Lauren Wallace:

Use of people's personal information because we do have deadlines to meet. We've got something to bring out by Friday.

Galen Low:

Just don't do that. Well, actually, I'm glad you said that, because you said earlier that you have to say no, that you've had to say no, and when you look back historically, you've said no a lot. My wife works in insurance, right? And they're, you know, very heavily regulated, especially here in Canada. Risk and compliance is very serious, and business gets turned down because of risk, which is not the case a lot of the time for other organizations in other industries, where they're like, we're just leaving money on the table by not having this here, versus, what is the risk of us taking this money if we haven't done our due diligence on the other end, or, you know, on the infrastructure or on the attack surface? I'm actually really glad you said that, right, just don't, because that is a skill. It's a tough one, right? Even when it's your job, I imagine being the one who's saying, no, we can't write this business, sorry. Or, you can't deliver this product, sorry. Or maybe not sorry, I dunno, right? It's a tough position to be in. How do you wave that flag? Especially when it's not really "your job".

Lauren Wallace:

Yeah. Well, I'll say there's no pleasure in it, right? It's not like some sort of vindictive thing. And I think we can revert to the shared vocabulary and tools that we have to talk about it organizationally in a more productive way. And the most old-school way of doing this is your good old risk matrix, where you've got likelihood going up one side and severity going across the other side, and you can have a conversation, a real candid conversation, about finding your dot in that field, and as an organization agreeing, well, our line is to the left of that dot, below that dot, and so we're not gonna do that thing. It doesn't have to be an individual thing. You can say, organizationally, we have a risk approach that our board believes in, our investors believe in, and our regulators believe in. And if we don't stand behind it day to day, we don't wanna have to come back in two, three years and say, oh, we kind of went outside the lines that time. And if we went outside the lines because we really didn't understand what we were buying or what we're doing in these black boxes, ignorance is no defense under the law. Yes. It never has been, it never will be. So I think tone at the top, shared values, and the personal investment that we have in our privacy and the privacy of our families and loved ones, I think that's always a great place to land.
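
To make that risk matrix conversation concrete, here is a minimal sketch in Python. The 1-to-5 scales, the example risks, and the threshold score are illustrative assumptions, not a standard; the point is simply that the team agrees on where the line sits before the individual decision comes up.

```python
# Minimal sketch of a likelihood x severity risk matrix with an agreed threshold.
# The 1-5 scale, the example scores, and the threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    severity: int    # 1 (negligible) to 5 (critical)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

# The "line" the organization agrees on: anything at or above this score is a no-go
# (or requires escalation) until it is mitigated.
RISK_THRESHOLD = 12

risks = [
    Risk("Upload customer PII to an unvetted public LLM", likelihood=4, severity=5),
    Risk("Vendor does not disclose underlying models", likelihood=3, severity=4),
    Risk("Internal summarization tool on anonymized data", likelihood=2, severity=2),
]

for risk in risks:
    decision = "escalate / do not proceed" if risk.score >= RISK_THRESHOLD else "proceed with controls"
    print(f"{risk.name}: score {risk.score} -> {decision}")
```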

Galen Low:

I love that. And you know what I love about it is that part of me was like, okay, well, when we say building the muscle around ethical use of AI and privacy and all these values, are we gonna talk about standing something up? Part of me was like, oh, is it gonna be like, okay, well build, you know, I don't care if you're a small startup, build an enterprise-grade, multi-person team that handles risk, compliance, you know, law and all that? And folks are like, oh, we don't even, like, we'd have to hire like 20 people and we'd have to write the book from scratch. But actually, I think what you're saying, tell me if I'm wrong, is that tone at the top is important because you set values. Then having the education within your teams to be able to say, that sounds like it might be on the wrong side of our risk threshold, can we have a conversation about it? That is the beginning of that muscle, even if you don't have a chief legal officer, or if that person is also wearing three other hats at your organization. It's still about having the vocabulary and the culture and the tone to have that conversation. The other thing that I found interesting, 'cause I was gonna press you on this, but the more I thought about it, the more it made sense: I said earlier, oh, my listeners are gonna say, Lauren, we can't have all these people at the table, we can't bring everyone to the table every time we do anything. And then I thought about it and I was like, but actually, when it comes to AI right now, maybe you kind of do need to, because it's not something that is known. It's not something where the risk is low. It's not something where you know what people are gonna do with it, left to their own devices. And actually bringing people to the table helps have that conversation more than just getting work done. It's like culture building.

Lauren Wallace:

Yeah. And culturally, I don't know an organization that isn't either going through or contemplating an AI transformation at this moment. And you cannot disregard that many individuals in your organization may be personally averse to the proliferation of AI in their lives. They're looking around their room and realizing that every single thing that they do is touched by AI. They don't really understand it very well, and how could you? I mean, no shade. You cannot understand it. It's not accessible to you to understand. And so asking these people, these individual human beings who may have this personal aversion to AI, for themselves, for their security, for their families, for their job security, for the security of their personal information, to say, okay, shut that off, because we are aggressively pursuing an AI transformation strategy? You may get your initiatives off the ground, but I think they're gonna fizzle out when folks haven't had the opportunity to develop some fluency and some confidence. So I wanna tell you one thing we did at RadarFirst. About a year and a half ago, we launched a series of monthly lunch and learns on ethical AI. Very casual thing, but everybody, of course, was invited, and it was meant to be an open forum, and we covered one topic in each session, so we had about 60 to 90 minutes each time. We talked about human agency and oversight, we talked about transparency, we talked about accountability. And for that, I used the baseline EU AI ethics guidelines. It was before the AI Act came out, but we had a lot to work with already. This stuff's all out there. So we did a bit of education on how these guidelines had been developed, where they came from, what were the baseline sort of human rights principles underlying each of them, and developed some vocabulary around it. We showed some examples. This is the most fun part, everybody loves war stories, right? So I pulled out examples. I don't know if you know the AI Incident Database. It's a tremendous resource online. Just look it up: the AI Incident Database. Like, whoa, shocking, hot-off-the-presses things that have implicated these principles of transparency or bias mitigation. So we talked about the example that was given in each case. What went wrong here? What could they have done in the first place to avoid this, or now that it has happened, what's their responsibility to mitigate it? And then we just had an open forum to discuss it, and the people who came into the room were at every stage of their personal AI transformation journey. Maybe by the time they left the room they hadn't changed where they were in their own estimation at all. Fair enough. But when they went back to their desks and worked on their own AI-enabled or AI-enabling projects, they now had a framework to operate in, they had a vocabulary to discuss concerns with their team or escalate concerns to legal or to compliance or to the product team as appropriate. It was a great experience. I recommend it to everyone. It's super fun to do. It also reinforces that we're talking about human virtues here, human values here, the impact on ourselves, on our families, and on our planet for the future.

Galen Low:

Who would run those sessions if they don't have a Lauren Wallace?

Lauren Wallace:

Oh, if you don't have a Lauren Wallace, gosh. Well, this comes back to another question, actually, which is: who runs your steering committee? You may not have one, but it's nice if you do, and it's nice if it's extremely multifunctional, to bring in everybody, and bring in people also at various levels of seniority in the organization, 'cause some of your freshest thoughts are gonna come from people who haven't necessarily been there very long or grown up in their careers very much yet. But who should run that? It's a great question. It's the same question for each of these things. Is it someone on the product side who might be very close to how your company is literally thinking about enabling it? Should it be somebody in G&A who is very close to how your company is spending money on AI or anticipating saving money on AI? You do have to have a tone-at-the-top contribution to it, so, you know, you wanna have an executive who's at least a sponsor of the thing, but I could see where you could even share that assignment month to month among different people and get a little different value every time.

Galen Low:

I love that, and I love that you started out using the word community, and now I can see it sort of transpiring. Not necessarily the traditional community of practice, but the act of getting together to share information, to figure out where there's alignment or where discussion is needed, and then arm one another with the vocabulary to continue that conversation in the day-to-day. Because in my head, and probably some of my listeners', right, it's like, oh, do we need to hire someone who's an expert in AI and privacy and ethical implementation of AI to run lunch and learns? Versus, I think, tone at the top, plus an executive sponsor, plus the people who are doing the work and the decisions that they're making and what they're thinking about, so that we can have that dialogue. It is a cross-functional game right now. It is a multi-functional sort of conversation that needs to be had around the new stuff that we wouldn't have had before. Because not every organization was contemplating or executing an AI transformation before, and now it's table stakes.

Lauren Wallace:

It came on so fast too.

Galen Low:

Absolutely did. And that's what I like about what you said initially, right, that the organizations and institutions within regulated industries actually had that muscle already, in some cases, to be managing this risk and dialoguing about this risk.

Lauren Wallace:

And they've made these risk threshold decisions before. It's not a new conversation. If you're talking about getting institutional buy-in and the authorization to say no to things where you haven't already sort of integrated your risk perspective into what your company does, that's gonna be a hard problem to solve right out of the gate on a case-by-case basis. Back to your point about who should run these things, I don't think there's anything wrong with bringing in a consultant to help you establish the framework for how you're gonna have these conversations, because otherwise, oh my God, you're gonna talk all day and you're never gonna get everything done.

Galen Low:

That's fair. That's fair.

Lauren Wallace:

But the people in your organization are the experts that should be just providing the inputs to this framework and helping decide how to decide on what your outputs are.

Galen Low:

To round us out, I wanna talk a bit about the future. I don't think I'll do the crystal ball thing. We've been mentioning some of these regulations that have happened in the past, right, GDPR, and, you know, there have been some conversations right now, slash over the past decade or more, about legislation lagging behind the use of things like social media. We've been talking about how folks in regulated industries might actually have a leg up because they have built the muscle to have these conversations around risk and compliance. They have the notification systems, they understand the attack surface. But then I'm thinking in my head, I'm like, you don't wanna have to go back and rework things if possible, and yet legislation hasn't really caught up with AI either. And I guess my question is, are all organizations kind of going into a gray area a bit? And do we have decades ahead of us of just making some assumptions and then having the law come back and change everything? Or is it maybe flipped? Will actual behavior now influence how legislation manifests?

Lauren Wallace:

That was a great question, Galen.

Galen Low:

Just to round us down with a little question.

Lauren Wallace:

Yeah. This is a little bit of a mind-blowing kind of question there. No problem. Well, let's think about the law for a minute. Law is just a way of writing down what our shared ethical principles are and putting it someplace where people can find it so they know. You may not agree with everything that got written down, and you may not share that ethical principle, but you know where to go to find out what the community thinks about this shared principle and how you're supposed to carry it out. And the litigation process primarily exists to handle issues when they're novel, they're new, they haven't come up, there hasn't been time to process them through legislative cycles. So looking at case law is super interesting, to see what's actually coming out in the litigation context. We talked about class actions a little bit ago. It's a whack-a-mole thing where you see these big foundation, frontier model providers sometimes winning, sometimes losing. So there's a lot of breathing room in how these things are playing out. But then these litigation outcomes do tend to get kind of poured back into our legislative outputs. But it takes a long time. You know, we say the wheels of justice grind slow but exceedingly fine. So you end up with 180-page-long legislation that intends to capture every possible scenario, so you don't have novel outcomes that give rise to a legislative situation, 'cause that's very expensive and time consuming and awful for everyone. That's part of the reason it takes so incredibly long, and I don't think we really can be looking to our legislators to act fast on this. We kind of don't want them to. We saw in Colorado, when they came out with their AI legislation, that they acted pretty fast. They got something pretty good, and then they had to yank it because they couldn't figure out how to actually enact it. So companies kind of started to stand up their compliance programs proactively before this rule came out, and then it got pulled, and then maybe it's gonna come out in June, and then maybe it's gonna come out after that. So you can really get whipsawed, I think, by proactive regulatory compliance if you are trying to do it on a point-by-point basis. If you have developed your tone at the top, if you know what it means to be an ethical organization, if you know where your risk threshold is, you're gonna put your legislative or your litigation risk kind of over on the right side of that category and say, okay, we're willing to face that if it comes to that. But we don't know enough right now. What we know about ourselves is what we believe is right and what we believe our customers expect of us, and we can act on that today.

Galen Low:

That was a fabulous answer, by the way. That is amazing. Thank you so much for this. Just for fun, do you have a question that you want to ask me?

Lauren Wallace:

I do, Galen. We've had a very interesting conversation. We've been talking about really hard and really interesting work for the last 45 minutes or so. And talking about how complex our day-to-day is. And so I wanna check in on your wellbeing and ask you when's the last time you took a vacation. And if you went somewhere, where'd you go?

Galen Low:

It was a year ago, and it was Mexico, and it was a big huzzah with my wife's side of the family, the whole family, her sister, my kid. Not the dog; the dog stayed home. But it was lovely just kind of sitting, hanging out. We got great weather, and it was very much a recharge and a reflection on family, if that makes sense. Good and bad, right? You know? Life happens fast and you don't always pick the directions things go in terms of health and careers and everything. But yeah, that was the last time. We're due. I actually feel, even as we go through these conversations around AI transformation, and I think you're right that there are very few organizations that aren't thinking about this right now, some feel forced to, some are excited about it, but it's adding to our stack of things to think about in an already fast-paced world. Yeah, there's a lot going on. Scheduling time, finding the time for things, finding that balance, it is tough. I think it's tough for a lot of people right now, maybe insidiously, maybe without people knowing it. Yeah, I think there's an impact that we need to dialogue about a bit more in the zeitgeist, around what our lives look like now that we are being asked to kind of drink from the fire hose on a lot of things.

Lauren Wallace:

Our cognitive load is imponderable at this point, and sometimes all you can do is do what you did is take your people, go someplace warm, and just appreciate it. I'm glad you did.

Galen Low:

And temporary. Yeah. Flee temporarily.

Lauren Wallace:

That's great.

Galen Low:

Awesome. Lauren, thank you so much for being here with me today. I really enjoyed our conversation. For our listeners who enjoyed the conversation as well, where can they learn more about you?

Lauren Wallace:

Hit me up on LinkedIn, we'll go from there.

Galen Low:

Awesome. Fantastic. I will include a link to Lauren's profile in the show notes, as well as some of the legislation we mentioned and NOYB. So check those out, and yeah, I think that's it.

Lauren Wallace:

Thank you, Galen.

Galen Low:

That's it for today's episode of The Digital Project Manager Podcast. If you enjoyed this conversation, make sure to subscribe wherever you're listening. And if you want even more tactical insights, case studies and playbooks, head on over to thedigitalprojectmanager.com. Until next time, thanks for listening.