Curiouser & Curiouser

Building AI We Can Actually Trust

Alice Season 1 Episode 3

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 53:35

AI governance sounds straightforward until you try to actually do it.

In this episode, Mo talks with Laura Powell, Senior Director of Partnerships at LatticeFlow AI, about the messy reality of building AI systems organizations can actually trust. They discuss translating responsible AI principles into technical controls, why many systems never make it into production, and how teams should think about risk as agentic AI systems become more common.

🔗 Podcast: https://alice.io/podcast

Follow the show so you don’t miss the next episode.
New episodes every two weeks. Stay curious.

00:00
AI is going to drive everything in the future. Like, let's not pretend that that's not how it's going to be. We are like on the precipice of either something really, really meaningful or something really, really terrible. We have an opportunity right now to fix this problem and it's still that gap can be closed. But the longer we wait to close the gap, the longer we wait to find these creative ways to diversify the data going into these systems, the worse it's going to get and the harder it's going to be to reconcile that in the future.

00:29
If AI has ever made you stop and think, wait, what is happening?  You're not alone.  I'm Mo, and I'm a security researcher asking the same questions.  On Curiouser and Curiouser,  we're having open conversations with experts,  researchers,  and leaders working at the edge of this space,  talking through how AI is taking shape,  what's shifting,  and how people inside the work are thinking about it as it happens.  So join us and listen in as the conversation takes shape.

00:56
Welcome back this week.  I'm super excited to  be introducing  and sitting here with Laura Powell, who I'm actually not going to be introducing because luckily you're not going to see this, but there have been at least four takes where I've tried to introduce Laura. She's just got an incredible background, a really fun story,  and it's very technical and has so much knowledge. I'm going to let her introduce herself  and we are just going to go from there. So Laura.

01:23
Please save me more embarrassment and  go ahead.  My name is Laura Powell. I'm Senior Director of Partnerships for Lattice Flow AI. It's a company, a startup based out of Zurich, Switzerland, uh but with employees worldwide.  And  I have,  I'm bringing kind of an interesting background to this partnerships role. I'm not a traditional business development or sales person. I actually have more technical background.

01:51
I spent many, many years in tech working through different roles in product and engineering, in cybersecurity, privacy, legal risk management. I spent a couple of years building out some responsible AI programs at a large tech organization. So I spent some time in this space. I have lots of thoughts about AI and how you manage AI risk in business. And yeah, really looking forward to our conversation today.

02:20
Yeah. I mean, like you said, you have like this really interesting background and you've had this like very cool story. I remember the first time I met you, um,  you had,  uh, you've been saying like you were at Indeed for a really long time and you essentially built up the privacy program there from like zero to a hundred, right? And now like Indeed is known to have one of those like top tier programs. So you're kind of bringing all of this experience over to this new role. So, um,

02:50
But I do have a kind of a question. Maybe we can look back.  When you were at like, indeed trying to figure out like GDPR compliance, did you ever have like a moment where you were like, we need to like mess this up really bad first before we can actually get things done, right?  Did you ever have like that kind of moment? Yes. um I  it's funny. I'm like thinking, what will I say that my old boss would be like have a heart attack listening to me say? But  yes, absolutely. I mean,

03:19
We were very fortunate and unfortunate at the same time that we didn't have any serious incidents or anything at Indeed. We went through a lot of effort and put a lot of resources into building a really robust program. uh But it was  often,  like those are difficult conversations and not everybody is bought into the whole, like we should do this because it's the right thing. People are like, okay, but what's the bare minimum of the right thing we really need to do?

03:49
Is there were definitely moments where i wished something would go horribly wrong so people could see what happens if. You are putting the right you know controls and mechanisms in place and uh it's weird like it's really weird space to be to be thinking i wish something would go terribly wrong so that way i would have a  easier time arguing for doing the right thing it's it's a very challenging.

04:14
place to be in, but I think anybody who's been in the risk management space, this probably resonates with them. Yeah. And I know that kind of was a very less sealed question, kind of out of nowhere with no context. Like, why would you ask me this? Why is the first question we're talking about why everything is going wrong?  But it does bring me into like kind of what I want to  think about today. uh AI adoption is  super fast and everyone is uh just putting it somewhere.

04:42
And I think that's a really good thing. And it's  very necessary for  business enablement and  honestly empowering like all sorts of business owners to keep up and compete  with companies and organizations they never would have before. At the same time with anything, we adopt a certain amount of risk alongside all this innovation.  And there's kind of been this fun, like,

05:09
paradigm insecurity that I think we've been tracking, right? Where it's like the more convenient something is, the less secure it is, right?  And things are so super convenient right now.  And I don't know that AI uh has become  as insecure as we think it is, but I think that's because we just haven't had a major AI incident yet.  And  we've had lots of talks about

05:33
how things can go bad in AI and  almost every piece of research I've seen is all about theoretical  and, or it's a very niche case where it's really bad. Like even the automated attackers that come out of AI, right? It's super niche, open source.  Some of them are open source, but it's still quite niche and still pretty, we haven't seen it like, I would say in the wild scaled off  affecting organizations widespread, which is kind of like why.

06:00
In some cases, we're laying back a little bit and we're saying, well, let's let, you know, as things happen, we'll improve, but let's not slow down innovation. So  the skeptic in me is like, man, um I really wish we had a good AI incident. And I think that this year it should happen. I don't think it's going to happen this year, but I think it should.  And there's a lot of, there's a lot of reasons why I like think that.

06:26
We're only three weeks into the year ish so there's still plenty of time left for that that to happen  but i think you're right it's i mean obviously nobody wants some  major league catastrophic incident to happen but. It's kind of sad but as humans that's kind of what it takes to get us  to pay attention  and you know i think that  there's a lot of.

06:50
There's been a lot of little incidents out there. um One of my favorites is like a car dealership that had a chat bot on their website and you could negotiate for your purchase price with this chat bot and somebody got the chat bot to agree to a one dollar purchase price for a brand new SUV and the dealership like had to honor it.  but people love stories like that because you're like, yeah, you're rooting for the underdog and the regular.

07:19
You know, Joe off the street got a $1 SUV. Good for him, but it just, it's also a really good illustration of how you can't really anticipate all of the potential ways that things could go wrong. And I think that that's what we'll see with a major incident is it's going to be something that a developer somewhere at one point was like, that's an edge case. We don't need to worry about that. And it just could have potentially catastrophic results. And.

07:48
I don't know, things like that might be what's necessary to get people to really start paying attention and start thinking about, well, what sorts of protections and guardrails and controls do we really have in place with AI and  what's not being done? Yeah. And it's kind of been an interesting, like Tuesday afternoon activity to just like sit in on like a, like an engineering meeting or even a product meeting and be like, Oh, well, like what, are we doing now? Right. Like what's happening today? And.

08:18
hearing about all the reward without any of the risk and just seeing like, oh, this is going to be great. Like we're going to do this. Right. Um, on the playing devil's advocate a little bit though, there's always like this, um, I feel like there's always like the need for AI governance or like, you know, there's always someone in a room who's like, we need this AI governance. Um, but so much of that AI governance piece is still theoretical, right. And it's still stuff that it's like, yes, we should theoretically have this.

08:46
but there haven't been like any real examples of like, okay, well, like this is why you need it. know, things like catastrophic risk for AI are very theoretical where there are scenarios that we cannot imagine that we just haven't seen yet. And there's still just like kind of pipe dreams, you know, you can read more about these risks in depth in a science fiction novel than you can in seeing and practice. So I'm wondering like,

09:13
Not only like what are kind of like, how do we like think about that? But you're kind of like the, like in my opinion, you're like a great leader in the sound board sort of for this space. So like, what do you think about like, like actually the technical work that we need to do and like, are we actually ready to kind of prove some of these like theories and then give people a way to move forward? So my brain is going in.

09:37
like five different directions right now because I have a, there's a lot of different thought trails that I could take off of that setup. Part of me thinks about the fact that uh if we try to zoom out and think about the actual risk that AI systems pose to the average consumer, there's some MIT study, I think it was at MIT.

10:02
that came out a little while back that said something like 95 % of AI systems in development don't actually make it into production. Many companies,  one, either don't trust the system well enough, like they don't feel like they have a handle enough on the risk to be able to deploy it into production,  or  they're not able to get the return on their development investment in the system to be able to deploy it into production. There's too many, um you know, there's too many gaps or there's too many

10:32
like  inaccuracies in the performance or whatever it might be that they're not able to get into production,  mostly for performance reasons, sometimes for risk reasons. So that's one framing of this situation. Then you have that, like what you're talking about with a lot of these risks are extremely esoteric. They're very academic in nature. They're described in all of these like published papers.

10:59
and we don't have these real life incidents to point to. So many people see that and they're like, yeah, conceptually I get the idea of that risk, but it doesn't really seem practical. It doesn't seem like it's something that's gonna happen tomorrow. So I don't really need to be worried about this highly systemic impact to the entire world type of risk that  people might be talking about. And then you have like, you're legal folks.

11:23
who are really worried about this stuff. They really want this like very cut and dried black and white compliance answers from their engineering teams. And that's just not realistic, especially in the world of AI because  so much  of it by nature is very unpredictable. It's very stochastic. It's not like, it's not very deterministic, right? So there's a lot of unpredictability in it. And then you also have like  the angle on this that is just the reality of business operations.

11:54
So many people who are responsible for risk management in an organization don't actually understand the technical aspects of these systems. They don't understand how AI works. They don't understand why an engineer can't guarantee a 99 % accuracy rate 100 % of the time, right? Like you  just, you can't do that by the nature of a lot of these systems, but that's what your risk management, your legal, your compliance folks, that's what they want. And so,

12:23
you end up with all this gray space in between where it's like, what risks do we actually need to care about? What risks are realistic to happen? What's actually preventing us from  getting the return on investment from this system? And what's even a reasonable expectation there? And then ultimately all of this boils down to, how do I really understand these systems? And how do I know what they're supposed to be doing? And how do I know if they're doing what they're supposed to be doing?

12:52
And in my mind, this comes consistently, almost always will come down to technical control and assessment of these systems.  And I'm not here to promote the company that I work for. I'm here like as Laura today, but I'm very fortunate in that I believe this personally, and it is also  part of what we do as an organization. We take that high level kind of regulatory or legal or risk management framework

13:22
that comes in business or legalese, and we translate that into,  how do you actually technically assess an AI system for these aspects? And what kind of assessments do you need to run? And what kind of perturbations do you need to put into the data set? And what kind of scenarios do you need to measure these systems on? And then not just how do you  map those risks into actual technical evaluations, but when you get a blob of data on the other end, how do you turn that

13:51
and interpret that into something meaningful that can feed back into this business management and risk management cycle. Right.  And,  know, like I know you mentioned like, oh man, like I'm here as Laura, I'm not like selling the company, but like, let's be real, right? Right now, this is like this transparency piece, which is essentially what you've spoken about is the hottest thing right now.  Everyone wants to have more transparency in their AI, right?

14:18
This is like everybody wants to know what's happening in the black box. Nobody wants like this like weird Schrodinger's cat scenario where you assume the best, but  on the inside your AI is dead  or something's happening.  It's either one or the other. We don't want to have to take guesses there. We want to be able to understand the outcomes. We want to understand what the inputs are doing.  And we've only seen this,  we've kind of kept moving forward  and it's this weird

14:47
It's this weird tech that that has exist that only exists because we're building in this iterative loop where we build upon a solution that we don't fully understand and we continue to stack layers  of I don't know what it's called. I don't know. It's like, um,  it's like new tech that like, which is weird because it's tech that that we are building on top of that isn't really type that because you know, it's new.  Um,  and we just keep building on top of it. Right. It's almost like  instead of.

15:18
It's like getting interest on credit card debt,  Like that's all it feels like. Exactly. Yeah, it feels like we've got like kind of this like amazing black card, right? And we just keep swiping and there's no limit to it. But we understand that there's this massive interest that we are continuing to accrue or just not making payments or we're not making that full minimum payment, even though that minimum payment takes like 30 years to pay off anyway. It is crazy. Well, we're not here to talk about like, you know, what credit cards do to consumers. Insane. um

15:48
But  in this case, we've got this AI credit card that we just continuously swipe. But, you know,  we can call it a credit card. also kind of think of it as Jenga, right? Like in Jenga, like, you know, you never know which piece is going to be the one that like kind of breaks the tower. Sometimes you go and you push  and this one feels like it's obviously going to make things fall. But the one that was like really just like loose and you saw, thought it was an easy pull.

16:17
that's the one that makes the entire thing collapse. So I'm wondering if there's like something that um you've noticed that has been like this  danger in plain sight, but it's like no one has really thought about it as like, this is the bad part. Like for me, for example, uh in agentic workflows, it's always like how one agent interacts with another agent. And  in these like kind of, uh

16:44
In these kinds of scenarios, when they're interacting, where you have more more agents, there's just so much more variance and you don't know when a response is actually getting weird. um Because that first, that first or that second or that the third thing that they say in this multi-step process, right? um It's not as consistent as like just watching  a single chatbot execute, or you can track step one, step two, step three. Now you've got step one  and now you have another agent beginning their own step one, right?

17:14
and maybe their step one goes and tells another agent to do a step one and they get a response and it changes the whole tree of probabilities uh and changes how uh a response actually looks at the end of the day. And you don't know which ones are influencing uh the other really. No, I think you're right. Agentic systems are definitely a hidden.  I mean, I don't even know that they're that hidden. I feel like it's a very obvious risk area.

17:43
We really don't even have a full handle on a single LLM based system, let alone  what it looks like when you start chaining these LLMs together or you give them really highly uh autonomous decision making. This is one area that like is honestly a personal,  it's like a little pet peeve of mine is that people talk about agentic systems and there's different types of agentic systems, right? So you could have a single LLM system that has

18:10
decision-making autonomy, it's connected to tools, it can take actions on its own. You can have an agentic system that is actually just a string of multiple models or multiple LLMs chained together that maybe none of them have uh agency or only one of them has agency. All of them could have agency. There's many different ways that agentic systems could be built and could exist and all of them  present different risk profiles and take.

18:37
different ways of managing their risks. And I feel like as an industry, we haven't even standardized how we talk about these systems. So if we can't even use the same terminology to refer to them, there's no way we're going to be able to meaningfully govern them and manage the risk associated with them. So I feel like  everybody gets so excited because, I'm a very visionary person too, I get very excited when I think about all the cool stuff you can do with AI, but

19:06
I also feel like, I mean, that stat that I mentioned earlier, so many  AI systems don't even make it into production. We kind of need to like slow our role as an industry and figure out, okay, let's take it step by step. Let's actually get these systems working as independent units. And then maybe we can start chaining them together and figuring out really cool things that we can do in the world. But we're just, everybody's like full steam ahead, just rushing into stuff and not even paying attention to.

19:35
Well, hey, we still haven't figured out step one yet. For those of you heading to RSA this March, you know how chaotic it can be. Honestly, there are so many vendors, there's a ton of booths, all this.  With this year's theme focused on community, we decided to slow things down a bit and give the community a space  to take a break  and  maybe join us for a cup of tea or two. Stop by booth S2051  and you'll see what I mean. Thanks. See you there.  It's a very weird time to be very excited about things. Like I know oh

20:05
Like  an example, right? Like there are so many really good and interesting AI models out here. And this is actually, we're to get back into an incident where this is kind of like one of the, one of those reputational things that we've seen. All these models are really good  and,  or they are getting better, I would say.  And I know for me, I'm like a huge user of like AI assistance in coding.  Like to the point where I've got like a whole.

20:34
I've got a whole team now of agents that I like work with. So I do like a lot of the architecture and design and then I like, you know, set all the standards. And then I have a little agent  product manager who kind of helps me shape the product vision. And then we go from there and I talked to my, my AI architect and we're like, Oh, Hey, like, what do you think about this? And he's like, no, no, you're dumb. Like, of course you are human. you know, it's  so we go and we have these back and forth and eventually we'll get to something we can all agree on and make something silly.

21:04
Recently, I've started playing around with a couple of different models and  one of them, uh one of them specifically for the image generation piece got really, really weird. um And I don't know if I can, ah, I can say that again. So recently, I'm sure you've heard about it, but GROP got into this really weird issue, right? Where I think it's kind of like one of these like prime examples of an AI incident, but I don't know that we're making a big deal about it because of how much

21:34
I don't want to say good about how much like innovation is there in the product by itself, right? So there's so much inference and power behind Grok that  there's a little bit of validation, not from like me personally saying it's validating, right? Or validating it, but there's validation from the community that says we are actually going to allow these bad things to happen because it needs more data or it needs...

22:01
to be able to generate these things so that it can generate more things, right? So  it feels like this is a trial and error in production.  And we are like watching it happen. uh even like the compliance side where,  you know, in California last year, there was a case where uh an individual had been generating  explicit images of people without their consent, right?

22:29
Again, illegal, not allowed to happen. In this case, we have something very similar happening, right? um Maybe not as explicit, right? Still like clothes are still, you but still revealing is the problem.  And so we've got this kind of like incident that's happening, but it's not really an incident, if that makes sense. So like, what is like really the... I don't know if there's actually a solution there, you know what I mean?

22:56
Like we're kind of allowing innovation to happen, but we aren't really taking some of these like ethics and principles that we need in the, in lieu of, well, we can, we are going to do better one day, right? Like loosen the safeguards, let more things happen, let more experimentation happen live, and then figure out what we need to do. It's kind of scary and it's difficult because the, you stifle or try to control

23:26
the creative process that limitation can often be very. uh But it can be detrimental to the output right but at the same time the this isn't just somebody making art this is  can have real impact on real people in very real and  negative and terrible ways and especially as a parent i think about.

23:54
The potential risk to children and people who don't who don't even know they're at risk of the type of stuff that exists out there that can be done with AI.  think there has kind of always been this. False dichotomy of  risk versus innovation and I just I call BS on it. I don't think it's real. It's like I like to use the. em I like to use the  metaphor of rock climbing. You know you can.

24:23
You can go farther, go higher, take riskier climbs if you have safety gear than if you don't. It's not stifling you from being able to to climb higher, right? Yeah, it may take you a little bit longer, but it also means you can go farther. And I think the same is true for technical innovation. Yeah,  there's a little bit of, well, in some cases, there's a lot of overhead to risk management. And yeah, you're going to have to put in some safeguards and implement some controls that you

24:53
won't feel like doing and it's gonna feel like it's slowing you down but really in the long run it ends up like not just being ethically the right thing but also helping me know the company the organization developer whoever helps them get farther.  And give you structure to what you're doing in a way that. Can actually measure your progress you actually know what you're doing where you're going how far you're getting. em Otherwise it's just it's this the kind of organic.

25:22
uncontrolled growth that you can get is not necessarily going to go in the right direction. It's weird. Like you say, you say the rock climbing thing and you think about the different types of climbers. I know nothing about rock climbing, but I do know that there are some of those people that there's like, yeah, the people that like go and they climb without gear. And then there's like these  crazy set of people that like just like stick your hand in and they just  like, they do that crazy thing. Right. And those are like,

25:50
the fastest people and you always look at them or like I've seen the videos. I'm like, oh my gosh, that's so cool. But I'd be horrified to do that. Right. And it feels like that's where we're at. Right. Where um we've kind of just we said, hey, go and like free climb  and  let's let's see what happens,  which is a scary thing because the happens is a little bit scary. But I'm wondering if we can kind of enable this kind of thing  like free climbing a little bit safer.

26:18
So if we think about like the 80-20 principle, right? Where, you you get 80 % away and 20 % is whatever the other part is. I'm sure I'm absolutely butchering it, right? But in this case, it feels like transparency is how we get to enablement. And the hard part is actually enabling transparency. But then what's like the 80-20 there, right? Are we kind of looking at the...

26:46
incident and the outcomes of the incident as like an 80-20? Where like if we do 20 % of this work, we present 80 % of these catastrophic outcomes, right?  Or are we more thinking about it as like a, well, if we do 80 % of this work, we'll protect against these 20 catastrophic outcomes, right? Like it kind of feels like we haven't really figured out which side we're working on. um Where I think in at least some of the organization problem or

27:15
the organizations I've spoken to, it seems like all of this work is only going to close such a small gap. So like, what is like kind of the, what do you think that, well, where are we actually at in this kind of sec section? Well, I love that you brought up the, 80 20 rule. Anyone who's ever been on any of my teams before will hear this and they're going to be like, Oh, Laura's super happy right now. Cause this is like definitely a,

27:44
you know, principle I live and die by. But um I think, so there's kind of two angles on this.  One is  for an organization,  as really realistically, we're mostly talking about businesses here. And for a business that's meant to make money, which most of them are,  you know, we can set nonprofits aside for a little bit, but most for-profit organizations, they,

28:12
you know sometimes they will want to do the right  thing sometimes they will feel like they have to do the right thing but all of its gonna cost money its all gonna cost resources to operationalize and realistically to get them to do it and to stick with it it has to be scalable so we have to come up with solutions that are automatable that are scalable that are easy for people to enact otherwise its just not gonna happen it just won't and so

28:40
We have to find ways to  do this effectively  and  to be able to do it at scale.  not just like, yeah, you can  run a ton of technical evaluations. Great. If you can figure out how to do that at scale and cost effectively, good for you. But you're also going to end up with a bunch of data and like,  you don't just evaluate these systems and  measure their performance and put guardrails on them. Like you don't just do it for fun, right? You do it because

29:08
you get a set of data on the other end of this effort that lets you make informed decisions as a business. you know, I  know I've mentioned this a few times now, but it's an important statistic about businesses aren't getting these systems into production for the vast majority.  And as a business,  not getting something into production, there's an opportunity cost there, right? So if you can put controls and assessments in place in a way that allow you to

29:38
release something into production and achieve some kind of return on that, like that's really valuable, but you're going to have to do it at scale. You're going to have to do it in a way that your organization is actually able to make meaningful business decisions on the other side of these  assessments, whatever data you're getting out of, you know,  out of looking at your systems and controlling them. And I think, you know,  I am a firm believer in this kind of 80, 20, you know, and it does depend on how you interpret it.

30:07
You know, 20 % of the effort for 80 % of the results or which I think in risk management is probably the interpretation you want to take. But the thing that really worries me about that is. You know. AI systems are built off of pattern recognition and they are built off of making sense of the data sets that are fed into them and inherently.

30:36
outliers in the data are just noise and the system  generally just rejects it  and doesn't really do much of anything with it. And you think about the fact that  most  really catastrophic events are probably likely to be triggered in some way by something that a developer considered an edge case. It was just an outlier in the data, the system didn't know how to handle it, it hadn't ever encountered that before.

31:03
And so it does something horribly wrong because it that wasn't part of its set of patterns right and. That if you're if you're taking that 8020 or or 2080 approach. Is there there's a little bit like that's a little bit scary right and so then you have to start thinking about okay  what's the what's the actual 20 % of effort for 80 % of the coverage for that small section and so do you you.

31:31
Think about building in certain capabilities for failure modes within your AI systems. if you,  telling the system as part of its instructions, literally, if you encounter something that is outside of like this set of procedural guidelines, then just reject it, like, or pass it to a human or whatever. Like this is what stuff like human in the loop is actually meant for. uh But a lot of,  a lot of people,

31:57
who are responsible for developing these systems are not necessarily thinking about failure modes from the beginning. They're more thinking about operational success performance modes. Yeah. So that makes a lot of sense. And I'll come back to human in the loop. I have some thoughts about human in the loop, but I also want, you mentioned something really interesting about like that, the 80-20 kind of split and like when we're considering solutions in that space, right?

32:23
let's just think about it a little more, sit in it. And specifically when it comes to an organization and composition and maybe like where the responsibility kind of lies, right? Again, 2026, yeah, we need an AI incident. But I also think that governance, compliance, transparency, all of these things is actually what's going to be the competitive advantage for a lot of these organizations, right? Trust your AI is like this, just the easiest thing to think about whenever you see like...

32:51
Okay, well, like how do we trust  a company? How do we trust a product? If everyone's using AI, now it's trust your AI, right?  AI is always meant to speak confidently and be confident. So  we are going to believe it, but can we actually trust it, right? The belief and trust here is totally different. So you have this great experience. You've worked cross-functionally  with C-suite, with very um high level decision makers. So like,

33:20
When we look at this like competitive AI governance advantage, where in an organization does this like,  what, does it come down to? What's like the fundamental problem? Um, is it more of like a technology problem or we just don't have access to the things that we need or like, um, we just don't have like a company doesn't have it. Um, is it a people problem where like we need to upskill folks where there's just not enough people? Um, is it an incentive problem where maybe there's just like, we do the risk reward and it's just not worth doing some of this stuff.

33:50
Or is it something maybe entirely different? think, you know, it may be a little bit of all of the above. um Every one of the things that you mentioned were a part of the challenge that I experienced specifically in building out the privacy program at Indeed, which I think is a very analogous uh scenario to what we're talking about now with AI trust. You know, when, when privacy kind of  was this kind of nascent area in

34:19
in the technology industry. It was like,  you're hey, Laura, you're the privacy person, go solve this problem. And it's like, okay, well, to solve this problem and do what I'm actually supposed to do in this role, I have to touch literally every area of the business from marketing to HR to sales to product and engineering. It like everything, everybody had some level of responsibility when it came to privacy, even if it was just.

34:46
you know somebody taking a privacy training course and that was like all they did there still a little bit of responsibility scattered all throughout the organization. And it required by and from senior leaders that require dedication of resources in many different areas a lot of resources that people didn't want to give up and required that really. In depth translation of legal and compliance requirements into.

35:13
actionable operational plans and product and engineering requirements. And there's not a lot of people who can bridge that space. Like that's a really challenging skill gap. So I do think some of it  is upskilling. I do think some of it is helping people to understand that there's responsibilities spread across the entirety of an organization when it comes to AI. But I also think a big part of it is just

35:40
efficiency and tooling. Like I'm a firm believer that most humans are good humans. And most humans will want to do the right thing. But you pretty much have to make it really easy for them. And you have to make it really straightforward and kind of tell them what to do sometimes. And then they'll do it. em But that that that getting to what is the easy thing? What do they have to do? How do we operationalize it? That translation is is really a unique skill set. And I think

36:09
If organizations don't have that already in-house and they don't have the people they think that they should or could upskill for that, they should absolutely be seeking resources outside.  most people don't really wanna go to like one of the big four consulting companies.  Some organizations can't  afford an engagement like that, but there's lots of like boutique firms out there that can  do this or even just.

36:36
hiring uh an individual person as a consultant to come in and work with a team. From  what I've seen in the industry so far, the organizations that have been the most successful are the ones who are really willing to commit to that cross-functional approach.  It's not just one person's responsibility. It's not just one department's responsibility.  It really takes a whole village  of  people across organizations to really make it successful. Cool. So

37:05
I do want to kind of go back a little bit. uh As someone who's in the Silicon Valley bubble, I've found that  my stance and like kind of my like viewpoints on AI are totally different than other folks. So you're based out of Colorado. I don't know kind of what the, what it looks like there.  You know, there's a lot of, again, there's a lot of different variants in even in California, but.

37:32
What's it like in Colorado? I know I was in New York and people are still very skeptical about AI. uh How does it feel like on your side? It is interesting to see like the different geographical regions and how they're responding to AI. I think Colorado is really interesting because  on the eastern side,  on the eastern slope, on the eastern side of the Rocky Mountains, you've got Denver and Boulder, which you have a lot of tech there. People  are generally...

37:59
pretty excited and pretty forward looking in those communities. I live in Southwest Colorado, so I'm on the Western slope. uh There's not really a whole lot of tech out here.  I live in a town of like 20,000 people and there's a local college that uh actually has a whole program around AI and building out AI capabilities in this area. uh What's really interesting,

38:25
that I've really noticed since living out here is, know, in Southwest Colorado, you have a lot of exposure to, uh there's a lot of Native American reservations around here. So you have a lot of exposure to really highly rural populations.  And  you really start to see the spread  of populations that are exposed to AI, that their data is available to AI systems, that they even have access to AI systems.

38:54
And you know the the em organization at the local community college is really trying to bridge that gap. The town I live in has this weird pull of you have a lot of highly technical people in this town who work remotely. So  they're all kind of involved in that space at the local college and. It's been really interesting to watch how that organization is trying to.

39:21
build up capabilities for more rural, less exposed communities  that otherwise don't really have a voice in AI and who maybe in a lot of cases don't even really know anything about AI. That brings up something kind of interesting. uh when  we think about AI, right, like in these communities that are kind of like less doubt, um it's not just like,  it's weird. Most of the country doesn't even have like reliable internet, like most of the country, right?

39:50
which immediately disconnects them from this massive wave of tech that's happening.  And even more so, uh just basic things like broadband access isn't even available in some places, right?  So when you think about it, like, who's actually being, like, excluded from,  like, AI and, the ability to use it?  And,  you know, there's this, like, uh

40:16
AI is fed by data. Like this is the gasoline that feeds the engine, right? So there are so many folks that are just not included in this piece and they are just not able to contribute to this massive data set. um Especially now that  we're reportedly, we are reportedly have trained on all data and existence. And I know I've already said this before in another episode, but I'm going to say it again, cause we're out of data, we're out. And now we're like,

40:45
telling people, we want to pay you money to come and train AI, right? But like, I don't think these folks are being engaged to go and train AI. So we're just training on a data set of people that likely already have access to AI, that are already using AI, and I bet you are being influenced by AI and are now training AI  based on these kinds of things. So we've created this like weird reinforcement loop that I don't necessarily think is  moving us forward in  the best of ways. um

41:15
but like, so like, what's actually happening now, right? Like who's actually being excluded and like, go from there. Yeah, no, it's like the world's biggest echo chamber. It's really interesting to think about, you know, I mentioned earlier that data that is outliers gets largely ignored by these systems. And, you know, as we are in this,

41:43
feedback loop and as we continue to  source data from the most well represented populations,  not only are we just like  the data that exists for maybe the underrepresented populations, it could be there, we could seek it out, we could find it, but it's not gonna  be a huge percentage of the overall set of data that we're feeding into these systems. And then,

42:09
because of the way these systems work, because of the speed at which they're working and evolving, it becomes a compounding problem. And that gap of representation is just going to get bigger and bigger and bigger. And that's gonna start happening faster and faster and faster. And we really need to come up with some creative ways to get access to some of these underrepresented populations because that's really, like everybody wants to talk about, let's make, let's.

42:36
let's increase the access and increase the ability for these systems to keep innovating, to keep doing what they're doing because it lets us build better and bigger and faster and more incredible systems. Okay, if you actually wanna build more incredible systems, diversify the data set that you're using to feed into them. Because as it stands right now, what's going into these systems is not representative of the real world. It's representative  of pockets, quite frankly, of privilege.

43:06
And  I think there's such an opportunity to change, like AI is going to drive everything in the future. Like, let's not pretend that that's not how it's gonna be. We  are like on the precipice  of either something really, really meaningful or something really, really terrible because we have this, we have an opportunity right now to fix this problem and it's still, that gap can still be closed. But the longer we wait to close the gap, the longer we wait to find these

43:35
creative ways to diversify the data going into these systems, the worse it's going to get and the harder it's going to be to reconcile that in the future. There's this really uncomfortable reality that I don't think we address often.  We've seen it in a couple of different fields, but underserved communities and people who typically don't have access, they usually end up becoming the beta testers for these types of things, like newer waves of innovation and AI systems, right? Not through voluntary requests, but...

44:03
they can get tested without consent  and  they sometimes don't get the benefit. So um I did some creeping  and  I see that you work with a food van. uh So I think something that's really interesting, I do a lot of  food-based volunteering there as well.  One of the things I sometimes think about is

44:31
there's a lot of food insecure folks that we have.  And  I don't, and I am usually on the hands-on side in terms of like putting food into boxes. I don't actually  work on the side that like maybe determines or what systems like are used to actually figure out who gets what, But like,  what, I wonder like how like these AI systems are actually gonna be like either maybe helping or hurting like these folks, not just in how like we distribute food, but if we even go back further.

45:00
in terms of like just serving communities, right? Like at what point do AI systems affect people in communities that are just like, oh, hey, like,  um,  healthcare access, right? Like you actually can't get this, or this is actually the only type of medication you have access to.  Um, when does it start determining things for people?  Um, and like, when is it being done as a test? Like, uh, there's enough times where it's inaccurate or makes a mistake.

45:29
you know, are these underserved communities accidentally being exposed to, uh, kinda like as a test bed. I know I did a lot of jumping there. There's a lot of mental math you actually do to get to some of these conclusions, but it's an interesting thing to think about. Um, food banks are one place that I think about, but I also think of like, you know,  I was in the city a couple of weeks ago and I was, there's like, you have this salvation army kind of thing, right? So like, how are they using AI to like help increase their services and then.

45:58
If so, like they're likely testing on people that have never given consent or have had to go through a consent workflow. Right. And at the end of the day, these are not only the folks we're missing data from, but they are now  kind of subjected to AI and all these different risks as well. Yeah. A lot of people don't think about the fact that this has already been happening for  like a long time, a decade. I mean, if you think about traditional machine learning systems and using them for

46:27
you know, like predictive modeling, you take something like a major food distributor,  and  they've been using traditional machine learning to predict where they're going to have their highest sales and therefore their highest inventory needs of certain types of products in certain areas. And so then you have problems like the fact that lower socioeconomic status areas traditionally have had poorer diets, poorer access to like fresh foods and produce and everything.

46:56
And then you think about the fact that these organizations, these models are typically  like optimized in order to make money. So they're going to look at the past and they're going to say, oh, given the distribution of food sales in this area, these are the products we think are the most likely to be successful in the future. And so those are the products that are going to be the ones that are ordered. And those are the products that are going to be the ones shipped out to those different communities. And so then you just create this like systemic problem of

47:26
continually  underserving these communities with poor food choices because the systems are built optimized to make money. They're not tuned to make better food choices for those people. And so this kind of thing has already been happening for a really long time and it's just gonna get worse with AI. It's just gonna get faster. It's just gonna get more efficient. And until people start...

47:53
thinking about risk, not just from a risk to the business perspective, but from a perspective of like, what's the risk to the human on the other end of this? What's the ethical thing to do in this situation? We're not gonna make better choices. And you you mentioned um like healthcare, decisions being made in healthcare. Well, like.

48:14
different people have very different genetic makeup. We have very different genetic predispositions. We have very different responses to different medical treatments. And so if these systems are built off of data sets that aren't actually representative of the person who the decision is being made for, it's not gonna make good recommendations. It's not gonna make good predictions. It's not gonna help the doctor make a better choice. And these are the kinds of problems that will just continue to compound.

48:44
Unless we start optimizing these systems differently, we start thinking about them differently. We start realizing the fact that what they do is replacing a lot of human choice where AI just doesn't have ethics in the same way that a human does. And we have to account for that. Last question, right? So I want you to imagine something. Maybe this is a little bit of sci-fi. It's 2027 and there has been a catastrophic

49:12
AI failure in governance and something that makes headlines and it costs a ton of money and it hurts a lot of people. And we think about this future and what do you think that we could have done differently, maybe in 2025 or 2026, something very preventable that we're doing now that would stop this horrible 2027 from occurring? Yeah, I think it's educating people.

49:38
people don't understand what they're doing. Like everybody loves chat GBT and they wanna go talk to it about absolutely anything and everything in the world. But it's this classic uh product angle of like the more information you give the system, the better it can make your life easier, right? The more convenient it gets. And chat GBT is like the  mama of all of that, right? Like it will suck information in.

50:06
like a crazy sponge and I think people don't understand what they're doing when they're sitting there chatting to their their like AI chat bot. They have no idea how much information they're actually giving it and the kind of like profiling that could be happening on the back end for something like that. So I really think educating people, teaching them how to be aware of privacy and security considerations online, helping people understand the way AI systems actually work even at a very high level.

50:35
I really think the more educated we are  as  as an entire group of humans,  the better we can make decisions and the more informed we can be when these companies are just doing all this crazy stuff behind the scenes that most people don't understand. Yeah, no, I think we're totally aligned. mean, uh so mine, I was thinking like also education, but from the side of overreliance, I think overreliance is actually going to be like a massive, massive issue.

51:04
We've already seen like cognitive decline come out because of AI and people relying on these systems. We've seen a lot of, again, there's already a ton of misinformation out there. AI has made it much easier to make misinformation seem legitimate. And I think we're also seeing a de-skilling in people. So folks who are relying on AI a ton are losing some like skills like critical thinking and reasoning, you know.

51:31
I'm sure that one day when voice at AI is cheap enough, we're going to start losing the ability to read. I think reading comprehension is going to get hit really hard, but we never know. I don't know. I just saw something the other day that was like, better start eating healthy. Your future doctor is using chat GPT to pass medical school. Yeah. You know, it's like, never know. But yeah, I think overreliance is going to cause a huge issue and people are building systems they don't necessarily understand.

52:00
and they are  building products that are essentially  houses of cards.  So I think there's a lot of interesting things. I am excited for 2027 though. I don't know what's going to happen in 2027, but  hopefully GTA 6 is out before then  because I've been waiting for years.  Video game reference, I'm sure.  someone will understand it and someone will laugh really hard about it. uh My cat won't. um

52:28
That's what we got for today. So Laura, thank you so much.  Where can people find you? Are you going to be at conferences? Do you've got anything that you're doing that's going to be really cool? I don't have any specific conferences planned, but people can find me on LinkedIn. yeah, that's probably the easiest spot. um I'm not really a social media person, so I'm really not on anything other than LinkedIn.  Very cool.

52:57
Thank you for having me. This was really fun. And if you do get a sci-fi episode, let me know. I'm totally If we don't do it here, I will get a sci-fi episode done somewhere. Someone will listen to us rant about sci-fi. I'm super excited. I'm more excited about sci-fi than I am about security. um I don't care if we include that. I want everybody to know that I'm a sci-fi nerd and I'd rather do more sci-fi than security. All right, Laura, thank you so much for your time. uh It great talking to you and hope to see you soon. If this episode helped cut through the noise,  like or subscribe so you don't miss what's next.

53:27
Thanks for spending time with us.  Until next time,  stay curious.