AI or Not

E048 – AI or Not – Evan Benjamin and Pamela Isom

Season 2 Episode 48

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

Think your team is “AI ready”? We take a clear-eyed look at why most organizations overestimate their capabilities and how to move from buzzwords to measurable maturity. With returning guest Evan Benjamin—AI governance practitioner and tireless “AI nuggets” educator—we unpack what readiness really means for enterprises, agencies, startups, and solo builders, and why context matters more than any one-size framework.

We connect proven disciplines like CMMI and ITIL to the AI era, showing how capability-based maturity, lightweight governance, and resilient practices can accelerate outcomes without drowning teams in bureaucracy. From POCs inside a single business unit to repeatable processes across the org, we share practical steps for building systems that are safe, auditable, and effective. We also dive into the messy realities of today’s tools: agentic AI browsers, prompt injection, tracking pixels, and data retention defaults that quietly expand your risk surface.

Privacy and security get the spotlight. We walk through how to read terms of service and privacy policies, manage consent, and control data sharing with ad networks and brokers. Then we layer on hardening steps: MFA and passkeys, VPNs, mobile EDR, disk encryption, safe travel habits on public Wi‑Fi, and decluttering apps that over-collect. You’ll hear why fairness testing is not the same as bias checks, why evaluations must happen at multiple lifecycle stages, and why it pays to red team not just models, but the evaluations and governance themselves.

We close with a call to action: co-create a practical AI maturity model aligned to sector and use case, and stand up a train‑the‑trainer effort so consultants, attorneys, and project leads can deliver consistent, effective education. If you’re ready to swap hype for real capability—and build AI that lasts—hit play, share this with your team, and leave a review with the one control you plan to implement this week.

[00:00] Pamela Isom: This podcast is for informational purposes only.

[00:27] Personal views and opinions expressed by our podcast guests are their own and not legal advice.

[00:35] Neither health, tax, nor professional nor official statements by their organizations.

[00:42] Guest views may not be those of the host.

[00:51] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journeys.

[01:06] So we have an exciting returning guest with us today, Evan Benjamin.

[01:12] Evan is a Project Delivery Senior consultant.

[01:16] He's a colleague who creates AI nuggets and pursues AI governance.

[01:23] So that's how we met. That's what I like about him and that's why he's joining us again today to talk about some pressing issues that we want to discuss with the listeners.

[01:35] So, Evan,

[01:36] welcome to AI Or Not.

[01:38] Evan Benjamin: Yes, thank you, thank you.

[01:41] Just quick introduction.

[01:43] I've been working in IT for many, many years and I've been doing everything from just regular system administration to computer forensics to cloud management and I've been working with legal attorneys,

[01:56] helping them with legal tech,

[01:57] helping them review documents on premise in the cloud.

[02:01] So I worked for the government, for service providers, for corporations,

[02:07] and I was all legal tech until I ventured into AI gen AI and then I slowly taught myself agentic AI and then I got my Chief AI Officer designation and, and my very hard AIGP certification, which was very tough.

[02:27] And I started getting all these certifications and then I started learning,

[02:31] enrolled in an AI auditing program which Pamela, you're very familiar with.

[02:36] So I'm trying to complete my AI auditing studies and I'm helping small businesses and and big companies with compliance for EU and gdp, AI act and gdpr and also with security and privacy for AI systems.

[02:55] So that's me in a nutshell. I write on LinkedIn. Please follow me. And I write what's called AI Nuggets and every day I take an article and I analyze it and then I target this for attorneys,

[03:08] AI developers and project managers to help them in their practice. So that's me in a nutshell.

[03:16] Pamela Isom: I am excited to have you here and I like the nuggets that you post on LinkedIn and I just like having conversations with you, period. So thanks for being here.

[03:31] I want to quickly go into,

[03:34] now that you've told us about yourself, a little bit more,

[03:37] let's talk about AI readiness. So the question I have for you and the thing that I've been pondering is are we ready as an organization for AI and are we getting better in the readiness process.

[03:52] So what's your perspective? I know that we talked about the different countries and that it may vary depending on the countries,

[04:03] but you have some really unique perspectives when it comes to AI readiness. I'd like to hear more about that.

[04:10] Evan Benjamin: Okay.

[04:11] When we say AI ready, we have to segregate this for large companies, small companies,

[04:18] solopreneurs,

[04:19] and government.

[04:21] I learned that we have to segregate this for public sector and private sector.

[04:26] So we cannot adopt one definition for AI readiness. The mistake that a lot of consultants are making is that we walk with one framework and we walk into federal and non federal and small companies and we say, follow this framework and if you achieve this score, you're AI ready.

[04:45] And someone,

[04:47] Pamela, someone needs to develop a scale for public sector, private sector,

[04:52] large companies,

[04:53] small companies,

[04:55] service providers.

[04:57] Someone needs to come up with a scale from 0 to 100.

[05:01] So a country I was telling you about,

[05:03] a country like Bulgaria, did this, and they came up with a scale for public sector organizations in Bulgaria. And they said between 0 to 100, are you ready? And if you're like 0 to 20,

[05:15] well, you got a lot of work to do. And if you're 20 to 50,

[05:18] if you're 50 to 80.

[05:22] So they looked at most of the public sector companies and they said most of them are like 49. A lot of them fell 49 out of 150 because they were constrained by infrastructure.

[05:32] They were limited by their infrastructure.

[05:35] So I was trying to like, before that. My fear is that in this country,

[05:40] if I walked into a government agency, what would they say?

[05:44] Right?

[05:45] And, and they have their own.

[05:48] They, they got FedRamp rules. They got all these rules. So even if I say they're AI ready, FedRamp will say no,

[05:55] or some other regulation will say no.

[05:58] So if I walk into a large company and give them this scale, what will they say? So we need a maturity model. I'm big on maturity models because when I did legal tech, we had a maturity model.

[06:10] We had our own maturity model and call for ediscovery. And we would walk into a company and say,

[06:17] I'm going to ask you these questions. And based on how you answer it, I'm going to tell you if you're mature or not.

[06:23] How many people do you think couldn't answer those questions, simple questions on preservation and legal hold and all this stuff, and they thought they were ready.

[06:33] So if I walked in and asked questions,

[06:38] real questions, Pamela, real questions on how often do you test? At what stage do you test? How many iterations do you do.

[06:49] How many people have true AI literacy in your company?

[06:54] If I ask everyone these questions and compile the score for them,

[06:59] I'm afraid most of us are going to be in the low 40s. Low 40s or 50s.

[07:05] There might be a few smaller companies who are ready and they're up at the 80.

[07:10] I don't think there's anyone who's going to be 90 and above.

[07:13] And that's just my personal opinion.

[07:16] So people have a. People overestimate the level of readiness they have. They overestimate their level of readiness and they're afraid to see what they're missing.

[07:27] So that's what I'm afraid of. I don't mean to take away from anyone out there who's truly on a path.

[07:34] Who's truly on a path,

[07:36] but I'm just saying there's a lot that you're missing.

[07:39] There's a lot that you're missing under the hood.

[07:42] And if someone. If I'm driving a car and the car runs okay and someone opens the hood and shows me that there's something going on,

[07:51] I'm going to worry.

[07:52] I don't care what my car looks like on the outside.

[07:56] Someone uncovered a defect underneath the hood.

[07:59] And that's what I'm doing as an AI auditor. I'm uncovering defects that people don't want to hear. So what do you think about that?

[08:08] Pamela Isom: I think that that's what the auditing is all about. And I think that's why we should embrace those of us that are looking into cultivating AI auditing. Because what we're trying to do is make the innovations, as I said earlier, sustaining, right?

[08:25] We want sustaining innovations. We don't want solutions that are good until you look under the hood.

[08:33] Evan Benjamin: Right? See, you just came up with a slogan.

[08:38] Pamela Isom: Yeah, perfect. Until you look under the hood, right? That's not perfect.

[08:43] So we don't want that. And we always talk about how we want solutions that are resilient, right? We want resilient innovations.

[08:50] Well, how are you going to get there? And that's why in this day and time, governance and the responsibility of AI auditing and those types of the lookout for AI safety should really be embraced and the mindset should be,

[09:07] as I said, sustaining innovations, because it's all about innovations.

[09:11] Where I think we mess up is we have this rush to bring products to market,

[09:17] this rush to bring products to bear.

[09:21] This race that's going on,

[09:23] and I'm not so sure what, why.

[09:26] It's not a race to the top.

[09:29] It's not. It's a race to the bottom.

[09:32] Evan Benjamin: Right.

[09:32] Pamela Isom: If these solutions don't work, if they're not sustaining, if they're not valuable, but only for a minute.

[09:41] Evan Benjamin: Right.

[09:42] Pamela Isom: What's the value? Right. What's the use?

[09:45] And so that's my take on it. So I agree with you that we need maturity models.

[09:53] And as a part of those maturity models, readiness should be there, but readiness should be broken out into many different components.

[09:59] Evan Benjamin: Right.

[10:03] Pamela, we don't have an AI maturity.

[10:06] So can we agree that in the future AI readiness should be called AI maturity? Or do you think.

[10:14] I don't like buzzwords, but do you think that AI readiness should be called AI maturity?

[10:20] Pamela Isom: I don't think it will hurt. When I think about,

[10:22] is an organization ready for AI and what level of readiness are they?

[10:29] The first thing that comes to mind is maturity.

[10:31] So I would agree with that.

[10:34] If you're.

[10:35] In fact, just go back to the. What was it? The CMMI days.

[10:39] Evan Benjamin: Yes, yes.

[10:41] Pamela Isom: We could look at that as like a baseline and then cultivate that. So we could start there or look at. I don't want a solution to here, but I don't know why we wouldn't look at something like that as the start of the readiness.

[10:56] Because that's what it came about for. Right. Readiness for it, Organizational readiness.

[11:01] Same type of stuff, right?

[11:02] Evan Benjamin: Yep.

[11:03] Pamela Isom: The question becomes, are you ready for what elements of AI? Are you ready to use AI for helping you with government contracting,

[11:14] ready to use AI for helping you design or helping you think through responses to RFPs.

[11:22] So organizations have to look at their use cases and then start to look at what type of maturity models to put in place or associate use cases with the maturity levels.

[11:37] So certain type of maturity level. Certain type of use cases fall within certain levels of maturity.

[11:44] Right?

[11:44] Evan Benjamin: Right.

[11:45] But can I add that to what you said?

[11:49] Does a whole organization have to be ready, or can one division or BU be ready? Because if I'm walking into a multinational corporation,

[11:58] I don't want the whole organization to be ready. I just. Maybe there's just one division testing something.

[12:04] I want them to be ready.

[12:06] I want them to plc. I want them to do the quick win, and then the rest of the organization can follow their lead.

[12:14] What's wrong with that?

[12:15] Pamela Isom: There's nothing wrong with that. That's the way cmmi, the old CMMI works. Right. It was a specific organization.

[12:22] And I'm not meaning, like it's an element of an organization or a subset of a domain,

[12:29] and then they create a process,

[12:31] and then that process becomes repeatable across the organization.

[12:36] So you always would start small, you wouldn't try to, don't go big. It's definitely not with AI that. Oh no, yeah, I love that.

[12:43] Evan Benjamin: I love that. And you know,

[12:45] Pamela, you and I think alike because cmmi, the C stood for capability and what does AI have?

[12:54] So we gotta, gotta take the capability and apply the maturity of that capability. Oh my goodness. I just preach myself happy. That's what we got to do.

[13:04] Pamela Isom: A lot of times we need to go back to. Go back to our roots.

[13:08] Evan Benjamin: Yes,

[13:09] yes. And speaking of roots, Pamela, you remember IT service management when they had ITIL and IT service management and they would manage everything according to change management,

[13:22] release management,

[13:24] incident management.

[13:26] So I asked someone this the other day and they didn't know how to answer this.

[13:31] Why can't we do ITSM or ITIL for AI?

[13:36] And they said that would be too much and that would slow things down. But just like you said, going back to cmmi,

[13:45] I think we got to take those old frameworks and I think we got to modify that for AI. That's what I think.

[13:51] Pamela Isom: So those are like baselines. So we use those as the foundations and then you look at ways to cultivate them to fit the times.

[14:01] There's nothing wrong with that.

[14:02] You remember back in the ITIL days, which is still going on, right? Itl?

[14:07] Evan Benjamin: Yes, yes.

[14:08] Pamela Isom: In organizations there's just so much.

[14:11] Right. You had to go through this, you had to go through that.

[14:14] There is so much and I think we had a tendency to get lost in the so much.

[14:22] And so there's always opportunities to streamline and still focus on quality.

[14:31] And there's nothing wrong with that because again I tell itsm,

[14:36] service management, IT service management, those are all still needed because that's the infrastructure behind AI and some of the solutions that we want to integrate.

[14:48] Right,

[14:49] I definitely agree with that. I just think that at the time that it was, there was so much, it became kind of burdensome, which is why I always push for lightweight governance.

[14:59] I don't mean cut corners, I mean go after efficiency and not a lot of bureaucracy.

[15:07] So this is the time to do so.

[15:09] Yeah. So I wanted to talk real quick about AI browsers. So we had discussed AI browsers. There's a couple things that's on my mind. But browsers came out a lot because the browsers are changing now.

[15:24] So Atlas has come out.

[15:27] Comment is out there by perplexity.

[15:30] We don't know what some of the others are, but they're AI based browsers and you have Some concerns with those,

[15:39] so I do as well. But tell me what your concerns are about agentic based browsers.

[15:48] Evan Benjamin: Okay, that's a good question because you will see a lot of people write on LinkedIn about attack vectors, about people being hacked using agentic browsers. So ask yourself,

[15:59] why do we need an agentic browser? Because these are tools.

[16:04] What do agents need to do your stuff? They need tools.

[16:10] So if you don't use an agentic browser, agents can still use tools. But now if you prompt for something and you're looking for a product,

[16:21] all the big labs want to ties,

[16:23] they want to use LLMs for products,

[16:26] right?

[16:27] So OpenAI anthropic perplexity, they want you doing everything through the LLM and they want to get your data.

[16:36] So they don't want me just using ChatGPT for research and Google for web,

[16:41] they want me doing everything in ChatGPT or perplexity.

[16:46] But if I do that,

[16:47] they've already got my chats,

[16:50] they've already got my information because I forgot to turn off the memory, I forgot to turn off privacy in my LLM.

[17:00] Right?

[17:01] They gave me the chance to opt out and I didn't.

[17:05] And if you forget to do that, Claude is really going to get you. Because Claude said if you don't opt out by a certain date,

[17:14] instead of keeping stuff for 30 days, they're going to keep your stuff for five years.

[17:19] Just because you forgot to toggle off your privacy.

[17:24] People don't even toggle off their privacy in Google Chrome.

[17:28] Do you think that people are going to take time out to toggle off the privacy in ChatGPT or Claude?

[17:36] No.

[17:37] So I'm trying to train people on that and show them privacy settings.

[17:42] But you know, Pamela, they get upset because they say this is too much,

[17:47] just let me enjoy my LLM, let me enjoy my agents. And they think that if you toggle off too many things, the agent's not going to be able to do anything.

[17:58] That's what they think.

[18:00] But your agent can still do something. If you toggle off retention,

[18:06] your agent can still do something.

[18:09] So how do we tell people to protect themselves and not worry about what the agent can do? Right.

[18:17] That goes back to your innovation versus safety versus security.

[18:21] So if I use Atlas and I give it a task and it goes through and it opens up my browser and it books me a ticket and does whatever I need it to do,

[18:34] it's capturing all sorts of information about me.

[18:37] But we found out that it's easy to do prompt injection through those browsers.

[18:43] So People are doing research where ATLAS is being prompt, injected,

[18:49] and when the answer comes back into my chat,

[18:54] I've been prompt, injected,

[18:56] and I'm bringing that back into my chat. And now I take that and use that for another chat,

[19:02] and guess what happens? You amplify that. So for me, Pamela, you get it. A lot of people get it. For me to explain this to people,

[19:11] it's falling on a lot of closed ears because they think it's too technical. They think I'm trying to chill them.

[19:19] They think I'm trying to chill their activity.

[19:22] I'm trying to chill their agent. And I'm not.

[19:26] I'm trying to prevent you from being followed. Because what's happening is in the browser,

[19:33] they can put a little pixel that retargets you. It's like a cookie.

[19:38] So all of a sudden you're typing something in ChatGPT because you feel friendly towards ChatGPT, and ChatGPT is liking you.

[19:46] They're saying, pamela, how can I help you? Hey, Pamela, remember the last time we talked,

[19:51] you were looking at some shoes.

[19:53] Did you like those shoes?

[19:55] So all of a sudden you're being retargeted,

[19:58] and if you go to another site,

[20:01] you're going to see those shoes follow you because of what you typed in chatgpt and what you did in the Atlas.

[20:08] So I'm just worried that a lot of people are uploading pictures of their family.

[20:15] I posted last week, there is a custom GPT that asks you to upload your X ray.

[20:22] And someone out there is going to upload their doctor's report, their medical report, their X ray, and say, chatgpt, what do you think?

[20:31] Use that custom GPT. Use Atlas. Go do me some research.

[20:35] I'm afraid that people are going to give away too much information.

[20:41] So I'm not trying to chill, I'm trying to protect. There's a big difference. So,

[20:46] again, I'm going to need your help to get the word out. But I hope someone's hearing this and they're saying,

[20:52] we can help them.

[20:53] Go ahead and create your custom GPT,

[20:56] create your agents, but know what's happening in the background, because prompt injection is. Is real.

[21:02] Pamela Isom: Yeah. And it's not just a. It's not just technical jargon.

[21:06] Evan Benjamin: No,

[21:07] no. It's going to happen.

[21:09] A lot of people say,

[21:10] oh, my neighbor got malware on her computer. Ain't going to happen to me. Guess what?

[21:16] It happens and it's going to happen.

[21:18] So we're trying. We're trying to protect you and keep you from harm that's all we're doing.

[21:22] Pamela Isom: I think that that is one of the things that we need to pay attention to when it comes to AIC safety. And I think that ties to AI readiness.

[21:30] Are you aware of some of these vulnerabilities and are you mindful of what you can do to mitigate some of the associated risks?

[21:38] So that should be a part of that whole readiness playbook.

[21:42] So that was the part that we were going to talk about when it comes to the browsers, because that is introducing a whole new set of attack vectors.

[21:51] But then the other thing I wanted to talk about is this here.

[21:54] Evan Benjamin: Yeah, a whole new set of problems.

[21:57] Pamela Isom: Yeah, a whole new set of problems. But then, remember when the situation with Meta. Right. And so there was an announcement that came out recently that Meta is going to update its privacy policy to begin selling targeted ads across its family of social media apps based on data collected from consumer interactions.

[22:21] So that just kind of sounds convenient,

[22:25] but also concerning.

[22:28] Evan Benjamin: And you said that that. That introduces a whole other set of problems.

[22:32] Pamela Isom: Right, right. So that sounds convenient. Right. But it's also concerning because how's it going to target. Who is it going to target advertisements to and where does it stop?

[22:44] So when you talked about the things that you mentioned earlier and you mentioned targeted advertisements, like if I choose a pair of shoes when I'm at one browser and then I go elsewhere, and then there's that advertisement about those shoes, because I'm being followed like the traditional cookies, Right.

[23:04] So I'm being followed.

[23:06] Evan Benjamin: Right.

[23:07] Pamela Isom: How are they deciding? First of all, they shouldn't be using our information.

[23:12] But second of all, how are they going to use that information and to target who?

[23:18] Because I don't think they're going to just limit it to me.

[23:21] So how are they using that information and how do we protect ourselves? Again,

[23:27] so they're concerned with data privacy,

[23:30] understanding the terms and conditions,

[23:33] understanding what information we're allowing them to use. But I'm also concerned that whether we consent or not, because this is about consent,

[23:42] they're doing it anyway.

[23:44] And so the only thing I know to do in that particular case is bring awareness to the fact that this is going on. Did you know?

[23:51] So you can be careful when you're using some of these tools like WhatsApp and different things,

[23:57] because maybe it sounds convenient, but do you understand the full picture as far as how your information is going to be utilized?

[24:06] And so what I did was I went out and.

[24:09] And I started looking at like, where are we finding these types of risks and these vulnerabilities I saw different big players having issues in this space.

[24:21] I saw unauthorized data collection with South Korea, and There was a $15 million suit I saw in Kenya.

[24:30] There was something similar.

[24:33] And there were failures of representation.

[24:38] Right. So Western culture was misrepresenting the Kenya culture.

[24:44] And it caused some issues. Right, it caused significant issues. So there ended up being billions, I think, in suits. Right, right. And Apple has agreed to pay 95 million to resolve claims that Siri recorded private conversations.

[25:00] So I think we want to make people aware that this is going on and that this is our information.

[25:09] So be very steward, like when it comes to how we are using the tools,

[25:15] because the convenience that it seems to be providing may turn out to be a significant inconvenience.

[25:23] And so that's why I wanted to share that. And then I went and I said, okay, so what are some of the things, you know, I'm always talking about cybersecurity.

[25:31] So what are some of the things we can do? Well, we can use strong passwords,

[25:34] still. Go back to that. Use the multi factor authentication.

[25:39] Force these companies to help us understand how they're going to use the data.

[25:45] Know where the data brokers are, where are they, what are they doing?

[25:49] Ask them how and what information do they have on us? They may not reveal it, but they're supposed to disable some things, like you mentioned, Right. Disable the ad identifier on your phone,

[26:02] turn those blockers on.

[26:04] And then one that I was looking at that was like, we should declutter the apps on our mobile devices.

[26:12] Evan Benjamin: Yes.

[26:13] Pamela Isom: Those apps hold information.

[26:16] So I think it's important for us to take a look at what some of these organizations are that are out there to help with these types of issues.

[26:26] And then also look at what we can do to and be very prudent about understanding the terms of service.

[26:36] And what does this mean? And you know, for the longest,

[26:39] these terms of the terms of service are so complicated, the data usage agreements are. Can be so complicated until you just accept it.

[26:48] That's not acceptable today.

[26:51] Evan Benjamin: No. And got to remember, Pamela, that most people confuse privacy policy with terms of service,

[26:58] and they're two different kinds of documents. So if you're not used to reading privacy policy, you're not going to understand the terms of Service.

[27:07] And for OpenAI,

[27:09] I read both.

[27:11] I opened up the privacy policy, forced myself to read it, an attorney taught me how to read it,

[27:17] and I opened up the terms of service and I read it.

[27:22] And they're the most boring document you could ever read. But you Gotta know what they're promising and what they're not promising.

[27:29] Pamela Isom: Yes, I agree.

[27:31] Evan Benjamin: So AI literacy should include reading those and understanding them.

[27:36] Pamela Isom: So maybe part of the AI literacy that we're talking about should include,

[27:41] and even the maturity should include organizations having a.

[27:47] Part of the orientation in the AI program is understanding the terms of conditions, the terms of service,

[27:55] understanding the privacy agreements, the data use agreements.

[27:59] Right. Understanding what those are and what those mean.

[28:03] Evan Benjamin: Yeah.

[28:03] And if they don't want to read it, have a quick training session,

[28:07] get everyone in a room, explain it, and then they sign a document that says, yep,

[28:13] we heard it, we read it.

[28:15] Because if they don't do that.

[28:17] We had a recent hackathon where we had to open up the OpenAI terms of service. And do you know, Pamela, that OpenAI says basically, your outputs belong to you, you're responsible for your outputs.

[28:30] But if someone is harmed downstream, think about this.

[28:35] If someone is harmed downstream,

[28:38] does OpenAI still have any liability? Is there upstream liability?

[28:43] Yes.

[28:45] Because you don't know if that harm was hidden all along.

[28:50] That harm could have started with OpenAI and then swum downstream all the way to the,

[28:59] to the deployer who might be in the eu and someone in the EU was harmed. And we can go all the way up the value chain and say that defect was latent, it was hidden.

[29:15] And OpenAI will still be liable even though their terms of service says, pamela, it's on you. Your output is your output.

[29:23] And they can't say that.

[29:26] So that's what I'm talking about.

[29:28] If you don't read it, you won't know if they're liable.

[29:34] Pamela Isom: I think that we are at that day and time where we are going to have to pay closer attention and just focus on being more data stewards. Right. More stewards of.

[29:47] And this is still about data.

[29:49] It's really, it's AI, but this is about data and being good stewards of our data and understanding the provenance and understanding the lineage,

[29:59] understanding the governance,

[30:01] and then also knowing our rights.

[30:07] Because I think we have a tendency to think that we don't have rights. And these policies,

[30:14] these agreements,

[30:17] a lot of those, if we just accept it, we give up our rights.

[30:21] So I.

[30:22] The reason why I wanted to talk to you and have this discussion is to remind us and help reiterate the point that these things are.

[30:33] We empower ourselves. If we pay attention to what's going on,

[30:37] if we pay attention to these practices and know what's happening and get a better understanding of how to protect ourselves, so we don't have all the answers because legal is still catching up.

[30:51] But there are some things that we can do. Some of the things that we rattled off, like turning off some of the options within the browsers and especially on our devices.

[31:01] And the mobile devices are so convenient,

[31:04] which is why the capability is easily integrated into the mobile devices.

[31:09] But maybe we should think twice,

[31:12] right?

[31:12] Evan Benjamin: Can I remind people that on your mobile device, if you hold up your mobile device,

[31:17] you need a special protection called endpoint.

[31:20] Endpoint.

[31:22] Endpoint detection. Like you need different protection on your mobile phone. Like they have something called bitdefender.

[31:32] How many people download Bitdefender on their phone? They're going to protect their laptop, they're going to put endpoint protection on their laptop,

[31:42] but they don't put the same endpoint protection on the mobile.

[31:46] So download Bitdefender on your phone.

[31:51] If you're going to use Atlas or anything on your phone,

[31:54] is your phone protected?

[31:57] We're going to walk around on our phone with ChatGPT and the Atlas browser.

[32:02] We're not all sitting at our desk where the laptops are protected. Right.

[32:08] So even when we travel, Pamela, think about this.

[32:11] I'm traveling. I'm in an airport.

[32:14] How do I connect?

[32:15] What network am I using at an airport?

[32:19] A bad one.

[32:21] I'm using the airport WI fi and I'm using that to connect to Atlas.

[32:27] Can you think of anything worse?

[32:29] Pamela Isom: No.

[32:30] Evan Benjamin: So I'm not using a vpn. I'm not. And people need to be security conscious with every device.

[32:38] And I see people at the airport walking around with three laptops, three iPhones,

[32:44] and they just, they bragging, they're saying, look how busy I am. And they're not protected.

[32:51] So I worry about them. I worry.

[32:54] What do you think the answer is, Pamela? I mean,

[32:57] how do we get people to protect all their devices? And that's not even a. That's just common sense.

[33:04] Right?

[33:05] Pamela Isom: I just think it's awareness. I mean,

[33:07] what we can do is we can help them to understand that these,

[33:12] these challenges are something that we have to. To address.

[33:18] And all of us, not just our cybersecurity leaders,

[33:21] not just our chief AI officers, all of us should take responsibility and do the things that we can do to protect our devices, to protect ourselves. Not just our devices, but to protect ourselves, because this is about protecting ourselves.

[33:36] If your information is on your mobile devices, then of course you want to protect your devices. That's just it, period.

[33:43] The more information you make available about yourself on your device, the more you want to protect that device.

[33:50] That means lock it down.

[33:52] So recently I was ensuring in my organization that we get some of the tools installed to protect, to encrypt our devices, right? To encrypt our hard drives on top of the virus protections and VPNs and all this stuff that we have going on,

[34:09] right?

[34:10] In case the devices get stolen, get lost, et cetera, et cetera.

[34:15] Same thing with the mobile devices. You want to be sure that for the phones that you have these types of protections on those devices because they can get swiped, stolen, lost.

[34:30] And so have you done that? And even some of the devices that we're backing up to,

[34:35] we're backing up to the cloud. But really,

[34:38] who has the passwords,

[34:40] Right?

[34:41] Who has the passwords besides you?

[34:43] What are you doing? So are you thinking beyond that? Right? So yeah, I'm backing it up, but am I thinking beyond that? Because I back up my mobile device, my phone,

[34:53] to a cloud environment,

[34:55] but who has my password besides me,

[34:58] Right.

[34:58] Evan Benjamin: Have you ever tried restoring.

[35:00] People know how to backup, but they don't know how to restore. So in a crisis, would you even know how to restore all that?

[35:07] Pamela Isom: All those are practices that we have to put into place and then we have to test it out. So, you know, I'm big on red teaming, so my red teamer will come along and he'll.

[35:17] He did it recently, right? Came along and tried to. I get him to run it against my environment and run the test and do things we do, we practice that.

[35:28] Do we practice,

[35:29] practice recovery like what you said? Those are things that we can do. And if we look at it as empowering ourselves, that may help,

[35:38] right?

[35:39] Do that. You are empowering yourself and empowering your organization, which is what you want,

[35:46] right?

[35:46] So even.

[35:47] Evan Benjamin: And do you think,

[35:48] Pamela, that people talk about bias and everything else like that insecurity,

[35:54] but what about fairness?

[35:57] Like a lot of products don't test for fairness, which is different. I think they have something called fairness testing,

[36:07] but that doesn't get covered when you do security testing or bias testing. So the reason that people aren't AI ready is that they don't know.

[36:15] They just say the word bias and they think they're ready, but they don't. They haven't tested for all the types of bias and they haven't tested for fairness, which is different than bias.

[36:27] So there's a whole lot that they're missing because no one has educated them. So I think that it all begins with AI literacy. And the AI literacy programs are a little bit weak.

[36:37] So I think that's the first thing,

[36:40] we gotta strengthen our AI literacy if people want to be really AI ready. I think there's a lot of people out there doing AI literacy and we need more. We need more.

[36:51] So.

[36:52] Pamela Isom: Yeah, I agree, I agree. And literacy is a. Is a broad one too. I say when it comes to literacy, just get started, get going, right? Because there's like so many pieces of literacy,

[37:04] so many opportunities,

[37:06] right? And when it comes to AI literacy. So just do it, get going, right? Because as you start to work from a literacy perspective, you'll dig into one area and then you'll realize, okay, so now it's time to move into this area.

[37:19] Evan Benjamin: And.

[37:19] Pamela Isom: And it'll just. It's kind of like dominoes, right? It'll just start to.

[37:23] One will lead to another.

[37:24] If you too much at one time, then it's overwhelming. But if we just get started,

[37:30] you'll see that there are different areas of literacy that we need and it actually becomes delightful to move into the next area.

[37:39] That's what I find with my customers, right? So with my clients, it's like we navigate from one area to the next.

[37:46] The thing is,

[37:48] get going, right? So that's what I tell my clients because that's what we have to do now.

[37:55] So this has been good. I think that there's one other tool called Privacy Badger,

[38:01] which is there to block online trackers. Oh, yeah,

[38:06] there's that tool. And that might be something that we can consider and remind our listeners to look into, to install and include on their devices.

[38:18] I think we should look at the AI lifecycle and understand the AI lifecycle. I always do a diagram, has the AI lifecycle and then where are the attack vectors,

[38:31] where are the vulnerabilities? And it's all throughout that life cycle from just all throughout, right? So it's good for us from a literacy perspective to even understand the AI lifecycle.

[38:42] Mine, I draw it as a serpent from the standpoint of business design and development all the way through prompt engineering and getting a response back.

[38:56] So that's what I do. From a lifecycle perspective, that's a literacy that we need.

[39:02] And then overlay that with where are the potential vulnerabilities and then where are the opportunities to red team?

[39:10] So that's what I do to ensure that we are building, again, resilient testing practices.

[39:18] So that's something that I recommend. But what I was going to say is we're to the place where we're about to wrap up here,

[39:26] and I wanted to know if you have anything else that you want to share with us,

[39:32] with the listeners and then also wrap up with your words of wisdom or call to action or final takeaway.

[39:40] Evan Benjamin: Okay, this was great. And I always honored to speak to you and tell you all the new stuff that's coming out and who knows what we're going to be talking about the next time we meet.

[39:51] It's just going to be just amazing.

[39:53] But I do want to say, Pamela, that there's something called AIE valves,

[39:59] and a lot of people are now looking at AI evals, which are evaluation of your AI system at different.

[40:08] It's not the same as testing,

[40:10] but it's related to testing. You have to evaluate your system at different parts of the life cycle.

[40:17] And believe it or not,

[40:19] if you ask two people to do an AI evaluation and they come up with a some good numbers,

[40:26] that doesn't mean that it's always going to be good at the next stage.

[40:30] Here's something new for you, Pamela. Someone actually mentioned take my eval,

[40:36] take my A, E, A I eval and red team that they didn't say red team the model and see how. See how weird that sounds. They said the red team just my evaluation.

[40:52] So the. So the red teamers came in and they said,

[40:55] let's not look at Claude right now.

[40:58] Let's look at Evan's evaluation of Claude.

[41:00] Pamela Isom: Yeah.

[41:01] Evan Benjamin: Did Evan do a good evaluation or did Evan leave out? Yeah, something. So can you believe that you can red team anything now?

[41:12] Pamela Isom: You should be able to red team anything. So I don't know if you remember, but I'm. I always talk about red team, the governance.

[41:19] What is your governance process?

[41:21] How effective is your governance? Well, how are you going to know if you don't red team it? And that makes me think of what you just described, which is more than governance.

[41:29] But to red team your eval makes a whole lot of sense to me.

[41:33] Evan Benjamin: There you go.

[41:34] So my takeaway is that let's take a common concept like red team and let's apply it to everything.

[41:43] Let's red team our literacy. Let's red team our governance.

[41:46] Let's red team our eval. Let's red team everything and see how effective we are. Because the key word today is are you effective with your literacy? Are you effective with your training?

[41:59] Are you effective? Because if you're not, the EU AI act says that you need to show that you're being effective with it. Otherwise you got to change your methods.

[42:09] So this is not just an AI governance thing. This is an EU AI.

[42:14] That was my takeaway that you needed to red team everything. But we need to develop a new model of AI maturity.

[42:23] That's my takeaway and call to action and I'm calling all my AI consultants to even come up and help develop a new standard for that.

[42:32] Pamela Isom: So your takeaway is that we need to establish a level AI maturity model and also different areas of red teaming?

[42:48] Evan Benjamin: Yes.

[42:49] Pamela Isom: Okay, what can we do and what should we do next?

[42:54] Evan Benjamin: I think we need more train the trainer sessions because all of us are trying to go out and change the world for all these companies.

[43:03] I wish we would have a train to Trainer session where consultants or attorneys or project managers.

[43:10] I want to develop a train to trainer system where you cannot go out and talk to a company unless you. You've gone through a train. You know the train to trainer.

[43:21] Exactly. Because maybe you're missing something and we're relying too much on just regular training out there, which is very good.

[43:27] But now we need Train to Trainer. We need Train to Trainer for AI and people like you and me and, you know, other good people out there should get together and come up with a new train to Trainer for AI.

[43:40] Pamela Isom: Maybe we'll do that. I'll be back in touch with you. Maybe we will do something like that.

[43:45] Evan Benjamin: Yes.

[43:45] Pamela Isom: Well, it's been good talking to you.

[43:49] I appreciate the conversation today. I appreciate the words of wisdom as well and more so the call to action,

[43:57] because that's what we need to do. So I appreciate the call to action, but this has just been really great. You have a lot of insights again.

[44:04] And so I hope that we'll be able to take it away and those that are listening will be able to take this and think more about safety. Right? Think about safety, think about self empowerment,

[44:16] think beyond cybersecurity and how we are doing things to protect each other,

[44:23] protect our tools that we are using.

[44:27] And just wiser, right? We're just wiser about navigating in this ecosystem in which we live.

[44:34] So I really appreciate you being here.