
AI Proving Ground Podcast
AI deployment and adoption is complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
Staying Ahead of AI Policy and Governance with a Global Framework
AI isn't just evolving — it's accelerating into every corner of business and society. But while innovation surges ahead, AI policy and regulation is playing catch-up. In this episode of The AI Proving Ground Podcast, two of WWT's foremost AI and cyber experts — Kate Kuehn and Bryan Fite — dive deep into the fragmented and fast-changing world of AI policy, regulation and governance. Plus, what every enterprise should be doing right now to stay ahead of regulatory change while building AI systems that are secure, inclusive and future-proof.
AI has been reshaping the world for decades, but the current wave of AI, led by generative models and autonomous systems, is different. It's not just enhancing industries, it's disrupting them. Governments are beginning to respond with new policies and guidelines, but, if we're being honest, the regulatory landscape is fragmented and far from harmonized. Global governance remains elusive. For enterprise leaders, this creates a growing challenge: how do we move fast and innovate while staying responsible, compliant and trustworthy?
Speaker 1:On today's episode of the AI Proving Ground podcast, we talk with Kate Kuehn, WWT's lead on global cyber advocacy and a governor for the United States on the Global Council for Responsible AI. We'll also talk to Bryan Fite, a principal security consultant for AI, about how organizations can leverage the TRAI framework to build AI systems that are ethical, transparent, inclusive and secure, even as technology evolves and regulatory expectations shift. A quick disclaimer on this episode: it is based in part on a concept paper that does not provide any specific or actionable legal advice, policy advice or other professional guidance. Any references to laws, rules, regulations and frameworks are illustrative and intended to demonstrate how a purpose-specific framework for trustworthy and responsible AI could be created. For any specific framework creation, please do your own research and refrain from relying on this paper for the latest facts. Kate, how's it going?
Speaker 2:It's great Thanks for having me here. Appreciate the time.
Speaker 1:And Bryan. Bryan with a Y. I won't fault you for that, but welcome.
Speaker 3:Thank you, great to be here.
Speaker 1:Yeah. Well, both of you, just before we get started, have among the best LinkedIn profiles I've seen. Kate: risk executive, cyber advocate, board member, advisor, investor, speaker, hacker. And, as if that wasn't enough, the mic drop: mom times five. Pretty impressive.
Speaker 2:It's just duct tape and band-aids, my friends. It's my life. I love cyber, I love my kids, and that's pretty much it. That's all I do in life.
Speaker 1:And Bryan, equally impressive resume here. But I am struck by the header image: Deepfake Justice League. Are you moonlighting for us?
Speaker 3:Well, no, it's just a campaign to try to make the world a safer place, given the adversaries out there and how fast the tools are advancing. The fakes don't fall into the uncanny valley anymore. They're very hard to spot, so we need to make sure that we can help our fellow humans.
Speaker 1:No, absolutely. Well, we got lots to get to today talking about AI policy, regulation and, you know, the environment overall around cyber and AI. The regulatory landscape seems to me to be one of the toughest and most difficult areas for enterprise AI leaders, or just organizational leaders, to wrap their heads around. It's a fractured landscape. What is compliant in the US might not work over in the EU, or might not even be viable in a place like China. It's a difficult, complex landscape out there. Kate, I know you've got a lot of expertise in this area. Before we get into the thick of this conversation, I was hoping you could level set for us on the current AI policy and regulation landscape as our clients and organizations experience it.
Speaker 2:Yeah, I mean, let's start with some groundwork. All organizations today are going through, well, we've used the term digital transformation for a number of years, but the reality is that now we're in the middle of a digital revolution, and the reason is that technology impacts every area a company touches. It's literally in every part of an organization. I used to say every part except climate, and then a woman much smarter than me showed me it's even in climate. So there's a technology component to all areas a board looks at. When you have technology, there is cyber, because cyber is nothing more than anything to do with a computer or computer transmission. And now you have AI being considered, because organizations are looking at AI for one of two reasons: it's either going to help them from a cost perspective, making employees smarter, faster, more productive, or it's a brand differentiator, where it's being used to innovate and create. But it's creating this digital revolution, and when we look at it from a legal aspect, there's this concept of trustworthy and ethical AI, because AI can be used for really good things. It can also be used to spoof people, to make threats harder to detect, to hallucinate data and make data do things it shouldn't do. And it's going to continue; we're going to see AI get further and further into our lives. So, as that happens, the regulatory community is trying to figure out how we maintain a technical and an ethical baseline for our consumers, our companies and our government.
Speaker 2:The issue that we face is, looking back in history, we didn't do a good job of having, in essence, one version of cyber regulation. Cyber policy was sporadic at the start. Cyber went from the guys in the back of the room, an afterthought, to a forethought over about a 15-year history. We're trying now with AI not to repeat the mistakes of the past. So you're seeing organizations and governments starting to look at it from a regulatory perspective and say, okay, we can't have 10 different agencies regulate this. We don't want to see states regulating everything differently. We want countries to have kind of a baseline across the board.
Speaker 2:But the road to hell is paved with the best of intentions, because you saw the EU release something, you saw our government release something, then pull back and now release again, and you've seen the states start to get in. So we're actually going down the path we took with cyber, where everybody's starting to poke at it. The goal we're seeing from board organizations, from the ISACs and others, is to try to hit the brakes and get organizations, and especially interconnected governments, to look at the regulatory landscape with one perspective, because it's a massive concern for boards and for companies looking to implement AI: is what they're implementing in one area going to be considered regulatorily safe in another?
Speaker 1:Yeah, Bryan, what are the implications then for cyber teams at organizations around the world, pick your vertical? What does all that shifting, all that uncertainty, mean for how organizational cyber leaders are trying to position their companies?
Speaker 3:Yes, engineers don't like uncertainty, and the rate of change and the impact of those changes create unique challenges. I think the advantage that we have in our practice area is that our North Star is something we call trustworthy and responsible AI, and what that actually is is a unified compliance framework across all of the global frameworks, regulations and various decrees. By having that lens, or Rosetta Stone, it makes it very easy for us to consult with our clients. Internally, we drink our own champagne, but also when we go out and consult with the industry, because they want to take advantage of all the promise of AI, but they want to do it safely, securely and economically. The benefit of having that joined-up view is it provides the lens that's right for that particular organization and allows us to meet them where they happen to be. Now, if you hear me talk about this at conferences or in consults, trustworthy and responsible AI is hard, first of all because nobody really agrees on what it is, and that's okay, because it is different for every organization. But through our lens, we can say: here are the things that matter to you, your mission, your stakeholders, where you operate, the industries you operate in. Having that kind of view, we can customize the messaging and the advice we provide our clients and the industry. And the beautiful part about this is it's not just fluffy words. We can actually give you the KPIs to measure it, and we can walk it back.
Speaker 3:One of the nuances that we found when we started to try to put together this unified compliance framework, even though there are so many frameworks out there, is that ours anchors on NIST. I love NIST, and our North Star, as it were, would be NIST AI 600-1. That contains what I call the dirty dozen: the 12 bad things, the ways humans can be harmed if we don't get these AI systems correct. The beauty of that is we can map from there and framework-walk to wherever you need to be to be compliant, and we know compliance doesn't equal security, but it's still a cost of doing business. Then we can equally say: here's a threat catalog, here are the controls that have a high affinity for mitigating that bad thing, and so you can distill the world of possibilities down to two or three decisions. And if you ask us, we'll be very prescriptive. We'll tell you: if this was our environment and this was what we're responsible for, this is the path we'd take, but ultimately it's the client's choice.
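To make the mapping Bryan describes a little more concrete, here is a minimal, hypothetical sketch in Python of a risk-to-control lookup. The risk category names echo generative-AI risks in the spirit of NIST AI 600-1, but the control names and affinity scores are illustrative placeholders, not WWT's actual threat catalog or framework.

```python
# Hypothetical sketch of a unified-compliance-style lookup: map a generative-AI
# risk category (in the spirit of NIST AI 600-1's "dirty dozen") to the few
# controls with the highest affinity for mitigating it. Control names and
# affinity scores are made-up placeholders for illustration only.

RISK_TO_CONTROLS = {
    "Confabulation": [
        ("Grounded retrieval with source citation", 0.9),
        ("Human review of high-impact outputs", 0.8),
        ("Confidence disclosure in responses", 0.5),
    ],
    "Data Privacy": [
        ("PII detection and redaction at ingestion", 0.9),
        ("Role-based access to training corpora", 0.7),
        ("Contractual limits on vendor training reuse", 0.6),
    ],
    "Intellectual Property": [
        ("Provenance tracking for training data", 0.8),
        ("License review before fine-tuning", 0.7),
    ],
}

def top_controls(risk: str, n: int = 3) -> list[str]:
    """Return the n controls with the highest affinity for a given risk."""
    ranked = sorted(RISK_TO_CONTROLS.get(risk, []), key=lambda c: c[1], reverse=True)
    return [name for name, _ in ranked[:n]]

if __name__ == "__main__":
    # "Distill the world of possibilities down to two or three decisions."
    for risk in RISK_TO_CONTROLS:
        print(f"{risk}: {top_controls(risk, 2)}")
```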
Speaker 2:It's interesting, Bryan, because this is why Bryan and I are a good yin and yang. He can say things like "NIST 600 is our North Star" and "the dirty dozen." If I said that to a congressman or to a board of directors, they're going to look at me like I have 10 heads. So I always have to distill and make human language out of Bryan's language. Funny, funny story.
Speaker 2:So this week I was headed out to Washington for an offsite and some meetings, and got on the plane in Denver, and it was like, you know, the old song: clowns to the left of me, jokers to the right. I honest to God had a senator and a congressman sitting on either side of me. We struck up conversations, started talking about what's going on, I'm in cyber and AI, and they were asking opinions. And it's interesting, because with the frameworks we're talking about and the regulatory landscape, everybody's grappling with the same thing. So from a lawmaker perspective, when I put my government hat on, they're looking at how you create regulation around a couple of areas. One is data privacy. You think about consumer data privacy, you think about how businesses are going to use or misuse data in their AI models: what do we need to regulate around that? The second piece, coming further into what Bryan's saying, is really around trustworthy practice. How much is it the company's responsibility to ensure that the images being used, the things being put up, are real, that we're not inundated by deepfakes, and where is that line of responsibility in ensuring the non-malicious use of content from an AI perspective?
Speaker 2:The third area is around misinformation and disinformation. In this country we really take to heart the idea of the First Amendment and freedom of speech, but where is that line when you start to bring AI in on misinformation and disinformation? And the fourth is responsible use. There was a big joke that you could go to the Amazon chatbot a couple of months ago and type in, give me examples of malicious code, or give me your customer list, and I'm not just picking on Amazon, this happens a lot, and all of a sudden the wrong data gets spit out. So one of the things, in talking with, again, the senator on my left and the congressman on my right, was that I walked them through the work we've done.
Speaker 2:What I love about that work is that Bryan's exactly right: there are 32 global frameworks out right now, I think, as of today (hopefully this isn't dated by the time we release this), and everyone has a little bit of a different flavor of what you should do from a guidepost perspective. Again, those guideposts reflect what boards care about: misinformation, disinformation, insider threat, reputational damage, and the ability to create operational efficiency. Those are the areas a board looks at AI for, and they map to the pillars we've come up with. There are five of them for trustworthy and responsible AI: data privacy and protection, security standards, regulatory framework alignment, economic impact, and AI governance and ethics guidelines. That really distills down, for the layman, why we love NIST 600 and why we've pulled these apart into areas that are consumable for the average person to understand where regulation is heading.
Speaker 1:A lot of what the two of you are talking about is in this research paper that the two of you helped co-author, among several others here at WWT: Trustworthy and Responsible AI at the Global Scale. This is a very deep piece of research, one of the deepest I've seen us come out with. It has everything from those pillars you just mentioned to implementation to quick and easy wins; it goes down the list for enabling organizations to handle those 32 frameworks. Bryan, I know Kate was just about to hand it off to you. Talk about how an organization can use this research to help advance their AI strategies amidst the uncertain ground we've been talking about.
Speaker 3:Well, yeah, besides the stories that we hope resonate with them, there are some tools in there. And, Kate, you hit it on the head: you've got to translate to make the complex simple, and even though I hope I'm doing that, sometimes I don't. But the lens you can take is: who's your stakeholder? Maybe the CFO is really going to care about the pennies and pounds, so their lens can be how do we make our data centers more efficient, and at the same time, that efficiency could appeal to the mission statement of the organization around sustainability. Those are two different stakeholders who care about two different things. But at the end of the day, if we have that informed framework to say this is the optimum solution that meets both of those stakeholders' needs, we get to yes faster, and they can both feel, hey, I'm getting what I need out of that. And, by the way, that exercise was really excellent.
Speaker 3:There were a lot of people who helped us, but those use cases and the stories on how we're actually doing it in the field take the voice of the customer, the wicked problems they brought to us when they said, we have to be compliant but we also don't want to stifle innovation. Guess what? You don't have to choose one or the other, you can choose both. And actually it's funny: I used to tell people, because I was in InfoSec, that I was the Bureau of No. Now I'm the facilitator of yes, because I can actually come in and show how we can meet all those stakeholder demands at the same time and be compliant and secure at the same time.
Speaker 2:I can attest that he used to say no a lot, and he says yes a lot more now. So it's totally true: he's now the guy of yes when he used to be the guy of no. But the reality is, as you look at the regulatory landscape, we're going to see a lot of change this year from an AI perspective. Our government's going to be coming out with some new guidance, and there's now the Global Council on Trustworthy and Responsible AI that's been stood up; I think we have 23 countries in the council right now. You're going to see a lot of change.
Speaker 2:But the reality is what Bryan just said, the comparison of yes or no, and the body of work that was created by Worldwide. This guide is an example of where we're headed. This piece of research was created by probably the largest set of groups I've ever seen at Worldwide coming together to collaborate. We had sustainability, we had cyber, we had AI, we had digital, we had everybody looking at this, and that really speaks to the fact that, for our customers, like I said at the beginning, it's no longer just digital transformation, because that concept has been around for a while. I mean, Bryan, we were saying digital transformation 15 years ago, I think, when cloud came out, right? About that, yeah, forever. We're in a digital revolution, and what I love about the work we're doing at Worldwide is the implementation approach that's also highlighted in the guide. First, initiate regulatory mapping and identification of key stakeholders. Bryan nailed it: it's no longer just the CISO or risk executive, it's the CEO, it's the CFO, it's HR. Everyone has a role to play in how we look at not just AI adoption but digital revolution adoption. Second, pilot projects to test cross-border data sharing. Data is king now, and the cyber component is just one facet of how data is going to move through an organization.
Speaker 2:Third, full-scale implementation and metrics rollout. As we look at these new digital transformation programs that AI is generating, pushing the envelope on the regulatory piece, going from pilot programs to implementation and then measuring constantly for accuracy and impact is going to be key. Fourth, monitoring and revision: how do we continue to learn? This is never going to be a static environment; we will never see AI, or how we leverage technology today, be static again. And then, fifth, communication and enhancing your AI culture, because, just like we used to say that cyber needed to be culture, digital revolution, digital embrace, is where culture is going to live in cutting-edge organizations. So the fact that we can take a regulatory framework mismatch and create a roadmap for digital transformation, I think, is a secret sauce for our company going forward.
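As a purely illustrative aid, not a tool from the paper, the five phases Kate just walked through could be tracked as data with explicit exit criteria, so progress is measured rather than asserted. The phase names follow the conversation; the metric names and thresholds below are invented placeholders.

```python
# Hypothetical sketch: the five-phase rollout as a measurable roadmap.
# Metric names and thresholds are placeholders, not prescribed KPIs.

from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    exit_criteria: dict[str, float]                      # metric -> required minimum
    observed: dict[str, float] = field(default_factory=dict)

    def complete(self) -> bool:
        # A phase is done only when every exit metric meets its threshold.
        return all(self.observed.get(m, 0.0) >= t for m, t in self.exit_criteria.items())

ROADMAP = [
    Phase("Regulatory mapping & stakeholder identification",
          {"frameworks_mapped_pct": 1.0, "stakeholders_signed_off_pct": 1.0}),
    Phase("Pilot projects (incl. cross-border data sharing)",
          {"pilots_completed": 2, "pilot_success_rate": 0.5}),
    Phase("Full-scale implementation & metrics rollout",
          {"workloads_onboarded_pct": 0.8}),
    Phase("Monitoring & revision",
          {"quarterly_reviews_completed": 4}),
    Phase("Communication & AI culture",
          {"staff_trained_pct": 0.9}),
]

def current_phase(roadmap: list[Phase]) -> Phase:
    """Return the first phase whose exit criteria are not yet met."""
    for phase in roadmap:
        if not phase.complete():
            return phase
    return roadmap[-1]

print(current_phase(ROADMAP).name)   # with no observations recorded yet, phase one is next
```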
Speaker 1:Well, Bryan, Kate just outlined the what to do and how to implement it. But what might some obstacles or challenges be as an organization goes through that process, and what are we even driving towards in the end? What outcomes would arise if and when we go through that implementation process?
Speaker 3:Well, great question. What I'm seeing is kind of what we talked about with the Bureau of No: there's a lot of fear. There are folks who say this is too dangerous, we can't do this, we shouldn't do it. They might be giants, I don't know if you know that reference to an old black-and-white sci-fi show, but the point is that the bigger fear should be of not adopting some of the most transformational technology available to us. And on the flip side of that fear there's FOMO, fear of missing out: everybody going around with their AI hammer looking for nails to hammer in, when, in fact, a Google search could do just as good a job of surfacing something.
Speaker 3:So it's that balance, and it really is most important. The organizations that I've seen doing it best are saying: okay, you want to do something? Here's an intake process. What is the wicked problem you're going to solve with AI, and what is the business outcome we can expect? And we're going to hold you accountable for delivering that. Then they go in and test things and fail fast, not being afraid to fail, but failing fast, and they make sure the business case is there before they start chasing the value proposition. And also just basic safety training.
Speaker 3:We do see some organizations that are running and doing training, because you can hurt yourself or others, and you can do it with all the best intentions. The thing about some of these AI agents or systems is it's all about the data, and they will find the data. And if the data is not properly curated, classified and labeled, it's very easy for you to accidentally do something very harmful. So we're trying to find ways to, not necessarily say let's make a huge investment now, but here are some basic rules of the road, safety rules. Here are the guardrails and controls that you already have in the organization, so just turn them on or reconfigure them. And then, if there is a major threat category or control that you're missing, we quickly identify that and say: here are the things that need to be true in order for you to move faster and adopt this transformational technology.
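A minimal sketch, assuming a hypothetical intake form rather than any specific WWT process, of the "rules of the road" Bryan describes: every proposed AI use case has to name its wicked problem, the business outcome it will be held accountable for, and how its data is classified before anything gets built.

```python
# Hypothetical intake gate for new AI use cases. Field names, classifications
# and the approval rule are illustrative assumptions, not a real policy.

REQUIRED_FIELDS = ("wicked_problem", "expected_outcome", "data_classification", "owner")
LOW_RISK_CLASSIFICATIONS = {"public", "internal"}   # anything else triggers extra review

def review_intake(request: dict) -> tuple[bool, list[str]]:
    """Return (approved, issues) for a proposed AI use case."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS if not request.get(f)]
    if request.get("data_classification") not in LOW_RISK_CLASSIFICATIONS:
        issues.append("data classification requires additional security review")
    return (not issues, issues)

example = {
    "wicked_problem": "Manual triage of supplier contracts takes three weeks",
    "expected_outcome": "Cut triage time to two days, measured quarterly",
    "data_classification": "internal",
    "owner": "procurement-ops",
}
print(review_intake(example))   # -> (True, [])
```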
Speaker 2:I was going to say, you bring up a really good point about adoption and the FOMO versus the fear. What I see too, and I was joking about it, we saw it a year ago and we're seeing it even more now, is a bit of see no evil, speak no evil, hear no evil going on. We have about a third of our customers running as fast as possible, a third going, I'm not really sure I want it in my environment, and a third that, one way or another, are kind of on the fence. And part of that has to do with culture. Think about it: I'll give you the example of when Bryan and I first went to college, many, many years ago. I had one computer in my dorm room, it had a green screen, there was one computer lab on campus, let's start there, and there were no cell phones. You had to be mega rich to have a cell phone.
Speaker 2:By the time I entered the workforce, everybody had a computer and everybody had a cell phone. So that's five, six years and all of a sudden we've got both. Fast forward: my older kids grew up with a computer and understood cell phones, but they didn't grow up with AI. They're almost adults; they're just starting that AI journey. And then there's our zero trust baby, as we joke: my youngest will never know a world that doesn't have AI.
Speaker 2:So when you think about that from a workforce perspective, you're dealing with companies that have people who came before, in essence, the computer revolution and the cell phone revolution, people who have entered the workforce knowing nothing but leveraging and relying on technology, and we're about to bring in a generation that will have known nothing but a world with AI. So leveraging the unique depth we have at Worldwide, I think, is one of our superpowers in this space: we can help, and also help future-proof the investments that are made, so they continue to be measurable and impactful to organizations. And that's, I think, where you're seeing these walls come down and why you're seeing so much more collaboration, holistically.
Speaker 1:Bryan, I do want to go back to the trustworthy and responsible AI research we put out, that we've been talking about. That wasn't just made up out of thin air. That was actually based on real work, correct me if I'm wrong, that we did in Malaysia and with the Association of Southeast Asian Nations. Correct?
Speaker 3:Yes, and even before that, that was where the insights came from: earlier gigs to support that great work, to try to spread the good word about this unified compliance framework, or the lens, the Rosetta Stone, to do it right. If it's okay, I'll give you a little anecdote here. This AI, machine learning, neural networks, this stuff's been around for a long time. In fact, WWT made huge investments in this well before I arrived, 10 years ago, and I've had the pleasure of working with data scientists throughout my career. And the one thing that I learned early in my tenure here, which is, you know, about a year, was that when we did the interviews with our clients who were trying to make sure they had the right policies and everything, we'd always end up talking to the data scientists. And I was like, okay, data scientists, you've got the lab coat, you've got the degree, I'm sure they're doing things securely. Wow, I was wrong. Only because they care about the data: they don't care how they get the data, they want more data and they want questions to answer, and that was their focus. We'd be talking and it's like, well, we don't really worry about that; we're assuming the data owners had done it, we're assuming the infrastructure folks had done it. And so that was very telling to me, that there was an opportunity here to educate the PhDs on how to do it right and, at the same time, provide the business the confidence and assurance they need to take on those bigger projects.
Speaker 3:And Kate, you put it right: culture counts. So it's about understanding the sector, the industry and the culture, and it always begins with the tone at the top. When a CEO comes out, or the mission says, we are going to embrace this technology, we are going to do it responsibly and we are not going to harm humans or stakeholders or the planet, those are bold statements, and we love that. Then it's figuring out, okay, how do we actually do it? And that is where we get to come in and help do it.
Speaker 3:And the one sector that probably has the biggest backlog, and you'd think, wow, really?, is financial. The reason is they love, love, love data science, because they've been doing it forever, and in fact they're heavily regulated: they have to prove the model doesn't drift, that harmful bias isn't there, all these things, and they've really been good at that. But they're fearful of generative AI, because my calculator doesn't hallucinate and these LLMs do, and that really was a wake-up call. So being able to work with these very mature industries, with these really smart people, and just say, hey, look at it this way, you could measure it this way, and to translate between all those different cultures, it's very rewarding. It's also been challenging at times. But I think this paper is a culmination of all those experiences, and we have the toolset to really help people do it the right way and be successful.
Speaker 2:I think you just summed it up. And, by the way, before I even say this: if you want me to get your calculator to hallucinate, I'll work on that for you, if you'd like. But I think you just summed up the relationship that we have here at Worldwide. I see us at WWT, a lot of times, make really bold statements to help customers, and then you and others and the amazing mad scientists go figure out how we deliver on them. And leveraging the ATC and the proving ground and all the different partners we have, we're able to create some really interesting, groundbreaking solutions across the board.
Speaker 1:Yeah, Kate, the idea of trustworthy and responsible AI seemed to be making its way into policy and regulation, and then certainly geopolitics shift, and we're trying to understand what's going to happen with the new administration in terms of this policy. I do want to read a quick quote. I was reading a New York Times article from late March, and it was from Laura Caroli, senior fellow at the Center for Strategic and International Studies, and she said that issues like safety and responsible AI have disappeared completely from leaders' concerns, just given the nature of where politics are today. I want to gut check that with you. Is that what you're seeing out there right now? And, if so, what does that mean for transparency as it relates to how organizations are thinking about AI?
Speaker 2:No, I disagree 100% with that statement. I don't think it's totally disappeared. I think that this administration, and I'm not going to speak on behalf of the administration, has kind of hit pause and is trying to figure out the lines and the delineation of where we should have regulation versus self-governance. And what I mean by that is, if you look at the five pillars we just talked about, data protection and privacy, responsible use, things like that, there's a line there, because when you have adversaries using AI on one side and us using it on the other, what is the role and responsibility of companies in between? How much can we govern, and how much are we going to get pushback? We've seen some fits and starts. There was an open call with the FTC last year regarding putting much more stringent parameters around deepfakes on consumer-facing websites, holding companies almost up to criminal charges for not having proper deepfake protection. We've now seen the last AI executive order pulled back, and really the reason is that they're trying to understand that line of governance versus self-governance, versus what is possible from a regulatory standpoint. We're dealing with very bleeding-edge technology and bleeding-edge concepts, and one of the big concerns with this administration, and I'm 50-50 on this one, is about stifling innovation in the name of regulation. So how do we make sure that we're bringing responsible and trustworthy solutions forward, but not over-regulating and narrowing our innovative lens?
Speaker 2:The other reason you've seen a bit of a step back on the safety aspect is there are very interesting geopolitical scenarios going on right now. That's nothing new. But there's also the question of how our adversaries are going to use AI, and while we don't typically take what's called an offensive stance in cyber, our private organizations are going to have to start looking at offensive cyber, and will AI have to come into that? There's a whole body of questions around that. So these are tough conversations to have from a regulatory and a legislative perspective. Being on the Hill this week, the new cyber and AI leadership in this government is not fully baked; we don't expect it to be for another month or two. It's coming, and then I think we'll see a reshift and a focus on what bounds and parameters, from a safety and a regulatory standpoint, we can stand behind and feel confident that we're protecting our consumers and protecting our businesses, but not overstepping the mark by reducing innovation or the ability to leverage AI, if necessary, in a cyber attack scenario.
Speaker 3:Yeah, and Kate, just to add on to that: there's always going to be change. So again, our approach is cognizant of that, and our lens can modify and adjust as needed. If you don't need those frameworks, if you don't care about them, we can limit that view and be very specific and provide prescriptive guidance. But at the same time, good is good. You don't necessarily have to be fully immersed in the ecosystem or the planet to know that using less energy to get more compute, or more value, is a good business decision. You don't have to care about certain social programs to know that if your algorithms can't speak a language or recognize different faces, and that's part of the success criteria, the solutions aren't going to be fit for purpose. So I do think that, if we take the social and geopolitical stuff out of the lens, good is going to be good, because it's fit for purpose.
Speaker 2:Well, and Bryan, I'll turn it back on you: that's why we wrote this paper. To your point and to what we've been talking about, we can't wait at this moment for, in essence, the lawyers, the regulators and the legislators in any country to catch up with the innovation that's happening. And what I love about the pillars and the guide is that it takes the best of what's out there right now and boils it down: look, this should be your North Star, here are five pillars, here's an implementation guide.
Speaker 2:If you follow this, you're basically following the best-in-class regulatory landscape and the best frameworks out there today. And you're doing it in a way that, while you might have to tweak a little bit to go towards NIST, or tweak a little bit to go towards the EU or whatever it is, you're at least putting guardrails on your build that are good and, to your point, best in class. That, I think, is our goal: to make sure there's a North Star our customers can align with that's going to be symbiotic, to some degree, with whatever regulatory pieces come out.
Speaker 1:Yeah, I love that idea of good is good, Bryan. I wonder, are some of us overthinking this, or overbaking it, with either the fear of missing out or the complexity of the situation? Because in the end, like you said, good is good. Good cyber standards are often the most basic cyber standards.
Speaker 3:They are, and I'm glad you said that: hygiene is the number one thing. I think, and I don't know if I should say this on your podcast, but in a couple of years, maybe 18 months, people will realize it's just another application. If you really look at where the focus is and what's different about generative AI, it's everything above, call it, NIST 800-53, the foundation of good security, and if you don't get that right, you have no hope of getting AI right. But if you mature and do that the right way, the bit above it is the life cycle of a model. Models are trained with data, and even though our friend Sergey Bratus would call them weird machines, because large language models are part code and part data, it's the interaction with humans or other machines, through the prompts and the responses, where all the action happens. When we talk about the dirty dozen, the 12 things we've got to be concerned about, they're all happening up there. So you want to make sure your models aren't using other people's intellectual property, because that could hurt the creators, and it could hurt your company if it turns out you can be sued. That's why the foundational models being used, and what they're trained on, are so important. So the fact that we're having these conversations is good, you can never have too much conversation about this and raising awareness, but absolutely we have to do stuff, because if you think people are not using AI in your environment, you're mistaken. This whole idea of shadow AI: if you thought shadow IT was bad, wait until you see shadow AI, because they live amongst us, and it's so much easier to accidentally leak secrets or have your intellectual property consumed.
Speaker 3:And we were at GTC last week, and the keynote was really about the fact that we used to do retrieval, go get the data from wherever it lives. Now these are thinking machines, and the paradigm is changing. I haven't used that term, paradigm, for a long time, but it is.
Speaker 2:You're going deep today. You're quoting Sergey and using paradigm. I'm proud of you, my friend.
Speaker 3:But I think we have to have the conversations. We should not be afraid, we should boldly go. But let's not be uninformed, let's make sure we're armed properly. And I love the phrase: guardrails make it easy to do the right thing and hard to do the wrong thing.
Speaker 1:Yeah well, Bryan, I mean, you've already dropped two pretty specific NIST references. Do we need to take the rest of this time to quiz you on what else is in the NIST document there?
Speaker 2:Would you like me to pull out the NIST guide and we can do like NIST hacker Jeopardy and see how far down he can go? We could have a lot of fun with that.
Speaker 3:No, but I would be happy to talk about the human-friendly threat catalogs that I built on that dirty dozen, because that actually allows you to have, like, a human conversation. So some of the tenets are: your AI chatbot application should not tell you how to hurt others, like how to make bombs. It should not tell you to hurt others, right? That seems pretty logical. It shouldn't use other people's intellectual property, because we value that and we want to make sure creatives are compensated.
Speaker 3:We certainly don't want to expose our private secrets to training models. And so one of the very simple things people can do is read that SLA, read that EULA, I know they're terrible to read, or just ask a simple question of your vendor: are you training the models with my data and other people's data? How can I be assured that my secrets aren't being shared or going to be leaked? And it's very telling when you look at service level agreements, because that'll tell you what the remedy typically is for a breach of that confidence. And I can only tell you that typically people don't want to take on more liability, so I would think those SLAs are purposely complicated. So have a partner who can actually help you understand it, or even just give you the right questions to ask so the vendor can do the right thing.
Speaker 2:I think that's our next blog. We should do a blog around your research there. It'd be cool.
Speaker 3:The Magnificent Seven is what I call them, but I'm not sure anybody really wants to write that paper.
Speaker 1:Yeah, I mean, one of the most important things I think anybody can say, Bryan, you mentioned it: ask the right questions. Kate, as it relates to responsible AI, trustworthy AI or anything in the regulatory environment right now, what are some questions that leaders aren't asking right now that they should be asking?
Speaker 2:Yeah, so, first of all, when you're looking at what you should be asking, there's a whole methodology. We talk about the frameworks, and we talk about there being 32 of them, but most companies aren't ready yet for a framework. We're at a methodology stage.
Speaker 2:So the first step is assess. Get a group of cross-functional leaders, from the actual users who are going to be using the AI up to your leadership, and assess where AI is going to create value and impact in your organization. The second is quantify: what are the risks associated with bringing it in, what are the benefits, the opportunities, and what's the risk? You know, if you're going to create an AI engine that helps you understand, say, in a hospital, all the different types of patients that are there and what kind of research you could do, that'd be amazing. But if it exposes PII and all the patient data on the back end, that's really bad, and it's going to hurt your reputation.
Speaker 2:So what's the opportunity, what's the risk? You assess, you quantify, and then it's a question of going from there: how do you remediate? How do you actually look at it and go, okay, if something does go wrong, how do we roll it back? What should we be focusing on, and what's the impact across our risk layers, from operational to reputational to IP? If something gets leaked, what is it going to do? So there's a whole methodology there of really understanding the use cases we think we're going to use it for, and then the impact if we actually put it in, both from a monetary and a holistic risk perspective. That's the first step. Then you can get into frameworks and everything else, but the first thing is to make sure everyone's on board with both sides of that coin: opportunity and risk.
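To illustrate the assess, quantify, remediate loop Kate outlines, here is a small hypothetical scoring sketch: each candidate use case gets a value score and risk scores across a few risk layers, and the ones where risk outweighs value get flagged for remediation first. The layers, scales and the hospital example's numbers are assumptions for illustration only.

```python
# Hypothetical assess/quantify sketch: compare a use case's value against its
# average risk across several layers. Scores use a made-up 1-5 scale.

RISK_LAYERS = ("operational", "reputational", "regulatory", "ip_leakage")

def quantify(use_case: dict) -> dict:
    """Turn 1-5 value and per-layer risk scores into a simple go/no-go signal."""
    avg_risk = sum(use_case["risks"].get(layer, 0) for layer in RISK_LAYERS) / len(RISK_LAYERS)
    value = use_case["value"]
    return {
        "name": use_case["name"],
        "value": value,
        "avg_risk": avg_risk,
        "decision": "proceed to pilot" if value > avg_risk else "remediate before piloting",
    }

hospital_example = {
    "name": "Patient-cohort research assistant",
    "value": 5,
    "risks": {"operational": 2, "reputational": 4, "regulatory": 5, "ip_leakage": 1},
}
print(quantify(hospital_example))
# value 5 vs. average risk 3.0 -> proceed, but the PII exposure behind the
# regulatory score still needs a rollback and remediation plan.
```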
Speaker 3:Well, I think Kate kind of hit on it: this diverse stakeholder group, call it a center of excellence, or birds of a feather, or even an innovation day, because people are using it. There is a kind of human dynamic we found in a lot of our interviews, where people are ashamed of using AI and don't want to admit it. In fact, if you think about how you maybe first heard about generative AI, it was those scandals of plagiarism, where students were having ChatGPT do their homework for them, and it was like, oh, that's bad. Well, now the universities are having to figure out, no, these are the tools students are going to have to use in the future, so we have to change as educators: how are we going to teach people to use it safely and responsibly, and what part does the human-machine interface have in the future?
Speaker 3:So I think, you know, a center of excellence, a central place where people can come to see what AI is already approved in the organization, so it's like, oh, we've got that, I don't have to go reinvent it. Or standing invitations to citizen innovators to say, please bring us your ideas, we want to hear about them, we want to showcase them. I mean, honestly, I've got to pitch the culture here: we've been doing that. There are some pockets of excellence where somebody comes out and shows what they're doing, and then you're like, I want to be a pickpocket of excellence, I want to go take some tools from that one and this one. Because what we do in our practice area, besides the regulatory bits, is applied AI.
Speaker 3:And what's weird is, we'll do something, and six months later I'll challenge my team: okay, here's a problem that looks similar. Are we still doing it the best way? Is there a more efficient way? Things like copilots that understand the application you're using, and some other tools, can change the answer. The best thing is to bring people together, showcase what's working, de-stigmatize the use of this technology, actually raise it up and celebrate it, but also make sure people understand that not everything is a nail for the AI hammer. There are other ways that work better. And the one thing I'll tell you: do not automate a broken business process or give it to an AI, because it will only fail at scale.
Speaker 1:Love it. Well, I know we're wrapping up on time here, so I do want to thank the two of you for joining. I know each of you have busy schedules between travel, client engagements and everything else in between that we obviously listed from your LinkedIn profiles. So thanks again, and we hope to talk to you soon.
Speaker 3:Thanks for having us. Bye Kate.
Speaker 1:Bye. Okay, as we wrap today's episode, three key takeaways stand out for any organization looking to lead responsibly in the age of AI. First, building trustworthy and responsible AI starts with a strong foundation, and that means aligning across the five essential pillars: data privacy, security standards, regulatory compliance, economic impact and ethical governance. These aren't just checkboxes, they're imperatives that must be embedded into every phase of the AI lifecycle. Second, achieving this level of alignment isn't accidental. To stay on top of this shifting landscape, you'll need to start with regulatory mapping and stakeholder engagement, move through real-world pilots, scale while measuring metrics, and evolve through ongoing monitoring, communication and cultural reinforcement. It's a blueprint for progress that doesn't sacrifice trust for speed. And third, while the regulatory landscape may be fragmented today, the need for a cohesive global framework is becoming clearer by the day. Organizations that act now, investing in holistic governance strategies, will be far better positioned to navigate change and build public trust. A special thanks to Kate and Bryan for sharing their insight and wisdom today.
Speaker 1:If you liked this episode of the AI Proving Ground podcast, please consider leaving a review or rating us, and sharing with friends and colleagues is always appreciated. This episode of the AI Proving Ground podcast was co-produced by Mallory Schaffran, Naz Baker, Brian Flavin and Stephanie Hammond. Our audio and video engineer is John Knobloch, and my name is Brian Felt. We'll see you next time.