NASPO Pulse
Welcome to the NASPO Pulse Podcast, your source for exploring emerging public procurement issues. Join us as we engage in insightful conversations with procurement professionals, partners, and industry leaders.
Discover a diverse range of perspectives and opinions on various topics that are shaping the procurement landscape. Whether you're a state procurement official or interested in the field, this podcast provides essential insights to keep you informed. Tune in for the conversations that matter in the realm of procurement.
AI for Government: How Public Procurement Can Adopt It Without Getting Burned
The promise of AI in government is huge, but so are the stakes. We sit down with Dr. Cari Miller and Dr. Gisele Waters, co-founders of the AI Procurement Lab and leaders behind IEEE 3119, the first standard dedicated to procuring AI and automated decision systems. Together we break down how public buyers can make smarter, safer choices—turning values like transparency and human oversight into concrete policies, contract clauses, and day‑to‑day practices that actually hold up under pressure.
We start with practical first steps: form a truly cross‑functional procurement team, define a real problem, and assess data readiness. You’ll hear why “getting ready to get ready” is a smart move, using small, low‑risk pilots to clean data and build capability before rolling out bigger tools. We share cautionary case studies, including a clever pothole detection project on trash trucks that drifted into unintended surveillance, and we explain how scope, safeguards, and community accountability prevent harm while preserving benefits.
From there, we get specific on AI policy versus contracts: how to require model provenance, audit rights, incident reporting, redress processes, and exit strategies that avoid vendor lock‑in. We talk SPI (sensitive personal information) and the hidden risk of metadata, why human‑in‑the‑loop matters for public trust, and how change management helps teams see AI as a useful tool rather than a threat. By the end, you’ll have a roadmap for responsible AI procurement that blends governance, ethics, and measurable outcomes.
If you found this valuable, follow the show, share it with a colleague in public procurement, and leave a review to help more listeners discover these tools and ideas.
Follow & subscribe to stay up-to-date on NASPO!
naspo.org | Pulse Blog | LinkedIn | YouTube | Facebook
Hi everyone and welcome to NASPO's Pulse, the podcast that focuses on current topics in public procurement. I'm your host, Julia McIlroy. Today's guests are Dr. Cari Miller and Dr. Gisele Waters, co-founders of the AI Procurement Lab. We'll be discussing AI and procurement. Hi Gisele and Cari, welcome to Pulse. Hi. Hello. I'm so happy to have you here. So to begin, I'd love to hear about your professional backgrounds and what led you to create the AI Procurement Lab. Cari, let's start with you.
Dr. Cari Miller: Well, I am a longtime corporate strategist. I had a front-row seat to digital transformation back in the 2000s, when everybody freaked out about two zeros coming through at the turn of the century. I progressed through a marketing channel, mainly with AI. Then, when I got into my doctorate program, I started looking at the downsides, the risks, and how to mitigate them, mainly from a worker perspective, in the workplace. But what happened was I kept coming back to this procurement issue and thinking, why are these people buying this? Why wouldn't you stop this at the procurement doorstep? What is wrong with these people? So I got connected with Gisele, who had started this standard at IEEE, and she'll talk about that. It was off to the races at that point. I just couldn't look away; I couldn't stop doing the procurement side of AI. It is the absolute place where your governance strategy is operationalized. It's the front door. So that's my journey to this moment in time. Thanks, Cari.
Dr. Gisele Waters: Okay, so my background is very different from Cari's. I have a PhD in educational psychology and lots of teaching of various populations, and human-centered design and research is also in my background. I've been at Nova Southeastern University for twenty-plus years at the Fischler School of Education and criminal justice, while also working with private and public nonprofits building multidisciplinary guidance and tools. What hooked me into this advanced-tech IEEE AI governance space, and into leading responsible AI procurement, and more specifically chairing the 3119 standard at IEEE when I was asked, was a unique patient-services project with advanced tech enablements. It was born from the human-centered design research I was doing in healthcare. I led the development of a new-concept senior clinic at a federally qualified health center that required really complex blood data integration and posed big privacy challenges across native and vendor applications. I realized there was no standard, and the legal and technical experts at the FQHC didn't appreciate how the AI capacity changed diagnostics and the privacy requirements. That's when I said yes to developing the first standard for the procurement of AI and automated decision systems at IEEE. That's when I met Cari, and things just ballooned from there into the founding of AIPL.
Julia McIlroy: Thanks so much. Quite a diverse background, but how interesting what led you to the creation of AIPL. Just super quickly, for our listeners, can you give us a little background on the 3119 standard, and also on IEEE for those who may not be familiar with it?
Dr. Gisele Waters: Sure. So IEEE is the Institute of Electrical and Electronics Engineers. It's been around for almost a hundred years, developing standards in all sorts of industries, including energy, information technology, physics, and so on. Only around 2017 did IEEE get into AI governance standards. They were one of the first, if not the first, standards development organizations worldwide to say: AI is different, and we need to provide some guidance here from industry folks and nonprofits. Ever since the first standard I was on, back in 2017-2018, multidisciplinary teams have always been part of these standards development projects, and they really do last between three and five years, depending on the length of the writing and rewriting. With respect to 3119, it's an incredible standard, developed between 2021 and 2025. We led a group of international experts across all sorts of disciplines, from law to social science research to information technology to AI developers, folks like Cari with her background, myself with human-centered design, legal, all sorts of incredible team members. It could not have been possible without that interaction. Essentially, the standard for the procurement of AI and ADS has six processes and over 26 tools, and it really is the first of its kind worldwide. No other standards development organization, not ISO, not IEC, has yet ventured into the procurement function specifically. They've certainly ventured into the development of AI standards, but we were the first to enter the procurement space. So we're excited about it.
It was published this year, and we have a follow-up training that helps apply the standard. That's also available through IEEE; AIPL collaborated on that as well.
Julia McIlroy:Thank you. That's so fascinating. So, speaking of procurement, for a procurement office that's thinking about AI, what's the first step they should take to lay a strong foundation before even starting?
Dr. Gisele Waters: So, you know, people are buying AI whether they really know it or not. But for early-stage organizational steps, one of the first things to do is to create an AI procurement team that is focused not on using AI to buy, but on processes, on how they're buying: to look at AI as a category, which is unique, but then to be really circumspect, evaluative, and critical of their own processes. So create a cross-functional AI procurement team; there are a lot of details within even that high-level step. Certainly assess your own technology portfolio and the governance of your existing technology portfolio. Cari will get into this a little later, but analyze the current state of data governance: what type of data you have, how it looks across the enterprise, and who is impacted by all of these prior components. Then, if you have not already thought about an AI policy, there are now networks of folks with AI policies that you can identify and adapt to your organization and entity. Finally, I would say be cautious and start slow: define a procurement case, define a problem, define a need, as you would in any other product category, but with AI more specifically. In the 3119 standard, for example, there's a whole process dedicated just to defining the problem, the challenge, and the need that one thinks the entity has, which AI may or may not be a solution for.
Julia McIlroy:Thanks, Gisele. So we've heard the phrase getting ready to get ready when it comes to AI adoption. Can you break that down for us? What does it look like from an organizational point of view?
Dr. Cari Miller: I know, don't you love this? I love this phrase. I think Texans use it: I'm getting ready to get ready, or actually, I'm fixing to get ready. This phrase has a dual meaning. Either you're procrastinating, as in, no, no, no, I'm getting ready to get ready, don't worry about it; or you're excited: I'm getting ready to get ready! And that's funny, because organizations have the same affliction. Oh no, we're getting ready to get ready. We're gonna get on it, sure, trust me. Or there are those that are just rushing right in. From an organizational perspective, I picked this phrase up from the IRS, actually, because they were getting ready to get ready. Essentially, what it means is that there's a big stage to be had with AI. There are huge applications, huge opportunities. Every single one of those applications and opportunities requires a massive amount of data and a massive number of people who understand how they work. I came from the digital transformation world: you are going to remap your processes, and you're going to have to help people understand how that looks and how that works, get on the train, and know it's not going to take the job away; they're going to do the job differently, and you show them how. So there are all of these things you're prepping your culture for, but a lot of it is prepping your data. What I was watching the IRS do was such small-potatoes stuff, but think of how much data that group has. Massive amounts of data, right? Data that, over decades, wasn't clean. Garbage in, garbage out, like everybody's data. So they were using that data-cleansing opportunity as internal, small AI pilot tests.
So they were using AI to clean their data to get ready for AI. I thought that was brilliant. They were getting ready to get ready, but in the process, they were learning with AI. It was very small, very controlled, kind of low-risk sandboxes. They would take it one bite-size piece at a time, with a very concrete, well-defined use case. I just thought it was great, and it's a model for a lot of organizations to try it that way. You know, don't go big or go home; just stay inside the house for a minute and try this in a small sandbox first. Yeah, they were fixing to get ready, right? Yeah. Yeah.
Julia McIlroy:So data is at the heart of any AI strategy. How can procurement teams get their data ready to take advantage of AI tools?
Dr. Cari Miller: Yeah, so before I jump into how they get their data ready, let's double back and define what a procurement team is, because Gisele and I sometimes get crossways with people on this. We say procurement because that's what we know and that's what we say, and historically that has meant something very specific. It means the buyer in a department that is compensated on saving the company money, that knows how to write a contract and keeps track of stuff. There's a specific job and role that has the word procurement in it, right? That's not what AI is. When you go to buy AI, the procurement team is a much bigger concept. It's a program manager or a business owner or business leader. The procurement person has their role to play, an IT person has a role to play, a legal person has a role to play, and you may need an ethicist; it depends on what you're buying. So when we say it, we mean a group. We mean brains at the table, because you need different thought patterns and partners here. And this is why I doubled back on that: when you're looking at the data, you're looking for different things. There are all kinds of data, right? There's people data, and some of it's sensitive. So you need the HR person to say, oh, California law says you can't, or, a union number is a sensitive piece of data. Okay. Guess what you need to do when you do AI? That needs to be labeled properly. So now you need your IT person to listen to your HR person so that it's labeled properly, so that when your procurement person goes to buy a contract for something HR-related that's highly sensitive, everybody along the chain knows that piece of data was captured, labeled, and ready to go. That's kind of how it works. And there are all kinds of data like that across the entire ecosystem. Okay.
Julia McIlroy: You know, that's a fantastic point about the procurement team. Really, we're talking about true cross-functionality. Yes. Folks understanding what their role is, but also how to collaborate with others on that team. So yes, we're procuring something and it's a procurement team, but it's really a cross-functional team that makes that procurement. At the University of Idaho, we had data that was considered SPI, sensitive personal information. We were always very careful with SPI, whether we were working with students or faculty. So you would need someone on this team who understands what the organization considers SPI, to make sure that it doesn't become part of someone's AI information that's out there in the world, right?
Dr. Cari Miller: Not only someone, but also a lawyer, because these laws are changing so fast. And what's very interesting with AI data is that metadata is becoming really important in SPI. Metadata is data about data, and that is where you have security vulnerabilities. When we talk about Microsoft Copilot running inside your environment, you think, oh, I've got a license, it's all safe, it's fine. Yeah, but your metadata is your actual problem, because who stops and says, everything in this file, with all of my key associates' phone numbers and addresses, is SPI? You don't do that; you just say, you know, Cari's contact list. But that's SPI, and maybe I've just left it open to everybody. So it really is very tricky, and you've got to know it. Oh, great point.
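Cari's metadata point can be made concrete with a small sketch (the file names, fields, and phone-number pattern here are invented for illustration, not from any real system): the risk is that a file's descriptive metadata, not just its contents, can carry SPI.

```python
import re

# Hypothetical sketch: flag files whose *metadata* (not contents)
# contains something that looks like SPI, e.g. a phone number.
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def flag_spi_metadata(files):
    """Return (file, field) pairs whose metadata values look like SPI."""
    flagged = []
    for name, metadata in files.items():
        for field, value in metadata.items():
            if PHONE_RE.search(str(value)):
                flagged.append((name, field))
    return flagged

# An innocently named file can still leak SPI through its metadata.
files = {
    "contact_list.xlsx": {"title": "Cari's contact list",
                          "comments": "Main office: 555-010-4477"},
    "potholes_q3.csv": {"title": "Pothole report Q3"},
}
print(flag_spi_metadata(files))  # [('contact_list.xlsx', 'comments')]
```

A real deployment would of course need far richer patterns and organization-specific SPI definitions; the point is simply that the scan has to cover metadata fields, not only file bodies.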
Julia McIlroy: So policy updates are critical when introducing AI into procurement. What kind of clauses or requirements should leaders start thinking about to make sure that AI use stays ethical and transparent? So I'll take that.
Dr. Gisele Waters: I think we need to differentiate between an AI policy and the AI contracts we're going to engage in between buyer and vendor. In regard to an AI policy and what kinds of provisions, sections, and clauses might be needed in it, that really is an organizational decision, a champion decision: the executive who might lead the effort in saying, hey, we don't even have a policy. The GovAI Coalition has been a really good network of government public procurement folks across the United States, and it's been fascinating to see how they interpret what a good AI policy is and what a good vendor AI fact sheet is; they've been doing some fundamental analysis on what components should or should not be in an AI policy. The complexity of any organization's decisions on how to keep guardrails on what can be transparent, or how much transparency is required, really comes down to setting up this overarching strategy in an AI policy. I'm not talking about contracts; I can let Cari go crazy on contract clauses with respect to buying AI. But in terms of writing a good AI policy, you can start from existing AI policies, which you can find anywhere and everywhere; you don't have to start completely from scratch. They typically all begin by addressing the fact that the organization thinks disclosure is important: we think it's important that we know what we're buying to a degree of detail that we've never asked about before. Information technology of the past is not the same as the AI products out there now, so what you ask for in a contract is very different from what goes in a general organizational AI policy. And that's not rocket science these days anymore.
So just start from where one already exists, the GovAI Coalition being one of many sources, and then adapt it to the organization's needs.
Julia McIlroy: Thank you, Gisele. So you had mentioned that the IRS started with a small pilot project. Can you share some other low-risk, high-impact AI use cases in public procurement?
Dr. Cari Miller: There are two more that come to mind for me. The first one I got from the GovAI Coalition. Some folks came on, and I think this was out of Tennessee. This is genius stuff; I don't know how people's brains work this way. I just love this stuff. They attached to the back of trash trucks an AI monitoring solution (it wasn't LIDAR, but a different type of AI) that would detect potholes in the streets. Did you hear about this? Think about that: the trash truck goes on every single street everywhere, so it's detecting potholes and sending that information back to the streets department to say, hey, there's a pothole here. What they wanted to do next was upgrade it with LIDAR so it could tell how deep the pothole is, and some other data points about the pothole, so they could tell how much time it would take to fix. But just knowing there was a pothole there, they could quickly send a crew out and get it fixed. I just thought that was genius, until it wasn't. The follow-up story we heard was that the technology started to pick up homeless camps and send back information on homeless encampments. And then they were like, okay, wait, that was not the intended purpose. I tell that story because it's a great example of unintended consequences of AI and why it is important to watch your AI all the time. It's like a toddler: you've got to keep an eye on it. What is it doing? Oh no, it did something bad. We have to bring it back in, put it in timeout. What does it need? We've got to retrain it, all that stuff. So that was one case study I thought was genius until it wasn't. And I think they're working on it.
Another one was just small pilots going on at local governments. This, again, was genius crowdsourcing. GovAI got a group of people together and they crowdsourced what are called translation pairs. Translation pairs are used to automatically, with AI assistance, translate language on websites and for emergency service announcements. I think they did 50 different languages. If you think about how unique the language can be in a municipality when you're trying to communicate things, you don't want those translations left to chance. So what they did was catch key phrases that were unique to the municipality, you know, trash collection, just unique phrases. They put those into the translation-pairs technology and trained the AI to do those translations, so they could do that stuff on the fly. It's been an amazing project. Very innovative stuff. So those are my two.
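The translation-pairs idea can be sketched in a few lines (the phrases, translations, and function names here are hypothetical, not from the project Cari describes): curated, municipality-specific pairs are checked first, so key phrases are never left to a general translator's guess.

```python
# Hypothetical sketch of curated "translation pairs": municipality-specific
# phrases mapped to vetted translations, consulted before any automatic
# translation so key terms are never left to chance.
TRANSLATION_PAIRS = {
    ("trash collection", "es"): "recolección de basura",
    ("boil-water notice", "es"): "aviso de hervir el agua",
}

def translate_phrase(phrase, target_lang, fallback_translate):
    """Prefer the curated pair; fall back to machine translation otherwise."""
    key = (phrase.lower(), target_lang)
    if key in TRANSLATION_PAIRS:
        return TRANSLATION_PAIRS[key]
    return fallback_translate(phrase, target_lang)

# Curated phrases bypass the (here, stubbed) machine translator entirely.
mt_stub = lambda text, lang: f"[machine translation of '{text}' to {lang}]"
print(translate_phrase("Trash Collection", "es", mt_stub))
```

The design choice is the override order: the vetted human pair always wins, and the machine translator only handles phrases nobody has curated yet.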
Dr. Gisele Waters: When we talk about free pilots, one thing I would add, in terms of seeing these issues from a risk-based approach, is that free pilots from vendors can be dangerous in the very ways the first case study Cari mentioned shows. Once they are integrated in the enterprise and the applications are embedded, you have this vendor lock-in issue: you can't easily extricate yourself once you're already in bed with someone. And this can happen without ever having had public deliberation, because it's not needed or required, about, say, an application that impacts large populations, or even very vulnerable populations, like the homeless in the case she mentioned. It's really important to consider very carefully any applications that lean into things that start to look like surveillance information, or experimental information on communities or specific groups, and so on. That is what makes this AI procurement space so specifically in need of better understanding: you have to respect that it needs additional attention, another layer of diligence, before you buy something. It's really important to think about.
Julia McIlroy:Yeah, we're definitely not procuring widgets, right?
unknown:Correct.
Julia McIlroy: And great point about, sorry to all the vendors or suppliers out there listening, but back in the day we'd call that vendor creep, right? It starts with a free application that's now embedded and that you're also now paying for, and there was probably no competition to acquire it. So it gets complicated. I love the pothole idea. Definitely not all potholes are created equal, so to have some LIDAR on there, that's an incredible tool. So, change management often makes or breaks tech adoption. What advice do you have for procurement leaders to get their teams comfortable and confident about AI's role in their work? I can't tell you how much I love this question.
Dr. Cari Miller: I feel like this is the one little piece of the puzzle of our AI transformation that has been missing from so many companies. We're just so ready to rush in and do AI, but it really is just like every other technology transformation we've ever done. When you do a technology transformation, you have to do change management with it, or else there's less of a chance that it sticks, right? On the one hand, I think change management is easy to some extent if you're a basic thinker: okay, people want to be respected; people are generally anxious, so we have to appeal to their sense of anxiety. They want to know, they need to know, they deserve to know, so we just help them know. That's change management at its stripped-down core. On the other hand, it's hard because it takes so much time. You have to have the patience of a saint. You have to paint a vision; it has to be compelling; it has to actually make sense. Sometimes we put AI products in and it's like, what were we doing with this? We didn't have a business case; we just thought it sounded cool. That's not the way to do AI, right? AI is not the means to the end. It's just a tool, so it has to make sense. Oftentimes people forget to do the what's-in-it-for-me part of explaining these tools: how can you benefit from it, so that you get their buy-in? Then give them space and time to ask some questions and, you know, fumble with it. And here's a novel idea: maybe ask them about it before you just put it in, and get stakeholder engagement. We are huge fans of stakeholder engagement, and by the way, we have seen it make your solutions ten times stronger, because people on the ground tend to know a little more about situations.
And you really do have to help them upskill and learn, and give them a safe place to fail. That's okay; that's part of digital transformation. We expect that. You find your leaders and have them help the laggards along in a soft way, and that's how you get it to stick. It's caring and patience, and sometimes we forget that. But that's what success looks like. That's a great point.
Julia McIlroy: I think in the end, as you were saying, people want to know: how is this going to impact me? What's going to happen to me? And it's not that people are self-centered, but you want to know, will I have a job tomorrow? What's it going to look like? And explaining to them: yes, you now have another tool to use in doing your job. But there has to be explanation and some easing in. Exactly. So when we talk about getting roles and responsibilities in place ahead of time, what's your advice for teams that aren't even buying these tools yet? How can they start laying the groundwork?
Dr. Gisele Waters: So I'll go ahead and take that. I would argue that you'd be hard-pressed to find a team that's not buying these tools, or being pressured into buying them through an upgrade or some kind of integration. It will be hard to find one that is not already buying. With that said, we touched on this earlier, when Cari was talking about how a procurement team develops competencies that become cross-functional. That cross-functionality is not just at the technical level; you have to have a team member, or a set of team members, who have general cross-functional input into the process of buying. And what we may have lost a little bit of is: are we only using a procurement team here because we want them to use AI to buy? Or, falling back on our background at the AI Procurement Lab, is it about building capacity in responsibly buying AI, by changing your approach to what you buy, when you buy it, and how you buy it? That is very different. So procurement team competency, to us, means something very specific, and we actually defined it in the standard with very specific elements: for example, the ability to adapt to changing circumstances, or the ability to perform the requisite tasks without the need for particular virtual resources. It's about understanding that human in the loop is important when you're trying to avoid risk, reduce risk, and identify a risk appetite. Those types of folks are rare. So that's where I would begin: asking who the best cross-functional team members are, and what kind of cross-functionality they have. Is it only technical? Or is it also general? Or is it about procedural and operational issues? That's where I would start.
Julia McIlroy: So you've mentioned human in the loop, a phrase I love, and also upskilling. How should organizations strategically invest in training their employees?
Dr. Gisele Waters: I would say start small. Unlike three or four years ago, before the GovAI Coalition started integrating a network of public procurement folks asking how we're going to do this with AI and what kinds of processes we should start with, there are now a lot of free resources out there, and a lot of very affordable ones, not only teaching people why AI is different, what AI is, and what AI means to their organization; there's an ecosystem that is really beginning to flourish. In this space specifically, we were one of the leaders, and we're very proud of that. Myself as well, and Cari in her own way, and through AIPL and IEEE, there's now even training: a variety of organizations are dedicating tracks of training to this intersection of procurement and AI. Not just buying AI to make your procurement more efficient, but literally what we focus on, which is how to buy AI more responsibly. You start with what's within budget, whatever the organization can do, but you must respect, again, the fact that AI is different and unique, and understand it's a category that needs to be taken seriously because of the impact on vulnerable populations, especially if you're a public procurement professional. And ask: what does that mean for my licensing, my certification, my educational background? How am I going to become a better procurement professional in the first five years, or the first ten? What does that look like in ten years? When we were building the standard together, I think that's really what kept us going, what fed the passion: we were building a standard for a space that didn't even really exist yet.
And so now the derivatives of that product are going to be many, I would suspect. And as AIPL, we're very, very proud of driving that niche, the first groove in the ground, about how to teach people to do this better and in a more responsible, transparent way.
Julia McIlroy:So you've mentioned AI values and principles as an important starting point. So, how do leaders translate those values into something concrete that drives decisions, especially around procurement?
Dr. Cari Miller: Yeah, this feels like a full-circle moment in my career, because what's interesting is we say AI as if it's some new unicorn, this magic thing, but it is still digital transformation. In digital transformation, what we commonly do is ask: do we have a business problem, an opportunity, or a challenge we're facing? It takes us too long to do X. Our competitors are doing this faster, better, smarter; can we do it that way? So you evaluate, you do root-cause analysis, you understand the scope of your problem and challenge and what is really causing it. From there, your needs are born. That may or may not be an AI issue; in this day and age, it could be an AI opportunity. And that is how you arrive at an AI procurement opportunity. However, there is a step before that for values and principles: depending on the risk level of what you're trying to solve, you have to go through a series of evaluation points where you ask yourself, do I want to solve this with AI, or is it too risky? And if it's risky but I think I can tolerate it, am I willing to do the mitigations necessary? You have to know that before you go into the procurement, because oftentimes we find people walk into the procurement and think, well, I'll just tell the vendor they're going to have to do all this stuff. Uh, no, guys. It is a two-way street. The vendor will be responsible for some things, absolutely. They have to give you decent tech; we've always expected vendors to give us decent tech, not hand us a bunch of things that are already broken, or things they didn't feel like doing. But we have a responsibility once we get hold of it.
We want to make sure we're not misusing it or abusing it, and if there's an incident, that we take care of our part of it. If it requires redress in any form or fashion, we have a process and policy in place to handle that redress on our side. The vendor's not going to do that for you. So you have obligations too. It's a whole series of things to consider, which is what we teach at the AI Procurement Lab: how to get all those ducks in a row so that you can really extract that value without just letting all the risks linger. The benefits are there, but if you let those risks linger, they will erode all that good ROI, that juicy ROI you're trying to get out of that thing.
Julia McIlroy:And I love that you're really beginning with what problem are we trying to solve, instead of just, well, everyone's buying AI, so we need to buy it too. But really, how is this going to benefit our organization? So I think that's a great strategy. Okay, lastly, what's your proudest moment at the AI Procurement Lab? What has brought you joy and satisfaction? Gisele, let's start with you.
Dr. Gisele Waters:Oh Lord, that's putting me on the spot.
Dr. Gisele Waters:I would say it's when these disciplines intersected for the audiences we talk to, whether it was a small government department or literally a transnational, international security entity. When we talked to people about the fact that we are advocates for a change in process, and for making sure the human is in the loop when you buy AI and in how you buy it, it was the immense hunger, the incredible curiosity that everyone had: oh my god, why didn't we think about this? We should have asked better questions when we bought. Why didn't we reflect on the existing processes that might not match the potential risks? I was always incredibly thankful, A, that I had an incredible partner to work through all of this with, because her brain is magnificent, and B, that folks really responded to the need for this standard and for the risk-based approach that AIPL was advocating and championing. And from the beginning till now, I don't think the hunger for the guidance is gone. It is only beginning. So to me, it wasn't one moment, but rather a set of many common moments, if you will.
Julia McIlroy:Thanks, Gisele. Cari?
Dr. Cari Miller:It's so funny, because I was going to say it's not one thing; it's a cacophony of everything. We've spent four or five years just looking at this puzzle, researching it, putting the pieces together, figuring out this wouldn't work this way, you can't do it that way, it has to go this way. Oh, look at this new thing, plug that in. No, that changes this. We have spent a long, long time digging into all of this, and now we're at the point where we get to share that information and save people all that time. So when we go and talk to people about their narrow sliver, like, I'm trying to solve for this, or I'm trying to figure this part out, and we say, here's the answer to that, or think about it this way, and their light bulb goes off, that is so rewarding. I guess it's being a teacher. This is what teachers feel like when they get to teach, and the student suddenly knows how to do the math. It's a cacophony of that; it's not one thing. Although, if anyone has ever developed a standard, it takes a few years, you have to do it with a group of people, and it's consensus-based, and it's like birthing an elephant. When that standard finally went live, that was a pretty good moment. I think Gisele would say it's also a pretty good moment. In fact, she's getting us a little award for that too, by the way. It's a good standard.
Julia McIlroy:Fantastic. I love the analogy. That's great. Cari and Gisele, thanks again for joining me today. I appreciate it. Thank you for having us. Thank you. And to our friends in public procurement, remember we work in the sunshine. Bye for now.