
The Buzz with ACT-IAC
ICYMI: Opening Keynote, AI Acquisition Forum
In this episode, Karim Fadel of CGI Federal introduces Zach Whitman, Chief Data Scientist and inaugural Chief AI Officer at GSA, who discusses GSA's AI strategies and initiatives. Whitman covers the practical experimentation and deployment of AI systems, the importance of data hygiene, benchmarks for AI performance, and the challenges and opportunities in AI adoption and acquisition for federal agencies. Key topics include AI in acquisitions, talent readiness for AI, and the fine balance between technological advancement and maintaining accuracy and precision in AI governance.
Subscribe on your favorite podcast platform to never miss an episode! For more from ACT-IAC, follow us on LinkedIn or visit http://www.actiac.org.
Learn more about membership at https://www.actiac.org/join.
Donate to ACT-IAC at https://actiac.org/donate.
Intro/Outro Music: See a Brighter Day/Gloria Tells
Courtesy of Epidemic Sound
(Episodes 1-159: Intro/Outro Music: Focal Point/Young Community
Courtesy of Epidemic Sound)
Announcer: [00:00:00] With that, I'd love to kick things off and welcome Karim Fadel, Vice President at CGI Federal, to the stage, who will introduce our first keynote. So, Karim.
Karim Fadel: Thanks, Hugh. Good morning, everybody. My name's Karim Fadel. I lead CGI Federal's AI go-to-market strategies, and I have the distinct honor to introduce, and have a conversation with, our keynote guest today: Zach Whitman, Chief Data Scientist and Chief AI Officer at GSA. In these roles, he supports the Data Analytics Center of Excellence and leads cloud migration, data
Karim Fadel: architecture modernization, change management, data and procurement strategy formation, and modernization efforts at external agencies. Hold on. I'm not done. [00:01:00]
Karim Fadel: As GSA's inaugural Chief AI Officer, Zach is responsible for maintaining awareness of GSA's AI activities, establishing processes to measure and evaluate AI performance, establishing and chairing AI governance bodies, and leading the development of GSA's AI inventory and other reporting, among, I'm told, when he has time, many other responsibilities.
Karim Fadel: Please welcome Zach.
Zach Whitman: Thank you.
Karim Fadel: Okay. Let's start with GSA. Can you talk about the AI strategies and initiatives that are in place?
Zach Whitman: Yeah, and thank you for having me. Too gracious; I don't do all those things. Who knows who does those things? Just trying to keep up, because like Tim was saying, we are at a new point in time.
Zach Whitman: With GSA, we are thinking about a few different things. [00:02:00] One: as GSA, we have to be an enabler for other agencies to perform their mission, and we have to figure out what efficiencies we can find that supplement their mission, on top of their procurement efforts, on top of their digital strategies.
Zach Whitman: The best way we knew how to do that was to look internally and think about what practices work for us, because a lot of what we do will apply directly to the other agencies. Leading by example has been something we have focused on, because nothing is better in this world right now than practical expertise: having the experience of knowing what it takes to deploy AI at scale to a diverse workforce
Zach Whitman: with a variety of different missions, and doing it in a way that is safe, is observable, and allows for transparency to the public in our practices. So [00:03:00] the main effort we've been focusing on is practical experimentation and then deployment into production of AI systems. That ranges from
Zach Whitman: general-purpose chatbot-type tooling, to mission-specific direct implementations, to existing workflows being augmented with AI capabilities, to original research like our safety efforts. So there's a whole bevy of different activities surrounding this type of work, given how general-purpose AI can be, which leads us to this very wide range of frontier efforts that need to be
Zach Whitman: all running in parallel.
Karim Fadel: Okay, perfect. And you're clearly leading the charge. What returns have you seen so far? Early days, yeah, but what returns have you seen so far?
Zach Whitman: One thing I think is really cool is that while we've been an early adopter, we've seen a [00:04:00] ton of really interesting work happening across the federal complex.
Zach Whitman: For example, NASA was an early adopter in building their own interfaces and using boutique, bespoke AI tooling for engineering purposes. Energy has been building their own; some of the labs have been building their own LLMs to support their specific missions. We've been much more focused on enablement for general
Zach Whitman: practice. GSA is not a science organization; we are in the business of facilitation and support. So our focus has been: what does it take to get general adoption at an agency? What are some of the steps you would want to consider when deploying general-purpose tooling for back-office drudge-reduction-type activities?
Zach Whitman: And so our initial focus has been on deploying [00:05:00] a general platform that allows for mission augmentation, for using LLMs in a general-purpose way where we don't have full control over each use case. The idea being that, like the internet, like word processors, you can take those in a number of different directions.
Zach Whitman: And so our focus has been: we need to make sure these tools are available, and that we have the telemetry that allows us to ensure the tools are being used appropriately for mission-specific activities. And we have safeguards in place to let our employees experiment with these tools without fear of, you know, some sort of data exfiltration issue, or of doing something wrong.
Zach Whitman: I think the availability of commercial product tooling has made it so that you see a lot of contractors, a lot of feds, worrying about using these tools. One, there's a cultural impasse here: am I cheating by using this? Is this really not my job? There's the other [00:06:00] part of it, which is: can I use this tool?
Zach Whitman: Can I upload this document and then do something with it? There's a fear there. So we're trying to bridge that cultural gap by deploying tools that assert control over the data and over the network, and saying: please experiment with these tools. Please try them out. Please use them for your day-to-day.
Zach Whitman: We've applied safeguards to ensure that you're not going to get into trouble by using them and putting the wrong document in. We then observe how they're being used and modify the product to make it more useful, things like moving from a chat interface to an API-centric position. The maturation of our AI journey has started to take off, and we're seeing a lot more
Zach Whitman: direct embedding of the tool into third-party apps; whether or not they came with AI, we can now enable those tools. So I think it's really been about how we move from that phase [00:07:00] zero familiarization zone, to phase one adoption, to phase two, augmenting our existing workflows. The next step for us would be rethinking our workflows with AI first in mind: rather than augmenting existing workflows, how do we rethink the entire process, knowing that we have these tools at our hands?
Karim Fadel: And so with that priority and that mindset, data hygiene has to be at the forefront of your thinking. Can you talk about some best practices agencies should be thinking about, and some lessons learned that you have?
Zach Whitman: The juice is really not worth as much of the squeeze without the data.
Zach Whitman: That's been a helpful thing to convey about these tools, because the tools themselves present an immediate value proposition: I put something in, I get something out. The value of that output is really going to be predicated on whether it has access to your mission data, whether that data is authoritative, and whether that data is accessible
Zach Whitman: to the models or the systems [00:08:00] themselves. It's fine to hook it up to a data store, but if that data hygiene isn't there, if you don't have good metadata in place, if you don't have clean ontologies, if those ontologies aren't legible and understandable by the models, you're not going to get the value that you could have if you had done the boring, dirty,
Zach Whitman: classic data work that is not AI at all, but needs to get done and is totally dependent on your subject matter expertise. All the nuance of your datasets will get lost if you just plug them in without any consideration for explaining why this field has this nuance, and why this definition is not the same as that definition, easily confused.
Zach Whitman: If your analysts are getting confused by your data, the AI will be confused by your data. So think about how to make your metadata legible before you start thinking tech, before you start thinking MCPs, before [00:09:00] you start thinking about, you know, building agent layers. The data hygiene will be the thing that gets you the value prop back.
Zach Whitman: You can build the infrastructure, but you need to be able to show whether the quality is up to speed. So the data is huge, and the measurement of the quality is huge. All these new models as they come out: if you're not measuring their performance, if you don't have benchmarks in place, then you'll be reliant on anecdotal, incomplete evidence to decide which model works for which use case.
Zach Whitman: And as you make changes, how do you know you're improving? So there are two factors here. As you bring up this new tech and try to apply it to your existing processes, you have to bring your data along at that same rate of investment to ensure the investment is actually made valuable.
Zach Whitman: Otherwise, you're going to leave a lot of value on the table.
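To make the metadata-legibility point concrete, here is a minimal sketch of a pre-flight check in Python. The data-dictionary shape and field names are hypothetical, purely illustrative of the principle that a model should not get a dataset until every field carries the definitions an analyst would need:

```python
# Illustrative sketch: block model access to a dataset until its metadata
# is "legible." The required keys and catalog shape are assumptions.

def metadata_gaps(data_dictionary: dict) -> list:
    """Return fields missing the context an LLM (or a new analyst) would need."""
    required_keys = ("description", "definition_source", "allowed_values")
    gaps = []
    for field, meta in data_dictionary.items():
        missing = [k for k in required_keys if not meta.get(k)]
        if missing:
            gaps.append(f"{field}: missing {', '.join(missing)}")
    return gaps

# Two near-identical fields whose nuance must be spelled out, or the model
# will conflate them just as a new analyst would.
catalog = {
    "obligation_amount": {
        "description": "Funds legally committed on the award",
        "definition_source": "agency data standard (hypothetical)",
        "allowed_values": "USD >= 0",
    },
    "outlay_amount": {"description": "Funds actually disbursed"},  # incomplete
}

for gap in metadata_gaps(catalog):
    print("Fix before granting model access ->", gap)
```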
Karim Fadel: You mentioned benchmarks. Where [00:10:00] would we find those? Are those available? How often are they updated?
Zach Whitman: So the benchmarks are interesting. There are a lot of commercial benchmarks out there. These are the things
Zach Whitman: the marketing teams for the model builders will cite: hey, I'm better than this other model builder because I scored higher on this thing. You need to keep an eye on those, maintain awareness of those, but then also consider internal benchmarks. One thing we're working on right now at GSA is mission-centric, functionally driven benchmarks:
Zach Whitman: things that we care about that general-purpose model builders will never be able to cater to. They're too specific to what we do. If you know inside-GSA baseball: PBS-type activities, FAR-based activities. Or if you're thinking about FAS Fleet. We have a lot of these very specific things that we do that
Zach Whitman: we can benchmark the quality of these models against. The FAR is a good example: it's a public document, and all the models really know [00:11:00] about it. How good are certain models at answering FAR-based questions? We've developed our own method to measure the quality of that. And it's important, because when you get a new model in, for example, you need to know if the new model is better than the old model at answering FAR-based questions.
Zach Whitman: If you're going to augment it with a RAG, for example: do you need to? How good is that model at answering FAR-based questions? Is it fit for your use case, or do you need to keep improving it by doing something to that model, either bringing new data in or augmenting it with some sort of agentic flow
Zach Whitman: to improve that quality? Or is it good enough? There's a big difference between generating ideas for a press brief and doing something for an acquisition; these are very different stakes. So having acceptable standards for when a use case can employ a certain model needs to be backed up by data, needs to have science applied to it, to say: here are the scores this model can reach.
Zach Whitman: Here's the [00:12:00] variability when we retested it, and we retest it constantly. Is this model fitting within the two deltas we need to have confidence that it's appropriate for this high-stakes use case?
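What such an internal benchmark might look like in skeleton form, assuming a placeholder `ask_model` client and SME-written graders; this is a sketch of the idea, not GSA's actual harness:

```python
# Illustrative sketch of a mission-specific benchmark with repeated trials,
# so a "two deltas" fitness check is possible. ask_model and the graders
# are placeholders you would supply.

import statistics

def run_benchmark(ask_model, graded_questions, trials=5):
    """graded_questions: (question, grader) pairs; each grader maps a model
    answer to a score in [0, 1]. Returns (mean, std dev) across trials."""
    trial_scores = []
    for _ in range(trials):  # retest to capture run-to-run variability
        scores = [grade(ask_model(q)) for q, grade in graded_questions]
        trial_scores.append(sum(scores) / len(scores))
    sd = statistics.stdev(trial_scores) if trials > 1 else 0.0
    return statistics.mean(trial_scores), sd

def fit_for_use(mean, sd, floor=0.90):
    """Even the pessimistic estimate (mean minus two deltas) must clear the
    bar set for a high-stakes use case. The 0.90 floor is hypothetical."""
    return mean - 2 * sd >= floor
```

The same harness can be rerun on each new model release, which is what turns "is the new model better than the old one at FAR-based questions" into an answerable question rather than an anecdote.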
Karim Fadel: And it's clear at this stage that not all models are created equal,
Karim Fadel: so it's important for agencies to think about what they're trying to achieve when selecting models.
Zach Whitman: Yeah, it has to do with the domain, the use case. Is this specific for coding? Is this specific for acquisitions? Is it specific for the FAR? Is it specific for PR? There are a million different routes you can take.
Zach Whitman: And so having telemetry available, so you can measure the performance of the different facets of your business against these models and select the right model, is a complicated question. Is it the cheap one? Is it the fast one? Is it the more robust one? You need a way to measure that.
Karim Fadel: Alright, we're going to switch gears and talk about AI in acquisitions, and acquiring AI. Talk about it at a higher level: what role do you think AI is going to [00:13:00] play in acquisitions?
Zach Whitman: I think it's going to be endemic. I think it already is. We get a lot of RFPs that have been written in part by AI, and I think the work of sorting through responses is naturally going to be, in part, augmented by AI.
Zach Whitman: We started our AI journey early. One of the things we're thinking about is how we move problems left: how do we continually catch issues without having to apply subject matter expertise to something that could be caught by something a little more automated? Compliance language for Section 508 was one of our early use cases,
Zach Whitman: just making sure that the submission ticks the box. That was our starting point. Now we're starting to move into process efficiencies: how can we produce summaries that let our subject matter experts focus their attention on the things that really need it, rather than spending time on rote [00:14:00] reviews and routine efforts?
Zach Whitman: So in large part we're looking at process automation, and the other part is looking after the fact to ensure we're seeing consistency across our acquisition practices, doing a deep dive into our data holdings of prior acquisitions, and monitoring. So we're seeing it across the full stack, from early acquisition phases, through active acquisitions, to postmortems: the entire life cycle.
Karim Fadel: Cool. And so talk about headwinds and tailwinds. What do you see as the biggest challenges, and what do you think will accelerate adoption of AI in acquisitions?
Zach Whitman: Quality, accuracy, and precision, to me, are the things we're most focused on. Obviously we're all aware of the potential for scenarios like hallucinations or, you know, a lack of completeness in reviews.
Zach Whitman: And so before we deploy anything, like I was saying earlier, we need to make sure [00:15:00] we're meeting standards of quality. That's where having standardized means to benchmark the approach comes in: knowing just how precise the tool is, and its accuracy. In terms of precision, we have things like needle-in-a-haystack-type tests, where we try to ensure that it can recall very specific information from a complete document.
Zach Whitman: And in terms of accuracy, or completeness, we're making sure there aren't any potential misinterpretations or hallucinations, and making sure the completeness of the document is being reviewed and understood. So there are a few different facets we're taking into consideration, and the
Zach Whitman: main effort there is making sure that we can get the time from the SMEs to develop these measures, because ultimately they're the ones [00:16:00] with the expertise to define them. And then it's up to us, the chief AI officers, to operationalize their measures so that we can provide them with a useful tool.
Zach Whitman: And so, you know, making sure that people who are already overwhelmed and super busy can carve out some time to help us build that automation is a challenge, for sure.
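A toy version of the needle-in-a-haystack probe mentioned above; `ask_model` is again a stand-in for whatever completion call a platform exposes, and the document is synthetic:

```python
# Illustrative sketch: bury one specific fact in a long document and test
# whether the model recalls it from the start, middle, and end of the
# context. ask_model is a placeholder.

def needle_test(ask_model, filler_paragraphs, needle, question, expected):
    results = []
    for pos in (0, len(filler_paragraphs) // 2, len(filler_paragraphs)):
        doc = filler_paragraphs[:pos] + [needle] + filler_paragraphs[pos:]
        prompt = "\n\n".join(doc) + f"\n\nQuestion: {question}"
        answer = ask_model(prompt)
        results.append((pos, expected.lower() in answer.lower()))
    return results  # [(insertion position, recalled correctly), ...]
```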
Karim Fadel: And how about acquiring AI? What should agencies think about? What risks should they try to mitigate? What are the key considerations for agencies when acquiring AI?
Zach Whitman: So, one: being able to detect that AI is present in a response is kind of a weird but required thing that we need to start to really press on. It's important for us to know that the solicitation includes AI, because oftentimes it won't be clear that AI is part of a deeper stack.
Zach Whitman: So first, having that awareness of what is in and what is out. [00:17:00] Secondly, we want to make sure the basics are covered. What is the model card? What models are being used? How are they being used? These are questions we've baked into our governance processes: if you want to bring an AI solution to the organization, who built the thing? What data was it
Zach Whitman: trained on? Who manages it? Where is it run? These are the types of questions we faced when we were looking at DeepSeek early on, right? What kinds of biases are potentially present in this thing? And, you know, you can find DeepSeek on all the major hyperscalers, right? So it's potentially easy to acquire.
Zach Whitman: And it's being run on US hardware, but the durability of those models against being broken is quite low. So there are a variety of different factors we need to know about an AI solution to control for the risks and make sure we're measuring them consistently. I would [00:18:00] also suggest, as part of the governance, having separation of concerns between the agency's risk appetite, what kind of risk profile the agency wants to take, and who's actually doing the measurement of that risk.
Zach Whitman: And so we've separated the concerns. The executives say: this is the risk profile, this is the landscape we want to see, here's where we'd like to see our risk being applied. And then for the expenditure of that risk, we look to the safety team,
Zach Whitman: which is one level below that. They're the ones quantifying each use case's risk, then applying that to the risk profile and trying to find a balance that meets the executive needs.
Zach Whitman: So I would say: separate out the concerns, make sure your executives can set the strategic direction, and allow your lower-level practitioners to define and identify the risks. And in that acquisition, have all of those [00:19:00] questions addressed upfront, before you even get to security.
Zach Whitman: If the use case isn't applicable for the agency, or it doesn't fit that risk profile, there's no need to do any of that security work.
Karim Fadel: Again switching gears. You talked about AI-augmented code generation; we're talking about models, and industry will start to bake in efficiencies based on maturity. In that vein, how should buying agencies think about those efficiencies when they're considering levels of effort in this new paradigm?
Zach Whitman: Yeah, it's an interesting point, because I think it goes back to the culture. It goes back to the systems you have in place to benefit from these tools. It has to do with your workforce, and how much you want to build versus buy. We've seen a shift: in the past we [00:20:00] were far more interested in, you know, kind of farming out the development efforts.
Zach Whitman: Now we're seeing more of an insourcing effort, where we want to build a little bit more, using AI to help us build, and empowering folks who aren't necessarily engineers to begin to use these tools to develop their own tooling and augment their own workflows. And so, with development, we've been taking the approach of
Zach Whitman: letting more flowers bloom than not. We want to see more tooling being put into place by our workforce, and to understand what original ideas can come out of that organically. What happens when you empower your workforce to begin to decompose their workflows into discrete
Zach Whitman: steps, and then give them the tools to begin to actually write some automatable steps? I think that's been an interesting perspective, and it's led us to have a higher appetite for [00:21:00] purchasing these tools, acquiring these tools, and then baking them into processes.
Zach Whitman: The only condition we'd set, to ensure that we have these controls in place, is having a singular point where we can see what models are being used and how they're being used. So it comes back to observing the use, but allowing tooling to exist in as many places as it's being asked for, wherever we possibly can.
Karim Fadel: Not to add anything to your plate, but perhaps another opportunity for benchmarks?
Zach Whitman: Yeah, well, to standardize, absolutely. I think the benchmarking piece is why we are constantly asking for a singular proxy where we can gateway the calls, so we can maintain the prompt-and-response telemetry. That's where we can sample from the field what kinds of prompts are being asked and, importantly, how the models are responding, and apply those benchmarks in an evergreen manner rather than just at acquisition: always [00:22:00] running our benchmarks, always improving our benchmarks, adding processes to have
Zach Whitman: subject matter experts continue to flesh out our benchmarks. Using practical application has been a huge benefit for us.
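The gateway idea in miniature, as a sketch: route every model call through one choke point that records prompt-and-response telemetry, so field traffic can later be sampled into evergreen benchmarks. `call_model` and the log destination are placeholders, not a real GSA interface:

```python
# Illustrative sketch of a telemetry gateway wrapper.

import json
import time
import uuid

def gateway(call_model, telemetry_log):
    """Wrap a model client so every exchange is observable."""
    def wrapped(prompt, **kwargs):
        record = {
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "model": kwargs.get("model", "default"),
            "prompt": prompt,
        }
        response = call_model(prompt, **kwargs)
        record["response"] = response
        telemetry_log.write(json.dumps(record) + "\n")  # sampled later for evals
        return response
    return wrapped
```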
Karim Fadel: Let's talk agentic AI. Agentic AI has been all the rage in 2025. What potential impacts would you suggest we consider when looking at autonomous, adaptable, goal-oriented systems,
Karim Fadel: and their effect on decision making?
Zach Whitman: Yeah, hands-off is the real step: at what point do you take your hands off the handlebars and trust the system? For us, we are starting by instituting the technical hops required and the infrastructure that we need. So the data piece: building databases, and then building MCP servers on top of those, [00:23:00] concepts like that, where we can start to
Zach Whitman: construct experiments and demonstrate that we can measure the quality, that we can observe the type I and type II errors. From there we go back to: what are the standards of care we need to apply to this use case? What are the requirements? And, importantly, what happens when they go wrong?
Zach Whitman: We've red-teamed the concept: when this problem happens, what are the outcomes, and what firewalls are in place to allow us to rectify that problem? And so the main idea behind the agentic space, for us, is starting with as narrow a definition as possible,
Zach Whitman: having humans in the loop at that step. And then, once we've proven that we are at or above the quality of direct human practitioners, we can take that intervention [00:24:00] out and move it one step to the right of that process. Taking that stepwise approach allows us to be very consistent, minimizing the variables in play as we measure those processes, and ultimately beginning to bridge larger and larger gaps between the workflows and our intervention steps.
Karim Fadel: I think we have time for one last question. When you and I were prepping, we talked about the talent aspect of AI. Can you talk about what agencies can do to ready their workforce, and maybe share some things that GSA is doing?
Zach Whitman: Yeah. The talent is, I think, as complicated as the AI itself. The talent aspect, the communication,
Zach Whitman: is an incredibly complicated step for us, and something that we've been learning a lot about. We've been consistently doing studies of how the workforce has been reacting to AI tooling. We started with making public tools available, allowing our employees to [00:25:00] create accounts and begin to use them for non-sensitive use cases.
Zach Whitman: We then surveyed the group that put their hand up and took that initiative, and found overwhelmingly positive responses to the availability of those tools. There was some concern about the limitations; over-reliance on these tools was a big factor that came up. And frustration, or fear: if I uploaded data to it, is that
Zach Whitman: a problem? That's where we originally learned about some of the fears we needed to absolve employees of, making sure they felt like they were in a safe position to use the tools and experiment with them. Culturally, it's been a huge problem.
Zach Whitman: The sample set for that user group, by the way, was entirely early adopters, which is going to skew your results entirely. Whenever you do that, you're going to see a bunch of people who are all in on this stuff; they love it. Pulling in the non-adopters, the [00:26:00] laggards in your adoption curve, is where we're seeing the largest impasse.
Zach Whitman: And so we've felt the best approach to combat that is to blanket the organization with the availability of these tools, so that you come into contact with them more often than not. This breaks down a lot of barriers of traditional tech, where if you haven't received training, or it's new, being generally fearful of it is a natural response.
Zach Whitman: I've not seen this technology, I don't know how it works, I'm not really going to engage with it. Making it much more approachable has been step one. We've also been doing these weekly, I don't know what you'd call them, workshops? Little demos. Friday demos is what we call them: every Friday we give everybody an open forum to show off what they're doing.
Zach Whitman: And usually these are devs who are building something. But [00:27:00] oftentimes we're getting folks who have never coded in their life, who are like: I built a little web app using this chatbot, and now I can kind of automate my workflow, and here's the chat history, here's how I did it, here's the code.
Zach Whitman: And it just begins to shave off the rough edges of this cultural issue of: I shouldn't be using this, or this is cheating, or, you know, I can't do this for my work. It also empowers people to start to think: just because you're not a dev, you're not trained to be an engineer, doesn't mean you can't start to do these
Zach Whitman: semi-technical things. So that's been helpful, and it makes it real for other people who haven't started using it.
Karim Fadel: Yeah. I could go for another 20 and another 20 after that, but unfortunately we get kicked off the stage. That's all we have time for. You've given us a lot to talk about and think about.
Karim Fadel: Thank you very much. Soraya, I don't know if we have time for an audience question?
Soraya Correa: I want to ask this question, because I think it's a really important question that came through the chat. The question is: how will you balance the time and cost of accuracy and [00:28:00] precision in AI governance and workforce adoption with the rapid pace of AI evolution and the growing need for AI to accelerate and scale real solutions
Soraya Correa: to stagnant government problems? That's a really loaded question. I couldn't resist it.
Zach Whitman: There's a little stank on the end of that: stagnant government problems. No, I mean, yeah, it's a fair point. I think what's interesting is, if you just wind the clock back a year or so, everyone was a prompt engineer for a minute, and then RAG started to take over.
Zach Whitman: Now we're into MCP territory and agents. We are being outpaced, for sure. I think everyone is, like Tim was saying, holding on by our fingernails. For us, we've always gone back to: this isn't necessarily a technical problem, this is a use case question. Is it appropriate to do something like this?
Zach Whitman: And if it is, prove to us that that's the legit solution, rather than saying: this thing can do it, look, [00:29:00] I can show you this one time that when I prompted it, it gave me the perfect answer, let's just go and deploy that. It's like: that's great. Do that a thousand times. Red-team it. Show me that you can do more than that.
Zach Whitman: And so we've been thinking about how we can give folks enablers. An example would be: I want to build a RAG that does, like, a FAR chatbot. Well, I need a test to prove that it's pretty good. So let's use AI to build that initial test, and then we'll have some requirements to bulk it out. Then we'll start to sample from the field and deploy it as a demo.
Zach Whitman: Then we can start to roll it out to more and more people. So it's having the infrastructure in place where you can slowly turn up the dial, in terms of usage, users, and production, so that you can do it in a way that is tangible, that feels like you're actually making progress, but also doing it in a way where you're constantly reinforcing your measurements
Zach Whitman: from field samples. And so that's the approach we've been taking.
Soraya Correa: Yeah, we have to create believers, right? We've got to get them adapted [00:30:00] to it. All right. Well, unfortunately we're out of time, but I really appreciate, Karim and Zach, your excellent insights. Certainly I have a couple of other questions, but what I would recommend is grab Zach as he's running out the door,
Soraya Correa: because I'm going to do the same thing anyway. But join me in thanking these two gentlemen for the excellent presentation.