HSDF THE PODCAST
The Homeland Security and Defense Forum proudly presents HSDF THE PODCAST, an engaging series of policy discussions with senior government and industry experts on technology and innovation in government. HSDF THE PODCAST looks at how emerging technology, such as Artificial Intelligence, cloud computing, 5G, and cybersecurity, is being used to support government missions and secure U.S. national interests.
Transformative AI & Technology for Decisionmaking Part 1
Welcome to “HSDF THE PODCAST,” a collection of policy discussions on government technology and homeland security brought to you by the Homeland Security and Defense Forum.
In this episode, we explore how the Coast Guard turned AI from promise to practice by embedding governance in architecture, treating data as a product, and keeping humans accountable at critical decisions. Real use cases in HR and audits show faster, fairer outcomes with measurable gains.
Featuring:
- CDR Jonathan White, Cloud and Data Branch Chief, U.S. Coast Guard
- Carin Quiroga, Chief Data Officer, Immigration and Customs Enforcement
- Courtney Whelan-Stillmun, Principal Architect, Google Public Sector (moderator)
This discussion took place January 22nd, 2026, at HSDF’s Technology Innovation in Government Symposium.
Follow HSDF THE PODCAST and never miss the latest insider talk on government technology, innovation, and security. Visit the HSDF YouTube channel to view hours of insightful policy discussion. For more information about the Homeland Security & Defense Forum (HSDF), visit hsdf.org.
Defining Transformative AI In Government
SPEAKER_00: Okay, thank you so much. It's a pleasure to be here with Commander White and Carin, and I'm looking forward to diving into everyone's favorite topic of data and AI. So I'm just going to jump right into it. When we talk about transformative AI for decision-making in government, what does transformative realistically mean today? Better speed, better insight, better outcomes, better trust, et cetera?
Coast Guard’s Three-Year Cloud Shift
Building The Integrated Data Platform
SPEAKER_02: Yeah, so there's a long lineage in transformation, right? The way we approach technology is probably a little ad hoc sometimes. We say, oh, we need to replace this system, we need to build this new capability. About three years ago, when I started this journey as the Cloud and Data Branch Chief, it was the first time the Coast Guard actually made that a focal point. It was an actual organizational change. We said, hey, we're going to build a team that is 100% focused on getting us into this future state. Our view of transformation is radical change: radically transforming how we approach problem sets. When I planned this out, I created a vision for how we would interact with this new environment in the future. That vision had three main principles: speed, resilience, and value. I put some underlying criteria on that, and every year we would pulse-check our movement through that transformation. Today, we have over 25 workloads in production in our cloud environment. I just finished a whole round of programmatic reviews for them, and we have a lot of satisfied customers. That's three years of work, 25 workloads in the cloud, and an incredible amount of opportunity for us. Beyond that, we've also created a lot of foundational technologies that are paramount to this talk in particular. We've built a data fabric, which we call Surveyor, under the guidance of our Chief Data Officer in our Office of Data and Analytics. We also created a software factory, a DevSecOps pipeline capability. We've added three low-code environments to our repertoire, and we're building capability off of those environments. All of those form what I call an integrated platform experience.
So we're taking all of these capabilities and putting mission applications on top of them. The underlying data is shared between those applications, and then we can build analytics, AI, APIs, and every other experience we want off of that shared foundation. That's the level of transformation we've done in three years. It's not a one-for-one replacement.
Real Use Cases In HR And Audits
SPEAKER_01: Yeah, for me, the transformation: previously in speaking engagements about AI, we kept saying, we're not there yet, we're not there yet. It's exciting because this is the first year I can really say we've seen that transformation, and we're really starting to use AI to make informed decisions. For me, it's about changing from using data reactively, looking at it after the fact, to using it in an automated way: getting faster decisions, more informed decisions, catching things that maybe we would have missed without AI. We're using it right now with HSI on our worksite employment work, and we've seen it make tremendous progress by taking what an auditor would look at on a handwritten form, pulling that data in, establishing reliability on the data, but then also highlighting where there are nuances, saying, hey, you should look at this. What that's done is, one, decrease the workload and a backlog that was almost impossible to get through, and two, free up time for the officers and agents to do other work. It's been really transformative. There are tons of use cases, but it's really about getting more mission value and outcomes, more informed decisions, and higher-quality decisions being made.
Operationalizing Governance And Zero Trust
SPEAKER_00: That's great. Taking a data-centric, foundational approach really makes a lot of sense, making sure you have that foundational layer in place to build on top of. And if you think about the history of AI, generative AI really came into play in 2023, when OpenAI released ChatGPT for the first time. Of course, we've had predictive AI since it got big around 2010, and we've seen it continuously grow, but this is still a relatively new technology, so it's amazing to hear about that transformation. So when it comes to governance, ethics, and guardrails, how do you operationalize the principle of human in the loop, given both of your missions? And in real-world systems, how does it go beyond being just a policy statement?
Authorized AI Services And Guardrails
Human In The Loop By Design
SPEAKER_02: Yeah, governance can be a four-letter word in some places, right? But it's really important to establish it up front. You want to establish the norms, the rules of behavior, how you want to interact with this technology, especially if it's new. It's very important that we do that in a very open and collaborative way. That goes all the way down into the trenches, the technology layer, the infrastructure layer. Normally you don't talk about governance there; usually governance is applied to you. But what I did was actively reverse that approach. We're building something new, so let me bring the people from headquarters down into where we're operating, essentially an engineering service, and ask: how do you want us to represent your governance? We want to build that right into the framework. Not bolt it on, not put a layer on top, but infuse it into the experience. And there were a couple of trends that happened during this. There was the zero trust movement; we have a goal of being zero trust compliant by 2027. That was a piece of governance that is really technical in nature, but it transcends into governance on data security and interoperability. Then you have the AI revolution, which happened: oh, guess what, now you have to deal with AI. And that is a fast-evolving situation. Every single year we basically have to look at our governance rules, scratch them out, and rewrite them to keep pace with how commercial industry is moving and, simultaneously, how fast the government is moving. We started with what I would call unauthorized AI applications; those were disallowed, and now we're replacing them with very much authorized systems.
A great example of that is the introduction of GenAI.mil. And it's an interesting philosophical question: if some other entity deploys a capability like that, can you just immediately start using it? What are the ATO rules on that? What is our ability to share these experiences between components? Typical deployment has been, I'm deploying it for myself: I'm building a Coast Guard service, or I'm building an Army service, or whatever it is. But now we're building these transcendent services that cross all manner of lines. What I really like is that this is being paired with rules of behavior that say: I want you to use this, this is a safe place; I don't want you to put your stuff in the unsafe place. So I think we're having this great trade-off between capability and governance right now, and we're on board with that train. We released a bunch of rules of behavior, ways of interacting with AI, the authorized services. The Coast Guard has been on the forefront of doing that, and our Chief Data and AI Officer has been keeping pace with that movement, which is really exciting. On the other side of the coin, when you talk about data governance, which I think Carin is going to talk a lot about, we build the infrastructure for data governance. I always say to our customers, I don't really care what your data is; it's ones and zeros to me. It could be completely encrypted, it really doesn't matter to me. But what I want is to make sure the rules of ingest and egress in my system are in place and comply with your requirements. Because at the end of the day, you're responsible for your data: there's a data steward, a data owner. And I'll let Carin continue that one.
SPEAKER_01: Yeah, and as far as the human in the loop, it can't be an afterthought. It's got to be built into the process, because we have so much work on our plates, life is so busy right now, and it's so easy to get comfortable and rubber-stamp and just approve. So you have to take the human element and build it into your workflows and your automation. You have to identify the critical decision points where a human must make the decision. For me, it's training: telling people what the limits of AI are and what the capabilities of AI are so they understand that, but at the same time making sure the decision maker knows that at the end of the day they are accountable. AI isn't here to replace us; it's not here to replace that decision-making, it's here to help make those decisions more informed and easier to make. So it's really about the human judgment aspect and making sure people know they're accountable for it.
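The pattern described here, identifying critical decision points where a named human must act and recording who was accountable, can be sketched in a few lines. This is an illustrative sketch only, not either agency's actual system; all of the names (`Decision`, `resolve`, the audit log shape) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    case_id: str
    recommendation: str   # what the AI suggests
    rationale: str        # why, so the reviewer can check the reasoning
    critical: bool        # critical decision points always require a human

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

def resolve(decision: Decision, human_approve, log: AuditLog) -> str:
    """Route a decision: critical ones must be approved by a named person."""
    if decision.critical:
        approver, approved = human_approve(decision)  # blocks until a person acts
        outcome = decision.recommendation if approved else "escalated"
        # record who was accountable, not just what the model recommended
        log.entries.append((decision.case_id, approver, outcome))
        return outcome
    log.entries.append((decision.case_id, "auto", decision.recommendation))
    return decision.recommendation
```

The point of the sketch is that the human checkpoint lives inside the workflow itself: a critical case cannot complete without a reviewer's identity landing in the audit trail.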
Crossing Boundaries With Data Sharing
SPEAKER_00: Yeah, absolutely. And you both hit on a key point: integrating those guardrails and controls into that foundational layer, into the security layer, embedding them into the architecture as you build out the solution. It shouldn't be an afterthought or something you bolt on later; it's something you incorporate into the architecture of the workload you're developing. So that makes a lot of sense.
SPEAKER_02: I'd also like to tease out one thing: data sharing is a really big concern. We share data all the time with our DHS components, and we also share data with DoD. The Coast Guard is kind of a sharing nexus: we have all the data, and we want to give it, or take it, or combine it. When you cross those organizational boundaries, all of a sudden you're crossing governance lines, and you're crossing schema lines at the real technical level. We have to set norms and standards about what exactly you want to do with that data, because how I present it to you, how I give it to you, is quite important. The context in which that data is going to be used should be understood up front. But do you want to share on that?
Data Fabric, Mesh, And Products
SPEAKER_01: Yeah, absolutely. And I think that's where governance comes in: knowing what data we have in inventory. We're going through this right now with DHS; there's a lot of data sharing going on across components and even outside of DHS. So it's about ensuring we have governance, that we know our inventory, that we have quality data from real-time, authoritative sources, but also having the infrastructure and the environment, the data platform, to be able to share. That's something we're working on this year. One of the biggest priorities of our CIO, and of the DHS CIO, is to create the environment and the data architecture that lets us share more easily: a platform for data fabric and data mesh, and data products, so we don't just have raw data, we have data products that are clean, consumable, and ready to be used. That is one of our highest priorities and key to making sure the data gets out there.
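The distinction drawn here between raw data and a data product, a cleaned, consumable dataset with an owner, a schema contract, and an authoritative source, can be made concrete. This is a minimal sketch under assumed names (`DataProduct`, `validate`); no real DHS system is implied.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DataProduct:
    """A consumable dataset plus the metadata a consumer needs up front."""
    name: str
    owner: str       # accountable data steward
    schema: dict     # column -> type: the contract with consumers
    source: str      # authoritative system of record
    refreshed: date  # how current the data is

def validate(rows: list, product: DataProduct) -> list:
    """Keep only rows that honor the product's schema contract."""
    clean = []
    for row in rows:
        if set(row) == set(product.schema) and all(
            isinstance(row[col], typ) for col, typ in product.schema.items()
        ):
            clean.append(row)
    return clean
```

The design choice worth noting is that ownership and schema travel with the data: a consumer in another component can check the contract before use instead of discovering quality problems after the fact.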
Why AI Value Lags: Data Quality
SPEAKER_00: Yeah, and that's something we've seen emerge as a really key priority in 2026. I saw a stat in 2025 that basically 78% of organizations are using AI in some way, shape, or form, but only 5% are actually seeing value out of those use cases. There's a multitude of reasons cited for that, from individual usage patterns to the difficulty of measuring that type of value, but the biggest was the data: actually being able to make use of the data in a governed, controlled, efficient, and auditable way.
SPEAKER_01: Yeah, I would agree with that, because there are a lot of data quality challenges. Two of the use cases off the top of my head: I mentioned the I-9, and another one is looking at resumes. That's new data coming in, right? Resumes are new data. We have the old data, like, hey, these are the minimum requirements, but the resumes are new data coming in, and we're taking them as they are. I think as we get to data that we already have and start linking those together, it will be a bit more challenging, because we are running up against data quality issues.
Modernizing BI And Transparent Pipelines
Treating Data As A Product
SPEAKER_02: Yeah, and that's the journey we've been on for the last six months. We have transitioned our business intelligence tool from a legacy system to our modern data fabric. We finished phase one, we're on to phase two right now, and the number one challenge is representing the data. What you have are these legacy pipelines that were built at some point and baked; they're somewhat static. You're going from a static system where you take source data from known applications, do a transformation, put it in a data warehouse, and then scavenge that information out of the data warehouse. The data as it is represented across that entire chain is very, very different. So by the time you're doing analytics or measures or metrics off of that information, you don't really know what happened beforehand. What I say when I'm selling the data fabric and data mesh concept is transparency, transparency, transparency. It's all about representing exactly what happens, controlling it, making it visible, and ascribing an owner to it. Because what we don't want in the future is static data pipelines. Data changes all the time, the use of that data changes all the time, and you need to be able to be dynamic with it. I often compare data pipeline work to software development work; we really should be thinking of it like that. Data teams are actually software development teams: they're writing Python code, they're doing SQL, they're using AI all the time. I've used AI in those data pipelines to help smooth data for delivery, because I'm in the fight, right? I am helping this journey out. And that is a turn that I want to see in policy as well.
Treat data as a dynamic asset. By doing that, you treat it as a deliverable, like an actual product: not just the end result, but everything that goes into making it.
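The "data teams are software teams" point above can be shown in miniature: each pipeline step as a plain, testable function, composed in a way that leaves a visible lineage instead of a static, opaque chain. A minimal sketch, with hypothetical names (`normalize_names`, `run_pipeline`):

```python
def normalize_names(records):
    """One pipeline step: a plain function that can be code-reviewed and unit-tested."""
    return [{**r, "name": r["name"].strip().title()} for r in records]

def run_pipeline(records, steps):
    """Run each step in order, recording lineage: which step ran, row count after it."""
    lineage = []
    for step in steps:
        records = step(records)
        lineage.append((step.__name__, len(records)))
    return records, lineage
```

Because every transformation is an ordinary function, it gets the software-development treatment the speaker is asking for: version control, review, and tests, and the lineage record makes "what happened beforehand" answerable.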
SPEAKER_00: Yeah, absolutely. I mean, if you think about it, we're developing microservices that have AI components to them, so why would we not treat that the same way we treat application development? I know I'm deviating here from our question.
Strengthening Judgment With Explainable AI
SPEAKER_01: It's my fault, sorry.
SPEAKER_00: But anyway, going back to human judgment and accountability: in a high-stakes environment, how do you ensure AI strengthens human judgment rather than narrowing it? And how can leaders clarify that trusting AI does not mean automating ethics and compliance?
Confidence, Pilots, And Rollout
SPEAKER_01: Yeah, it's hard, but another example we use is in HR. We have tons of resumes coming in, we're hiring like crazy, and it's a huge process. But we realized there was an AI aspect: you have your minimum quals, so let's look at the resumes and use it. Where AI helps the human judgment is that it can flag a resume and say, hey, this doesn't match, this doesn't meet the minimum quals, but it's transparent about the reason: because of this, found on this, citing exactly why it made that call. Then the human comes in, reviews it, and makes the best judgment: okay, was this right? Did it make the right flag for us, or is this wrong? That's important for two reasons. One, you're accountable for the decision being made at the end of the day. Two, it's important to look at it to understand: hey, do I need to refine our training model? Is there something we missed that we could do better? It's always going to keep improving as new experiences and new degrees come out, things like that: hey, did this meet the minimum qual? Refining that and making sure you're constantly adjusting and learning is key. And then you're building trust. I do think full trust of AI is going to take time, but we're seeing it as we deploy: the responses, the growth, the confidence rates. When we were doing the I-9 work, it started out at about 89% confidence in what it was reading. Then we tweaked it and got up to 97%. We ran it at a small pilot site, did a lot of auditing and reviews over it, and then realized, hey, this is giving a lot of mission value; the output is reliable and truthful. Let's go ahead with a nationwide rollout, which we did.
So I think it's those kinds of things: keeping the judgment in place, but rolling out slowly. Even the other day, one of my cousins said, I'm applying to HSI and I'm not getting through; they said I didn't even meet the grade nine quals. And immediately, even I was questioning it: was that because of the AI? Let me look him up and really check, because he said, I'm confident I meet the superior qual. So I think it's going to take time, but as we show these use cases and success stories, it's going to build confidence over time. And it's good, because we should be questioning it, right? Absolutely. We don't want to make huge mistakes. Yes, in this case it's a job application, but it could be something bigger in the future.
Thresholds, Escalation, And Monitoring
SPEAKER_02: I take two paths to this question. There's human-initiated action, which is a lot of AI today: you go to a site, type in your query or add your documents, hit enter, and then take that output and do something with it. Human in the loop is very explicit in that context. Then there's the other path, AI just happening, which is what Carin is talking about: these non-person-entity, agentic AI workloads. As you start shifting workloads from the first category to the second, the data quality requirement shoots through the roof. The testing around it shoots through the roof, and so does the need to monitor and audit it, not from an IG perspective, but from a performance perspective. To your point exactly: somebody calls you and says, hey, I totally met this, what the heck? You should be able to see, oh yeah, your application was flagged as anomalous. So let me shift that one over to a reviewer.
SPEAKER_01: Hired.
Ask Hamilton: Policy Answers Faster
Journey Mapping And Measurable Gains
Bringing AI Into User Workflows
SPEAKER_02: Totally anomalous, right? But there are extra things you have to consider in the development pipeline when you start doing this. When do you shift out of AI and hand it to a human? It might be that your confidence interval is below a 30% threshold, or whatever it is, and then you shunt the case into a human-driven workflow. For our purposes, we're very focused on the first category right now. We're building a product, Ask Hamilton, and we very much want to shrink the time it takes to get an answer to very specific Coast Guard questions. Think about all of our policies and manuals and message traffic, all the things that hit the everyday user, who has to be totally aware of them at all times. I'm not totally aware of everything. If one of my junior officers comes in and says, what do I do here?, I say, well, let me look up the manual, and I pull it from my own OneDrive and it's out of date, because I downloaded it ten years ago. So now I have to go on a hunt to find that document. How about this instead: I tell him, just ask your question in Ask Hamilton, get the reference, and read the reference it pulls back. I just shortened that kill chain substantially. And that's built on Coast Guard data. Same thing with our actual mission communities: our marine inspectors, our naval engineers, our legal teams. The Coast Guard does a lot of legal reviews. Can they at least find the information they're looking for really quickly? We're still putting the human in the decision tree, 100%. We're doing these journey maps where we say, here's the process pre-AI, here are all the choke points, here are all the failure points, here's all the time spent doing this work.
And that's a really hard thing, because you've got to start listening to the negative stuff. Nobody wants to talk about the bad stuff, but you've got to talk about the bad stuff. Then say, okay, if I put AI in here, what do I collapse in that journey map? Maybe I take a three-day process down to a six-hour process on the ideal side. That is what we're going to use to help sell this solution, and also to test and validate it. So that's human in the loop for me. And as we start moving closer to that second tier, I think it's much more about bringing the AI to the user. In the app where they're doing the work, in the case management tool or whatever it is, they have an AI assistant working with them along the way. It knows the context of their case, it knows the context of other cases, and it can help facilitate that journey through the workflow.
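The escalation rule described above, shunting a case into a human-driven workflow when model confidence falls below a threshold or the case is flagged anomalous, is simple to express. A minimal sketch, assuming a dict-shaped model output; the field names (`confidence`, `anomalous`) and the 30% default are illustrative, taken from the speaker's example rather than any deployed system.

```python
def route(extraction, threshold=0.30):
    """Decide whether a model output can be auto-processed or must go to a person.

    `extraction` is a dict with a model 'confidence' score in [0, 1] and an
    optional 'anomalous' flag; both names are assumptions for this sketch.
    """
    if extraction["confidence"] < threshold:
        return "human_review"   # low confidence: shunt into a human-driven workflow
    if extraction.get("anomalous"):
        return "human_review"   # flagged anomalies always get a reviewer
    return "auto_process"
```

Keeping this routing rule explicit and auditable is what makes the earlier scenario answerable: when someone calls and says "I totally met this," you can show exactly which branch their case took and why.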
SPEAKER_00: Yeah, that makes a lot of sense. And Carin, what you were saying resonated with me as well around an iterative approach. All the workloads I work on with our customers are iterative: teams learning the technology, breaking, quote-unquote failing, and then finding their way to the solution that makes the most sense and determining the right level of human involvement as they go.