
AI Proving Ground Podcast
AI deployment and adoption is complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
AI Proving Ground Podcast
AI Hype to Mission Impact: Dell Federal, NVIDIA and WWT's Government Playbook
Federal technology leaders reveal how to start small, set data‑cyber guardrails and scale fast — all without blowing the budget. Listen to Dan Carroll (Dell Federal Field CTO), Ryan Simpson (NVIDIA Chief Technologist for Public Sector Partners) and Brandon Bulldis (WWT Federal Civilian Engineering Director) discuss how federal government agencies are sprinting from AI hype to real mission impact. Hear wins like multimodal chatbots and desk‑side AI supercomputers, plus three battle‑tested rules: guardrails first, design for surge and invest in people. Watch or listen now and leave with a clear playbook for infrastructure, talent and funding.
More about this week's guests:
Dan Carroll is the Field CTO for Cybersecurity – US Federal at Dell Technologies, helping government customers meet cybersecurity and compliance goals. He partners with teams to build solutions that align with standards like HIPAA, FISMA and NERC-CIP. Dan also leads R&D collaborations with groups like NIST and DoD labs, shaping future tech in areas like 5G, IoT, supply chain assurance and digital twins.
Dan's top pick: Advancing IT for Federal Agencies with Dell and WWT
Ryan Simpson is the Engineering Chief Technologist for Federal Partners at NVIDIA, driving AI and data analytics adoption across agencies through the NVIDIA Partner Network. With nearly 20 years in government, including key AI work at USPS, he holds 16 patents in AI and image processing. Ryan brings deep expertise in aligning advanced technologies with public sector needs, policies and mission-driven impact.
Ryan's top pick: About NVIDIA & WWT
Brandon Bulldis began his career as a Communications Tech in the U.S. Air Force and now brings over 20 years of IT experience to WWT's Federal Civilian Engineering team. With a passion for continuous learning, he's helped public and private sector customers align technology with mission and business goals. A strong advocate for WWT's Advanced Technology Center, Brandon is dedicated to empowering organizations through education and innovation in an ever-evolving tech landscape.
Brandon's top pick: Public Sector Overview
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
Generative AI and other AI technologies are making waves in the public sector, promising to transform how government agencies, big and small, operate. But despite the enthusiasm and progress, agencies face real hurdles in adopting AI effectively: data silos and data quality issues, legacy infrastructure not suited to run AI workloads, budget constraints and skills gaps, not to mention security concerns. The list goes on. Today, I'm talking with three experts helping government agencies move from AI slideware to AI that actually works: Dan Carroll, a field chief technology officer with Dell Federal; Ryan Simpson, a senior solutions architect with NVIDIA specializing in supporting public sector agencies; and Brandon Bulldis, WWT's Director of Federal Civilian Engineering. Between the three of them, they have decades of experience helping governments tackle the toughest questions when it comes to enterprise IT, and today they'll give us insight into what they're seeing across the AI landscape, what's working and the first few moves every public sector leader should be making right now. So stick with us.
Speaker 1:This is the AI Proving Ground podcast from Worldwide Technology: everything AI, all in one place. Let's jump in. Dan, Ryan, Brandon, thanks so much for joining us on the AI Proving Ground podcast. I know your schedules are very busy these days. How are you doing?
Speaker 2:Doing well.
Speaker 1:Yeah, excellent, very good. Well, Ryan, I did want to start with you. We're talking about AI in the public sector. So we're talking federal, state, local; the list goes on and on for the number of public agencies that we have around. And, like other industries, this is a conversation of buzz and hype and momentum, but also careful deliberation. So, just to kick us off, if you had to explain the opportunity that the public sector sphere has with AI, if you had to describe that to a cabinet member or a leader of one of these agencies, give us the elevator pitch on what that opportunity is right now.
Speaker 3:I think the short elevator pitch would be we're always trying to do more with less, right? I don't think there's a single agency out there that's ever been able to accomplish all the goals it's given with the budget it's given. I think the opportunity to help optimize the way that our workforce is being used is tremendous. Not only that, but when we think about, you know, a lot of times I had colleagues that I worked with that either had PhDs or master's degrees in mechanical engineering or what have you, and they'd spend most of their day, you know, answering emails or going through and putting together PowerPoint presentations instead of focusing on what they were trained on and probably what they were hired for. And the ability to kind of take away some of those mundane tasks for them doesn't mean that they're not going to be writing emails, but it might be able to help.
Speaker 3:We're seeing some of these workflows doing summarizations and auto-responding, and then you're just kind of signing off on it. I think those are super exciting. It's always been kind of a joke inside NVIDIA that I'm the one that automates everything I do. If I do something twice, the third time it's fully automated, and I think there's lots of opportunity to kind of help with that. That's going to help streamline, you know, response times if you're in public service. It's going to help if you're, you know, potentially an employee just asking, hey, how much time do I have
Speaker 3:for leave, or what happens if X, Y or Z, being able to answer some of those questions. Yeah, I think it's super exciting. Just the amount of potential out there is kind of unknown. There's so many ways we could utilize this tech right now.
Speaker 2:Well, yeah, and I'll add to that, right. What's interesting is that obviously everybody's looking at what's going on in the current administration, what their goals are, right. Some of the current goals they have defined are driving efficiency and driving down costs for public services, and that isn't unique to the federal space. State and local are looking for those same types of things, and if I was talking to a cabinet member, I would tell them that if they don't move on AI, they will be left behind, whether it's a local agency, a state agency or a federal agency. The biggest thing they need to do is keep connection with what's going on in the commercial space and what's touching everybody's everyday lives, if they want government connected to people.
Speaker 2:AI adoption pursuit is critical, and it's also critical on the world stage if the United States wants to keep its competitive edge as it relates to how we perform in the world. In order to do that, AI is the key, right? I know we're going to get into this as the discussion goes on. It is not the hype that past IT trends have been. It is truly transformative in how it is touching us every day now and how it's going to touch us tomorrow. Brandon, what are your thoughts?
Speaker 4:Yeah, I love everything you guys are saying. I think for me, the simple 20-second version that I would say is: it puts data, it puts knowledge, it puts that stuff in the hands of their employees and, to a large extent, puts it in the hands of the citizens to be able to get answers quicker and all that. If anybody's had to call, pick an agency, doesn't matter which one it is, and try to get through and navigate that, AI is going to be able to help get us all that information quicker in a way that actually makes sense, both internally and externally.
Speaker 1:Yeah, well, it's interesting. You know the three of you talk about, you know, automating mundane tasks or plugging into what's happening in the commercial sector, or just the transformational power that AI has. Brandon, I'm curious. It's this weird kind of balance, like, of applying AI to those mundane tasks but also recognizing that there's a lot of buzz and a lot of hype and a lot of art of the possible out there. Do you think that hype is getting in the way of just incremental progress as it relates to AI and public agencies?
Speaker 4:It absolutely is. One of the things that all these agencies, all these leaders, struggle with is: is it AI built within tooling to help them be more efficient within that aspect of it, or are they using AI for some sort of mission or business objective that they're working towards? And it becomes a very kludgy conversation at times. It gets very confusing very quickly as to what AI means. AI within a tool is not the same as using, you know, AI to help with those tasks. It's not the same as, you know, building an AI, so on and so forth. So there are a lot of really smart folks within our federal government that are working through that and trying to help explain that internally on their side as well, but it is definitely an area in which we all need help, yeah.
Speaker 2:And what I'd add to that, Brandon, is that, in general, federal organizations and state organizations are risk averse to technology adoption in early stages because of the importance of their mission, right? They can't afford to do something too cutting edge that might fail. The problem with AI is that I think many organizations may be going, well, I'm going to wait for this to mature a little more. Yeah, and it's like, no, no, no, there are so many problems today that proven AI solutions can solve. You need to start today. There'll be new stuff that's coming down the road. Ryan, I am sure you're having those conversations with customers.
Speaker 3:Yeah, and obviously there's the concern about hype too, because anybody with a subscription to any one of the AI providers can call themselves an AI company right now. I actually had the pleasure of being on a WWT meeting with a customer not that long ago where they were showing how they use AI internally for program management automation, right. So reaching out, and if people are responding to RFIs or RFPs, or you're just having a conversation with a friendly vendor, ask how they're using it, right? If you're worried about whether or not the technology is real, see how they are applying it to make themselves more efficient.
Speaker 3:Right, like NVIDIA internally is a huge adopter. Like we're basically mandated to use AI Codegen tools as assistants to us.
Speaker 3:Not necessarily replacing us, but they want us to be more effective and more efficient. We use it, you know, in every way that we could possibly imagine. We're trying to find ways of optimizing our talent, you know, and reaching out. Like at WWT, I was constantly amazed to see how they're adopting it internally. And then, you know, with Dell themselves, we've got a bi-weekly meeting where we talk about projects we're doing and just kind of share those experiences.
Speaker 2:Well, and the key is that it's all based on foundational technologies, right? I always use this term: it's evolution, not revolution, right? You're not starting from scratch. You're building on past proven technology success to drive new innovation.
Speaker 4:So, yeah, one of the things I'm curious about, right, you've been on both sides of the aisle here, so to speak. One of the things you said, Dan, was about being risk averse, and I agree with the sentiment of what you're saying. I get a little, like, semantic on that, because I don't know that the government, the federal government and local governments, are as risk averse so much as change averse, because of what that change means. And AI is a fundamental shift in how they do things, and really embracing AI and looking at that. Ryan, I don't mean to jump in here, but I'm curious, because you've been on both sides. You've been at an agency, you've been on this side now with NVIDIA. How do we help them with that change? Because it is a change; it's a shift in how we do things.
Speaker 3:Yeah, and I mean, if you look at my background, I was at USPS for a long time, right? One of the first jobs that I got when I moved over to engineering, you know, is I worked in what they call the design cognizant office. So basically we helped with the technical sections of the statements of work. And one of the first tasks I got was to work on a statement of work. And basically they said, well, you just take the past program and you copy and paste most of the stuff over, and then you get to go through and make sure you identified all the areas where we need to change it. In a lot of cases it was copy and paste. It was, you know, searching for the program name, making sure it was right. And then, when I was going through those past documents, I noticed how many things got missed. So I started to think through, oh well, I can automate a lot of this, right? Like, is the vendor going to be providing the hardware or is it going to be the government? And that could be a drop-down box. And I started to automate that process, and another person in the office came over and said, hey, you might want to stop doing that. There's a lot of people that like the fact that we get, you know, so many weeks or months to work on a statement of work, and they don't want to be able to do it in three minutes. So I think those feelings and sentiments have changed over time. I think as we see newer people coming in, you know, the legacy guard kind of changing over, and now, you know, there is a demand from the administration to be more efficient, right? So I think a lot of those past mentalities of, you know, my only job is to do this one thing, I do this report.
Speaker 3:I actually had an employee that, when I took over a group, his job was to generate a report once a week, and that's all he did, and he sent it out to like 300 people, right? So when I took over and we were interviewing people about what they do, I said, okay, well, how many people are actually reading the report? Yeah, well, I don't know, I send it to 300. So we then sent an email out: hey, if you're actually reading this, let me know. And, like, two people responded, and we said, okay, well, are you willing to cover the salary of this person who's doing this report? Because if not, he's going to go do other things. A lot of those reports, maybe they are helpful for some folks, but they could be automated at this point extremely easily, right?
Speaker 3:Like, if we get into risk aversion, I think the past data flywheel, or AI flywheel, was where, you know, you'd collect data, you'd train a model, you'd deploy it, and after a couple of months hopefully it worked, and then you'd have to go through like three versions of that before it was usable. Well, now, with these foundation models, you can just drag them in and bring them into your environment, and I couldn't even tell you all the use cases you could use them for, right? So to me that's one of the areas that is kind of exciting. In the past, the amount of time and energy it would take to get a technology in to solve one problem could be months or a year.
Speaker 3:Well, now, if you go through and get gen AI capabilities, you could be solving tens, hundreds, thousands of potential problems with a little bit of additional kind of wrappers around that. So I think we're in a position now that we've never been in before, because we have that kind of flexibility with a single piece of technology.
Speaker 2:I do want to provide a note on what Ryan said too, and that's the US Postal Service. Everybody thinks getting a letter is boring, but their adoption of technology over the decades is truly amazing. They honestly are probably one of the most forward-thinking agencies in the government as it relates to technology. I watched some different, basically historical, pieces around what they did, right, shows and stuff like that, and it's just amazing where the Postal Service started and where they are today. Yeah, it's a really cool agency from a technology perspective. Neural networks since the late 90s, right? Yeah.
Speaker 1:Well, so to get into that flywheel process, right, where you're starting to churn out more use cases or just advancing AI at a more rapid pace, you do have to bypass that hype.
Speaker 1:You do have to overcome that fear of change, and what better way to do it than to look at what's already working for others, whether it's USPS or other agencies or something in the commercial sector? Brandon, I'll start with you on this one. The White House came out late last year with 1,700 AI use cases that were active right now, and that was a substantial increase from what was reported the year prior. So you can bet that there are going to be even more use cases coming out when they do this inventory process this year. Of those 1,700 use cases, and we don't have to rattle through too many of them, are there any one or two or three that have caught your attention that are really providing practical, meaningful value or ROI today? That, to the point earlier, would help bypass the hype and let people know this is going to be a good change, not a bad change.
Speaker 4:Yeah, I think, I mean, Ryan was actually talking a little bit about this earlier: when you look at taking simple tasks, whether it's program management tasks, PMO tasks, those types of things, that's one where it'll help drive efficiency internally on the jobs, the different things that they need to accomplish. Chatbots, I mean, I know those aren't necessarily all the rage, but those have become a really easy, quick win, whether it's via their call centers, you know, trying to have a better citizen experience, or whether it's internal knowledge-base-type chatbots to help them as they bring on new employees, as they bring new folks into the government, because, you know, there has been a shift of who's in there. So I think those areas, the knowledge bases, the chatbots, those types of things, are the quick-hit ones, and then, I mean, you can go anywhere from there.
Speaker 4:I think there's operational-type ones that you can look at from a cyber protection aspect, you know, their own networks and looking at those things. The use cases are plentiful. My two cents on this, and, Dan, I'm curious about your thoughts on this one, is that the fundamental gap that a lot of agencies have is they know what they need to do, and there are silos of excellence within all these different agencies, but there's not always the best data management, data discipline, definitions of what those data sets look like. So they're doing some cool stuff, but it can't translate or can't scale out the way that they would potentially need it to from a data discipline perspective.
Speaker 2:Yeah, to kind of touch on that.
Speaker 2:Obviously the biggest thing that agencies need to do and this gets into some of what we wanted to talk about is around how do you get ready for AI.
Speaker 2:To get ready for AI, you have to understand what you're doing with your IT modernization efforts, you have to understand what you're doing with your security efforts, and you need to strengthen your data governance efforts, right? All of those are critical to pair with your AI development goals, right? You can't just, say, you know, chase the AI dragon. You have to build the right things to make it reality. And that's critical. And, as it ties to the use cases, probably, in my opinion, the one that's most relevant today, the one that's in the news constantly all over the world, is emergency response. Everything that Brandon just called out with simplifying processes, unifying data sets, building automation: emergency response in government, and I don't care if it's our federal government or any government, state and local, is where AI can drive so much value in improving what I'd say is response from a performance perspective. So I'm excited to see where that goes.
Speaker 4:And Brian, I did the typical technologist thing where I didn't fully answer your question, and I realized that. So I wanted to come back to it: as to why we're seeing so many use cases popping up, it's that they're trying to figure it out. They're trying to explore and really look at it and lump it into bite-sized pieces, if you will, to explore what makes sense. So I didn't answer that very well, so apologies on that part, but I did the typical technologist thing.
Speaker 2:Brandon, what I would add to that is there's a misunderstanding of how government works too. Everybody looks at something like Department of State, or they look at something like Health and Human Services, and they go, oh, that's an agency. Yeah, it's an agency, but it's literally a company with a bunch of little companies underneath it doing, you know, all these different mission types, which drive all these different use case developments. It's not like a use case for one agency; it's hundreds to thousands of use cases conceivably for all of these sub-agencies that support some really big missions and critical capabilities for the United States.
Speaker 1:Okay. So, Ryan, I like how, thus far, Dan and Brandon have been identifying some of the more practical use cases. But Ryan, at NVIDIA you all are working with some of the most innovative, kind of forward-thinking agencies around. You know, any use cases on the more innovative side or bleeding-edge side that you think have either, you know, real promise or are already making an impact?
Speaker 3:Yeah. So I mean, when we're talking about, like, the bleeding edge, I actually see a little bit more forward leaning right now, to be honest, with some of the state and local, I think because they're not burdened with as many of the regulatory issues. Like, we're already seeing some state and locals deploying, you know, human avatar chat systems, right? That is bleeding edge. You're taking retrieval-augmented generation, stacking on automated speech recognition and text-to-speech, and running an avatar. That's pretty bleeding edge. We're also seeing some adoption of one of our frameworks built around video search and summarization, so essentially a multimodal language model that can consume video, and then you can have conversations with it, or you can set flags to run on those video streams without necessarily having to go through the process of training a model. We're seeing those already being deployed now at the SLED level, and I think it's a little bit faster. I think that, you know, as the government catches up, hopefully, you know, it's a big ship, but once they start, things get brought on board. And I think we're also starting to see, CBP specifically, when they basically approved some of the gen AI models, they didn't approve a version, they approved a family, right? That simple change of mentality from, oh, we're going to approve this specific model, to, we're going to approve this family of models. We've basically vetted that we're in agreement with their licensing terms and conditions, we're comfortable with their ethical practices, and we've kind of vetted out this company. That allows them now to start adopting models at the pace at which they're coming out, not being so reactive.
Speaker 3:Because of models. I think we call it Model Monday at NVIDIA; it's like every Monday, you know, there are new models coming out, and in some cases it might be for your use case. The performance improvements, you know, they're not single digits in many cases; they're really mind-blowing. Like, if you saw what happened with Veo 3, with Gemini, right, that video model that can now synthesize speech and voice, it was just a drastic step up. And now we're starting to see world foundation models, so models that, you know, when they're generating a video or text or something, it's not in the context of just that modality. It's not generating video; it's basically generating a mini world, right? So you can actually explore this foundation model, not just from, like, hey, what is this thing, but you can actually start to view different modalities of those components, which is, I think, exciting.
Speaker 2:Yeah, well, and what I'll add on there, Ryan, is you are so spot on with the state and local adoption. I'll give you an example of what people really need to understand about AI, because they try to, what I'd say is, narrow it down to, okay, hey, this is going to help me with, like, data assessment. In my opinion, and I'm looking for you and Brandon to give me feedback on this, the biggest value of AI is bringing together so many disparate elements of data to provide a better outcome, and the example I'd use is New York City.
Speaker 2:The Metropolitan Transportation Authority did a whole program where they hooked up the subways, and they have the cars that now have cameras on them, and then they have sensors on the tracks, and they're basically able to do track inspection, leveraging AI to listen for things like sound and visuals and basically the feel of the track as it's being used, and bring that all together to do a better assessment of how the tracks are responding to things like heat and cold throughout the seasons, and make sure that they get ahead of repairs and maintenance and things like that. That's all AI-driven, and that was done in an incredible amount of time as it relates to what we've seen other organizations do in other state and possibly federal agencies. That adoption at the local city level was truly amazing to see. So, yeah, and then what I wanted to ask you too is, do you see the idea that, hey, don't try to put AI in a box of solving one thing from one data element; bring all these bigger data elements together? Brandon, what are you guys at WWT seeing in that regard?
Speaker 4:I mean, I think you kind of hit it on the head there. Instead of having to centralize and create massive data lakes and then going through all that aspect of it, you can leave the data where it is and then draw a conclusion from the data based off what your mission is, what your need is, what your business need is and all that. And so that is obviously what RAG brings to the table, a big piece of what helps agencies sort and figure that out. So it's getting the right data into the right hands without having to centralize and bring all the data in. To be a little simplistic here, it's connecting those silos of excellence, because in all these agencies there are folks that are doing amazing things with that data; they just don't necessarily always know what the others are doing. So really, to your point, it brings that together.
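To make that retrieval idea concrete, here is a minimal sketch of retrieval-augmented generation over documents that stay where they are. It is an illustration only: the bag-of-words scoring, the sample documents and the final print stand in for the embedding model, vector index and language model call an agency would actually use.

from collections import Counter

def tokenize(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def retrieve(question: str, documents: dict[str, str], k: int = 2) -> list[str]:
    # Return the ids of the k documents with the highest word overlap with the question.
    q = tokenize(question)
    scores = {doc_id: sum((q & tokenize(body)).values()) for doc_id, body in documents.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def build_prompt(question: str, documents: dict[str, str], doc_ids: list[str]) -> str:
    # Ground the model: answer only from the retrieved passages, with citations.
    context = "\n\n".join(f"[{doc_id}] {documents[doc_id]}" for doc_id in doc_ids)
    return ("Answer using only the context below and cite the source ids.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")

if __name__ == "__main__":
    # The "silos of excellence": data stays in place, only the relevant pieces are pulled.
    docs = {
        "hr-leave-policy": "Employees accrue four hours of annual leave per pay period.",
        "it-refresh-plan": "Laptops are refreshed on a four-year cycle starting FY26.",
        "grants-faq": "Grant award descriptions are published quarterly as public data.",
    }
    question = "How much annual leave do employees accrue?"
    prompt = build_prompt(question, docs, retrieve(question, docs))
    print(prompt)  # In production, this prompt goes to whatever approved language model the agency runs.

In practice the retrieval step runs against an index that points back to the source systems, which is why the underlying records never have to be copied into one central lake.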
Speaker 4:I think that's a huge, huge asset for these different agencies. The state, local and education space is a little bit faster at being able to move on it, because of the regulations, the regulatory constraints, the security constraints and all those aspects of it on the federal side. But I think we're seeing different federal agencies starting to move pretty quickly on this now. And, Ryan, I'm curious about your thoughts on this part right here. As far as the normal cycle that we see with federal, which is, you know, we always kind of joke that federal is three to five years behind, I think that from an AI perspective they're still going to be a little bit behind their commercial brethren, because they have to be to a degree because of regulations, but that is shrinking dramatically. Like, we're talking maybe six months to a year behind, as opposed to the normal.
Speaker 3:Yeah, and I think, you know, Dan, you brought up a good point about how fast they were able to bring up that mass transit use case, right? And as somebody who spent the vast majority of my time in government working in computer vision, the amount of time it took to collect data, train the models and get going kind of warranted buying a system whose only job was to do that thing. So we would buy a platform and it did a thing and it didn't do anything else. So if you wanted to have that platform do something else or train a different model, that's not really what it was for. It was funded and purchased to do that thing.
Speaker 3:Well, now we're starting to see people say, I want AI capabilities, and this concept of an AI factory is really starting to make sense, right? So a factory isn't like you just build a model and you let it run in a factory; a factory outputs models. And now, instead of a use case taking, I think Andrew Ng even mentions it in one of his talks, it used to be nine to 12 months for a team of like 30 data scientists and software developers, and you need this massive team to get a project out. We're now talking, no joke, with the right people, like a couple of people, three to five days to be pushing out some of these use cases.
Speaker 3:Well, yeah, they will need to be maintained and updated. But the initial model hitting the street, maybe not taking action day one, but at least observing its environment and feeding back to the engineers, and being able to kind of get things up and going. Proof-of-concept times: I've done proof of concepts on introductory calls with people now, right? Like, hey, well, let's try it out, and it just works; the model right out of the box is able to solve their use case. Yeah, I love it.
Speaker 3:We could download a model, and they're up and running.
Speaker 1:Yeah, I mean, I love the fact that that gap, that three to five years behind, is starting to shrink. You know, Brandon, to your point, you're seeing these use cases, these models out of the box, come in with, you know, one year, six months; it's accelerating. But I do want to shift this to an infrastructure conversation right now, because, at the end of the day, you know, where are these AI workloads running, or where should they be running? Because you can have the fanciest model, you could even have your data estate, you know, sorted out as perfectly as can be, but if you don't have the infrastructure below it to make it run, you're going to fall flat. You know, maybe start with you, Brandon, here. What should these agencies be thinking about? Recognizing that many of them are probably on legacy systems, how can we bring them up to speed? How can we modernize here in a cost-effective and efficient manner?
Speaker 4:Yeah, infrastructure, or facilities, is becoming, you know, the fun topic again. So I think one of the things that all the agencies are looking at, and need to continue looking at, is what they can actually do within their own data centers versus how they leverage industry partners. Power constraints exist, right? So if you start moving towards more and more AI-centric workloads, you're going to need more power and cooling, you know, all that stuff that wasn't as exciting a couple of years ago and now is, you know, front and center. So it is looking at the facilities. It's looking at what consumption they can actually handle, and then identifying the right types of workloads to put on those types of systems, on those pods, and then building those pods in a way that they can scale them out. So you don't need to start with the super pods. You can start with, and I'm not saying pods in the sense of NVIDIA's pods or Dell's pods, I'm more just speaking from a building-blocks perspective, start where it makes sense, building those use cases, a lot of the entry points that Ryan's talked about and Dan's talked about. You don't actually need massive compute as an entry point. You need it to scale out as you start to deploy from an enterprise perspective. So it is looking at those types of things.
Speaker 4:You know, obviously here at Worldwide Technology, WWT, we have a pretty large investment in our AI Proving Ground, which is part of what we're doing here on this podcast, to be able to help customers and help our partners understand how to scale that out the right way and quickly, and all that from that perspective. So I do think that's a part of it from the infrastructure perspective. I think the other aspect, and then I'll kick it over, is retraining traditional data center engineers, data center SMEs; it's a new way of working with infrastructure. AI does change, obviously, the output, but how you build an AI infrastructure is different than how you would build a traditional virtualization infrastructure or all that kind of stuff. The foundational building blocks are different, and so it is retraining your typical data center SMEs to feel comfortable with this new infrastructure.
Speaker 2:Yeah, what I'd pivot off from that with is that you hit on something that's critical: understanding the differences of what an AI data center requires compared to legacy data centers, right? The power and cooling requirements are much different. And also, when you're talking with organizations about AI pursuits, the key things I usually talk about when I start the conversation are: what are your use cases, right? That's always the first question. What is the sensitivity of the data in the mission that you're executing? And then, third, what are your financial constraints, right? Like, how do we help you get started in the model you need today so you can immediately start at least proofing out your use cases? And you hit on it.
Speaker 2:There are things like the AI Proving Ground that WWT offers. Dell has a number of co-location services teamed with NVIDIA and WWT and others, and there are also what I'd say are cloud-based offerings that have value. And, from my perspective, I never look at any of these things as competitive. I look at them all as part of an approach to the enterprise and how you move forward, and those are the big considerations that organizations need to think about. Don't think that you need a massive data center today to do AI. It's how do I prove my use cases in the most efficient way to meet my short-term mission goals, within the financial constraints that I live within. Ryan, what are your thoughts?
Speaker 3:So I think there have been some trends that I've kind of recognized, right?
Speaker 3:Like, I think when an agency first gets started, actually the one where I talked about where we proved the use case out on the call, that specific customer basically needed to make 3,000 calls to a language model a year. It was for grant award description analysis. It was public data, it wasn't anything they had to worry about from a sensitivity standpoint, and 3,000 calls probably adds up to less than $20 a year in API credits. It's hard for me to tell them that they need to buy an AI factory for that use case. Right, go use the API credits if you have the ability to burn up your Azure OpenAI credits, because that's a very easy, low-risk, low-cost way to get into that use case, and you're getting those automations. But what we see then is, once people hear about that use case, and that was like five, ten users, they're like, oh, can I do my job with it? Okay, well, now they're doing 3,000 calls too. Or now they've got one smart engineer that learned how to do code gen, and now he's burning up 10x the tokens everybody else in the entire organization was using. And now we're starting to see, oh, we're spending significant money per month on these API credits. Okay, well, now we start to think about that infrastructure: maybe we should start to bring this on-prem, or we want to start doing this on more sensitive information where our data is, migrating that model over.
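As a rough illustration of that progression, here is a back-of-envelope estimate of hosted-API spend as usage grows, the kind of math behind deciding when pay-per-call credits stop making sense and on-prem serving starts to. Every number in it, the per-token prices, token counts and call volumes, is an assumption for the sketch, not any provider's actual rate or any agency's real workload.

# Assumed illustrative rates and sizes; swap in real numbers before drawing conclusions.
PRICE_PER_1K_INPUT_TOKENS = 0.0005   # USD, assumed
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # USD, assumed
TOKENS_IN_PER_CALL = 1_500           # assumed prompt size
TOKENS_OUT_PER_CALL = 500            # assumed response size

def annual_api_cost(calls_per_year: int) -> float:
    # Cost of one call = input tokens plus output tokens at their per-1K prices.
    per_call = (TOKENS_IN_PER_CALL / 1_000 * PRICE_PER_1K_INPUT_TOKENS
                + TOKENS_OUT_PER_CALL / 1_000 * PRICE_PER_1K_OUTPUT_TOKENS)
    return calls_per_year * per_call

if __name__ == "__main__":
    scenarios = {
        "pilot: grant-description analysis": 3_000,      # a handful of users
        "agency-wide chatbot": 3_000 * 250,              # hundreds of users
        "code-gen rollout": 3_000 * 5_000,               # thousands of heavy users
    }
    for name, calls in scenarios.items():
        print(f"{name}: ~${annual_api_cost(calls):,.2f}/year")
    # When the bottom line rivals the cost of owning and powering GPUs, or the
    # data can no longer leave the building, that's the signal to price out on-prem.

Under these made-up rates the pilot lands in single-digit dollars a year, which matches the "less than $20" intuition, while the heavy-usage scenario climbs into tens of thousands, which is roughly where the on-prem conversation tends to start.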
Speaker 3:That is a trend that we continue to see. The other important one is, I think people in almost every scenario have underestimated how popular these applications are going to be. Right? Like, you know, they might say, oh, we're going to do a proof of concept for, you know, 10,000 internal users. Before you know it, they're at hundreds of thousands of users. Right? They struggle with it; basically, they're having to deny access to the platform because people are signing up faster than they can scale the hardware. That is not a one-off. That is something we continue to see. I guess it's kind of like if you imagine, oh, the calculator came out, but we can only produce 10 a week; the supply can't keep up with how many people want to get these calculators. Everybody's looking around seeing everybody else making their life so much easier. Nobody wanted to use a slide rule, I don't think, right? So that adoption was a very easy sell, and I think we're seeing that with gen AI as well.
Speaker 2:Well, and what I'd say, and what I love about your example, is showing them what I call the art of the possible right in front of them, right? So, like the example you did with the call response, I've also seen examples of, like, language translation in a disconnected system, right, which matters to a lot of organizations, being able to do that kind of capability. You know, having a discussion using something that has a spark to it, that makes them think, oh, that's awesome, then it's a jumping-off point. Art-of-the-possible discussions, I think, are what will really truly drive innovation and adoption of AI across the federal government, and being able to do them well is critical too.
Speaker 3:You said spark, but I don't know, hopefully everybody else on this call is super excited about when we've got either the NVIDIA version or, like, the Dell version of the, what was it, yeah, GB10, the little supercomputer that you're going to be able to put on your desk. So if you worry about, like, security or controlling costs, those are going to be a great way for engineers to get started. You're going to have essentially an AI supercomputer sitting there at your desk able to run language models. I'm so excited for that platform.
Speaker 4:Just to piggyback, Dan, off what you were saying, though, a little bit, and again, I get really hung up on semantics, so apologies. I agree on that art of the possible. I think AI has opened that up. One of the reasons I love being at Worldwide Technology and working with NVIDIA, working with Dell, is that we can actually bring it. That's what our AI Proving Ground is for: to help folks understand how to actually do it, not just talk about it from a slide perspective. What does it actually look like when you want to scale it out? What are those things you start adding? You know, it's easy, not easy, but it's easier to talk through the objectives that AI is helping with. But how do you build that? How do you start to add in those real-world circumstances, the network, latency, jitter, all those types of things that impact those aspects of it? And so we used to have a slogan, I think it was, we make it doable, or something along those lines.
Speaker 4:I think one of the things I love about being at WWT is we can help agencies move out of that art-of-the-possible conversation to let's go do it, let's just go try. And there are other ways to try it too, but let's try it from an infrastructure perspective. Let's try it from, how do you scale this to your enterprise? Not, is there value in AI; they all know there's value in AI. It's how do you scale that and then sustain it. And, Brian, I might be jumping over what you're going to maybe talk about a little bit with the AI Proving Ground, but that's the reason that we have the AI Proving Ground: to help them.
Speaker 2:What I want to tag on there, Brandon, is, and I want to make sure people understand this too, that the AI Proving Ground, or the AI factory, and how they work together, it isn't a, hey, you're going to buy this construct, you're buying a box and it has these things in it and that's it. No, it's, hey, let's bring your use case in, let's figure out how to get it started and figure out what kind of services and hardware and partnerships we need to make this idea come true, and start working through it. It's not, what I'd say is, a monolithic element. It's adaptable, to try to figure out how to help an organization get where they want to go and get started. So I think that's a great callout.
Speaker 1:Yeah, effectively, you're able to take your use case, your idea, your innovative thinking around AI as it relates to your agency or company or whatever it may be, prove it out in the AI Proving Ground, validate that it works, and then after that it's a game of scale. I do want to go back to that New York City transit use case, Dan, that I think you mentioned, for the infrastructure conversation. We talked about data center, we talked about cloud, we talked about hybrid situations. I'm thinking, if this is picking up heat sensors or other factors of the rail system, are we talking about edge AI here? And, if so, is this infrastructure agencies already have in place, or are these new deployments? How do they think about that? Because I know edge AI has been a big topic and a big question with a lot of our clients, even outside of the public sphere.
Speaker 2:Yeah, no, that's a great question, and here's the reality. To me, right, I come from, what I'd say is, a military background, but to me, the mission, or where AI touches people the most, where it has a direct impact on the people doing the hard work, is usually at the edge, right? And there has traditionally been lots of data that is developed there. But in the past that data all had to be moved back to bigger data centers to do bigger analytics on it to get a better outcome. That is changing dramatically. One, there is a lot of stuff getting deployed at the edge, right? There is all the legacy stuff to get data, and there's more, what I'd say is, IoT and other elements that are being deployed at the edge to get even more data, and you need to process it where it's being created to get the best outcome, to get that effective emergency response.
Speaker 2:I'm from New York, so I'm going to lean on New York City. Another thing New York City is doing: they're doing drone deployments for first response, for things like firefighting. They have these basically nests on public buildings. The fire department will get a call, they'll deploy a drone immediately to go and assess the area to figure out what is the best route to get the fire trucks there, where are all the fire hydrants, how intense is this fire, how big a deployment do we need to respond to this. Again, some of those data elements always existed, but they weren't necessarily collected effectively. So being able to, what I'd say is, synthesize that data quickly to get to a better outcome at the edge, where the mission is executing, is critical, and we are just seeing more and more capability every day, and to me, that is the best adoption and adaptation of AI.
Speaker 1:Yeah, that's super cool.
Speaker 2:Brandon, thoughts on that?
Speaker 4:I think it's a mixed bag of, do the agencies have, or where do they have, their IoT and where do they have those things out in the field? Every agency has a different mission, a different thing that they're working through, different business outcomes and all that aspect of it. So for a lot of this, it is, again, a change of how they do things. It is moving the decision point out to the edge, and there is some hesitancy in doing that, right? How do I move those decisions out to the branch offices where it makes sense? How do I do all those types of things? Public safety, you know, obviously from a first responder standpoint and all that, it's a little easier to understand those use cases and to leverage it. When we start thinking through some of the other agencies and all that aspect of it, they're all in different parts of that journey.
Speaker 4:So for a lot of them, it's trying to understand what part of the infrastructure, where is that biggest bite to start with. Is it edge to start and then work back in, or do you start central and work out? And every agency is going to have a different answer to that, because there is not a universal fit. So, you know, it's a hard journey. Yeah, no, absolutely.
Speaker 1:I do want to recognize we are running short on time. There are a lot of different areas we could still cover: cybersecurity needs to be baked in at all times, talking about collaboration and ecosystem and leveraging partners. But I do want to touch on workforce and culture. I think, Brandon, you had mentioned it's a totally different way of training engineers. Without having your workforce, your employees, the folks that make up the heart of these agencies, without them being trained on AI skills, it's not going to go far, because there's just not going to be any adoption. I read a stat somewhere, I can't remember where, that something like a high-90s percentage, like 96, 97%, of agency employees want to be trained on AI to better understand it, but they're just not getting that training. So what is the best way, or how would we advise agencies to get their workforce familiar with these tools, familiar with the skills needed to survive and thrive in an AI future? You know, all three of you can weigh in. Ryan, I'll just pick on you here to start.
Speaker 3:I mean, so, communication, like, share. If you start out initially with a small group, if you're piloting some AI use cases, make sure people are sharing how they're using the system, where it's working, where it's not. But if you're really just getting started, one of the things that I always tell people is the first thing you should do is go ask your AI how you can use it. Like, what other technology has been out there where you can literally go, I'm a program manager at X agency, give me five ways that I can use generative AI to help improve my job? It's going to tell you, and then you could be like, all right, great, what do I need to learn to be able to do use case number one? Give me a 10-day learning path. It'll do it, right? And especially if you've got it connected into some more advanced, like, research systems, these things will handhold you through the journey for your custom application.
Speaker 3:It's really kind of amazing. Like, I introduced my in-laws to this, right? They're older, they're retired. You know, getting them to use newer technology can be a challenge, but when you're like, hey, just ask it how you can use it to do X, it's amazing to see the look on their faces when it's like, oh, you can use it for planning vacations or meal planning or all these different things. It works the same for most agencies, especially with the government, where most of the information is publicly available, right? Like, it knows about your agency in many cases, and it can help you identify improvements.
Speaker 2:Yeah, I'll add on to that, Ryan. Here at Dell, what we did is we used AI, like you said, to solve that problem, right, and we also paired that with, what I'd say is, the culture of today. So we were like, okay, hey, help us build a curriculum to help upskill our workforce. And it defined the curriculum, and then we said, okay, make this curriculum and help us develop the actual training courses. And it did, right, from the presentations to the voiceovers, to all that. And then we said, okay, make this into consumable, like, 10-minute chunks, because, let's be straight, attention spans are different than they were 20 or 30 years ago. Making it so it's something that someone can watch between calls, for five or 10 minutes, was amazing and has driven a huge uptick in the adoption of a lot of our training capabilities within Dell, and has absolutely helped us meet our training goals.
Speaker 3:And customization, right? The more you customize something, I think, the more willing people are to accept it. So there's no reason now why you can't custom-tailor a piece of training to an individual's entire background, right? Like, hey, you're a mechanical engineer, or you're an electrical engineer, or a software developer; I don't need to tell you about X, Y or Z to get you doing this part of your job. Let's skip all that stuff; you went to school for it for six years. But here's the part that maybe you don't know about your job, your new role within the government. And that custom, individually tailored content, I think, not only could be more effective at providing training, from the standpoint of, hey, I appreciate the fact that this is custom for me, but you also don't feel like you're just wasting your time as the training end user every year.
Speaker 4:I think there's also a part of what types of training the different folks need, right? So there are AI consumers and then AI builders, and those are two different enablements, two different training aspects of things, right? And obviously, us as technologists here, we get really into the building side of it. But I think, Ryan, actually what you've been hitting on, Dan, you've been hitting on it a little bit more, is the consumer: getting folks used to using it from a consumer perspective, and not building it, is a big piece of it, and that is just comfort for the most part. I mean, what does an AI skill actually mean if you're going to use it to do tasks, PMO tasks, or you're using it to do emails or whatever? What is an AI skill? I mean, you're typing it.
Speaker 4:I mean, one of the areas where I do think there's going to be a lot of momentum is coding. Right, I mean, why am I going to write my own code anymore? I grew up kind of a script kiddie anyways, but, you know, it's one of those things: do I need to write code? Why not have AI help me write the code and then implement that code and check it and all that kind of stuff? So I think the enablement and getting those AI skills really depends on what they're doing. Some need it; some of it is just consumer enablement versus builder enablement, and I'm making those terms up. I don't know if those are the right terms.
Speaker 1:Yeah, well, in the spirit of closing thoughts here: you know, I asked you in the beginning how you would give an elevator pitch about the opportunity to a cabinet member or a leader of one of these agencies. So let's say that cabinet member buys into the opportunity. They go to their CIO and say, we need to be doing this ASAP. Let's speak to the CIO now, or whatever leader you might plug in there. What are the one or two priorities they need to take into consideration right now so that they can start to capitalize on all the things that we've talked about in this discussion? Brandon, we'll go ahead and start with you.
Speaker 4:To me, it's starting out with data guardrails. What are those guardrails around what you're going to start to do to begin with, and defining that out of the gate quickly.
Speaker 4:Now, traditionally, and some CIOs are leaning in differently and all that aspect of it, you know, they're hesitant to go forward with newer technologies because it does introduce new risk or new vectors and all that. So I do think it's defining what those guardrails are. We didn't talk a ton about cyber; that could be a whole separate conversation. But really having that foundation as you start, here's the fundamental cyber guardrails definition, and then not restricting it past those initial guardrails. Like, don't define every aspect of it; open it up to your teams, your agencies, to work within those guardrails, to explore and figure out how to leverage it. They're going to find use cases you'd never have thought of, because you're not the ones doing the day-to-day tasks from that standpoint. So, in my head, that's what I would say.
Speaker 1:Yeah, Ryan.
Speaker 3:I think it's important to reach out, right? Just don't sit there and say, we're not going to do AI, or we're not going to adopt it because of X, Y or Z, and you've got all these concerns. Reach out with RFIs or what have you, and let the industry help show you, dispel maybe some of the myths you might have. There are lots of concerns out there, but for most of them, with hallucination or privacy, industry has come up with ways of addressing those. So yes, in its own silo, maybe gen AI can hallucinate, but when you put the right mechanisms in place, it can be extremely effective. So reach out. I mean, we're willing to; you know, if you're a CIO or senior leader, we'll bring you out to NVIDIA headquarters and show you not only the art of the possible, not only where the technology is today, but how fast it's moving and where it's going.
Speaker 3:And, you know, we have such a tremendous partner ecosystem. You know, WWT knows their customers tremendously well. They know how to take the NVIDIA components and help you talk about them in your agency's kind of mindset: here's how you would address some of those mission problems you might have using the industry technology. They can help bridge that gap from bleeding-edge NVIDIA, where we're releasing models every week, to, okay, well, here's how we take those components, here's how we turn them into an enterprise solution, here's how we take a bunch of ISVs, bundle them together on some Dell hardware and bring it into your facility to help address your challenges. So we're here to help. Don't hesitate to reach out.
Speaker 1:Yeah, love it. Dan, bring us home here.
Speaker 2:Yeah, sure. So if I was talking to a CIO, I would tell them AI pursuits do not stand alone. Like I said at the beginning, Dell and NVIDIA and WWT have been working with federal organizations for years to help them understand where they're going with IT modernization, where they're going with security, where they're going with workforce, you know, basically workforce improvement. AI ties into all of those, so you have to unify it with that existing planning. And, like Ryan and Brandon said, call on us, because we've been on this journey with you for years. We're going to help you continue this journey to drive great success with AI and help you find all the promise around efficiency and cost savings and all that kind of stuff. So, yeah, you know, it's not a new conversation, it's a continuing conversation.
Speaker 1:Awesome. Well, Dan, Ryan, Brandon, thank you again for taking time, like I said, out of those busy schedules. I know your time is valuable these days with everything that's going on in the state of the world as it relates to AI, et cetera. So thank you again for joining. I hope to have you back on soon. Yeah, it's great, thanks.
Speaker 3:My agents have been hard at work while we've been on this call.
Speaker 1:Keep an eye out for that bill. Okay, thanks, guys. Okay, well, we covered a lot, so here are three key lessons that I'm taking away from today's conversation. First, start with guardrails and not the gadgets. Define data sensitivity, cyber boundaries and what good looks like. Then let teams explore inside of those rails.
Speaker 1:Governance shouldn't smother experimentation; it should unlock it. Second, begin small, but design for surge. Many successful pilots started with cheap API calls or narrow use cases (things like grant abstracts, PMO automation, internal chatbots), but then demand exploded. So prove out your solution or use case first, so you can add scale without re-architecting under fire. And third, invest in people the same way you invest in compute.
Speaker 1:The skill gap isn't motivation, it's enablement. Give workers permission and prompts: ask the AI how it can help your job, deliver training in short, role-based bursts, and distinguish AI consumers (most of your workforce) from AI builders (the specialists). The bottom line: AI in government is no longer a futuristic panel topic. It's a budget line, a workforce issue and a competitive question. Agencies that pair clear guardrails with rapid experimentation will move from hype to mission impact faster than those waiting for perfection. If you liked this episode of the AI Proving Ground podcast, please consider sharing with friends and colleagues, and leave a rating or a review on your favorite podcast platform, and don't forget to subscribe or watch on WWT.com. This episode was co-produced by Naz Baker, Cara Kuhn, Mallory Schaffran and Owen Scholar. Our audio and video engineer is John Knobloch, and my name is Brian Felt. We'll see you next time.