AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption is complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
Mythos And The Disappearing Patch Window
What happens when vulnerabilities can be discovered in minutes and exploits can be generated faster than organizations can patch them?
In this episode of the AI Proving Ground Podcast, former NSA Cybersecurity Director Rob Joyce and WWT cyber architecture and innovation leader Kent Noyes break down how AI is compressing the timelines cybersecurity teams have relied on for decades.
We explore why Anthropic’s Mythos became a wake-up call for the security industry, how agentic AI is accelerating vulnerability discovery, and why traditional patching and vulnerability management strategies are struggling to keep pace. The conversation also examines which defenses still hold up under machine-speed pressure, from zero trust and segmentation to deception techniques, resilience planning and recovery.
If your security strategy still assumes defenders have time on their side, this episode explains why that assumption is starting to break.
Support for this episode provided by: ExtraHop
More about this week's guests:
Rob Joyce served more than 34 years at the NSA, spending his final years as Director of Cybersecurity. Throughout his career, he held leadership roles in signals intelligence and cybersecurity, including leading Tailored Access Operations (TAO), the NSA’s elite hacking unit focused on foreign intelligence operations. He also served on the White House National Security Council as Special Assistant to the President and Cybersecurity Coordinator, and as Acting Homeland Security Advisor. Joyce is now founder of Joyce Cyber LLC and serves in advisory roles for organizations including OpenAI, Microsoft, PwC and others.
Rob's top pick: Cyber Resilience: Why Security Fails When It Matters Most
Kent Noyes is a cybersecurity and infrastructure leader with more than two decades at World Wide Technology, where he has held senior roles across technical pre-sales and service delivery. A Cisco CCIE and WWT’s first Distinguished Solutions Architect, Kent now leads Cyber Architecture & Innovation within WWT’s Global Cyber organization, helping enterprise customers address evolving security challenges, emerging technologies and modern cyber risk.
Kent's top pick: Defending at the Speed of AI
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
The Timeline Just Changed
SPEAKER_01And then they ran it several hundred times in Mythos, and 178 times it found and made exploits for this sophisticated software. So that isn't just a small percentage increase. That is a radical improvement. And that's where the developers at Anthropic got scared and said, we've got to do something that protects the cybersecurity capabilities in this new development.
Why Mythos Matters
SPEAKER_00The economics of hacking have fundamentally changed. For years, cybersecurity teams have counted on time: time to find vulnerabilities, time to patch, time to respond. But AI is compressing that timeline dramatically. And the wake-up call in this case is Anthropic's Mythos, a frontier AI model that wasn't built specifically for hacking, but has shown the ability to find vulnerabilities and generate exploits at a pace and scale that changes the security conversation entirely. For attackers, for defenders, for any company moving meaningfully with AI. So in today's episode, we're talking with former NSA Cybersecurity Director Rob Joyce and Kent Noyes, who leads WWT's cyber architecture and innovation team. They'll unpack what all this means for CISOs, boards and enterprise leaders, and what organizations should be doing now to become harder targets for bad actors that will be more capable than ever moving forward. From World Wide Technology, this is the AI Proving Ground Podcast. Let's get to it. And that brings us to AI security and Mythos. Rob, want to start with you, just to set the table here. Talk to us about what exactly Mythos is and why it's different from any of the other models that we've experienced over the last several years.
SPEAKER_01So we've been on this exponential capability improvement in AI tools for a while, right? But Mythos was this wake-up call. It showed real hacking capabilities as good or better than all but the most elite exploit developers. And so it is this Anthropic model, a next-generation frontier AI model. But the interesting thing is it's not purpose-built for hacking. That distinction matters. It wasn't trained to find vulnerabilities; it had these emergent capabilities that came out because Anthropic continued to improve on how these models are able to code. And to code well, you have to be able to find bugs. And if you can find bugs, you can find vulnerabilities. And if you can find vulnerabilities, that can get you to the point where you can do exploits. And those are, you know, the currency of the realm for hackers. So these same improvements that made the models better at writing code made them radically better at breaking things, essentially hacking. And that's where we are.
SPEAKER_00Could I just go in, if I had access to Mythos, and say, find me vulnerabilities? And if I did, how extensive a set of vulnerabilities would I get back?
Vulnerabilities At Machine Speed
SPEAKER_01Unfortunately, you can just give it, I need to find a vulnerability and an exploit for this piece of software, go. And that is one of the other differences, right? We're in a period where this agentic AI is getting better and better at finding vulnerabilities. And it doesn't have to be Mythos. You could use, you know, Claude Code or the newer OpenAI models; they'll all find vulnerabilities. But it's historically taken a bit more rigor and skill from the prompter to nudge it and shape it toward finding vulnerabilities. And then you had to nuance your way around the guardrails to convince it to make an exploit. But to give you an idea of how Mythos is different: one of the early things they did was take Mythos and point it at Firefox. They took a known vulnerability in Firefox and said, make me a zero-day exploit from this known vulnerability. And they ran the current Claude capability against it, and it succeeded maybe twice. Then they ran it several hundred times in Mythos, and 178 times it found and made exploits for this sophisticated software. So that isn't just a small percentage increase. That is a radical improvement. And that's where the developers at Anthropic got scared and said, we've got to do something that protects the cybersecurity capabilities in this new development.
SPEAKER_00Kent, connect us a little bit to the business reality. You're out there dealing with a lot of WWT's Fortune 100 and 200 clients. When you're briefing them on Mythos, what kind of questions are they asking? What kind of answers are you giving back?
SPEAKER_02Well, they're asking about prioritization. That's the main thing. How do they prioritize, on all kinds of levels, because there's no attack surface that is safe in this situation. Even though Mythos is focused primarily on code, it's really any attack surface. So they're wondering, with regard to the code vulnerabilities that are being found, where do they start? How do they assess? And you have to look at it a different way, for sure. Assessing risk is totally different with what it's producing compared to what prior vulnerability scanners had produced. So they're trying to prioritize what to even patch first. That's a real common question. And just where to start, strategically and programmatically, I think that's a big concern as well. Do they focus on building up their patch system and modernizing it? Do they focus on recovery and incident response? What do they focus on in all this?
SPEAKER_00Do you think those are the right questions to be asking? Or is there an additional set of questions that you have to at least broach with them to say, hey, you should also be thinking about X, Y and Z?
SPEAKER_02So yeah, I mean, we come in with a 12-point set of recommendations basically right off the bat. We listen to them, and they'll go through the list of things they'd already formed a strategy around, and then we'll put our list up on the board and say, where are your gaps here? You know, segmentation, zero trust. You didn't mention that. That's really important here, to try to minimize that blast radius when something does get in, and eventually it will. This stuff is that powerful. Are they starting at the right place? Are they starting on the perimeter first and then working their way in? What are the priorities? So they're asking a lot of the right questions. It's just that we have a holistic, programmatic approach where we put it all together and try to find where their gaps might be.
Where Defenses Still Hold
SPEAKER_00Yeah. Rob, in your work with the NSA, you've helped defend against some of the most sophisticated threat actors in the world. If bad actors today got access to Mythos, or maybe they already have some access, is there even a credible defense right now for most organizations? Or are most organizations in the industry just behind, and they're going to have to work very rapidly to get to where they need to be?
The End Of Reactive Patching
SPEAKER_01We all have vulnerabilities and misconfigurations in our networks. What these AI tools are doing is allowing them to be found at greater scale and speed. And so that's the defender's challenge. What we really need to think about is doing the basics really well. We've heard for years that we've got to do patches and updates, we've got to do good network segmentation, we've got to have multi-factor authentication, we've got to be working toward zero trust. All of those things drive out vulnerabilities, and they also, like Kent said, reduce the blast radius when there is some sort of vulnerability found. You make it less likely that you become victimized because you've had one security failure. The AI tools are making it easier to understand and exploit networks. There's no doubt about that. In my past at NSA, on the offensive teams, we did really well not because we had exploits, not because we had really skilled human beings. We had some of those. But what really was the secret sauce is we put in the time to know the network better than the people who owned and operated it. And now with LLMs, you can get this really rigorous understanding of a network: find out where the misconfigurations are, find the devices with known vulnerabilities that haven't been patched and upgraded, find out when you've made a misconfiguration in your AWS buckets and left them exposed, find where a default password is still enabled on a device or a piece of infrastructure. Those are all the hard things that every big enterprise works to manage, and AI is going to make those glaring errors even easier and faster to find. So, no, I wouldn't say it's this universal skeleton key and we're all doomed. But it does mean that if you're not doing hygiene well, you're certainly at greater risk today.
SPEAKER_00And Kent, is all of that part of that 12-point response that you brought up just a moment ago?
SPEAKER_02The patching and vulnerability management is probably the biggest thing that needs to be modernized in most companies. We're seeing a trend toward centralization of patching mechanisms, processes, standards and even people. You're used to seeing a NOC, you're used to seeing a SOC, but now we're starting to see vulnerability operations centers form, the VOC basically. Because, as Rob stated, cyber is just too slow right now to deal with this. And they have to look at it in different ways from a patching perspective. Continuous patching: we'll be patching more quickly and more continuously in the future. Getting that program built up, having a process for prioritizing the modern results of scanners. I mentioned cyber resilience. That's a massive one. You're going to get breached eventually with this; you need to be able to recover. Logging: this is not the time to be getting cheap on your logging and your SIEM and trying to log less into the SIEM. You want maximum visibility right now with what's going on. We mentioned zero trust and segmentation. Identity, non-human identities, those are all going to be massive when agents get in, and they will. How do you control access to things, and how do you minimize blast radius, which we just talked about a while ago? Embracing cyber tooling that's AI-powered. Even though almost every cyber tool on the planet right now has AI backing it, it's still not effective or fast enough yet. They're still ramping up, and not all companies have embraced it or know how to use it quite yet. So that's an evolving place, but again, to defend against AI, you've got to use AI. There's no other way to do it. Rob talks about this a lot: deception, honeypots, canaries, things like that. Those are all going to be critical to defending and, basically, catching the bad actors when they're using AI to come in. That's another important element. Remediating known risks, just the fundamentals, as he described it: remediate your known risks right now. All those criticality-level-2 patches that didn't seem to matter anymore? Now they can be chained with criticality-level-10 vulnerabilities, and if you put them all together, you have an exploit that someone can come in and take advantage of. So all of those things and more are in that 12-point plan, along with a programmatic approach to implement it.
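Kent's point about low-severity findings becoming dangerous in combination can be made concrete with a toy chain-aware prioritization model. This is an editorial sketch, not any real scanner's output format: the finding records, the "gives" labels and the scoring rule are all illustrative assumptions.

```python
from itertools import combinations

# Hypothetical scan findings. "gives" labels what a successful exploit
# of each finding yields; field names and scores are illustrative.
findings = [
    {"id": "CVE-A", "cvss": 2.1, "gives": "foothold"},   # low severity
    {"id": "CVE-B", "cvss": 3.0, "gives": "priv-esc"},   # low severity
    {"id": "CVE-C", "cvss": 9.8, "gives": "dos"},        # high, but a dead end
]

def chain_risk(pair):
    """Score a pair of findings: a foothold chained with privilege
    escalation is treated as worse than any single finding."""
    gives = {f["gives"] for f in pair}
    base = max(f["cvss"] for f in pair)
    if {"foothold", "priv-esc"} <= gives:
        return 10.0 + base  # a full compromise chain outranks any lone CVE
    return base

# Rank two-step chains instead of individual findings.
ranked = sorted(combinations(findings, 2), key=chain_risk, reverse=True)
top_chain = sorted(f["id"] for f in ranked[0])
print(top_chain)  # the two "minor" findings rank first: ['CVE-A', 'CVE-B']
```

The point of the toy model is that ranking by the highest individual CVSS would patch CVE-C first, while a chain-aware view surfaces the two low-scoring findings that together yield a full compromise.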
Visibility Becomes Everything
SPEAKER_00So that's the full programmatic view. But inside all of that, there's one tactic that Rob keeps coming back to, something the security industry largely set aside years ago that he thinks it's time to revive. This episode is supported by ExtraHop. ExtraHop provides network detection and response solutions to monitor and secure enterprises in real time. Gain deep visibility into your network with ExtraHop's advanced analytics platform.
SPEAKER_01Brian, think about it. If you've got the perfect patching program and you can get every device immediately patched with whatever the manufacturer sends as the latest update, you're still in a world now where we expect more zero days. In fact, one of the cybersecurity companies, it might have been CrowdStrike, put out their annual review and talked about the delta between the known vulnerability and the exploit. And they said it's minus seven days in their most recent evaluation. What they mean by that is people are using zero days against devices before the patches are even out. So you've got to assume that things are going to get past your first line of defense. What is your method that's going to tell you somebody's broken through your firewall or VPN concentrator or file transfer device? Those things that are touching the internet are most likely to be the source of an initial compromise. The way you need to do it is log and understand normal traffic and what's going on. But you ought to have a belt and suspenders, some sort of safety device. And I'm a huge fan of getting small and cheap honeypot technology into your network. We used to do that a lot in the old days, and I think its time has come again. Because the attacker lands inside your network and has to figure out where the things are that they're interested in. They have to figure out how to navigate the network. And so if you have some attractive things sitting around in the network that look like some sort of file server, or look like a domain controller, or look like some repository that would be of interest, you know, a SharePoint, those attackers are going to be too interested in those not to touch them.
And if the alarm goes off on that honeypot or tripwire, you've got a really solid signal that something is amiss inside the network, and you ought to be deploying some response and evaluation. So in a world where attackers might be able to blow through a first line of defense, we've got to think about that segmentation. We've got to think about the logging. But now honeypots and tripwires have a really high value in this world.
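To illustrate how cheap the tripwire Rob describes can be, here is a minimal canary-port sketch. It is an editorial example, not a production honeypot: the loopback binding, the in-memory `ALERTS` list and the print-based alerting are placeholder choices; a real deployment would bind ports that mimic attractive services (SMB, RDP, a file share) and forward every hit to the SIEM.

```python
import datetime
import socket
import threading

ALERTS = []  # in practice: forward to the SIEM / page the SOC

def alert(port, peer):
    # Record and report a tripped canary with a UTC timestamp.
    ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
    ALERTS.append((port, peer))
    print(f"[{ts}] CANARY TRIPPED: port {port} touched by {peer}")

def start_canary(port):
    """Listen on a port that carries no real service; any connection
    at all is a high-fidelity signal that something is probing."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))  # placeholder; a real canary binds outward-facing
    s.listen(5)
    bound = s.getsockname()[1]

    def loop():
        while True:
            conn, peer = s.accept()
            alert(bound, peer[0])
            conn.close()  # no banner, no service: nothing to exploit

    threading.Thread(target=loop, daemon=True).start()
    return bound

# Deploy a few canaries; port 0 lets the OS pick a free port for this demo
# (a real deployment would choose ports that look like attractive services).
ports = [start_canary(0) for _ in range(3)]
```

Because no legitimate traffic ever touches these ports, the alert has essentially no false-positive cost, which is why a single touch justifies an immediate response.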
SPEAKER_02I don't think that's a heavy lift for them to get back into, just to reskill and put some additional focus on it beyond what they've done before. And it won't be a surprise to anyone, or a surprise recommendation. But again, as we stated earlier, the fundamentals right now need to be upscaled by 10x at least, probably way more than that. The game for all of those old traditional solutions we've used before has to be raised up multiple levels from where it is right now. And that's just another one of them.
The Return Of Deception
SPEAKER_00If time to exploit is already shrinking, from months many years ago down to weeks, down to days, and so forth, and if we're at a day right now, when is it going to turn into hours and minutes? And are we ever going to get to predictive patching or anything in that future realm?
SPEAKER_01Well, DARPA ran a contest about two years ago now where they wanted to have AI find vulnerabilities and then have AI automatically generate the patch. So there are wicked smart people thinking about that world where we defend against vulnerabilities we don't even know about yet. But I would say if you've got the right architecture, you're already in better shape. There's an old saying that attackers only have to be right once and defenders have to be right all the time. I disagree with that sentiment. I think attackers have to be right once to get through your first layer. But at that point, you own the strategic high ground as the defender. You know your network, you understand it, you've architected it, you've set up things that should get in the way of an attacker. You've set up that segmentation, you've set up privileges and accounts that have different identities and different authorities. You've hopefully installed honeypots, honey tokens, tripwires. So you've got all of these advantages as the defender that the offense has to overcome. It's not one shot, one kill with a vulnerability. It is a series of operational events that have to be stacked together. And you can do a lot from that defensive high ground to make it really, really hard for the offense to win.
SPEAKER_00Kent, it feels like a good time to bring in Armor, a framework that WWT introduced at the beginning of this year, 2026. Is what Rob's getting at, the right architecture, getting to that high ground, baked into Armor right now? Or where does that enter the fray?
SPEAKER_02Yeah, Armor stands for AI Readiness Model for Operational Resilience. And as I stated earlier, not many attack surfaces are going to be immune to this threat. Armor deals with the AI-specific attack surfaces. So as you build an AI use case in your own organization, that's what Armor is meant to secure: your own custom AI environments that you're building, and even, to a certain extent, some of your vendors that are running AI, it helps secure them as well. It's WWT's framework. We built it. Pretty much every cyber domain we have, every team that I have, came together to build it into about 160 pages' worth of guidance on how to secure AI. It's out there, it's open to the world. We wanted to make it an open document. You can go on wwt.com and read the entire thing, and it also cross-maps to a lot of the vendors we use to secure our customers. So yeah, it's very much a go-to framework. We're also rolling out a lot of very large AI data centers right now, with a lot of high-performance architecture and NVIDIA compute in them. And when we do that now, with Armor we're able to be on the front end and secure those environments up front, not after the fact, which is what we were doing in years past. So this is something we use ourselves, something for our customers to use, and hopefully somebody else can make use of it to protect their AI architectures.
Trust In The Frontier Model Era
SPEAKER_00That's the defensive architecture from WWT's side, but there's a parallel effort underway inside the labs themselves. Anthropic and OpenAI are both quietly giving a small number of organizations early access to Mythos-level capability to find vulnerabilities before the attackers do. Rob Joyce has been watching both programs closely. What are some of the other ways we're seeing these frontier labs consider defending against these Mythos-esque capabilities? Certainly there are things like Project Glasswing out there. What are you seeing in terms of what the industry is doing to maneuver and get ready for when it is released?
SPEAKER_01Yeah, so what you have to realize is, the Mythos capability isn't inventing new ways to exploit our software and hardware. It is finding latent flaws, things that were done wrong from the start, exposing them, and figuring out ways to exploit them. So Project Glasswing is Anthropic's program to get this capability to a small number of the entities we rely on most, so they can point it at their software first, find these vulnerabilities and try to fix them. So you'll see Microsoft and Google and Apple and a series of the biggest names on the internet have access and are improving all the stuff we rely on. They're also giving the same capability to a few of the open source foundations, the people that develop Linux, for instance. And the idea is that this small, trusted group will be able to put it to good use and get ahead of the exploits and problems. OpenAI has taken a similar approach. They have a Trusted Access for Cyber program. They have their next-generation model that, according to a UK government analysis, is on the order of Mythos, but that program's a little more available. Companies can self-nominate into the Trusted Access for Cyber program and get access. And we all need to be putting our software through these LLM vulnerability discovery tools, because it's great if Google makes Chrome super locked down, Microsoft gets Windows hardened, and Apple gets iOS super hardened. But if we have a custom app in our company, that can then become the source of vulnerability and exploitation. So we have to take our source code and beat it against these tools so they squeeze out the vulnerabilities. We've been on this journey for a couple of years now, where automated agentic AI is getting better and better at hacking. One set of this is the "look at my code and find vulnerabilities" use.
A second approach, though, is agentic AI becoming an operator. An Israeli startup company called Tenzai in 2026 entered six different CTFs, capture-the-flag hacking contests. And that agent outperformed 99% of the humans it was up against. It was up against 125,000 people, and it beat 99% of them in the basic operations of hacking. These capabilities are getting really, really good. We've crossed the Rubicon: head-to-head performance with skilled humans, real zero days being found, not just CTF problems. There's automated exploit generation at machine speed in this world, and that changes the economics. And adversarial use of these tools is underway today. So you've got to up your security game to live in this more hostile environment.
The Governance Gap
SPEAKER_00So the capability is here, and it's not just Mythos. The next question is governance. Who's responsible for making sure tools like this don't get out in front of any reasonable defense? The labs? The government? And what does oversight even look like when the technology is moving faster than any policy body can respond? Rob, stick with you here for a moment. I want to get back to some of the Glasswing stuff and what OpenAI is doing. These are limited, trusted releases. Do you think that industry self-governance is the way to go? Or do you think the government needs to step up in a bigger or more direct way? Or is it maybe somewhere in between?
SPEAKER_01I think government's got to partner with these companies, but nobody knows the models, what they're capable of, or has the oversight and overwatch of what people are doing with these models better than the companies themselves. So I really think we've got to rely on, and put some faith in, the companies. Both of them have shown they're trying to be good, honest brokers in this space. They are taking different approaches: Anthropic has a very limited, tight Glasswing program with only the top companies they felt they could partner with, and OpenAI is taking a broader view that says, if you are an important cybersecurity company that can help many people, if you are a critical infrastructure provider, if you are an open source provider, we're going to give you access. So the tent for OpenAI is much, much larger. And it remains to be seen how fast Glasswing expands over time. But they're definitely two different approaches today.
SPEAKER_00Are we hearing from anybody within that kind of limited release window? If so, what are we learning from them?
The SOC Under Pressure
SPEAKER_02Again, this is a different kind of readout than they've had before. It's larger, it's more complicated, and it has more exploit-level findings that use multiple vulnerabilities at one time. So you need an AI method to even figure out which of those should be targeted first for patching. For the participants, it's very much about prioritization of patching. They're also building out lab environments to be able to test Mythos in their own environments, because they either have a copy or have a method of testing it, but they need almost a SCIF-level environment. Everyone's afraid of not getting those labs locked down as much as they need to be, given what's going to be probing around in there. So that's another big part. Between the labs and the patching, those are probably the most common things they're asking us about.
SPEAKER_00If these red team and blue team exercises are getting so sophisticated and, potentially, AI-augmented, what does that do to the future of these security teams, these SOCs, in three to five years? What is it even going to look like when much of that can itself be automated?
SPEAKER_01Humans are going to be the oversight and decision-making layer, but machine speed is going to be required to keep up with attacks. We're on a path where eventually it's going to be AI offense versus AI defense. And I really think long-term, AI defense wins. You're going to be able to, as I said, architect your network to give you the advantage, and you're going to have all the signals about what's normal and abnormal, and that'll keep you safe. There's this interim period that's going to be fairly dangerous for people, where we still have tech debt and we still haven't figured out all of the AI defenses and wired them in. That's the place of risk. There's one other area worth talking about, where you can leverage and benefit from the offensive capability we're starting to see emerge. I'm a fan of getting agentic AI continuously red-teaming your networks. We know you're going to be pen tested by the bad guys. I would rather have an AI agent on my team running continuously, figuring out what those AIs will discover, and then putting that into my prioritization matrix of what I have to fix first and why. And I think that's going to be a really strong indication: if it's visible to an AI agent that pretends to be an attacker, it's going to be visible to real attackers. So you've got to address it.
The Cost Of Standing Still
SPEAKER_02I think you'll see more pen testers, of course, more AI-powered pen testing, and different kinds as well. Right now we're trying to check all the right boxes in that: we're rounding up our application pen testers and code scanners, and then, separately, completely different solutions for infrastructure scanning and pen testing as well. So you're going to have a whole range of them, I think. Also, you're talking about the future, and when you talk about SOCs and cyber practitioners as a whole, I think the individual people are changing now and are going to keep transforming over the next few years. They're going to be programmers, because now, if they get Claude Code, they can script and code a jillion times faster than they could before. We're seeing them start to get a lot better at model management for custom AI model solutioning in security teams, where they might use one model to form the solution and then another model or two or three to validate that solution, going back and forth until it's optimized. Cyber practitioners are getting good at that. They're even getting good, and will get good, at token consumption management, the cost of running their custom AI-powered cyber solutions going forward. So yeah, cyber is transforming completely because of what's happening right now.
SPEAKER_00What does that worst case scenario look like in the end if we just do not do anything to prepare for this?
SPEAKER_01Yeah, I'm not sure about worst case, but you know, I look at attackers on this big bell curve. AI tools are gonna make the script kiddies better. They're gonna pull them up and make them able to go after targets they just wouldn't have been able to before, because they didn't have the skills, the training, the technique. Over at the high end, the high-end attackers are gonna be supercharged with more knowledge, with faster capabilities, and this expansive ability to do more than any one expert could have done, at scope and scale. And I think that's the path and the inflection change that we're under. So that's where the attackers are going. On the defender side, the question is how do we make our architectures and infrastructure capable of resisting those movements? And I think that's the challenge for us.
SPEAKER_02I mean, this is a cat-and-mouse game we've been playing forever, honestly. I used to do a presentation on innovation, and the innovation I focused on was fire. Innovation is basically taking components of things that already existed and putting them together into something new. And when we first discovered fire, it caused a lot of problems, I'm sure. And today fire still causes problems, you know, but we keep it under control. And we'll find a way to keep this under control. It's not gonna be easy, but what it'll look like is tighter hygiene, as we've talked about numerous times in this call, and just sheer speed. I think a lot of it comes down to speed and proactiveness, you know, the offensive cyber you're talking about. That's the one thing you can do well before an attack occurs to help you be tighter and ready. All of those fundamentals will be tighter, we'll be quicker, and we'll be more proactive overall.
Cybersecurity Hits The Boardroom
SPEAKER_01And Kent, don't waste a crisis, right? We've got an opportunity here where boards and C-suites are understanding that we're in a new era and there's a new level of threat. So for the security teams and the infrastructure teams who have had projects they've wanted to do to modernize, to drive out risk, to get things into a more secure and manageable state, now may be the time they can get that through the resourcing and justification process. I think this is a good opportunity for all of us to double down and come out the other side stronger, tougher, and harder targets.
SPEAKER_00Rob, Kent, thanks so much for joining. I'm sure we'll be having you both on here again soon to talk about more important topics related to securing AI. So to the two of you, thanks so much.
SPEAKER_01It's great to be here. Thanks, Kent. Appreciate it.
SPEAKER_00Okay, thanks to Rob and Kent for joining. With Mythos, vulnerabilities that used to take days to find now take minutes, and exploits that required elite teams can now be prompted into existence. Rob called what follows Darwinian, and that feels like the right word, because the organizations that will come through this era are the ones moving now, while the threat is visible and the case for investment makes itself. That window is open now, but it won't stay that way forever. So don't waste this crisis. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker, Kara Kuhn, and Amy Ubriaco. Our audio and video engineer is John Knoblock. My name is Brian Felt. Thanks for listening. See you next time.