The Macro AI Podcast

Anthropic Mythos & Project Glasswing: The Cybersecurity Operating Model Is Changing

The AI Guides - Gary Sloper & Scott Bryan Season 2 Episode 75


In this episode of the Macro AI Podcast, Gary Sloper and Scott Bryan break down one of the most important—and not fully understood—developments in artificial intelligence and cybersecurity: Anthropic’s Mythos model and Project Glasswing.

Mythos is not just another AI model. It represents a fundamental shift from human-limited cybersecurity to compute-driven vulnerability discovery, where AI systems can autonomously analyze code, identify zero-day vulnerabilities, and generate working exploits at unprecedented speed. 

But the real story isn’t just the capability—it’s how it’s being controlled. 

Anthropic’s Project Glasswing is a first-of-its-kind defensive initiative that restricts access to Mythos and deploys it across a coalition of the world’s most critical technology providers—including major cloud platforms, infrastructure companies, and cybersecurity leaders. The goal: give defenders a critical head start to identify, triage, and patch vulnerabilities before similar capabilities become widely available. 

Gary and Scott explain: 

  • What Mythos actually is (and why it’s more than just “AI for coding”)  
  • How agentic AI systems are changing cybersecurity workflows  
  • Why the real risk is not AI attacks—but the collapse of the vulnerability response window  
  • What Project Glasswing is doing to prevent a large-scale cyber crisis  
  • Why over 99% of discovered vulnerabilities remain unpatched and what that means for enterprises  
  • How AI introduces entirely new attack surfaces, including tool access, prompt injection, and data exposure  

Most importantly, they provide a clear, executive-level framework for what leaders must do now—from accelerating patch cycles and enforcing AI governance, to rethinking vendor risk and operational security models. 

This episode is designed for CIOs, CISOs, CTOs, and business leaders who need to understand: 

How AI is fundamentally reshaping cybersecurity—and what it will take to stay ahead. 

 

 

Send a Text to the AI Guides on the show!


About your AI Guides

Gary Sloper

https://www.linkedin.com/in/gsloper/


Scott Bryan

https://www.linkedin.com/in/scottjbryan/

 

Macro AI Website

https://www.macroaipodcast.com/

Macro AI LinkedIn Page:  

https://www.linkedin.com/company/macro-ai-podcast/


Gary's Free AI Readiness Assessment:

https://macronetservices.com/events/the-comprehensive-guide-to-ai-readiness


Scott's Content & Blog

https://www.macronomics.ai/blog





00:00
Welcome to the Macro AI Podcast,  where your expert guides Gary Sloper and Scott Bryan navigate the ever-evolving world of artificial intelligence.  Step into the future with us  as we uncover how AI is revolutionizing the global business landscape  from nimble startups to Fortune 500 giants.  Whether you're a seasoned executive,  an ambitious entrepreneur,

00:27
or simply eager to harness AI's potential,  we've got you covered.  Expect actionable insights,  conversations with industry trailblazers  and service providers,  and proven strategies to keep you ahead in a world being shaped rapidly by innovation.  Gary and Scott are here to decode the complexities of AI  and to bring forward ideas that can transform cutting-edge technology  into real-world business success.

00:57
So join us, let's explore, learn and lead together. Welcome to the Macro AI Podcast. I'm Gary Sloper and I'm here as always with my cohost, Scott Bryan. If you're a CIO, a CISO, or sitting on a board right now, you're probably paying attention to the recent news in the world of cybersecurity. You may have even seen headlines about an AI model that can find and exploit software vulnerabilities.

01:23
Yeah, there are definitely a lot of headlines out there about a new model that Anthropic has publicly described, and that's Mythos. And that's the one that really forces us to rethink cybersecurity from the ground up. It's the first model in the 10 trillion parameter class, with an estimated training cost of about $10 billion. Yeah. And there's more to the story than just the capabilities of Mythos. It's not just what the new Anthropic model can do. It's...

01:53
how they are controlling it and helping organizations prepare for the future. And that's Project Glasswing. Yeah. And today we're going to try to give you a clear explanation: what Mythos actually is, what Project Glasswing is doing, and what this means for your business now, near term, and going forward. Yep. Sounds good. So Scott, let's start with Mythos and what Mythos is, also known as Claude Mythos Preview.

02:22
And it's described as a general purpose frontier model with strong agentic coding and reasoning capabilities on the platform. That description actually really undersells what's happening, because if you just hear that, you think, okay, it's just better coding. Yeah, and it is. But I think the technical reality is that the model's software engineering bench score, which is a coding benchmark,

02:50
reached 93.9%, which is huge. For comparison, the previous top tier models were struggling just to break 50% a year ago. So that jump in coding skill is what makes Mythos so formidable in cybersecurity. It simply understands software better, and it's obviously much faster than humans are. And Mythos isn't just generating answers and code, it's actually...

03:18
better at executing complete workflows. And I think that's a key distinction as well. Yeah, I completely agree. And based on the public material that's out there, here's what Mythos actually does in a cybersecurity context, in case you're wondering. It's placed into an isolated environment with a code base, and it reads and analyzes the code in real time. So it's forming hypotheses about things like vulnerabilities, it's running experiments, debugging, and

03:48
generates a working proof of concept in that exploit realm. So think of it that way. And then it takes all that and validates it, all in the loop, and at a very high speed compared to what we can do as humans. Right. Yeah, much faster, like we've been saying. And this is AI behaving like a security researcher that's running an investigation. And I think that anyone who's been following machine learning and AI for the last few years has known that this is coming.

04:18
You know, whether it was going to be a US firm or any number of other actors around the world, it's been coming, and now it's here. Yeah, exactly. And once you connect the dots on all of that, you understand why this is such a big deal, and why Anthropic realized they had to think carefully about how to release Mythos once they began to test its capabilities. Yeah, agreed. So let's talk for a minute about the actual capability shipped, because

04:48
up to now, cybersecurity has been really human bounded, limited by human talent, time, and expertise. Yeah, that's a good point. And now it becomes compute bounded, which means it scales, runs continuously, doesn't get tired like I do, and doesn't bottleneck the same way we as humans would in our daily lives. Yeah. So I think that's a good summary of the shift.

05:17
So really this is highly capable automation applied to one of the most complex domains in technology, and that's cybersecurity. Yeah, that's a good point. And so if we were to connect this to reality and what actually changes, we should probably start there for the listeners. Yeah. Jump on that. So there are still super skilled hackers out there, and

05:43
well-funded teams that can complete the tasks Mythos can without the compute power. I think that with Mythos-level capability, the most important concept is that the vulnerability window is now exponentially compressed. Yeah, that's a good point. I mean, historically, if you think about a few things: discovery takes time when there is a vulnerability, exploit development takes time, and deployment takes time.

06:11
So now discovery and exploit creation can happen within that same workflow, so that timeline from an exploit standpoint really collapses. Yeah, exactly. And then there's also a second order effect there: volume. Volume is going to just explode, because now you can run this process across massive code bases, multiple massive code bases, continuously. Yeah, that's a good point. And we probably shouldn't forget, and we

06:40
probably really need to address the real risk directly, because this is where executives will most likely want some clarity, and where a lot of the media, I think, gets this wrong. The risk is not just that AI is going to hack your company. The real risk here is that your organization cannot respond as fast as the vulnerabilities attacking your organization are being discovered. Right. Yeah. 100%. And Anthropic explicitly states something important to keep in mind.

07:10
They noted that over 99% of vulnerabilities that are discovered have not yet been patched. Yeah, and that's a big one. I mean, it kind of needs to sink in when considering the level of capability that Mythos is supposed to have. Exactly. Yeah, it kind of tells you everything. With the Mythos level intelligence and capability, like we said, the bottleneck is no longer discovery. It's really immediately

07:39
transitioned right into everything you need to do: real time triage, validation, patching throughput. Yeah, that's actually a good point. When you think of things like triage, triage overload can quickly become systemic. So you're not dealing with just one vulnerability, we just talked about that. I mean, you could easily be dealing with thousands of vulnerabilities, and your organization has to do a handful of things, right? You have to validate them, prioritize them, route them, fix them.
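The validate, prioritize, route, fix loop Gary describes can be pictured as a simple sorted queue. This is a minimal sketch only: the `Finding` type and the exploitability-first scoring rule are assumptions made up for illustration, not part of Mythos, Glasswing, or any real product.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    exploit_confirmed: bool   # was a working proof of concept validated?
    asset_criticality: int    # 1 (low) through 5 (business critical)

def priority(f: Finding) -> int:
    """Exploitability-first scoring: a confirmed working exploit
    outweighs asset criticality on its own."""
    return (10 if f.exploit_confirmed else 0) + f.asset_criticality

def triage(findings: list[Finding]) -> list[Finding]:
    """Sort validated findings so confirmed, high-criticality issues
    get routed to remediation teams first."""
    return sorted(findings, key=priority, reverse=True)
```

At thousands of findings per scan, a deterministic ordering like this replaces the manual "which ticket first" meeting; the hard part in practice is the validation step that feeds `exploit_confirmed`.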

08:09
It's a common problem today. Now it's just amplified, as we've kind of been saying here in this episode. Yeah. And then, regarding patch velocity, the next thing that you mentioned: patch velocity becomes a competitive capability. So really, security moves into operational speed, or real time. Well, exactly. The companies that patch fastest win, or at least stay a half step in front of the threats. Exactly.

08:39
You know, and it's also important to understand that AI introduces new attack surfaces. And I think this is really critical for our listeners. Like we've talked about in prior episodes, when you deploy AI systems, tool access becomes a privilege boundary, prompt injection becomes a real attack vector, and sensitive data can be exposed. So now you have more vulnerabilities and more ways to exploit them. Right. So the other thing we talked about earlier, and kind of

09:09
switching topics a little bit, we should probably jump into this further, Scott, and that's Project Glasswing. Yeah. I think the media hasn't been 100% clear on the relationship between Mythos and Glasswing. Glasswing is not a product. Project Glasswing is a major collaborative cybersecurity initiative launched by Anthropic, and it was created to address this current threshold moment in AI,

09:39
which is, you know, the development of models that are so good at coding and workflows that they can autonomously find and exploit super complex software vulnerabilities better than humans. And instead of releasing this capability right out to the public, Anthropic put a pause on it and decided to form Project Glasswing as an initiative to give the good guys a head start. Yeah, the goal is really to...

10:07
proactively scan and harden the world's, I would say, most critical software environments. So think of banking systems, power grids, healthcare networks fall into that category, and open source infrastructure. And this is all before malicious actors can develop AI-driven tools to attack those critical environments. Yeah, so Anthropic really initiated a specific move, and that is

10:35
giving the defenders out there, the ones that you just mentioned, time to secure all critical systems before this capability spreads, or before some other third party actor starts using their own. Yes, that's correct. And then if you look, there's really a serious coalition that includes a lot of the major vendors and players in the industry to help reinforce this at scale. In the cloud

11:02
environments of CSPs, you have AWS, Google, Microsoft, and platforms such as Apple, which obviously a lot of people use. You have traditional infrastructure providers, think of the networking side, Cisco and Broadcom. And then when it comes to security, there are partners such as CrowdStrike and Palo Alto. And then, you know, I mentioned open source earlier, you have the Linux Foundation. And then when you start

11:30
thinking about how all of this comes down to costs and fueling the economy from a banking standpoint, JPMorgan, one of the largest financial institutions in the world, is part of this coalition as well. So you have a nice, well-rounded coalition trying to help with the reinforcement here at scale. Yeah. When you look at those players, it's kind of a listing of the backbone of the global digital economy. Yeah. Good way to put it. Yeah.

11:57
So I think, according to what we know about Glasswing, let me talk a little bit about what actually happens inside of Glasswing. And what they laid out was actually kind of a five-step approach. So step one, they're going to allow the coalition members to scan at scale, leveraging Mythos to analyze massive code bases. Step two, generate the exploits, so it'll produce working proof of concept vulnerabilities.

12:27
And then it moves into step three, which is the human point, and that's human validation, where actual security experts, humans and machines, are going to take a look at the findings and confirm them all. And then moving into step four, as they laid it out, is the patch development, so the fixes are created and deployed. And then there's this controlled disclosure phase, which is a 90 day window where they're actually disclosing the vulnerabilities and the patches out to the world.
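The five steps Scott walks through can be written down as a simple pipeline definition. The stage names and the 90-day window come from the episode; the enum, the function, and its field names are illustrative assumptions, not any real Glasswing interface.

```python
from datetime import date, timedelta
from enum import Enum, auto

class Stage(Enum):
    SCAN = auto()                   # 1: coalition members analyze code bases at scale
    EXPLOIT_GENERATION = auto()     # 2: working proof-of-concept exploits are produced
    HUMAN_VALIDATION = auto()       # 3: security experts confirm the findings
    PATCH_DEVELOPMENT = auto()      # 4: fixes are created and deployed
    CONTROLLED_DISCLOSURE = auto()  # 5: 90-day controlled disclosure window opens

PIPELINE = list(Stage)  # findings move through the stages strictly in order

def disclosure_date(patch_shipped: date, window_days: int = 90) -> date:
    """Public disclosure lands only after the controlled 90-day buffer."""
    return patch_shipped + timedelta(days=window_days)
```

The point of modeling it this way is that every finding carries a stage, so nothing jumps from exploit generation straight to public disclosure without the human validation and patch steps in between.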

12:56
And they'll be allowing some buffer time in there. Yeah. So this is really an industrialized vulnerability discovery and remediation pipeline, it looks like. Perfect. Yeah. And it's industrialized with serious urgency. There have been a lot of conversations at very high levels about this, and Glasswing is really solving one problem: how do you prevent a massive

13:23
vulnerability discovery event from turning into a global exploit event in real time? Right, before those vulnerabilities become widely accessible. Yeah. And I think the worst case scenario would be pretty obvious: AI discovers vulnerabilities faster than the world, meaning the software suppliers, businesses, and institutions, can actually go out and fix them. Glasswing is trying to prevent that moment.

13:53
Yeah, and I think when you look at what Glasswing does not solve, we should take that into consideration too, really more as an important reality check. Glasswing does not stop the capability from spreading, it doesn't fix enterprise patching delays, and it doesn't prevent misuse of artificial intelligence. So if you're slow to address patching internally, that's on you, not on Project Glasswing. You still own the risk.

14:23
Glasswing is not a catchall here. Yeah, exactly. So I think at this point, having gone through all that, right now there's really a controlled deployment of Mythos out into the marketplace. Yeah. I think you can look at this as gated access and structured governance with human oversight for Glasswing. Yeah. Perfect. And importantly, the defenders are getting the capability first, which will solve a lot of

14:52
problems, or rather prevent them. Yeah. Good point. All right. So as usual, let's touch on what leaders should actually do at this point. So Scott, what are your first thoughts? Yeah, I think I'll just touch on where we are right now: we've got an operating model shift. So first, inventory your AI agents like privileged users. You're going to need to know where AI has access to tools.

15:19
What systems can it interact with, and what actions can it take? Yes, treat it like identity and access management. Yep, exactly. And then enforce the no uncontrolled agency rule. So no AI system should be able to modify anything in production. It shouldn't be executing changes or accessing sensitive systems without human approval, without logging, without rollback capabilities. So all those things that you need to do to stop

15:49
and analyze a problem. Yeah. And then another one would be, you do have to build forensic-grade logging. So you're going to need prompt logs, tool execution logs, and complete decision trails throughout the life cycle. Yeah. Because when something goes wrong, you need to investigate it like any traditional incident that could occur. Exactly.
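One way to picture forensic-grade logging of tool executions is a wrapper that emits a structured trail entry for every call, including failures. This is a minimal sketch under assumed names (the `audited` decorator, the record fields); a real deployment would also capture prompts and use tamper-evident storage.

```python
import json
import logging
import time
from functools import wraps

audit_log = logging.getLogger("ai.audit")

def audited(tool_fn):
    """Record a complete decision-trail entry for every tool call,
    success or failure, so incidents can be reconstructed later."""
    @wraps(tool_fn)
    def wrapper(*args, **kwargs):
        record = {
            "tool": tool_fn.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
            "started": time.time(),
        }
        try:
            result = tool_fn(*args, **kwargs)
            record["outcome"] = "ok"
            return result
        except Exception as exc:
            record["outcome"] = f"error: {exc!r}"
            raise  # the failure is logged, not swallowed
        finally:
            audit_log.info(json.dumps(record))
    return wrapper

@audited
def lookup_customer(customer_id: str) -> dict:
    # Stand-in for a real tool the agent is allowed to call.
    return {"id": customer_id, "tier": "gold"}
```

Because the `finally` block always runs, even a tool call that raises an exception leaves a log line behind, which is exactly the property an incident investigation needs.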

16:17
And then another thing is security teams and IT teams need to redesign vulnerability management for scale. I think this is critical. You're going to need automated triage, automated deduplication, and exploitability-based prioritization. Yeah, I agree. Because volume is the problem now, like we talked about earlier. And I'll take it to the next few. So if you think of

16:44
a handful of other areas that you just need to focus on, one is pressuring your vendors. It's really non-negotiable. And what I mean by that is, ask your vendors a handful of questions. How are you preparing for AI-driven vulnerability discovery? What are your patch SLAs to address vulnerabilities or updates? And how do you handle sensitive exploit disclosures? So if your vendors are kind of

17:13
sidestepping these three questions, that should raise a red flag. Yeah, and you can't assume that they are doing it. So yeah, I agree. Pressure the vendors. Yeah. And you should be asking these anyway. Accelerate patch SLAs on critical systems. So when we're talking about this, what you really need to focus on is the operating system, identity, browsers, and core libraries within the platform. So

17:40
when you are having that conversation, you definitely want to accelerate the patch SLAs there. Then, adopt AI security frameworks in use. So think of things such as the NIST AI Risk Management Framework, which has been around for a long time. So has OWASP: the OWASP LLM Top 10, they have that list. And then secure SDLC frameworks. So those three key areas are really critical for adopting AI security.

18:08
Yeah. And you mentioned the OWASP LLM Top 10. Just to highlight for the listeners, it's a standards awareness document that identifies the most critical security risks when developing and deploying generative AI. Yeah, good point. And the last action for leaders to prepare in this new world is to run AI-driven attack simulations. Many of you, I think, have been doing this for a long time, not on AI, but just other attack simulations.

18:37
So simulate AI-assisted vulnerability discovery, simulate response workflows, simulate patch timelines. Those three key areas should get you started to really harden the environment. Yep. Good stuff. Yeah, just to start to wrap this one up, I think the companies that are going to win here are not the ones with the best AI. They're the ones with the fastest secure operations, because this is really a new landscape here.

19:05
Yeah. And this is the beginning of AI becoming fully integrated into cybersecurity infrastructure itself. So it's kind of that first step, and we'll see changes in the future. Yeah. And just to mention Glasswing again and close on that, I think this is the first example of a controlled rollout of a dangerous capability, and probably a model for more to come. Yeah. Good point. So the bottom line here is, Mythos is not the story. Glasswing is not even the full story.

19:34
The story is this: cybersecurity is moving from human limited to system limited. So think about it that way. Yeah. Good summary. And I think your ability to operate in that world will determine your risk and your resilience, and ultimately, you know, your business competitiveness or even survival. Way to end the episode. I like it.

19:59
So that's it for today's episode of the Macro AI Podcast. If you found this useful and informative, please share it with your network. You can reach out to Scott and me on LinkedIn; we have our links within the show notes. And until next time, thanks for joining, and we'll see you soon. Yep, thank you.