Chasing Entropy Podcast: Dustin Heywood on Agentic AI, Quantum Risk, and Why Identity Still Breaks First

Dave Lewis, 1Password | Season 2, Episode 3

In this episode of The Chasing Entropy Podcast by 1Password, I speak with Dustin Heywood, known to many as EvilMog, executive managing hacker and senior technical staff member at IBM. The conversation stays grounded in real security work, from password cracking and Active Directory abuse to AI privilege creep and quantum planning. The through line is simple: most security failures start with access, trust, and bad assumptions about how systems behave under pressure.

Heywood’s background explains why he sees the problem this way. He came up through network engineering, military communications, enterprise infrastructure, and offensive security. That path matters because his view of security is operational, not theoretical. He keeps coming back to one point: businesses are not trying to be secure for security’s own sake. They are trying to keep operating. Security has to support that goal or it gets bypassed.

A big part of the episode focuses on agentic AI. Heywood argues that AI is exposing access problems that were already there. Service accounts already had too much privilege. Internal systems already trusted broad integrations. AI agents just make those weaknesses easier to trigger at scale. His main concern is the gap between identity and intent. A user might want an agent to buy concert tickets under a clear budget and time window, but today’s systems rarely encode that level of permission. In practice, the agent often gets broad backend access and can do far more than the task requires.
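To make the identity-versus-intent gap concrete, here is a minimal sketch of what a task-scoped grant could look like, using Heywood’s concert-ticket example. This is not a real agent framework API; the class, field names, and values are all illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class TaskGrant:
    """Hypothetical task-scoped permission: encodes user intent, not backend access."""
    action: str            # the one action the agent may perform
    budget_limit: float    # maximum spend for this task
    not_before: datetime   # start of the allowed time window
    not_after: datetime    # end of the allowed time window

    def permits(self, action: str, amount: float, when: datetime) -> bool:
        """Allow only the named action, under the budget cap, inside the window."""
        return (
            action == self.action
            and amount <= self.budget_limit
            and self.not_before <= when <= self.not_after
        )

# Illustrative grant: buy concert tickets, up to $300, during a one-day window.
grant = TaskGrant(
    action="purchase_tickets",
    budget_limit=300.00,
    not_before=datetime(2025, 6, 1),
    not_after=datetime(2025, 6, 2),
)
```

The point of the sketch is the contrast: a grant like this denies a refund request, an over-budget purchase, or a purchase outside the window by default, whereas broad backend access would allow all three.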

That leads to the episode’s strongest point, about machine identity. Most organizations reason clearly about human access and far less clearly about machine access. That imbalance does not hold up when a company has thousands of employees and tens of thousands of machine identities tied to services, devices, integrations, and automation. If those identities are overprivileged, an AI layer on top of them becomes a force multiplier for existing risk.

The discussion then shifts to quantum threats, and Heywood makes the issue concrete. He is less focused on dramatic “decrypt everything later” scenarios and more focused on the systems around the data. If quantum-capable attacks weaken the trust layers behind OpenID Connect, SAML, certificate authorities, VPN certificates, and federation systems, attackers do not need to break every encrypted file directly. They can go after the identity and key infrastructure that grants access. That is the planning problem security leaders need to understand now.

His advice on crypto agility is practical. Start with inventory. Know where cryptography lives in your environment, how certificates are issued and renewed, and what would have to change if a major algorithm or trust model becomes unusable. He also points out that many companies still struggle with certificate management at a basic level. If certificate rotation is manual, the organization is already behind. Automation is not optional here.
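The "start with inventory" step can be sketched in a few lines: given a certificate inventory, flag anything expiring inside the renewal window so rotation can be driven by automation instead of manual tracking. The hostnames, dates, and 30-day window below are illustrative assumptions, not data from the episode.

```python
from datetime import datetime, timedelta

# Flag certificates due for rotation; anything expiring within this window
# (or already expired) should be renewed automatically, not by hand.
RENEWAL_WINDOW = timedelta(days=30)

# Illustrative inventory entries (names and expiry dates are made up).
inventory = [
    {"name": "vpn.example.com", "not_after": datetime(2025, 7, 10)},
    {"name": "sso.example.com", "not_after": datetime(2026, 1, 15)},
    {"name": "api.example.com", "not_after": datetime(2025, 6, 20)},
]

def due_for_rotation(certs, now):
    """Return the names of certificates expiring within the renewal window."""
    return [c["name"] for c in certs if c["not_after"] - now <= RENEWAL_WINDOW]

print(due_for_rotation(inventory, datetime(2025, 6, 15)))
# prints ['vpn.example.com', 'api.example.com']
```

A real inventory would also record who issued each certificate, which algorithm it uses, and how it is renewed; that last column is what makes an algorithm migration plannable rather than an emergency.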

On credentials, Heywood takes a hard line that is worth adopting: assume every password entered into a remote system will eventually leak. That changes the goal. The answer is not more password theater. The answer is unique credentials, automated rotation where possible, stronger storage, and lower user friction. If security makes daily work harder, people will work around it. He is blunt about that, and he is right.
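"Unique credentials" has a simple mechanical core: each system gets its own cryptographically random password, so a leak from one remote system reveals nothing about any other. A minimal sketch using Python’s standard-library `secrets` module (the system names are illustrative):

```python
import secrets
import string

# Character set for generated passwords; real policies may constrain this
# per system, but the principle is the same.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    """Cryptographically random password, never derived from or reused elsewhere."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct credential per system (names are hypothetical).
credentials = {system: generate_password() for system in ("vpn", "wiki", "ci")}
```

Generation is the easy part; the rest of Heywood’s prescription, rotation and safe storage, is what keeps these credentials from degrading back into shared, long-lived secrets.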

This episode is most useful for security leaders who are dealing with AI adoption, identity sprawl, legacy authentication, or PKI debt and need a clearer way to frame risk. Heywood does not treat security as a checklist exercise. He treats it as a systems problem tied directly to business operations, user behavior, and the cost of getting access control wrong.