Crestvale Newsroom
Can Prisma SASE really govern unseen AI agents?
Welcome to the daily audio briefing on AI, automation, and business technology for professional service firm leaders. Today we're looking at why traditional security stacks are struggling to govern autonomous AI agents. The gap between how these systems work and how your security tools were designed is widening fast, and that gap is turning into a real attack surface. Firms that assume their stack can see everything are about to learn it cannot.

Markets closed lower in the previous session. The S&P slipped and the Nasdaq followed, signaling a cautious tone across tech. The 10-year yield moved up by the close, keeping pressure on financing costs for firms still planning infrastructure upgrades. Bitcoin also moved lower, showing a risk-off mood across digital assets.

Now let's get into the main story. The security industry is racing to convince enterprise buyers that it can govern autonomous agents. Palo Alto Networks is pushing hard on this with its Prisma SASE platform. The pitch is simple: as agents spread through your firm, Prisma can see them, classify them, and enforce policy around them. The problem is that the entire architecture was built for humans. It was not built for software that spawns its own processes, mutates its own behavior, and operates without the context a traditional identity system understands. When an agent appears, acts, and disappears without a fixed identity, the rules that worked for user traffic do not hold. Policies slip, logging gaps open, and risk becomes invisible.

This creates a blind spot inside environments that already depend on GPU clusters. Prisma can inspect traffic on the network, but it cannot see what happens inside the GPU workloads where many agents actually run. Palo Alto is trying to solve that with Prisma AIRS, but the inspection point is in the wrong place. You cannot protect what you cannot see. At the same time, buyers are not consolidating tools. They are adding more.
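To make that identity gap concrete, here is a minimal sketch, with hypothetical names and not any vendor's actual API, of why a policy engine keyed to fixed identities goes blind the moment an ephemeral agent shows up:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Actor:
    identity: Optional[str]  # None for an agent spawned with no registered identity
    kind: str                # "human" or "agent"

# A traditional stack keys policy to fixed identities like these.
POLICIES = {
    "alice@example.com": {"allow_outbound": True},
    "svc-billing":       {"allow_outbound": False},
}

def evaluate(actor: Actor) -> str:
    """Decide what to do with a network action from this actor."""
    policy = POLICIES.get(actor.identity)
    if policy is None:
        # An ephemeral agent matches no rule. Whatever the stack does here
        # (often default-allow) happens without policy or logging context:
        # this is the blind spot described above.
        return "unmatched"
    return "allow" if policy["allow_outbound"] else "deny"

print(evaluate(Actor("alice@example.com", "human")))  # allow
print(evaluate(Actor(None, "agent")))                 # unmatched
```

The point of the sketch is that nothing here is malicious or even unusual; the agent simply falls outside the identity model, so every rule written for users silently fails to apply.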
Zscaler sees the same opportunity. AI-native security startups see it too. The market is expanding, not shrinking, which means every firm will be stitching together more systems rather than fewer. If you deploy agents without identity, telemetry, and control, you are depending on good luck. That is not a strategy.

Why this matters is simple. Autonomous agents are becoming a major attack surface, and most firms cannot even tell how many they have running today. If these agents are making decisions, moving data, or interacting with client systems, and your stack is blind to that, you are flying without instruments.

Meanwhile, the Open Web Application Security Project released a new guide that reframes how organizations should think about AI risk. The focus moves away from jailbreaks and toxic outputs and toward the full data pipeline: training data, fine-tuning sets, feature stores, every system that touches that data. This guidance treats the pipeline itself as the blast radius, and that is the correct lens. It also pushes for continuous verification of model behavior, not a one-time audit, and it emphasizes runtime monitoring. For professional service firms, this is the direction clients and regulators are heading. If you use AI, you must prove you protect the data behind it. This gives firms a practical baseline to work from.

Now let's look at the shift happening on the threat side. Attackers are weaponizing vulnerabilities in days. They used to take years. Automation has erased that cushion. A small team can now run campaigns that once required nation-state resources. Offense has already adopted AI. Defense must match that pace or accept a steady stream of avoidable incidents. Faster threat detection and automated response workflows are no longer nice to have. They are required.

There is a brighter story on the operations front. Globe Business rebuilt their workflows and used AI to cut support workload by a third.
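The continuous-verification idea from that pipeline guidance can be sketched very simply: fingerprint each data artifact when it is approved, then re-check the fingerprints on a schedule instead of auditing once. This is an illustrative sketch with made-up artifact names, not tooling from the OWASP guide:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of a pipeline artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# Baselines recorded when each artifact was approved (names are hypothetical).
BASELINES = {
    "training_set_v1":      fingerprint(b"approved training rows"),
    "feature_store_export": fingerprint(b"approved features"),
}

def verify(name: str, current: bytes) -> bool:
    """Run on a schedule: True only if the artifact is unchanged since approval."""
    return BASELINES.get(name) == fingerprint(current)

print(verify("training_set_v1", b"approved training rows"))  # True
print(verify("training_set_v1", b"poisoned rows"))           # False
```

Real pipelines would fingerprint files or table snapshots rather than byte strings, but the discipline is the same: the check runs continuously, and any drift in training or fine-tuning data surfaces as an alert instead of a surprise in model behavior.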
They treated data silos as a real cost and rebuilt the processes around them. Their system classifies nearly all incoming cases, which freed the support team from constant triage. They even extended the same discipline into client-facing tools. The lesson is clear: AI pays off only when paired with process cleanup. If the workflow is broken, AI will not save it. If the workflow is tight, AI amplifies it.

Here is what else is worth knowing today. Claude added scheduled tasks, which shows that assistants are moving from novelty to dependable workflow automation. IBM is leaning into hybrid cloud and watsonx, a sign that regulated workloads will stay mixed between on-premise and cloud for years. AT&T is rolling out internal agents for HR and operations, a reminder that the first wins from AI often come from back-office processes that still depend on manual work. Iran-linked hackers escalated blended attacks that mix digital and physical moves, a sign that private-sector targets are being stress-tested in new ways. Foundation Capital warned that AI labs are not likely to own runtime security, so firms should plan for a fragmented stack instead of waiting for a single vendor to solve the problem.

Here is the takeaway: if you deploy AI agents before you can identify them and monitor them, you are adding attack surface faster than you are adding value. If this was useful, follow the Crestvale Newsroom daily podcast so you don't miss tomorrow's briefing. Thanks for listening.