AIxEnergy

The Cognitive Grid Part I: Before AI, the Grid Already Learned to Judge

Brandon N. Owens Season 1 Episode 12

Artificial intelligence didn’t suddenly arrive in the power system. It arrived quietly—through decades of automation, control systems, and institutional delegation.

In this first episode of a four-part series, host Michael Vincent sits down with Brandon N. Owens, founder of AIxEnergy and author of The Cognitive Grid, to trace a deeper and more unsettling story than the usual AI narrative. This is not a conversation about futuristic intelligence replacing humans. It is a conversation about how judgment itself moved into infrastructure—long before anyone used the language of AI.

The episode begins with a simple premise: modern power systems already act faster than human judgment can intervene. Long before machine learning entered the conversation, the grid evolved through layers of sensing, telemetry, supervisory control, and automated coordination. Each layer improved reliability. Each layer also quietly reshaped where decisions actually happen.

As Owens explains, the most consequential shift was not automation replacing operators, but automation curating the decision space—determining which signals mattered, which deviations demanded attention, and how long human intervention could safely be deferred. Operators remained present, but authority began to migrate. Judgment did not disappear. It was reorganized.

The conversation moves through the historical inflection points that made this migration visible only in hindsight: the rise of supervisory control and data acquisition, the emergence of automatic generation control, and the major North American blackouts of 1965, 1977, 1996, and 2003. These failures are treated not as technical anomalies, but as governance stress tests—moments when institutions were forced to reconstruct decisions that had already been embedded in machinery.

A central theme emerges: governance almost always trails capability. Systems become indispensable because they work. Because they work, they become harder to inspect in real time. When failure finally occurs, legitimacy is tested after the fact—when responsibility is already diffuse and authority difficult to locate.

This episode argues that the real risk of AI in critical infrastructure is not runaway intelligence or loss of human control in the cinematic sense. The risk is quieter and more structural: authority migrating ahead of governance, judgment becoming opaque, and institutions encountering consequences before they have made permission explicit.

By grounding the discussion in the history of the electric grid—one of the most mature and consequential infrastructures in modern society—this episode makes a broader claim: if we cannot make machine-mediated judgment legible, bounded, and accountable here, we will struggle to do so anywhere.

This is not a warning about the future. It is an explanation of what already happened—and why it matters now.

In Episode 2, the series moves into the era that promised intelligence and often delivered instrumentation: the Smart Grid, and how that gap created conditions for AI to enter as the next layer of mediation.


Host: Welcome to AIxEnergy. I’m your host, Michael Vincent.

Host: Today, we’re starting a four-part conversation with Brandon N. Owens—founder of AIxEnergy and author of The Cognitive Grid, available now.

Host: Across this series, we’ll walk through the book’s central argument. This isn’t a story about AI arriving as a sudden solution to power system challenges. Rather, it’s a story about something much older: feedback, supervision, and authority moving quietly into critical systems.

Host: Brandon, welcome.

Author: Thanks, Michael. You’re right—the book isn’t written for the AI hype cycle. It’s written for integration: how to bring AI into the power network safely, credibly, and in a way institutions can stand behind.

Author: Let me start by being clear about where things are in 2026. AI is still mostly deployed in pilot projects and narrow applications, not broadly inside utility operations. But the trajectory is visible. The technology is moving fast, and at some point—possibly soon—artificial intelligence will touch operational decision-making.

Host: Most people talk about reliability, cost, and efficiency. Those are important. But the deeper question is permission, not capability. When a system can sense conditions, interpret signals, and act faster than human judgment can intervene, the question becomes how do we preserve accountable authority?

Host: That word—judgment—can sound philosophical until you define it. In your use, it isn’t “moral awareness” in a machine. It’s real-world tradeoffs under constraint: who gets served first, what risks are tolerated, and which harms are treated as acceptable.

Author: Exactly. If a city loses power during a storm, restoration order is judgment. If load must be shed to preserve stability, the choice of where to shed is judgment.

Host: And even before a system is “autonomous” in the popular sense, it can reshape the decision space. It can make some actions appear available and leave others out of view. The institution still decides, but the set of options may already be curated by the AI system’s logic.

Host: And your claim is that this migration happens quietly. Not by decree. By accumulation.

Author: Yes. As systems prove reliable, they become indispensable. As they become indispensable, they become harder to inspect in real time. Governance tends to trail behind capability.

Author: Legitimacy gets tested only after failure—when institutions are forced to reconstruct decisions that were already embedded in the machinery. That pattern isn’t new. What’s different now is the speed, scale, and centrality of AI systems approaching the operational edge.

Host: So this isn’t the first time automation changed what it meant to operate a power system. You argue the roots go back decades—long before anyone used the language of AI.

Host: Walk us into that history.

Author: If you want to understand what’s coming, you have to understand what already happened. The grid didn’t jump from manual operation to intelligent autonomy—it moved through layers: sensing, telemetry, and control systems.

Host: You point to a key inflection: beginning in the 1960s, supervisory control and data acquisition changed what “operate” meant.

Host: What changed, exactly?

Author: Supervisory control didn’t just let operators see the system. It began interpreting the system. It determined which signals mattered, which deviations demanded attention, and how long human judgment could safely be deferred.

Author: Operators remained present, but no longer central. They watched abstractions of physical reality unfold through layers of software, trusting that machine logic would surface relevance before consequence escaped control.

Author: Judgment didn’t vanish. It was reorganized.
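
To make that curation concrete, here is a minimal sketch, in Python, of a supervisory layer filtering raw telemetry through deadbands and alarm thresholds so that only some deviations ever reach the operator. The point names, limits, and thresholds are hypothetical, chosen for illustration rather than drawn from the episode or any real system.

```python
# A toy illustration of how a supervisory layer "curates the decision space":
# raw telemetry passes through deadbands and alarm thresholds, and operators
# see only the deviations the software deems worth attention.
# Point names, nominal values, and limits are hypothetical.

from dataclasses import dataclass

@dataclass
class TelemetryPoint:
    name: str
    value: float
    nominal: float
    deadband: float      # deviations smaller than this are never surfaced
    alarm_limit: float   # deviations at or beyond this are flagged as alarms

def surface(points: list[TelemetryPoint]) -> list[tuple[str, str]]:
    """Return only the (point, severity) pairs the operator will actually see."""
    shown = []
    for p in points:
        deviation = abs(p.value - p.nominal)
        if deviation < p.deadband:
            continue                      # suppressed: treated as normal noise
        severity = "ALARM" if deviation >= p.alarm_limit else "WATCH"
        shown.append((p.name, severity))
    return shown

points = [
    TelemetryPoint("bus_voltage_kv", 229.1, 230.0, deadband=2.0, alarm_limit=8.0),
    TelemetryPoint("line_flow_mw",   512.0, 450.0, deadband=10.0, alarm_limit=50.0),
]
print(surface(points))   # -> [('line_flow_mw', 'ALARM')]
```

The operator never sees the suppressed bus-voltage reading; the software has already decided it does not matter.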

Host: In the book, you describe supervisory control and data acquisition’s technical evolution—better bandwidth, better encoding, improved polling, and expanding visibility at a distance.

Host: Then you make an important point: early systems weren’t “networks” in the modern sense. They were islands of automation.

Host: But by around 1970, supervisory control evolved from convenience to linchpin—operators were coordinating sensors and remote mechanisms across hundreds of miles.

Author: Right. Local protection against instability becomes a nervous system. The system can detect imbalances and intervene before they cascade.

Author: Automatic generation control can signal plants to ramp within seconds, stabilizing frequency before operators fully register what triggered the event. In daily operation, it worked—and it worked well.
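
As an illustration of the loop Owens is describing, the sketch below computes a simplified area control error from frequency and tie-line deviations and spreads a corrective ramp across two generating units. The constants, unit names, and numbers are illustrative assumptions, not figures from the episode.

```python
# Toy sketch of an automatic-generation-control (AGC) cycle: the balancing
# authority computes an area control error (ACE) from frequency and tie-line
# deviations, then nudges plant setpoints to drive ACE back toward zero.
# All constants and values here are illustrative, not from any real system.

NOMINAL_HZ = 60.0          # scheduled frequency
FREQ_BIAS = -50.0          # frequency bias, MW per 0.1 Hz (negative by convention)
GAIN = 0.3                 # fraction of ACE corrected per control cycle

def area_control_error(freq_hz: float, tie_flow_mw: float,
                       scheduled_tie_mw: float) -> float:
    """ACE = interchange deviation minus the frequency-bias contribution."""
    delta_f = freq_hz - NOMINAL_HZ
    return (tie_flow_mw - scheduled_tie_mw) - 10.0 * FREQ_BIAS * delta_f

def dispatch_correction(ace_mw: float, units: dict[str, float]) -> dict[str, float]:
    """Split a corrective ramp across units in proportion to their output."""
    total = sum(units.values())
    correction = -GAIN * ace_mw
    return {name: correction * (mw / total) for name, mw in units.items()}

# One control cycle: frequency has sagged to 59.98 Hz and exports run high.
units = {"plant_a": 400.0, "plant_b": 250.0}
ace = area_control_error(freq_hz=59.98, tie_flow_mw=120.0, scheduled_tie_mw=100.0)
print(round(ace, 1), dispatch_correction(ace, units))
```

The point is the timescale: a loop like this runs every few seconds, well inside the window in which a human could deliberate about the cause.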

Host: That line—“it worked well”—matters. Because the story isn’t “automation is bad.” The story is that automation becomes indispensable, and therefore authoritative, even if we keep calling it a tool.

Author: Yes. And when it fails, the failure can be silent in the very layers people rely on to perceive reality.

Author: That’s not just a technical vulnerability. It’s an institutional vulnerability—because it makes judgment harder to reconstruct and responsibility harder to locate.

Host: You also argue that supervisory control and data acquisition’s early limits were institutional as much as technical. A system confined to a single utility can’t fully govern a grid that behaves regionally—across jurisdictions and across responsibility boundaries.

Host: So the first cognitive layer is also the first boundary problem.

Author: Exactly. Interconnection turns local control into shared consequence. The machine’s capacity to act quickly becomes real. But the institution’s capacity to coordinate authority across boundaries becomes the limiter. That mismatch can break the system.

Host: Which takes us to blackouts.

Host: You write that grids—and the institutions charged with governing them—almost never reform in moments of calm. The modern regime of bulk power reliability emerges through rupture.

Host: You frame four blackouts—1965, 1977, 1996, and 2003—as inflection points. As governance stress tests.

Host: What do you mean by that?

Author: Blackouts force reconstruction. They force institutions to answer questions they can often avoid during normal operation: Who had visibility? Who had authority? What was automated? What was discretionary? Where did responsibility reside when actions crossed organizational boundaries?

Author: When the lights go out across regions, catastrophe becomes the only teacher left.

Host: Let’s start with 1965.

Author: Sure. The 1965 Northeast blackout began in a place that feels too small for the consequence it triggered—a single protective device in Ontario.

Author: A relay tripped a transmission line when power flow exceeded a setting. Load shifted to remaining lines. Those lines overloaded. The system unraveled faster than human coordination could respond.
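
A toy model of that unraveling, with hypothetical line names, flows, and limits: trip any line carrying more than its limit, push its displaced flow onto the surviving lines, and repeat until nothing is overloaded or nothing is left.

```python
# Minimal illustration of the cascade pattern described above: when one line
# trips on overload, its flow shifts onto the survivors, which may then exceed
# their own limits and trip in turn. Line names, flows, and limits are
# hypothetical; real power flow redistributes by network physics, not evenly.

def cascade(flows_mw: dict[str, float], limits_mw: dict[str, float]) -> list[str]:
    """Repeatedly trip the most-overloaded line and redistribute its flow
    evenly among the remaining lines, until none are overloaded or all trip."""
    tripped = []
    flows = dict(flows_mw)
    while flows:
        overloaded = [l for l in flows if flows[l] > limits_mw[l]]
        if not overloaded:
            break
        victim = max(overloaded, key=lambda l: flows[l] - limits_mw[l])
        shed = flows.pop(victim)
        tripped.append(victim)
        if flows:  # survivors absorb the displaced flow (even split, for simplicity)
            share = shed / len(flows)
            for line in flows:
                flows[line] += share
    return tripped

# A mis-set relay limit on line_1 starts the sequence.
flows = {"line_1": 380.0, "line_2": 300.0, "line_3": 290.0}
limits = {"line_1": 375.0, "line_2": 420.0, "line_3": 410.0}
print(cascade(flows, limits))   # -> ['line_1', 'line_2', 'line_3']
```

Even in this crude even-split version, one mis-set limit on the first line is enough to take all three out.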

Host: And the post-incident investigation revealed something deeper than a mis-set relay. Control and visibility weren’t aligned to the true topology of interconnection. Operators lacked real-time visibility across the entire system.

Host: And the response wasn’t just technical tuning. It was institutional—regional coordination, and eventually the North American Electric Reliability Council. A first concerted effort to pair technical control with coordinated oversight across utility boundaries.

Author: Yes. Catastrophe clarifies boundaries.

Author: It doesn’t create cognition, but it forces the system to formalize where automation can act autonomously, where human intervention is required, and where institutional authority must be explicit before the next failure.

Author: After the 2003 blackout, that hardening goes further—toward mandatory, enforceable standards—because voluntary coordination has limits when consequences are continental.

Host: You also emphasize that later failures reveal different fractures. In 2003, one vulnerability was the software meant to provide visibility.

Host: Alarm functionality fails silently. Operators lose timely awareness. The grid doesn’t merely outpace human comprehension—the digital mediator of comprehension fails.

Author: That’s right. Dependence on mediated perception becomes obvious only when the mediator fails.

Author: And when that happens, accountability becomes contested. It’s no longer clear what an operator could reasonably have known, and therefore no longer clear what responsibility means. That’s a governance problem.

Host: So the point isn’t that automation causes failure. The point is that failure reveals whether institutions have kept pace with the automated architecture they depend on.

Host: Let me see if I’ve got this right—put plainly: once systems can sense conditions, interpret signals, and act faster than human judgment can intervene, authority migrates. When failure occurs, governance often arrives after the fact—trying to reconstruct decisions that were already embedded in the machinery.

Host: Fair?

Author: Fair. And I’d add one more element: infrastructure choices harden.

Author: Once the AI architecture is in place—technical and institutional—it becomes difficult to see, and harder to change. The defaults become “how the system works.”

Author: That’s why the book isn’t a warning about distant intelligence. It’s about a quiet operating shift—AI pilot projects and workflow integrations that become normal.

Author: The risk isn’t simply that AI systems fail. The risk is that authority moves in advance of governance, and legitimacy becomes fragile because decisions are encountered only after consequence.

Host: So the history matters because the pattern repeats—and because the grid is a place where failure distributes harm immediately.

Author: Yes. And because the grid is where we have some of the most mature governance structures for reliability.

Author: Look, if we can’t make machine-mediated judgment legible and bounded here, we won’t make it legible anywhere.

Host: Brandon, thank you for laying the groundwork.

Host: In episode two of this series, we will move into the era that promised intelligence and often delivered instrumentation—the Smart Grid. We’ll talk about how that gap created conditions for AI to enter as the next layer of mediation.

Author: Thanks, Michael. I’m looking forward to it.

Host: Join us next time. Until then, visit AIxEnergy.io to stay current on the convergence of artificial intelligence and energy infrastructure. I’m Michael Vincent saying goodbye for now.