The Lock & Key Lounge — An ArmorText Original Podcast

Podcast #26: Blackstarts and Blindspots

ArmorText Season 1 Episode 26


How AI can turn air gaps into security gaps for ICS/SCADA

For decades, critical infrastructure companies have relied on organizational silos—air gaps between IT and operational technology—to ensure that enterprise disruptions do not cascade into the physical systems that keep the lights on. But those silos have been largely successful due to biology and physics: the scale of coordination and depth of expertise required to overwhelm them has been beyond human capability. That changed when we built something capable of assembling expert skill sets instantaneously. Patrick Miller, CEO of Ampyx Cyber, recovering regulator, and one of the most recognized voices in OT cybersecurity, joins Matt Calligan to confront the question that most organizations have not seriously answered: what does resilience look like when both IT and OT systems are simultaneously degraded or unavailable—and the assumption that you can "go back to manual" turns out to be a pipe dream?



Navroop Mitter:

Hello, this is Navroop Mitter, founder of ArmorText. I’m delighted to welcome you to this episode of The Lock & Key Lounge, where we bring you the smartest minds from legal, government, tech, and critical infrastructure to talk about groundbreaking ideas that you can apply now to strengthen your cybersecurity program and collectively keep us all safer. You can find all of our podcasts on our site, armortext.com, and listen to them on your favorite streaming channels. Be sure to give us feedback.

Matt Calligan:

All right. Welcome to another episode of The Lock & Key Lounge, the podcast dedicated to talking about things nobody's talking about in cybersecurity. And really, today is no exception. For decades, critical infrastructure companies have relied on man-made organizational silos to ensure operational resilience. You think about physical systems—electrical grids, oil and gas pipelines, nuclear facilities, even manufacturing operations—typically air-gapped to make sure IT disruptions don't cascade into physical or OT systems that, in many cases, literally keep the lights on. But inside these silos, the push for automation and digital transformation has ensured that more and more layers of both physical and informational systems are vertically integrated over time. And with this digital transformation has come digital dependency. And today's guest likes to say that today we're all technology companies with a product problem. And yet these silos have been largely successful, but it's mainly been due to biology, to physics, because traditionally, the scale of coordination and depth of expertise required to overwhelm these defenses, across all these technology layers, has been beyond the capability of any reasonably sized group of humans. It was basically a biological improbability that someone could assemble all this stuff fast enough, at such a scale, that it would overwhelm the defenses. Now, y'all can probably see where we're going with this, ’cause we humans have now built something that, given enough resources, is actually capable of assembling these expert skill sets and scale instantaneously. And we've named it artificial intelligence. So, with this new set of rapidly evolving capabilities, which are available to pretty much everyone, cybersecurity teams have to start asking: what does resilience look like when both IT and OT systems are simultaneously degraded or unavailable?
And my guest today, Patrick Miller, is on a mission to make sure folks are answering this question. Patrick is the CEO of Ampyx Cyber. He's known for bridging that gap between technical cybersecurity and real-world operational risk. He is widely recognized as an expert in cyber for critical infrastructure and operational technology, with decades of experience advising governments and organizations globally on securing industrial control systems and thereby improving resilience. Patrick, welcome to the show.

Patrick Miller:

Awesome. I'm super happy to be here.

Matt Calligan:

Yeah. I know you're at RSA, and I know your schedule is rapidly evolving here. So, if it's okay, we'll go ahead and just jump right in. We'll start with a comment you made—I guess it was a comment that is actually a question—when we were talking about this. It kind of got those gears turning for me. You said: what does manual look like for these companies? I was wondering if you could start by explaining what you meant by that.

Patrick Miller:

Yeah. Manual, in most cases—again, as in your intro—it's about these operational technologies, right. These are the systems that interface with the real—the physical world. So this operates machinery, for example, or opens a breaker, or turns a pump—those kinds of things. So, in this space, manual means how do you get these things to operate manually? Because we used to do this—like a human would go physically move a valve or turn a big wheel, and it would open a valve. Right. Now we have a little electrical device that does the turning of that wheel for the human. So that manual process we're talking about is: what does this manual world look like now that we've gone so far down the digital path and we've created such a tremendous degree of—and, frankly, layers of—dependency on those digital components or cyber components to interface with that physical world for us, through automation and many other means.

Matt Calligan:

Yeah. And organizations will, if—depending on how far up into the leadership structure you get—they'll often say, well, we can go back to manual, go back to pen and paper, as they say. Yeah. So, I mean, the—what does the reality of that look like across these kinds of environments? Kind of unpack that a little bit from your perspective.

Patrick Miller:

Yeah. That's just fake. That's just—that's a pipe dream. That's false. I hear this a lot—like, oh, well, we'll just go back to manual—and I'm like, okay, let's try that. Not even, like, physically trying it, like actually walking through it—let's just walk through the thought exercise of doing this. And even just postulating about what it would be like, versus really doing it, stuff starts to break down pretty quickly. And it's not just the immediate things that come to mind—like, would people even show up? Let's just say that the situation is bad enough. Let's say it's like a pandemic, right? Or something like that. Would the humans even show up to do the things? ’Cause there are situations where it's, "I'm going to be home with my family." Like, if there's particularly threatening global instability, or if there is a pandemic. And these aren't, like, fantasy-level scenarios, but things that are actually happening right now—the kind of reality that would cause some people to not want to show up to work. So that's just one of the aspects. And the other ones—they say, we can do manual, and I'm like, okay, so let's just walk through this. And we get through some layers of it, and they start getting a little uncomfortable, and they quickly realize we might be able to do this for a little while, maybe a day, or two or three days. But the minute you get beyond that, it starts to absolutely fall apart at the seams. Like, if you've got to do 24/7 operations for a physical system—some sort of critical infrastructure: water, gas, electric, those kinds of delivery systems—do I put humans on shift, and where do I get enough humans to do this? Qualified humans that have gone through the safety training, that have gone through all of those things that are necessary to, like, be the operator of the thing. So it breaks down really fast.
So when I say it's fake, it's not necessarily that it's absolutely impossible, but there is a near-zero chance that you'll do this successfully for anything longer than a day or so.

Matt Calligan:

That's right. Well, why do you think organizations still think in sort of isolated layers of incidents—an incident within a layer, or within a silo—instead of cross-domain? We talk about IT/OT a lot, but there are various silos beyond just those. Those are just the famous ones in the energy sector. Why do you think they think like that?

Patrick Miller:

I—well, there's a lot of reasons, really. But, I mean, I think some of the key reasons are they haven't had to do it. I think there's a lot of assumptions kind of built into that construct or that model for them. And when you really go through and test these kinds of things… I mentioned RSA—I was just at the Estonian discussion around what they do for Locked Shields. And it's a real, like, live-fire operational test. And that kind of thing is not usually done by most organizations. There's a lot of reasons. They don't have time, they don't have the resources. There's lots of reasons why. But without doing it like that, you don't really realize how many other dependencies you have on those inter-domain relationships. One of the classic situations—Colonial Pipeline is the quintessential example of having to shut down OT because you had dependencies in IT. But even things like the Texas blackout: there was a high dependency on gas reserves for electric power because the generators were gas-fired electric turbines. So, in order to get to electricity, you had to have the gas. Well, in order to actually move the gas and actually work the gas system, you had to have electricity. So you end up with this interesting kind of cyclical dependency. And there's even weird situations, like firm versus non-firm contracts. So if the price is outrageous and I'm not required to serve you, and I can serve someone else at a higher price, I'm not going to serve you. I'm going to serve someone else at a higher price. So it's not just, like, do the infrastructures depend on each other, but weird subtleties like contract relationships can even creep into this.

Matt Calligan:

Yeah, yeah. From a gaming-it-out kind of scenario—we talk about tabletop exercises a lot. But these kinds of simulations, these wargame events—we interact, in our line of business, with a lot of folks that run these, and the general consensus is that folks view them as sort of a box-checking exercise. In fact, one of our first podcasts was interviewing people who actually brought in Hollywood directors and writers and actors to make these events real, because people got so much more value out of it. Do you think that there's just not enough fear of that kind of thing, or is it head-in-the-sand? How do you think the reality of that should translate more? How would you do that, I guess.

Patrick Miller:

Yeah. That's a—that's a good question. There's some really interesting dynamics. Like, I do a lot of work in the electric sector in North America. We have what's called GridEx. It's done every couple of years. It has positively beautiful, like, production quality. It's—like you said—Hollywood directors. It is done very real. It feels real. It looks real. The vibe is truly exigent. And it's a fantastic job. And it does definitely get more people engaged, because it just feels more real. Right? But just because it doesn't feel real doesn't mean you should, like, discard it. I think that's something people have to get over—just because it doesn't feel real enough, all that means is you're just not playing the game well. I mean, and I get it. We've got limited attention spans, and we need to be spoon-fed all of those things. But I think, for most of us at least, in countries or regions where there's been a high degree of reliability over a large number of years, there's been little geopolitical turbulence. There's been a lot of comfort. Those areas need that Hollywood-level production to feel like they need to get engaged, because they're like, well, nothing has happened in the last X number of years, or days, or whatever.

Matt Calligan:

Nothing happened yet.

Patrick Miller:

Exactly. And that degree of complacency is actually their weakness, because there will be a situation. It's not an “if,” it's a “when.” It's been said many times, but there will be a situation that will absolutely wreck their day. And it may not be, like, a sector‑wide or a regional event, or it may just be their company gets hit and hit absolutely cripplingly hard. So, even situations like that should just tell you right upfront, you're just a—it's a numbers game. It's going to happen at some point, and you should prepare accordingly. That said, you see countries like Estonia, for example—they're next to a hot zone with Ukraine. They're a supply line for Ukraine. They've got a long history of—we'll say—a challenging relationship with Russia as their neighbor, to say the least. And even Russia, as of late, has been threatening to do things like reunite the Soviet Union. So, for them, it's very real, right? It's something they do, and they mean it. They're like the firefighter that does the testing so that it's muscle memory. When the bell goes off, they wake up half asleep, and they're already in, like, their fireman suit, and they're on the truck, and they're on their way to the fire. But it just kind of operates like that, because it's been so well‑practiced. And that's, I think, in areas where that's your life, you get really good at practicing it. In areas where it's not, you become really complacent, and it becomes more of a Hollywood exercise when it really shouldn't be. ’Cause, like—

Matt Calligan:

Yeah, put it on a shelf.

Patrick Miller:

Exactly. It's just going to be a matter of time before something does really hurt you.

Matt Calligan:

Yeah. The—I made an assertion kind of in the beginning, and that is that these silos are man-made, and people have relied on them mainly because biology has never been able to scale beyond a certain point. Do you agree with that? I made that assumption in here. But, like, it feels sort of like people can't get their head around what a world looks like beyond how a human thinks about something. They have a hard time envisioning the threat landscape as it is, because they're so used to that sort of complacency and those silos working, because they couldn't ever get their head around it before. Do you see that being a big part of it?

Patrick Miller:

I do, yeah. I look back—like, I always look for historical stories and that kind of thing. But there was a time before we had a telescope, or—I mean, we'll say—long, long field‑of‑view vision.

Matt Calligan:

Right.

Patrick Miller:

That went away once we actually had something—like a looking glass. Like, we could see farther, we could see an enemy coming, and then you could behave much differently in terms of how you arranged your defenses, for example. So even something like that, which gave us greater visibility as a single human, empowered that human to do things beyond what you could do at reaction notice when an army is at your door, for example. So now, with AI and the ability to scale enormous amounts of technology behind it—multiple AIs and agents of AI, and multiple agents of multiple AIs—that scalability of all of those components is basically yet another springboard for that same kind of thing. So, with that, as the threat actors can use those capabilities, then the defenders should be using the same thing. But we were limited by biology, right? We were limited by what we could conceive mentally. Even things like correlation, and aggregation, and inference based on large numbers of events, for example—trying to find some arc of interestingness that could cause a weakness or make some effect happen—that kind of thing is limited by human capacity. But when you can scale it with the enormity of AI—and we've seen AI is really good at finding, like, patterns and synthesizing enormous amounts of different data sets to come up with interesting trajectories—what is so difficult for a human can be done literally at the stroke of a key. That is an enormous force multiplier.

Matt Calligan:

Yeah. And again, I tried to bury AI in here a little bit so this wasn’t a podcast about AI, ’cause we have 18 of those every day we’re getting alerts on. But with this kind of scalability and automation—you and I know these headlines, we’ve read these—but just for not making the assumption that everybody here is aware of it: how has AI changed the way these kinds of malicious activities are coordinated between systems and environments?

Patrick Miller:

Yeah, I think, I mean, most of my world is in OT. So I look at this differently than, can you get Claude to code something, or can you vibe code some exploit based on the release notes of a vulnerability? And those are definitely things you can do, right? That—that's a reality. Now I'm more concerned about causing damage to a physical environment because now, as we mentioned earlier, we're running a lot of these physical technologies, whether it's a manufacturing line, or whether it's a water system, or an electric system. It's basically a big giant software platform that creates products of some kind, whatever that is.

Matt Calligan:

As a physical output.

Patrick Miller:

Has a physical interface into the physical world. Whether it's through the sensing and telemetry it gets about the physical world it's in, or the controls that you send it to operate things in the physical world—whether it's a moving arm on a belt that changes the position of a component or a box, or water in a pipe, or power on a wire. So I'm looking at this more along the lines of, can I use large amounts of operational data—how the system operates, how it behaves—to affect it in some way that would cause physical damage to that piece of equipment? So it's not just, can I deface it? Can I use a wiper to render it unavailable, or something like that? But can I physically damage that thing based on the way I can manipulate the conditions that the machine is operating in? So examples are like locking molten metal into a smelter so that you can't get it out. Now you've just disabled the smelter, and you've got to rebuild components because you've just fused the metal in there.

Matt Calligan:

Just a slag now.

Patrick Miller:

Something like that, or in certain chemical processes, if it does not happen perfectly, things go boom. And terrible clouds of dangerous gas come out, and refineries can literally blow up. I mean, so these are things that are like—that's a very different scale of a problem when you're looking at, could a human envision this. Probably so. But could a human envision a way to do it in a subtle, undetectable, non‑malware kind of way? And it would be extremely difficult for a human.

Matt Calligan:

Still doing the thing it's supposed to do. Just slightly worse.

Patrick Miller:

Exactly. So the capability to do that now, because you can get access to so much more operational data—as I mentioned, you can aggregate, you can infer in ways you couldn't—you now have a capability there that was much, much more difficult to achieve a long time ago. So that is definitely a game changer.

Matt Calligan:

Yeah, I forget who was talking about it, but they were talking about how, inside an ICS system, certain things have to operate at a certain vibration frequency, and all you have to do is shift it off that vibration frequency and give it time. It's not instant, but at some point it breaks, and nobody just watching the system at an alert level is going to even see that happening. Like you said, you don't have to install malware. It doesn't have to be doing something bad to create that outcome.

Patrick Miller:

Yeah. And that is—that's very true. One of the biggest ones we worry about are things like big spinning machines, whether it's a turbine or a centrifuge or some other kind of motor or fan—it's a big spinning hunk of metal. And in most of the cases, there are things like what they call vibration or sync check relays. And those relays monitor it, and it can only go this far outside of its band of what's acceptable for vibration, because the motor will start to shake itself apart, essentially, over time. So that's a legitimate attack. And we've long looked for that. And we install literally layers and layers and layers of protection to try to keep that from happening. But if you can manipulate all of those things in subtle ways that just don't get caught, then yeah, you can cause big problems.

Matt Calligan:

Yeah. So what are the kinds of coordinated attacks in the immediate near-term future here? We're talking physical a lot on this side. But the big fear that I hear from a lot of folks is sort of a multi-domain attack, where it's a cyber thing combined with an OT thing. I mean, the Colonial Pipeline—I don't know if those guys envisioned the BEC being that effective at shutting down the pipeline itself. I think that was more of an accident. But how close are we to somebody actually intentionally coordinating at that level?

Patrick Miller:

Yeah. The unintended consequences so far have been alarming enough to imagine what it would be like if it was intended.

Matt Calligan:

That’s right. That was enough of an “oh shit” just to—yeah. Yeah.

Patrick Miller:

Yeah. So the intended consequences like that—I mean, that's key. And if I were the one doing the attack, I would look for those cross-domain interdependencies. I would look for ways that I could get effect in more than one domain. By doing one thing in a single domain, I can affect multiple other domains. So you do see this in areas, particularly like gas and electric and comms, because a lot of other things are heavily dependent on those infrastructures. So, I think by affecting one of those, you can cause cascading problems—interdependency problems—just the layers of dependency that operate. I think there was an old NTIA study that I saw that showed all of that. It was a really, really well-done graphic and study that showed all the different types of interdependencies between all the different kinds of infrastructures. And when you just sat there and examined it, your jaw just hit the floor—like, "oh my God, this is such a house of cards." How does this all stay together? So, knowing those intersections and exploiting those intersections would obviously cause massive amounts of additional—we'll call it kind of quote-unquote "benefit"—for the attacker.

Matt Calligan:

Yeah. Yeah. Well, something that particularly tickled my brain—I did a podcast with Rob Lee over at Dragos, and he speaks on this a little bit—is the black start sequence. And it really brought a fascinating perspective for me on this. And this kind of dials into exactly what you're referring to, even beyond just domains within an organization—interdependencies downstream and upstream as well. From a top-down level, we look at, like, what are the critical things at a federal level? And we define them in electricity as the BES, right—the bulk electric system. And then everything that falls below that line sort of just—you got to figure it out. You don't…

Patrick Miller:

Yeah, you’re left to use the state laws at that point.

Matt Calligan:

Yeah. Yeah. Exactly. If anything, it really doesn't constrain you at all. But the reality is that these big things depend on the little things that are below that line. And we're seeing now China—like with Littleton Electric—where they're so far down the grid, it's just a little co-op or muni, not even an IOU at this point. But these are components that are required to be operated in order to get the big parts moving again. And so the thing, when you and I were talking, that got me thinking is like, okay, we have China—a nation-state—figuring this out. But is it possible for anyone to leverage AI to kind of probe these kinds of weaknesses and interdependencies as well?

Patrick Miller:

Yeah, absolutely. And a lot of this stuff is actually still public information, right? There are a lot of municipal utilities, for example, whose sunshine laws mean they've got to have a lot of their data about the organization public. You can buy these maps from, like, various different engineering firms or other sorts of GIS companies. There are ways to get data about a power system—enough to see, like, wow, that's a bunch of lines going into one place that looks pretty critical. And based on this diagram, this is probably 100 or 230 or 345 kV. So you can guess the voltages. You can infer a lot. So you can feed all this into a system and essentially map out the grid. And you could map out the important parts, and you could see where the intersections were that were quote-unquote "critical" or "necessary," not just for black start but even for operations. So, yeah, I mean, could you model this in AI and cause problems using that model? Sure. Absolutely. Without question.

Matt Calligan:

Do you think that there needs to be a shift—I mean, electricity is probably the most poignant example of this, ‘cause we all understand how little generators are needed for big generators. But I wonder if, from a federal level, anything needs to shift as far as how they treat what's critical and not. Do you think this needs to come more from a top-down perspective or a bottom-up? What needs to happen there?

Patrick Miller:

Yeah, I mean, you're talking to a recovering regulator, so…

Matt Calligan:

Right. I targeted this for, like—yeah.

Patrick Miller:

Yeah. I am not necessarily a fan of regulation, even though I've been a regulator. I don't think we should target, like, a perfect state with regulation. We should target a minimum floor—really, in order to ride the infrastructure ride, you've got to do at least these things. And yeah, there is a certain threshold that's small enough that—I don't want to say it doesn't matter, but it has less impact, shall we say. Now, we had that problem with the electric sector. We'll use that as an example, where there was a certain threshold that said, if you're under this threshold, then NERC doesn't apply, right. You don't have to do the security regulations. Over time, we added a whole bunch of new renewable generation and smaller generation that was under that threshold, because people are like, well, if the threshold's, we'll say, 75 MVA, I'm going to make my plant 74.5 MVA. Right? And I'm going to make a whole bunch of these 74.5 MVA plants, because I can make them whatever size I want. And I can just make a whole bunch of them. Well, when you do that, then you begin to control all of those different smaller points in aggregate, because you're not going to send an operator out to each one of those little tiny plants. You're going to automate that. That just makes sense. So, with that automation and with the aggregation of that automation, you now have a single point where you can control an enormously high number of megawatts of generation that are all under the threshold—but from a single point. So that is something that we're looking at now in terms of changing how things are regulated. So it does have to shift, as the infrastructure itself responds to things like regulation and market needs and all those other things. So, as long as your regulation is a minimum—that says this is your floor—and everyone understands this isn't the ceiling, right?
This doesn't mean you're, like, terrorist‑proof or bullet‑proof. This literally means you are now allowed to operate, right? That's it. You can get on the bus. If we have that mindset, and then we have the mindset that this has to stay flexible enough to operate as the threat landscape—and, as I said, market conditions and infrastructure design—just change over time. With that construct, I think it becomes a better understanding. And I think people might even be a little more willing to adopt it, right? But it's not easy to do that, ‘cause you have to have typically pretty flexible standards and a pretty knowledgeable, balanced, and well‑informed regulator. That construct is a challenging one to meet.

Matt Calligan:

Yeah, it's not.

Patrick Miller:

Regulation is definitely not the solution to everything. But can it at least give us some good, high-quality minimums for really important things like electric, water, gas, chemical? Sure, it can be done. But again, it's not just done because, hey, we should tell them what to do. It's like, hey, these things have to operate, and they operate interdependently. So we have to work in a way that works with the other sectors. And so, done well, it can be tremendously powerful and tremendously useful. Done wrong, it can be obviously quite frustrating and useless, and cost a lot of money, and frankly detract from the situation.

Matt Calligan:

Yeah, yeah. And get a lot of people who are just there to check the box.

Patrick Miller:

Exactly. Yeah.

Matt Calligan:

Right. How do you think—that's federal help—from a private-sector side, everybody looks at security as sort of the last box to check. It's always like, oh great, now we've got it. Here come the guys who are going to tell us no. Right. Yeah, yeah. How should leadership, particularly cyber leadership, better sell the idea of these kinds of risks to executives, in a way that implements some change? Like, you see where I'm going with it?

Patrick Miller:

Yeah. I think so. For example, when I talk to boards or executive layers, I never say the word "security." Security—the term, the concept—has, like, a witchcraft-and-voodoo feel: a lot of expense, Unix beards. It's just got all of these different things that come to mind, and none of it is "this is going to make my life cheaper, easier, better, faster." None of those things come to mind. So I start with the cheaper, easier, faster, better approach and say, look, when was the last time you had some downtime? Okay. What happened? And they'll say, well, we did this, and we think we got the root cause analysis. Okay, well, did it happen again? Actually, yeah, it did. But we refined the root cause analysis, and we think we got it now. So I'm like, okay, so what if you could actually, like, do the root cause analysis super fast? Would that help? They're like, oh yeah, that would certainly help. I'm like, okay, so you need some visibility into those operations in ways you don't have now. Yeah, something like that. Okay. So let's go get a tool for visibility. Let's architect that network in such a way that you can get that visibility. And let's make sure that next time something happens, we can do a root cause analysis, like, super fast and get that plant back online. I never said "security," but I just bought a bunch of security tools to get that. So talk to them in their words—obviously. Everyone has said this, and you've heard this, but it really, really does matter. And it's not just talking about it in dollars. It's talking about them and their world, in their terms, in their words. ‘Cause every plant manager—they couldn't care less about security unless it's there to make their job easier, cheaper, faster, better. So talk about those things. Don't sell it as security. Sell it as visibility, uptime, and efficiency. And all of those other great things that they want.
And it can deliver that, if done right. So then it's on you, as the security practitioner or architect, to deliver it so that it does that, and isn't actually causing more problems than it's worth. That part, I think, we've done badly on both fronts: selling it, and then delivering it in a way that's reliable and useful for them. We've rolled out too much, too quickly. It's become restrictive. It's actually caused outages. So those kinds of things need to be factored in. We approached a lot of OT security with an IT security mindset, and it landed wrong. We've been set back ever since, and we're digging out of a hole.

Matt Calligan:

Yeah, yeah. There's a very real, even sort of antagonistic, relationship between those two departments, and a lot of it is cultural in organizations. It's a real thing. When it comes to these aggregated systems and the automation we talked about: obviously, ArmorText is a communications tool for IR, so we are big on having redundancies in a way that actually delivers resilience. But with automation itself, and tell me if you agree with this, when there is a successful attack on a network, it takes out a lot of things. It takes out communications tools, it takes out the IdP, single sign-on. Is it fair to say, in a way, that we have automated trust away to a certain degree?

Patrick Miller:

I don't disagree. We've automated a lot of things, including trust, out of the picture. Or maybe another way to look at it is that we've created an enormous amount of dependency, and I would say unearned trust because of it. It's been reliable enough that our short memories, our little gnat memories, can't look back at a time when this was a real problem. And all it takes is asking someone who just got nailed super hard. Let's go ask Jaguar Land Rover about their day a few months ago and see if they would be worried about things like out-of-band comms during a problem.

Matt Calligan:

Or Stryker.

Patrick Miller:

Or Stryker, for example.

Matt Calligan:

Yeah, or even just using the phone. Yeah.

Patrick Miller:

Exactly. So for those that haven't had to go through it, again, it's a complacency issue. But we've automated ourselves into a place where we have a high degree of dependency and an unearned level of trust.

Matt Calligan:

Yeah, yeah. In that moment when you suddenly realize how little trust exists outside of this thing we've handed everything over to, how would you reestablish it? You're on the fly, you're recovering, and now it's like: do I trust these people? Because I don't have this tool telling me they're good.

Patrick Miller:

Yeah. This is going to sound like a broken record, but again, you've got to be able to go back to manual. You've got to have some way. If I'm a shop owner and I can't process credit cards, I'm going to take cash and do the math by hand. Obviously that's a very simplified example, but you need some way to do those things. I've talked to electric organizations who say, oh my God, what if someone hacks all the smart meters and weaponizes them? And I'm like, well, then you estimate the bill, send that out, and fix it in arrears later. So take everything you do that is critical or necessary for operations, put it on a list, and for each item ask: if that thing was gone, what would you do to get that function back a different way? And at the operational level, at the physical-cyber interface, we look at things through CIE, cyber-informed engineering, or consequence-informed engineering. There are a bunch of ways to look at it. But like I mentioned, that vibration-sensing relay is a digital device. It's got a chipset, it runs a standard routine, and it's looking for vibrations outside of a band. If the vibrations go too far one way or the other, it sends a signal that says, slow the machine down safely so that we don't harm the machine. Now, that digital thing gets attacked, and I move those sensitivity thresholds way out so that the machine can effectively harm itself. There should be a physical device underneath that isn't digital, an actual mechanical thing that says, whoa, this is too much vibration, and then sends the signal. So there's catastrophic-level physical backup protection for these things. And does that mean going back to pencil and paper?
If that's what you need for operations, then yeah. And you should try it out. It doesn't mean you've got to do this for everything. No. But you should do enough that your critical stuff can keep moving.

Matt Calligan:

Identify what's critical. Yeah.

Patrick Miller:

Yeah. So that you can at least continue to move. I look at Norsk Hydro as one of the best examples of this. They were so transparent, and the world saw such a great lesson; whether the world actually learned from it, I don't know. But they continued to operate. They were open, they were transparent, and they literally went back to manual, kept things chugging along, and came out of it. And as a result, if I'm going to pick a supplier and they're on that list, I'm picking them first. That just makes sense, right? So it doesn't just save your business in the sense that your business can continue to operate; you also become one of the most reliable partners in the business ecosystem. It's just good all around.

Matt Calligan:

Yeah, yeah. And in doing so, you reestablish that trust, at least in that context. From a cyber leadership perspective, do you see people with the kinds of insights you're talking about integrating them into their IR plans at this point?

Patrick Miller:

Well, I see a wide variety. There are a lot of organizations that are very short-term: quick turnaround, quick profit, burn as fast as you can. I have low expectations that they will do anything about these problems in a realistic way. Organizations like critical infrastructure look at it very differently, and in some cases they're required to by law, so that's a different construct. But just on the business driver of being a more reliable business partner: businesses that want to be around longer and have a longer-term vision already get this. They're thinking about it, and they're doing it the right way. As we've all heard, it kind of comes down to your risk appetite, of course. And if you have a very high risk appetite and you're ready to just burn down and start over, then—

Matt Calligan:

Stay close to the edge. Yeah.

Patrick Miller:

Yeah. Then I would say, as a professional in this space, I would probably choose somewhere else to work if I had the choice.

Matt Calligan:

Yeah, yeah. That should even influence things. That's true, and I never thought about career path choices that way. Like, those guys play it a little too fast and loose; for longevity, I might go somewhere else.

Patrick Miller:

I mean, there's some, I guess. It's weird calling it satisfaction. There's a sense of being a really good firefighter: putting out a fire quickly, solving a problem quickly. That's great. But honestly, it should never have gotten to that point. There should have been a building code that says, here's how the building is built to reduce fire risk. Here's where the sprinklers go, here are the smoke detectors, the fire extinguishers, the exits and the fire doors. That keeps it so the firefighters aren't needed as much. And when they do show up, they're much more effective, because everything has been designed to enable them. So we've got to get to the stage where there's the equivalent of building codes for software, for example. Good luck. But we're approaching it. I look at something like the European CRA: it'll be difficult, but it will be amazing and transformational. So that construct is there. We know it's there. We know what it's like to put out fires all the time; it's exhausting. But like I say, it really depends on your business model, and in a lot of cases that informs your risk appetite.

Matt Calligan:

Yeah. With your regulator hat on: do you think there are governance models that need to be implemented to address these kinds of challenges?

Patrick Miller:

Again, we in the US, in North America for that matter, were really good; we kind of hit a sweet spot of regulation with NERC CIP. There's a bunch of argument about the TSA security directives for the gas space and about what's happened in water. But we set the bar for making our critical infrastructures pay attention and at least get to a minimum level of security. We did a really good job; we moved the needle. Since then, regions like Europe have gone further: they've got NIS2, and they've got the CRA for the supply chain. Just as an example of the correlation: in NERC CIP, ostensibly there's this million-dollar-per-day, per-violation maximum penalty. Actually, adjusted for inflation, it's much higher, almost $1.8 million per day, per violation. That will never happen. Despite it being on the books, you would have to absolutely, intentionally do something egregious to come even close to that level of penalty. And that NERC penalty happens at the company level. The correlation in Europe is that your board members can be individually penalized. So it goes directly to the leadership of the organization, the owners and the people who direct the company, the ones with skin in the game, so that they take this seriously. And the penalties are, what, €10 million or 2% of your global gross revenue, whichever is higher. When you put that level of responsibility on it, it gets their attention. And like I say, it's not a prescriptive regulation. Technically NIS2 is a directive, not a regulation; each country has to transpose it into its own laws. But we'll loosely call it a regulation. It makes those companies get religion quick, and get religion at the board level.
So I think that model would dramatically shift security for many organizations if it were tried in the US, for example. If there's a model to look at, is the European model perfect? Oh, God, no. But it is clearly going to shift things in a much faster direction, in a very real way.

Matt Calligan:

What I've seen is, to your point, regulation is always kind of a drudgery, often seen as the thing that creates problems and gets in the way of innovation. But I also see, in these kinds of industries, that if you don't make it cost something to not do something, people are going to not do the thing, right? That's the role regulation has to play. If you're a private, for-profit organization, you're not going to voluntarily spend a lot of money if you don't have to.

Patrick Miller:

Well, you're absolutely right. And I've used the example that regulation is done in the public interest.

Matt Calligan:

Yeah.

Patrick Miller:

And corporations operate in the shareholders' interest. Now, those two interests don't always align. So I think what they're trying to do with regulation is to forcibly align those—at least for some things where it's really, really important.

Matt Calligan:

Yeah. If the lights don't come on. Right.

Patrick Miller:

Right. Yeah, yeah. No water, no gas, no electricity. That's a problem.

Matt Calligan:

Big problems.

Patrick Miller:

Yeah. Real fast. We're about 72 hours away from total societal collapse.

Matt Calligan:

Things go toward collapse very quickly. Yeah, yeah. Well, I'm going to pivot here a little more to the personal side. From your perspective, as you keep pushing into these problems around the world, what are some ideas that have you excited? What are you learning about right now that's got your attention?

Patrick Miller:

Oh, wow. I am—I'm really enjoying that cyber‑informed engineering is actually getting a bit more oxygen in the room. We're seeing a shift toward people understanding that—wow, we probably do need a safety net under the trapeze artist.

Matt Calligan:

Yeah.

Patrick Miller:

Just to catch them, just in case they fall. Yeah, they're super experienced and they know what they're doing, but sometimes things happen. So I'm liking that. That's got me super excited to see—

Matt Calligan:

Define cyber‑informed engineering, just to make sure.

Patrick Miller:

Yeah. It's the practice of putting in manual safeguards, manual safety nets, for the really dangerous or critical processes in the OT space. So if the pressure in the pipe gets too high and the digital sensor doesn't work, the float valve that operates on physics backs it up and safely saves the day, for example. What this does is put a lot more trust in the infrastructure, such that hackers can't cause catastrophic damage. Because right now we've seen, in a very real way: it used to be that there was this red-line construct, right? If you attack an infrastructure, you're getting bombs and boots at your doorstep; we're going to take it that seriously. Then Russia attacked Ukraine, and basically nothing happened. So they did it again, and again nothing happened. And then you've got China with Volt Typhoon and Salt Typhoon: well, they're not actually causing harm yet, but they're embedded, and there's data theft. So the capability for adversaries to cause real physical damage to critical infrastructure is there. That's a real, guaranteed problem. Finding a way to remove or minimize that risk, so that the damage is smaller, contained, less catastrophic: motions in that direction are super, super awesome. Not just because protecting critical infrastructure is a great thing in itself, but because it improves the level of trust in that infrastructure globally. And it makes the adversaries do something different. It makes them go somewhere else.

Matt Calligan:

Which is what we want.

Patrick Miller:

Yeah, yeah. It raises their cost.

Matt Calligan:

Let's make them go find an easier target.

Patrick Miller:

Yeah. If they know that when they attack something, there's going to be some physical construct underpinning it that takes away their goal? Great. I'm all for it. That has got me super, super excited.

Matt Calligan:

What's something you've changed your mind on, or that you're wrestling with as an internal debate?

Patrick Miller:

Oh, that's a good question. I saw one today that I've been chewing on, and it was a really good prompt for my thinking. Ciaran Martin was raising an issue from the Munich Security Conference, where they were talking about offensive cyber, and the new national cyber security strategy leans very far forward into this offensive cyber approach, punching back.

Matt Calligan:

Yeah.

Patrick Miller:

And I hear that. I've always struggled with this, because what typically happens, as we've seen so far, and this isn't just theory, this is reality, is they start going for the really important stuff, like the critical infrastructure components. Right? If they're going to hit you, they're not going to take out a widget maker; they're going to take out electric or gas or water or comms. So what it does is raise the response stakes and the escalation path for those infrastructures. And at least in the US, these are private companies.

Matt Calligan:

Yeah.

Patrick Miller:

So what it ultimately ends up doing is costing the consumer at the end of the chain. When those companies do get damaged, the recovery costs are paid long term; you just roll them into your pricing. So it shifts the burden of all of that weight back onto the person at the end of that long chain of responsibility. And I think that's a really, really unbalanced solution. It feels good to say, yeah, we're going to hit them back where it hurts, and then we see what the unintended consequences of that action turn out to be. So I say if we do this, we should tread lightly. I'm much more for attacking covertly: false-flagging it, making it look like someone else, making attribution tremendously difficult. We should have absolutely, ridiculously skilled offensive capabilities, mind-blowing offensive capabilities. We just shouldn't advertise them by going and pissing on someone's fence post on purpose.

Matt Calligan:

So that would be nice.

Patrick Miller:

I think it's more effective. There should be a question, if they do something, of whether or not our offense is going to hit back in a way that is obvious or not. I like that uncertainty, because it creates less of a target on those privately owned or municipally owned infrastructures that end up being the ones that lose out in this situation.

Matt Calligan:

Right, right. Yeah, that's a whole other topic right there. You've got me thinking.

Patrick Miller:

Yeah, yeah. I struggle with it, because sometimes I'm like, yeah, we need to strike back. And then, well, it's weird to say I'm for it and I'm not for it. I'm for it under certain circumstances, in certain ways. There's just so much nuance involved. It's not as easy as saying go punch back. It's just not that easy.

Matt Calligan:

And the unintended consequences: I mean, you can't predict the number of variables that shift with an action like that. With an overt action like that, you send things off in wildly new directions, out of your control. You can't game that out, you know? It's impossible.

Patrick Miller:

Yeah, it is. And I think if one nation does it and doesn't get penalized, okay, that's one thing. But if every nation is doing it and nobody is getting penalized, we might as well toss out things like international humanitarian law and the law of armed conflict, the things that create those norms for, I would say, ostensibly a very valid reason.

Matt Calligan:

So yeah. Yeah, absolutely. All right, final question here, and this is kind of the theme of The Lock & Key Lounge. I've taken to asking this question a little differently than just, what's your favorite cocktail? I like to frame it a little differently. Imagine you're at a bar, but a nice one, one where you don't have to yell, and it's not busy. At the other end of the bar is someone in security that you've been dying to talk to or meet. The question is: what cocktail do you order, and who's at the other end of the bar?

Patrick Miller:

Oh man, I would probably want to have the next conversation with Dan Geer, to pick up from where I left off about 15 or 16 years ago. Dan Geer has been around for ages. Dan's a very bright guy; I think he was maybe a founder of @stake, among so many other things, and he was at @stake for years. A positively brilliant mind in the space. My background is microbiology, so we had some very interesting and thought-provoking conversations around living systems and security systems, and how you have overlapping capabilities and functions. We were throwing out ideas for designing certain architectures based on biological constructs. So yeah, a really, really fun conversation with an absolutely brilliant person that I would love to continue at some point. And I guess the cocktail: I used to bartend, so I have a hard time drinking other people's cocktails. I would probably go with just a straight whiskey, maybe a nice Japanese one, or a nice, high-proof American bourbon to keep me sipping it slowly. Or maybe even a really amazing mezcal, depending on the bar.

Matt Calligan:

Yeah, yeah. Very nice. I've found that there's sort of a great Rorschach test with cocktails: what you gravitate towards. Someone told me she did this intentionally at an event. They offered a pink martini, an old fashioned, and a gin and tonic, and she said you could actually watch who took which one and go, yeah, that tracks. Oh yeah, I can confirm.

Patrick Miller:

Yeah, yeah. It's funny. When I used to bartend, people wouldn't know what they wanted. They'd say, I don't know what I want. And I could look at them and say, you want a margarita. And it was like, actually, you know, I do want a margarita. You could tell right away: you want a straight shot of bourbon. Like, you know, I do.

Matt Calligan:

Yeah.

Patrick Miller:

You can tell. Yeah.

Matt Calligan:

Yeah, it's an interesting insight. Well, Patrick, I know you've got a lot going on, so I want to thank you for bringing some real-world perspective to this topic. I think this topic is something that really does need more attention, and obviously you feel similarly. And I do want to thank the listeners. There are lots of places you all could have spent this time, and we appreciate you choosing to come and share it with us.

Patrick Miller:

Absolutely.

Matt Calligan:

Patrick, I do appreciate your time again, and hopefully we'll get to talk again soon.

Patrick Miller:

Any time. I'll probably catch you at RSAC sometime soon, hopefully. And thanks for having me on.

Matt Calligan:

Yeah, we did not talk about RSAC. Next podcast. We'll do another one. Yeah.

Patrick Miller:

Awesome.

Matt Calligan:

Well, in closing: cybersecurity is no longer just about protecting systems or silos. As IT and operational tech become more interconnected, and the ability to scale attacks across them increases, organizations need to start asking harder questions. Questions like: what does operating in the dark actually look like? What does manual look like? In today's reality of an infinitely scalable and intelligent adversary, these silos cease to be meaningful. Cybersecurity must become not just a tool or a component of security but, in many ways, part of a BC/DR plan, part of how your organization gets back up and running. The challenge is no longer how to recover from an incident; in some of the worlds we live in, it is how to survive it. If you found this conversation valuable, we'll be having many more of them, so be sure to follow and share this with friends you know would appreciate it. Until next time: be well, stay curious, and do good work. We really hope you enjoyed this episode of The Lock & Key Lounge. If you're a cybersecurity expert, or you have a unique insight or point of view on the topic, and we know you do, we'd love to hear from you. Please email us at Lounge@ArmorText.com or visit our website, ArmorText.com/podcast. I'm Matt Calligan, Director of Revenue Operations here at ArmorText, inviting you back next time, where you'll get live, unenciphered, unfiltered, stirred, never shaken, insights into the latest cybersecurity concepts.