The Lock & Key Lounge — An ArmorText Original Podcast

Podcast #4: Collective Defense Under Pressure

ArmorText Season 1 Episode 4

Navigating Threat Intelligence Sharing in Uncertain Times

First, we’re joined again by Joe Slowik, formerly a Principal Critical Infrastructure Threat Intelligence Engineer at MITRE and now Director for cybersecurity alerting at Dataminr. Joe’s career spans the U.S. Navy, national labs, and private sector security teams—where he’s led efforts to track adversary behavior and build resilient cyber defense programs.

He recently joined us for a previous episode of The Lock & Key Lounge, and today’s conversation picks up where that one left off—so if you haven’t already, be sure to check it out for added context.

We’re also joined by Tim Chase, Program Director at GRF, who leads the Manufacturing-ISAC and the Energy Analytic Security Exchange. Tim has built some of the most effective threat-sharing communities in the country—connecting private sector operators with government partners in real time.

We’ll talk about how those communities are built, what trust looks like in practice, and why threat sharing is no longer just a best practice—it’s a business imperative.



[00:00:03:15–00:00:30:09] Navroop Mitter:

Hello, this is Navroop Mitter, founder of ArmorText. I'm delighted to welcome you to this episode of The Lock & Key Lounge, where we bring you the smartest minds from legal, government, tech, and critical infrastructure to talk about groundbreaking ideas that you can apply now to strengthen your cybersecurity program and collectively keep us all safer. You can find all of our podcasts on our site, armortext.com, and listen to them on your favorite streaming channels. Be sure to give us feedback. Welcome back to The Lock & Key Lounge. I’m Navroop Mitter, and joining me today is my co-host and colleague, Matt Calligan. Today, we’re digging into a tough but timely topic: What happens when the federal government begins pulling back from cybersecurity? We’re seeing budget cuts and workforce reductions ripple through key agencies like CISA and the DOE’s CESER program, as well as the disbanding of programs like the Cyber Safety Review Board. From proposed cuts to foundational efforts like the CVE program and MITRE ATT&CK to broader uncertainty in the national cyber mission, these moves don’t just shift policy—they shift responsibility. And that burden increasingly falls on the private sector. So the real question becomes: Can we rise to the occasion? Can we build the right kind of trust, communication, and collaboration to defend critical infrastructure when the shared scaffolding starts to shake?

[00:01:27:10–00:01:43:13] Matt Calligan:

To help us explore that, we’re actually joined by two experts: Joe Slowik, who you all know, back by popular demand, and the infamous Tim Chase—both of whom operate on the frontlines of threat sharing and collective defense. Joe, Tim, thanks for taking the time today.

[00:01:43:15–00:01:44:02] Tim Chase:

Pleasure.

[00:01:44:04–00:01:45:03] Joe Slowik:

Hello. Pleasure to be back.

[00:01:45:08–00:02:54:17] Matt:

So, first, we’re joined again by Joe Slowik. He is formerly a Principal Critical Infrastructure Threat Intelligence Engineer at MITRE and now Director for Cybersecurity Alerting at Dataminr. Joe’s career spans the US Navy, national labs, and private sector security teams, where he’s led efforts to track adversary behavior and build resilient cyber defense programs. He recently joined us for the previous episode of The Lock & Key Lounge, and today’s conversation actually picks up where we left off on one of those points, so if you haven’t already, be sure to check out that first episode for the added context. We’re also joined by Tim Chase, Program Director at the Global Resilience Federation or GRF, who leads the Manufacturing-ISAC and the Energy Analytic Security Exchange or EASE. Tim has built some of the most effective threat sharing communities in the country, connecting private sector operators with government partners in real time. We’ll talk about how those communities are built, what trust looks like in practice, and why threat sharing is no longer just a best practice—it’s a business imperative.

[00:02:54:22–00:03:13:01] Navroop:

All right. Well, to start out, I'm going to actually direct the first question at our friend Tim from the Manufacturing-ISAC. Tim, as someone who’s built multiple threat sharing communities from the ground up, what makes an information sharing community effective, particularly when external pressures like funding cuts emerge?

[00:03:13:05–00:03:59:13] Tim:

Sure. Thanks for the question, Navroop. I mean, I think that, number one—and we're going to get into more of this—but it's that trust relationship with the members themselves. Obviously, they need to have the incentive to go ahead and join a community. And that looks different for different industry verticals. Why that is—sometimes there's a compliance piece to it. But at least for the manufacturing community, there's not as much of a compliance piece. I think it's a growing recognition of the threat landscape that they face that forces them to look at different avenues outside of their commercial vendor spaces and maybe some of the interactions they have with some government partners. So the sharing community's become kind of a natural next step for them.

[00:03:59:18–00:04:06:03] Navroop:

Now, are there funding cuts or changes to federal programs, though, that you're seeing that directly affect threat sharing effectiveness?

[00:04:06:07–00:07:27:13] Tim:

So the effect on threat sharing itself for my world in terms of the information sharing communities, it has been pretty direct. There's been a number of them because of the funding model that they have. So, they either have grant money or direct congressional budgetary line items that have funded those efforts. Some of them are remaining, but some of them are being cut or eliminated altogether. So, in those cases, it's a real and immediate impact. In the case of GRF’s communities, we made the decision long ago—they're all nonprofits—but to not use that model. Not necessarily because we were seeing down the road a decade ago that this might be a problem. It just seemed more sustainable in the long term to kind of have a slower growth model but one that was a little bit more secure and tied directly to kind of member benefits and that sort of just a member-driven organization. So, it has not affected our organization negatively, but for many, it has. The federal government partners, in terms of the sharing and what we're seeing from the federal partners, obviously, it has. And I'm sure that Joe has a lot better insight into how that's kind of looking from the inside. But right now, I think that some of the major things—which we might talk on later—some of those things like the CVE program and ATT&CK. As of right now, ATT&CK is—contract’s been funded. I don't know about CVE, if that is continuing or not. But I think that, mainly, also what we're seeing is not just an issue of cuts but a change for the agencies that will continue to be funded and continue to have a mission in the cybersecurity space, where their focus is.
And that is yet to be seen because an executive order, basically for national critical infrastructure, put us on a 180-day review where they're looking at sort of where the policy lines are going to be and who they're looking to kind of focus on that, with an eye towards moving some of those capabilities down to kind of the state and local level. I want to say one thing about that, which is that whatever the administration's policy is, I think that, from a critical infrastructure perspective, the asset owners themselves—that there's certain things that the federal government sort of can't devolve to a state and local or private sector. And one of those is the national defense. And I sort of see that some of this information sharing or the partnership that we have with our federal partners—whether they're kind of FBI or CISA, that we have most direct contact with—and some of the national labs and some of the programs like that. But also, where they're plugged into in terms of the intelligence community and the like, the private sector and state locals cannot pick up that piece, both because they're not legally allowed to take over that, but also just they lack the capability and the visibility into that. So we're still going to need those government partners to understand, especially when it comes to nation-state adversaries, which critical infrastructure is being targeted by all the time.

[00:07:27:15–00:07:52:01] Navroop:

I want to take a step back, actually, to something that Tim brought up. And it was about the CVE program and the unknowns around ATT&CK. Joe, this question is really for you. You've publicly expressed a lot of concern about reduced federal capabilities in cybersecurity, such as the potential cuts to the CVE program and/or MITRE ATT&CK. How significant could these cuts be for private sector security teams?

[00:07:52:04–00:11:23:21] Joe:

Thanks, Navroop. So, first off, I want to emphasize that, while there were issues in maintaining funding for the CVE program—which appear to have been addressed recently through an extension—there is no risk to MITRE ATT&CK at this time. Having said that, though, because of what happened with CVE, doubt is creeping into the continued maintenance and existence of programs like MITRE ATT&CK and similar frameworks, upon which entire products and product categories have been built within the commercial sector. And that's unfortunate because I think what we risk seeing—and we already have seen a little bit of this with some of the actions around the potential CVE or what almost was a CVE lapse—where you had other groups start stepping up, where it's like, "Well, we'll take this on as a foundation," or, "This can be carried on through this other organization," is that we'll start seeing a splintering of frameworks, which might seem trivial in some respects but really is a significant concern because we really were moving towards a unified language. Certainly, there are imperfections, as with anything. To channel George Box, the British statistician, "All models are wrong, but some are useful." And some of that usefulness is going to be undermined by having additional frameworks, additional languages, appear because there is increasing doubt—as far as will these frameworks be around a year from now, three years from now, et cetera? So, we really need to emphasize that there is risk—even if funding manifests itself, or there are band-aids or whatever that come up—that that loss of confidence is going to be hard to bring back, even if there ends up not being any operational impact. And so, that's something really to take into consideration. How do you gain that back once some of these items start developing or raise their heads to begin with? So, the other aspect of this, too, is—what's next?
That if CVE gets defunded (which it has not been), or if other frameworks and other things that are currently managed in part through US government funding—if that should disappear for some reason—what replaces it? And it doesn't seem like there's been much in the way of thought around what these projects are doing and what the risk is in them going away, or at least not until the last minute—where it suddenly becomes realized, let's say, "Oh, this was actually important, and we shouldn't have done that." So, a lack of planning, a lack of foresight seems to be worryingly present across these decisions, which undermines the predictability necessary to really start driving a lot of these conversations forward. So, at this point in time, there's still a lot of uncertainty. We'll see how this shakes out over time because we're very much in the middle of things at the moment. But when it comes to industry, when it comes to nonprofits and others that largely have built solutions and built frameworks or built ways of operating around the ready availability and maintenance of these frameworks, folks are recognizing that what seemed to be a pretty risk-free proposition actually has some concern embedded in it. And it'll be curious to see how that develops moving forward.

Navroop:

Joe, actually, quickly following up there, right, because this occurred to me while you were talking:

Are you seeing any movement towards consortium building among the kinds of solution providers that would have been dependent on these programs to, amongst themselves, address it? ‘Cause it feels like no one of them would likely be able to fill that gap entirely by themselves, but that potentially a consortium of multiple of them could both fill that gap for themselves, but also potentially for others who are going to be entrants into the space. Are you seeing anything like that discussion take place?

[00:11:55:19–00:13:32:00] Joe:

Well, it seems that it has taken place—at least with respect to the CVE program—where there was an announcement of a foundation that has indicated it will step up to manage this at more of the foundation model of nongovernment, private sector organizations. The membership of that foundation is not clear at this point in time, so it's very much uncertain who is behind this and what sort of potential risks and biases may exist as a result. But I think there's growing realization among various entities that are using these frameworks that, hey, we probably should be a little bit more involved in these efforts and have some, maybe not say, but at least resourcing behind these items that have become fairly critical for the delivery and maintenance of entire product categories, as well as internal security operations and similar functions. What that looks like, I think, is very unsettled, because one of the reasons why CVE and other frameworks have been successful is that they've resided with nonprofits like the MITRE Corporation that can be seen as being a fairly neutral arbiter in the room. Whereas, if you start including vendors, even among a consortium or a foundation model, at what point do those entities start leveraging their membership or participation to shift the discussion a little bit or to interact with things in a way that maybe is not as good for the broader community but reflects well upon their interests? Not saying that that will happen, but the worry or the concern that it might happen is something that would need to be considered in a move to that sort of a model.

[00:13:32:00–00:15:15:23] Matt:

Well—that's just it—from my perspective is, it's not so much about a particular funding source or a particular organization. Broadly, it's the level of uncertainty and confidence in the structure that was there—that it would always be there. The thing that I'm seeing from our side is just that, regardless of whether the funding comes back, or was the funding ever in jeopardy, or is this organization going to keep going on—it's that uncertainty and lack of confidence in the system as it stands, right? And one of the things—Tim, I'll ask you this. Trust and that confidence is really foundational to any effective threat sharing community. Folks need to know where the guardrails are and how they're going to interact with each other. And we have a thing we refer to here—ArmorText is obviously underpinning a lot of these. We have seen patterns in these successful threat sharing communities, and we call it the triad of trust. It's kind of like the three-legged stool of successful communities, and that's trust in the identities of the people. They are who they say they are. There’s—you've got to have trust in the underlying technology and the viability of that technology that the information is shared on. And then, you have to have trust in the responsible use of that information as communities interact with each other. What—I guess from a question in the groups that you're a part of, the different threat sharing communities you've founded—how do you establish that three-legged stool with the members as you bring them on and maintain it?

[00:15:16:03–00:17:06:13] Tim:

Yeah, it's a great question. And actually, at GRF, we have a very similar phrase in terms of being in the information sharing space for a decade—starting off at FS-ISAC, where GRF was kind of an internal business unit before it sort of left and spun out. We took with us sort of that model that we had developed helping those communities, which still do the same thing—but, similar to yours, we call it people, process, and technology. So, that first thing is the trust, and that's a people thing. And it's funny because, in the cybersecurity space, most of the time when we use trust, it's terms like zero trust. And the funny thing is that zero trust, when we're talking about it, is about digital systems and how they're connecting to networks or to one another. And machines can verify. They can verify whether you're on an access control list or you have the right cryptographic key, but they can't actually trust—that's an innately human thing. And all of that stuff actually has to start with people-to-people because it's people in an organization that decide whether they're going to join a community and whether or not they're going to share that thing that they just saw in their own organization or whatever. So, first, it's a human-to-community trust. And then, as you say, that trust needs to move more laterally across systems so that you need to understand that if you share something, it's only going to be shared with the people that you agreed that it's going to be shared with under the conditions that you agreed to. And then, of course, lastly, there's that technical piece as well—that the system itself is secure to maintain a certain level of trust in that system—so that it's not going to allow unauthorized access into that system. So—but I think that it starts with the people and kind of moves, kind of logically, to the technical aspects.

[00:17:06:13–00:17:34:18] Matt:

It has to. Yeah. Well, Joe, from your perspective—obviously, working with MITRE and the other private sector experience you've been on—part of that where you're providing these frameworks that are utilized so heavily in these different communities. From your perspective, how do you—how do the organizations on the private side maintain the quality and reliability of the intelligence in an environment when there's funding uncertainty around certain sources of that intelligence?

Joe:

Right, and that's an excellent question. One way I think organizations can potentially head off such concerns is getting more involved in the maintenance and resourcing of these frameworks. Certainly, that leads to the issue that we talked about earlier in terms of who do you trust, and can you trust certain private entities to act in a neutral or similar way that might be against their own individual best interest but is best for the community? But presumably, if you have a large enough selection of public, private, nonprofit, and similar entities—academic potentially as well—interacting in these frameworks, it leads to enough players in the room or people that have a stake in the outcomes, that that sort of capture of outcomes is avoided. Coordinating that, though, is painfully difficult. So, while it sounds good to say, “Oh, just get everyone involved and have the private sector kick in funding and help manage this alongside multiple governments or government entities or whatever,” you then start having too many voices in the room, and does this become unmanageable very quickly? And so, seeking a balance between unification of control and being able to direct and effectively maintain efforts versus that sort of democratization bit is really a tough tension to try to balance, and I'm not really sure how that gets resolved. I mean, if you looked at a lot of social media posting that popped up around the CVE notification in mid-April and such, a lot of folks were like, “Well, why are we relying on the US government to fund this?” But what are the alternatives, and do you like those alternatives any better? At what point—and kind of to get back to the trust mechanism that Tim was referring to, as well as others in this discussion—at some point, you kind of have to trust someone.
The underpinnings of the entire community of information sharing, framework development, and similar means that you have to have some degree of almost faith that certain parties are going to be reliable and act in good faith in developing, maintaining, and contributing to these sorts of elements. And when that starts to be questioned to some degree—which right now it very well might be because of that near lapse in support—that's a very difficult thing to try to rebuild or try to develop again. So there's risks for the future of these sorts of items. For how do we start reassembling this in a way that multiple stakeholders, multiple dependencies, or dependent entities are confident in the longevity, the accuracy, and the impartiality of such work going forward?

Matt:

With the—and Joe, as a follow-up question—given that there needs, clearly, there needs to be something that steps into this place if we're going to continue to operate in this sort of environment of volatility (for lack of a better term), do you see that—in your opinion—is there a gap? Or maybe—I guess maybe—a greater imperative for folks, for more people to be involved in this, right? Especially on the private company side. Is—should there—have we—you mentioned about once, when you get enough voices in the room, you sort of get almost a self-policing anarchy model, to use a negative version of that thing. But we see those models working a lot of times in different places—not threat sharing communities—but is—have we—is there still room for more of these private organizations to join these threat communities and invest in this? Have we kind of reached that peak noise in the room yet?

Joe:

I don't think we've reached it yet, but it'll be very interesting to see how all of this develops—how all of this basically shakes out—with respect to who are the—not just contributors, since that's been fairly democratic for quite some time, looking at things like CVE and such—but who are the maintainers, who are the authoritative voices in the rooms? ‘Cause certainly there are—

Matt:

The curators.

Joe:

Yes, exactly. And how that management structure works out, as well as resourcing and similar. So, there's a lot of uncertainty right now. I honestly do not know how this will develop. I know we have a year extension for CVE, for example, but beyond that, I am both unqualified and underinformed to make any sort of forecast for where this is going to likely go in the future or how this will impact or how this will shape other endeavors that are similar in structure. ‘Cause there's just an incredible amount of uncertainty at the moment between both the public sector and the private sector as far as how are these things going to develop and how are we going to effectively maintain staff and control these sorts of items so that they maintain that effective blend of both efficacy as well as impartiality.

Matt:

Yeah, yeah. Tim, from the perspective of needing more people to join, right, and maybe we haven't quite reached that critical mass yet—what's your take on that? How—is there—do you see room from the—as you're establishing these new communities, is there room to grow in that on the private sector?

Tim:

Well, my answer to that is always—

Joe:

Absolutely.

Tim:

Yeah. Well, I mean—‘cause—so, for collective defense communities, information sharing communities—you can call them different things—but that's—we like to refer to them as collective defense communities because that's the whole idea of it, right?

[00:23:22:05–00:26:15:06] Tim:

So, in that community, broadly speaking, the more members—and especially the more members that are actively participating—the more value the members get. Now—and you certainly need a certain amount of members for the engagement to really start—and it starts to snowball. I don't think that I've ever hit an upper limit where the membership gets too large. I mean, I do remember back when we were at FS-ISAC—their membership was over 7,000. It was really, really big. It was global. But you don't get so many members that it ceases to become useful. It's just, what you end up is that there's different cohorts that develop internally, that go and kind of do their own thing based upon their own interests and needs, right? So, I actually haven't seen an upper threshold, but I do agree that there's a lower threshold to actually get that done. As to how a program like CVE would have private sector participation, I mean, first of all, the primary participation that we want private sector to have—whether it's use of CVE or MITRE ATT&CK—is to actually use it, right, to operationalize it. And then, as a part of the process, I mean, obviously, companies that are producing software participate when they're submitting a vulnerability disclosure properly and through those channels. In terms of the private sector administering it, I mean, Joe might be a little bit too close to the problem to speak, not authoritatively. I think that he can speak authoritatively, but maybe politically, I would say. I think that we only want someone like MITRE to be doing that because we don't want it to be a commercial thing. We don't want it to—and I'll just tell you, from an information sharing, if it's anything like an information sharing community, they can be quite messy and busy and complicated. We need something with a lot more structure. And so, I think that MITRE is the best place.
There's kind of a neutral third party that's not a vendor, but it's not the government, and it's not an industry participant. It's kind of a chimera of all of those things but is perfectly suited. So, I agree with Joe too, that if the uncertainty either leads other participants in the market to see an opportunity, or that it becomes kind of a commercial pay-to-play, all of those do a lot of damage to the usefulness of the underlying vulnerability disclosure process. So, I don't—I hope that that’s—the one-year extension is only just the beginning, and we'll all kind of see the sense of it. But only time will tell.

[00:26:15:06–00:26:51:04] Matt:

Yeah, and as a follow-up to that, Tim. And then Joe, same question. As—if I'm someone inside a company that maybe culturally hasn't really been keen on engaging a collective defense community of various stripes, how do I communicate that effectiveness to leadership? And people who are willing to invest in the legal side and the agreements and things like that, how do I—or budgeting, right—how do I communicate that value and necessity from a decision-making standpoint at this point?

[00:26:51:08–00:29:21:13] Tim:

I mean, that is one of the toughest questions, and I think that it's not just with collective defense or information sharing communities. It's a problem for not just their participation in external entities like information sharing communities, but it's a problem for their internal security teams too, right? It's like, how do they justify their budgets? And it's really, really difficult ‘cause how do you quantify the likelihood of something that didn't happen having happened, and what the costs have been? And I think that, over time—I have seen, at the last S4, a number of different presentations that were kind of around that quantification of risk, applying dollar amounts to help resourcing executives understand what the value is and to kind of put it in dollars and cents, which is kind of the language that they speak—but it is a challenge. And oftentimes, it's not just that challenge where you've got to make the business case why this is a useful thing to do, but there's also oftentimes—and I see this ‘cause all the people that are kind of member prospects are usually meeting with me before they're joining—is that there's a lot of different areas of the company that may be joining, but oftentimes, they don't often talk to each other, right? They're completely different business units. So, whether I'm talking to someone on kind of like the more hardcore cybersecurity, cyber threat intelligence team that's interested in joining, or it's someone on third-party and supply chain risk or whatever—there can be all kinds of different areas, and they don't oftentimes talk to each other. And so, there's a lot of challenges, but I think that, over time, we're getting better at explaining the overall value.
I don't know, and I don't know if I'm jumping ahead in sort of like a question, but I don't know whether the uncertainty will necessarily directly result in sort of those organizations sort of getting religion and realizing that some of the other avenues that they relied on in the past may be either shrinking or going away altogether. And so, the necessity for community may increase in their estimation, and the value of it sort of go up without us having to make the case. I don't know if that'll be the case. I think that, right now, anecdotally, what we're seeing is there's a lot of people sitting on the fence, waiting to see what shakes out, but early signs are that we definitely are seeing more interest in that community.

[00:29:21:17–00:32:41:12] Matt:

You described these kind of silos inside a company, and it makes me think it's almost as if collective defense needs to be adopted internally, culturally, before they can apply it externally to what other industry they're in. Joe, from your perspective, what—do you have any thoughts on that side?

Joe:

I mean, the problem in general of collective defense and threat sharing really comes down to a classic collective action problem—to bring a little Mancur Olson into the discussion here, for those who are following along. But, generally speaking, if you start looking at collective defense and threat sharing, the idea that all we need to do is share more information or share more with each other is sort of the “thoughts and prayers” of information security—in that it sounds really good, it appears to be sound advice, and yet implementation is murky at times for reasons like what Tim said, as well as what we were just concluding on in terms of organizations themselves, which, when they're of sufficient size, are effectively siloed off from parts of one another—let alone from other organizations. And that doesn't even get into cross-sector, international, and other sorts of barriers that exist.

Matt:

Right.

Joe:

So, certainly, we've seen sector-specific models, like the ISACs and similar, that have built a platform or a mechanism for ensuring some degree of threat sharing and collective defense for specific industry areas. And that has been effective in a large number of cases. But the question is, how do we start extending this to an even larger degree, especially in an environment where there's greater uncertainty for things that are at the highest level of collective defense—which would be like national defense and security—where that remains uncertain, such as what Tim had referenced earlier in terms of the US federal government's potential moves to push greater responsibility for critical infrastructure security and similar endeavors to the states and local authorities? What does that mean, then, for broader coordination and resistance to threats and threat actors that are bridging or moving across these jurisdictions and so forth? And that doesn't even get into other issues when it comes to participants within collective defense of getting every party to roger up or to contribute equally among all actors and to get rid of problems such as the leeching issue of certain entities that are members of these communities but then don't really effectively give back to them. And how do we build a collective defense model that allows for and incentivizes not just consuming information that is shared but also sharing that information about breaches, incidents, adversary operations, and similar, to make those sharing communities more robust? So there's a lot going on.
There aren't very many—if any at all—easy answers for how to solve these issues, but it certainly seems to be a more complicated and potentially almost hostile landscape for resolving some of them right now, given significant amounts of uncertainty in terms of policy, as well as economically, for the private sector, which relies on a lot of items that have traditionally been assumed would be provided by public sector entities and similar.

Tim:

Right. Thanks, Joe, for just very, very concisely describing the frustrations of my life.

Matt:

Well, that's—so, we've been talking a lot about sort of the nature of it as it is, from a structural standpoint—building them, maintaining them. But there have been a number of very public cyberattacks that have hit at that trust factor as well. Salt Typhoon caused all of us to question how much can actually be seen when we're just sending texts or making calls on these networks. At the same time, pretty much on the tail of Salt Typhoon, we see Signal-gate really clearly emphasizing that not all tools are ideal for certain types of communications, especially when you're in a regulated scenario—whether government or private. Joe, for you, and then Tim, same question. From your perspective, Joe, how have you seen—if you have—these kinds of cyber events that hit at the underpinning of the communications tools we always assumed would be there? How have you seen that—if you have—impact the way people share intelligence and the collective defense approach overall?

Joe:

That's a really interesting question, because I think notionally, a lot of folks have recognized that, whoa, this is a problem, but practically have continued business as usual for lack of obvious alternatives or other main means of operating. So, certainly, I think there's already been significant use of tools like Signal and other secure communications mechanisms for a variety of purposes. And yet, at least from personal experience, a lot of what counts for sharing communities and similar across industry, and for researchers and so forth, are largely using cloud-based applications that have cleartext stored in the cloud and similar items for coordinating and sharing information—which has always struck me as rather curious, especially as we start moving into a more contested environment in terms of communication. Some of that is certainly due to a combination of convenience and cost. So there are significant barriers to overcome in order to migrate to other areas, but it really is something I think organizations need to start thinking more about when it comes to the communication and coordination perspective. Because one of the things with Salt Typhoon—everyone was very worked up about, “Oh my goodness, the PRC is listening to my private phone calls or capturing my SMS messages,” or whatever. And yeah, that's one thing for sure, and that's concerning. But arguably, that misses an even more concerning point: if you have access to be able to do that sort of collection, you also have access to turn those devices off or to degrade them in some way. And it's not just about “are my communications compromised,” but “do my communications pathways even exist at that point?”

Matt:

Yeah, killswitch.

Joe:

Yeah. And I think it's come up in some exercises that I've been a part of or have seen the results of that folks haven't really figured out—what if all I have to work with in an incident for sharing purposes is push-to-talk radio or HF or something even more historical or whatever along those lines ‘cause—

Matt:

Homing pigeons.

Joe:

What’s that?

Matt:

Homing pigeons.

Joe:

But there is an RFC for it. But yeah, those ideas of how does sharing even exist in a contested and degraded environment, if we know that sharing, coordination, and similar are very vital to collective defense and cross-sector defense? What happens when that goes away? And that's a question that—we're not going to answer it here—but I hope folks are at least thinking about the implications behind it and what that might mean for what cross-sector, whole-of-economy incident response and resiliency would look like if our established means of coordinating communication and sharing disappear altogether, right.

Matt:

Right. Right. Yeah. We always joke about two tin cans and a string could actually officially be some sort of out-of-band communications option, but you kind of—you get limited by physics pretty quickly.

Joe:

Yeah.

Matt:

Tim, from your side—building, obviously building and maintaining these groups—have you seen any shift from—with the latest kind of—the Salt Typhoons and Signal-gate?

Tim:

Well, let me start where Joe did, which is that, while the types of threats that we face certainly highlight the insecurity that we face, what's kind of interesting is, it's a little schizophrenic. Because, on the one hand, there's certainly a growing awareness—both of the vulnerability and among some community participants that we run into—that are doing a lot better assessment. Just recently, I was on a call where I was being queried by an internal auditor, like a security auditor, going over sort of our digital systems, and questions about that and data provenance—like, all kinds of questions around that. And I was like, well, this is really interesting. This is the first time that a member organization, before or during kind of their onboarding, wanted to know and sort of pull back the curtain on how we're doing our own internal security and the platforms that we utilize. So, on one hand, that's great—we're seeing more awareness of the security of the actual digital systems that we're using. But, on the other hand, I think that Joe is completely right, where most of the time people go, “Huh, that's really scary,” and then go on using natively insecure communications. And I think what we do try to do is both internally use more of those secure systems, including ArmorText, and then also recommend that others do the same—as it pertains to sort of the broad-scale scope of how we work across industry and government and society in general. So, GRF has a community that's not just industry-focused. Manufacturing-ISAC is focused on manufacturing, but GRF has a community called the Business Resilience Council, which is an all-hazards, kind of all-industry group—anyone can join. And that's where a lot of our kind of central programs live that are, by definition, sort of cross-sector in nature. And it's funny, because one of the upcoming exercises they're doing is a communications disruption.
The first date is on June 11—you can go to the website, and it's free to attend. And you can bring your team, and we're going to go through that. And I was on the planning cycle for that. And it gets scary really, really quick when you start doing disruptions, ‘cause it's not just communications, like, “Oh, well, my telephone would be down.” I mean, it's like, “Well, if the telcos are down—and I live in Northern Virginia, kind of out the Dulles corridor—all of those server farms are not going to have connectivity either.” So, you start getting SaaS platforms that are not working either, and it sort of escalates from there. And you're right, we don't have ready answers for that. But I think that one of the things the exercise is trying to do is to help walk organizations down what that would look like for their own organization, and how to then think about what resilience would look like—whether that's an alternative out-of-band communication (although that might still be hampered by lack of telco availability)—but how they're going to make their organization more resilient in the face of that. So, that's something that we're actually trying to do. And, like I said, that exercise is actually free to the public to join.

Matt:

What—Tim, just a real quick one—what is the website if folks are curious about that?

Tim:

They can just go to grf.org and just either look for exercises or events or something.

Matt:

Yeah, close enough. Good enough. All right. Well, you mentioned sort of best practices, and that's something that I'm very much pushing. And, Joe, this also gets at something you've said—like the framework, right? And both of you kind of referenced this in different language around, “There needs to be, sure, lots of people and participation.” That's critical. There's plenty of room for more participation. But what is the framework of interaction? And what are best practices? What should somebody look for when they're joining a community as an indicator of, “Okay, these folks know what they're doing, and this is where I need to put my time”? ‘Cause there are plenty of places to put your time when it comes to these kinds of things. So, Tim, I'll just kind of keep the thread going. From your perspective, what truly differentiates an effective community from everything else?

Tim:

Well, I mean, one is just the willingness of the participants to actually engage on the topics. And—to Joe's point—all of these communities are always going to have the little red hen problem, where everyone wants the bread at the end, but not everyone wants to help make the bread. And that's always going to be an issue. But, generally, the more successful communities are ones where the people who are participating want to be there and are willing to actually engage on whatever the topic is. And one of the things that we're actually in the process of doing is creating a playbook for an event. And my job, basically, in most of these instances, is definitely not to be the smartest person in the room—‘cause if so, we've got problems—but it's mainly to actually get the members together who have real deep expertise. And they're the ones that are going to define what those best practices are in that playbook generation. So, we're going to have representative people from different parts of the organization. They're going to need to say, “Well, this is important for the organization for this.” And the operational tech folks are going to be involved. So, I mean, that's sort of what my role looks like in terms of trying to foster that collaboration, but it's really the members that actually create the value for themselves.

Matt:

Yeah. Joe, from your perspective—being the guy who has traditionally brought the frameworks in—what do you look for if you're going to choose a place to put your time in and invest? What are some of the things that you've found to be most successful as indicators?

Joe:

Certainly, it is a combination of maturity and almost clarity, if that makes any sense. So, for example, what side of the road do you drive on?

Matt:

It depends on what—if you ask my wife or not—but—

Joe:

That’s fair.

Matt:

—Usually right side.

Joe:

Okay, so typically, outside of a few places like the UK, Japan, et cetera, we're all driving on the right side of the road, for the most part. There is no particular reason why we have to drive on that side of the road. But it is really important that everyone kind of do the same thing. And it's that perspective that I think is very applicable when it comes to sharing communities, where a lot of things are arbitrary or an item of convention.

[00:43:52:13–00:46:54:08] Joe:

But there does need to be some degree of certainty and consistency for how these items work—whether it's how technical indicators are shared, or how specific threat reporting is communicated, or how things are just labeled—so you should look for organizations, for frameworks, that allow for that sort of standardization. What that standardization is doesn't matter as much as that such standardization is taking place. And it sounds silly when you think of it this way, but absent such rigor or such perspective, things fall apart relatively quickly or fail to scale effectively, because you lack the ability to automate items, to gain consistency and predictability out of how things are communicated and shared. So it's a very vital, if often overlooked, perspective on things. How do you set up something like a threat intelligence sharing platform, or TIP, or similar that's used by the community? What are the metadata items that are necessary? How are things tagged, labeled, shared, and sorted, and such? Looking for those sorts of details—most of your mature organizations are doing some of this. And I think it's one thing that comes out when you start talking about—to go back to earlier aspects of the conversation—whether some of these items could or should move towards private foundations and such: this is the hard part about how these things come together. These are decisions that often get overlooked until you realize that they needed to be made, and it creates a degree of friction in standing up something from scratch.

Tim:

Yeah.

Joe:

When you have to start piecing all of these sorts of items together in a way that allows for universal buy-in and applicability for the entities that are going to be users and beneficiaries of a given community.
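[Editor's note: As one concrete illustration of the kind of convention Joe is describing, many sharing communities standardize on the STIX 2.1 object format for indicators. This is a minimal sketch under that assumption—the helper function, the example domain, and the `tlp:amber` tag are hypothetical, not anything from the conversation:]

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(name, pattern, labels):
    """Build a minimal STIX 2.1-style indicator object.

    Which fields are required is exactly the kind of decision a
    sharing community has to agree on up front; the set below
    covers a common STIX 2.1 baseline.
    """
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",   # stable, globally unique ID
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,                   # what members should match on
        "pattern_type": "stix",
        "valid_from": now,
        "labels": labels,                     # community-agreed tags
    }

# Hypothetical example: a malicious-domain indicator tagged per a
# community's labeling convention.
ioc = make_indicator(
    name="Suspected C2 domain",
    pattern="[domain-name:value = 'bad.example.com']",
    labels=["malicious-activity", "tlp:amber"],
)
print(json.dumps(ioc, indent=2))
```

The point, per the discussion, is less the specific schema than that every member emits the same fields the same way—that consistency is what makes automated ingestion and cross-platform sharing possible.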

Tim:

Yeah. Let me just say that what Joe just said is really important. And I mean, I have to say, I'm still not someone who gets up in the morning and gets excited about naming conventions, but it is wildly important. And I think one of the things Joe said—the words that he used—is really, really important: without those, things can't scale. But I also want to say that while we need the structure, I've noticed that once we have the structure, it's often iterative, and we're building upon things. So I know that just right now, a community member of ours is coming back to us about our automated indicator sharing platform, ‘cause they utilize the same platform, and they have some building that they've done internally—they're coming back to show us how we might add some extra form fields that provide additional contextual information that kind of got stripped out when it was moving from one platform to another. And that's super, super helpful, right? So the community—again, the members—are actually providing the value to themselves and other members by sharing their best practices and experience. But again, it's back to that technical and sort of standard-setting rigor that actually benefits the community.

[00:46:54:13–00:47:56:05] Matt:

Right—that's stuff that has to be put in place by somebody. To the curator comment that I made, Joe, or, Tim, to the comments you made—there has to be some sort of structure. I see, from our side, we get folks that come to us and say, “Well, we want to do a community, we want to build a community, and we want to use your tool, ArmorText.” And I joke—I was talking to Matt Duncan at the E-ISAC a couple of weeks ago. And the analogy I used was, it's like people approach these communities sometimes like Michael Scott on The Office declaring bankruptcy. They just sort of walk out into the office and declare “threat sharing” and expect that that's all that needs to happen—that we just put a bunch of people in a room, and it happens. And there needs to be more. There need to be organizations that raise their hand and say, “Yes, we'll act as that framework, and we'll develop that common framework.”

[00:47:56:08–00:48:24:11] Tim:

I'll be the first one to tell you that the Field of Dreams model for threat sharing does not exist. If we build it, it does not mean that they will come. And it's actually one of the reasons why we don't have as many communities as we could have, because we're not going to start a community that doesn't have a large enough cohort to actually make it useful. We really need to have people that are willing to jump in on day one and not just sort of make it and then hope that people decide that they're going to join.

[00:48:24:13–00:48:41:22] Matt:

Right, right. Or—of all the analogies—the other one I think of is the underwear gnomes in South Park. It's just like, “First, we declare threat sharing, then success.” There's no plan in the middle. There needs to be something that you approach the whole thing with—a framework you look at it through.

[00:48:43:20–00:49:59:15] Navroop:

It's kind of interesting. I've been sitting here listening for a little bit now, and I've loved the discussion. But something that I've been thinking about, as Joe and Tim have spoken about where things are today—what kinds of things, perhaps, the private sector is going to have to take on, and what kinds of threat sharing communities are successful and which aren't—is that so much of this seems like it's going to come down to companies having to step up more and more. And that's likely to require an investment, both in terms of time and capital. And if that's the case, you're going to have a lot of folks who are going to have to go make business cases internally, especially in an environment of constrained budgets. And so the question is, what kind of advice can you give to the folks who are clearly cognizant of the fact that this threat sharing has to improve, but who now need to take this message and justify it—build a business case for their executives and for their boards? And this question is for both of you, Tim and Joe, right? What guidance or advice can you give the folks who are going to have to make this business case to executives and boards so that they actually get the funding they need to follow through on all the different kinds of initiatives you both have been speaking about for the past 50-plus minutes?

[00:49:59:17–00:52:29:12] Joe:

I can jump at that one, because—to make a controversial point—it may end up being the case that it is not the correct decision for an organization, depending upon their level of internal maturity and efficacy, to—similar to the point made earlier—declare threat sharing, let the mystery happen, and then declare success. There's a lot that needs to be developed, implemented, and put in place before an organization has reached the level of maturity and suitability to benefit from that collective defense model. If you're barely working with a minimalist security posture, and you don't have the capacity to ingest this sort of information or to apply lessons learned, then you're probably asking the wrong questions, and you can't make the business case at that stage. So the first thing, really, for organizations to do is make that determination of where they stand from a security and resiliency perspective in order to implement whatever they seek to get out of this sort of community framework. And once that hurdle has been addressed, to then look at, okay, what are the expected deliverables, as well as the expected contributions, required to be part of this community, and how does that align with our existing security metrics, key performance indicators, and similar, so that you can fold in that community contribution bit? Because it's not free, in a number of ways, to just sign up for any of these entities—you have to justify the cost not just financially but also in terms of time, effort, and labor to enable that sort of collaboration to exist. So really, it's a crawl-walk-run approach: first, ensure—are we even at a level of maturity where this makes sense? Once there, how do the expected gains, as well as the expected participation, align with what it is that we're doing currently?
And then, at the most advanced stage, aligning core measurements and metrics with how interaction with that community appears, in order to then tell a complete story of how that sort of membership and collective defense posture is integrated into the overall security posture of the organization—so that it's not just an add-on, but rather an integral part of how that organization is effectively defending itself and building the resiliency necessary to operate in a contested environment.

[00:52:29:15–00:53:02:03] Navroop:

I mean, what that sounds like to me is laying out a roadmap and knowing what your North Star is. If you're not ready for it today, working backwards to justify even just the initial investments you're gonna have to make to cover some of those gaps that you have around your own security posture—so that one day you could be ready to make those investments in collective defense or threat intelligence sharing, right? But that, to me, sounds like a continuous roadmap for improvement and refinement within the organization. I'm not sure I see that as directly contradictory, right? It's—you're still gonna have to make those business cases to even cover those gaps.

[00:53:02:03–00:54:37:03] Joe:

Precisely. And this all comes back to the larger security question—the larger information security question—of how do I justify my existence as what is perceived as a cost center? And the sharing community concept is no different from any other element of this discussion in terms of, like you said, establishing what that North Star or higher-level goal is in terms of value—creation, preservation, protection, and similar—and then what steps are necessary in order to get there. Depending on the organization, some of this collective defense stuff might not actually matter, but being able to critically ask and then answer that question is vital to then building up a program that aligns specifically with what is needed on the end-user side. And then, from the other perspective—from a sharing organization or a community organization—it's also about ensuring that the processes and deliverables that are generated from that platform align with what organizations can meaningfully implement and use, so that it becomes an easier story to tell on their end to justify participation. Just saying, “Hey, we're fostering collaboration and joint discussions on issues,” is very hand-wavy and difficult to put numbers to. So thinking, “What are the concrete security outcomes that are being achieved as a result of this?” is not just a nice-to-have but arguably—especially in the current economic environment—a necessary-to-have in order to make these sorts of things possible or palatable to those who are controlling spend.

[00:54:37:06–00:56:14:10] Navroop:

Yeah, very much so. I mean, it feels kind of like the classic marketing and sales divide, right? The one that executives and boards are always used to hearing about, which is—marketing says, look, we did all this marketing, it was amazing, but we can't show any direct attribution toward contribution of revenue; yet without this, we wouldn't have been in the press 45 times, and without this, we wouldn't have had all these other potentially vanity metrics. If you can't actually tie it back to what it delivered as value to the business, as an executive myself, it becomes really hard for me to continue to justify investment in those programs. And that traceability of your impact toward what we actually care about as a business is critical. So, as a CEO, that is something I'm constantly re-evaluating, even in our own investments as a company. And I imagine, in the current funding environment, that that's going to become even more important, as folks look to say, “Hey, we do need to further invest in this,” or, “No, we need to invest in something else more foundational before we can invest in this.” They're going to have to start to show a lot more traceability and accountability for how they drove value to the business itself. Tim, I know Joe and I jumped in right there, but the question was also directed at you. I would love to get your thoughts on that business case—perhaps not for the folks who are at the part of the roadmap where they're still evaluating whether or not they could or would benefit from engaging in threat intelligence sharing, but for the folks who know they're at that point and definitely do feel they would benefit from engaging in threat intel communities and/or greater collective defense. Any advice you can give them on how to make that business case once they've come to that realization?

[00:56:14:13–01:00:57:05] Tim:

Yeah, I think that we've been working on kind of refining that. I personally don't like the sort of negative, scare-tactic techniques of talking about, “Well, here's the average cost of a ransomware attack.” I don't think that's really helpful. But there are, I think, better ways to understand the threat that you might be facing as an organization. I think that we, as a community, over time, also have better metrics around the types of attacks—based upon organizational size, or manufacturing type, or whatever—that we can actually go to and look at what's actually happening out there to help them with that. But oftentimes, they have a pretty good idea about some of the threats they're facing. What I want to say is something a little bit different. So, I've always been jealous of sort of the folks in commercial spaces where they have a technical platform or something, right? And the reason that I'm jealous of them is that we both, at the end of the day, are trying to get users of our platform—or members, in my case—and we both are going to charge a fee for that product or service. And that's about where the similarities end. And the reason is that the commercial platform gets a lot of control—if you're a customer of, and I'll just leave out the names, but if you're a customer of “fill in the blank” threat intelligence platform or an OT security solution, they get to dictate to the customers: this is our platform, this is our solution, we're going to help you implement it, but you've got to come through their front door. I don't—and have never really had—that luxury, right? ‘Cause I have to meet the members where they're at. And the way that ISACs are organized is around industry vertical, by and large. There are some exceptions, but basically, it's around industry vertical—not how mature they are, not how large organizationally they are or how well-funded they are, whatever.
And so, if they're interested in joining for their own reasons, I'm not going to stop them, but I have to provide value to them wherever they are along that continuum. Which means that the value for a small organization might just be sort of helping them to understand what the threat landscape looks like for a similarly sized organization engaging in that type of manufacturing or something, all the way to some of our largest organizations that are very deeply involved in working groups. And they're super high-speed in terms of developing bills of materials for AI solutions that they're building internally, and obviously automated indicator sharing and ingestion—and everything in between. So, we have to be pretty flexible to meet member demand and get them the value. And I understand, from Joe's point of view, that oftentimes it can seem very hand-wavy—and to some degree, it is—but the value to the members is the value to the members. And because they're dues-paying members, they're sort of voting with their feet as to whether or not they continue to pay to be a member. And if so, they're getting some value. They're not all getting the same type of value—for all kinds of reasons—and they're joining for different reasons. But I think that making the case—and that's why understanding why they're interested in joining a collective defense community is important. I always ask them in our initial conversations, for instance, what their regulatory compliance landscape looks like. Are they working towards a particular standard—an ISO or NIST or what have you? ‘Cause all of that kind of defines where they would need to go. And it also helps make the case for them, if they're going back to resourcing executives, why this would help in a compliance case or something. So, I mean, it's a slightly longer answer, but there's a lot of reasons why they join.
There's a lot of ways that they can get value, but we're always trying to refine how we can help them make the case internally, but also how we can make the case to them. We're not coercing anyone to join. It's not an obligation, and there's no regulatory compliance piece that requires any manufacturer to be a member of the Manufacturing-ISAC. So, it's my job every single day to meet member demands and to drive value to members in whatever ways they see as valuable—which also changes over time. And so, we're constantly kind of refining programs and product offerings to the members based upon their changing demands.

[01:00:57:07–01:01:40:09] Matt:

All right. Tim, I'll pick up here with this question. We've reached the point in our episode where we want to try and drop a few Easter eggs. We've talked a lot about the current landscape, the impacts of the funding challenges, and the like. Obviously, that's already a pretty volatile, unpredictable environment. But moving forward, what kinds of risks should communities be focusing on? From your perspective, Tim—and, Joe, I'll ask you the same thing—what are some things that they're not focusing on that you think they should be, from that cybersecurity standpoint?

[01:01:40:13–01:04:53:04] Tim:

Well, having just said that all the manufacturers are unique snowflakes—and they're all different in many ways—I’ll generalize a little bit. With some exceptions—and those are typically manufacturers in specific industries, manufacturing either for specific medical purposes or for military-industrial complex companies—most manufacturers are actually relatively early in their security maturity journey, and even more so in their OT security journey. So, for a lot of them, it's actually more about just basic blocking and tackling, just kind of basic cyber hygiene. And then, from there—I think that sometimes, depending on who we're talking to—if it's an executive—they are concerned about nation-states, China, stealing intellectual property, which may happen, and could happen. But I think that oftentimes that overlooks the fact that they haven't made significant improvements in kind of just their basic network architecture to actually stop some of the more common threats, which would be something more like a ransomware attack or a breach of data that has nothing to do with the Chinese stealing intellectual property. Oh, and by the way, making those very simple changes is actually going to help you in both cases. So, we kind of start at the beginning. For some of the ones that are larger and much more mature, I think that what we're looking at is this: manufacturing is very far ahead in terms of integration between IT and OT, but it's also a weak point in their operations, because they have a lot of dependencies in the IT space into their OT process—so, like ERPs and the like. And so, I think that that's something that they need to work on. I think that they're adopting technology to help their business case at a very high level.
So, they're one of the first to really jump on the bandwagon of cloud-based for OT. Whether that starts off just in terms of ingesting telemetry data, or process improvement, or predictive maintenance—ultimately, I think it's going to head towards actual process control, which, for most people in OT, is the scariest of thoughts. But manufacturing has a much higher risk appetite in terms of adopting such technology. So, I think that there are some integration and some teething pains there in terms of how they actually go about doing that. And from my standpoint, what we're trying to do is figure out how they go about doing that securely. That's something that I think we're going to look at going forward.

[01:04:53:08–01:07:40:18]Matt:

Right. Joe, from your side, what's—what are some things that folks should be thinking about that they're not?

Joe:

So, I have an interesting perspective on this, because I think, especially when we start talking about relatively mature organizations that have done the hard work of investing in building a security posture and a security program, folks are getting better at the "How do I defend myself?" question. How do I ensure continuity of operations? And similar. There's still a lot of work to be done there, but there's at least general awareness of, "Oh, goodness—whether it's ransomware or something else, there are bad things that can happen out there. At some point, I'm going to have to solve this."

What I think folks are not thinking about—and what bothers me, given the current environment—is that there's less awareness, less focus on: how do I, as a given organization, play into the larger functionality and sustainability of the greater society and economy in which I reside? Because I think this aligns with what we've seen from a number of high-end threat actors, as we start getting into some of the nastier state-sponsored activity—like a Volt Typhoon, a Salt Typhoon, or even a Sandworm, or something similar—of:

what are those critical dependencies, and how do I, as an organization, matter for the functionality of a bunch of other organizations? Whether we're talking about manufacturing and what I provide for others, logistics, electric utilities, gas transportation, fuel transportation—all sorts of other items—and that's not even getting into things like the healthcare space and local and municipal governments. I think there is a significant focus placed on organization-specific defense.

And I think we're losing something. This ties back to some of the things we talked about earlier: we're losing some of the overarching perspective that was provided through certain federal or government programs at the national level—of:

how do all these pieces fit together, so that it's not just about defending this specific organization or my specific company, but about how we maintain continuity of services and functionality in a contested environment, should a worst-case scenario unfold? I don't think people are thinking about that as much as they should, in an environment where we're seeing adversaries probe, look to learn more about, and potentially try to subvert what these cross-sectional relationships look like among various organizations. And how that could lead to effects where it's not just, again, that this specific organization has a problem, but that society has a wider problem, because critical functions have been degraded—or maybe even outright destroyed—as a result of some action.

[01:07:40:20–01:10:15:14]Tim:

Can I add something to what Joe just said? Because I think it ties the whole discussion together. I agree a hundred percent with what Joe said. And I also think the problem is that it's not something individual organizations—meaning businesses, the average manufacturer—are actually going to be able to think about, because they don't have a perspective broad enough outside their own remit to see those intersectionalities. Right? And that's actually why organizations that sit above individual organizations—whether that's an ISAC, or a MITRE, or a government agency—need to exist and to be functioning: they have the visibility and the resources to look at some of those intersectionalities, where most companies are just answering the mail and doing what they need to do. So, I'll give you one example. I don't want to throw too many of them out there and give the bad guys too many ideas. But, for instance, let's say that in the United States there are three primary electronic logging platforms for over-the-road distribution—trucking—that are required for federal and state compliance. Drivers have to log how many miles and how many hours they've driven; there are mandatory rest periods, et cetera. Well, most of those are cloud-based platforms. How many of them would have to go down before, all of a sudden, there's a massive compliance issue? Then drivers have to risk not complying with federal and state statute, because they might have to show those logs at weigh stations and the like.

That would start to hamper manufacturing supply chains. And over-the-road trucking and transportation logistics is certainly something our members are concerned about. But that's an example of one of those things where it takes a little greater visibility to pull the lens back a bit, see more broadly, and recognize:

okay, this little area here, if you pushed on it, might have a broader effect than just on this individual member. But I think, again, that really emphasizes the need for some of those larger organizations to stay involved in that overarching security program.

Matt:

Exactly.

[01:10:15:16–01:10:47:04]Navroop:

Yeah. Tim and Joe, I think you've quite effectively teed up a future episode of The Lock & Key Lounge; you're both going to get an invite back, for sure. We've definitely come to the end of our time here. In fact, I think we've gone a little over. So, I'm going to ask you one more question. It's a little bit of the fun question we love to end on here at the Lounge, and it's for both of you. You've successfully navigated a crisis where threat sharing was key to mitigating serious damage. What's your solitary libation of choice?

[01:10:47:04–01:10:53:01]Tim:

I don't know. Joe, what—I mean, well, I've got an answer, but I'm interested in what Joe's answer is.

[01:10:53:05–01:11:19:21]Joe:

Well, last time I was on the show, I made it known that I'm more of a beer person than anything else. And since we're recording this in the spring, with summer right around the corner, I'm going to go with one of my favorite summer brews, to start thinking of nicer weather: Bell's Oberon, an American wheat ale, which is quite tasty and should be available pretty soon after we tape this. So, I'm looking forward to that.

[01:11:19:23–01:11:21:02]Matt:

I think Giant already has it.

[01:11:21:05–01:11:22:13]Joe:

Oh, even better.

[01:11:22:15–01:11:23:11]Matt:

It's one of my favorites, too. Absolutely.

[01:11:23:11–01:11:28:00]Tim:

Isn't Bell's a—isn't it a Kalamazoo beer?

[01:11:28:02–01:11:29:15]Joe:

Yep. Just outside of Kalamazoo.

[01:11:29:15–01:11:39:22]Tim:

Yeah. My mom's from Kalamazoo. I'm going to be boring and just say whiskey, generally. Usually American, probably bourbon.

[01:11:40:00–01:11:43:10]Matt:

Any of the above, I'll take it. That’s where I am right now.

[01:11:43:10–01:11:46:02]Tim:

It's whatever Navroop is ordering.

[01:11:46:04–01:11:47:13]Navroop:

Fair enough.

[01:11:47:14–01:12:02:14]Tim:

Because we did that tequila tasting a while ago. I had that one that came in the green bottle. It looked like a child made the labels in Photoshop, but I haven't been able to find it since.

[01:12:02:18–01:12:22:13]Navroop:

Yeah, you're talking about the Delma. I will absolutely host all of us again for another such event. And if we do it in New York, there's actually a bar that specializes in those tequilas. We can actually go do a tasting of everything they've got—on me. Would love to have you both out there. So, let me know your schedules, and we'll make it happen.

[01:12:22:17–01:12:24:02]Tim:

Sounds like a plan.

[01:12:24:04–01:12:27:02]Matt:

All right, well, Tim and Joe—any closing thoughts?

[01:12:27:06–01:12:32:15]Tim:

No. I mean, I don't want to make any more work for the editors, cutting this down to time, than they already have.

[01:12:32:20–01:12:49:11]Matt:

Well, for those listening, thanks for joining us on The Lock & Key Lounge. Remember, cybersecurity threats don't care about budget cuts. But, fortunately, good whiskey and wine never mind your company either. So, until next time, keep your communications secure and your libations ready. Joe and Tim, thanks for your time today.

[01:12:49:12–01:12:50:20]Tim:

Thanks, Matt and Navroop.

[01:12:50:22–01:12:54:06]Joe:

Thanks for having us.

[01:12:54:06–01:13:26:02]Matt:

We really hope you enjoyed this episode of The Lock & Key Lounge.

If you're a cybersecurity expert, or you have a unique insight or point of view on the topic—and we know you do—we'd love to hear from you. Please email us at lounge@armortext.com or reach out via our website:

armortext.com/podcast. I'm Matt Calligan, Director of Revenue Operations here at ArmorText, inviting you back here next time, where you'll get live, unenciphered, unfiltered, stirred—never shaken—insights into the latest cybersecurity concepts.