The Entropy Podcast

Left of Boom: The Intelligence Edge with Michael Freeman

Francis Gorman Season 1 Episode 8

In this episode, Francis Gorman speaks with Michael Freeman, Head of Threat Intelligence at Armis, about his career path from working in U.S. intelligence and cryptography to co-founding the cybersecurity company CTCI. The discussion explores how Michael's approach to threat intelligence led to the early identification of significant vulnerabilities such as Log4j, sometimes months before they were publicly known.

The episode covers:

- Michael's background in cryptanalysis and offensive security

- His perspective on post-quantum cryptography and emerging risks

- The founding and approach behind CTCI, including identifying active vulnerabilities before others in the industry

- The challenge of false positives in commercial threat feeds

- The importance of asset visibility and contextual vulnerability management

- The role of AI in both enhancing cybersecurity operations and in aiding attackers

- Michael's thoughts on modern election security, influence operations, and the broader geopolitical implications of technology

- Risks of over-reliance on AI for critical thinking and decision-making

The conversation emphasizes practical insights into how organizations can better understand and secure their environments in the face of rapidly evolving threats.

Francis Gorman (00:01.253)
Hi everyone, welcome to episode eight of the Entropy podcast. I'm your host, Francis Gorman, and I'm joined today by Michael Freeman, head of threat intelligence for Armis. Michael, how are you doing?

Michael Freeman (00:09.038)
Well, thank you.

Francis Gorman (00:12.571)
Great to have you on Michael. Before we talk about your current position, I'd like to explore a little bit of your past. You've done some really fascinating stuff over the years. You've worked for American intelligence agencies and set up your own company, which led you on your current journey. Can you give me a flavor of your journey to date through the land of technology, cybersecurity and business?

Michael Freeman (00:36.238)
It's not a normal path that most people take. Basically one born out of curiosity more than anything. My original goal was to go into the US military, the special forces route. That was what I grew up, you know, planning on doing. But it was one of those things where whenever I got my hands on anything, I was taking it apart, figuring out how it worked, putting it back together, trying to make it better, trying to make it do things it's not supposed to do.

Long story short, that led to playing around with computers at a really young age, trying to figure out how programs work. Playing around with mainframes at my dad's military base. Things I probably shouldn't have been touching to begin with, you know. It kind of evolved from there. Next thing I know I was doing some work as a cryptographer, more on the cryptanalysis side, and I was finding it's much easier to...

attack the implementation of cryptography, or even the system surrounding it, than it was the actual cryptosystem itself. That kind of led me from, you know, figuring out how things work to figuring out how to break things. And that led me into the space of, you know, doing more offensive-side work.

Our team did the DEF CON CTF way back in the very early days of DEF CON, and it evolved from there.

Francis Gorman (02:00.369)
Yeah, that's really interesting. And on the cryptographic space, I can't resist asking you now about post-quantum and some of your thoughts there, before we jump into the journey again.

Michael Freeman (02:10.446)
Yeah, it's going to be an interesting journey. It's one of those things that everybody is definitely concerned about, and rightfully so. It's something that's going to be inevitable, but we don't know when. That's the key thing. We were taking a look at four- and eight-qubit quantum computers back in the late 90s and thinking it was going to be an evolution that happened very quickly. Looking back at the 90s, the evolution

of quantum computing really hasn't taken off to the point we thought it was going to, but it's definitely making leaps and bounds. It's definitely something we see on the precipice. And it's right there, close to the point where designing and building post-quantum cryptosystems is now taking higher priority, and rightfully so. We don't know when it's going to happen.

Francis Gorman (03:07.111)
That's the problem, isn't it? We don't know when it's going to happen, but we know it's more than likely going to happen based on the current trajectory. Michael, I want to ask you a little bit about CTCI. You helped co-found that company. Something that I found fascinating when I was doing a little bit of background research on you is you saw attacks that no one else was seeing, and some of them three to six months before CISA picked them up. Can you explain to me, one, how you...

Michael Freeman (03:08.769)
It is.

Francis Gorman (03:36.251)
came across the concept to take this approach, two, how you were so far ahead of the game, and three, what did you learn?

Michael Freeman (03:44.718)
Yeah, it came about when I was doing some work for a think tank, and this think tank primarily only does work for intelligence agencies, groups like DARPA, IARPA, and a little bit for NASA. And we were doing some extremely cutting-edge work for them, for the customers. It was a situation where life just changes. I had an opportunity to go from the think tank world to a Fortune 100 company.

It was a company that had been trying to recruit me for quite a while. They were standing up cybersecurity from the ground up. They had nothing in place. It literally was a Fortune 100 company with one guy that was in charge of cybersecurity, and it was just one guy doing all of cybersecurity. The premise was there were a number of new teams that needed to be set up. You're going to build a vulnerability management team, incident response, forensics, security operations, you name it.

And as I was helping set up these teams, I started taking a look at some of the key problems we really had. And I always try to bring things back to the most core element of whatever it is we're trying to do. And I would look and say, OK, we're buying all the tools imaginable. We're buying all the threat feeds imaginable you can get. And it's only covering a small little component of what I actually saw as the true value that we really needed. When I looked at the vulnerability management team, I said, look,

you know, let's take a look at the vulnerabilities of the systems we at least knew about. And we came back with something like 28 million vulnerabilities in aggregate. Looking at the criticals and highs, there were a little over six million criticals and highs. And as I took a look at the actual true vulnerabilities, I said, we have a two-year plan just to address the criticals and highs, nothing else below that. But I knew several threat actors were actually targeting

the fours, fives, and sixes on the CVEs that we were exposed to. And I looked at the team and said, what's our plan for this? And they said, well, we're not going to get to these. It's not a priority. Another problem I had is when I took a look at all the threat intelligence feeds we were paying for, and we had paid quite a bit for some of these feeds, there were a lot of false positives. I looked at the numbers, and it was close to a 900-to-one false positive rate. We had issues with...

Michael Freeman (06:08.814)
firing on IOCs and everything else like that. I'm like, what are we missing here? That's a question I ask myself when I'm trying to tackle any type of problem. The most important thing is the way you ask the question, the type of question you ask, and the way you think about the question. So I realized that everybody is focusing, in my opinion, on a very right-of-boom concept, which is basically,

in military terms, it's already happened. The issue has already happened. This is really far after that. So I looked at it and said, what are we missing? I said, we're missing the left of boom, the very proactive nature of things. Before I came on board to this Fortune 100 company, I still owned the IP of work I was doing for the think tank. That's the nature of the type of work we were doing. So I went to them and said, I have some technologies that I've created

for the think tank and in previous careers doing consulting work for other agencies, things like that, and I still own the IP for it. Let me take the typical intelligence lifecycle, which we were not doing. We were basically trying to collect all the intel we could, search for bad things, and then try to act upon it. But the typical intelligence lifecycle starts with a question. And I said, what's the question that we need to focus on?

To me it was: what are the vulnerabilities that threat actors are using right now or attempting to use in the near future? So I took a look at the technologies I'd created and saw that three separate technologies could each help answer part of that question individually, but they could also work together, help each other refine and identify areas, and let me answer the question much faster. So I put together this technology stack that

uses a lot of really unique approaches. Some of them are actually very old-school approaches that we've had in the industry for quite a long time, just used in a unique, different way. Putting this together, I started providing this intel to a guy that would eventually be my co-founder at CTCI, but at the time he ran the security operations team. He was basically building a really unique AI pipeline, things like that, to help find bad in something like 68 terabytes of data a day.

Michael Freeman (08:34.99)
Eventually, he came back to me, like, you know, the data you're giving us, you're seeing it way before anybody else sees it. Sometimes up to a year before we actually see it being hit in our environment. So, you know, how are you doing this? So I showed him what we did and how we did it. And long story short, we had a vendor in that we bought a new technology from. The CEO of that company was also a private investor himself. He took a look at what we did and,

with just a quick handshake deal, he wired me a little bit of money to help prove out the concept a little bit more, showcase it to a couple of CISOs, friends of his, and basically said, let's start up a company. At this Fortune 100 company, after you're there for five years, you get lifetime access to buy their product at a substantial discount. I had already reached that point. My co-founder was like, well, I've got...

like four more months to go before I hit that point. So we stayed on for another four months, got the legal side and everything stood up for the new company, CTCI, and we went and launched it. We launched it at RSA and showcased it to some friends. We had a lot of friends lined up to have breakfast and lunch and coffee with us. And the very first meeting was with a friend of ours, and he tried to buy the company immediately. He was like, I need this right now. His company did threat intel.

He took a look at what we did, compared our data to his, and we didn't even have a customer yet at this point. We had a customer three hours later. We went to meet with another friend of ours and he literally pulled out a dollar on the spot and said, okay, here's my down payment, and I'll send you a contract. So yeah, we looked at each other and went, okay, we definitely have something here, and we took off.

Francis Gorman (10:29.703)
That's incredible, Michael. I suppose you don't know you have value till other people start buying in, and then when that happens, it's magic. Is there anything you found months before anyone else where you went, this is a big one?

Michael Freeman (10:45.806)
Quite a few things we've found. One of our processes, which we haven't divulged too much, is identifying when certain types of vulnerabilities are being fixed within certain technologies or software libraries, whatever the case may be. And we had a process that flagged for us a potential issue coming along in a very critical software library.

We looked at it and went, this could be bad. At the time we didn't have Armis's kind of reach when it comes to identifying, you know, how widespread this particular type of technology would be within other technologies. But we knew enough about a lot of technologies to go, this software library is used in a lot of things. And that was the Log4j issue. We caught that two months early, actually. We actually caught it when it was being discussed

among some of the developers. A couple of other people had found, not the patch, but the bug itself, and said, I think this might be bad. Our intelligence collection picked up on it, ranked and categorized it, and said, this is a potential issue. So we started looking at it and went, actually, this is quite easy to exploit. And we talked to a couple of our customers and found out, yes, they definitely have a lot of Log4j in a lot of their technology stacks. And so...

when the issue finally became known publicly, we had been at the very forefront of identifying the evolution of the attacks as they were happening, because some of the early bug fixes weren't fully working the way they needed to. We were letting people know, here are the variations of the attacks as they're evolving, because we have some unique capabilities to help us identify that. At the time, a lot of

national CERTs around the world were basically linking to our LinkedIn, because we were posting all the information almost daily, sometimes three to four times a day, on the

Michael Freeman (12:47.342)
new evolutions of the attacks and how they were evolving. So that was a key one. We caught another one as we were testing out the concept, when we were still at the Fortune 100 company. We had a friend of ours that was at an extremely large defense contractor. And we started seeing some intelligence chatter pointing to something that looked like it was going to be a really, really bad issue coming.

We took a look at it, got our hands on the exploit code, tested it out, and said, yeah, this is going to be very bad. That was something that eventually became known as BlueKeep. We use Shodan and Censys to identify who's exposed to the internet, and this company kept popping up. We're like, this is going to be bad, we need to let our friend over there know. So we did, and he took action on it 10 days earlier than everybody else.

A couple of weeks later, that 10-day heads up saved them what would have been close to $100 million on average if they had been compromised. Ten days after we notified them, they started seeing the hits. But thankfully, the mitigating controls were in place before that.
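
(For readers curious what that kind of exposure check looks like in practice, here is a minimal sketch using the official Shodan Python library. The organization name, API key, and result limit are placeholders, not the actual query Michael's team ran.)

```python
import shodan  # pip install shodan

# Placeholder values; substitute your own API key and organization.
API_KEY = "YOUR_SHODAN_API_KEY"
ORG = "Example Corp"

api = shodan.Shodan(API_KEY)

# Search for hosts Shodan attributes to the organization.
results = api.search(f'org:"{ORG}"')
print(f"Internet-exposed services found: {results['total']}")

# Print a sample of exposed IP/port/service combinations to review.
for match in results["matches"][:20]:
    ip = match.get("ip_str")
    port = match.get("port")
    product = match.get("product", "unknown")
    print(f"{ip}:{port}  {product}")
```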

Francis Gorman (14:04.987)
That's intelligence worth having right there. A hundred million saved. I hope you had a percentage kickback on the early guidance. I suppose when I think about this, you're talking about identification of things months before they're known in the wider ether, but somebody knows about them. What does that mean, in terms of your perspective, for the modern world, for elections, for, you know, bedding bad actors into companies and sitting silent?

Michael Freeman (14:09.506)
Yes.

Francis Gorman (14:34.951)
We're in a very unstable geopolitical climate at the moment. What's your perspective there? What does this mean for the wider technology landscape that we all rely on so heavily in our day-to-day lives?

Michael Freeman (14:46.414)
You know, it's a good question. Especially with the advent of large language models and the ability for threat actors to use those in a way that allows them to really compound the return on investment for themselves. In the past, if I had a target that I was going after, it was usually a long, drawn-out process.

I'm spending a lot of time mapping out what the infrastructure looks like, how the people operate, what the operational security flaws are going to look like, what technology stacks they're going to be using, how am I going to gain access, how am I going to...

Michael Freeman (15:35.534)
The low and slow method is a very effective approach to it. You know, sometimes I'd look at it and go, okay, if I can get to this step, I'm going to have to figure out a vulnerability within this type of technology for me to be able to move around, because possibly I don't have one right now. So I'm going to have to, you know, spend the time later to find one, because I don't want to use one that's already known.

So using AI can really help fill those gaps very quickly. And I've seen that happening now. I've been able to get my hands on some LLMs that some threat actors are developing that have provided very unique capabilities for them. One in particular that stood out for me: after you've compromised an environment, you've stolen Slack channel conversations, you've...

got on their wiki and stolen a bunch of information, compromised an email server and stolen a bunch of emails. You could put this into a database with a RAG front end and use the LLM on it to actually identify certain individuals within that company who would be very susceptible to blackmail. Because maybe a comptroller identified some

financial issues, raised it with the CFO, and the CFO said, don't worry about that, it's not a problem. But, you know, it actually is. So the LLM can help identify two potential people you could blackmail in that environment, saying, you know, based upon this, you broke this law, and I'm going to let the authorities know unless you pay me, you know, $2,000 in Bitcoin or something like that. So I've definitely seen, you know,

the evolution of LLMs really helping drive threat actors and some of the unique tools I'm seeing right now. But on the flip side, we're identifying ways we can also use the same types of technologies to actually make ourselves even more secure. There were some new capabilities that came out last week where you can put a couple of LLMs, put an LLM in front of, no, I'm sorry,

Michael Freeman (17:58.062)
I can't remember the name of the technology off the top of my head, but basically it's a tool set that helps you reverse engineer malware much faster and easier using LLMs. So that's an evolution we're seeing. We use AI quite extensively in our processes, and we can now do the work of 200 to 300 people with one person. We did that with CTCI and we're really taking that capability here at Armis. I think it's going to be,

for a while now, an AI arms race, where threat actors and defenders are going to use AI in different ways to try to one-up each other. The culmination of that will be that those who can actually understand the environment better than the other side will be the winner. So whether that's going to be the person doing the offensive work, if they're able to use AI and understand your environment much better than you can,

then they're going to be the winner. Likewise, if you're the defender and you can use AI to understand all your assets, where they are, what the context is around them, what kind of controls you have in place, you're going to be able to beat out the attacker that way. So that's where the evolution of it is going to go.

Francis Gorman (19:12.935)
I think that's a really key point. I had Francesco Cipollone on last week. He's the CEO and founder of Phoenix Security. And he was talking about a very similar thing: that the mean time to attack now with an LLM is minutes, with the right makeup. And the point he was trying to get across is, if you can understand the attack surface of an environment, then leverage an intelligence feed and an LLM or whatever you have to

Francis Gorman (19:43.117)
exploit that vulnerability, you have the upper hand straight away. And as he was talking, I was thinking about what you talked about earlier: all of these critical vulnerabilities and high vulnerabilities that you identified and the amount of time it takes an organization to patch and rectify them. So now we're in this nuanced situation where the time to attack versus the time to patch are completely at odds with each other, by months if not years in some places.

Michael Freeman (19:43.982)
Yeah.

Francis Gorman (20:13.649)
So that's going to create a whole new way of leveraging these technologies to really drive the exploitation front, and that comes back to knowing your environment. And I think this is where Armis comes in as well, having some of those capabilities, along with others on the market, to really understand your assets and then rank and stack them in terms of exposure: how close they are to your perimeter, where they may have API access to backend systems that are public-facing, or whatever.

Francis Gorman (20:43.687)
I think that's definitely going to be a focus area for organizations in the next couple of months and years. I know it's already a big piece, but I think it's going to really come to light that your vulnerabilities, and your ability to patch those vulnerabilities in a data-driven way, are going to be key to keeping your perimeter safe.
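
(A small illustration of what "data-driven" patching can mean in practice, tying back to Michael's earlier point that attackers often exploit medium-severity CVEs: cross-reference your own findings against CISA's Known Exploited Vulnerabilities catalog instead of sorting by CVSS alone. The list of findings below is hypothetical.)

```python
import requests  # pip install requests

# CISA's public Known Exploited Vulnerabilities (KEV) feed.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

# Hypothetical scanner output: CVE ID -> CVSS score.
our_findings = {
    "CVE-2021-44228": 10.0,  # Log4Shell
    "CVE-2019-0708": 9.8,    # BlueKeep
    "CVE-2020-1234": 5.4,    # placeholder medium-severity finding
}

kev = requests.get(KEV_URL, timeout=30).json()
exploited = {item["cveID"] for item in kev["vulnerabilities"]}

# Anything on the KEV list gets patched first, regardless of CVSS score.
for cve, score in sorted(our_findings.items(), key=lambda kv: -kv[1]):
    flag = "ACTIVELY EXPLOITED" if cve in exploited else "not on KEV"
    print(f"{cve}  CVSS {score:>4}  {flag}")
```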

Michael Freeman (21:05.742)
Yes, it's going to be key. When we sit back and go back to the fundamental problems of it, you look at it and go, what are the problems you really, truly have? If you can't see what you have, you can't protect it. That really hit home for me when I was at the Fortune 100 company.

I asked one of the teams, let me know how many websites we have, how many externally facing websites we have. And they came back to me after, honestly, about six weeks, using every tool they could, and found 2,300-plus websites that they knew about. So I said, I think there's more, because I remember stumbling on more at times just poking around. At the time there was another company around

that a friend of mine ran, so I reached out to him and said, can I use some of your tool sets, DNS tool sets, to help me find all the websites we had? We did, it took about a week, and we came back with over 18,000 websites that we didn't actually know about. A lot of them were spun up by marketing for certain campaigns they were doing, on old WordPress servers that were exposed to the internet and attached to our network. We just had no clue they existed.
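
(One freely available way to do that kind of external website discovery today, sketched below, is to mine certificate transparency logs via crt.sh. This is not the specific DNS tool set Michael describes, just an illustration of the idea; the domain is a placeholder.)

```python
import requests  # pip install requests

# Placeholder domain; substitute the organization's registered domains.
DOMAIN = "example.com"

# crt.sh exposes certificate transparency search results as JSON.
resp = requests.get(
    "https://crt.sh/",
    params={"q": f"%.{DOMAIN}", "output": "json"},
    timeout=60,
)
resp.raise_for_status()

# Collect every hostname that has ever appeared on a certificate for the domain.
hostnames = set()
for entry in resp.json():
    for name in entry.get("name_value", "").splitlines():
        hostnames.add(name.strip().lower())

print(f"{len(hostnames)} distinct hostnames seen in CT logs for {DOMAIN}")
for host in sorted(hostnames)[:25]:
    print(host)
```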

That was a big eye-opener for me, because in my past career we knew all of our assets, you know, we knew exactly where they were. So to have this view, it was eye-opening, and I went, you know, is this a really big problem that everybody has? And when I started speaking to friends in the industry, it was, yeah, we have no clue what assets we have. We take a look at what we think our crown jewels are, some of the technologies that support them, and we look at all the logs that we can get our hands on and

search through. Knowing that, I looked at it and went, you know, that's going to be a big problem for the industry as a whole, for everybody. And that's one of the key things we looked at when we were being approached to sell CTCI. We had other threat intel companies trying to acquire us. We had other consulting firms trying to acquire us. And we looked at it and said, you know, where are the real problems? Because we want to solve problems. We don't want, you know, to go do something for a couple of years where,

Michael Freeman (23:27.47)
you know, it's nice to do, but we really wanted to solve problems that go back to the fundamentals. And that was the key driver for us.

So like I said, going back to the basics is going to be key for everybody in the future. The more technologies and AI you try to throw at things, the more you're going to get lost in the weeds. I think if you're putting AI in the right key areas, with a very well-thought-out approach, so it gives you leverage instead of trying to solve the problems for you, that's really special.

Francis Gorman (24:05.127)
And I think there is that data saturation problem that humans face. You lift the hood on that new tool you've plugged in, and all of a sudden you've got millions and millions of vulnerabilities or issues, and you just don't know where to start. Like, where do I even size it? And that's where really understanding your environment becomes a key part of that picture. So maybe there's another startup there in the background, you know, around visualization of one's environment and then mapping back to the assets.

Michael Freeman (24:16.066)
Yes.

Michael Freeman (24:31.842)
No.

Francis Gorman (24:35.387)
And yeah, it's definitely another problem. There seem to be so many problems at the moment; this is just another one. Michael, I want to talk a little bit about the political landscape that we're in at the moment. You've obviously got a background in intelligence, in government, et cetera. We're seeing shifts to the far right, we're seeing technology playing a huge role in that in different areas. Have you got a perspective on

Francis Gorman (25:04.921)
modern elections and how susceptible people are to underlying themes that are being driven by actors with a different agenda than maybe the public interest?

Michael Freeman (25:13.614)
When it comes to election security, I look at it in two different ways. There is a technology aspect to things, and we were engaged quite heavily with

Michael Freeman (25:27.246)
counties and cities to help make sure that their elections were run securely. And I'd definitely say that there have been a lot of changes, especially here in the United States, about election security on that front, and they were very, very good, as best we could put it. Every agency or county or city that we worked with had taken really great steps to actually secure their election technologies and processes.

So it was good to see that happening, because in previous elections I definitely saw lots of issues where I'm like, okay, this is actually kind of bad, but there was a lot of good on that front. Now, when it comes to using methods of persuasion or, you know, elicitation, whatever the case may be, to drive the emotional factor of voters, that is definitely something that all countries do to each other.

You know, we've been doing it for a very long time. Other people have been trying to do it to us for an equally long time. Everybody does it. That's compounded now with social media. Social media has made it so simple and so cheap to do. In the past, you know, prior to social media, trying to influence the outcome of an election was boots on the ground, a lot of, you know, financial and other

types of resources to actually find the key components and factors, such as evolving the culture, to get a better outcome for you as a country influencing another country's elections. Now, using social media and technology in that way has made it much easier.

Michael Freeman (27:23.222)
That was definitely a big concern of mine, and I have seen how that has been evolving greatly, with the short attention spans that, you know, people have, and what I'd call the inability to do, you know, first and second order thinking, where they can actually see something and go, okay, let me think about this. The news says this. What is this news? How is this news article being written

to influence me? Looking at it and going, okay, well, I can see by the way they state this that it's driving certain reactions out of me, to think about it in a way that they want me to think. But let me look at other sources and let me analyze what other sources are saying: okay, five of these sources say this, this one source says that. And when you look at it and go, okay,

most of these articles are written the exact same way, almost saying the same thing verbatim, but this other article over here says maybe something new that I didn't know. Maybe it's including certain information about what happened that was left out elsewhere.

And people just don't take the time to do that, so it leaves them very susceptible to any type of influence. And human beings are easy to influence, in reality. So seeing AI, and not just AI, but using AI and using social media to really drive that

change is really a big factor. With the changes that I've seen, though, it's also not as easy for some groups to be influenced as it used to be in the past. Now that certain aspects of social media have changed, new types of social media have come out, new types of channels have come out, it's now to the point where a group that was very susceptible

Michael Freeman (29:26.798)
now has the ability to look at things a little bit more openly, analyze things a little bit better, and say, actually, no, this is not what I thought it was. I can see the second order effects of what this particular, you know, proposed political change could mean, how it would impact me, my children, things like that. I am seeing that happening more and more now,

because people have really, I think, gotten a little bit tired of social media telling them, this is how you should identify, this is how you should work and think. People have kind of been stepping back from that and going, I don't think so, not so much, because the outcomes for me are not what I expected them to be. So it is unique. It is something where it's easy to do, but it's also having a different effect longer term,

where it is pushing people to say, you know, I don't like people thinking for me, or, you know, an algorithm thinking for me. I'm going to look and start thinking for myself. So I am seeing a shift in that too. It's interesting to see this play out, because we've never really seen it play out like this before.

Francis Gorman (30:44.261)
Yeah, it is an interesting dynamic. I had Louise McCormick on a couple of weeks ago talking about ethical AI, and one of the things that she said that really struck home was: once you give your child a smartphone, you lose control of the type of person they're going to become. It gave me a stark image, you know, God, that's a powerful sentiment. And I think we're seeing some of that materialize in the real world. And now with

Michael Freeman (30:59.926)
Yes.

Francis Gorman (31:13.127)
with the power of AI, generative AI, and large language models on the back of that, and people using those to start doing a lot of the cognitive processing they would have had to do themselves beforehand, I do worry that we're making ourselves mentally more susceptible to, you know, being swayed in different directions. So it's definitely a watch space of mine, even in companies, that we weaken our human resilience by having an over-reliance on technologies

Francis Gorman (31:41.819)
to do some of the key thinking for us, which is a key part of our makeup and our decision making and our leadership process.

Michael Freeman (31:46.766)
Exactly. There was an article, a paper, just published, I want to say last week, examining just that. They tasked intelligence people, people who work in the intelligence space, with using AI for a few months and analyzed the quality of their work versus, you know,

what they would otherwise be doing. It was an evolution that everybody went through. You start using AI to help you, you know, speed up some of your tasks, help me refine this paper a little bit, to, okay, run this task for me so I don't have to do it. And what they found was that the more you used it (for most people, not everybody), the less critical thinking you were actually doing. And, you know, these are very highly trained, very skilled

intelligence analysts that were going through this process, and according to this paper they found out very quickly, you know, most of them were looking at it going, yeah, actually, I'm becoming way too over-reliant on letting AI do the critical thinking processes for me. When I was building the technology stack behind CTCI, that was something I was very cautious about, making sure that it's not doing the critical thinking for me. It's

augmenting my processes and giving me great leverage; that's what I was focusing on. I could see very simply, very easily, how it could have gone down that road where it's doing a lot of the critical thinking for me or my analysts, and that was just not something that I was willing to do. I always wanted to make sure the human was in the loop, that nothing could be published until validation was actually done and

verification of every step was actually completed correctly. And, you know, that was a driver of mine. Last year I had lunch with the CTO of the CIA, and that was a topic that came up. They were taking a look at, you know, how can we actually use AI to help the analysts, not make the decisions for them, but actually help them and

Michael Freeman (34:10.158)
challenge them. Just in passing, I said, here's an idea that I came up with for my tech at CTCI to help challenge the analyst: when you put together a report and say, these are the probabilities, this is what we're seeing, you can put that into a trained LLM that can actually challenge you and say, did you think about this? What is the second-order thinking if you make this recommendation?

So I went and built that, to see what it would be like, and it actually does that for me. Now, before we publish something at Armis, it goes through that process: okay, before you publish this report about this threat actor and these TTPs and stuff like that, are you sure, and have you validated, that this particular TTP is used this way, and

are you sure these detection methods actually work? And it actually goes through the process and says, some of the detection rules you've written may or may not work in this context. Have you thought about that? So it poses some unique challenges for us, to help us think better instead of thinking for us. And that's an area I definitely want to explore more because, like you said, it's going to be a problem if it does the critical thinking for us.
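
(As a rough illustration of the pattern Michael describes, not his actual implementation, here is a minimal sketch of a pre-publication "challenger" step using the OpenAI Python client. The model name, prompt wording, and report text are all placeholders.)

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholder draft; in practice this would be the analyst's full report.
draft_report = """
Threat actor X is exploiting CVE-2024-0000 via phishing.
Recommended detection: flag process creation of rundll32.exe with no arguments.
"""

# A system prompt that asks the model to challenge the analysis, not rewrite it.
challenger_prompt = (
    "You are a reviewer of threat intelligence reports. Do not rewrite the report. "
    "Instead, list questionable assumptions, second-order effects of the "
    "recommendations, and detection rules that may not work in every customer "
    "environment, each phrased as a question back to the analyst."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": challenger_prompt},
        {"role": "user", "content": draft_report},
    ],
)

# The analyst reads the challenges and must address them before publishing.
print(response.choices[0].message.content)
```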

Francis Gorman (35:35.975)
That's a really key insight that you've built into the tool, because, yeah, does this mitigation or this detection method work in your environment? We know it works in the environment that we validated it in, but does it work in your environment, with all your nuances and the customization that you've done? So that's really key. That's actually a brilliant example of, you know, computer says no: we're saying that this works,

Francis Gorman (36:04.881)
but have we validated that it works? Which is a really key part of it. Look, Michael, it's been a pleasure to talk to you. I think that flew; we're at 36 minutes or so already. So really nice to have you on, and hopefully we can catch up in the future when you've seen a bit more of the Armis capabilities bedding in and what the next new tool is going to look like.

Michael Freeman (36:29.752)
Yeah, thank you for having me. It's been great.

Francis Gorman (36:32.305)
Thanks a million.

