The Security Circle
An IFPOD production for IFPO: the very first security podcast, called Security Circle. IFPO is the International Foundation for Protection Officers, an international security membership body that supports frontline security professionals with learning and development, and mental health and wellbeing initiatives.
EP 168 'Why Is Security by Design Still an Afterthought?' with Pablo Breuer, Ph.D., Former Director, US Special Operations Command
🎙️ Why Is Security by Design Still an Afterthought?
The Security Circle Podcast with Pablo Breuer
Security leaders have been talking about “security by design” for years. The principle is simple: build security into systems, technology, and processes from the very beginning — not bolt it on once the damage is already done.
So why does it still so often arrive too late?
In this episode of The Security Circle Podcast, host Yolanda “Yoyo” Hamblen is joined by cybersecurity strategist and former U.S. Navy Information Warfare officer Pablo Breuer for a candid conversation about the gap between security theory and real-world implementation.
Drawing on experience across military information warfare, cyber defence, and enterprise security, Pablo explores why organisations continue to treat security as a compliance exercise rather than a design principle. From technology innovation racing ahead of governance, to the growing threat of disinformation and influence operations, the discussion highlights how the modern threat landscape demands a far more integrated approach to security thinking.
Together they explore:
- Why security by design often struggles to gain traction inside organisations
- The cultural and leadership barriers that keep security on the sidelines
- How influence operations and information warfare are changing the security landscape
- Why protecting systems is no longer enough — people and perception are now part of the attack surface
- What it really takes to embed security into innovation, strategy, and decision-making
This episode challenges a difficult question for every organisation:
If we all agree security should be built in from the start…
Why do we still treat it as an afterthought?
https://www.linkedin.com/in/pablobreuer/
Security Circle ⭕️ is an IFPOD production for IFPO, the International Foundation for Protection Officers.
If you enjoy the Security Circle podcast, please like, share and comment, or even better, leave us a fab review. We can be found on all podcast platforms. Be sure to subscribe. The Security Circle: every Thursday. We love Thursdays.
Yoyo: Hi, I'm Yolanda, and welcome to the Security Circle Podcast, produced in association with IFPO, the International Foundation for Protection Officers. This podcast is all about connection, bringing you closer to the greatest minds, boldest thinkers, trailblazers, and change makers across the security industry. Whether you are here to grow your network, spark new ideas, or simply feel more connected to the world of protection and risk, you are in the right place, wherever you are listening from. Thank you for being a part of the Security Circle journey.
Yoyo: Pablo... Breuer. Is that how I say your name? It's a rather unusual way of saying it, isn't it?
Pablo: It is. Thanks so much for having me, Yoyo.
Yoyo: Listen, it's a real pleasure. Listeners, today on the Security Circle Podcast we're stepping directly into the front lines of cyber defense, information warfare, and the fight for truth itself. Brace yourself. My guest is Dr. Pablo Breuer, a 22-year US Navy veteran and former director within US Special Operations Command. Some of us won't know what that means, but we'll soon find out today. He's one of the rare security leaders who genuinely operates at the intersection of cyber, national security, and cognitive resilience; a two-time DEF CON Black Badge winner; a senior advisor to major US defense initiatives; and a co-author of some of the most influential frameworks in counter-disinformation today. He's also a board member at OASIS Open, shaping global open source standards, and a researcher whose work bridges the technical, the psychological, and the geopolitical. Pablo brings a perspective most people in cybersecurity never get close to, and today he's bringing it straight to the Security Circle audience. This is gonna stretch your thinking, it's gonna challenge assumptions, and it's going to show why cyber leaders can no longer ignore the information battles being fought beyond the screen. Let's dig in deeper. Pablo, what do you reckon to that introduction? That's you.
Pablo: How am I gonna live up to that?
Yoyo: Should I go and get the right guy?
Pablo: I was a rotten kid who never grew up and just found a way to make a living doing it. That's it.
Yoyo: I think there's something quite extraordinary about that, but probably something quite truthful. They say, don't they, that what you do for a career should never feel like a job. But there must have been times; you can't go through all that and not realize at some point that it's a job. But how did you get into the Navy?
Pablo: Six months before I joined the Navy, I had never thought about entering the military at all. I was originally born in Argentina, and we immigrated to the States with my folks when I was a child. Argentina had mandatory military service, so my parents were horrified when I told them I was joining the Navy. Then I showed them a little bit about the US Naval Academy, and that eventually I'd be going there to get a degree. They looked at it, they talked to some people, and all of a sudden they were all for it, right up until the point where my mother realized I was also gonna be deploying to places like East Timor and Afghanistan. Then, you know, not so supportive of that, but she liked the whole degree bit.
Yoyo: But that's hard, being a family member and a loved one of someone in a war zone, Pablo.
Pablo: It is. It is, but it's also not what you think it is. It's not like what they show you in the movies. There are long periods of doing things that are really, really impactful that have nothing to do with violence. And I will tell you that some of my absolute favorite times in my career were solving some of the issues that we solved in Afghanistan. It was everything from building national telecommunications to building schoolhouses. It's really fun seeing the impact you can have on greater society.
Yoyo: Yeah, no, that's fair, to be honest with you. But when did you decide that, you know, you were gonna be the right fit for a more specialist type of role?
Pablo: That actually came early on. I was one of those kids that got themselves into a little bit of trouble earlier on with computers. You know, law enforcement at one point had shown up and said, you need to find an outlet. So I spent the first year of my Navy career enlisted and, messing around one day, found the HR database for the base and reported it and said, you probably should fix this. Then I went off to the Naval Academy and got my computer science degree. My first posting after my degree was on a ship, the USS Blue Ridge out of Japan, which was the flag command ship; it's one of only two ships of its type in the Navy. So I show up with my shiny new computer science degree and the captain goes, right, you're gonna own the two propulsion boilers; you're gonna work in engineering. Largest afloat network anywhere in the world, and that's what he had me doing. Unfortunately for the captain, the admiral fired two IT officers in a row and said, if I fire a third one, Captain, you're going with him. So he pulled me out of the bowels of the ship and sent me to go work the IT for the admiral. I did pretty well there, and the admiral and his chief of staff said, listen, there's a new community that nobody knows about. It's gonna be the first of its kind in DOD; they're gonna do computer warfare. This was before most people used the term cyber. They were gonna select in September of 2001, and obviously that got delayed for a bit, but I did end up getting picked up for what was called the Information Professional community in mid-2002. I showed up to NSA as, you know, one of the plank owners in that community, and was put right to work in the red team, where I got to lie, cheat, and steal to US government officials for a living. It was a fantastic job.
I got to break all of the rules and then go back afterwards and tell people how to solve it. That meant I didn't have to feel bad about the bad things I was doing to good people.
Yoyo: So you've kind of come full circle, in the sense of where you said you started out. But I'd love to get into the mindset of that young Pablo, just before he got into trouble with the police. What was your motivation behind what you were doing, and how does that help us to understand the motivation of anybody that's sitting, you know, in a threat actor position? What did you learn about threat actors through your own experiences?
Pablo: So, two things. What was my initial motivation? My initial motivation was rather simple. We were, you know, lower middle class, and I liked video games, and they were expensive. So, like a lot of kids my age, I got into bulletin board systems and, frankly, pirating video games. Not necessarily legal, but fairly common at that point. That's really how I got into it, and it was really just about solving problems. I'm gonna show my age and say that I was a teenager at a time when calling several miles in one direction was free, but calling two blocks in another direction might have been long distance, and long distance was expensive. So, after getting grounded a couple of times, I found a way to make calls without having to pay for long distance. And really what it came down to was, everything was a puzzle. People make assumptions, so you have to go through and test those assumptions. Some of those assumptions are valid and some of them are not, and that's usually where you find out where the issues are. But it's almost never a technology problem. It's almost always a people problem. And that's the part that we still miss in the security community.
Yoyo: I love it. I hired a woman once who told me in her job interview that she re-engineered a computer game so that she could win, and I just said, oh, you are hired. That's exactly how you want analysts to think in a SOC. You want them to start going beyond, you know, the boundaries of what the computer says no to, and problem-solve. So how does your relationship with the threat actor relate now? Is it easy for you to think how they think, in terms of the problems they're trying to solve, which ultimately work against us and society?
Pablo: Well, sure. It becomes easier because you practice it; anything that you practice doing becomes easier. The other thing that actually helped me, and this is one of the few times that I will admit this helped me, is that I went and got a PhD at one point, where you learn a whole lot about biases. Really learning about biases, and how to find your own biases, is really helpful, because the computers say no when you tell them to say no, and they say yes when you tell them to say yes, but they don't make decisions on their own. So you have to get in the mind of the developer and the programmer. Adversaries are a bit like water: they're gonna take the path of least resistance. They don't try to be slick. They just try to accomplish whatever the goal is, which is to either, you know, gain entrance to a system, or convince somebody to click on something, or phish for money or phish for information. And they're gonna do whatever's easiest, fastest, and cheapest to do that. So you really have to start out with the very low bars, and then you work up from there.
Yoyo: So clearly, to get into cyber command with the NSA, you must have had some very, very well-grounded and special skills. At the time you got that job and you started acting on their behalf, at what point were you thinking, holy shit, this is real?
Pablo: I think most security professionals go through this lifecycle. Not all, but a good portion of us start out going, I wanna hack for a living. I want to do things that would be illegal otherwise and get paid lots of money for it. And then you get pretty good at it after a while. And then for some of us this switch clicks, and it usually has to do with either being of a certain age, or getting married, or having kids. And it goes from 'I'm really good at this' to 'this really should be harder for anyone to do.' And I live here, and my family lives here, and my parents live here, and oh my God, I've gotta fix all of the things. It's not enough to show that they're broken; I really have to fix all of the things. So you kind of go into this panic where you live in, frankly, a terrified mode for a number of years. And then after a while you realize that the world is not gonna end, and you really just have to make things good enough, right? Yes, somebody can break a window and break into your house. They don't do it often, right? So we still have windows. None of us live in safes. Our networks and our information systems are the same. They have to be good enough to keep the honest people honest, and to keep honest mistakes from really, you know, setting the whole world ablaze. But they still have to be useful.
Yoyo: I do sleep better knowing and believing that the critical infrastructure in both of our nations is being tested by people with your skillset all the time, for the greater good. We want to know that the nuclear power facilities in our countries are having their cyber networks, their information networks, penetrated deliberately by red teams to see where vulnerabilities sit. Oh, please tell me that is happening so I can feel like I can sleep better.
Pablo: I'm not gonna comment on nuclear power. What I will tell you is that the president of the United States, Barack Obama, released a presidential policy directive called PPD-21, where he listed out 16 critical infrastructure sectors. It is not secret; you can go and look it up. Every one of those gets tested on a regular basis by all sorts of professionals inside the intelligence community, inside DOD, inside other branches of government, and nuclear power generation is its very own sector. How's that?
Yoyo: It's a fantastic answer, and it does answer well and keeps you on the straight and narrow. We don't wanna ask you anything that's gonna put you in a compromising position, that's for sure. Not that you would ever succumb to my charms and ways. Anyway, the other question I was gonna ask you: I dunno if you're aware, but the head of our MI6 basically said very recently, in her very first public address to our nation, that we are currently living in a state between peace and war. There's an awful lot of activity going on in cyberspace in terms of established threat actor groups and established threat actor protocols, and she's probably pushing our government for more money. What can you talk to in terms of how you've seen the trend of cyber becoming the new frontier?
Pablo: Yeah. Let me start out by saying that this is not new. In 1998, there was a group of very famous American hackers who went by the name L0pht Heavy Industries, spelled with an L and a zero. Mudge, Peiter Zatko, who was one of the hackers, famously said, I can shut down the internet in a matter of hours. I'm abusing the quote a little bit, but you can still look it up. That was really the first instance that I'm aware of where hackers publicly told the governments of the world, listen, you've got a real problem here. And since 1998, we've just gotten more and more machinery, more and more computers, faster networks. Now everything is smart, not just in industry but in our homes. And when we crank out things quickly, we don't always do it safely. Lots of companies want to be the first to market, but being the first to market comes at a cost: you get closer to a minimum viable product than to a finished product. Separately from that, the bar for hacking and for abusing systems has gotten lower. It used to be the case that if you wanted to do something on a computer, you probably had to write your own software for it, which meant you had to know how to program and you had to know something about the computer internals. You don't have to do those things anymore. I remember in the early two thousands when Metasploit came out. Metasploit is a well-known open source hacking framework, and a quote from somebody unnamed at the NSA at the time was, this is like the ice cream man handing out dynamite to kids. There were legitimately things in there that were, I'm sure, for some governments, state secrets as to how certain exploits worked. And now we've gotten to a point where you can literally just point and click and use a graphical user interface.
So there are more targets, there are more people capable of using the tools, and there are more capabilities out there. Separately from that, you've got what the US military now calls either gray zone conflict or confrontation short of armed conflict, and this is what your leader was referring to. There are governments that are using cyber means to attack computer networks in ways that fundamentally could only have been done kinetically in the past. When you look at international law, and you look at things like the Westphalian model of the nation-state, which is what most of our international laws are based on, one of the principles there is that countries will not interfere with the internal matters of another country, because that leads to kinetic conflict and war. And if you take a look at the attacking of voting systems, the attacking of power, the attacking of water: these are basic things that people need for functioning democracies and functioning societies, and they're being attacked over the internet, without a formal declaration of war, by what the Geneva Convention would consider illegal combatants, because they're not operating in a uniform, right? They may or may not be operating formally under a flag. They may or may not be actual government agents. They may be allowed by governments to operate within certain rules. And I won't point the finger at Russia, but Russia. So it makes things very difficult for the historical way that we've reacted to these things. So yes, we're definitely in a state of confrontation short of armed conflict.
Yoyo: Think back to the original Top Gun movie, when the threat actor flew the MiGs. It was an unrecognizable threat actor group, wasn't it? They deliberately kept it anonymous; they didn't make it look Russian or Chinese, and that was a really smart move. When you think how we've evolved in terms of our relationship with both of those countries, they could quite easily have, you know, dated that movie, and instead it's still incredibly current today, except the technology's moved on somewhat. Talking about technology moving on, how have you seen the technology moving on, and what made you go wow over the years?
Pablo: Oh my goodness. I do both things on an almost daily basis. I go, wow, that's really amazing, and then I look at the security and go, really? So I'll give an example, and I'm picking on this not because it's particularly bad. It's not; it's a wondrous technology. But we're making simple mistakes. Artificial intelligence: everybody's talking about artificial intelligence and agentic AI, and I think, wow, some of the stuff that it does is really amazing. I mean, it's mind-blowing. It is so mind-blowing that the computer scientists who develop these systems don't fully understand how these systems work. It's really amazing.
Yoyo: Yeah.
Pablo: And then you look at some of the underlying technologies, like MCP, which is used for agents to be able to talk to other agents and collaborate. You see the initial specification, the draft specification, come out, and there's absolutely no security built into it. And you go, these are the same mistakes that we had with TCP/IP in the mid-1990s, and we saw how that went. Have we not learned anything since?
Yoyo: For the non-cyber professionals listening, TCP/IP is the handshake. It's the: hello, can I send you this? Yes, please send me this; I'll send it back to you. It's the handshake between devices. And when you think that back in the nineties we just used to go online without any type of protection at all; if this was a physical kind of interaction, the whole world would've got very pregnant. I just think we've got to start thinking safety, safety, safety. But the problem is, as you said in your own words, there's a race to who can get there first. You've only gotta look at AOL to see how the race to get there and the race for longevity is so critical. And who really uses Yahoo as a search engine anymore, you know?
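Yoyo's "hello, can I send you this?" description can be made concrete with a minimal, self-contained Python sketch. The TCP three-way handshake (SYN, SYN-ACK, ACK) happens inside the operating system when `connect()` meets `accept()`, before any application data flows; the port, messages, and function name here are illustrative, not from the episode.

```python
import socket
import threading

def run_server(ready: threading.Event, port_box: list) -> None:
    # Listen on an ephemeral localhost port and accept one connection.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port_box.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()          # handshake completed by the OS here
    data = conn.recv(1024)          # "Can I send you this?"
    conn.sendall(b"ACK: " + data)   # "Yes, I'll send it back to you."
    conn.close()
    srv.close()

ready, port_box = threading.Event(), []
server = threading.Thread(target=run_server, args=(ready, port_box))
server.start()
ready.wait()

# connect() triggers SYN -> SYN-ACK -> ACK under the hood,
# before a single byte of application data is exchanged.
cli = socket.create_connection(("127.0.0.1", port_box[0]))
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
server.join()
print(reply.decode())  # ACK: hello
```

The point of the sketch is that the handshake itself carries no authentication or encryption: anything layered on top (TLS, for instance) has to be added deliberately, which is exactly the "no security built in" problem Pablo raises about early protocols.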
Pablo: Yeah. And again, those were early on, and so, you know, we make mistakes. I always laugh at this: the first online shop was actually Victoria's Secret, believe it or not, and it was announced during a Super Bowl, a US Super Bowl. They went on and on about how the protocol, meaning how your machine talked to their machine, was encrypted. And it was encrypted. It was the first time; it was amazing. And they forgot to encrypt the database on the back end that actually stored your credit card afterwards. So later on they had an incident, and you go, well, yes, it was encrypted in transit, but on the back end it was not. Or, before I showed up to NSA, so right around 2001, 2002, I lived in one of the very few apartments in Texas that had high-speed internet at the time. It was a big draw. I'm sitting at home one day and I'm seeing this weird traffic go across, because it was shared bandwidth, and I mention it to my neighbor. He goes, oh, that's me. I calibrate the oil refinery, you know, a few miles down the road. And I go, you know, I can see everything; none of this is encrypted. He's like, why would it be encrypted? I said, could something bad happen if you didn't know what you were doing? He's like, oh yeah, you could completely blow up the oil refinery. I mean, you would make a smoking hole, you know, several kilometers wide. I go, well, you may wanna encrypt it, because I can see it all. We learn these lessons. Sometimes we remember them and we implement them, and sometimes we don't. What I will say is, and this is gonna get me in trouble with computer scientists everywhere, I think we should teach computer science more like we teach engineering. And here's what I mean by that.
Yoyo: Tell me.
Pablo: So engineers spend an inordinate amount of time planning for failure, right? Think of bridges. Everybody knows the Tacoma Narrows Bridge video; everybody saw it in high school, right? So you go, listen, I need to build a pump that runs at 5,000 rotations per minute, and the engineer will sit there and build it and go, listen, it will run just fine at 5,000 RPM for a lifecycle of X. If you run it at 7,500 RPM, with 90% certainty it will fail in three minutes. When we teach computer science, we say, well, make sure that you check input; make sure that you do bounds checking, because otherwise your program can crash. So you write some code, your professor throws some bad input at it, your program crashes, and eventually you learn to check. But crashing is really the best thing that can happen in most cases. The worst that can happen is somebody hijacks the execution and now can do malicious things. We don't teach computer scientists this, and we don't teach them how to plan gracefully for failure. We tell them to plan to prevent failure, to make failure as hard as possible, but not how to fail gracefully and safely. And if you think about it, we've got self-driving cars and we've got nuclear power plants, and we've got things where, to borrow a turn of phrase, bits and bytes meet flesh and blood. We probably should talk about how to fail safely.
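Pablo's bounds-checking point can be sketched in a few lines of Python. The pump numbers echo his 5,000 versus 7,500 RPM example; the function names and limits are hypothetical, invented for illustration. The idea is that validated input fails loudly and predictably at the boundary, instead of passing a dangerous value downstream.

```python
MAX_SAFE_RPM = 5_000  # the engineer's stated design envelope (illustrative)

def set_pump_rpm(rpm):
    """Validate input and fail gracefully, rather than trusting the caller."""
    if not isinstance(rpm, int) or isinstance(rpm, bool):
        # Reject wrong types outright instead of letting them propagate.
        raise TypeError(f"rpm must be an integer, got {type(rpm).__name__}")
    if not 0 <= rpm <= MAX_SAFE_RPM:
        # Refuse loudly at the boundary; never run outside the envelope.
        raise ValueError(f"rpm must be between 0 and {MAX_SAFE_RPM}, got {rpm}")
    return {"rpm": rpm}

print(set_pump_rpm(4_500))      # accepted: within the design envelope
try:
    set_pump_rpm(7_500)         # rejected: over the safe limit
except ValueError as err:
    print(f"refused: {err}")
```

The contrast with the classroom version is that the failure mode is chosen deliberately: a clear, catchable exception at a defined limit, rather than a crash (or worse, hijacked execution) somewhere deeper in the system.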
Yoyo: Now, on top of failing safely, I'm a huge advocate, and I'm known for banging on about this in the workplaces where I get to speak, of security by default. You know, there are far too many processes open right now. Think about a simple ticketing system for requesting access or a firewall reconfiguration, something like that. Why? If we'd secured it by default, you wouldn't be allowed to bypass the processes, so you couldn't set something up that isn't set up to policy. For those people who might be a bit confused: we've obviously got technology, people and process within the cyberspace, and it's not dissimilar to the physical space, where we have people, property and assets, right? Properties also tend to be designed with a lack of security in mind, although architects have got better now at designing buildings with security in mind. But it's got to be more than that. Why is it, and I don't expect you to have the world's answer, Pablo, but why? I wouldn't have a job if architects learned to design with security by default in mind, or if computer scientists designed and learned with security by default in mind, or security by design even. I'm quite successful in my job because security by default doesn't exist. But it's such a quick fix.
Pablo: I can actually answer this one.
Yoyo: Do it, do it.
Pablo: And you gave me the answer. The wonderful thing about architecture is that buildings don't change all that often; technology does. And technology and buildings have to be fit for purpose, right? So if I go to security by default, which many practitioners would call deny by default and allow by exception, I want you to imagine the number of emails that you get, and imagine if your email program denied every email coming in from someone that had not previously interacted with you.
Yoyo: It's too restrictive.
Pablo: It's too restrictive. It has to be fit for purpose. So you really have to do a bit of threat modeling and go, what are the worst things that could happen?
Yoyo: Mm-hmm.
Pablo: How can I implement controls that are safe yet effective and don't impede progress? So, you know, I'm gonna beat up on my fellow security practitioners who have allowed themselves to become the Department of No, right? A lot of CISOs, a lot of help desk analysts, they think that the first job is to pick up the phone and go, have you turned it off and on again? Okay, good. The next answer is no, and the next five answers after that are no. And that doesn't work, right? Most corporations do not make their money from cybersecurity; they make their money despite cybersecurity. And that's a problem. If you're working in a business and you're working in cybersecurity, your job is to enable the mission. I always like to use the example of racing brakes. Brakes on a race car are not there to stop the car from racing. Brakes are on a race car to enable the car to take the curve as quickly and as safely as possible. That is what cybersecurity is supposed to do. So, you know, maybe you don't default-deny an email from somebody you've never talked with before. But maybe what you do is label it and say, you've not spoken with this person before, be aware. Maybe you don't allow it to render live content. Maybe you only allow it to provide text, and you don't download pictures. Maybe you put up a popup box before you download the attachments, or you don't allow the attachments. There are lots of gradations between 'yes, always' and 'no, never' that we have to be able to do. And let's face it, the cybersecurity personnel are outnumbered by our customers, our users, a hundred to one at a minimum. So if they see you as an impediment to progress, they're just gonna go around you.
Yoyo: Let me challenge that notion just for a second, because if there are no brakes on a race car, and I'm a huge Formula One fan, the car's not gonna get past the finishing line. So a lot of cybersecurity and security professionals, you know, the no department, would ultimately never be recognized for the fact that had they not been there, there may not even be a business for that person to operate in and put brakes on the car.
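The graduated email controls Pablo lists (label unknown senders, strip live content, hold attachments) can be sketched as a policy function that returns handling steps rather than a binary allow/deny. This is a hypothetical sketch; the addresses, step wording, and function name are invented for illustration.

```python
def triage_email(sender, known_senders, has_attachment):
    """Return graduated handling steps instead of a flat allow or deny."""
    steps = ["deliver"]
    if sender not in known_senders:
        # Unknown sender: still deliver, but with progressively tighter controls.
        steps.append("label: you have not spoken with this sender before")
        steps.append("render as plain text; do not fetch remote images")
        if has_attachment:
            steps.append("quarantine attachment until the user confirms")
    return steps

known = {"alice@example.com"}
print(triage_email("alice@example.com", known, has_attachment=True))
print(triage_email("mallory@example.net", known, has_attachment=True))
```

A known sender gets plain delivery; an unknown sender with an attachment accumulates all three mitigations. The design choice is the one Pablo argues for: the mail still arrives, so the control enables the mission instead of blocking it.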
Pablo: Sure, and we can have that discussion. When you go to most companies, they have a CISO because they're legally mandated to have one. But then you ask, what's the average lifespan of a CISO in a corporation? You find out it's about 18 months. So the running joke, and it's not very funny, is: hey, we just hired a CISO; find me the next one I'm gonna hire after I fire this one. So the CISO's there because I'm legally mandated, and as soon as something goes wrong, they get fired, even if they were doing everything right. We've gotta get past that. We've gotta get to the point where the CISO is seen as an enabler, where instead of saying no, they go, okay, tell me what you're trying to accomplish, and let me give you a couple of options for how to accomplish that safely.
Yoyo: Now, you've said that with a lot of words; I just say 'yes, and'. In the words of Ariana Grande, and it was a great song, she says, 'yes, and', and we're gonna have to do it with this in mind. I think we've gotten to that stage, certainly where I frequent work-wise: we tend to think of ourselves as enablers rather than bolt-on security tools. We work with the teams, and I think it's about saying, let's find a way to yes, rather than ever saying the no word, just 'cause the PR is so bad for us, just like you said.
Pablo: A hundred percent. The other thing I would say is, you know, security by design: I'm a huge fan of that. And there are two key indicators that, if you see both of them, you have a fantastic security culture. The first one is, if your company is developing a product, having somebody from cybersecurity in the early strategy and design meetings to go, are we missing anything? If instead it's, right, we're ready to go to market, tell us how to make it safe, that's not a good security culture. You need to be in there early. The second one is, if your users feel comfortable picking up the phone and calling the help desk going, I think I clicked on something bad, or, I wasn't really thinking and I put this USB drive in. If they feel fine self-reporting, going, I made a mistake, and knowing that they're not gonna get in trouble for being human, then you have a good security culture.
YoyoOn your point about telling us how to make it safe, I'm gonna give you a real-life, anonymized example, because I need to preserve the safety and dignity of this individual. A pal told me recently that they've got this agentic AI solution within their organization, because policy documents can be stored in a number of different locations, and the agentic AI is there to find all of the relevant documents and provide summaries, so that this person doesn't have to waste hours trying to source their documents. And for somebody in any type of policy role, this is invaluable. However, he realized the agentic solution was providing him information from highly restricted documents, documents that he shouldn't have the clearance to be able to view and access. But he was being given the content through the agent. And he, being in a security role, escalated to the business and said, hi, um, yeah, I don't think I should be having access to this data. And they said, oh, no, no, no, you're right. But this is to prove your point: we've got to now spend more time making sure that users, all types of users, both from the engineering side, the dev side, and through to the user experience, are testing products before they are fully utilized. But that's often bypassed, as training usually is, in the interest of speed.
PabloIt is. And I think the other thing that's missing is education, right? The average user doesn't need to know bits and bytes. If you're a CISO and you're going to the C-suite or leaders of your company and you're talking about bits and bytes and CVEs and format string vulnerabilities and ROP chains, you should be fired. You're there to translate all of that technical jargon into business speak, so that the company can understand it.
YoyoYeah.
PabloUm, so the biggest problem with, let's say, artificial intelligence is really its marketing. It's the fact that we call it artificial intelligence. People believe that it thinks, and it doesn't think. It's math, and it simulates results that would come from thinking, but it doesn't think on its own. And so this is a perfect example of, listen, if you put an agent on your network and you allow it to scour your network for all of the documents without placing any limitations on it, recognize it's gonna have all of the documents. So, boss, I'm gonna be able to search to figure out what you made last year, as opposed to what I made last year. That's probably not gonna play out well. And so giving those kinds of simple examples of where things can go wrong for the business really goes a long way. And the smart companies, again, will bring on the cybersecurity folks early and go, listen, we're thinking about this, what could go wrong? And we'll start listing out all the things that can go wrong. And then it's really up to the business to decide, you know, are those risks acceptable or unacceptable? If they're acceptable, great, roll on. If they're unacceptable, then, hey, security guy or security gal, you just pointed me to a problem, now find me a solution, find me how to mitigate that risk down to an acceptable level. Nothing is zero risk.
YoyoIt's a bit like doing a Power BI script, in a sense, if we're gonna go quite simplistically for those folk that aren't sitting in the cyberspace. You know, if you've got a number of different multicolored circles and you only want the purple circles, then you literally have to say purple circles only. Not yellow, not blue, not green. And I think we've got to get used to thinking like this when we're designing agentic AI to do things for us. You know, i.e., would it have been too complicated to look at the roles and privilege levels of individuals with access to that AI solution, and then mirror the access levels to the restrictions of the documents that are being stored? Of course, that's complicated. Who wants to do all that hard work?
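The access-mirroring idea Yoyo describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's real API: the clearance names, the document fields, and the `visible_documents` helper are all assumptions for the sake of the example. The principle is the one from the anecdote: filter the retrieval layer by the requesting user's clearance before the agent ever sees a document.

```python
# Illustrative sketch: mirror user clearance onto document access
# before an agentic AI can retrieve and summarize anything.
# All names and levels here are made up for the example.

CLEARANCE_LEVELS = {
    "public": 0,
    "internal": 1,
    "restricted": 2,
    "highly_restricted": 3,
}

def visible_documents(documents, user_clearance):
    """Return only the documents the user is cleared to see."""
    user_level = CLEARANCE_LEVELS[user_clearance]
    return [
        doc for doc in documents
        if CLEARANCE_LEVELS[doc["classification"]] <= user_level
    ]

docs = [
    {"title": "Travel policy", "classification": "internal"},
    {"title": "Board pay review", "classification": "highly_restricted"},
]

# An 'internal'-cleared user never sees the restricted document,
# so the agent has nothing sensitive to summarize.
print([d["title"] for d in visible_documents(docs, "internal")])
```

In a real deployment a check like this would sit in the retrieval step of the agent pipeline, so restricted content is never pulled into the model's context in the first place, rather than being redacted after the fact.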
PabloSo, yeah, it's a bad example, but I like to, you know, explain that AIs are a bit like a college intern from a very good university, or maybe the absent-minded professor. They have tremendous education and no common sense. Yes.
YoyoOh my God. It's
Pablolike children. It is, it's like children. And so it's a tremendously powerful tool, but realize, if you say, accomplish this by any means possible, it's gonna accomplish it by any means possible, including some things that we would find horrifying, just because you said by any means possible. So it literally does what you say. And so really, think about things that could go wrong, right? If you're a parent, you've had a young child, right? You've played this game where you've given what you thought were explicit directions, and the child finds a loophole to do exactly what they wanted anyway. It's a bit like that, but without the malice.
YoyoWell, if it's designed by a human, it's flawed by design anyway.
PabloA hundred percent. It's a hundred percent. But again, it doesn't mean it's not tremendously useful.
YoyoYes, I know. I agree with that. But listen, I am gonna forecast into 2026, okay? 'Cause we're recording this and it's still 2025, and we're likely to be into Q1 when this goes live. And I'm gonna make a forecast that by the end of 2026, you know, like they do the top 40 singles countdown at the end of the year, I think at the end of 2026 we're gonna have a top 40 AI screw-ups countdown. Because what we're gonna see now is the emergence of all of the what-went-wrongs. You know, I do genuinely feel like, and I use this analogy, I love using analogies and folk often say to me, oh, I love all your analogies, Yoyo, but I liken the uptake of agentic AI to saying to a whole load of people, hey, listen, have a car. Yeah. But don't worry about getting your driving license, you'll learn as you go along. And we'll take your registration number plates off you as well, so, you know, if you make a mistake, no one's gonna know it's you. I feel like that's what we've done.
PabloWell, you know, uh, for better or worse, you're an optimist. No, no, no. I'm, I, well, I don't know if this makes me an optimist or a pessimist.
YoyoOh.
PabloWhen you look at the OWASP Top 10, right? So OWASP collects a list of the top 10 vulnerabilities in web applications, and they've been doing this for 15 or 20 years, and I think it's only changed like once or twice. I'm gonna guess you could probably take a look at the OWASP Top 10 and go, right, how do we do that in AI? Those are probably gonna be our problems. Yeah. So we know what the problems are, we know what they're likely to be. Um, so let's see if we can address it before the problem materializes.
YoyoYeah, but there's such a huge element in this relationship, and it's the human, isn't it? The human is the risk. And I mean, do you just look at it from an insider threat perspective as well, Pablo, in terms of a human being able to get access to information they shouldn't even have access to in a business? You know, what if it doesn't land in the hands of a friendly security colleague, but it lands in the hands of somebody who's, you know, just been told they can't have a pay rise, and they are disgruntled beyond belief and thinking of walking out? We've seen the stories in the news about how employees carry out actions after they've left, in terms of being massively disgruntled. I don't think that's great.
PabloWell, let's look at both sides of this. First of all, if you're a company and your employees are all disgruntled, then you probably should be treating your employees better. We'll start with that one. But, you know, invariably, bad things happen to good people, right? Markets go up, markets go down. Companies grow, sometimes they have to shrink, and, you know, sometimes people just aren't a good culture fit, or whatever, and you have to let them go. If you're letting somebody go, maybe when you call them in to let them know that news, maybe you remove their accounts while they're in the office, right? I mean, there are some best practices there. If you're letting somebody know that they're gonna be unemployed, right, that you're breaking off the relationship, they're gonna be upset, right? There are gonna be some hurt feelings. Maybe we should recognize human nature and take some proactive action.
YoyoI watched Scrooged last night, with Bill Murray, and when he let one of the board of directors go, he told his PA, and she got on the phone and went, code nine. It was just really funny.
PabloThe way you don't want to do it is like the movie John Wick, where he goes, listen, uh, we're gonna give you 24 hours, and in 24 hours we're sending every bounty hunter after you. It's like, well, that's 24 hours for me to do a lot of damage. Yeah, don't do that.
YoyoYeah, but what a situation to be in. I feel for the guy.
PabloIt's a bit like working in cybersecurity, you know? Everybody's gunning for you, mostly your own users.
YoyoYeah. Yes. I loved what you said earlier though about the relationship between computer science and engineering, because you said in our pre-check, computer science is actually fairly new, isn't it? Compared to a lot of the engineering disciplines that are out there and including the qualifications. If you could wave a magic wand, what would you do to change things and bring computer science education up to scratch?
PabloI would do a couple of things, and these are all gonna be controversial. The first one is, I would spend a lot more time teaching computer scientists how to fail gracefully. The second one is, I would go back to the days where computer scientists learned a lot about how the computers actually worked. How arithmetic logic units worked, you know, how assembly programming worked, all of the underlying things. Because when you use a modern language like Python, a lot of those things are abstracted from you, and so you don't really understand how they work, and that leads to bad assumptions, and that leads to things breaking later. Uh, the third thing I would do is, I would have licensing for professional developers. Maybe not in every field, but certainly in certain fields. If you are developing the code for nuclear reactors or medical devices or self-driving cars, where people's lives are literally in the hands of your software, you should be formally licensed, uh, in some way. Engineers have PE licenses, professional engineering licenses, and so there should be some measure of that. You know, not to make light of it, but if you're developing a video game, or maybe software for a smart TV, the worst-case scenario is not as bad as if you're developing software for somebody's pacemaker or insulin pump.
YoyoI love it. In fact, I was having a conversation recently, and it will all come out in the podcast in due time, but there are very early conversations, and we're a long, long way away from a final output, but very, very early conversations around making sure that the law stands up to AI being used maliciously, a bit like the, you know, Malicious Communications Act, for example. We know that there's something quite similar in different countries. But if you are using AI to cause harm, whether it's, you know, deep fakery, fraud, and it's a tool to assist you in crime. And I like the fact that they're having those conversations early. At the moment, like you said, not only are there no regulations around how AI should be designed, you're right: if it's for health, if it's for the public, if it's for public safety, wherever safety levels are high, there's your barometer of where your licensing potential needs to sit. I think that's a really nice bit of thought leadership you've got there, Pablo.
PabloWell, thanks. And I will say, regulations are hard. In Western democracies, we don't like to make laws before there's an incident, because we don't like to be overly restrictive, right? And I get that, but I think putting in some basic guardrails is helpful. We have to be careful about how we do that, because if we do it wrong one way, we can't hold people accountable, and if we do it wrong the other way, people that are legitimately trying to help are abused and victimized. I'll give a very good example. In the United States, we've got the Computer Fraud and Abuse Act, and that's legitimately been used to bully and prosecute security researchers that are doing research into products and trying to tell vendors, there's a problem here, you should fix this.
YoyoWow. Where's the logic behind that? What the heck?
PabloWell, it's just the way that the law was written. It was written broadly and open to interpretation. And so, you know, certain districts are better about this than others, but judges in most places don't get to write the law. They get to interpret the law as written, right? If it's written well, then great. If it's not written well, it's not great. But as we've mentioned several times already, technology moves so quickly, right, that you can't really foresee every possible circumstance if you're writing regulation ahead of time. And so there has to be a bit of leeway in there, there has to be room for a little bit of interpretation. You just don't wanna make it so broad that it can then be abused.
YoyoSo I've gotta ask you this question. We never talk about politics though, on the podcast, Pablo, but do you think that, uh, people should feel safer now? Or do you think we were safer three or four years ago?
PabloOh, goodness. Um, you know, that's a really difficult question to answer, and I'll tell you why. Yeah. There are certainly signs that a lot of things are not going well in general society, right? There's increased conflict, there's increased partisanship, there's increased antisemitism, there's an increased number of conflicts. There are all sorts of things that are bad. Um,
Yoyoand then add mind sovereignty onto that as well.
PabloMind sovereignty onto that, yeah. I'm sure we'll touch on that in a minute, 'cause I've got a rant about
Yoyooh, let's do that.
Pablocognitive slavery. Um. But the other thing that is getting better is we have access to more and more and more information. So part of me goes, do things seem more dangerous because there are more dangerous things happening, or do things seem more dangerous because I'm now more aware of the things that are happening? And I don't have the answer to that. Um, so, are people safer? I don't know. Do people feel safer? I think generally people don't feel safer.
YoyoI think that's down to a very common issue that we have also here in the UK. There may be less crime, but there may be more crime reported, because the channels and the confidence to report crime are greater than they were, or even the awareness of reporting crime and the importance of reporting crime becomes more relevant. But if the perception is that there's more crime, and there isn't, then there's a systemic failure there, because we shouldn't have people living in fear when there's no need to. However, sociologically, there's a benefit to having people living in fear: they are more likely to do what you cognitively tell them to do than if they're not living in fear. And that probably sums it up quite well for me, to be honest.
PabloIt is. And I'm gonna push back only slightly.
YoyoYes.
PabloRight, because you said systemic failure, and, you know, I bet if we talk to the generation that comes after us, they probably feel more pessimistic about the future than we do. We feel more pessimistic about the future than our parents did. Our parents felt more pessimistic about the future. Every generation feels that they have it worse off than the previous generation, and that's gone back who knows how many generations. And so I've got my own personal opinions on whether things are better or worse, but those are really irrelevant. What I would say is that there tends to be a bias to think that past times were better. And it's really easy to go back and go, well, you know, in the past we didn't have any sort of vaccine for polio. When's the last time you saw somebody in an iron lung? I'd say some things are better, right? Our overall lifespans are getting better. Our overall quality of life is getting better. That doesn't mean that there aren't people living in horrifying conditions that existed hundreds of years ago. There still are. But when you look at the span of society, things are better off for most people now than they were a hundred years ago.
YoyoAnd I always joke at this point in conversation, 'cause I always say, you know, I'd always like to have the opportunity to go back in time. But after reading a lot of books around accident investigations, including with, you know, trains and railways, sorry, same thing, and planes, I actually think we are in a much safer world now than we were then, in terms of health and safety and just general guardrails to keep us safe. I mean, we don't smoke on the London Underground anymore, for a start. Yeah. And so from that perspective, I think going back in time is probably the riskiest it's ever been.
PabloYeah, no, it's, it's easy to kind of romanticize. And one of the wonderful things about, uh, cognitive bias is, as we get further away from an incident, we remember the fun times more than the bad times, right? So if you're a military veteran, right? You think back fondly on your times going through bootcamp or through some really awful training or, you know, I just said that one of my favorite tours was in Afghanistan. There were things there that were, you know, legitimately horrifying and awful. And I, I have to go back and think about those. I do remember the silly things that happened, right? So we kind of romanticize what happened in, in the past, and think that, you know, the future is dark and grim. And I, unfortunately, I think that's a little bit of human nature, but every so often I think we should stop and take a look back and go, yeah, we don't have plague doctors anymore, where we think, you know, some herbs are gonna keep us from catching the plague.
YoyoYeah, no, that's a great point. So tell me, I've gotta take you back. What is cognitive slavery?
PabloOoh, cognitive slavery. Alright. I'm gonna say one of my bombastic things that's gonna get me in trouble. We used to buy and sell people's bodies to do labor for us. Yeah.
YoyoAnd,
PabloWe decided that was bad, that was slavery. And then we used to take young children, and we would pay them very minimal wages to do very dangerous jobs, and we decided that really child labor was equivalent to slavery. We didn't want to do that. What's going on now on the internet, this freemium model where you get a free service because you are the product, is really akin to cognitive slavery. And I think that in time we're gonna find that that model is ethically unsustainable and morally repugnant, and we're going to, gosh, I hope, get away from selling people's information for giving them a pittance of a product. The reason that social media works is it plays on your emotions. The reason that you have an account on social media is because they are buying and selling your likes and your dislikes. They are targeting your personality and your cognition so they can show you more ads, and they do that by keeping you engaged in the platform. How do they keep you engaged in the platform? By playing to your biases and tinkering with your emotions, usually outrage.
YoyoYeah. No, you're right. You're so right. In fact, interestingly, Meta, Facebook, have brought back the facility to connect with your friends only, in a tab, so now you can just see your friends' feeds. And I think one of the biggest complaints that they received over the last couple of years is, where is the purpose? Why did I join this? I joined this because I wanted to be able to see what my family in Australia were doing. I wanted to be able to connect with my friend's job interviews over in Texas, for example. And all of a sudden we lost the ability to do that. And I saw somebody, I obviously only follow credible social media posts and memes and things, but there was this one guy, and he basically said, you know, they gave us a product, then they made it difficult for us to use it, then we have to buy the solution so we can use it how we used to use it in the first place. And I can see you agreeing with me. And yesterday, you know, I had to give LinkedIn full access to my library of 62,000 photographs so that I could just find one image to upload, because it was just being difficult with me, and I can see what's happening. You know, I can turn that restriction back off again, but, oh my God, it's now becoming really hard work to manage the access. If you just look at location settings on your iPhone, your smartphone, whatever you're using, your not-so-smartphone, just managing that can be a full-time job, if you want to manage it with your privacy in mind. But I'm right, aren't I, in the sense of how they made it easy for us to use, we all bought into it, then they started making it hard, so we would buy things to make it easier to use. And they did the same with businesses as well.
PabloNo, a hundred percent. And the other thing that I'll add about,'cause you said, that meta had brought back where you would only see your friends feeds and, let's go back and analyze our friends, right? Some of our friends are friends that we know. We've grown up with them. We've gone to school with them, we work with them, right? They're legitimate friends. But we're also looking for social interaction. And so there are all of these groups on social media for whatever it is that you like to discuss, whether it is, you know, basket weaving or model making or cooking or child rearing or politics. Then you don't know who's in those groups.
YoyoNope.
PabloRight. There's that old meme of, you know, on the internet, nobody knows you're a dog. Well, you know, on the internet, when you're sitting there talking about your local politics, you assume that somebody that's in there is also in your local area, they're a fellow citizen. You don't assume that they're a Russian agent or a Chinese agent or a Korean agent that's there to sow discord.
YoyoMm, I know. I know. And so
Pablowe still let those people in.
YoyoBut in fairness, I come from Cornwall in England, and there's enough political discord down there that I see on my feed. I'm kind of like, I can't believe my beloved friend thinks like that. So yeah, I think we've just gotta, is the pressure on us, you know? It's a bit like driving a car, isn't it? Do we just drive as crazy as everybody else on the road to survive, or do we actually just drive responsibly and take ownership for how we drive?
PabloI think we drive responsibly and take ownership for how we drive. But I think, also, when we see somebody doing something we don't like, we have to stop and think, why are they doing that?
YoyoYeah.
PabloThere's this fabulous thing that happens in cognition where all humans overestimate how many people believe as they believe, right? So, if you go to the US, something like 85% of Americans believe they're above-average drivers. That's not the way the math works, right? It's just not. And so what happens on social media is, you believe X, and because you believe in X, the algorithm goes, oh, let me introduce you to a bunch of groups where people also believe in X. And before you know it, your feed is full of only people that believe in X, and you're in this echo chamber.
YoyoYeah.
PabloAnd you don't see anything outside of that echo chamber. So you think, of course, everybody believes this. Yeah. One of the things that's fundamentally changed is, you know, in the early 1980s there were only so many television and radio channels for you to get your news, right? And so it didn't matter where you were on the political spectrum, the news coverage was the news coverage. And then you would go talk to your neighbor that was on the other side of the political spectrum, but at least you both saw the same news coverage, and you could talk about what you agreed with and disagreed with in that coverage. The problem now is there are so many channels that you're not seeing the same stories at all. No, and there's not enough common ground there to have that civil discourse so we can solve the problems.
YoyoLook, I'm curious by nature, and I did a qualification on cognitive thinking and critical thinking a few years ago, and it was very, very useful. I do recommend it, 'cause it makes you think in a different way, which is, I guess, good, that's the objective. But there's also a part of me that thinks, you know, there are a lot of people that don't think, they don't have the critical thinking capability. And that's pretty hard. That's hard when it comes down to voting, you know, down to being in committees and groups and subgroups, and even sports groups and things like that. We tend to gravitate, though, quite naturally, don't we, to people who are more similar to us than dissimilar. But in studying the conversations that are going around, the general rhetoric, the general narrative, I have been able to identify more, not identify, this is tricky. When I hear a politician say something, I don't agree with them, but I understand why their followers would agree with them. And that's a very different way for me to think. So when people say, you know, is he making sense? I'm like, to the people who really like him, yeah. There's a dog whistle there, he's really tuned in, it's a radio frequency. And it's almost like, when you see that, you can't unsee it.
PabloNo, it's a hundred percent. And so people always ask me, they're like, well, you know, my friend, my family member, my coworker has been infected with this belief that's absolutely horrifying. How do I talk them out of it? I said, well, tell me about the last conversation. They go, well, they said this horrifying thing, and I immediately told 'em, no, they're wrong. And I went, how would you react if somebody told you you were wrong? You would dig in and stop listening. That is not a worthwhile conversation. You're much better off kind of sneakily walking them through your logic. And the way you do that is by asking leading questions. Oh, you think that the earth is flat? Why do you think the earth is flat? Tell me about that. You know, what are the benefits of the earth being flat? What would lead you to believe that the earth wasn't flat? What about this? Have you considered that? Right. And that gets them thinking in ways that they might not think, as opposed to going, well, you're just wrong. It's non-confrontational. And you're not gonna be able to change everybody, that's just unrealistic, right? But your likelihood of changing somebody by walking them through a series of questions that lead them to a conclusion that you would want them to arrive at, which is, yeah, maybe I need to do some research, hey, would you be open to me sending you some papers that say the opposite of what you believe, is gonna get you a lot further than just telling 'em that they're, you know, insane and crazy and they're not thinking about things.
YoyoI've had to understand a lot more about the difference between being critical and being cynical. Cynics don't change their beliefs, and it's really important to understand that about society. And then you understand, you know, that cynicism is really not a great distance away from some fairly negative mental health, in the sense of the inability to see things from other people's perspectives and understand and have rational kinds of conversations. And when I look at flat earthers, and again, if we go back to how I've been able to listen to the narrative and understand why they believe what they believe, even though it's completely different on the spectrum to what I believe, 'cause I'm gonna make it very clear, Pablo, I think the earth is round. In fact, it's not quite round, it's more like a little wobbly shape, if you listen to the scientists. Anyway, we're not perfectly round. Um, there's definitely a part of me that thinks more of us need to think, and that's how we can exist in the same space. Otherwise we're just gonna be continuously fractious as communities. Um, but in terms of flat earthers? Yeah. No. Well,
PabloCan I just riff off that? Because you used two words, and I wanna talk about those two words. The first word that you used was cynical.
YoyoMm-hmm.
PabloUh, and then the other word that you used was critical.
YoyoSkeptic.
PabloSkeptic. Yeah. Okay. Uh, cynicism is an emotion. Mm. Skepticism is not. Uh, and I think a critical point most people should realize, and this is why rage bait and clickbait work: if you see something, hear something, or read something, and your first instinct is an emotion, you are 100% being manipulated. A hundred percent. Right. Love that. When your dog comes up to you and gives you those puppy eyes saying, please give me a biscuit, right? You're being emotionally manipulated. Just recognize that. And they know
Yoyoit. Yes.
PabloMm-hmm. And your emotions come from the older portions of your brain, and they short-circuit the higher cognitive functions. So you're not thinking rationally when you're being emotional, right? Um, and so if your first reaction is emotional, you need to, you know, set yourself an interrupt, going, wait, am I acting rationally or emotionally? I'm acting emotionally. Alright, take a deep breath, think things through, and then respond. Right. That whole, uh, don't send that email when you're angry thing, there's something to that.
YoyoOh, true. And also about Amazon as well. Uh, don't buy anything late at night. Always think on it.
PabloI've yet to learn that one. And especially over the holidays, I don't think that's fair game.
YoyoTry, try and think, do I really need that now or should I just think on it tomorrow? And then by the time tomorrow comes around, you're like, no, I don't really need that. Uh, and it, it's so true.'cause we do think more emotionally as well when we're shopping late at night for some reason. I don't know why.
PabloAnd listen, you know, Jeff Bezos is a smart man. There's a reason that there's that countdown clock that says, listen, if you order it in the next 90 minutes, you'll actually get it tomorrow. Hey, listen, 30 minutes. 30 minutes. Right?
YoyoI know. I feel so manipulated. Pablo, what can I say? It's wonderful that we've been able to transcend over so many different but relevant human topics, very current right now. And maybe we'll just have to get you back in a year's time, so we can talk about 2026 and see if it really was an agentic shit show.
PabloYes, please. And you know what I'd like to, I'd like to play a little game with you.
YoyoGo on then,
Pablobecause you said you were gonna come up with a top 10 list of, you know, AI oopsies.
YoyoYes.
PabloI think we should come up with our list, right? Yes. I think we should trade it. And maybe you put it in the description of the podcast. Yes. And then we come back in a year and see how we did.
YoyoOh my God. Yeah, it could do, but I think your sources might be a little bit deeper than mine. But it also would be a good idea to debate, though, which ones should make the final 10. Oh, I like it. Yeah, and I think we've got some secret sauce there. Very good stuff. Love it. Thank you so much for joining us on the Security Circle podcast.
PabloThanks so much for having me.