Curiouser & Curiouser
Curiouser & Curiouser is a podcast for leaders, builders, and curious minds navigating AI, GenAI safety, and governance in a rapidly changing world.
Produced by Alice, the enterprise trust, safety, and security platform for the AI era, the show draws on frontline adversarial intelligence to explore how AI systems are stress-tested, red-teamed, governed, and protected across their lifecycle.
Each episode looks at how AI is actually showing up in the real world, how organizations evaluate it, where it breaks, and what it takes to build systems people can trust.
We cut through hype and fear to explore how AI shapes trust, decision-making, and real-world work, one rabbit hole at a time.
Explore more from Alice:
Website: https://alice.io
YouTube: https://www.youtube.com/@Alice.io.advance.unafraid
LinkedIn: https://linkedin.com/company/alice-io
X: https://x.com/alice_dot_io
Resilience by Design: Inside Estonia's Digital Nation
Most countries talk about digital trust. Estonia built it into the architecture.
Joseph Carson, Chief Security Evangelist and Advisory CISO at Segura, has spent 23 years living through Estonia's digital transformation from the inside. In this episode, he and Mo get into what it actually takes to engineer trust at a national scale, what went wrong along the way, and why the lessons from a small Baltic nation matter more than ever right now.
Joseph also previews his talk at RSA Conference 2026, "From Cyber War to a Digital Nation: Estonia's Playbook for Resilience."
🔗 Podcast: https://alice.io/podcast
Follow the show so you don’t miss the next episode.
New episodes every two weeks. Stay curious.
My view over the last couple of years is that we have used AI more on the defense side than the attackers have used it for offense. We have accelerated our use of it because a lot of the traditional attacker techniques still work. Which is unfortunate, but we're still seeing a lot of innovation, a lot of acceleration. The core fundamental things you have to think about when you're looking at any AI system are the ethics, the rules, the law, the risk, and then making the decision whether you accept the risk or not. And if you don't accept the risk, what can you do to minimize it where you possibly can?
SPEAKER_01: If AI has ever made you stop and think, wait, what is happening? You're not alone. I'm Mo, and I'm a security researcher asking the same questions. On Curiouser and Curiouser, we're having open conversations with experts, researchers, and leaders working at the edge of this space, talking through how AI is taking shape, what's shifting, and how people inside the work are thinking about it as it happens. So join us and listen in as the conversation takes shape. Well, hello, hello, and welcome back. Very excited to have our guest here today, Joseph Carson. Joseph is the Chief Security Evangelist and Advisory CISO at Segura. He's got an incredible history, and I do not want to butcher it, so I'm going to let him tell us all about it, a little bit of how he got started and everything. So, Joseph, thanks for being here with us.
SPEAKER_00: Absolutely. It's a pleasure to be here, and it's honestly an awesome privilege to get to chat with you and have a fun conversation for today's episode. My history is such a long one, so I'll try to summarize it as much as I can. I've been in the industry for more than 30 years now. I started off a long, long time ago, when we called it digital transformation back then, and it was a true digital transformation, because it was really moving from typewriters to computers. That's what digital transformation was then. One of my first jobs was working in medical records, digitizing medical records for the hospitals. Back then it was all physical folders on a shelf, and we were basically connecting mainframes with McDonnell Douglas dumb terminals so doctors could have immediate access to patient records. It was a massive undertaking. From then on, a large part of my early career was in systems administration, so managing the systems, keeping the infrastructure running, keeping users productive. I did a large part of that in the medical field, and then moved into the ambulance service, so critical infrastructure. When people called the emergency phone line, my systems were responsible for routing those calls and then dispatching an ambulance to get to the patients. When that system didn't function, it was a life-and-death situation. After that, I transitioned much more into the security field, because until then security was just something I did during my day. It was a part of your job, but it wasn't your job, just one task among many others. But for the past 20-plus years, I've focused my career on cybersecurity.
And it was an interesting transition, because right around the time I made the switch, the company I worked for became the victim of a massive DDoS attack. I had the privilege of working with Steve Gibson of Gibson Research, which was also a victim of that DDoS attack. I became so fascinated dissecting it and analyzing it, so absorbed and excited again, that I switched my career to focus, over the last 20 years, on protecting governments, organizations, and critical infrastructure, specializing in the identity space for the past 10 years. It's been an exciting career, I've met so many amazing people, and it continually evolves and transitions. There's always a digital transformation every couple of years; of course, now we're in the AI digital transformation, which is happening at the moment. But yeah, it's been a privilege and an honor to be able to contribute so much to the industry.
SPEAKER_01: Yeah, and you mentioned a couple of things about incidents, and, you know, you never let a good incident go to waste. It just seems like each incident teaches us something a little bit new about the environment and the climate we exist in. Which is actually something I want to talk to you about, because I don't think a lot of people know that you're based in Estonia. And Estonia, something I've learned just in speaking with you, has one of the highest data economies per person, right? Estonia is, in my opinion, super far out there in terms of how far y'all have integrated technology into everyday life. And I know there have been a couple of times where you've mentioned that in Estonia you can go and open a business in minutes, right? And tax returns, something that takes me weeks to do, people literally just need to show up and sign. So I think that's such an interesting environment to live in, and I would love to hear a little bit more about that, and about how this digital transformation, I think, means something different for us, right? Like you say, it's just from typewriters to computers, but that's more of an us thing. For Estonia, I think it must have been totally different.
Cryptographers, vodka, and the birth of blockchain
Why Estonia isn't building the next OpenAI
SPEAKER_00: For the audience: while I've lived in and been based in Estonia for the last 23 years now, through that entire digital transformation Estonia has been through, I'm originally from Ireland. So when I came to Estonia, it was all new to me at the time. But I was very fortunate to work with a lot of the pioneers who started that transformation. It was back in 1991, when Estonia regained its independence from the Soviet Union, and that was really the start of its digital transformation. One of the biggest challenges they had at the time was that when you're occupied, or run by dictators or states, what happens is they control your history. So things like the property register and the land register had often been doctored and manipulated by whoever was in control of that information. The big thing Estonia wanted to do was make sure its history could never be erased. That was one of the fundamental things, and at first it was from a physical perspective. But remember, 1991 was the beginning of the boom. That was when I was working in medical records, doing data transformation from paper to computer records. It was also the big boom coming out of CERN, and the internet was really taking off. That was the time it all came together and collaboration accelerated. So Estonia, when it became independent, wanted to become a paperless society, because they wanted to take advantage of computers. One of the big things Estonia also had was great education: lots of mathematicians, computer scientists, cryptographers. So the knowledge in the community and culture here was also a significant contribution.
But as they went down that path, the problem with the land register, the population register, and the housing register was making sure that no one could ever manipulate that data again: whoever was in control of it could not change history. That was one of the fundamental things. So they gave their cryptographers and scientists this challenge: how do you do that in a digital world, given that they were going down the path of being paperless? They sent their scientists off. The way I always hear the story, they went into the woods, into a sauna; the government gave them a bunch of bottles of vodka, said "go solve this problem," and then forgot about them. It wasn't until 1997 that they came out of the woods. I think somebody drew the short straw: "go get more vodka, we need to carry this on for a few more years." When they came out, they presented their findings and research to the government. They published one of the first papers on this in 1996, but they went through peer review, and it wasn't until 1997 that they became confident they had the right solution. Really, it was the foundation of a blockchain-like element, in order to make sure that history could not be erased. And that set off a fundamental path the government knew it had to follow. They had identity at the core, so they knew they needed an element of identity for citizens; they knew they needed digital signatures, to verify, sign, authenticate, and authorize changes to data; and time is also an important element of that as well.
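The tamper-evidence idea Joseph describes, linking each registry entry to everything before it so that rewriting history is detectable, can be sketched with a simple hash chain. This is an illustrative toy, not the actual Estonian KSI/registry implementation; the record strings and function names are invented for the example.

```python
import hashlib


def chain(records):
    """Link each record to the hash of all records before it.

    A toy sketch of the linked-timestamping idea behind Estonia's
    registries; not the real implementation.
    """
    prev = b""  # genesis: empty previous hash
    entries = []
    for record in records:
        digest = hashlib.sha256(prev + record.encode()).hexdigest()
        entries.append((record, digest))
        prev = digest.encode()
    return entries


def verify(entries):
    """Recompute the chain; editing any past record breaks every later hash."""
    prev = b""
    for record, digest in entries:
        expected = hashlib.sha256(prev + record.encode()).hexdigest()
        if expected != digest:
            return False
        prev = digest.encode()
    return True


log = chain(["parcel 42 -> Tamm", "parcel 42 -> Kask"])
assert verify(log)

# Someone in control of the register tries to rewrite history:
tampered = [("parcel 42 -> Mets", log[0][1])] + log[1:]
assert not verify(tampered)  # the manipulation is detectable
```

The point is exactly the one in the story: whoever holds the database can still flip bits, but they cannot do it without the change being provable, which is the non-repudiation property the 1996-97 research delivered.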
So in 2001, all of those legal requirements came together, meaning there was a mandatory digital identity document. The idea was originally a Finnish innovation, but the Estonians took advantage of it and made it their own. It became mandatory for every citizen to have this physical document, which was even more powerful than a passport, because it had PKI certificates in the document itself. So you could actually use it for authentication and authorization, and for digital signing. Digital signing was put into law in 2000. Then they started building the first systems. They wanted to get citizens on board, because you don't want it to look like government oversight, like the government wants a backdoor into citizens' information; in fact, it's more of a front door for citizens to interact with the government. That's how they proposed it. Ultimately, one of the first systems was the tax system. It was the one that was really painful for everyone, one of the most time-consuming services citizens have to use. It would take hours: you'd go park your car, pay for parking, stand in queues, fill in forms, find out the forms you'd filled in were incorrect, do them again, and end up having to amend them. It was wasted time, a cost to the citizens and the population every year, and for many businesses it was a profitable business as well. So it was the first system they digitized, and they launched it back in 2002. It became a matter of about 15 to 20 minutes to do your tax return online. But you still had the option to do it in person, so you were faced with this choice.
Do I want to waste my time standing in queues in the cold weather? Because it's really cold in Estonia; right now it's something like minus 18. It's so cold you don't want to be parking your car and queuing. So doing it from the safety of your home was a big attraction. As people started to use that first tax system, it moved into banking, doing online banking; then it got into healthcare, then into voting, and more and more systems became available anywhere, at any time, from any device, no matter where you were. And what happened was a real change in 2002, 2003. It was a significant shift: it was no longer a paperless society; the government realized it was becoming a digital society. Now we're integrating all these systems and services: interoperability, security, blockchain for non-repudiation, PKI for authentication and authorization. All of these things started coming together. It was a significant moment. I've had a couple of opportunities to interview the former president who led this initiative at the time, and he always says to me: it was education. You had to prepare people for what was coming. You have to keep educating them, keep informing them, keep them involved. He said that while it all came together in 2002, the whole thing started back in 1997. It also means education and knowledge became critical to the success. People knew what was coming, they were ready, they were excited, they were involved, and they got value from it: less wasted time standing in queues, no duplicate forms, no duplicated records.
And the great thing as well: you move from a serial kind of system, where everything has to wait for something else to occur, to much more of a parallel system. One example is becoming a parent. The minute the baby is born, all the records and all the government systems already know you're a parent. Rather than you having to fill in a form and then wait to register a name and get benefits and healthcare, all of that happens immediately from that moment, because of all the integrated systems and the interoperability that's there. Now, with all of that, it was fantastic. It accelerated Estonia onto the global stage. But of course disruptions always come. No system is perfect, so there are always challenges, and we've had our fair share of challenges over the years.
SPEAKER_01: I was gonna say, and feel free to stop me with this question, because it is different, right? Estonia has so much data. Like I said at the beginning, it's got one of the highest data economies as a percentage of the population, because it's small but very rich in data, given how much intelligence and technology has been implemented in the infrastructure. But it sounds like there's a lot of implementation versus development when it comes to this AI piece, right? I think about the United States, and it's a lot of "we need to build our own models, we need to go do all these things." Estonia has all the data to do it. So why isn't Estonia building, like, the next OpenAI kind of thing? You have all the makings of everything; you have all this rich, really high-quality data. So what do you feel is the strategy in terms of AI implementation, beyond craft, right?
SPEAKER_00: Absolutely. Going through all of the different systems in Estonia, it's really getting to the point where almost every system is going to touch an agentic AI in some way. So, on why Estonia hasn't created an OpenAI: in a sense, we have. We have local models; a lot of the models, the LLMs, are built locally. I just don't think we've had the computational resources, or access to enough other data to build context from, to go bigger. But we have been able to do a really good job with local models at the region and country level, which is what's happened. Even recently, Estonia handed over its language archives to Meta and a few others, so that true translation, and live translation, into the Estonian language becomes possible, making it much more globally accessible. So there are a lot of initiatives around expanding use outside of Estonia. But they've definitely taken a more cautionary approach, because they did have a few issues a number of years ago that made them think about security and take a slower pace, even though they're way ahead in a lot of other areas. A lot of that was because we had an ID card vulnerability back in 2016, 2017, introduced through a change in vendor. That caused a massive challenge: they had to go and reissue new crypto algorithms for the PKI on the card. That caused difficulties, but it doesn't mean going back, changing course, or returning to something historic or old.
It just means we have to find a solution to move forward and make sure the trust remains, and the ethics and the transparency are there as well. Estonia has built a really great model for using AI in society, and it's only going to get better, especially as it extends into healthcare and automation. One of the goals is to get to the point where it's a click-once, authenticate-and-sign society. That's all you need: you authenticate, I am who I am, and you sign whatever you're going to sign, whether it's a vote or a legal document. As of last year, every single government service is now online. Every single one. And voting isn't just internet voting; it's also electronic voting. You have the option to go and hit the button in the booth if you want to do electronic voting, you can use the traditional method, or you can do paper voting if you can't get to one of those locations. So there are many options. But two years ago was the first time that internet voting, where you actually vote from your laptop, overtook all other voting mechanisms. And the great thing is, when you have a choice, each system can act as a bit of oversight for the others. If you have multiple systems, each one gives you some type of integrity check, or additional visibility, into the ways and methods of what's happening.
Should AI decide who gets their medication?
SPEAKER_01: For those of you heading to RSA this March, you know how chaotic it can be. Honestly, there are so many vendors, there's a ton of booths, all this. With this year's theme focused on community, we decided to slow things down a bit and give the community a space to take a break, and maybe join us for a cup of tea or two. Stop by booth S2051 and you'll see what I mean. Thanks. See you there. You've kind of, or at least Estonia has, given everyone the ability to interact however they feel most comfortable, right? But I think, again, the most important part is standardizing all of this data, and making sure that no matter how someone wants to interact with the service, it's always going back into a format that's easily consumed by the underlying systems. So whether you're filling out a paper ballot or going in and pressing a button, all these things translate to the exact same types of outputs and can be consumed in a single way. And I think that's the best part: making it very easy to consume, standardizing it, having a data classification that makes the data easy to understand, and simplifying it so that no one is confused at the end of the day. And actually, I think there's a really big lesson here. Back in high school, I'm not very good at sports ball in any sense, but I was on the football team, and our coach had this saying: keep it simple, stupid. Because it was so confusing to learn all these plays. But if I could just stand there, or any of the guys on the line could just stand there and do one thing, this one action, we could translate it across many different plays, and it was a lot easier to execute.
So in that same vein of thought: from my research prepping for this, you've kind of proposed this AI-yes and AI-no classification across data, just to practically understand risk across these different services. And it sounds really simple to do. But I'm thinking of Estonia as this one massive organization, so doing that across a bunch of different services and a bunch of different things might be kind of a nightmare, especially when you're an implementer using all these different services and all these different providers to go do this thing. So how do you actually enforce these kinds of yeses and nos without creating all the drama of security theater?
The PII classification mistake that's still haunting us
SPEAKER_00: Absolutely. One of the critical things is how it's constructed. They refer to the whole back-end system as X-Road. X-Road is the interoperability layer between these systems, and it's decentralized as well. It's not all central, not one big data lake; it's decentralized and deduplicated. So if you want to bring a new service onto X-Road, you can't bring the same data somebody else is already hosting and make it available. You have to ask: where is this data already available for me to use, and what new data am I bringing? So it's decentralized and de-risked, and interoperability is key, along with the security elements of the transactions and the blockchain piece. All of that provides non-repudiation. But getting into the yes-and-no side of things: when you get into critical decisions, and this goes back to the subject-matter working groups on the EU AI Act, the question is: is there a risk to life? On the medical side of things, if I'm issuing prescriptions, there might be a certain set of prescriptions that, if taken incorrectly, or in too high a dose, or with other types of medication, could be fatal. Do you want that decision to be allowed to be made by a system driven by AI? Is there a potential risk of mistakes being made? Or do you want it to handle things which are low-risk and repeatable? Do you want an AI system to be able to make decisions about life support? Probably not. So this is really where we have to start looking and classifying.
There's actually an interesting point here. I had an online discussion with a couple of my peers from that working group who have been making that statement about yes-AI, no-AI: the scenarios, and how much AI can be involved in the decision process before there's human oversight. A lot of times, what I find is that I'm becoming more of an observer of all the transactions happening in the background, and at the end there are certain transactions I get to participate in. As I mentioned: authenticate and sign. So when the tax return comes up at the end of March, or whenever, I will log in, authenticate, and sign. There are certain things I can't delegate to an AI bot. So it's a yes: AI can be used to bring the data together and accelerate things, to make them quicker. But at some point there has to be a human making the decision and holding accountability, because there are legal ramifications that come with participating in that, and financial outcomes as well. So we really have to take the EU AI Act, which is a risk-based model, and run through every scenario. At the end of that scenario, is there a potential possibility of a mistake? AI systems are probability-based; we have to remember they're not rules-based. If we want a system that can make definitive yes-or-no decisions, that's a rules-based system. But these are probability-based, which means "highly likely." And when it's only "highly likely," we have to get into the decision about whether that carries a risk to life, or significant impacts on people's way of living or standards. Then we have to be really critical about how we apply it to the systems and to the decision-making.
It reminds me of something from quite a few years ago. It started with the data embassy, but then it became e-residency: anybody around the world could become an e-resident and use the Estonian digital services, no matter where in the world they came from. And one of the challenges was that without a financial component, the e-services were not really valuable; if you opened up a business and couldn't open up a bank account, that created problems. So back in 2014 or so, I can't remember the exact date, they came up with this concept of open banking: you could become an e-resident, get the digital services, and also open a bank account online. And all through this scenario, something in my gut was telling me there was something wrong. Of course, you can look at it in retrospect and say, ah, that's what it was. What happened was they treated all e-residents and all open banking equally. Ultimately, people in Southeast Asia and other parts of the world were getting e-residency, and legally getting bank accounts, but then they were basically selling them to criminals, who were then able to do money laundering. That created a massive money-laundering scenario. It's like having a bad apple in the batch: afterwards, everybody who was not an Estonian citizen, not just e-residents, had to go through a know-your-customer process. And that was problematic, a massive challenge.
But the way it should have been done was to build e-residency and banking around the legal system, the legal framework. That's what it should have been bound to. Between an EU citizen, a US citizen, a Canadian citizen, there are binding agreements between each of those regions; in Southeast Asia, there are not. So when you're getting into those critical yes-or-no decisions about AI systems, there are multiple things you have to consider. One is absolutely the type of data: data classification is critical as part of that. The second part is: is there an impact on human life in that data? If it makes a mistake, a probability mistake, will somebody suffer? And the third part is the legal consideration: is it operating in a legal way and adhering to legal requirements, not just within countries, but as it crosses borders as well? These are some of the core fundamental things you have to think about when you're looking at any AI system: the ethics, the rules, the law, the risk, and then making the decision whether you accept the risk or not. And if you don't accept the risk, what can you do to minimize it where you possibly can? That's really it. That's ultimately where the EU AI Act came from. In those working groups, we were all given different scenarios. The specific one I was given was law enforcement's acceptable use of AI in criminal investigations. And what we found was that to earn trust in that specific scenario, you always have to be right. If you're running a forensic lab and you've got a bunch of swabs and you find that one is contaminated, they assume everything from that batch might be contaminated.
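The three-part gate Joseph describes, data classification, risk to human life, and legal basis across jurisdictions, can be sketched as a tiny decision function. This is a hedged illustration of the conversation's "yes AI / no AI" idea only; the category names, thresholds, and outcomes are invented for the example and are not taken from the EU AI Act text.

```python
from dataclasses import dataclass


@dataclass
class UseCase:
    """A candidate place to deploy AI, scored on the three checks above."""
    data_classification: str  # e.g. "public", "personal", "special-category"
    risk_to_life: bool        # could a probabilistic mistake harm someone?
    legal_basis: bool         # lawful in every jurisdiction it crosses?


def ai_allowed(case: UseCase) -> str:
    """Apply the checks in order of severity; labels are illustrative."""
    if not case.legal_basis:
        return "no-ai"            # unlawful somewhere it operates: reject
    if case.risk_to_life:
        return "human-decides"    # AI may assemble data; a human signs off
    if case.data_classification == "special-category":
        return "human-oversight"  # permitted, but monitored
    return "yes-ai"               # low-risk and repeatable: automate


# The prescription example: probabilistic mistakes could be fatal,
# so the AI accelerates the work but a human makes the call.
assert ai_allowed(UseCase("personal", risk_to_life=True, legal_basis=True)) == "human-decides"
assert ai_allowed(UseCase("public", risk_to_life=False, legal_basis=True)) == "yes-ai"
```

The ordering matters: legality is checked first because no amount of oversight cures an unlawful deployment, which is the lesson of the e-residency banking story.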
Same in legal cases: if somebody was doing something incorrectly, all the cases they worked on might come under review. So we really have to be very cautious about unleashing this into everything. That's why Estonia is taking a much more measured approach: certain things are progressing, but they're being very careful about which systems they expose it to, especially around the data itself. Because once it's out, it's out. There's no going back, no turning around. It's out there, it's done. So you have to be very, very considerate about how you expose the data, and maintain trust at the same time.
SPEAKER_01: Yeah, data is unfortunately more permanent than I think we can really understand sometimes. And it lasts for a very, very long time.
SPEAKER_00: I want to bring up an important point as well. One mistake we made during EU GDPR, to your exact point, is that we didn't do data classification correctly. We called it all personally identifiable information, PII. I think we should have had two classifications: persistent PII and non-persistent PII. Persistent means data you can never change. You were born on a certain date; you can't change the date you were born. You can manipulate it, you can poison it, but you can't change that fact. You can't change who your parents are. There are events in history which are persistent personally identifiable information. A home address, on the other hand, is changeable: you can move, you can change your name. There are lots of things that are non-persistent as well. So as we look at AI and personally identifiable information, we have to classify it into two categories. There's the persistent data that cannot be changed once it's exposed: health data, your past surgeries, all of those are events in history that have happened. And there's the data that can be changed: you can get new credit cards, you can get a new telephone number, IP addresses change on a daily basis.
SPEAKER_01: Yeah, all the time.
SPEAKER_00: Passwords change. So they're non-persistent. We should have classified it as two categories, and that would have made things much easier, especially in incidents and breaches. Because if a data breach happens on the persistent data, it's out there; that's done, you can't change it. You can only monitor more cautiously, knowing it's out there — and you should always monitor assuming it's out there. But the non-persistent data you can take action against: you can get a new credit card, you can change your password, you can take precautions to minimize exposure on that side.
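The two-category split Joseph argues for maps naturally onto a small classification table that drives breach response. The field lists below are examples drawn from the conversation (date of birth, surgeries, credit cards, passwords), not a standard taxonomy; the response strings are illustrative assumptions.

```python
# Illustrative sketch of the persistent / non-persistent PII split
# and how it would change the response to a data breach.
PERSISTENT_PII = {"date_of_birth", "parents", "surgical_history"}
NON_PERSISTENT_PII = {"home_address", "credit_card", "phone_number",
                      "ip_address", "password"}


def breach_response(field: str) -> str:
    """Suggest an action when a breached record contains this field."""
    if field in PERSISTENT_PII:
        # Once it's out, it's out: the only option is heightened monitoring.
        return "monitor (cannot be changed)"
    if field in NON_PERSISTENT_PII:
        # Rotatable data: invalidate it to minimize ongoing exposure.
        return "rotate/replace (e.g. new card, new password)"
    return "classify first"


print(breach_response("date_of_birth"))  # monitor (cannot be changed)
print(breach_response("credit_card"))    # rotate/replace (e.g. new card, new password)
```

The useful property is that the classification is decided once, up front, so incident responders aren't deciding field by field under pressure.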
SPEAKER_01: And that brings up a really good point. Again, having gone through the EU AI Act and then having been on a working group for the AI code of practice for general-purpose AI, there's this recurring thing that keeps coming up: the need to reconcile a requirement for really dynamic security with very static regulatory requirements that are almost literally written in stone. It's been said many times, but cybersecurity is a living organism and it has to be treated as such. The requirements we have today, just like you said around PII, need to be modernized. Yes, there are pieces of PII that are persistent; well, my parents would argue that you can change your date of birth, but you really can't. And you can't change certain things. It feels like we're in this very interesting age where we do need to reconcile the things that need to be modified, the things that just aren't working for us anymore, especially when it comes to regulations.
Attackers and defenders are using the same tools
SPEAKER_00: I completely agree. Security needs to be, as we see it, a living organism: it has to be dynamic, it has to be adaptive. And it really comes down to the data, as we keep coming back to — the data is really what we're protecting. If we can change certain pieces, that can change the risk, but for the ones you can't change, that risk is persistent.
SPEAKER_01: Yeah, a really good way to think about it, or at least the one a lot of people use, is the double-edged sword. I always like to think of it as a shield, though: we try to protect everyone with a shield, but you can also bash someone across the head with it. It's heavy, it's cumbersome, but it'll work. But you're right, we're trying to create these defenses, and it's so strange. As security people, we're always told we're going to be behind the attacker, behind the threat and the risk, and we just have to move faster to get ahead. This feels like the one time where innovation is actually moving faster than we can really think about defending it; we're so far ahead of ourselves that we're thinking about the threats a little too slowly. And it's kind of scary, because we're also implementing AI in our defenses to protect against these threats, in a way where attackers can leverage a lot of the new tools we make. We say, this is how we're going to defend ourselves with AI, and attackers say, well, I'm going to take that and use it to break your defenses.
SPEAKER_00: You take the guardrails off and you figure it out. You're absolutely right. My view in the last couple of years is that we have used AI more on the defense side than the attackers have used it for offense. We've accelerated on using it because a lot of the traditional attacker techniques still work, which is unfortunate, but we're still seeing a lot of innovation, a lot of acceleration. We're hoping we keep that pace moving forward.
We're becoming digital companions to AI
SPEAKER_01: What does it mean as we move toward this space where attackers are using AI and defenders are using AI, but eventually, I think, it's just going to be analysts watching AIs fight each other? What do you think that's going to look like? What is the rubble going to look like when that chaos is happening? Who's going to be facilitating it?
SPEAKER_00: I have a couple of metaphors I've used for that over the years. You're absolutely right: we're becoming observers, and we can intervene or interact when we need to, in order to modify, improve, or optimize the algorithms and the data. I had this idea a couple of years ago where I started thinking that AI is almost like a Tamagotchi, the old digital companion you have to keep feeding, keep entertaining, keep giving data. Then, not too long ago, I actually got a Tamagotchi as a gift, and I realized it's the opposite: we're becoming the digital companions to the AI itself. It's the reverse — it has to keep feeding on us for information and context. But to answer your question, I think it really comes down to the context of the data. It's not more data; it's the quality of the data, and then the computational power, that wins. Ultimately it comes down to which algorithm is refined and trained on the best data. And I've had discussions where sometimes the best model comes not from having all the right data but from having incorrect data as well, because it allows the model to counteract itself, balance, and learn what's good and right. Well, you hope it learns the good and right side and doesn't go rogue. But it means training it with the data that gives it the best value and the highest confidence level you possibly can, and then it comes down to computational power — GPUs that can crunch that data as quickly as possible.
So that means you want data that lets you make critical decisions very quickly — faster than the attackers. That's going to be the difference. It's going to be like a Formula One race: we're all racing in the same cars. Whoever has the best analytics, the best-performing engine, and the best SOC person analyzing it — the driver, of course — is going to be able to prevent and stop attacks quicker than the attackers can abuse them. That's what it comes down to. It is going to be a Formula One race when it comes to AI. The better we optimize, and the better we decide which data can help us most, that's who's going to cross the checkered line. And we'll always have to be ahead.
SPEAKER_01: I'd like to ask you one more question that I think will help us get there too. You've seen so many things. You've been across so many different organizations; you've seen products be built, you've seen products fail, and you've seen products wildly succeed even though maybe they shouldn't have. But you've also seen the dark sides: things going horribly wrong, things getting completely destroyed. And I think we're in a space right now where everyone is adopting AI. All sorts of vendors, teams, and products are implementing AI across the board. In security, I've seen this happen a few too many times at different conferences — different vendors saying, we're using AI, we're using AI. We're at a point where we're all responsible for how AI is implemented in products, how we talk about it, and how we provide value at the end of the day for each other. Now, I'm not saying this to advertise either of our organizations; I'd actually say I'm asking it to challenge both of them. Based on your experience, what do you feel are the shiny things being talked about too much that add no value, versus the things that actually make security better? The things that make it feasible for places like Estonia to say, this is where we're seeing value from security — the things that actually move us forward, enable us, and make security the trusted partner in the room?
SPEAKER_00: That's a fantastic question. I think one of the things is that we have to change as an industry in order to get where I believe we should be going. When I go to conferences, and over the years I've seen a lot of buzzwords and trend words — I've seen them all — whether it's next-gen this or AI-powered that, it really comes down to: what value? What are you making easier for the person? How are you making their lives better? How are you making the organization more resilient? How are you making them more innovative? I remember, years ago, a pen test I did for a power station, and it was a big realization. I was talking with the CEO and CFO at the time, and I was advising the CISO, trying to get them more budget. The CFO said, budget denied — we had taken a fear approach, scare tactics, just like everyone you see out there scaring people into buying something. And the CFO asked, how are you helping us make our employees' lives better? It was a realization: we weren't thinking about that. It meant I went and actually put myself in the shoes of several of their employees, to understand what a day in the life of their job looked like and what I could potentially improve. When you put security in place, it should always make the person's experience better and help them do their job better. That's the fundamental change we have to make. So anytime you see a new buzzword, a new trend — and we've seen many over the years — ask: what is the fundamental thing it's doing to add value to our society, and how is it doing it?
Is it making people's lives better? If it's a SOC engineer, is it helping them analyze things? If it's the employee sitting at a desktop making financial decisions, how is it helping them do that more safely while also helping them be successful? Fundamentally, I think we need to change the framing from a security perspective to a return-on-investment perspective. And that return on investment could be anything. What Estonia measures is reduced wasted time, and that reduced wasted time has a monetary value to society. It helps us do more with the budget we have, it reduces the risk of people becoming victims, it enables better integration and much more interoperability in society, and it accelerates innovation and technologies. So you fundamentally have to ask: at the end of the day, how is it making society and our lives better? And you have to have a measurement for that — a metric. That's actually what my most recent book, which is currently in draft, is all about. It's about looking at identity and how we can turn it from being viewed through a security lens to the outcomes that add value to our daily lives, and focusing on that. Because we don't do security for the sake of security; it fundamentally has to have value to the business. That's the big shift we have to go through. So instead of "it's AI-powered," it should be: we're using AI to help your SOC analysts analyze ten times the number of incidents, alerts, and indicators of compromise, and detect them three times faster than they do today.
And you can look at that, and it turns into something really clear: it's helping my analysts work more efficiently on a larger volume of alerts while detecting the malicious ones three times faster. That turns into: if I can do that faster, am I staying ahead of the attackers? Is that three-times-faster difference enough to make sure I'm stopping the attack before it becomes something of financial impact, business impact, or employee impact? That's what we need to get to.
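Joseph's "ten times the alerts, three times faster" framing translates directly into a before/after metric a CFO can read. The baseline numbers below are made up purely for illustration; only the 10x and 3x multipliers come from the conversation.

```python
# Hypothetical before/after numbers to make the ROI framing concrete.
# Baseline figures are invented for illustration; 10x and 3x are the
# multipliers from the conversation.
baseline_alerts_per_day = 200          # what one analyst triages today
baseline_minutes_to_detect = 90        # mean time to detect a malicious alert

alerts_with_ai = baseline_alerts_per_day * 10        # "10x more alerts"
minutes_to_detect_with_ai = baseline_minutes_to_detect / 3  # "3x faster"

print(f"alerts/day:          {baseline_alerts_per_day} -> {alerts_with_ai}")
print(f"mean time to detect: {baseline_minutes_to_detect} min -> "
      f"{minutes_to_detect_with_ai:.0f} min")
```

The point is not the arithmetic but the unit change: the claim stops being "AI-powered" and becomes a measurable outcome the business can verify.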
SPEAKER_01: So, Joseph, thank you so much. I really enjoyed our conversation. You said you have a book coming out, so I know I'll be excited about that. But where can people find out more? Where can people find you? What do you have coming up?
SPEAKER_00: Fantastic. If people have questions or want to contact me, the easiest place is probably LinkedIn. That's where I post most of my content and do my engagements, and a lot of people connect with me there to ask for mentorship, feedback, or collaboration. They can also find me on my own podcast, the Security by Default Podcast, which is on all streaming platforms. As for where I'm going to be: I'll definitely be at the RSA Conference in March. I'll actually be a speaker there, giving a talk on the evolution of Estonian digital resiliency. So come catch the talk, and find me at the conference as well. Those are the easiest places to reach me.
SPEAKER_01: Thank you so much, Joseph.

SPEAKER_00: No problem, it's been a pleasure.

SPEAKER_01: If this episode helped cut through the noise, like or subscribe so you don't miss what's next. Thanks for spending time with us. Until next time, stay curious.