
Digital Transformation & AI for Humans
Welcome to 'Digital Transformation & AI for Humans' with Emi.
In this podcast, we delve into how technology intersects with leadership, innovation, and most importantly, the human spirit.
Each episode features visionary leaders from different countries who understand that at the heart of success is the human touch - nurturing a winning mindset, fostering emotional intelligence, soft skills, and building resilient teams.
Subscribe and stay tuned for more episodes.
Visit https://digitaltransformation4humans.com/ for more information.
If you’re a leader, business owner or investor ready to adapt, thrive, and lead with clarity, purpose, and wisdom in the era of AI - I’d love to invite you to learn more about AI Game Changers - a global elite hub for visionary trailblazers and changemakers shaping the future: http://aigamechangers.io/
S1:Ep68 Synthetic Lies of Deepfakes: Rethinking Trust, Identity, and Defense in the Age of AI-Generated Deception
Aaron Painter from Seattle, United States, joins me to explore the synthetic lies of deepfakes - and why protecting our identity is the next great battle of the AI era. We are going to dive into Rethinking Trust, Identity, and Defense in the Age of AI-Generated Deception.
Aaron is the CEO of Nametag Inc., the world's first identity verification platform designed to safeguard accounts against impersonators and AI-generated deepfakes.
Aaron is a global leader, having lived and worked in six countries across four continents. He has authored a best-selling book titled LOYAL. He is also a speaker, advisor, and investor to companies that are pursuing business transformation.
Prior to his tenure at Nametag, Aaron served as CEO of London-based Cloudreach, a Blackstone portfolio company and the world's leading independent multi-cloud solutions provider. He also spent nearly 14 years at Microsoft, where he held various leadership roles, including Vice President and General Manager of Business Solutions in Beijing, China, General Manager of Corporate Accounts and Partner groups in Hong Kong, Chief of Staff to the President of Microsoft International based in Paris, France, and General Manager of the Windows Business Group in Sao Paulo, Brazil.
Aaron is a Fellow at the Royal Society of Arts, Founder Fellow at OnDeck, a member of Forbes Business Council, and a senior External Advisor to Bain & Company.
Key Topics:
– What inspired Nametag as a new paradigm for digital trust
– Identity in a world where sight, sound, and text can be synthetically forged
– Where innovation ends and erosion of truth begins
– Psychological, emotional, and societal risks of synthetic media at scale
– Rethinking cybersecurity, education, and policy to counter synthetic manipulation
– Who benefits and who is most vulnerable in the AI-generated future
– What we’re most unprepared for in the next 3–5 years regarding identity and authenticity
– Building trusted systems when perception becomes weaponized
– What beliefs and assumptions we must let go of to adapt to the synthetic age
🔗 Connect with Aaron on LinkedIn: https://www.linkedin.com/in/aaronpainter/
About the host, Emi Olausson Fourounjieva
With over 20 years in IT, digital transformation, business growth & leadership, Emi specializes in turning challenges into opportunities for business expansion and personal well-being.
Her contributions have shaped success stories for corporations and individuals alike, from driving digital growth, managing resources and leading teams in big companies to empowering leaders to unlock their inner power and succeed in this era of transformation.
AI GAME CHANGERS CLUB: http://aigamechangers.io/
📚 Get your AI Leadership Compass: Unlocking Business Growth & Innovation 🧭 The Definitive Guide for Leaders & Business Owners to Adapt & Thrive in the Age of AI & Digital Transformation: https://www.amazon.com/dp/B0DNBJ92RP
📆 Book a free Strategy Call with Emi
🔗 Connect with Emi on LinkedIn
🌏 Learn more: https://digitaltransformation4humans.com/
📧 Subscribe to the newsletter on LinkedIn: Transformation for Leaders
Hello and welcome to Digital Transformation and AI for Humans with your host, Emi. In this podcast, we delve into how technology intersects with leadership, innovation and, most importantly, the human spirit. Each episode features visionary leaders who understand that at the heart of success is the human touch: nurturing a winning mindset, fostering emotional intelligence and building resilient teams. What happens when truth becomes optional? Aaron Painter, from Seattle, United States, joins me to explore the synthetic lies of deepfakes and why protecting our identity is the next great battle of the AI era. Today, we are going to dive into rethinking trust, identity and defense in the age of AI-generated deception.
Speaker 1:Aaron is the CEO of Nametag, the world's first identity verification platform designed to safeguard accounts against impersonators and AI-generated deepfakes. Nametag has become the trusted choice for leading companies seeking to prevent fraud, reduce support costs and eliminate the frustrations associated with account lockouts and high-value transaction authorizations. Aaron is a global leader, having lived and worked in six countries across four continents. He has authored a best-selling book titled LOYAL, in which he details his key to leadership: fostering a culture of listening. He is also an active speaker, advisor and investor to companies that are pursuing business transformation.
Speaker 1:Prior to his tenure at Nametag, Aaron served as CEO of London-based Cloudreach, a Blackstone portfolio company and the world's leading independent multi-cloud solutions provider. He also spent nearly 14 years at Microsoft, where he held various leadership roles, including Vice President and General Manager of Business Solutions in China, General Manager of Corporate Accounts and Partner Groups in Hong Kong, Chief of Staff to the President of Microsoft International based in Paris, France, and General Manager of the Windows Business Group while stationed in Sao Paulo, Brazil. Aaron is a Fellow at the Royal Society of Arts, Founder Fellow at OnDeck, a member of Forbes Business Council and a senior external advisor to Bain & Company. Welcome, Aaron, it is a great pleasure to have you here in the studio today.
Speaker 2:Thank you, emi. It's a real pleasure to be with you, and so many of the topics that you've discussed and the ethos, the philosophy for these interviews really resonated with me, so it's a privilege to be with you.
Speaker 1:Amazing. Let's start the conversation and transform not just our technologies, but our ways of thinking and leading. If you are interested in connecting or collaborating, you can find more information in the description, and don't forget to subscribe for more powerful episodes. If you are a leader, business owner or investor ready to adapt, thrive and lead with clarity, purpose and wisdom in the era of AI, I'd love to invite you to learn more about AI Game Changers, a global elite hub for visionary trailblazers and changemakers shaping the future. Aaron, I've been looking forward to this conversation for quite a while, so I'm excited to learn more about you, your journey, and I would love you to share with us what is guiding you towards such incredible achievements.
Speaker 2:Well, thank you. You know, the most recent set of things I focus a lot on is this idea of protecting people online and protecting people in the digital world, and for me, that started about five years ago. In particular, it was the pandemic. It was March of 2020, and I had several friends and family members who had their identity stolen, and everything, if you remember, was closed and there was no way to go into a branch, into a physical office. I said, okay, I'm going to be a good son, I'm going to be a good friend, I'm going to be a good grandson. We're going to jump on the phone, we're going to call these companies, we're going to figure out what's happening. And it turned out, when we called companies, before they could help us, they had these famous security questions that they had to ask, and it turned out that someone had called before us and was able to answer those security questions and then take over the accounts of my friends and family and loved ones.
Speaker 2:And it was sort of at that moment that I realized that in the growing world of digital transactions, where, during the pandemic, everything had become digital, there was suddenly an even greater need to know, you know, is someone human?
Speaker 2:And oftentimes, which human are they when they're behind the screen, whether that's on the phone or computer or another format? That's what led me to really try and solve the problem. I went out and found some of the smartest minds I could in security and identity, and we started building this really powerful product that could verify who someone was in those remote environments, and we knew the ability to impersonate another person has only gotten better. You know, we've given these bad actors new superpowers with Gen AI technology, and they're using that often to impersonate others and take over accounts. In that era, identity has become the leading cybersecurity attack vector targeting companies all over the world, starting in the US and most recently the UK, and many, many global enterprises of all sizes have become victims of these cybercriminals that are using Gen AI tools to impersonate rightful account owners and take over the account. So preventing this is what motivates me and keeps me and our team very, very focused on trying to build a safer internet.
Speaker 1:Thank you so much for sharing this story. I can imagine how stressful it's been in the moment. At the same time, it's been such a turning point, because you created something that can help so many protect themselves and avoid that situation and the stress connected to identity loss. We all can think about it exactly as you mentioned, but it's a completely different story when you need to go through the situation in real life. Your solution is truly impressive. You are pioneering a solution to one of the most existential threats of our time. I'm really happy that you shared the moment of insight that made you realize that the world really needs Nametag. Aaron, we are entering a world where visual truth is no longer verifiable. How do you envision identity evolving when what we see, what we hear and what we read can all be synthetically forged? How can we still authenticate our human individuality?
Speaker 2:Yeah, a lot of topics today. When we think about synthetic content and the rise of technologies like deepfakes, oftentimes we think of them as all about AI. We think about even what people call deepfake detection, which is the idea of using AI to detect AI, and one of the challenges with that in particular is that it's an arms race. Someone will always be slightly better than the other; often the bad actor is slightly ahead of the good actor in having an AI model that can deceive the models that are meant to detect AI. And so it raises these interesting questions about our digital era, because so much of what we now do is remote, and to your point, as humans we've learned that we can trust what we see and what we hear. But when that happens on a screen, increasingly we really can't trust what we see and what we hear, and that does raise a whole number of existential, very big-picture concepts that we need to rethink as a society that's become very reliant on digital channels: things like video calls, Zoom and Teams calls, and even regular voice calls. If the voices can be made synthetically or the videos can be made synthetically, then can we really trust the type of human interaction that we think we're having? So one of our big insights was to think about other technologies that could go along with AI in order to help prevent bad actors from using just AI. And some of our big insights there were really around what, in the world of security, we think of as cryptography. Cryptography is what protects internet transactions in general. You might think of HTTPS, you know, or more secure web pages that you might visit, but cryptography is commonly found on our mobile devices in particular, and one of the strange insights that we often don't realize is about our computers.
Speaker 2:Over the years, as you mentioned, I worked at Microsoft for many years, and obviously we did a lot with Windows-based computers and our OEM partners, so many great OEM hardware manufacturers. But over time, as we started to build these things, Intel built in what they call the Trusted Platform Module. People call it a TPM chip, sort of a chip inside a laptop, for example, that allows things to be better protected. So you might have, let's say, a fingerprint reader on your computer. Today that's often connected to this Trusted Platform Module, which means it can't be interfered with; you can sort of trust that if someone's using that fingerprint reader, the signal isn't intercepted or used in an unintended way. But what's sort of strange is that the webcams that we have today on our laptops are not connected to that secure chip, and so it's really not a secure channel.
Speaker 2:So the things that we rely on for what we see and hear, to your point, that webcam or the microphone that's picking up our audio (nor is the microphone connected to that chip, for that matter), can't necessarily be trusted as an input source. And just like it's easy in a Zoom or Teams call to change the camera or to change the microphone, a bad actor can do that quite easily as well and make it still seem like you're using, maybe, the camera on the laptop, but instead make it a piece of software that's creating, let's say, a real-time deepfake. And so as long as we're relying on things like that webcam to power what we're seeing, it's going to be a world where we're allowing bad actors to potentially deceive us. So one of our big insights was this.
Speaker 2:Let's say you only had a video call, but it operated in a secure app on your mobile phone. Well, it turns out that in the walled garden Apple allows us to operate in, or even on Android, an app that's come from the Play Store can be what's called attested, meaning it's legitimate. It means that the hardware is in a sound state. It means that you can actually use the power of cryptography to trust the evidence collection. You can make sure that the camera on that device is the camera that you think you're dealing with, which means that someone hasn't intercepted it and hasn't tried to use a deepfake instead, and so you can use the power of cryptography to make sure that the evidence you're collecting, let's say if you're trying to verify someone, is in fact sound. And so our big insight was: let's take things out of the desktop web browser cameras that are less secure and move them into mobile environments. So when we allow someone to verify their identity, we use the power of mobile technology, we use the power of that cryptography.
Speaker 2:We use, let's say, the Face ID camera. Even though we're not using it for Face ID itself, it has many advanced features around biometrics and three-dimensionality to make sure that someone has a three-dimensional face, that they're human. We're able to take all of these different inputs from a mobile device and have a much higher-assurance, more authentic or, some might say, more secure way of verifying who a person is. And so our philosophy is sort of simple: let's use something that everyone might have in their pocket. For a long time they've had their government-issued ID, their driver's license or their national identity card or their passport. They've often had that in their pocket, and in fact they use it, you use it, we use it in the real physical world. But if we could allow someone to take that and combine it with something else they often have in their pocket, which is their mobile phone, then together we could get a very high-assurance way of verifying who someone is. That was our big technology insight, and that's what allowed us to do identity verification in this new way that could be used for more secure use cases, and we hope that by doing that you can have much more trusted digital interactions.
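For technically minded listeners, here is a minimal sketch of the pattern Aaron describes: only trusting camera input that arrives with a cryptographic proof bound to a server-issued challenge. It is written in Python with entirely illustrative names; nothing in it comes from Nametag's actual product, and a real deployment would verify a hardware-backed platform attestation (such as Apple's App Attest or Google's Play Integrity) rather than this local HMAC stand-in.

```python
# Illustrative sketch only: a server accepts a captured frame solely when
# it carries a cryptographic tag bound to a one-time nonce. A real system
# would validate a hardware-backed attestation token, not a shared-key HMAC.
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # stand-in for a hardware-backed key

def issue_challenge() -> bytes:
    """Issue a one-time nonce so a replayed or injected video stream
    cannot reuse an old proof."""
    return secrets.token_bytes(16)

def attest(nonce: bytes, frame: bytes) -> bytes:
    """Stand-in for the signature secure hardware would produce:
    a MAC over the nonce plus a hash of the captured frame."""
    digest = hashlib.sha256(frame).digest()
    return hmac.new(SERVER_KEY, nonce + digest, hashlib.sha256).digest()

def verify_capture(nonce: bytes, frame: bytes, tag: bytes) -> bool:
    """Server side: trust the frame only if the proof checks out."""
    digest = hashlib.sha256(frame).digest()
    expected = hmac.new(SERVER_KEY, nonce + digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce = issue_challenge()
genuine = b"frame bytes from the attested camera pipeline"
tag = attest(nonce, genuine)
assert verify_capture(nonce, genuine, tag)                         # accepted
assert not verify_capture(nonce, b"injected deepfake frame", tag)  # rejected
```

The point of the sketch is that the decision no longer depends on how convincing the frame looks: a frame that arrives without a valid, nonce-bound proof from attested hardware is rejected outright, which removes the detector-versus-generator arms race from the loop.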
Speaker 1:It feels like even this is just a matter of time, but for now it is an amazing step forward.
Speaker 1:The more you were sharing about the details, the scarier it sounded, because then you start thinking about the reality. Oftentimes we just don't go into the details, we don't dive that deep in order to understand how it works, and when we start thinking about it and understanding what the danger is, then, of course, we start seeing our interaction with the digital world in a completely different way. So this conversation is so timely, so needed, and I hope it will help many of our listeners and viewers avoid potential troubles connected to this type of synthetic lies. There is a growing normalization of synthetic content, as you already mentioned, and all those face swaps, voice clones and AI avatars are around us, people enjoy creating and using them, and it looks like the next step forward for humanity. But where do you personally draw the line between creative innovation and the erosion of meaning and truth itself? And who gets to decide, in a world of accelerating technological advancement, where that line should be?
Speaker 2:I think a lot of it comes down to the scenario or the use case, because there are wonderful uses of deepfake technology or face swaps or the ability to synthetically create someone. For example, wanting to teach a student history lessons and bringing a historical figure back to life can be incredibly profound. You know, we used to think that allowing students to watch a video maybe made it more dimensional than reading text about a historical figure. Well, imagine if suddenly you could interact with a historical figure, and they might look or sound like that historical figure actually did. That's a powerful, beneficial use for society. The ability to bring back loved ones, or to help people perhaps later in life who are having memory issues to interact with loved ones who are still alive, to have real conversations and make it feel like they are talking to people they care about, or to assist them with memories. Those kinds of use cases, to me, are very powerful and are great applications of how some of this technology is evolving. However, there's the world of impersonating someone else, and another area you often think about is posting social content. It is very difficult, I think, to police whether content that's posted is real or genuine. But I would argue that platforms have a responsibility to know who their community members are, to know who is posting content. It's just like, let's say, when you go to an event: maybe you check in, perhaps you give them your business card, maybe you show them your government ID, and then they give you, coincidentally, a name tag that you might wear and walk around the event, and suddenly you know you're in kind of a safe area. You know that everyone you meet has been vetted, they've been verified in a way, and you know that the name on their name badge and the company they're from is probably accurate, and you can get to know each other and build trusted relationships. That's one of the reasons people go to in-person events, right? The same thing if you go to the airport: you go through security, someone's checked your boarding pass, they've checked your ID, and you know you're in kind of a safe place or a safe zone.
Speaker 2:But digital communities, I would argue, have that same responsibility. Now, there's plenty of room to operate with a pseudonym or an alias, or maybe even to be anonymous on the platform, but I think the platforms have a responsibility to know who is posting content, because it allows you to help prevent bad actors. You know, in too many communities online today, it's easy to just disappear. Maybe you do something wrong, maybe you violate the rules or terms of service, and all you do is create a new disposable email address, log back in and create a new profile, and somehow you're instantly someone else. Or it could be a bot that's doing that, or it could be a bad nation-state actor who's creating that identity that others are relying on. And so then content is posted from an account, and we don't necessarily know who the account is, but we're left to question the content. So I believe that these platforms have a responsibility to know who is posting the content so that they can help create safer communities.
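As a rough illustration of that responsibility model, here is a minimal Python sketch (all names hypothetical, not any real platform's API) of a community that shows only pseudonyms publicly while binding each account to a privately verified identity, so a banned member cannot return with a fresh disposable email address.

```python
# Illustrative sketch only: pseudonymous posting backed by a private
# verified identity, so accountability survives account churn.
from dataclasses import dataclass, field

@dataclass
class Community:
    accounts: dict = field(default_factory=dict)  # verified identity -> pseudonym
    banned: set = field(default_factory=set)

    def register(self, verified_id: str, pseudonym: str) -> bool:
        """verified_id stands in for the result of an identity check;
        the public only ever sees the pseudonym."""
        if verified_id in self.banned:
            return False  # a new throwaway email changes nothing
        if verified_id in self.accounts:
            return False  # one account per verified person
        self.accounts[verified_id] = pseudonym
        return True

    def ban(self, verified_id: str) -> None:
        """Remove the account and remember the underlying identity."""
        self.accounts.pop(verified_id, None)
        self.banned.add(verified_id)

c = Community()
assert c.register("idcheck-7f3a", "NightOwl42")        # pseudonymous but accountable
c.ban("idcheck-7f3a")
assert not c.register("idcheck-7f3a", "FreshNewFace")  # cannot simply reappear
```

The design choice worth noting is the separation: the pseudonym is public and the verified identity is not, which preserves the room for aliases and anonymity Aaron mentions while still letting the platform keep repeat bad actors out.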
Speaker 2:Those are the two main ways that we see deepfakes used today.
Speaker 2:They're used often in, you know, trying to impersonate someone and take over an account, and the practical use case here is someone might call a customer support hotline, or they might call your employee help desk, and pretend to be you. All they have to say is, hey, I got a new phone, or I lost my phone, or for some reason I can't access my account, I'm locked out. And suddenly all the security measures that we've created with multi-factor authentication and trusting the device go out the window, because some, you know, well-intended help desk representative is there and has to verify: are you actually the account holder, are you the rightful customer or employee who's trying to access this account?
Speaker 2:And that's a very hard thing for that person to do over the phone, often with limited tools, and often you have to do it in a way that makes you sort of an interrogator, when actually you're trying to be a customer support representative. So that's one big scenario and way that people are using deepfakes: to trick those agents and take over accounts. It often has nothing to do with you; someone can just call up and pretend to be you. And then the other side, as we were discussing earlier, is often around posting content that might be real or might not be, and that's often because the platforms themselves don't know the authenticity of the person who's posting the content. So, taken together, these are the things that really concern me and, I guess, also give me hope and optimism in this new digital world that we're living in.
Speaker 1:It sounds really profound, but it led me to thoughts about our freedom and the total control we might fall under through these technologies. So how can we ensure that this type of technology is going to be used ethically, with consideration for the freedoms of human individuals?
Speaker 2:I don't think we have much flexibility. These tools that have been created have gotten so advanced. I think specifically about ChatGPT, for example, or LLMs, or Gen AI technologies in general. They're so democratized, so out there and available to everyone, that I don't think we get to choose anymore whether it's a good actor or a bad actor who's using them. And so we have to assume that both good and bad actors alike are going to have access to these tools.
Speaker 2:What I'm very focused on is helping to make sure that people whose accounts need to be protected, companies that need to protect their customer accounts, companies that need to protect their employees and their workforces and all the data an employee might have access to, for example, are also using the technologies that the bad actors have access to, because the only way to defend ourselves is to be equally as advanced. In some ways, I think we can be more advanced than the bad actors by combining technologies we discussed a bit earlier: not only AI, but AI with cryptography and perhaps biometrics coming together. That gives us a much stronger arsenal against a bad actor who is trying to just use AI. We have to be deploying these advances universally on both sides if we're going to stand a chance, because they are universally accessible to everyone, and that means I'm not sure we're going to necessarily get to ethically or morally decide whether only good or bad people use them.
Speaker 1:This reminds me exactly of another conversation I've had on my podcast with a cybersecurity expert from California. We were talking about exactly similar outcomes and dynamics between the good and bad actors, and the battle we are exposed to and absolutely have to win. I love the beauty of it, and it's amazing that you could come up with such a fantastic solution. And to everybody, a kind reminder: if this conversation sparks something for you, hit like, follow or subscribe, and share it with one person you know would be inspired by this episode. Sharing is caring. Aaron, some argue that deepfakes are just another wave of disruption, but what are we dangerously underestimating about the psychological, emotional and societal impact of deepfakes at scale?
Speaker 2:I think in many ways, deepfakes are just a new level of technology disruption. But every layer of technology disruption, whether it's computers and digital, or even just new technology, whether that was the wheel or the shovel or the connectivity between devices or the personal computer, you pick any wave of technological disruption, and it has the ability to cause and create disruption and change. And we're in an era where deepfakes certainly have the ability to create disruption and to create change, and, like all change, it can be for good or it can be for bad. And so one of the really critical things I think that we're often not taking into account is how human this change is. What I mean by that is this ability to trust people. So much of our instinct and our nature as humans revolves around building trust, and trust often stems from things that we see or that we hear or, in a way, that we feel, and the ability for someone else to hijack those emotions and to manipulate what we see or what we hear, and then ultimately how we feel, can all be heavily influenced by these Gen AI technologies. That creates a slightly more profound disruption in a world where we have many, many more things that we see and hear, and where trust is even more important than ever. So deepfakes, or these new Gen AI tools, certainly are the latest wave of disruption, but in some ways they get better and better and more profound, and this is technology that fundamentally learns on its own, learns from how we respond as humans, learns from new data sets or data sources, and gets more targeted.
Speaker 2:As an example, one of the things that we're seeing most active now in the threat landscape is targeting health information. Bad actors are targeting hospitals and doctors and insurance companies, anyone that has access to protected health information. And for a while, we were sort of confused. We said, well, why is this selling for so much more on the dark web? Why are bad actors selling this for so much more than other forms of personally identifiable information? As we started talking to others, to some of our customers and the people that use our technology and think about this all day, we realized one of the new threats that's emerging: bad actors are taking this data they're stealing, and they're using it for more targeted, more informed, more manipulative attacks.
Speaker 2:So you could imagine, instead of a spam email saying, you know, I'm a prince somewhere, please send me money, suddenly people are saying, hey, I'm following up from your doctor visit last week. Or, hey, there's a new drug available for people with your condition, or there's a new support group available, or your doctor, you know, Dr. X or Y, said this and I'm following up. Suddenly, you can do much more targeted attacks that, of course, are going to elicit some kind of a response from the person who receives them, if you can start to bring in these very intimate personal details, especially about someone's health or medical conditions. So it's an example where people are using stolen data that's increasingly digital, and then they're running it through these more targeted Gen AI models to be able to make a very compelling, very targeted outreach at scale, to get people to believe it, and suddenly they are hijacking your trust.
Speaker 2:If they're hijacking your trust, then a lot of bad things can potentially happen if that person doesn't have good intentions. And so these new tools that are available are being used by bad actors, and it is giving them superpowers. It's giving them the ability to manipulate people in a fundamentally new way. And given that so much of what we do is virtual and happens through these digital channels, if you can't trust what you're seeing or hearing, you fundamentally come back to this question of: can I trust the person I'm interacting with? And that's a very profoundly human question, and one that I think, if we don't solve it the right way, paints a very difficult picture for the future.
Speaker 1:I couldn't agree more. It is a fact that deepfakes are the new weapons of mass disruption and probably the ultimate business nightmare. And your example is mind-blowing, because it is so unreal and so real at the same time: how far somebody can go in order to impact another person or get access to personal data, to funds, to the financial system. So we need to dive a little bit deeper here. If it's okay with you, I would like to hear more about modern defense. What should it look like, not just in cybersecurity, but across education, public policy and everyday decision-making? Who is truly responsible for building our immunity to this synthetic manipulation?
Speaker 2:I think we need the help of technology partners to implement solutions that we do know can work. Again, there is increasing innovation here; we spend a lot of time on this. I talk to companies every single day who are innovating or trying to create new solutions to stop the bad actors, to prevent deepfakes from being used in malicious ways, to verify, maybe, who someone is as a job applicant or who they are as an employee or when they call for support. There are technologies available, but we need an unprecedented level of public-private partnership in solving some of these matters. I'll give you another very practical example.
Speaker 2:As we were coming on to record, there was a breaking story, so not too many facts have been released yet, in the Washington Post in the United States about the Secretary of State, Marco Rubio. Politics aside, irrelevant for this matter, but the report was saying that someone had created a deepfake of his voice and was using that to call and target other people and sort of misrepresent the US Secretary of State. To your point, that is an issue much broader than commerce or taking over someone's traditional account. That's suddenly about statecraft, the ability to manipulate, maybe, what the position is from one government to another, which can lead to many, many things, right, that we can only begin to extrapolate. Again, if you're used to taking calls from heads of state around the world, and suddenly you get a call and you're not quite sure if it's that person, that is something that challenges our whole diplomatic infrastructure, how we deliver communiques and what that means, let alone the speed at which we can operate. And sure, it might be wonderful to get on a plane and have important conversations in person, and certainly departments of state role-model that.
Speaker 2:But there's also the speed at which things happen now. If you can't jump on a phone call, or some other sort of communication or a video call, and know that you're speaking with a legitimate person, the speed at which things happen could only cause harm, as things move too quickly to get resolved in a fast way. And so that's an example where we need different technologies that enable secure communications, and not just in unique scenarios but at scale and more frequently, to verify who someone might be. So we're seeing this rise of new technologies being used to impersonate people at all levels of society, whether it's heads of state, or in education, or in the business community. I think the technology is innovating fast on both sides, and my big focus is how we help make sure that we're applying it for the good people as much as the bad people are taking advantage of it.
Speaker 1:That's another brilliant example. I would like to keep talking about power right now. Who will gain the greatest advantage from AI-generated realities, and who do you think is most at risk of being erased, manipulated or misinterpreted in this synthetic future? The examples you are sharing are so deep and so impactful, profoundly changing the reality we are dealing with. So what are the answers, and how can we get more protection, among other things, through solutions like your Nametag?
Speaker 2:I think that so often we talk about vulnerable populations: of course, maybe someone who is later in life gets a call from someone pretending to have kidnapped their grandchild, or someone in a lower-income situation suddenly has someone offering them money. We're so used to talking about vulnerable populations in fraud scenarios. But what's so different about this technology is that it actually is much more unifying. The example I was just giving: you can impersonate the Secretary of State. You can impersonate the Chief of Staff to the President; that happened a few weeks earlier. You can impersonate the CEO of a company; that's happened frequently now at many, many companies. Because it's not about that person, it's about pretending to be that person, and then you're relying on all the other systems and infrastructure that are very weak at knowing if it's really them. The average employee, if they get a phone call and it sounds like the CEO of the company, with a convincing narrative, they're probably going to act. They think, wow, I'm very important, the CEO just called me, and this must be an important matter. You know, I want to be helpful, I want to try and do the right thing. But how do you trust the authenticity of that call? And it's nothing to do with the CEO. The CEO in that case is being targeted.
Speaker 2:They're not a vulnerable person.
Speaker 2:It's not a vulnerable population necessarily, but it's too easy to impersonate anyone, and that's what's so powerful about this technology in the digital realm.
Speaker 2:And so I think a lot about these matters, and I think we have to think about how to use technology in more ways. I think we have to naturally be more cautious of the channel, the context, you know. Is someone calling me directly, or are they reaching out from a group messaging conversation? Are they someone that normally calls me or normally reaches out? Perhaps it's a bit unusual in a large company, let's say, for the CEO to call you. Maybe that warrants a few extra questions, or an "oh, thanks, let me call you back," and trying the main number or establishing communication through a different channel. You have to be naturally a bit more skeptical, and maybe curious is a nicer way to think of it, but you have to be a little bit more skeptical in this digital era. And I hate saying that, because so much of our humanity is about interpersonal relationships and building trust, and that means you don't want to live in a world of skepticism and doubt and caution and concern and fear. But increasingly, on these digital channels, I think we have to at least be curious and sensitive to realizing that the interaction we think we're having might not be the one that we're actually having.
Speaker 1:Great point, and it made me think: just earlier today I held a presentation in front of senior leaders in big roles, and we still have to navigate this new world full of challenges and also opportunities, but critical thinking and adaptability are going to help us stay alert and get the best out of it.
Speaker 2:I think you're absolutely right, although, you know, it's interesting, there's a story I often forget about but have been reminded of recently. Only a couple of years ago, in the early days before ChatGPT went more mainstream and available to the public, when they were doing safety testing of it, one of the scenarios was a reCAPTCHA, you know, on websites, where they sort of ask, are you human, and you have to solve a bit of a puzzle or type some numbers. The Gen AI agent at that point was not able to solve it (now they can), and so it went out and hired someone on a task worker site to go and solve the reCAPTCHA for it. Which, to your point, is quite creative, actually, quite adaptable; I mean, that is quite innovative thinking: I can't do this, so I'm going to hire someone to do it. And then the person they hired was also a bit skeptical and said, huh, this is sort of a funny ask for someone to hire me. They said, what, are you a robot? And rather than admit it, the agent made up an excuse, claiming an impairment kept it from solving the puzzle itself. That is creative, that is adaptable.
Speaker 2:And yet there we see instances of these Gen AI tools being able to apply that same level of thinking. In a scenario that was meant to keep things safe, to prevent fraud and keep things secure, Gen AI was able to overcome it. And that was years ago, well, not that many, but the technology has advanced significantly since then. One of the problems we have as a society, on the internet at least, is that those reCAPTCHA tools that were meant to keep away bots can no longer keep away bots. They don't need to hire someone anymore; that technology is kind of irrelevant at this point, but it is the main way, across so many websites, that they keep bots away from content and sites and material and account access. And so we have so much innovation still to do to keep up with the innovation that these new Gen AI tools are bringing.
Speaker 1:I actually remember when it happened, and it was such a big deal, and it's amazing that you referred to this example, because it's actually very relevant. Now we are light years ahead, and still technologies are developing so incredibly fast. So I would like to take another look into the future together with you. As we reflect on deepfakes and the future of human identity, I wonder what trend or shift, technological, psychological or maybe ethical, we are most unprepared for in the next three to five years when it comes to trust and identity.
Speaker 2:I think this ability to trust digital channels is something that we're not fully prepared for. You know, since the rise of the internet in the late 80s and early 90s and of browser technologies, we've come to rely increasingly on digital tools and digital channels. From banking you used to do in person to banking you now do online; from going into the office to having work meetings with colleagues remotely. That alone is a massive change that we've been experiencing as things have gotten digital over the last 20 to 30 years. But suddenly now we might not be able to trust those channels the way that we've gotten used to, and that is a fundamentally massive change that's going to require new innovation, new ways of doing things.
Speaker 2:Some might say, sure, we need to go back to analog and doing things in person again, and I'm sure there's some value in that in many scenarios. But I think we're also quite far into the benefits of a digital economy, and we need to make sure we're finding equal ways to protect the digital interactions we're having. As a security industry, we've had challenges like that before, and we found smart ways around them, and I feel confident we can again today. There are already great technologies out there. Obviously, we try to make one; there are other companies innovating in other ways and in other parts of the technology stack. But we have to continue to be able to trust these digital channels that we now rely on for so many different parts of our life.
Speaker 2:And that's the biggest thing that I think we're not prepared for. And, to your point, it actually does cross many different things. It is personal. It is how we have relationships, how we keep in touch with our loved ones, what we do for our work, how we educate ourselves. In some cases, how people attend religious institutions: they might not be able to go in person, but they might find forms of worship that they can do remotely and find communities of people with similar beliefs. All these different aspects of our lives are increasingly digital, and we need to be able to trust and have a sense of safety when we're in those experiences. That, to me, is a profound shift that, as a society, we're really just starting to come to terms with.
Speaker 1:It is truly an extremely profound shift. And as you were describing all that, so many questions came to my mind, the buts, you know, this and that, and then you started giving examples as well, and there were so many. It is a natural way of our development, of our evolution, because nowadays so many parts of our life are happening in the digital space, and of course there are so many questions connected to that space and the trust we can put into it, or we can choose to withdraw and become more analog, more person-to-person, human-to-human. That's a matter of choice as well, and of priorities. Of course, it's getting increasingly complex, but at the same time, I would say, increasingly exciting, because the world we are entering is a very interesting and varied world of opportunities. Aaron, what is your most important piece of advice for building resilient, trusted and future-ready systems or businesses, especially when the very nature of reality becomes a competitive advantage?
Speaker 2:That's a great, very thoughtful and comprehensive question. In a way, I do firmly believe that it's important for everyone to be experimenting with these technologies, and for people to be sharing and listening to a podcast like this and other content as much as they can. The best way that we defend ourselves is by learning and understanding, and some of that can be experiential learning. You know, go try and make a deepfake. Go try and make a voice deepfake. Upload some of your own video content, maybe, and see how it goes. Try these technologies, because the more that you experiment with them, the better you'll understand them. And the same goes for learning through others, listening to conversations like this, or reading other forms of content, blogs and other things.
Speaker 2:The level of advancement has gotten so accelerated that the defense system we have really comes from sharing and learning with each other.
Speaker 2:So one of the biggest pieces of advice that I have is: please continue to learn, continue to experiment, stay on top of these sorts of advances so you understand what's happening, and then also understand what people are doing to protect against it.
Speaker 2:Understand what's working, what technology advances are happening that can help prevent some of these things from being used for bad. That blend of knowing how the bad actors are operating and knowing what the good actors are doing to block them is, I think, fundamentally one of the best ways that we can protect each other. But it comes from actually being very human and sharing what we're learning, what we're experiencing, what's working and what's not. Humans have traditionally been very good at that, and one of the benefits of our digital economy, and the internet even, has been the ability to share learnings around the world at incredible speed, and we have new distribution mechanisms and content creation tools and interviews and ways to do it. But it takes people taking the time to listen, to understand and to try. And if you're curious and you're asking those questions, I think that we're positioned as a society to have a more optimistic outcome.
Speaker 1:This is brilliant advice. Of course, there are so many aspects, and it's not only about protecting ourselves; as you pointed out, it's also about protecting each other, and that's where our human aspect comes front and center, because we are also responsible for each other's safety and for the outcomes of everything that is happening around us, on the technological level as well. I love this conversation, and I could ask a hundred other questions, because this topic is so deeply interesting that I would love to hear so much more. To truly thrive in this age of AI-augmented reality, what core belief, habit or assumption would you say we must unlearn, both as individuals and as leaders?
Speaker 2:This world of trust-but-verify has become all the more real. By default, we as humans trust what we see and what we hear, and increasingly, I believe we need to find a way to verify what we're experiencing, particularly if it's on a digital channel, to help ensure that we can trust it. That, to me, is really one of the biggest lessons we can take forward, and something we all need to find new ways to do.
Speaker 1:Beautiful. Thank you so much for being here today, Aaron, for sharing your experience, your wisdom and your stories, which are truly impactful and so relevant to what we are all going through today, to what we need to be prepared for and keep in mind for our own safety and a brighter future as a civilization, for the whole of humanity. Thank you so much for being here today.
Speaker 2:Thank you so much for having me. It was a wonderful conversation.
Speaker 1:I absolutely loved it. Thank you for joining us on Digital Transformation and AI for Humans. I'm Emi, and it was enriching to share this time with you. Remember, the core of any transformation lies in our human nature: how we think, feel and connect with others. It is about enhancing our emotional intelligence, embracing a winning mindset and leading with empathy and insight. Subscribe and stay tuned for more episodes, where we uncover the latest trends in digital business and explore the human side of technology and leadership. If this conversation resonated with you and you are a visionary leader, business owner or investor ready to shape what's next, consider joining the AI Game Changers Club. You will find more information in the description. Until next time, keep nurturing your mind, fostering your connections and leading with heart.