Digital Transformation & AI for Humans

Cybersecurity Trends and Predictions: The Role of AI & Human factors in Shaping the Future

• Dr. Victor Monga • Season 1 • Episode 43

Join us as we dive into cybersecurity trends and predictions! In this episode, we explore the role of AI and human factors in shaping the future, together with my amazing guest, Dr. Victor Monga from LA, California (USA).

Dr. Monga is a Cybersecurity Technologist & Architect, co-author of Simple Solutions, Complex Problems, founder of VTF University, and co-host/producer of the Zero Trust Journey Podcast

With over 70 industry-recognized cybersecurity certifications, Victor is a highly sought-after speaker at global cybersecurity events, an adjunct professor, and an active board member of premier cybersecurity organizations.

🔑 In this episode, we reveal powerful insights on:

âś” AI & Human-Centric Cybersecurity

âś” Top Trends (2025 and beyond)
âś” AI: Defense & Threat
âś” Why EQ, Mindset, and Soft Skills are as vital as Technical Expertise
âś” Human Factor Risks
âś” Predictive AI Security
âś” Victor's Future-Proofing advice for Leaders

🎧 Listen now for expert insights on AI, cybersecurity, and leadership!

đź”— Connect with Dr. Victor Monga on LinkedIn
đź”— Tune in to the Zero Trust Journey podcast : https://www.ztjourney.com/
đź”— VTF University on LinkedIn: https://www.linkedin.com/school/vtfuniversity/


About the host, Emi Olausson Fourounjieva
With over 20 years in IT, digital transformation, business growth & leadership, Emi specializes in turning challenges into opportunities for business expansion and personal well-being.
Her contributions have shaped success stories across the corporations and individuals, from driving digital growth, managing resources and leading teams in big companies to empowering leaders to unlock their inner power and succeed in this era of transformation.

📚 Get your AI Leadership Compass: Unlocking Business Growth & Innovation 🧭 The Definitive Guide for Leaders & Business Owners to Adapt & Thrive in the Age of AI & Digital Transformation: https://www.amazon.com/dp/B0DNBJ92RP

📆 Book a free Strategy Call with Emi

đź”— Connect with Emi Olausson Fourounjieva on LinkedIn
🌏 Learn more: https://digitaltransformation4humans.com/
đź“§ Subscribe to the newsletter on LinkedIn: Transformation for Leaders

đź”” Subscribe and stay tuned for more episodes

Speaker 1:

Hello and welcome to Digital Transformation and AI for Humans with your host, amy. In this podcast, we delve into how technology intersects with leadership, innovation and, most importantly, the human spirit. Each episode features visionary leaders who understand that at the heart of success is the human touch nurturing a winning mindset, fostering emotional intelligence and building resilient teams. I'm excited to dive into cybersecurity trends and predictions. Today, we will explore the role of AI and human factors in shaping the future, together with my amazing guest, dr Victor Monga, from LA, california, a cybersecurity technologist and architect, co-author of Simple Solutions Complex Problems, founder of BTF University, and co-host and producer of the Zero Trust podcast. Welcome, victor. It's truly a pleasure to have you here today.

Speaker 2:

Thank you for having us. Amy, Really appreciate it.

Speaker 1:

Let's start the conversation and transform not just our technologies but our ways of thinking and leading. Interested in connecting or collaborating? Find more information in the description below. Subscribe and stay tuned for more episodes. I'd also love to invite you to get your copy of AI Leadership Compass Unlocking Business Growth and Innovation the Definitive Guide for Leaders and Business Owners to Adapt and Thrive in the Age of AI and Digital Transformation. Find the Amazon link in the description below. Victor, I'm so happy to have this conversation today. We're living in exciting times where the first generation going through all those incredible technological shifts and it is so great to create and have this space to discuss what really matters and open up for the new opportunities for business growth and development for all our listeners and viewers. To start with, I am so excited to hear more about you, your journey, your experience, what brought you into the field of cybersecurity and what are your passions around this topic.

Speaker 2:

Yeah, I think I wanted to be a fighter plane pilot. That was my passion, but that didn't happen, so I ended up in cybersecurity. I used to have a joke that I got here by chance, not choice, because when I was in college it was always exciting to see the network part and wireless networks. And the wireless network at my university at that time was only reserved for the teachers and professors and I just wanted to know more and I wanted to get into that wi-fi so I play video games and watch movies. So I ended up actually hacking into that Wi-Fi and I was given a choice from there either we're going to report you to the cops or you can actually be ethical hacker, and I chose ethical hacker from there. So that brought me into network security, cyber security and past tech.

Speaker 2:

It has been very exciting for me where I have architected the largest data centers for security and obviously being a security manager, cso and recently I've been doing a lot of zero trust architectures and that led me to actually start my own podcast where we talk about there's a lot of noise out there about zero trust. But how do I do this? Zero trust, zero trust strategy, but how do I implement it. Zero Trust. Zero Trust is a strategy, but how do I implement it? What do I have to change in my security program? So that's what I've been doing for the past two years.

Speaker 1:

Amazing, really impressive, and you still reminded me of my one when I wanted to become a doctor, but it happened that I became a digital business doctor, working with data and insights and helping businesses to detect their problems and the right diagnosis in order to prescribe right remedies and help them getting healthier, better, and back on track, and back on track. So, victor, cybersecurity is evolving rapidly and the interplay between AI and human factors has become a defining theme. Can you help our listeners and viewers understand why the combination of AI, technology and human centricity is crucial in shaping the future of cybersecurity is crucial in shaping the future of cybersecurity. Why is this balance so important as we look ahead to 2025 and beyond?

Speaker 2:

For sure. Yeah, so for those who have watched the movies Matrix, they can appreciate the sentiment where we are at the point where either we can let go of the control or we can take control. And this is the point where we want to take control and not let go. We don't want machines to take over. And again, yes, I am air coding and really going into the matrix world. But we are scratching 1% or maximum 2% of AI use cases today. But that also means that AI is going to have a brain of its own. That's the whole point of it where it can predict, it can see things, it can suggest, it can do things for us. Right now, again, we are the 1% or 2%. You humans can use this as an opportunity to really one embrace AI. We don't need to decline it because it is here to stay. The use cases are almost to the point where we just can't deny it. It is here to stay, but we humans can take control. We take the charge, but we humans can take control. We take the charge. We really start actually using ai to help our day lives, to get better at doing our, our tasks. So that's one in.

Speaker 2:

Second is, I think, in again from between the human and ai, if you will.

Speaker 2:

It's gonna sound like a sci-fi movie, but at the end of the day, machines are going to make a logical, objective decision.

Speaker 2:

Always, humans who does have emotions, have senses and have sense in general, can actually make a subjective decision that can help mankind when it's needed. So that's where the contrast is, where we can continue using AI to make the objective decisions that we want it to be, where we humans need to make the subjective decisions for whatever, whenever it's possible, and sometimes things are not going to be obvious to an AI or a machine that will be obvious to humans. So there's a Turing test. Obviously all of us have seen or heard of it and when it comes to machines, if a machine can pass a Turing test, that means they have a conscience of themselves. That's where humans comes in. Humans design that test. We can always actually have a leg over AI because we can always actually test and measure and have the assurance that AI is doing the right thing, what they need to do, while humans are verifying and humans are still in control to make sure that AI is not taken over.

Speaker 1:

Speaking about this point, what do you think about probability of singularity?

Speaker 2:

about this point. What do you think about probability of singularity? And we're gonna get into a lot of controversy here, but again it's gonna sound like a sci-fi at the end of the day. Again, I think I feel like we're not there yet. We, we are really at the level where it can go either way. Again, a choice was given in my life as well either go black hat or go white hat in in a way, where I chose to be a white hat and and make sure that I use my skills and my my abilities for the right reasons. This is where we are today, where we shape ai to continue, use and help mankind to the decisions and predictions and analytics that we humans can process at the same speed and pace, where AI can help you and Singularity is going to be part of it. Are we there yet? I don't think so.

Speaker 1:

It's possible that we will not even discover or notice that point. But at the same time, Exactly.

Speaker 1:

we won't even know if we got there exactly, but let's hope that we are not there yet, at least not yet. But the world needs more leaders like you, definitely, in order to put together technologies and human centricity and highlight what really matters and where the accents should remain. So it is truly important to see what is crucial, because oftentimes those AI development programs they are seen just as a roadmap, just as another project, but it is so much more than that and there is so much more to it in reality and its impact is incredibly big and whole the humanity depends on certain decisions of those leaders who might stand behind the outcomes to the better or to the worse, with exactly their name and the date. They took certain decisions right. So it is really important.

Speaker 2:

Well, right now, I think that there's a lot of noise as well, right, like there's actual signal, there's actual impactful conversation around AI, but there's also a lot of noise as well, which is what distracting maybe leaders to make the right decision because they're not being informed about the right things. They're being informed about these distractions, which is not true. So when they actually start analyzing, when they start uncovering that and they realize that this is all a facade and that's what, maybe they are feeling that this is all of these things are just a distraction and that's what. Maybe they are feeling that this is all of these things are just a distraction, but that's not true. Maybe 70% is distraction, 30% is true. So, as a leader, it's up to you how you're going to uncover and go through the decision matrix of what's true, what's not, what your organization, for your security team, for your security program. That's what you need to do. You need to have that hearing test, if you will, to decide which way to go and how to actually see what's real, what's not.

Speaker 1:

Yas Moton. Actually, this is truly important and it's always up to us to choose what to believe and how to prioritize. And it's always up to us to choose what to believe and how to prioritize. What are the top cybersecurity trends we are seeing emerge and how do you predict this will evolve this year, in 2025 and in the next three years? It's quite difficult to see far away ahead in time, but at least let's take a look at three years ahead. Are there any surprises or shifts that might redefine how organizations approach security strategies in the near future?

Speaker 2:

Well, look, there's a global shift going on with an AI because it's watering down the specialization, if you will. It's watering down the specialization, if you will. So think of it this way If I was an expert in cybersecurity and you were an expert in healthcare, like you're a surgeon, you're very expert at one area. Now it's going to water down a little bit because I can ask an AI a question about cybersecurity, and maybe a neurosurgeon as well, and they will have an answer. So there's a global shift going on where the general questions, general knowledge, general information it's not a commodity anymore, it's not something that only you and you hold it. What's relevant is your experience in the field. That's something AI or the chatbots can't give you. That stays with the human. If I have been through an incident and I have gone through the stress and I've gone through the post-mortem of the incident, that stays with me and I can inform the next time or next company or next organization I'm consulting and inform them. Same for you If you had done 500 surgeries, that experience for each surgery stays with you. That's the data sets of the humans. I think that's something that locally residing, but global shift is happening where the chat spots are pretty much taken over. So the general knowledge, the first level conversation, it's been taken over by chatbots. The local shift or the organizational shift, is where now.

Speaker 2:

Cybersecurity used to be hot. Cybersecurity used to be something that only a few people in the organization knows about it. Mainstream media has changed, changed. It has changed since covid time. Everybody started working from home, everybody started learning more and more about computer security and all those things and this industry as much as it used to be hot, not anymore. So that shift is going on, where now, if you go to your cfo, if you go to your board and try to just fear them with oh we need more money because we need to be secure, the tactic doesn't work anymore. It hasn't worked in the past four or five years. It used to work. So that shift is happening as well, where you don't have a whole lot of money left for security. Now it used to be like you almost have a blank check. On the contrary, what board of directors and the executive suite is asking is give me a business case and I will give you money for a program. So it's not on the name of security, it's on the name of ROI. Back to basics, business 101. I'll give you 10, return me 100, but you have to justify how that 100 is quantified. Not because that I will make sure you don't get hacked. And that, too, most of the security leaders are saying well, I can't guarantee that. So board of directors are saying I'm happy to give you the money you're asking, but you can't guarantee that we will not get hacked. So what is the point of all of this? Hence the business use cases, business decisions and the conversations about bring me a business case and I will give you money for a program. That program can fund cybersecurity, along with other things, and that's where now you start seeing cybersecurity leaders more and more speak in the business language.

Speaker 2:

Going back to the shifts what's happening in 2025, as the new shift is happening to threats, we see a new breed of threats, which is deepfakes, the generative AI content. Now the insider threat is pretty hot because everybody has understood how to actually hire talent and skill globally. So, instead of looking at your local town or local city or local state, I can hire someone around the globe and they can work for me. But how do I know that person is just working for me? Maybe they are working for my competitor as well. Maybe they are working for my competitor as well. So that new shift is happening where the threats are evolving and hence cybersecurity needs to be prepared to combat that.

Speaker 2:

So for years we told people that if a CFO sends you an email to transfer money into a Cayman Islands account, you need to text that CFO. You need to get on a Zoom call with the CFO to make sure that you look into the eyes of the CFO to confirm that they are the one asking you. With Deepfakes, I can do that. Now. With Deepfake, I can get on a call with the employee or a controller and ask them hey, I'm the CFO and I need you to transfer this money because we have a great deal coming up and we need to close it right now. So who do you trust at that point? So that's a threat.

Speaker 2:

That's the shift is happening where now it's not about simply going back to the tactics we had told our employees via text, call or video calls. We got to think about something else as well. Maybe a code word CFO is given that they need to verify. Maybe a cheat book that employees have in their hand and everybody has their code word, and their code words changes every month. I'm just throwing it out there right now. The point is, the tactics have to change now. Some of it is going to go back to the old school ways of validating and authenticating who you are, how you are, instead of just the digital ways. One thing is changing over things we have seen in 2024 and that's going to trickle down in 2025.

Speaker 2:

Part of the research we did at VTF University. The technologies now have a flavor of AI, but that also brings implications to the organizations, to the enterprises, because just because they're saying powered by AI, what does that mean? And that's where the research we try to uncover what does that mean? Powered by AI or we're using AI? Does that mean that the product, the software, is using AI to get better at their code and their product? Or does that mean they will use the customer data as the data set to train their models and get better at it and get better at it? And that's the leak. You have to see and read the EULA, the User Acceptance License Agreement, because a lot of the companies again, I don't want to badmouth anyone, but a lot of the companies they are sneaking in those terms and saying that if you use my product. I'm entitled to use your data to train my AI, and maybe you don't want that.

Speaker 2:

Imagine you are in a large aviation company that is building the next 390 or 395 bus I'm giving an example as 380. That will not use fuel, that will actually run on solar. Again making it up here, the point is that if you are doing that, that data sets get trained into an AI that you were using for meeting nodes, even that gets actually fed into some sort of a larger chatbot. And now, as a newbie, victor typed into the chatbot what are the creative ways of efficient flying? And it could be a new 395 using solar to fly. There you go All of a sudden. Your millions or billion dollars of research is out there on the public and the worst is, if I even ask the chatbot to hey, do you have designs, do you have plans? Do you have solution? Brief, do you have architecture diagrams or architecture decisions? I can take that to the comparator and say, hey, maybe you guys want to speed up your r&d. So that's, there's a new. Um, I would say ways of thinking.

Speaker 2:

Now where, if you have your disaster recovery planning but you have disaster recovery planning if that deepfake video is actually harming your company reputation. Imagine and again, this is all hypothetical. Imagine, ceo of a company is actually accusing the company of being unethical and that is put out in the news media or YouTube, and that YouTube got viral. But that's just a deep fake. How do you mitigate that? Or an AI-generated website which looks like your website is actually put out there with the competitor products and hyperlinking to the competitor. How do you combat that? So, again, you have to rethink the playbook now. You have to go back to disaster recovery planning. You have to go back to business continuity planning. What are the essentials that you need and how does AI play into that?

Speaker 2:

One of the recent research when we conducted the survey was about does the companies even know that they're using AI?

Speaker 2:

How many companies?

Speaker 2:

How do you know that you're using AI?

Speaker 2:

It's not just about the chat box.

Speaker 2:

It's not just about how the employees are typing information in the chat box.

Speaker 2:

What other AI solutions? Most of the respondents said that we don't know, because the meeting notes that could be ai. Next time, when you go into email, maybe that's ai. The apple ios is apple intelligence. Maybe it's a company device but has company email. That's also AI. So how do you know which extent up to which extent, you're using AI and how your data sets are being trained in the model and how the leakage of the data is going out? So that's, I think, where we are seeing 2025, the organization security leaders and I'll talk more about it as we progress with this podcast where the pen testings talk more about it as we progress with this podcast where the pen testings now the pen testing is coming up with. Can you make sure that my data is not in the major popular chatbots? Can you poison my AI? If I'm using an AI, can you poison it? Can you direct the AI to say things that they're not supposed to say, just like break their guardrails, if you will? So I'll talk more about it.

Speaker 1:

It sounds both impactful and a little bit scary very scary actually and I am thinking about the fact that it requires certain personal treats from the cybersecurity leaders in order to be able to deal with all those uncharted territories and immense threats and risks and still stay ahead of the competition and do what you are supposed to do and help your business be protected and stay safe. So it is really exciting, and also not so easy as it was before, to say the least. But let's take one step deeper into the direction of black heads versus white heads, and I'm just thinking. Ai is increasingly being adopted across industries, but it's also being weaponized by cyber criminals and other groups. Could you share your perspective on how organizations can stay ahead by using AI as both a defensive and offensive tool in their cybersecurity arsenal? How can they ensure that AI strategies remain adaptive to emerging threats and, for example, what to do with the deepfake, because, as you mentioned, it takes seconds, minutes to go viral and it has an incredible detrimental impact. So how to deal with all that?

Speaker 2:

Be vigilant. I think that's the best you could do right now. The vigilance is the only, I think, option right now for cybersecurity professionals. Now let's talk about Black Hat, white Hat, how they're using AI. So if you go back in time a little bit, where, when the cloud was launched newly launched before cloud we used to attribute IP addresses to the threat actors. It was as simple as if the IP is coming from North Korea, then I know the threat actor is from North Korea. If the IP address is coming from Russia, then I know exactly it's from Russia and the IP is from China, so on. But with cloud and anyone with a credit card could use cloud that attribution went away because now the IP address is actually coming out of cloud. So I don't know if the threat actor is actually sitting in Russia, china or North Korea or any other country. It's the IP address of the cloud. It's AWS IP or Azure or whatever that flavor is. And then eventually cloud providers got better at it that you can't use my cloud infrastructure for hacking. Then they put the controls around it. They actually put detections. So if you use today a cloud provider for malicious activities, they will block your account, they will clean up that instances and all that.

Speaker 2:

It took time. It was not overnight. For the longest period, productors were using cloud as proxy to attack the victims because it was new Cloud providers were busy making money and they were not thinking about these things. Fast forward relate the same to AI now. Ai providers today are busy creating AI and making money and we're not thinking about the malicious activities or malicious use cases. But threat actors are using it. If you think about it Today, if you go and sign up at ChatGPT or Cloudy or any other, all they care for is an email address, some information and a credit card.

Speaker 2:

How do they know how I'm planning to use it, good or bad? And that's where we are at the AI today. We are at the point where the providers are not yet thinking about all the malicious use cases. Yes, they are thinking about it. They are doing everything in their power to put some guardrails, put some governance around it, but it's not there yet because it's just so new. You don't have a baseline what's good, what's bad? Right, in order to create the guardrails, you need to know what is expected out of a good behavior so that everything else is bad. We don't know that yet it's so new. Everybody is still asking for cat pictures to create from chat DPTs. We need to have the baseline of the good behavior so that we can block and prevent the bad behavior that will happen in the future. I hope so, but that's where we are. The black hat and the hackers are currently using the AI because it is so new and the providers cannot distinguish between the good behavior and bad behavior. They're using it for malicious purposes.

Speaker 2:

What can we do? When we are on this side blue team when we are on the good side, we are on the side of the good team or the good guys. Vigilance when things are happening, we need to be vigilant. That's where the the really the importance of zero trust comes in, really at a timely right now, you cannot trust anything happening in the digital form right now. You just don't know. That's where every transaction has to be validated. Every endpoint has to be validated. Every person has to be validated. Their identity has to be validated. When they're has to be validated, every person has to be validated. Their identity has to be validated when they're accessing what they're accessing, how they're accessing. Everything has to be validated. The policies has to be enforced. It is not complicated, it's just hard. It's just culture shift.

Speaker 2:

A lot of times we have heard that zero trust is complicated. It is not. It is hard because now the IT people need to know everything that others need to access and others need to get access to what they need to access to do their day job. In other words, I will not be given access to anything more than what I need during the day job. Every time that. I will only get access to the application when I need it. I perform my day job and then access gets revoked or I don't have access to it anymore, and all of that is tracked. All of that is enforced by the policy. Anything I do beyond what I'm supposed to do, I get alerted or my access is revoked.

Speaker 2:

So the point is, the zero trust is really enforced to the level where you only reward good behavior out of your employees, contractors. Anything bad is going to be blocked by default. And that's where we are with AI. When you can't trust anything in any shape or form, zero trust becomes the paramount of your security. Zero trust becomes the foundation of your security. That's what we can do Be vigilant, enforce zero trust principles in your program today and do the best to your abilities to be informed about.

Speaker 2:

What are the AI threats out there? So deepfakes, everybody knows about it. Content, everybody knows about it. Maybe some of the new research is coming out where AI is actually creating these malware on the fly through web browsers. Be aware of it. There are a lot of threats coming out. Try to be informed about those threats so you can actually either bake the playbooks to respond to it or at least detect it so you can actually stop it. So that that's where I feel that the bad guys are gonna continue using the, the technology, anything new hype. It's perfect opportunity for them. This is actually the gold mine time for them, because it happens every 10 years. Internet happened, talk can happen. Ai happens every 10 years. It happened, talk can happen. Ai happens Every 10, 20 years. These errors comes in. This is their golden opportunity to actually make the most of it. They will not stop. We need to be vigilant to make sure that we can spot the bad behavior.

Speaker 1:

Sounds optimistic. Thank you so much for sharing this advice, and it's true that there's always something new, and those who are on the darker side are always among the first to adopt technologies and use them against humans in a way, and we just have to make sure that we catch up with the development and are willing to meet their efforts with our efforts in order to protect potential victims and businesses. Victor, we've previously collaborated through the Virtual Testing Foundation, nowadays the largest online cybersecurity university in the world, where I had the privilege of running masterclasses for an international audience of cybersecurity leaders, engineers and validation analysts, focusing on mindset, emotional intelligence and soft skills in cybersecurity. How do you see these human-centric skills and mindset playing a role in cybersecurity strategies? Could you share examples of how soft skills like communication, resilience, empathy and adaptability contribute to mitigating those cyber risks you've been talking about, especially when paired with technical expertise?

Speaker 2:

Well, first of all, thanks so much for running those classes because I think until even today we get some feedback that those were the best math classes run by you, and everybody really well received it. It really helped them with their career advancement and opportunities. So again, thanks so much on behalf of BTU University.

Speaker 1:

Thank you. It warms up my heart and I'm so happy that we got that opportunity to collaborate. I'm looking forward to the next opportunities and next rounds of collaboration around that, but tell me more about how you see the importance of it and why this is so crucial today and tomorrow.

Speaker 2:

You know, it's the thing we talk about during the VTF university internships, where it's so easy to spot a rookie when they join. And you know why? Because they just know all the technical details or they have the technical skills and they have the technical talent. They're missing everything else because, coming straight out of the school and college and university, a lot of times you don't have the soft skills. If you will and again, in my humble opinion, those are not soft skills, they are the hard skills that you need to be a professional, amy, with your master class, that's what they got. So where it overlaps and where it should actually pollinate, where I think the the last years of the, the education, these skills should be embedded as part of the curriculum and there's not a whole lot of universities who do that out there and that's why vtf university is so unique in its own way. We are. We spend 70 percent of the time making a student a professional and 30 giving them the technical skills, because technical skills change. I've been in industry from past two decades and I still learn a new technical skills every now and then so that that can be taught, but what's baselined and fundamental and engraved in me is the soft skills that's very hard for me to actually learn now. If you will, that's what we do. That's exactly what we are offering as part of the curriculum.

Speaker 2:

It's important for a student to know how to present your ideas because they don't have that confidence right away. We don't want our students to join a company and feel that they're left out just because they don't have the seniority or they don't have the confidence, they don't have the soft skills to actually voice out their concerns, their ideas and take the credit of all the hard work they do. We see this over and over again where someone new junior joins the company, does the hard work, but someone senior takes all the credit because you're not confident to speak up. We want to change that. We want our students to be professionals from day one and take the credit, have the confidence to speak up when something is wrong and give them the confidence of presenting ideas, presenting professionally, the briefs that you would think a seasoned professional is giving, and that's what we try to separate it. We don't want our students to join a company and feel they are rookies. We want them to feel that they are well-seasoned in the industry because technical skills they can keep learning Soft skills from day one. If they have honed on and built the confidence they can continue evolution from there, they can continue getting more confident from there. But the first piece of that foundation, they get it at BTU University.

Speaker 2:

And why is it so important we always talk about in cybersecurity? There are so many openings, there are so many jobs out there and yet we're not filling them. And on the other side, the data shows that there's almost a billion dollar industry in cybersecurity trainings and education. There's not a lack of it if you think about it. There are so many universities, colleges, training institutes, online trainings and education. There's not a lack of it if you think about it. There are so many universities, colleges, training institutes, online trainings platforms. So where is the gap? The gap is we're not helping them become the professional.

Speaker 2:

If you go to business school, you get these presentations. You have to have more than what you learned as part of education in cybersecurity IT. We don't do that. We only teach them IT and that's it. As a professional, if I work for a company, I need to learn how to speak with my manager. I need to learn how to collaborate with my peers. I need to learn how to make slides so I can present my ideas. Funny story one I'll share with your audience here.

Speaker 2:

The first time we ran the internship at VTU University, we asked the interns to create calendar invites with their managers for one-on-one. 99% of the students failed to put the correct subject in the calendar invite, failed to check time availability of the manager. Failed to actually add Google Meet, which is a way to collaborate online. Failed to put agenda in the calendar invite. These are basics.

Speaker 2:

If you think about it, if you are in a professional decorum, you will know these things. But they come. You learn them after year after year, by looking at others, by making the mistakes. We want our students to make mistakes at VTF University. This is your training. But once you're out, that's when you're ready to become a professional. That's when you are a professional. That's when you actually know all of these things that people take years to learn. We want them to learn at VTF and the minute they join a company, the minute they start their own company, they are a professional. They're not learning to be a professional. They're not learning to be a professional. They're not going to make the mistakes. They're not going to make the fundamental mistakes, if you will, we want them to make mistakes. At WTIB University they are encouraged to make mistakes so we can teach them.

Speaker 1:

It's amazing that you are working so deeply with what really matters and what others are oftentimes missing or just skipping in their programs, because this is crucial saw a picture of one of the feedback pieces.

Speaker 1:

When I was collecting feedback after every masterclass, I remember somebody one person from the group wrote to me that he had no clue how much he needed all this in reality and he was never presented to the idea that this creates such a big difference and this matters so much. But the shift was extremely visible, both for me from the side and for him as well, and there were so many other feedback pieces and so many members of those groups who were sharing that they actually didn't realize how big impact mindset and emotional intelligence and those soft skills which I totally agree with you are actually very hard skills in the end of the day, because even when you think about statistics around digital transformation projects and the number of projects which are failing, most often they're failing not because of the technology and not because of those hard skills, but exactly because of people, because of culture, because of processes behind and everything what is actually so important and crucial and not included into the specification of what really matters. And it is amazing, absolutely amazing, that you are teaching your students in such a wise and future-proof way so that they get that stable and super valuable, super strong foundation from day one. It is such a great job you are doing. I've been impressed from the beginning and I'm still impressed by your impact.

Speaker 1:

So the human factor continues to be one of the most exploited vulnerabilities in cybersecurity. Victor, how do you see organizations addressing this issue in innovative ways, especially with AI-driven solutions? Are there examples of companies successfully combining AI with behavioral analysis or cultural changes to mitigate human-related risks?

Speaker 2:

I think exploring human trust, human sentiments, is not just limited to our cybersecurity or IT. I think, if you look around, a lot of times when people are trying to break into their houses or people are trying to break into others' houses or trying to break into offices, they use that as well. And that's the same principle in the digital that I will play with the emotions of a victim where, if I know, I have daughters and they go to seven daycare, and if fire happened in LA, maybe I'll play that as an adversary that hey, there's a fire around your daughter's daycare. We are special services providing to pick up daughters or the kids and bring it to your home. Just click on this link to approve it. You don't even have to pay anything right now. Whatever, that is my point is play with the emotions. As a father, if I'm not around, I might be inclined to it. So, again, playing with the human's emotions are nothing new to get into either the digital space or the physical space With AI. There are three things happened actually in that space and that happened since the chatbots and the predictive models have come out. Models have come out. One is where when these emails, when messages or team messages are coming in. Ai can analyze and do a level of the predictive models where how true it is, because, again, ai can be objective.

Speaker 2:

Humans are subjective. Humans have senses and emotions, so hence we are always subjective, even though we want to be objective. We'll be subjective Law area, if you go and ask any judge that you have what's the toughest part of your job and they're like staying objective. Because humans are so inclined to be subjective because of our senses and emotions and empathy. We are wired like that. It's in our DNA to have empathy, so we're always subjective. But with AI as my partner in crime, if you will, I can have an objective view of the scenario or the situation right away. If I got an email, I got a text message, I got something, as long as the AI can look at it, it can give me an objective view of it. That no, based on the reports that I'm seeing, there's no file around your daughter's daycare. This number is registered to an LLC which seems to be registered in Malaysia or North Korea, and this seems to be a known scam. We have some reports from blah blah threat report Right away with those three bullet points. I know this is a scam because it's objective.

Speaker 2:

So that's where I think, as long as your AI solution has access to the data remember, that's a double-edged sword here. You want AI to help you. Ai needs access to your data, so, which means AI needs to now read my emails, or AI needs to know about my life. Ai needs to know how many kids I have. Ai needs to know about my life. Ai needs to know how many kids I have, ai needs to know where my kids go to daycare, which means if someone can break into my AI, which is reading all this information, can get all that personal information about me. That's a double-edged sword you've got to play.

Speaker 2:

So with all that where I'm going with it, that if an organization, if a company, is still concerned about, like the the first level, phishing attacks and phishing emails and phishing messages and humans being exploited at the emotional level, ai can help you be objective. Ai can help you look at the emails and even actually give you a red flag that these are AI-generated content, because AI can read that. Ai can also help you by making determination and again, this is very technical what I'm about to tell you, but AI can make a determination that this URL is actually going to a proxy site which is actually is being categorized as phishing, or the secondary site is categorized as phishing. It has an iframe which is asking you to click somewhere else, but the URL is taking you somewhere else. All of those things can happen objectively, right away, and there are solutions out there by now that can do it for you.

Speaker 2:

So I think at this point, with technology, I don't think so you need to be fearful of clicking and going to internet. We should be able to explore internet with confidence. There are technologies out there which is giving you that confidence that even if I click on phishing link doesn't matter, matter, nothing's going to happen because it's going to block it. It's going to defang all of that malicious data, malicious activity, from the browser, from the internet, from the application, from the endpoint, from the laptop, wherever that goes. It can happen. Technologies are out there already.

Speaker 2:

So at this point point, we should be telling, by deploying these right solutions, we should be telling our employees that if you click on phishing email, don't worry about it. That's a confidence. Imagine the confidence that you just gave it to your employee that if you click on phishing link, don't worry about it. We got it. We have the technology, you're good to go.

Speaker 2:

Versus what we used to tell our people that if you click on phishing email first, I'm going to send you a mandatory training, maybe I'll send you a suspension, maybe you get fired. That's a fear I had to live with as an employee. Versus with the right solution, with the right technology. Now I can actually give them the confidence that you can go to internet anywhere you want. You can click on anywhere you want you. We will help you because we we have the right solution. That's a confidence that, as a security leader, you can give it to your employees, your contractors and our organization that I am protecting businesses from this 20 years old problems of the phishing emails. We have the right solutions out there. Now you just need to deploy it at the right places and tweak it to the right policies. That's what your responsibility is To your employees. I think you owe it to your organization. To your employees, I think you owe it to your organization.

Speaker 1:

This sounds fantastic, because when you are taking that leap from fear to confidence, you can perform in a completely different way.

Speaker 1:

You are not limited anymore by your fear of taking wrong steps and doing something that you shouldn't do, and you know that you are not going to pay that price, which might be a price of being fired or maybe even going to the court because of something you've done without an intention to do something wrong, but it just happened because of new technologies and innovations in this space, and what you are offering and what you just described is a game changer in this area, because, then, it is a new space to navigate, to innovate, to work and create that exponential growth without fear of being attacked or being in danger. It is amazing. Looking forward, what role do you see for AI, not just in reacting to threats, but also in proactively shaping security strategies? Could we envision AI becoming a predictive force that helps companies prevent breaches before they occur? What are the current barriers to achieving this vision? You already touched down on this, but can you just explore it a little bit deeper so that we understand what is there on the market and what's coming next?

Speaker 2:

To get the juices flowing for our audience and listeners. Let me ask everyone here that just because we have the fanciest locks, does that mean there's no more theft going on? Just because we have AI, the breaches are not going to stop, because now it's going to take a new shape of breaches, which is going to be AI breaches, if you will. It's going to happen. It's going to continue to happen as long as people have something to lose, companies have something to lose, someone is going to take it. That's just how it works. What will happen? What will change a little bit? What can happen from the blue side perspective, is that a lot of the organizations right now they are considering the ai to at the limit of generative ai like to build the content, build the videos, images, text-based blogs, white paper selection brief. That that's a regenerative AI. Eventually, I think we're going to move towards the AI where it's going to do a lot of predictions based on the data sets, because, again, ai is so new right now it's infancy, but as it becomes toddler and kid and get to the adult level, where it's going to have so many data sets, where it's going to have so many data sets, it's going to have so much data to crunch and predict where it will help us, as security professionals, to build our security programs based on our business requirements, based on the budget. Now, just for the argument's sake, I will humor you guys. Imagine you're sitting in a boardroom with the computer on and you're asking the computer that I can only give $1 million to my security team. I can only have five full-time security employees two senior, three junior level and I can only have one CISO as full-time for strategy. And I am in the retail business. We sell t-shirts and my biggest threats are coming to the retail and my customer list and my intellectual property of how we make t-shirts, the print. You just give that as a prompt and imagine this AI and again, this is all hypothetical. This AI is actually giving you the security program, complete security program roadmap, to the point where here is a strategy that we're going to employ for next three to five years. Here's the roles and responsibility of the CISO senior security people, three security junior people and here are the technologies you're going to buy or renew. Here are the policies you're going to inform your employees. Here are the risks you're going to accept because, based on the million dollars, there is residual risk that you just can't mitigate, so you're going to accept because, based on the million dollars, there is residual risk that you just can't mitigate, so you're going to accept it. Here is the policy of the cyber insurance you're going to buy and here are the terms you're going to put into the cyber insurance. Here are the things that you will be always look out for.

Speaker 2:

For the intelligence. My point is by giving the budget, the people and the vision, the AI will be able to predict what you can do from a security program perspective, and not only imagine. It can actually tell you that if you take an approach of strict security model, then your employee efficiency will reduce by 7%. But if you take a little bit of lenient security model, then your employee efficiency will reduce by 7%. But if you take a little bit of lenient security model, where you're not that strict, your security or your employee efficiency will improve by 20%. Imagine the outcome of that in the board meeting. Within one hour, you have the decision points that the board can make right away and say yes, we want all of this or we want option one, we want option two, and let's just do it. And then the CISO and the security team goes in and just deploys and keep an eye out for it.

Speaker 2:

Assurance and governance and all of that. This is the future where we're going, what I just explained. I don't know when it's going to happen, but it will happen. That is what the application use cases. I see where I know my risk, I know my budget, I know what I can take on and what I cannot take on. But to do all of that, to actually make the soup, I just have the ingredients. But to make the soup, ai will make it, then we'll just go eat it. I just don't have the ingredients. But to make the soup, eya will make it, and then we'll just go eat it.

Speaker 1:

I love this vision and I can say only one thing exciting times ahead yeah. It is going to be amazing if, or rather when, as you mentioned, it gets into reality. Yeah, what is one piece of advice you would give to leaders, the C-suite and decision makers looking to future-proof their cybersecurity strategies in 2025 and beyond? Specifically, what mindset or priorities should they adopt to thrive in an era defined by both technological innovation and heightened human risk?

Speaker 2:

My advice to the board of directors and executive suite for 2025 is first, look for authentic resources for AI information. There's a lot of noise out there. A lot of noise out there. You need to find that authentic resource, or authentic source in your area of industry, or the right resource from the government, or the right resource from a partner or technology partner or vendor whatever that means but find an authentic source that can inform you what is coming down when it comes to AI. There's a lot of noise out there about AI. Everybody is trying to bring their own definition. Everybody is trying to make their own way of explaining AI that best suited their technology, their use case, their best interest. But find the authentic way. Find someone independent, that has nothing to sell you but yet can give you information.

Speaker 2:

Second, this is the year I highly recommend 2025. Go back to your people and look at their skills. Look at their talent. Either refresh it, help them actually get trained on AI threats, get them to actually learn more how to combat AI threats, or start retaining the right talent from outside as a contractor or someone as an incident responder who can handle the AI threats. You don't want to be on that phone call at 4 am in the morning, saturday, where you had a breach but no one knows how to handle it because you just didn't have a play knows how to handle it, because you just didn't have a playbook how to handle with AI threats. So this year, refresh the trainings, refresh the training budgets, help your staff, help your members to learn what is AI threats.

Speaker 2:

There's a lot of decent information out there already and I would highly encourage all the executives, all the leaders, to invest in in your people, in your employees, to get refreshed on ai threats and how to combat. That will inform a strategy for you for 2026, because right now, we, not a whole lot of people, even know what threats are out there. So if you don't know the threats, how are you going to protect it? Know your enemy, right. So this is the time that you will start learning about the ai threats, and 2025 is is a great start to to get started with that one. So that's my advice invest in your people, invest in the trainings, invest in the right resources, authentic resources. Try to remove all the noise and stay focused on the program. Don't lose that focus from the program, because just because the AI is here doesn't mean the other threats are just going away. There are still other threats, so keep the lights on while you learn about the AI threats and build your strategy for 2026 based on the learning that you learned from 2025.

Speaker 1:

I love your advice and this is really valuable and, as you mentioned, so many leaders are still not up to date with what's going on in the space in 2025 and beyond and how to deal with it. So, to be be updated, to learn more, to open up to the new sources of information, to the new vision and insights, it is the first step to be ready and learn more about your enemy. But we have this conversation today and you are helping other leaders all over the world to upgrade their knowledge and understand a little bit better what's out there and what's coming next. So thank you so much for being here today, victor, and sharing your wisdom, your knowledge, your experience and your vision for the coming years. I really appreciate you, thank you.

Speaker 2:

Thank you. Thanks for having me, Amy.

Speaker 1:

Thank you for joining us on Digital Transformation and AI for Humans. I am Amy and it was enriching to share this time with you. Remember, the core of any transformation lies in our human nature how we think, feel and connect with others. It is about enhancing our emotional intelligence, embracing a winning mindset and leading with empathy and insight. Subscribe and stay tuned for more episodes where we uncover the latest trends in digital business and explore the human side of technology and leadership. Until next time, keep nurturing your mind, fostering your connections and leading with heart.

People on this episode