The Security Circle

EP 074 Don Morron MBA: AI Enthusiast - 6 ways you're probably using AI that you might not know about, and the race to Quantum Computing

Yo Hamblen

Send us a text

About

Don Morron | Global Sales Leadership at Honeywell | Expert in Physical Security and Enterprise Sales | MBA with Specialization in International Business and Information Systems | AI/ML Certification from MIT Professional Education

---

I am a seasoned professional with extensive experience in the physical security industry and a comprehensive background spanning 15 years in various high-impact roles within the workforce. My career is underpinned by a strong academic foundation, including an MBA with a focus on International Business and Information Systems and an AI/ML Certification from MIT Professional Education, equipping me with the skills to bridge the gap between technology and business on a global scale.

Professional Expertise:

- Physical Security Industry: Over 12 years of dedicated service, developing a deep understanding of the sector's unique challenges and opportunities.
 
- Comprehensive Work Experience: With 15 years in the workforce, my career encompasses a broad range of roles from technical expertise to strategic sales leadership, contributing to significant growth and innovation.
 
- Educational Foundation: My educational background, highlighted by an MBA and specialized certification in AI/ML from MIT, empowers me to leverage technology to drive business strategy and operational excellence.

Career Highlights:

- Global Sales Leader at Honeywell: In my current role, I am responsible for leading global sales strategies, driving business growth, and achieving market excellence in a competitive environment.
 
- Enterprise Sales and Leadership Experience: Accumulating 5 years in enterprise sales and 7 years in sales leadership positions, I have effectively built and led teams towards achieving exceptional results.
 
- Proven Track Record: My career has been enriched by experiences at leading companies, including Honeywell, Motorola, Johnson Controls, and Sprint, where I have played pivotal roles in driving success and fostering innovation.

Personal Interests:

Outside of my professional endeavors, I am passionate about exploring global cultures and cuisines, believing in the value of continuous learning and personal growth through diverse experiences.

I welcome the opportunity to connect with professionals interested in discussing advancements in AI/ML, strategic business growth, or sharing insights on professional development. Feel free to reach out to engage in meaningful conversations.


https://www.linkedin.com/in/donm-mba/

Security Circle ⭕️ is an IFPOD production for IFPO, the International Foundation of Protection Officers.

If you enjoy the Security Circle podcast, please like, share and comment, or even better, leave us a fab review. We can be found on all podcast platforms. Be sure to subscribe to The Security Circle, every Thursday. We love Thursdays.

Yoyo:

Hi, this is Yolanda. Welcome, welcome to the Security Circle podcast. IFPO is the International Foundation of Protection Officers, and we are dedicated to providing meaningful education and certification for all levels of security personnel and making a positive difference to our members' mental health and wellbeing. With me today is Don Morron, MBA. I think that's important, isn't it, Don? MBA? Yes. Well, you've worked hard to earn it. Don, you're a global sales director, a podcast host and an AI enthusiast, currently director of cloud sales at Honeywell. But AI, you've really got the bug, haven't you? Tell me, how would you describe an AI enthusiast?

Don:

I would say someone that is fearlessly curious about the topic, right? Someone that sees not necessarily what's out on the horizon, but what it is, and is seeking to understand it as a revolutionary change to humankind. Something like that. I don't know how you couldn't be curious about it, right? So I felt it coming upon me to understand it as much as I can. That's what I've been doing for the last six to eight months: just diving into the topic in all different kinds of ways. So that's how I would look at it.

Yoyo:

The objective of us chatting today is to kind of demystify artificial intelligence in the physical security space, which is where we're going to kick off. But you're a fellow podcast host, aren't you?

Don:

I am. I'm trying to work up to your level, Yo. That's what I'm focused on. So, season two coming up, but I'm going to learn some things from you today, for sure.

Yoyo:

Yes, and that's fine. I'll give it away for free. No, I'm just kidding. My little podcast series, I think it kicked off just because I was out of work for a bit. And Mike Hurst, who is a bit of a legend in the security field, certainly here in the UK, but quite widely known in America, said, why don't you set up a podcast? And I thought, okay, this is great. This will help keep me connected with people whilst I'm off work. And it literally grew and grew.

Don:

That's awesome. Good for you. And you have the presence for it, right? And I think that's something that probably got a lot better over time. You know, you start off scratchy, and then over so many episodes you turn into a pro. So I'm certainly looking forward to being at that level with you, and to listening to more of your episodes.

Yoyo:

Bless you. Thank you, Don. Well, listen, AI. Look, I'm going to be very honest. I know that AI is probably the single most touched-on subject for security and for the security community altogether in 2023. So much so that by the end of 2023, I'd kind of got fed up with hearing about it. So how about we focus, for example, on where AI is being used successfully in the security industry? And then, after your comments, we're going to talk about where it's being used now in ways that people wouldn't necessarily know.

Don:

Yeah, you know, I think it's found its place in the industry for a long time. I mean, machine learning has been around since the fifties, and more notably, I think, the industry has come to learn it in the camera space, right? The surveillance space, using camera vision, and that uses machine learning, at least in the past, and as it developed into a more progressive form called deep learning. Basically, it's smart; we'll just leave it at that. But the point is it's widely adopted in a lot of things today, and we're going to continue to see more of it. And that's going to continue to proliferate across access control, threat detection, customer service. You can look at security operators as an aspect of what could be replaced, you know, by things like autonomous agents. There's a really interesting future ahead of us. But more so in the current state: we've been seeing it for a while now; it just wasn't that notable until ChatGPT showed up, and now everyone understands it, or at least can understand what the value of it is. I wrote a Security Industry Association article for their December newsletter release, and it's funny, I was actually talking to my wife about this on the way back from Christmas vacation, because I have nothing else to talk about but AI. And the one thing about where we're at today, and why I think the industry should absolutely embrace this, is that AI is at the point of accessibility. In the past, when you made these machine learning models, basically the AI stuff that makes the magic happen, it was tremendously expensive and time-consuming to do. Now the tools we have today are more advanced and, in this case, more accessible to manufacturers to enhance the products that all of our security folks use, like and know. That's much more within reach.
And so I think that availability, or accessibility, of product is going to allow us to see more and more of this happen over the coming months, and most certainly over the coming years. So that'd be my high-level perspective on the current state.

Yoyo:

Let's go back to basics then. I bet if ten random people in the security community were asked, you know, name five ways that you're using AI now without necessarily knowing it, I wonder if anybody would get any of these, because I didn't. I had to Google it. Right. And Google says number one, of course they're going to say number one, is Google Maps. So let's demystify the fear. People have been using AI in Google Maps for a long time. This is because Google Maps, just like most map applications, is powered by AI. But let's look at the difference: being powered by AI is one thing, but what Google does is use machine learning to analyze real-time data. That's where the real intelligence comes in. There's two separate things, right? You've got powered by AI, and you've got real-time data analytics, and these make predictions about distance, travel time, potential interruptions and more. This is where the real smart stuff comes in, doesn't it?

Don:

Yeah, it's been all around us, right? You know, again, one of the things I also wrote in the article was that we just have to embrace this. We cannot fear this; we need to fully understand it. That's one of the big reasons behind my why, of why I started the podcast: because I saw the revolutionary, humankind-shaping change that we're up against, right? And I think it's incumbent upon us as business professionals, especially those in the protection of lives, property and assets business, to at least understand what's happening and what's coming, right? And to your point, you know, this stuff's been around for a long time; I mentioned the fifties. I'll give you an easy example. When you buy something from Amazon and it tells you, hey, you might also be interested in this product, that uses a machine learning recommendation system model. And really that's on a basis of statistics. What this stuff does, what it has been doing, is either predict or explain something, and it doesn't give you an absolute, but it gives you a high percentage: hey, there's probably a 97 percent chance that Yoyo really wants this based on what's in her cart right now, right? So if you look at the security world, if we think of anomaly detection events, that uses a machine learning model, let's say probabilistic learning, which is basically looking for what is odd in this event scenario or in this timeframe that doesn't usually happen, right? What it's doing is predicting what is most likely an anomaly out of all the events. It might not tell you definitively that it's an anomaly, and one of the things about where we're at today is that you very much still need, the technical term is, a human in the loop. You still have to have a human in the process to decide:
okay, is this really an anomalous event? And you can react to the systems, and allow the machine learning to learn better, by saying, yes, that was an anomalous event. Sure. But, you know, none of this is really new. And again, I think the manufacturers will see more of this with the proliferation of generative AI. This is basically a new form of AI, right? Just think of it that way: a more advanced form. There's a whole back end of complications with that, but the biggest point is that it's accessible, and people can attach to an API that is developed by, let's say, OpenAI, and get the benefit of what extensive machine learning models, and the costs of those, would normally have been, for the low price of tokens billed on activity. So, you know, we're talking fractions of a cent. Yeah, so really exciting times.

Yoyo:

On the point around Amazon: the algorithm was so good in the past that if you were buying fertilizer, or something like, for example, well, it wasn't Semtex, you can't buy Semtex on Amazon, you've never been able to, but it would also suggest, do you want nails with that? They actually had to adjust their algorithm for people who had sinister intent, who wanted to compile the ingredients to make a weapon. They had to adjust that. I wish I could remember the true product that was linked to something else. Okay, so number two: search engines are another way that we're using AI every day. And in a very similar way, the search engine is using prediction and patterns of human behavior to suggest what you might want along with relevant results. And machine learning also, it says here, continually refines the results and recommended searches based on the billions of inputs it receives every year. Not surprising now, knowing what we know, that search engines come up number two.

Don:

Yeah, when you get into the forms of, let's call it, they call it parameters, right? The idea is that there are different parts of the framework that get you to an outcome or a decision, right? So if I'm typing something into a search engine, the parameters of the Internet, which could be billions or trillions, that I'm searching through to get down to what I'm looking for, that, from my understanding, is much beyond a machine learning model and could live within a deep learning framework. So deep learning, think of it as a cake, right? The whole cake is called AI, but in the first layer you have machine learning. It's a wide part of the cake, you know, at the base. But maybe that's not where the sweet part of the cake is that you're really looking for, so you go down a layer and you're in the deep learning section, which is likely the framework for search. When it's doing that, it's basically saying: I'm taking an input, which is my search criteria, and then I'm funneling it through what you were talking about earlier, Yo, which is a neural network. When that neural network makes its first pass, the technical term is forward propagation. All that's doing is saying, okay, I want to think about a problem and I need to funnel through all the variables that could give me an answer, and at the end there's an output, right? There are all kinds of scenarios this can run in. But then what happens with a lot of these tools is they'll also go backwards, almost like it's checking itself, and that's called backward propagation. So forward thought, backward thought, just to say, okay, did I make the right prediction, the best prediction, in this model? And a full pass like that over the training data is called one epoch. And the point of all that jargon
is that all of it takes computational power, so you can imagine at scale, when you've got millions of people searching, how that amount of compute obviously resides in the cloud, right, in a lot of ways, and it's unlikely you're going to have a search engine on-prem. But it is and it isn't, meaning that from the security industry's perspective, you can definitely have a search engine living on a deep learning framework even on an on-prem system, right? Not necessarily in the cloud. Why? Because the parameters are much lower; there's a lot less for it to think about, and its inputs are also limited, right? For me to say, let me anticipate all input variables for a general search engine basically implies that I need to understand what anyone would search at any time, and that's really difficult. But from a security perspective, we could probably fit all the things we're likely to search for into a very thin book. Right.
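The forward-then-backward loop Don walks through can be shown with a single artificial neuron in plain Python. This is a minimal sketch for illustration, a logistic unit trained by gradient descent on made-up data, nothing like the scale of a real search model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data: positive inputs labeled 1, negative labeled 0.
data = [(0.5, 1), (1.5, 1), (-1.0, 0), (-2.0, 0)]
w, b, lr = 0.0, 0.0, 0.5            # weight, bias, learning rate

for epoch in range(200):            # one full pass over the data = one epoch
    for x, y in data:
        pred = sigmoid(w * x + b)   # forward propagation: input -> prediction
        grad = pred - y             # backward propagation: gradient of the loss
        w -= lr * grad * x          # nudge the weights against the gradient
        b -= lr * grad

print(round(sigmoid(w * 1.0 + b)))  # -> 1 : a positive input predicts class 1
```

The forward pass makes a prediction, the backward pass measures how wrong it was, and the update nudges the weights; repeated over millions of examples, this is the compute bill Don is describing.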

Yoyo:

Well, let me tell you the number one for 2023. It was Taylor Swift. Everyone. Yeah, no, but she was the most searched thing, period, on the internet for 2023.

Don:

I did not get tickets to that, but I'm also not disappointed, because I'm not a fan. But that's okay.

Yoyo:

Well, there's things that can just end relationships.

Don:

My wife loves Taylor Swift, but from my perspective, I mean, I don't hate her, right? Obviously she's an icon, but she's probably just not my cup of tea; musically I'm normally more of a heavy metal kind of person.

Yoyo:

So look, other than Taylor Swift, we obviously have Siri, which comes up number three, and I think most people know who Siri is. But what's more exciting, number four, is Tesla. So those people, I think, who are driving Teslas these days are probably far more open to machine learning and the potential of AI in their lives. They've almost married it to a degree, haven't they, buying a Tesla? Have you ever driven a Tesla?

Don:

Yeah. Yeah. And it's interesting; I bought one and then returned it for a number of reasons that I won't go into on this podcast, but I do still have my Cybertruck deposit down. So we'll see if it comes in 2024 or 2025.

Yoyo:

Well, so does this article. They're calling him a genius; I can't think why. They say the self-driving technology implemented in Tesla's cars is a fine example of AI and machine learning. Firstly, the car is capable of navigating through traffic, speeding up, stopping, changing lanes and more. It's practically alive, just not breathing. Furthermore, machine learning continually improves how safely the car navigates, learns new roads, et cetera. So what it knows now is more than what it knew a year ago. And this, to your point, is thanks to the DeepScale deep neural network software, which you mentioned just now. This deep neural network, I think, is something that we're going to see and hear a lot more about. This is what detects objects of significance. This is the smart technology, isn't it?

Don:

Yeah. Yeah. And, you know, they use a great deal of camera vision, right, or computer vision; the technical term is convolutional neural network. There are three primary types of neural network. There's the artificial neural network, which primarily uses text: think of the chatbots, right? Input of text, output of text. The other one's convolutional, which is what we're talking about; it uses camera images, basically. And what is a video? It's basically a bunch of images stitched together. The name of the other one escapes me, but it's basically for coding, right? For understanding code language and then also applying code language. So when we talk about Tesla, my impression is it's a convolutional model. And within that is a super interesting realm of study. When I look at an object, the machine learning model behind all of it is classification: I'm classifying what that is. There are simple versions, and there are multi-class classification models, which is what Tesla is going to use, and that's basically identifying its environment. And again, with the amount of parameters and scenarios the vast world provides, it would most certainly live in a neural network, right, in deep learning. So, yeah, it's interesting, because they use camera sensors all around the cars. There are other cars, actually, that will put one huge LiDAR sensor on top of the car that will understand its environment without the need for basic camera vision technology. It'll be interesting to see where Tesla, as a pioneer, picks up with that, whether they're going to stay with the camera sensors or go to more of a LiDAR model. I don't know. But yeah, it's funny. Like I said, I almost bought a Tesla; actually I did, and then returned it, but I'm still looking at a Cybertruck for the future.
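The basic operation of a convolutional layer, sliding a small filter over an image to detect a feature, can be sketched directly. This toy example uses an invented 5×5 grid of pixel brightnesses and a classic vertical-edge kernel; a real network learns thousands of such filters rather than hand-coding them:

```python
def convolve(image, kernel):
    """Slide a k x k kernel over a square image (no padding, stride 1)."""
    n, k = len(image), len(kernel)
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(k) for b in range(k))
             for j in range(n - k + 1)]
            for i in range(n - k + 1)]

# A 5x5 "image": dark (0) on the left, bright (9) on the right.
image = [[0, 0, 0, 9, 9]] * 5

# A vertical-edge filter: responds where brightness rises left to right.
edge = [[-1, 0, 1],
        [-1, 0, 1],
        [-1, 0, 1]]

print(convolve(image, edge))  # -> [[0, 27, 27], [0, 27, 27], [0, 27, 27]]
```

The high values mark exactly where the dark-to-bright edge sits; stacking many such filter responses and feeding them into a classifier is, in miniature, how a convolutional model identifies objects in its environment.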

Yoyo:

How can you drop that in and not expect me to ask why you returned the Tesla?

Don:

Yeah, no, it's funny. I bought it up front, and I remember they told me, hey, buying a Tesla is not like buying any other car, and the transaction process is really fluid. I got a Model Y Performance. And it was super easy. I mean, it took all my cash up front, as if I was just transacting to buy groceries online or something. It was very simple: upload the license, everything was pretty straightforward in the beginning. And then when I got there, basically the car was scratched up, not washed, not charged. And while most people are leasing cars, I'm the one paying cash for it, so I kind of expected the car to be in immaculate condition, like most times you buy a new car nowadays, and I was reminded again how this wasn't a car dealership. Now, this could have been my experience in Georgia, but that wasn't acceptable for me. So I was just like, I don't want it; I'm returning it. And I would have had to wait another two weeks for another car, and I just decided not to, and picked up a very conservative hybrid, which is not as cool and sexy, but I certainly get decent gas mileage.

Yoyo:

What is it, a Hyundai?

Don:

It's a Toyota RAV4. Yeah, it's all-wheel drive, you know. But we'll see what happens. I mean, I definitely support what, well, how can you not support what Elon's doing? I think even what he's doing with Neuralink, right, attaching to the brain and all that, is super interesting, all the stuff that he's doing. But yeah, just from a basic sense, going back to convolutional models, right, which is kind of the basis of what Tesla is doing, that's another great example. And if anyone knows me in the industry, I'm a big advocate for cloud and cloud-based systems. For something to update like it does, once we see more 5G and communication networks used, and we've had some people on my podcast that have talked about that, right? Because you could have all this great software and all these great computers, but it's got to transmit the information somehow, right? So, in this case, I think as 5G picks up, maybe into 6G, we're going to start to see a lot of these autonomous vehicle purchases, and maybe the value of those will start to be seen, right? Because I don't know that's been the case so far. People are buying $15,000 worth of autonomous driving capability but don't really get to use it that much, right? I think that's a limitation of the tech that's available today. And I'd say that's a lot of the reason why companies outside of just physical security have a hard time adopting AI in general: because, again, in the past it's always been really expensive and very confusing to adopt.

Yoyo:

Do you remember, in 2018, something they called the Moral Machine at MIT crowdsourced over 14 million moral decisions for self-driving vehicles? These were made by millions of individuals in 233 countries, and the decisions were collected through the gamification of self-driving car accident scenarios. Do you remember where they had to choose whether a self-driving vehicle should hit a human or a pet, if there was no other option? What would they go for: more lives or fewer, women or men, the young or the old, law-abiding citizens or criminals? And so all of these moral decisions from 233 countries are compiled together, and that's what's gone into sort of building in the consciousness, I guess, of these automated vehicles. Do you trust that?

Don:

Yeah. So, man, that's a great question. Because, you know, again, I had someone else on the podcast that talked about something similar, and they said: if you were stuck in a jail cell, what would you rather have watching you, a human or AI? And I quickly said AI, because the AI has a specific existence, right? The AI is looking at something; it might be looking for a slip and fall, it might be looking for something being stolen off a shelf. So it has an existence, a kind of purposeful end or outcome that it's delivering. That's the current state, right? It could change in the future; that's basically what we're talking about today. When I talk about a human, I don't know what the human's intentions are. You know, people, we're still animals. Do I trust that in every aspect they're going to have my best interest in mind? If they have access to my camera system, are they going to be looking at it with some ill intent? I don't know, right. And so I tend to think optimistically about AI, as long as it's built with those things in mind. So what they talk about at a lot of these businesses that are building AI models is the governance piece, which is very specific in cyber, as anyone with a background in it will know, and governance is equally important here as in cyber: making sure that, and it's a good question of morality, because who sets the tone of morality, right? Different countries might have different perspectives on what's moral, right. So if you look at things like the EU AI Act, for example, and I'm not an expert in that at all; we had someone on an earlier podcast that talks about that. But having groups that decide what that morality is, I think, is super important.
And I think one of the things too, and IBM talks about this, is that AI, while it does all these great things, cannot be a black box, right? It has to be explainable. Meaning that it needs to be open about what data we're using; I need to understand the morality code behind it, like you were saying. Who's building the framework of what is right and what is wrong? That needs to be openly understood. And the thing about AI in general is that it is sort of a black box; it is difficult to understand. But when companies are able to make their own generative AI, which is kind of the trend now, they can make smaller models than what exists today, where they can have more control over these things, and they can layer in their own data, whether it's real data or what's termed synthetic data. Basically, instead of pumping in real images of a human, say it's camera vision, because this is with respect to the industry, it could just be synthetic images of a human: a lot cheaper, a lot faster, and I don't have to get a sign-off, because these are approved synthetic images that maybe other manufacturers use. So, yeah, it's a good question. And the EU is already ahead in this respect. I haven't seen terribly much in the US about this, but I could just be missing the headlines.

Yoyo:

I didn't realize that the spam filter of our inboxes was powered by AI. But lastly, I've got here Uber as number five out of the top six ways that we're using AI and machine learning without realizing it. So, for example, this is because "rareshide" apps, rideshare, it's like saying car park, isn't it? Rideshare apps like Uber use AI to determine the price of your ride, minimize wait time, route the driver to you, et cetera. And they use the Uber heat map, which shows drivers high-traffic areas where rides will be in high demand. I really didn't realize that Uber was using AI. Did you?

Don:

I can't imagine that they wouldn't, you know, and there are so many aspects. I'll give you a good example, right? They could also use AI, and so do other security organizations, to understand if there are threats happening in a certain area. Do I want to send my driver into that area if crime is up there, if a theft was just called in, or a violent crime was just recorded, right? Say a shooting breaks out, or whatever it is. Do I want to send my rider into that area? If it's a high-crime area, do I want to alert my driver ahead of time to say, hey, this is a higher-crime area, do you want to be making that drive? Right? So, 100 percent, there are a lot of aspects of this that you can't do with a simple if-then-statement algorithm, right? It needs the ability to make otherwise complicated decisions at scale. And yeah, I would certainly believe that Uber was doing that, for sure. And what's interesting about the spam thing you mentioned: one of the projects I did in my six-month certificate program at MIT Professional Education was to build a no-code machine learning algorithm for spam. And that lives within a recommendation system, right? Again, it just takes the same idea: what characteristics make up spam? In this case, going back to Uber, what characteristics make up the outcome where I want to tell my driver, or recommend to my customer using the app, and provide that prediction analysis, which comes in the form of an answer, though in the background no one really knows. Again, that's cloud-based tech, and that's not necessarily living on devices today, but I think it's going to get close. And actually, Google released their Gemini large language model; basically, that's just kind of their advanced version of OpenAI's large language model, which is the basis of ChatGPT.
So they released theirs, and I think one of the things that was really cool about that is they're releasing a nano version, a small version, that's going to live on the Pixel 8. I don't know what that's going to do; maybe it's going to enhance the camera or do some kind of camera tech. I have no idea yet; I haven't read that far. But the thought of having generative AI, something like that, in the palm of your hand, and I don't know if it's going to require the internet as part of the compute process, but just the thought of that is really cool, because as security professionals, is that something we can then push down to a camera, or an access control panel, or an intrusion system? I don't know. You know? But those are the possibilities in the future that really excite me, and something I want to continue to monitor.
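Don's spam project was no-code, but the classic classifier behind most spam filters, naive Bayes, fits in a short Python sketch. The messages and word lists here are invented purely for illustration:

```python
import math
from collections import Counter

def train(messages):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    priors = Counter(label for _, label in messages)
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts, priors

def p_spam(text, counts, priors):
    """Naive Bayes estimate of P(spam | words), with add-one smoothing."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    score = {}
    for label in ("spam", "ham"):
        total = sum(counts[label].values())
        log_p = math.log(priors[label] / sum(priors.values()))
        for word in text.lower().split():
            log_p += math.log((counts[label][word] + 1) / (total + len(vocab)))
        score[label] = log_p
    # Convert the two log scores back into a probability of spam.
    return 1.0 / (1.0 + math.exp(score["ham"] - score["spam"]))

msgs = [("win free money now", "spam"), ("claim your free prize", "spam"),
        ("lunch meeting at noon", "ham"), ("see you at the meeting", "ham")]
counts, priors = train(msgs)
print(p_spam("free money prize", counts, priors) > 0.5)   # -> True
```

As Don says, the same shape works for Uber-style recommendations: count which characteristics lead to which outcome, then score a new case against those counts and act on the most probable label.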

Yoyo:

For the last segment then, let's look at the top 10 reasons why people fear AI. Number one, job fears. Elon Musk hit on it, didn't he, at the AI Safety Summit. He said that there'll come a point where no job is needed. And he's right, if this works.

Don:

Yeah, I think it's inevitable, for sure. And I think, you know, again, this is a good reason to talk about it, right? Because if you're in a job that has some possibility of being replaced, you should be looking at ways you can find something that will allow you to work alongside AI, right? The thought is that you're hedging your bets by augmenting yourself with AI, instead of hoping that it doesn't replace you, right? And that's not all jobs. I think the creative jobs will still have a lot of relevance, and I think when there are really tough decisions, which oftentimes happens in security, there are absolutely human and mental factors, what I mentioned earlier. But I absolutely think it's inevitable. Actually, it's happening with customer service teams right now, HR teams, marketing teams; it's even happening in DevOps. So if you've got a development coding team, there are aspects where this stuff is writing code for businesses. And so if I were someone in a job, I would think: okay, let me research where in my role AI could replace me, figure out what my future looks like over the next five to ten years, and how I can work alongside it. So I'll give you an example. I'm in sales; I've been a sales leader for a number of years. Do I think this could replace sales? Absolutely, as recommendation systems get better and you're looking for something, they could. But I do think when it comes to enterprise-level sales, complicated sales, it's got a ways to go before it gets there. I don't know.

Yoyo:

Now I'm going to challenge that and say I think that people buy people. And I think you can put in too much tech, because people don't like dealing with bots. Yeah, they know they're dealing with a bot now. I think people buy people. I think people in sales are going to be all right for a bit,

Don:

yeah, and let me bring some clarity to my statement. When I say replace, I don't mean fully replaced; I think there's a percentage that it's going to replace. Right? So a company may look at this and say, okay, maybe I can replace 20 percent of staff, and the rest of my staff can use these AI tools to augment what they were doing before. For example, one company I know used to pay for language translation. Their marketing team would pay someone a bunch of money to translate documents, and I'm like, wait a minute, this is a perfect opportunity for generative AI. And now, for a fraction of the dollars they were spending, they're saving a bunch of money. What that did is it probably allowed them to drop the agency that was doing the translation. So yes, in some sense, someone did lose their job, but it allowed the marketing team that was creating this content to augment themselves by using this tech to get the same outcome, right? So what I'm saying is, I think we're definitely going to see percentages of various departments get cut, but that's going to give companies the opportunity to figure out how they can enhance the existing staff they keep, to work alongside AI. Right? So I do think it's inevitable. And one of the things I mentioned earlier was this idea: Boston Consulting Group wrote something about autonomous agents, which is extremely interesting to me. Bill Gates mentioned it too; he said he looks forward to a time where we have these agents. I don't know if agents will arrive when we get to the point of what they call artificial general intelligence, which is really one of the goals of OpenAI, Google, all of them. I don't know what that's going to look like, right?
And those types of things, plus the voice bots they already make that are getting better, can most certainly challenge the current state of thought. Basically, what we think AI is today, in common media, could be completely turned on its head in the next 3 to 5 years in terms of what it's capable of doing. But again, I think the best thing anyone can do is be educated.

Yoyo:

All right, so number two is independent thought. And this is particularly relevant as a fear, that we're going to lose this independence, lose this control. I think people worry that this will create too great a dependence on technology, and that as AI becomes a bigger part of our lives, people will lose focus. This was quite well evidenced in a 2021 report, where Tesla drivers were reported becoming inattentive when using the Autopilot features. So that is a genuinely sensible fear, isn't it?

Don:

Oh, absolutely. Yeah. And I think the fear can be driven by the fact that they're getting some of these results when they're testing, right, and they just have to look at those. But when we think about Tesla and how they're going to scale, there's just so much to hedge against what we've seen over the years. I watched something about Elon Musk on the electric car side generally. He said that even if we flipped the switch and every car became electric, it would still take something like another hundred years to get us off the basis of gasoline, which drives so many things. Think about it: even the utility base runs on gasoline or diesel. So, yeah, that'd be my answer to that.

Yoyo:

And let's look at number three, lack of regulation. I think this is something a lot of people do worry about, and I think a lot of people would trust AI more if there were more laws to support its use, with more accountability on the producer of the AI, because at the moment there's no accountability, is there? They can't turn around like they did with Fukushima, when that big wave literally knocked the nuclear power station off its foundations. And, you know, everyone knows AI has this huge risk attached to it. I bet the first person that screws up will turn around and say, well, we didn't really think that was going to happen. What are your views here on lack of regulation?

Don:

Yeah, that's a great question. I mean, it's tough, right? Because every country around the world is going to absolutely weaponize parts of this. So part of it looks super positive, but in the background it's also really dangerous, because the more we rest on our laurels and basically sit on our hands about this, waiting for regulation or letting regulation stagnate growth, the more we risk falling behind in ways that would not be great for us. So I think they have to find a good balance: regulation that doesn't lose the speed of innovation. But they definitely need some form of regulation, right? To protect privacy, human rights, and obviously to limit liability.

Yoyo:

Yeah, and simple things like: if you use someone's identity without their permission, that should be as serious as any other type of fraud. People's privacy and people's identities need protecting. I mean, I watch too much Black Mirror. Do you know what I mean?

Don:

That's a great show. Yeah, I'd love to get your opinion on that, but sorry, go ahead.

Yoyo:

It's so good. It's really ticking all the boxes in relation to my genuine concerns around technology, and I'm on season six now, and it's nuts. Okay. Number four, human connection. With the rise of technology in general, there's an eternal worry that the popularity of technology will cause a simultaneous drop in human connection. This has been proven to happen with social media: there are more and more people less connected than ever,

Don:

right? Yeah, absolutely. It's funny, because I think it was the CTO of OpenAI who said Elon Musk was a speciesist because he supported humans amid all this revolutionary change, and I think we have to be that way. You're right: even today we're so disconnected with all the technology that we have. And the further we get into things like the metaverse, where this AI can live, with deepfakes you don't know: am I talking to a person or not? Propaganda becomes reality, and that's a very dangerous place to be. Very dangerous, when you can't separate reality from fake news. Yeah,

Yoyo:

very. And that's where it becomes very political and very worrying indeed. Okay, so number five, and we've just segued into it quite nicely: political bias. We may think of technology as inhuman, incapable of having human opinions, and therefore unable to sway our political, religious or social beliefs. But is this right? There have been increasing worries that, just as social media algorithms are arguably becoming less impartial, AI too is shifting towards having a political bias. But surely this is about the humans behind it, using and manipulating it.

Don:

Yeah. I mean, even before the thought of AI, all of this was super dangerous. Media companies have so much power in controlling the narrative, people's impressions, elections, down to the things we buy, even though we weren't intending to. So when you talk about the big tech companies, whether it's Meta, you know, X, down to Fox, CNN, or even the BBC, right? They have such a major responsibility to make sure they're being responsible. And to be honest, I don't know how the news is at the BBC, but in the Americas it's very clear where Fox and CNN sit, and I'm hoping they break them all up.

Yoyo:

Stop swearing at me, dude! Yeah, I mean, the BBC, I lean on the BBC if I'm abroad. It's always been my go-to when I've lived abroad, and there's something very nostalgic and, you know, just lovely about growing up with the BBC and knowing that BBC World News comes on; it's a way to connect with back home. That's just something magic you share when you're a fan of the BBC. But they have to be fairly independent and impartial, and I can hear this invisible voice of someone going, yeah, they're not really, but I believe they are, because they always have to show two sides of everything, three sides if they possibly can. And then a lot of the right-wing view would say the BBC is more like woke, which is really insulting to those of us who think wokeism is really okay. But interestingly, the Washington Post reported earlier this year that a paper from the University of East Anglia here in the UK suggests that ChatGPT has a bias towards liberal parties in the US, UK and Brazil. AI having a systematic political bias is a real cause for concern due to the sheer accessibility of this technology. I mean, let's face it, it's okay if it's supporting a liberal viewpoint and you happen to prefer liberal viewpoints, but if you don't, that's going to cause angst,

Don:

Yeah. And then it goes back to, you know, garbage in, garbage out. Right? It's fed the internet, and the internet is full of garbage. It's going to take those biases that are input by humans and hallucinate content that is not favorable to some groups, and I think that's unfair. When it's used, people have to go back to the governance piece and understand what measures are put in place to protect every group, right? To consider every group as part of our future. So yeah, it's a big topic. Even Sam Altman, the CEO and co-founder of OpenAI, talked a bit about this. It's a continuing issue, and there's no silver bullet today. And again, the EU AI Act is much further along than anything I've seen us do in the U.S. So kudos to you, to your group.

Yoyo:

You mean the revolving doors, Sam Altman? That was Microsoft saying, get back in there, right? We need a seat at that table. That's all that was, in my opinion. So number six is the AI arms race, a really serious subject, because, let's face it, I feel like everyone's chasing AI now like they were chasing the scientists for the first atom bomb. There is no doubt about it: particularly between the US and China, an all-out arms race has erupted to be the top dog in this technology, each trying to out-trump the other by rushing to make advancements in AI. Now that's scary in itself, because I don't think any advancements in AI should be rushed; that's where mistakes are made. This has also prompted worries that the AI arms race will fuel global tensions and consequently result in a more heated and confrontational political landscape. Why can't we build AI with a "kill no humans" methodology?

Don:

Well, the same question goes to why we always get into wars over the centuries, right? That's just humans: they want peace, but they don't want peace. That could be narrowed down to maybe some smaller groups, but one, they're already weaponizing AI; that's happening on the dark web. So imagine a very strong version of OpenAI's framework. It's already happening today, and they're targeting people with phishing in ways that are so articulate that people are getting overwhelmed. I've actually noticed that lately: I've gotten text messages, WhatsApp messages, LinkedIn messages on the same thing, and no one human could target someone at that scale, that often. So I know that's driven by AI. It is a scary time. There is one thing I'd like to add to this, though. When we talk about weaponization, there's a topic in AI a lot of people talk about, which is the compute hardware backend. And what they're already doing, have been doing, is coming out with quantum computers. Basically, instead of just setting things up as zeros and ones, think of it as a computer that can represent every variation that can ever exist, and it actually runs on electrons. The short of it is that with AI backed by something like that, it will become even more weaponized. And China, they announced it, I don't know if it's true, supposedly has one of the most advanced quantum computers in the world. We're obviously trying to hedge against that with IBM and Google, and I think even the Pentagon or the government has connections between those two large organizations as a way to hedge against it. So it's a real threat, a real concern.
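(Editor's aside, not from the conversation: the "every variation that can ever exist" intuition Don reaches for comes from superposition. An n-qubit register is described by 2^n complex amplitudes, so the classical memory needed just to write the state down grows exponentially. A rough back-of-the-envelope sketch, with function names of our own choosing:)

```python
# Back-of-the-envelope sketch (editorial illustration, not from the episode):
# an n-qubit quantum register is described by 2**n complex amplitudes, so the
# classical memory needed just to store its state grows exponentially.

def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Bytes needed to store 2**n complex128 amplitudes (16 bytes each)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(f"{n} qubits -> {state_vector_bytes(n):,} bytes")
# 10 qubits fit in a few kilobytes; 30 already need about 16 GiB; 50 are
# around 16 PiB -- beyond any single classical machine, which is the scaling
# quantum hardware sidesteps for certain problems.
```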

Yoyo:

Yeah. And I'm thinking, you know, I can just see quantum computing completely taking over. It's the next gen, and I know we use "next gen" an awful lot. You look back at the industrial revolution and you can't help but think: what revolution are we in now? We're in a technology revolution, but I also think we're in a greed revolution. I don't know how we're going to look back at this particular time. But you're right. Next is cybersecurity. Any kind of technology, as it matures, inevitably leads to an amplified risk to cybersecurity. You mentioned the phishing attacks earlier; they're getting very sophisticated. I kind of miss those Robin Hood days where you just got robbed going through the forest, you know, because you could make plans: you could simply not go through the forest. Now it just seems like every person on this planet is a target for fraud, and this money is going towards very dubious intent as well, owned and managed by bad actors. This is a big thing now.

Don:

Yeah. So they showed a chart of AI adoption and spend, and cybersecurity spend was going up in the same accord, right? So yeah, cybersecurity: if you've got a job in that space today, you will have a job for many years to come. It's getting pretty bad. One of the things I didn't mention earlier is that they're also doing voice AI. With deepfakes, someone could call you. People, or systems, can record your voice, and they need about 30 seconds of it to model it. Then, basically, from there you can type out anything and it sounds like Yoyo, or it sounds like Don. So if it's Yoyo's voice that's recorded, someone could call a friend or family member: hey, by the way, can you send me money on Cash App? It sounds like Yoyo, and they might not know, especially when you prey on folks who don't know this exists. Again, this goes back to really understanding the threat. So yeah, it's getting pretty dangerous. If I said anything keeps me up at night, that'd be one of them.

Yoyo:

The typical fraud is: hi, can you help me? I'm stuck on the motorway. I've got a recovery vehicle here, but they need to take cash, and my card isn't working on the machine. They're going to call you back; can you give them your credit card details so they can take payment and get me off the road? And that's the thing. I think we're now encouraging young people and families to have passwords. It's a really quick fix: sure, what's the password? That way you almost have a verbal password plan. Do you know what I mean? It sounds nuts. You'd never have thought you'd be having this conversation in 2024, but that's where we're at.

Don:

Yeah. Authentication is going to be a big thing for people; I think it's going to be wild to see. I mean, I'm not a cyber connoisseur as much as you might be, but it's interesting, because we're authenticating everywhere we go, for all kinds of reasons. Like, I use my face to get through the airport lane really fast, and to me it's like, ah well, it's the government, what can they do with it? Which is true until they probably get hacked, and then who knows where that's going, right? Like what actually happened with 23andMe. I joined that to understand my family origin or whatever, and now they've been hacked. So who knows what that's going to be used for in the next five to ten years. It's a scary time.

Yoyo:

So we've got two more. Art and originality is one of them, and I think it's quite obvious. There are a lot of creatives out there with very strong viewpoints about how AI is being used to create art, and there's no doubt about it: even in the programme I was alluding to when we talked offline, they were talking about how certain technology exists now that can literally paint you as a human being, and it's artificial technology. There's this kind of appreciation for the cleverness behind that, but a lot of creatives are also going to feel massively threatened by its existence. But we'll move on from art and originality, because I think we pretty much get it: if you're a kid starting out, stay away from jobs like simple graphic design. Misinformation: we touched on this a little bit, and we can see the damage. But I have to say that both the United Kingdom and America have generated a lot of misinformation in the last 200 years that worked very well in our favour, and now all of a sudden it's out of control; it's not something we're in control of. I think that's a fairly safe space to be in,

Don:

right? Yeah, because who validates it? What's the basis of misinformation versus the right information? That's a real question, I feel. It goes back to the EU AI Act, or some basis of framework to say: what is our source of truth?

Yoyo:

Yeah, I think so. There need to be laws to protect anyone who is being fraudulently misrepresented; number one, there's no such law at the moment, and we really need to get there. And secondly, there needs to be a more sophisticated way for people to check whether something is real or not. That means: what do we check? Who do we check with? I remember this became really prevalent in COVID. We were hearing stories about the military on the streets of certain towns, and this was early doors, March 2020. We were like, what the hell, military law in the UK? No, it wasn't; it was just a training group. And all of this misinformation was getting people really hot under the collar; they were hopping up and down. I was one of those people, quite often punished for it, who said: if you're in doubt, just check the BBC website, or check a government website, .gov. If it hasn't got .gov or isn't produced by the BBC, then that regulation hasn't been applied to it, and I was constantly saying: check your facts, fact-check. And we're better at it now, to the point where I remember seeing something on TV the other day, something like, oh, wow, and people were going, is that real? So now people assume things can't be real. Do you know what I mean? Everyone's questioning everything, even the truth.

Don:

It's interesting you mention that, because it reminds me of the verification checkmark that lives within LinkedIn or X, right? And now I wonder, because to me, AI could do that: some framework that authenticates or fact-checks, so if the news releases something, how do I know whether it's propaganda or misinformation? It's got its fact-check mark that validates it. Now I want to Google this after the podcast to figure out if it exists, because that seems like the next step: verification of misinformation, and it probably needs to happen.
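(Editor's aside, not from the conversation: the "double-verified across ten unbiased source groups" idea Don speculates about can be sketched in miniature. No such system is described in the episode; the function and thresholds below are entirely hypothetical.)

```python
# Toy sketch (editorial illustration; the episode only speculates that such
# a system might exist): label a claim "verified" only when enough
# independent sources corroborate it.

def verification_mark(claim: str, corroborating_sources: set[str],
                      required: int = 3) -> str:
    """Return a label based on how many independent sources back the claim."""
    n = len(corroborating_sources)
    if n >= required:
        return f"VERIFIED ({n} independent sources)"
    return f"UNVERIFIED ({n}/{required} sources)"

print(verification_mark("troops deployed", {"BBC", "Reuters", "AP"}))
```

A real system would also need to establish that the sources are genuinely independent of one another, which is the hard part the conversation circles around.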

Yoyo:

Yeah, I think you're right. It's like anything: you don't build a car and give it to people to drive unless you have some sort of safety. Well, actually, we did. We didn't put seat belts in cars; we let a lot of people die before seat belts were put in, and before a speed limit was put in place, for example. It was the same with planes: a few planes had crashed before people thought, oh, let's give everyone a safety briefing before we take off, and show people how to put the mask on themselves first and not the children, blah, blah, blah. I think we're just going to learn the hard way with this, because we seem to follow this pattern of learning the hard way. And it's always going to be us, the little people, at the fate of the makers, so to speak.

Don:

Yeah. And I think AI is just going to go, right? I don't think it's going to wait on anyone. The technology revolution is here, and it's going to continue to pace forward at an alarming speed, and it will absolutely make mistakes along the way. And that's okay to them: if companies are making trillion-dollar bets on this, they don't balk at a billion-dollar loss. So it's scary, but the reality is, they know they have to lose some to win some.

Yoyo:

So, a few learnings. We've gone through the big fears, which are incredibly relevant and certainly worth discussing. We talked about six ways AI is being used that we have come to lean on and love, where we might not even know it's being used. Your point about human-in-the-middle: I think it needs to be human at the beginning, human in the middle and human at the end. And I always say it's like anything: if it's being checked by a human, regulated by a human, with some sense-checking along the way, I think AI can certainly be a very productive part of our future. And I think the trick is for kids now to look at a career in quantum computing; that seems to be the safe place to go for a job right now, Don.

Don:

Yeah, you're absolutely right. It 100 percent is. Right now they are in a race to commercialize quantum computers, because at the moment they live in this crazy refrigeration setup that's meant to get the chip down to, I think it was, near absolute zero. So, like, colder than space. That's crazy. I think the average one is like five to ten million dollars, so it has a long way to go to get down to the price of your phone or your computer. I think you're right: studying that is the move. Actually, there's a book called Quantum Supremacy, written by a pretty famous physicist, that talks about a lot of this revolutionary change. Something I look forward to learning more about. Actually, you know, there's a website called HeyGen, so check HeyGen, H-E-Y-G-E-N. You can record your voice and put it over your own avatar, right? And here's the thing, it's funny: I actually did that, because I wanted to make my own virtual assistant. And as soon as I did, I was like, oh God, I immediately regret that. I didn't want to buy into a voice either, because of the deepfakes that are happening. Like, what if someone calls my wife and they're like, hey, this is what's going on? So

Yoyo:

you have to think like a security risk professional and say: wife, if someone calls you and you're not sure whether it's me, you must have a safe word.

Don:

You have to. Right. I feel terrible, though, for older folks, you know, like my mother. Don't fall for clicking the link to get a free vacation, that kind of level, much less someone calling them sounding like a family member who says, hey, do this. That just makes me so mad. There are moments I've wished I worked in the cybersecurity space because of that, because I hate the thought that people do this. I mean, they're criminals, right? No remorse, and they don't care about other people. But yeah, it's going to be nuts. And that's why I love talking to people, because sometimes when we converse, things come up and I'm like, why doesn't that exist? Or maybe it does exist and I just don't know, because I haven't been inquisitive enough around the topic. So, the verification and authentication of misinformation, right? Basically, if you're Fox News, you get a check to say, hey, this has been double-verified across ten unbiased source groups. We could open up a business.

Yoyo:

I should have mentioned the BBC has verification journalists, they're called BBC Verify or something, whose job is to verify what's being said and done and shown. And it's really good, because there was a piece, just to give an example, early doors around Gaza. There were videos being shown that were alleged to be one thing, and the verifier was there saying, look, there's no doubt these videos were taken in the timeframe, but we can see that some objects have been removed and replaced by other things, so we wouldn't consider that safe, yet it's been broadly broadcast to deliver a certain message. Well, it's about trust. You need to believe in that trust, and that trust, I think, is going to come from institutions we know and love, institutions that go above and beyond to help us believe the truth, so to speak. But yeah, you've always got to trust no one.

Don:

Yeah, it's funny. Do you know who Biggie Smalls is, the rapper? I've got a shirt that says, like, trust nobody. But it's almost like the industry: I always tell my wife, I get paranoid about things, and I'm like, it's normal, I'm in the security industry. It's a trust-but-verify type of situation. But I also work in sales, so I have to be trusting of people. And I do think it's right to trust stuff. Again, I'm just saying I don't really watch the BBC; I'm just generally speaking about what the big publication is that you can agree to trust, right? But most certainly on the Fox and CNN side, it's very clear what side they choose. Oh yeah. Nothing more annoying than that. One hates Trump, one loves Trump. It's like, come on guys, can we talk about something that isn't

Yoyo:

polarizing? Yeah. But you know what? It's okay to be a critic; it's okay to be critical. The difference, the unhealthy version, is when you're a cynic. There's a huge difference between being a critic and being a cynic. Cynics won't change their minds; no matter what they hear or see or are told, they believe what they believe. And I think cynicism is very dangerous in today's kind of fake-news society, and it was during COVID as well. That cynicism turns into conspiracy theorism, and it's really negative, and I believe it turns into quite a bad mental-health state for some people on the scale. We're not talking about a very dark place here; we're talking about a sliding scale of, you know, isolating yourself. Because when you start getting into very dark conspiracy theories, you start isolating yourself from your friends and family, from people you can sit with in a very safe space and have safe-space conversations. So yeah, be a critic. Criticize everything, definitely. And know those cynics when you meet them. That's my lesson for you today, Don.

Don:

There's a lot of cynics in the US, for sure. A lot of people that are heavy on a topic but don't know why and can't articulate why, you know.

Yoyo:

So, I've seen the memes on social media. It's killing me.

Don:

Seriously, it's nuts. And I'm careful about what I bring up on that subject, because I certainly have opinions on it, and I just don't get it. I think we're a society where, if you have an opinion, you can be punished for it, and I think that's wrong. I think that if you have an opinion, then depending on your platform, you should be able to argue for it, right? If you're just an opinion on the street, then whatever, say whatever you want. But if you're an Elon Musk and you have an opinion, well, hey, your words move mountains, so you should probably defend your point.

Yoyo:

Absolutely. And what's the name of your podcast, Don?

Don:

AI FISEC Today. FISEC is short for physical security.

Yoyo:

Well, that makes sense now you've said it. We'll put a link to the podcast, and there you go: if anybody's super interested in this subject and wants to learn more about AI and the conversations taking place within our security community, we'll put a link to your podcast. Don, thank you so much for joining The Security Circle.

Don:

Thank you for having me.