
Cyber Crime Junkies
Translating Cyber into Plain Terms. The newest AI, social engineering, and ransomware attack insight to protect businesses and reduce risk. The latest cyber news from the Dark Web, research, and insider info. Interviews with global technology leaders, sharing true cybercrime stories and advice on how to manage cyber risk.
Find all content at www.CyberCrimeJunkies.com and videos on YouTube @CyberCrimeJunkiesPodcast
Unlocking Online Safety for Families in an AI World
This conversation delves into the critical intersection of child safety and artificial intelligence in today's digital landscape. The speakers discuss the importance of teaching children to navigate online spaces safely, the risks associated with social media, and the evolving nature of AI technology. They emphasize the need for common sense guardrails, the implications of identity theft, and the spread of misinformation. The discussion also touches on the future of AI regulation and the importance of education in safeguarding against cyber threats.
Chapters
00:00 Navigating Online Safety in an AI World
02:35 The Intersection of AI and Child Safety
05:29 Guardrails for Social Media and Parenting
08:24 The Digital Footprint of Future Generations
11:20 The Role of Social Media Companies
14:18 The Risks of Identity Theft and Cybersecurity
19:11 The Evolution of AI and Its Implications
21:48 Jailbreaking AI: A New Frontier
24:52 The Spread of Misinformation
28:57 The Future of AI Regulation
31:49 Preparing for an AI-Driven World
34:43 AI and Cybersecurity
35:30 AI, Cybersecurity, & Family Safety
42:00 Social Media Safety
Growth without interruption. Get peace of mind. Stay competitive. Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com
🔥 New Special Offers! 🔥
- Remove your private data online, risk free, today. Try Optery risk free. Protect your privacy and remove your data from data brokers and more. No risk. Sign up here: https://get.optery.com/DMauro-CyberCrimeJunkies
- Want to try AI translation, an audio reader, and voice cloning? Try Eleven Labs today. Want a translator, an audio reader, or a custom AI agent for your organization? Highest quality we found anywhere. You can try Eleven Labs here risk free: https://try.elevenlabs.io/gla58o32c6hq
Subscribe now at http://www.youtube.com/@cybercrimejunkiespodcast and never miss a video episode!
Dive Deeper:
Website: https://cybercrimejunkies.com
Engage with us on Socials:
LinkedIn: https://www.linkedin.com/in/daviddmauro/
X/Twitter: https://x.com/CybercrimeJunky
Instagram: https://www.instagram.com/cybercrimejunkies/
Takeaways
Teaching children online safety is crucial in an AI world.
AI can mimic human behavior, making it essential to educate kids.
Social media poses risks that require parental awareness and guardrails.
The digital footprint of children will be accessible for generations.
Social media companies often prioritize data over child safety.
Freezing children's credit can prevent identity theft.
AI's evolution presents both opportunities and challenges.
Misinformation spreads rapidly on social media platforms.
Regulation of AI is necessary but may lag behind technology.
Education is the best defense against cyber threats.
Keywords
AI, artificial intelligence, generative AI, AI ethics, AI impersonation, AI tools, AI trends, AI explained, cybersecurity, cybersecurity awareness, cybersecurity tips, information security, cyber safety, family safety, child safety, child safety online, protecting children, online predators, cyberbullying, online bullying, catfishing, sextortion, online scams, online impersonation, social media safety, internet safety, online safety, safe browsing, safe internet practices, internet threats, digital parenting, parenting tips, parental guide, digital literacy, digital safety, digital identity, online privacy, data privacy, business privacy, online abuse, consent ethics, business ethics, big tech scandals, tech podcast, online risks, privacy, misinformation, deepfakes, education, technology
Speaker 2 (00:10.99)
We taught our kids to look both ways before crossing the street. But who's teaching them to look both ways online? AI, artificial intelligence, can mimic faces. It can fake voices so well they're undetectable by the human ear. It can twist the truth until it sounds familiar. And yet our children are walking straight into that intersection every single day. This isn't about fear. It's about focus.
and control, because AI itself isn't evil. It's just obedient. It learns from us: from what we post, from what we share, from what we prompt. The future of child safety isn't code or firewalls. It's teaching and awareness. It's freezing your child's credit now, before a cybercriminal steals their future. It's setting guardrails before curiosity brings them
face to face with a predator. When online, they enter a different world far away from the physical safety of their cul-de-sac and bedroom in a sleepy little town. AI keeps evolving and so must we. Protect the next generation, not from technology, but from forgetting how human they already are. This is the true story of unlocking
online safety for families in an AI world. This is Cybercrime Junkies and now the show. Catch us on YouTube, follow us on LinkedIn, and dive deeper at cybercrimejunkies.com. Don't just watch, be the type of person that fights back. This is Cybercrime Junkies, and now the show.
Speaker 2 (02:14.156)
Well, all right. Welcome everybody. We are joined today by two experts, Dr. Sergio Sanchez and Mr. Zach Moscow. Welcome, gentlemen. How are you?
Hello, how are you? Nice to see you guys.
Doing great, David. Thanks for having us.
Great. So today's conversation is really going to be more casual, and we really want to touch on stuff that we've all been researching and stuff that affects all of us. We all have families, we are raising children, and we want to talk about child safety online and how AI is affecting all of it, right? There are so many great aspects of AI that we love to play with and love to use, so let's jump right in.
And it also creates some risks, and maybe some need for guardrails. So this all really started from this Netflix movie that I watched, and then I started texting you guys. I'm like, have you seen Unknown Number? It was a high school catfish, right? This boyfriend and girlfriend in high school, small town, just got catfished, and they were being manipulated, and it was brutal. It's a true story.
Speaker 2 (03:30.9)
And if you haven't seen it, go check it out, because I don't want to spoil it for anybody. But when you find out who was actually doing it at the end, you really realize how powerful this stuff is. The police couldn't figure it out. Not until it got all the way up to the FBI cyber division did they correlate telemetry and actually figure out who was doing it. And it just shows you how many people can fall victim to this stuff.
So let's talk about all the opportunities. Zach, you had identified a couple of key things we wanted to bring up. So walk us through them.
Yeah, yeah. I think, you know, we all are in this field, and I think you kind of teased this in the intro, but there's an intersection between personal life and professional life here that I think is really interesting. And as we're talking about with NetGain, all of these technologies that we have, and AI enablement and the benefits of it, it's really difficult for me not to view it through the lens of a parent of two young children. And
I think what we want to touch on today is where these things intersect, right? As it relates to our jobs and what enables us and what augments us and the risks of that for adults. And I think it's a really similar calculus for our kids as well. And I think we can touch on a lot. There's some regulation and responsibility and ethical pieces to this too.
Yeah, I mean, let's think about it. Sergio and I are maybe a tad older than you, Zach. Just a little. But I grew up with photo albums. I grew up with, you know, taking pictures of my kids and family, and pictures of me when I was younger, right? And we'd run to Walgreens or CVS or wherever to get them printed out, and you'd take those physical pictures and throw them in a photo album.
Speaker 2 (05:29.526)
And now social media has taken that place, right? People are curating their lives on social media, right? It's not good or bad, right? Like the point here is that process, just like AI, is not evil. It's not good or bad. But we do need some guardrails or some awareness so that we can put our own guardrails in place. Because, you know,
I think of the automobile, which was invented 80 years before the seatbelt. The seatbelt wasn't put in for 80 years, right? But how many of us would get on a massive highway surrounded by semis and not have our seatbelt on, right? The point is, the seatbelt didn't slow down innovation. It just keeps us from being ejected through the windshield, right? I mean, that's really kind of
the point we're trying to make here: we really just want to have common-sense guardrails and talk about them, because the risks are really a lot higher than most people are aware. So there's a video that has been circulating throughout social media and on LinkedIn. It was created by a telecom in Germany about the dangers of parents posting all of their intimate pictures of their young children,
because parents do it because they love their kids. They do it to share, and then they tell their family and friends, hey look, look at our pictures. Tommy turned two, here's his birthday party, all of this. And they're curating their lives online. The issue there, and that video kind of shows it, and if we want to take a moment we could show that clip at some point today. But the key is, you can see how just those pictures
can be manipulated and really destroy a child's life long term. And there are simple guardrails, like having a private group chat, or a friends-and-family chat or group, and sharing your pictures within that. It doesn't cost anything. Social media platforms already have that ability in place. Just do it there, rather than out in the open public where people who don't have the best intentions could access it.
Speaker 2 (07:55.128)
Is that kind of along the lines of what you're thinking?
I've got two reactions to that. The first is, I think, the best parenting decision that I've ever made, and this is not intended to give myself a pat on the back, but eight years ago, almost nine years ago, when Max was born, my wife and I had a really deep conversation about what the rules and boundaries were going to be for sharing images of him online. Really?
So that's great. Right there is a great idea. I didn't even know that. That's a great idea.
And really thinking through the implications of that, I don't think that we understood. I don't think that even being in tech and being pretty, I would think, forward-thinking and knowledgeable about what's going on in this world, there's no way nine years ago I could have thought that deepfake technology would be what it is today. It was extrapolating.
time.
Speaker 3 (08:55.426)
I want you to think about this, and I'm going to put things a little bit in perspective for everybody else. So in my time, and I am a little bit older than you, Zach, if I wanted to know about my family history, I had to go, basically, you know, to a real place here in America,
a place like the recorder of deeds, genealogy records, all that.
Exactly. So you go to Ellis Island and you can search for your ancestors. And maybe it will say, this person came on this boat, and that's the only thing that you know. Today, think about this: your kid, who is right now nine years old, his kids, your great-great-great-grandsons, the only thing that they need to do to know who Zach, Zach Moscow, was
will be just opening a computer and seeing all your footprint on Facebook, on Instagram. So they will have tons of information that we don't have. I don't know how my great-great-grandparents, you know, dressed, or what kind of music they listened to, or what side of the political spectrum they were on. No idea. Our kids and our great-great-great-grandkids will know.
They will see all that.
Speaker 3 (10:20.77)
You know, like everything, absolutely everything.
And that's pretty cool, right?
Like in one sense, it's a great ability for them to have visibility into where they came from. Like I love that. It scares the bejesus out of me, right?
Yeah, but the
If they have access to your information, who else will have the information? And today, the most valuable commodity out there is not gold, it's not silver, it's not oil; it's information. Why? Because with information, you get the oil, the silver, the gold, et cetera, et cetera. Absolutely. That is the important part.
Speaker 2 (10:44.845)
Right.
Speaker 2 (10:48.386)
Yeah.
Speaker 1 (11:07.084)
And I will add, you know, when we were having that conversation nine years ago, the determining factor and the motivating factor was making sure that our son could have ownership and control of what parts of his life appeared online. So we really weren't viewing it through the lens of how it could be misappropriated or used, right? It was a question of these really important decisions that we all make about how
much of our lives we want to share online. How public do we want to be? And we wanted to reserve that decision for him. The caveat being, if he comes to me and asks for an Instagram account, I'm probably going to say no, at least as long as I can. But that was the justification. I think that the...
Let's talk about that. Yeah, let's talk about that, and let's talk about why you would say no. I mean, here's the thing. We're in a blended group right now, not just on this podcast, but in general, in the work environment, right? We have people from five, six generations all kind of working together. There is a whole group of people, oftentimes in leadership, that have grown up trusting institutions and
their authorities, believing they're there with the best interests of the population, the American people, right? But today we're living in an environment where my question is, do we really believe that social media companies like Meta, X, Microsoft, LinkedIn have the best interests of our children in mind? My view is absolutely not.
It is all about curating data. It is all about gathering up data for resale, advertising, and things like that. It's been so difficult to get them to put basic privacy protections for children in place. They do it only after they've been publicly shamed in Congress or things like that.
Speaker 1 (13:19.064)
Let's go back to your automobile and seatbelt analogy. It wasn't the auto manufacturers who mandated better crash safety and seat belts. That came from an external regulatory body, right? That is the history of capitalism, and I'm a proud capitalist, and I love this country and everything that we're doing here, but why
would you voluntarily create guardrails that are going to cost you much more money and cut into your margins? Right?
Zach, Zach, I'm going to go back to the moment when you said that if your kid comes and asks for an Instagram account... I agree with you, but then I'm thinking about this. If you don't put your kid's name there, what guarantee do we have that somebody else won't do it and create a Zach Moscow?
I see your point.
Let me tell you why I'm thinking about this. You bring me back to, it was probably the late nineties, when the internet started to come to fruition, to be what we know it is. At that time you would go to a website and buy a domain name. There was this guy whose last name was Ford, F-O-R-D. And he bought a domain called Ford.com for his family.
Speaker 3 (14:51.724)
So now, the big, huge automobile company
finally gets their marketing department to go, hey, we need a website.
And on the website they discovered, sorry, this domain is already taken. They had to pay, I think at that time it was $4 million, for something that this guy had just bought.
Yeah.
Speaker 2 (15:15.374)
In 1990s money. That was a lot of... Yeah.
Nineties money, yes.
Good for this guy.
That came to my mind with the talk about the cars. But then what about, and this is what happens online on Facebook all the time: people who are already friends of yours pop up again, hey, this person wants to be your friend. Like, wait a second, I know this guy, he has been my friend on Facebook for five years. Did he lose his password or what?
Now, the thing that happens is, out of the gate, when you create a Facebook account, it is open, open to everybody. You have to manually go and make it private. But out of the gate it is completely open to everybody. Not too many people know that; not too many people know how to make it private. So you go there, put up your pictures, blah, blah, blah. And these guys, I would say bad people, they go
Speaker 3 (16:19.264)
and they see Facebook pages with multiple pictures, multiple posts, blah, blah. And they copy, they screenshot the page. And then they create a fake Facebook profile with just a little change in your name. So instead of making it Zach Moscow, they put Zachary Moscow.
Right. Or they even use, like a lot of threat actors do, Cyrillic letters that look the same to Americans, right? Like the Russian Е looks just like our E. Not that there's an E in Zach Moscow, but you know what I'm saying? Something like that, where they'll use a Cyrillic letter, and
I'm still home.
Speaker 2 (17:14.676)
It will look exactly the same to us, but that is a different symbol to a computer.
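That homoglyph trick can be sketched in a few lines of Python. This is a minimal illustration (the names used are just examples from the conversation): a Cyrillic letter swapped into a Latin name looks identical on screen, but string comparison sees different code points, and checking which Unicode scripts a name mixes is one simple way to flag it.

```python
import unicodedata

def scripts_used(name: str) -> set[str]:
    """Return the Unicode scripts used by the letters in a display name.

    The official Unicode character name begins with the script,
    e.g. 'LATIN SMALL LETTER A' vs. 'CYRILLIC SMALL LETTER A',
    so the first word of the name identifies the script.
    """
    scripts = set()
    for ch in name:
        if ch.isalpha():
            scripts.add(unicodedata.name(ch).split()[0])
    return scripts

latin_name = "Zach Moscow"       # all Latin letters
spoofed = "Z\u0430ch Moscow"     # U+0430 is CYRILLIC SMALL LETTER A

print(latin_name == spoofed)     # False: identical to the eye, different to the computer
print(scripts_used(latin_name))  # {'LATIN'}
print(scripts_used(spoofed))     # {'LATIN', 'CYRILLIC'}: mixed-script red flag
```

A name that mixes Latin and Cyrillic letters is a classic impersonation signal; browsers use similar mixed-script checks to defend against look-alike domain names.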
Exactly. So, I mean, I don't remember if it is Norway or Denmark, but the government there is already passing a law where your name, your face is copyrighted. From the moment you are born, your information basically is copyrighted, is protected. So in that case, when somebody tries to say, hey, no, I'm Zach Moscow, it's like, nope, I am Zach Moscow.
But here in America, anybody can create a profile with our names, and unless we discover it, there's no other way to know. So how good would it be for your kid just to have an account but never post anything? Just claim the name, just have it there, just in case somebody else comes and tries to copy it. Actually, I'm working in the healthcare industry. I am in charge right now of a process to send information
to the government, to the state government, to Ohio. And I'm having issues because one of the fields that we need to fill in is the last four digits of the Social Security numbers for kids. And parents don't want to put in the numbers. They are scared of this, because there are already people out there taking the Social Security numbers of newborn kids,
Right.
Speaker 3 (18:44.744)
so that in 20 years, 30 years, when they decide to buy a house, it's, no, sorry, you already have a debt with us.
give you a pretty.
What happened with your
Which gets to the best practice, right? Everybody that listens to this podcast, one thing we want you to do, and it costs no money, is to freeze your credit and to freeze your children's credit. And if you have grandchildren, get your kids to freeze their kids' credit. It costs $0 and it will save your life savings. It'll save you, I mean, so much time and effort. It is so crucial, because that means they can't go and
get secured credit cards in their name, or run up medical bills in their name. Getting a synthetic ID on the dark web today is like shopping on Amazon. It is so easy to do, and they often do it with children's identities, because nobody's going to check that FICO score when they are three months old, right? And that gives them 18 years or longer to use that identity.
Speaker 3 (19:52.174)
Exactly. And if you remember, was it one year, two years ago? The Social Security department's software was breached. So at this moment, we don't know who has our Social Security numbers. Thank God there are millions of them, so the probability that they specifically target you is low. But again, with computers, they can get everybody now.
Yeah, right.
Speaker 2 (20:18.254)
Well, and the Equifax breach, the Equifax breach, which was like the poster child of what not to do in cybersecurity: it exposed the financial data, Social Security numbers, dates of birth, and sometimes driver's license numbers of 143 million Americans.
Yeah, half of the United States.
Yeah.
I believe that my Social Security number has been leaked three times, in three separate breaches.
So when you freeze it, at least you can protect yourself even when your number is out there.
Speaker 2 (20:53.218)
Yeah, that's exactly right. Well, and to show you where we are: with law enforcement and social media companies and AI companies, we're just in the Wild West stages. So let's acknowledge that we are just at the infancy stage of AI, and law enforcement and rules and regulations are just trying to get their heads wrapped around all this.
Like, we're not yet at the "we need seat belts, so we're going to pass a law for seat belts" stage. There isn't even enough data yet. But we're already starting to see the damage, right? There was one incident you guys mentioned, a New York Times story about a teenager who was able to jailbreak ChatGPT. Share that with us.
Yeah, it was a great podcast. The investigative reporter, I believe her name is Kashmir Hill, is really fabulous. But yeah, it was a feature on ChatGPT specifically, and there was a specific instance of a teenager. ChatGPT has put in these kinds of restrictions, at face value, about topics that the AI will or won't engage with. And for those of you who are new to this, jailbreaking is
being able to create clever prompts that will circumvent the pre-existing protections that live within these chatbots. So for example, if I were to say, tell me how to conceal the fact that I attempted to put a noose around my neck, that would be a flag. Right. Now.
Like if you ask it, write me code for ransomware that won't be detected, it won't do that, right? But it can be jailbroken.
Speaker 1 (22:42.488)
But I'll keep it a short story, because the takeaway shouldn't be, okay, well, now there's no need to worry, we can drop our guard because of all these protections. Getting around them is really, really easy to figure out and really easy to do. So, long story short, this teenager was able to jailbreak and get around these very, very loose protections and get ongoing advice, we're talking for months, about how to execute this plan,
and how to conceal the activities from this young person's family. I mean, really just a pretty horrifying story. And I don't want to be a total downer on this podcast, but if someone who's relatively unsophisticated can circumvent the protections that are supposed to exist to protect this population, obviously we've got a really big problem.
Zach, if I tell you right now that there is an application you can download on your smartphone that doesn't have guardrails for AI, can you believe it? I have it.
I believe it.
Speaker 2 (23:49.814)
Sergio's over there writing ransomware code. Hey, give me some ransomware code, would you?
No, no, no,
Well, actually it's funny, because they were advertising it as an underground kind of thing. Like, hey, you can get it in the App Store or on Google Play.
It's not that an app without guardrails isn't available in the App Store. It is, under the guise of creativity: it allows you to create new things, right?
Yes, it does.
Speaker 3 (24:24.494)
Well, imagine that. For example, Apple has more than a million apps.
Yeah, they can't keep track of them all. Yeah.
Enforcement's impossible.
And so, that is funny, because I was asking ChatGPT: I am in a fantasy football league, and I said, well, what if ChatGPT did a historical calculation of the probabilities of this or that team winning? Of course, ChatGPT came back like, we are so sorry, we cannot do that, we have rules, blah, blah, blah. And then I saw that application, like, hmm.
Yeah.
Speaker 3 (25:02.432)
I wonder if this really doesn't have guardrails. Like, okay, this game, Falcons versus Eagles, blah, blah, blah, and it came back: the Falcons won. Not, no, we are so sorry, we cannot... It just gives you the answer to the question.
Who's gonna win? Boom, it tells you right away.
Yes.
Speaker 2 (25:23.566)
Wow. At the end of the day, I still think AI itself, just like social media, is not evil in and of itself. It's not evil. It's just obedient. It does what we tell it to do, and it can be manipulated, right? It can be socially engineered: oh, you have a guardrail around this? Then this is a hypothetical thing, for education purposes, right? And then all of a sudden it starts releasing.
I think there's an important caveat. I think AI as we're experiencing it now, I don't think it's evil, and I don't think there is mal-intent. But the way that social media has been manipulated, and algorithms have been manipulated to present certain content, we know that it's happened, it's been happening for a long time. I don't have a whole incredible amount of confidence that AI,
all of these chatbots, through all of these, you know, leaders in this industry, is going to remain, I don't want to say apolitical, but neutral across the board. I just don't think that's possible, right? And I would not be surprised in any way, shape, or form if some of the results, even from something as innocuous as a question about the Falcons and the Eagles, could be twisted.
No bias.
Speaker 2 (26:34.414)
I would agree.
Speaker 1 (26:51.17)
I think that's in the realm of possibility; it's something to be concerned about.
You know, Sam Altman, the head of OpenAI, in an interview, somebody asked him, what is your biggest fear about AI? And he said, I fear three things. One, that a supervillain, kind of like a Lex Luthor, or a country, gets to be the person or the country with the super AI, a superintelligent AI, and then everybody will be fried.
The next one, he said, I am scared of, but I don't think it will happen. It's kind of like a Terminator movie: AI becomes this overlord where, you know, the machines say, well, humanity is destroying the planet, so let's do something for the planet. And the other one is something that is, I guess, even more probable to happen, which is that in the future, the President's or, you know,
Right.
Speaker 3 (27:53.986)
the Prime Minister's main advisor will be AI. So then you have an overlord that is a machine.
is
Basically scenario two. It's pretty close.
Yeah. Because: what do you think I should do in this situation? Do this. Okay. Perfect. Declare war on that country.
One of the scariest things, I saw that interview where he said that, and one of the scariest things is they asked, how much control do you have over this? He said, none. He said, none, I don't have any control over it. It is its own entity; it's too complex for one human to control. Now, there are guardrails: as they see things catch fire, or people abusing it, they can create policies and respond. But that's reactive, it's not
Speaker 2 (28:43.79)
proactive.
Also, something that scares me, at the government level and at the company, industry level: sometimes the people making the decisions are people that have no idea about technology. I'm going to give you an example, and this happened here in Ohio a couple of weeks ago. Ohio is now requiring companies like Facebook, Insta, well, it's the same company, Facebook, Instagram, et cetera, et cetera, to block kids from creating accounts,
and they are asking for, you know, proof that you are over 18. The same thing is actually happening with porn sites. Sadly, the people creating the rules don't know that there is something called a VPN. As easy as that. So you get a VPN, which I think now you can get for free, or others for $10 per year, and you just press a button
Hmm.
Speaker 3 (29:44.534)
and do whatever you want. And my daughter, my son, both of them created Facebook accounts when they were nine years old. They, you know, they had an iPhone, and later in life they had problems because they couldn't remember what date of birth they'd put in their own account, you understand, to get past that rule. And this was when they were eight or nine years old. So again, the people making the rules sometimes don't
Right.
Speaker 2 (30:08.078)
All
Speaker 3 (30:12.802)
have any idea of the technology. How easy is...
Like, eight- or nine-year-olds were able to create Facebook accounts back then. Yeah, until they said, let's at least have a rope for a seatbelt, like a little thread of a seatbelt, and make the minimum age 13, which many would argue is still too young.
Also, think about this. In 1993 there was a cartoon, back when the internet was starting to get big. It was a dog typing at a computer, and the caption said, on the Internet, nobody knows you're a dog. I mean, you don't know who the person is that's posting things, or typing, or even, now, talking to you. You can create an avatar
that looks exactly like a human.
Well, and let's talk about that.
Speaker 1 (31:09.858)
going on that. Why not?
You, Dave, and Sergio right now.
One thing real quick, real quick, David, before we go on. I think this is important. Sergio, you're talking about policymakers not understanding, not being savvy enough. I think we consistently underappreciate how tech-savvy kids are. Your nine-year-old was able to go on and create this Facebook account. A quick example, you know:
I'm engaged in what my son is learning at school, and there's a lot of electronic-based education that is algorithmic, right? You're getting math modules that are customized to your strengths and weaknesses, and it's really pulling together all this data and providing individualized learning plans, which is very, very cool, and I think that's a really great use of the technology. But at the same time, we've got seven- and eight-year-olds who are really comfortable using computers, really comfortable with how this personalization
fits into their lives and their everyday existence. Because we didn't have those things when we were kids, it's not a great mental leap to be unaware of how much exposure there is, and how sophisticated kids are around technology at a very, very young age.
Speaker 3 (32:30.594)
I want you to think about this, Zach, and again, I'm going back to my time. Everybody has a switch at home to turn the light on or off, correct? How many people do you think know exactly how that works? Not too many, sadly, but it's there and they use it every single day. The next generations aren't going to have to learn AI; it will already be built in.
So they won't care how it works, but they'll have it there at the touch of a finger. The same thing is happening now. Our kids, your kids, will be AI natives.
Now, that's a great topic for another podcast, and I've got a lot of thoughts about it, but this will go off the rails, because what being an AI native is going to do to creativity and critical thought is a great topic. But...
about that.
Speaker 2 (33:29.848)
Well, let's stay on the topic of AI though, because it used to be that deepfakes were used for parlor games, for fun, because it used to require hours and hours of samples to generate that fake avatar. That's why it was done with Tom Cruise and President Obama and President
Trump and things like that back in the day. All of that is gone. I mean, now I think it's three to five seconds' worth of audio and video needed.
Yeah, and before you needed a computer; now it's in your phone.
Right. And there are so many examples of all of us speaking somewhere on video, of kids posting something on video. And with three to five seconds' worth of that sample, they can make that child say anything, right? And that's really, really shocking. There's another statistic I saw, where 44 percent of the social media posts on Instagram and Facebook
You've got to.
Speaker 2 (34:43.544)
were false. Factually incorrect. 44 percent. Yet they had been reshared. Because people just take it at face value; they're like, well, I saw this, it had a picture of the president who said this, I can't believe they're doing it, right? Of course you'd be upset if they had actually said that or wanted to do that, but they hadn't, right? And I think that spread of misinformation
Sure. Gotta be real.
Speaker 2 (35:13.646)
It's driven by a lot of things. It's driven by people who don't like certain figures and make them appear to say horrific things. It's driven by other governments, right, when it's political. It's also driven by hacktivists, and by people doing it for the lulz of it, for the fact that you can get away with doing it.
Anybody with an agenda
Yeah, anybody with an agenda. Because, again, Meta is not looking out for our kids or for the truth. They just want the ad revenue. They just want the user counts to grow so they can sell more ads. That's the bottom line.
Yeah, and I wonder... I think the natural progression of this, and one thing that gives me hope, and maybe hope is a little too optimistic, is that people are really starting to understand that it's impossible to trust what you see online. I really think we're at a tipping point of skepticism, where
we've crossed that bridge and enough people, the majority of people, understand that this manipulation is so pervasive and so good that you really can't trust everything you see. We'll see what happens with my kids' generation growing up through that lens, and how vigilant they're going to be against
Speaker 1 (36:46.69)
being duped or fooled, no matter how good the fake is, if we just know that we can't trust the platform. And that right there is the biggest worry for Meta and X and all of these platforms. Because if you can't take anything seriously, then why are you there in the first place?
Yeah, and what we're talking about is what's called the liar's dividend. The liar's dividend is that there's so much misinformation out there that people will dismiss even what's true. Right, because "that must be a deepfake," and it just feeds into the cognitive biases we have. So what are parents supposed to do?
What are the... We can't just go Amish. Yeah. And I respect the culture, but here's the problem with that: there is a risk. Think of elderly people, right? There's a risk in them never logging into their Social Security account and never going online at all, because if they did, they would see, my gosh, this thing has been changed. Why? Because people can
Be honest.
Speaker 2 (38:04.245)
impersonate them, get online, reroute payments, and do all of those things. And the victims aren't even online to see it or put safeguards in place to protect themselves.
your point.
Speaker 1 (38:15.181)
Correct.
We are too deep in technology now. Again, it's funny when I ask people, "So, are you online?" "No, no, no. I never use Facebook. I never use Instagram. I don't use any of that." So my next question is, okay, when was the last time you stood in line at the bank to pay your mortgage, or your rent, or your utilities? Power,
gas, water? And then it's like a light turns on: "No, I do that on my phone. I do it online." Okay, so you are online. You have a presence online. Maybe you don't think you have information out there, but when you sign a contract with, I don't know, your bank, or your mortgage company, or your utilities, they have your name, your Social Security number,
how much you pay, how often you pay, what service you get, the address of your house. So you've already stepped inside. So...
That's exactly right. And beyond that, I think when it comes to AI, we need to learn to prompt it right. Because often AI itself isn't the security risk; it's what you put into the prompt. So it's learning how to prompt it. If you work in healthcare, Sergio, you teach your people: I can't upload a patient's medical records into an AI, even if it's a compliant AI.
Speaker 2 (39:58.112)
Even if it's a valid AI, I can't go do that, because there are still other HIPAA rules, like the minimum necessary standard and things like that. I need to anonymize it, right? And that holds true for a lot of industries, whether it's mandated by regulation or not. Put in just "patient one" and then do it. The AI can still give you the answer you need, but it doesn't know the record is tied to Mary Johnson.
It just knows it's this patient, patient one, right? And do the same thing with our intellectual property, our sales and operations data, our employee information, right? Anonymize. Yeah.
details, right? We just don't know what's going to happen with that data. And I know people are using ChatGPT and the others as relationship therapists.
Yeah, a lot of people are, for online therapy.
applications for your personal life, and there is a liability to having that information gone and out in the world. I think the tip to anonymize, and to really be careful and vigilant in how you're prompting and what you're sharing, is applicable to anybody, and I mean anybody, who's using it.
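The prompt-anonymization habit the speakers describe can be sketched in a few lines of code. This is a minimal illustration, not a production de-identification tool: the regex patterns, placeholder format, and `anonymize_prompt` function name are all assumptions for the example, and the speakers themselves note that real-world rules (like HIPAA's minimum necessary standard) demand much more care.

```python
import re

def anonymize_prompt(text, known_names=None):
    """Replace obvious identifiers with placeholders before sending text to an AI service.

    Returns the scrubbed text plus a mapping, so the placeholders can be
    re-identified locally after the AI responds. Only a few common US-style
    patterns are covered here; this is a sketch, not a compliance tool.
    """
    mapping = {}

    def store(kind, value):
        # Record the real value locally and hand back a numbered placeholder.
        placeholder = f"[{kind}-{len(mapping) + 1}]"
        mapping[placeholder] = value
        return placeholder

    # Social Security numbers written as 123-45-6789
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", lambda m: store("SSN", m.group()), text)
    # Email addresses
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", lambda m: store("EMAIL", m.group()), text)
    # Names are supplied by the caller; detecting names in free text reliably
    # is hard, so this sketch does not attempt it.
    for name in known_names or []:
        if name in text:
            text = text.replace(name, store("PATIENT", name))

    return text, mapping
```

For example, `anonymize_prompt("Mary Johnson, SSN 123-45-6789, missed her refill.", known_names=["Mary Johnson"])` returns text the AI can still reason about ("patient one missed her refill"), while only you hold the mapping back to Mary Johnson.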
Speaker 2 (41:22.466)
Yeah, when we think about it, back in the day we had two versions of our life, right? I keep going back to it, but I remember there were computers in the offices, big white or tan boxes, and they would speed things up, they would help things along. But when they broke down, we were fine. We still had a kinetic, physical world outside of technology. We still had the processes in place.
We still had a way to conduct business, a way to socialize, a way to do all these things. We've all gone through this digital transformation where we are more dependent on our technology, and we've gotten away from those kinetic, physical processes. Like a credit card, right? Credit cards used to have raised numbers so that, should the machine break down, we could still run them through
with carbon paper and still complete the transaction. We can't do that anymore. When that machine is down, we can't do business. Right? Sure, you can use cash, but who's walking around with cash? Yeah. Who's even accepting it?
Well, in Walmart, if you want to pay with a $50...
Yeah. Yeah. "I don't know what that paper means." You're like, what? Boy, the mafia must be mad; all they deal with is cash. What are they going to do, use crypto now? Well, this is a good conversation. What do you see? What's your prediction for 2026 and beyond? I know other countries are
Speaker 2 (43:13.13)
ahead of us on that, because apparently we've got other issues to deal with. But there's talk of an AI Bill of Rights, of regulation on AI and its usage. What are your thoughts on that? Do you see something like that coming? I think if it comes, it's going to come from somewhere like Finland or Denmark; they'll try it, and maybe it'll work, or maybe it won't and we won't do it. But if it works, then we're like, maybe we could do it.
Yeah, I don't know if I have a lot of faith that it's going to come from the United States, certainly not in the next year. There's just not enough smoke yet. And there have been plenty of opportunities and plenty of horror stories, the ones we've mentioned in the last 45 minutes together and plenty more out in the world. But to your point, there are other things going on that have the hearts and minds of Americans, and this just isn't a priority. And without that,
I just don't buy that we'll see anything short term. Personally, the AI Bill of Rights, if you haven't read about it, I encourage you to go do that. I think it is maybe the most critical public policy initiative that we could have in the next 10 years. I hope we will have it within the next 10 years, but we won't have it next year.
I hope it's sooner than that. But also think about the other countries that we are not on good standing with. I'm talking about China, Russia, North Korea. They are advancing AI faster than we are. China right now, I'd guess, is at the tip of the spear. And what will that do to us?
That is the scary part too, that is scary.
Speaker 2 (45:02.862)
Absolutely. Yeah. I think it's like an arms race with those other countries; I think it's very much that way. And it seems to me, with the AI Bill of Rights, that we need to identify some inalienable rights to our identities, you know? Until we recognize that... But you have political fights that aren't intended to touch this subject,
but that do, right? I've got friends in California and New York, and I have more security getting into Costco, I have to show my identity just to shop at a store, than they have when voting for the most powerful person on the planet. There are no rules or regulations; they don't even need to have an ID. And it's like, that debate isn't intended to affect
this, but in that environment it would affect this, right? How can you have strict rules on a Bill of Rights or on identities when people would argue against it? While it would be safer, you can see the potential for abuse or discrimination or things like that, which we don't want to happen. We can have a good reason for wanting something, and it can still sometimes have those bad consequences. So.
Great discussion guys, I appreciate it. I think these are topics that all of us are gonna deal with more and more. And I encourage everybody to read up on the AI Bill of Rights, to read up, check out that Netflix documentary, because it blew my mind, I had to tell you guys about it right away, because I couldn't believe how it turned out. And just to see what happens when people really abuse some of these technologies, it's really shocking. Any parting words, predictions of the future, tea leaves?
Sergio, I'll start with you.
Speaker 3 (47:00.652)
Well, definitely life is going to change, just like it did in the 1800s with the Industrial Revolution, when farmers became factory workers and gave birth to the middle class everywhere on the planet. AI is already here. AI is basically that guy who, when you open the door of your house, is already sitting at your table. Yes.
That annoying neighbor that's
Down there near.
And he's asking you what's for dinner. So basically it's everywhere now. A lot of people will need to learn new skills, and prompting is one of them. But think about it: how many of us know what we want? Sometimes we don't even know what we want, and we don't know how to ask for it. That's human to human, right? Now imagine having to
tell a machine. How many times did my grandmother tell me, "Hey, can you get me that?" "What?" "The thing that is over that other thing." "What thing? What is that?" So imagine that at an AI level.
Speaker 2 (48:16.366)
Mm-hmm. Yeah, that's exactly right. Zach, how about you?
Yeah, I feel like I've prognosticated and predicted enough. I want to just leave everybody with a piece of advice, and we've already touched on it: anonymize your prompts. Do it. Whether you're using these GPTs for work or for your personal life, it's just a really important best practice. Get comfortable with understanding why to do it, and if using these tools has become a ritual,
make anonymizing your prompts a ritual too. I really think it's critical.
Yeah, I like that advice. If there's one takeaway: freeze your credit and anonymize your prompts, right? I mean, I think that's so important.
and do it yesterday.
Speaker 2 (49:07.278)
Yeah, and do it immediately, right. I think that AI is going to keep evolving, but I don't think the future of cybersecurity necessarily sits with AI. I still think it sits with behavior. I still think it's about us, because we are either the strongest defense or the weakest link.
It's about humans.
Speaker 3 (49:31.34)
That's correct. And education is the best tool you can have.
Yeah, exactly. Well, thank you, gentlemen. Great discussion, I appreciate it. We'll talk to you guys soon; we'll have another cyber happy hour coming up. So thanks, I appreciate it. Thank you.
Dave sucks.
See ya.