Cyber Crime Junkies
Translating cyber into plain terms. The newest AI, social engineering, and ransomware attack insight to protect businesses and reduce risk. The latest cyber news from the dark web, research, and insider info, plus interviews of global technology leaders sharing true cybercrime stories and advice on how to manage cyber risk.
Find all content at www.CyberCrimeJunkies.com and videos on YouTube @CyberCrimeJunkiesPodcast
Think You Can Spot AI Job Applicant Fraud? Think Again!
On this Cyber Crime Junkies deep-dive, we expose how deepfake job applicants, AI-powered resumes, and AI voice-cloning are turning remote hiring into the ultimate con. Real-world cases include executives duped by glitchy interview bots—and one bizarre scam channeling funds to international criminals.
👨‍💻 What you’ll discover:
· AI resume generators that craft “perfect” fake backgrounds.
· Deepfake Zoom/Teams interviews are already here.
· New ways to identify a deepfake.
· Scams funneling salaries overseas—like $88 million to North Korea.
🛡️ Protect your hiring processes—learn the red flags and detection strategies.
🔔 Subscribe for weekly cyber-crime exposés with Cyber Crime Junkies.
#CyberCrimeJunkies #DeepfakeJobApplicants #AIResumeScam #InterviewScams #VoiceClones #PayrollFraud #RemoteHiring #AIinRecruiting #HiringSecurity
Chapters
- 00:00 The Rise of Deep Fake Job Candidates
- 05:38 The Impact of AI on Hiring Practices
- 10:45 Real-World Consequences of Deep Fake Technology
- 14:00 Deepfake Job Interviews
- 17:37 The Scale of the Problem
- 21:00 AI Fraud To Watch For
- 26:33 Strategies for Mitigating Risks
- 30:00 What Employers Need To Know
- 33:00 Deepfake Sample
- 38:30 The Future of Hiring
Growth without interruption. Get peace of mind. Stay competitive. Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com
Have a guest idea or story for us to cover? You can now text our podcast studio directly: (904) 867-4466
🎧 Subscribe now http://www.youtube.com/@cybercrimejunkiespodcast and never miss a video episode!
Follow Us:
🔗 Website: https://cybercrimejunkies.com
📱 X/Twitter: https://x.com/CybercrimeJunky
📸 Instagram: https://www.instagram.com/cybercrimejunkies/
Want to help us out? Leave us a 5-star review on Apple Podcasts.
Listen to Our Podcast:
🎙️ Apple Podcasts: https://podcasts.apple.com/us/podcast/cyber-crime-junkies/id1633932941
🎙️ Spotify: https://open.spotify.com/show/5y4U2v51gztlenr8TJ2LJs?si=537680ec262545b3
🎙️ YouTube Podcasts (formerly Google Podcasts): http://www.youtube.com/@cybercrimejunkiespodcast
Join the Conversation: 💬 Leave your comments and questions, or text the studio number above. We'd love to hear your thoughts and suggestions for future episodes!
Tags:
Deepfake Job Interviews, AI Job Applicant Fraud, what employers need to know, Latest AI Risks To Small Business, Insider AI Risks, AI Hiring Fraud, Exposing AI Hiring Fraud, Job Applicant Fraud, AI Fraud To Watch For, New Types Of Ai Fraud, Latest AI Fraud, Ai Impersonation, Cyber Crime Junkies, AI resume scam, interview scam, voice clone interview, AI in hiring, remote hiring fraud, hiring fraud detection, payroll fraud, deepfake Zoom, deepfake job applicants, ChatGPT resumes, AI interview tool, fake candidate detection, HR cybersecurity
Takeaways
· Deepfake technology is becoming a significant threat in hiring.
· Employers are struggling to identify fake candidates during interviews.
· In one case, 75% of the most impressive resumes turned out to be AI-generated fake personas.
· In one company's test, 12.5% of applicants for a software role were fake.
· The consequences of hiring deepfake candidates can be severe.
· Cybercriminals are using deepfakes to fund illicit activities.
· Companies need to enhance their vetting processes for remote hires.
· AI tools are being used to create and optimize resumes.
· The FBI is actively working to disrupt deepfake operations.
· Employers must adapt to the evolving landscape of recruitment and cybersecurity.
Speaker 2 (00:12.888)
Job hunting sucks. We all know it. But here's a pop quiz. What's worse than losing out to somebody more qualified? How about losing out to somebody who's not even real?
Yeah, welcome to 2025, where your toughest competition might just be a deepfake in a virtual suit, rocking an AI make-believe resume and a glitchy webcam smile. Where companies are hiring glitchy avatars with AI-generated resumes. Perfect on paper, seemingly real in the interview, and totally fake.
This is the new frontier of hiring fraud. And trust me, it's going to get weird. It's about to get seriously uncomfortable and this isn't science fiction. It's happening right now all across the United States. We're talking about deep fake job candidates, hacked hiring and resumes built by robots. Small talk sucks, so let's dive in. This is Cybercrime Junkies and now the show.
Speaker 2 (01:26.03)
Catch us on YouTube, follow us on LinkedIn, and dive deeper at cybercrimejunkies.com. Don't just watch, be the type of person that fights back. This is Cybercrime Junkies, and now, the show.
Speaker 2 (01:46.69)
Look, we all know that searching for a job can suck. But how about finding out that the person they did hire isn't even the person they interviewed? Join us as we explore AI and hiring and deep fake job applicants. I mean, speaking of nightmares, imagine interviewing the perfect candidate only to find out that their perfect resume was generated by an algorithm and the person you're speaking to.
and interviewing isn't even the same person that you wind up hiring. So this is causing an issue on both sides of the interview table. I mean, in 2025, here's what we're up against. Deepfake-enabled fraud
has already led to $200 million lost in just one quarter of 2025. AI-generated crypto scams are up to $4.6 billion. And look, I know no one's really surprised, because it does deal with crypto, and we all know what that's like.
But over 50% of leaders admit that there's zero training for employees on how to address deepfakes. So we're just gonna let this exploitation zone, this new wave of leveraging AI for evil, take advantage of us, and we don't wanna train our employees on it? That doesn't make sense.
61% of executives say that they have zero protocols, zero, zero protocols, for addressing AI-generated threats like deepfakes. You see, there's a brand new report that came out, and it's mind blowing. 17%, that is one in six, one in six. I'm not making up this math, it's ridiculous.
Speaker 2 (03:37.902)
17%, one in six hiring managers, are now reporting, get this, that the candidates are not real. AI deepfakes applying with perfect resumes and passing all background checks, only to later learn that one, it's not the person, it's an impersonation, or two, it's a synthetic ID altogether.
So let's peel back the curtain on this wild dystopian world of AI in job hunting, where deepfakes and digital imposters are making resumes and Zoom calls feel like scenes from a Black Mirror episode. Is your next hire the superstar they appear to be, or just a dishonest piece of crap cybercriminal? That's what we're trying to figure out. Today's job market is weird.
I mean, it's a terrifying circus where AI-generated deepfakes are literally passing interviews, getting hired, showing up to meetings like it's nothing. You're not being scammed by a genius hacker in a hoodie. You're getting duped by some guy named Chad hiding behind an AI face filter. And that's the reality of it. In a recent news alert,
one in six hiring managers admitted on the record that they've interviewed somebody who wasn't real. Not catfish real, like this-person-is-a-deepfake-avatar-sent-from-the-digital-underworld real. AI doesn't just polish resumes now, it fabricates them. And that's a big shift.
Add a dash of Midjourney for the perfect headshots, and congrats, you've just met the senior sales ninja Alex, who is actually a guy in Mumbai using a video deepfake filter and voice modulation software feeding him answers off screen.
Speaker 2 (05:46.656)
It's not just your grandpa Googling AI. IT pros are getting catfished mid interview by synthetic versions of job seekers. And I'm going to show you an interview with one in just a bit. Look, some AI deep fake job applicants are simply attempting to land multiple jobs at once to boost their income.
But there's evidence that suggests there are nefarious sources and state-sponsored groups behind this, and the FBI announced a big bust and takedown in a coordinated international effort, which we're going to talk about in just a second, real briefly. But it's leading to big, big consequences for unwitting employers. And this is happening across organizations of all sizes here in the U.S. It's happening in the SMB space,
in the micro space, as well as in the enterprise and mid-tier markets. We've talked to people at all of these, and HR is struggling. The deepfake detection technology is just not there yet. It's really an AI arms race, and right now the good guys are losing.
Circle back to 2024: cybersecurity company CrowdStrike, you know, the blue-screen-of-death company with really good intel and really bright people working there, responded to more than 300 instances of criminal activity related to Famous Chollima, a major North Korean organized crime group. More than 40% of those incidents were sourced to IT workers who had been hired under a false
identity. Adam Meyers, senior vice president of counter-adversary operations at CrowdStrike, is quoted as saying much of the revenue they're generating from these fake jobs is going directly to a weapons program in North Korea.
Speaker 2 (07:51.542)
Let's think about that for a second. So they are getting in under a synthetic ID or impersonation, meaning impersonating a real person or creating a fake ID altogether. With the amount of
identity verifications that are available online, they're just taking a real person most times, and they will be that person. They create this phenomenal resume, because when you get a job description, right, you feed that into AI and it creates the perfect resume. You're gonna get the interview. It's exactly what they're looking for, but it's too good.
But then they come in, and oftentimes they mess up the interview, and that's fortunate for us, because they haven't honed that skill down. But a lot of times they're getting in, and that might be due to poor interview practices. But they're also clearing all the background checks, because these people don't have the traditional flags, like a felony, a DUI, other issues. None of those are popping up, because the identity isn't real or it's been stolen from
somebody with a clean record. But then...
You'd think they would get in and, like, immediately launch ransomware or something like that. Maybe that's just the cybercrime-story person in me that thinks that's what always happens. But it's not, right? What they're doing oftentimes is just getting the paycheck, and that money isn't even going to them. It's not even their personal greed. It's because they will die if they don't do this, right? Because they are in North Korea,
Speaker 2 (09:36.488)
and that money is directly tied and been tracked to funding the North Korean missile system.
It's amazing that this reaches all the way to a small town, to an SMB that's just hiring somebody to update their website or do some coding work or something like that, meaning it's going to be a remote worker. And that is funding the North Korean missile system. That's what's really happening.
In December 2024, 14 North Korean nationals were indicted on charges related to a fraudulent IT worker scheme. They stand accused of funneling at least $88 million from businesses into the weapons program of North Korea,
Speaker 2 (10:26.262)
and they did that over a period of several years. The Justice Department is also alleging that some of those workers threatened to leak sensitive company information unless their employer paid them an extortion fee. Now we're talking, because once they're in and they have all this access, there's no way they're just there for the paycheck. That's kind of what I thought. So that makes sense.
So then there's this legacy case from Vidoc Security, where a guy's face starts twitching like he owed the Matrix money. And when asked to wave his hand in front of the camera, just dead silence. He just froze. That's been all over social media. We'll show an image of it, but it's really...
It's really mind blowing, because it actually captures the moment when the interviewer asks him and realizes that it's a deepfake. And don't even get me started about the Columbia student.
I don't know if you guys heard about that. There's the Columbia student who built a real-time cheating tool for interviews, so it feeds answers to him while he's screen sharing. And he turned job interviews into Twitch streams. I mean, it's really kind of remarkable, in a sinister kind of way. And of course, the one that we've talked about all the time
is the bank director who got impersonated, causing an employee to wire transfer $35 million. $35 million in cold hard cash, just by deepfaking the director's voice and convincing, through social engineering, an employee to fall for the story and transfer the money. And it worked. It's unbelievable.
Speaker 2 (12:31.01)
So let's zoom out from theory and tech talk and land squarely in reality.
Because while deepfake job interviews might sound like a scene from Mr. Robot, for one small business leader, it was a Tuesday. Meet John Fly, CTO of a small but mighty AI-powered marketing company called FirmPilot. They help law firms get found on Google and flex their legal clout online. Smart, right? Here's where it got messy. As FirmPilot started growing and started to scale, John found himself knee deep in interviews for a few
different technical roles. Think AI, content systems, automation, a real geek paradise. Here's the catch: 75% of the most impressive resumes, 75%,
were completely fake. Not just exaggerated a little. Fake personas, AI-generated resumes, deepfake avatars in live interviews on Teams and on Zoom. All deepfaked.
Speaker 2 (13:46.904)
So let's think about this for a second, and let's talk about the actual application of AI deepfakes that small to mid-sized businesses are seeing today. Like, what's real, what's actually happening out there. And you'll be surprised. I'm going to play you a clip from an interview that I had with the chief technology officer of a 15-person small business, and you will hear how he explains how they are getting bombarded with AI deepfakes and AI-generated resumes every day, which
Speaker 2 (14:29.606)
is just unbelievable. And you will hear how he pauses an interview, breaks it down, and actually speaks to the person. And luckily for us, the person actually came clean. So I really want you to check this out, because it is really shocking. Check it out.
that we're looking for. So yeah, I mean, when I'm reviewing these resumes, the ones that look like, you know, rock star developers, or, you know, the 10-out-of-10 people I'm excited to talk to, I really got kind of cynical by the end of this, because I would be looking at the strongest resumes and thinking, even before I got it on camera, is this even a real person? It was just so bad.
I had to go over it with a fine-tooth comb and prepare questions to really dig in, in case it was a fake. And for the developer role that I had, probably about 75% of the highly qualified resumes ended up being a completely fake persona when I got them on the camera.
Speaker 2 (15:40.396)
Wow. Now, were they using AI or something like that in the Teams meeting? They were. So AI deepfakes are actively being used. Wow. I need to tag this part over to another podcast, because that is really shocking for business owners to understand. You know, most of the talent isn't necessarily local, right? And so
Yeah.
Speaker 2 (16:10.444)
you need to find the best person with those skill sets, so you look broader. And so that usually necessitates having meetings like this, having a Teams meeting, having a Zoom meeting, whatever. Only times have changed, and the software's really, really good.
So I was able to start really putting it together, and by like the tenth one my spidey sense went off right at the start. They were always using a digital background. Most of them had the large over-the-ear headphones, which by themselves aren't red flags, but what we would notice, well, I'll just tell you, about 40 or 50 of these into it, I did find one of the guys.
Speaker 3 (16:53.838)
His resume was so ridiculously strong, and he was such a weak interviewee, I just asked him, I said, I think the resume is fake. I think you're AI right now. Can we just stop, and you tell me what's going on? And I had one of these guys just kind of break it down.
So he said he's in an office. He said, I'm surrounded by people. We're all doing this. He said they scrape jobs. They'll use AI to create a really powerful resume. And he said, especially for the big companies that have multiple rounds, sometimes it's not even the same person showing up round after round. And so they show up, they have the resume, they have a terminal in front of them.
What did he say?
Speaker 3 (17:43.118)
We'll typically have somebody listening in, prompting the LLM based on my question. So I'll ask a question. They'll usually pause with, that's a great question. Let me think about that.
Meanwhile, they're prompting the AI.
Yeah. And then they're reading. You'll see them lean in and read sometimes, but the really good ones, that might be coming in over their headphones. But the entire schtick is, you know, I asked him, like, what's the goal? What's the big payoff? And he really said the goal that they're after is to get a job, to get on payroll, and then to either farm the work out to AI or just stretch it out as long as possible to pull in a paycheck.
Speaker 2 (18:30.058)
And the guy honestly just flat out confessed. There's not gonna be any accountability. No one's going to arrest this guy. He's not here in the United States. And the guy confessed: yeah, we're in a room full of people doing this. We use AI to build resumes and feed answers in real time. Our goal is to get on payroll. That's what he told them. Not to get access,
not even to steal client data, just to get paid. So John was fortunate in that sense, that they weren't there to steal sensitive data, to exfiltrate data, fancy word for steal, or to launch something like ransomware, especially if
they would have access to law firms who then have access to all of these clients. You see the supply chain attack that can happen here. But here in this scenario, luckily, it was contained by the attackers themselves. They were just there to get on payroll. But that's what shook John the most, the industrial scale, the playbook.
This wasn't a lone scammer. It was organized, routine, and so effective that these fake candidates were rotating through interview rounds under different identities, like a casting call for con artists. These applicants passed standard background checks. That's the bottom line. They were that good. It's not paranoia if it's actually happening, right?
And my message to you guys is this: it is actually happening. So what are you doing about it? That's my question to you guys, and put it in the comments. I want to know what you guys are doing about this. What do you see? Have you come across anything like this?
Speaker 2 (20:31.222)
And depending on your role, I know several of you are in HR roles. Some of you have experienced some of this, because I've spoken to you. But I really want to know what you're seeing. We don't have to publicize it. I just want to know, so we can help others. Look, John's story, like hundreds of others, reminds us that no company is too big or too small
to be targeted. In fact, small businesses are really prime targets: fewer security layers, no security people on staff, less scrutiny, and a greater sense of urgency to hire. Here's another real-world example. The CEO of Pindrop, a 300-person information security company, says his hiring team came to him with a strange dilemma.
They were hearing weird noises and different tones, tonal anomalies, while conducting remote interviews with job candidates. To get to the bottom of this, the company posted a job listing for a senior backend developer. It then used its own in-house technology, what they sell at Pindrop, to scan candidates for potential red flags. We started building these detection capabilities not just for phone calls, but for
conference calls and video systems like Zoom and Teams, the CEO of Pindrop told Fortune magazine. And he also said, we do threat detection; we wanted to eat our own dog food, so to speak. And very quickly, we saw the first deepfake candidate. Out of 827 total applicants for the developer position. First of all, pause for a second. What the hell? There are 827
applicants for a backend developer job? That is incredible. And we need more job openings, because that's ridiculous. How are good people supposed to get employed with those stats? Anyway, circling back. The team found that roughly 100 of them were fake. Holy crap. So that is 12 and a half percent, if I did my math right. Let me see.
Speaker 2 (22:53.166)
Yeah, 12 and a half percent. I'm really good at math. 12 and a half percent of all the job applicants were f**** faked. That's crazy. The CEO then said, this was never the case before, and it tells you how, in a remote-first world, this is increasingly becoming a problem. Now look, you can't stop it when you're hiring for remote
Speaker 2 (23:21.932)
workers. And while there are a lot of SMBs that only hire locally and only do it in person, that's great. But oftentimes the talent you need is not local. Sometimes the talent you need is a state away, or a couple states away, or three states away, right? And that initial interview, even filtering through your funnel of these 800-some resumes, is going to be remote,
right? But we have to fix the process. We have to verify in person and run the background check to confirm that the person is actually the one submitting the application. And then there's the resume game. Applicants are optimizing their CVs, their curricula vitae, with ChatGPT or Claude or whatever AI platform they're using,
auto-applying for hundreds of roles, while hiring bots are screening out allegedly unqualified candidates. So candidates are using AI to boost resume acceptance, and then jumping on Zoom and Teams video interviews using AI deepfake technology. Two uses of AI, just in the HR space. Meanwhile, companies are leveraging AI to screen candidates, but that's not working.
It's unbelievable. Now cue the paranoia. Old school social engineering is starting to make a comeback. Enter stage left the paranoia era courtesy of Wired magazine.
No more one-click trust. Now every email, voice call, and LinkedIn direct message gets a background check. Prove it's really you, as Wired put it. People are texting selfies with timestamps. They want you to jump on a video call, then swing your camera lens around. Is that your living room or a green screen? they're asking. It's unbelievable.
Speaker 2 (25:30.55)
And here's where it's getting wild. CrowdStrike found that some of these deepfake workers were funneling stolen salaries right back to North Korea, like we were talking about. The North Korean operatives are part of an entire organized crime operation, state-funded and state-sponsored. It's unbelievable.
Up to 12% of applicants for software jobs are ghost identities, the FBI is saying. The feds called it a canary in the coal mine. This tech chaos is about to go mainstream. Nicole Yellen, a PR pro in Detroit, now screens every unknown contact like she's in the CIA. She even tests accents, because apparently that's a trust indicator now, according to Wired. There's a big write-up in Wired
magazine about this. So your sweet, soft-spoken junior app-dev employee might just be funding the next missile launch from Kim Jong Un, which is a big issue. And that brings us to today, June 30th, when the FBI reported
that the US Department of Justice and the Federal Bureau of Investigation announced a coordinated nationwide action to disrupt North Korea's illicit remote IT worker operations. Check this out. Twenty-nine suspected laptop farms were searched and taken down across 16 different states. Twenty-nine financial accounts and 21 fake websites were all seized.
Two hundred different computers and servers were seized, two indictments were unsealed today, and an alleged US facilitator was arrested. These operatives, this is according to the press release by the FBI today, gained employment with over 100 US businesses, where they not only siphoned millions of dollars to fund the DPRK regime, which is Kim Jong Un's
Speaker 2 (27:47.722)
North Korean regime, they gained access to, and in some cases stole, export-controlled US military technology from their employers. Let that sink in for a second. That's where the exploitation, the combination of socially engineering people along with AI, is making us all
more at risk, even from a national security standpoint.
I mean, it's mind blowing, the level of risk that we are operating under right now, because technology is advancing so fast, and we are still doing things generally the same way it's always been done, because we are slow to adapt and we are not interested in learning. We're not interested in seizing the moment to create new policies and checks and balances,
and we're gonna pay the price for it. Here's what the FBI said about North Korea. Brett Leatherman, assistant director of the FBI's Cyber Division, he's phenomenal, said: North Korean IT workers defraud American companies and steal the identities of private citizens, all in support of the North Korean regime. That is why the FBI and our partners continue to work together to disrupt infrastructure, seize revenue, indict overseas IT workers, and
arrest their enablers in the United States. The actions announced today serve as a warning: if you host laptop farms for the benefit of North Korean actors, law enforcement will be waiting for you.
Speaker 2 (29:37.854)
Roman Rozhavsky, assistant director of the FBI Counterintelligence Division, said today: North Korea remains intent on funding its weapons programs by exploiting US companies, but the FBI is equally intent on disrupting this campaign and bringing its perpetrators to justice. So what are you supposed to do today if you're a small to mid-sized business and you don't have deepfake detection? Frankly, I wouldn't even be investing in it,
because it doesn't seem to be up to the level where it's catching everything. All of the ones that we keep testing, ones that we know are deepfakes, we upload into these systems, and they're not detecting them. They're still showing that it's real. So they're just not there yet. They're good, they're getting there, but they're hit and miss right now, so you really can't rely on it solely. But it won't be long, and they'll be where they need to
be. And then deepfake detection and deepfake policies are going to be the norm, probably in the next few months or years. So having those in place and getting ahead of the curve, that's really key.
But what companies can do today, and what leaders can do, is a couple of things. One, you need to harden your remote-hiring vetting, the way that you vet your employees. Verify identities, not just at onboarding; you have to do it continuously. You have to do it in person. There needs to be some verification done where there is a live human that is met by somebody trusted. That is just
key, or they're verified in some other way. Also monitor for unusual remote logins. This can be done through traditional cybersecurity detection platforms. We have them; lots of MSSPs have them. In the enterprise space, I know that internal organizations leverage these tools, and in the SMB space you can rely
Speaker 2 (31:50.384)
on an MSSP rather than an MSP, but you can get that stuff done. And they'll be able to detect things in real time and find remote logins, suspicious logins. The same technology can be used that is used for
Speaker 2 (32:12.942)
business email compromise protection, right? Or other social engineering protection. So that is, you know, the number two thing. And then review the latest FBI public service announcements for red-flag indicators and detailed mitigation steps. In January 2025, there was an update on data extortion tactics; we will link these below. And in May 2024, there was guidance on US facilitators.
The point is, we've been studying this for a while. There was an FBI alert actually in July of '22. That is a long time ago. So July of '22 was the very first FBI deepfake warning about jobs, and they were saying people were doing this back then. I mean, this has been going on for a while, but today they announced a big takedown, which is really good.
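To make the "monitor for unusual remote logins" advice above concrete, here is a minimal sketch of the kind of check a detection platform runs: flag logins geolocated outside a company's expected countries, and flag "impossible travel" when one account logs in from two countries within a short window. Everything here is illustrative, not any vendor's real API: the `LoginEvent` record, the expected-country set, and the six-hour threshold are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class LoginEvent:
    user: str
    country: str      # country code geo-resolved from the source IP (assumed upstream step)
    when: datetime

def flag_unusual_logins(events, expected_countries, max_gap=timedelta(hours=6)):
    """Return (user, time, reason) tuples for suspicious remote logins."""
    flags = []
    last_seen = {}  # user -> (country, time) of their most recent login
    for e in sorted(events, key=lambda ev: ev.when):
        # Rule 1: login from a country the company never hires or operates in
        if e.country not in expected_countries:
            flags.append((e.user, e.when, f"unexpected country: {e.country}"))
        # Rule 2: same account in two countries faster than a person could travel
        prev = last_seen.get(e.user)
        if prev and prev[0] != e.country and (e.when - prev[1]) < max_gap:
            flags.append((e.user, e.when, f"impossible travel: {prev[0]} -> {e.country}"))
        last_seen[e.user] = (e.country, e.when)
    return flags
```

A real MSSP platform layers far more signal on top (device fingerprints, VPN/proxy detection, login hours), but the core idea, compare each login against what is normal for that account, is the same.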
And we've entered what Wired calls the paranoia era. Now every Zoom call comes with requests like:
wave your hand, tilt your head, show me your room, no green screens allowed. It's less interview and more like trying to pass TSA with your dignity intact. Recruiters are firing back with their own AI tools as well. And this is important. There are biometric ID checks, there's liveness detection, blockchain resumes. We're basically training AI to become counterintelligence
agents, which is good. And next up, there'll probably be polygraph tests with retina scans before onboarding. But, you know, we have these live awareness trainings, and what we've shown in those is
Speaker 2 (34:07.678)
how realistic it is, and how it's virtually undetectable by the human eye. Deepfake detection software just isn't there yet. When we speak about deepfakes, it's hard to understand through verbal description. Seeing is believing. So let me walk you through the following video, and then we can break it down for you.
It's getting to the point where deepfakes are nearly impossible to decipher as computer generated, which is super exciting, but also kind of scary. Now my face is slowly morphing into something else, and it's basically pixel perfect. Look, it's like, amazing. I'm not me. I mean, I am me.
Speaker 3 (35:00.248)
but I'm not me, and that's kind of nuts.
It scares me. It really does.
The FBI tells NBC News they're following the rapidly developing technology closely. It's a real concern. Lawmakers and law enforcement are getting worried about this technology. Here's a letter from Congress to the Director of National Intelligence. A 43-page report from the U.S. Department of Homeland Security. And look at this title page, look at that graphic design. DHS says that deepfakes and the misuse of synthetic content pose a clear,
Speaker 3 (35:31.502)
present and evolving threat to the public across national security, law enforcement, financial and societal domains. The Pentagon is using its big research wing, the one that helped invent, I don't know, the GPS and the literal internet, that one, to look into deepfakes and how to combat them. Like, they're taking this very seriously. And then of course, deepfakes are being used for good old-fashioned cybercrime. Like this group of fraudsters who were able to clone the voice of a major bank director,
and then use it to steal $35 million in cold hard cash. $35 million. Just by deepfaking this guy's voice and using it to make a phone call to transfer a bunch of money. And it worked.
That's a lot of money.
Speaker 3 (36:17.026)
We found you involved in a few different breaches that, unfortunately, almost every American is going to show up in. Look, I obviously like to think we do a lot of smart things, but I'm obviously not immune to being human. Let's get into the pretext. It's about to get creepy. Rob, we're gonna do a voice clone demo. So I took a clip of you speaking from a video on social media. I put it into my voice cloning tool that requires no consent.
I spoof your phone number so I make it look like it's calling from you on caller ID. Your team member picks up the phone call, they answer it, they hear your voice. Hey, sorry, can you remind me of my password manager's master password? But I mean, it's very accurate. It's definitely my voice. So this is me wearing your face like a digital mask. Terrible. I took about two minutes of that video and I put it into this tool with no consent and I spit out your voice.
Asking about the master password again. Imagine this isn't a Zoom or a Teams call, okay? Hey, sorry, can you remind me of my password manager master password? Appreciate it. The thing that's going through my mind is, Rachel, imagine what an attacker with unlimited time and resources could accomplish. So we're in the kind of Wild West phase where the lawmakers are kind of just trying to get their head around this stuff. I mean, that's unbelievable. Yeah, that was within a few minutes. Deepfake technology is getting faster, cheaper and more realistic.
making it easier than ever to create scams or spread misinformation. AI companies have created deepfake detectors, but this cybersecurity expert says they have serious limitations. Anyone that promises that one-click type of answer is wrong. I can upload things that I know are deepfakes, because I made them, and they'll say that they're likely authentic. It has the very real potential of creating a false sense of security. Probability 5.3.
Same audio clip that was 100% AI generated, and it thinks it's real. I think somebody that's not thinking about this with nuance would go, it's probably real. Yeah, and that took no effort. Deepfakes are getting better and better, more believable, and the tools that maybe I thought would help me figure it out may not be so helpful. If we have set up some kind of code word, I can ask you that. It's simple human things like that
Speaker 3 (38:33.726)
that we're going to be able to use until the technology catches up.
It's really remarkable what we've seen, and it shouldn't come as a surprise to anybody today unless you're just not aware. And if so, then hopefully this video will help you. But the distrust that this causes us to have is draining.
And if you look back at our prior episode with Perry Carpenter about AI deepfakes, it's a really good one. I interviewed him right when he released his book FAIK. What he talks about is not only the exploitation zone, how deepfake technology is advancing so fast that we're not able to adapt fast enough as humans, and the detection technology we have today is not able to advance fast enough either. But what's interesting is there are also a lot of societal and privacy issues happening now, because people are dismissing things that are true, or they're propagating misinformation, and whether intentional or not, they're causing disinformation. Meaning, when you see a social post or a video and it's shocking,
you don't know if it's true. So you need to verify it before you go commenting on it, forwarding it, or reposting it. Because if it's false, you look like an idiot, right? And that's not what we want for anybody. We also don't want to just be spreading fake information, because then, when you have something legitimate to say,
Speaker 2 (40:29.27)
nobody's going to buy from you. Nobody's going to believe you, right? So being authentic and actually stating what is accurate and real helps you, and it also helps society. I don't care what country you're in; that applies across the world. So just verifying before forwarding is really, really key. But it also causes us to question what we see,
and whether it's true. That's exhausting, because now we have to spend more time just to receive information, and it's really trying.
The distrust, honestly, is draining. LinkedIn recruiter Ken Schumacher notes that when AI fraud goes rogue at scale, old tricks like phone-camera flips or rapid-fire quiz questions about a candidate's city start to creep back. So they're going back to...
some of the tricks that were used in HR and interview processes back in the 80s and 90s, because they're just trying to verify that you really do have that experience. And that's not a bad thing. As we just saw in that video, sometimes simple human checks work, like having a challenge word, like Adam Benwell's challenge word that we have on our channel.
Speaker 2 (42:06.102)
They're brilliant ideas. Having a challenge word for your family, right, is a really good idea, because we're just trying to vet human to human, and it doesn't need a technology solution. We're just trying to find out what's real. So all the people who are real know the same word, and when somebody has some weird, urgent request for sensitive information or money: what's the friggin' word? Tell it to us. If you tell us, then we know you're legit and you're real. If not, then you're not real and you're not getting our money or sensitive information. It's that simple, and it really isn't a bad process. So when we think about solutions to this,
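The challenge-word idea is simple enough to express in a few lines. Here's a minimal, hypothetical sketch in Python (the function name and the specifics are illustrative, not from the episode) of the fail-closed check that any team tooling mirroring this process could use:

```python
import hmac
import unicodedata

def verify_challenge(spoken: str, shared_secret: str) -> bool:
    """Fail-closed check: return True only if the caller gave the agreed word.

    Normalizes unicode form, case, and surrounding whitespace, then compares
    in constant time so the check itself leaks nothing via response timing.
    """
    def norm(s: str) -> bytes:
        return unicodedata.normalize("NFKC", s).strip().lower().encode("utf-8")
    return hmac.compare_digest(norm(spoken), norm(shared_secret))
```

The key design point is the same one the episode makes: the urgent request fails by default, and only the shared word, known out-of-band by the real humans, lets it through.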
I mean, some are doubling down on biometric identity platforms, eye scans, blockchain records, that sort of sci-fi jazz. Others just kind of ditch the AI madness and go back to hiring through referrals and handshakes, but that's not really practical for a lot of organizations. The arms race is hurting kind of everyone. Employers are bogged down in digital forensics instead of hiring talent. Applicants worry.
their accomplishments might be overshadowed by distrust. And we're looking at a future where every candidate says, yes, I'm human; no, I don't have a cloned voice. And if you don't convince them, then you get ghosted by the recruiter. At heart, both of these stories and challenges, the North Korean IT workers, the
AI deepfakes, and what happened to John over at Firm Pilot, really underscore the same irony. AI is meant to make things smoother, but instead it has turned job seeking into a blend of Spectre and Westworld. And we're all just exhausted extras caught in the blur between reality and fakeness.
Speaker 2 (44:30.402)
This is Cyber Crime Junkies, and we hope you enjoyed this.