Plaintext with Rich

How AI Deepfakes Hijack Instincts And What To Do Next

Rich Greene Season 1 Episode 3

A familiar voice calls. The phrase you’ve heard a hundred times lands with urgency. Your gut says act now: wire the money, share the code, approve the access. That reflex once kept work moving and families safe. Today, AI can borrow the voice, mimic the cadence, and ride your instincts for sixty seconds. That’s long enough to cause real harm.

We dive into the mechanics of modern deepfakes: how a few public breadcrumbs (voicemails, Zoom clips, social videos) train models to sound and look convincing. We walk through the most common attack plays, from the fake CEO pushing a confidential transfer, to the distressed relative with a broken phone and a new number, to the video meeting that feels legit just long enough to ask for credentials. The pattern isn’t perfection; it’s urgency. The goal isn’t to fool you forever; it’s to rush you past verification.

Then we shift from fear to action. We share a four-step playbook that works at home and at work: slow down urgent requests, verify on a second channel, create no-exception rules for money and access, and assume audio and video can be faked until proven otherwise. Along the way, we reframe trust itself. Voices and faces used to be reliable signals; AI has broken that assumption. Your senses aren’t failing; you’re just receiving synthetic input, which means trust must be paired with process.

By the end, you’ll have clear, repeatable habits that lower risk without slowing life to a crawl. Think of it as adding friction exactly where attackers need speed. If this resonated, share it with someone who handles approvals or transfers, and tell us: what out-of-band check will you implement this week? Subscribe, leave a review, and send us your security questions; we read every note and reply.

Is there a topic/term you want me to discuss next? Text me!!

SPEAKER_00:

If someone calls you, sounds exactly like your boss or a loved one, uses the phrase they always use, and says they need help right now, you don't hesitate, because why would you? But what if the voice is real and the person isn't? Welcome to Plain Text with Rich. Today we're talking about AI deepfakes. And as always, let's start simple. A deepfake is audio, video, or images created by AI to convincingly impersonate a real person. Not cartoons, not bad impressions, not obviously fake videos. Real-sounding, real-looking, good enough to pass a quick gut check. And that's the important part. Deepfakes don't have to be perfect. They just have to be believable long enough for you to act.

Now, this didn't suddenly appear because attackers got smarter. It exists because the tools got easier. What used to require a studio, professional equipment, and hours of editing now requires a few voice samples, a short video clip, and a laptop. AI models can learn tone, cadence, phrasing, and rhythm from surprisingly little data, which means attackers don't need your password anymore. They simply need your voice, your face, or your boss's voice. And those samples, well, they already exist. Voicemails, Zoom calls, social media videos, public talks. We all left breadcrumbs. AI learned how to follow them.

Now, most deepfake attacks, they're not flashy, they're boring. And that's why they work. Here's what they usually look like. A fake CEO voice calls finance and says there's a confidential deal that needs a transfer right now. Or a fake family member calls in distress, claims their phone broke, and needs money sent immediately. Or a fake video meeting pops up, looks right, sounds right, and asks for credentials or approval. Do you notice something? The goal isn't to convince you forever. The goal is to convince you for 60 seconds. Deepfakes don't win by realism, they win by urgency. Now, when this works, the damage is real as always.
Money can move, accounts can be compromised, reputations can take hits. But here's the part that, again, I feel matters most. Afterward, victims often feel foolish. They replay that moment in their minds over and over. They think they should have known, they should have seen the signs. But the technology is designed to bypass that instinct. This isn't about intelligence, it's about timing and trust. And trust is exactly what deepfakes exploit.

Look, we're wired to trust certain signals: a familiar voice, a known face, an authoritative tone. For most of human history, those signals were reliable. AI has broken that assumption. Your senses are still working, they're just being fed synthetic input. So the failure isn't that you trusted. The failure is assuming trust equals verification. That assumption used to be safe. It isn't anymore.

So, as I like to do, let's get practical. Let's look at four things that we can do.

Step one, slow down urgent requests. Urgency is the weapon, speed is the trap. Any request involving money, access, or secrecy earns a pause. Even 30 seconds is going to help you think things through and process a little bit more.

Step two, verify using a second channel, or what you might hear called an out-of-band channel. If you get a call, hang up and contact them via another method, not that particular phone number. If you get a video request, confirm by text or email. Again, you want to find another channel. If it's internal, check with someone else, or go physically find them if feasible. Never verify inside the same conversation; take it out of band. Keeping it inside is how deepfakes win.

Step three, create no-exception rules. If money needs to move, require two people. Access approvals require confirmation. Executives don't bypass controls. We shouldn't have important functions handled by just one person, right? We need to have checks and balances.
Not because you don't trust them, but because attackers impersonate them.

Step four, assume audio and video can be faked. Not always, not everywhere, but often enough that proof requires verification. Deepfakes don't mean panic, they mean process. Trust is still possible. Blind trust is not.

Security doesn't fail when AI gets better. It fails when we keep using old assumptions in a new reality. Again, deepfakes exploit trust, not technology. They don't need perfection, they need urgency. Your defense isn't better eyesight, it's verification habits. Slow down. Check twice. Use a second channel. That's how you beat a fake that sounds real.

Now, if there's a security topic you want broken down in plain text, send it my way. As always, email me, DM me, drop it in my comments. However you choose to reach me, I will read it and I will respond. If this episode helped, share it with someone who'd actually benefit. This has been Plain Text with Rich. Ten minutes or less, one topic, no panic. See you next time.