AI in 10

X Drops Deepfake Safeguards: Why Every Parent Should Worry


Grok AI quietly removed restrictions on creating intimate deepfakes. This isn't just tech news - it's a threat to anyone with photos online.

Referenced Links:
X Platform
Grok AI
Cyber Civil Rights Initiative
FTC Deepfake Guidance


Want to go deeper with AI? A community of professionals is learning AI together right now at aihammock.com — show notes, links, tools, and real conversations about how to actually use AI in your life.

Welcome to AI in 10. I'm Chuck Getchell, and every day I break down the biggest AI story in just 10 minutes: what it is, why it matters, and how you can actually use it. Looking at artificial intelligence today, there's a story that should make every parent, every professional, and frankly every human being pay attention. Reports are emerging about AI systems becoming more capable of generating what are being called undressing deepfakes, with fewer restrictions than before. And before you think this is just another tech news story that doesn't affect you, it absolutely does.

Here's what's happening in the AI space. Various AI image generators that previously had strict guardrails against creating explicit or inappropriate images of people are seeing those protections tested and sometimes circumvented. Users across platforms are reporting they can generate images that would have been blocked in the past. We're talking about AI that can create realistic-looking photos of people in compromising situations, people who never posed for those images. Now, if you're thinking this sounds like science fiction, let me be clear: this technology already exists, it's already being used, and it's becoming increasingly accessible to millions of users across various platforms.

The technical term is non-consensual intimate imagery generated by artificial intelligence. But let's call it what it is, right? It's digital violation made easy. Here's how this works in practical terms. Someone can upload a regular photo, maybe something from your LinkedIn profile, your Facebook page, your company website, and ask the AI to generate a compromising image. The AI takes that face and creates what looks like a photograph of something that never actually happened.

The scary part? These images are getting incredibly realistic. We're not talking about obviously fake cartoons. Modern AI can create images that would fool most people at first glance.
The lighting looks right, the shadows fall naturally, the skin texture appears authentic. If you didn't know it was AI-generated, you might believe it was real. And this isn't just about celebrities or public figures anymore. This technology democratizes the ability to create fake intimate images of anyone: your co-worker, your ex, your daughter's teacher, the person who cut you off in traffic and whose license plate you photographed. Anyone whose photo exists online, which is basically everyone.

What makes this particularly concerning is the scale we're dealing with. Major platforms have hundreds of millions of users, and AI tools are becoming integrated right into these platforms. You don't need special software, you don't need technical expertise, you don't need to visit some sketchy corner of the internet. These capabilities are becoming available through mainstream services. Let me put this in perspective. Remember when Photoshopping someone's face onto another body took hours of skill and expensive software? Now it takes 30 seconds and a text prompt. It's like giving everyone a professional photo editing studio, except most people don't have professional ethics to go with it.

So why should this matter to you, especially if you're not particularly active on social media? Because the implications ripple far beyond any single platform. First, let's talk about your professional reputation. Imagine you're up for a promotion, or you're job hunting, or you're running for the school board. Someone with a grudge could generate compromising images of you and spread them online. Even if you can eventually prove they're fake, the damage to your reputation might already be done. As they say, a lie travels halfway around the world while the truth is putting on its shoes.

For parents, this creates an entirely new category of digital safety concern. We've taught our kids about stranger danger online.
We've talked about not sharing personal information. But have we talked to them about the fact that a regular photo from their Instagram account could be turned into something completely different? Their classmates now have access to technology that could be used for harassment or bullying in ways we never had to consider before.

For women in particular, this technology is already being weaponized. Studies show that approximately 90 to 96% of deepfake videos online target women. This isn't gender-neutral technology. It's being used disproportionately to harass, intimidate, and silence women in public spaces, professional settings, and personal relationships.

But here's what really keeps me up at night about this development: we're normalizing this technology. When platforms make it easier to create fake intimate images, we're essentially saying this is acceptable behavior. We're moving the goalposts on what constitutes normal versus harmful use of AI. Think about it this way: if someone broke into your house and stole intimate photos, we'd call that a crime. If someone hired a photographer to create fake intimate images of you without your consent, we'd call that harassment. But if someone uses AI to create the exact same result, suddenly we're debating whether it's even a problem.

This has real economic implications too. Professionals whose livelihoods depend on their public image (teachers, doctors, lawyers, small business owners) now face a new kind of reputational risk that's completely outside their control. You could be the most ethical person in your community, but if someone with an agenda decides to target you, they now have incredibly powerful tools to do so.

And let's be honest about the legal landscape. The law is way behind this technology. Most jurisdictions don't have specific statutes addressing AI-generated intimate imagery. Even where laws exist, enforcement is complicated. How do you prove damages from something that's fake?
How do you prosecute someone who claims they were just experimenting with AI? How do you remove these images from the internet once they've been created and shared?

For businesses, this creates new liability questions. If an employee is targeted with fake images that affect their ability to do their job, what's the company's responsibility? If fake images of executives start circulating, how does that impact investor confidence? These aren't theoretical problems anymore. They're business realities that leadership teams need to start planning for.

So what can you actually do about this? Because sitting around worrying isn't going to help anyone. First, let's talk about protecting yourself and your family. Start by auditing your digital footprint. Go through your social media profiles, your professional headshots, your company website photos. Ask yourself: what images of me are publicly available online? The more high-quality, clear photos of your face that are publicly accessible, the easier it becomes for someone to misuse this technology. I'm not saying you should delete all your photos or go off the grid. That's not realistic in 2024. But be intentional about what you share publicly. Consider making your social media profiles more private. Think twice before posting that crystal-clear headshot on every professional platform.

Here's something specific you can do today: set up Google Alerts for your name and the names of your family members. This won't catch everything, but it will help you discover if fake images or other harmful content starts appearing online. The earlier you catch this stuff, the easier it is to address.

For parents, it's time for a new conversation with your kids. Explain that photos they post online could potentially be misused in ways they can't imagine. This isn't about scaring them away from social media; it's about helping them understand that digital literacy now includes understanding how their images could be manipulated.
Teach them that if someone threatens them with AI-generated images, they should immediately talk to a trusted adult. Make sure they know this isn't their fault and isn't something they need to handle alone.

If you're a professional whose career depends on your reputation, consider working with a digital reputation monitoring service. Yes, this costs money, but so does dealing with the fallout if fake content about you starts circulating. Think of it like professional liability insurance for the AI age.

Here's a practical step everyone should take: save original copies of legitimate photos of yourself with their metadata intact (screenshots strip that metadata, so keep the original files). If you're ever in a situation where you need to prove certain images are fake, having a documented record of your actual photos with timestamps and location data could be crucial evidence.

For business owners and managers, it's time to update your employee handbooks and crisis communication plans. What's your company's policy if an employee is targeted with fake imagery? How will you handle it if fake images of executives start circulating? Do you have relationships with legal professionals who understand this technology? These conversations need to happen before you need them, not after.

But here's the most important thing you can do: stay informed and stay engaged. This technology is evolving faster than our legal and social frameworks can adapt. The decisions being made today about how AI platforms operate will shape the digital world your kids grow up in. Support legislation that addresses non-consensual intimate imagery. Vote for leaders who understand technology issues. Engage with your school boards about digital safety education. Use your voice as a consumer to demand better safeguards from the platforms you use.
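If you're comfortable with a little scripting, the documented-record idea above can be automated. Here's a minimal sketch in Python (standard library only) that fingerprints every photo in a folder with a SHA-256 hash and records the timestamps in a small JSON file. The function name `build_photo_manifest` and the manifest filename are illustrative choices, not an existing tool, and this is one possible approach rather than a complete evidence-preservation workflow.

```python
import hashlib
import json
import time
from pathlib import Path


def build_photo_manifest(photo_dir, manifest_path="photo_manifest.json"):
    """Record a SHA-256 fingerprint and timestamps for each file in photo_dir.

    A dated manifest of your real photos gives you something concrete to
    point to if you ever need to show that a circulating image is not
    among the photos you actually took or posted.
    """
    entries = []
    for path in sorted(Path(photo_dir).glob("*")):
        if not path.is_file():
            continue
        # Hash the raw file bytes; any alteration changes the digest.
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        entries.append({
            "file": path.name,
            "sha256": digest,
            "modified": time.strftime(
                "%Y-%m-%d %H:%M:%S", time.localtime(path.stat().st_mtime)
            ),
            "recorded": time.strftime("%Y-%m-%d %H:%M:%S"),
        })
    # Write the manifest so the record itself carries a creation date.
    Path(manifest_path).write_text(json.dumps(entries, indent=2))
    return entries
```

Running this once against a backup folder and keeping the resulting JSON file somewhere safe (ideally alongside the untouched originals) preserves both the hashes and the dates of record.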
And honestly, if you're feeling overwhelmed by all this AI stuff and want to understand it better, my AI Explained course breaks down these technologies in 30 bite-sized videos designed for people who aren't tech experts. Because the more you understand how this stuff actually works, the better you can protect yourself and your family.

Look, I know this is heavy stuff. Nobody wants to think about their photos being misused or their kids being targeted by new forms of digital harassment. But here's the thing: this technology exists whether we pay attention to it or not. The question is whether we're going to be proactive about protecting ourselves and pushing for responsible development, or whether we're going to be reactive, dealing with problems after they've already caused damage.

The companies building these AI systems are making choices every day about what safeguards to include and what restrictions to remove. When we stay silent, when we don't engage, when we assume someone else will handle it, we're essentially letting them make these decisions without our input. And as we're seeing across the AI landscape, those decisions directly impact our safety, our privacy, and our digital rights.

This isn't about being anti-technology. AI has incredible potential to improve our lives, create new opportunities, and solve complex problems. But powerful tools require responsible use and appropriate safeguards. When those safeguards get quietly removed, it's not just a tech story. It's a story about the kind of digital world we're building for ourselves and our families.

The key takeaway here is simple but crucial: your photos online are no longer just photos. They're raw material for AI systems that can create content you never authorized or intended. Being aware of that reality isn't paranoia. It's digital literacy for the modern age.

That's today's AI in 10. If you want to go deeper and learn AI with a community of people just like you, join us at aihammock.com.
I'll see you tomorrow, my friends.