Unpacking Education & Tech Talk For Teachers

Strategies for Identifying Deepfakes

AVID Open Access Season 3 Episode 169

In today’s episode, we'll review seven strategies that you and your students can use to identify deepfakes. Visit AVID Open Access to learn more.

#273 — Strategies for Identifying Deepfakes

11 min
AVID Open Access

Keywords

detectors, detect, strategies, identify, fake, video, number, real, content, voice, source, media, audio recording, means, examples, ai, information, image, site, approaches

Speakers

Paul (94%), AI (4%), Intro (2%), Speaker 1 (1%)


Paul Beckermann  0:01  

Welcome to Tech Talk for Teachers. I'm your host, Paul Beckermann.


Intro Music  0:06  

Check it out. Check it out. Check it out. What's in the toolkit? What is in the toolkit? Check it out. 


Paul Beckermann  0:17  

The topic of today's episode is strategies for identifying deepfakes. Last week, we established a basic understanding of deepfakes. We defined a deepfake as media that sounds or looks real but is actually fake. Most commonly, this comes in the form of a video, an image, or an audio recording. We also looked at the four most common types of deepfakes: face swapping, face manipulation, voice synthesis, and AI-generated images. If you missed that episode, you may want to go back and check it out. The focus of today's episode is to take our deepfake journey to the next level and explore how we can identify deepfakes when we see or hear them. This will be an increasingly important skill for all of us to have, especially as we head into election season. Let's take a look at seven approaches we can take to detect deepfakes.


Speaker 1  1:14  

Here are your seven tips.


Paul Beckermann  1:20  

Number one, use deepfake detectors. Deepfake detectors are pieces of technology that can digitally determine whether media is real or fake. Intel released the first real-time deepfake video detection program in November 2022, right about the time that ChatGPT was being rolled out to the public. This product, called FakeCatcher, identifies deepfake videos by analyzing the blood flow of the person shown in the video. The software detects the subtle changes in color that happen in our veins as the heart pumps blood through our bodies. It's pretty cool. If these color changes are detected, it's likely that the human in the video is real. If there's no blood flow, it's likely a deepfake. Data suggests that this tool can identify deepfakes with a 96% effectiveness rate, and to this day, Intel's FakeCatcher is still considered one of the best deepfake detectors. A plethora of other products have been released as well. For example, Sentinel AI is a popular one, created by an Estonian cybersecurity firm that collaborates with governments and media outlets to combat disinformation campaigns. The Phoneme-Viseme Mismatch Detector is another; it features a technique developed by Stanford and the University of California that identifies mismatches between mouth movements and audio. Another detector claims a 95% accuracy rate for identifying deepfake images. Even Microsoft is getting into the game with its Microsoft Video Authenticator. This tool detects subtle grayscale changes in videos that are usually missed by the naked eye. Microsoft has been specifically focused on misinformation campaigns surrounding elections. There are more detectors out there, but these are some of the most well known. I should add that I don't know anyone who has purchased one of these detectors. My hope is that media outlets and government organizations will utilize them and report when an influential deepfake gets circulated. In this way, they might act as deepfake watchdogs for those of us without access to the detectors. 
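
To make the general idea behind a blood-flow detector a little more concrete, here is a minimal Python sketch of the underlying signal: average the green channel over a face region frame by frame and check for a periodic pulse in the normal heart-rate range. This is not Intel's FakeCatcher; the fixed face box, assumed frame rate, and frequency band are simplifications for illustration only.

```python
# A minimal, illustrative sketch of the idea behind blood-flow (photoplethysmography-style)
# deepfake detection: real faces show a faint periodic color change as the heart beats.
# NOT Intel's FakeCatcher. The fixed face box, fps, and band limits are assumptions.
import cv2
import numpy as np

def heartbeat_signal_strength(video_path, face_box=(100, 100, 200, 200), fps=30.0):
    """Return the relative strength of a heart-rate-band signal in the face region."""
    x, y, w, h = face_box              # assumed fixed face region; a real system would track the face
    cap = cv2.VideoCapture(video_path)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w, 1]      # green channel carries most of the pulse signal
        samples.append(float(roi.mean()))
    cap.release()

    signal = np.array(samples) - np.mean(samples)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    # Typical resting heart rates fall roughly between 0.8 and 3 Hz (about 48-180 bpm).
    band = (freqs >= 0.8) & (freqs <= 3.0)
    return spectrum[band].max() / (spectrum.mean() + 1e-9)

# Usage (hypothetical file): a higher ratio suggests a pulse-like periodic signal is present.
# print(heartbeat_signal_strength("clip.mp4"))
```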


Number two, listen with your eyes. You don't need fancy detection software for this strategy. Instead, you need to use your own two eyes. Be observant and look for anything that doesn't look quite right. The MIT Media Lab points out that there's no single telltale sign for spotting a fake. However, with videos, and sometimes pictures, there are common discrepancies you can look for to spot a potential deepfake. Ask yourself these types of questions. Is the skin too smooth or too wrinkly compared to the rest of the face? Are there shadows where you would not expect them? If the subject is wearing glasses, is there a glare that looks unnatural? Do facial hair and moles look real on the face? Does the blinking look natural? Too much or too little blinking can be a sign of a deepfake. Pay especially close attention to lip movements. Do they look natural? If you're looking at a picture, you can ask more general questions, like, does anything look off here? That might mean noticing whether people in the image have the right number of fingers or toes, or it could mean checking that the shadows match the placement of the sun, things like that. 


Number three, listen with your ears. This strategy works for both videos and audio recordings. Does the speaker's voice sound natural? Does the recording reflect the subject's natural speaking cadence? Are there audio artifacts that indicate the audio has been manipulated? An artifact might be an artificial-sounding blip or a sudden change in pitch. If something sounds off, it's a reason to question the source. Yes, it could just be a poor recording, but it could also be a sign of a deepfake. 
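
For listeners who like to tinker, here is a toy Python sketch of one thing "listening for artifacts" can mean in code: flag abrupt loudness jumps between adjacent frames of a WAV file, the kind of discontinuity that splicing or synthesis can leave behind. The frame size and jump threshold are arbitrary assumptions, and real audio forensics tools are far more sophisticated.

```python
# Toy artifact check: flag sudden frame-to-frame loudness jumps in a WAV file.
# Assumes 16-bit PCM audio; frame_ms and jump_ratio are arbitrary illustrative values.
import wave
import numpy as np

def find_loudness_jumps(wav_path, frame_ms=20, jump_ratio=4.0):
    """Return times (in seconds) where RMS loudness changes sharply between adjacent frames."""
    with wave.open(wav_path, "rb") as wf:
        rate = wf.getframerate()
        audio = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16).astype(np.float64)

    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-9   # per-frame loudness

    suspicious = []
    for i in range(1, n_frames):
        ratio = max(rms[i] / rms[i - 1], rms[i - 1] / rms[i])
        if ratio > jump_ratio:                          # sudden blip or dropout
            suspicious.append(i * frame_len / rate)
    return suspicious

# Usage (hypothetical file): print(find_loudness_jumps("interview.wav"))
```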


Number four, practice. Practice helps us get better at spotting deepfakes. It's especially helpful to examine both examples and non-examples. The MIT Media Lab has created a website that's perfect for this type of practice. It's called Detect Fakes, and it contains a series of videos and text selections related to the current presidential election cycle. All the examples are currently based on content from Joe Biden and Donald Trump, and the content is divided evenly between the two men to reduce any implicit or perceived bias. The site works by prompting users through about 20 examples and then asking them to determine whether each example of media is real or fake. I took the challenge and ended up getting 17 out of 20 correct. I feel like I'm pretty tech savvy and in tune with deepfakes and misinformation strategies in general, yet I got fooled three times. Taking the quiz was a really helpful learning activity for me, and I think both students and adults can benefit from something like this. In fact, during my experience with Detect Fakes, I got better as I went, so I guess that means I learned some things along the way as well. I found that I struggled most with the videos that had no audio. They were silent videos with captions, and I had a hard time watching and critiquing lips and facial expressions while I was busy reading the subtitles. The videos with sound were the easiest for me to judge, because I could tell if the mouths were off a little bit. 


Number five, use your background knowledge. One of the best detectors is to compare what you're seeing and hearing with what you already know. Of course, this means the more you know, the better you'll be able to detect inconsistencies. If you've been following the presidential election, for example, you'll likely be familiar with the talking points of each candidate. You'll probably also have a good idea of what policies each candidate would support, and even their usual tone of voice. When you watch a video and hear one of the candidates speak, you can then ask yourself, based on what I know about this person, does this clip seem believable? This is not a perfect system, of course, but if something seems off or out of character, it deserves a closer look before you accept the clip as authentic. Ideally, this will lead you to do a little follow-up research to cross-reference the content with other sources that you trust. With this technique, the more you know, the better you can judge. This probably means that our students won't be great at it. That doesn't mean they're not smart enough; it just means they simply don't have enough real-world experience yet. The same could be said for adults who are unplugged or not up to date with what's going on in the world. Regardless of how much background information you possess, you should be able to corroborate any content you find with at least one other credible source. If you can't, be especially cautious. 


Number six, consider the source. This is a good practice for any type of credibility check, not just for deepfakes, and it's generally the first question I ask myself about almost any new information I find. If I'm browsing the web, for example, and scrolling through a list of search results, I immediately look at the URL for clues. If it's .gov, I know it's published by a government agency. If it's .edu, it's likely from a postsecondary institution. If it's .org, it's usually a nonprofit organization. That doesn't mean these are perfect resources, but at least I have a clue where they came from. If it's some other URL extension, I'm probably not as sure where it came from, and I need to look even deeper. When I get to the site, I need to ask things like, who's the author here? Do I recognize them as a respected individual or organization, or are they someone I've never heard of before? Is it a news agency or government organization that I trust? Does the site share where it got its information? If I don't recognize the source at all, I'm immediately more skeptical. In those cases, I need to find out who the site's sponsors are and whether they're trustworthy. Sometimes this is not clear, and if I'm not sure, I don't put much validity in the content I find there. I look for a better source. There are too many good, credible sources of information out there for us to settle for anything less than excellent quality. 
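
Here is a tiny Python helper that mirrors the URL-checking habit described above: pull the hostname out of a link and note what its extension usually suggests. The category labels are informal assumptions and only a first clue, never a credibility verdict on their own.

```python
# A small "consider the source" helper: extract the hostname from a URL and return
# a rough hint about what its top-level extension usually indicates.
from urllib.parse import urlparse

TLD_HINTS = {
    ".gov": "government agency",
    ".edu": "postsecondary institution",
    ".org": "often a nonprofit organization",
}

def source_hint(url):
    """Return the hostname and an informal hint about what its extension usually suggests."""
    host = urlparse(url).hostname or ""
    for suffix, hint in TLD_HINTS.items():
        if host.endswith(suffix):
            return host, hint
    return host, "unknown type; look deeper at the author and sponsor"

# Usage:
# print(source_hint("https://avidopenaccess.org"))  # ('avidopenaccess.org', 'often a nonprofit organization')
```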


And number seven, approach media with a healthy dose of skepticism. This last strategy is a bit of an overarching one and can serve us well when using any of the other six approaches to identifying deepfakes. We all need to think critically about the media we consume, and that includes our students. While we don't want them to turn into overly cynical citizens, we do want them to have a healthy habit of questioning what they see and hear rather than simply accepting it blindly as fact. That healthy dose of skepticism is essential in developing their critical thinking and media literacy skills. 


AI Paul  10:25  

I'd be surprised if deepfakes didn't continue to become more commonplace. As technology advances, they will be easier to create and more convincing as well. That means the strategies for detecting deepfakes may need to shift and evolve in order to keep up with those changes. For now, these seven strategies can help you and your students identify content that might look real, but is, in fact, a deepfake. 


Paul Beckermann  10:50  

Oh, yeah. Those last four sentences? That wasn't really me speaking; that was a deepfake. I used Speechify to clone my voice and read those sentences for me. I never actually said those words. Did you catch that?


To learn more about today's topic and explore other free resources, visit AVIDopenaccess.org. And of course, be sure to join Rena, Winston, and me every Wednesday for our full length podcast, Unpacking Education, where we're joined by exceptional guests and explore education topics that are important to you. Thanks for listening. Take care, and thanks for all you do. You make a difference.


Transcribed by https://otter.ai