Bee Cyber Fit: Simplifying Cybersecurity for Everyone

Bee Cyber Fit: Take a Deep Dive into Deepfakes - What Are They and Why Should You Care?

November 29, 2022 Wendy Battles/James Tucciarone/Kerry Tomlinson Season 1 Episode 7

Do you know what deepfakes are? If you don't, you should.

In the ever-evolving world of cybercrime, gaining an understanding of deepfakes is important. 

These slick images and videos, made to look like the real thing, can sneak up on us unawares. It's no stretch to imagine that cybercriminals use them to trick us.

But when you're armed with knowledge, you can hone your deepfake recognition skills and avoid falling victim to their scams.

In this episode, we speak with Kerry Tomlinson, a cyber news reporter and cybersecurity awareness advocate. She spills the beans on all things deepfake and how we can steer clear of these online traps.

Listen to this episode and you'll learn:

▶️ What a deepfake is and why you should care

▶️ How attackers are using deepfakes to trick us

▶️ How to spot a deepfake in an image or video

▶️ How Artificial Intelligence (AI) can be used to our advantage

▶️ How we should prepare for these attacks on a mental, emotional and intellectual level
▶️ Clickbait - our buzzword of the day!


*********
Calls to Action:

Mentioned in this episode:

About Kerry:

Kerry Tomlinson is a cyber news reporter who works to help people stay smarter and safer online. She spent three decades as a TV news reporter, often going undercover to investigate crimes, winning multiple Emmys and other local, regional and national journalism awards. Now she travels the world looking for creative and compelling ways to show people what is happening in the digital world and how it impacts them. She has reported from Russia, the Philippines, Spain, Denmark, Turkey, Argentina, Colombia and more. She is a popular speaker and received the SANS 2021 Difference Maker award for her work in informing people about cybersecurity.

Connect with Kerry:

Ampere News site: www.amperenews.com
Ampere News YouTube Channel

*********
Please Share What You Loved

Your feedback means everything to us! If you enjoyed this

Learn more about Yale Cybersecurity Awareness at cybersecurity.yale.edu/awareness

Never miss an episode! Sign up to receive Bee Cyber Fit podcast alerts.


Kerry: We need to understand that every single message that comes into us is a potential weapon. And instead of saying, "Oh my gosh, I can't look at any messages." Instead, take that step back and recognize that that is the reality of our world.

Wendy: Welcome to the Bee Cyber Fit podcast, where we're simplifying cybersecurity for everyone, where we cut through confusing cyber speak and make cybersecurity simple and easy to digest. I'm one of your hosts, Wendy Battles.

James: And I am James Tucciarone. Together, we're part of Yale University's Information Security Policy and Awareness team. Our department works behind the scenes to support Yale's mission of teaching, learning, and scholarly research.

Wendy: Ready to get cyber fit with us?

Wendy: Hi, everyone, welcome to another episode of the Bee Cyber Fit podcast. We are so excited that you're here. This is the place to come for information and some inspiration about how to stay safe online and build your cyber fitness. And today, I'm so excited because we have a special guest. James, have you ever met Kerry Tomlinson?

James: Wendy, I've heard so much about Kerry and so many good things. But unfortunately, I have yet to have the chance to actually meet her. So needless to say, I am very excited that we have her as a guest with us today and I can't wait to see what she's going to share with us.

Wendy: I know, she is some kind of amazing. I know you're going to love her and she's so knowledgeable. So, that is coming up shortly. But we also are going to have Calls to Action at the end of the episode and we have a great buzzword of the day for you.

James: Exclusive, never-before-seen footage. The shocking new science that shows milk is bad for you. You won't believe what happened next. These sensational, attention-grabbing headlines are all examples of potential clickbait. Stay tuned to find out more about what clickbait is and what we can do to avoid it.

Wendy: Today, you are in for such a treat. We have our friend, Kerry Tomlinson, who is a cyber news reporter, who is going to be in conversation with us about the topic of deepfakes. You might remember that our buzzword from our last episode was defining deepfakes and today we're going to talk in detail about them. So, Kerry Tomlinson, welcome to the Bee Cyber Fit podcast.

Kerry: Thank you so much for having me today. I'm in Copenhagen, Denmark. So, I'm excited to be talking to you from across the water.

Wendy: Literally across the water. And that sounds pretty amazing and a lot more interesting than being in New Haven, Connecticut, right now.

[laughter]

Wendy: So, thank you so much for joining us. And I want to begin talking about this, first by asking you a question about yourself. We really want to know how you became a cyber reporter.

Kerry: Oh, great question. So, I was a TV news reporter for just about 30 years, which is a very long time. As I like to tell people, each day in TV news is very stressful. There are so many deadlines, and you can lose your job if you don't make them. It's like living a month in a day, like a month of real life. So, I'm like 900 years old.

[laughter]

Kerry: I've seen a lot. One of the things that I saw, and I think it was about eight or nine years ago, I was talking to an engineer at one of the TV stations where I worked. And he said, "Kerry, there's this thing called ransomware." It would hold all of our files hostage, and what that would mean if it hit our TV station is that all the work you've done for all these years, all the interviews you've done, all the stories you've done, all the times that you've exposed crime or wrongdoing and changed people's lives, all that would be gone. And I said, "Oh my gosh, why doesn't everyone know about this?" And he said, "Well, people don't care about cybersecurity." And that clicked something in my brain. There were many reasons people didn't care about it at the time, but one is that there's a lot of terminology that we don't understand. And there's a lot of stuff going on that we don't understand, and therefore we push it away from us, we dismiss it, because it's too much to absorb. And I thought, "I can help change that. I can help make it all understandable, easy to learn about, interesting to learn about, and not too difficult to take good steps to protect ourselves." And thus, about seven years ago, my first cybersecurity news organization was born. I moved it to a different company and started a new one about a year ago doing the same thing. My job is to look at what's going on in the cyber world, all the attacks and things that come our way, and talk about it in real, everyday terms, so everyone of every skill level can understand and do something about it. That's what I do, and all of it was sparked by that conversation about ransomware so many years ago. Now people do care about it. Now, we all understand that this can really impact our lives. So, it's all come together in one beautiful thing, so many years later.

Wendy: It's really amazing how one conversation can put us on a trajectory in ways that we often don't anticipate, and also have a huge amount of impact. Because part of what I see you doing when I look online, and I look on YouTube, and I see your videos, is how you do explain things in really simple, approachable, and engaging ways that can help attract people and make it easier to understand. So, I really appreciate that about what you're doing.

Kerry: Well, thank you so much. I love languages; I speak multiple languages and study even more than that. And to me, the cybersecurity stuff is like a language that needs translation, so that we can all be comfortable, and not be uncomfortable when we hear it and say, "What are they saying? What are they talking about? I don't care, it's too much."

Wendy: Yeah, absolutely. And I'll just say that is such a great parallel with what we're doing for our Yale community, which is that same idea of trying to simplify things, so that people don't have to feel so scared and apprehensive.

Kerry: Right? No need for that.

James: You took the words right out of my mouth Wendy. I think it's great because that's exactly what we're trying to do with our program here is educate folks, give them opportunities to learn, and become more familiar and comfortable, more confident when it comes to cybersecurity. What would you say is one of the biggest lessons that you learned in your time as a cybersecurity or cybercrime reporter?

Kerry: The first thing that I learned, that absolutely blew my mind, is the amount of chaos and attacks and destruction going on in the digital world. Stuff that we don't know about. Stuff that cybersecurity experts are running around trying to stop from hurting us. And overall, they're actually doing a pretty good job, because for the average person, sometimes it seems like there's very little impact. The plus side of that is, "Yay, good job, cybersecurity experts." The downside is that we don't always understand what's really happening. And we go, "Well, that doesn't impact me, nothing happens to me, so I'm fine." And that's the result of a lot of hard work by a lot of cybersecurity folks.

There are things happening, and there are things we can do to protect ourselves that aren't that hard. One of the most important things is just hearing the stories about what is happening. And I know we're going to talk about a good one today; we're going to talk about some fun stories. But hearing those stories helps you understand, "Hey, this really is happening. And I really can just do a few quick things that will make life easier for everybody." It's kind of like, do you guys remember Y2K back in 2000? There was this big thing where, when the clocks changed, there were going to be a lot of cyber issues. Well, what happened was the cybersecurity people and the tech people ran around and did a huge amount of work and prevented massive problems. So, when that day rolled around, when that hour rolled around, there wasn't a lot of chaos and destruction. So, then people said, "Oh, well, it was no big deal, we didn't need to worry about it." But it's because of all the work that went into it.

James: Right, and I think that's such a great point is that the average person doesn't realize all the work that does go into something and it does lull us into a false sense of security. We also talk frequently about behavior being difficult to change. And I think part of that is maybe that false sense of security leading us to think, "Hey, maybe we're fine as we are?"

Kerry: Right, don't need to do anything. And it's stressful to think about doing something about making changes. Our lives seem easier if we just go, I'm not going to deal with that stress.

James: Right, maintain the status quo.

Kerry: Yes.

Wendy: Yeah. And I think we're all so overwhelmed that some of those things, I just have to let go, I can't keep track of every single thing that might impact me. And just sometimes I have to just hope for the best. I think that's the attitude people can have sometimes.

Kerry: Right, which is totally normal and typical. But I would love to get to talking about some of this crazy deepfake stuff. Not so people can worry about it. We don't want you to worry about it, but we want you to be aware because as Wendy and I mentioned not too long ago, if you're aware, you can catch real-life attacks happening to you in your real life, in your social media right now today.

Wendy: Absolutely, and so maybe the first question we can ask you about that, Kerry is "What is a deepfake?"

Kerry: Perfect, great question. So, a deepfake is a computer-generated image or video or voice, either of someone who doesn't exist but sounds or looks perfectly real, or of someone who does exist, made to do or say things they didn't really do or say. For example, if I wanted to attack, I could make a deepfake video of Wendy and James saying, "Kerry Tomlinson is the best person in the whole wide world." And maybe you didn't really say or do that. Now, the downside of me mentioning that is now people are going to say, "Did Wendy and James really? Did they really say that?"

James: Maybe not that far off.

[laughter]

Wendy: Yeah, so I see how-- it sounds to me like we could easily get fooled by these deepfakes, especially as we're running around and we're so busy and maybe we're not giving things the full attention that they need.

Kerry: And that is what the attackers are relying on. They are hoping that you're too busy, or that you are excited about connecting with someone on LinkedIn or on another social media platform, and you'll just go for it. In fact, the attackers generally don't go for the expensive deepfakes unless they're doing something really specific to you, and that does happen. But a lot of times, they just rely on the easy ones. So, for example, there is a website called thispersondoesnotexist.com. And it generates really quick-and-dirty deepfake pictures of people, and if you examine the pictures closely, much of the time you will see a tiny little clue that will show you this is fake. In fact, I've done some stories on it, and I'm sure we'll add some links, so people can look at these stories and learn exactly how to spot these deepfakes, whether it's an image or a video or audio, because those are skills we need to have. But sometimes you'll find that the attackers just grab the free and easy stuff, throw it out there, and hope that we are not looking closely enough. So, that's the good news. The good news is that if you take a look, you can often spot these deepfakes.

Wendy: Is it like one thing, one little tip you could give us, about like "How would we know?"

Kerry: I can give lots of little tips, it's so exciting. We'll start with images, because images are what you're most likely to run into on social media. Someone will have a profile image, and that could be on LinkedIn or Facebook or Twitter or what have you. And one of the big clues is that the pictures are often a square profile picture. And if you look at the shoulders, the shoulders are sometimes misshapen a little bit. At first glance, you look at the shoulders and the color of what they're wearing and it looks fine, because you're not paying attention. But if you look more closely at the color of the clothing and the shoulders, you can often see some strange things. The technology being used for some of this free or inexpensive deepfake generation still has some issues, so it often doesn't get the shoulders and the color right. Sometimes they're misshapen, sometimes the colors are strange, and sometimes it looks like the shoulders are extending in an odd way. So, you can start out by looking at the color and shoulders. When it comes to pictures of women, the earrings are often misshapen or mismatched, or an earring is floating up higher on the ear. They can't quite get that right. And then you can also look at the glasses, if they're wearing glasses. This one is pretty small, but often the two sides of the glasses are slightly different shapes. And then finally, for now, there are more details like this, but this one is also helpful: the background. The background is often amorphous, and when you first look at it, you say, "Oh, it's just a typical picture." But then you look at the background more slowly, more carefully. And it's not really a background that shows anything, or if it does show something, it doesn't line up right, it looks crooked, it looks twisted. So those are some good starter ways to check out a deepfake image.

James: Those are some great tips. And I think it's really interesting that you mentioned they do sort of go for the lower-hanging fruit and they still try to take advantage of our emotional state of that fear, the uncertainty to get us to fall victim or to believe that this might be a real image. We see all these different things you can look out for. And clearly, there's a lot more to talk about. You mentioned images, so I'm assuming that alludes to some other things that deepfakes might also be getting involved in. But in terms of deepfakes, in general, what would you say is the most interesting or the most intriguing, or the most realistic one, the one that fooled you the most?

Kerry: Well, one of the things that I'm most interested in is this concept of deepfake personalities and virtual people. And I did a story on it; hopefully, we'll link to that, so people can enjoy a little bit more of that as well. But this is about the many tools now available for people, regular people, and for attackers to create a deepfake person to attack you. And what is the benefit of that? Well, they can certainly just attack you straight away. But if they use automation to create all the aspects of these virtual people, then they can quickly and easily create thousands of people to attack you, to customize an attack on you and everyone in your company, everyone in your family. And we can talk about all the different ways that this is coming together, because it's super interesting. So, to me, there's the concept of this deepfake person who is in-depth. Right now, what we see with a lot of deepfake attacks on social media is they're not deep, they're shallow. They're super shallow: they grab a deepfake picture, they steal someone's work history and put it into LinkedIn, they like a few posts, and then they connect with you and send you a message and try to get you to click on a link.

So that works and that's why they don't put that much effort into it. But the tools are available now to go much deeper if that should stop working. So, I want to talk about all the different aspects related to that, for example, not just a deepfake picture of someone to put on their profile, but you could do a deepfake picture of an apartment or a house. So, a house or apartment that doesn't exist. So, if you try to do a reverse search on it, you will not see that they've stolen the photos from someone else because they haven't, they've created them in a second, in a millisecond. They can do deepfake cars, deepfake horses, deepfake cats, deepfake resumes, all of these things to create a deepfake world that you cannot verify because it doesn't exist elsewhere. So, the theory would be that you can't check to see if they've stolen some of these things from people because they haven't stolen them, they've created them. And that is actually just the beginning of that deepfake personality. And we can talk about the many more things that attackers can do. All the different ways they can create this person that seems so real.

Wendy: So, Kerry, let's say that I get some email or some connection request or something that has this deepfake car in it that I cannot trace in any way. What's an example of what they're trying to do with that?

Kerry: So, one thing with a deepfake car is that currently, the technology they're using for deepfake cars isn't realistic enough to actually use in an attack right now. More likely, they would take a real car and fuzz out the plate or something like that. On that particular aspect, I just want to note, I've seen the pictures of the deepfake cars and they're actually comedic, in many cases quite funny. Sometimes they're realistic, sometimes not. But it wouldn't be so much that they would use the individual car to attack you or try to sell you a car; typically, they'll just steal those pictures. What they do want to do is create a world, a believable world, and that can be used for catfishing, for romance scams. We see this a lot, where people have dated catfishers for eight or nine years. So, in that case, over eight or nine years, you're going to need a lot of pictures, a lot of parts of your life to share with people, to continuously trick money out of people. Another thing they can do is espionage. And a lot of people say, "Oh, well, espionage, no one wants to spy on me." But the reality is, whatever company you work for, somebody wants to spy on that company, whether that is Russia or China or Iran. Also, competitors like to spy on competitors. At first you say that can't be, but it actually does happen a lot. In addition, there are access brokers on the underground market. The access brokers get a way in: they trick you with a fake person on social media or an email, they get you to click on a link or download a file, and so they have access to your computer and your company, maybe to your bank account, maybe to your PayPal account, what have you.
And then they go on the underground web and they say, "Okay, I have for sale access to someone, we will say a 40-year-old woman who works at Yale, who has access to these systems, has a PayPal account, has some credit cards," and there you go. And then other attackers will say, "Well, that's what I'm looking for. Ooh, I do want to get into Yale, or ooh, I do want to get into this company, or ooh, I do need some PayPal accounts." And then they buy that, and then off they go.

Wendy: So, it's a whole marketplace of stuff going on?

Kerry: Yes.

Wendy: And we often don't even know about it. They're trading access just like we would, I don't know, negotiate other stuff that's legal.

[laughter]

Kerry: Right? Exactly. Oh, it does happen, and that is one of the thriving things that happens on the internet that we don't think about or know about. And one thing I do want to say is, we don't want to be scared and say, "Oh, my gosh, this is happening, there's nothing I can do." What we can do is slow things down. I like to say Island time: use Island time when you receive a message or a connection request. Don't click right away, don't respond right away. Slow it down, give yourself some time, maybe wait for the weekend if you want to do a little research on this person, but give it the time that you need to actually verify. And when in doubt, don't respond.

James: It's funny because that's becoming one of our most common tips, one that we're offering and that we're hearing from other folks: take a minute, stop and think, come back with fresh eyes before you react or respond. And I will say, this is really eye-opening for me, because I would have thought that a deepfake would most likely be used in sort of the short term. And it's really something to think about that there's this long con that they're most often going to be used for. When you mentioned apartments, I sort of had in mind, "Oh, this would be a great tool for trying to do an Airbnb scam." But then it makes total sense that, as you said, they're making an entire life, where these romance scammers, for instance, are trying to scam somebody over the course of months and years. And honestly, you would expect to see some sort of validation or verification. Not that you're going to go to someone and say, "Hey, I need proof that you're real," but you're going to expect to see something about these people's lives over the course of time. So, wow.

Kerry: Right. And there are so many more tools available. I want to talk about some of these, because I find them both entertaining and a little bit nerve-wracking. And part of what we have to recognize is that these tools can be used for fun and entertainment. For example, on various services like Snapchat and other apps, you can use filters to put your face onto an actor's face. There's an app that I have used for undercover investigations where you can put your face on another video and make yourself talk or sing like the other person. Well, attackers are using this kind of technology to trick people, so it's out there. Those kinds of tools, we know them. But some of the fun ones that we're not always aware of: for example, there are services now that will use artificial intelligence, in other words, computers on steroids, basically, to write blogs for your website. So, let's say you have a travel site and you want 20 blogs on various places, what to see and do in various cities. Artificial intelligence can create them for you in seconds.

Wendy: It's crazy. That is crazy, Kerry.

James: What does that mean for our job security?

Kerry: Exactly, oh, man, so many things come up. So, you say I want a blog about things to do when you visit New York and it will write a blog and I daresay you will not be able to tell the difference. So, these services are active now and that means there are sites likely that you have gone to where you have read a blog, and you've thought, well, that's a nice blog. And it's been written by a computer.

Wendy: Amazing.

Kerry: Amazing, and that's not all. One of the big things now is deepfake people. Because it is very popular when you go to a website that you have a video of a human host showing things on the website or talking to you or saying things or helping you out or what have you. And there are services that will totally create this for you. And it can be very hard to tell the difference. And so, you look at it in some interesting ways. We as humans respond to other humans, which is why people would want that say for their website. But are we responding to a real human? Are we responding to a fake human? Do we care? What are the upsides? What are the downsides? Does it make us more comfortable to talk with a fake human, a robot human? Or would it be better to just have a text with a real person or with a robot person? I mean, there're so many aspects of it, so many ethical things to think about, so many human issues to think about. The important thing from my aspect of things is that we have to be aware that attackers can use these against us and will use these against us. So, quick example here, you can with a snap of the fingers use artificial intelligence to create a deepfake company website. That looks completely real. It's got testimonials, it's got people, it's got descriptions of what they do. It's got contact information. So, boom, you've got a company website, you can then use artificial intelligence within seconds to create boom, a blog, a blog series for that site. And then boom, a human host, a person who will appear in a video on that site and welcome you and you just type in the words that that fake person is going to say, then you can have an artificial ChatBot, chat with customers, you can have an artificial voice answer the phone. So, the positive side of that is that can make it easier for people to get into business. 
The downside of that, attackers can do this in seconds and create something that looks believable, and you will spend your money on the site and you will send in your credit card number or give them your passwords or what have you and boom, they steal it.

James: I could see there are also potentially some positive applications. I would love to see my utility company put an AI-generated person onto the hold line that I could have a conversation with while I'm waiting for a representative. But I think one of the big takeaways I'm hearing here is that it's really a toolkit. All of these different things can be used together to make these attacks more efficient, more complex, and more believable.

Kerry: That's exactly it. That is exactly right. It's a toolkit that can be used for positive things and for negative things. And what cybersecurity experts are talking about is that when it becomes cheap and easy for attackers to use it, they will use it, and what we should be doing is being prepared for this on a mental, intellectual, and emotional level, so that when this comes our way, we will be ready. An example of this is artificial intelligence used to create emails and people. I interviewed a fellow named Ben Murdock who works for a cybersecurity company, and one thing that he studies is the creation of language. So, what he did as an experiment was put in the information to create a LinkedIn profile of someone that he would like, meaning someone that would appeal to him, and then he had the computer write a series of emails from this person to him. And what he found is that the computer was able to create a profile that he himself would connect with if he didn't know it was fake, because the person looked interesting, and a series of emails that he would respond to if he didn't know, because the emails appealed to him and his personality, appealed to areas of study he was interested in, and were written by someone that he found appealing. So, that's one of the keys with all of this. Right now, attackers are guessing what will appeal to you. But if they can use a tool to say, "I know what Wendy and James are interested in, I will create a LinkedIn profile, a series of emails, and a series of blog posts that I know Wendy and James would like," then boom, you follow through and do whatever the attacker wants, whether that is clicking on the link or sending over your password information or what have you.

James: It's diabolical.

Kerry: It is.

Wendy: And with technology continuing to evolve every day and every year, it seems like it will only get worse before it gets better, and I will say "before it gets better" because it probably will never get better. So, with that in mind, Kerry, as we're wrapping up, if you were to boil it down, what are the two simplest things people could do to prevent falling victim to deepfakes?

Kerry: Yes. So, the number one thing, and this is what I love, is really just being aware. That's the biggest thing. After listening to this podcast, all of us are now more aware that this can happen. And why is that important? Because it lodges in our brains. Then when we get something, a connection on social media or an email, in the back of our brain, if we're not rushing around, if we're using island time to slow it down just a little bit, we start remembering, "Oh, yeah, this is out there. And no, I haven't seen any attacks like this before, but I do remember listening to that podcast about it, and I do know that it is possible." So, that is actually the number one key thing.

The number two key thing, besides being aware and slowing it down, is that I don't want people to be too afraid. Fear just makes us panic, it makes us scared, it makes us dismiss things. I want us to also enjoy this. And so, one final thing that I am going to read here is a rap song that I created with artificial intelligence. There are tools you can use where you feed in what kind of song you want, and I chose a rap song. You can choose what mood you want. Do you want it to be happy? Do you want it to be sad? Do you want it to be in between? And by the way, if you ask for a sad rap song, you'll get a lot of profanity and a lot of vulgarity.

[laughter]

Kerry: I put in a very happy rap song. And then you choose the subject, and I chose Choco Tacos. For people who don't know, that's the ice cream treat that got discontinued. So, these are some of the lyrics the computer came up with in seconds. Not that these are the greatest lyrics in the world, but it's just funny to see what the computer did. I mean, really, in a millisecond, the computer came up with this: "I just woke up from a fantastic dream, some more delicious taco for my brain, bed, bath, and breakfast, soul taco, they so fresh, but I won't forget the Choco, that I was that day." So, it's just [crosstalk] it's fun. At the same time, let's enjoy what these new tools are doing as well as being aware of how attackers can use them against us.

James: Yeah, there are so many different applications for these different technologies. And I think it's also great that you mentioned we don't need to be constantly on guard, terrified that we're going to become victims of one of these deepfakes, and let that rule the way we operate in a connected world. It echoes so many things that we've said today, and that we so often say outside of this interview: it's really about having a healthy skepticism, not about constantly being worried as to what's going to come next.

Kerry: Exactly, we need to understand that every single message that comes into us is a potential weapon. And instead of saying, "Oh my gosh, I can't look at any messages." Instead, take that step back and recognize that that is the reality of our world. So instead of freaking out about each individual thing, we just need to take that step back and say, "Okay, every message is a potential weapon, every knock on my door, every phone call, every email, every text message is a chance for an attacker to get me so I need to put up a filter, I need to create a filter for my world where I can be calm and happy and also aware."

Wendy: That's such powerful advice Kerry as we think about the Yale community because I'm imagining that many people wouldn't be thinking of it this way until they heard this and perhaps might have a different perspective. And even extending that because we always talk about how any of the information, we share at Yale can be shared with people's families and friends. And it's really all about empowering our community and beyond. I can already see how people could hear this and say, I want to share this with my parents and my grandparents that are online that wouldn't be privy naturally to some of this information or be so in the know about deepfakes at all, or maybe even heard the term before. So, you really are adding so much value for us. It's such a service that you're doing all of this and I'm so grateful for you.

[music]

James: Here's the buzz on what's commonly called clickbait. Clickbaiting is the act of intentionally overpromising or misrepresenting the content we'll find when we click on a link. Often, it's a catchy headline related to current events or a trending story. It can also be a quiz that offers some hidden insight, perhaps revealing our personality type, which [unintelligible 00:36:32] we're most like, or which character we would be in our favorite shows and movies. Beyond headlines and quizzes, clickbait can also take the form of linked images and ad banners. Have you ever encountered the one claiming you're a winner, or a phony download link on a page where you're expecting a real one? We've probably all seen clickbaiting used as an arguably legitimate marketing tool to drive website traffic. However, it's also used by cybercriminals to fool us into providing our personal information, installing dangerous malware on our devices, and granting access to our social media accounts. Ultimately, it's up to us to be on the lookout for clickbait. Here are a few suggestions that can help keep us safe when engaging in online activities. Avoid clicking headlines that seem provoking, outrageous, or even scandalous, and advertisements that appear too good to be true. Employ critical thinking skills when using websites and apps; it's important to consider the motivations and agendas of the content provider. And keep listening to the Bee Cyber Fit podcast to help build your cyber muscles and be better prepared for cyber scams.

Wendy: James, I thoroughly enjoyed our interview with Kerry today, and I came away with so many different insights. One thing that really struck me is this idea of deepfake personalities and virtual people. That's like a whole new world, how an in-depth fake person can be created to attack us, how technology is creating these fake people to try to trick us. It just really blows my mind.

James: I have to agree, Wendy. This was definitely one of the most interesting topics we've talked about this season, because it's so extraordinary. And Kerry said it right: these cybercriminals are trying to create an entire believable world that is custom designed for us so that we're going to fall for it. It's really kind of crazy.

Wendy: It is crazy. It's totally crazy. [laughs] And while I'd heard of deepfakes before, I got a much better understanding of exactly how these things manifest and how tailored they are, which, honestly, James, I had no clue about.

James: Yeah, and we talk about spear phishing all the time, but this is really taking it to a whole other level.

Wendy: Yeah, right. And for those of you who don't remember, spear phishing is an attack that is very targeted at a particular individual. So, this is a whole other level of that.

James: Right. Wendy, I have to say one of the things that I found most interesting was that Kerry suggested, as one of her top tips, to take a step back, take a minute, and think before you respond, which is pretty much exactly what Jeremy said when we interviewed him in episode 3 as well.

Wendy: Yes, James, you are so right. The parallels are really striking, and they point to why this is so important. As humans, we can be so impetuous. We act in the moment, and this is all about pausing, as you said, taking a step back, clicking with caution, so we're not just launching right into it, as, I'll be honest, I have been known to do in the past.

James: Like many of us.

[music]

Wendy: James, it is time for our Call to Action after this fantastic, informative, and intriguing episode. There are two things we want our community to do. Number one, we have a link in the show notes to an article by Kerry about how to watch out for attacks from virtual people with phantom lives. It's super interesting and informative, and we encourage you to read through it. We've also included some other links that you might find helpful. And number two, we have a link in the show notes to an article from cnn.com with a quiz about deepfakes. We want you to build your deepfake recognition muscles. The quiz shows you pairs of items and asks you to figure out which one is the deepfake. So, practice your deepfake skills, and use these resources to build your muscles along with us as we build ours.

James: Ah, really great resources, Wendy, and hopefully our listeners will take advantage of them. But that's all we have for today. Until next time, I'm here with Wendy Battles, and I'm James Tucciarone. We'd like to thank everyone who helped make this podcast possible, and we'd like to thank Yale University, where this podcast is produced and recorded.

Wendy: Thank you, everyone, for listening. Thank you, Kerry Tomlinson, for gracing us with your presence. And remember, it only takes simple steps to be cyber fit.

[Transcript provided by SpeechDocs Podcast Transcription]
