
The Edge
A podcast for surviving our modern world. With help from UC Berkeley experts, California magazine editors Laura Smith and Leah Worthington explore cutting-edge, often controversial ideas in science, technology, and society. Should you be able to choose your baby’s IQ? Are algorithms really smarter than people? As we face a planet devastated by climate change, what is the future of food? All that and more. A production of California magazine and the Cal Alumni Association. // Reported and hosted by Laura Smith and Leah Worthington; produced by Coby McDonald; artwork by Michiko Toki; original music by Mogli Maureal
#30 Deepfakes with Hany Farid
Some Catholics no doubt took offense, but no one was seriously harmed by the Pope-in-a-puffer-jacket meme. Far more sinister deepfakes are on the rise, however, with scammers now frequently using widely available technology to bilk the unwary, and political campaigns marshaling AI to sow lies about their opponents. Perhaps the greatest threat is not the deepfakes themselves but that their mere existence can cause us to question the veracity of nearly everything we see and hear online and in the media. So, how concerned should we be about the proliferation of fake media? And what, if anything, is being done to staunch the bleeding? UC Berkeley Professor Hany Farid, a pioneer in the field of digital forensics, joins Editor-in-Chief Pat Joseph live onstage to discuss what the growing onslaught of mis- and disinformation portends for our society and what we can do to manage it.
Further reading:
- Watch the full live conversation with Hany Farid on YouTube
- Find out more about California Live! events
This episode was produced by Coby McDonald. Special thanks to Hany Farid, Pat Joseph, and Nat Alcantara. Art by Michiko Toki and original music by Mogli Maureal. Additional music from Blue Dot Sessions.
NATHALIA ALCANTARA: If you’re like me, you have a hard time trusting anything these days. It used to be that seeing is believing, but even that feels increasingly tenuous. Did that politician actually kiss a reporter or was the video doctored? Was Tom Cruise really performing magic tricks, or was it AI-generated? Which news is real news—and how can you even tell?
These are the questions that Berkeley computer science professor and digital forensics expert Hany Farid has dedicated his career to answering. A leading voice on deepfakes, Hany is deeply troubled by this age of increasingly credible fake news—and by the speed at which digital technology is outpacing regulation and authentication.
Last fall, Hany joined our Editor-in-Chief Pat Joseph onstage at the Berkeley Art Museum and Pacific Film Archive as part of our ongoing lecture series, “California Live!” In this episode, we’re revisiting that conversation—a fascinating and, at times, terrifying discussion of the rising tide of deepfakes and misinformation.
So, enjoy. And stay tuned for never-before-heard episodes coming next month.
[MUSIC OUT]
[THEME MUSIC IN]
This is The Edge, produced by California magazine and the Cal Alumni Association. I’m Nat Alcantara. Today we’re bringing you a special recording of our California Live! talk from last September with Professor Hany Farid.
[THEME MUSIC OUT]
PAT JOSEPH: So welcome, everyone, to tonight's California Live. I'm Pat Joseph, editor of California magazine, the editorially independent publication of the Cal Alumni Association. On behalf of the Association and our co-hosts, the Berkeley Art Museum and Pacific Film Archive, I'd like to welcome you all here tonight for our talk about deepfakes and disinformation, featuring our distinguished guest, Professor Hany Farid, one of the world's leading experts on the subject. Professor Farid has a joint appointment in the Department of Electrical Engineering and Computer Sciences and the Berkeley School of Information—EECS and the iSchool, as they like to say here. He's also a member of the Berkeley Artificial Intelligence Lab and the Berkeley Institute for Data Science, and he's a senior faculty advisor for the Center for Long-Term Cybersecurity. His research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception, and it's really no stretch to say that he's the go-to expert on all things relating to deepfakes. When I was doing research for tonight's topic, I just kept running into Hany Farid talking to me on YouTube and on CNN and everywhere else. Before coming to Cal, Hany was on the faculty at Dartmouth College for two decades, and tomorrow, he and his wife, who's in the audience tonight, return to Vermont on sabbatical, just in time to watch the leaves change. They'll also be celebrating the fact that Emily Cooper got tenure—so congratulate her. She's on the faculty of the School of Optometry and Vision Science. Okay, so with that, please welcome Professor Hany Farid.
HANY FARID: So I realize you want to come here and hear us have a conversation, and you will. But when I was talking to Pat about thinking about this setup, I thought it'd be good to just show you some things about where we are in terms of generative AI and deepfakes, to sort of level set us, because this is a very fast-moving space. One of the great things about being a professor is you put a talk together and you get a good two, three years out of it. I mean, it's a gift. But these days, no kidding, two to three weeks and I've got to change my slides, and I'm really sort of irritated by that.
So where are we in terms of deepfakes? First, I think it's important to understand that fundamentally, the ability to alter the photographic, audio, and video record is not new. We have been doing this for as long as we have been recording visual media. Go back to the 1920s—Stalin famously airbrushed people out of photos who fell out of favor. Now, I have a lot of thoughts about this, but I only have 15 minutes, so you can ask me questions later. But doing this was highly skilled work. First of all, you had to have a camera, you had to have a darkroom, you had to be able to go into the darkroom, paint on the negative, re-expose it—and then what would you do with it? There was no Twitter. You would just change the history books. And so the idea here is that you could alter history, but not so much the future. Throughout a hundred years of the photographic record, many of the famous Civil War photos and the most iconic photo of Abraham Lincoln are composites or were manipulated in some way. Now fast forward almost 100 years. We are in the very early days of the digital revolution. The internet is bubbling up. Social media is bubbling up. We're starting to have digital cameras in our pockets that are quite powerful, and we start to have things like Photoshop that allow us to alter that digital media. In 2010, that photo on the bottom there was released by the Iranian news agency, showcasing three mid-range missiles being launched as part of a threat to their neighbors. One of them misfired, and they didn't like that. And so what they did is they Photoshopped in a fourth missile. Pretty crude—if you look at the clouds, you can actually see a repeating pattern between two of the clouds—but in 2010 this really stunned people, that we could do this kind of thing. And it is nothing, it is nothing compared to what you can do today. Today, you can go to any of a dozen web services that are either free or that you pay a few bucks for, and you type—by the way, whenever I ask ChatGPT for something, I always say please. And I think that's really weird, but Sarah, one of my students, told me a really good idea, which is, you should do this, because when the AI overlords come, they will remember that you were nice to them. And I thought that was really good advice—please give me an image of a power plant exploding. Wait five seconds, and it will generate that. And think about the democratization of the technology, from the 1920s in terms of capture and manipulation, through the early aughts, to where we are today. That is phenomenal. What you can do today is limited only by your imagination. You can generate almost any image. So what has happened is not fundamentally that we can manipulate media; it's that we've democratized access. It went from a handful of people, to state-sponsored actors, to 8 billion people in the world who can now do this, and that is really interesting.
So where are we in terms of generative AI? We spend a lot of time—and you'll hear about this when we're talking—thinking about how you detect these things, but we also spend a lot of time creating them. So you can go to any number of web services and type: young Americans walking happily in a pretty, patriotic scene, holding signs that say "Harris-Walz 2024." This is an image that came out—I found it on social media almost immediately after Harris-Walz was announced as the ticket. And look at that image: fully AI-generated. No camera, no Photoshop, nothing. The words are correct. Look at the reflection on her glasses. It's incredible. I mean, it's incredibly beautiful. And this was not possible six months ago. You couldn't do this six months ago; you couldn't even come close. And the trajectory just continues. It's a phenomenal trajectory. We say that gen-AI images are passing through the uncanny valley. They are becoming so realistic that, on a passive viewing, they are almost impossible to distinguish from reality.
Voices are also very, very good. It used to be, two years ago, that I needed eight hours of audio recording to clone somebody's voice, which meant that reporters, the Joe Rogans of the world, the politicians of the world were vulnerable to having their voices cloned, but you and I were not. Today, with 20 seconds of audio, I can go to a service that I pay $5 a month for, and I can clone a voice. So I'm going to play for you the voice of Mayor Khan—not anybody particularly famous, and he's got a nice British accent. And then I'm going to play you a cloned voice. So listen to his voice, and hear the intonation, what his voice sounds like.
MAYOR SADIQ KHAN: But I think this takes the biscuit. It's just such bad taste. It just reminds everyone.
HANY: Okay, so that's Mayor Khan. I got it off of YouTube. I put it into this service that I pay five bucks a month for, I cloned his voice, and then I type whatever I want and have him say it.
MAYOR SADIQ KHAN: I am told that on the issue of deepfakes, one of the world's leading experts is Professor Hany Farid, having spent some time with him, however, I can tell you that he is a bit of a dipshit.
HANY: The deepfakes giveth and they taketh away. And this is what happens when you ask your students to make things for you. It is amazing. Sarah, who's sitting right there, a PhD student in my lab, just finished a perceptual study. People are pretty close to chance at distinguishing real from fake audio. These voices are passing through the uncanny valley. They are really, really good, and I need 20 seconds of your voice to clone it. That's amazing.
Video. Videos are also passing through the uncanny valley. So now, videos of people talking, moving, dancing, doing whatever are becoming indistinguishable from reality. So why should we be worried about this? First of all, there are lots of cool things you can do with this. It's a content creator's dream. There are really beautiful things you can do with this as a creator, or as somebody who is artistic and wants to be a filmmaker but doesn't have $100 million to make a film. It's incredible. It's incredible technology. But as soon as this technology started getting created, we started seeing bad people doing bad things with it. So let me just give you a couple of examples. This is probably one of the most dramatic ones. It was over a year ago, in May of 2023: somebody created that fake image of the Pentagon being bombed and published it on Twitter—X, I suppose—on a blue-check-marked account, paid for at $7 a month, that made it look like a Bloomberg news story. The stock market, in two minutes, dropped half a trillion dollars because people panicked before they figured out it was fake. Was it intentional? Was it stock manipulation? Was it state-sponsored actors? Nobody knows. Half a trillion dollars in two minutes. I got a call from CNN; by the time I got back to my office to look at the image, it was over. In January of this year, Taylor Swift had made some comments about the upcoming election. Some people were angered by that and started creating horrific, horrific images of her—sexually explicit material—and carpet-bombing the internet with it. Awful, awful things can happen, and not just to people like Taylor Swift, but to every single person in this room, because if I have a single image of you, I can now insert you into videos of anybody else doing whatever. That is a huge threat to people like Taylor Swift, but also to individuals as well. We've seen phenomenal, spectacular financial fraud. This is an amazing one from earlier this year: a worker in Hong Kong was on a live video call like what I just showed you. He thought he was talking to his chief financial officer, and he ended up transferring $25 million. That was not the first one. It will not be the last one. This is at least the dozenth one that I've heard about where tens of millions of dollars are being stolen through very sophisticated scams.
We're seeing massive amounts of political disinformation on social media—on X, on YouTube, on Twitter—and it's a real problem for us as a democracy. And it's not just here in the U.S. We are seeing these manipulations everywhere: in the Global South, in Western Europe, in Asia, we are seeing disinformation campaigns that are now being fueled—jet-fueled—with deepfakes. There's this new scam where people are applying for jobs and doing interviews over Zoom as imposters, and they are either hackers from North Korea or cyber criminals trying to get into companies and insert malware or ransomware, and it is working. The FBI has now issued multiple reports about imposter hiring: if you are interviewing people over Zoom, you need to be exceedingly careful, because you don't know who you're talking to anymore. It's a weird world we're living in. In the UK right now, we are seeing horrific unrest and violence in the streets, almost entirely fueled by misinformation and disinformation—fake images and fake audio meant to create divisions in society. We are using this technology to further distance ourselves from people we don't like or disagree with.
What we spend a lot of time on—and you're gonna hear us talk a little bit about this—is how we detect these. When you take a photo, when you pick up your camera and record, you are transferring photons in the world into a digital signal, and you're using a camera to do that, and there are imperfections when you do that. Every time I take a photo with my camera, the top-left pixel will slightly over-represent how much light came in, and then the next pixel will slightly under-represent it, and that pattern is unique to my camera—not to iPhone cameras generally, but literally to the device with my serial number. We have put people in jail with this pattern. We have linked people to photographs that they have taken—from a device to an image—and said this camera took this photo, because it's unique, like a DNA signature. And when you synthesize an image, you are not going through a physical process. You're going through something extremely synthetic and artificial. You can see big, fat, whopping differences between what real images look like and what synthetic images look like. They have really strange patterns that are not immediately obvious when you look at the image. This is one of many, many, many different techniques. And so the last thing I'm going to say before I stop is, I wish I could tell you: just look for these artifacts and you'll be able to tell the difference. I could do that today, but in three weeks it won't work anymore, and then you'll have a false sense of security. So what we spend a lot of time thinking about is how we develop these techniques, constantly evolving them over time, to help us figure out what's real and what's not. Because things are getting pretty weird out there, and there's a lot to be excited about, but I think there's a lot to be scared about. So I think now we're going to talk about what to be scared about.
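For readers who want to see the camera-fingerprint idea in code, below is a minimal, illustrative sketch: estimate the sensor noise left behind after removing scene content, build a reference pattern from several photos known to come from one camera, and correlate a new photo's residual against it. This is not Farid's actual pipeline—real forensic systems use careful wavelet denoising, thousands of reference images, and statistically calibrated thresholds, all of which are simplified away here; the function names and threshold are assumptions for illustration.

```python
# Illustrative sketch of sensor-fingerprint (PRNU-style) matching.
# A simple Gaussian blur stands in for the wavelet denoising used in practice.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img: np.ndarray) -> np.ndarray:
    """Estimate the sensor noise left over after removing scene content."""
    img = img.astype(np.float64)
    denoised = gaussian_filter(img, sigma=2)
    return img - denoised

def camera_fingerprint(reference_images) -> np.ndarray:
    """Average the residuals of many images known to come from one camera."""
    return np.mean([noise_residual(im) for im in reference_images], axis=0)

def matches_camera(img: np.ndarray, fingerprint: np.ndarray, threshold: float = 0.05) -> bool:
    """Correlate a photo's residual against a camera's fingerprint."""
    r = noise_residual(img).ravel()
    f = fingerprint.ravel()
    corr = np.corrcoef(r - r.mean(), f - f.mean())[0, 1]
    return corr > threshold  # threshold would be calibrated empirically in practice
```

The same logic motivates synthetic-image detection: an AI-generated image never passed through a physical sensor, so its residual statistics look nothing like any camera's fingerprint.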
PAT: That was great. I think I should have just let you keep going. So, if my daughter calls from college and she's in distress and asking for money, should I hang up?
HANY: You need a code word. No—yes, absolutely, hang up. You need a code word. By the way, this happened to me. I was working a case with a pretty high-profile lawyer, and he got a phone call from somebody who sounded like me, from my phone number, and they were talking about the case. He got suspicious, picked up another phone, called me, and said, "Are you on the line right now?" It was somebody who was impersonating my voice in real time. So we have a code word. We have a code word—I can't think of it right now, by the way, so you'll have to remind me what it is later. You should have a code word with your family members. And you have to remember what it is, so you don't hang up when it's, you know, really them.
PAT: So that's not just paranoia talking?
HANY: No, no, that's just common sense now, okay, yeah. And by the way, if you are at a bank that is using voice biometrics to authenticate you, you should move your money right now—banks are getting hacked through biometric identification. You should set up two-factor authentication. It is the only thing that is even reasonably secure right now.
PAT: So, being in this world, with this being your focus of study—has it, though, made you more paranoid?
HANY: Yeah, like, really, it's weird. I get on calls with people and I'm suspicious of them.
PAT: Oh yeah, talk about Obama.
HANY: Okay. So this is gonna sound like a flex, and it is, but it's actually an interesting story. So Maddie, who's sitting right here, one of my students, was in my office when I took a call with President Obama. Last year, he wanted to get smart about generative AI—he was advising the Biden White House. And it's weird to get on a Zoom call with Obama. It was just the two of us and his chief of staff, and he gets on, and, you know, it's Obama, and I'm just like, is that…? For ten minutes, I was freaked out. I'm like, somebody's pranking me. And it was weird. I was really skeptical. I finally met him in person, and that made me feel more comfortable about it. But yeah, almost everything I look at, I'm immediately suspicious of now.
PAT: I think you told me earlier that when his hand crossed—
HANY: Yeah, so this is yeah. So right now deepfakes struggle when your hand goes in front of your face. And at some point he did this, and I relaxed a little bit.
PAT: What is it with hands? Artists can't do hands very easily. And AI seems to struggle.
HANY: It's getting better at it. So hands are really complicated. I mean, the human body is incredibly complicated, but hands especially—all the joints and the way they're configured. Anybody who's an artist knows that rendering hands is really, really hard. AI struggled for a long time with hands. That problem is basically solved. There's a new engine out there called Flux that seems to have fixed the hands. They got it. It was just a matter of time. By the way, can I say there's somebody who's a genius out there? I went to Amazon one day and I saw this thing: a prosthetic finger that you would attach to your hand, so that when somebody took a photo of you, they would think it was AI-generated—because a lot of AI-generated images created extra fingers. I just thought that was genius.
PAT: Well, before we get too much into this, I really want to know how you came to be the guy on deepfakes. I'm guessing that in 1988, when you graduated from college, that was not where you expected to go.
HANY: So I love this question, because I love telling the story to students—it tells you how science and, frankly, life works. I had finished my PhD; I was a postdoc. And this is how long ago it was: I was at the library getting a book. I was waiting in line, and the line wasn't moving, so I picked up this random book on the return cart, and it was the Federal Rules of Evidence. I had no earthly reason to be reading this book, but I literally opened it to a random page, and the title of the chapter was introducing photographs into a court of law. I'd been interested in photography, and I'd been thinking about computer vision, and I'm like, "Oh, I wonder what the rules are?" And so I started reading, and it said film and 35-millimeter negatives and prints thereof are all admissible. And then—in my memory there was a footnote, but it was probably a parenthetical—it said, "There's this new format called digital, and we are going to treat digital images exactly the same way we treat 35-millimeter negatives." And I thought, well, that's really dumb, because they're not the same. They're inherently malleable. This was in 1997, so really very early days, but we knew digital was coming. I went back to my office and I just started noodling this idea: what would you do? How would you authenticate it? And I couldn't shake it. A couple years went by. I had moved up to Hanover, New Hampshire, to start my first job as an academic, and I had a friend I was playing tennis with, and every weekend he'd kick my butt, and I hated him. So, to set up a tennis date for us to play, I took a picture of Andre Agassi, and I took my friend's face and spliced it into the image. And when I did that, I did a very specific manipulation—I had to make his head a little bit bigger—and I realized, ah, that's going to leave an artifact that I can quantify and measure. And then I went next door to my office and told a student, you want to do something really crazy? And he said yes. And that was the beginning of it. That's how science works. It's incredibly serendipitous and random. That was 25 years ago, and this is basically what I've been doing for 25 years now. And I started doing it way before I should have. Nobody saw generative AI coming; we saw digital coming, and this was a much more bespoke, narrow field that has since exploded thanks to generative AI.
PAT: Which is really just the last…
HANY: Five years. I mean, I think the term deepfake hit about five years ago, but it's really the last two years in terms of being really popular and very much in the vernacular.
PAT: And it's deepfake because it's deep machine learning. Do I have that right?
HANY: Yeah. So the underlying technology behind everything I showed you is what's called deep learning. And so the term deepfake actually comes from a moniker of a Reddit user who was using this technology to make sexually explicit material. So I don't like the term a lot. Generative AI is the rebranding by Silicon Valley, because nobody wanted to use the term deepfake. And it's a really good rebranding, by the way.
PAT: Yeah. But you made a deepfake. The first thing you did with the technology was make a fake.
HANY: Yeah, I mean, I would say a third of our time is making fakes. If you're gonna be in the business of detecting this stuff, you've gotta be in the business of making it. We're pretty careful with what we do with them—we don't release them—but yeah, we spend a lot of time making fake stuff, and usually it's just people calling me, you know, bad things.
PAT: As you say, it leaves an artifact. There's something that you can spot.
HANY: There's always something. And one of the great things about generative AI is that it's fundamentally a statistical inference engine. It's looking at the statistical distribution of images, audio, and video, and producing something that is statistically similar. But that means it doesn't know about cameras, it doesn't know about lights, it doesn't know about 3D geometry, and it actually makes a lot of mistakes that are not physically plausible but that your visual system doesn't really care about. So a lot of what we do is look for those failures. That was one example I showed you. Other ones are things like shadows that move in really weird ways, or reflections—like I can see a reflection on your glasses right now from the table—those tend to be a little wonky. But I would say almost everything we develop has a shelf life. If we're lucky, we get a couple years out of it, and then we have to shutter it and move on to the next one.
PAT: And am I right that these get better because of this, what do they call it, adversarial learning networks?
HANY: It's a combination of things that they're getting better. So first, the data on which they're being trained keeps getting bigger and bigger and bigger, and the more data they have, the better that they get. The underlying algorithms are getting better. So in some cases, that's because commercial entities have started to monetize this, and once you pour billions of dollars, obviously the technology is gonna get better. And then the other reason is a lot of these have huge, massive open source communities where people are just piling on and doing better and better. And then there's some algorithmic improvements. But it's doing exactly what you expect it to be doing. Every few months, it gets better, cheaper, and easier to use, and people are finding clever and not so clever ways to use it.
PAT: Yeah, do you see a day, though, when it gets so good that even software won't be able to find these deceptions?
HANY: I really don't like this question, but it's the right question to ask. My only hope is that I will have retired before that happens. Okay, the less snarky answer is, I don't think so, and here's why. First of all, video is incredibly complicated. You've got 30 frames a second, you've got the audio track—that is a lot of data that you have to get pixel-perfect, and that is really, really hard. I mean, maybe if we have quantum computers in our back pockets 20 years from now, and we have computing that we can't imagine, maybe. But here's the good news about the generative-AI space: with a few exceptions, the OpenAIs, the Anthropics, the Midjourneys—they're not setting out to burn the place to the ground. They may do that, but that's not what they're setting out to do. So they're not really my adversary the way somebody who creates malware and ransomware is my adversary. Somebody who's a malware creator is trying to cause damage. I don't think you can say that about OpenAI. So they're not actually particularly incentivized to create something that is pixel-perfect, that we cannot distinguish from reality, and I take some comfort in that—that maybe there's some hope we'll be able to keep up.
PAT: Not to feed your paranoia, but do you think a state actor might develop its own generative AI?
HANY: 100%. In fact, I'd be surprised if they have not already. I think you should expect that our adversaries at the state-sponsored level are absolutely doing this.
PAT: Okay, interesting. Are you surprised that we haven't seen more deepfakes in this election cycle?
HANY: Well, I've seen a lot of them. And I think you're gonna start to see more, by the way, and I'll tell you, there are a couple of reasons for it. One is that Elon Musk has gone off the deep end. That is contributing to this. And we should talk, by the way, about social media, because this is not fundamentally just a generative-AI problem. This is a social media problem. The problem isn't just that you can create this stuff; it's that you can carpet-bomb the internet with it, and then people like Elon Musk will amplify it. So one of the good things about OpenAI, Midjourney, Anthropic is that they have some pretty good guardrails on their generative AI for images and audio and video. You can't go there and say, give me an image of—and then think of the most obscene thing imaginable. It will stop you from doing that. But of course—are we recording this?
PAT: Yes.
HANY: I have tenure, it's fine. So what did Elon Musk do? He unleashed Grok, which is a ChatGPT without guardrails, and he unleashed Flux, which is an image generator without guardrails. And of course he did. That was just released a few weeks ago. So a lot of the nasty, nasty stuff that you're starting to see on Twitter, on X, right now is because we now have generative AI without guardrails. And I think it's gonna get worse. You're starting to see some pretty ugly things, and I think it will get worse. We probably saw the first impact of deepfakes on an election, but you may not have heard about it because it was in Slovakia. Earlier this year in Slovakia, 48 hours before the election, the pro-NATO candidate was up four points on the pro-Putin candidate. Somebody created a deepfake of the pro-NATO candidate saying, we're going to rig this election and win it one way or another. And it went viral online. And because there's this bizarre rule in Slovakia that says you can't talk about the election in the 48 hours before election day, nobody could go in and clean up the record. Two days later, the pro-Putin candidate won by four points. That's an eight-point swing, in a country that neighbors Ukraine, by the way. Not for nothing, that's pretty serious.
We probably saw an impact in India, too, but probably down-ticket, not at the top of the ticket. And that's the other thing: do I think deepfakes will impact this election? I think so, in a couple of ways, but I'm also worried about down-ticket races. Here's where I think you're going to see the impact, and you're already seeing it. I spent the better part of the last few weeks talking to reporters about this: these images of Harris at big rallies with huge crowds, tens of thousands of people showing up. This infuriated Donald Trump, and so he started saying the photos were fake. Deepfakes work both ways. It's not just that you can create fake things and hurt people; you can deny reality. And the question you want to ask yourself is, why is he doing this? Well, it could just be pettiness. He doesn't like that her crowds are bigger than his crowds. That's a perfectly valid reason. I think it's way worse than that. I think what he's doing is setting the stage to deny the outcome of the election. And by the way, I'm not making that up—he has told us that he's going to do this. And the easiest way to do it is to say, "Look, she couldn't have gotten those votes, because those crowds are fake." And so I think you are seeing the poisoning of the information ecosystem—by state-sponsored actors, which is absolutely happening, by the campaigns, by the candidates, and by sociopathic CEOs.
PAT: Yeah, yeah. Well, the title of the event is, you know, "What If Seeing Is No Longer Believing?", and I guess that gets to this point: things don't even have to be fake—if it's plausible that they could be faked, you can claim they're fake.
HANY: Yeah. It's called the liar's dividend. It goes both ways: once we live in a world where you can fake images and audio and video, nothing has to be real. I'll give you an example of how fast the landscape has shifted. In 2016, then-candidate Trump got caught on the Access Hollywood tape saying some really awful things about women. I mean, even by Trump standards, these were bad. And what did he do? He apologized. Go back and check—he actually said, "I'm sorry," which is pretty amazing by today's standards. Now, fast forward: somehow he wins the election. Fast forward a year, and he's asked about the tape, and now deepfakes are on the rise. And he says, oh, it's fake. Why'd you apologize in the first place? It doesn't matter—it's fake, right? Who's accountable anymore? Police violence, human rights violations, politicians doing and saying things: everything is suddenly fake. And how do we move forward as a society if we can't agree on the basics? We can disagree on lots of things. I'm fine with disagreeing on things. I like disagreeing on things. But we've got to agree that two plus two is four. And so, to me, if we don't have a shared sense of reality, this is an existential threat to our society. And I'll give you some numbers that, if you're not already terrified, should terrify you. I asked for bourbon up at the front here instead of the water, but I got water. Twenty percent of Americans—20 percent—believe that Bill Gates created COVID to put a tracking device in them, and that maps onto the number of people who are not getting vaccinated, not just for COVID, by the way. Twenty percent. And I want to ask these people why Bill Gates has to put a tracking device in them if we all have phones in our pockets—I don't even understand this theory, but fine. Roughly 30 percent of Americans believe that climate change is a hoax—not that it's not caused by man, not a disagreement about how to respond to it, but that it is a hoax. And one in two—50 percent—of Republicans believe that Donald Trump won the election in 2020. So think about what I just enumerated: public health, climate, democracy. How do we move forward? This is not fun and games anymore. But here's the big one: with those conspiracies, which are hitting a significant number of our fellow Americans, comes something else. It's not just that you believe things that are factually incorrect or have no foundation; it's that you also believe that the government, the media—that's you—and the scientific experts—that's me—are in on it. That we are keeping information from you. And so you have a distrust of the very institutions that we need to function as a society: government, scientific experts, and media. They need some work, don't get me wrong, but you definitely need those people. And that's what scares me: it's this overall distrust of institutions that comes along with this. And I can tell you, because we've been mapping out these conspiracies—it used to be the Flat Earthers, the Elvis-is-alive crowd, the 9/11 truthers, the birthers—those numbers were typically in the single percentage points, somewhere between four and nine percent. Now we are in the 20 to 50 percent range. This is not fun and games anymore. This has real consequences for us as a society.
PAT: So part of what I think I promised from this talk is that we would have some things we can do about it.
HANY: Ah, well, that was silly of you. No, no, I'm gonna be helpful. I'm gonna be helpful. Okay, a couple things. Stop, for the love of God, getting your information from social media. I mean, this one's easy, honestly. Just turn it off. You'll be so much better off for it. Facebook, Twitter, TikTok—get rid of it all, all of it, gone. In fact, if I could, I'd unplug the internet, but I'm gonna be reasonable. So we do need to teach people—you and I were talking about this earlier—that this was not designed for getting reliable information. That's not what it's designed for, but that's what we are using it for. The majority of Americans use social media to get information, and that is devastating. I wouldn't say it's an easy fix, but it's not a technical fix; it's an education thing. That's number one. Number two is on the technology side. What is so hard about what we do is that we're basically a postmortem. We come in and clean up the bodies after it's over. And that's important—we should have facts, we should figure out what happened—but honestly, we're cleaning up the mess. By the time I see something, millions of people have already seen it online, and I can't get them to unsee it.
But there's some cool technology coming down the pike. There's an organization I'm involved with called the Coalition for Content Provenance and Authenticity. It's a Linux Foundation effort—open source, multi-stakeholder—with industry people, academics, not-for-profits, journalists. And what they're doing—I guess what we are doing—is building a technology so that at the point of creation, an OpenAI, for example, will add signatures into the media being created that automatically identify it as AI-generated. In fact, you can see this technology in place right now. If you go over to OpenAI and generate an image—say please when you do that—DALL-E images from OpenAI will have what's called a content credential associated with them. We can see that, and also, if you upload that image to LinkedIn, it will tag it as AI-generated. It's the first proof of concept of what we call glass to glass: the first piece of glass is the creation, and the last piece of glass is you consuming it. We now have content credentials that identify things. I call these necessary but not sufficient conditions. It's like putting warning labels on tobacco. That was good, but it didn't stop people from smoking. So I think it's important that you know what you're looking at, like a nutrition label. You can still eat potato chips, but you should know that they're very, very bad for you. So I really like that technology. But it requires the cooperation of the OpenAIs, the Anthropics, and the Midjourneys, and Elon Musk, which I don't have a lot of hope for. So in this world, we're only as good as the lowest common denominator. You know where that's going. It used to be Mark Zuckerberg. Now it's Elon Musk, go figure.
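To make the content-credential idea concrete, here is a minimal conceptual sketch of the sign-at-creation, verify-at-consumption flow Farid describes. It is not the actual C2PA manifest format or any vendor's API—just ordinary public-key signing over an image's bytes plus a small claim record, which is the core idea the standard builds on; the function names and fields are illustrative assumptions.

```python
# Conceptual sketch of a "content credential": the creating tool signs the
# media plus a claim about how it was made; any downstream platform can verify
# the signature. This is NOT the real C2PA format, just the underlying idea.
import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def attach_credential(image_bytes: bytes, generator_name: str, private_key: Ed25519PrivateKey) -> dict:
    """Creator side: bind a claim ('this was AI-generated by X') to the pixels."""
    claim = {
        "generator": generator_name,
        "ai_generated": True,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": private_key.sign(payload)}

def verify_credential(image_bytes: bytes, credential: dict, public_key) -> bool:
    """Consumer side (e.g., a platform labeling uploads): check hash and signature."""
    claim = credential["claim"]
    if hashlib.sha256(image_bytes).hexdigest() != claim["image_sha256"]:
        return False  # pixels were altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(credential["signature"], payload)
        return True
    except InvalidSignature:
        return False

# Usage sketch with a hypothetical key pair and image:
key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
cred = attach_credential(image, "example-image-generator", key)
print(verify_credential(image, cred, key.public_key()))                 # True
print(verify_credential(image + b"tampered", cred, key.public_key()))   # False
```

The "glass to glass" point is visible in the two halves: the generator signs once at creation, and anything downstream can check the label without trusting the uploader.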
PAT: I'm gonna pause for one second. If you have questions that you've written out, I'm gonna ask you to pass 'em this direction. And I think the ushers can collect those.
HANY: I just wanna say, my wife is writing a question, so don't pick that one, please.
PAT: Yeah, she’s working hard on it too. She should just bring that straight to me.
HANY: Yeah, she's gonna hand it to you.
PAT: Yeah, so the questions should go down here to my colleagues. Great. So, the last talk was with Stuart Russell.
HANY: He probably scared the bejesus outta you too.
PAT: Yeah, yeah. Yeah. It was strangely funny though.
HANY: Well, what else are you gonna do? But laugh.
PAT: Exactly. Yeah, exactly. But part of what I brought up with Stuart was that the field of computer science, and the AI world, is so divided between the people who are saying it's gonna be the end of us and those who say, no, it's gonna solve climate change and we're gonna live forever. And I think it's hard to get people to go out on a limb, but I would like you to go out on a limb.
HANY: Yeah, sure. I'm somewhere in between. There are absolutely people saying—well, I would say there are other people saying this is complete nonsense, none of this works, it's all hype, it's complete crap. I think they're wrong. I think there is really something happening that is very dramatic. I was just talking to some of my students earlier: I've been doing this for 25 years, both teaching and as a practitioner of computer science, and I don't think I've seen a technology in such a short amount of time fundamentally change the way I work, the way I think, and the way I think about teaching the next generation of computer scientists. Something very dramatic is happening, and I don't think we fully understand what it is yet. So I don't think it's nonsense. I also don't think AI is going to solve climate change and discover new drugs—I don't see that being true. It'll be an accelerant for certain things. I don't think it's gonna be the end of us. But here's what scares me; let me put it this way. I'm not worried about AI. I'm worried about capitalism. I'm worried about AI within a capitalist system that will burn the place to the ground to maximize profits. And the reason I'm worried about that is 'cause that's what we've been doing for 20 years with social media. Why is social media so awful? Because they are burning the place to the ground to maximize their profits to buy a bigger yacht, and they don't care what they do to individuals, societies, or democracies. And so I worry that without a regulatory landscape—which is what the government should be providing, and it has fallen asleep at the wheel—the companies are gonna do what companies do. We know this. Left unchecked, it's not AI, it's humans—it's humans that scare the bejesus outta me. The AI is fine.
PAT: And I know that there would be lots of people in Silicon Valley who would push back and say, “government doesn't know what it's doing, it doesn't know enough about what it's doing to regulate effectively, and it's gonna just create more problems than it solves.”
HANY: Here's the problem with that story: the reason the government is confused is that these people are spending millions of dollars sending people to Capitol Hill to confuse everybody. This is Big Tobacco and Big Oil all over again. You muddy the waters. What did Big Tobacco do for decades? They didn't say smoking doesn't cause cancer. They said, ah, we don't really understand, don't overreact. That's the same nonsense you've been hearing from Silicon Valley for the last 20 years. So I don't buy that story. And I can tell you, I spend a lot of time talking to politicians in Sacramento, on Capitol Hill, in Brussels, and in the UK, and they are not—well, some of 'em are stupid, but a lot of them are not stupid. I think they are being sold a story by people who have a vested interest in not having regulation. Regulation is good. And here's how you know regulation is good: a lot of people here are old enough to remember when we tried to put seat belts in cars. What did the automotive industry do? They cried like a bunch of babies for decades, saying this is gonna destroy the automotive industry. It was a big fat lie. It was a big fat lie. Safety is good. Not killing us is good for business, right? Good, smart regulation is really good. And by the way, you don't even have to believe me. Go read the Biden executive order from October of last year. It's incredibly thoughtful, it was pulled together in about nine months, and it is really a thoughtful roadmap. And it was done in collaboration with our friends in the UK, our friends in Brussels, our friends in Australia. So I'm not saying that the government is here to solve all our problems, but I think if we work together and not against each other, we can get the government to move in a way that puts up reasonable guardrails, we keep good innovation, we do good things for individuals and societies, and a bunch of people can still make money along the way. That's my story.
PAT: It's a good one. But it doesn't seem like ethics ever catches up to technology, and technology is only moving faster and faster. Ethically, we don't seem to move faster. In fact, we seem to drag our feet.
HANY: Yeah. I mean, we used to measure change in technology in 12-to-18-month increments. Now it's 12 to 18 weeks. It is fast, and that does make it very hard. And the reason ethics lags and regulation lags is 'cause we don't know what we're doing. We don't know what this is yet, and it does take a little bit of time. Here's something I've been thinking a lot about, speaking of ethics. This is coming, by the way—it's coming in our lifetime. We are going to start to create digital avatars of people after they pass away. Because right now we basically have the ability to take a body of writings, or recordings of you talking, and put 'em into a ChatGPT-style large language model. You've got my voice, you have my likeness. You can open up an iPad and talk to a digital version of me, interactively. I know—she's just like, oh my God. But I'm telling you, you're gonna have to change your will to say, don't create a digital avatar of me. This is coming. And you know, a lot of people in Silicon Valley have been obsessed with immortality. This is what immortality is gonna look like. And there are some really interesting questions about that. Like, should UC Berkeley be able to use my likeness and my voice and my thinking to teach classes after I die? That's weird. Interesting. Should my wife have a digital version of me afterwards? I don't know. So there's some weird stuff coming. And look, we dealt with this in the medical profession, but in the medical profession there was a longstanding history of ethics in biology, ethics in medicine. We're a bunch of engineers. We're a bunch of nerds. Nobody's thinking about this. And by the way, that's a failure of the university system, where we don't teach these young engineers ethics. We don't actually spend a lot of time talking about philosophy and history and ethics. We just learn how to code and go out and do some damage. That's our failure.
PAT: By the way, I thought you were gonna say, what does it mean for Elvis impersonators when we have those avatars?
HANY: That was a missed opportunity.
PAT: A little Elvis on the brain.
HANY: That was a missed opportunity.
PAT: Okay. Emily, gimme your, oh, you turned it in.
HANY: I really, shouldn't have said anything.
PAT: Thank you all for submitting questions. You talked about technological solutions, but what do you think about legislative solutions? Okay, we kind of talked about that. Good.
HANY: Yeah, let me chime in anyway. I think we need regulation here, and it needs to be smart regulation. California has been really leading the charge. I don't think it makes sense to do this at the state level, but if any state is gonna do it, it makes sense for California to do it. I like a lot of the language that's come out of the UK online safety bill, the AI safety legislation coming out of Brussels, and even the Biden executive order—the National Institute of Standards and Technology is standing up an AI safety institute to think about this. I think we need some guardrails. There is no industry—there is no industry—where there are not some reasonable guardrails to keep us safe. And I think it's perfectly reasonable to say that this type of technology—and not just AI, all of it—needs some reasonable guardrails. But they have to be smart, and I think we have to do it in a way that is collaborative and not adversarial. And I think there's a glimmer of hope here that we might be able to do that.
PAT: Okay. What's the most surprising deepfake you've seen? And if I can add to that, have you been fooled?
HANY: Ah—by the very definition, I can't tell you if I've been fooled.
PAT: Right.
HANY: I think that video I showed you of the woman talking was the one that—it was very benign, but it surprised me how good it was, because if I had looked at that and didn't know, I don't think I would've seen it, even knowing a lot of the artifacts that I've come to look for: the throat doesn't usually move, the expressions are a little inconsistent, the mouth is often a little off. The one comfort I take is that Maddie, who's sitting right here, a student in my lab—his technology was able to detect it as a deepfake. So I take a little comfort in that, but I was really blown away by it. I've also really been blown away by the voice cloning. It is shockingly good. It really surprised me how good it is. And I will tell you, we were just finishing up some experiments with Sarah, who's also sitting here, and, you know, we're always doing sanity checks of our data—we always look at things and ask, okay, is this what we think it is?—and we keep being fooled by our own deepfakes. And that's really disconcerting. That's starting to really bother me.
PAT: Can you talk for a second about—because I just think this is such a trippy idea—these adversarial networks, where one generates something and the other one discriminates?
HANY: Yeah. So there are different technologies for creating deepfakes. One of them is called a GAN, or generative adversarial network. They're sort of being folded into what are called diffusion-based processes now, too, but let me talk about GANs, because it's a really beautiful idea. The first really good deepfakes were all built on GANs. And the way GANs work is you take two systems, which are deep neural networks—that's the underlying neural architecture, the computation. One of them says, I'm going to make an image, and at first it just splats down a bunch of pixels and sends the result to another network, which asks, is this a face, is this a photo? And that one says, nope, it's not—try again. And then the generator modifies some pixels and sends it back to the detector, and it says, nope, try again. They work in what's called an adversarial loop: the generator is trying to make an image that fools the discriminator. And the power of that is incredible, because all you have to do is give the discriminator enough examples that it can tell the difference. It's such a simple and elegant idea. And by the way, these systems are learning on top of themselves. So one of the things we think a lot about when we create deepfake detectors is making sure those can't get incorporated into GANs to then build better fakes. You have to think really carefully about how you develop these technologies, how you deploy them. Do you talk about them publicly? Do you release code? What do you do? Because fundamentally, this is an adversarial system. And so some of the underlying technologies are really beautiful and elegant and very, very powerful.
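For the technically curious, here is a toy version of the adversarial loop Hany describes—a minimal sketch, not any production deepfake system. A tiny generator learns to imitate a simple one-dimensional "real" distribution by trying to fool a tiny discriminator; the network sizes, data, and hyperparameters are all illustrative assumptions.

```python
# Toy GAN training loop: generator G tries to fool discriminator D,
# D tries not to be fooled. Runs on 1-D synthetic data, not images.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data: samples from a Gaussian with mean 2.0 that G must imitate.
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator: sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: label real samples 1, generated samples 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()          # detach: don't update G here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make D label generated samples as real (1).
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# If training worked, generated samples cluster near the real mean of 2.0.
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```

The same loop is why detectors have to be handled carefully: a published detector can simply be dropped in as the discriminator, training the generator to evade it.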
PAT: And if I understand it correctly, it's not like old-fashioned computer science, where you could look at the code and see how you're getting your output. You can't just lift the hood and say, okay, let's see, what was the process?
HANY: Well, two things. You can lift the hood and see the process, but you can't understand it. The networks are learning literally billions and billions of values—what we call the weights of a model. So I can look at the process and say, this is how it's doing it: it's doing this GAN thing, or something else, and it's learning this incredibly complex set of calculations. But once I unleash that network and say, make this for me, I don't really know how it's working. And you know this is true, 'cause it happens every once in a while: a ChatGPT will go sideways, and they'll ask Sam Altman, why did that happen? And he's like, yeah, I don't know. And I'm like, that is not a great answer. That's not what you want to hear from the CEO of OpenAI: yeah, we don't know. These things get a little weird, right? And we don't fundamentally understand them, 'cause even though we can look under the hood, they're doing such massive computations that we just don't really understand why they're doing what they're doing.
And they have what we call emergent behaviors. They do things that are surprising, and that is disconcerting. If there's ever a reason to have guardrails on technology, that's it: if I told you your plane every once in a while is gonna do something surprising, you'd be like, yeah, that's not okay.
Right. We're gonna put some guardrails on that.
PAT: AI airlines.
HANY: Yeah, exactly.
PAT: Alright, someone wants to know how will generative AI models evolve once a large fraction of the images that they train themselves on are also fake?
HANY: Oh, man, that's a great question. So Maddie, who's sitting right here, did a really beautiful project—I guess it was last year—on what we call nepotistic training. What you should know about all of these generative-AI systems is that they have largely been trained on your data: stuff that you have put on the internet, stuff that media outlets have put up. And by the way, massive copyright-infringement lawsuits are going to be sorted out over this for years, for decades. But it's largely human-generated content.
What we were curious about is this: now we're entering the age of generative AI, where a lot of content online is AI-generated, so what happens? And by the way, the way the data comes to you is you just scrape the internet indiscriminately—billions of pieces of content: text, image, audio, and video.
So what happens when you start grabbing in some AI-generated stuff? What we and other people have started finding with large language models and image generators is that the models start to collapse—they start to create gibberish—with as little as 3% of the training data being AI-generated, and in some cases with as little as a fraction of a percent, if you are more adversarial in how you attack it.
If I'm a state-sponsored actor and I wanna slow down OpenAI, you know what I do? I prop up a couple thousand websites with poisoned data and I just wait for them to come take it. I don't have to hack their network—they come take it, and I start poisoning their models. This may be a good thing, by the way; we'll see what happens. But some weird things are starting to happen, and there's a series of papers in the academic literature talking about this. And here's one thing I take comfort in. I wish people would just do what I want them to do, but I'm also an adult and I realize it doesn't always work that way. So what I like is when my interests and the companies' interests are aligned, and in this regard, the companies are actively incentivized to label every single piece of their content so they don't re-ingest it in training. That's really good. I like that OpenAI wants content credentials so they don't reabsorb this stuff. And that may even be true for Elon Musk, 'cause that means his models will also start to collapse. And that's good—when our incentives are aligned.
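Here is a toy simulation of the feedback loop behind the "nepotistic training" finding Hany describes: each generation of a very simple model is refit to a finite corpus sampled from the previous generation, and the distribution steadily loses its tails. This is a cartoon of the effect, not the actual experiments from his lab; the vocabulary size, corpus size, and number of generations are illustrative assumptions.

```python
# Toy model collapse: a unigram "language model" retrained each generation
# on text sampled from the previous generation. Rare words disappear and the
# distribution narrows over time.
import numpy as np

rng = np.random.default_rng(1)
vocab_size = 1000
probs = np.ones(vocab_size) / vocab_size          # generation 0: uniform over 1000 "words"

for gen in range(1, 11):
    corpus = rng.choice(vocab_size, size=5000, p=probs)   # sample a finite corpus from the model
    counts = np.bincount(corpus, minlength=vocab_size)
    probs = counts / counts.sum()                          # "retrain" on the sampled corpus
    alive = int((probs > 0).sum())
    print(f"generation {gen}: {alive} of {vocab_size} words still have nonzero probability")

# Once a word's probability hits zero it never comes back: the model has
# permanently forgotten part of the original distribution.
```

Mixing in fresh human-generated data slows the decay, which is exactly why the companies want reliable labels on AI-generated content before it gets scraped back in.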
PAT: This is a really insane idea—it hadn't occurred to me that it can pollute itself, and that what it's creating is not good for it.
HANY: Yeah. And by the way, we don't know why. We still don't know why—Maddie's still working on it—and I don't think anybody does. The other papers we've seen are the same: it's an observation. We observe the behavior, but we fundamentally don't understand it, which goes back to your earlier question, because we don't really understand these models in a lot of detail. So this is something we're thinking really hard about, but we still don't understand it. We have observed it, though, and it's very real.
PAT: Can you think of any other technology that humans have created that's been like that?
HANY: I mean, there's a joke about the British Empire that's in my head. But look, we know that when a species inbreeds, it's very bad for the species. Things get pretty weird. Sorry, Sarah—Sarah's British and I just made fun of her entire country. So, yeah, I can't think of another technology like that, but certainly in genetics we've seen that kind of behavior.
PAT: This is sort of a media question. Is there any chance that we can start moving away from a vast array of niche media markets and toward common sources of information, à la the major networks in the 1970s—back to the Walter Cronkite era?
HANY: Oh man, I love that question. I grew up at a time when there were three news channels, right? It was Walter Cronkite and Dan Rather. And look, I don't think it was perfect, but it was a hell of a lot better than it is now. And say what you will about the news, but here's the thing you have to understand about news. Prior to CNN and Ted Turner—and he's not a bad guy—news was the price you paid for being on the airwaves. Why did we have news at six o'clock and 11 o'clock every day? Because the government mandated that you have news. That was the deal: you did that for an hour, and then you got the airwaves the rest of the time. That was your responsibility. It was never meant to be a profit center. It was meant to be a service to society. That was your peas and carrots, and then you could have your dessert—then you could watch Benny Hill all you wanted. It was great. And then there was also the fairness doctrine—which Reagan pulled—that said you cannot be hyper-partisan; there was reasonable balance in news reporting. Ted Turner came along and said, we are going to monetize news. And I don't think he meant to destroy the news media, but I think he effectively did, because he told everybody else that news should be a moneymaking venture. And as soon as news is a moneymaking venture, things get weird, right? Because your incentive is very, very different from Walter Cronkite's and Dan Rather's. And I think that's been bad for us. Will we go back? I doubt it. Yeah, I doubt it, 'cause people like to be fed things that they agree with. We know this. And by the way, that's not a partisan issue; the left is as guilty of that as the right. And I think people don't want to go back to "tell me what's actually going on," because it's less comforting.
PAT: So should there be a consequence? This is not a question from the crowd, but should there be a consequence for sharing deepfakes?
HANY: Wow, that's a good question. So the answer is yes, but there are some caveats. For example, if you're creating child sexual abuse material, 100 percent, you should go to jail. It is illegal in most parts of the world, and it is one of the most common uses of deepfakes. People are doing awful things with children. It's awful, awful, awful. The creation of non-consensual sexual imagery of adults is now illegal in about a dozen states here in the US, and we are working on federal regulation to sort of unify that.
I think if you take a woman's likeness and you put her into explicit material, that is not protected speech, and I think that should be illegal. Then we have to figure out: is it civil, is it criminal? We should think a little bit about that, but I think that should be banned. On the political front, it's obviously more difficult, because we should be able to criticize our politicians, and we can't say you can't create a deepfake of a politician. That would be silly. So here I have a very simple rule, which is that we have to disclose these things. I think if you create a deepfake of a politician, it's fine.
But I think it has to be labeled as such. And there's a line you can cross. For example, when the guys created the Joe Biden deepfake in New Hampshire telling people not to vote, that's election interference. And by the way, millions of dollars in fines have now been levied and felonies have been charged. So there, I think, you cross a line: you can't interfere with an election. But we need satire, we need humor, we need to be able to criticize our politicians. So with political speech, we have to be careful, right? But there are other things that are unambiguous. Fraud is fraud. We don't have to create new laws for deepfake fraud.
It's fraud; it's illegal. So I think we have to modernize the laws on sexually explicit material and child abuse material; those laws are a little vague and need to be updated. And then on the political side, I just think we should have disclosures. I don't think that's perfect, but I think it gets us far enough without getting into trouble with our free expression issues.
PAT: Okay. I want you to talk a little bit about your company, Get Real. By the way, if I were gonna do this again, I'd call it "Keeping It Real with Hany Farid."
HANY: That's so good.
PAT: I missed that opportunity, but I wanted to share it. I'd like you to plug your company. And I do want credit for the name; if you use it, it's now on film.
HANY: Understand. I understand.
PAT: Tell us what it does, and why you think it's a good product.
HANY: So this isn't a sales pitch; I'm not selling anything. In fact, we're not working with consumers. I co-founded a company two years ago with some really, really smart people and a lot of really, really smart former students, which by the way is completely awesome. One of them is sitting right there. And what we are doing is developing a suite of tools, built on the work we've been doing for decades now, that would allow a media outlet, a law enforcement agency, a national security agency, an analyst, to ingest a piece of content, whether image, audio, or video.
And we give you as much information as possible about whether it's real or not. You can see why this would be important for media outlets. You can see why it would be important for law enforcement when they're introducing evidence in a court of law, and certainly for national security. We're also building technology that would sit on Zoom calls and Teams calls, so that when you get on a call, you know who you're talking to.
You can see why Fortune 500 companies would care about this, and it saves you 10 minutes wondering whether Obama is really Obama. That's right. Exactly. You just focus on what he's saying so you don't sound like an idiot. And then we're also doing some services work, where we want to work with large organizations like a New York Times or a CNN, for example, where there are sort of three bins of content that come in every day.
There's stuff they can deal with; they don't need my help. There's stuff where they need a little bit of help from the technology and they don't need me. And then there's stuff where they really don't know what's going on, and we want to be able to be helpful to those organizations.
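To illustrate the three-bin idea, here is a minimal sketch, purely hypothetical and not Get Real's actual product or API, of routing incoming media by the confidence of an authenticity score. The `score_authenticity` function is a stand-in; real detectors return far richer evidence than a single number.

```python
# Hypothetical triage of incoming media into the three bins described above:
# clear cases the newsroom handles itself, borderline cases the tooling can
# settle, and hard cases escalated to human forensic experts.
from enum import Enum

class Bin(Enum):
    NEWSROOM_HANDLES = "newsroom handles it"
    TOOL_ASSISTED = "tool gives enough signal"
    EXPERT_REVIEW = "escalate to forensic analysts"

def score_authenticity(media_path: str) -> float:
    """Placeholder: pretend this returns P(content is authentic) in [0, 1]."""
    raise NotImplementedError("swap in a real detector here")

def triage(score: float) -> Bin:
    # Confident either way -> the newsroom can act on it directly.
    if score >= 0.95 or score <= 0.05:
        return Bin.NEWSROOM_HANDLES
    # Moderately confident -> the tooling's evidence is probably enough.
    if score >= 0.75 or score <= 0.25:
        return Bin.TOOL_ASSISTED
    # Genuinely ambiguous -> send it to the experts.
    return Bin.EXPERT_REVIEW

print(triage(0.98).value)  # newsroom handles it
print(triage(0.50).value)  # escalate to forensic analysts
```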
The internal model I have in my head is this: there are roughly 8 billion people in the world. I can't stand between those 8 billion people and the truth and the facts, but I can stand behind some really incredibly smart and talented journalists and investigative journalists and help them help the rest of the world.
So I'm sort of behind your front line. And we want to help media outlets, because we think that if the media can't get this right, then we're nowhere. We've gotta get that right.
PAT: Last question: if we don't get it right, what does it look like next election cycle?
HANY: I mean, look, we got a glimmer of it. January 6th was a glimmer of it, and that was not a deepfake-powered lie. That was just a big fat lie. Now imagine that lie, which we know is coming if he loses, powered by audio of Harris saying we're gonna rig the election, images of ballots being burned, secretaries of state saying that they're rigging it. That's what's going to happen. And you know that Elon Musk is going to amplify it to hundreds of millions of people, and Trump is gonna amplify it. And January 6th of 2021 is gonna look like a day in the park.
I don't think this is hyperbolic. I do think that if you don't have a shared sense of reality, you are looking at an existential threat to your society and democracy. We're not disagreeing about politics anymore. We're not disagreeing about philosophy.
We're just disagreeing on two plus two, and I don't know how we move forward like that. Like the political conversation. You should see the hate mail I get. It's amazing. It's amazing. But it's really detached from reality, and I don't know how to have that conversation.

PAT: And what is the nature of it? Are they claiming that you're being alarmist for no reason?

HANY: They think I'm part of the conspiracy to cover it up. I mean, they think I'm part of some conspiracy. By the way, I've been accused twice in the last year of being a time traveler, which is very cool.
'Cause if only. But this is what it is. Like, literally, somebody thinks I'm a time traveler and that's how I'm doing it. And I think, back to your point, social media, even more than deepfakes and generative AI, has really polluted the political discourse, the public discourse, to the point where there is just so much conspiratorial thinking out there.
We didn't need deepfakes to sow this kind of discord. It's making it worse. It's jet fuel, and it's absolutely making it worse. But if you asked me, if I had a magic wand and could solve one problem, it would be the social media problem, not the AI problem, because I think that is by far the bigger one. And here's the thing you have to understand about it: it's not a bug of the system, it's a feature. It was designed to be like this because of the attention economy, right? It is designed to feed us the most hateful, conspiratorial, salacious, outrageous stuff, 'cause that's what we click on. And by the way, that's on us, 'cause we're the ones clicking on those things. You can't really blame Zuckerberg and Musk for that part, because they are responding to what we are responding to. Now, I would say that as multi-billionaires with trillion-dollar valuations, they have a larger responsibility and need to do better. But that's the problem: we are poisoning our own brains.
And look at the QAnon conspiracy. Some 25 percent of Americans believe the core tenets of the most bizarre conspiracy. That's one in four. That's insane. I don't think that's being hyperbolic or alarmist. But of course, I may be a time traveler.
PAT: That's just what you would say.
HANY: That's what I would say exactly.
PAT: Well, let me do a time check. Yeah, I think we gotta wrap up. I think you need to give us some words of hope and encouragement.
HANY: Hold on, I'm gonna dig deep. Okay. Here's something cool. So, we've been talking about the bad effects, and we should talk about them and figure out how to mitigate them, but no kidding, generative AI is amazing. I'm a computer scientist and applied mathematician by training. I've been writing code for 30 years. I don't write code anymore without ChatGPT by my side, where it writes a lot of the code. And the reason I'm saying that is that you don't need to be a computer scientist and a brilliant engineer to do really cool things now. You can do amazing things with ChatGPT if you just know how to ask it questions. And for me, that lowering of the barriers of access is incredible. If English is not your native tongue, being able to edit your text to make it more grammatically correct is amazing. Being able to translate things into multiple languages is amazing.
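As a small, hedged example of the low-barrier use Hany describes, here is a sketch using the OpenAI Python client to fix grammar and translate a paragraph. The model name is an assumption, and any chat-completion-style API would serve the same purpose.

```python
# Minimal sketch of the "lowered barrier" use case: ask a chat model to fix
# grammar and translate a paragraph. Assumes the `openai` Python package and
# an OPENAI_API_KEY in the environment; the model name is just an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Me and my colleague has wrote a paper about detecting of deepfakes."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whatever model you have access to
    messages=[
        {"role": "system",
         "content": "Fix the grammar of the user's text, then translate the "
                    "corrected version into Spanish. Return both."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```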
And so there is something to be excited about here. I really think there is something exciting about the large language models, the ChatGPTs of the world, that is going to change things. We have to think carefully about what it means for the economy and people's jobs, and about the fact that all these things were trained on the work of content creators, incredibly talented content creators, without their permission.
We have to think about that, but for us as users of this technology, I do think it's an enabling technology. I heard a great line the other day that I'm gonna share with you. It's in the context of lawyers, but you can replace "lawyers" with any profession: AI is not going to eliminate lawyers from work.
But lawyers who use AI will eliminate lawyers who don't. And I think that's true of just about every profession. If you don't figure out how to use this technology, you're gonna be left behind. And that was true of computers too, by the way. Same thing with the internet. So that's not a particularly shocking thing to say, but I do think we have to think about how to bring this into our education.
And the thing that's harder here: the PC revolution played out over about 50 years. The internet revolution was about 25 years. This revolution is one year. So we don't have the time to do what we did in the first rounds of this. But I do think it is fundamentally changing the way we work and interact with the world, in ways that are exciting, in ways that are terrifying, and in ways that are unfair.
And we just gotta find that balance.
PAT: Yeah. All right. Okay, with that, thank you, everybody. There'll be a short wine reception just outside these doors, and everybody needs that. Thank you so much for coming, and I hope to see you again in the near future. And I just wanna say once again, thank you so much to Hany. Thanks.
[MUSIC IN]
NATHALIA: This is The Edge, brought to you by California magazine and the Cal Alumni Association. I’m Nathalia Alcantara. This episode was produced by Coby McDonald, with support from Laura Smith, Leah Worthington and Pat Joseph. Special thanks to Hany Farid. Original music by Mogli Maureal.
[MUSIC OUT]