Then & Now
Then & Now connects past to present, using historical analysis and context to help guide us through modern issues and policy decisions. Then & Now is brought to you by the UCLA Luskin Center for History and Policy. This podcast is produced by David Myers and Roselyn Campbell, and features original music by Daniel Raijman.
Challenges and Opportunities in the New Age of AI: A Long-Term View with John Villasenor
As advances in technology continue to shape our world, understanding the implications of artificial intelligence (AI), cybersecurity, and digital privacy has never been more important. In this episode of Then & Now, we delve into the crucial intersection of technology, law, and policy with John Villasenor, a distinguished professor at UCLA and co-director of the UCLA Institute for Technology, Law and Policy.
Villasenor's expertise provides a fascinating glimpse into the history of technology and how it has rapidly evolved over the years. From the pioneering work of Alan Turing to the current landscape of AI, Villasenor offers valuable insights into the challenges and opportunities presented by these advancements. Join us as we explore the impact of technology on society and the changing landscape of technology law and consider: can we regulate AI? Should we?
John Villasenor is Professor of Engineering, Law, Public Policy, and Management at UCLA, where he co-directs the UCLA Institute for Technology, Law and Policy. He is a leading voice in the discussion surrounding the ethical implications of technology and the importance of thoughtful regulation in the tech industry.
Welcome to Then & Now, a podcast by the UCLA Luskin Center for History and Policy. We study change in order to make change, linking knowledge of the past to the quest for a better future. Every other week we examine the most pressing issues of the day through a historical lens, helping us understand what happened then and what that means for us now.
SPEAKER_02:Welcome to Then & Now. I'm your host, David Myers, and today we're going to talk about technology and the law. Will artificial intelligence take over the world, rendering human beings pawns or victims of technology's unrestrained excesses? How did we get where we are? What constraints can the law provide to the rapid advances of artificial intelligence? And what does the future portend? To help us address some of these questions, we'll be in conversation with John Villasenor, Professor of Engineering, Law, Public Policy, and Management at UCLA, where he co-directs the UCLA Institute for Technology, Law and Policy. He's also a non-resident senior fellow at the Brookings Institution and a member of the Council on Foreign Relations. Professor Villasenor's work addresses the intersection of technology, law, and policy, with a focus on topics including digital communications, artificial intelligence, cybersecurity, and privacy. Welcome to Then & Now, John.
SPEAKER_01:Oh, thank you very much for having me.
SPEAKER_02:So you have an unusual background that brings together a number of different disciplines: engineering, technology, law. Tell us how you got to UCLA. You were an engineer at the Jet Propulsion Laboratory. How did you make your way from there to UCLA, where you're teaching in both the School of Engineering and the Law School?
SPEAKER_01:Well, it's an interesting backstory. Prior to joining UCLA, I was at the NASA Jet Propulsion Laboratory, and then I joined the faculty of the engineering school. That was a long time ago now, back in the early 1990s, and at that point my work was solely in engineering. So I had a fairly traditional early part of my career, going up through the progression of professor ranks and doing pretty traditional engineering research. It wasn't until later that I branched out and started getting involved in these other interdisciplinary areas to the extent that I am now. How did that happen? Why? It's a really interesting question. I should be clear that I'm no less interested in engineering as a pure discipline in and of itself. It's a foundationally important discipline, and I have great respect for it and still have quite a lot of interest in it. At the same time, I became more and more cognizant of the importance of looking at these technology and engineering questions not only in the traditional engineering context, but also in terms of the broader implications: the ramifications, the policy implications, the intersection with legal frameworks. And I saw that there was an opportunity to do that. Not that I'm the only one who has ever looked at the tech-law intersection; of course, thousands of people have done that. But more traditionally, the people who do that in academia come from the law side, in other words, legal scholars who have technology law as an area of specialization. Far fewer people came up through the academic training system in engineering, branched out, and then engaged formally in legal scholarship and the legal academy on those issues. So it seemed to me that was a really good opportunity, and I started emphasizing that aspect of my work as well. I guess it's probably close to 15 years ago now, and it's been a real learning and growing experience for me, as well as, hopefully, an opportunity to provide some contributions more broadly.
SPEAKER_02:Maybe by way of framing our discussion about artificial intelligence, you can share with us: why law? What were the legal issues? Is this a matter of patents? Is it a matter of regulation?
SPEAKER_01:Yeah, it's a great question. I should take a step back and say that I didn't originally get into the tech-law intersection specifically because of artificial intelligence. I was interested in it more broadly: questions of things like cybersecurity, digital privacy, and policy around intellectual property. There's a whole host of areas, some of which have nothing to do with AI, and some of which have come to be very closely related to AI. But when I first started getting involved in this, maybe 15 years ago, AI wasn't nearly as dominant in the public conversation and in the academic conversation as it is today. So my original interest was not specifically drawn to AI, if that makes sense.
SPEAKER_02:Okay, so let's jump into the heart of the matter. I'm sure there are many ways to go about this, but how would you define artificial intelligence?
SPEAKER_01:Rather than having me come up with the 10,000th definition of artificial intelligence, what I thought I might do is read you a one-sentence definition that was codified into federal law. There's something called the National Artificial Intelligence Initiative Act of 2020, and at least for the purposes of that act, they, being Congress, gave what I think is a useful definition. They wrote that the term artificial intelligence means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. I think that's as good a definition as any. I would add that very often when we talk about AI systems, we talk about systems that learn from and adapt to their environments based on the observations, the data, that they're receiving.
SPEAKER_02:It seems to the uninformed observer that there has been a very considerable acceleration in the development of artificial intelligence over the last few years; it seems almost month by month there are significant leaps. But obviously, artificial intelligence wasn't born in the 2020s. How do you narrate the history of artificial intelligence?
SPEAKER_01:It's a fascinating history. Many observers will go back to the pioneering work of Alan Turing in the UK, who famously published a paper, I don't remember the precise year, but it was the mid-20th century, asking the question: can machines think? Today that is perhaps not an earth-shattering question to ask, but it really was prescient 75 or however many years ago. That paper, as much as anything else, you can point to as an early foundational contribution to what became known as artificial intelligence. Then over the subsequent decades, through the end of the 20th century, there was significant and increasing work in artificial intelligence. You're right that it has continued, but it didn't really explode into the public consciousness until a few years ago. There are a couple of specific reasons for that, but I'll also say that things like ChatGPT everybody has heard of, yet even well before that was released, a year and however many months ago, AI was already very much in our world, even if we didn't notice it. For example, two years ago, if you got route finding for rideshare apps or purchase recommendations, you were benefiting from artificial intelligence even if you didn't specifically know that was behind what you were seeing or doing.
SPEAKER_02:I can think back to something that seemed significant in my time, which was when the computer Deep Blue bested the world chess champion Garry Kasparov. That would seem to be a signal moment in this history, was it?
SPEAKER_01:Yeah, and it's a great milestone. But I think it's also important to mention that, as far as I'm aware, the Deep Blue computer did not actually use artificial intelligence. It was programmed, over many, many person-years of effort, to try to capture the collective knowledge of the programmers about what worked and what didn't, and also to use brute force: look a certain number of moves ahead and try to figure out what's going to work and what's not. But it was a signal moment in the power of computing to do things that had up until then been associated primarily with human cognition. Even though it wasn't AI as we think of it today, it was still really important. And if I may, I can contrast that with something that happened 20 years later, in 2017. There was a Google program named AlphaZero which, starting only with the rules of chess and no other information, was able to teach itself to play chess at a world-class level in about four hours. So if you look at that 20-year span: with Deep Blue we have world-leading chess, but only at the cost of many person-years of development; 20 years later we have AI being used so that a computer can teach itself to play chess at a world-class level in four hours. That's an absolutely stunning set of advances over a 20-year timeframe.
SPEAKER_02:And for the layperson, can you explain what that advance actually entailed? It had to do with the speed and capacity of something?
SPEAKER_01:Yes, I think so. People have been working on AI for, like I said, three quarters of a century. But over most of that time period, at least the last 50 or 60 years, we've had an exponential increase in computing capacity. That's both in terms of the speed at which chips can compute things and the amount of storage; the price of storage has declined roughly exponentially over that time. What I think really created a tipping point for AI in the last decade or so, and even more so in the last few years, is that the amount of data, the speed of computers, and the low cost of storage all came together, such that you now have truly extraordinary computing and storage capabilities accessible to the people who are developing these AI systems. That created a really qualitative leap in what these systems can do. People have fun poking at ChatGPT, finding errors, when it creates hallucinations and makes up facts that aren't real and things like that. But if you take a step back, the things it can do are just remarkable, and of course it's still early days yet. So I think, as you just suggested, it's really computing power, plus a lot of people getting very good at creating these algorithms. It's a combination of the tools that were available and the knowledge of the people using those tools, and those things together are what have led us to where we are today.
SPEAKER_02:And is this a reality that Alan Turing and his generation imagined to be possible?
SPEAKER_01:That's a good question. I would say Alan Turing is, in some sense, apart from his generation just because he was so visionary. I think there were very few people at that time who would have even entertained the question of whether machines can think. But the very fact that he could ask the question suggests that he would perhaps be less surprised than others of his time to see what has happened today.
SPEAKER_02:So it's interesting to ask what futurists today say about the potential of tomorrow.
SPEAKER_01:Yeah, that's a good question. I'll give an analogy that I think is useful here. Back in the late 1990s, the internet was first becoming widely accessible. The internet itself had been invented prior to that, but you didn't have browsers on people's desktop computers, at least at large scale, until the mid-to-late 1990s. If you had asked me in 1997 or 1998 what the internet was going to look like in 25 years, I would have accurately been able to tell you, for example, that it was going to make it a lot easier to find information; the cost of finding information was going to plummet and the efficiency was going to soar. But I would have completely missed social media. It would not have occurred to me. I was not able to see in 1997 that the internet, among other things, was going to lead to the creation and rise of social media. I think that's an important history lesson, at least for me, because it illustrates the difficulty of trying to predict the future. So there are some things about AI that we can say with a high degree of confidence, but technologies have a way of developing in ways you didn't necessarily anticipate, and I think it would be naive at best to suggest that we, sitting here in 2024, can say with any real certainty what AI is going to look like in 30, 40, or 50 years. I'll give one more comparison I think is useful. We're in 2024 now; if you go back 50 years to 1974, it was no easier in 1974 to predict the technological landscape we have today than it is today to predict the technological landscape we're going to have in 2074. So if you look at the vast differences over the past 50 years, I think it's reasonable to expect the differences will be similar in scope over the next 50 years. And I'm not going to try to predict exactly what form that will take, because that's hard.
SPEAKER_02:But is it the case that the pace of change is much more rapid by orders of magnitude?
SPEAKER_01:I don't know if I would say orders of magnitude. I think the pace of change has been pretty fast. Certainly the pace of technological change is faster in recent decades than it was in prior centuries. I don't know that it's faster today than it was in the 1990s. If you look at going from 1993 or '94 to 1997 or '98, just a handful of years, web browsers went from essentially nonexistent at a population scale to prevalent, at least in many countries. That's a stunning change. And if you go back not too many years before that, there was a period when almost nobody had mobile phones, and over a pretty short period of time many people, again especially in some countries, were able to get mobile phones. Those are incredibly transformative, profound changes, and they happened a quarter century ago. So I don't think the idea of technologically induced profound change is new. But yes, things are happening quickly now.
SPEAKER_02:Yeah, and I just think back to the first announcements to the general public of the arrival of ChatGPT. It seemed like there was a lot of conversation about the potential of AI, and then the next moment there was this instrument that everybody could have access to, one that was a source of good and not so good, perhaps. But it seemed to come out of nowhere. I'm sure within the community it was highly predictable.
SPEAKER_01:I don't know if I'd say predictable. Certainly large language models weren't a secret, and neither was the fact that people were working on interfaces to them. I think what was a surprise to many people, even in the community, was how good these things had gotten. That was what was surprising, because it wasn't too long ago that chatbots were just vastly less capable.
SPEAKER_02:And what accounts for that?
SPEAKER_01:The models are bigger, there's more computation, and the people behind the best large language models have a lot of expertise. This is what they do, they're focused on it, and they've really produced some amazing technology.
SPEAKER_02:And as a moral being in the world, how do you assess AI when you weigh its benefits against its potentially destructive qualities?
SPEAKER_01:This is not a trendy view to hold in academia, because there's a lot of doomsaying, but I am much more of an optimist than a lot of people. Now, let me be careful: I'm not suggesting that there are never going to be negative uses of AI. Of course there will be. Almost every technology has been exploited by people for negative purposes as well as positive purposes. But I'm not of the view that AI is going to take over the world or make all humans obsolete or control us or anything like that. I think there's incredible potential. Just one area I'll mention is pharmaceuticals, drug development: the potential for AI to discover new drugs that otherwise would have remained undiscovered for years, decades, perhaps forever, is truly extraordinary. There's a long list of applications where the benefits are really amazing. There are opportunities to broaden access to legal services, to improve medical diagnostics. And alongside those, yes, there are people who are going to misuse AI for malicious purposes.
SPEAKER_02:And do you imagine that those efforts will continue? We saw just in the last week reports that some supporters of the Trump campaign used AI to alter images of Nikki Haley and disseminate them very widely. The capacity to do that kind of thing and promote disinformation seems to be almost... So what are the guardrails?
SPEAKER_01:I don't know about and haven't read about that specific thing, so I'm not going to comment on that specific allegation. But I will say that AI makes it possible to alter images, and of course you could alter images before AI, but now you can do it in a much more realistic way. People can also create synthetic images that are increasingly going to be difficult to disentangle from real images. That kind of technology can be used to really problematic ends, but it can also be used to beneficial ends. I'll give an example. There's a South Korean presidential candidate whose own campaign recently used AI because he had apparently been criticized for sometimes appearing too cold and unfriendly, and he used AI to modify his video so he appeared more approachable and friendly. You can have some discussion about whether that's right or wrong, but it's certainly not malicious, it's not evil, and it's not something that we in the United States would try to block somebody from doing if they tried to do it here. You can also imagine AI being used in filmmaking, for example, if somebody used AI to make a movie depicting Abraham Lincoln in a completely realistic form. But yes, there are going to be people who use it to create and propagate disinformation, and that is a problem. Although I will say that disinformation has been a big problem even without AI. You don't need AI to have a disinformation problem; there has been more than plenty of that, unfortunately, on the internet for a long time.
SPEAKER_02:So let's turn to the law, where there seems to be a kind of dissonance or anomaly of sorts, insofar as the very people who are producing technological innovations, that is, big tech, are also the people who are called upon to regulate their industries, the internet, and presumably the future course of AI. Is that an accurate reading?
SPEAKER_01:I'm going to push back a little bit on that, in the sense that there is government regulation, not run by the tech companies, that is relevant to AI. In fact, there's quite a lot of it. It depends on what you mean, and let me explain so I don't get taken out of context. People often suggest that we don't have any rules regarding AI. That's not true. For example, if a bank is using an AI system to make decisions about who to give loans to, and the bank ends up making decisions in a way that disfavors members of protected groups, say on the basis of race or gender or religion, well, that's already unlawful. Under the Fair Housing Act, you can't discriminate in home loans, and that prohibition on discrimination isn't any weaker just because a bank happens to be using AI. So there's a whole set of protections we already have that will apply to AI if AI is used in a way that contravenes the behavior those laws are intended to block. Now, does that mean there are no new ways in which AI could be used that are harmful and that fall through the cracks, outside the scope of the regulations that are already there? That may be the case. And if we identify things like that, that is something we should look at carefully, in a balanced way, to figure out how to address it.
SPEAKER_02:So do you think there is sufficient government regulation at present of the tech industry in general and AI in particular? Are we in a good place?
SPEAKER_01:I get hesitant when people just say "regulate." I think regulation has its role, and there may well be a need for additional regulation, but it has to be done thoughtfully. Let me give an example I think is helpful to cite. Back in 1986, Congress was concerned about digital privacy and enacted a law called the Stored Communications Act, which gave substantially more protection to emails stored for less than six months than to those stored for more than six months. Basically, law enforcement needed a warrant to get emails stored less than six months and did not need a warrant for emails stored longer than six months. And by the way, that law is still on the books. The logic back then was that nobody stores emails for more than six months, because you'd run out of disk space. Fast forward nearly 40 years, and that same law is on the books, which now means the vast majority of our emails are subject to less protection because of this purportedly privacy-enhancing law. My point in bringing up that analogy is that it's a case where Congress said, hey, let's get ahead of this, let's enact some regulation about these new electronic things like email, and it ended up being counterproductive, in some sense an anti-privacy law, at least through the lens of today's technology. Maybe it was okay the day it was enacted. So I do get nervous about calls whose logic is simply "we need more regulation, let's regulate." That, to me, sounds like a formula for unintended consequences. Contrast that with saying: here's a problem that is attributable to AI; let's look at our existing legal frameworks, and if we find that none of them can actually address this problem, then it makes sense to talk about potential regulatory approaches or other solutions as well. Having that on the table seems reasonable.
SPEAKER_02:And how would you grade the big tech companies in their own efforts to produce guardrails against excesses and misuse?
SPEAKER_01:I guess I don't have enough knowledge about all the internal steps that they've taken. Tech companies are big targets, and certainly they have not behaved perfectly. But there's also a lot of political mileage that members of both parties can get by bashing big tech; the tall tree catches the wind. So again, I'm not suggesting that tech companies have behaved perfectly, and no doubt they have not, but I also think that many of these companies do have people who are thinking carefully about the ethical implications of the technologies they're building.
SPEAKER_02:And just thinking back to your response to an earlier question, you were identifying yourself as, I guess, if not a doomer, then maybe a boomer, as I understand the world is divided. Why should we be sanguine that a group of bad people won't use AI to build a devastating nuclear bomb, to take the more sweeping assertion?
SPEAKER_01:There are always going to be people who use the latest technologies in ways that are problematic. Again, history provides an illustrative example, in this case a grim, terrible one. In World War II, Nazi Germany's goal was to put a radio, an old-fashioned AM radio, in as many German homes as possible, thereby allowing the government, including Hitler, to speak directly into people's homes. And that effort to distribute radios succeeded; they got radios into a lot of homes. That's a terrifying use of what was then an emerging technology. But the problem was not radio, the problem was the Nazis. In the decades since, radio has led to innumerable good uses, and we would never suggest that there's something inherently evil about radio. The problem is that, like any technology, there are people who will misuse it. I think the same is going to be true for AI. It's a big world, there are billions of people in it, and some subset of them are going to try to use AI for really problematic purposes. We should be on guard about that and we should address it, but that doesn't mean you throw the baby out with the bathwater. It's not the fault of the technology; it's the people who are employing it in these ways. And just as a global ban on radio in the late 1930s would have made no sense, so too would a global ban on AI make no sense.
SPEAKER_02:And what prevents you from lapsing into the ultimate doomsday scenario of AI taking over the world, the human race?
SPEAKER_01:There are any number of reasons why; it depends on what you mean by AI taking over the world. But I just don't see any AI system, or collection of AI systems, being so untethered and decoupled from human oversight and control that they could literally take over the world. Could an AI system do damage? Sure. But as both longer-term history and recent history make clear, the greatest source of damage to people is, frankly, other people.
SPEAKER_02:Not AI developing decidedly and wholly negative human characteristics like resentment, jealousy, indignation, an impulse to violence?
SPEAKER_01:You get into a philosophical question of whether AI has feelings and all those kinds of things. I'm not quite ready to concede that at least today's AI systems have something we could properly analogize to feelings in the human sense. At the end of the day, they're just machines. But even if we got to that point, I would assume that there are going to be guardrails. And again, they're not going to be perfect. Will AI in some cases be used for really problematic ends over the next decade? Of course. But that's also true of the internet. There's a ton of financial fraud on the internet, all sorts of bad stuff that happens there, and we do our best to combat it. We do our best to hold the people who commit those acts accountable, but we don't shut down the internet. Most of us understand that the benefits of having an internet vastly exceed the downsides, even though the downsides are very significant. I don't want to downplay that: if you're a victim of financial fraud, somebody steals your credit card and they're thousands of miles away, that's still a big problem, and there's far worse that happens on the internet. But again, it's the people using it, not the medium, that's at fault.
SPEAKER_02:So it sounds like you're not losing sleep at night over fears of what AI might do.
SPEAKER_01:I don't want to suggest that I'm just whistling past all of this. I have what I would like to believe, and I respect that others may have a different view. I like to believe that, like so many technologies, like the internet, like radio, like mobile phones, on balance AI is going to bring us far more benefits than harms. Does that mean we close our eyes and ignore the potential for harms? Absolutely not. We need to be vigilant. We need to think about frameworks for minimizing the probability that those harms are perpetrated, or, if they are, for identifying them and stopping them quickly. But you suggested before that there are the doomers and the boomers, and I don't have a binary view. I would say that I'm largely optimistic, but also wary and aware of the negative uses. I don't think that's a reason to turn our eyes away from the really amazing opportunities that we have.
SPEAKER_02:Okay, on that note of cautious optimism, I'd like to thank you, John Villasenor, for taking time out of your schedule.
SPEAKER_01:Well, thank you. I appreciate it.
SPEAKER_00:Thanks for listening to Then & Now, a podcast by the UCLA Luskin Center for History and Policy.
SPEAKER_00:You can learn more about our work or share your thoughts with us at our website, luskincenter.history.ucla.edu. Our show is produced by David Myers and Roselyn Campbell, with original music by Daniel Raijman. Special thanks to the UCLA History Department for its support, and thanks to you for listening.