The Penta Podcast Channel

Mapping AI's impact with POLITICO's Steven Overly

Penta

On this episode of What's At Stake, Penta Partner Andrea Christianson is joined by Penta Senior Director Scott Haber-Katris and Steven Overly, host of POLITICO Tech. The trio discussed AI's large-scale impact, including on the workforce, education, politics, and the law. Based on years of experience in the tech industry and politics, the group offered their predictions for how Congress will (and will not) regulate the emerging technology, and how industry is preparing to respond to these guardrails.

Speaker 1:

Welcome to another episode of What's At Stake, a Penta podcast. I'm your host, Andrea Christianson, a partner at Penta and co-lead of our AI task force. Joining me on the podcast today is my colleague, Scott Haber-Katris, a senior director at Penta and one of our resident tech experts, and Steven Overly, the host of POLITICO Tech, a daily podcast that explores the way technology is disrupting politics and policy. Steven has over a decade of experience covering business, tech, and politics. Prior to joining POLITICO as a tech reporter in 2017, he spent seven years at the Washington Post covering business and tech. Thank you both for joining us today.

Speaker 2:

Thank you for having me. Happy to be here.

Speaker 1:

Scott, you've also been in DC for a long time. You've been working in tech for a long time, including at the Internet Association, where you worked on Section 230 and net neutrality. What's your perspective on the latest developments and the likelihood of meaningful lawmaking on AI?

Speaker 2:

Yeah, so I would say that I'm not particularly hopeful that anything happens in the near future. I don't know that that would be surprising to anyone that's been following tech issues. Broadly, I'd say legislating is just very hard as a general rule. I remember one of the things I worked on at Internet Association was the Music Modernization Act. It was a thing that everybody in the music industry agreed on. The recording industry, all of the streaming companies, et cetera, wanted to modernize a specific piece of how licensing for online streaming worked and how everybody got paid for it. The law hadn't been updated since the early 1900s, when player pianos were the main way that everybody listened to music. It literally took a year and a ton of disagreement, and it nearly didn't happen. And that was something where everybody in the industry agreed. It was not this giant sweeping policy issue like AI that touches a whole bunch of sectors of our life. It was literally just, we want to change some payments and how this all works. So from my perspective, legislating is very, very hard. I think, to Steven's point, they're doing a lot of the things right.

Speaker 2:

I do think that Schumer is holding these hearings. You have engagement from industry, you have engagement from third parties, you have members of Congress interested in and trying to learn as quickly as they can. But all of those are necessary ingredients, not necessarily a recipe for actually getting anything done. So I'm waiting to see exactly where we coalesce on all of this stuff. There are a lot of issues where all of those boxes are checked, where everybody agrees, industry is involved, members are getting themselves educated, and we still don't see legislation pass. You could look at privacy as an example of that in the tech space. You could also look at competition as an example of that in the tech space. You had a four-year-long investigation in the House Judiciary Committee looking at competition, and still no legislation was passed, even though there was sort of bipartisan agreement on a bunch of things, and we could talk more about that. But yeah, I'd say that's where I am on this: there's a chance, but any legislating is hard, and so we'll have to see how this all shakes out.

Speaker 1:

Yeah, it'll be interesting. I mean, it certainly seems remarkably bipartisan right now, and there's definitely industry engagement, but once you get into the details, that's where the devil is. So, Steven, we're going to take a little bit of a step back. You've covered technology and you've covered trade across your career, so I'd be interested in how your perspective on technology has changed over the years and how it's impacting society. And then also, how do trade and technology work together? Are they coming closer? Are they dissimilar? What's kind of your broader view on that?

Speaker 3:

Yeah, I started covering tech in about 2010, and a lot of the coverage in those days was what I call gee-whiz tech coverage, right? Social media was really still taking off. The iPhone had come out a few years before, but smartphones and mobile were really still transforming industries and how we go about our daily lives, and so much of the coverage was, look at this fun new app, here's a profile of an interesting startup.

Speaker 3:

There was not as much of a critical eye about technology and its effects on society.

Speaker 3:

Then, around, I would really say, 2016 and the presidential election, you saw the pendulum make a very hard swing toward heavy tech criticism: a lot of criticism and scrutiny around misinformation, around online speech moderation, and then, following that, a lot of criticism surrounding market dominance and the size of big tech companies.

Speaker 3:

I do think now we're starting to see the pendulum move toward the middle, where there's a recognition among regulators, among lawmakers, and some in the press at least, that tech is both good and bad. It is both useful and harmful, and it really comes down to how it's applied and how it's regulated. I do think, especially with this new conversation around AI, you're really starting to see it framed in those terms. In terms of trade, I'll just say I thought I was leaving the tech beat when I started covering trade, and I realized very quickly that tech will follow you everywhere. So much of our foreign business relationships and our foreign national security relationships are dominated by tech nowadays, whether that's AI, whether that is microchips, whether that is privacy. There's really no global conversation taking place right now where tech is not some sort of facet at play.

Speaker 1:

Yeah, I think that your description of the tension between the good and the bad around tech more broadly is really relevant to AI, so I want to circle back there quickly, because there have been two groups, well, three groups really: it's going to be utopia, everyone's going to die, and it's probably going to be somewhere in the middle. There's tension between good and bad. So I want to talk a little bit about how we think AI is going to impact white collar workers. Steven, what's your view on how policymakers and businesses should be thinking about it at this early stage we're in?

Speaker 3:

Absolutely. One of the most fascinating dimensions of AI to me is its impact on the labor force. We've seen, over recent decades, technology advancement and automation really decimate, or at the very least disrupt, a lot of industries involving a lot of manual labor, manufacturing work, et cetera. Now, with artificial intelligence, some of those same dynamics are coming for white collar workers, people like myself working in media or you all working in communications, but that includes business executives. I just read an article about artificial intelligence being put up against a bunch of MBAs to come up with new, creative business strategies, and AI completely trounced the MBAs in that competition. We've seen applications in the medical field, for instance, where AI is not necessarily replacing the doctor, but certainly replacing some of the functions of a doctor. So that will have far-reaching consequences for all of those industries and more.

Speaker 3:

That will, I think, probably include layoffs and downsizing. That will also include changes to how we do our jobs. The exact calibration of that, I don't think, is figured out yet. Are we going to see massive layoffs, or are we going to see just changes in how we work? To some degree, I think that will come down to two key factors. One is the development of AI, how sophisticated it is and how quickly it is deployed into these different industries. And then the other side of that, as always, is how quickly AI is adopted, how fast companies are to bring this technology into their operations and use it to replace or supplement their workforces. And it is just too early to tell. You've started to see some companies kind of stake different claims or pursue each of those tracks, but at the end of the day, we are going to go through a period, and we're already starting to go through a period, where companies will have to figure out how much AI they're going to use and in what way they're going to use it.

Speaker 1:

Yeah, and one thing that Senator Schumer has really underscored as part of his listening sessions and forums and things like that is safety, right, innovation and safety. But do you think we're at the point where, as all of these tech executives gather today, they should be talking about trade adjustment assistance or something like a UBI down the line, or is it too soon to be talking about those kinds of policies?

Speaker 3:

What's funny is I've asked this question a lot around Washington, because I'm very personally curious about it, and really that conversation does not seem to be happening. I don't know, though, that that's a sign that it's too early to be having that conversation, because there are other countries that are doing it, and there are some political figures raising those questions. A prime example: I just had a conversation recently with Andrew Yang, who was talking about UBI and the transformation of technology back during his presidential race, and he's still talking about it today, even though other politicians have not fully latched onto it. And so, all that to say, I don't think it's too early, but that conversation is not happening, at least not yet. And so the challenge will become how quickly that kind of program can be put in place if and when it's needed. And a lot of people I talk to, tech experts, civil society experts, you name it, say that it probably will be needed, and probably sooner than many of us realize.

Speaker 1:

Thanks. And thinking about things that policymakers or businesses aren't necessarily thinking about, Scott, I wanted to get your perspective on what the dark horse AI issues are that are sort of flying under the radar.

Speaker 2:

Sure. So I have two answers for you. I'd say the first is education, and I think that's both in terms of how people are going to learn, but then also how we're going to have to fundamentally rethink a lot of how education currently happens. We're starting to see those conversations, but candidly, I think that, because back to school is happening now, it's about to become a much bigger part of the conversation, both at the university level but also in the K through 12 zone.

Speaker 2:

So on the first piece, because AI can generate relatively good prose and help with the problem solving that otherwise was assigned in homework or things that people could do on their own, I think that, one, people may be able to more quickly grasp concepts they might not understand or more quickly learn things, but also they might just not have to, because ChatGPT or Bard or what have you can do the work for them. And I don't know that schools, as far as I've seen, have really fully grappled with the implications of that for our education system broadly. I know that's a little bit of a glib statement, but I do think there are going to be some fundamental shifts in how teachers are going to have to approach assigning work, giving tests, and assessing people's skills, because, fundamentally, a lot of the ways that they used to do that can be done at a relatively high level by the current LLMs that are out there. And, candidly, there are better ones coming soon, and they're likely to arrive faster than people think. On the flip side, in terms of what that means for policy or schools, et cetera, I do think that there are some real opportunities to improve how we teach.

Speaker 2:

I remember listening to, I want to say, the Hard Fork podcast, where they interviewed a teacher, and the teacher said that this actually makes my job as an educator much easier. If I'm trying to grade 30 third-grade-level essays for grammar and spelling and provide feedback, I can have generative AI do that for me, and that frees up my time to actually work with students on creative ideas, or work with students on the structure of their argument, or things that the models aren't quite able to do yet. So it actually can unlock a lot of time for teachers or educators at large. And so I think there are going to be some really big opportunities to improve education and improve outcomes for folks, but also there's going to have to be a little bit of a fundamental shift in how we think about and approach some of this, because the ways that we've measured aptitude, the ways that we've measured how well people are doing in class or how they're learning, just simply aren't going to work when there's a free way to have an LLM do that work for you.

Speaker 1:

Very, very true. Thanks, Scott. And speaking of disruption, as we talked about a little bit, there's going to be a lot of disruption for a lot of industries. But, Steven, you've been in journalism your whole career. What's your view on how AI and other tech is going to impact journalism in the future?

Speaker 3:

Well, I hope I'm not about to be replaced by a robot. I guess I should start there. I don't know how good of a podcast host AI is yet, but hopefully that technology doesn't develop very quickly. No, I mean, AI is something that, in journalism, we're already experimenting with to various degrees. In terms of our journalism, that's everything from generating headline suggestions to writing entire news stories; you see news organizations experimenting with that. There are also news organizations experimenting on the back end, because these are also businesses, so using AI internally for various functions to make their organizations more efficient.

Speaker 3:

At the end of the day, I guess I'm still a believer in the importance of a human touch in journalism. I just think of some of the key facets of my job. I spend so much of my day talking to human sources, chasing lawmakers on the Hill, talking to regulators and officials in various federal agencies, some on the record for the podcast, some off the record just to get information. I don't know that AI will ever be able to do that. Or, I struggle... I should never say that AI can never do anything, because I guarantee you I'll be proven wrong.

Speaker 3:

But I struggle to see how AI can replicate that human-to-human interaction, that very nuanced, very dogged, frankly, pursuit of facts that is a hallmark of good journalism. But I can easily see AI replacing some other functions in journalism, writing a story off of a hearing on Capitol Hill, for instance. AI may not be able to chase the lawmaker after the hearing and ask them follow-up questions, but I bet it could generate the start of a news story on the hearing itself that maybe is then edited by a human being. And so, like all industries, journalism is going to be grappling with this, and is already starting to grapple with this, and I don't quite know where that will lead yet. But I will say I've had several guests on the podcast sort of ask me what my next career plan is, because who knows how long this one will be around. So I guess maybe I should start working on that.

Speaker 1:

Well, that is actually a good segue. I've got two final, more rapid-fire questions, and the first one is for both of you. We talked a little bit about the spectrum of utopia to dystopia, and I'm curious: where do you both land on that as it relates to AI?

Speaker 2:

Scott, do you want to go first? Sure, I'm happy to go first. There are two ways that I think about this. Sorry, this isn't a rapid-fire answer, but there are two ways that I think about this. So one, Ezra Klein published that really big piece in the Times, I want to say, sometime earlier this year, and the thing that's been stuck in my head is that the lead of the piece talks about how the human brain is really good at predicting patterns. If you wake up and you walk outside, you kind of know the sidewalk's going to be there, because that's the way it works, and our brain is sort of tailored to accept that the way things were is the way that things are going to be. And his argument was basically that AI is so fundamentally different that the human brain is going to be very bad at anticipating how and where it's going to change our society and what's going to happen, because it's just such a paradigmatic shift from the last several millennia of how humans have lived. And I think that's a good point, well taken from my perspective.

Speaker 2:

But on the flip side, I think we're already starting to see some of the limitations, at least at present, of how much AI is going to be able to change or shift what we're doing. I completely agree with everything Steven said about the fundamental shifts, especially in services industries, whether it's journalism, professional services, et cetera, and how that work is done. But, on the flip side, we're already seeing that it's really hard to build new generative AI models, because there's so much AI-generated text on the internet that it's hard to get a sufficient amount of information to build a stronger model, or build a model that's actually going to be better or include relevant new information, as opposed to the models that were drawing on data from pre-2021, when we didn't see a ton of AI-generated text. And so, and we obviously have the hallucination problem too, which we don't need to talk about too much, we're already starting to see exactly where and how AI can help, and also some of the limitations.

Speaker 2:

So I don't think that we're going to end up in some deeply dystopic society where there are, like, 10 people who have jobs and everybody else has been replaced by machines. But on the flip side, I do think there are going to be some big shifts. We've seen some big shifts before, as Steven discussed, both in terms of trade and also in terms of automation, and those were changes in society that lawmakers and legislators, and we as a whole nation, should be thinking about how to address. But, at least in the near future, I don't necessarily see this as some sort of massive, massive societal upheaval event that's going to happen.

Speaker 1:

All right, Steven. What do you think?

Speaker 3:

Yeah, I've interviewed both the doomsdayers and the eternal optimists, and there is actually a commonality that I see, and it might be a sliver, right, but picture a Venn diagram with dystopians on one side and utopians on the other. Usually the dystopians say, this is what will happen if we don't manage AI properly. The optimists, or the utopians, will say, this is what can happen, but we do need to manage some of the risks of AI. That caveat on each side is that sliver in the middle of the Venn diagram where both groups live. Frankly, on the podcast, that slight overlap is what I'm most interested in, because we are in a critical moment with AI where, if you regulate it and put the right guardrails on it, we could see a lot of upside and limited downside, as we talked about earlier.

Speaker 3:

If past is precedent, the track record on regulating technology, certainly out of Washington, is very poor. I do think it is an open question whether any proper measures are put in place to guarantee things like safety and privacy and job security, et cetera. If that doesn't happen, then, I don't know, I have to think that the dystopians make a good point. I'm also bad at the rapid fire, so I apologize, but the one point I'm so fascinated by, that I hear over and over again, is concern from technologists, from people who develop AI, that there will come a point where AI starts to develop itself and evolves beyond human understanding of it. Maybe that's farther away than some people predict, but I find that concept to be troubling, because this notion that if AI gets out of hand you can just unplug it, I think that's a pretty basic understanding of what is a very sophisticated technology.

Speaker 1:

You are also segueing me into my final question. You must be a journalist or something to anticipate so well. But the final question is: if America had to agree on one thing as it relates to AI regulation, what would it be? Steven, I'm going to ask you first.

Speaker 3:

I do have an answer for this. I will distinguish America from Congress, because on a lot of issues, including this one, I don't know that Congress will actually be reflective of America, and a lot of polling suggests most Americans do want AI regulation. I think one of the regulations many Americans, if not most Americans, would agree on is disclosure when AI is being used, especially generative AI. People want to know that the video or images or audio that they're engaging with online is real. If it's not real, they want to know that upfront. If they don't know that, there are many implications for everything from our entertainment to our politics. That, to me, is an area of regulation, simply requiring that the use of AI be disclosed, that I think a lot of people would be on board with.

Speaker 1:

All right, Scott, what do you think?

Speaker 2:

I mean, I think Steven's answer is going to be smarter than mine and I'm just going to endorse everything he just said.

Speaker 2:

But one thing that I have been thinking a lot about is the open source nature of some of these models. We're currently at a point where an LLM or any other model can mostly muddle through, but it's not perfect, et cetera, et cetera. At the point where there's no regulation or no oversight, and anyone can download a piece of technology that could, say, perfectly impersonate somebody else, or generate an enormous amount of text on a certain topic, or that sort of thing, I think that might be the place where society goes off the rails and we end up in this dystopic world, et cetera; it becomes a much more serious issue. And so, to me, finding some way to either license or control the spread and use of very, very powerful LLMs that could impersonate humans and that sort of thing is one of the bigger pieces that I think is going to be really important to address, along with all of the other things we've discussed today.

Speaker 1:

Yeah, so obviously we're just at the tip of the iceberg over here, but, Steven and Scott, thank you so much for joining the show today. To our listeners, remember to like and subscribe wherever you listen to your podcasts, and follow us on Twitter at Penta Group. And be sure to tune in to POLITICO Tech, with Steven Overly reporting daily on the intersection between tech and politics. I'm your host, Andrea Christianson, and, as always, thanks for listening to What's At Stake.