Bartholomewtown
Journalist Bill Bartholomew brings Rhode Islanders closer to their world through analysis, interviews and reporting.
Live from newportFILM: A Panel Discussion on AI and Its Impact on Rhode Island
Following a newportFILM-presented screening of The AI Doc: Or How I Became an Apocaloptimist at The Jane Pickens Theater, Bill Bartholomew moderates an expert panel on AI's growth and impact on Rhode Island.
David Altounian (Associate Professor, Business & Economics, Salve Regina University),
Timothy H. Henry (Chair, Computer Science and Information Systems, Institute for Cybersecurity and Emerging Technologies, Rhode Island College),
Michael Littman (Associate Provost for Artificial Intelligence, University Professor of Computer Science, Brown University)
Briana Vecchione (Technical Researcher with the AI on the Ground team at Data & Society Research Institute)
presented by newportFILM
Last week I got to experience a good chunk of our community coming together and asking questions about AI. This was a newportFILM-organized event: a screening of the documentary The AI Doc: Or How I Became an Apocaloptimist, followed by a panel discussion that I moderated, which you'll hear on Bartholomewtown in just a matter of moments. What was so powerful about this event? One, the film, as always seems to be the case with newportFILM, was curated perfectly for the moment. You could feel the room, you could feel the energy inside the Jane Pickens Theater: a curious, informed audience, but also a situation where there's just such a knowledge gap on AI for most people, which is totally understandable, because I think there's a huge knowledge gap even for people who are working on AI deeply every single day. And the fundamental question of this documentary, which I recommend you watch if you're curious, was: is AI going to destroy the world, or is AI going to make the world better than ever? To break this down on a hyper-local level, and obviously on a macro level too, newportFILM organized a tremendous panel that included David Altounian, an associate professor of business and economics at Salve Regina University; Timothy Henry, the chair of computer science and information systems at the Institute for Cybersecurity and Emerging Technologies at Rhode Island College; Michael Littman, Associate Provost for Artificial Intelligence and a University Professor of Computer Science at Brown University; and Briana Vecchione, a technical researcher with the AI on the Ground team at the Data & Society Research Institute.
SPEAKER_10: Maybe we should start by hyper-localizing the conversation, and we'll go across the panel if we could. What's one thing that our community here, Aquidneck Island, Rhode Island writ large, needs to know specifically about how AI is impacting their life today?
SPEAKER_05: Well, without getting too nuanced about it, I think it really depends on who you are. It depends on what kind of AI you're interfacing with at work. Are you in a job where you're seeing AI procurement? Are you seeing it affect your rate of employment? Are you applying for jobs where there's AI behind the scenes? Are you on social media, where you see AI every day? These things are global, but they'll affect different populations in different ways, for sure. One of the things I did appreciate about the end of the movie was the emphasis on collective action and the importance of community. And I'm really glad that we're having this conversation here today in a community-oriented way, because we can really talk about things on the ground that are specific to all your lives, and I think we're all really interested and excited to hear your questions. But being active in the community and really asking, what do you want your future to be in relation to AI? That's something that's really key and central to the conversation tonight.
SPEAKER_02: I think so. One of the things that I've been noticing, and it seems to cut across a lot of different tasks that people are engaging these generative AI systems to do, is that the systems are good, but they're not perfect. And you as a human being understanding what it is you're trying to do, and basically doing oversight on these systems, is actually really critical. This seems to hold across using these systems to write computer programs, to write documents, to create illustrations. When we talk to the experts who actually do that for a living, what they find is that these tools can help them, but they don't replace them. They're not actually good enough at these tasks to just do the work for them. So there are these opportunities to work together, but it involves a lot of learning. There's a lot for us to do if we want to harness the power of these tools: roll up our sleeves and learn.
SPEAKER_06: And I think learning about them is really the key. The tools that you use impact where you work, so find out as much as you can about how they work in your occupation, in your life on a daily basis, because the more you understand about the tool, and about AI in general, the better you can advocate for what you feel needs to happen with it.
SPEAKER_08: Yeah, and I would say AI is impacting our lives here every day. And I think saying we need to slow down on AI, or we need to stop AI, is unrealistic. It's the use cases we need to pay attention to. It would be like saying we should stop medicine because bad things happen. There are certain things in medicine or research we don't allow, globally and locally, because they create harm. In AI, there are use cases you saw here that are going to create harm. So when we talk about collective action, I think the collective action needs to be around the specific AI use cases that we're worried about. For example, using AI to filter resumes. The transparency idea out there was great. We went through this when we did offshoring of call centers, and it was very unpopular after a while, even though a lot of jobs got moved, until people said, no, I don't want to talk to an offshore person. If you knew that the AI you were talking to was AI, and it was transparent, people would hang up on it and they would go back to using people. So the one thing about this that I loved was that ignorance is our biggest threat, not the AI. It's ignorance about the use cases and what it does. That's where we have the opportunity. And this was, I thought, a great film for highlighting both sides.
SPEAKER_10: No doubt community orientation is a backstop and a generator of AI technology at the same time, right? It serves as a regulator, and it can advance the cause as well. But right now there's a tension point playing out nationally and globally, and we're seeing it here in our state, on the regulatory side. Our Senate here in Rhode Island has tabled a discussion on regulation around AI data centers, for example, and we've introduced a statewide AI master plan as well. Where should AI be regulated, in realistic terms? The idea of an international cooperative agreement seems as far-fetched as anything. So, in realistic terms, is this something that state and municipal leaders should be thinking about? We can go across the panel, or anyone can jump in.
SPEAKER_08: Oh, no problem, sorry. I think one of the most important pieces up there, and we had this conversation beforehand, is the geopolitical issue and incentives, and we need to understand the incentives domestically. States have different approaches and incentives than the federal government sometimes, depending on the administration. But all we have to do is look at social media as a paradigm for this: look at how China, Russia, the EU, and the US deal with social media regulation, and you can see we have a really big problem in the US. States are trying to chart their own regulatory paths, while the federal government seems to knock down any approach to regulating. The EU has very strict rules on privacy, Russia uses social media to control the population, and China talks about using social media monitoring to maintain what they call harmony. So all regulatory goals are different. I do think there needs to be some global environment. We have that for world health. We have it for nuclear arms. Those kinds of things are going to be required. But in the US, we're gonna have to see. We haven't been able to do it on social media; I don't know how we're gonna do it well here.
SPEAKER_06: I agree that it would be great to have something at the global scale. My fear is the global AI arms race is already taking off. With the war in Ukraine, the war in Gaza, and now the conflict with Iran, autonomous devices are being used very frequently, and they're advancing. It's gonna be very difficult to reach a global agreement right now, though I do believe that's what needs to happen. It's going to take some major event to get people to pay attention.
SPEAKER_10: We see things like surveillance technologies, for example Palantir and others. Right now, there's a program that was just announced in Warwick, Rhode Island, where the police department is going to be using drones for what they claim are gonna be emergency responses, and so on and so forth. We think of this on the battlefield in Ukraine, yet over a Best Buy in Warwick right now, similar technology, without the killbot component, is there. Is the cat out of the bag? Are we living in this Orwell-predicted world right now?
SPEAKER_05: I don't think it's too late. If we hold governments accountable for the decisions that they make, and I think this is why accountability is such a huge and important part of this conversation. Because for a lot of these big companies, maybe they have the assumption that regulation and accountability are gonna slow down progress. Maybe this is a hot take, but I actually think that safety testing and third-party auditing, not self-referential evaluation from within these companies, can actually strengthen our technology.
SPEAKER_02: That line in the movie got applause from the audience.
SPEAKER_05: It wasn't just that line, but yes, and that's coming from someone who's published on it. No, I actually think it can present as a weakness in terms of our national security to not have these kinds of really valuable safety evaluations. So yeah, that's just one way of understanding it.
SPEAKER_06: And I think that's one place we can do it locally: we can establish what we want to vet, what we want to accept, in each of these different areas.
SPEAKER_10: One more before we go to Q&A, if that's okay. Just a quick, almost social-media-style answer here. Within the lifespan of a person born today, someone with a normal human lifespan, are you a pessimist who thinks the world will devolve into near or actual apocalypse, or an optimist who thinks most people will have found a pathway forward, where menial tasks are automated, the quality of life is raised, et cetera? Which side is it?
SPEAKER_02: So one thing I want to point out: I came in thinking I was gonna hate the movie, but I really liked it. I thought it was extremely well done; the through line was just about perfect. But that said, I feel like every single person that got interviewed was an extremist. And there's so much space between those poles. There was even the person who was like, oh yeah, I think both poles are wrong, they're just not extreme enough. I'm like, oh my god, even the person in the middle is too extreme. So I think there are so many more people who are involved in this, who are studying it and trying to help the world navigate it, who see much more nuance between these two poles of apocalypse and optimism, right? I just wanted to point that out, because there were certainly moments in the film where we were all reacting to the thing that was said, but it was like, yeah, but that's crazy, right? That's not really what I think most people think is actually gonna happen. So, that said, in terms of us surviving the next human lifespan, I'm gonna call myself an optimist. I think we can do it. We've done it before, we'll do it again.
SPEAKER_10: Optimist, pessimist, or, we'll qualify a new category, moderate.
SPEAKER_05: Yeah, you're not gonna like my answer, Bill. I'm not a doomer, right? I definitely think there are really valuable, discrete uses of AI that have been inarguably valuable for the future of science, and there have been some great outcomes. But I'm not an optimist, because, well, you saw the movie. So yeah, it comes with a very huge qualifier. It really just depends on how we want this to proceed. So I'm gonna take another road. Sorry.
SPEAKER_08: So let me jump in, because I have two points on this. First of all, when ChatGPT came out about three years ago, AGI was something people said would take 10 or 20 years. I don't know if you saw it yesterday or this morning, but Jensen Huang said we've basically hit AGI now. He made that claim today, which, if you saw it, would be a little bit spooky. That's the first point. The second point: the film talked all about the engines. What it was really missing, and only touched on, was the robots. The thing that concerns me is the convergence of AI with physical systems. You got a little bit of it: the film talked about the brain, about robots and autonomous vehicles. Just like we talked about use cases, I'm really concerned about the connecting of these devices, the convergence of AI with something else. So I would say I am absolutely an apocaloptimist, whatever that word was. I'm very excited about the promise, and I'm very concerned about its misuse.
SPEAKER_06: Yeah, I think for me, I'm an optimist about the actual technology and the tools. My concern is more the people that use it. And as I thought through that on a global scale, I think it's in the global interest of each of the organized countries to keep limits on it, because they're not gonna want it to get out of control, because they'll lose control. Part of the reason they're using it is because they want control. So I'm not too worried about that. It's more the way people locally are gonna use it. But I do feel that we'll figure that one out, because it's more about us, and we'll be able to work through that. So I do tend to be an optimist.
SPEAKER_08: I would say, as they talked about here, unfortunately, our human nature is that we have to wait for something really bad to happen to crack down. Even with nuclear, it wasn't until after Hiroshima and Nagasaki that the world realized they had to get together. And I do have a little bit of concern that something bad is gonna have to happen for everybody to get together. But the sooner they get together, the more optimistic I am about our future.
SPEAKER_10: Let's go to the audience for Q&A.
unknown: All right, and we'll crack it.
SPEAKER_07: Hi. Hey, Gary. So, from my perspective: when I was in graduate school, I did research on neural networks, which is kind of related to AI. And in my past life, and I say past life because I retired last year, I managed a technology collaboration at a DOD laboratory that developed autonomous systems and weapons, and of course AI is very much involved in that. I guess I would say that I'm an apocaloptimist, if I can say that word. But my biggest concern, and someone mentioned that ignorance is the biggest problem, is that right now it's in the interest of a small group of people to control this technology. The technology, like any other technology, is a tool. It's not inherently good or bad. Like every other technology that humankind has developed, it can be used for good or it can be used for evil. But the problem is that it's in the interest of certain people to control it, to use it to manipulate people, and ultimately the goal is always to concentrate wealth and power in the hands of a few people. What I'm really concerned about is what's going on in our country, where over a little more than a year we've seen a big movement from democracy toward an authoritarian society: manipulating and controlling the press, taking away freedoms, and so forth. I'm not sure how we are going to get through this. I don't think we can rely on our government to do anything. For example, a company that was featured prominently in this documentary, Anthropic, has government contracts, and they stipulated that their technology could not be used for surveillance of the American people and could not be implemented in fully autonomous weapons.
And so the response of the Trump administration was to say Anthropic is a threat to the supply chain and a threat to national security. I think ultimately it's gonna take some kind of bottom-up movement to overcome this. But as I say these thoughts out loud, I'm becoming less and less of the optimist half and more of the apocalyptic half. So I don't know. Maybe I'm looking for one of you to assuage me, or to convince us that there is hope.
SPEAKER_08: Can I just say something real quick? Because I think this last piece is really important. I teach intro to AI for business, I have students here, and I teach a PhD class on AI policy perspectives. I sat here at the beginning of the movie thinking, oh my god, what am I doing? And then by the end of the movie, I said, the people here who are working on this, who are teaching, are doing a service. Because the same people that want control are the ones that want to keep us ignorant. So the more you're educating people, the more you're gonna be able to have the collective action. So I actually feel a little bit better now. But I think your point is right. We've got to make sure we educate people and give them agency.
SPEAKER_10: Next question.
SPEAKER_00: So this is less a question. I think the debate over whether the horse has left the stable is settled, right? I think the concern is more about what we're doing for our young people. Our students, K through 12, are entering a world in which AI is ubiquitous. The future is now. So my wondering for the panel is: what does this mean for education? What does this mean for K-12 learning? What does this mean for students pursuing their purpose and passion in a world where what we thought education was is no longer what education is? I'd love a conversation about what this means for schools, K-12, right now.
SPEAKER_02: Yeah, so I can't speak directly to K-12; we're at, what is it, grades 13 to 17. At the college level, this has been extremely disruptive. One of the main ways that we evaluate students is we give them questions, ask them to write their answers down, and grade those answers. Except now questions aren't just questions, they're prompts. You can take the same text that a professor or instructor writes, feed it directly into a chatbot, and get a non-terrible answer, often a very coherent and relevant answer. One of the things we're seeing is that they're actually not great answers, but they're really well constructed. It used to be that the students who actually knew what they were doing were the ones creating well-structured answers, so we could grade based on, oh yeah, that sounds pretty good. That turns out not to work anymore, because now you can make something that's just wrong but sounds great. And that means we have to fundamentally change the way we structure assessment and engagement. One of the biggest fears we have now on campus, and I'll try to get back to K-12 in a second, is the notion that students who are using these tools exclusively, and feel like they're doing something productive, are not learning anything. They're not actually engaging their cognitive processes and doing the thing that caused past students to gain knowledge, gain understanding, and gain the ability to work through the material. At the K-12 level, I think this is an even bigger threat, because by the time we get students at the college level, we're counting on them having that foundational understanding of how to be a student, how to actually learn new material, how to show that you know it. And that's been a little bit taken away from us. So, yeah, somebody needs to fix the K-12 thing.
SPEAKER_06: Well, I can talk a little bit about that. For the last couple of years, I've been doing a lot of AI K-12 professional development and working with faculty as they struggle through this, because they know the horse is out of the barn. The first step is to teach teachers about artificial intelligence: how to use it, what the risks of the tool are, what the pros are, and the appropriate times to use it. The next step is, okay, how do you incorporate this into the K-12 curriculum in the right way, at the right levels for the students, depending on what age they're at. And a big part of the education, not just educating students using AI, but educating K-12 about artificial intelligence, is educating them about the risks and the reality of it. In other words, letting them know from an early age that this is a tool. It may sound and feel like a person, but it is not. Teaching them how to use it safely, similar to how, when we started teaching cybersecurity in the schools, the first thing was don't share your password or your username with anybody. Same thing with artificial intelligence: don't give it personal information about yourself. So we're working on that, and I know RIDE is working to come up with standards in the next year for K-12, and Rhode Island is actually a little bit ahead of the curve on that nationally. So it is a problem we are in the middle of working on, here in the state.
SPEAKER_10: How does this disrupt that?
SPEAKER_05: I mean, yeah, I agree with a lot of what my fellow brilliant panelists have said. We have cognitive offloading, which is the term we use when we're talking about how, instead of using your brain to do a thing, you're offloading that to something else. That's extremely important if you're talking about kids: if they don't have the skills that they need to be learning, then they just don't develop those skills. I've recently seen articles about how certain schools are just reverting from computers back to pen and paper. For reference, in some of the work I do now, I look at how people are using large language models for social and emotional use: therapists, companions, things like that. So obviously there are attachments that are built there. And we've seen, in these horrific cases of suicide, that some of these kids are particularly vulnerable. Full transparency: Adam Raine was from my hometown in Southern California, so that hit particularly close to home. But this is another very clear example of why we need accountability and regulation in these spaces, because children are so impressionable. It's difficult for them to know a world that they haven't lived in for that long. So we really need to be responsible as the adults in the room and say, this is not okay. We can't be deploying this, and we can't be letting our children use it in this way.
SPEAKER_10: And all that at the same time as schools in Texas are going full AI, so there's that debate as well. Overseas, too, there are scenarios where it's one-on-one with the computer. We have time for one more question. Who's it gonna be? We've got 30,000 hands and one question. I'm not making that decision.
SPEAKER_03: I'm wondering.
SPEAKER_10: Oh no. We're gonna call it a draw.
SPEAKER_01: We'll do two more. Yeah, exactly. No, no, no, yeah.
SPEAKER_03: I'm wondering if the allocation of resources is gonna be the first thing that we really see happening, as Anthropic makes it so computer scientists are no longer computer programmers, so everybody's getting laid off. And all of these data centers are taking all of our energy right now; you're paying more if you're living in a certain city. So my question is: what if you could regulate the big guys into paying back to society to take care of those problems?
SPEAKER_02: So one of the things I think we should keep in mind when we see these giant data centers being built, and it's like, oh my gosh, they're concentrating all this power and control, is that they're only building these big things because they want to tap into all of us. They really are powered by us, and we can just say no. Or we can insist: okay, we'll play this game, but here are the rules, here's what we demand to be a part of this system. They're not doing this without us; they're doing this kind of at us, to take advantage of us as a resource and to take our resources. So I think we need to pay attention to that. We need to remember that this really is about us, and we have a say in the way that it goes.
SPEAKER_04: Last question. Anatolian Boyce. I wrote a book on artificial intelligence, and basically my book is about how AI overcomes mankind's capacity of thinking, so that we are no longer needed to work, I should say. As we're speaking now, we have 50% of the industry laid off; in another three or five years, it'll be 100%. So I want to challenge the panel: what do you have in mind to help mankind earn a wage, to have money to buy a home, to buy a vehicle, just to have the necessities of food and water? Artificial intelligence does not have those needs. It doesn't need a home to go to; I've spoken about this before. It just needs a little room, a closet, and it could remain there day and night while we go back to work. We need vital nutrition in order to survive, and if we allow AI to work our farms, eventually they could yield the crops that they wanted for us to have, and cause a shortage. Same thing with water. So that's where we're at, and that's what my book is about, the annihilation of a planet. I'm not here to promote it, but I talk about all the important things about AI.
SPEAKER_05: Yeah, that's a lot. Well, first of all, AI technically does need water, right? Because they're building all these data centers, and they are physically located somewhere. But I think, again, not to hammer this back home to everything that we've been saying, this is a question around due process and accountability. Yes, in a world where everybody is laid off, which, frankly, I do not want to put that kind of blanket statement out there, I think there's a lot more nuance to that, so please don't hang on any of that, there need to be processes for the ways that people are gonna live their lives and be able to afford food. I definitely agree with that. I think everyone agrees with that, right? We had a brief conversation about how UBI is maybe a supplement. But one of the things I would really love to see from this administration, and even from these companies, is: what is the plan here? Are you just gonna displace everybody? Because if the plan is that you just displace everybody, and then all the power is concentrated in the hands of a few, and then everybody goes home, well, that's not a plan. We need a plan, and I think the gap here is that no one's talking really concretely about what that plan is.
SPEAKER_06: Just really quick: when we talk about AGI and superintelligence, it's not like there's one big brain that's gonna be running everything. There are gonna be a lot of little tiny brains that have a very limited amount of control, and there'll be little tiny brains from xAI's Grok, from OpenAI's ChatGPT, and so on, and they're all gonna be competing against each other. So I think it's gonna take a while for any one of them to take control.
SPEAKER_08: So yeah, I would say, before I was an academic, I was in the technology world for about 35 years. I don't subscribe to the idea that all jobs are gonna go away. Humanity has too many problems that we have not solved: the oceans, getting out to outer space. How are we gonna do all these things? If any of you are old enough to remember when PCs came out and COBOL and Fortran started to go away, all of these programmers were getting laid off. You look five years later, and there were so many more programmers, writing in third- and fourth-generation languages, doing totally different things. I think it goes back to accountability, and it goes back to the use-case regulatory work: how do we make sure we're not getting rid of jobs that we need? And as someone said in the movie, I think it's about keeping the human in the equation. I don't want AI doing all of my medical work. I want a doctor to be involved, at least at some point. And we can't get specialists here in the state; I can't get a primary care physician. So if this allows primary care physicians to see more patients, then I think it's good. So I think we just have to think about how we manage the way it's deployed and the jobs it will take, or what it will do. I think that's the risk.
SPEAKER_10: And there's also the element of holding companies accountable for what we're starting to talk about now, not just within AI specifically, but in economics and politics in general: where you have these mass layoffs happening, and they say, well, it's because of AI, when it may in fact not be. That piece of accountability is just as important, so we can really understand where the trends are going, and we're not just seeing companies say, well, let's save some money and blame AI.