Teaching Beyond the Podium Podcast Series
Get inspired by members of the University of Florida community as they discuss tips and strategies for creating a quality learning experience.
Practical Ways to Teach AI
Learning about artificial intelligence isn't just for computer science majors! Dr. Jasmine McNealy, Associate Professor in the College of Journalism and Communications, offers practical strategies and useful resources to help faculty learn about AI and apply it to their teaching, no matter the discipline.
Music: Motivational by Scott Holmes
00:06 Dr. Alexandra Bitton-Bailey
Hello, my name is Alexandra Bitton-Bailey and welcome to the Teaching Beyond the Podium podcast series. This podcast is hosted by the Center for Teaching Excellence at the University of Florida. Our guests share their best tips, strategies, innovations, and stories about teaching. Our guest today is Dr. Jasmine McNealy. Jasmine began her career at the University of Florida as a graduate student and after teaching at other universities, she came back to her very first academic home at the University of Florida as an associate professor in the College of Journalism and Communications.
00:41 Dr. Jasmine McNealy
The two areas I work in, broadly defined, are law and mass communication. But what that means, to me anyway, is: how do we examine how people actually interact with tools, with media and technology in particular? And then how do we better make law, or how do we rethink the law we have, to consider the impacts of these technologies, and of the organizations and individuals using them, on people?
01:13 Dr. Alexandra Bitton-Bailey
Part of her expertise, which she works to share with students, is centered on artificial intelligence. Jasmine broadens our views of AI and helps us to connect what artificial intelligence really looks like in our everyday lives.
01:28 Dr. Jasmine McNealy
Yeah, you know, I think the way we talk about artificial intelligence is, unfortunately, sci-fi. The talking car, those kinds of things, like a general artificial intelligence, which doesn't exist right now. What we do have is automation; we do have algorithmic and machine learning decisions, and decision-making technology. You interact with artificial intelligence all the time. Most people have some kind of recommendation system, whether it's Netflix or Hulu, or Pandora, or Spotify, right? Those are recommendation systems. Those are machine learning systems. And you're interacting with them all the time. People have Facebook, people have Twitter; those are algorithmic systems. You talk to Alexa, you talk to Siri perhaps; that's artificial intelligence. Now, there are other kinds of artificial intelligence that most people don't have themselves to deploy. Most people don't have facial recognition to deploy. You can use it on your iPhone to unlock your phone, but you don't have, hopefully, a facial recognition system that you can deploy on other people. Right. So I think thinking about the language we use when we talk about artificial intelligence is really important.
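The recommendation systems Jasmine mentions can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not any service's actual algorithm: the users, titles, and overlap-based similarity score below are all invented for the example.

```python
# Toy "people who watched what you watched" recommender.
# Similarity between users = number of shared titles; candidate
# titles are weighted by how similar their watchers are to you.
from collections import Counter

watch_history = {
    "ana":   {"Stranger Things", "Dark", "Black Mirror"},
    "ben":   {"Dark", "Black Mirror", "The OA"},
    "chris": {"The Office", "Parks and Recreation"},
}

def recommend(user, history):
    """Suggest unseen titles watched by users with overlapping taste."""
    mine = history[user]
    suggestions = Counter()
    for other, theirs in history.items():
        if other == user:
            continue
        overlap = len(mine & theirs)  # shared titles = similarity
        if overlap == 0:
            continue
        for title in theirs - mine:   # titles the user hasn't seen
            suggestions[title] += overlap
    return [title for title, _ in suggestions.most_common()]

print(recommend("ana", watch_history))  # → ['The OA']
```

Real systems use far richer signals (ratings, watch time, embeddings), but the shape is the same: your past behavior, compared against everyone else's, decides what you see next.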
02:49 Dr. Alexandra Bitton-Bailey
Despite the growing enthusiasm and energy behind weaving AI into the curriculum, Jasmine cautions us to carefully consider the impacts and potential consequences of AI.
03:01 Dr. Jasmine McNealy
We have to critically consider what it means when we say we want to have AI woven throughout everything on a campus. That, to me, comes with responsibility. That means teaching everybody about responsibly using this tool, but also the fact that a tool is never neutral. A tool always takes on the values, the biases, and the, you know, social orderings that come along with the people who created it, but also who are deploying it and using it. And so what does that mean? Is it reifying structures that we've been saying are bad for a long time? Is it harming people? What are the impacts that could possibly happen?
03:51 Dr. Alexandra Bitton-Bailey
Jasmine explains that sometimes our best intentions for AI uses create unexpected outcomes. These are the examples that are so critical to share with students as they prepare to enter their careers.
04:05 Dr. Jasmine McNealy
For example, a couple of years ago, Amazon, I believe it was, or Yahoo, one of those tech firms, said, look, okay, we understand that there's a pipeline issue; women aren't getting hired in tech. Here's what we're gonna do: we are going to make an algorithm that blindly reviews resumes, right? This will presumably take away bias, because people will look at names and attach a gender to a name. And those human biases, we think, are causing women to get overlooked in the resume submission process, and then, at that point, not even getting an interview, or even a preliminary interview. Cool, right? On its surface, right: we want to get rid of gender bias in the hiring process. The problem is, they made an algorithm that, instead of taking away human bias related to gender, reified bias related to gender. It was just wholesale jettisoning women's resumes. There are other markers that point algorithmically to "woman" than just the name. So what are you basing your model employee, the one you want to hire, on? What does that look like? If you have an organization that has traditionally hired a lot of men, then that exemplar, that model employee, is probably going to look a lot like some of the men you hired. And what does that resume look like? And if women are having far more diverse experiences, that means their resumes are probably not going to look just like those men's resumes. So if you created an algorithm and trained it on this data, and your exemplars are all these dudes, right, presumably, and it encounters women, and people of color, and other marginalized folks whose resumes look different, who's going to get jettisoned from that pile? And it was found that a lot of women weren't even making it past the screening process. Now we've created this algorithm that's supposed to, you know, help us out, do better. And it wasn't.
So these are the kinds of things, and there have been several others, right: healthcare, banking and finance, school admissions, college and professional school admissions. In 2020, for the UK GCSE and A-level exams, they used an algorithm that downgraded all of these students' grades, right. So we've had several high-profile instances. And then there are several others, where people have no idea why they didn't get an apartment, didn't get a loan, didn't get a job.
07:06 Dr. Alexandra Bitton-Bailey
The intersection of science and the humanities is key. Jasmine points out that too often, AI is reserved for the sciences and isolated from the humanities and social sciences. Improving AI, and truly developing universally usable tools free of bias, requires deep collaboration between STEM and the humanities.
07:28 Dr. Jasmine McNealy
And I think that's where the humanities and social sciences come into play. Because for a long time, humanities and social science have been looking at social ordering and society, and having the ability to have these discussions about, look, a technology or tool or new media does not erase what we've been talking about for a long time. So how do social science and the humanities help you think critically about these tools, and about ways of solving problems with these tools?
08:03 Dr. Alexandra Bitton-Bailey
Jasmine believes that gen ed courses should help students consider the connections between the world around them and their coursework.
08:11 Dr. Jasmine McNealy
Part of what a university does is help students think about what their role is, or could possibly be, as a member of a functioning society. So I think grounding in our gen ed courses is really important. And people are like, oh, why do we have gen ed requirements? Why do we have a cultural studies requirement? Why do we have whatever requirement? Because they can have you, as a student, thinking, oh, I thought I was just gonna take a class about, you know, Roman literature. But no, it's Roman literature, and we're looking at the technology they talked about and what that meant for different social structures. But then we can bring that to the modern day and talk about artificial intelligence as a technology and what that means for our social structures.
09:08 Dr. Alexandra Bitton-Bailey
When it comes to AI, one of the big challenges for most of us as faculty is that we think we need to understand how to write code, but in reality, most of us simply need to connect our field to what students are experiencing in the world.
09:23 Dr. Jasmine McNealy
Again, you don't have to learn Python over the summer to be able to teach your class in the fall, at all. I think most people do not need to know how to program an algorithmic system. It's not necessary, right? Save that for the computer science and engineering folks. You do need to know, at a broad overview, how it works. What it means when you like a song, what it means when you fill out a job application online, or a bank loan application. What do these points of data mean, and how are they used? You know, faculty members in the humanities and social sciences are probably already doing some related work. It's just, if necessary, pulling it out or connecting it to the artificial intelligence. So I gave that example earlier about Roman literature and technology. Can you connect what it is you're teaching to the stories that are happening in the news right now, that students may be aware of? Oh, you know, I heard about how TikTok is shadow banning certain words. Well, what does that mean within the context of censorship, right, if that's your class? Say you're teaching art and art policy. What does it mean to have artificial intelligence create art? Can an artificial intelligence own something, right? So all those things that you are already teaching, it's just making connections.
10:57 Dr. Alexandra Bitton-Bailey
Jasmine has some strategies and resources to help us get started. These are easily accessible materials we can use to help us connect AI to our own fields in ways that are both critical and informative. This approach can help us develop content and conversations to have with our students that resonate with them.
11:17 Dr. Jasmine McNealy
There are several books and writings that could be helpful in thinking about this for faculty members. One that is really great, really well written, from a couple of years ago: Artificial Unintelligence by Meredith Broussard is a great book to start thinking about what we actually mean when we say artificial intelligence, and what the actual state of artificial intelligence is. But also Safiya Noble's Algorithms of Oppression, and Virginia Eubanks' book, whose title slips my mind completely right now, but Eubanks, E-u-b-a-n-k-s. The conference proceedings are really good too, from what was formerly called FAT* and is now FAccT, with the c's in it; there are several different interdisciplinary papers coming out of that. And while they seem like they could be very weighty in the computer science, they're not; there are a lot of social scientists that go there and present, and there are a lot of lawyers too, which, you know, may or may not turn you off, but they're there. So, that interdisciplinary conference. There are other ones that are taking an even more critical approach, so critical ML is one to look for as well. And other folks have created critical AI syllabi that are available online, with a lot of different readings that could be good for thinking about this.
12:55 Dr. Alexandra Bitton-Bailey
Jasmine acknowledges the many challenges we as faculty face when we're trying to make a shift or transform our courses to embed content that will help our students gain much needed proficiencies.
13:09 Dr. Jasmine McNealy
Well, you know, I think one of the challenges we always face as instructors is time, right, and preparation. I think having resources that help faculty gain that baseline knowledge about artificial intelligence will be critical for the University of Florida. I think having that in different formats, for different people with different learning styles, will be critical. I would also say, though, that being transparent about the fact that, hey, this is new for everybody, and we're learning together, but we're gonna think about this together, and when there's a problem, we're gonna critically consider it together, will be an important thing for faculty to do. I would say also, for faculty and staff, and grad students who are teaching as well, who will be responsible for this, it'll be important for us to make sure that the university knows that we need these resources to effectively teach about artificial intelligence and the other technologies which are going to be used on campus.
14:21 Dr. Alexandra Bitton-Bailey
Jasmine suggests two important approaches to helping students get familiar and comfortable with AI. First, include content that interests them, that they connect with and relate to. This is key in getting students to really engage in rich conversations. She also recommends getting involved with the Center for Teaching Excellence through workshops, resources, or faculty learning communities, where she discovered new teaching strategies.
14:48 Dr. Jasmine McNealy
So there are two things I learned to do at the University of Florida, actually with the help of CTE. One is team-based learning. If you have students who are, let's say, shy, or we'll say not as engaged, they may engage with a group of other students that they get to know a bit more, and they feel freer or more comfortable answering questions. That's one. And two, I always try to have a modern or contemporary example that they will have either encountered or experienced, or know somebody who's experienced. If I'm being perfectly honest, I try to entertain myself in class. And if I'm not interested or entertained, then I'm pretty sure they're not either. Of course, I know this is not for entertainment, yada, yada, yada. However, if I have to be in front of you for at least fifty minutes, then I have to, you know, not bore myself. So, that being the case, I try to find examples that I find interesting and that I think will spark discussion. I don't go out of my way to find the weirdest thing, but if Wired has covered something weird, or if WCJB or WESH has covered something really weird and interesting, or if I know that an episode of, I don't know what's popular now, The Handmaid's Tale or Law and Order, or whatever, has a clip or a part that is relevant to what we're discussing, I'm gonna bring that in, because I figure they may be watching this or they may be talking about this. But only if it's relevant, right, to what it is we're talking about.
16:32 Dr. Alexandra Bitton-Bailey
Thank you for listening to this episode of the Teaching Beyond the Podium podcast series. For more helpful resources developed by the Center for Teaching Excellence at the University of Florida, visit our website, teach.ufl.edu. We're happy you joined us and we hope to see you next time for more tips, strategies and ideas on teaching and learning at the University of Florida.