Higher Listenings
A lively look at the trends and people shaping the future of higher education, featuring thought leaders from across the industry. Brought to you by Top Hat.
Adopt, Resist, or Adapt? AI’s Crossroads in Higher Ed
AI isn’t knocking politely—it’s already moved into the classroom, and higher ed is still figuring out who gets the guest room. Marc Watkins, Director of the AI Institute for Teachers and Assistant Director of Academic Innovation at the University of Mississippi, has become one of higher ed’s leading voices on the ethical and practical implications of generative AI. In this episode, he unpacks the divide between those eager to embrace AI and those determined to resist it, and what we can learn from both sides.
From the pitfalls of institutional hype to the importance of “AI-aware” classrooms, Marc makes the case for a middle path grounded in ethics, curiosity, and connection. Together, we explore what meaningful teaching looks like when students can outsource thinking, and why the future of learning depends on keeping humans, not algorithms, at the center.
Guest Bio
Marc Watkins is Director of the AI Institute for Teachers and Assistant Director of Academic Innovation at the University of Mississippi, where he also lectures in Writing and Rhetoric. His work with generative AI in education predates ChatGPT, and he champions a stance of curious skepticism toward the technology. Featured in The Washington Post and The Chronicle of Higher Education, Marc trains faculty nationwide in AI literacy and writes about the intersection of AI, teaching, and ethics on his Substack, Rhetorica.
Follow us on Instagram: instagram.com/higherlistenings
Higher Listenings is brought to you by Top Hat
Subscribe, leave a comment or review, and help us share stories of the people shaping the future of higher education.
So I encourage my students to talk to me about how they're using this because we need to have those open discussions. We can't just look at this technology as a cheating technology, because otherwise those conversations aren't going to happen.
SPEAKER_00:AI isn't knocking politely, it's already moved into the classroom, and higher ed is still figuring out who gets the guest room. You just heard Marc Watkins, Director of the AI Institute for Teachers and Assistant Director of Academic Innovation at the University of Mississippi. He's one of higher ed's most thoughtful voices on how generative AI is reshaping teaching, learning, and trust. In this episode, we dig into the tension between adopting and resisting AI, explore what AI-aware teaching really means, and talk about why some things should be grown, not graded. So here's the question: if AI is unavoidable but not inevitable, what kind of human-centered future will we choose? Welcome to Higher Listenings. It's wonderful to have you with us.
SPEAKER_01:Well, thank you guys for having me. I appreciate it.
SPEAKER_00:So, how do you view AI in the context of higher education? Is it a wolf at the door, as you wrote, or more akin to a pesky mosquito that may or may not carry the risk of a nasty long-term virus?
SPEAKER_01:Oh wow, yeah. I think it's definitely going to look like a chronic problem. The wolf will eventually go away if it gets too hungry. If you keep going with this analogy, the virus will become endemic to where we are. I just think we're in this weird sort of situation where we have to face a technology that's not going away. It's becoming part of our lives now, and really a big part of our students' lives as well. So you can't hide from it. You also can't policy it away. You're going to have to start looking at this with some nuance and some really in-depth professional development that treats AI and education as a continuum and not just a one-off sort of thing. And that's going to take a long time for education, especially higher education, to wrap its head around.
SPEAKER_00:I think that's an interesting point that we need to think about this as a continuum because it's not like I can go and attend some training on active learning and I get the methodology and then I can apply that for years into the future. This is a shift in mindset around becoming sort of a continuous learner in many respects.
SPEAKER_01:Yeah, it's a shift in mindset too. It's also just the reality of where we are. Higher education is on the receiving end of a fire hose of AI developments. If we could get some stability in the actual development space, that would be great, to give each other a chance to catch our breath, take our bearings, see where things are. But every time we get a curriculum stood up for some type of professional development course or in-person event, it changes within three to six weeks, if not three to six days. So just having that mindset that we're in a moment of acceleration can be very challenging for a lot of faculty to wrap their heads around.
SPEAKER_00:You've written that AI is unavoidable, but not inevitable. Can you tell us a little bit more about what you mean by this?
SPEAKER_01:Yeah, so AI is here. You're not going to be able to just avoid it. It's embedded in most of the systems that we're using every day. So it's very hard to avoid these types of tools as they're being integrated and deployed everywhere. But that doesn't mean you have to use them. And more importantly, we want people, if they are using AI, to do so intentionally. We want them to be thoughtful about how they're engaging with it, and to do so with a certain degree of openness and trust. So I don't think it's by any means inevitable that we're all going to be using these AI features and tools. In fact, it feels like we're at a point of saturation in the market right now, where it's in every single app you can think of. The students talk about this too. It's in Snapchat. My students talk about how Tinder has its own AI chatbot, so they don't know if they're actually talking with another person on a date or talking with a chatbot. It's gotten to the point where it's everywhere, and it's almost like background noise. So we really do have to start thinking about when we want to employ these tools, how we want to do so, and in what way.
SPEAKER_02:AI is permeating, as you say, the environment. It's appearing unannounced in every application space that we're in. There are, though, those who want to resist this, faculty and students who are trying to put up some principled resistance to this technology from whatever direction. Is that resistance okay, especially for faculty? Think about faculty for a moment. Is it okay to resist its use in your course in an environment where it is so transforming the world of work and the nature of what students are going to be heading into? Is it still okay to be a resistor in that sense, and to not talk about it, or use it, or to ban it from your course?
SPEAKER_01:So I think nuance is really important here. I do ban the use of AI to automate parts of my course, both from my students' perspective when they're doing their learning and assessments, and also in my own work. I don't want to be automating grading. I don't want to be automating my emails to students. I don't want to automate letters of recommendation. That's based on my values, though. Other faculty have very different senses of this. So what I've been focusing on when I talk to my students about this is: look, AI is here. We're going to talk about how you're going to use this as something to augment your existing skills and as a collaborative partner to actually work through this. We're going to be very wary about being automated by this technology. I think that's really important to think about, because some of the most recent research shows this is really starting to have an effect on the job market, especially for recent graduates, because a lot of the junior tasks and junior skill sets that we saw are now becoming completely automated, to the point that these companies are trying to save money by eliminating some of those junior positions. So we want to be really careful about going down that automation pathway, and we want to really focus on using AI to augment your skills and as that ability to collaborate going forward. So I encourage my students to talk to me about how they're using this, because we need to have those open discussions. We can't just look at this technology as a cheating technology, because otherwise those conversations aren't going to happen. It's incredibly important for us to know how students are using this in whatever discipline you're teaching. So I think naked bans, if that's a term you want to use, or the other version of that, adoption across the board, that's not what we're going to see through a lot of higher education.
I think it's more about what is acceptable, what's not acceptable, and giving students clarity on that. And expecting clarity from our institutions about what's acceptable for my usage as a teacher is going to be really important too. We've seen a lot of silence from institutions of higher education about what's acceptable or not acceptable for a faculty member using AI. And I think that's going to become something we hear a lot more about, especially as you start hearing more and more reports about faculty using this to grade or to automate feedback.
SPEAKER_02:Eric and I have noticed what seems to be a growing interest in alternative grading or ungrading that seems to be driven, or accelerated, I should say, by the emergence of AI. And I think you've joined a number of voices in your essay, "Some Things Need to Be Grown, Not Graded," really teasing out some ways in which, rather than trying to police our students, we should be reorienting toward nurturing curiosity, adaptability, and ethical reasoning, and really trying to resituate learning at the center of this rather than achieving a mark.
SPEAKER_01:I think that's a wonderful segue into alternative grading practices. We're actually lucky to have Josh Eyler here as part of our faculty team, along with Emily Donahoe. They're both working on grading and alternative grading practices. I've been using alternative grading in my courses since right before COVID, so almost six years now. And I love it. My students, generally speaking, feel very positive about it. But institutionally, there's such an ingrained, transactional mindset about grades, both from faculty and from students. And I think that has turned out to be one of the worst situations possible for us with AI, because it then just becomes completing an assignment to get the grade and not actually showing growth. So for me, it's smaller classes, it's much more human face-to-face contact when appropriate and we can actually do that. And it's also letting students know that their work has value and is seen.
SPEAKER_02:If I'm starting this semester or thinking about the spring semester and thinking, you know, I've felt this for a long time. I know this tension is growing with the emergence of AI. I really want to make this transition toward a growth-oriented course rather than a grade-oriented course. What's one simple step that I might take as an instructor to move in that direction?
SPEAKER_01:I think inviting students in on the very first day to start crafting your policy about how they're going to be graded, and actually co-creating parts of that policy, so that they're aware and they're actually helping construct their own grade. And the other thing you can do, if you have a conference structure set up in your course, is invite students in to talk about how they are growing and to be able to identify their growth. Put that onus onto them. That's the whole point. We want them to be able to advocate for themselves and show how they're growing. So you can be sitting in your office with a list of prepared questions about their actual process that makes them go into metacognitive reflection about what that process was like. Very simple changes like that, inviting students in to be co-collaborators on how the grading is set up, and having a space where they can start talking with you and really advocating for what they did to show you their process and growth in the class, are two really good, clear strategies you can implement if possible. Now, if this is a 200-person class, that's not going to happen. You're going to have to be more thoughtful about how you want to integrate that, and consider what technology you want to use to help you manage it.
SPEAKER_02:The student success movement, which is now decades old, I think has begun to make this transition, or certainly was beginning to make the transition when I was at Ohio toward a focus on experience. What is the student experience from start to finish? Where is the friction? Where can we make the value more transparent and apparent and actually real to the students? And so that whole journey mapping kind of practice has, I think, begun to take root in a constructive way. And classroom redesign is a part of that for sure. So beyond the living experience, which a lot of money and effort was invested in, thinking about the learning experience as really the thing that needs the most attention urgently to shape experiences that actually motivate students and reinforce and support their learning in various ways. So I think that's something to look forward to. And I think there are certainly opportunities for AI to become a useful tool in that effort.
SPEAKER_01:Yeah, it's happening. But this is why it's so important to have not just a policy conversation, but a vision. A lot of faculty are being left to their own devices to come up with their own AI strategy and their own AI policy in the classroom. Most institutions have said, look, it's your job as an instructor to provide this to your students. Well, from a student's perspective, they might be taking five or six classes a semester. They're going to get five or six very different AI policies. And what we've seen from surveys across institutions is that quite a few faculty are giving zero guidance right now because they don't feel comfortable with this, for a lot of reasons. So going back to the idea of the student experience: when they graduate from college, they're going to be very confused about how AI was actually used and how it can be used in the workplace. Many of them are going to, in some situations, be brought up on charges because of AI detectors. Some of them are going to be obviously guilty of academic misconduct, but from what we see in the space, a lot of them are going to be falsely accused. That's going to affect the student experience. So when these students graduate and have their own children and start thinking about college, I don't know if they're necessarily going to look back on that college learning experience positively, given that AI was just coming along, and I wonder if that's going to impact their decision to send their children to college.
SPEAKER_02:Yeah, I hadn't even thought about that generational impact. I'm very narrowly focused on the next 10 years. I'm just trying to get through the week, Brad.
SPEAKER_00:I'm just trying to get through the three.
SPEAKER_02:Yeah, I don't know. Some days it's just the day. That's all. Like, get me to five. So there have been some very high-profile institutional investments, or announcements of partnerships with OpenAI and other companies. And you have added your voice to a chorus of those who criticize universities for doing these things more for optics than out of anything intentionally or seriously committed to addressing the challenge of student success and driving learning forward in this new era. So what's behind your criticism there, and what should universities be doing instead?
SPEAKER_01:We have to start with students. We have to start with faculty, and then we have to decide if we need to actually invest in an outside tool. The thing that makes me upset with some of these major announcements is that the optics were: we bought the tool, and now we're gonna teach faculty how to use it and get their advice and feedback on that. To me, that's the wrong pathway to take. You want to consider what your values are for your institution, what your strategic goals are for the future, and you want to consider how AI is gonna play a part in that. Again, there's not just one AI tool on the market. It's not just in one system. Your institutional philosophy, your vision for how people should be using these tools, hopefully to collaborate and to augment and not just automate things, is gonna be much more meaningful than saying, hey, we just bought so-and-so's company and here we have 5,000 or 10,000 or 12,000 subscriptions, use it as you like. That's just not gonna work very well.
SPEAKER_00:So in the corporate world, we're seeing the push to move faster. You know, a lot of workers want to be able to use the latest tools while the IT department is saying, hey, wait a minute, slow down. We need to look at this more carefully. Where's the data going? What are the security considerations? Do we already have a tool that can do this? So users see the shiny new thing and think, oh, this is gonna make my job more efficient. So let's go. But then there's the whole risk side of the equation that many folks aren't necessarily thinking about.
SPEAKER_01:Oh, yeah. I think the safety features are another thing too. We're all very aware when we're typing something in or putting a document into an AI system, I think. But now we have multimodal AI. Some people are bringing this to work, they're talking with it, they're sharing their screen to let AI look at it, they're activating their camera so it can look at them. That's a whole new ball game in terms of security and privacy concerns that we need to be really thoughtful about, especially in our industry, in education. Dealing with FERPA and HIPAA requirements is going to be something we all have to be aware of. And the thing is, once it's activated in your system, it's gonna get updated, and the updates are gonna change how it functions as well. So it's not just this static thing that suddenly appears; it keeps changing over the course of the next few years, and you then have to keep up with that on top of everything else. So it adds a lot to the cognitive load of everyone, from students, faculty, and staff to administrators. You can't even keep up with it.
SPEAKER_02:I think that's the modern condition, right? We're bombarded by technologies that are evolving and changing, and we have no say over them, other than whether we're going to use them or not, right? One of the worries that is growing with AI, and it's not a new worry, it's an old worry, really. And in my view, it connects to the downside or the tension that emerges when you focus on personalized learning or personal learning environments. Students are sometimes turning to AI instead of, or in lieu of, turning to the professor or to a peer. And this may be exacerbating isolation and loneliness. And students may or may not be aware of that until they're in a crisis. There's clearly some value to students engaging with AI outside of class when they're in their room studying, to gain clarity, to have something restated in their first language, whatever use might add value to their learning in that moment. That all seems quite positive. But it does seem like we might need to start thinking more about how to ensure, in the design of our courses, that we're creating occasions for positive peer-to-peer and student-to-instructor interactions. So, do you have some suggestions or thoughts about how that might unfold?
SPEAKER_01:Oh, yeah, I think we should definitely be looking for as many opportunities as possible to be human with each other and connect with each other. If you're teaching an online or hybrid class, you might start looking at some different tools, like VoiceThread, to have conversations with students that are asynchronous. And there are a lot of social annotation tools on the market that can give students a chance to go in and start annotating a text together and really start connecting with each other. So using technology in that supportive way, I think, is really important. I do worry about the number of people who are using a chatbot as a person, for therapy or companionship. When Ethan Mollick was talking about this, I think it was last October, I was at the conference. He said that some studies were showing that students who were really super users of ChatGPT were less likely to raise their hands in class to ask questions, because they're less likely to put themselves out there where someone might judge them if they're wrong. And here's this wonderful thing called ChatGPT that's not going to judge you if you ask it a question or whatever. So I think we have to be really aware of that and really thoughtful about this space, and talk with students about it and get them to start thinking about how they are interacting with these tools.
SPEAKER_02:The slow pace is something that I really am struggling with. This feels as urgent a moment to me as COVID. And when COVID hit, I was in the executive office helping to lead the institutional response to that. We had the benefit of enormous federal resources to help us fund what needed to happen nearly overnight. Nonetheless, the mobilization of the institution, faculty, staff, and students around that pivot and ongoing policy changes in the wake of the pandemic was pretty remarkable and proved that institutions could move really quickly when the urgency called for it. This feels to me like this is one of those moments where there needs to be an understanding that a mobilization at that scale is necessary. And there needs to be a figuring out with urgency of the assessment issue. Are we going to do a two-lane solution or are we going to do something else? But we've got to figure out how to do this. We've got to figure out how to make the student experience optimized and make sure that learning is happening and help everyone make that pivot. That means helping students and staff and faculty all make that pivot. It feels like the urgency is there to me.
SPEAKER_01:I definitely share the sense of urgency; I agree with you completely. I feel like if the AI industry were more stable right now, and not launching these new updates over and over again, it would be a lot easier for us to respond and find our footing. I think most of our institutions are completely overwhelmed by the pace of this. We like to go through very slow processes of shared governance, which takes a lot of time, getting state funding, getting federal funding if it still exists. It takes a lot of time to actually do these types of things. So I think you can say it's a priority, and priorities are really important. And having a charge from the university is really helpful, to say: look, you don't need to solve AI, but you need to pay attention to it. You need to be curious about it, you need to start being thoughtful about how this is being integrated within your discipline. And you need to start talking to deans and chairs to begin to support your faculty in different ways, whether that's giving them microgrants so they can use some of their professional development funds to pursue different types of learning about this and integrate it in their classes, or big institutional initiatives where you can get them to start redesigning courses and talking about what that means. I do think the urgency is there. It's going to remain there for a lot of different people at a lot of different times, but there's going to be a lot of friction just to get to that point for our institutions.
SPEAKER_00:It feels like we're circling this bigger question. You know, in a world where AI is reshaping what students can do, what does it mean to offer a meaningful learning experience? For some faculty, especially those hesitant about AI, that question seems to open up a door, not just to resist, but to rethink, hopefully. Do we double down on the why behind our teaching? Do we become more transparent, more human, more invitational in how we structure our classrooms? So I wonder if part of the opportunity here is not just about AI policy, but about practicing better pedagogy, maybe even reimagining what fairness and equity look like in this new terrain.
SPEAKER_01:I think good pedagogy is really a conversation we need to start having. We really do need to work on ways we can show faculty how to grade when students use AI that are equitable, that are fair across the board. It's incredibly challenging for people to wrap their heads around how to ensure that happens, especially when some students are coming in with free AI. Sometimes they have an institutional account, sometimes they're paying for the $20-a-month account. I'm sure a few of them probably have the $200-a-month account from OpenAI. So that creates this situation where it's really challenging for a faculty member to think about: okay, my students on the free version can work with AI for three or four iterations before it locks them out for four or five hours, versus the students who can work on this for several hours at a time without worrying about it. So yeah, I think it's gonna be a bigger conversation for a lot of our faculty. And a lot of those who are resistant say that's one of the key things keeping them from going headfirst into this.
SPEAKER_02:Yeah, it's the concern about equity, or the differentiated management of those factors you just recited there.
SPEAKER_00:Meanwhile, on TikTok and YouTube, there are all these student AI influencers, and they're selling everything from hacks to beat detection tools, to how to transcribe a lecture and get the summary, to letting ChatGPT do your reading for you, in these 90-second videos. That's a pretty powerful narrative. I personally deleted Instagram recently. I'm kind of proud of that. I almost signed back up just to go and look at more of these videos, but I resisted that temptation. But how can educators push back against those narratives? Because I have two boys in university right now, and that's probably some of the content they're consuming in their downtime.
SPEAKER_01:That's why AI awareness is so important, having those conversations with your students: hey, where did you find this weird tool with the name I cannot pronounce that's not ChatGPT or Google Gemini or something else? Oh, you found that on TikTok? Okay, are you finding a lot more like that? Well, my friend sent me this. That's a lot of what we're seeing in how they're interacting with it: they're far more influenced by social media personalities than they are by these giant companies. So I think we have to consider how Gen Z is getting that information. What makes this really challenging is that I'm currently in a state where TikTok is blocked on our university accounts. So I can't actually go into a presentation for my campus and show them TikTok. I'd have to go to my home office, download TikTok, and record a video for a PowerPoint to actually show this. It's gotten so challenging in terms of access. We are now losing access to certain devices and services that you may not be able to get on a university computer, but if you bring your personal device with you, you can get around it fairly easily. So again, we go back to this idea of access, and who's actually watching these things, and understanding that this landscape is becoming far more jagged rather than evening out because of these issues of differential access.
SPEAKER_02:So this conversation, and I think the conversation we're going to have and see unfold over the course of this next year will continue to dwell on some of the really deep challenges because they're deep and they do require us to be thoughtful in responding and pivoting as institutions and as individual instructors. There's also, I think, an opportunity for us to see real opportunity in the future. And so I want to invite you to share what you're optimistic about with the emergence of this technology and what you think we have to look forward to and get you excited about the year ahead.
SPEAKER_01:Well, I think the technology has amazing potential for healthcare, and also for dealing with some of the giant social problems that have befallen our world. You can do a lot with this technology in terms of identifying patterns in our world that would be very difficult to do with even the versions of machine learning and data science from a few generations ago. And I'm saying generations not as in human generations; I'm talking about technological generations, like six or seven months ago. So I think we have a lot of good opportunities for this to actually improve our world, to improve our lives. We have to look at this, though, not as just a fire hose released on the public. We have to really start thinking about good, intentional engagements and use cases for AI in our world, for research, for the sciences, and also for medical care, and consider how that's going to support our lives. If we can do that, then things are going to be great, I think. I think we're going to have some good overall outcomes. It's this fire hose situation. We just need to get to the point where it becomes a little bit of a stream instead of a giant fire hose aimed at higher education.
SPEAKER_02:I want to thank you for your time and for continuing to lead in this conversation. Marc, it's been a pleasure talking with you.
SPEAKER_01:Thank you, Brad. Thank you, Eric. This has been wonderful. I really appreciate it.
SPEAKER_00:Thanks so much. Higher Listenings is brought to you by Top Hat, the leader in student engagement solutions for higher education. When it comes to curating captivating learning experiences, we could all use a helping hand, right? With Top Hat, you can create dynamic presentations by incorporating polls, quizzes, and discussions to make your time with students more engaging. But it doesn't end there. Design your own interactive readings and assignments and include multimedia, video, knowledge checks, discussion prompts—the sky's the limit. Or simply choose from our catalog of fully customizable Top Hat e-texts and make them your own. The really neat part is how we're putting some AI magic up your sleeve. Top Hat Ace, our AI-powered teaching and learning assistant, makes it easy to create assessment questions with the click of a button, all based on the context of your course content. Plus, Ace gives student learning a boost with personalized AI-powered study support they can access anytime, any place. Learn more at tophat.com/podcast today.