Higher Listenings

AI's Future in Higher Education with Dr. Kevin Yee

Top Hat Season 1 Episode 5

At a time when AI offers students the easy button, how do we motivate them to do hard things? And how do we help students reclaim the sense of purpose, agency, and self-confidence vital to a meaningful, productive life? Who better to ask than Dr. Kevin Yee, Director of the Faculty Center for Teaching and Learning at the University of Central Florida and a lead organizer of the university's wildly successful Teaching and Learning with AI conference. Dr. Yee gives us his take on our Promethean moment in a conversation equal parts practical and philosophical.

 

00:00: Navigating AI in Higher Education

09:09: Enhancing Evidence-Based Teaching With AI

22:01: Implications of AI on Assignments

25:24: Navigating the Future of Higher Ed

35:23: Revolutionizing Student Engagement With AI

Follow us on Instagram: instagram.com/higherlistenings

Higher Listenings is brought to you by Top Hat

Subscribe, leave a comment or review, and help us share stories of the people shaping the future of higher education.

Speaker 1:

Welcome to Higher Listenings, a podcast from your friends at Top Hat, offering a lively look at the trends and people shaping the future of higher education. I'm Eric Gardner, Director of Educational Programming.

Speaker 2:

And I'm Brad Cohen, Chief Academic Officer.

Speaker 1:

In a reflective mood while stargazing a couple of years ago, I asked the universe, what's the meaning of life? And it replied, you'll never know, but here's an existential crisis instead. And around that time, ChatGPT made its debut. And while AI may be viewed as a crisis by some and an opportunity by others, one thing is clear: AI is our Promethean moment, a moment when not just one thing changes, but everything changes. That's probably true for the traditional approach to essays and assignments, but it's a lot more than this. At a time when AI offers students the easy button, how do we motivate them to do hard things? And as AI gets even more powerful, how can we help students cultivate capabilities that are meaningful to them and the communities they care about? These are some of the questions we explore with Dr. Kevin Yee, the Director of the Faculty Center for Teaching and Learning at the University of Central Florida and one of the lead organizers of the university's wildly successful Teaching and Learning with AI conference.

Speaker 1:

Whether you're looking for practical advice or to explore those big questions, we've got you covered. So grab a cup of tea or a starry sky and settle in. Welcome to Higher Listenings. Dr. Yee, welcome to the podcast. It's great to have you with us.

Speaker 3:

Thank you very much. I appreciate the invite.

Speaker 2:

So you obviously got in early and have been in this space since the launch of ChatGPT in November 2022. So how would you characterize where we're at in this moment with AI in higher ed?

Speaker 3:

So one of the ways people talk about where we're at with a new technology, that sort of thing, is the hype cycle, which is put out by Gartner every year, often for different technologies.

Speaker 3:

The relevant pieces of the Gartner hype cycle are the trigger in the first place, the innovation trigger; the peak of inflated expectations, where everyone thinks the software does amazing things and is going to change the whole world; followed by a trough of disillusionment, where people are feeling, okay, this thing has some serious limitations, it's not going to change anything at all; and then a slope of enlightenment, which is a much slower climb back up that is more realistically aimed. I think our faculty, at least on our campus, are in all of these locations. So there are definitely still some faculty at the start, before the innovation trigger. They've not been inspired; they're just doing the same old, same old. There are definitely many who are feeling like this will change everything, and there are very many who are in that trough of disillusionment. They're not happy with what AI can do, for various reasons, and so that has many faculty disillusioned as well. It really is a process that people have to go through, not unlike the seven stages of grief, when you adopt a new technology.

Speaker 1:

I'm sure there are plenty of faculty out there who may be in one stage of grief or another, having been exposed to AI.

Speaker 3:

For sure. I still in some ways believe the hype from around early 2023. I think it was Thomas Friedman in the New York Times who said that this is a Promethean moment, meaning it's not just that one thing changes; everything changes. All of the things associated with an industry are going to change, across all industries, really, right? And so I do still think AI has that potential, generative AI specifically has that potential, but there's a lot that's left unknown still.

Speaker 3:

There are unknowns with the ethics of this, there are unknowns with the legalities of this, there are unknowns with the authorship and ownership of this, and so there are reasons to want to go slow and try to figure things out. And yet I think there are also some dangers to going slowly, because the technology is now there and people are going to start using it, and if you don't put any guardrails down, then the uses of it will continue to grow and expand and do lots of things. So doing nothing is also something that concerns me, because there's lots of evidence that students know all about this, that they will use it even if you tell them not to, meaning that if an assignment is cheatable with AI, then a good number of them will do so and rob themselves of learning in the meantime. That's the inherent danger of doing too little.

Speaker 2:

It's clear that it is such a dynamic, complex, confounding, emotionally laden space that we're operating in right now, and this is on top of the fact that faculty and students are still in the process of recovering and regaining their footing from the pandemic. Even if you're open to AI and you want to embrace this in your coursework, the possibilities seem to be multiplying on a daily, weekly basis. So how are you approaching faculty development when, for so many, the path forward is unclear and the instructors themselves maybe are just feeling overwhelmed?

Speaker 3:

You know, the first time I heard the phrase "2020 has been a long year" was in 2021. And obviously it was in reference only to COVID at that point, but that continued into 2022, and then the joke morphed in 2023 into, okay, great, now we're in another crisis that is equally as disruptive as COVID. Although the faculty who choose not to pay attention to it don't think of it as disruptive, for those who recognize the potential, the opportunity, and the danger, it feels like 2020 is still here. And I think it's important for those of us in faculty development to recognize that that burnout is there, and it's real, and it exists, and it's natural. I think it would be a mistake for us to not acknowledge that and just move forward with programming, especially in the way we explain and position programming. If we don't contextualize it within the burnout question, then it looks a little callous, perhaps.

Speaker 3:

So I think there's a role to play here for teaching and learning centers and faculty support offices, also online support offices, in doing some change management when it comes to faculty, recognizing that society and higher education have shifted forever. I think those of us on the teaching center side need to be doing all we can to make the change more bite-sized. So you know, the book Small Teaching by James Lang has this basic concept that you don't have to make enormous changes to have an outsized impact; you can make small changes. I also like the idea of giving faculty some options. Faculty want programming in terms of face-to-face events, because they learn from each other, but they also love programming in terms of online and asynchronous events, where they can nibble at it in their own way. And they also like just-in-time sorts of things, like our teaching center's website. You know, I want to look something up: where's a good syllabus statement for me to adopt? Well, here are the five choices I put on our website that give them that.

Speaker 1:

Dr. Yee, what guidance would you offer to other folks tasked with faculty development around AI? As you've just indicated, there are so many different facets to this, but I'm thinking about those centers with fewer resources.

Speaker 3:

Frankly, I think this changes everything.

Speaker 3:

The introduction of generative AI changes so much that it's not something a center, no matter how well it's staffed, can meet appropriately without doing injustice to the need or without getting hold of additional resources.

Speaker 3:

So I'm normally somewhat shy about going to administration saying, hey, I could do these extra things for you if I had these other resources.

Speaker 3:

But I think this moment warrants such an ask. If you were a center of one, and such things are pretty common around the country, and you needed to do everything to get up to speed on artificial intelligence and then do all of the things we've been talking about, programming and workshops and resources that are practical for your faculty on your campus, you would only be talking about generative AI. You would not be talking about problem students, or lecturing better, or being interactive, or evidence-based teaching. There's so much to talk about in faculty development that if you were just a center of one, I would be making the argument that we're going to fall behind. Frankly, what I would recommend is benchmarking what's happening with your aspirational peers at other institutions, specifically what's happening regarding AI and faculty development. That data, assembled, could look pretty compelling, I think, to an administrator who might be able to help with half a position, or faculty fellows, or whatever it takes, because just getting current on the AI tools, and staying current, is almost a full-time job.

Speaker 2:

You mentioned evidence-based practice, and so I want to dial in on this topic for a second here. There's been a lot of conversation around integrity and the limitations of this emerging technology, the potential impact on student learning. You mentioned the need to promote literacy. A lot of the conversations that I know you're having are really about how faculty put this into play in their courses for their students, how they manage and support appropriate student use of this technology. But it's also the case that we have a real deep understanding of how people learn, and faculty aren't generally trained in that. They don't have that background. Do you see a role for AI in accelerating evidence-based practice?

Speaker 3:

Well, it depends a little bit, I think, on your definition of evidence-based teaching practices. One of the things that we're big on is learning science, and when you look at the principles that have come out of cognitive psychology about how the brain actually learns and what students need to truly learn something: there has to be repetition, spaced repetition spread out over time, and distributed practice, not just cramming. What it suggests to me is ongoing cumulative quizzes every time I see a student, and so that's an evidence-based practice.

Speaker 3:

That is something that we advocate for. It doesn't immediately seem to have any overlap with generative AI, but I can think of at least one way in which it does. One of the other laments faculty have, also wrapped around academic integrity, is that many students will go get last year's test out of Chegg or Course Hero, and they're short-circuiting any learning that they're supposed to be doing about this material, essentially by just cheating. And I've been delighted to tell all faculty: we've finally got a Chegg killer, because you've got a system that can pump out a test, with only minimal hallucinations and fixes needed by you, in mere seconds.

Speaker 3:

So that is itself an evidence-based teaching practice: having different questions each time, which is now made easier because faculty get more productive with generative AI. My own mind is not made up about everything. You sketched the academic integrity side, but also the lean-in side, you know, promoting AI literacy and fluency in students. They seem to argue against each other. So the nuanced position I usually take, and what I do in my own teaching, is that I scaffold essays into smaller, different assignments, and each one of those smaller assignments has an AI output component to it and then an AI-plus-human part of the assignment, where the plus-human part is what they're supposed to react to after the AI does its bit. I think giving them all of these assignments is how we give them AI literacy, so that they start to recognize that some of those assignments are in fact designed to point out the limitations of AI, that it can't always be trusted, and yet it can be a good starting point for a lot of things. So it is walking a very fine line, and I think that generative AI probably will help the most when it comes to making faculty lives easier and helping faculty create those things that our students are going to be asked to complete, like quizzes and tests.

Speaker 1:

I have a friend who works in the AI space, creating AI bots essentially to perform different sorts of coaching functions. I'm wondering if you see a use case for AI for faculty around their practice of teaching, so they don't actually have to go out and, you know, read James Lang or some of the other experts on teaching and learning; they can actually have a coach there who maybe gets a sense of their teaching philosophy and what they're trying to do.

Speaker 3:

I think a lot depends here on, well, a couple of things: how well the guardrails work at preventing hallucinations on this sort of topic, that's number one. And number two, how much the publishers and their products recognize that evidence-based teaching is not the same thing as getting students the answers. So the implementation matters; how AI is actually implemented will determine if it's going to make a big difference when it comes to student coaching, student mentoring, and coaching faculty. Yeah, I think we'll see that. I can tell you, after 20 years in this business, almost nobody wants to have the teaching center come and videotape you so you can look at the video afterwards.

Speaker 2:

I was going to say. I still have nightmares about my early development as a faculty member, getting videotaped by my mentor and having to sit down with her and watch that video.

Speaker 3:

Yeah.

Speaker 2:

You learn a lot.

Speaker 3:

Even as someone who offers that service, I don't think I've had it done myself since I was a brand new teacher. It's daunting. So, you know, once you have an artificial intelligence bot of some sort that can do that function, I think you would feel less naked, that's probably the wrong word, but you would feel less exposed by having a bot give you that feedback. So as soon as that becomes available, I think we'd be very much interested in vetting it and giving faculty advice about whether it works.

Speaker 2:

I wonder if this also cuts a different way. We've heard a lot about the state of student mental health. In fact, there was a recent study from the American Psychological Association that found that people who use AI frequently may be more prone to sleeplessness, loneliness, even problem drinking. And we know loneliness is an epidemic. So how should we be thinking about our approach to AI to ensure that we don't end up exacerbating this feeling of isolation for either students or faculty?

Speaker 3:

Yeah, I'm actually going to give a different answer for each audience there. The faculty all grew up in a world where generative AI was not there, so the loneliness that is induced by generative AI, I think, is a much bigger problem for our college-age students than for our faculty, because faculty already have coping mechanisms for how to deal with life; they're fully grown, and those sorts of things. But with students, it is indeed a much bigger problem. The mental health crisis was always there in some ways. I'm sure COVID exacerbated it, and generative artificial intelligence is making it worse as well. And I want to add an additional layer on top of that.

Speaker 3:

I think these isolation effects are strongest for online classes.

Speaker 3:

So imagine you've got a student who went through COVID, then went through generative AI, and all of this is done online. There are plenty of studies showing that students prefer online learning, and some other studies that are a little mixed, suggesting that the best kinds of learning seem to come from hybrid environments, or at least face-to-face environments.

Speaker 3:

So there's some evidence that students, although they prefer online, might struggle with the learning element of it a little bit more, and loneliness and isolation are probably part of that. Generative AI has the ability, perhaps, to assist, because they won't be as lost. I mean, if you ever took, let's say, a chemistry problem that was numerical in nature and tried to Google it verbatim, what you would get is a bunch of articles that walk you through redox reactions without explaining how to solve this particular one. Whereas if you put that same thing into ChatGPT, it'll help you understand exactly what's going on with this one example, and so it is like having a personal tutor. And so a student who's got the proper motivation is probably going to find that online courses have become easier now with generative AI, because they can get answers that help them understand in a much more targeted way. So what that means, I think, is that we might have some work to do on getting students properly motivated to approach learning the right way, as opposed to just getting the degree.

Speaker 1:

On a similar tack, there are concerns, and I worry about this personally, about the effect of AI on our own self-confidence or sense of agency. You know, let's face it, AI can outperform students, for sure, and many adults, on a whole range of different things. So how do we help students make effective use of AI while at the same time building confidence in their own skills and aptitudes, things that we, as humans, hopefully will be able to leverage throughout our lives and our careers?

Speaker 3:

Well, I'm going to answer that through the lens of one of my own beliefs about evidence-based teaching, which is that I could give students deep and involved and wonderful and engaging lectures about ethical and effective use of AI. But it is my belief that it sinks in better if they don't just hear it but perform it themselves. And a second belief I have is that students won't really do much in the way of exploration or additional work unless it's worth points. So, as cynical as that sounds, I think it means we have to give assignments where the point of the assignment is to learn a little bit about one sub-feature of what AI can do. So, AI hallucinates citations: that's a sub-feature that warrants its own discussion board post, where they learn that firsthand and they get points for doing it. The fact that they can have a conversation with AI, across a prompt that gets refined with the output, and then one more refinement based on the next output, is another discussion board prompt that deserves some points. So that teaches them the effective use of AI.

Speaker 3:

What I do in my deeper, bigger assignments, the ones that are not discussion board ones, is, as I mentioned, I split them into multiple parts. AI does some of the parts, and I then ask them to perform things with the AI output that the AI is probably going to be bad at: performing critique, deep critique, using judgment, examining the ethics of the answer that just came out, right? So these are all things that themselves will expose students to the ethics of using AI, the ethical use of AI. So it's kind of metacognitive: at the same time they're getting an answer, they're also learning about AI. So I think this is the way we get students comfortable for a future workplace where no one is saying, you know, skip these AI tools.

Speaker 3:

I think that's going to be unusual. And so I think that's what we have to think through in each of our disciplines: how do I convince students to use AI so they become faster at the job, this job that I'm training them for, after they get out?

Speaker 1:

We'll be right back. If you're enjoying the show, you can do us a favor by subscribing to Higher Listenings on Apple, Spotify, or wherever you get your favorite podcasts. You can also write us a review. We'd love to hear your thoughts. Or invite a friend.

Speaker 2:

And last, most important of all, thanks for listening. Back to the show. Do you have any thoughts about how AI is perhaps compelling us, as educators, to be a great deal more explicit, and maybe take more time and care to draw students' attention to what we're trying to get them to develop in terms of knowledge and skill through this activity or this reading or this exercise, in a way that connects to things they care about?

Speaker 3:

Yeah, I think we can invest more, depending on the rationale behind why you want to invest more. If a faculty member is thinking through, you know, why are we giving this essay assignment, and realizing it has to do with critical thinking, and here's what I mean by critical thinking, and then turns all of that into a very persuasive mini-lecture to students the day before the essay is assigned, in the hopes that students will hear these messages and get on board and won't use AI, because then they're cheating themselves, I think we should still do those things. But I've also seen plenty of studies that point out about 50% of students are going to use generative AI even though you told them not to. So if a faculty member wants to just throw up their hands and let the students cheat if they're going to cheat, and get the grade if they're going to get the grade, then fine, nothing needs to be done. But I think the majority of faculty are on the other side of this, which is: all right.

Speaker 3:

You just told me I can't use essays, that I can't convince them not to use generative AI, so maybe I need to switch from an essay into something else. So the process we recommend is that the faculty member go through that thought experiment: why was I assigning an essay in the first place? What exactly did I mean by the things that constitute critical thinking in my field? And then I do think it's realistic in the modern day and age to look at that list of 16 things you just wrote down and cross off the 8 or 12 of them that artificial intelligence can do by itself, because, honestly, those might be skills we don't need to teach students in the future if AI is good at them. AI is good at brainstorming, AI is good at outlining, AI is good at doing the first paragraph so that you don't start with a blank page.

Speaker 3:

So we don't need assignments that do those things. Although we could still have them, I prefer them as face-to-face things, after they've done the online piece and generated the output, because then that's where we do the human-adds-value part: this is the AI output; now here's how I'm even better than it, right?

Speaker 3:

So I think, you know, getting faculty to recognize that essays are in trouble, at least unless they're delivered differently and the focus switches to process instead of product, then it might be okay, but that also means switching to a different deliverable. And that's another piece of this equation that is moving and shifting. At the moment, there aren't a lot of free tools that can just create PowerPoints. So having students create a narrated PowerPoint, let's say, as a replacement for an essay, is something you could try today. Although, yes, generative AI could write the script for them, if I know students, they'd rehearse each of those slide presentations so much that they'll know it as if they had written it. But it's a moving target. So, you know, in terms of deliverables, we're going to have to continue rethinking and reconceptualizing, and, frankly, again, this is hardest when it comes to online learning.

Speaker 2:

Right.

Speaker 1:

So I've got a son. Actually, I have two sons. One is in second year university, and I worry about what students might be thinking these days, because it wasn't that long ago we said it was all about coding camps for kids: every kid should grow up knowing at least a little bit about how to code, or how to write effectively, like we said. And now the message has changed. It seems like, why should you bother when AI can do these things so well? And I worry about students starting to question whether their choice of discipline or major even still makes sense, given how quickly things are changing. So are we talking enough to students about these concerns, in your estimation?

Speaker 3:

Like you, I have two children who are at an interesting age for this. One is a college senior, one's a high school senior. And my take on this, on whether things are shifting and whether students should feel negative about their choice of career, goes in a couple of directions. First, a few specific industries, I think, will see some changes. I think that before, a team at a tech company might need eight programmers. In the future they might only have four, or maybe only two, because they can generate code faster; they just need someone to proofread it and fix it and troubleshoot it. I don't know how quickly that's coming, I don't have any firsthand data on that, but I think that's an industry where there might be less rationale for that career choice.

Speaker 3:

And then the other thing I wanted to say about all of this is that students are sometimes distraught and downtrodden because they feel like there might not be any jobs for them, that AI can do everything. But when you do those assignments we were talking about earlier, where they have to go get an AI output and then add value and point out how the AI doesn't go far enough or is biased or whatever, it's actually redemptive for the students. They recognize: I still add value; there are still going to be jobs for us out there. And so I think the process of leaning in actually solves, and salves, some of the problems that might otherwise exist.

Speaker 2:

When Eric asked the question, it was sort of, is AI supplanting my role in the world? Kind of raising existential questions.

Speaker 1:

That's why I turned to podcasting, Brad. They can't take our voices away. Well, actually, they could, they could. But I think this is safe for another six to 12 months, perhaps.

Speaker 2:

Right, right. But on a serious note, AI today is the worst it's ever going to be, right? So it's going to get stronger. I do wonder if some of the value proposition of higher ed needs to be centered around stewarding students toward a certain understanding and a desire to cultivate their own voice, their own set of capabilities that are meaningful to them and to the communities they care about, never mind that somebody else can do it better, faster, stronger. And so I wonder if there's an opportunity for us to re-secure a kind of value proposition here that was there at the birth of this industry.

Speaker 3:

Yeah, I think it is an opportunity, and I'm going to start with an acknowledgement that you mentioned AGI. I tend to believe artificial general intelligence will be here in our lifetimes, our working lifetimes, at which point the machine's thinking and writing ability will be indistinguishable from a human's, which will cause a lot of philosophical quandaries and debates about, you know, what is the purpose of society, what is the purpose of life. And I think it's a tremendous opportunity for all of the humanities, philosophy, certainly, but also, you know, historically, departments of English that have migrated toward thinking about technologies, digital humanities, those sorts of things, to almost reinvent themselves as disciplines, to be the disciplines that think these thoughts about mankind's role vis-a-vis artificial intelligence, especially artificial general intelligence.

Speaker 3:

Now, side by side with that, though, is a concern I still have with all of these ideas we have about getting practical and leaning in and heading off student problems and not letting them cheat themselves.

Speaker 3:

I still think that the system we've got in our own minds, that hopefully will work, has a glaring flaw, and that glaring flaw is that students today can do what we're asking, in terms of generating this AI output and then improving it.

Speaker 3:

They can do that because they've had 18-plus years of not having AI around, meaning that they were developing critical thinking skills in a world where there was no AI to do critical thinking for you. In two years, we will welcome the class that has had their entire high school experience with ChatGPT at their fingertips. And so, even if the teachers are doing the right thing and having students lean in, generate output, and be better than the AI, those students are not going to be able to do much improvement on artificial intelligence a few years from now. And I am worried about Idiocracy, a somewhat old movie at this point, coming true, where, you know, everyone just sinks to a much lower common denominator. There's an echo of it, having to do with physical fitness, in WALL-E, when you go far enough into the future. Yeah, that's just where I was going, right?

Speaker 3:

So I mean, yeah, is that where society is going? That we just have the machines do all the work for us? I'd prefer a science fiction vision like the original Star Trek TV shows, which talked about, you know, improving yourself, if we have the time and the ability to do that. I actually have a lot of faith in human ingenuity. There have been lots of times throughout history, beginning with Plato and Baudelaire: Plato talking about writing as a concept, physical writing, and Baudelaire talking about photography, worried about art. Lots of times where people have these moments and these doubts, but we always bounce back. And we've got it more recently with the internet, right? Oh, it's going to replace jobs. Well, it did, but it created other jobs, and so we found a way, and I think we will find a way here.

Speaker 1:

Yeah, I just want to pick up on something that Brad said too. It's the idea of purpose, like the Japanese term ikigai: having a sense of the difference that you actually want to make, what's going to get you out of bed. And, you know, I think at its best the higher ed experience is helping students discover that for themselves, so that when things are changing and different, you've got a sense of what you're about and how you want to show up.

Speaker 3:

Yeah, a big piece of this generation of college-age students cares a lot about the environment and sustainability. So I agree with you, and what that suggests and makes me think about is, gosh, maybe we need some institution-wide push on the UN sustainability goals and thread that more into our general education or whatever. It's always made sense, but it might make even more sense still if the purpose behind it is to give students purpose, or to meet students' purpose where it lives. So it does suggest to me that we should continue to have thoughts about what these students care about and what we should help them care about.

Speaker 1:

How should we be thinking about the value of higher education and positioning that to students when things are changing so quickly?

Speaker 3:

I know this sounds a little cynical, but a good number of students are not in college because they want the learning that comes with it. I think a lot of engineers probably do want to be great engineers; they want the learning and what comes with it. But there are also students who are here because they just want the piece of paper, feeling like the diploma will get them the job, and the job will then train them on what they need to do. This is a fairly common attitude among undergraduates. Employers, by the way, hate that attitude; that's not what employers think at all. So part of me thinks that will persist, that people will still need the piece of paper to get the job, because AI doesn't change any of that. But part of me also thinks it's an opportunity to have that conversation with students about the value proposition: why are you in this in the first place? It's part of the academic integrity piece. I think, across the board, in every class, maybe not every class meeting, but frequently in every semester, we probably need to have students think that through. Like, what are you going to get out of this? Why are we doing this assignment? Is it just because I care about good essays on Hamlet? No, the world doesn't need essays on Hamlet. What you need is this kind of thought process, being able to take things apart and redo them. So getting them to recognize, through consistent and repeated efforts, that cheating really is cheating themselves has always been a goal, but it's been one that's not been prioritized, and I think we need to prioritize it now, because it's more important than ever that students understand why we do all of this. And, frankly, that's an evidence-based practice in and of itself.

Speaker 3:

In teaching and learning, it goes by the name of the transparency movement. Mary-Ann Winkelmes was at the start of this, and, to sum it up, she says that we're very good at telling students what to do. We're pretty good about telling them how to do it, and maybe how we're going to grade it. But we often forget to tell them why we're doing it. And the transparency of why we're doing this really does help student motivation and wanting to do it the right way. I think the number of students who cheat drops when you have all this transparency about, you know, why you're doing it, and also how you're wasting your money just getting this piece of paper.

Speaker 3:

If you're not going to get the skills, if you're going to be just as good as ChatGPT by the time you're done, you're not going to be able to keep a job. Employers only want people who are better than ChatGPT. And so I think it is a sales job we have to give them: I know the tool is there, and you might be tempted to use it, and the AI might even get to the point where I cannot tell. There are still some tricks at the moment where I can tell, especially if I give them an assignment that refers to other things we've done in the class that the AI wouldn't know about, et cetera. But really the sales job is convincing the students: don't do this to yourself. Don't get an empty degree. This is not to your advantage.

Speaker 2:

I think this goes back to the need to invest more time in being explicit in our teaching practice, in a way that historically maybe we weren't called upon to do. It has the benefit of transparency. It has the benefit of giving them at least the raw material for metacognitive reflection on their work. And if we do it in the right sort of way with them, it connects to the things they care about. And I think that's where AI actually might be able to help us complete that circle: if we toss our assignments in there and say, help me make connections to things students care about; draw the line between this competency I'm trying to get them to develop and the careers it might connect to.

Speaker 3:

And I think, if we do it right, then we can actually make students pretty excited about using AI, not to, you know, replace my job, because obviously I want a job, but because at the end of the day it enables you to do a lot of things, and faster, which is, frankly, exciting, and it should excite students.

Speaker 1:

Dr. Yee, it was an absolute pleasure talking with you today, so thank you so much for joining us.

Speaker 3:

Yeah, thank you both, and, as I mentioned earlier, we're all in this together, learning from it, and so I look forward to what happens next on the journey.

Speaker 2:

Thanks, take care, appreciate it.

Speaker 1:

Higher Listenings is brought to you by Top Hat, the leader in student engagement solutions for higher education. When it comes to curating captivating learning experiences, we could all use a helping hand, right? With Top Hat, you can create dynamic presentations by incorporating polls, quizzes, and discussions to make your time with students more engaging. But it doesn't end there. Design your own interactive readings and assignments that include multimedia, video, knowledge checks, discussion prompts; the sky's the limit. Or simply choose from our catalog of fully customizable Top Hat eTexts and make them your own. The really neat part is how we're putting some AI magic up your sleeve. Top Hat ACE, our AI-powered teaching and learning assistant, makes it easy to create assessment questions with the click of a button, all based on the context of your course content. Plus, ACE gives student learning a boost with personalized, AI-powered study support they can access anytime, anyplace. Learn more at tophat.com/podcast today.