The Teaching Table

AI in the Classroom: Navigating Academic Integrity

University at Buffalo Office of Curriculum, Assessment and Teaching Transformation

Artificial Intelligence has dramatically transformed the educational landscape, leaving both students and educators navigating uncharted territory. Dr. Kelly Ahuna, Director of Academic Integrity at the University at Buffalo, discusses this complex issue in our thought-provoking episode about AI tools, academic honesty, and the future of assessment.

Join us for this insightful conversation that balances practical advice with thoughtful reflection on preserving educational integrity in the AI era. Have you encountered AI challenges in your teaching or learning? We'd love to hear your experiences and strategies for navigating this new frontier.

Maggie Grady:

Welcome to the CATT Teaching Table podcast, where we explore innovative teaching methods and dynamic educational strategies. Hosted by the University at Buffalo's Office of Curriculum, Assessment and Teaching Transformation, otherwise known as CATT, and supported by the Genteels' Excellence in Teaching Fund, this podcast is dedicated to highlighting journeys toward educational excellence. I'm your host, Maggie Grady, a learning designer for CATT, and today we're exploring a critical and timely topic: artificial intelligence, academic integrity, and where we go from here. I'm thrilled to be joined by Dr Kelly Ahuna, who is the Director of Academic Integrity at the University at Buffalo. Kelly, thank you for joining us today.

Dr Kelly Ahuna:

Thank you very much for inviting me to participate.

Dr Kelly Ahuna:

I always appreciate an opportunity to talk about academic integrity, especially in this new world of AI.

Maggie Grady:

Yeah, and we like hearing it from you because you're the specialist, so let's dive right in. AI tools are changing how students approach their work and how we assess it as faculty, as educators. To start, can you explain what cognitive offloading is and why it's particularly relevant in today's academic landscape?

Dr Kelly Ahuna:

So cognitive offloading is when you take cognitive demands and reduce them to allow greater efficiency. A classic example of this would be the calculator: if you have to do long division to ultimately get to the end of a big problem, you might cognitively offload the long division to the calculator. Things like translation tools. I think EndNote is another good example. You're writing a long paper, you have to put all your references together, so you use EndNote to do the references. These are tools that do what previously would have been cognitive labor, and I think the concept is particularly relevant today because some students are using generative AI tools to cognitively offload work that in some cases they shouldn't, or they're not allowed to.

Maggie Grady:

Okay, so the issue isn't necessarily the tools themselves, but how they're being used within the context of learning objectives. Can we dive into that?

Dr Kelly Ahuna:

Yeah, I mean, I think the learning objectives are really where the conversation should be, because the desired learning outcomes of the course should dictate when cognitive offloading is allowable. So, for example, if you're taking a Spanish 101 class, students are not going to be allowed to use translation tools, because the learning objective is for students to memorize vocabulary. But if it's a high level Spanish course and students are reading Don Quixote in Spanish, a translation tool is probably fine, because if they run into unknown words, the learning objective is not about memorizing vocabulary, it's much broader than that. So the translation tool is probably allowable.

Dr Kelly Ahuna:

And this is where instructors' rules become really important, because, as the experts in the subject and the designers of the learning outcomes, they're the people who know when an AI tool hinders or advances a learning objective. And students, you know, they're enticed by these tools, but they're novices and they might not be able to identify how the AI tools hurt or help learning objectives. So the more explicit instructors can be about what tools are allowed, when and why, the better for students.

Maggie Grady:

Understandable.

Maggie Grady:

So how does AI impact assessment, especially considering today's tech-savvy students who might view AI tools like ChatGPT, Microsoft Copilot, something along those lines, as just another resource?

Dr Kelly Ahuna:

Yeah, so this is really important because faculty, when they collect an assessment from a student and they score it, it's really imperative that they are scoring what the student knows, or what the student can do, and if students are using AI tools in unallowable ways, it really harms the authenticity of that assessment. They're not scoring what the student can do, they're scoring what the AI has done, and so it's just imperative to, you know, the genuine nature of assessments that students aren't using the tools in ways they're not allowed to.

Maggie Grady:

That makes sense because, right, just like you said, they're judging or grading the AI tool versus the student, and isn't the idea that we want the students to learn and build that knowledge?

Dr Kelly Ahuna:

Yeah, and I would just say that AI is just the newest threat to academic integrity. This has always been the case. You could be grading what the student's roommate did for them, or the student's mother, or what they purchased online, or there have been all kinds of other threats.

Dr Kelly Ahuna:

It's just that artificial intelligence now is so accessible in so many areas.

Maggie Grady:

So communicating with students can be tricky. How can we ensure they understand when AI use is allowed and when it's crossing the line?

Dr Kelly Ahuna:

Yeah, this is where communication between instructors and students really has to be very clear. There are a lot of things faculty can do, I think. First of all, putting something in their syllabus. This could be a blanket statement if they're never going to allow AI or they're always going to allow AI, but it's probably more typical that it's going to be assignment dependent, so their syllabus statement can say there will be specific rules about AI as we get to assignments, and then they need to say it out loud when they get to each assignment.

Dr Kelly Ahuna:

You know, in this case you're allowed to use these things. And I learned from Ethan Blanton, who's a computer science professor here, that it is easier to tell students what tools they can use than to try to make a list of what they can't, because the list of possible tools is proliferating rapidly and you're never going to be able to cover all your bases. So if on this assignment you think Microsoft Copilot would be okay for students to use, tell them overtly: you can use this tool, or that tool, or these three tools, and sort of put the parameters around it for the students. And then, importantly, I always tell students they should never make assumptions. If the professor hasn't said, they should not assume it's allowed.

Dr Kelly Ahuna:

They should always go back and ask.

Maggie Grady:

Yeah, so clear lines of communication: always talk to the professor if there's uncertainty, and communicate out to the students, keeping things clear for everybody. I have heard from faculty a few strategies that they use, like requiring students to submit drafts as a way to combat cheating, or having students include an oral component, which is also a way they can detect if students are using AI. So why are those two things effective, and can you think of other strategies?

Dr Kelly Ahuna:

Yeah, I mean, if the goal is to make sure students aren't using AI, there are some methods that might be more reliable than others. I think a lot of people are going back to the classic in-class proctored assessment, where students aren't taking the exam or doing the writing somewhere else. They're writing or taking the exam in class, where they can be observed by the instructor. Along those lines, sometimes people are flipping their classrooms, and if you're not familiar with that, that's the idea that students do the learning outside of class and the assessment in class. Typically we think of students coming to class to hear the instruction and then going home to do the assessment as an out-of-class activity. Some faculty are flipping that around, especially since COVID; we have so much better fluency in the technologies for students to watch lectures at home and then do the assessment in class. Also, the GenAI tools are getting better all the time, but still, if you can be very specific to course content in the assessment questions, then it's going to be harder for students to get help from the tools.

Dr Kelly Ahuna:

So, making the assessments as course-specific as possible. You mentioned drafts; drafts are a good idea. You could follow a student's progression as they go from beginning to end, and along those lines you could ask for the metadata, since Google Docs keeps track of changes, and ask the student to turn that in. Or you could ask a student to take notes on their progress, sort of a metacognitive approach: when I did this first part I was stuck here, and now I'm feeling better, just sort of talking it through. And then, like you mentioned, oral components. I mean, if you want to know if somebody knows something, a really good way is to ask them questions and see if they can respond in real time. That's often not possible in classes of any substantial size, but it is a good, effective technique to get at what a student knows.

Maggie Grady:

Yeah, so putting a personal component into whatever assessment method, I think, would be one way to make sure, just like you said, that they know the material, they're ready to move on and they understand. So now let's talk about detecting AI use. I know that UB uses Turnitin, which has an integrated AI detection feature. How reliable is it, and what should educators be mindful of?

Dr Kelly Ahuna:

Yeah, this is really such an important topic. I mean, I don't want any student out there worrying that they're going to get in trouble for academic dishonesty when they really did their own work. We have a lot of stopgap measures in place to prevent that from happening. And I think there's been a lot of press about these tools. You know, after ChatGPT came out, these detection tools proliferated. I spent a lot of time trying to keep up with them all, and some of them are much better than others, and I think the bad ones have really brought bad press to all the tools. We use Turnitin; it's built into our Brightspace platform, and it's easy for faculty to turn it on. Turnitin does two things: it will give you a plagiarism score and a score of the likelihood that text was generated by artificial intelligence. So it's a predictive likelihood score. Turnitin is more likely to miss AI than to falsely accuse a student of using AI. Their sort of business model is that they'd rather miss some than falsely accuse people.

Dr Kelly Ahuna:

But I just want to say that in our academic integrity process we rely on a standard of evidence that we call preponderance, which is what is more likely. We're just looking to see what more likely happened. If Turnitin comes back with a high likelihood score that something was generated by AI, our procedures require that the instructor have a conversation with the student, and when they have that conversation and the professor says, tell me about your paper, why did you write this, where did you get this information, and the student can talk in an educated way about their paper, that usually just erases the concern raised by the Turnitin report. So if a student has really truly done their own work, I don't want them to have any worries about a false positive.

Maggie Grady:

Okay, so let's continue with that: what is the best approach for faculty to take?

Dr Kelly Ahuna:

Yeah, so faculty should just follow the process we have in place. I've worked really hard to develop a faculty web page on the Academic Integrity website where they can walk through the steps of the academic integrity process if they have a concern. But they can always call our office and talk to us; we're happy to walk them through anything. When they have this conversation with a student, you know, I was faculty for 20 years before I took this job, so I know that if you have a student who you think cheated somehow on your assignment, you might take that personally, and I would just say to faculty, try to leave that at the door. Students have lots of reasons and motivations for why they do things, and usually it's really not personal to you. The goal is just to have a conversation and come to an understanding of what happened.

Dr Kelly Ahuna:

So I'm finding that instructors can pretty easily identify when students are using artificial intelligence in a couple of ways. Sometimes they include information that you did not teach in class and sometimes they include information that's very much at a higher level than what you taught in class. Sometimes a lower level, but usually a higher level. Sometimes there are these idiosyncratic misinformation items, sometimes there are false references and sometimes it's just markedly different from every other assessment the student has turned in. The whole voice of it is very different. So I always just say that, you know, I think instructors should trust their expertise and their experience. They read a lot of these assignments and sometimes it's pretty clear.

Maggie Grady:

Okay, so, as AI continues to evolve, what are some of the larger challenges that you foresee for academic integrity and higher education?

Dr Kelly Ahuna:

Yes, I mean, I can't stress enough the need to protect the value of a UB degree. We are not a diploma mill, right? We are not just giving out degrees for money. We are trying to make our community and our world a better place by educating our students to go out and do good work. I always try to tell students that our futures are inextricably linked, because if our graduates go out for a job unprepared because they cheated their way through UB, that employer is going to wonder why this person doesn't know what they're supposed to know, and they're likely going to think, you know, what is UB doing? UB is not preparing students the way it should, and so the next time a UB graduate applies for a job with that company, they're so much less likely to be looked upon favorably, because the employer no longer trusts UB. So really, UB's reputation is critical to everyone involved with the university.

Maggie Grady:

I like that viewpoint, because some people forget that. It's nice to remember that it's reputation, it's value, it's all of those things, and we're just building upon that. I like that way of thinking.

Dr Kelly Ahuna:

Yeah, sometimes students will say, well, if I cheat, I'm only cheating myself. That's the kind of line people say, and it's really not true. When students cheat, they really hurt the whole community.

Maggie Grady:

Yeah, I agree on that. It sounds like there's a delicate balance of integrating these tools without undermining the core values of education. What advice would you give our listeners?

Dr Kelly Ahuna:

Well, I guess I'll talk to two audiences. I'm not sure who exactly will be listening, but I would remind our students that they're here and they're paying good money to learn. There's no shortcut to learning. Back to our cognitive offloading discussion. All learning has to take place in a student's head. They have to do the cognitive work to take in new information and make it meaningful and keep it. So, they need to trust their professors when it comes to how they're going to learn in a class and follow those guidelines.

Dr Kelly Ahuna:

But for our faculty, I would quote a woman named Sarah Eaton who does academic integrity work in Canada and she likes to say that our students aren't our enemies, they're our future, and I think this is really important to keep in mind.

Dr Kelly Ahuna:

We don't want to make this into an arms race. And instructors, I know it might seem like a lot to some people, but I think they need to educate themselves about the available AI tools and how they're relevant to their content area and, whenever possible, allow students to get some responsible practice with the tools. We know from a survey that we did last year that our students are very worried about job market expectations around AI use, that this is going to be expected of them. So when faculty can allow AI in their classes and in their assessments, they can, you know, sort of model that for students and get them comfortable with the tools.

Maggie Grady:

This has been a super enlightening conversation, and before we wrap up, do you have any advice for instructors navigating the use of AI in their teaching and their assessments?

Dr Kelly Ahuna:

You know, I would just say, use the Office of Academic Integrity. If you need support reading a Turnitin report, if you have questions about how to approach a meeting with a student, or you worry about whether you have a preponderance of evidence, or any questions like that, reach out. UB is very lucky to have an Office of Academic Integrity; not a lot of universities are resourced that way. That's why we're here. We're here to help, and so I would just say, please use us.

Maggie Grady:

So, for those who want to learn more, you can visit the University at Buffalo's Academic Integrity page, just like Kelly mentioned, and that's buffalo.edu/academic-integrity. And Kelly, thank you so much for joining me today.

Dr Kelly Ahuna:

Yeah, thanks very much for having me. I mean, these tools aren't going anywhere, so we're going to need to keep the conversation going.

Maggie Grady:

Thank you to our listeners for tuning in to this episode of the Teaching Table podcast. If you enjoyed today's discussion, be sure to subscribe and leave us a review. We'll be back soon with more conversations exploring the latest in teaching innovations and strategies, and until then, keep exploring new ways to reach and inspire your students. As always, be sure to connect with us online at buffalo.edu/catt, that's C-A-T-T, or email us at ubcatt@buffalo.edu.