
Keystone Concepts in Teaching: A Higher Education Podcast from the Stearns Center for Teaching and Learning
Keystone Concepts in Teaching is a higher education podcast from the Stearns Center for Teaching and Learning at George Mason University focused on discussing and sharing impactful teaching strategies that support all students and faculty.
Join us as we feature conversations with experienced educators who discuss actionable, impactful, and evidence-based teaching strategies that may be applied across disciplines and instructional modalities. This podcast aims to support faculty professional development by providing access to broadly inclusive teaching strategies, supporting faculty of all appointment types and across all fields by discussing the keystone concepts of teaching and learning.
Subscribe now to the Keystone Concepts in Teaching podcast on your favorite podcast platform to get notifications of new episodes as we explore small-change teaching and learning strategies that you might even wish to try out in your course this semester!
Hosted by: Rachel Yoho, CDP, PhD
Produced by: Kelly Chandler, MA
S1 E6: What Do We Do About Artificial Intelligence Technologies in the Classroom?
Dr. Laina Lockett, the STEM Education Specialist in the Stearns Center for Teaching and Learning, joins us to talk about the “big thing” right now in education: artificial intelligence-based text generators. We explore actionable strategies for continuing impactful and engaging teaching in this new educational context.
Resources:
Your host Dr. Rachel Yoho's publication on inclusive teaching now that we have AI text generators: Yoho, R. (2023). No, Let's Not Go Back to Handwritten Activities: Inclusive Teaching Strategies in the Context of ChatGPT. The National Teaching & Learning Forum, 32(6), 1-4. https://onlinelibrary.wiley.com/doi/pdf/10.1002/ntlf.30379
Stearns Center for Teaching and Learning at George Mason University recommendations for teaching considering AI text generators: https://stearnscenter.gmu.edu/knowledge-center/ai-text-generators/
Inside Higher Ed article mentioned in the episode: https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/07/23/new-report-finds-recent-grads-want-ai-be
Hello and welcome to the Keystone Concepts in Teaching Podcast, a higher education podcast from the Stearns Center for Teaching and Learning, where we share impactful and evidence-based teaching practices to support all students and faculty. I'm your host, Rachel Yoho. In this episode, we're going to be discussing how we keep teaching with meaning and impact, and to support all students and faculty, of course, as well, with the emergence of these new artificial intelligence text generators like ChatGPT. I'm joined by this episode's guest, Dr. Laina Lockett. Dr. Lockett has teaching experience at the college level from being an adjunct instructor at the Pratt Institute in Brooklyn and being a teaching assistant at Rutgers. Dr. Lockett served as a graduate fellow with some summer research programs at Rutgers, where she led workshops about writing and presentation skills. She also has experience working with faculty and teaching assistants from the Rutgers Academy for the Scholarship of Teaching and Learning. And Dr. Lockett has a PhD in Ecology and Evolution from Rutgers University and a Master's in Environmental Science from Towson University. So thank you so much for joining us for this episode, Dr. Lockett.
Laina:Thank you so much for allowing me to be here today.
Rachel:So as we get started, I know we have lots of thoughts, there's lots of concerns out there, lots of people writing articles and all these things about ChatGPT and all of these other AI text generators. But can you tell us a little bit about some of the major concerns instructors have right now about the emergence of these new AI text generators like ChatGPT?
Laina:Of course. I personally think that there are three main concerns that faculty seem to have, and you might guess that the number one concern is cheating. I think a good number of faculty are also concerned that relying too heavily on these types of technologies will lead to less skilled students. And I think a third concern is continuing to see the spread of misinformation.
Rachel:These are excellent points, and I think they encapsulate some of the major concerns really well. And I agree with you that a lot of the initial concerns are around things like cheating. That this isn't really doing the assignment, or this isn't really doing the activity or learning in the field or profession. And so let's talk a little bit about some of the quick, gut reactions that many instructors might be having right now about how to essentially "ChatGPT-proof" their assignments. So can you tell me a little bit more about this related to some of these themes you were talking about?
Laina:Certainly. I actually had to chuckle a little bit, because I think that gut instinct to ChatGPT-proof or AI-proof your assignment is not necessarily going to go the way that a faculty member may want it to go. So, for example, if we're thinking about the idea of cheating, it might feel like we should lean on those AI detection software systems that are out there, but those are currently not great, reliable resources to use. I've actually tried it myself. When I was putting in different text examples, it was just as likely to tell me that what I actually wrote was written by AI, and that what was written by AI was written by a human. So you don't want to rely too heavily on those types of things.
Rachel:Yeah, those don't sound good. As we're thinking about it, we certainly don't want to be in those types of academic integrity hearings if it's not very reliable, let's say. And so what else can we do? What else do we want to be thinking about right now?
Laina:I think another thing that we could be thinking about is being transparent. So if we're afraid that our students might be less skilled moving forward, we might think about how we can give them information about why it's important to learn the skills that we're teaching them. And so I'm a fan of Duolingo, I have lots of different languages open, but I have not gotten very far in my learning tree for many of them. So if I were to ask AI to write an essay in Swahili, I've only done, you know, several lessons, so I wouldn't be able to actually check the accuracy of the AI's output. And I think that applies whether it's languages or different content. You have to have foundational knowledge, because you're going to need to vet this type of software, especially when we're thinking about text generation: it's going off of what's the most probable next word, not what's the accurate next word.
Rachel:That's an interesting point because one of the things that I've seen, one of the recommendations we might consider would be having students do basically side by side assignments. So one side would be, or the first part, would be them doing the activity, say, by hand. Whatever by hand looks like in their field or discipline, whether that's written out, whether that's calculations, programming, whatever that might be. And the second would be having ChatGPT or another AI text generator do the assignment and then comparing them using some of those skills. So is that the type of activity you're talking about, or what else might we be considering here?
Laina:So I think that's a great assignment that you could do with your students. I think it can be a great basis for having discussions. I think there are some other ways that we can incorporate AI into things that we may already be doing. So something like a think-pair-share activity, where you have students answer a question independently, then they work with a partner or a small group and reanswer the question. You could think about having students turn to AI during that deliberation step to try to help them if they're, you know, having a tie or disagreement of ideas. That could be a place. A place I've used it personally is in an assignment I already had that was scaffolded. In my work as an adjunct at Pratt, I teach STEM writing courses. And part of that writing process is peer review. That's always been an area my students have struggled with, because they like being nice, and no one wants to tell their peers, "this could be better." Right? So I still have them do the peer review, and we usually focus on a particular concept. So maybe we're thinking about how to be more concise. But then after they do the initial review, I have them use AI to give a similar review on that same area of focus, and have them assess what the AI says should be fixed. Then they can use that as part of the process to give their peer feedback. And so then they don't have to feel like the bad guy, but it also helps them think critically about analyzing writing as well.
Rachel:That's really interesting, because when we think about this, I mean, teaching peer review is exceptionally hard. It doesn't sound like it should be, but I think we probably need an episode in the future just talking about how to do meaningful peer review. But as we think about providing feedback, as we think about how we design the assignments, can we go back a little bit to what you were talking about with scaffolding? Can you tell us just a little bit more about what that means to you in this context, and what that could look like for our faculty, our instructors, who are listening?
Laina:Sure. So for me, scaffolding is a strategy I use for larger projects that students work on over the course of the semester. So instead of giving them a set of instructions at the beginning and hoping they do what I want them to do by the end of the semester, I break down the different steps of the project and have them turn those separate steps in. That gives them an opportunity to get feedback before the final part is submitted, the part that's going to be worth the larger portion of the grade. And so different steps might look different depending on the final product, but I do this kind of blended project where they do a scientific research paper, and then they turn a scientific concept from their paper into an art installation. And so we have several steps where they work on research, finding peer-reviewed articles to support their scientific ideas and questions that they have. We have several rounds of writing and thinking about, you know, outlining and building that out based on the information that they found. Then we even have a draft of what they're going to create, so they'll do a sketch. So a step in a scaffolded project doesn't always have to be a written thing that your student turns in. And then I also have them do a draft budget before they submit the final thing. So the final product is a grant proposal, and then they make the thing that they proposed, essentially.
Rachel:This sounds like it translates really well across disciplines, because not only are you talking about the science side, the writing side, the art aspects, and creation there, but I think here what we're looking at is how scaffolding can be a great way to, unfortunately, "ChatGPT-proof," or really look at how we still teach with meaning and with impact, supporting all of our students in the context of these AI text generators. Because when we don't wait until the end, or we don't have one big project, when we look at small deliverables, not only is that a great and very inclusive way to create a learning experience, but we're also looking at ways to essentially check that this is our students' work as we go along, and how these assignments build over time. So I think that's particularly compelling. And so, to build on that, as we're talking about some of the concerns or some of the gut reactions we've been discussing, one of the things that often comes up is instructors who want to have students just do everything by handwriting now. We're gonna set all the computers aside, put them back in the bags, not bring them to class. We're forgetting about technology. And so can you tell me a little bit about your reaction to that, or what we can do when we have that sort of initial gut reaction to ChatGPT or some of these other AI generators?
Laina:I think it can be helpful to pause and do a little bit of self reflection. So I think that there are places where doing handwritten assignments or even something like an oral presentation has a place. But I think that if we're just having students do those activities as a way to try to get around these new technologies, that kind of misses the point. So when we're thinking about what we have our students do in our class, we should always be linking everything back to the learning outcomes and making our decisions based on those. And so, for example, in one of the classes I teach at George Mason, I want my students to practice oral communication as scientists. I think that's really important that they be able to have a command of the concepts and be able to also relay it to a lay audience because when they go to their careers, they're going to have to talk about scientific concepts. So I do have a space for that, but I don't do it because I'm trying to get around AI. I think if we're trying to make our classrooms inclusive of everyone, we want to make sure that we're making choices that aren't going to disadvantage students just because we're fearful of a certain outcome.
Rachel:I like that statement right there. And it's really about the fear. One of the things that we might be thinking about is how we are approaching our assignments and our activities. And that's really what we're talking about here, whether that's coming from a place of learning or from a place of fear, like you were just mentioning, Laina. I think this is a great way, as we're thinking about some more proactive and more inclusive strategies that we might be considering, to really consider what the motivation is. You know, are we wanting students to just use pens or pencils because we're afraid of the text generators? Because, well, unfortunately, I hate to tell everyone this, but you could do this stuff with the text generators and then handwrite it in a lot of cases. But even so, is that the best way to do it? Is that inclusive? I mean, I can type a whole lot more in the same amount of time than I can handwrite on a page. And I personally don't want to go back to reading handwriting or trying to guess what people are writing anymore. As we've gotten away from that in teaching and grading, it's been a great improvement. And so what are some of the other proactive and more inclusive teaching strategies we might be considering in this new educational context?
Laina:So I think we've actually already talked about some of these things, because I think they're just foundational to teaching, such as incorporating scaffolded assignments, if we aren't already doing that. Again, I think being clear with expectations is going to be really helpful as we move forward with trying to be inclusive and also incorporating AI into our courses. And taking some time to update assignments might be a good place to go as well. But I think one that maybe isn't thought about as much is the idea of being mindful about consent. So while it's great to use these technologies, and I think it's going to be really important for our students to learn how to do so, specifically in relationship to their own fields, a lot of these programs do require you to set up accounts, and I think it is perfectly reasonable for our students to not want to do that, because there are still a lot of unknowns about these programs and how they're working. And so I think that as we revise our courses to help our students develop these digital literacy skills of using generative AI, we do need to make sure there are spaces for them to still learn this without forcing them to do something that they're not comfortable with.
Rachel:That's a really important point, to be thinking about privacy and consent and all of these things. We might be thinking about this less often in our day to day, because so many of the different educational technologies, the plugins, the polling systems, whatever we're using in our teaching, are so highly vetted by the institutions. And so with these other things, like the AI text generators, we might be thinking about those concerns, or potential concerns, less often. So among the many recommendations out there, we might consider having example prompts and responses from an AI text generator. So instead of asking our students to do those interactions, we provide those already as part of the assignment, part of the instructions, and say, okay, now take this, now use this, now do the side-by-side comparison like we were talking about. But certainly, these are things that we can model. You know, we might be thinking about not just the logistics, not just prompts or responses, but we might be putting our students into, for instance, scenarios. As a practicing professional in whatever our field is, we might be thinking about how I would, or how I would not, perhaps, use AI text generators or any of these other related tools as starting points for our work and our practice, and really think about what that could look like in the classroom for learners, for individuals developing into that professional practice. Here we're really looking at how we can be creative and sensitive to a number of different issues, not only trying to, say, prevent cheating, but also include our students and some of their potential concerns. Because I think a lot of us have a lot of those same sorts of concerns as well.
And so as we're expanding our conversation a little bit to talk about some of these bigger picture things, how might we, for instance, design course policies for our syllabus around some of these AI text generators? What would we consider including or perhaps not including?
Laina:So I think when it comes to an AI policy for your class, there are a couple of things. Currently, George Mason doesn't have a universal policy, so we can't just, you know, look that up and slide it into our syllabus. I would say, talk to your department chair or your course coordinator, if you have one, to make sure that you're in line with the context in which your course falls. But again, it's really going to tie back into your learning outcomes. I don't think that every class necessarily should have the same policies, because you want to make sure that you're setting your students up to do things where you can properly assess the outcomes. So Bloom's taxonomy is a hierarchy of different things that we might ask our students to do. At the bottom of that pyramid, we have what you might consider more basic skills, like recalling information, right? So if your course learning outcomes focus on something that's more "basic," so to speak (you can't see my air quotes, but hopefully you get the point I'm going for), these simpler things that we might ask our students to do that AI can do really well, then you might not want to have a lot of AI use in your course, because how are you going to tease out what your student actually knows versus what the AI is generating? But if you're working with maybe more senior students, and your outcomes allow for more creativity and developing new ideas, things like that, which would be in that higher level of Bloom's taxonomy, it might make more sense to allow more freedom for students. The Stearns Center has a table on their website that gives some sample language. And I think what you'll see if you take a look at that website is that it doesn't have to be black or white. I've heard someone use a traffic light analogy, right? So we can have classes where it doesn't make sense to use it.
And we can have classes where it's a free-for-all, but your students have to be responsible for what they submit, right? But then there are definitely going to be classes where it's okay some of the time, but not all of the time. And so that might be our yellow-light kind of classroom.
Rachel:I like that comparison quite a bit, because that gives us some very tangible things in our transparency and our communication with the students: "yes, it's okay to use it here," "use it with caution in these spaces," "here's how to use it in these spaces," or "not at all." Any of these types of things that increase transparency are always really useful here. And so, as Laina mentioned, we have some great recommendations and guidelines on the Stearns Center website, and we'll provide additional information and links in the show notes for this podcast episode. But as we wrap up for today, it sounds like this really represents a keystone concept in teaching, because we're looking at how we continue teaching with meaning, how we're not just being replaced, let's say, by these AI text generators, but are teaching with meaning to support all of our students in their professional preparation. So can you reflect on that as we wrap up this conversation?
Laina:Absolutely. So I think that there is one takeaway I'd like everyone to walk away with, and that is that we can do it one step at a time, right? There are lots of things we might think about doing to revise our classes, but we can do it one step at a time. And especially since the technology is going to keep evolving, it might feel overwhelming to think about all the things that you could do, but I'd encourage you to pick, you know, maybe just one thing. So maybe it's something like updating your slides on digital literacy to include some information about generative AI and how that fits in. Or you may consider having a discussion with your students about the ethics of generative AI in your field, right? So you can start someplace small and go from there. But I do think it's important that we start to think about these things, because there is an article that came out in Inside Higher Ed, and the point was that employers are starting to expect students to be familiar with these tools. And so I think it would be a disservice if we don't incorporate AI anywhere in the curriculum. The article went on to say that 70 percent of recent grads think that AI needs to be part of the undergraduate curriculum, and I would have to agree with them.
Rachel:We're learning in a new context, and being able to use the tools that students will be using in their future professions is, I think, essential, no matter what that tool might be. If we ignore it, that's certainly to our students' detriment, as well as to ours, the institution's, and their learning's. AI text generators, ChatGPT, all of this is certainly a big topic, so I'm sure we'll be revisiting it in the future. But I appreciate your time, and we can't wait to share our next episode of Keystone Concepts in Teaching with you, so please come back for our next episode as well. Thank you so much, Dr. Lockett.
Laina:Thank you.