Rise with Clarity Podcast
Rise with Clarity Podcast is hosted by Dr. Katherine Lee, Higher Ed Coach and Career Strategist. This show offers strategies and guidance to women of color faculty in academia.
45: Check in with Your Pedagogical Values When It Comes to AI
What are your values when it comes to AI? And following up on that question, what are your pedagogical values when it comes to AI usage in the classroom?
If you’re confused about how to navigate this new reality, it’s completely understandable! It’s also reasonable if you are feeling conflicted about whether you are even Team AI or not.
In this 45th episode of the Rise with Clarity Podcast, I share 10 reflective questions that can help you to better assess what your pedagogical values are when it comes to AI.
I also point you in the direction of some books, podcasts, and articles that can help you to clarify where you stand in terms of AI in general.
Find the full written transcript for this episode at RisewithClarity.com/45, along with all of my other podcast episodes and other resources for women of color faculty in higher education.
Dr. Katherine Lee is a Higher Ed Coach and Career Strategist and a former tenured professor at an R1 university. She helps women of color faculty to manage the tenure track, navigate politics, and take the next steps to advance their careers. To find more resources or to work with Katherine, check out her website at: Rise with Clarity.
What are your values when it comes to AI? And following up on that question, what are your pedagogical values when it comes to AI usage in the classroom?
Since ChatGPT was first introduced in late 2022, advancements in AI technology have revved up considerably. It’s no longer possible to ignore the fact that generative AI is fundamentally changing the ways students are learning and engaging with your courses.
And perhaps you have noticed that AI has already been integrated into many of the platforms that your university uses. Even if your university didn’t explicitly sign new contracts with vendors, AI assistants and chatbots have crept into the learning management systems and other platforms that you use on a regular basis.
And these tools arrive by way of updates that you likely never knew about or had the opportunity to opt out of.
If you’re confused about how to navigate this new reality, it’s completely understandable! It’s also reasonable if you are feeling conflicted about whether you are even Team AI or not.
So, in this 45th episode of the Rise with Clarity Podcast, I wanted to share with you 10 reflective questions that can help you to better assess what your pedagogical values are when it comes to AI. And I also want to point you in the direction of some books, podcasts, and articles that can help you to clarify where you stand in terms of AI in general.
I’m going to put all of the links in the written transcript for this episode, which you can find at RisewithClarity.com/45, along with all of my other podcast episodes and other resources for women of color faculty in higher ed.
My Reflections on Takeaways and Learning Objectives
I have to admit that I am pretty relieved that I am not in the classroom right now. Frankly, I’m not sure that I would be able to manage this moment very gracefully. As professors in the US academy right now, you are all dealing with some incredibly challenging issues, in so many different respects.
Recently, I was thinking about a course that I used to teach on a regular basis at my first job—an undergraduate ethnomusicology survey course called Musics of the World. This was mostly a course that non-music majors took in order to fulfill their diversity or world cultures requirement. By the way, it’s unfortunately a very distressing sign of the times that this kind of requirement is being eliminated at some US universities.
Anyway, in that course, one of the assignments was for students to attend a live music performance, ideally in connection with one of the units covered in the course. For many of the students, this would likely be the first time attending a concert of non-Western music.
I would coordinate with some of the affiliated performance faculty in our department, as well as the Arts Center on campus, to highlight certain concerts and even arrange discounted or free tickets for the students. And oftentimes I was able to integrate performances by guest musicians directly into the course.
After attending a performance of their own choosing, students would have to submit a concert report (along with a photo of their ticket or the program) and reflect on their observations.
In my mind, the learning objectives of this assignment were to: 1) have students engage with a performance event that they might not otherwise have been exposed to, and that might have been a little outside their comfort zone; and 2) use the tools introduced in the course to analyze the concert in terms of the musical sounds they heard and the setting of the event, and to do some light research on the cultural significance of the music.
How would this kind of assignment fare today? If students wanted to use AI to fast track this assignment, it would be very easy.
A student could ask generative AI to create a concert report that would hit all of the marks for an A grade. If a musical group has a regular program that can be found online, along with program notes—it would be all the easier for ChatGPT to create a concert report with a lot of specific details.
So knowing this, would an ironclad rule blocking the use of AI work in this case? It might have worked two years ago, but it would be pretty hard to enforce today. I know that some instructors have integrated in-class writing activities to get around AI usage in homework assignments. But that kind of activity would be quite difficult to manage in a large course of over 100 students, and with this particular concert report assignment that I just described.
At the end of the day, my main pedagogical intention for this exercise was to have students engage with a musical culture that is likely different from their own, and to go in with a sense of curiosity and respect for what they were about to hear and observe. So, if I were to teach this class today, I’d make sure to articulate this clearly to the students. And to be honest, I’m not sure right now how I would modify this assignment in light of the ubiquity of AI.
I wanted to share this example with you because I sense that many of you are faced with similar dilemmas, where the usage of AI by students is really challenging the kinds of assignments and assessments that you have spent years designing.
Besides all of this, let’s also acknowledge that there are some serious ethical dimensions to consider as well. Like the environmental destruction caused by massive data centers. And big tech’s exploitative labor practices in places like Kenya, where extremely low-paid data workers are forced to review disturbing internet content in order to train AI. Moreover, there is the training of AI on copyrighted books, maybe books that you yourselves have published, without your consent.
There are plenty more issues that we could talk about today—like surveillance by big tech and the government as well as what scholar Ruha Benjamin has called the New Jim Code—where racist habits and logics are built into technological systems. But for now, I’m just going to go ahead and put links in the written transcript to relevant articles and books that you may want to check out.
On the flip side, there is a lot of pressure on you to uncritically adopt this new technology, coming from big tech, from some of your own institutions, and from AI evangelists. You may start to feel a sense of FOMO (fear of missing out) if you don’t acquiesce right now.
For one, there’s the argument that professors need to engage with AI literacy because today’s students are already using it and need to develop critical engagement skills for their own futures. And then there’s another kind of argument: that certain aspects of AI can have a democratizing effect by lowering barriers to access, especially for students from marginalized backgrounds.
There is just so much to chew on here.
I think it’s worth it to pause for a moment and to get some clarity on what your own values are in relation to AI. And then branching out from there, it can be helpful to check in with your pedagogical values as they relate to AI usage in the classroom.
Check in with Your Pedagogical Values When It Comes to AI: 10 Reflective Questions
Here then, are 10 reflective questions for you.
1. How do your core values align (or not align) when it comes to AI?
2. What are your core pedagogical values that remain consistent over time?
3. What are your concerns when it comes to the usage of AI?
4. What possibilities do you see when it comes to the usage of AI in your teaching?
5. When it comes to AI, what are you open to?
6. When it comes to AI, what is non-negotiable for you?
7. In your course on _____, what do you want the ultimate takeaway for your students to be?
8. Related to the previous question, how does the usage of AI by students challenge that takeaway?
9. Also related to question 7, how does the usage of AI by students enhance that takeaway?
10. If you were to revisit your teaching philosophy statement right now, what would stay the same and what would you revise?
I hope that some of these questions can be helpful for you as you think through how you’re going to navigate this tricky time in the classroom. I have a feeling that I’ll be returning to this topic in a future podcast episode, so feel free to reach out to me if you have thoughts, insights, or suggestions. I’d love to hear from you. You can e-mail me at Katherine at RisewithClarity.com.
That’s it for today.
Stay strong and stay well.