Designed for Learning

Navigating AI’s Evolving Role in Teaching and Learning

Notre Dame Learning Season 1 Episode 4

Although artificial intelligence has been part of higher education for a couple of years now, faculty are still struggling with what this development means for themselves, their students, their courses—and especially their assessments.

Notre Dame Learning recently launched the Lab for AI in Teaching & Learning (LAITL), led by Alex Ambrose of our Kaneb Center for Teaching Excellence, to help instructors navigate this terrain. Alex is an eloquent spokesperson for the argument that by building their AI literacy and taking advantage of the opportunities it provides, faculty can expand student learning and even make it more equitable.

But are faculty buying it? And a deeper question: Does everyone need to embrace AI? Or are there times and places where we shouldn’t be welcoming it into our lives and our courses?

Fresh off hosting several campus AI workshops together, host Jim Lang and Alex discuss these issues, AI at Notre Dame, and a variety of helpful resources for faculty.

Key Topics Discussed:

  • The experiences that have led Alex to become a cautious optimist/power user of AI, a path informed by his long-standing concern over technology’s impact on student learning
  • What’s happening right now at Notre Dame with respect to AI in teaching and learning, including the availability of Google Gemini to all faculty, staff, and students and an AI academy for faculty
  • The case Alex would make to a skeptical colleague about AI, one that is centered around empathy, literacy—and a very practical example
  • The relationship between the two AIs, artificial intelligence and academic integrity, and the results from a survey of Notre Dame students
  • Resources to help instructors articulate AI policies for their courses and assignments (see “Resources Mentioned” section for links)
  • Imagining next-generation assessments that push students to go beyond just creating a final product
  • An example of how Alex is starting to see AI assist faculty with assessments

Guest Bio: G. Alex Ambrose is a professor of the practice in Notre Dame Learning’s Kaneb Center for Teaching Excellence, where he serves as program director of assessment and analytics and leads the new Lab for AI in Teaching & Learning (LAITL). His work has been published in a range of academic and technology-based journals and earned him the 2015 Campus Technology Innovator Award as well as recognition by Google, IBM, USAID, the Bill and Melinda Gates Foundation, and the National Science Foundation.

Resources Mentioned:

  • The Generative AI Acceptable Use Scale (the "traffic light")
  • The AI menu developed by Danny Liu of the University of Sydney
  • Alex's AI acknowledgment form
  • 'Next Generation Genres' by Jessica Singer Early

Designed for Learning is hosted by Jim Lang, a professor of the practice in Notre Dame Learning’s Kaneb Center for Teaching Excellence and the author of several influential books on teaching. The podcast is produced by Notre Dame Learning’s Office of Digital Learning. For more, visit learning.nd.edu. You can also follow Notre Dame Learning on LinkedIn.

(intro synthesized guitar tunes)

[DR. JAMES M. LANG] Welcome to 'Designed for Learning,' a podcast from Notre Dame Learning. I'm your host, Jim Lang.

(upbeat synthesized guitar tunes)

We are recording this episode in March, and I'm wrapping up a week of co-hosting workshops for faculty on artificial intelligence with today's guest. Now, although AI has been part of higher education for a couple of years now, the mood at these events remains somewhere between curious, apprehensive, and sometimes just downright befuddled. People are really still struggling with what this development means for themselves, their students, their courses, and especially their assessments. My guest has been an eloquent spokesperson for the argument that faculty need to build their AI literacy and embrace the ways in which AI can expand student learning and even make it more equitable. But are faculty buying it? And a deeper question: does everyone need to embrace AI? Or are there times and places where we shouldn't be welcoming AI into our lives and our courses?

Today's guest is Alex Ambrose, who serves as a professor of the practice at Notre Dame Learning's Kaneb Center for Teaching Excellence. He is also the founding steward of the Lab for AI in Teaching and Learning. He holds concurrent appointments in two departments, Education, Schooling, and Society and Computing and Digital Technologies. In addition, Alex is a faculty fellow at Notre Dame's Institute for Educational Initiatives. He's currently teaching two courses, Learning with Gen AI and Assessment in Elementary Education. His work has been published in a range of academic and technology-based journals and earned him the 2015 Campus Technology Innovator Award, as well as recognition by Google, IBM, USAID, the Bill and Melinda Gates Foundation, and the National Science Foundation. He regularly serves as an international learning ambassador, educational developer, consultant, and evaluator for grants, programs, and universities in South America, North America, the Far East, Europe, and the Middle East.

You took a bit of a winding road to your current position, Alex, so tell us a little bit about where you started and where you are now, especially the new position I know you recently took up.

[PROF. G. ALEX AMBROSE] Sure, Jim. If you don't mind, I'm gonna go back to the late '80s in New Jersey. My elementary school adopted the CTBS, the Comprehensive Tests of Basic Skills, and there was a new AI technology back then called optical mark recognition: basically, the Scantron bubble sheet. And middle school, 12-year-old Alex realized that this machine was gonna be analyzing my responses and comparing me and judging me and ranking me against all my classmates. And I was already a Cuban German with curly hair who didn't really fit in, and I said, 'That's enough.' I rounded up about a quarter of my grade, which was a small school, and got us to fudge the test. We did A, B, C, D, C, B, A, A, B, C, D, C, B, A, and we messed up the school's standardized test. Looking back, I think that's where I first realized how worried I was about these big systems, these technologies, kind of taking over.

The next one came when I had just gotten back from Baghdad. I was a school teacher in inner-city Detroit, and we were getting a lot of pressure from the No Child Left Behind standards to move kids along these metrics. And there was something called the SRI back then, basically a diagnostic reading test.
So, I was getting a lot of pressure. Some of my fourth graders were reading at a first- or second-grade level, and I had these incentives hanging over me, like six-month contracts or bonuses tied to whether I moved the needle. Luckily, I was able to get them to a fourth- or even fifth-grade level by the end of the academic year. But again, I didn't think it was really fair or humane to put that kind of pressure on the kids, even though we were getting some good results.

Fast-forward maybe another ten years, to one of my second or third jobs here: I'm proud that I was able to get the last Scantron out of Notre Dame. They packed it up and out. And I think my work with ePortfolios and next-generation assessment played a role in that, in making sure faculty saw there were other ways to assess besides the bubble sheet or the traditional blue book.

And then the last chapter, I think, goes to around 2020, the pandemic. I had a kindergartner at the time, and she was displaying some of these same rebellious educational behaviors. We realize now, because she was later diagnosed, that the reason it was so hard for her to sit through the Zoom sessions and sit with i-Ready, not filling in a bubble sheet but clicking on the screen for the next answer in adaptive learning, was that she had ADHD. And then I realized maybe I should get tested for that too. At 40, I learned I have ADHD, and that explained a lot to me about why I had to work a little harder through my education, because it wasn't a natural fit.

So, that brings me to today. I admit, I am an AI cautious optimist. I've been a power user of AI, at the premium level, for the last two years, really pushing it to its max to see how it can help me. And I've realized it's a cognitive enhancement, an augmentation. It helps me keep up and catch up, and also collaborate and contribute at a much, much higher level than I ever thought was possible.

[DR. JAMES M. LANG] That's an interesting backstory for someone who works in educational technology. There's actually been a lot of resistance to technologies in your life, and then you came around to really see the power of them for yourself, your children, and your students, right?

[PROF. G. ALEX AMBROSE] Sure. Yeah, if done right.

[DR. JAMES M. LANG] If done right, okay, good. So tell me, there's a lot happening right now at Notre Dame with AI and technology. Before we start talking about the bigger issues, give me an overview of what's going on here and what you're involved with.

[PROF. G. ALEX AMBROSE] Sure. I think the biggest news that I'd like to share, and I think it's safe to share based on the timing of the release of this podcast, is that I'm proud our institution made a decision: we're releasing Gemini to all faculty and all students. I was here back in 2008 when Notre Dame was one of the first universities to turn on Google Gmail and Google Docs, and I'm really proud that we're doing it again. We're gonna be one of the biggest, earliest adopters to make this bold move of providing AI for all students using Google's tool, Gemini. I'm really proud of our decision, and I believe we're doing it in the right way for the right reasons. Number one, it's an equity issue. We saw from some of our surveys that 15% of the students don't know anything about AI, while some of them have premium access, so there's a big gap in who has AI and what access they have to the tools.
The second is privacy. Because we're partnering with Google, we're gonna get much better privacy controls than we would if we were using these free tools and handing over data to the companies. And the third is the recognition that we need to give our faculty and staff and students a platform on which we can begin developing these new literacies, to adapt to what learning means today and to what students will face when they go out into the workforce.

[DR. JAMES M. LANG] Yeah. And so, you talk about literacies, and you're also doing a bunch of work to help build those literacies around campus, right? So give me an overview of that too.

[PROF. G. ALEX AMBROSE] Yeah, yeah. Another big project we're working on is the Notre Dame Teaching Well with AI Academy, NDTWAA, we call it. This is a brand-new activity that we started. We're meeting once a month with 28 different faculty, all from different disciplines and colleges and dispositions. So we didn't just find all the AI optimists and power users; we deliberately recruited some AI skeptics and cautious adopters too. We gather together once a month to talk about the emerging discipline-based strategies we could begin to think about, and how we need to adapt our teaching and our assessments. We're helping the faculty increase their literacy and training them to help improve the literacy of their students. So yeah, that's a big new program we're working on this semester.

[DR. JAMES M. LANG] Okay. So let's start getting deeper into AI. When I first came to Notre Dame two years ago, AI was still very new, and I was definitely skeptical, right? I thought it spelled doom for higher education going forward, for its value for learning and education. And so you and I had a lot of conversations, and while I still remain cautious, I found your arguments pretty compelling. So I wanna give you a chance to make your case to a skeptical or even resistant faculty member. Why should I engage or even experiment with AI in my teaching, or even in my life?

[PROF. G. ALEX AMBROSE] Sure. I remember those early conversations. We were just getting to know each other as colleagues, and I really did then, and still do, appreciate you pushing me on these very difficult questions. I remember we were talking about one of your pieces, I think it was a Substack post or a column in 'The Chronicle,' about your decision not to use AI coming out of your recovery, and I said to you, 'That's great that you don't need to rely on it, but you're already an established writer.' You're a very good writer, and you have those skills. There are some people like me, who I now realize have ADHD, who really struggle with writing. And although you may not need it, some people like me, the non-neurotypical, could benefit from assistance with organization or with voice-to-text transcription. So that would be my first thing: sometimes I think we forget as faculty that not all our learners are gonna be super learners like we have been. We stayed in school, got our PhDs, and a lot of our students are not gonna have that ability to become the experts we have become and to develop the skills we have. So my hope is that faculty would realize that maybe students can learn how to close the skills gap and be able to do some of the high-end research, critical thinking, and writing that took us years to develop, but to do it in an ethical way, with guidance from the faculty. So that would be the first one.
Again, not all of our students are gonna get PhDs, but how can we help our students here become even better writers, thinkers, and researchers using this powerful tool in an ethical way? So the first is empathy.

For the second one, I'm gonna go back to literacy. As a learning technology scientist who's been working with assessment for years: when I first came to Notre Dame around 2008, I was doing the ePortfolios, working on digital literacies, trying to teach digital citizenship, and helping faculty think about how we could make our students not just consumers on the web but producers, how we could teach them to engage civilly online and to fact-check. And I look back, almost 20 years now, 17 or 18 years, and I think some of us in higher education may have failed a little bit at upskilling a generation over the last 15 or 20 years. I look at what happened to our country on platforms like Twitter and Facebook, the way we talk to and treat each other, and the disinformation, and I wonder, if we had done a better job with digital literacy back then, whether our country might be in a better, more healed, unified space. So that's the other thing I would share with faculty. I know it's uncomfortable, I know you don't have time, or may not think you have time, but this is not just an educational issue. I think it could be an economic, maybe even a national security issue: our future students, our next generation, need to be competitive. They need to understand what does and does not work with these powerful tools. And who better to do that than our professors, especially in fields like the humanities, where they have the deepest critical literacies and are trained to figure out what does and doesn't work well, what is strong or ethical. Those would be my two big invitations for faculty thinking about joining in.

[DR. JAMES M. LANG] They're powerful invitations. When I hear you talk, I say, okay, yes, I get it now, right? So it's very convincing. And in fact, I always go back to something you told me about your own writing process. I was really focused on, for example, organizing your ideas, taking your thoughts and putting them into an outline, something we might assume has to be done in an analog way, with just our brains. And you gave me a great example of how your writing process sometimes uses voice-to-text transcription. So just remind me of that, how you do it.

[PROF. G. ALEX AMBROSE] Well, I actually did it this morning. I was thinking, I know Jim likes stories, so let's see how I can turn my bio into a story. On my way driving in today, I did a voice-to-text transcription, speaking some of these things out loud, and it was just a jumbled mess, me talking to myself for my 15-minute commute. Then when I got into my office, I took that jammed, scrambled voice-to-text, copied it, and pasted it into, I think I used Grok this morning, and I said: take my voice-to-text transcription, clean it up, organize it, and put it into some bullets so I can tell a cohesive story. And in a matter of seconds, it took those two pages of garbled text and gave me a couple of bullets, with the keywords bolded, to prepare me to share that story. So that's one example of how I use AI to assist with my writing.
I've got a lot of ideas and my brain's moving really fast; I'm just trying to get it all out, and then I ask AI for help to organize it, synthesize it, condense it, and shape it in ways I can communicate a little more effectively.

[DR. JAMES M. LANG] And I asked you to share that example because I think faculty sometimes need examples to see how AI can be used in positive ways. I think many of us are just saying, I only see the interference; I don't see the enhancement of the process. And examples like that show people the enhancement possibilities, right?

Okay, so one of the other events we did together this week put the two AIs together: artificial intelligence and academic integrity. And I think that's one of the areas where people are really focused. The concerns they have about AI come from the sense of, how can I do the things I used to do in this new landscape, with these tools available to us? It was quite interesting to see how people were reacting to that, the concerns they raised, the fears they had, where they saw possibilities and where they didn't. For a faculty member who has those kinds of concerns, where should they be looking to go forward with their teaching and assessments?

[PROF. G. ALEX AMBROSE] Yeah. We had some fun this week. What is it, three talks we did?

[DR. JAMES M. LANG] We did, yeah, yeah.

[PROF. G. ALEX AMBROSE] Two new faculty academies and one workshop on academic integrity with Ardea Russo, the director of our Office of Academic Standards. And it was fun to engage with you and her. Her training's in theology, and she's our number one authority on the ethical code of conduct. Your background is in pedagogy and philosophy, mine is in technology, and I learned a lot too from working with the faculty--

[DR. JAMES M. LANG] Yeah, likewise.

[PROF. G. ALEX AMBROSE] From the three of us together, and from the discussions we facilitated. I think you're right. At the top of the podcast, you mentioned there's been a bit of a shift two years in. I think faculty realize that the genie's out of the bottle. It's not going back in. OpenAI didn't give us an instruction manual, and it's time for us to start facing it. We can't avoid it. We can't ignore it.

Some of the things I found interesting from the reactions we got: again, our lab worked with a bunch of really fantastic faculty to do that census, and we started the workshop by sharing some stats on what's going on at this campus. We have ideas, we have thoughts about how badly AI could be impacting academic integrity, but we actually asked, with an anonymous survey. We surveyed all of our second-semester first-year students taking the required Writing and Rhetoric course, and we had almost 400 students respond. In that survey, we asked: do you understand the university policy? They said yes; I don't know if they do. Do you understand your course instructor's policy? They said yes when they were given one, but we found that a lot of them aren't getting any guidance at the course level. Again, our university has a policy that really defers down to the faculty: whether they want to ban it, allow it, or, like you and I talk about, something in between, using it more strategically in certain contexts.
And from this survey, it was interesting to see the reactions of the faculty when we asked, 'What percentage of the students do you think always use AI, even when they're told not to for an assignment?' To their surprise, not yours, 'cause I know you wrote a book about this and you're familiar with the literature, only about 3% of the students said they always use AI even when they're not supposed to. On the flip side, 66.7% of the students, almost exactly two-thirds, said they never do it. And then, interestingly, of the remaining third, half said sometimes, and half were unsure. And we realized, A, the population we're worried about may not be as big as we feared, and B, we can work on that third. We can be more transparent and more clear about our expectations for what is and is not acceptable. I'm gonna turn it back to you. Do you wanna talk about the thumbs up, thumbs down poll and the questions you asked the faculty?

[DR. JAMES M. LANG] Yeah. This is one of the things we've done in a couple of different places, which I think is very thought-provoking for the people in the room, to see how other people are using AI and what their perspectives are. So we'll ask them a series of questions. For example: in your course, is it acceptable for students to take a thesis, put it into a prompt, and ask AI to give them three different versions of that thesis? And the faculty would respond thumbs up, thumbs down, or, for maybe, somewhere in the middle. We would have like five or six of those questions, and it's fascinating to see. Thumbs are all over the place.

(both laughing)

They're up, down, in the middle, and across the room. And I always say to people, look around you, look around you, right? Because I might be thinking one thing, yeah, I would never let anyone do that, but then I look around and see five people saying yes to it. It's very enlightening to see that our opinions are all over the place in terms of what's acceptable for AI in our course-level policies, as you say. We have this sort of philosophical policy at the university level, but when we bring it down to the course level, things really get challenging. And so after we do that, we try to say, okay, given that, you can see that students are getting different policies from course to course, right? Imagine a student going from course to course and saying, okay, in this last class, I learned where I should and should not use AI, and in this class, it's completely different. And so as a result, I think we agreed that we have to show faculty some ways to articulate their policies to their students, and to figure out what policies make sense for them and for their discipline. And actually, you have a couple of tools that you've introduced to faculty, two different ways to think about how to articulate that policy. So, you wanna share those?

[PROF. G. ALEX AMBROSE] Yeah, yeah. Again, I loved your questions; you were breaking down the writing process. You asked them about brainstorming: 'Is it okay to use AI? Thumbs up or thumbs down?' Revising and making your words more concise, bringing it down to 2,500 words, restructuring. You asked them about very specific steps of the writing process, and again, it was all over the place.
So, the tool you're referencing, one that's beginning to emerge in the literature, is called the Generative AI Acceptable Use Scale. I call it the traffic light. It's basically a granular red-to-green scale, where red is no AI and green is full AI, but with levels in between, like the ones you were breaking down in the writing process. It's okay to use it for brainstorming; it's okay to use it for light editing, grammar, and spell-check. But then it goes into the yellow, which may or may not be okay: changing the tone, restructuring things, or refining someone's thesis. So we shared the traffic light with them, and that seemed to be a good starting place, a map for us as faculty to use with our students assignment by assignment, not just a one-size-fits-all policy--

[DR. JAMES M. LANG] Yeah, that's important, right? It's not only a course-level policy; each assessment actually has to have the same kind of--

[PROF. G. ALEX AMBROSE] Expectation of use. Acceptable use.

[DR. JAMES M. LANG] Yeah.

[PROF. G. ALEX AMBROSE] Yeah, so the traffic light, I thought, got a lot of traction, but I think some of them actually preferred the second one, the AI menu, which gets a little more nuanced. This comes from a really great professor at the University of Sydney, Danny Liu, I think his name is. He developed this, and he calls it the AI menu. He lays it out like a restaurant menu: coffee, appetizers, soups, bread, main dish, and dessert. And under each of these subsections of the menu, there are different things. Coffee would be just light editing and spell-check; a main dish would be rewriting full paragraphs; an appetizer might be help with brainstorming. So another thing we presented to the faculty was: maybe you could, A, tell the students which items on the menu are available for them to use or not, or, B, and one of the faculty participants had this idea, not offer all of them, but focus as an instructor on one part of the menu. For this second paper, now that you've done a paper by yourself, I'm gonna show you what ethical AI-assisted editing looks like, and I'm gonna teach you that skill, and we'll bring that into the second paper. And then for the third paper, we'll bring in another part of the menu. So I thought the AI traffic light and the AI menu were pretty helpful for faculty in understanding how they could be more transparent and clear with their students about their assignments.

[DR. JAMES M. LANG] Yeah, and by the way, links to both of those resources will be available in the show notes. When you look at those two options, I think the first thing you have to do as a faculty member is think, okay, but what do I think about this assessment? Where does AI belong or not belong in it? And when you've done that, then you can share it with your students, right?

[PROF. G. ALEX AMBROSE] Exactly.

[DR. JAMES M. LANG] And I think that's the idea: developing my literacy as a teacher, but also the students' literacy about AI. That's where we have to get to. And then another resource that you've developed, which I think is also worth noting here, is the AI acknowledgment form.
So once you've decided, for example, that AI can be used in this part of the writing process, or in developing a presentation, whatever it might be, how do you help the student understand how to analyze that use, cite it, reflect on it, and learn from it? Tell me what you've developed there as well.

[PROF. G. ALEX AMBROSE] Yeah, yeah. The story behind the tool comes from teaching this course called 'Learning with Generative AI,' which I've taught for the last two semesters. One of the things I learned when teaching the students some ethical AI literacies is how they could document their use of AI. So, just as we ask students to do citations, footnotes, or APA bibliographies at the end, I showed them that there are some emerging standards, in MLA, APA, et cetera, for doing a citation when they use AI. I also showed them how they could share the AI chat log. A lot of people don't know that when you have a chat with AI, you can share that whole exchange with somebody else; you can create a link just like sharing a Google Doc. So once I taught them the skills of documenting their AI use, I said, 'Go ahead, use AI for certain reasons and in certain ways for each assignment.' But I had this AI acknowledgment at the end, as an appendix they have to fill out. They do their project, but then they add this little appendix, which has them make a little pledge about what level of use they were authorized for, using the traffic light. We have them include a citation and share the documentation of their chat thread, so I can get under the hood and see exactly which ideas were theirs, which were AI's, and how they collaborated. There's also some reflection in there that has them critically evaluate what did or did not work well. What was my rationale? What was my reason for using it? Would I do it again? Was this helpful? And it's emerged as a pretty useful tool. The students think, hey, great, I don't feel icky, like I'm cheating; it's like turning in my scrap paper for a math test. You can see what's mine. And second, I think it gives the students the ability to show that they can collaborate ethically. They're gonna go into jobs where they may be using AI to help with a report or a presentation, and they need, I think, to responsibly say which ideas are theirs and which are others'. So that's one tool we're thinking about adapting for our assignments: can we add an educational bookend at the back, having students do a little reflection and documentation of if, how, and why they're using AI?

[DR. JAMES M. LANG] And you're teaching a course right now on learning with AI. When students are doing this acknowledgment on an assignment, how are they responding to it? Do they see, for example, what they're learning with it or without it, or learning from it? What do the reflections look like when you get them from students?

[PROF. G. ALEX AMBROSE] Yeah, to be clear, this is not something you have to do on every single assignment, right? You can do components of it, or you can scaffold and build up pieces of it through the semester.

[DR. JAMES M. LANG] Yeah, good point.

[PROF. G. ALEX AMBROSE] But definitely for the big projects at the end, I had 'em do this. And yeah, it was interesting to get into their minds.
Again, we had an AI-assisted research presentation that they made, and on the rubric I was really pushing them to articulate a clear, real research problem. And they said in their reflections that something clicked: they were finally able to push through and get to a really clear, researchable problem statement. Otherwise, they were staying at the big-topic level, with surface-level thesis development. So that was a case where they actually treated AI as a thinking partner, the way they might go to a writing tutor or sit in office hours to really flesh out a topic: not just 'AI in sustainability,' but a really specific topic to focus on. So it was interesting to see which parts of the process they thought were most useful. Again, I started to learn how they're learning, and what was and was not effective, efficient, or ethical for them to use.

[DR. JAMES M. LANG] Yeah, that's great. So you can then take those ideas and share them with the next class, right? To help them understand that these tools can be useful in these particular contexts, that many students have found them useful in these places or these parts of the assignment. That's great.

I wanna raise just one other tool that we've shared with faculty. Another way to think about this challenge between academic integrity and artificial intelligence is something I call next-generation assessments. This comes from a book by Jessica Singer Early called 'Next Generation Genres.' I love this book. It came out just after AI was introduced, so it doesn't address it specifically, but her idea is that as we go forward with technologies, new and old, we have to think about what the genres of, for example, student writing will look like. How will they evolve, given our current technologies and the ways higher education, and education more generally, are changing? So I've taken that phrase, 'next generation genres,' and tried to turn it into next-generation assessments. Where we are right now, what kinds of genres or assessments do we need for the next generation? How do we have students demonstrate their learning in new ways? We tried to share some of that with the faculty as well, and we have to keep pushing forward with it. What's the next generation of assignments? We still might have some more traditional ones that help students develop basic skills, and I think those are still important. But we also have to think forward: how do we create assessments that integrate AI, or partner with it, to create different forms of thinking, new ways to learn and to expand our minds, right?

[PROF. G. ALEX AMBROSE] Yeah. Maybe we can share a little bit of what you said about the artist statement, 'cause I think that was a nice bookend to the front end of an assignment, right? I talked about the documentation and acknowledgment of AI at the back end. Could you share that? I really got a lot out of the idea of an artist statement in front of a product or work that a student did.

[DR. JAMES M. LANG] Yeah. One of the genres that Early talks about is the artist statement. For example, you go to an art museum, and a contemporary artist has a bunch of paintings on the wall, and next to them is an artist statement.
They've written a few paragraphs explaining their method of working, the materials they use, and how well they achieved their vision as an artist in this particular set of paintings. And so she takes that idea and says we can actually append it to any kind of product a student creates. There can be a traditional essay and a writer statement, a presentation and a speaker statement, a project and a project creator statement. So the idea is to have not only the product the student creates, but also a reflection on the process. And I like this idea that we can still have the traditional assessment, but then we append to it, we enhance it, we build out this other statement about it, which produces self-reflection and metacognition. And, as you were just pointing out, it helps not only the student; it helps me see what they're struggling with. As a teacher, I can then learn from that how to help the next generation of students move forward, or even the next class period.

[PROF. G. ALEX AMBROSE] I love that. And I think I shared that story with you: it reminded me of a new professor I met this summer at our new faculty orientation, a sculptor in our art program. And I asked him, 'What are you thinking about? How are you thinking about AI and its impact on the creative, artistic fields?' And he gave me his artist statement, and I realized that's what it was. He said, 'Alex, you know, in my work I deal with clay, and I go back to my home country and source clay from the local area.' That was a whole story I would not have known just by looking at his piece. And that's what makes it human. It tells the whole story, and that's what AI can't do. Those are the un-AI-able skills. And I think you're right: if we push students not just to make a product, but to give that artist statement and tell us what's human about it, what's motivating them, why they're proud of it, that's gonna help us push toward those next-generation assignments.

[DR. JAMES M. LANG] Yeah. Okay, that's great. We're at the end of our time here, so just quickly: what are you excited about with AI going forward? What do you see in the future that might be something we should be paying attention to, excited about, interested in, and following as these technologies continue to improve?

[PROF. G. ALEX AMBROSE] Yeah, I'm gonna go back to assessment. Again, assessment has been a cause of a lot of suffering for me. What gets me excited for faculty is that I think AI is gonna provide some opportunities to make our assessments not just resistant but more resilient in this new age. I think it gives us the ability to upgrade our assignments. It takes a lot of time to make a rubric. It takes a lot of time to make these next-generation assessments. AI could help us create those different and creative types of assignments to supplement our traditional, important assessments. One example that's top of mind: I'm working with the physics department right now, and we're looking at the role AI can play in rubric-assisted scoring and feedback. We're doing some studies, and AI is doing a really good job, just as good as humans within the range of variance, at taking a picture of human handwriting, analyzing complex computational problems, scoring them on a very specific rubric across different dimensions, and giving really detailed feedback, feedback that would take a professor a lot of time to produce.
Again, I get excited for faculty that AI could help them do better assessment moving forward. And for students, I'm excited too. This is a rare moment, not just once in a generation but maybe once in human history, that we've had such powerful, smart tools at our disposal. And I'm hopeful that by coming to a university like ours and learning to use them in an ethical way, they can do some really amazing things with these tools, things that would have taken them much longer or that they wouldn't have been able to do at all.

[DR. JAMES M. LANG] Well, I'm sure you and I have not given our last workshop on AI, Alex.

(Prof. G. Alex Ambrose laughing)

So our conversations will continue here on campus, and also online and everywhere else we speak. Thanks very much for being a guest today.

[PROF. G. ALEX AMBROSE] My pleasure, my friend.

(light outro tune)

[DR. JAMES M. LANG] 'Designed for Learning' is a production of Notre Dame Learning at the University of Notre Dame. For more, visit our website at learning.nd.edu.

(light outro tune)