
Technology and Learning Research (AARE)
This podcast series on technology and learning research aims to be fun, engaging, and accessible to a wide audience, including those outside of academia. By producing high-quality, entertaining content, we hope to raise awareness of the value of technology and learning research and promote its importance to broader society.
Generative AI in Education: Transforming Teaching, Learning, and Assessment Practices with Dr Hilary Wheaton
Dr Hilary Wheaton is the Principal Advisor for Educational Practice in the Education Portfolio at RMIT University. Hilary plays a key role in shaping university-wide initiatives, recently focusing on curriculum design, academic integrity, and AI strategy. She developed RMIT’s Academic Integrity Framework and the institutional response to GenAI technologies like ChatGPT. In 2024, she co-authored RMIT’s AI Plan, providing an institution-wide narrative for AI in addition to the approach for AI in Education. Her research bridges teaching practices and technology, drawing on a background in internet and cultural studies; alongside her professional role, she has periodically published work on topics such as education, persona and computer games.
Ellie Manzari: Hello and welcome to the Technology and Learning Podcast series. I'm Ellie, a member of the AARE Technology and Learning special interest group. Today, we're diving into an exciting topic that's reshaping education as we know it: how institutions are engaging with generative AI in education. We'll be exploring RMIT University's approach to AI in education and the way they're supporting educators to integrate this cutting-edge technology. Joining me today is Dr Hilary Wheaton from RMIT. Hilary, welcome to the podcast.
Hilary Wheaton: Thank you, Ellie. Great to be here.
Ellie Manzari: What approach has RMIT taken to GenAI in Education, and how does this differ from some of the other institutional approaches across the sector?
Hilary Wheaton: Yeah, that's a really great question, Ellie. And I think it's important to note that obviously all institutions are responding to the challenges of generative AI at the moment, not least because TEQSA, as our regulatory body, issued a request for information this year to encourage institutions to develop a planned response and share that across the sector. So RMIT obviously has been pretty active in this space. We had quite an early initial response, put that through our Academic Board, and circulated it to our community in February of 2023, and we have continued to evolve that narrative over time. But very early on, in that February 2023 response, we set a clear expectation that we weren't going to ban generative AI at RMIT, and we weren't going to view it as a tool that was framed exclusively through an academic integrity or misconduct lens. Instead, as an institution, we acknowledged very clearly that it was part of our existing strategy to make sure that students were capable for the world of work and life that was emerging as these technologies started to change industry, expectations and practices, and therefore it was something that we needed to respond to in that way. Both as a technology that could be used for learning and teaching and supporting that practice; also as a legitimate tool that our industry collaborators and partners would want to understand and explore with us, and see us engaging with and directing students to use; and also as, I guess, a key influence on ongoing practices around assessment, which obviously has been a big discussion in the sector about how we continue to evolve the way we assess students and make sure that they're ready for the workplace and the challenges of our society, but also, you know, some of those long-standing concerns that do exist for academic integrity.
So one of the things RMIT was well positioned with is that we had recently undertaken a change to our curriculum design approaches, and that led to new strategies for how we put together programs that align to industry and align to our aspirations. A part of that was what we framed as the RMIT capabilities, which replaced our graduate attributes. And those RMIT capabilities became really important as part of our response to generative AI, because they allowed us to say that through all of our programs, all of our disciplines and the curriculum we were developing, there were particular ways that we could frame legitimate and authentic engagement with these technologies as part of that discipline. So there were 3 key capabilities, from a set of 6, that we really used, and the first one was about being an ethical global citizen. These capabilities are embedded through our program learning outcomes; they shape our activities, the way we conduct assessment, and the way we think about every discipline and curriculum. And ethical global citizenship allowed us to think about, you know, things like Indigenous perspectives, sustainability, and ethical considerations, which obviously have been a key part of the discussions for generative AI as well.
We've also got digitally adept, which is another clear capability that speaks to the opportunities for our students and our staff to engage with emerging technologies and be capable of using them in the right way and for the right purposes and outcomes. And then we have critically engaged, which is, I guess, essential in the context of generative AI when it could be producing information that is incorrect or leading our students astray; it's about making sure that there is intellectual independence and the ability to source evidence, form judgments, and make evidence-based decisions. So this was very early in our framing, and it has continued on. And this has led to bringing together an AI plan more recently, as part of that TEQSA engagement, where we've actually framed a full institutional narrative around how we're approaching AI. So not just as a technology or disruptor, but as a key part of our institutional strategies, such as our education plan, our RMIT strategy and our research strategies. And for that we've got 4 principles that are really about helping us frame our mindsets around an ongoing change piece, and seeing this as a cultural change and a cultural strategy. Those principles also help us focus more specifically on a series of 5 distinct action areas, where we can set priorities for how the institution then responds to manage risks, seek opportunities, and engage in constant evolution of a lot of our business-as-usual processes, whether that be the design of curriculum, bringing in resources to support student and staff capabilities, or even thinking about, you know, some of the operational considerations for admissions or governance processes and things like that.
So that has broadly been our institutional approach. It's very much been holistic and anchored in our strategies, and while it addresses academic integrity, that hasn't been the central focus of how we've engaged in discussions. And I think that has set us a little bit apart from some of the other institutions, because we got there very early on, and that's definitely helped to bring a lot of people along on the journey.
Ellie Manzari: That's fantastic. It's clear RMIT is leading in many ways. So let's talk more about how this approach extends into assessment. Can you tell me more about RMIT's approach to AI and assessment?
Hilary Wheaton: Yeah, absolutely. Look, while I've said that academic integrity has not been at the forefront of our driving view, it's obviously a key concern, and there's no doubt that AI is going to change how we conduct assessment.
So a few things have happened. One is that, because we have those capabilities that drive our curriculum, and because we've taken a more holistic view around AI, we haven't instituted any kind of direct rules that say you must engage with AI here in a program, or that you must assess in particular ways. What we've really focused on doing is empowering our educators to make appropriate decisions about when and how they introduce AI to their students. So one of these pieces has obviously been to set very clear guidelines for educators that they can then communicate to their students about whether or not AI is allowed to be used in assessment, and we've got 4 guidelines that really allow this response to be communicated. One is that there's no use of AI in a particular assessment. Another is to allow any use of AI as defined by the educator. And then the 2 really pivotal ones are about whether a student can use a specific AI tool to complete an assessment, or whether they can use that AI tool for only a portion of the overall assessment process. And this gives a huge amount of scope for educators to understand the relationship between these tools and the outcomes that they're trying to achieve. For those guidelines, we have some templates that really encourage our educators to explain the choice that they're making and set out a real rationale to students about the use of AI and the benefits it will bring to them. And that is, I guess, a key part of being very educative in our approach to academic integrity, and being really authentic and transparent with our students about the value of these tools, and whether or not using them is going to be in their best interest, both for their future work and career and through their learning journey.
There are obviously criticisms about how you set that framing of the use of the tools, and whether we can actually enforce it and address things around academic integrity. And one of the things that we're doing is not to take a kind of secured-or-not-secured view of assessments, but instead to look at how we approach assessment redesign based on the outcomes that we have for that program or that discipline. So we're really wanting to engage in a very targeted approach to that redesign of assessment that allows us to understand: if you're using AI in assessment, how is it really bringing out and changing those outcomes for the student? How is it bringing out what the student can demonstrate, and how are those outcomes reflected as part of a human-using-AI continuum? So this is really about highlighting the process of learning and the student's ability to learn with those AI tools, not just their ability to complete a test or an assessment in a particular context. And for this, we're thinking about the critical courses that exist in programs, and using things like our majors and our minors to help identify those critical courses, and then thinking about the redesign of assessment in there, the security of that assessment, and the different types of methods that could be used that specifically include AI and scaffold that skill. So security isn't just about excluding AI as a tool. It's actually about saying, let's redesign those assessments in those contexts and really understand how we bring AI into that learning journey.
So that's really the approach that we're looking at. To give an example here: one of the common ways for students to be assessed is to ask them to do a presentation on a particular topic or case study, to demonstrate their knowledge and skills and also to really emphasise their ability to communicate effectively. So if you think about trying to secure that assessment in relation to AI, you might say, well, there's no point in a student doing a presentation online anymore, because even if they had their camera on, they could have used AI to generate a fully interactive digital version of themselves and be leading that AI in responding, delivering the presentation, structuring the presentation, you know, creating the slides, creating the videos, creating the artefacts, as well as being responsive in that moment to any questions. And so you could say, right, we can definitely never do any presentations online anymore and consider that a valuable form of assessment. Which means that you might then shift to saying, well, we need to get the students into the classroom, presenting and communicating in person; then we can guarantee not only that they've got those communication skills, but that they're able to deliver that presentation and answer questions. But once you do that, really the only outcome that you're truly able to verify they have demonstrated in that environment is their ability to communicate in person and physically be there. Because unless you lock them in a room for that entire time without any access to AI tools, that entire presentation, even when they're delivering it in person, could have still been prepared and developed using AI tools.
So when it comes down to the actual outcome you're able to assess, you're really limiting yourselves. And you have to ask, is that really appropriate anymore? Or would it be more appropriate to acknowledge that we can use these sorts of tools to improve the way we present information and craft a narrative, and potentially even help students who may be neurodivergent and might struggle to present in a live environment like that? These tools could actually help them in a particular way and reveal the skills and the learning, provided they are brought in responsibly, with a lens of integrity and security applied to that process. So it's a subtle shift in thinking, and it's still one that is heavily informed by integrity, both from an academic lens and an ethical lens, and by a consideration of integrity more broadly around how we might imagine future working scenarios with AI and the ethical use of these tools. But it is a bit of a distinction in the mindset that we're trying to foster as we approach the disruption that AI has brought.
Ellie Manzari: It sounds like these changes are having a big impact on how assessments are evolving. But as we know, the success of AI often depends on how educators themselves are supported to use it effectively. How is RMIT supporting educators and academics with their engagement with AI?
Hilary Wheaton: This is obviously a really big piece. It's not something where we're all going to suddenly become experts in AI, or experts in redesigning our curriculum and how we might approach assessment. And it's also something that is going to be very personal for our educators. There are going to be those who are really highly engaged with these technologies, exploring with them, innovating with them; they're going to be our leaders. And there are going to be some who don't want to engage with AI for a whole host of reasons, some of which are very personal and ideologically based. You know, they have concerns around the sustainability of these tools, concerns around data, and concerns about the motivations of some of the big players developing these AI tools. And these are really valid concerns, and we're not seeking at RMIT to silence those voices. We think those voices are really important as part of our educator and academic cohort. What we're encouraging those educators to do is to bring those voices into the classroom and really explain to their students, and show them, the reasons why they have this hesitation about engaging with these tools. Because that's a very important learning experience as well, and something that can be demonstrated by showing the risks of these tools in those learning contexts.
But basically RMIT is doing a whole range of things to surface all of these different opinions and take advantage of where our educators are at, and where we are at, as we respond to the technology. We've got a community of practice that has a large number of academics across the institution as members. Those community of practice events are hosted every month and are recorded; there are guest speakers who come in, either external or from within the university, showing emerging practices and emerging tools and talking about challenging topics. And that has been a really great environment for educators to join, talk about the challenges that they're facing, but also get ideas and inspiration and keep abreast of how quickly the technology is moving. One of our internal leaders in this space, who actually hosts that community of practice, does a great job of providing a weekly update that summarises the latest changes in the technology, giving members of the community a very quick view of the new tools or new models that have been developing, and some of the implications. As part of that, we also have annual events dedicated to looking at AI in learning and teaching and the emergence of changes in practice. These are what we call our annual showcases. We did one in 2023, and obviously one more recently in 2024, where we bring the community together, professional, academic and research staff, to hear from educators and professional staff across the university about some of the strategies or initiatives that they've been developing with AI. Whether that be around changing their course or their curriculum, exploring a different approach to assessment, having critical engagements with their students, or, say for example, from our professional staff, developing a new tool with AI that could support learning and teaching, or support student services and the overall student experience.
We've also been running online and face-to-face workshops. And this has been a great opportunity to get educators and program teams together to really unpack some of the challenges, the perceptions, and the specific focus of their curriculum and their discipline, and how that has been shifting as part of the AI discussions. These workshops have been a mix of providing some basic upskilling in what AI is and the tools that are available, as well as looking at applied situations where we can redesign that curriculum together in those workshops and share some perspectives on that. And there's an online course that provides some of that detail, which our educators can refer to as they need. But a key part of this entire piece is something that we have developed called our RMIT artificial intelligence assurance of learning typology. This typology really underpins the entire process for educators as they explore whether AI has a place in their curriculum, and where that can be structured through activities, assessments, case studies, and the use of particular tools. The typology has 7 themes that guide that decision making. It's based on a cooperative paradigm that was originally established for group work, which we have adopted as a lens for how to approach AI and consider the cooperation between the educator, the student and AI as working together to help change the experience. And this has been a really handy resource that educators can quickly look through: they're prompted to consider particular concerns, given a series of reflective questions that can help them in those discussions, and linked out to a variety of resources across the institution that relate to those particular reflections or questions. So it's a kind of handy, quick guide that is trying to connect our educators to what is obviously quite a wealth of information in this area, but do so in a far more manageable way.
Ellie Manzari: Oh, thank you very much, Hilary, for sharing your insights. It's been such a pleasure having you on this podcast, and I'm sure our listeners gained a lot from this discussion. To our listeners: if you found today's conversation engaging, don't forget to share it with your colleagues and network, and stay tuned for the next episode. Until then, stay curious and keep exploring new ideas. Bye for now, Hilary.
Hilary Wheaton: Bye for now, Ellie, and bye to your listeners.