Technology and Learning Research (AARE)

When AI Meets Ubuntu: Rethinking Power in Academic Writing

Various academics Season 1 Episode 14

Dr Lynette Pretorius is an award-winning educator and researcher who interlaces imagination and insight to reimagine what higher education can be. Drawing on her interdisciplinary expertise, she weaves together diverse ideas, lived experiences, and pedagogical innovations to create inclusive and transformative learning environments. Lynette has extensive experience teaching across undergraduate, postgraduate, and graduate research levels, including supervising PhD students. Her research advocates for more compassionate, equitable, and creative learning environments, with a particular focus on doctoral education, academic identity, student wellbeing, and AI literacy. She is the author of numerous peer-reviewed publications, including two academic books that explore the lived experiences of graduate research students. As a Senior Fellow of the Higher Education Academy, Lynette has also been internationally recognised for her leadership in teaching and learning, particularly her dedication to designing transformative educational experiences that centre wellbeing, justice, and student growth.


https://doi.org/10.37074/jalt.2025.8.2.9

https://doi.org/10.1016/j.iheduc.2025.101038


Ellie Manzari: Hello and welcome to the Technology and Learning podcast series. I'm Ellie, a member of the AARE Technology and Learning Special Interest Group, and today I'm joined by the remarkable scholar Dr Lynette Pretorius from Monash University. Dr Pretorius is an award-winning educator and researcher known for her innovative work in doctoral education, academic integrity, and AI literacy.

In today's episode, we'll be unpacking two of her recent papers, which explore the ethical use of GenAI in research and how AI might support a more inclusive and equitable academic environment, especially for international and multilingual doctoral students. Hi Lynette, it's so wonderful to have you on the podcast today. Thank you for joining me.

Dr Lynette Pretorius: Hi, and thank you for that kind introduction.

Ellie Manzari: To start us off, can you tell us how you first became interested in exploring the intersection of AI, equity, and doctoral education?

Dr Lynette Pretorius: Sure. So I've been working with generative AI tools and how they could be used in higher education since they first came out. And through that, I came to believe that generative AI literacy will be a core skill that all graduates will need to have in the future. So I did some work on how to develop AI literacy. I developed all sorts of AI masterclasses and things that I taught at the university. And then I was talking to my PhD students about all the different ways in which they use generative AI. That made me really interested in seeing the additional potential that GenAI can have for scholars who have traditionally been more marginalised in academic spaces, particularly in terms of the way in which they communicate their ideas. Because academia is very anglophonic, in that it values English communication above all else. This also means it often values more Westernised perspectives, because things have to be written in certain styles. And I was interested to see whether, if you used GenAI in a good way, you could actually use it as a tool to help these scholars express their ideas and their ways of thinking in English that others can understand, and in this way help bring forth some of those ideas that may never be seen in the academy, because they happen not to be in English.

Ellie Manzari: Thank you, Lynette. That's a really thoughtful way of framing your journey into this space. So let's now turn to the first of your two recent papers, where you and your colleagues propose an ethical framework for responsible GenAI use in research. What motivated the development of the framework, and how did your collaboration shape its final structure?

Dr Lynette Pretorius: One of the things with AI use is that there needs to be a way to use it ethically, because there are all sorts of ethical considerations associated with GenAI use. The most obvious are those related to data privacy, the use of author voice, and whether there is some loss of authenticity when you are using these tools.

So I was working with a whole collective of scholars who are interested in GenAI in higher education, in both teaching and research. And we thought it would be good if we could come up with a framework that covers all the ethical components that students, teachers, and researchers should consider when they decide which GenAI tool to use and how they plan to use it. So we worked together. It was quite a big collaboration, as you'll see from all the authors on the paper. But actually we found this to be particularly valuable, because the scholars come from multiple countries and have multiple views on how these things work. By working together, we were able to develop a really sound framework that's applicable across countries.

Ellie Manzari: One of the key things that stood out to me in the paper is how the framework emphasises continuous reflexive engagement. Could you share a practical example of how researchers might apply this in their work?

Dr Lynette Pretorius: I think the important thing to remember is that when you are using a GenAI tool, it gives you some information, and you're like, 'oh yeah, that sounds pretty good', and then you go with it. But what is important to think about, whenever it gives you results, whether you like them or not, is: okay, what is it I like about it? What don't I like about it, and why? Does this actually reflect my way of thinking? When you are doing that reflection continuously, you start to see how the GenAI output could be influencing your thinking. It could, for example, be making you think about things in certain ways, because the data set it uses has been trained on certain ways of thinking. So you may be missing some important nuances that you haven't considered, or some biases that are present in the data set that you haven't even thought of. You need to think reflexively about the outputs that you receive and what the underlying implications of those outputs are.

Ellie Manzari: That's very helpful. Let's now shift to the second paper. This is my favourite one. It takes a more philosophical and relational approach, drawing on the concept of Ubuntu to explore how GenAI can empower multilingual scholars and challenge academic hierarchies. Why did you choose Ubuntu as a guiding philosophy for this study, and how does it shape the way we think about academic communication?

Dr Lynette Pretorius: I like that you said it was your favourite paper. At the moment it is also my favourite paper. I like it for many reasons, but the most predominant reason is the philosophical framing through Ubuntu. I originally come from South Africa, so I was already familiar with this framework. But what I really wanted to do with the paper, as I said before, is to showcase that there's a different way of thinking about how we communicate ideas in academia. At the moment academia is very meritocratic, in that it's very individualised. It's all about individual success, and often you find that this creates an environment where people are trampling over each other to try and be the best. And I have a problem with that. So I wanted to use a framework that is more relational, that sees the success of one as the success of everyone. Which is where the Ubuntu philosophy came in, in that it sees everyone as relationally connected. If you are part of some collective, you celebrate the successes of others and you mourn the losses of others. And so you want to work together to make others succeed, which is how I wanted to reframe the use of GenAI. In the discussions I had seen, particularly on North American social media, there was a lot of discourse suggesting that anyone who uses GenAI is cheating, and that if you see things like the word 'delve' or the word 'tapestry' in someone's writing, it must therefore be GenAI-created and therefore cheating. And I get really annoyed by people, firstly, who take such reductive views of things, but secondly, who think they also have ownership over a language. So I wanted to reframe it in a way that says: you know what, how great is it, actually, that those people are now able to engage in academia and share their ideas? That's why I wanted to use that framework: to try and encourage people to think about things a little bit differently, and a little bit more collectively than individualistically.

Ellie Manzari: Thank you. I love the idea of academic communication as a shared relational process, as you mentioned. And I know this work was also deeply grounded in your collaboration with international PhD students. From working with these students, what insights have you gained about how GenAI can challenge academic hierarchies and support equity?

Dr Lynette Pretorius: So I wrote the paper with my group of PhD students, and we found that they use GenAI tools in all sorts of ways. The most obvious one would be to fix their grammar, for example. Which is not surprising, because it's a good language model. It does good language things. But what we found was that, by having the freedom to get it to fix the way they communicate so that others can understand them, they felt more free to focus on their ideas rather than the technical English language requirements.

So we found that because the focus shifts from language proficiency to the quality and strength of the argument being made, they can focus more on the actual intellectual contribution they are making. I also found that because of the iterative nature of GenAI, they could use it to help them develop the confidence to navigate social spaces: learning how to communicate with their supervisors, for example, such as how to phrase an email in the appropriate tone. And while, yes, that means they write in a nice academic way, it also means that they feel like they actually belong. Because we've found that a lot of it has to do with culturally ingrained perceptions that international students are somehow, you know, at a deficit in English language and need to improve their skills. But actually it is more a reflection of the system, and the fact that the system privileges a certain way of thinking and a certain way of writing. So when you can use the tool to help you navigate that, you can start becoming more confident in saying, 'well no, I don't agree with that idea. I think this idea is better because...'. Because you feel more confident, you are capable of actually articulating your different ways of thinking in ways that others can understand, and therefore helping to challenge some of the norms in the system, some of the hierarchies that exist in academia.

Ellie Manzari: I think that really speaks to the transformative potential of GenAI when used thoughtfully. So as we move towards wrapping up, I'd like to bring us back to the idea of AI literacy, which both of your papers highlight as a crucial area of future development. Why is AI literacy so important for the future of doctoral education, and how should universities approach teaching it?

Dr Lynette Pretorius: I see AI literacy as the capacity to work with AI as a collaborative partner. So not seeing it as just a tool that does some stuff for you, but as a partner in your work, in your research, in your writing, helping you to think more critically and creatively about your work, because it's freeing up some space in your brain to focus on the quality and the argument and so on, rather than on the technical qualities of writing. In terms of why it's crucial: I think it's crucial for all graduates at all levels, because in the future it will be integrated into everything. And realistically, it already is integrated into everything. If we want to move forward as humanity as a whole, we will be using these tools to help us do things more intelligently and more effectively. So learning how to use them well will stand a graduate in good stead. As the saying goes, it's not the AI that will take your job, it's the person who can use the AI that will take your job. Right? In terms of doctoral students, those who are AI literate and able to use these tools well, to help them create wonderful new ideas and articulate them in ways others can understand, will be the ones who succeed and move ahead much faster than those who are not able to do that so well. So as educators it is our responsibility, really, firstly to learn how to use these things ourselves, and then to embed training on how to use them ethically and well in the ways that students use them.

Ellie Manzari: That's such an important call to action. And finally, for those listening who are supervisors, educators, or doctoral advisors: what advice would you give to those who want to incorporate AI responsibly into their teaching and research?


Dr Lynette Pretorius: I think my first piece of advice is to be willing to play and experiment with it, because the generative AI tools change on an hourly basis, and they can do all sorts of things. They recently released Agent Mode which, if you haven't investigated it, you really should go and see what it can do in some of these large language models. Every couple of days some new advancement comes along. So you need to be willing to experiment, go out there, and see what's happening and what can work. For me personally, in my teaching, for example, I have classes that I teach every semester, and I've taught them for many years, and I want to make them interesting and engaging and new and better all the time. So I have now trained my ChatGPT to know what sort of teaching I like doing. Then I would say: here are my slides, this is the outcome for my workshops, how can I better align this? How can I embed more, I don't know, student-centred activities or whatever it is I want to do? And it will give me suggestions. Then I can think, 'oh, okay, that sounds like a good suggestion, I'll incorporate that' or 'I don't really like this one'. So in terms of teaching, I find that quite useful. It's also a way of getting an outsider's perspective on the things that you've been doing forever, and how you can maybe do them a little bit better.

In terms of research, there's a lot it can do. I have a paper that's hopefully going to be published soon on how ChatGPT can help you do qualitative data analysis. Not because it can do some, you know, basic coding and things, but because it actually encourages you, as the researcher, to think more deeply about your data and come up with different ways of thinking about it, rather than seeing it through those same theoretical frameworks you've always used, or the same perspectives you always think about. It gives you some suggestions for other things that you can do. And just practically, it makes things easier and quicker, which is always useful, particularly in time-poor environments.

The other thing to do is to educate yourself about how it works. It is important to have a basic understanding of how these things work, not as a technical IT expert, but as someone who at least knows what it can and can't do, so you can use it effectively. Always be critical of the outputs it generates, of when it is appropriate for you to use it and when it isn't, and of how to use it. Always make sure that, ultimately, it is your voice and your argument that is presented, not the AI's.

Ellie Manzari: This has been a really inspiring conversation, Lynette. Thank you very much, and I really appreciate you sharing your time, your research, and your insights with us today. And to our listeners: both papers we discussed are linked in the episode notes. Be sure to check out Lynette's work and follow her for more thought leadership on doctoral education, AI ethics, and inclusion. Until next time, stay curious and stay ethical.