The PsycholEdgy Podcast with Dr Paul

The Rise of the Machines - How ChatGPT is Detected in Universities

Dr Paul Season 2 Episode 15

Have you ever wondered what graduates who used ChatGPT to secure grades and enter clinical programs might do to the clients they attempt to work with? Does malpractice come to mind? We look into this and much more in this episode of the podcast. We take a look at the university detection systems currently in place to challenge the ways students are using AI for nefarious means, and at what a useful approach to adopting AI might look like.

Make sure you subscribe to the podcast, and you can also follow us on LinkedIn and Facebook.

Mental Health Support in Australia
If you require urgent medical support call 000

Call Lifeline if you are experiencing a personal crisis and finding it difficult to cope with your situation.

  • You can discuss any topic that is causing you distress and the trained crisis support staff will not criticise or judge you.
  • Lifeline’s online chat service offers real-time support to people who are feeling overwhelmed.

Contact Lifeline for immediate assistance

Music "Into the Step" by Hidden
Podcast is a Psychology Edge Production
Copyright All Rights Reserved 



 Welcome to the Psychology Podcast with Dr. Paul. Edgy by name and by nature. The Psychology Podcast will provide you with a competitive edge from education through to registration. Dr. Paul supports your transformation into becoming a psychologist, counsellor or allied mental health practitioner. Now here's Dr. Paul. 

Good morning, good afternoon, or good evening, wherever you are in the world today. My name is Dr. Paul. Welcome to the Psychology Podcast. I hope that you're doing well. This topic is for you if you are considering using, have used, intend to use, or are simply curious about generative AI and what AI can do in an academic setting. We talk about the things that are no-no's, and we also talk about the things that are go-go's. These two opposing forces can actually transform the way that students attach meaning to the use of AI in support of assessment for any program really, but we're going to talk about it in the context of mental health, psychology, psychiatry and counselling programs, anything that's allied mental health. That's what we're going to focus on in this episode. So let's have a look at what ChatGPT actually is.

ChatGPT is what we call a natural language program, an NLP. An NLP is a machine-learning artificial intelligence that works with your natural language skills. You do not need to write code. That's the big difference. You can just type, as you would into a word processor, and ask the AI a question. The AI can then interpret this and provide you with a response. But the AI is not so intelligent that it can do the things we might consider conventional.

For example, social conventions: if you ask it to give an opinion, it can't do this; it can't self-reflect. It's not a sentient being, it's not something that has the capacity for self-reflection. It is, after all, at the end of the day, a program which takes your question, uses the information that's stored in it, which only goes up to 2021, and responds to you based on the information it has available.

So it's almost like an advanced version of Google. When you put something into Google, it brings up a whole lot of responses; to shortcut all of that, the AI gives you what looks like an intelligent response. But yet again, it's not thinking, it's drawing on resources that are available on the internet. So it's really important to understand that it has these significant limitations. The other thing which is a real concern is that the information it uses, which is on the internet, is owned by somebody else. And herein lies the problem. Given that the artificial intelligence isn't really responding to you, it's actually providing you with information based on algorithms and searches, based on the questions you put in, and giving you a regurgitation of somebody else's work. Now, why is that problematic?

Well, we know what the definition of plagiarism is. Plagiarism is using other people's ideas or concepts that you didn't generate yourself, without citing them or attributing them to the author. And this is the problem. You have no idea where that resource came from. You have no idea if it's accurate. You have no idea if it's verifiable. And indeed, the makers of ChatGPT and other AI programs say that the accuracy of these programs is limited. They are able to provide you with completely false information.

The other thing it's not able to do is access the journal databases related to fields like psychology, counselling or allied health services. And it can't provide you with a reference or a citation that supports where it received that information from. If you ask it to give you references, it might cite URLs for web pages, which may exist, or may not. I've done the experiment myself: if you ask it for the URL a reference came from and you put that URL into your Google Chrome browser, it doesn't actually go anywhere. It's a broken link. It's a misleading link. So there's a lot that is called into question by the AI giving you this information, because, to be honest, it might not be accurate at all. And this is what's being detected now in the university sector. So let's move into talking about that.
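The broken-link experiment described here is easy to repeat programmatically. A minimal sketch in Python, using only the standard library (the function name `check_reference_url` is just illustrative):

```python
import urllib.error
import urllib.request


def check_reference_url(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL resolves to a live page, False if it is broken.

    Sketch of the experiment above: take a URL an AI chatbot offered as a
    'reference' and see whether it actually goes anywhere.
    """
    try:
        # HEAD avoids downloading the whole page; some servers reject HEAD,
        # in which case a more thorough checker would retry with GET.
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "reference-checker"}
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, ValueError):
        # HTTPError (e.g. a 404) is a subclass of URLError, so a dead page,
        # an unreachable host, and a malformed URL all come back False.
        return False
```

For example, a URL on a reserved nonexistent domain such as `http://example.invalid/paper` comes back `False`, just as a fabricated AI citation would.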

Once again, the technology is streets ahead of the universities and of the way that policies and procedures support students' understanding and awareness of how AI can or cannot be integrated into assessments. For myself, we don't use AI in assessments. It's completely banned from assessments. However, the policies at many major universities say that you can actually use it to formulate ideas, to structure work, to give you some support and guidance, to understand a concept. In doing so, however, you must provide the university with an account of exactly what you used the AI for. So for example, if you used the AI to formulate ideas, to create tables, to give you advance knowledge on certain topic areas, and you used this in your work, you need to acknowledge that that's what you did. Most universities are saying do not put ChatGPT down as an author, because it's not something that's creating ideas. It's not something that's creating work that can be attributed to the AI. The AI is only drawing off the work of others that already exists on the internet. And this is a really interesting point.

Can you put information into ChatGPT, combine that with other information which exists on the internet, and submit the amalgamation for assessment? The reality is that's a really murky area. It's part your work, part somebody else's work, part the AI's. The reality is, I think, and this is a personal perspective: if you're using ChatGPT for anything other than maybe giving you a list of things to consider when you're looking at a theory or concept, a shortcut start from which to go and investigate that phenomenon, that concept, those ideas for yourself, I'm not sure students are going to build the capabilities. They're not going to build the search capacity, that is, to do a literature review, to understand how to paraphrase information. Read the abstract, read the report, synthesise the concept, put it in your own words, and then put it into a paper, and that paper is submitted against an assessment criterion or task for a university unit, course or subject. I don't know whether we're now in an age where you might find that people get through university degrees undetected, very much like contract cheating, where somebody is contracted to write the report for you. And in our profession, a helping profession, we end up with graduates who aren't authentic, who aren't genuine, and we ask them to work with clients in the real world. What harm could this do? What does this create in terms of ethical issues or concerns? And does this then potentially lead to greater instances of malpractice?

My research is in that malpractice case area, and I for one know the pitfalls. Most of the pitfalls are not about whether the person is a great professional able to work with clients; they come down to incompetence. Typically, one of the larger categories is unprofessionalism: not being professionally competent to work with clients and with peers to provide support for an individual, as part of a team of which you constitute a part. So it's a very interesting dynamic. What are universities doing to detect ChatGPT and AI? Well, what they're doing is building in new AI detection systems. Here are the things that are being tried right now. When you put your work through Turnitin, there's a plagiarism-checking tool which looks for sources on the internet, locates those sources, and provides us as academics with an understanding of where you obtained that information from.

So that's plagiarism. Now they're embedding AI detection systems. So when you go into the system and a report is submitted, it's going to give you a number, a percentage, and it's going to highlight where the AI has written the work or not. Universities are also becoming a little bit more advanced in the way they're trying to accommodate ChatGPT. In some instances, assessments are actually being transformed, and these transformations take the form of doing more work that is self-reflective. I know in my own programs, we now have a number of case studies which get put forward to students. Students read the case studies and respond to a series of questions. Then the next part of that program or assessment task is to reflect on your response versus an expert's response, and on how you could learn from that.

ChatGPT and similar AI programs cannot self-reflect. They cannot provide this. And in some instances, students have actually submitted work which says, "I am an AI language processing unit or machine, I cannot self reflect, but here are some things that might be considered." This output has been copied and pasted straight into the assessments, and we pick that up and can immediately verify that AI has been used. It throws doubt on the whole assessment, and it throws doubt on every other assessment that particular student submits. Not to mention it is a form of serious academic misconduct. So please, do not use ChatGPT or other AI. Do your work authentically.

Do it the way it is required to be done, take the time that it takes, and you'll actually learn what you need to do for that assessment task, so that the learning can take place and we can have confidence in putting you in front of clients when the time comes for you to go out into the wide world and work with them. So let's just summarise before we finish. ChatGPT is here and it has transformed the way we work. Universities have allowed for this transformation to occur; however, policy is now trying to keep up with technology changes. And I think you're going to see changes in the way that universities across the country operate, specifically in allied health programs, including psychology, counselling, psychiatry, social work, OT, et cetera: assessments are going to be based more on you as an individual, and that's going to significantly change the way that assessment takes place. Universities are also allowing you, in some instances, to use ChatGPT and other natural-language, AI-based programs to support your understanding of a topic area and give you a foundation to launch from so you can understand it more thoroughly.

For example, giving you the five or six points of the stress vulnerability model for understanding mental health disorders or concerns. You can then take the stress bucket, for example, as an idea the AI has given you, go and have a look at what the stress bucket is actually about, and look at how to assess vulnerabilities and how to assess stress, so that you have the best opportunity to use it for the right purposes. Universities do not allow you to cite ChatGPT as the author, nor in many instances do they allow you to use ChatGPT's output and pass it off as your own work. That is purely plagiarism, and it's coming from sources on the internet which, I'll be quite honest, do not have the depth required if you're working in a master's program. If I'm the person looking after that unit, I'm telling you now, there's a substantial difference between master's-level work and first-year undergraduate psychology or counselling programs.

The depth to which the AI can go is very superficial, very surface-level, compared to what we expect of students when they move into programs such as a Masters or PhD. So don't get caught out being very superficial, because while you might get passes and you might get credits, those are generally not the grades that get you through programs, allow you to work with clients, and earn you titles which are protected, such as psychologist, occupational therapist, psychiatrist, et cetera. So please, I implore you, do not use ChatGPT for anything other than foundational structuring or understanding of a concept that you can use to move forward.

ChatGPT and other programs like it are here to stay, and things are going to transform. Please understand: we can detect ChatGPT and other forms of AI in the university sector. If you get called up for using ChatGPT or other AI programs in your assessments, you will potentially be up for academic misconduct. Academic misconduct is something you want to avoid, because it is a mark on your educational record, especially if you're near the end of your studies and you go, "I just don't have the time, I'm just going to use AI once." It can smear your entire career in the university sector, will count against you, and may inhibit your chances of getting into clinical placement programs.

Those are my thoughts on ChatGPT and AI programs more generally. Please understand: we do detect it, and we know it is available. Policy is now catching up, and assessments are being transformed and changed all across the country to accommodate what's really become an endemic cheating problem for universities across Australia. That's it. I hope you enjoyed this episode, and I hope you learned something from it. Good morning, good afternoon, or good evening. I'm Dr. Paul, signing off from the Psychology Podcast for another episode. (upbeat music)