The Digital Transformation Playbook

When Your AI Assistant Becomes Academic Misconduct

• Kieran Gilmurray

Artificial intelligence is transforming education at warp speed, raising urgent questions about academic integrity in the age of ChatGPT. This episode unpacks the Joint Council for Qualifications (JCQ) guidance on AI use in assessments - essential listening for students, teachers, parents, and anyone curious about the future of education.

TLDR:

  • AI tools include text generators like ChatGPT, image creators like Midjourney, and other content-generating systems
  • Work submitted for qualification assessments must be entirely the student's own
  • AI misuse includes copying/paraphrasing AI content or using it for analysis without acknowledgment
  • Real cases show severe consequences like zero marks or disqualification even for "accidental" misuse
  • Simply acknowledging AI properly won't prevent mark reductions if you haven't shown your own skills


We dive deep into what constitutes AI misuse, from copying chatbot responses to failing to properly acknowledge AI assistance. The consequences are more severe than many realize: examples from real cases show students receiving zero marks or even complete disqualification from qualifications for improper AI use, sometimes despite good intentions. One history student was disqualified from their entire A-level for accidentally submitting AI guidelines instead of their work.

Beyond the rules, we explore the practical aspects of AI in education. Schools and colleges bear significant responsibility for educating students about appropriate AI use while implementing detection strategies. For students who want to use AI legitimately, we break down the proper acknowledgment requirements - but here's the crucial catch: even perfect citation won't earn you marks if the AI did the critical thinking for you.
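To make that concrete, here's a minimal sketch of what a compliant acknowledgment might look like, based on the requirements discussed in the episode (the tool name, URL, and date are illustrative placeholders, not examples taken from the JCQ guidance itself):

  • Source: ChatGPT 3.5, https://chat.openai.com/, accessed 25 January 2025
  • Evidence: the exact prompt and the AI's full response, kept in a non-editable format such as a screenshot and submitted alongside the work
  • Explanation: a brief note on how the output was used, e.g. "I used the AI response to suggest a structure for my analysis; the argument and conclusions are my own"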

As these powerful tools become increasingly sophisticated and harder to detect, the fundamental questions about assessment itself grow more complex. How will we continue to accurately measure human understanding, creativity, and critical thinking? Join us as we navigate these challenging waters at the intersection of technology and education.

Have you encountered AI tools in your studies or teaching? Share your experiences and let us know how you're adapting to this rapidly evolving landscape.



๐—–๐—ผ๐—ป๐˜๐—ฎ๐—ฐ๐˜ my team and I to get business results, not excuses.

โ˜Ž๏ธ https://calendly.com/kierangilmurray/results-not-excuses
โœ‰๏ธ kieran@gilmurray.co.uk
๐ŸŒ www.KieranGilmurray.com
๐Ÿ“˜ Kieran Gilmurray | LinkedIn
๐Ÿฆ‰ X / Twitter: https://twitter.com/KieranGilmurray
๐Ÿ“ฝ YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK

Speaker 1:

Welcome to the Deep Dive. Artificial intelligence, I mean, it's transforming nearly everything at, well, warp speed. But how is this exciting and sometimes challenging evolution hitting something really fundamental like academic assessments? Today we're taking a deep dive into the Joint Council for Qualifications, that's the JCQ, guidance on AI Use in Assessments, and this isn't just for teachers or exam boards. If you're a student, maybe a parent, or even just curious about how education is adapting, this is definitely for you.

Speaker 2:

Yeah, absolutely. Our mission today is basically to unpack the key insights from this pretty comprehensive document. We want to help you understand the main regulations, the very real risks and maybe some surprising nuances about AI's role when you need to show your own independent work. We'll try to pull out the most important nuggets of knowledge so you feel properly informed. You know, without getting totally lost in the jargon.

Speaker 1:

OK, good plan. So let's start simple. What does AI use actually mean here in this context?

Speaker 2:

Right. At its core, it's about students using AI tools, things that help them get information or generate content that they might then use in their assessed work.

Speaker 1:

And that's a huge range of tools now, isn't it?

Speaker 2:

Oh, it's massive and growing all the time. We're talking about AI chatbots that generate text, you know, like ChatGPT, Gemini, Claude, Jenni AI. They can answer questions, summarize things, even write whole essays. But it's not just text. There are really powerful AI tools now for creating images, music, even video, things like Midjourney, Soundraw, Runway. The list goes on.

Speaker 1:

It's easy to get sort of swept up in how capable they seem, but the guidance flags some pretty significant risks too, right?

Speaker 2:

Definitely. While the output might look convincing, you have to remember it's based on statistical likelihood. That means AI can, and often does, produce stuff that's inaccurate, inappropriate or even biased. Some tools have even spat out answers suggesting dodgy actions or, get this, completely fake references. And the biggest warning, really, is that using these tools the wrong way can mean committing malpractice.

Speaker 1:

Malpractice. Okay, that sounds serious.

Speaker 2:

It is, and the core principle hasn't changed. For qualification assessments, any work a student submits must be their own. That's right there in the JCQ General Regulations, section 5.3(k). It's explicit.

Speaker 1:

So what does AI misuse actually look like then in practice?

Speaker 2:

Well, the guidance gives some clear examples: things like copying or just paraphrasing chunks, or even the whole response, from an AI tool. Or using AI to do parts of the assessment for you, like the analysis or evaluation, maybe calculations, so the work isn't really reflecting your own effort or understanding. And, critically, simply failing to acknowledge you used an AI tool as a source, or, you know, being vague or misleading about how you used it. That's a big no-no.

Speaker 1:

And you mentioned malpractice. What are the actual consequences if someone gets caught doing this?

Speaker 2:

Yeah, AI misuse is malpractice, and the sanctions can be pretty severe. We're talking potentially disqualification from the qualification itself, or even being barred from taking any qualifications for several years. And beyond that, your marks could take a serious hit if relying on AI means you just haven't shown you've met the standard yourself.

Speaker 1:

OK, so the rules for students are strict, but who's actually, you know, policing this? Where does the responsibility lie?

Speaker 2:

That falls squarely on the schools and colleges, the centers, as JCQ calls them. The guidance makes it crystal clear: the head of center, so the principal or head teacher, is ultimately responsible for making sure student work is authentic and marked correctly.

Speaker 1:

And what does that involve for them, for the centers and teachers?

Speaker 2:

Well, it's quite a list actually. They need to have policies and procedures in place specifically to monitor and check for AI misuse. It's about ensuring authenticity. Teachers and assessors need to regularly talk about AI use and agree on how they're handling it, and a huge part is educating students, and parents too, about what's okay, what's not okay, the risks, the consequences. JCQ actually provides support materials, like posters and info sheets, to help centers with that communication.

Speaker 1:

That sounds sensible.

Speaker 2:

And centers also need to ensure their teachers are familiar with AI tools, the potential pitfalls and even the detection tools available. Plus, for assessments, they have to make sure access to dodgy internet sites or AI tools is blocked on center devices.

Speaker 1:

That's a lot for centers to manage, but there's an interesting point in the guidance about when they need to report issues, isn't there?

Speaker 2:

Yes, this is quite important. If AI misuse is suspected before the student signs their declaration saying the work is their own, the center deals with it internally. They don't have to report it to the awarding body at that stage.

Speaker 1:

Ah, okay, so strong internal checks are key then.

Speaker 2:

Exactly. It highlights why those proactive internal policies are so vital.

Speaker 1:

Now, what about acknowledging AI? If a student does use it legitimately as a tool, how should they reference it?

Speaker 2:

Good question. Acknowledging sources, including AI, is just fundamental academic practice. If you use an AI tool, you have to be really clear about how you used it.

Speaker 1:

And the guidance is specific on the how.

Speaker 2:

Very specific. You need to state the name of the AI source, like ChatGPT 3.5. You need its URL and the date you got the content, say, the 25th of January 2025.

Speaker 1:

Okay: name, URL, date. What else?

Speaker 2:

You also need to keep a copy of the actual question or prompt you gave the AI and the response it generated. Ideally, keep it in a format you can't easily edit, like a screenshot.

Speaker 1:

Right.

Speaker 2:

And then provide a brief explanation of how you use that AI output. All of that needs to be submitted along with your work.

Speaker 1:

That sounds thorough.

Speaker 2:

But here's the really crucial bit: even if you acknowledge using AI perfectly, you will not get marks for content where you haven't independently met the marking criteria.

Speaker 1:

Ah, okay, so it's not about getting credit for what the AI did.

Speaker 2:

Precisely. It's about your understanding, your skills. Just writing "AI" or "ChatGPT" in your bibliography is totally insufficient. It's like citing Google instead of the actual web page you used.

Speaker 1:

That makes sense. So let's imagine a student has acknowledged AI use, no malpractice suspected. How does an assessor mark that work?

Speaker 2:

Well, even if it's acknowledged, the assessor's job is still to make sure the student isn't rewarded if the AI tool basically did the work for them on a particular point, preventing them from showing their own understanding.

Speaker 1:

So they adjust the marks.

Speaker 2:

Yes, based on what the student has demonstrably understood or achieved themselves. And they have to keep clear records of why they made those decisions, for transparency.

Speaker 1:

Interesting. What about teachers using AI to help with marking? Is that allowed?

Speaker 2:

The guidance says centers might permit it, but only after carefully considering things like data privacy. And, this is key, an AI tool can never be the sole marker.

Speaker 1:

Never the final decision maker.

Speaker 2:

Never. A human assessor must review all the work and they remain fully responsible for the final mark awarded.

Speaker 1:

OK, so thinking about prevention rather than just detection, what can schools actually do?

Speaker 2:

Yeah, prevention is key. The guidance frames AI misuse basically as a form of plagiarism, so many good practices already exist: things like restricting access to online AI tools on the school network and devices, setting reasonable deadlines, checking work at intermediate stages, not just the final piece, maybe getting some work done under direct supervision in class.

Speaker 1:

Like timed essays, that sort of thing.

Speaker 2:

Exactly, or even just having conversations with students about their work. You can often gauge understanding that way and designing tasks that are maybe more topical, current or specific, which makes it harder for a generic AI to produce something relevant.

Speaker 1:

Right, making it harder to cheat, and when it comes to actually detecting potential misuse, does it just rely on teachers' instincts?

Speaker 2:

Teacher judgment is definitely a big part of it. Identifying AI misuse often uses the same skills teachers already have for spotting plagiarism or authenticity issues. But yeah, AI does make it trickier.

Speaker 1:

So what kind of clues might a teacher look for?

Speaker 2:

There are several indicators. Things like suddenly seeing American spelling or currency used by default, or the language or vocabulary just seems off, maybe too sophisticated for the level, or just not like the student's usual style. A lack of specific quotes or references that you can actually check can be a sign too, because some AIs, as we said, just make them up. Also, content that feels very generic, not really tailored to the specific task. Or sometimes students accidentally leave the AI's own warnings or provisos in the text.

Speaker 1:

Oh, really? Like "as an AI language model..."?

Speaker 2:

Exactly. Or even just the format. Maybe the work is typed when the student usually submits handwritten assignments. Little things like that can raise a flag.

Speaker 1:

And what about the automated detection tools, things like Copyleaks, GPTZero, Turnitin? Are they the silver bullet?

Speaker 2:

Well, they can be helpful. They can certainly give an indication, a probability score that AI might have been used, but it's really important to understand their limitations.

Speaker 1:

How so.

Speaker 2:

For instance, they might give lower scores if a student has heavily edited the AI content and their accuracy can vary depending on which AI tool generated the text and how much is AI versus human writing. They're not foolproof.

Speaker 1:

So not a definitive answer then.

Speaker 2:

No, the crucial point the guidance makes is that these tools are just one part of a holistic approach. Teachers, because they know their students, are usually best placed to judge authenticity. The detection tools are more like supplementary evidence another piece of the puzzle.

Speaker 1:

OK, this next bit sounds really interesting. You mentioned real world examples. Can we look at some cases of AI misuse and what actually happened?

Speaker 2:

Yes, the document includes some anonymized case studies that are quite revealing. For example, an A-level history student, that's a final-year high school qualification, accidentally submitted AI-generated guidelines instead of their own work. Detection software flagged it.

Speaker 1:

Oops, what happened?

Speaker 2:

Disqualification from the entire history A-level qualification. It shows how serious even accidental misuse can be.

Speaker 1:

Wow. Okay, any others?

Speaker 2:

Yeah, there was a case on a vocational course, a Cambridge National in Enterprise and Marketing. A student used ChatGPT because they felt the teacher wasn't available enough, and they thought, well, it's just like asking a teacher.

Speaker 1:

Hmm, I can see how a student might think that, misguidedly.

Speaker 2:

Right. That student, and another one in the same group who couldn't even remember which parts were theirs and which were AI, ended up getting zero marks for those specific learning objectives. Even though they'd been warned about plagiarism generally, AI hadn't been specifically mentioned.

Speaker 1:

Zero marks. That could really impact their grade.

Speaker 2:

Absolutely. Then there was an Extended Project Qualification, another independent research task. A student's work was flagged for having loads of unreferenced AI content. They didn't offer an explanation, and the malpractice committee found a clear breach. The result? Disqualification from the EPQ.

Speaker 1:

So again, full disqualification. These consequences are severe.

Speaker 2:

They really can be. There's another one showing center responsibility: a GCSE religious studies exam, so age 16, typically. The examiner noticed weirdly sophisticated language, American spellings, and AI detection showed a high probability. What was the issue there? Turned out the student's word processor hadn't been locked down properly during the exam, so they had internet access. The student lost all marks for the affected parts of the exam, and the school had to retrain its exam invigilators.

Speaker 1:

Ah, so the center messed up there.

Speaker 2:

Exactly. And one more: an A-level art and design student used an AI translation tool, DeepL, for their sketchbook research, translating from their own language.

Speaker 1:

Is translation AI considered misuse?

Speaker 2:

In this case, yes, because they didn't acknowledge it properly. They admitted knowing it wasn't allowed, but claimed they didn't realize DeepL was AI-supported. The center reckoned about 98% was AI-influenced.

Speaker 1:

98%?

Speaker 2:

Yeah, they lost marks, got zero for that component because the lack of acknowledgement and the heavy reliance called the authenticity of their own ideas into question. We also saw a criminology student copy AI notes directly into an assessment without referencing. That cost them marks on one section which actually stopped them getting the overall qualification.

Speaker 1:

Looking at these stories, it's quite a range, isn't it? Different subjects, different types of misuse, but often really serious outcomes, from losing marks on one part right up to full disqualification.

Speaker 2:

It really drives home that point about integrity and ownership. Which raises another important question: what if a student does acknowledge the AI, follows all the referencing rules, but still leans on it too heavily?

Speaker 1:

Right? Does acknowledging it mean you're always safe?

Speaker 2:

Not necessarily safe from losing marks. The guidance gives an example: another A-level history student. They produced decent coursework and acknowledged using AI properly. But for one specific section, comparing historical sources, they relied completely on the AI tool's output.

Speaker 1:

Even though they cited it.

Speaker 2:

Even though they cited it, the assessor decided the student hadn't independently shown they could meet the marking criteria for that specific skill, the comparison of sources. So the mark was adjusted downwards. Instead of potentially getting, say, 22 or 24 marks, they got 20.

Speaker 1:

So acknowledgement isn't a free pass for credit.

Speaker 2:

Exactly. There's another example with a BTEC Level 3 Business student, that's a vocational course. They used AI for a section predicting future market changes.

Speaker 1:

And acknowledged it.

Speaker 2:

Yes, they acknowledged it, and they met most other criteria. But because they relied on AI for that specific bit of future analysis, the assessor couldn't award them the top distinction-grade criteria for that section.

Speaker 1:

So what did that mean overall?

Speaker 2:

It meant their overall grade for the unit dropped from a distinction to a merit.

Speaker 1:

Okay, so let me just recap that. Acknowledging AI is vital: it avoids the serious malpractice sanctions like disqualification, but it doesn't automatically mean you get full marks if the AI did the thinking for you. The credit is for your demonstrated knowledge and skill.

Speaker 2:

That's the absolute key takeaway. It's about your learning, demonstrated by you.

Speaker 1:

So, stepping back and looking at the bigger picture, what does all this JCQ guidance tell us?

Speaker 2:

Well, I think it really underscores this huge ongoing challenge how do we maintain academic integrity when technology is evolving so incredibly fast? It's not just about catching cheats. It's fundamentally about making sure qualifications actually mean something, that they truly reflect a student's own hard work, their knowledge, their skills.

Speaker 1:

Okay, let's break that down. The message seems clear AI tools are powerful, yes, but the basic rules of assessment haven't changed. Your work still needs to be your work.

Speaker 2:

And the responsibility is shared. Students need to be honest. Teachers need to educate and verify. Centers need robust policies and checks.

Speaker 1:

So for you listening, especially if you're a learner, the main point is clarity: understand what these AI tools are, know the rules for your specific courses and exams, and always, always make sure the work you submit genuinely shows your independent thinking.

Speaker 2:

Which leads us to a final, perhaps slightly provocative, thought. As these AI tools get even better, generating content that's harder and harder to distinguish from human work, how will assessment itself need to change? How will we continue to accurately measure real human understanding, creativity and critical thinking? That's something really worth considering, I think, as you encounter more of these tools, both in your studies and out there in the wider world.
