Incongruent

Prompt Wizards and Data Guardians: Making AI Work for Schools

The Incongruables

Spill the tea - we want to hear from you!

We take a comprehensive look at using artificial intelligence safely and effectively in education by examining guidance from the UK Department for Education and related sources. This deep dive unpacks the opportunities, challenges, and key considerations for anyone dealing with AI in educational settings.

• Understanding generative AI as a tool that creates new content based on massive datasets and pattern recognition
• The critical importance of crafting detailed, clear prompts to get quality output from AI systems 
• Maintaining human oversight as AI lacks true understanding and is simply making predictions based on training data
• Safeguarding concerns including exposure to harmful content and the creation of convincing deepfakes
• Data protection as a paramount consideration under UK GDPR with strong warnings against using free AI tools for student data
• The surprising environmental impact of AI systems, which collectively consume more electricity than many entire countries
• Strategic implementation requiring proper planning aligned with school development goals
• Practical applications including generating teaching resources, personalizing learning, and streamlining administrative tasks
• The need for schools to develop clear AI policies covering usage, data handling, and academic integrity

How do we best equip everyone - students, teachers, leaders - with the critical thinking skills they need to navigate this complex new landscape responsibly, ensuring the technology serves learning and not the other way around?


AI 1:

Welcome to the Deep Dive. Today we're taking a really close look at a whole stack of sources about using artificial intelligence safely and effectively in education. We've gathered up recent UK Department for Education guidance plus, you know, related videos, transcripts and documents, trying to get a clear picture of where things stand right now.

AI 2:

That's right, and our job really is to sort of cut through all the noise, unpack these sources and pull out the most important bits.

AI 1:

Exactly the opportunities, the challenges, the things you really need to think about if you're dealing with AI in schools or colleges.

AI 2:

Yeah, so think of this as like your shortcut to getting up to speed.

AI 1:

Right, whether you're leading a school, teaching in the classroom, or just really curious about how AI is changing learning. Okay, let's dive in. When we talk about AI in education, what do we actually mean? Because it feels like it's kind of everywhere already, but there's a specific focus now, isn't there?

AI 2:

Absolutely, you're spot on. AI is in loads of familiar stuff: spam filters, predictive text on your phone, things we don't even notice. You might not label it AI, but it is. For this deep dive, though, we're really zoning in on generative AI. That's the AI that actually creates new stuff: text, images, audio, video, even computer code.

AI 1:

Oh, got it. The kind of AI that can, you know, write an essay draft or make a picture from a description. So how does that work, like, in simple terms, based on the sources?

AI 2:

Well, the sources break it down pretty simply. It's basically built on machine learning, computers learning from data. And then there's deep learning, using these complex, sort of brain-like, neural networks, which lets them process huge amounts of information and generate something new.

AI 1:

And that leads us to the large language models, the LLMs, things like ChatGPT, Gemini, Copilot.

AI 2:

Exactly those. They're the big examples. They get trained on absolutely massive data sets, learning patterns in language or images or code.

AI 1:

Right.

AI 2:

And that lets them predict what comes next: the next word, the next pixel. That's how they generate stuff that feels so human-like or really creative.
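
A toy sketch of that idea, purely illustrative and nothing like a real LLM's scale: count which word follows which in a scrap of "training" text, then always pick the statistically most likely continuation.

```python
# Toy next-word prediction (illustrative only; real LLMs use neural
# networks over billions of examples, not simple bigram counts).
from collections import Counter, defaultdict

training_text = (
    "light travels in straight lines light reflects off shiny "
    "surfaces light travels faster than sound"
).split()

# For each word, count how often each following word appears.
follows = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    follows[word][nxt] += 1

# Generate by repeatedly choosing the most likely next word.
word = "light"
output = [word]
for _ in range(5):
    if word not in follows:
        break
    word = follows[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # -> "light travels in straight lines light"
```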

AI 1:

It almost sounds like they understand it, but the sources used a helpful way to think about it: like a black box.

AI 2:

Yeah, that's a really useful analogy. You put something in, that's your prompt. Something complicated happens inside the box, you don't see how exactly, and then you get something out: the AI's response.

AI 1:

Okay, and this is where it gets really critical for educators, right? Because the quality of what comes out hugely depends on what you put in. The sources really stressed this point. They really did.

AI 2:

A detailed, clear prompt makes all the difference. We saw examples, you know, like asking for a quiz.

AI 1:

Yeah.

AI 2:

Just saying "make a quiz about light" gives you something, well, pretty generic, right? But if you specify, say, "Create a 10-question multiple-choice quiz. Include the answer key. This is for UK Year 3 science, based on the national curriculum. The topic is light." Then you get something much more useful.
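
As a rough sketch of how that difference looks in practice, here are both prompts sent to a chat model. This assumes the OpenAI Python client and an API key; the client, model name and truncation are illustrative choices, not something the guidance prescribes.

```python
# The same request, vague vs. specific (illustrative sketch).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague = "Make a quiz about light."
detailed = (
    "Create a 10-question multiple-choice quiz. Include the answer key. "
    "This is for UK Year 3 science, based on the national curriculum. "
    "The topic is light."
)

for prompt in (vague, detailed):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model makes the same point
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the first few lines; the detailed prompt yields far more
    # targeted, curriculum-shaped output than the vague one.
    print(response.choices[0].message.content[:300], "\n---")
```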

AI 1:

Ah, much more targeted.

AI 2:

Exactly, and the sources even mention tools like Aila from Oak National Academy that are specifically designed to help teachers with this, generating material that's already aligned to the curriculum.

AI 1:

So it's not just the AI's power, it's our skill in asking the right questions.

AI 2:

Precisely. And that links straight to another really critical point from the sources: AI is not a human expert. Fundamentally, its output is just based on prediction from its training data. It doesn't understand things like a person does.

AI 1:

Which means you can't just take its word for it.

AI 2:

Exactly that. You always need human oversight. You've got to check the outputs. Are they accurate? Is there bias? Is it actually relevant for your students? Your context?

AI 1:

Right, because the sources warned about American spellings creeping in, or outdated teaching ideas.

AI 2:

Or stuff based on, say, the American education system. The AI doesn't know what's current or right for your specific school; it's just giving you the statistically most likely answer based on its training. And the sources are crystal clear: as the educator, you're professionally responsible for the prompt you use and for the output, how you check it, adapt it, use it.

AI 1:

So cross-check with your curriculum and official guidance, and use your own expertise. Indispensable.

AI 2:

Your knowledge is key.

AI 1:

So, understanding that input-output thing and the absolute need for human oversight, well, that immediately brings the risks into view, doesn't it? It does. The sources spend a lot of time on risks and safeguarding. One expert even said, you know, lots of opportunities, but definitely risks to navigate.

AI 2:

Yeah, and the advice was quite pragmatic: learn fast, but act more slowly. Don't feel pressured to just jump in and adopt everything until you've really understood it and figured out your strategy.

AI 1:

That feels like solid advice, especially when you see how much kids are already using this stuff.

AI 2:

It's vital. The sources mentioned an Ofcom study from 2024: something like 54% of 8 to 15-year-olds, and 66%, so two-thirds, of 13 to 15-year-olds in Britain had used generative AI in the past year.

AI 1:

That's huge.

AI 2:

It is, and over half of those 8 to 15-year-olds said they used it for schoolwork. So, you know, it's already happening. Educators need to be informed to guide it properly.

AI 1:

Definitely makes this conversation essential. Okay, so the sources flag some key risk areas. What's the first big one?

AI 2:

Exposure to harmful content. That's a major worry.

AI 1:

How so.

AI 2:

Well, generative AI makes it super easy to create really realistic images and avatars. So there are concerns about kids potentially seeing inappropriate stuff, or even grooming risks through chatbots that seem very human.

AI 1:

And this connects to wider security issues too.

AI 2:

It does, yeah. The UK's counterterrorism strategy was mentioned, highlighting the risk of AI being used to create and spread fake propaganda and deepfakes on a massive scale, potentially overwhelming the systems trying to moderate content.

AI 1:

So educators are kind of on the front line helping students spot fakes and misinformation.

AI 2:

Exactly. Teaching those critical thinking skills is vital: how to question what you see, recognize misinformation, challenge extremist ideas. That fits right in with the Prevent duty of safeguarding students, and good filtering and monitoring systems are obviously crucial too.

AI 1:

Okay, what's another risk? I know inaccuracy and bias came up a lot.

AI 2:

Yes. Because AI learns from these enormous data sets, often just scraped from the internet, its outputs can be wrong or biased, reflecting the biases already out there in the world: racial and gender stereotypes, that kind of thing.

AI 1:

And bias can be in the algorithm itself or even in the question we ask.

AI 2:

Both. Bias can be sort of baked into the algorithms, and there's definitely prompt bias: how you phrase the question can steer the AI.

AI 1:

Ah, like that example they gave about maths.

AI 2:

Yeah, that was a good one. Asking "why do students struggle with maths?" sort of assumes they do struggle, right? Whereas asking "what factors influence students' experiences with learning maths?" is more neutral.

AI 1:

It allows for successes and challenges. A much better prompt. It really shows how our own assumptions can shape the AI's answer, and highlights the need for critical thinking from us too.

AI 2:

And then there are hallucinations.

AI 1:

Where the AI just makes stuff up.

AI 2:

Basically, yes. It generates content that sounds plausible but isn't actually true or accurate, sometimes with real confidence.

AI 1:

So the takeaway, again, is: check everything.

AI 2:

Cross-reference with trusted sources, curriculum plans, official guidance. Use your professional judgment. Don't just copy and paste. Never just copy and paste.

AI 1:

Okay, then there's data protection, UK GDPR. That must be a huge consideration with student data.

AI 2:

Paramount, absolutely paramount. Under UK GDPR, if you process personal data, names, photos, assessment results, anything identifiable, you must have a lawful basis. Using those free, public AI tools is really risky because you don't control where that data goes. It might get stored somewhere else, maybe even used to train the AI model without your okay, and it could be accessed by people who shouldn't see it. And children's data needs extra special care. They have rights over it: access, correction and, importantly, the right to be forgotten, especially if they agreed to something as a child without fully grasping the risks. Using their data, particularly sensitive stuff, in general AI tools could be a serious GDPR breach.

So the really strong advice from all the sources is only use AI tools that your school or college has officially approved and provided.

AI 1:

The enterprise versions.

AI 2:

Usually yes, because those have been checked out. They've likely had data protection impact assessments done. They have better safeguards. The message is loud and clear Don't put sensitive or personal data into general AI tools unless your institution has specifically said it's safe after a proper assessment.

AI 1:

That sounds like a hard and fast rule. Okay, linked to data: intellectual property, IP infringement.

AI 2:

Yeah, different but related. This is about creative work: lesson plans, resources and, importantly, student work. Copyright belongs to the creator. So if you put student work, say an essay for AI marking, into a tool that learns from the data you feed it, you could be infringing that student's copyright, unless you have their permission, or their parents' if they're minors.

AI 1:

And the AI itself could spit out copyrighted material.

AI 2:

That's the other risk: secondary infringement. If the AI learned from stuff it wasn't licensed to use and its output then includes that, like copying text from another school's website or generating an image based on copyrighted art, you could be liable if you use it.

AI 1:

So the mitigation is: use tools that don't train on your inputs, get permissions, be transparent, and be careful sharing AI-generated stuff publicly.

AI 2:

Exactly, transparency is key, especially with student work.

AI 1:

Okay, the last big risk area they covered was academic integrity Students using AI for assignments.

AI 2:

Yeah, a massive challenge, and the sources were pretty blunt: those AI detection tools are just not reliable. They throw up false positives, unfairly flagging students, maybe those for whom English isn't their first language.

AI 1:

Yeah.

AI 2:

And they often miss well-hidden AI use anyway.

AI 1:

So relying on detectors isn't the way forward.

AI 2:

Not really, no. The sources strongly emphasize using your professional judgment. Knowing your students' usual work and spotting inconsistencies is much more effective. JCQ guidance is clear: work must be the student's own, and unacknowledged AI use is malpractice. But it's not just about cheating in the old sense.

AI 1:

It's about whether they're actually learning anything.

AI 2:

Precisely. If you just rely on AI, you bypass the actual learning, the critical thinking. Schools need really clear policies on AI use, and assignments need to be designed differently, perhaps focusing more on process and reflection, things AI can't easily fake.

AI 1:

And talking to students about it? Essential.

AI 2:

Discussing the risks, the ethics, helping them see that just getting an AI answer isn't a substitute for genuine understanding and effort. And again, giving them access to approved, safe tools helps guide them towards responsible use.

AI 1:

That's a really thorough rundown of the risks. But there's another angle the sources brought up, one that might catch people by surprise: the environmental cost.

AI 2:

Yes, this is sustainability. Generative AI has a surprisingly significant environmental footprint.

AI 1:

How so.

AI 2:

Well, these systems need huge amounts of electricity powering all those servers in massive data centers around the world.

AI 1:

Thousands of them.

AI 2:

Around 7,000 globally, apparently, and they need constant cooling. Altogether, they use more energy than many entire countries.

AI 1:

Wow, that puts a massive strain on energy supplies, even renewables.

AI 2:

It does. It makes it hard even for the big tech companies to hit their own carbon goals, because AI use is soaring.

AI 1:

And it's not just energy, it's water too.

AI 2:

Right, for cooling again. The sources mention an average large data center uses something like 2.1 million liters of water every single day.

AI 1:

That's staggering, and often in places already short on water. And even small AI tasks add up, like a search query.

AI 2:

Comparatively yeah.

AI 1:

Yeah.

AI 2:

An AI search can use maybe 10 times the energy of a normal Google search. Generating one image could use as much energy as charging your phone halfway. It really makes you stop and think.
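
Turning those comparisons into back-of-envelope arithmetic. The absolute figures below are commonly cited ballpark numbers assumed for illustration; they are not from the sources, and only the rough ratios matter.

```python
# Rough energy arithmetic (assumed ballpark figures, not source data).
google_search_wh = 0.3                   # ~one standard web search
ai_search_wh = 10 * google_search_wh     # "maybe 10 times" => ~3 Wh
phone_full_charge_wh = 12.0              # ~typical smartphone battery
image_generation_wh = phone_full_charge_wh / 2  # "charging halfway"

daily_ai_searches = 20
daily_wh = daily_ai_searches * ai_search_wh
print(f"One AI search: ~{ai_search_wh:.0f} Wh "
      f"({ai_search_wh / google_search_wh:.0f}x a web search)")
print(f"{daily_ai_searches} AI searches a day: ~{daily_wh:.0f} Wh, "
      f"about {daily_wh / phone_full_charge_wh:.0f} full phone charges")
```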

AI 1:

Is there any positive news on this front?

AI 2:

Mitigation? Well, tech companies are investing a lot in renewables, and there's potential for AI itself to help tackle climate change, maybe through complex modeling. But the immediate energy and water demand is definitely a big concern right now. The sources did mention, though, that smaller, maybe more efficient AI models are expected around 2025, which should help.

AI 1:

But until then, it sounds like we need to be conscious users.

AI 2:

Definitely. Just being mindful: do I really need AI for this specific task, or would a standard search, or a resource I already have, be more efficient and, well, better for the planet? It's about thinking about the wider impact of our choices.

AI 1:

Okay, so we've looked at what AI is, the pretty significant risks, the environmental side. With all that on the table, how did the sources suggest schools and colleges actually go about using AI safely and effectively?

AI 2:

Well, the DfE guidance is quite positive really. It says AI can genuinely transform things, help teachers focus more on teaching, but it needs safe, effective implementation and the right infrastructure. Leaders really need to grasp both the potential and the pitfalls.

AI 1:

Yeah, it sounds like something that needs a proper plan, not just, you know, buying some new software.

AI 2:

Absolutely. It needs a strategy. It should tie into your school's wider digital plan, your development plan. The guidance even suggests checking it against the DfE's existing digital and technology standards.

AI 1:

What kind of practical things should leaders be thinking about, based on the sources?

AI 2:

Quite a few key things came up. Obviously, ensuring you're meeting safeguarding duties; Keeping Children Safe in Education is fundamental. Making sure AI use aligns with your school's educational philosophy. Developing or updating policies on data protection, IP, safeguarding, ethics; that's crucial. Planning for any infrastructure upgrades needed. Setting up support teams. Evaluating tools before you commit. Monitoring how AI is being used. Maybe even setting up an AI steering group to guide the whole process.

AI 1:

I remember seeing some specific tips for college leaders from JISC too.

AI 2:

Yes, JISC had five clear actions. Lead by example: use the tools yourself. Set boundaries: clear guidelines for exploring AI. Invest in staff training: that's vital. Create an AI culture: encourage curiosity and critical thinking about it. And collaborate with industry: understand how AI is changing the workplace students are heading into.

AI 1:

It all sounds very sensible, but running through all of that, the absolute core message seemed to be about keeping humans in control.

AI 2:

That's the golden thread, absolutely. You always maintain human oversight. You never outsource your professional judgment, your thinking, your decisions to an AI. AI is positioned as a tool to support humans: support expertise, interaction, judgment. That human element remains totally central to education.

AI 1:

So it's about empowering people with AI, not replacing them.

AI 2:

Precisely. And the sources suggest a kind of phased approach to rolling it out. Explore first: assess your needs, check your tech, talk to everyone. Then prepare. Then deliver: train people, monitor how it's going, get feedback. And then sustain: embed it in your strategy, keep policies updated, review tools, keep talking about it. They even mentioned using audit tools to help figure out where you are now and plan the next steps.

AI 1:

Right, let's switch gears a bit. What does this actually look like in practice, in the classroom, in the school office? The sources gave some really interesting examples of safe and effective uses.

AI 2:

Yeah, they broke them down into supporting teaching, personalizing learning, and admin tasks.

AI 1:

Okay, supporting teachers first.

AI 2:

Some great examples there. AI generating lesson resources quickly, like that photosynthesis plan that included differentiation ideas. Creating quizzes from, maybe, a block of text you have. Breaking down complex text for different reading levels; simplifying that geography text was a good example. I liked the creative stuff too: getting AI to generate, like, a rap about the planets, or mnemonics, or maths problems set in a fun context, like that Avengers fractions problem. And drafting routine things: emails home, adapting the tone, helping draft policies, even helping plan the logistics for a school trip. Lots of workload reducers.

AI 1:

And personalized learning. That sounds like a big potential area.

AI 2:

Huge potential. Yes.

AI 1:

Yeah.

AI 2:

Helping teachers adapt resources for kids with specific needs, like that computer science lesson adaptation mentioned.

AI 1:

Okay.

AI 2:

Generating personalized learning plans, but always with a teacher overseeing it, because they know the child.

AI 1:

Right.

AI 2:

There was one really powerful example: an automotive teacher using an AI tool trained only on his own curated teaching materials.

AI 1:

Ah, so a closed system.

AI 2:

Exactly. Safe data. It meant students could ask it questions and get tailored info based only on what the teacher had approved. It could even generate podcast summaries for accessibility. Really clever.
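
One way to picture that kind of closed system is a minimal retrieval sketch that can only ever surface passages from teacher-approved notes. Everything here, the example notes and the TF-IDF matching, is an illustrative assumption, not a description of the actual tool the sources mention.

```python
# Minimal "closed" Q&A helper: answers come only from approved notes,
# never from the open internet (illustrative sketch).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

approved_notes = [
    "Torque wrenches must be set to the manufacturer's specified value.",
    "Brake fluid absorbs moisture and should be replaced every two years.",
    "Always support a vehicle on axle stands, never on the jack alone.",
]

vectorizer = TfidfVectorizer().fit(approved_notes)
note_vectors = vectorizer.transform(approved_notes)

def answer(question: str) -> str:
    """Return the most relevant teacher-approved passage."""
    scores = cosine_similarity(vectorizer.transform([question]), note_vectors)[0]
    best = scores.argmax()
    if scores[best] == 0:  # nothing matched the approved material
        return "Not covered in the approved materials; ask your teacher."
    return approved_notes[best]

print(answer("How often should brake fluid be changed?"))
# -> "Brake fluid absorbs moisture and should be replaced every two years."
```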

AI 1:

That's a fantastic use case, assuming all the data handling and permissions were solid.

AI 2:

Absolutely. Safeguards are paramount. What about admin tasks? That seems like a natural fit for AI.

AI 1:

Definitely. Things like getting first drafts of policies, summarizing long documents, checking policies against new laws, maybe helping with timetabling or structuring development plans. All stuff that could save a lot of time.

AI 2:

And using data for insights.

AI 1:

Yes, and this warning came up again and again: never put individual students' personal data into general AI tools. A critical point.

AI 2:

But using anonymized data with approved, secure tools can help spot patterns: attendance trends, performance across cohorts, like visualizing GCSE results against Key Stage 2 scores, perhaps, to see overall patterns rather than tracking individuals.
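
A rough pandas sketch of that "anonymize first, analyze cohorts second" idea. The column names and numbers are invented for illustration; a real school would run this inside an approved, secure environment with a lawful basis under GDPR.

```python
# Anonymize, then look for cohort-level patterns (illustrative data).
import pandas as pd

pupils = pd.DataFrame({
    "name":       ["A. Khan", "B. Jones", "C. Smith", "D. Evans"],
    "ks2_score":  [102, 96, 110, 99],
    "gcse_grade": [6, 4, 8, 5],
})

# Strip direct identifiers before any analysis.
anonymised = pupils.drop(columns=["name"])

# Cohort pattern: average GCSE grade by KS2 band, not individual tracking.
anonymised["ks2_band"] = pd.cut(
    anonymised["ks2_score"],
    bins=[0, 99, 105, 120],
    labels=["below", "expected", "above"],
)
print(anonymised.groupby("ks2_band", observed=True)["gcse_grade"].mean())
```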

AI 1:

So data analysis possible, but with extreme caution on personal info.

AI 2:

Absolute caution. Transparency and a lawful basis under GDPR are essential.

AI 1:

Okay, and finally, what about students themselves using AI safely?

AI 2:

Well, if the tools are provided by the institution with the right safeguards, like not training on student inputs and having monitoring in place, then students can use LLMs for things like research. That EPQ student example, using a college tool for research questions, was mentioned. But the key is teaching them to verify the facts themselves and to credit the AI properly. Digital literacy.

AI 1:

And giving them safe tools helps bridge that digital divide too right If they can't afford premium tools.

AI 2:

Exactly, and students can use AI creatively with guidance, like that image generator for creative writing prompts. Some places are being really upfront, explaining AI policies at enrollment, setting expectations early.

AI 1:

And there was that framework for writing good prompts, FACTS. Yeah, that's a handy one.

AI 2:

Focus the prompt: keep it clear and concise. Analyze the output: check it carefully. Check for bias: actively look for it. Tailor suitability: make sure it fits your context. And strengthen the prompt: refine it based on what you got back. A good little checklist.

AI 1:

So, wrapping this all up, what's the big picture from this deep dive? It seems AI offers some genuinely exciting ways to improve education.

AI 2:

Definitely. Easing teacher workloads, creating dynamic resources, personalizing learning.

AI 1:

The potential is clearly there. But, and it's a big but, using it safely means really getting to grips with the risks: safeguarding, data protection, IP, academic integrity, even the environment. These aren't side issues, they're central.

AI 2:

Absolutely. The core message really from all the sources is be strategic, be considered. Human oversight and judgment have to stay front and center. Be transparent. Use approved tools with proper safeguards. Develop clear policies.

AI 1:

It's obviously moving incredibly fast, this whole area, but it sounds like if we stick to those core principles, keeping students safe, focusing on real learning, we can hopefully harness the good bits responsibly.

AI 2:

That's the goal. Which, I suppose, leaves us with a really important question for you, the listener, to think about. As AI gets more woven into education, and we know young people are using it a lot already, how do we best equip everyone, students, teachers, leaders, with the critical thinking skills they need to navigate this complex new landscape responsibly, making sure the technology serves learning and not the other way around?