As artificial intelligence becomes increasingly embedded in classrooms, safeguarding teams and educators are being asked to balance innovation with responsibility. From generative AI tools that support learning and creativity to algorithm-driven systems shaping online experiences, the choices schools make today will have long-lasting implications for student safety and wellbeing.
In this podcast, we explore what safe and responsible use of AI really means in an educational context, cutting through the hype to focus on practical realities. The conversation examines the difference between generative and non-generative AI, why that distinction matters for safeguarding, and the critical role that visibility, policy, and proactive monitoring play in protecting young people online. We also discuss the evolving risks AI introduces, including exposure to harmful content, misuse of tools, and reduced transparency, alongside the opportunities AI offers when implemented thoughtfully.
With UK Safer Internet Day 2026 in mind, this conversation reflects the theme "Smart tech, safe choices: exploring the safe and responsible use of AI", reinforcing a simple but vital message: AI safety isn't a one-off conversation or an annual awareness moment. It's an ongoing commitment that requires informed decision-making, clear boundaries, and shared responsibility between technology providers, schools, and safeguarding leaders.