The Catalyst by Softchoice
A documentary-style podcast about how IT leaders tackle high-stakes transformations.
Each episode weaves together real voices, expert insights, and compelling narratives that reveal universal challenges and practical wisdom.
Season 7: "Small Teams, Big Dreams" explores the human stories behind IT transformations—from AI adoption experiments to burnout crises, from toxic job markets to infrastructure decisions that matter. These aren't polished case studies. These are authentic accounts from IT professionals navigating the same impossible gaps between expectations and resources that you face every day.
From Softchoice, a World Wide Technology company.
The School’s Broken Episode: What AI Did to Education—And Who's Fixing It
AI is transforming education—but not evenly, and not easily.
In this episode of The Catalyst, we step inside classrooms, school boards, and districts navigating the AI revolution with tight budgets, limited staff, and high stakes for students. From fears around cheating and data privacy to confusion over licensing and unused tools already paid for, this conversation reveals what AI adoption really looks like in public education.
Featuring educators, IT leaders, and policy thinkers on the front lines, the episode explores what schools are getting wrong, what’s quietly working, and why the biggest barrier to AI in education may no longer be money—but people and readiness.
You’ll hear from:
- Drew Olsson, Director of AI & Instructional Technology, Agua Fria Union High School District
- Sophie McQueen, Resource Teacher & Board Consultant, Conseil scolaire Viamonde
- José Antonio Bowen, Senior Fellow, AAC&U; Author, Teaching with AI
- Sandali Amunugama, Microsoft Education Specialist, Softchoice
Key takeaways:
- Why academic integrity fears are masking a deeper relationship problem
- How most schools already have AI tools they aren’t using
- What happens when AI costs drop—but training and trust don’t
- Why meaningful adoption spreads teacher-to-teacher, not top-down
This episode is a candid look at what it takes to move forward when guidance is unclear, expectations are high, and standing still isn’t an option.
—
Learn how Softchoice, a World Wide Technology company, helps public sector and education teams do more: softchoice.com/public-sector.
The Catalyst by Softchoice is the podcast dedicated to exploring the intersection of humans and technology.
So it started right after the ChatGPT moment in November of '22. At the time, I was still in the classroom teaching, and I think everyone sort of remembers their first 'looking at and seeing ChatGPT' moment. Mine was as a teacher. I was with a group of teachers, and we were teaching computer science. And we're looking at how it could do basically all of the assignments that we give our students, even at the AP level. What even is the purpose of school?

That question, asked by a computer science teacher in Phoenix staring at a screen, watching a chatbot solve the problems he'd spent years teaching his students to solve. That question is now ricocheting through classrooms, school board meetings, and parent-teacher conferences across the continent.

From Softchoice, a World Wide Technology company, this is The Catalyst. I'm Heather Haskin. This season we're doing things a bit differently. We're making audio documentaries, real stories from the front lines of IT, exploring the challenges of small teams chasing big dreams.
Today's episode: a look at what happens when a technology lands in the middle of an institution that's supposed to prepare young people for the future, and suddenly nobody's quite sure what that future looks like anymore. We're calling it the 'School's Broken' episode, which I'll admit sounds bleak, but stay with us until the end. This one lands in a hopeful, optimistic place. But before we get there, we need to go back to where all of it began.
Act One: The ChatGPT Moment.

Drew Olsson is the Director of AI and Instructional Technology at Agua Fria Union High School District in Phoenix, Arizona. It's a district of about 10,000 students across six schools. His title, Director of AI and Instructional Technology, didn't exist until a few months ago.

My job is to oversee our AI integration efforts across academics and operations, figuring out how it can best work for us.

But before he got that title, before anyone was even thinking about AI integration efforts, Drew had that moment, the one where everything he thought he knew about his job suddenly felt uncertain. He wasn't alone. In Toronto, Sophie McQueen is a board consultant for Conseil scolaire Viamonde, a French-language school board serving about 10,000 students. She handles data analysis, teacher training, and more recently, AI rollout.

It is a bit daunting, because you kind of feel that there's a whole security and confidentiality and protection aspect that you have to keep in mind when you're putting in policies and procedures regarding AI, especially if anything is related to students.

And then there was the other fear, the one that educators talked about in hushed tones, or sometimes not so hushed.
With AI, it also comes down to: if the students are using AI, they're gonna have the answer. So we don't want the students to use AI, because whatever the task is, they're just gonna give it back to us and it's not gonna be their work. It's gonna be AI.

Cheating. That word hung over almost every early conversation about AI in schools. Drew remembers it well.

In 2023, the main conversation was around cheating. And I know in a lot of places it still might be, but I think that was the most prevalent discussion item, especially with all of the newspaper articles that were coming out at the time.

Sandali Amunugama is a Microsoft Education Specialist at Softchoice. Her job is to help schools across North America figure out how to use the technology they already have. She's seen that fear up close.

Right now, based on the data we've gathered, schools seem really excited
about the potential of AI. Things like personalizing lessons, automating routine tasks, and creating more engaging experiences for students. But on the flip side, there's understandable fear around privacy, data security, and making sure AI is being used responsibly. And honestly, it can feel overwhelming. Teachers already carry so much, so adding 'figure out AI' to their day can feel like a lot.

In one of her webinars, Sandali ran a poll. 44% of educators said academic integrity was their biggest worry about AI entering the classroom.

It makes perfect sense that academic integrity is top of mind. Teachers aren't really afraid of AI. I think they are afraid of losing authenticity in learning. They often ask questions like, how do we make sure students are actually learning, not just generating answers or prompting? Or, how do we keep assessments fair? Or, how do we maintain the trust with the students? Because teachers are suddenly seeing assessments that are way too polished.
They're wondering: is this the student, or is this the AI? Underneath all of these things, there is a very human concern: they don't wanna lose the relationship.

They don't want to lose the relationship. That's what's really at stake here. Not just academic integrity, not just test scores. The relationship between a teacher and a student. The thing that makes education more than just information transfer.

Dr. José Antonio Bowen spent 20 years as a professor and 20 years as a university administrator. He's now a senior fellow at the American Association of Colleges and Universities, and the author of a book called Teaching with AI. He's been studying this disruption from every angle.

I think we're in a crisis where we want transparency. But in fact, what's happening is that people are afraid to disclose.
And so we have a lot of shadow use of AI, which is not good for anybody.

Shadow use. Teachers quietly using AI to write lesson plans, but not telling anyone. Students using AI at home, even when their teachers say they can't. Everyone pretending it's not happening, which means nobody's learning how to do it well.

There's a competency penalty. If you tell somebody that you used AI to create a syllabus, they grade it as, oh, that's less good, right? Even if it's the same. So we have research that actually shows if I show two people the same bit of work, but I tell one person AI was part of it, they rate it less highly than if I said, I did this myself as a human.
Act Two: Muddling Through.

So here's where things stood by late 2023: AI had arrived. Everyone knew they needed to do something about it. Nobody had a playbook.

Because there are no real government guidelines to protect all of this. And they're saying, well, you know, you have to have ethical use of AI and transparency and all of that, but there's no real, like, this is how you should be implementing it. So I think all of the different boards are trying to muddle their way through it and take the steps that they feel ensure that the students are safe, that their staff are safe.

Muddle their way through. That phrase kept coming up. Sophie says the Ontario government did eventually mandate AI training for teachers, but the guidance on how to actually do it, that was left to individual school boards to figure out.

Well, they're pretty much checking everybody's homework. There is some stuff that you can see online, but it's not very clear yet. A lot of the boards are getting together. They are also trying to put together guidelines and tools and resources for school boards, but it's still really early on, and nobody has it down pat yet.

Dr. Bowen has a colorful way of describing this moment.

The real barrier for most schools is FERPA, right, student privacy and student records. But the truth is you can do most of what you need to do with any of the good free tools, if you write the instructions. We're in this funny Ask Jeeves phase, you know, where it's like we have a whole bunch, we don't have Google yet. We have a lot of different products competing for the same space.

The Ask Jeeves phase. Before Google came along and organized everything, when there were dozens of search engines and nobody knew which one to use. That's where schools are right now with AI. And here's what makes it especially tricky for the people responsible for actually deploying this technology.

The problem is that AI is less like IT and more like a group of unruly students, right? It doesn't work like other technologies. We've never had a technology we're trying to deploy that responded to threats or offers of chocolate.

It responds to offers of chocolate. That's not a joke. There's actually research showing that being nice to AI or promising it rewards can change its outputs, which is not how servers and databases work. This isn't a typical IT deployment. It's something else entirely.

At Drew's district in Arizona, they decided they couldn't wait for someone else to figure it out. They had to jump in.

We had like butcher paper of feedback and ideas, but we didn't have a sure path forward. And so it was myself and the team, those two folks in particular, and a pretty supportive school district that basically said: here, go issue some guidance, kind of figure this out for us and really lead that.

Going into that fall of '23 semester, they created what they called the AI Stoplight framework, a simple system to help teachers communicate with students about when AI was okay and when it wasn't.

The first task that we undertook was coming up with some guidance around the cheating piece, so that's where we developed our AI Stoplight framework for teachers to put in their syllabus on the first day of school, so that they could be explicit with their students about the varying levels of acceptable use of AI in the classroom. And we said, as a school district, we'll default as a Red Bot: no
AI in the classroom unless explicitly told by a teacher. Then it goes to Yellow Bot, where it's doing some editing for you, some brainstorming, some revision, and then on down to Green Bot, where we wanted to see some innovation.

Here's what they learned quickly: having a framework wasn't enough if the teachers themselves didn't understand the technology.

We very quickly learned that the teachers weren't well versed enough in AI to be able to implement effectively. What we were finding was that the teachers that were more AI literate just implemented it so much better. They knew where the blind spots were gonna be. They knew where, you know, the AI was gonna sort of think for the students, and where they needed to push them to think for themselves.

And here's where things get really interesting, and a little alarming. Remember Sandali from Softchoice? She runs webinars helping schools understand what tools they already have access to. During one webinar, she asked participants a simple question.

These Learning Accelerators are a free set of tools included in the Microsoft 365 suite. Yet many schools either don't know they exist or haven't had the time to explore them. And what we have seen is that for schools with so much technology and so many priorities to juggle, these tools often get lost in the shuffle.

71% of schools in her poll said they weren't using these Learning Accelerators. 37% didn't even know they existed. Free tools, already paid for, sitting unused.

The awareness gap exists for a few key reasons. IT teams and teachers are stretched thin. Keeping up with every new tool is tough. Even when they know about a feature, they still need guidance on how to use it inside the classroom. And schools also vary widely in resources. What one district knows inside out might be completely new to another.

And there's another complication: the people showing up to learn about the technology weren't always the people making the decisions.

Another thing we recently figured out is that oftentimes the people who are joining our webinars are IT teams. The decision making sits with the academics or at district level. So we are now trying to reach a broader audience, including curriculum teams, instructional designers, administrators, so that everyone is aligned.

But here's the thing about the awareness gap. It's not just about knowing what tools exist. It's also about affording them, or at least that's what people assume. Dr. Bowen has a reframe on that.

The cost of a tool is not just the tool. It's the implementation. It's the training. It's making sure things work the right way. And people don't have a lot of money right now. Schools don't have the money they need, and so where do I find the extra money to do this is a problem.

Drew has some surprisingly good news on the actual tool costs.

Google now has their Gemini offering free to all schools that are Google shops, and we're one of them. So that's access to Gemini 3, that's access to NotebookLM. It even has an integration with Google Classroom, where a lot of our teachers interact with our students. So that's a free option. And ChatGPT also just announced they're gonna be offering a free option for teachers and school personnel that is free for basically all of their services. So, sustainably, the costs are gonna be going down.
So if the money problem is solving itself, what's left?

Really, the barrier to entry for any school district is the people, the resources, the will to learn, to get people to understand, 'cause those financial burdens are just slowly going away.

The people. The will to learn. That brings us back to the human side of this. Sophie in Toronto puts it simply.

In order to get to where we wanna be, which is, okay, let's educate the students, we kind of need to start with our teachers. The teachers need to know how to use it. They need to know how to use it efficiently, and how to use their critical thinking in order to analyze AI's responses. Once we have the teachers on board, then we can transfer it to the students.

But getting teachers on board, that's not always easy. Sophie admits there's some resistance. Some teachers are more than just resistant. They're philosophically opposed.

I have to say we do, and a lot of it comes down to, well, if they use AI, we're not developing their brain. They're not learning, they're just cheating. As opposed to, you know, embracing it. Because they may not be using it in your classroom, but they're using it at home. I can guarantee you that your essay was probably all generated at home using AI.

Sandali has a philosophy for how to handle this resistance. You can't mandate adoption. It has to be organic.

When I say adoption has to be organic, it comes from what I heard during my research. Teachers don't wanna be told, use this tool. They wanna see how it works in a real classroom with real students. So we need to start by identifying lighthouse teachers, curious early adopters who love experimenting. When they try tools and get a real win, the stories spread naturally. Teachers trust each other more than any mandate.

Teachers trust each other more than any mandate. That might be the most important insight in this whole story. Because if this transformation is going to work, it's not going to happen from the top down. It's going to spread teacher to teacher, classroom to classroom, one success story at a time.

But here's the trap Dr. Bowen sees people falling into.

So AI is changing much too fast. It's changing more quickly than any of us can keep track of. So there's a tendency to say, I can't keep track, it'll be different in two minutes, throw up my hands, not do anything. Or, you know, we don't have the money to put in the system, and if I put in the system it'll be out of date in three months, throw up my hands and do nothing.

That's one option, but there's a reason that's a bad idea, and it has to do with something that happened a long time before any of us were born.

Act Three: We've Been Here Before.

Here's something that got lost in all the panic about AI: this isn't the first time teachers have had to reinvent themselves. Sophie remembers this well.

Well, it's not the first time that teachers have had to reinvent themselves, because if you think about it, when calculators came out, they said, oh my gosh, calculators in math? There's no way, the students won't know how to add and multiply and stuff, right? And then it was like, calculators are gonna deteriorate students' brains. Well, it didn't, okay? It allowed them to do a lot more complex math, though, and a lot sooner. And then came the internet. When the internet came out, it was the same thing. Oh my gosh, they're just gonna Google everything. They won't know how to read and write. They won't know how to do this and that. And we had to change the way we taught.
Again, because all of a sudden we had a whole bunch of tools. You can go on Word and it will highlight your grammar mistakes, your spelling mistakes. So all of a sudden you're no longer teaching just grammar and spelling. You have to change the way you do things.

Dr. Bowen goes even further back.

Faculty resisted the typewriter. Plato resisted writing. You know, we've fought against technology for a long time. In fairness, AI is a very, very different type of technology than the calculator or other kinds of things. It is a bigger threat to human thinking, but it's here, and we have to figure out how to deal with it.

Plato resisted writing. Think about that. The technology that allowed us to record and share knowledge across generations, and Plato thought it would make people lazy, that they'd stop memorizing things, that their minds would atrophy. And honestly, he wasn't entirely wrong. We did stop memorizing long poems. We did outsource some of our memory to written records. But we also gained libraries and literature and laws that could be read and debated, and science that could build on itself across centuries.

The point isn't that every technology is automatically good. The point is that we've been here before. The fear is real, but so is the possibility. And underneath all the debate, there's a simple truth.

It's like the internet. It ain't going away. It's gonna be here. And we should be worried about its ethical and environmental and other uses, but we can't really contribute to that discussion if we're not also in the game.

It's not going away. So what do we do with it? Drew has thought about this a lot.

One positive impact of AI to really double click on is the idea of instructional support. Not every school has the ability to resource instructional support for students. You know, a teacher stands in front of a classroom, teaches all their students a concept. It's like, well, what happens after that lesson? And that's the ability for AI to work with students. There are now resources that can reach these students where they're at and work with a student at their own pace.

Sophie imagines the same thing.

What if we can use it as a personal tutor? What if a student could come in and say, you know what, I didn't understand my lesson today in geography, can you explain this to me? And the tutor would actually ask the student questions and get him to understand that concept better, and then validate whether or not the student has got it, and report back to the teacher after that: here's what your student just asked me, here's what they were able to understand, here's the next step for the student. Wouldn't that be great?

And then there's this, maybe the most hopeful thing Drew said in our entire conversation.

Thinking about a student who has such a hard time coming up with an essay, or like the first word of an essay: you can really work with an AI, the conversation that you can have with it, to develop an idea, to structure your ideas, and then to communicate your ideas. There's never been a time where you can do that faster and more effectively. For students for whom English isn't their first language, for students who may have learning differences, the ability for this machine to work alongside them to help them craft ideas and communicate those ideas is really exciting.

The blank page problem. Every writer knows it. The moment when you have something to say, but you can't figure out how to start saying it. A lot of people have a lot of good thoughts.
That is a hundred percent true. Some of them get beat down by, I don't know how to write an essay and I've been bad at essays forever, or, I don't know how to craft a good message that could get my ideas out there. And it's like, to start from this asset-based thinking, that all students have good ideas, that they just need whatever tools and resources to craft those ideas and get them out there.

All students have good ideas. They just need the tools to get them out there. That's not a threat to education. That's what education is supposed to be.

Sophie sees it too, a chance to move past the fear.

And that's when it comes down to, well, maybe that's not the way we should be teaching our students to use AI. What if they could use AI differently? What if we could use AI to build on their critical thinking, on their analysis skills and stuff? So it's that whole aspect of, let's take some of the negative connotation out of AI, because it's not going away. So how can we use it to educate the youth of today? Because they're gonna be using it whether or not we want, because they're already using it at home.

And Dr. Bowen reframes the whole conversation.

AI is a new cognitive technology. We've had others, right? Writing was one, you know, the compass, maps. AI is a new form of cognitive technology that will affect work, but also human thinking. But AI is a very different type of technology in that it's going to raise the average, right? It's the new average. If you, as a student or as a worker, can only do work that's as good as AI, why would I hire you? So we need to raise standards in classrooms so that students are able to graduate able to do things that AI can't.

The new average. Not a floor to stand on, a baseline to rise above.

Drew was asked a simple question: if you could press a button and make AI go away, would you?

Would I turn it all off if I could? I think the answer is no for that. Social media, maybe yes. But artificial intelligence, no. The genie's outta the bottle. This technology is here. Technology in general is here. We've had high-speed internet and cell phones and smartphones and all the things for, now, 10 to 20 years. And the question really isn't, is this gonna happen? The question is, how do we make this happen to the benefit of our students?

How do we make this happen to the benefit of our students? That's the question. Not whether to use AI, not whether to fight it, but how to use it thoughtfully, ethically, in ways that actually help young people learn.

Sandali puts it beautifully.

At the end of the day, it's not about choosing between AI and traditional learning. It's about blending them, using AI where it helps and still preserving the human experiences that spark curiosity, creativity, and character. For example, reading a physical book, flipping through its pages, and maybe smelling the new ink. For me personally, these experiences are irreplaceable.

Flipping through the pages, smelling the new ink. AI can do a lot of things, but it can't do that.

Sophie is optimistic.

I'm not necessarily so optimistic that it's leveled off, but I do believe that we haven't seen the edge of the curve yet. It's still gonna grow. Its use is still gonna magnify. And I think it's gonna be rough, possibly a year or two, but I think we're gonna come out of it better, for sure. Way better than we are today.

A rough year or two, but better on the other side.
Drew has a kindergartner at home, and he's already thinking about what to tell him.

That's what I'm gonna tell him: this is a tool that can empower you to do things that you want to do, just like a lot of other technologies that we use.

So where does this leave us? AI has landed in education. It's not leaving. The question now is what we do with it. The teachers, IT teams, and administrators who are figuring this out, they're not waiting for perfect guidance. They're muddling through: trying things, learning from each other, building frameworks like red light, yellow light, green light, running webinars to help people discover tools they didn't even know they had. It's messy, it's uncertain, but it's also how real change happens.

If you are an IT leader at a school or district, or if you work with public sector institutions, and you're hearing yourself in this episode, you're not alone. Constrained budgets, lean teams stretched too thin, legacy systems that weren't built for this moment: that's the reality. Softchoice has been helping public sector organizations navigate it for over three decades. Whether it's figuring out what tools you already have and aren't using, or building a strategy for cloud adoption and data security that actually fits your procurement reality, Softchoice works with education and public sector teams across North America to do more with less. Visit softchoice.com/public-sector to learn more, or click the link in the description.

The Catalyst was reported and produced by Tobin Dalrymple and the team at Pilgrim Content. Editing by Ryan Clark, with support from Philippe Dumas, Joseph Beyer, and the marketing team at Softchoice. Special thanks to Drew Olsson, Sophie McQueen, Dr. José Antonio Bowen, and Sandali Amunugama for sharing their experiences and insights.