The Boardhawk Podcast

Podcast season 2 episode 9: How AI should transform schooling, with Aurora's Antonio Vigil

Alan Gottlieb

Alexis Menocal Harrigan

Hi everyone. Welcome back to the Boardhawk podcast, and depending on when this is released, happy spring break to our DPS friends and family members. Actually, as we're recording this, right before Alan hit record, my 8- and 11-year-old started screaming at each other. So we wanna let you guys know we're not in a live studio, we're not in a sound booth. I'm sitting at my kitchen table here with kids running around, so I will not apologize for the noises you may or may not hear, or the possible wrestling that may occur in the background. Because the kids always know when to push the buttons, when you're on an important call. So we'll see what happens. Today I'm so excited to be welcoming one of my favorite people and talking about one of my favorite topics of the day. AI is one of the hot topics in education right now, and we want to hear from someone who is both thinking deeply about this and leading in this space. It's my pleasure to welcome and introduce my friend, Antonio Vigil, onto the podcast. Antonio was born and raised in North Denver and is a servant leader who has taught in, learned from, opened, and transformed or regenerated schools throughout the city and country over the past 26 years. Across these professional experiences, Antonio has dedicated his entire career to advocating for social change and transformation in public education through humanizing mental models and systems, culturally responsive expectations, and transformative leadership. As the Director of Innovative Classroom Technology at Aurora Public Schools, he leverages his experiences as an educator and school leader to develop proof-of-concept projects focused on agentic and liberatory design. Welcome, Antonio.

Antonio Vigil

Thank you. Thank you for the opportunity.

Alexis Menocal Harrigan

Thank you. So let's jump in. Antonio, you're someone who I think of as deeply rooted in community and culture, and who cares very deeply about how the children are doing. Can you just talk to us a little bit about your journey and how you approach your work broadly, before we get into the specifics of technology and AI?

Antonio Vigil

Yeah, absolutely. Without going into all of the excruciating details of my professional and personal experience, I wanna start with two quotes, because I think that'll help with our framing for our discussion today. In 1963, James Baldwin gave a presentation, "A Talk to Teachers," and in it he says: "I began by saying that one of the paradoxes of education was that precisely at the point when you begin to develop a conscience, you must find yourself at war with your society. It is your responsibility to change society if you think of yourself as an educated person." This is something that has resonated deeply with me as an educator, a school leader, and a school designer. The whole notion is that we have to fundamentally identify what is challenging and problematic systemically about schooling in its current iteration. That particular context and timeframe was one, obviously, of the bombings in Alabama, and of thinking about the treachery of what was happening in society; that critique is what emerged from it. His intervention in talking to teachers was creating this greater consciousness and the need for change, namely that we need to begin to think about redesign of our educational system. And so my whole career and my whole commitment to social change and transformation has really been through the educational system, reimagining, or attempting to reimagine, what that looks like, not only in terms of pedagogy, but also in terms of critical praxis. And so I think that's the first aspect of coming into this conversation.

Alan Gottlieb

Antonio, what you just said, that quote from Baldwin, is so provocative. I just wanted to ask you: that was 63 years ago, if my math is good. How much has education transformed, if at all, would you say, in those years since he said that to teachers?

Antonio Vigil

I think that, unfortunately, we see only pockets of innovation that have really transitioned us from Horizon One schools to Horizon Two schools, and there are very few schools that are really functioning at a high level with Horizon Three. To give our listeners a little bit more context: Horizon One schools are those traditional, transactional, depository-type learning experiences. Sage on the stage, everything is transmitted, and I expect students to give me back that information. Those are your typical, historical, status quo schools. I don't think there has been much change, fundamentally, in public schooling that really moves us beyond that. In Horizon Two you have some innovative practices, where there is some agentic learning within the classrooms, some co-design of the learning. But there are very few examples of that as well; they're sort of niche independent schools that are doing this. In terms of really big systems, like public school districts, there are very few out there that have been bold and courageous enough to really identify what the fundamental challenge and problem is within public schooling. When we think of those Horizon Three schools, where teachers are facilitators, where we're co-designing the learning, and it's much more agentic, I would say there are very few examples of that that exist systemically. There are pockets in different districts throughout the country, but those have been deliberate and strategic choices that have been built from the ground up and not from the top down. And I think that's a fundamental difference. Unfortunately, I would say there has not been a fundamental or transformative shift within public schooling since Baldwin leveled that provocative critique and intervention of public schooling at that time, given the political and socioeconomic factors as well.
From '63 to where we're at currently, we can say that the same sort of chaos and tumult, in these vulnerable and uncertain times, is very much similar. And his particular provocation would have just as much sway now as it did then, and even more so.

Alan Gottlieb

Yeah, I was afraid that's what you were gonna say.

Antonio Vigil

Yeah. I'm hopeful, I'm hopeful. I'm a critical optimist, and I feel like one of the challenges that we have, systemically, as educators and school leaders, is that it's difficult for us to really exert a sense of freedom and liberatory design within our work, because we are so tightly managed. Every year it's a new systemic point of accountability, which leads to a lack of clarity, a lack of alignment and coherence. And we're constantly dealing with having to acquiesce, giving up what we know is best for our young people in order to conform and adhere to these systems of accountability. And I think we have not really been fully transparent that this is not going to take a technical upskilling or re-skilling within the work. That's helpful, but we absolutely need to redesign, and we need to reimagine, what transformative learning looks like, because teaching is not synonymous with learning, unfortunately. And so when we come into this conversation around AI, if possible, I'd love to share a quote, because this is the second frame for the work we're talking about today, from one of our students. Her name is Malia, and she's a senior at one of our comprehensive high schools. She said: "I want to be able to create solutions that we can use that have an impact. I want to do that within my classroom. The kids that vibe with it and actually help design it, they're actually helping to create parameters for new learning. There are, however, very few opportunities that young people have today to leverage that curiosity and intelligence within their spaces. It's just like: do the assignments, do the work, right? You're just doing school. But a kid that wants to know more, we just limit ourselves, because the school is only interested in chasing after the right answer, the correct answer, which is such a low threshold."
It's a low bar, and Malia concludes that you have so many kids, unfortunately, who have to confront the death of curiosity because they're living a life through traditional schooling. That's one of our seniors, in one of our schools, talking about AI and how it is being posited, but more than anything, the narrative of being pathologized around its use, and how, unfortunately, in many cases the uncritical assumption about Black and brown children in these spaces is that the only reason we would use AI is because we want to plagiarize and circumvent the work. Malia is saying very clearly that we want to be much more curious, and we want spaces that provide us that curiosity, that move beyond the low bar and threshold of just chasing the right answer. And I think this is a very interesting parallel moment that has intersectionality with what Baldwin was recommending and suggesting: this curiosity is about developing our consciousness, and developing conscience about the way in which we understand the nature of work, and the way in which we understand the nature of our existence and coexistence within these ecosystems around learning.

Alan Gottlieb

It's refreshing to hear it talked about in this way, because from what I've read and heard about education and AI, it's mostly this sort of defensive-crouch posture, rather than: let's open up to what the positive possibilities here might be. And what you're talking about is that. So I would love to hear you talk a little more about how you would envision AI being used in classrooms, with students, by teachers, in a really productive way, getting out of that defensive crouch, paranoia, and assumption that it's basically just a massive mechanism for cheating.

Antonio Vigil

I think two things have to happen fundamentally, even before we begin to think about use cases and functionalities. One: Justin Reich, who's a really insightful professor out of MIT, talks about the necessity of thinking about what it is about pedagogy, and what it is about our transformative vision, for what schooling should look like in the first place. That should be the fundamental and foundational thing we hold first and foremost, before we even begin to think about tools and technology and everything else. He talks about debunking the way in which ed tech, and technology in general, has been hailed, and all the techno-optimism that comes as a result: "This is revolutionary. This is transformative." And yet he takes to task all of those critical inflection moments and really shows us that they were not as transformative and revolutionary as we thought they were going to be. That's complicated now, too, by the way AI is unfortunately being conflated with some of the social media engagement strategies associated with the big tech companies, as well as the AI frontier models. So I would say it's important for us to think about that first and foremost: what is the transformative vision that we want for schooling in the first place, before we even think about AI and technology and tools? Then, introspectively, what does that mean in terms of our own upskilling and re-skilling? The tool is not gonna help you if you haven't done the work, personally and professionally, to think about what the pain points and problems are. A technical solution is not going to help you solve that. If anything, it's going to exacerbate the conditions around the status quo and perpetuate it even further, and also perpetuate the oppressive systems that are embedded within it.
The second piece is that we have to stop conflating private sector AI with the way in which we are using it within the public sector, because there's all of this paranoia associated with it, and rightfully so: everything from data mining to data centers, the environmental impact, and a whole number of things around surveillance technology, privacy, et cetera. All of that is legitimate and holds sway, but we in the public sector have a deep responsibility to protect our users by way of data privacy and everything else. Part of that is also around strategic upskilling. You cannot give people a tool, you cannot give them a technology, if you have not done your due diligence around the way in which it is going to align with and support your definitive vision for what transformation looks like within schooling. Our measures also have to be much more restrictive and much more mindful about the way in which these are impacting our young people. And that comes back to the idea and notion of liberatory design: empathy for our users has to be the grounding foundation for the work that we do. If we lose sight of that, then we turn this into a debate over the technical aspects of it, but this is really about adaptive leadership rather than technical, and at times it's a combination of both. The challenge is that most people within these systems don't know which one they're solving for. They think that the mere adoption of a tool is technical in nature, but really it might be adaptive, in the sense that they're asking themselves to do something radically different in terms of learning and the way schooling practices function. All that to say: there are very specific things that I have seen that have been beneficial, and I can share those with you, unless you want to take us in another direction.

Alan Gottlieb

Well, there's just so much here. It's so rich, what you're talking about, but I guess I want to go back to that Baldwin quote and your statement that we really haven't made all that much progress since he said that, in 1963 I think you said it was. So what I'm wondering, and this is a very big question with no simple answer: what is it gonna take for our calcified systems to even begin to think about the kind of transformation you're talking about? What is it gonna take for us to start to move in a direction that's different from the rut we've been stuck in since God knows when?

Alexis Menocal Harrigan

And to add on to that, as we think about what it will take, I'd love to hear from you, Antonio, because you've been a classroom teacher, you've been a founding principal, and you are a leader in a large district in Colorado. What are the different roles and responsibilities of folks, depending on their profile? What do teachers need to be thinking about and doing? What do school principals need to think about, and what do admins need to be thinking about? Because we can have the same vision, but the way we're approaching the work may look drastically different. So just adding that layer onto Alan's question.

Antonio Vigil

Yeah, so I think one of the deep-seated challenges we have right now is an either/or thinking philosophy within public schooling. It's either going to be academically rigorous and cognitively challenging, or we're just gonna simply embrace our community, love our kids, and create an environment of belonging. So what happens is that we design our systems accordingly. We see a lot of systems, with a lot of great intentions, designed around these transactional models, which unfortunately are very much counterproductive to more humanized ways of learning, especially generationally. The generations coming up now, everywhere from millennial to Gen Z to Gen Alpha, are desiring and needing different ways of learning and different pathways. What we see is a lot of intergenerational impasses, where the generational gatekeeping that exists currently says: I had to walk the coals of fire by going to the Dewey Decimal system, looking up all of these books, going to the library, finding articles, doing deep research, taking notes on index cards, then taking those index cards, developing an outline, and writing my very lucid, targeted, and insightful five-paragraph essay; you will do the same. We have to let go of those mental models and really flex a bit more. And that doesn't mean that we have to acquiesce, either, into becoming techno-optimists: "AI is gonna transform learning." It has the potential, but only to the extent to which we can lean into the changes that we need to make as individuals. And what I'm saying is not untenable. Two examples currently exist within our broader ecosystem.
Look at what Superintendent Matsuda did within the Anaheim Union High School District, and what Superintendent Almendarez did at the Santa Ana Unified School District. What they did is they ultimately said, first and foremost: we are not going to calibrate our system on test results and school performance frameworks. They did that from a grassroots level. They built that consensus from the ground up, and they began to shift those mental models within their systems. They said that very clearly. They had to fight; they were tenacious about the change. But ultimately, they centered the experiences of students, asking: what is it that students absolutely need in order to be critically autonomous and independent citizens within these systems that are continuing to evolve? That includes not only the cultivation of content, but the cultivation of their durable skills within those spaces as well. They knew the current system was not providing their students with adequate, durable competencies and skills for the future workplace. So they redesigned these systems, imagining, through reverse engineering: what will it take to get our students fully prepared for the future that's already here? And so you have the example of Almendarez, who was very clear about the value proposition of AI in his district, and who created a very specific strategic plan about how AI would live side by side, fully integrated and used in specific ways. He gave very clear directives and worked from a grassroots perspective with his faculty, his staff, community members, and families, to build that human design in a different way and build a different system. Same thing with Matsuda: Matsuda had to go to work at a grassroots level to change this. That's what they did. They stopped chasing letter grades.
They stopped chasing all of the test results and assessments, and they went after the experiences that students needed and wanted. They went after centering the co-design work with students, teachers, community members, and the private sector around the workplace and CTE, and they also began working with families. And until we start opening things up in that way — like I said, this is not just theoretical. This has happened, and they have shifted the way learning happens in those spaces, and the progress that they've made academically is indisputable on the quantitative scale as well. It's not just qualitative; it's a both/and. So it comes back to that both/and philosophy: we can hold both things and have them be true. One is that we can create deliberate spaces with high cognitive demand, that really prioritize the sort of cognition that is very much future-forward and future-facing. AI is not going to be the transformative lever on its own, but it can be in certain instances. We have enough research and knowledge now to tell us that brain rot comes as a result of parsing out and outsourcing cognitive tasks from students. However, one of the things they did four or five years ago within those districts is they prioritized the type of cognitive work that teachers and students would do together, designed the system accordingly, and are iterating all the time. And so I think this is the type of example that we have out there. It's not something that's completely nebulous, it's not something that's completely esoteric. It actually has been done. I think we're in a situation right now within the state where it is hard for us, as administrative and district leaders, to have the courage and tenacity to stand up for what our true north is, which is our young people.
We get caught up, unfortunately, in so many political debates about what merits excellence, what merits success, what merits the way in which a system takes pride in its work. And I think oftentimes we lose sight of that very visceral experience that young people and teachers have within these spaces, a very delicate and fragile ecosystem, to really leverage relationships for the type of curiosity that Malia, as one of our students, expresses. So that's what I would say: those things fundamentally have to change. We have to move from an either/or perspective to a both/and philosophy. We can have the cognitive demand and high cognition, leveraged with prioritized AI uses and functionalities, and we can also build places of belonging that fundamentally center the intellectual curiosity and co-design that we want within these spaces.

Alexis Menocal Harrigan

Thanks, Antonio. We could talk to you for a long time about so many of these items, and I really appreciate that framing. I want to go down to the classroom level for a second. Well, first, lemme back up. One of the things that really concerns me in this space is a child who, as you mentioned earlier, is coming in very curious and creative, looking to provide social solutions, whether it's a middle schooler or a high schooler, or even somebody younger than that. Their experience in learning with AI, and learning how AI can be a tool for social good, may be drastically different teacher by teacher, classroom by classroom, depending on how the teacher is prepared to frame this, or is not using it at all. So my question to you is: could you give advice to a teacher right now who is AI-curious? Who hasn't necessarily done much beyond, maybe, some introductory testing of tools and models, and maybe has gone to a PD or something? What would you say to them if they're interested in learning this better, or maybe a little bit cautious? They're afraid of things like data privacy, for example. What I'm hearing a lot is fear. But then you have early adopters who may be going too far, and maybe doing some things that could potentially be dangerous or exacerbate bias, too. I'm just seeing all of these things. If you could just speak and riff on that for a little bit, talking to a teacher, what advice would you give?

Antonio Vigil

So I think part of it is that there has to be some fundamental grounding. Teachers need to have confidence in the way in which they are leveraging AI, by demystifying it. They need to understand what the tool is in the first place. That doesn't mean we need to take apart the watch completely and show them every single cog and every single lever within it, but they do need to understand how to tell time, which means they need to understand how these frontier models work on inference and prediction. They need to understand what the priority use cases are that really leverage that sort of cognition, that don't lead to brain rot, that don't lead to dependency in young people. And they have to have a very clear sense that they are protected within the system. There's a difference between an open system and a closed system, and there's also just giving them the assurance that, in this particular instance, for example, with our district, we only use closed systems. There's no third-party vending of data. The model is not improving because of the data; it's not being trained, it's not being outsourced. So I think that's the first and foremost thing. Then they have to have very specific professional learning and training about how to interact with it, and that's everything from not entering personally identifiable information, to not uploading spreadsheets of student data, a whole number of things that they need to have in the first place. The second is giving them professional learning, asynchronously or synchronously, around use cases, and then ongoing support. I think that is difficult and challenging for a district our size, but we have created and curated professional learning for our teachers that creates a baseline for guidance in the work. I will say that we have seen the greatest success with our multilingual learners and with our students who have unique languages. I'll give you one example.
A number of families came to us with a unique language, Pashto, and we had little to no people on site to actually serve as translators. One of the teachers at the elementary school level was able to use one of the AI systems to create the bridge: first, to just communicate within class with that particular student, then to find and source Pashto academic content, share it with the student, get their feedback, and determine whether or not it was suitable, whether it was aligned or not. Then he was able to communicate with the family. The family was able to understand, because the system was able to essentially translate much of the work and much of the areas of focus in real time. When that family came in, they already presumed that nobody was gonna speak their language; they already presumed that they were gonna be treated differently. They came for their parent-teacher conference, and the teacher showed them exactly what he was working on with that particular student. And they were able to use the audio feature of the AI system to communicate back and forth with the family. The family broke down in tears, absolutely astounded that the accuracy, more or less, was able to create this sort of bridge for them. And so that created a relationship for them, knowing that the school had done its due diligence to try and bridge that gap, where historically those students might have sat in silence, the way previous generations had, because somebody was not able to create an access point for them. So one, we created an access point for a unique language. Two, our families felt completely welcomed, like they belonged in the space, because the teacher was able to communicate with them in real time. Three, that particular student was able to access some of the content in his native language, was able to show and demonstrate progress, and was excited about learning.
That student is still an emergent speaker, and I've gone back to the school multiple times, and now that student is helping other Pashto-speaking students who are newly arrived in our district. And now they've created systems. That teacher took that practice and scaled it across the school, creating access points not only for himself in his classroom, but then coaching and training all the other teachers and giving them the practices. Now, when people with other unique languages come to our district, they have a systemic way of creating a sense of belonging, not only for that student in terms of the recognition of their language, but also giving them content and access to things that are going to continue to stimulate them academically. And I think without that sort of pedagogy, and without the specific protections and safeguards with the tool, that would not have been possible. We would've had to have a live interpreter in every classroom to achieve that at scale. And now that student has his own access point, essentially: in the classroom, the teacher has built a very safe and protected chatbot for that student that helps him translate in real time and express his thoughts academically.

Alexis Menocal Harrigan

I love that. That's, that's such a great example.

Antonio Vigil

So I would say, once again though, it comes back to this whole human design and liberatory design. If I, as a teacher, am only concerned about the academic work, if I don't have any sort of empathy for my young people, and I don't see the necessity to bridge that gap, I'm not gonna see AI as a potential solution. And if I'm a teacher who is so stuck on my traditional gatekeeping, I'm gonna be like: that student made a choice to come to the United States; they're gonna have to learn like every other immigrant population who came here, and they're gonna have to tough it out until they actually become conversational.

Alexis Menocal Harrigan

I wanna ask one other follow-up, and I'm gonna pivot a little bit, but it's connected. You mentioned the incredible power AI can have to transform the experience of a student who is an English language learner, or multilingual learner. I actually wanna think about the technical side for a second and zoom out of the teaching and learning for just a moment. Okay. So I have a family member who is an interpreter and translator in a medium-to-small-size district in California. In that region there are a lot of Spanish speakers, but also students from other countries who are also multilingual learners. When I was going into this AI summit, she asked me, "Hey, I would love for you to get advice on this." And I got some interesting feedback, and I would love to pose this question to you. She is probably on the older end of her team; there's a team of four or five translators and interpreters. And the district allows, I think, two different AI tools to be used where you can put student personally identifiable information. One of the things that takes them a long time, for example, as translators, is translating student IEPs into Spanish for families, and then sitting in those IEP meetings with the parents and interpreting with the teachers. They haven't gotten any training. So she did her homework. She found out: okay, I can use Gemini and I can use Claude, these specific tools. And so she started using them, and she found she saved a ton of time. She still fact-checked; she still double-checked everything to make sure that the errors it made were corrected. And so it posed this ethical question for her, which she then posed to me, which I took back to folks. One, she's a little bit concerned, because, like many districts facing budget crises: if this saves so much time,
is this going to cause the district, if everybody starts to use it and we're saving time, to cut staff?

Antonio Vigil

Yeah.

Alexis Menocal Harrigan

And I'm realizing my internet may be unstable. Apologies. Yeah, that's, shoot. Okay.

Alan Gottlieb

Okay. You were mostly there, like 98%.

Alexis Menocal Harrigan

Okay, yeah. So lemme back up a little bit. So then the question becomes the ethics of it. One, should she now share what she has learned with her peers and colleagues, who, funny enough, are much younger than her and haven't thought to do this yet? At least she doesn't think that they have, because she's finishing her work much faster than they are. And two, does she take this to her supervisor of the department and say, hey, here's this thing I've realized, here's some other things we can do, it's saving time? What feedback would you give to somebody like her, who's realizing there's this technical tool that is saving her time? And then what does she do from the ethics standpoint: share the information, or keep it internal? And there's this fear, too, because she's getting close to retirement.

Antonio Vigil

Yeah.

Alexis Menocal Harrigan

What does she do with that too?

Antonio Vigil

Alright. Keep me on track, 'cause I'm gonna get heady a little bit.

Alexis Menocal Harrigan

We love it.

Antonio Vigil

And then we'll move into the specifics. Great. First and foremost, we are in a symbiotic relationship with AI, which means that we have to decide whether it's going to be mutualistic or not. That means there have to be equal benefits on both sides. People talk about human in the loop; I refer to it more as human-centered, which means that there's a set of values associated with the way in which we leverage and use it. I think that, unfortunately, what we have underneath all of that concern is the unapologetic politics and economy associated with private-sector AI. They're unapologetic about the fact that AI, for efficiency's sake, is where we're headed. That gives us greater opportunity to do more strategic and deliberate work within the private sector, which of course is about the accumulation of capital and revenue and power and resources. They're unapologetic about that. I've met with multiple founders, have spent a lot of time in Silicon Valley, and I know for a fact that is precisely what they're working towards. They want greater efficiency. They want greater niche products that will allow them to eliminate staff members and eliminate FTEs within the private sector. So that tension is real. I understand it completely, and this is sort of coming back to the point that we can't always conflate that tension with the work that we do in the public sector. We have to maintain the integrity of this mutualistic relationship. I do think it's absolutely important to name that upfront, because in doing so, we maintain the human-centered approach, which means that we are indispensable in that process, because there are multiple nuances, especially with language. Language is culture, and culture is the way in which we see the world. Our interpreters are indispensable; we cannot function in the same way without them, even though we're leveraging the tool. As I said, it's mutualistic.
We cannot just simply deploy an AI avatar or an AI bot to capture the nuance around the human-centered values associated with bridge building. Having an interpreter within those spaces is the same as having someone who speaks Pashto in that learning space with our student. The teacher is the person who is the nexus. The technology is not the nexus. It enables the nexus, but it is not the nexus. It is the human, not the tool. And so I think it's important to systematize the way in which we are upskilling and reskilling, to name that, and then to maintain the integrity of the human-centered value for the work that we do. And I do think, if we're able to create more access points, and we are able to create greater efficiencies, and we're able to prioritize the relationships that are part of that human-centered perspective, then we need to scale those things and we need to systematize them and then come back to them. I think the problem is that we can't think of this as just another tool. People say, well, what's the difference between a calculator and AI? One is strictly computational; the other is inferential, it's predictive. There's a lot more interaction and engagement associated with it, and it's iterative. So I think that's what we have to do if we're going to upskill and reskill our systems. It's incumbent upon us to share the learning, but to keep coming back with informational feedback loops to determine: how does it get better? It can't stay static. That's the thing of it. With a calculator, I can input the same numbers for decades and I'm gonna get the same answer. It's not always gonna be that way with AI, because the LLMs, the frontier models, are always changing, and they're gonna become much more fluid and much more intuitive. We already know that's where we're headed.
I do think it's inherently important also, as an ethical stance, that we share that upfront with our families. And so that's what we're doing as a district: whenever we're leveraging AI in spaces, we wanna say upfront, this is what we're doing, this is how we're using it. And then of course we have ongoing legislation as well that is going to demand that we be much more upfront from a compliance-based perspective. But to your point, I think there's a certain ethics around owning that mutualistic relationship: systematizing it, scaling it, but also maintaining a systemic informational feedback loop that tells us how the user is experiencing this, how we are experiencing it, and how it is actually delivering on our true north, which is providing greater access and outcomes and learning experiences for our young people. Take that away and then imagine what it looks like, right? If you remove AI from that equation, she's less efficient, maybe less insightful, with more work and more bureaucratic things to negotiate, rather than streamlining and optimizing the work. So in that case, I would say it's a status-quo use, but the innovative piece of it is the way in which she's leveraging that mutualistic relationship. And good learning is worth sharing. Absolutely.

Alan Gottlieb

I guess the final question I have. I don't know, I have questions that seem like we could talk for two hours on each one of them, because this is so interesting. But one is: the upskilling and the reskilling is all obviously important, but does all of this mean we need to start looking in different places for educators than where we're currently getting them? Do we need a whole different pipeline going forward than we have now? And the other thing I wonder, going back to the Baldwin question, is whether the potential for that to be realized more fully is enhanced, neutral, or hurt by the coming wave of more sophisticated, I think you said fluid, AI systems.

Antonio Vigil

Yeah.

Alan Gottlieb

Big, big question. Sorry.

Antonio Vigil

So, two things, in the spirit of the both/and, Alan. Absolutely. If we're talking about an antiquated schooling system, and I don't know if I have consensus from everyone about that sort of evaluation, I do think we see cultural and institutional dissonance on a daily basis, and one of the benchmarks for seeing that is our average daily attendance. Look at anyone's ADA over the past decade and you can see it has plummeted across the nation. That's not just me; that's not conjecture. That's quantitative data around the lack of engagement and the lack of importance and relevance that schooling has for our young people. So I think, to your point, yes, we absolutely need, one, technically, to think about where we source teachers. How do we build a narrative around the gravitas of going into public schooling and public education? And I think, generationally, the challenge once again is that if I am someone who is critical, autonomous, creative, chasing freedom and not test scores, education may not be the most appealing profession for me, knowing that I'm going to be guided and hampered so much within that space. Systemically, it's not really providing an added value proposition for young people who are graduating from college, because the rote mechanics of it are very similar to working in another profession. Sadly, the profession has really lost its gravitas, to the point where the proposition that Baldwin makes in the intervention hopefully is not untenable, but it seems untenable in the spaces that we're currently in. So I would say yes, we absolutely need to think about where we can begin to source and create different alternatives toward certification for our teachers. The other piece is that when you look at teacher training programs, many of them have not changed at all.
So the stereotype of teachers literally going to their filing cabinet and pulling out manila folders of lessons: that metaphor is not really outdated, in terms of the way in which our teacher training programs are not adapting and not pivoting enough to the future that's already here. And so I think about the future-facing frameworks I spoke to you about, especially around durable competencies, and those skills have always been secondary. I think we're seeing a shift, and most teachers dislike this: it's not about the death of content, but content is becoming less and less relevant, because the current workplace is demanding that our graduates be much more nimble, able to lean into those durable skills around collaboration, communication, listening, problem solving, creativity. But we have a school system that is rooted in antiquity, that is all about transactional models. You don't show up to the workplace and hear, okay, I need you to solve this algebraic problem for me, or I need you to write a five-paragraph essay for me. Instead it's: we've got five problems today. Three of them, we know the technical solutions for. Two, we don't. You have to go work with all of these different people and collaborate, reach consensus, build a hypothesis, test it, and really iterate on that. We're not dealing with dynamic schooling systems, institutionally, and our teacher training systems are not aligned to that either. Pockets, yes. I know there are people from Stanford, MIT, Johns Hopkins, and other places who would say, well, absolutely we are. Yeah, pockets. There are pockets of exemplary and transformative learning happening in different spaces. But for the most part, when you look at our teacher training programs, they are not aligning to the demands that are absolutely necessary for the skillsets our young people need today, not on the horizon.
So, wholeheartedly: one is a technical fix. The other is the adaptive challenge that we have systemically, to start thinking about the reverse-engineering process. What does a portrait of a graduate actually look like? That's where Matsuda started in Anaheim; that's where Armand started in Santa Ana. They were the ones who piloted it and really were the first exemplars for the portrait of a graduate within California and throughout the country.

Alan Gottlieb

Fascinating. Again, I could ask a lot more questions, but I think we're probably up against time. Alexis, any final questions?

Alexis Menocal Harrigan

No, this was incredible. Antonio, I would just say like if you have any final words of wisdom you'd like to pass along to our listeners this, this is the chance.

Antonio Vigil

Yeah. I think that one of the ways I've been thinking about this critically is AI squared, and AI squared is ancestral intelligence with artificial intelligence, and thinking about how those two really work together. Because AI in and of itself, you can pathologize all you want. However, we have seen very clear examples in recent history where individuals, unfortunately, with very clear agendas are leveraging AI in different spaces for nefarious and questionable purposes. And because we don't have the same sort of regulations and governance that we see in the EU and other parts of the world, we have every right to be concerned. And I think people should not give up that critical agency around questioning AI and the way in which it's developing, and especially the way in which it's being leveraged and deployed in different spaces for unethical purposes. I do think, however, we have an inherent responsibility, because these systems are inevitably going to intersect with the notion of what our future community is, what our future community of schooling is. We can't simply neglect and ignore the way in which AI shows up in these spaces. We have a responsibility and due diligence to think about that. And part of that, for me, means we have to think historically. That's why I go back to Baldwin, because Baldwin is that ancestral intelligence that reminds us of the ongoing critique and intervention that's necessary. And we have to maintain that mutualistic perspective I mentioned: we have to understand that this is a tool, it is a technology. But ultimately, until we begin the critical work for ourselves around questioning systems and pedagogy and work conditions and labor, and what this means for the nature of schooling, I think it's not going to be productive for us to just continue to leverage a critique of AI without understanding, one, how to
become proficient and competent and critical users in the space, grounded in that sort of historical and transformative perspective. And two, it's important for us to very clearly understand, systemically, how we want it to show up to serve and advance pedagogy, not just outputs and results and grades and stoplights. Those things are important, but what's ultimately important, what we cannot lose sight of, are the inherent experiences that come back to Malia: where do we create deliberate and strategic places within our systems for intellectual curiosity, intellectual co-design, and co-agency? Ultimately, that's what I want to continue to leave us with, thinking about that sort of both/and philosophy. We have to do the critical work ourselves. It's not enough to just be a critic of AI. We have to be a critic of the same thing that Baldwin mentioned in '63. Once we know better, we better do better. Period.

Alexis Menocal Harrigan

Thank you. Wow. What a perfect way

Alan Gottlieb

to end.

Alexis Menocal Harrigan

Yes. The students of Aurora Public Schools are better served because you are in a leadership position that you're in. So thank you so much for, for your time. I hope so.

Antonio Vigil

Alright, thank you.

Alan Gottlieb

Well, thank you so much, Antonio. We'll probably have you on again in the future. We're gonna keep probing this topic because it's vitally important, and I don't see a lot of the type of deep thinking you're doing going on around it in a lot of public spaces, though I'm sure there is in private spaces. Thank you so much again, and we will be back, everyone, with another episode soon. I hope you've made it through to the end of this, because it's been fascinating and very thought-provoking. Thanks again, everybody and Antonio, and so long.