Equity Leadership Now!

6. The Potential of AI in Education with Chris Mah and Hillary Walker

Jabari Mahiri

Episode 6 Transcript: https://tinyurl.com/uk33kysh

The latest episode of the Equity Leadership Now! podcast hosted by Jabari Mahiri features two lively discussions on the future of AI in education, recorded one year apart. 

In March of 2023, Dr. Mahiri met with Chris Mah and Hillary Walker to discuss the growing influence of AI and the intersection of technology, education, and equity. Chris Mah, a doctoral student at Stanford University, and Hillary Walker, the director of the Bay Area Writing Project, provide insights on the potential integration of AI technology, specifically ChatGPT, into educational settings.

Chris Mah introduces generative AI and its integration into education, emphasizing responsible use, awareness of biases, and cultural relevance. He proposes educator sessions focused on tool comprehension, ethical considerations, and collaborative application design. Hillary Walker reflects on the Bay Area Writing Project's community-building legacy, promoting student-centered pedagogy and professional growth amid technological advancements. 

Professor Mahiri followed up with Chris Mah in June of 2024 to learn about how AI has evolved and its implications for educators. Mah identifies both optimistic and concerning trends. Optimistically, some schools are investing in AI literacy and integrating AI tools into learning environments. Conversely, less-resourced schools tend to react with bans and punitive measures, potentially exacerbating educational inequities. This disparity raises concerns about equitable access and educational outcomes regarding AI literacy.

Mah urged educators to embrace AI literacy proactively, despite initial trepidations. He advises starting small, experimenting with AI tools, and leveraging collaborative learning environments to foster responsible AI usage and equitable educational experiences. His ongoing doctoral research seeks to further explore these themes, aiming to inform effective AI integration strategies in education and promote equitable educational opportunities in the digital age.

Equity Leadership Now! hosts conversations with equity-conscious leaders from pre-K through university settings who transform structures and strategies for educating students, particularly for those from historically marginalized communities.

The Potential of AI in Education

with Chris Mah and Hillary Walker

Berkeley School of Education Leadership Programs, 21CSLA


Jabari Mahiri Host, Editor, and Producer

Brianna Luna Audio Editor and Production Specialist

Mayra Reyes External Relations and Production Specialist

Becca Minkoff Production Manager

Diana Garcia Communications Manager and External Relations

Audra Puchalski Communications Manager and Web Design

Jennifer Elemen Digitally Mediated Learning Coordinator

Jen Burke Graphic Designer

Robyn Ilten-Gee Editor and Media Consultant

Rian Whittle Sound Technician



Transcript


Brianna Luna  0:18 

Equity Leadership Now! hosts conversations with equity-conscious leaders from pre-K through university settings who transform structures and strategies for educating students, particularly those who are marginalized. We complement the mission and goals of the 21st Century California School Leadership Academy, 21CSLA.


Brianna Luna 0:37

Housed in the Leadership Programs of Berkeley School of Education, we acknowledge our presence on unceded Ohlone Land. We explore innovative ideas and compelling work of educational leaders at the intersection of research, policy, and practice, to realize individual, social, and environmental justice, because our democracy depends on it.


Brianna Luna 1:08

Our first interview with Chris Mah and Hillary Walker took place in March 2023, while our follow-up with Chris Mah occurred in June 2024. A music interlude will signal the transition between the two segments. 


Jabari Mahiri  1:21  

Hello, everyone, I'm Jabari Mahiri, your host for Equity Leadership Now. We have an exciting opportunity to have a conversation with two colleagues of mine, Chris Mah, who is a doctoral student at Stanford University in their curriculum and teacher education program, and also Hillary Walker, who is the director of the Bay Area Writing Project. 


Jabari Mahiri  1:44  

We're going to start by having them just sort of, tell us a little bit about their own positionality. I just like to start the podcast off in that way so we know the perspective from which people are talking. So Chris, how do you identify yourself?


Chris Mah  1:56  

Thank you for having me, Jabari. I'm Chris, I use he/him pronouns. I identify as a Chinese American male. One other thing your listeners should know about me is that I'm a second career educator. I actually worked in tech before teaching, so I'm bringing kind of multiple perspectives there. Thank you.


Jabari Mahiri  2:14  

And Hillary, how do you identify yourself?


Hillary Walker  2:16  

Hi, my name is Hillary, I use she/her pronouns. I have been an educator for nearly two decades. I am currently director of the Bay Area Writing Project. In terms of positionality, I think of myself as a Black woman, a parent, and somebody who's deeply involved in the field of education.


Jabari Mahiri  2:37  

So we've been having some conversations, the three of us, about the possibility of using the Bay Area Writing Project Summer Invitational Institute as a place where some of the considerations of this new AI that's out, called ChatGPT, might be introduced to educators from the standpoint of how they might viably use it within the context of their teaching and learning, as well as some of the potential problems and obstacles. 


Jabari Mahiri  3:08  

Now Chris has designed a very interesting set of sessions that would be engaging for these teachers. Chris, can you tell us a little bit about the sequence of sessions you envision engaging teachers with at the Bay Area Writing Project Summer Invitational Institute?


Chris Mah  3:22  

Yeah, absolutely. So right now, teachers are at the early stages of experimenting with this new AI technology. I would love to just work with teachers to kind of understand what they're already doing, and hopefully generate some new use cases. The first session would involve just an orientation to the tool, what it is, what it can do, what it can't do, and the ethics around it. 


Chris Mah  3:46  

In the second session, I would love to interview teachers, either individually or as a focus group, to understand what they already know about the tool, how they may or may not be using it, and what the current use cases are that teachers are using it for. 


Chris Mah  4:01  

In the third session, I would really like to engage educators in a conversation around some different artifacts that relate to AI. So I'd love to have them listen to some different interviews of people talking about the future of AI and education, and what that vision might look like. I'd love to have them look at a live demo of what the tool can already do, and have some conversations around what it might do in the future. And then I also want to put that in conversation with artifacts like the Common Core standards, ACT and SAT writing prompts, and things like that, and really push educators to think about how this new technology might change what kids need to learn and be able to do in order to be successful in the future. 


Chris Mah  4:42  

And then finally, after kind of understanding where the tool is, and what people are already doing with it, I want to run a co-design session with a group of educators to think about new use cases that you know, we might not be able to think of individually, but when we have some inspiration, we've seen what it can do, and we're working together to co-design, what new and novel ideas can we come up with? 


Jabari Mahiri  5:03  

So, Chris, we're working with teachers, and of course, in the Bay Area Writing Project, these are teacher leaders. We're also interested in leaders who are guiding the work of teachers, as instructional leaders in school sites, at the district level for policy considerations. So you've actually given us a template for our conversation today. Session one is going to be, what is this AI? Tell us just a little bit about what it is, what it can do, what it can't do. 


Chris Mah  5:34  

Sure.


Chris Mah  5:34  

We've been talking, and we will talk a lot about ChatGPT specifically, but I really like to think about generative AI more broadly. So generative AI is basically computers that can make stuff, and a lot of the stuff it's making right now is text-based. So this tool, it's basically a predictive model, or a probabilistic model, that says, given this sequence of words, what is the most likely next word? And in order to be able to do that, it's trained on just enormous amounts of data from the internet, and from a couple of other sources, too. 


Chris Mah  6:10  

So while it sounds like a very kind of simplistic way of generating words, and generating language, it can do all sorts of things. It can basically solve many, many problems that are rooted in language and in a lot of ways, imitate human speech and even approximate human thought. 
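
To make the prediction idea concrete: the next-word mechanism described above can be illustrated with a toy bigram model. This is an editor's sketch for illustration only; ChatGPT itself uses a large neural network over subword tokens, not raw word counts, and the corpus here is invented.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# each word in a tiny corpus, then predict the most likely next word.
# Real models like GPT-4 learn these probabilities with neural networks
# over enormous datasets; this only sketches the core idea.

def train_bigrams(corpus):
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most likely next word, or None if the word is unseen."""
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

corpus = [
    "students learn to write",
    "students learn to read",
    "teachers learn to teach",
]
model = train_bigrams(corpus)
print(predict_next(model, "learn"))  # "to" follows "learn" in every sentence
```

Scaled up from word pairs on three sentences to long contexts over much of the internet, this "most likely next word" loop is what lets the tool imitate human language.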


Chris Mah  6:28  

Some of the work I've been doing at Stanford has been with teachers, and you kind of hit the nail on the head with the big concerns that they expressed. So like other forms of technology, the tool does inherit the biases of the people and the data that it's trained on. That means that you know if it is learning from data on the Internet, it's going to learn sexist, racist, or otherwise toxic content. And while the companies that develop these tools are, to some extent, trying to build guardrails around that, I think it's very important for school leaders to kind of understand that it's also incumbent on people that use the tools to also use the tools responsibly. So teachers kind of need to understand like, hey, how do I prompt this machine to generate content that is safe for kids? How do I curate and review the outputs that it creates in order to make sure kids aren't being exposed to harmful content? And then just, you know, how do I use this tool in a way that kind of forces me to be a culturally relevant educator as well? So as much as we can focus on the limitations and the bias of the tool, we also need to be thinking about the people that are using the tool and how they can be part of that solution. 


Chris Mah  7:47  

I think the dream scenario is ChatGPT will automate and scale a lot of the work that teachers spend a lot of time on, but you know, it's sort of repetitive work… Creating materials takes a lot of time, but that time can be better spent talking to a student one on one, giving them one-on-one feedback, checking in with them, and building a relationship with that student. So the more that we can outsource some of that repetitive, automatable work to a computer, the more time teachers have to do the harder and more important relational work of connecting with students. 


Chris Mah  8:29 

So one of the big principles in the learning sciences is that, you know, people learn from examples. You need lots of examples, more than just one or two, and you also need contrasting examples. So the idea is, if you want to teach a young child what a circle is, they also need to know what a triangle is, and what a square is. And they really need to know what an oval is. 


Chris Mah  8:52  

And so if you can kind of take that basic concept, you think about things like, you know, in a social studies class, you might want to think about types of governments. In biology class, you might think about classifying different types of organisms. Examples are really, really crucial to helping kids build mental models of different concepts. Unfortunately for teachers, those examples take a long time to generate, especially if you want to give a long description. ChatGPT can just take a simple prompt, like, you know, give me explanations of five different types of government, and it can generate that within a matter of seconds. That can be super powerful for students and just save teachers a ton of time.


Hillary Walker  9:32  

Chris, I wanted to ask, you know, thinking about school leaders, do you have any thoughts about what they might need to know about guiding the use of this tool or this technology in their schools? 


Chris Mah  9:43  

Absolutely.


Chris Mah  9:44  

So I think one common response I've seen, unfortunately, is kind of a knee-jerk reaction to just ban the use of ChatGPT, ban the website from the school network. I think that's a short-sighted way of thinking about this. One reason is that this technology is not going anywhere. Before we know it, these types of technologies are going to be integrated into Google Docs, into, you know, just a web search browser. And so the orientation that I would like to see from school leaders is thinking about putting guardrails on it, while also thinking about ways to teach kids and teachers how to use it effectively to support their learning. I think that the field of AI literacy is just going to be increasingly important in terms of how we use this technology to our advantage, as opposed to just banning it outright. 


Chris Mah  10:39  

Some of the big things I'd love for school leaders to be thinking about are, one, how does this type of technology change what we teach and why? What is now valuable for students to learn? What do they need to learn how to do in order to be successful, given this changing landscape? Secondly, you know, I think there's always going to be some sort of a learning curve for teachers adopting new technologies. And while something like ChatGPT has been really intuitive and easy for teachers to use, there are different ways of interacting with the tool. It's called prompt engineering; it's basically, how do you write a prompt that results in an output from the tool that you actually want and that's useful? And there's a lot of nuance that goes into it; it's part art, part science. But in order for both students and teachers to use the tool effectively, and to mitigate some of the ethical issues we talked about earlier, really smart prompt engineering is going to be a skill that both teachers and students will need to learn. 
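
The prompt engineering mentioned above can be made concrete with a small template. This is an illustrative sketch by the editor, not a method endorsed in the conversation; the field names (role, audience, task, constraints) are one common pattern, not a standard.

```python
# Illustrative prompt template: a structured prompt tends to produce
# more useful, safer output than a bare request. The specific fields
# here are assumptions for illustration; prompt engineering is, as Mah
# says, part art, part science.

def build_prompt(role, audience, task, constraints):
    lines = [
        f"You are {role}.",
        f"Your audience is {audience}.",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="a middle school civics teacher",
    audience="7th grade students reading at grade level",
    task="Explain five different types of government with one example each.",
    constraints=[
        "Use age-appropriate, culturally inclusive examples.",
        "Avoid stereotypes and unverified claims.",
        "Keep each explanation under 60 words.",
    ],
)
print(prompt)
```

Spelling out audience and constraints like this is one way teachers can steer the model toward content that is safe and culturally relevant, the responsibility Mah places on the tool's users.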


Chris Mah  11:41  

And then finally, I think school leaders themselves need to understand the big picture of what the technology is, what it can and can't do. I think there's a lot of policy that's driven by fear. "Kids are going to use this to cheat" is the big fear that I'm hearing from educators right now. And that type of fear-based reaction, of, you know, we're going to ban this technology because kids are just going to use it to cheat, I think that's short-sighted too. I think it has some deficit assumptions built in around students. I don't think that's a productive way of thinking about this type of technology. There are a lot of analogies to be made between this moment and, you know, 2007, when smartphones were just coming onto the scene, or the late 90s, when the Internet was starting to become more mainstream. There are some really valid concerns about new technology. And at the same time, you know, we shouldn't let that stop us from also thinking about the possibilities.


Jabari Mahiri  12:46 

Chris has assured us that these new technologies, ChatGPT and Google's versions of it, are not going to eliminate human teachers. Now, Hillary, you've been the director of the Bay Area Writing Project over the last few years, and you've been doing an amazing job, by the way. So, can you talk a little bit about what you feel this organization has done with human teachers to, you know, change the landscape of teaching and learning in California specifically? And of course, we know we have a National Writing Project doing similar work across the nation.


Hillary Walker  13:18  

Yeah, I think I'll start with the very beginning of teachers' experience, which is the summer institute. Our summer institute is three weeks of really intensive work with a cohort of teachers who are selected based on their experience, their grade levels, and disciplines. And so we have this really well-rounded, really interesting group of educators who come together, talk about writing, do some of their own writing, and share their own classroom practices with student work. And what's really central to understanding how instruction happens is continuing to center that student work, and to center the conversation about it. And so, what typically emerges is that teachers walk away with, like, a whole toolkit of interesting thoughts, pedagogy, and strategies that they're getting from their peers. 

Hillary Walker  14:13 

Beyond that, it's really just a beautiful community of educators sharing ideas. So when things come up that are interesting, there are other people in conversation about it. This is, I think, a community that is really special, and also really primed to talk about things that they're curious about. It's really about inquiry: what's happening in my classroom, but also what's happening in the larger educational landscape, and we kind of just anchor that in writing.


Jabari Mahiri  14:41  

So teachers walk away with a bunch of innovative ideas, and they also walk away with a new toolkit. Chris, tell us a little bit about the fact that we're now getting another digital tool. What is the direction of these new innovations? Where is all of this going? We understand that ChatGPT primarily deals with text, but that it also has capabilities for multimodal integrations of mediums like visualizations as well as text, and potentially audio. Are we just at a moment where this is the newest thing right now, and two years from now there will be a whole bunch of other considerations that educators and leaders will have to take into account?


Chris Mah  15:29 

Yeah, I think technology moves so fast. Right now, ChatGPT takes text inputs and outputs text. But the tech that underlies it is able to do things like interpret images. So the technology that underlies ChatGPT is called GPT-4. That can actually take an image and describe what the image is, and it can even interpret the image. So the example I showed you earlier was a funny meme of a tray of chicken nuggets, but the chicken nuggets were shaped like a globe or a map of the earth. And so ChatGPT was able to describe it, but it was also able to say why this was a funny juxtaposition, because, you know, it's juxtaposing two unexpected things. 


Chris Mah  16:16 

And so I think about, like, you know, English language learners, for instance. The ability to pair text and image together in creative ways, to generate images using a tool like DALL-E, which is another generative AI tool, to be able to do those types of things can be a real game changer for teachers, particularly those working with language learners. You know, there's technology that's speech-to-text. I think that has a lot of potential too, if you think about the translation features that ChatGPT can do. If you can layer in speech-to-text with translation, all of a sudden you have the ability to "talk to", and I'm using air quotes, a robot that can translate between speech, languages, and text. That is a game changer as well.


Chris Mah  17:10  

I think the bigger picture is, in a couple of years, you know, the vision that a lot of people have been painting is that these types of tools have the potential to give every teacher a virtual teaching assistant and every student a virtual tutor. 

Jabari Mahiri  17:26 

In the 21CSLA 21st Century California School Leadership Academy, we work with teacher leaders, we work with site leaders, and we work with district-level leaders, and we understand that they're all instructional leaders. So, I want to just toss a little scenario out there. If you were a principal at a school, anywhere in California, what would you be thinking about now in terms of preparing your stakeholders, the teachers, the students, your community, your parents, for how they can best maximize learning, to be able to take advantage of all of the texts and tools that are now being presented to us to facilitate learning in the 21st century?


Chris Mah  18:17 

Yeah, that's a great question. You know, one thought I have is that there's going to be a lot of new learning that needs to happen specifically around AI literacy. There's going to be new learning that has to happen around how we actually use these tech tools because if teachers have new tools, but there's no kind of onboarding or training, or ramp up or ongoing support, they're not going to use them. Same with kids. 


Chris Mah  18:38

So, one thing I often think about is, anytime you're adding something new, what's going to get taken off the plate? Teachers are already busy, they're already strapped. And so introducing new priorities has to come with a re-evaluation of what can get dropped. I think a lot of forms of traditional assessment, for instance, will no longer be relevant. I think this is a moment that is ripe for reevaluating some of those priorities.


Jabari Mahiri  19:04  

I'm just interested personally, Chris. You are over at Stanford in doctoral studies in education, but you were formerly a techie yourself. Why did you make the transition from a very lucrative field, you know, technology and technology development, into education?


Chris Mah  19:22  

Yeah, well, I think the irony is, you know, I'm kind of finding myself in this ed-techie space again. And so I think, from a personal standpoint, I really enjoyed what I was doing working in tech, actually, on the marketing side, on the business side. I really enjoyed what I was doing, had great opportunities, met amazing people, and learned a lot. But at the end of the day, I didn't feel like what I was doing was connected to a purpose bigger than myself. 


19:52 

[transition to 2nd interview with Chris Mah]


Chris Mah 20:00 

Well, first, thank you for having me back, Jabari, it's great to see you again. I love working with BAWP. So your questions are super important. I feel like a year in AI time is like dog years. It's like seven years of normal time. I think I can answer your question in two ways. 


Chris Mah  20:15

First, at the very high level of, like, what's happening in AI, irrespective of education: I think there are a couple of big trends. One is just a lot more players in the game. So, you know, a year ago, certainly a year and a half or two years ago, ChatGPT was sort of the one big tool that everybody knew and could relate to, and was sort of the ubiquitous placeholder for AI. Now, all the other big tech companies have popped in. So Apple, I think, yesterday just had their big conference and released a whole bunch of new things that are going to bring AI to the masses of Apple users. You know, Google has their whole suite. Microsoft is partnered with OpenAI, the company that makes ChatGPT. Every single one of the big tech companies is now in the game. And then there are smaller players as well that are trying to get a slice of the pie. So at the industry level, that's a big trend. The technology itself is just dramatically more powerful. It's more multimodal. I say more powerful, deliberately not better, because those two things aren't always the same. But, you know, in the case of these tools now, they can see and they can hear. They use vision and voice a lot more than text. Hardware is changing, and companies are trying to make wearables to get us away from our screens. There are sunglasses that have AI, there are wristbands and pins. So I think the way that we interact with AI is going to look very different in a couple of short years. So that's at the industry level. 


Chris Mah  21:40

At the education level, you know, a couple of the big things I'm seeing, I would say, are one pattern that worries me and one that's more optimistic. So the trend that's worrying to me is, we see different approaches to AI literacy across different school types. You know, in schools with a lot of resources, they're investing in professional learning around AI, they're investing in technologies, they're investing in time to teach kids and prepare kids to learn how to use some of these new technologies. In schools that are less resourced, oftentimes we still see these kinds of knee-jerk bans, and we see, you know, more focus on rooting out kids that are using AI and punishing them. I think that's a really disturbing pattern. I think that type of bifurcation just creates two different educational experiences with AI and with technology, which is a real disservice to students and creates really inequitable outcomes. So a lot of the work that I do these days is working with teachers from all sorts of schools and districts and trying to stress the importance of AI and of just having some basic literacy around AI. 


Jabari Mahiri  22:56

So let me just intervene there. You've raised a couple of interesting considerations, and one is ethical considerations for the use of AI. But there also seem to be some ethnic or racial dimensions to the use of AI. In your work with teachers, and teacher leaders particularly, how are you helping to facilitate their understanding of both the potential ethical problems and considerations that they have to address, as well as the ways it might play out differently across ethnicities or racial categories?


Chris Mah  23:35

To your question, you know, one thing that we do when we do professional learning is we try to bring in what we call critical case studies. We ask teachers not only to, you know, explore tools and play with them, but we ask them to critique them. So some of the professional learning we do with schools and teachers involves having them literally use three or four different AI teaching tools. And then we ask them to unpack some of the assumptions that are built into them. What are those assumptions based on? What are the cultural norms around language, because a lot of these models are language-based? Or, you know, what are some of the assumptions about how students learn? 


Chris Mah  24:13

These tools are not always built by people who understand learning sciences or understand what a teacher actually does. So we want teachers to kind of understand, hey, if you're going to be using these tools, there are benefits, and there are limitations too, and we think that, you know, a hands-on approach, and an approach where teachers are working together to both explore how they can make use of these tools, but also to interrogate some limitations. That's kind of the approach that we've been taking in our schools. 


Chris Mah  24:44

So last year, Hillary Walker and I facilitated a sequence of three workshops where we worked with writing teachers to think about big questions around AI and writing. One of the really tangible things that came out of our work last summer was an academic paper that was just published in the journal Education Sciences. The core of that study was around academic integrity and what it means to cheat and to learn with AI in writing. So in that study, what we did was we worked with a group of BAWP teachers, and we worked with a group of high school students separately. We had them do a ranking activity, where we presented them with a couple of different examples of how a hypothetical student might use ChatGPT to write, so things like "help me make an outline," or "help me with the first two sentences, the hook of this essay." We asked our teachers and students to discuss those examples and to rank them in order of how much they thought each student learned and how much they thought each student cheated. Well, the big punchline of that study was, there was a lot of disagreement, both between groups and within groups. And so the driving argument of that study was, hey, if teachers and students aren't really agreeing on what it means to cheat with AI, or to learn with AI, that's a problem. Those conversations need to happen in the classroom between teachers and students. And so, with my lab at Stanford, we've been replicating that activity with teachers in the schools that we work with, and encouraging them to do that activity with their students, as a way of opening up honest dialogue about what it means to work responsibly and write responsibly with these types of tools.
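
One way to quantify the disagreement found in a ranking activity like the one described above is to compare how far apart participants' ranks are, scenario by scenario. The sketch below uses entirely made-up scenarios and rankings; it is not the analysis from the published study, only an illustration of the idea.

```python
from itertools import combinations

# Hypothetical data: each participant ranks the same AI-use scenarios
# from 1 (most like cheating) to 4 (least). Disagreement between two
# participants is the average absolute difference in the ranks they
# gave each scenario: 0 means identical rankings.

scenarios = ["outline", "hook_sentences", "full_draft", "grammar_fixes"]

rankings = {
    "teacher_1": {"full_draft": 1, "hook_sentences": 2, "outline": 3, "grammar_fixes": 4},
    "teacher_2": {"full_draft": 1, "outline": 2, "hook_sentences": 3, "grammar_fixes": 4},
    "student_1": {"full_draft": 1, "grammar_fixes": 2, "hook_sentences": 3, "outline": 4},
}

def mean_disagreement(a, b):
    return sum(abs(a[s] - b[s]) for s in scenarios) / len(scenarios)

for p1, p2 in combinations(rankings, 2):
    d = mean_disagreement(rankings[p1], rankings[p2])
    print(f"{p1} vs {p2}: average rank difference {d:.2f}")
```

Even in this toy data, the teacher-student pairs diverge more than the teacher pair, which is the kind of between-group gap the study's punchline points to.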


Jabari Mahiri  26:21

I know one of your mantras is that we should get away from the idea of learning to write and think about writing to learn. So help us think about what you just said in terms of both the problems of writing to learn in a new technological environment and, at the same time, what some of the affordances might be, setting aside the fact that there are going to be contentions around how much might be plagiarism, how much might be cheating, et cetera. And yet, you're still making the argument that writing to learn is central to what this technology is.


Chris Mah  26:51

There's a lot in that question; I'll try and pick off as much of it as I can. So, there are a lot of different purposes for writing. I think historically, we've treated writing as a mode of assessment, at least in schools. It's the primary mode of assessment, and that's been treated as its primary purpose, to assess how kids learn. Even before AI, I think that's a problematic set of assumptions. Writing is also a way of learning, of students working through their own thoughts, organizing them, and documenting them on paper for their own learning. And that, I think, has often been overlooked in schools. So as generative AI increasingly writes like humans, you know, it begs the question: how does this change the purpose of writing? If a machine can just write this and give me this information, and write this article for me, why do I need to do it? And that's a fair question for kids to ask. And it's a fair question for us collectively as a society to ask. 


Chris Mah  27:51

So, my hope, I won't go so far as to say I predict it, but my hope is that in education, we will start to shift our focus away from writing as the sole or main form of assessment, and towards writing as a way for kids to learn. So right before we talked, I just facilitated a demo workshop downstairs, where we talked about writing to learn. A big message that I have for all teachers across disciplines is that kids should be writing as a way of learning content, as a way of self-explanation, putting explanations into their own words, and that should be as important as writing as a way of demonstrating what they know. 


Jabari Mahiri  28:31

So how does the technology facilitate writing to learn? Yeah, let's keep it positive.


Chris Mah  28:37

Yep. So, I mean, I think there are a lot of different ways to answer that. You know, one is, when kids work with a tool like ChatGPT, it takes a lot of cognitive demand just to write a good prompt. And so, you know, there are teachers from last year's BAWP cohort that are building lessons in their classrooms where they're having kids co-write something with ChatGPT, but the teacher is more focused on how the kid is prompting ChatGPT than what it's actually, quote unquote, writing for them. So, in writing a good prompt, you need to think about purpose, you need to think about the audience, you need to think about style. And so, you know, for a kid to write a prompt, get a piece of writing from ChatGPT that they don't like, and then revise that prompt, that's a form of metacognition, and it's a way of making the student's thinking visible. 


Chris Mah  29:25

That's one small example. Teachers can also use Chat GPT or a tool like Chat GPT to create materials that help kids write. 


Jabari Mahiri  29:33

You talked about the way that the technology is moving to be more audio-based, more able to be interacted with verbally rather than through writing. Are we at a moment where the actual practice of writing may be getting diminished, because the power of the technology is pushing us to represent our ideas in other mediums? I'm not going to write a podcast out. I mean, is a podcast going to be more influential than the transcript of the podcast for someone who's trying to get the same information?


Chris Mah  30:09

I think that's a great question that everybody in this space is grappling with. I'll start with a positive perspective. One way to think about this kind of multimodal model is as a way of increasing accessibility. For students who might have specific reading disabilities, or are sight and vision impaired, being able to just talk to a machine and have it read something back to you opens up a lot of doors. So you could make an argument that it will have a lot of benefits for people who are limited by text-only interfaces. As for writing as a medium, again, a lot of this goes back to purpose. I think we might see some types of writing become less important; emails, for example, might matter less because it'll be easier to just communicate by voice and send voice messages. But there are other forms of writing that I think people will continue to value for writing's sake. I don't think the novelist will go by the wayside; people will always value art for the sake of art. I think a human writing a novel, or a short story, or some literary text will always have some value. 


Jabari Mahiri  31:27

Will we be able to distinguish, in the next five years, between a great novel written by a human and a great novel written by an AI bot, in the same way that we imagined chess masters would always be able to beat, you know, digitally mediated chess?


Chris Mah  31:46

That's already kind of happening in some arenas; we already have examples of AI winning art competitions. I think there's a difference between the quality of what AI can generate and what we value. I think the quality of what AI can generate will eventually get to a point where it's indistinguishable, and in some cases it already is. Whether or not we value that, and in what domains we value it, is a different question.


Jabari Mahiri  32:18

I think, and I'm not trying to challenge you on this at all, what we're saying, and what I agree with, is that the human element is always going to be important, and AI, at whatever level the technology is, just enables additional ways that humans can engage in creativity, production, or whatever it is.


Chris Mah  32:39

I think AI is different in the sense that, like, this type of technology is so powerful, that the consequences of getting it wrong are higher.


Jabari Mahiri 32:49

So Chris, given how important AI literacy is in this moment, and considering the equity concerns as well, what would your advice be to educators who have trepidation about exploring and learning the AI landscape?


Chris Mah  33:04

Yeah, I'm really glad you brought that up. I've had the privilege of working with a lot of teachers this year, and trepidation is definitely a valid emotion. I've seen it on two levels. The first level of trepidation is just fear over AI generally. And to that, I would say, AI will do some bad things in the world, and it will do some good things in the world, and the degree to which it does each really depends on people and what people use the technology for. 


Chris Mah  33:32

So teachers are on the front line of this. And I would really emphasize to teachers that we need more teachers doing the good things with AI to outweigh the inevitable bad things that will happen. Teachers have agency in this; that's one level. The second level of trepidation is, oh man, I have to learn something totally new. I get this a lot from more experienced teachers, teachers who have hang-ups about technology, or who say, hey, I don't want to learn this entirely new skill that's foreign to me, that's out of my lane. 


Chris Mah  34:04

To that, I would say a couple of things. One is that AI literacy does not mean you have to learn computer science or become some sort of software engineering genius. I don't have a technical background, but I know enough to use AI, to evaluate its outputs, and to understand what I can do with it that's powerful and what its limitations are. That basic level of understanding is AI literacy; you don't need to have the technical knowledge. Then the practical advice I would give is to start small. There's a proliferation of tools out there, but choose one tool, like Chat GPT, which is free, or Claude, the chatbot from Anthropic, which is one that I really like because it focuses more on privacy. Choose one tool and just play with it. Maybe find a partner in crime, find a student who knows a lot about it, and just sit down and start playing with it. Think about problems you encounter in your everyday life; those are a good starting point for thinking about how you can use the technology to help address them. So those are the two things I would say: use your agency for the side of good and not evil, that's one level, and the second level is to start small and explore.


Jabari Mahiri  35:20

Thank you, Chris. In closing, tell us about your dissertation work. I know that some of the things you've been doing with us at the Bay Area Writing Project are part of it, but when it's completed, let's say two years from now, what will this piece of research be about? And what do you feel will be its contribution?


Chris Mah  35:39

So, Hillary Walker and I actually completed part of it, which was the study I mentioned before on students' and teachers' perceptions of cheating and learning, so that will be part of my dissertation work. A second part will be a follow-up with two teachers from last year's BAWP cohort who had very different views about AI, one more on the pessimistic side, one more on the optimistic side. Both of them did some AI literacy work with students in their classrooms this year, and they had different motivations and different tactics and strategies for how to do it. So I'm going to do a comparative case study between those two teachers and really investigate: what are the elements of AI literacy that each of them brought into the classroom? What were their motivations, and what were their approaches to working with students?


Jabari Mahiri 36:25

Chris Mah, doctoral student at Stanford University. Thank you so much for joining us on Equity Leadership Now!


Chris Mah  36:30

Thank you so much. I really appreciate it and always a pleasure to talk to you.




Brianna Luna  36:40  

Our podcast team includes Jabari Mahiri, Brianna Luna, Mayra Reyes, Becca Minkoff, Diana Garcia, Audra Puchalski, Jennifer Elemen, Jen Burke, Robyn Ilten-Gee, and Ryan Whittle.