Campus Conversations

Teaching with Technology: AI's Role in Modern Education

Season 2 Episode 2

This week, we sit down with Eric Hudson, a leading expert on AI in education, and a board member of the Association of Technology Leaders in Independent Schools (ATLIS).

We dive into how artificial intelligence is reshaping the classroom, with insights on how AI can enhance teaching, learning, and administrative processes, while also addressing the challenges and ethical considerations that come with these advancements.

Check out Eric Hudson's work on AI education on his website.

Follow the AISA:

Facebook
Instagram
X
YouTube
AISA Website

As you see how fast it's growing, and you hear from these teachers who are saying, you know, maybe, maybe not, I'm not sure yet, we'll wait and see. Does it seem like AI in education right now is something where we all just need to get with the times? Or do you think there's a little more time for us to take that wait-and-see approach, to see if some of these difficult questions get answered?

Even if you don't want to engage it as a tool that you use in your classroom, you still have to engage it as a current event, as something that is happening to you and to your students and to your school.

Welcome to another episode of Campus Conversations, brought to you by the Alabama Independent School Association. This week we're diving once again into the world of artificial intelligence, AI, and its impact on education. And if there were any hopes that AI would not find its way into all areas of our lives, those hopes are probably fading pretty quickly right about now.

It's hard to escape AI, especially online, whether that's on social media, in online learning, or in the classroom. While the uses of AI are becoming clearer, there's also a very clear arms race of sorts among companies and developers to create the best AI, maybe leaving some feeling overwhelmed trying to figure out what they should use and how best to use it.

Our guest this week is Eric Hudson. He's an expert in AI who's dedicated to helping educators harness the power of this technology in the classroom. He brings a wealth of knowledge and experience, and he joins us now to guide us through how AI can enhance teaching, streamline processes, and open up a whole bunch of new possibilities for both teachers and students.

Eric, thank you so much for being with us today.

Thanks for having me.

So first, to kind of start us off, we were talking a little bit before we pressed record here, but give us a quick intro of yourself: what your role is, and how you are working right now to assist teachers with this technology.

Sure. So I'm an independent consultant. I've been on my own for about 18 months, so really through the height of all this AI stuff. But I'm a teacher by training. I spent 12 years in the classroom, teaching mostly middle school and high school English at independent schools in New England, where I live. And then I spent ten years at Global Online Academy, which is a nonprofit organization that does passion-based online courses for high school students and PD for teachers and school leaders.

And you know, the throughline in my whole career has been: how can technology improve the work of teaching and learning? And so in my work right now, I spend a lot of time with schools, teachers, administrators, parents, and boards wrestling with that question. As organizations dedicated to teaching and learning, what are the things that we need to adjust in response to generative AI?

But also, what are the things that don't need to change? And so I'm doing a ton of direct work with teachers in trainings, and then also strategic work with boards and school leaders.

Correct me if I'm wrong here, but it really feels like right now teachers, and maybe just people in general, are either fully on board with AI or completely against it. You know, they're on the opposite end of the spectrum; they just don't trust it, they're not messing with it. What are you hearing from the people that you're working with?

These educators, these board members, about artificial intelligence? Where do they kind of fall on the spectrum? Or are you seeing some people that are kind of more somewhere in between all of that?

Yeah, it's all over the road. I mean, a couple of things to know about this: it's only a little more than two years old, so we're still in very early days. So I meet people who have never tried ChatGPT or any of these tools before. They've kind of just let it wash over them for now, right?

I definitely meet people who are actively resistant to the technology; I've heard the term "neo-Luddites" get thrown around a lot. I've seen people who are fully embracing the tools, using them all the time. And especially with classroom educators, most of what I see is a balance of skepticism but also hope, in the sense of: oh, is this something that could actually make our lives better?

Or is this something that's going to make our lives worse? And so it's a real range. And I have to say, when I speak to parents and when I speak to students, it's the same range. You know, everyone is wrestling with the complexities of the technology right now. And so I don't think there's any kind of clear point where schools are at right now.

And like you said, this technology has just really exploded, and it kind of came out of nowhere for some people, at least people who maybe were not super in tune with the tech world and the latest developments. But these past two years with AI, from where it was to where it is now: every day there's something new.

But as you see how fast it's growing, and you have these teachers that you're hearing from who are saying, you know, maybe, maybe not, I'm not sure yet, we'll wait and see. Does it seem like AI in education right now is something where we all just need to get with the times?

Or do you think there's a little more time for us to take that wait-and-see approach, to see if some of these difficult questions get answered?

Yeah, that's a great question. I think you have to engage. And so when I present to faculty, when I speak to groups, I talk a lot about engagement over adoption, right? Like, this is not about embracing AI fully; it is about understanding what it is. There's a group of researchers at MIT who call it an arrival technology.

And basically what that means is that, kind of like the internet and smartphones before it, this is a technology that has already disrupted society, already disrupted education and industry. And so even if you don't want to engage it as a tool that you use in your classroom, you still have to engage it as a current event, as something that is happening to you and to your students and to your school.

And so that's the level I try to get faculty to engage with it on. I don't think the right strategy is to wait for things to settle down, whatever that means, and sort of see what happens. I think you do need to dig into it now.

Well, and I want to talk a little bit more about your approach to how you try to help people get their minds wrapped around all of this, because in some of my research, looking into you and your website, and just kind of a 30,000-foot overview of how you do things: you focus on a human-centered approach to generative AI.

I'll read off these four points, and then I want you to dive into them a little deeper for us, please: augmentation over automation, literacy over policy, design over technology, and vision over decisions. If you don't want to remember all that, I'll link your website, Eric, in our podcast description. But walk us through that four-pronged approach: augmentation, literacy, design, and vision.

What does that mean and how does that shape how people learn about this technology?

Yeah. So those four priorities came out after I'd been doing this work for about a year. And what I was noticing is that schools were struggling to figure out: what is the approach to this that will allow us to do this over the long term, as opposed to just reacting every single time a new tool comes out?

And so I'll take them one by one. Augmentation over automation is the idea that the best use of this tool is for a human to partner with AI to do more interesting, more efficient, more effective work, right? I think we're very focused on automation, in the sense that generative AI can do stuff for us: kids are going to use it to write their papers for them, or teachers are going to use it to grade papers for them. When in fact, all the research shows the most effective uses are when a human being has an idea, or recognizes a challenge or an opportunity, and then thinks about how AI can be used to augment their capabilities to address that thing, right? And I think that's what we should be teaching kids and adults in our communities: augmentation, not automation. Literacy over policy is about how hard it is for schools to articulate effective policies around a technology that is evolving so quickly. You know, you think that you can control something like this, but that's not really the right approach.

And so I advocate for a literacy-forward approach, which is actually teaching students and adults about the technology, helping them become more literate about it, understanding its potential, its pitfalls, the risks, whatever, so they can make more effective decisions about it. I find schools really try to get policies in place to control behavior, when in fact teaching people about it is probably more empowering and more effective.

Design over technology is the assessment and teaching-and-learning piece. The best way to react to AI as a teacher doesn't necessarily mean using AI, but it does probably mean you have to redesign or rethink some of the stuff you've been doing in your classroom, right? Some of that stuff may involve using AI.

Some of it may not, but you have some design decisions to make. And then the last one, vision over decisions, is really about what your school's position on this technology is. Does your school believe that this is a technology that can really enhance your work, or not? You know, I think both positions are valid. And so before you try to make a lot of decisions about what to do about generative AI: have you articulated the longer-term vision, or belief statements, about the technology at your school?

And that is something that I think is really important to a lot of our member schools, and schools everywhere: figuring out how to use this while also making sure that you're staying true to your core values and your mission statement, especially when it comes to student learning and the use of AI in the classroom.

I know one of the maybe more common concerns is: if I assign my student an English paper, how am I going to know for certain that they're not running it through some sort of AI that writes it for them?

And I know that there are things like Turnitin and different tools that can detect AI. But Lord knows, every time you find one way to catch something, there's another workaround that blows up on TikTok or somewhere else, you know what I mean? So how can teachers figure out ways to have a little more confidence in teaching things like this, and make sure that they're still giving students a quality education where, yeah, they are learning how to write, not just how to tell an AI to write it for them?

Does that make sense? And how can teachers really feel good about that right now, in this landscape?

Yeah. I mean, you're putting your finger on probably one of the most complicated issues with this, which is: how do you know when a student's work is totally their own? I mean, you mentioned those AI detectors, like Turnitin or GPTZero or whatever they are. Those are deeply flawed technologies. They are not accurate. They generate a lot of false positives.

And so I actively discourage schools from onboarding or using tools like that to look at student work. They're not going to give you trustworthy results, right? But that raises the question: what do you do if there's not a technological solution? I've seen teachers respond in a lot of different ways. First and foremost, I think the question is really about trust.

How much do you trust your students? And if you have that sense of trust with them, engage them in an open, transparent conversation about the technology. Ask them what they think of it, what they know about it, what their experiences are. Ask them how they feel when they use it. Co-construct some classroom guidelines with them.

Help them help you articulate some boundaries around use of AI. I think that's the most important thing teachers can do. And then, you know, there's more sort of day to day stuff I'm seeing, like, for example, teachers who used to assign a lot of sort of writing process things for homework, like writing a thesis statement, writing a body paragraph, curating evidence from a text.

I'm seeing them move a lot of that stuff into class. So asking students to do that work, maybe with a small group, or doing it by hand in class so they can supervise the students and they can actually put the students in the position of really having to come up with their own ideas. The teacher can feel confident that the student has come up with their own ideas and kind of monitor the process.

So when they do let the student go and write the essay at home or on their device, again, they have that increased sense of trust and confidence. And when students have invested that energy on the front end, they're much less likely to go to generative AI. You know, I think it is a myth that every single student in all of our schools wants to use, and is using, generative AI all the time.

There's no data to support that. In fact, there's more data to support that students are really eager to engage with adults and learn about what is appropriate and inappropriate use.

And I know there are, of course, a lot of those questions about the ethics behind it. And like we said before, a lot of this is still a wait-and-see process, and that, I think, is really difficult for a lot of people, because we want those answers right now. We want to know: should I, or shouldn't I?

And really, it is up to all of us to figure out that question, maybe collectively together, which can be stressful, but it's also a little fascinating to watch how we, as a society, figure this all out. But I know one thing that probably feeds into the anxiety and these feelings of being a little overwhelmed with all of this: there are so many AI tools out there.

There is an AI for everything. Every application you open up, there's now some AI feature in there, and everybody is racing to get AI into their products and in front of people. Where do people start when they're asking themselves: you know what, I know AI is a thing, and maybe I want to streamline my lesson planning, or I want to figure out how to more efficiently grade my students' homework?

Where do they go for their specific need as it pertains to AI? There's no app store or Google Play for AI where you look up the thing that you need. So where do people start? Because there are so many options out there.

Yeah. And it's only getting more and more saturated.

Yeah, yeah.

"More saturated" is exactly the right word. I mean, what I typically recommend, and what I do when I facilitate workshops at schools, is hands-on work with AI tools; there's always a segment for that. And where I usually recommend people begin is one of those general-purpose, consumer-facing chatbots. So ChatGPT is one.

Gemini is another one. Claude is another one. Just start asking it questions, right? Just start making requests of it, because those general-purpose chatbots are the most flexible, most powerful consumer-facing tools right now. Certainly there are a ton of teacher-specific tools, tools that'll make a lesson plan for you or make a rubric, or multimodal tools that will create audio for you.

And all that's great. But I think to begin, you'll probably learn the most from just interacting with a general-purpose chatbot and getting a sense of what it feels like to do what's called conversational prompting, which is to go back and forth with a bot, trying to get it to make something useful for you. That skill set will transfer to any of the specialized tools that are out there. But it's really, really hard to say to, like, an eighth-grade science teacher or whatever: oh, here's the right tool for you. I think it's more like: what is a problem you're trying to solve in your own work? Or what is an idea you've been marinating on for a long time as a teacher? Why don't you go to ChatGPT or Claude and see if you can get it to help you address that thing just through conversational prompting? And that might lead you to think: oh, some of these other specialized tools might be for me.
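For listeners who want to try conversational prompting programmatically rather than in a chat window, here is a minimal sketch of the back-and-forth loop Eric describes. It is illustrative only, not something from the episode: it assumes the openai Python package, an OPENAI_API_KEY environment variable, and an illustrative model name; any general-purpose chatbot API would work the same way.

```python
# A minimal sketch of conversational prompting: going back and forth with a
# general-purpose chatbot, carrying the running conversation so each reply
# builds on the last. Assumes the `openai` package and an OPENAI_API_KEY
# environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# The running conversation: each turn is appended so the model sees the
# whole back-and-forth, not just the latest question.
messages = [
    {"role": "system", "content": "You are a thought partner for a teacher."},
]

def ask(prompt: str) -> str:
    """Send one conversational turn and record the model's reply."""
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# The back-and-forth: refine the request instead of settling for the first draft.
print(ask("Draft three discussion questions for an 8th-grade unit on photosynthesis."))
print(ask("Make the second question more open-ended and add a real-world hook."))
```

The point of the sketch is the loop itself: the value comes from iterating on the request, which is exactly the skill that transfers to any specialized tool.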

And that really is, at the end of the day, what it's supposed to be. That's what the technology is there for: to help you do what you were already trying to do, maybe make it more efficient, give you some ideas that you hadn't thought of before, or provide data or research that supports your overall goal. That's really what I think it's all about here.

And you've done a lot of professional development in this area, and you've worked with a lot of teachers and school boards, like I mentioned previously. Lots of hands-on stuff. What are some of the reactions from teachers or school board members when they first start having those generative conversations like you just described?

What are some of their reactions? What do they find? Good experiences, bad experiences. What exactly are they saying?

It's all over the road, like I mentioned earlier, across schools. But I was just at a school on Tuesday doing this workshop, and whenever we transition into the hands-on part of it, I have everyone do the exact same warm-up in a general-purpose chatbot, and then I give them a couple of prompts.

I have them go back and forth. You know, I have them sort of mess with it a little bit. And then I ask for reactions. And one person raised their hand and said, you know, I was really skeptical about this tool, but I'm starting to see how it really does require me to use my brain to work effectively with the tool.

And it really does move me forward in my thinking in a way that I didn't expect. And then right after that, another person raised her hand and said, I'm horrified. This is the worst thing I've ever seen in my entire life. I don't know why a person would ever want to delegate their humanity to this robot, right? And that is a very common experience for me as a facilitator is people tend to react pretty strongly.

Because it is a very powerful technology, and I think those emotional reactions are totally understandable. And that's what schools are really navigating right now more than anything else: that variety of emotional reactions.

Yeah. And I want to make sure that we're being very fair to both sides of this conversation, because there are concerns, legitimate concerns. And to be very clear, AI is not perfect. I think you touched on that before.

But one thing that I personally have heard, just in my own conversations about AI, is this discussion about privacy and your personal data being secure online. Because you can use ChatGPT without an account, okay? If you have an account, then it starts to save some of the information that you feed into it, which makes its responses more personalized, and that can be very helpful.

But it's saving that response. It's saving your data. It may know your name, where you live, where you work, what your work looks like. And that can be concerning for some people. Is this one of those things where maybe they need to stay at those higher-level generative AI tools, like Claude, Copilot, Gemini, ChatGPT?

Is that probably a safe bet for people to use? Or are there certain safeguards that they need to have in place to maybe assuage some of those fears?

Yeah, that's a really important question. You know, at the highest level, this generative AI moment is a moment for schools to reeducate their communities about just baseline data hygiene on the internet, right? Any cloud-based, publicly available tool that you're using, when you use your Google single sign-on or whatever to get into it, is capturing your information.

ChatGPT, Claude: it's no different, right? And so I think everyone just needs to be aware that general data hygiene practices are super important and really apply to generative AI as well. When you go into ChatGPT, for example, there's a little switch in your settings where you can turn off the feature that feeds your inputs into the model, right?

And so you do have the ability as a user to say: I don't want my inputs being used to train ChatGPT. But if you do create an account, exactly like you said, you're still operating in a way where it's learning from you, even if your inputs aren't being fed into the model. So just be careful not to upload confidential or personal information into these tools. Don't upload student work into these tools if it's full of students' personal information.

You know, don't give it your tax return, those kinds of things, right? But one thing I'm seeing schools do now is onboard certain platforms, or partner with some of these companies, because when you pay for that kind of enterprise model, you get enhanced data privacy and data security.

And so that's one incentive I'm seeing for schools investing a little bit more in the technology: they don't want everyone in their community going out to the public, consumer-facing bots and just putting whatever they want into them. And that's where people really need to be mindful: when you're out on the open internet using publicly available tools, just be mindful of what you're giving them.
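One lightweight way to act on that advice before pasting anything into a public chatbot is to scrub obvious identifiers first. The sketch below is hypothetical and intentionally crude, not a real anonymization tool or anything mentioned in the episode; its patterns only catch simple email addresses and phone numbers.

```python
import re

# A hypothetical, intentionally crude sketch of pre-upload data hygiene:
# strip the most obvious personal identifiers from text before sending it
# to a public, consumer-facing chatbot. Real redaction needs far more care
# (names, student IDs, addresses); these patterns only catch simple cases.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace email addresses and US-style phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

sample = "Reach Jamie at jamie@example.org or 555-123-4567 about the essay draft."
print(redact(sample))
# -> Reach Jamie at [EMAIL] or [PHONE] about the essay draft.
```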

Well, investing in this technology was one of the big things that I wanted to discuss with you, because like I said, there are so many different tools out there: some of them are free, some of them are free to an extent, and some of them are purely subscription-based.

How much do schools need to really think about putting actual dollars and cents behind this, especially as we look to the future and how much these tools are going to grow? I mean, the price of Netflix has gone up; everything that we used to stream has gone up over time.

Is this one of those things where they need to think: okay, we need to put money aside for this, and we're probably going to need to pay more and more in the future for better and more secure usage of this?

Another good, complicated question. You know, I think one thing that we're already seeing, and you mentioned this earlier, is that generative AI is getting integrated into platforms and tech ecosystems we use every day. So right now, if your school is, say, a Google school, you have the ability administratively to turn on Gemini and NotebookLM for all of your users, adults and students.

And that is, I think, where schools are going to have to make some decisions pretty soon. Not: do we want to buy ChatGPT Plus for everybody? But more: oh, Google or Microsoft or whoever runs our major cloud platform now has these AI features. Do we want to turn them on so people can use them freely?

What are the data privacy concerns around that? How much more does it cost than our baseline account? I think schools are going to have to make those decisions sooner rather than later. Now, maybe your school is still in that space where you want to explore, and you're not totally sure you want to make an institutional investment in a platform.

In that case, what I recommend is that you identify your early adopters: the faculty and staff who are really excited about this and want to put in the time to explore. Give them the tools they need to investigate and report back out. So maybe you've got 20 folks who you do want to buy ChatGPT Plus subscriptions for.

It's 20 bucks a month. Let them run with it for six months, and then have them report back out about what they learned and what they applied, and that will allow you to make other decisions. I always tell schools: whatever you purchase around generative AI, hold on to it lightly, because it's not going to be something you can just keep for five years, right?

You're probably going to have to make different kinds of investments, let go of some things, onboard new things, again, just because the technology is evolving so fast.

And again, like I mentioned before, AI is not perfect. One thing I've seen recently in the news was Apple rolling back its highly marketed Apple Intelligence features. Just seeing some of the screenshots of people getting these notifications, where it collects news and tries to summarize it for you, and it was giving people these ridiculous headlines that were just flat-out false or wrong, or made no sense.

So obviously there are instances where even some of these bigger companies can get it wrong, and these tools aren't perfected yet. And I know for myself, I use ChatGPT for various things, and it can get real easy, real quick to just assume that whatever it's spitting back out at you is, like, all right, good to go.

You know, you give it a few small things and then some bigger things, and you're like: okay, good, good, good. But you never know when it might give you something that's just flat-out wrong. So to what degree do people, and specifically educators here, need to be fact-checking and double-checking that content?

And is that something where you can almost get to the point where it takes just as long to fact-check AI as it does to just do the task yourself? Is that a concern here as well?

Oh, definitely. The tools hallucinate, which means they make mistakes: they present false information as if it's true, right? And that has not gone away. It's gotten better, but they still hallucinate, and people are starting to wonder if that's kind of a feature, not a bug, of this technology: not unlike many people, it will make mistakes every once in a while. And so, yes, you do need to be mindful that whatever it gives you might be incorrect or fabricated. Especially when I talk to students, I say: using ChatGPT the way you use Google is probably not the best use case for that tool, because you don't necessarily know if the information it's giving you is right.

Like, search might not be the best use case. But things that are more process-oriented, where you need it to help you flesh out an idea, or clarify notes, or clean up data: those are more effective, reliable use cases, because you're doing it in service of creating something that you are going to end up vetting.

Whereas if you're just asking ChatGPT about, I don't know, photosynthesis, because you've got a quiz and you need to study, that is not the best use case of ChatGPT. So I think it's about understanding that those weaknesses exist, and also understanding that because of those weaknesses, certain uses just aren't the best uses and other uses are.

So when you need to search for something, go over there; when you need data processing, use generative AI. It's about understanding how generative AI fits into that suite of different tools that are available to you.

And as you look at all of these different tools, and as teachers try to get their arms wrapped around this, professional development is a huge thing that these educators are going to be looking for. Obviously, someone such as yourself is able to provide that, as best as you can, in a world that, like we've said, is changing so rapidly. But professional development, as we all know, can often be time-consuming.

It can be expensive. And obviously, it can take teachers away from the classroom longer than they'd like. So is there a way that they can use AI to bring them not just professional development on AI, but professional development, period? Is there a way they can use this technology to bring that professional development to them?

So they can continue growing and being the best, you know, versions of themselves in the classroom?

Absolutely. You know, because this tool is so powerful and so flexible, really the only limitation is how well people understand what they want from it, right? So if you know what to ask, it can probably help you. You know, I'm working with Randolph School in Huntsville, not on AI, but on student-centered pedagogy and designing for student agency.

And a lot of that work is asking educators to redesign units or assessments in ways that empower students to make their own choices and their own decisions. People end up needing to let go of some of their existing stuff, and that's a lot of work; that takes a lot of energy and thinking.

And generative AI can be a really good thought partner in that work. You take your current assessment that you know you want to make student-centered, you upload it into a Claude or a ChatGPT, and you say: I have been asked to redesign this lesson in a way that nurtures student agency more explicitly and more intentionally. Give me five different ways I can do that, right?

I always say to people: ask generative AI for multiples. It can generate five options as fast as it can generate one, so you always get a menu as opposed to just one thing. And I think that kind of partnership with generative AI goes back to that augmentation priority I mentioned earlier: you as the human being know that you have a goal, you know that you want to work on something.

You know there's an idea you want to execute. In what way can you use generative AI as a tool that helps you make that idea a reality, either faster or in ways you never even thought of before? That, to me, is the real potential of AI for professional development: you've got this competent, but not perfect, teaching assistant alongside you, so that whenever you have an idea, whenever you're struggling with something, you have that as a resource now.

And most teachers didn't have something like that before.
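As a concrete illustration of the "ask for multiples" habit Eric describes, here is a minimal sketch of that redesign prompt. The assessment text is invented, and the setup assumes the openai Python package, an OPENAI_API_KEY environment variable, and an illustrative model name; the same prompt works pasted directly into any general-purpose chatbot.

```python
# A sketch of the "ask for multiples" pattern: one prompt that requests a
# menu of options to vet, instead of a single answer. The assessment text
# is invented; assumes the `openai` package, an OPENAI_API_KEY environment
# variable, and an illustrative model name.
from openai import OpenAI

client = OpenAI()

assessment = """Unit: The Industrial Revolution (8th grade).
Current assessment: a five-paragraph essay on the causes of industrialization."""

prompt = (
    "I have been asked to redesign this assessment so it nurtures student "
    "agency more explicitly and more intentionally. Give me five different "
    "ways I could do that, each with a one-sentence rationale.\n\n" + assessment
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # a menu of five options for the teacher to vet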

And as we look at how this will continue to evolve, and at the future of all of this, like we keep saying, it is unpredictable where this will all go. But one of the things that we've heard, just in talking with teachers and school leaders, is the concern that AI is going to take my job. That's across many different professions, and I'm sure there are different degrees of how legitimate that concern might be.

But as far as the classroom goes, is there any chance that in the future we see an AI tutor in place of a teacher? Or is it going to be something like you just said, where there's an AI assistant, but not the actual full-time instructor? Do you see something like that becoming a reality?

I can't predict the future. So come on.

Yeah. You know.

You're not going to get me to commit one way or the other. But I will say that there are already AI tutoring platforms out there that schools are onboarding, not to replace teachers, but as a supplement, for extra help or academic support. So that technology is already out there. But I think the question of whether or not AI is going to replace teachers is not the question we should be focusing on right now.

I think the question we should be focusing on right now is: how can AI be an assistive technology that makes school a better experience for all students? Because despite what you read in the news, we are far away from the robots being sentient, right? We're far away from being able to 100% trust these technologies to do the work that humans do.

And so I don't see this replacement question as the most important question. I think the most important question is: given the technology we have right now, how do we start really engaging with it so that we understand the kinds of uses we want it to have in schools? I really don't think we should be waiting for these companies to come up with robots that are going to replace us. I think we have some agency right now to start thinking about creative, human-centered applications of the technology that will allow us to integrate it in a human-centered way, as opposed to this weird wish that some people have that we can just hand off a kid to an AI teacher right now. And so that's how I think about it. If you've ever watched Star Trek: we could probably get to a place where kids are being taught by robots. I don't know why we would want to, though.

Yeah. No, I mean, it's an interesting conversation, but I agree. And one of the things that I've heard is that perhaps AI won't take your job, but someone who knows how to effectively use AI probably will. Would you say that rings true for you? Would you agree with that prediction?

I think so, in the same way that I think a person who only knows how to write by hand and doesn't know how to use a computer probably doesn't have a lot of skills to bring to the table in most professional settings. I think learning how to use generative AI is an important skill that everyone is going to need for the future, not just students.

You know, we are going to be making decisions about if and how to use generative AI for the rest of our lives. And so I think it is important to understand how it can assist you and make your work better.

Well, Eric, I really do appreciate your insight into all this. I think you've answered a lot of these tough questions as well as anybody else can at this point. I mean, I think even the people who are making this technology are still figuring out where it's going to go.

And I hope that our listeners will be able to take something away from this, maybe calm some nerves, or maybe promote some more healthy skepticism. What a time to be alive, right?

Yeah.

Right. I mean, it's really interesting, and I know that I'll be watching it closely, and there are certain things that I would love to use it for. But, Eric, like I said, thank you so much for coming on and talking about all this. It's a difficult conversation, but one that I think is very important for us to have.

So thank you so much for taking the time to speak with us.

Yeah. Happy to do it. Thanks for having me. 

Yeah, absolutely.

This episode of Campus Conversations is brought to you by the Alabama Independent School Association. If you want to learn more about the AISA, check out our website at aisaonline.org. You can also follow us on Facebook, Instagram, X, and YouTube. Thank you so much for listening. Join us Tuesdays for the next episode of Campus Conversations.