The Original Source
The Original Source is a podcast about all things AI, plagiarism, and more.
AI, Accessibility, and the Art of Teaching: Instructure's Ryan Lufkin on the Changing Role of Educators - S02E04
The Original Source Is A Copyleaks Podcast
Welcome back to another episode of The Original Source! In our latest episode, host Shouvik Paul engages in an insightful conversation with Ryan Lufkin, Vice President of Global Academic Strategy at Instructure, the parent company of Canvas. With over 25 years of experience in the EdTech industry, Ryan offers a compelling look into the future of education, addressing key concerns and opportunities brought by the AI revolution.
In this episode, they discuss:
- The future of EdTech, from skills-based education to mobile learning.
- Instructure's new Ignite AI platform and partnership with OpenAI – and the future of personalized learning.
- The debate around AI's impact on student critical thinking.
- The shift in education from a reliance on traditional methods to embracing new technologies that create a more accessible learning experience.
Follow us on social media to stay updated on all things Copyleaks.
LinkedIn: https://www.linkedin.com/company/copyleaks/
Instagram: https://www.instagram.com/copyleaksai/
X: https://x.com/Copyleaks
Welcome to The Original Source. I'm your host, Shouvik Paul, and today I have a really interesting guest I can't wait for you guys to meet: Ryan Lufkin. He is the Vice President of Global Academic Strategy at Instructure. For those of you who use Canvas, that's the parent company of Canvas. Ryan's research has played a huge role in shaping the company's worldwide vision and growth over the years. So I am so excited, Ryan, to have you on the show and really do a deeper dive into what you're seeing, specifically with AI and education. I appreciate you having me. Yeah, great. So Ryan, maybe we could take a minute. Just tell us about your journey and how you became the VP of Global Academic Strategy at Instructure. Yeah, I mean, I've been in ed tech now for more than 25 years. In 1999, there was a little startup. I'd worked for an ad agency, worked for a radio station. Like most kids of the '80s, I thought I wanted to work for an ad agency because all the cool dads on TV worked for ad agencies, right? And there was a startup here in Salt Lake City called Campus Pipeline, an HTML portal that was customizable and personalizable for students, kind of a more attractive front end for the really clunky back-end SIS systems. I started working there in '99, and I've been in ed tech, with a couple of small exceptions, for that entire time. It's just one of those things where education is rewarding in a way that other industries aren't. And, you know, I came to Instructure about seven and a half years ago, originally building up the product marketing org here. And then two years ago, I joined Melissa Loble, our Chief Academic Officer, in this role, which has been amazing.
And so I say I fell into my dream job, where I get to travel all over the world and meet with schools and present our vision, collect their feedback, and bring that back to the company, but really listen to the challenges they face and bring them as many resources as I can to help them achieve those goals and overcome those challenges. So I truly enjoy what I do every day, when, you know, I wake up and go, oh, when's my next trip? I can't wait to get on a plane and fly to, you know, Australia or Chile or Regina, Canada. It's one of those things where you know you're doing the right thing. I'm sorry that your Mad Men dreams didn't come true. This is the second best thing in the world. It was entertaining while it lasted. Now, speaking of Instructure, you guys just held InstructureCon about a week ago and made some huge announcements. And that's one of the reasons I really wanted to have you on the show. You know, Copyleaks, we were at InstructureCon as well. And we heard the keynote, which was all about AI. I would love to get your thoughts; especially, let's start with Ignite AI. For our listeners, Instructure presented on key function tools, including content creation, grading, and accessibility, all with an AI element added to them. So, Ryan, tell me, first of all, what prompted that? You can't go to a conference and not talk about AI. We try to hit on AI and share a lot of information about AI, but have that not be the only thing, because I think there's a certain level of fatigue happening with folks around AI. But, you know, two and a half years now into this AI transformational journey, it really is fundamentally changing education. And so we, from the very first, said, you know, we're going to focus on human-driven problems. We're going to get feedback from our customers to make sure that our customers are driving our decisions.
We set up an AI advisory council that reviews everything, really from the point where it's an idea to when we release the actual features themselves. And so we said, you know what, we're going to be very deliberate with our rollout here. We don't care about being first. What we're doing is really setting a foundation for the future. And I think with Ignite AI, what you're seeing is that vision really paying off, right? So we've released some features, like translation, smart search, rubric creation, really cool features that save educators time. But that's kind of a step along this process. What we're seeing now is we're moving into this agentic phase, where AI agents work on your behalf across tools. And so, you know, we've got a lot of schools that are innovating really quickly. We work with some great innovators like ASU and University of Michigan and RMIT and get their feedback. And really what they're saying is, okay, we're starting to sign deals: if we're a Google shop, we're signing with Gemini. If we're a Microsoft shop, we're signing with Copilot. If we like what OpenAI is doing, we're signing with them. They're doing these kind of enterprise-wide deals, and how does that plug into the LMS and power it? We're in a unique position: 14 years ago, we founded Canvas as the first commercial open source LMS. So we publish our code. We have over 500 APIs that we open for integration. We've got the deepest LTI framework in the industry, which anybody that builds an LTI app can plug into. And so we've built this open architecture that has positioned us fundamentally well to be able to address this agentic model. And so when we talk about Ignite AI, Ignite AI is the collection of all of that infrastructure and those features.
And now we're also rolling out the Ignite Agent, which is something we've worked closely with AWS to build. We've been hosting on AWS from the beginning. They're an amazing partner. And we look at their Bedrock large language models: I kept saying they have seven; they actually now have 28 or something like that that they host. And by partnering with them, we can really make sure that we're taking security, privacy, and the protection of student data very seriously, with a partner that we trust and that our customers trust. And then that Ignite Agent can actually power interactions much more deeply than an individual feature. So we kind of compare it to this: if you're building AI features alone, point solutions, it's kind of like building Flash features, right? They're going to be gone tomorrow as we move towards this more agentic phase and that AI tooling kind of starts disappearing into the background. Yeah, no, totally makes sense. You know, one of the things that we heard a lot about after that keynote presentation was specifically this integration that I think you guys showcased with OpenAI. Yeah. And it was an interesting integration. I'd love for you to describe it to the audience a little bit more, but like, you know, in education, sometimes people, their level of understanding of technology can... Vary, yeah, absolutely. I know some folks, especially in education, saw that as, wait, what do you mean, OpenAI? These guys are the ChatGPT guys. The bad guys. Yeah, right? Like that was this weird knee-jerk reaction that we heard from some folks. Can you explain to the audience what this partnership with OpenAI really means? Yeah, and one of the things I want to make sure we're really clear about is we don't turn on any AI features within Canvas on our schools' behalf. They get to choose what they turn on at a feature level, a feature-flag level.
And so we had some people freak out and say, well, you've plugged the robots into the Skynet for kids, right? That is not the case. Our schools have complete control over what features they turn on and what features they use. But with the OpenAI partnership, we highlighted a really interesting integration with them where, in this case, it was an assignment that is easily created by an educator, and they can set up the parameters that allow students to interact with OpenAI's technology in what they call study mode, right? So it's the Socratic mode that doesn't give the student the answer, but actually asks them questions and leads them to the proper outcome. And so in this case, it's a course that you can have set up so they're talking to a character from a book or an author or someone specific, and the student interacts until they get the desired outcome. And you set those outcomes all in a really easy-to-use interface. But what's really interesting is that the educator actually has access to all of the logs, so they can understand how students actually interacted with the large language model, with the tool, and make sure that they were doing what they were supposed to do, right? Make sure they got to where they were supposed to be. And that's really the power of that human-in-the-loop oversight of these tools. But one of the biggest challenges is, like I said, I literally go all over the globe talking to schools about this. And one of the biggest challenges I think we're facing right now is a lack of understanding of what some of these tools are capable of. They're either Skynet and they're going to end the world, or they're just a cheating tool. There's so much more than that. There's a lot of responsibility around human-in-the-loop and protection that we need to make sure we're cognizant of, but they can do such amazing things.
And so we rolled this out really as a demonstration of what's possible with these tools, as a thought starter to say, hey, if I could use it for that, maybe I could use it in this other way that would be really powerful. And I think a lot of people walked away going, wow, that's really cool. I hadn't thought about using it in that scenario. Right. And some of them just heard OpenAI and immediately reacted. And again, that's the problem. I always say we're in that trust-building phase, right? And we've got to convert those folks to understanding that OpenAI is not going away, right? Generative AI is not going away. And so we need to make sure that we're embracing these tools, that we're giving our educators the tools they need to become AI literate and advanced, so they can pass it on to students. Because we're rapidly approaching a crisis point where we're not addressing AI literacy at a young enough age. I look at it with my own kids. I have a daughter in college. I have a son that's about to start high school. And I look at the differences in how they're being addressed. And I'll tell you, in junior high and elementary school, they were not addressing AI at all. They were simply saying, we're going to ban all the tools from our Chromebooks, done and done. So we've got to change the way we as educators are looking at these tools and make it a more positive, productive conversation. Yeah, you know what? Look, I agree with you, Ryan. I think we're living through this, like, you know, as you were describing what you guys are doing there, and you and I think a lot of other companies, you're absolutely right. Look, everyone at this point needs to just accept that AI is here to stay, right? Start finding that equilibrium, right? And the concerns are valid, right? Absolutely. There's plenty of concerns. There's plenty of things to be afraid of. We understand those. Right.
And you know, I read this, I think Pearson did a study where they showed that something like 40 to 50% of instructors, their main concern is that it's taking away critical thinking skills. In other words, in the future, we're going to have some kids who are going to be really good editors, because the starting point is, I'm going to edit this document. But if we continue to have over-reliance on AI for, for example, writing essays and things like that, it'll remove critical thinking skills. But we're also living through this moment that I think we saw with things like Napster, right? Yeah. It was a behavior change. We no longer had to go to, like, Tower Records because I want to own that CD. Yes. But then, if you think about it, what really attracted me to something like Napster was not the fact that it was free. That was just a thing that happened alongside it. It was more the fact that I didn't have to go to Tower Records. I didn't have to put on that headset. Way easier to discover music. Yeah. Or sometimes I'm like, don't force-feed me that CD with 12 songs. I just want that one, right? And Napster comes out and they're like, listen, you can get it instantly. You can listen to it instantly. You can compile your own curated CD. And by the way, it's free, right? So we all did it, but we went through this period. Remember, the record industry spent billions of dollars. Oh yeah. Educating us like, hey, what you're doing is illegal. And then the market sort of settled; they were like, okay, this is here to stay. This form of listening is here to stay. We just now have to figure it out. So it went from the wild west to streaming services. First it was Apple Music, and then it went to streaming, right?
So now we're like, yeah, why would I buy a CD when I don't need to own anything? I'll just rent it through Spotify. You were at InstructureCon, and Baratunde Thurston was our keynote, and he was amazing, because he was actually talking about the current political atmosphere. But he said something that I think applies here too: we're in this disruptive phase, and we can really clutch our pearls and wring our hands around the disruption, or we can prepare for the rebuilding on the other end, right? I think we're seeing that same thing here with AI and education in general, because I compare it to the internet, right? When the internet first came out, there were so many articles talking about how unproductive we were because of the internet. And I was like, in my first job, I used to have to get in my car and drive to the library to do research. And then we had a computer, and I was able to do that at my desk in a matter of minutes, right? Now we have all that information in the palm of our hand. So this shift in perception, like you're talking about, is incredibly important. And we're in the middle of that shift. Right. And with this OpenAI announcement, I think the idea is sort of, look, they're going to do it anyway. Let's find a middle ground. Let's again find that equilibrium where we can have them use it differently, where they're still learning. Yep. And what was kind of lost, I think, because people are a little concerned about OpenAI in general, or ChatGPT specifically, what was lost in that is that we announced two weeks ago that Anthropic's Claude is building an LTI integration and an LTI app to plug into Canvas. You know, we've already announced that we're working with Microsoft and Google, right? Our goal really is to facilitate the innovation that our schools want to drive.
And they're doing some amazing things, and they want to be in the driver's seat. They want options and flexibility. And our job is to provide that and work with the best-of-breed providers to ensure that they can build what they need to in an effective way. So we take that really seriously. And so, you know, while OpenAI is an incredible partner, they're not our only partner. We're working with all of the best providers in the market to make sure we're doing the right thing. We're seeing a browser war happening in front of us, honestly. Yeah. And it's so funny, too, because we're seeing this proliferation of large language model providers, but then we're also seeing, you know, they were getting bigger and bigger and bigger, until they're being trained on trillions of data points. And now we're seeing, okay, let's go the other direction. How do we make micro and mini versions of these large language models? Yeah. That are better at a task, or more affordable, or consume less energy, right? It's such an innovative, transformative time, and it's been moving so quickly that it's really easy to get overwhelmed. Even as a, I'm making my finger quotes, "expert" in the industry, it's hard to keep up with everything that's happening. And so I get the perspective of educators. I'm very sympathetic. But I also, at the same time, stand up and say, look, go out and use these tools. Understand what they're capable of, and you'll understand they're not a magic wand. They don't read your mind. They are still prone to hallucination and confidently incorrect answers. We need the human in the loop. And when you understand that, it's empowering to understand how you work with these tools as a partner, as opposed to thinking they're going to replace you. Well, let's talk about that a little bit more, right?
So, human in the loop. I know every instructor that I've met with recently, they're all sort of debating, or at least having this introspective moment of, what does this mean for me five years, ten years down the line, right? As a human educator, where AI is coming in, it's assisting with the learning in so many ways. It's assisting grading in certain cases, right? So how do you envision a teacher's or instructor's role changing in the next five to ten years? Like, well, you know, sure, we all know AI is going to free up more time. Yeah. If you theoretically don't need to grade, then you have more time to do X, Y, Z. Yeah. But if I'm an instructor today, I'm seeing companies like Instructure and others basically saying, hey, we're doubling down on this, and I'm feeling a little insecure. What does my future look like? I mean, there's so much, and I actually do a presentation on AI literacy with a lot of schools, and I share a bunch of resources in my presentation. And in fact, if you're an Instructure customer, go to the Instructure community right now and look at the AI hub, or go to the academic strategy page, and I share a bunch of links out to, you know, teaching courses. Like, the University of Michigan offers a free how-to-use-AI course for K-12 educators, taught in Canvas. And there's another one for building AI literacy with professors at a higher ed level. And so there's so many free resources out there to make yourself smart on these tools. The reason that's important is, again, they can seem very unapproachable.
They can seem like they're just a cheating tool or just a writing tool, and the more you use them, the more you realize what they're good at and what they're not good at. And what we're going through right now is really interesting, because the de facto way to measure mastery of a set of knowledge or a set of skills was to have students write a 20-page paper about it, right? How did that become the case? Why did we decide that that was the best form of measuring mastery? It's pretty one-dimensional when you look at it. And so what we're seeing now is this shift to, how do we actually, with the knowledge that students are going to use AI, tell them when to use it appropriately as part of the process, whether that's writing a paper or doing any kind of assignment, and shift the pedagogy? I was actually at San Jose State University, and there was a great panel of professors who were, you know, being forward-thinking in using AI. And one of them was a product design professor. And he said, you know, it used to take us 60% of the course to come up with the idea for the product, design the packaging, do all of that. Now it takes about 10% of the time with AI, because you can pretty rapidly have AI create the visuals and that kind of stuff. He's like, so now we've shifted the percentage of our course time that we spend on that, and now I make the students reach out to professionals in the industry and get their feedback and do presentations for them. And I think about my own 20-year-old daughter doing this, who, you can call her and she will not answer. She'll just text you back, right? The idea of picking up the phone and calling someone you don't know to try to set up a meeting with them, right? So it's actually inadvertently having us lean into those human skills, right? That we've lost a little bit. Now we have the ability to lean into those, if we're creative with how we redo our pedagogy.
And I think that's what's really important right now: they're going to use AI. How do we make it productive? That's right. And how they continue to learn. Yeah. That's the part. They have to learn. You know, because there are things that you can also unlearn. The example I gave last week to someone was, growing up, I was always riding shotgun with my dad with a map in my hand, navigating. At any given time, if you asked me, where are we? I could tell you towns nearby, whether we were next to a lake or the mountains. These days, I just follow the GPS blindly. Truly, I have no idea, like, physically where I am on a map, right? Yeah. And we unlearn skills that way. And maybe losing the map didn't affect the world too much, aside from the one lady who followed her GPS into a lake; maybe it affected her world, but not much, right? We have to find that balance. You know, I read this really interesting article with Vinod Khosla, from Khosla Ventures, for those listeners who don't know, the really big tech billionaire. And he basically was saying that college degrees are going to be obsolete in the future, right? As someone who's in the ed tech space, as a leader in the ed tech space, what do you think about that? Well, there are two things. You know, Bill Gates famously said that AI was going to replace doctors and teachers in the very near future, right? I disagree with both of those statements. And the reason is, if you've ever tried to hire someone... you know, we recently saw companies like Google say, we're no longer going to require a bachelor's degree in our hiring process, right? We're going to be more flexible with that. And a number of other companies followed suit. And then a number of those companies quietly shifted back to requiring degrees.
And the problem was, whether it's a degree or a non-degree program, a certificate, something like that: as employers, we receive thousands of job applications, right? In some cases hundreds, for others dozens, whatever. But how do you measure one candidate against the other? How do you know who has the knowledge to do the job? And that's the biggest challenge. And so if you say, well, AI just taught me all that stuff, right? I call that the Good Will Hunting syndrome, right? This idea that I could get the equivalent of a master's degree from Harvard for the cost of a library card, right? That sounds amazing, but that's not how most of us learn, right? Most of us need more structure around our learning. We need guides. We need people to structure and set those goals. And we need proof of the mastery of skill. So it's pretty hard, unless you're a true savant, to get hired for a job if you don't have that credential that proves you have mastered those skills. And so, you know, I think there'll be an evolution. Again, I'm a firm believer that not everybody needs a four-year degree. I think there are a lot of paths, and I think that's one of the exciting things about education right now: regardless of what you want to do and what skills you have previously, there are so many ways to upskill and reskill. There are so many ways to achieve your goals in ways that just didn't exist 30 years ago, right? And so I think that's one of the exciting times. But I still do think that that ability to prove and demonstrate knowledge to a potential employer isn't going to go away. And the de facto best currency for that has been the degree, and probably will be for the foreseeable future. Yeah, no, I mean, for sure. I think, look, the rising costs in education sometimes also factor into statements like that, where what they're really talking about is not the efficiency of the model when it comes to teaching or learning.
It's more, from an ROI perspective, does it pay off, especially when the economy's not doing well. But I agree with you, Ryan. If I was a teacher, a professor of any kind, I think what the use of AI will also do is, again, free up their time. You know, when calculators came out, accountants didn't lose their jobs. Exactly. Yep. People didn't stop teaching math. It just allowed us to skip past the grunt work and focus on more complex things to learn and teach. Exactly. Actually, that's one thing we've been discussing lately: AI kind of stealing some of those entry-level jobs, right? My first job when I came out of school, when I was working for the ad agency, was to write 20-second bumpers for Coca-Cola for the Western United States, right? Uh-huh. 20-second radio bits customized for the market, right? AI can do that work in a matter of seconds, right? But that was my foot in the door. That was my beachhead job, right? And so as all of those beachhead-type jobs are eliminated, how do we bridge that gap even further to make sure that these students can step into those higher-level jobs? And that's one of the challenges we're going to be grappling with in the next couple of years. Right. Yeah, no, for sure. And again, we're in that wild west right now. It's that transformative phase. Yeah. Nobody really knows. Everyone's trying to figure out, like, how do we bring some law and order to all of this? And I think there is no right or wrong. I don't know if you saw this, but there's a clip that went viral, I want to say a month or two ago. It was during a graduation at UCLA, on the Jumbotron. They show this kid, and he has his laptop open, Ryan, and he's showing all his assignments that he completed at UCLA using AI. This is during the graduation ceremony, right? So I watched that video and I had two thoughts. The first one was, boy, it's never good to be on Jumbotrons these days.
If you're on a Jumbotron, things are not looking good. No, but jokes aside, the real thing that I thought about was, it was almost like, I can't believe the kid is just showing off all the assignments. However, I started looking at it from the kid's perspective, and I was like, I don't think he saw anything wrong with it. Again, he was like, you handed me a calculator. I used it. What do you mean? Yeah. And it's that change in perception. We have two sides of the fence here. One side is going, I'm not seeing anything wrong with this. The other side is going, wait a minute. And that leads me to my question: look, you know, these days, I think there was a Wiley report that showed almost 40% of all students, and this is, by the way, while we're still in its infancy, right? In the early days, 40% of students admit to using generative AI to write essays, all right? Yeah. So professors are concerned. That's why, by the way, we're in this weird zone where professors want to monitor AI usage, but you have to do it in a way where they're not, again, taking away those rights, right? The days of catching a kid cheating and, you know, expelling them, I mean, there were English professors I knew that, like, that was their Super Bowl, right? They achieved something with that. Why the fixation on that, right? Anyway, I love the point you're making, because I think the big challenge is it's very difficult for educators to say, well, that's how I learned. That's how I showed my mastery: I slogged through hundreds of pages of dry reading every weekend, and then I wrote a 20-page paper about it every week, and that's how I showed my knowledge. And that's how kids should do it too, right? Again, our kids aren't doing things the same way we did, right?
Our kids are digitally native. There's a clip of a toddler where they hand her a magazine and she tries to scroll on the magazine, then she wipes her finger on her shirt and tries again. In her mind, something's wrong with her finger, because this tablet's not working. You know, my kids, we have a big TV in the living room, but my kids will lie in their beds and watch YouTube clips on a little teeny screen. They're consuming information differently. What they're interested in is different. We have to adapt to their journey through the world, not compare it to ours or say that we did it the right way. And the bottom line is that the pain of reading used to be the learning. They're not going to read anymore. They're going to say, take this hundred pages, summarize it, and give me the top five points that might be on the test, or something like that. Right. And my son, who is in junior high, said to me, I don't understand why I can use Grammarly but I can't use ChatGPT. They're just tools. It's all the same. There's the other aspect, too. In January of 2023, right after ChatGPT came out, I hosted our first webinar on AI, and we had, I think, like 300 attendees. It was crazy. And it was honestly one of the best, most entertaining webinars I've ever done, because the chat was just a crazy conversation running on the side. We were basically just part of the conversation that was happening in the chat thread. And it was great. But there was a really great point from an educator, and he said, I have a neurodivergent son, and if you don't explain to him the why of the assignment, he can't engage with it. He doesn't understand. And at some point, we stopped explaining the why: why are they writing a 20-page paper? Why are they reading that? We just give the assignment and walk away.
And I think that's fundamentally wrong. It's a laziness on the part of educators, and it doesn't help learners of any kind. When students feel like it's busy work, they're much more likely to cheat. The irony is that we can use AI to make learning much more engaging, so students don't turn to cheating because the work wasn't engaging in the first place. If educators understand that, and instead of treating AI as the boogeyman actually use it as the solution to the very problem it creates, we'll be in a much more productive place. But we're not there yet. A lot of camps are still digging in and saying technology in the classroom is bad. At this point, all learning should be technology-enriched learning, period. Every course should be hybrid in some sense. We've moved into that zone, and we need to make sure we bring everybody along and give them the resources, because I do truly understand how time-constrained a lot of educators are. That's the challenge. Right. So, as you may know, at Copyleaks we have a product that detects how much of a text is AI-generated, and what we're really trying to do is get professors to use it the way you're describing, rather than as a gotcha tool. Yes. And it's the wild west right now. Everyone's using it; if 40% admit to it, actual usage is probably much higher. That's what we're seeing. So there are a couple of dangers to over-reliance. One is critical thinking: it can strip out exactly the critical thinking the professor cares about. Another part, which you touched on a little earlier, is that AI tends to hallucinate. Over time that may improve, maybe even go away, but right now, that's where we are. It hallucinates.
The other thing we tend to find is that AI, if you really think about it, Ryan, is like a DJ. It's not writing anything original. DJs don't create original songs; they remix songs from, say, the 1980s, and if you knew the original, you'd say, oh, they remixed that song. It's the same with these LLMs today. They're not actually doing research. If you ask for a paper on microbiology, the model is drawing on other papers that were written and remixing them, multiple papers. If there were only two papers, it wouldn't have much to remix, by the way. That's one issue. The other is that it's being asked the same questions over and over again. Even as a human, if I told you to write a paper on U.S. tariffs, by the fifth time you wrote it... They're going to end up being pretty similar, yeah. And this thing is asked millions of times. So what we're finding is that professors need two things. First, it's genuinely useful just to understand the lay of the land: how prevalent is this or not? Second, help students understand that it's not only that you're not doing your own work; if you rely on this, it may be pulling in material that effectively belongs to someone else. Absolutely. Yeah. And I actually talk about Copyleaks in my presentations a lot, because I think it moves the conversation toward, let's stop using these tools punitively, and let's use the detection aspect to show students how to use the tools properly, and not to over-rely on them. The underlying piece is, let's figure out how to use these AI tools to enhance learning, not avoid learning. Exactly. By the way, one of the things we're doing now goes beyond simply saying, this is AI. We actually built these capabilities originally for really large corporations; we have a big corporate side of our business.
You can imagine that with the clients on that side, we couldn't just say, this is AI or it isn't. We had to give them some logic. And it turns out AI uses certain phrases far more often than humans do. It does, yes. Sam Altman, the CEO of OpenAI, never said, I'm going to help you find the text that used AI; his goal is just to build one great generative AI system. So we show you those telltale phrases and how they're used. More importantly, since we assume AI is producing similar output over and over in slightly different forms, we started building an archive of it. Now what we do is show you, side by side: here's this text, we're telling you it's written by AI, and not only that, something very similar, or identical, has already been published. Go back to the tariff example: if that question is being asked millions of times, you can assume some of those answers end up on the web somewhere, and we archive them. So it's more about empowering the professor: hey, this already exists; student, let's learn from this. And let's also face it, when we're in school there aren't a lot of original thoughts. We're just learning. It's mostly, here's the material, now repeat it back to me in your own words. And I look at that student from UCLA and just think, man, you've avoided a lot of learning. How much did you pay to not learn? That's right. And what's funny, there's some great data from the Digital Education Council's global survey. I find it so fascinating because they broke it down by region: faculty's view of AI's impact on education. The US and Canada considered it much more of a challenge than an opportunity, while Latin America considered it a massive opportunity.
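An aside for technically minded listeners: the two detection signals described above, telltale phrasing plus an archive of previously seen AI output, can be sketched in a few lines of Python. Everything here is illustrative — the phrase list, the 0.8 similarity threshold, and the use of `difflib` for matching are assumptions made for the sketch, not Copyleaks' actual method.

```python
from difflib import SequenceMatcher

# Hypothetical list of AI-typical phrases; real detectors learn these statistically.
AI_TELLTALE_PHRASES = [
    "delve into", "it is important to note", "in today's fast-paced world",
    "a testament to", "navigate the complexities",
]

def phrase_hits(text: str) -> list[str]:
    """Signal 1: which telltale phrases appear in the text."""
    lower = text.lower()
    return [p for p in AI_TELLTALE_PHRASES if p in lower]

def archive_matches(text: str, archive: list[str], threshold: float = 0.8) -> list[str]:
    """Signal 2: archived passages whose similarity to `text` exceeds the threshold."""
    return [doc for doc in archive
            if SequenceMatcher(None, text.lower(), doc.lower()).ratio() >= threshold]

# Usage: flag an essay against both signals.
archive = ["Tariffs delve into the complexities of global trade policy."]
essay = "Tariffs delve into the complexities of global trade policies."
print(phrase_hits(essay))                  # phrases shared with the telltale list
print(len(archive_matches(essay, archive)))  # near-duplicates already archived
```

The point of the sketch is the pairing: a phrase match alone only says "this sounds like AI," while an archive match says "something very close to this has been produced before," which is the side-by-side evidence described above.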
The graph is so shifted, it's crazy. And what I've come to think is that in Latin America the attitude is, well, you're paying for education, so why would you avoid the learning? You want that learning. Somehow in the US we've created a culture of, just get through it as quickly and painlessly as possible, instead of, hey, there's a lot of value in what you're actually learning, let's make sure we capture it. And I think if anything, the silver lining of this tool is that education is going to be way more engaging. It's going to be way more personalized: I can choose how I want to receive this content, I'm learning from brilliant educators in a way that doesn't leave gaps in communication, and I'm doing so in a way that leaves me better off than, I hate my professor and I'm just going to do as little as possible to get through this class because I have to take it. We've got to change that mentality. By the way, there's even a personal parallel. A long, long time ago in high school, I took AP Spanish, believe it or not. I don't speak a word of Spanish; I didn't learn a thing. In the last couple of years I started doing Duolingo, and I started actually learning in a different kind of way. I now feel like I know more Spanish than I ever did from AP Spanish. My point is, technology can actually help us learn in a different way. Oh, absolutely. Absolutely. The accessibility aspects of this are incredible, because everyone learns differently and all teachers teach differently. AI has the ability to fill those gaps at scale, in very personalized ways we never could before.
And when you start looking at it like that, the possibilities open up. Even right now, you can take all the course content, pull it into NotebookLM, and say, I want to listen to this as a podcast. So you're a working student, commuting on the bus, and you're listening to a podcast instead of trying to read all that material on the weekend. How fascinating, how amazing, to have that kind of choice. And if educators understand that that's not a threat, it's a superpower, we'll be in a totally different space. Yeah. For any of our listeners who haven't used NotebookLM, and by the way, we're not sponsored by them or anything like that, even in our company we've taken a lot of material, say, onboarding when a new employee joins, and fed it into NotebookLM. Because what we realized is that when we'd tell a new employee, read through this mountain of documents, how much were they really retaining by the tenth document? But everyone loves a podcast, and this thing automatically creates one. And it sounds so human, it sounds great; you wouldn't even know it's not a real podcast. And if you're an educator, you can even train it on your voice: upload videos of yourself speaking and it will generate audio in your voice. Again, that doesn't replace the educator; it just provides a different path for consuming the content. And that's the conversation we need to have. People have famously said, AI is not going to replace educators, but it might replace educators who don't use AI. I think that's really important. It's not a threat; it speaks to the flexibility and power of these tools. For sure. For sure. Though it hasn't quite replaced our Ira Glass at This American Life yet, right?
So for any educators listening, check out NotebookLM; it might be a good tool to promote and use in your class. Ryan, let me ask you a final question. Through COVID we saw a huge EdTech boom, educational technology companies like Canvas, the rise of assessment companies. And then post-COVID, as things went back to normal, the market was flooded with EdTech companies and we've seen a cool-off since. What, in your opinion, is the future of EdTech in general? Yeah. Where we stand today, about half of all college students in North America use Canvas on a daily basis, about a third of all K-12 students, and an increasing number of corporate learners as well. We know that the digital classroom experience gives students the ability to navigate their day and know where to start. And as we integrate more AI features, and as those are adopted by our end users, it will only become more seamless: how do I start my day, where do I start, how do I organize myself, how do I make sure I'm hitting my learning goals? That's key. And I think we're still seeing this transition toward more mobility. About four years ago, I did an interview with a student in Texas, and she said, you know, this is the first time I've ever been to campus; I've done everything online. I asked her to tell me about her experience, and she said, well, I do everything on my phone. I asked, what about when you have to write papers, do you come to a lab? She said, no, I use my phone. We never could have imagined students writing long-form papers one thumb-tap at a time on their phones, and yet they are. This shift in how they consume technology is so fascinating to me.
So to me, like I said, all learning will be technology-enriched learning. Early on, when OpenAI and generative AI tools came out, I hated hearing, we're going medieval on them, we're going back to blue books and pencils. That's not preparing your students for the future; you're so over-correcting that you're undermining the value of their own education. Well, it's not just the students, right? I think there was a report that said one in three teachers is planning to leave the field because of the workload, including the grading that comes with homework. Yeah. Going back, as you said, to the medieval ages with this knee-jerk reaction of, let's have everything handwritten and done in class, just multiplies the amount of grading for teachers. And there's this big fear that students will use AI to do the homework and teachers will use AI to grade it, so it's just robots teaching robots. Right. I want to push back on that a bit. I've talked to my own daughter, and she says, I handed my paper in two days ago and I haven't gotten feedback yet. She wants that feedback. So one of the features we demoed a couple of weeks ago at InstructureCon was first-pass grading: the educator's content, the educator's rubric, and AI applying that rubric to the assignment. It does a first pass and says, yes, they did this; yes, they did that; here's feedback on each point. But then the educator actually goes through and approves that feedback; they can modify it, and they choose what goes out. And you also have data on whether the educator is using it.
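For technically minded listeners, the first-pass-with-approval workflow described above can be sketched in code. The rubric shape and the keyword-matching "grader" below are placeholder assumptions for illustration; the actual feature presumably applies a language model to the educator's real rubric. The key design point, that nothing reaches the student without explicit educator approval, is modeled directly.

```python
from dataclasses import dataclass

@dataclass
class CriterionResult:
    criterion: str
    met: bool
    feedback: str
    approved: bool = False  # nothing reaches the student until the educator approves

def first_pass(essay: str, rubric: dict[str, list[str]]) -> list[CriterionResult]:
    """Naive first pass: a criterion counts as 'met' if any of its keywords appear."""
    lower = essay.lower()
    results = []
    for criterion, keywords in rubric.items():
        met = any(k in lower for k in keywords)
        note = "Criterion satisfied." if met else f"Missing discussion of: {', '.join(keywords)}"
        results.append(CriterionResult(criterion, met, note))
    return results

def release_to_student(results: list[CriterionResult]) -> list[CriterionResult]:
    """Only educator-approved feedback is ever released."""
    return [r for r in results if r.approved]

# Usage: AI drafts the feedback, the educator reviews item by item.
rubric = {"thesis": ["argue", "claim"], "evidence": ["study", "data"]}
draft = first_pass("I claim that the data support my view.", rubric)
draft[0].approved = True  # educator approves the first item only
print([r.criterion for r in release_to_student(draft)])
```

The `approved` flag defaulting to `False` is the whole point: the AI pass is a draft queue for the educator, not a pipeline straight to the student.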
So you're saving educators time, removing grading fatigue, removing bias, and giving students the faster feedback on assignments that they expect nowadays. There are so many benefits, but people are very scared of that possibility. And in reality, when you understand how the AI works, it's your content, it's your rubric; it's just doing a first pass for you. It's like a lazy graduate assistant, is how I think of it. You know, I was going to say exactly that. If for all these years you've trusted your TA to do, forget first-pass, all of the grading, this is frankly going to be a more accurate, more consistent way to grade than a TA. Exactly. Because as a human you have to account for fatigue: after reading that tenth essay, your eyes glaze over and you don't even know what words you're reading anymore. And second, unfortunately, there's the possibility of bias: you look at a student's first name and last name and make a judgment call, or whatever else. Using AI removes a lot of those elements, which is great, and as you said, it delivers feedback a lot quicker. But the point you're making, which I wholeheartedly agree with, is that there has to be a human in the loop. Yeah, absolutely. It's not a set-it-and-forget-it, let-it-grade-everything situation; it's more, we'll do some of the groundwork for you, but you still need to look at it. Yeah, yeah. And literally nothing goes directly to a student without your approval, period. That's right. That's right. By the way, we're seeing something very similar in medicine, Ryan, where it's not about replacing your doctors. If you can get the imaging analysis back from your MRI in seconds, do you really need a professional doing that first pass, or can AI do it and then have your doctor review it?
I mean, I had an MRI three weeks ago, and next week I finally have my appointment to have it analyzed. That's right. AI could have given me the feedback the same day and recommended treatment. That's exactly your point: the timeliness and convenience we've come to expect are lacking in some of the more traditional systems, and AI can really improve that. Right. And in that situation, it's not that you want AI to tell you the results; you still need a doctor. Absolutely. To review them. The doctor knows you, knows the X factors involved, and will take those into account when he or she delivers the news to you. I think it's similar with professors and teachers. It's not about removing them; it's, do you really want to do the grunt work? Yeah. It's removing those administrative, mundane tasks and letting them focus more on the things they want to do. And that's why I think the shift in perception is so important. The more we can get educators on board, the more we can really reap the benefits across education. And ultimately, just as we see schools moving toward a more skills-based education system, we've got to prove that people are leaving with demonstrable skills. Whether it's employers or the federal government, we've seen some guidance lately saying, look, we need to make sure these programs are leading to well-paying jobs. So for schools that are doubling down and saying, we don't do skills programs, we're not vocational schools: at some level, we're all teaching vocation, we're all teaching job skills. The more we embrace that, the better off we are. Amazing. Amazing. Well, this has been really, really insightful. I loved having this session with you, Ryan. If folks want to learn more, where can they find you?
What are some resources they can go to? Yeah, I'm on LinkedIn like everyone else, and there's Instructure.com; the academic strategy team we have here is amazing. We're very active in the Instructure community, and we've got a lot of information out there. But people can just connect with me on LinkedIn and shoot me a note, and I'd love to provide additional information where I can. Amazing, folks. A big shout-out to Ryan. Listeners, thank you so much for joining me on this episode of The Original Source. Please be sure to tune in for the next episode; we'll be announcing a special guest very soon. Stay positive, stay original, stay safe. Thanks!