
EdTech Empowerment: Innovating Education Together
Hosted by Juan Rodriguez, founder and Executive Director of NextGen Classrooms, EdTech Empowerment: Innovating Education Together dives into the power of technology to bridge the digital divide and revolutionize education. Each episode brings insights from guest speakers across the education spectrum, including educators, tech experts, policymakers, and community leaders, who share strategies to empower every student, regardless of background, with access to cutting-edge educational tools. Rooted in NextGen Classrooms’ mission to create globally connected, innovative learning spaces, this podcast covers topics like digital literacy, AI ethics, equitable access, and transformative practices in the classroom. Join us as we explore the latest trends and tools shaping the future of education and empower educators to create impactful, inclusive learning environments for all students.
AI in Higher Ed: Upskilling Faculty & Empowering Students
In this episode of EdTech Empowerment, Juan Rodriguez sits down with Anna Haney-Winthrow, Director of the Institute of Innovative & Emerging Technologies at Florida Southwestern State College. They explore how AI is transforming higher education, from faculty upskilling to algorithm literacy for students. Anna shares insights on how educators can embrace AI rather than fear it, create more engaging learning experiences, and ensure technology is used equitably in the classroom. Whether you're an educator looking to integrate AI or just curious about its impact on education, this episode is packed with valuable takeaways.
🔹 AI in higher education
🔹 Faculty upskilling & tech integration
🔹 Algorithm literacy for students
🔹 Ethical AI & student privacy
Tune in now and join the conversation on the future of learning!
EdTech Empowerment: Innovating Education Together is hosted by Juan Rodriguez, founder of NextGen Classrooms. Our mission? To empower every student with access to technology-rich education. Tune in each episode to hear from thought leaders, educators, and tech experts on transformative strategies in education, from digital literacy and AI ethics to building inclusive classrooms.
Let’s bridge the digital divide, together!
Visit our website at NextGen Classrooms to learn more about our mission and programs.
Don’t forget to subscribe, share, and join our growing community of educators shaping the future of learning!
Welcome back to EdTech Empowerment: Innovating Education Together, where we explore the future of learning through technology. I'm your host, Juan Rodriguez, and today's episode dives into the topic Shaping the Future of Higher Education: AI in Teaching and Learning. Our guest today is Anna Haney-Winthrow, the Director of the Institute of Innovative and Emerging Technologies at Florida Southwestern State College. With over a decade in education, Anna is a leader in faculty upskilling, AI literacy, and ensuring that technology is used equitably in the classroom. Today, we'll discuss how AI is changing higher ed, the importance of algorithm literacy for students, and how faculty can embrace AI rather than fear it. If you're an educator looking to integrate AI in your teaching, this episode's for you. Hi Anna, how are you doing?
Speaker 2:I'm doing great. Thanks for having me on.
Speaker 1:Sure, Anna. Thank you for being a guest on our podcast. I've heard so many great things about you. Can you share a little bit about yourself?
Speaker 2:Yeah, I have a sort of non-traditional background. I started teaching at the college level in the 90s and have had a few different iterations in my career, but I've always been very passionate about teaching and learning, and I've always had a deep personal connection to technology, and really to the ability of technology to bring people's purpose into focus and to supercharge what's meaningful in their lives. So I have this opportunity now to take a deep dive into AI, and I'm hoping to bring those same ethics to it.
Speaker 1:All right, so let's jump right to it. Let's see what you have, and let's share it with other educators and other listeners. I like to start off with philosophy, right? I like to talk about your teaching philosophy. Specifically, your teaching philosophy highlights confidence, curiosity, and equity. How does your philosophy influence how you integrate technology into education?
Speaker 2:It's a great question, and it definitely spans technology, and it spans non-technological teaching too. But I think it's really important for students to have agency and choice. So when I create an assignment, or when I help somebody else create an assignment, I think, how can we incorporate as much choice in here as possible and still meet the learning outcomes? Because that's important. I believe that's important with technology as well. So usually, if I have an assignment where there's a way to integrate technology deeply, I might have an alternative way for a student who perhaps doesn't want to go down that path.
Speaker 2:Similarly, I like to create assignments where there are no mistakes, and I teach a subject matter where this works really well: make assignments learning-by-doing. So instead of, hey, give me the right answer, or give me a product that looks really polished, it's, show me your journey, and don't worry about the mistakes you make along the way. Just show me how you learned and corrected your path based on your experiences. So again, I understand we have to stay focused on those learning outcomes, but I think there are some more exploratory ways that students really enjoy learning, and so I focus on those.
Speaker 1:And, as the director of the Institute of Innovative and Emerging Technologies, what is your primary focus when it comes to implementing new tech in higher ed? I know you talked about providing different choices for students, but let's talk about the technologies that you're actually using. How are you implementing these in higher ed?
Speaker 2:are you implementing these in higher ed? It's a great question. I think the key for me is always start with the human and not the technology. And that's hard for me, because I like technology and I see something and I want to run down the road with it and make everybody try it and talk about it, and so what I think really helps is instead to say what are people's motivations, what gives a student a sense of purpose and meaning, what gives a faculty member what gives a staff member? And then think about how a technology can help them have an impact. So with AI, that might look one way. With VR, that might look another way. I'm really interested in how people are using wearable technologies to monitor their well-being. How does that have an impact? So I want to make sure that we engage the technology and the humans in a way that gives them the most agency to shape what effect it's going to have on their work and on their lives.
Speaker 1:I love that answer. So, anna, what's your favorite technology? What do you like to use in the classroom?
Speaker 2:It's interesting. I'm doing an assignment this week where students are working on a self-portrait, and they have an option to render their self-portrait artistically through any means other than photography, or to use an AI generator to bring about the effect that they want. With an AI tool that generates a portrait, it's difficult, because it's not necessarily going to look like them; it's really hard to craft a prompt that makes it look exactly like you. But I love that, because it does make them think more deeply: what I'm really trying to show about myself is this, and the AI tool helped me do that in a way that I just didn't have the skills artistically to do. And as someone with no artistic skills myself, I feel really happy about being able to offer that option.
Speaker 1:Oh, for sure. It kind of levels the playing field. So what are some of the popular tools that your students are using? I know MidJourney is probably one of them, because that's a popular image generator, but are there any others?
Speaker 2:I have my students use Adobe Firefly for a few reasons. One, for those students who are worried about the way AI is trained, which is, I think, a legitimate concern and a conversation we need to keep having, the way that Adobe has trained its image generators is, in my opinion, responsible, and something that doesn't have as many ethical question marks as some of the other models. I also like that in Firefly they can experiment with different techniques and movements in art and so on by choosing buttons. They don't really need to have a deep background in art to ask, oh, what's mid-century modern, what's this? They can explore those buttons and see what the impacts are. So for an assignment like this, I really enjoy that. I think Ideogram is a great image generator too, and again, if a student were to use that, I would just want them to look really carefully at the privacy policies and how the tool works, and make sure for themselves that they feel comfortable putting their prompts in there.
Speaker 1:For sure. And to the listeners out there: if you don't want to spend too much time reading those policies, that fine print, the small contract you check off right before you start using these new tools, you can actually copy and paste it into an AI and tell it to break it down in a simpler format. You could even tell it to break it down into a paragraph so you don't have to read the whole thing, but that's a way to understand what you're agreeing to. A lot of folks are jumping into different tools and different platforms and agreeing to sign their lives away; they're sharing all this data without being aware that it's being given away. But I like that you're talking about AI, because AI is becoming a major tool in education, and you did touch upon this, right? What are some of the most effective ways that you use AI to enhance teaching and learning outcomes?
Speaker 2:I think, as a faculty member, what I'm going to say is probably a really easy way to get started that will have a lot of impact: assignment redesign. And there are multiple reasons that you might want to redesign assignments. Personally, I probably redesign mine too often, because I get a good idea or I get excited about something. But because we do have academic integrity concerns around AI, and just because things are changing so quickly, it's really a good time to reorient assignments to be more process-focused and less output-focused. I think relevance matters more than ever to students.
Speaker 2:It used to be, maybe in my generation, that sure, we asked the question, when am I ever going to need to know this? But we also dutifully learned it nonetheless, even though we weren't sure what the relevance was going to be. What I see in my Gen Z students especially is, if it's not relevant, I just can't be there. There's so much else to choose from; this is not going to get my cognitive weight. It's just not going to be worth my brainwaves. So scaffolding, asking how I can get students to be better prepared for this assignment: all of those are things that you can ask AI to help you with. For me, it rarely gives me an idea where I say, oh great, I'm just going to copy this and paste it into the new assignment, but it helps me see gaps in my thinking, or just kind of acts as a brainstorming partner to create better assignments.
Speaker 1:That's right on. I like that. As an educator, I can see how that works, right? Because once the content is relevant to students, students become more engaged and they own their projects; they take ownership of everything that they're learning. You've worked on upskilling faculty in AI. Let's get into that. What are some of the biggest challenges faculty face when adapting to AI-driven instruction, and how do you help them overcome these hurdles?
Speaker 2:This is not going to be a surprise to any faculty member who's listening: time and burnout are the two biggest barriers. Part of the reason I even got into this space was that when this was very, very new, people were freaking out. I signed into a webinar and there were four people talking, and I respect all of their opinions; they're four really smart people. What they were saying was: faculty members, you're going to have to redesign the way you teach. It's here, get over it. And I was like, wow, that is not the message. Not that there isn't some kernel of truth in that, but how many people have to redesign their jobs, and keep doing their jobs, with no extra time or compensation to do it? We would never ask somebody in the private sector to do that. So I think we really, really need to honor the fact that for faculty to redesign their courses to adapt to AI is a big lift, and we need to support them in any way that we can. So that's the first one: to be kind, and not tell people things like get over it, it's here, there's no escaping it, but to have more nuanced conversations.
Speaker 2:I think there's a lot of fear and lack of understanding. I think that's the second hurdle, and again, I just meet people where they are with that. Literally, AI is challenging the mental models that many of us built our careers on, right? How many people in academia built their careers on being a good writer or a good researcher? And now they're getting messages that sound like it's not really important to be a good writer anymore; AI can do what you can do. I don't believe that's true, but I know people are getting those messages. So again, I think it's about those more nuanced conversations, where we let people know, you do have choices, and mean it, and then help them collaboratively build an approach that's going to work for their own style of adaptation.
Speaker 1:Nice. And let's jump into that, right? Let's say, hypothetically, you have an educator who's having a tough time trying to adapt AI into their lessons, into their whole instruction, and they approach you and say, hey, Anna, I'm really struggling with this, I don't know where to start. It's almost as if, like you just said, I'm a great writer, but I feel like AI is going to take my job away from me. What should I do?
Speaker 2:That's a really great question, right? I see three ways of responding to AI. One is embracing it: saying, it is here, I'm going to use it as much as I can, I'm going to teach about it as much as it fits into my subject area. Then I think there's adapting: people who say, okay, I need to make some changes to how I'm teaching, because AI is changing how my students work and changing what's going to be expected of them in the workplace. Actually, I'm going to change my mind. I'm going to say there are four, right?
Speaker 2:So we have embracing, we have adopting, Then we have resisting. Then we have people who say I've made the effort to understand what this is and I don't think it has a place in my classroom and I'm refusing to bring it in. And then we have avoiding, right? People who just say this is too hard, it's weird, I don't get it, I'm just going to do what I've always done. So I think the avoider is the person who we really need to build up and support so that they can make the decision.
Speaker 2:All of those things, the embracing, the adapting, the resisting, all of those have a legitimate role to play. The avoiding, I think, is where we have to say, all right, we really can't responsibly be in that space anymore. So if I were to have an avoider come to me and say, hey, don't tell anybody, but I haven't really figured out this AI thing yet, the first thing I would tell them is: you're not alone. There are so many people who are in that space, and it's because you're busy doing your job, so we get that. Then I would have them play. I would say, why don't you see if it can give you an idea for what to grow in your garden this summer, or whatever their interest is, and really have them start to figure out what it does, what it doesn't do, what their style of interacting with it is, and hopefully buddy up with someone else. I really think doing this collaboratively makes a big difference to people.
Speaker 1:Yeah, it definitely does. The collaborative work does make a difference, and I love that you're giving teachers some grace, allowing them to not fully understand it, or to just start at the beginning and not know where to go with it. This is an exciting way to ensure that faculty feel confident and prepared to integrate AI into the curriculum. But what's another way you can help them see it that way, rather than seeing it as a threat or unnecessary complexity?
Speaker 2:I think, you know, going back to that playful approach, and understanding that you don't have to do everything at once. So choose one assignment and see if you can figure out for yourself how to use AI to tweak that assignment. Maybe you get a little bit more adventurous and you say, okay, now I'm going to choose an assignment where I let my students use an AI tool to collaborate. It's probably not going to be the most high-stakes assignment in your course; maybe it's even extra credit, if you really aren't sure that it's going to work. Allow yourself to see what happens, and even be vulnerable with the students: hey, I'm not really sure how this is going to work out. Can we all try it together?
Speaker 2:I feel really strongly that there needs to be a reconstruction of our classroom culture in the direction of trust. If we walk into classrooms and say, I don't trust you, and here are all the ways I'm going to surveil you, we can't expect a response like, wow, I'm really excited to learn from you, lady, that sounds fantastic. I think we can expect that they're going to feel there's a bit of an adversarial relationship there. So I really think using AI in a way where you acknowledge your own vulnerability could be a great way of reintroducing trust into your classroom.
Speaker 1:Right on, right on. Let's transition. Let's get into algorithm literacy for students. That can be another complex topic. You had mentioned it before we started recording this podcast; I had asked you some questions, and you brought up algorithm literacy in the questionnaire. So why do you think it's important for students to understand how AI-driven algorithms work?
Speaker 2:Yeah, you know, it's so interesting, because we're interacting with algorithms all the time, and now, as AI becomes more prevalent and AI systems become more integrated into our daily experience, what we see is that those algorithms literally shape our experience of the world. Especially for the younger generation, whose brains aren't fully developed, they're literally shaping their minds. So years ago, right, we all kind of figured out that our devices were spying on us; we'd say something like, wow, I really love bananas, and then we'd get all kinds of ads for bananas. And I used to be one of those people who said, well, why do I care? It's great that I get ads for bananas if I'm in the mood for that, because then it's really convenient and it's right there. But if you do a deep dive on this, that data is actually much, much richer than whether or not you like bananas, and it's intrusive, and being sold again and again and again without your consent. So I no longer have that laissez-faire attitude about my devices spying on me. I think that's a really big deal that we need to be aware of. It's very hard to get students interested in that, though. What's more interesting to them, and where I think we really need to, I'll just use the word intervene, is their use of social media. The harms of social media algorithms have been proven by research over and over again, in terms of mental health, eating disorders, isolation, and many students are aware of that. But if they don't suffer from something like that, or haven't recognized that they do, they may not think that their worldview is being shaped by algorithms in the ways that it really is.
Speaker 2:And I think it starts to create unrealistic expectations of experience outside social media. I'll give you an example. When TikTok was going to go dark, and then went dark, and then came back on, one of the things I heard a lot of TikTok creators saying, and I'm not a TikTok user, to be fair in this conversation, was: I've built this community on TikTok, and now I don't want that to be taken away from me. And I totally acknowledge their feelings about that; I think people were really panicked. But what I didn't hear people acknowledging was the role of the algorithm in creating the community. Posting content doesn't by itself create a community. The algorithm decided to act in a certain way based on that content, and that resulted in community, and, by the way, made someone else richer, someone who is not you, the content creator.
Speaker 2:What I worry about is that that person tries to replicate the strategies they used to build community on social media outside of it, and it doesn't work. Going to a party and talking about yourself incessantly is not a way to build community, but on social media that works kind of well. So I could go on for days.
Speaker 1:That's right on. I love that you mentioned that. Those are such great examples, and I think we're going to save a lot of that information about algorithms for another podcast episode, because we could dive into that and talk for hours, even about the reactions that students, or anyone, get from these algorithms and how they carry over into the real world. That's really important, and I hope we can talk about it in the future. But let's jump into something else. With students having varying levels of familiarity with and attitudes toward AI, how do you create an equitable and inclusive learning experience?
Speaker 2:It's a great question. I'm seeing, this semester more than ever, a lot of students who are very well-versed in AI, and then some students who really haven't gotten started, for a variety of reasons. So what I like to do is be really open. I lead with it: in the first week of my class, we're going to talk about AI, how we use it, how we don't use it, and so on, and really give them the fundamentals. Going forward, I might make some of that fundamental training optional, because a lot of students this semester are finding it redundant given what they've already known and explored. But in my class, if I'm going to ask them to use it, and I do, it's really important that they understand some of the fundamentals, and I'm really honest with them about some of the ethical questions and implications. I don't want to come across as such a techno-optimist that I haven't given them a fair perspective on some of the pitfalls associated with AI. I really encourage them, and I think this is important, to look at AI as a collaborator instead of a generator, and I think that helps some students feel comfortable.
Speaker 2:98% of our students don't want to cheat.
Speaker 2:98% of our students don't want to not develop the skills that they're in that class to develop.
Speaker 2:They may get distracted and have other priorities from time to time, but they're not looking for AI to replace their learning, and so I think showing them through really specific examples and letting them try it in really sort of measured ways at first is a great way to understand that. But to help them understand that this is not a shortcut that we're simply inviting them to take, I like to always sort of give the context for how we're using AI and make sure that everybody you know agrees on the importance of that context and then goes to explore it in their own way. A lot of times I give other choices, so I say use an AI collaborator for this assignment or a human collaborator for this assignment. Here's how you would do it with the AI. Here's a kind of a conversation you might have with a human collaborator to do the assignment. So I think that invites everybody to participate in ways that they're comfortable. At the same time, I don't want students to be so resistant to it that they never participate in any of the assignments that explore AI.
Speaker 1:That's right. And you had mentioned ethics, which is really important. How can higher education institutions ensure that AI is being used ethically and does not reinforce existing biases in academia?
Speaker 2:It's such an important question. I think that we have to lead. We have to be the ones out there doing the training. If we think at this point that people aren't going to develop their own ways of using it, we're wrong. So at our institution, we really looked at what values we want to bring into the use of AI, and how we can bring those values into a comprehensive training program for faculty and staff, and then bring that to students. And so I think it does take that kind of really deliberate leadership.
Speaker 2:And when it comes to bias, one of the things that is deeply integrated into all of our training is: the faster AI moves, the slower you move. Anyone that's used ChatGPT or Claude can experience this. You put something in there, and it just goes, right? It spits something out way faster than you can actually read, and there's a part of our brain that wants to match that speed, and so we read it really quickly. If you can't resist the urge to read it really quickly, that's okay, but then stop and go back and read it carefully and slowly. And then there are questions that we always ask people to have in mind. One is: what voices might be missing?
Speaker 2:Sometimes a case of bias most times a case of bias in AI outputs, in my experience, isn't overtly biased, it's just leaving out something that's nuanced, something that's not been traditionally centered in our society and in the academy.
Speaker 2:And then the related question is is what perspectives are being amplified, and can I go back and re-prompt in a way that's going to give me something more balanced? I'll give you a great example. One of the things we talk about in my humanities class is world mythology. If you ask ChachiBT to generic questions about world mythology, it invariably, in my experience, draws from Greek and Roman mythology, but I mean that leaves the rest of the globe unrepresented. Right, and especially the myth that we're working with in the classes from Southeast Asia has literally shaped the worldview of billions of people and it's not even mentioned in the AI outputs unless I deliberately prompt in a way that and having done that, then I started to learn more about which I'm not familiar with mythology from the Americas, mythology from Africa and so on. So it really has, for me as a person, enriched my own understanding of mythology because I started to prompt it in a more balanced way.
Speaker 1:It seems like prompting can also be a challenge for students, and that could be a new course too: prompt engineering, coming up with the right prompt so the output isn't biased. But aside from prompting, what are some of the biggest challenges you've encountered when integrating AI into education, and how do you address them?
Speaker 2:I think the biggest challenge has been the diversity of responses and perspectives. I don't think we realize what a tsunami-like event AI really is and has been. What came up at our institution is that there were some people who responded in that embracing way. There were some people who said, okay, I'm not sure about this, but it seems like I should learn it. There were other people who were like, stop, let's just pretend this is going away.
Speaker 2:And what I started to realize, as we had some super difficult conversations at the institutional level, was that it was a degree of change that for some people psychology is not my field, so I want to make sure that I say this really carefully it was a degree of change that, for some people, may have brought forward the kind of responses that we normally associate with trauma, associate with trauma, and so I have a little bit of background in trauma-informed training in a completely different arena, and so what I said was like, hey, let's get this core group of people trained in understanding trauma-informed principles and make sure that we're bringing those principles into our conversations about AI.
Speaker 2:And where I am located geographically, we were literally coming out from a devastating hurricane as all of this was showing up in our classrooms, so there was even more volatility in people's experiences. Letting people be where they needed to be, and so on, really helped us get to a better place, where now I feel like we have a culture of innovation at the college, and AI is a really significant part of it.
Speaker 1:That's great. AI seems to be playing a big role at the institution where you're working, but how do you see it playing out at other higher education institutions? How do you see AI playing out in higher education over the next couple of years, over the next decade?
Speaker 2:I think it's hard to overstate how significant AI will be in terms of changes in higher education, but exactly what those changes will be, I don't know. Part of the challenge, first of all, is that things are moving so quickly, right? The new deep research tools from Google and OpenAI are going to be another big, fundamental wave of change, and all I know to do at the moment is make sure we're having conversations about it. So it is a little bit hard to navigate as new things come up every day. I have two views that I try to make sure I'm always considering. One is the optimistic view. Optimistically, we choose that pathway of adapting, and that allows us to amplify our impact in strategic ways: in terms of engaging students, in terms of meeting outcomes, in terms of institutional health and sustainability. We recenter human connection. We have more time and more incentive to sit with each other, be face-to-face, and hear and experience others' perspectives. Our graduates leave with the skills they need to leverage whatever the emerging technologies are for the greater good. So that is my hope, what I see could happen.
Speaker 2:The more pessimistic view is.
Speaker 2:View is we take not the middle path right, Either we overcorrect in terms of embracing, we're just going to use AI, we're not going to be thoughtful and critical, or we just reject and say we're going to do things the way that we've always done.
Speaker 2:I think if we over-embrace it, we could over-rely on it for decision-making and I think we could create harm in systems that don't have a human in the loop. I really really care deeply about student privacy and I want to make sure that we're not integrating AI systems that create potential harms in that way. I think we could, if we get this wrong, increase gaps in access. Students who already have access to more have access to more and better AI tools. Students who already have access to less have fewer opportunities in emerging technology generally. And then, last but not least, I think if we continue this adversarial battling each other among faculty members battling students and faculty members we're not going to get anywhere and we're going to increase the public perception that higher education is losing its relevancy. So I think that's a real risk and I think the answer to those risks, on that pessimistic view, are all about leadership right on.
Speaker 1:I love that you talked about making access equitable. I love that you mentioned. Everything that you just kind of listed is really very important. So I just want to know what advice would you give to educators or institutions looking to start integrating AI into their teaching practices?
Speaker 2:I would say you already know what works. Pedagogy is incredibly well-studied. We don't need AI to come in and reinvent the wheel. Sometimes it's so different and strange to us that we assume that we need to do that. But look at the things that we already know work all those evidence-based practices like scaffolding, like relevance and so on and look for ways to scale and amplify what we already know works. I think that's an easy way for anybody to get started and then start to look for gaps, what's not working at your institution.
Speaker 2:Can you take three people from different parts of the institution who do different things? Let them spend a day together with some generative AI tools and see what solutions they come up with for things like a problem in the enrollment process, or booking rooms, or whatever it is; there are more problems, probably, than we have time to list here. And then I would say, guide student use; show them how to use it. There's a tool that we're piloting right now where a professor can literally see the chats that the students are having with the AI, and go in there and say, what if you tried asking this instead? What if you asked it for a different perspective? I think that's really important. And then I just have to put in another plug for honoring student privacy. Really be sure you understand what you're asking students to do, and the implications for their privacy of the tools that you use.
Speaker 1:That's key. That's key today, with all the tools that we're using: making sure that we ensure students' privacy. I like that you mentioned that. And that's a wrap for today's episode of EdTech Empowerment. A huge thank you to Anna Haney-Winthrow for sharing her insights on AI in higher education, faculty upskilling, and the future of algorithm literacy. If you enjoyed this conversation, be sure to subscribe and share this episode with fellow educators. You can follow Anna on LinkedIn and check out her work at Florida Southwestern State College. Stay tuned for more discussions on the future of ed tech, and always keep innovating in education. Thanks again, Anna.