The NorthStar Narrative

Decoding AI: Ethics, Impact, and Future Insights

NorthStar Academy

Simplify the world of AI with our guests, Marissa Sadler-Holder and Ephraim Lerner! Learn the key differences between traditional AI and generative AI such as ChatGPT, and understand why this technology is becoming more accessible and relevant for everyone, regardless of technical expertise.

We discuss the importance of being aware of how our data is used and the need for opting out of data sharing when necessary. Special attention is given to educating children about data privacy, especially with AI tools integrated into common apps, to ensure responsible usage and understanding.

Finally, explore the transformative impact of AI across various industries, from healthcare to education. Learn about the critical need for future generations to develop AI skills. We also highlight the importance of familiarizing oneself with AI tools to support personal development and innovation. Packed with valuable insights, this episode is an essential listen for anyone interested in the future of AI and its societal impact.

Speaker 1:

Hi, this is Stephanie Schaefer and you're listening to the NorthStar Narrative, a podcast from NorthStar Academy. I want to thank you for joining us. I hope you're encouraged, challenged and motivated by what you learn today. Enjoy the story. Hey everyone, welcome to the episode.

Speaker 1:

This week I have two people who have each just been on the show, and now I've got them both together. So I'm really, really excited to have Marissa Sadler-Holder back with us and Ephraim Lerner back with us. If you've been listening, then you've heard Marissa on episode 222, and Ephraim was on 224, so both of them just recently. Both of them love education, educators, AI, all things technology, and they're really diving in and doing some good deep work. So if you haven't listened to those episodes, go back and you'll hear an introduction to each of them, get to know them a little bit more, and hear their heart and where their deep work has taken them. But today I thought we'd have both of them on, because Marissa is actually the one who introduced me to Ephraim. So we're going to all hang out together today and go over some more general questions about AI. We'll just see where this episode takes us. So thank you so much, Marissa and Ephraim, for spending more time with me on the NorthStar Narrative today.

Speaker 1:

Thank you so much. We're happy to be here. It's such a pleasure. That's so good, to be hanging out again. Okay, so we're just going to jump right in. Maybe someone's listening and doesn't really understand AI or what it is, or the fact that they are definitely using it on a daily basis, maybe without even knowing it. So can you explain in really simple terms what artificial intelligence is and how it works, and then give us some examples of these everyday applications that we're using?

Speaker 2:

Rissa, do you want to start with it? I think, yeah, sure.

Speaker 3:

Let's just break this down super simplistically.

Speaker 3:

Basically, it is a machine that mimics human intelligence, so the ability to learn, the ability to problem-solve, that kind of thing. But what it uses is algorithms and data to be able to do this, and ultimately, when you're using something like generative AI, it is making predictions. It doesn't really understand anything.

Speaker 3:

It doesn't understand what it's putting out, but based on all the data it's collected, it is predicting the answer, and most of the time it's correct. But there are some limitations, as we discussed on previous podcasts here. For examples, I know that Ephraim probably has a couple, but think not so much of generative AI as of AI as an umbrella. There are things we use every day in our lives, Siri, Netflix, Amazon, and they all use algorithms and AI to predict people's behaviors, what they would maybe enjoy doing or using. Amazon, for example: if it finds that you're buying the same cat food every couple of months or whatever, it'll start sending you messages saying, hey, it might be time to purchase. Do you want to add to that, Ephraim?
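The "prediction, not understanding" idea can be sketched with a toy next-word counter. This is an illustration only, nothing like a real language model; the tiny corpus and function names are invented for the example:

```python
from collections import Counter, defaultdict

# Count which word tends to follow each word in a tiny "training" corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent follower: a statistical prediction,
    # with no understanding of cats, mats, or anything else.
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> on
print(predict_next("on"))   # -> the
```

Real generative models are vastly more sophisticated, but the principle Marissa describes is the same: the output is the statistically likely continuation, not a reasoned answer.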

Speaker 2:

Yeah, I think you've covered it perfectly.

Speaker 2:

I would maybe add one other key piece, which is so useful for people who may not have a background in technology or computing or coding: the difference between traditional artificial intelligence, for example going on Google and searching for something, versus something like ChatGPT, which is generative AI, like you were mentioning, Marissa. The way it's built doesn't require someone to have deep technological skills or knowledge, but rather the critical and natural skills to be able to dissect and get the output that they want.

Speaker 2:

And putting all that technical jargon aside, what that practically means is that we human beings, using natural language, how we normally communicate, can now communicate with a machine or a computer, which allows us to access this deep intelligence that's going on behind the scenes. And, like you mentioned, there's a history of this being there, but what's amazing is it's just rapidly evolving. What was behind the scenes maybe 40 or 50 years ago, when Garry Kasparov played against the computer in that chess match and lost, this is kind of the next step, the next stage, where every person, every human being who has a language but maybe isn't able to code, can now use their own natural language, which they feel comfortable with, to communicate with this artificial intelligence.

Speaker 2:

I know that's kind of skirting away from the main question, which was what is artificial intelligence, but I think there's a key piece there: these incredible computing tools and technologies used to go on in the background without us knowing about it, and we didn't have access to them. I think that's a key reason why so much of what's been going on recently has become an ongoing conversation for every single person, not just those in technology alone. And that's why it's relevant to the person listening to this: it's not something just for computer geeks or people in the technology field, but rather for everyone.

Speaker 1:

Yeah, and so Google is an example of AI we've been using for a long time, right? A basic example. But tell us the difference between something like Google and generative AI.

Speaker 3:

So the idea is that something like Google is based off of data, and it is inflexible: it has been taught that it can only reproduce based off of these certain inputs and outputs. Whereas when you have something with generative AI, what you're doing is, perhaps with the same data, creating something new.

Speaker 3:

So I guess one of the things that I share in my learning journeys for teachers is: think of it as Google kind of on steroids, right? For example, if you go to Google and say, give me the top cities for restaurants in France, what it will do is procure a list of different sites that maybe cover the topic of the best restaurants or foods in Paris, or whatever. But what you can do with generative AI is go in and be very specific and say, create a list of the top-rated restaurants in all of France and the cities associated with them. And so you're creating this brand-new list. It's pulling from all these different resources and creating a list that didn't exist before, using all of the resources that maybe Google would have pulled up. So it's a little bit different in that you are actually creating something new that didn't exist.
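The search-versus-generation contrast can be mocked up in a few lines. The page names and "index" below are made up for illustration, and real systems are of course far more complex:

```python
# A stand-in for pages a search engine has crawled.
index = {
    "paris-dining-blog":  "best restaurants in Paris France bistro guide",
    "lyon-food-guide":    "best restaurants in Lyon France bouchon guide",
    "alps-hiking-trails": "best hiking trails near the French Alps",
}

def search(query):
    # Search-engine style: return links to existing pages that match.
    terms = set(query.lower().split())
    return [page for page, text in index.items()
            if terms & set(text.lower().split())]

def generate(query):
    # Generative style (heavily simplified): compose a *new* answer that
    # appears on no single page, drawing on everything that matched.
    # Word 3 of each matching page is the city name in this toy index.
    cities = [index[page].split()[3] for page in search(query)]
    return "Top cities for restaurants in France: " + ", ".join(cities)

print(search("restaurants France"))    # existing pages only
print(generate("restaurants France"))  # a list that didn't exist before
```

Search hands back documents that already exist; the "generated" sentence is new text synthesized from them, which is the distinction Marissa draws.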

Speaker 1:

Yeah. And is it true that AI is learning from everyone who's putting in prompts, questions and everything? Can you explain a little bit about that, and how it's going to adapt over time?

Speaker 2:

You know, I think it will depend on the model itself and the platform that you're using. For example, you mentioned Google. If I were to use an analogy for what Marissa was sharing, how I see it is like a filing cabinet: Google is pulling from the filing cabinet and identifying something that you've searched for. If it's not there, it's not there. Whereas what AI is doing is finding the gaps in that vast amount of data it's got and predicting what the next step might be. So if you say "the cat sat on the...", what's likely to be put in is "mat", because nine out of ten times that's going to be there. But there's also the challenge, when we're putting in our data or our content: is it training on that data? Some models, for example Google's, will only train on their own training data, so they won't use the data you're inputting through NotebookLM or through the AI Studio they've just released to the public. They've said very clearly that if you put it in your Google Drive, it will only use the information you've got there, keeping it in your own local environment rather than building the AI from the content you're putting in. The difficulty and the challenge comes with companies like OpenAI, the parent company of ChatGPT. They're the ones saying, we're using your data, we're constantly using what you're putting in to train on it. And as they say, if you're not being charged for the product, then you become the product. That's the difficulty when we're speaking about accessibility and free-of-charge content that's available through AI.

Speaker 2:

What is it using? It's sometimes using your content. So check beforehand where it's pulling its training from. Is it using the data and content that you're innocently putting through, or is it more subtle, something going on behind the scenes where it's being trained on its own servers with its own algorithms? Because it's pulling from a vast range of data itself, and sometimes including as much information as possible can also affect the reliability and the success of the output.

Speaker 1:

So what should we be worried about, if anything? What should we put in? What should we not give it?

Speaker 3:

Well, definitely anything with personal information. I think that is the most important thing; we want to practice using AI safely. And I will say, just to piggyback off of what he said, you can opt out of some of these things. For example, Facebook, any of the Meta products, they all use whatever you're doing on Facebook to train, but you can opt out of that. I, for example, opted out of the Facebook one, because they're using pictures and things to actually train their image generation, and so you have to be careful about that.

Speaker 3:

I think it is our responsibility, just as individuals. I mean, I know that AI is a very complicated thing and we don't have a lot of time in our personal lives to chase after everything new that is happening. But I think it is important that we make ourselves aware of how these models are being trained, how the data is being used, how your personal data is being used with these models. And it isn't just AI; this goes into the bigger scope of how data is being used, even with your Amazon, with any technological piece really. Just be aware of the data they are collecting and, if you can, opt out. I like to opt out; that's just me. But a lot of times you can.

Speaker 2:

With OpenAI itself, I'll use that example: the GPTs are created by a lot of the users. They've sometimes given the option to say, we don't want this to be public, but that's not necessarily saying, we don't want this to be used for training data. So even being able to opt in and opt out, being aware of what we're choosing to opt in and out of, it might be subtle, but those choices are important. And if you don't feel comfortable putting that type of information on something like social media, then before putting it into an AI bot, you need to be careful where that information is going to go.

Speaker 3:

I do want to just say one more thing, because I think it's important: a lot of our listeners have children, right? And so if your child is using AI, they should also be taught: be careful about what you're putting in; make sure your personal information is not in there. A lot of times we don't even realize, and these tools are getting to the point where they're well integrated, and we didn't ask for this. I mean, I love AI in one aspect, but there's also a safety concern, and I think a learning curve with that safety concern.

Speaker 3:

If I were a parent, I would sit down and look at the apps my child is using. Whether it's Snapchat or whatever, if you have an older child, those apps have AI bots on them, and so you have to be careful of that as well. And just in general: I bought a new computer the other day, a PC, and it now has AI on it, something you can integrate with your computer. It seems really cool that it can help you analyze whatever is on your screen, but it's also accessing everything that is on your computer. So it's one of those things where you really just have to be careful. While it sounds really cool, maybe opting out, and using a certain LLM or a certain chatbot you feel more confident with, might be the route. But making yourself aware is very important.

Speaker 1:

Yeah, no, those are good, true safety concerns, but I know there's also a lot of common misconceptions and ethical considerations. So what are some of those common misconceptions about AI and what ethical dilemmas do you foresee as AI continues to evolve in various fields?

Speaker 3:

You can jump in here anytime. But I think one big misconception people have is that, because it's technology, our relationship with it is such that whatever it produces feels more correct than what a human would produce. It's built by humans, but at the same time we trust it to be correct, like a calculator: you calculate something and it's correct, right? And so I think we have this feeling that what AI produces is more correct than what a human would produce. But the thing is, it's not always correct. There are issues, there are limitations to it, and it does have biases in it, and so we have to be careful about that. We can't just trust it, and a lot of people do, just because it is technology.

Speaker 3:

And another thing is, it doesn't really understand. I know I touched on this earlier, but this is what's so complicated, because we use language like "we're talking to AI", "we're working with AI". Those are things we do with other humans, right? But the thing is, it doesn't understand what we're saying. It doesn't understand the context. It is not a person; it is just something mimicking the intelligence of a person. So when I say it doesn't understand, I mean it's just predicting what the correct answer will be. I think that's important to understand: it's not human, it doesn't understand, it doesn't have feelings.

Speaker 3:

I mean, I know it seems silly to state this, but it's not sentient. It just isn't, right now. And let's hope that it never will be.

Speaker 2:

With misconceptions around AI, I think there's this view where either we support AI, we support the ideas behind it and where it's going, it's going to be the future and it's going to answer our questions; or the opposite extreme, where AI is a dangerous tool and we need to be careful not to use it, it's going to take all our jobs, et cetera. I think we need a more balanced view of these misconceptions, seeing the nuance and digging into what Marissa was saying, seeing nuance in the role of AI. I kind of see it in terms of a Swiss army knife: certain AIs are trained to do very specific tasks. So if we ask an AI that creates music to create a PowerPoint presentation, it will do a terrible job, just because that's not what it's trained and made to do. The same goes for systems built on mathematics, which is a weakness a lot of people have noticed with AI, because it's using predictive analytics; it's not dealing in a yes or a no, a matter of fact, this or that.

Speaker 2:

With the first models of ChatGPT and early AI, the maths was a really sore point. The reason why is that people were using it assuming it was going to do all their thinking for them, and the way it was functioning was that it was choosing an option, and sometimes, because it needed to vary things up, it would choose the wrong option. It just didn't work in that environment. So expecting an AI to do everything, I think, is a bit far-fetched. It's about knowing the role that it plays, and also knowing the context it's pulling from.

Speaker 2:

And then the opposite extreme is AI can be dangerous.

Speaker 2:

It can be a dangerous tool, but if we're conscious of what we're using and how we're using it and, like we were saying before, the data that we're providing it, the content we're providing it, that's something that I think could be quite important.

Speaker 2:

And I think, when we're thinking about the other side, the ethical side, when we consider people who might be using it in a very localized setting, it means that it can create more extreme versions of itself.

Speaker 2:

So, for example, years ago they had these AI tools from Google that were talking to each other, and I think they shut it down quite quickly because it started speaking in its own language. It was feeding off everything from the internet, and some of the worst stuff we can see in humanity can sometimes be found in particular parts of the internet, and it was being racist and bigoted, in the sense that it was repeating some of the worst things. And if we're not careful, if we're using AI in a very localized, very small setting and we're expecting it to give us broader results, it won't be able to. I think that's important as well: it will only be as good as the data it's pulling from.

Speaker 1:

All right. Talking about our jobs, let's look at some different industries and see where AI is transforming them, such as healthcare, transportation, creative arts. Let's talk through some of those. What examples would you have to give?

Speaker 3:

Well, in particular healthcare. I haven't dived in as much because I've been so focused on education, but I know with healthcare, because of AI's ability to predict so well: if the doctor would sit down and put in all the different things they're seeing, the characteristics of somebody's illness or whatever, it can predict the best route, the best path for this person, that type of thing. But what's interesting is, while that prediction is probably very good, there's that human aspect that only doctors can actually bring in: they understand far more of the context surrounding the person and their needs and their current environment. Without that, it's just a formulaic prediction. But with the combination of the doctor and this pretty good prediction from AI, you're going to create something that is hopefully one of the best plans for an individual's health, or pathway for their health needs. Ephraim, did you want to add on to anything?

Speaker 2:

Yeah, I think that's a great example. I've got a friend who works in this space, he's got a company in this space, and he speaks quite a lot on podcasts internationally with some of the biggest names in healthcare, especially in America. His name is Mendel Erlenwein, and he speaks about value-based care and the importance of understanding the patient and being able to support them: we tend to think of medicine in terms of being reactive, but this is about being proactive and preventative, creating an environment where patients feel they're being looked after. That has been a really eye-opening experience for me, because when you've got that in a secure way and you're able to get deep insight into, say, diet or healthcare, and we can see this in sports and other industries where they're getting that insight, what it can provide is deep support and understanding of what might be going on. But it will only be as good as the context, and therefore the role of the healthcare practitioner is more crucial than ever. The same goes for other industries. In sport science, for example, the understanding, the emotional connection with that person, knowing what they're going to give you and what you're able to see; sometimes we can't put our finger on it, but there's something there, and being able to articulate it brings together the best of the qualitative and the quantitative sciences. You've got both the human side and the scientific data going on behind the scenes. Again, this reinforces the idea that artificial intelligence is an incredibly powerful tool, but if the data is missing, it will set you down a path a lot further from where you started. Those gaps are going to be a lot more significant if you're using AI.

Speaker 2:

And where that can be interesting is, for example, when people's lives are at stake. In healthcare, there's a massive risk. In education, there's a big risk. In psychology, there's a big risk.

Speaker 2:

Transportation, if our entire systems are built around AI. Take the Tesla, which is using artificial intelligence to be able to drive or park. What happens when it goes the wrong way or does something wrong? It's not always perfect, and obviously it's incredibly intelligent in how it's been built, but I saw last week on Twitter, or X as it's called now, this story of a car that had gone off on its own. It didn't have a driver in it, and it had gone through oncoming traffic and parked, and the police officer pulled it over and there's no one there.

Speaker 2:

So obviously these are extreme examples, but the greater the potential, the greater the risk, and we need to be careful with how we handle that. And if we take another industry where I think there's a lot of hype, the creative side, Hollywood, Netflix, you see actors who are up in arms because people are being replicated. It touches on a very delicate and sensitive issue in society, which is creative ownership and creativity for human beings.

Speaker 3:

Yeah, and I will say, in regards to affecting industries: without a doubt this is going to affect jobs and industries, for sure. But by the same token, I would say it will also create new job potential and impact industries that way. I think what we're dealing with is a change in what we already do, a change in focus toward what is really important, the skills that are really required of us. We've always been expected to just do it all, right? And now we can focus on the human traits that only humans can do. And I think that's lovely, right? Because it really is forcing us to redefine what it is to be human, and I think we're going to be seeing that in our jobs in the future. I mean, it's happening already.

Speaker 3:

There are people who say they won't hire somebody who doesn't know how to leverage AI skills. There are kids coming out of graduate school sitting down with career coaches who say, hey, your resume needs to have AI written all over it if you want some kind of cutting edge, and so it is definitely something we're going to have to focus on. But as a whole, I think we just need to prepare for change, and part of that preparation is doing what we're doing now: we're talking about it, you're learning about it, you're listening to this podcast about it. That is the first step in preparing yourself for this change.

Speaker 1:

No, that's really, really good, because that was my next question, about how AI is going to affect the job market. And I think you even answered it before: it's not going to be able to take over all of healthcare, you have to have the human aspect in all of these, and then, like you've touched on, it's not always correct, so someone has to be analyzing it. But yeah, we do have to be teaching our children, teaching the next generation, about it, because even though it might not take all the jobs, it is going to change the jobs. So that's a really good point. We have to keep on learning. All right, what about AI and decision-making? How is AI being used in decision-making processes across different industries, and what are the potential risks and benefits of relying on it? So, we know it's going to change the jobs, but what are some risks that people might be looking at?

Speaker 3:

I would start by saying that all decisions need to have some kind of human input. There's something called "human in the loop", a term that's been coined and is being used, and the idea is that whatever AI is producing, there are constantly human eyes on it to ensure that it is correct, or somebody being critical of the output. When we do that, if we're using it to help us make a decision, it keeps things in check. It can predict, it can give you great suggestions, but we can't put all of our trust into it. Like I said at the very beginning, we have this weird relationship with technology where we just trust it because we think it knows what it's doing, it's trained to do this, this is why it was built, and we just use it.

Speaker 3:

But we need to be very careful about that, and all decisions definitely need to have some kind of human in the loop. There are a lot of benefits too, in that maybe it's going to create solutions or help us make a decision and provide context that we didn't think about before. I think there's that capacity as well. So it's both of those things: understanding its limitations and knowing that you have to be a part of that decision, but at the same time allowing it to give you some inspiration or different ideas that maybe we wouldn't have thought of on our own.
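The human-in-the-loop pattern Marissa describes can be sketched as a small gate around a model call; `ai_suggest` here is a hypothetical stand-in, not a real API:

```python
def ai_suggest(case: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"Suggested plan for {case}: option A"

def decide(case: str, human_review) -> str:
    # Never act on the model's output without a human sign-off.
    suggestion = ai_suggest(case)
    if human_review(suggestion):  # human eyes on every output
        return suggestion
    return f"{case}: escalate to a human decision-maker"

# A cautious reviewer rejects anything they haven't verified themselves:
print(decide("case-42", human_review=lambda s: False))
```

The point is structural: the AI can propose and even inspire, but the accept-or-reject branch always runs through a person.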

Speaker 1:

I kind of like the idea of brainstorming with it, collaborating with it.

Speaker 3:

Yes that's a great way to take it.

Speaker 1:

Like this whiteboard behind me: my brain can only do so much brainstorming by itself, but you like to have somebody else in the room. So sometimes, yeah, it can help you think of cooler ideas or more creative ideas or something different. So I love that. All right, so thinking along those lines: innovation. What are some things you're excited about, that you're hopeful about, that might be coming?

Speaker 2:

If it's all right to jump in here, I think this ties in quite well with innovation and the last point you were making, which is brainstorming. I think there's this assumption, and now this frustration, with AI being the solution that's going to give us amazing responses or amazing results without us really working on where it's pulling its information from or how it's being used in the process; we see it as results only. And I'm to blame as well, because I'm sometimes guilty of it myself, thinking about how it can cause me, in my use of it, to be lazy. It can allow me to just think of it as a way to get an easier response. Whereas I think the value with AI is not just in the results alone, but also in the process, in the build-up to making a decision. And when we think about innovation as well, it's difficult to really imagine a world where AI is playing in and shaping those parts of our life in a way that can really help us, as opposed to being used the way we might use Google, only when we're looking for something or trying to get something quick. But actually embedding it into our processes, into how we think and how we operate with people and with others and with systems.

Speaker 2:

There's something really powerful there. From an innovation point of view, and this is something we discussed in the last episode we had together, if we think about humanity, about society, since the industrial revolution we've been giving roles and responsibilities to individuals that fit a very specific target or goal we had before we met them or brought them into that role. What AI could be is this potential where we're able to capture the individual as a whole, to know them and learn them, learn what their strengths are and the areas they might struggle with, and see how they could fit into our organizations. For me, although that's not a specific innovation, I think that's something that, as a society, could be really powerful. It could be a tool that we could use in amazing ways. And I guess they say that with Netflix, it could be that, and I think you mentioned this last time, Stephanie, you could choose to star in your own movie.

Speaker 2:

You could choose the setting, you know, the depiction of an experience that you're going to go watch on Netflix, and you could choose the actors, the actresses, different people that you're going to have in this setting, and really personalize that. Whether it's a story from the Bible or a genre that they really enjoy, you could bring that into that person's life.

Speaker 2:

It could be quite powerful, and think of the possibilities for those who might struggle or suffer: someone who might, for example, have anxiety about leaving home and could rehearse that in these experiences to feel more confident leaving their environment, or someone who might be anxious before an exam, or a medical practitioner who wants to practice a surgery. Alongside other emerging technologies, AI tools like these can be incredibly powerful and can allow us to really get to the heart of humanity, to think about some of the challenges that we have, and to rethink things like diseases as well, finding innovative ways to bring all that information together and get clarity, where a tool as simple as ChatGPT might be able to give insight that we might not have been able to get before.

Speaker 1:

Yeah, lots of exciting possibilities. Marissa, I just wondered, do you have a story where you've seen AI really help someone with special needs in the special education realm, anything in that area? I don't think we talked about that on the last podcast.

Speaker 3:

I went to a conference yesterday on MTSS, which is Multi-Tiered System of Supports. This is in regards to students and how we can get in there and help them at these different levels. What they presented was a triangle framework where a counselor or a teacher works with the student, but then you also work with AI, collaboratively. So you ask questions of the student, and the student responds how they want to respond. For an example, they were looking at potential career pathways, interest in different careers, and goal setting.

Speaker 3:

You sit down with the student and the AI. You ask the student: what are you passionate about? What are your interests? Is there anything that, historically, you lean more towards? You gather that kind of information and then, putting it into the AI, the AI will come up with potential career paths, and the student can look at it and say, okay, tell me more about, I don't know, travel agent or something like that, because they really enjoy traveling, right?

Speaker 3:

So you look into that, and then you can say, okay, well, let's set a goal for this student, let's sit down and see if this is something we can do to help prepare you for that pathway.

Speaker 3:

And it will generate these long-term goals and short-term goals, and the student can go in and say, I don't really want to do that, I want to do this, and it shifts and creates a pathway for the student that is achievable. So we're creating new ways to use and integrate AI in education and seeing how we can actually support these kids. This can be used for special education and for setting goals with learning in general; it doesn't have to be just career pathways. I think AI is that collaborative partner, and because it is technology, going back to that piece, it doesn't have feelings, and we can tell it yes or no. The feedback isn't coming just from the adult in the room, so the student gains confidence and doesn't feel like there's that imbalance of power, because the AI is balancing everything out. It's incredible to watch. So I definitely think innovation in how we do things is going to be the new thing, and I, for one, am very excited about that.

Speaker 1:

Yeah, I'm glad I asked that question, because that's a super exciting example. Lots of possibilities and really practical ways of helping students and helping us learn.

Speaker 3:

Parents can do this too at home with their child, right? Because there's that balance of power between parent and child, you can bring that third party in and have that conversation using AI, almost like a mediator, to come up with an idea that everybody agrees upon. It's incredible. It doesn't have to be just in the classroom or in the school setting; you could be using this at home.

Speaker 1:

Yeah, that's really cool. All right. So thinking parents, students, anybody that might be listening, and so something's triggered like, yeah, I want to know more about that. I want to know more about the job industry, I want to know more about safety concerns. What are the first steps for people to take to learn more? Where does someone go when this podcast ends?

Speaker 3:

I mean, you're more than welcome to follow me on LinkedIn, of course; I do post a lot on there, and the same with Ephraim. But a lot of it is just giving yourself the time to search and research and pull from different sources. I think that's incredibly powerful. But also, just sit down and give yourself five minutes to pick one tool to play with, because you can read a whole lot, but by actually going in and practicing, you can see where this is going and you'll have a deeper understanding. I also think Ethan Mollick is an interesting person to follow for the broader aspects, not just in education; he is constantly being critical but also seeing the potential. Ephraim, do you want to hop in and share what you think somebody might want to turn to?

Speaker 2:

Yeah, I think there's this assumption as well that AI is already there, that the final version of it already exists, and that, like in other industries, those producing it are so much further ahead than those who are using it. One of the fascinating pieces is that not only is this evolving at rapid speed, but we, the ones using it, are at pretty much the same level of knowledge, not in understanding the AI itself, but in the tools that are currently available, as those using it in the larger companies. So, although there is a bit of a learning curve in getting used to it, the benefit for someone listening now who wants to get involved is that there's not much of a gap between those who are using the latest tools and where they might be at right now. And what do I mean by that? I mean that if a person decided to invest their time and energy into the latest tools, they would have the same amount of knowledge from when that tool was launched as someone else who's been using it since it launched. The companies don't have a six-month extra window with that tool. Usually these tools are released very soon after they're created, and there's a benefit to that being accessible to everyone. So I think that's the positive side.

Speaker 2:

On the other side, on the side of being able to learn and think about it, there are, like Marissa was saying, benefits to being able to play with it and feel comfortable with it. I think students and younger people who have been around technology for longer might benefit on the technical side, whereas someone who's older, who might have more of a critical understanding of ideas and concepts, might have the benefit of being more critically aware of where it can and can't be used.

Speaker 2:

So it depends on who's using it and how they're using it.

Speaker 2:

And I think one other point, about what you mentioned with regards to special education needs, and this is really a more blanket thing for learning in general, including learning about AI: for a long time we've focused on one source of information going out to multiple people. That could be an individual teacher, or it could be a source of knowledge from a book, and adapting that to the needs of the learner, the recipient, has taken a huge amount of time and energy.

Speaker 2:

One of the benefits of AI is that it can allow the learner, the person who's learning, and that could be learning about AI itself or just learning in general, to think about what their strengths are and some of the areas they might struggle with, and have the material adapt to their needs. That way the information isn't just teacher-led or knowledge-led from one source; the source is actually adapted to where they're at. And I think that could be really beneficial for the student when they're learning about AI: to use some of those AI tools to help them with that, like you were saying with brainstorming, using AI to teach them more about AI.

Speaker 3:

Can I just add on one more thing? Well, actually two more. I think what we tend to forget is that kind of question, right, Stephanie? Honestly, whatever thought you are exploring for the first time, whether it's in ChatGPT or whatever tool, you can, as a parent or a listener, go into that chatbot and ask the question: I want to learn about AI for very simple reasons, and I want to get started. Where should I start? And it's going to give you something you can focus on. Just remember, if you're unsure whether it can do something, just ask it.

Speaker 3:

But I think that is a great first start to not only building your own AI literacy but also having a lighthouse that can answer those questions for you. And I did, Stephanie, I talked to Ephraim about this: we're thinking about holding a parent session where we go over a few things about AI, but really also let parents explore how AI could be useful for themselves, whether it's in parenting or just in their own jobs. Just an exploratory session where we come and we play. We'd go over a couple of little details, but mostly we play, and we're thinking about giving your listeners a coupon code so they can have a discount if they're interested.

Speaker 1:

Yeah, that's awesome, great. Yeah, let's move forward. We can definitely advertise that. And, yeah, thank you for just helping us continue to think through some of the main points, some of the things to be hopeful about, some of the things to make sure we're being wise in our decisions and how we're using it. So lots of exciting things that can help many of us. So thanks for encouraging us in that and keeping us informed. I really appreciate your work and, yeah, looking forward to what you can do to help parents and students, and I know you're already doing that deep work.

Speaker 1:

So thanks for sharing again with us today. Thank you for having us. We appreciate it so much. Thank you so much for listening today. If you have any questions for our guests or would like information about North Star, please email us at podcast at nsaschool. We love having guests on our show and getting to hear their stories. If you have anyone in mind that you think would be a great guest to feature, please email us and let us know. And don't forget to subscribe so you don't miss out on upcoming stories.