My EdTech Life

Episode 313: Angeline Corvaglia

Fonz Mendoza Season 1 Episode 313

AI Safety, Ethics, and the SHIELD Conference with Angeline Corvaglia 

The future of AI safety is unfolding fast—and we need to be ready.

In this episode of My EdTech Life, I sit down with Angeline Corvaglia, a global leader in online safety, AI ethics, and digital literacy. We dive into some of the most pressing issues in AI today, including:

🛑 AI-powered predators and chatbots—are they automating child grooming?
🔍 The hidden dangers of AI relationships—a shocking story of a 13-year-old's first abusive chatbot "boyfriend"
📢 The SHIELD Global Online Safety Conference—why this worldwide event is breaking barriers and amplifying unheard voices
🚀 The AI arms race—who is really in control, and where are we headed?

Angeline shares eye-opening insights on the urgency of AI literacy, why parents need to rethink their approach to digital safety, and how the SHIELD Conference is uniting 16 countries to take action.

💡 Join the conversation and be part of the movement! Sign up for the SHIELD Global Online Safety Conference here:
🔗 https://shieldthefuture.com/

🎙️ Discover more incredible conversations on AI, EdTech, and the future of learning at:
🌍 www.myedtech.life

🔔 Don’t forget to like, comment, and subscribe for more thought-provoking discussions!

Fonz:

Hello everybody, and welcome to another great episode of My EdTech Life. Thank you so much for joining us on this wonderful day, wherever it is that you're joining us from around the world. Thank you, as always, for all of your likes, your shares, your follows. Thank you so much for just interacting with our content, and thank you for all the great feedback. We really appreciate it, because, as you know, we do what we do for you, to bring you some amazing conversations that'll help nurture our education space.

Fonz:

And today I am really excited to have on a wonderful guest that I have followed and then connected with on LinkedIn. She is putting out some amazing things, things that will really get you, you know, at least stopping and thinking and kind of meditating, because sometimes, you know, we move too fast, we break things, and then we're like, oh, if I would have just thought about that a little bit more. So today I would love to welcome to the show Angeline Corvaglia, who is joining us today. Angeline, how are you doing today?

Angeline:

I'm doing great, thank you. How are you?

Fonz:

I am doing excellent, and thank you so much for connecting with me. Like I mentioned earlier, I know Saturday mornings are very precious, but I do appreciate you taking the time out of your day to just share your passion, share your world with us, and to really share what you have been amplifying on LinkedIn and through, you know, all your various posts. And we're going to be talking about an amazing conference that you will be speaking at as well, so there's a lot to cover. But before we get into the meat of things, Angeline, if you can give us a little background introduction and just what your context is within the digital safety space, education space and even parent space, that way our audience members can get to know you a little bit more.

Angeline:

Okay, thank you so much for having me. There are some things that are always worth giving our time for, and spreading messages, your own and others', is definitely worth the time. So, I have a very eclectic background; it would take me a while to tell the whole thing, so I'll just start with the online safety space. It actually found me, because when my daughter was born (she's nine now), I was a CFO in a financial institution, and I was working there for 15 years total. After that, you know, I never wanted that to be my future, so I quit, and I was working for a software provider in sales. They put me in sales; that wasn't the best decision, but I did that for around three years, and then I said, I have to do my own thing. My intention was actually to do digital transformation, and a little bit of help for the kids on the side, because that was the part of being a CFO that I liked. But then I noticed people on LinkedIn were posting things for kids, and I felt like the medium wasn't something that kids were going to listen to, or it was actually long articles, for example. So I'm thinking, oh, I'll just have some fun and create a video. So I started asking these experts, can I create the videos? And they're like, yeah, sure. And I got a very big positive reaction to these videos.

Angeline:

And then, and this was about a year and a quarter, a year and a half ago, I realized, you know, there's so much that needs to be done. After spending around three months mainly focused on that, I realized, I have to do this, I have to do this full time, I have to help with that. And AI is one thing I'm especially interested in, and the reason I first got interested in it is actually from my time at the software provider. They were doing work for customers, and AI was the thing for the developers, it was the thing for the tech experts, and I wasn't a tech expert.

Angeline:

I was the bridge between tech and business, and I felt, you know, this was when ChatGPT had just recently come out, and the world was changing, and people who weren't in tech didn't feel like they were part of it. So I was like, I need to help these people understand. So that's how it started. And then I had two characters; my activity is called Data Girl and Friends, and I had Data Girl, and then someone suggested, why don't you have AI? So I have Isla AI Girl too, and Isla talks about AI, and Data Girl talks about online safety and all the other privacy aspects.

Fonz:

That is wonderful, and what I love, when I get to talk to guests, and this is what I love the most about this and about amplifying people's voices and their work, is just to hear the background that they're coming from, what they're seeing, and how they either saw something and tried to find a solution to it or, in this case, were working along with a company. Like you say, you were making videos and then all of a sudden saw, hey, there's a need for this now, because now you're seeing some things, and this is, I think, fantastic. And Data Girl and Friends is something that I know I would love to share with my parents, and so that's why I'm thankful that you are willing to be here today to share a little bit more about that, because, as part of my job, I do get to work with parents on a monthly basis. We talk, and we have these conversations about data and data privacy. The most recent conversation that I did with them, I posted on LinkedIn; we were doing one on sharenting, where parents are just oversharing, you know, pictures and so on. And then I talked to them also about these AI platforms that can now take some of those pictures and, you know, basically undress those pictures, and then there's the extortion aspect of it. So we go deep into those conversations, because I know that, although it's a tough topic, it's about informing parents and letting them know the dangers, and also talking to them about AI and chatbots.

Fonz:

So, kind of going from that information into that path now in the conversation: I know that you have spoken very much about AI, and you're very vocal about it. But I want to ask you, as far as AI literacy is concerned, I know that AI is moving at a very rapid pace, and it just seems like every day, or every second, there's a new app, a new company, something new, and all these models are coming out that are reasoning and all that good stuff. So I just want to ask you: with this moving as fast as it is, do we need to focus more on that AI literacy side, or do we need to focus more on implementing more robust AI safety regulations?

Angeline:

I'm going to say the AI literacy, because I no longer trust government or industry to solve it. I don't like to make political statements, so I don't want to get political, because that often distracts from the message, and I mean any government. We do need regulation, we need responsible industry, but, as you said, it's moving very fast, and if we wait, then we're going to keep waiting. There are enough examples of parents who have been fighting. I know one parent, Jesper Graugaard, in Denmark, who's been fighting for the privacy of his kids for five years. He's clearly in the right, but he can't really move very fast. And in other countries, like Norway, they managed to actually get change in the government, but it took a year and a half for the government to actually make rules about privacy for kids in schools. In a year and a half, how much has happened? So I think the literacy is most important. Long answer to a short question. And the issue with this literacy is exactly that it's moving so fast.

Angeline:

And I was just thinking, when you were talking about working with parents: one whole aspect of my concept, my way of working, is that parents need to understand that they're not going to know. Their kids are going to know; the parents are not going to know. So the way to keep kids safe, what I try to bring about with the short videos, is that parents and kids should watch them together, talk about them together and teach each other. Because there needs to be trust. The kids are going to know; they're going to know parental controls, and there's always a way around a parental control. Their friends are going to tell them how they're using AI, and they're going to try it out. So there needs to be trust, and the parents just aren't going to figure it out on their own. So that's kind of my way of seeing it.

Fonz:

Yeah, and you know what, I love that you brought it back to the parent aspect of it, because I know, with my work with parents, and I'm coming in not just from education, I actually came from business and marketing into education, I know that there's a term that is used quite often: oh yeah, you know, our learning community. And sometimes what I feel is that we don't include the parents as much in that learning community.

Fonz:

It just seems like it's the upper management and then, of course, the mid-level and then the teachers and then students. But I love that you touched on the fact that parents need to know the students are already using it. The students, obviously, because of their friends and what they see on social media and things of that sort, are already familiar with a lot of the apps, but the parents aren't. And so I love the way that you bring that together and say these short videos are for parents and their children, you know, to watch together and have those conversations. And that's really the job that I get to do with our parent liaison specialist, or our parent engagement specialist, I should say: the goal is, we tell them, we're having these conversations, but I'm also giving you these resources, both in English and Spanish, because those are the two predominant languages here, where I live along the border. These are resources to have those conversations with your son or daughter, just at least to get them to think for 0.5 seconds, you know, before they click send or share whatever it is that they're going to share, because of the long-term consequences that might happen later on. And it's also about talking to parents about that too, like, hey, when you're posting something about your child, is this something that you would like posted about yourself?

Fonz:

Because later on, you know, with students and with AI, like I said, there's even more of a danger now, I think, or at least it's heightened because of what can be done with these apps. So I love the work that you're doing in bridging that gap between parent and student, or child in this case, and bringing that together. So let's talk a little bit more about that parent side, because I would love to pick your brain and learn more and see how I may also share what you're doing with parents. I know you've spoken about AI-powered predators and chatbots and the automating of child grooming. Can you walk us through an example of what are some of the flags or some of the things to look for when this might be happening?

Angeline:

Well, obviously it's about a change in behavior, right. And just before I go into more detail, there's one thing that I really want to mention in terms of how, in my view, what I'm trying to achieve needs to be different: parents need to be able to admit more that they don't know things, that they don't have the answers. It's a societal thing, right? If you're in a meeting at work, who's going to be the one to say, I don't understand what you're talking about, can you please explain it in a simpler way? It's hard, because in general, in all the societies that I've lived in (I've lived in six different countries), it was always like you're supposed to know, and asking questions and admitting you don't know is hard. But with the tech world and kids and parents, we have to admit we don't know, because part of the problem is that kids think they know better, especially in terms of privacy. So, yeah, just to say that before the signs; I would say the first sign is openness.

Angeline:

I just recently have been speaking a lot with Megan Garcia, who recently lost her son, Sewell, and she's going to speak at the conference we're going to talk about later, and we've been talking about her experience. One of the things that she noticed was a change in behavior, in the sense that he was talking less to her and was less honest, less open. It's a first sign, you know, that something is wrong. And another thing is just if they want to be alone with their device. You know, it's tempting to let them be in the bedroom or be alone, but I've heard a lot of experts say the worst things happen in the bedroom, even on, you know, all these platforms. I talk a lot about the online world, but I don't spend much time on it, like Discord and things where the kids can watch.

Angeline:

You know, Roblox, where they can have the games, and you think it's not dangerous, but actually it can be, and it's good, especially if they're younger kids, to have them always in the room with you. Yeah, so those are the signs, basically just a change in behavior.

Fonz:

And you know, that's very interesting, because that's something that does come up in the talks that I have with parents. Many times they may think, well, you know, it's just puberty, it's just the age, they're in that awkward stage and they start isolating themselves. And I always just say, you know, if there's a sudden change, that is something that should be noted, and to start asking and doing, you know, the parent thing: observing, is everything okay, are you okay?

Fonz:

You know, noticing some of those behaviors. And you mentioned Megan Garcia, and that's something that I did bring up with our parents when we had our meeting this past year, I think it was the November meeting: talking about how easy it is to access these chatbots on your devices and computers, and how easy it is to even open up an account. And so I played a clip from when Megan was getting interviewed, where she mentioned that move fast and break things should not be the approach when it deals with students, and especially the lives of children. So going into that, through your work and what you've been doing, if you could tell these companies, hey, this is what needs to change, what would be some of those things that you would ask to be changed?

Angeline:

I would ask them to have their products looked at and created together with experts like psychologists and psychiatrists, behavioral experts, even teachers, because they're largely left out of the discussion, and this would already be a big step forward, right? I mean, I recently learned about the existence of grief bots. When I found out about this, I was speechless for 20 minutes. For people who don't know, these are AI chatbots that are actually created as a copy of a person who's passed away, and they are supposedly for grief. But when I asked psychologists that I know, they're like, obviously we weren't involved in this, because this is extremely dangerous and risky, right, the way it's being done, especially towards kids. So this is what I would ask: can you just get non-technical experts to assess your product for whether it's safe or risky?

Fonz:

And this just needs to be done more across different industries and expertise levels. It was mentioned how important it is that you have that co-creation of these applications, with not just, I guess, your end-all goal in mind of obviously getting people on the app and keeping people on the app at any age level, but also, if it's something that's supposed to be used by young adults, or children for that matter, that they do get that feedback. And so, for me, what I see many times is the influencer market. You know, you get people that have a heavy following, and they get used: hey, we'll give you our product, or we'll pay you this much to promote our product. And really sometimes it's like, well, are they even, you know, taking into account the privacy, the data, the dangers that might occur? Or is this just simply a paycheck for them, and they're just going to put it out there without any regard to their own personal beliefs or views or anything? It's just like, hey, this is what I do, I'll just share it out there. But I do believe that there is something that's very important, and that's making sure that everybody is at the table, because it brings us back to the ethics of it, the ethical use of AI, and going into the different biases and the outputs and the uncertainty of those things. I mean, just getting more people involved in giving that feedback, I think that's something that's fantastic.

Fonz:

And obviously we talk a lot about guardrails. Now, my big viewpoint has always been: how can you put a guardrail on something that you don't own? Because a lot of these applications are plugging into a system, you know, that large language model, and they're pulling that data from there. So if you don't own that... Many companies say, oh well, we're putting in guardrails and these safety rails, and I'll hear it from all the education platforms: well, we've got guardrails in place. And I'm like, but how, if you don't own this? Is it just somebody putting in code that says, if this, then don't do this, and that's your guardrail? I don't think that that's very safe at all, or ethical. On that, what are your thoughts on AI ethics, and, in this case, what could these companies do better to improve that?

Angeline:

Well, I think that, exactly as you said, these companies overestimate their ability to control things. And giving them the benefit of the doubt, that they honestly believe that what they're putting out there can be controlled, then they need to trust, you know, that there are people on the other side. And I think part of the problem is actually that the industry, the AI industry, the creators, are largely in a little clique. And I sometimes feel, and I don't know them, so I'm just saying, that they probably live in their own little world in San Francisco or something and honestly have no idea; they have a kind of distorted reality. From what I've heard, they talk about creating new beings or some strange things or religions. So, yeah, I would tell them: talk to normal people, see normal people, spend some time out of Silicon Valley. And I do believe, going back to something you said before, that in the end, and I don't know how long this end is going to be, and sometimes it's hard to keep believing this, but in the end the winner will be the one that brings the most people to the table, because that AI is going to be the most intelligent. The more information it has, the more useful it's going to be.

Angeline:

I work with a lot of people from Africa, and I have yet to see an AI system, and I would love someone to show me one, that can produce a non-biased image of an African. It's even gone so far that I had to ask my African partner, can you just send me pictures of Africans? Because I can't trust that any output I get is not biased, for students, for example, who need to learn about Africa, if the system hasn't been fed with proper information about the continent and the countries in it. So the winner is going to be the one that figures out: I have to bring the most people to the table so my system is really fair and useful for more people.

Fonz:

And I agree with that; what you said really resonates. I know someone who is an advocate of AI, and she's out there also spreading the word, and we did a presentation together, because here in the state of Texas they are slowly rolling out the use of AI for grading constructed responses, or short little essays, as opposed to using manpower to read through these essays. Obviously, it would take a lot of time to do that if you're doing it in person with more people, but now they're just saying, okay, we're going to do a small percentage, just to kind of test it out.

Fonz:

And going back to what you were saying, so, for example, an AI model being used in Africa and an AI model being used here: I know that even today, when I've gotten into some of the image generators and you put in, you know, show me just a janitor, you get a certain look, you know.

Fonz:

Then for doctors you get a certain look, and for, you know, a lot of things, and I'm like, wait a minute, this is very unusual, this is very weird. And so by countries, even, you know, now it's like, how are they perceiving us? Like, if they put in an American, what does that look like to them as well? So going back to that, it's that information: is it accurate information? And that's kind of very scary too, because even when you use an image generator, where it'll be like, hey, put yourself in here, or put in a prompt, and I describe myself and I'll put there, you know, Hispanic male, every single output that I get for Hispanic male always gives me a beard or a mustache, and it makes me look, well, I mean, it makes me look a little bit bigger, more filled out.

Fonz:

Oh really? Yes, yes, a little bit more, you know, filled out, a little bit more robust, I should say. And so I'm like, this is very interesting, you know, as you're putting in these prompts. There still needs to be a lot of work done with this, but the fact is that people around the world, educators especially, are like, oh my gosh, this is the greatest thing in the world, because now we can do this quickly, now I'm able to do this in 20 seconds. But my biggest concern is: yes, it can do it in 20 seconds, but how accurate is it if it's just statistically predicting the next word?

Fonz:

The other thing is that the knowledge cutoff date is something that we brought up at that conference too, because there are a lot of applications that teachers are using, and that are being purchased for teachers, and in the terms of service it'll tell you the knowledge cutoff date is 2023. We are already well into 2025. So how accurate is this going to be if the data there stops at 2023, and the state standards have now been updated for a lot of our content areas, here in Texas at least?

Fonz:

So those are a lot of the things that I know many people don't look into, and maybe they just want to turn a blind eye because they're like, oh, the magic, this is the shiny object that's going to, you know, create my lesson for me and I'm done. And that's what really concerns me as well.

Fonz:

So, kind of touching on that a little bit: you know, like what we were talking about a little earlier, those that bring more people to the table. It's almost like we're comparing it to an AI race, and it's definitely a competition, without anybody really in control; it's just like all hands on deck, everybody just go, go, go. From your perception, in your experience and from the lens of the world that you live in, which is, you know, Data Girl and Friends and all the amazing people that you're connected with in your network, how do you envision AI maybe 10 years from now? Do you envision it as a force for good, or are there many more pitfalls coming that we should be worried about?

Angeline:

I try to be positive. I need to be positive, I need to believe that it's possible, that AI can be a force for good. It can, it can be used well. It doesn't look like it's necessarily going in that direction right now, exactly because of the massive problems we were discussing before with the image generators, the ones that create pornography. Kids are obviously, you know, interested in this, so they use it, they create it, and they don't understand the weight of what they're doing. So all sorts of things, and also these AI relationship chatbots, they're completely, you know, overwhelming and influencing, especially if you give them to young kids.

Angeline:

I was talking to, I think it was Megan, who said she met, you know, someone whose daughter had had her first relationship, with an abusive AI chatbot boyfriend, at 13. So this is a person whose first relationship that was. I mean, this is the influence on a whole world going forward. So there is a lot of reason to be negative about it. But, on the other hand, the world I'm trying to create is one where all of the tech connects us all over the whole world in a way that we've never been connected. They figured out how to make one product and sell it in the entire world. What we haven't figured out is the other side of it, right? So we can take this connection that tech gives us and push together for responsible tech.

Angeline:

Right, because, I mean, AI can help in really a lot of ways. It can help us to be very efficient, and it can help us to be more creative. It can help us to know each other better. And I need to call out Bill Schmarzo, because he's the first person I heard say this, that AI can help us to be more human. Some people hate that statement, some people like that statement. I like it, I think, because there are things that we can do as humans that AI, and I don't want to say probably, won't be able to do: be sentient, understand emotions, understand context, all of this real context, life experience.

Angeline:

And if you have AI, if you use AI, then you understand which parts of you are uniquely you, and kids can learn that from a younger age. Right, they actually have to; they should understand, who am I, what makes me unique, what kind of person am I? Because if you're using AI, and they are, and you don't know who you are, then you can more easily be influenced. This is something that kids can learn earlier, and then you're actually going stronger into the world because you know yourself better. So that can be a positive output of AI. But we have to be more intentional with it, and we have to kind of force that use, because, as you say, the tech companies obviously have billions in funding that they have to get a return on, so they're going to go for the money first.

Fonz:

Yeah, no, absolutely. So I want to turn the conversation over now to talking about the SHIELD Global Online Safety Conference. This is something that I saw posted recently on LinkedIn. I have already signed up for it as well, and just looking at the list of speakers, this is going to be an interesting conference. So can you tell us a little bit more about it? First of all, if you can give us some background, how did this conference idea come about? And then tell us a little bit about what the goal of this conference is and why people should sign up for it.

Angeline:

So the idea came about just after a year of being in this space. I met some amazing people, a lot of amazing people. This online safety and responsible AI community that somehow I have built on LinkedIn is so amazing, and it's full of, I call them, individual warriors, really passionate people. A lot of them are individuals or small companies, small organizations fighting to survive, making a real difference. And I was thinking, these people could actually achieve a lot more if they were working together, if they knew each other more. So I said, let's do a conference, and I talked to a few nonprofits that I work with: will you support me to do this conference? It was in November, and I felt it was urgent, so I said, in three months we're going to do this conference. And we talked about it with the partners, and also with Andy Briarcliff, who's been a lot of support as well; he's been in the space for a lot longer.

Angeline:

We asked, how are we going to define it? So we decided to be very general: we're going to call it an online safety conference. It has to be global, because, as I said before, we have to work together more. And we just put it out there to see what came back, what people are interested in, you know, who wants to talk, and we got this massive response, so many people, so much energy came back. I was just putting out the message: we're stronger together, we have to know each other. And it was like every day something would come in and I'd say, I can't believe this person is speaking, I can't believe this person wants to speak. Like, ever since I heard of the existence of AI data labelers, I always wanted to meet an AI data labeler or a content moderator, and there was a Facebook content moderator from South Africa who contacted us and wanted to speak, and I'm like, yes, that's exactly it. So all sorts of people from 16 countries and different ages contacted us, and people from all across the spectrum of different topics and experiences are going to talk.

Angeline:

And what's important is that we did not go for any influencers, like you said. Intentionally, we don't have, you know, the keynote speaker who is going to bring in the audience. No, we want to hear from the people who need to be heard, and it's quite unique. We also made the conference like 12 hours a day, so that people from all over the world can speak in their time zone. And we made it free, and fully online, because then the barriers to actually attending are gone, for, obviously, university students, people in poorer regions, or people like me; I'm actually in Europe, so it's Saturday afternoon here, and I would love to attend conferences in the US, but it's a really long and really expensive trip. So we're like, no, we want to have the voices of whoever wants to speak, and we want anybody to be able to listen. So that's how it came about.

Fonz:

Well, that's wonderful, and, you know, looking at the lineup, there are definitely some amazing, amazing speakers and people that I actually follow on LinkedIn as well. Like I said, I'm a follower of your work and everything that you're doing, because I love what you're doing and your mission, and so this is definitely something that's going to be worth the view, and, like I said, I've signed up for it. I'm really excited to just gain some more knowledge and different perspectives and different lenses from people in other countries that maybe are like-minded but are seeing things differently or may have a different perspective. And, like I said, for me it's always about looking for something different, something just to think about that may change my perception on many things. So this is an exciting opportunity for everybody to sign up, and I know the conference starts February 19th, so there's still time to sign up, correct?

Angeline:

You can sign up, yes, absolutely, yeah.

Fonz:

Perfect, excellent. So for all our audience members that are checking this episode out, please make sure that you check out the link in the show notes. It'll be there, and we'll definitely be posting it on LinkedIn and all our socials too, to make sure that you sign up for this, because this will be a great event for you to learn more and see things from different perspectives and different lenses, and, of course, like I said, it's only going to nurture our growth within this space and help us see how, as a collective, we can improve this space as well. So thank you for sharing that. Now, Angeline, before we wrap up, I just want to ask you, as far as your projects go, what are some of the things that are in store for the future, maybe for Data Girl and Friends?

Angeline:

Well, Data Girl and Friends, as I said, I create the content. I realized early on that sales is something that, if I were to actually try to sell my content, then I wouldn't have the creative juices anymore. So I'm really building out partnerships with amazing organizations who maybe have the ideas and the knowledge but not the medium to bring it about, or they have schools, classes and parents who could use the content. So that's what I'm building out with Data Girl; that was the original idea for the conference, to help that, and it has actually grown into much more, luckily. And also, I'm working on some online courses. Basically, the whole concept, which we didn't really speak about, is that I think kids and teens should have clear knowledge, something like, you can't drive a car without a driver's license, so you actually shouldn't be using a device without basic safety, just really basic, like, you've learned this, you've heard this at least once. So I'm working on some online courses on this that will be ready soon, and I'm really excited for when those are ready.

Angeline:

And also, the SHIELD conference is actually just a kickoff. We are going to do a yearly conference, but it's really intended to build collaboration. We're also going to do working groups and smaller meetups and conferences; the idea is that it will be the platform to help people come together. So it's another reason to sign up, even if you can't be there, because I know I picked a week when a lot of people are on vacation; I didn't realize, because when I left the US there was no vacation in February. So, yeah, just sign up, you can listen to the recordings afterwards and be a part of the movement and the community going forward, because there will be other meetups and other more specialized conferences.

Fonz:

Wow, well, that is fantastic, Angeline, and thank you so much for joining me today and taking a little bit of time out of your day to really share, you know, your passion, your mission, your vision, and for getting people excited about the SHIELD Conference as well. So, again, for all our audience members, make sure that you check out the episode notes, because all of the links will be there, and the conference is coming up really quick. It'll be February 19th, so if you're watching this, please make sure you click on that link, sign up and check out all the amazing speakers, just to help us learn more. And obviously, now that we're hearing from Angeline, this is a community that's going to continue to grow. Maybe in the near future, around your area, there will be a meetup or there'll be a conference, but it's just something great to be part of, somewhere you can find like-minded individuals and folks coming together, like I mentioned, just to continue to nurture these conversations and continue to grow together. So, Angeline, thank you so much.

Fonz:

I really appreciate your time being here. But before we wrap up, we always end the show with our final three questions, and I know I always give my guests those ahead of time, so hopefully you're ready to answer some of these questions or have had just a little bit of time to think about them. So question number one: every superhero has a weakness. For example, for Superman, kryptonite kind of weakened him a little bit, or it was a pain point for him. So I want to ask you, Angeline, in the current state of, I guess we'll say, AI, or it could be education, it doesn't matter, what would you say is your current kryptonite?

Angeline:

My current kryptonite: I'm a creator, and I am not good at selling my creations, in the sense of selling, even getting them out there and approaching people with them. That is my biggest pain point, because, as you say, I have a lot of ideas and a lot of creative juices, and I create things that a lot of people say are nice, but I'm not good at getting them out there, which is a big problem, obviously. So I think it gets out there slowly through other people, but it could be a lot faster and more efficient and more useful if I were better at that.

Fonz:

There you go, all right, that's a perfectly great answer. Just like we were talking about earlier, maybe I guess we'd say a little bit of that imposter syndrome, because I suffer from it too as well. You know, having great ideas, but just getting them out there seems a little bit difficult many times. But yeah, that's a great answer. Thank you so much for that. All right, so here's question number two: if you could have a billboard with anything on it, what would it be and why?

Angeline:

The billboard would be: every individual can make change, but the individuals need to work together. In the sense that it's an individual thing, but it's also a standing together. This is what it would be, and the why is simply because we often feel powerless, for all sorts of reasons. I mean, there are the tech billionaires; recently they were all standing behind the American president when he was being sworn in, and then, you know, jetting off to Europe and managing to get regulations changed overnight. You can feel powerless, yeah, but we're not powerless. And if you look back in history, this tech world that has all sorts of dangers is new. It's new; it's been around a really short time. Not long ago our world was different, and we can insist, no, it doesn't have to go in that direction, and there are going to be individuals making individual decisions that can make that happen. But, that said, if you find other individuals who are on the same path, then you have more inner power and inner strength.

Fonz:

Great answer. Thank you, Angeline. And the last question to wrap up our wonderful conversation: if you could trade places with one person for a single day, and it could be anybody, who would it be and why?

Angeline:

I don't know if I would really want to do this to myself, but I would like to put myself in the middle of those Silicon Valley conversations for one day and kind of figure it out. Maybe some of these things that I hear them say would make more sense if I spent a day there, understanding what they were doing, how they were spending their day. I probably wouldn't understand many of their conversations, but at least I'd kind of get a feel for it. So I think that would be it, yeah, just to really understand better this whole other side of the AI tech universe.

Fonz:

Very cool, that's a great answer. Well, Angeline, thank you so much again for spending a little bit of time with me on this wonderful day, and thank you so much for all you shared. And for our audience members, like I mentioned, there's a conference coming up, the SHIELD Conference, so please make sure that you check our show notes for that link so you can go ahead and sign up, and also find these wonderful speakers on LinkedIn as well. Follow them on socials, because they are putting out some amazing things that really help you learn more, but also make you stop and think, and that's the wonderful part about it.

Fonz:

It's not all about, you know, going fast and breaking things. You can go fast, but then also take a pause and really reflect on some of those things where maybe we're already coming in with our own perceptions, and this would be a great way to continue to learn. Don't forget to check out our other 312 episodes, where I promise that you will find something just for you that you can sprinkle onto what you are already doing great. So make sure you go over to our website at myedtech.life and, if you haven't done so yet, make sure you follow us on all socials. That way you can keep in touch with us, but also see all the wonderful guests that are coming on through the show, and that way you can go ahead and get a little glimpse of the amazing, amazing work that is being done. So thank you so much and, as always, my friends, don't forget: stay techie. Thank you.