Follow The Brand Podcast with Host Grant McGaugh

S05 Technology Innovation EP 25: Future of AI, Cybersecurity, and Inclusion with Tech Pioneer Felicita 'Fifi' Sandoval

January 14, 2024 | Season 5, Episode 25
Grant McGaugh, CEO, 5 STAR BDM

When the worlds of AI and cybersecurity entwine with the essence of human compassion and diversity, remarkable stories like Felicita "Fifi" Sandoval's emerge. In today's episode, we're weaving through Fifi's vibrant tapestry of experiences, seeing how her shift from finance to tech pioneer is reshaping her life and the landscape of ethical AI and cybersecurity. Her tale is a beacon, highlighting the importance of fostering inclusive technologies and the power of a diverse workforce in shaping a responsible digital future. With Fifi's guidance, we also explore mentorship's vital role, proving how one's dedication to uplifting others can ignite a chain reaction of innovation and progress.

Hold onto your hats as we navigate the intricate labyrinth of AI implementation, where every step is a dance between benefit and risk. Together with Fifi, we journey through the maze of digital hygiene, emphasizing the urgent need for both industry professionals and casual users to armor up against cyber threats. We cast a light on critical frameworks such as NIST and ISO, contemplating their influence in carving out a path for AI's ethical evolution. Moreover, Fifi shares her insights on the EU's pioneering role in AI regulation and why it's crucial to distill wisdom from our technological past to responsibly steward AI's future.

Wrap up your earbuds tightly; we're about to dive into the story of Fifi's first Capture the Flag event, where she wrestled with self-doubt only to emerge victorious on the other side. It's a narrative studded with perseverance, academic dedication, and the blossoming of self-confidence that underscores the importance of stepping out of your comfort zone. Fifi's journey serves as a testament to the transformative power of community and mentorship in the tech world. So, take this opportunity to connect with a trailblazer who is not just navigating the cybersecurity frontlines but also laying down the welcome mat for the next wave of tech enthusiasts.

Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest marketing trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates from us, be sure to follow us at 5starbdm.com. See you next time on Follow The Brand!

Speaker 1:

Welcome to another episode of Follow the Brand.

Speaker 1:

I am your host, Grant McGaugh, CEO of 5 STAR BDM, a 5-Star Personal Branding and Business Development Company.

Speaker 1:

I want to take you on a journey that takes another deep dive into the world of personal branding and business development, using compelling personal stories, business conversations, and tips to improve your personal brand. By listening to the Follow the Brand podcast series, you will be able to differentiate yourself from the competition and build trust with prospective clients and employers. You never get a second chance to make a first impression. Make it one that will set you apart, build trust, and reflect who you are. Developing your 5-Star Personal Brand is a great way to demonstrate your skills and knowledge. If you have any questions for me or my guests, please email me at grantmcgaugh@5starbdm.com. That's B for Brand, D for Development, M for Masters. Now let's begin with our next 5-Star episode on Follow the Brand.

Speaker 1:

Welcome to the Follow the Brand podcast, where your journey to personal and professional excellence begins. I'm your host, Grant McGaugh, the CEO of 5 STAR BDM, and your guide on this adventure of innovation in business and technology. Today, I would like to introduce a guest who is not just shaping the future of AI, but is also redefining the landscape of cybersecurity. Meet Felicita 'Fifi' Sandoval, a trailblazer whose name is synonymous with innovation and ethical technology. Fifi's journey is one of determination and growth. From a foundation in finance to becoming a leader in AI, her path has been nothing short of inspirational. Fifi's work stands at the fascinating intersection of AI and cybersecurity. She has been instrumental in developing AI-driven security systems that are not just effective but also ethically sound and inclusive. Her commitment to using technology for the greater good is a beacon of hope in an industry often clouded by concerns of data privacy and ethical use. In today's episode, we will delve into how Fifi's unique perspective as a fraud investigator has empowered her to design AI systems that are several steps ahead of cyber threats. We will explore the nuances of AI ethics, the challenges of ensuring unbiased algorithms, and the importance of diversity in tech. Fifi is also a passionate advocate for mentorship, firmly believing in nurturing the next generation of tech innovators.

Speaker 1:

Her insights today are not just for tech enthusiasts, but for anyone who believes in the power of technology to change the world for the better.

Speaker 1:

So let's welcome Felicita 'Fifi' Sandoval to the Follow the Brand Podcast, where we are building a five-star brand that you can follow.

Speaker 1:

Hello everybody, and welcome to the Follow the Brand Podcast. I'm your host, Grant McGaugh, and I am here talking to what I want to see more of, and that's a young professional, someone who's involved in technology. She's been wonderful. She's going after her PhD now, and she's got a 4.0, a perfect score, in her previous course, which we're going to talk about. I want to see more people participate. We're going to demystify some things in the cybersecurity realm. I'm a big proponent of artificial intelligence. I happened to actually meet her at an AI forum where we were talking about the legal ramifications and ethics of AI and how this is all going to move forward, talking about compliance, talking about regulation. So I wanted to see who she is, and I said, hey, I want you on the show, because you represent a part of our demographic society that needs to be more involved in our technology platforms, because this is the way going forward, as we do more and more interactions, whether it's business or personal as well as professional. So I'd like you to introduce yourself, if you don't mind.

Speaker 2:

Thank you. Thank you for everything you just said. I love it. Well, my name is Felicita Sandoval, you can call me Fifi. I am a cybersecurity GRC analyst at Libram, and what I do is governance, risk, and compliance. So that's what I do, and it's in the realm of cybersecurity. And, well, I actually didn't start out in technology before I got into cybersecurity.

Speaker 2:

I started in finance. I was a teller at a bank. I was 19 years old, and I moved up to a personal banker. Then, after seven years in financial institutions, I decided that I wanted to become an investigator, and I started working for the government as a fraud investigator, and also for a few banks as an anti-money laundering analyst, and that was fun. But my end goal was to be an information security auditor, because at that time we didn't call it cybersecurity; it was just information security.

Speaker 2:

And then cybersecurity kind of came along, that word, and everybody was talking about it. It was the craze. But you make a few good points. At that time I was a little bit afraid, because I am a woman, you know, a woman of color, Hispanic, so I thought, I don't think I have a chance. I didn't really know any women in cybersecurity, let alone a woman of color. So it took a lot of courage for me to say, you know what, I'm going to do it. I'm going to go into a master's program, I'm going to get my master's in cybersecurity, and I'm going to make it. I'll make my own opportunity if I have to, and that's what I did. I finally was able to become a cybersecurity GRC analyst at Lightgram after I got my master's degree, and now I'm in my doctorate program.

Speaker 1:

Well, you make it sound so simple. I know it wasn't, because first, I'm impressed with your educational background and your attainment and understanding of all this. I think what intimidates certain people is they're like, wow, I need to have all this experience in mathematics, I need to be into science and all this real techy stuff, and I'm just, oh my God, there's no way I'm going to be able to pass and become a cybersecurity professional. Just help us demystify some of these things and help us understand what you learned as you went through your program.

Speaker 2:

Well, you're completely right. At the beginning, before I contemplated getting in, even I thought, I don't think I'm going to be able to make it, because, you know, in my head I was like, I'm not good at math, I'm not good at science, I am not going to be able to be on par with all the intelligent people in that community. But it was the opposite. I think that as you get more mature, you start realizing that you have a lot of potential. You just have to be confident in yourself. But it is a hard road. It is hard, and you have to, I always call it,

Speaker 2:

you have to meet organizations in the middle. If you want to get into cybersecurity, or the tech industry, or actually any industry, you have to do your work first. You have to research, you have to study, you have to take courses, you have to believe in yourself, and you really have to fight hard, especially if you're a person of color, or a different religion, or a different race. So it is very important to work hard for that. And then networking, too, is very important, to communicate with people. But after you go through all the obstacles and learning lessons and teaching yourself, as well as going to school, because school is only one resource, you really need many resources to make it. And that's what I did.

Speaker 2:

I went to school, but I also networked. I also took my own courses online. I also researched, and that's how I was able to make it. But just feel confident, and don't think that because you don't know something, or you were not good at a subject like math, you're not going to be able to make it. That's just going to hold you back, and it's going to prevent you from actually going towards your goals.

Speaker 1:

Wow, man, well stated. Well stated. So we've got to get involved, because everybody brings a wealth of experiences to the table. Your background, your understanding, your intellect is what is needed in the tech space, and now we're morphing into what we call artificial intelligence. So, and I coined this, I've been in information technology for 20, 25 years, and now we're doing what I would call intelligent technology. Information technology is almost like very reactive, very reactive, whereas intelligent technology is very proactive. It's engaging you, your human intellect. That's intelligent technology, and that leads to insights.

Speaker 1:

So we look at those three I's, and we were both, again, at that particular forum, and what I liked about it is that it started to really look at that. Our laws and some of the things we have in society, the pillars of how we do things, are starting to change and morph into something completely different. Even when we talk about finance, we talk about operations and we talk about how we get things done, that technological layer is becoming bigger and bigger and bigger. Talk to me more about what you're understanding. What are you seeing in the landscape and the future as you move forward?

Speaker 2:

It is being integrated into every part of society, you know, organizations, at home, our education. And I see a lot of good things that are going to happen, and also concerns. So the good things, if we're going to talk about companies, are that you're going to have automation. You're going to be able to save time by using some of these tools, and you're able to use that time to create more products, to actually have more revenue, in order for you to actually hire more people.

Speaker 2:

But the concerns are, you know, any vulnerabilities that are going to be acquired by using these technologies, and the data that we're going to be acquiring, like big data. So you're going to have large scales of data, and that also brings vulnerability. And I tie that down to society, because, you know, when you're signing up on a website, or when you actually want to have access to these tools, like, I'm just going to mention ChatGPT because it's the most famous one, you are giving your data away. So now I'll take that back to what I said about home, where, if I'm here and I'm like, I'm going to use ChatGPT to help me create a presentation, I'm putting all this personal information there, and that also brings issues. Because, as a professional, you have to think, is my data being stored? Where is it stored? Who is this going to be shared with?

Speaker 2:

But then, as a regular person, nothing to do with work, you let that happen because you're not thinking about it. Like, for example, my mom. My mom is not going to think about any of those questions. She's just going to use it, create her little TikTok presentation there, right, and she just gave away a bunch of her information. And so I see concerns, but I also see that, as professionals like you and I in the industry, in this space, we need to help people and society by creating policies, frameworks, guidelines, not only for organizations but for people, because AI is everywhere.

Speaker 1:

What you said is so important.

Speaker 1:

I remember when social media really got hot, you know, like 10, 14 years ago, and people were definitely putting a lot of what I would call private information out there, not realizing this is a public forum and it's out there for years, not just that day that you felt a certain way and put this information out. A lot of people still do that today. But I think there's a lot more awareness and understanding now that this is a public platform. Understand that it's public knowledge, not private knowledge, you know, unless you're in a private group and things like that.

Speaker 1:

So the same thing goes for utilizing these generative AI tools, whether it's ChatGPT, whether it's Bard, Claude, and some others that are out there: you are putting information out there into the public. So, with that understanding, are you okay with what you're typing into that application? If you're okay with it, fine. If it's very proprietary, you need to look at those things. And I know you understand digital hygiene and why that's so important. Help our audience understand some of those guidelines, some of the things that you look for in your world of regulation and compliance and risk and things of that nature.

Speaker 2:

Yes. So, you know, one of the most important sets of guidelines, the frameworks that we all go to and we're familiar with, is the NIST framework. They have so many different functions and areas that you can go into, cloud, AI, cybersecurity, just so many things that you can actually go and take from that, and the same thing with following ISO standards as well. There are many different regulations as well that you can follow, like GDPR and CCPA, that can give you a little bit of guidance. But when it comes to artificial intelligence, I believe there is a lack. So, yes, we do have a lot of policies on information security, and we can mention them all, Google them and you can find them, but AI is not mature, it's premature, I think that's the word, and that's what we're lacking, and we have to start creating those policies or guidelines to be able to follow them. Like, for example, that's something that we're doing right now in a lot of different organizations because of the lack of these policies: we're creating groups, AI groups or committees, within the organization that are able to assess the AI services or tools that we want to use within the organization. And within that, as you review those products, then the team creates conditions, like, yes, you can use this tool, we're going to approve this.

Speaker 2:

However, you're not able to place sensitive data or intellectual property in it. So that is something that we have been building in different organizations, just because of the lack of guidelines. And even if you follow regulations, and I know regulations are not guidelines, there's a lot of good information there, like GDPR, a lot of good information, especially when it comes to data or privacy, and those are also good to follow as well. One that I actually like a lot, because they're so advanced, I believe, is the EU. They have an AI Act, and they're working on regulations as well, and I think that they are advancing. I do enjoy what they have put on the table for us in regards to AI and best practices, as we call them. So I do recommend Googling EU privacy, and you will find a lot of information on that. But right now, they need us. They need us to create them.
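To make the kind of conditions Fifi describes concrete, here is a minimal sketch, in Python, of a pre-submission check an internal AI review committee might require before a prompt goes out to an external generative AI tool. The pattern names, regexes, and example prompt are illustrative assumptions for this sketch, not any organization's actual policy or a specific product's API.

```python
import re

# Hypothetical illustration only: a simple screen that flags obvious sensitive
# data before a prompt is sent to an external AI service. Patterns and names
# are assumptions for the sketch, not a real compliance rule set.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "confidential_label": re.compile(r"\b(confidential|internal only|trade secret)\b", re.I),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize this report. Contact jane.doe@example.com, SSN 123-45-6789."
violations = screen_prompt(prompt)
if violations:
    print("Blocked, remove before using the external tool:", ", ".join(violations))
else:
    print("No obvious sensitive data detected; proceed under the approved conditions.")
```

A check like this is only a guardrail, not a guarantee; the committee review and usage conditions Fifi mentions still have to sit on top of it.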

Speaker 1:

Absolutely, and we have to be mature about it, because we have gone through the creation of the internet, we've gone through mobile computing, cloud computing, social media. We've got a lot of good experience now. So AI should not be that new. It's a great tool, but we have to know how to utilize it, just like anything else. If you've got a vehicle, guess what, you've got to learn how to drive, and then you have responsibilities when you get that driver's license. I'm aptly reminded that driving is a privilege, so you have to understand it's a privilege. So these things we're going to get to, and I think we'll probably get there a lot quicker than we have in the past, but it's about understanding what you are doing in that space. You bring up another good point: you are engaged in a lot of groups and associations, you're gaining knowledge and experience, and networking. Help our younger technologists understand what they need to do to begin to grow their particular brand and become even more well known and get more exposure for their expertise.

Speaker 2:

Yes. So I love that area, because before, I actually was one of them, I didn't know where to go or how to start my career. The first thing I suggest is to network. Networking is so important, and please, my advice, when you go and you start networking, don't be afraid. Don't be afraid to put yourself out there, to approach executives, to approach professionals who you think are, you know, you're like, oh my God, I'm not going to approach this person. They are people.

Speaker 1:

This is something that I learned from my mentor: they're people.

Speaker 2:

Just go to them, talk to them, engage in conversation. So do that. But then also research. And whenever you hear something, if you're interested in cybersecurity and you hear a term, something that you do not understand, research it. You know, go to Google, read articles about it, and, you know what, now ChatGPT exists. Go to ChatGPT and ask it, what is, I don't know, a machine learning model? What is this? What is GRC, and what is GDPR? You have to go ahead and do this for yourself so you start getting acquainted and you're able to engage in conversations when you're together with these professionals.

Speaker 2:

Also, another piece of advice that I want to give is to be confident. I know it's hard sometimes because, if you're a young technologist, you feel like it's too much, like, how am I going to talk to all these professionals surrounding me? It's OK. Nobody is all-knowing. You're not, you know.

Speaker 2:

So just approach them and ask questions, be curious, listen. Like, you have to listen. If you don't listen, that's going to be a huge problem, and it's going to take you a long time to reach your goal. So, listening, asking questions. And the last thing I'm going to say is, there's nothing better than asking them questions, because once you start asking those questions that you think are dumb, those are the best, without a doubt, because then you get the response and you're like, oh, OK, got it. So now I know exactly what I need to know. You're checking yourself, you're checking your intelligence at that moment, and you'll see, after you ask all of those questions, dumb or not, you're going to see progress in your intelligence and your intellect in the subject that you're learning, and people are going to gravitate towards you, and then they're going to treat you like an expert as well.

Speaker 1:

Ready to elevate your brand with five-star impact? Welcome to the Follow Brand podcast, your gateway to exceptional personal growth and innovative business strategies. Join me as I unveil the insider strategies of industry pioneers and branding experts. Discover how to supercharge your business development, harness the power of AI for growth, and sculpt a personal brand that stands out in the crowd. Transform ambition into achievement. Explore more at 5starbdm.com for a wealth of resources. Ignite your journey with our brand blueprint and begin crafting your standout five-star future today.

Speaker 1:

Exactly. You're on par with them, because everybody has certain experiences and understanding and intellect, and insights that no one else has, and that's why it's so important to share. This is why we're on this platform today, because I love sharing the stories of others. This is how I learn, this is how I get educated. I'm like, wow, I didn't look at it that way. That's interesting. And I know, when we were at that forum and we were talking amongst these different groups, they were not just engineers, they were lawyers, right? So they're big into, you know, the regulations and compliance that don't exist yet, but they really wanted to frame the story. I want to get your take. What did you learn from that particular forum? What really resonated with you, if you'd like to talk to us about it?

Speaker 2:

Oh my God, I remember we had fun in that meeting. It was awesome. But my take was that I feel like we're not ready to have this technology, because we don't even have a regulation on AI. So I feel like there's a lot of work to do. But then, you have to start somewhere, and that's what's happening. So I'm glad that we were there with lawyers, to understand more of the legal battles that are happening right now, or the issues that are coming up with intellectual property. For example, those questions that they were asking in the comments about who owns what the artificial intelligence tool is giving you, like when it gives you the output.

Speaker 2:

Who owns that? You know, and you could argue, well, the company, right, OpenAI, but it's like, no, they got that data from, you know, the public. So who really owns this work? Or who really owns the image that you're creating when you're painting it? And should you have a statement saying, hey, I used ChatGPT as a reference for this work? So, yeah, I took that very seriously, because I know that's where we're heading. And then, on top of that, they were talking about, further into the future, whether we have to give rights to AI, and I imagine what they were talking about was more like, you now have an artificial intelligence that is far advanced and is able to have some sort of emotions, or maybe mimic emotions, like humans, and it's going to want to be part of society as well, in the sense of having rights. That is wild. That is wild.

Speaker 1:

I don't know what you think about that, because, you know, I do have, I would say, my opinion on that, and that is, it's just like saying my car has rights, like it has emotions. You know, I don't think it's sentient at all. An AI, yeah, it can mimic a lot of things. And I had the same conversation.

Speaker 1:

Actually, I commented on someone's post that they put out there, and they were talking about, you know, the same thing: what is sentient, what is intelligent, what is consciousness? And I said, look, when I look in the mirror, look in my own mirror, and see my own reflection, and I'm talking, is that reflection intelligent? Is it conscious? Is it sentient? Is it alive? It's a mirror. No, it is not. I am the living reality. That is just a reflection of my own intellect. So, you know, generative AI is just a reflection of, somewhat, the totality of human intellect. Is it alive? Is it conscious? Does it need actual rights? I'm going to say no. It's a tool, it's a machine. I think we're a little bit overboard with that one. That's my opinion.

Speaker 2:

I agree, I agree, 100%. I think the only way that will change is if we do reach a point of having this generative AI achieve sentience, feel the same as humans, have consciousness, and I would love to see how that's achieved. Then that is different, and then we'll have to start thinking like, you know what, yes, my robot now is named Linda and I will have to give her rights, because, you know, I love my robot Linda. So, yeah, it's hard, and we'll get there when we get there. I don't doubt anything anymore. I feel like we can do anything.

Speaker 1:

We can probably go round and round in that discussion. But, to your point, there are certain rights that we already have, intellectual property, that's right. You can't go around damaging someone else's property. If you damage it, yeah, there are already laws for that. But then there's ownership, to your point earlier. Well, who owns the intellectual property of artificial intelligence? I'm not sure. So that's a different conversation.

Speaker 1:

But to say something is living or something is not, I think we need to come to a hard, hard no on that. For me, that's my opinion. But we can definitely debate around the legalities and regulations and everything else of how some of these intelligences are used, the emotional qualities. I always say that it is a mimicking of reality, not reality itself. So we need to really draw a hard line on that. But I want to continue on, because I want our communities to really engage with the tech. Talk to us about what happens. You're in the world of data, data privacy, data governance. You see the data that's out there. If you don't have a certain amount of data, or data that's not coming from a certain group, does that skew the data or not, and what's your opinion on that?

Speaker 2:

Yes, 100 percent, it would make a big impact. I'll give you an example, something that happened using artificial intelligence for court cases. So the judges, when you want to do sentencing, they were using AI, and what happened was that the AI was sentencing more people of color to longer terms than white people. So I started digging into what was going on, because a lot of people were arguing, well, there's bias, of course, and that has to do with the engineers, and maybe we have to have more diversity, which is 100 percent true. We have to have diversity in the engineering room. However, I thought there had to be something more than that. I spoke to a machine learning engineer, very smart, and I asked him, hey, when you're creating these models, what testing data do you use? He said, well, in that case, the court case, they used testing data from the 70s, from Alabama. Of course it's going to be biased. There was a lot of racism, so, of course, the records showed more Black males in jail than white males. So when they had acquired that testing data, they didn't QA it, they just used it, and they were excited about creating this tool, and that's what happened. So I believe that we have to put some QA controls in place for when we are creating these models and we have testing data, and there has to be someone assigned to that, so, human oversight, in order for us to be able to lower our biases and have these tools function more properly.
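As a rough illustration of the QA control and human oversight Fifi is calling for, here is a minimal Python sketch that audits historical training records for skewed outcomes by group before anyone builds a model on them. The field names, toy records, and the 0.2 threshold are assumptions made for the example, not part of any real pipeline.

```python
from collections import Counter

# Hypothetical illustration only: flag historical training data whose outcomes
# are heavily skewed by group, so a human reviews it before model training.
records = [
    {"group": "A", "sentenced": 1}, {"group": "A", "sentenced": 1},
    {"group": "A", "sentenced": 1}, {"group": "B", "sentenced": 0},
    {"group": "B", "sentenced": 1}, {"group": "B", "sentenced": 0},
]

def outcome_rate_by_group(rows):
    """Return the positive-outcome rate for each group in the data."""
    totals, positives = Counter(), Counter()
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += row["sentenced"]
    return {group: positives[group] / totals[group] for group in totals}

rates = outcome_rate_by_group(records)
print(rates)  # here roughly {'A': 1.0, 'B': 0.33}

# A simple human-oversight gate: route the dataset to a reviewer if rates diverge.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Flag for review: historical outcomes are heavily skewed by group.")
```

The point is not the arithmetic but the gate: skewed source data, like the 1970s records Fifi describes, gets surfaced to a person instead of flowing silently into the model.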

Speaker 2:

In my opinion, in regards to that court case, I don't think courts should even be using that right now. I think we still need to work on it, on making sure that we have rich data. And when you are talking about data and how it's skewed, you have to also think that data is not easy to get, well, it's easy to get, but not for us; for companies it is, because if you go to Google, they're able to get your data, you're adding it to the Google search, right? If you go to ChatGPT, you're entering a lot of data, so it's easier for them. But in order for them to get rich data, like personal information, PII kind of thing, it's a little bit more complicated, because we have regulations.

Speaker 1:

We have the GDPR. You have to abide by the CCPA or the.

Speaker 2:

Virginia. There's also a Virginia privacy law, so you have to abide by them. HIPAA as well, right?

Speaker 2:

So it makes it a little bit difficult in terms of what they can obtain, and that's why a lot of these tools are still not as mature. But what I have read about OpenAI is that what they're doing now is business with a lot of organizations, and they exchange services: we'll do the service, and in return, you will share your data, so that it can be fed into these AI models, and they're going to be even more functional.

Speaker 1:

Yeah, no, that's true. And there are certain things around bad data and good data that I don't think a lot of people realize. There's a lot of bad data, I mean, it's just incomplete, not factual, not true. The machines don't know that. They just take in a lot of data, because it's predictive analytics, and it's ones and zeros. There's no real thought behind whether it's reality or not, not at this point, and we've got a lot of work to do in that respect. Now, there's a lot of good data out there too. That's why you can tell right now.

Speaker 1:

I think AI for simple searches and whatnot will usually give you 80% to 90% accuracy. Still, 20% is not accurate, and if you're doing it for medical research, that's big, or if you're doing it for legal research, well, 20% being wrong, that's huge, like you were just saying. And if you're using that as the, quote, unquote, benchmark for how we're going to do business or hand down sentences and things of that nature, that's something to be seriously concerned about. So always step back and understand that the data, or the output that comes back, is under consideration. It's not something that's set in stone. Look at it, verify it, massage it. Maybe it's a template of knowledge, not knowledge itself. It's just a regurgitation. That's what I'm saying: we're still in that finite world of moving from information technology to intelligent technology, and that's a transition, and we still have a lot of information, or data, from the information age that's just not good. But as it moves into the intelligent technology age, it'll get better and better and better.

Speaker 2:

Yes, I absolutely agree, and I see it also in, for example, the medical industry, when you look at how they use AI for certain functions. I feel like it's still not completely ready, because if you think about the data that health industries acquire to be able to use AI for diagnosing certain autoimmune disorders or diseases, the data that they get is data from already sick patients, and in order for you to have good data, you need to have data from healthy patients as well, and that's how you get results. But with only sick patients, if you go in for a checkup and you're using an AI tool that someone created, for a cough it's probably going to tell you, oh yeah, you have three days left to live, and you're like, what's going on? It's because it's not comparing it, it's not enriching it with data from healthy people as well, to be able to make a good determination. You probably just have a cough, that's it. So there's a lot to work on.
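To see why training only on sick patients breaks a diagnostic tool, here is a toy Python sketch of a "model" that just predicts the most common label it saw in training. The labels and counts are made up for the illustration; it is not how a real diagnostic system is built, but it shows how one-sided data forces one-sided answers.

```python
from collections import Counter

# Hypothetical illustration only: a toy "diagnoser" that predicts whatever label
# was most common in its training data, ignoring the patient's actual features.
def train_majority_model(labels):
    """Return a predictor that always outputs the most frequent training label."""
    majority = Counter(labels).most_common(1)[0][0]
    return lambda _features: majority

sick_only_model = train_majority_model(["sick"] * 500)                    # no healthy records at all
balanced_model = train_majority_model(["sick"] * 50 + ["healthy"] * 450)  # mostly healthy checkups

print(sick_only_model({"symptom": "cough"}))  # 'sick', no matter what you report
print(balanced_model({"symptom": "cough"}))   # 'healthy', because the base rate now exists
```

A real model looks at features, not just base rates, but the failure mode is the same one Fifi describes: with no healthy patients in the data, every cough looks like a catastrophe.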

Speaker 1:

Yeah, complete data sets, that's so important. We're getting there. This is what cloud computing has allowed us to do. They really merge a lot of different, that's what they call it, large language models, a lot of big data together, to get to what we call generative AI. And then in the future it'll be artificial general intelligence, and that's what we'll get.

Speaker 1:

What does that mean, general intelligence? You have to understand, that's like, all right, you have the intelligence of a five-year-old from a computer standpoint. So it's only so mature, and that's what we're talking about, the maturity level. We're going to get there, and we have to understand where we are right now and how to utilize these tools and everything else. Just one more question I want to get to as we close, and that question is really for you. As you look back now at where you're at, and where you were before you started this journey into cybersecurity, technology, deep technology, deep thinking, all of these things that you're doing now, if you could go back and talk to yourself, that 19-year-old person, and advise them about the journey they're about to embark on, what would you say?

Speaker 2:

I would just say, do not be afraid. Do not be afraid, because you are as smart as you can be. I remember a lot of my fears were, I'm not smart, I'm a woman, a woman of color, nobody's going to hear what I have to say or what I have to bring to the table. That is complete ignorance as a young person, because it's not true. There are a lot of people that want to hear what I have to say, but no one can guess what you're thinking. No one can guess anything about you. You have to show it to the world, you have to really be confident in yourself and say, hey, I am going to do this. I don't know anything right now, but I know I'm going to make it. So I would say that to myself, and I would also tell myself to dare to do even the impossible. If you don't know how to code, and you look at all of those different programming languages and you're like, I'm going crazy, I don't think I can do this, do it. Go take a course, get together with a friend, go through a tutorial, just do it. You'll see. One thing that I did before I got into cybersecurity is that I dared myself to go to a CTF.

Speaker 2:

A Capture the Flag. I remember I was like, before I even get into cybersecurity, I want to try this. I was actually working, and I had gotten out of work, and there was a local CTF event. I saw it on Eventbrite, and so I went in. I took my laptop, and there were all these people, you know, they were hacking, excited, a bunch of cables. I'm like, okay, I'm nervous now, intimidated.

Speaker 2:

And I sat down, and I remember the first prompt to get your flag. You were on the terminal, and you just had to go into your desktop and get a file that was created there, and there was your flag. I didn't know how to do that. I didn't know how to navigate the command line, and I was like, oh my God, this is so difficult. And I told myself, you know what? No, I'm going to do it, I'm going to figure it out. So I researched on Google, and that first one that I got, when I did it, I felt accomplished. I was like, oh my God, I'm a hacker.

Speaker 1:

Of course not.

Speaker 2:

I'm a hacker, I made it, I'm so good. And it was so funny. And then, I remember, the Capture the Flag was 15 different questions to get to the end. I only did three. I was never able to get past those first three, with no experience, but I went home like a winner. I was like, okay, I'm going to get into this, because I'm a hacker.

Speaker 2:

I told everybody. I told my mom, my brothers, and it was hilarious. But that's when I started believing in myself and saying, you know what, I do have to dare. If you don't dare to do it, then you're going to be scared your whole life, and people around you do not know. They don't know, they cannot guess how you feel. You have to express it. Hey, Grant, I need you to help me with this, or, do you think you can give me a little bit of an overview of that? Trust me, a lot of people are more good than bad, and they will help you. They will help you. Don't be afraid.
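For readers curious about that first challenge, the task Fifi describes, finding and reading a flag file left on the desktop, looks roughly like the minimal Python sketch below. The file name, location, and helper function are assumptions for the illustration; at the event itself it was done by hand, navigating the terminal.

```python
from pathlib import Path

# Hypothetical illustration only: locate a flag file on the desktop and print
# its contents, roughly what the beginner terminal challenge asked for.
def find_flag(directory: Path, pattern: str = "flag*") -> str | None:
    """Return the contents of the first matching flag file, if any."""
    for candidate in directory.glob(pattern):
        if candidate.is_file():
            return candidate.read_text().strip()
    return None

desktop = Path.home() / "Desktop"
flag = find_flag(desktop)
print(flag if flag else "No flag file found on the desktop.")
```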

Speaker 1:

Oh man, I couldn't have said that better. That was a great story. That's how you got that confidence and belief in yourself. Even before you take the classes and the courses, you just say, let me make sure I have what it takes to get to that next level. And I would say, that first flag, boy, man, look at you now, right? You know what I mean, you actually earned that GPA. Tell us, tell the audience, what your GPA was.

Speaker 2:

My GPA at that time was a 3.6. 3.8, I'm sorry, 3.8. Wow, yeah, I mean so that's pretty high.

Speaker 1:

Yeah, I mean, she actually knew what she was doing. That's pretty good. I really, really appreciate you being on the show. This has been wonderful. Let the audience know how to contact you on LinkedIn.

Speaker 2:

Yes, so you can go to LinkedIn and you can find me, Felicita Sandoval, and you can also find me on Instagram, the AI researcher, as well. So you can find me there, and if you have any questions, feel free to ping me on LinkedIn. I am more than glad to answer back and give you any advice that I can. I'm always available, and even if you see that I don't answer right away, I will always make sure to go back and help anyone that needs any assistance. And if you need somebody to cheer you up, I'm here.

Speaker 1:

Well, I'm glad you are here. I advise everybody in our audience, you can see all the episodes of Follow the Brand at 5starbdm.com. That's the number 5, star, B for Brand, D for Development, M for Masters, dot com. This has been wonderful. Here's to your 2024. Thank you for being on the show. Thanks for joining us on the Follow the Brand podcast, and thanks to our production team for their incredible support on each and every episode. Now the journey continues on our YouTube channel, the Follow Brand TV Series. Dive into exclusive interviews, extended content, and bonus insights that will fuel your success. Subscribe now and be a part of our growing community, sharing and learning together. Explore, engage, and elevate at Follow Brand TV Series on YouTube. Stay connected, stay inspired. Till next time, we will continue building a five-star brand that you can follow.

Personal Branding and Cybersecurity Power
Concerns and Guidelines in AI Implementation
AI Consciousness, Data Skewing, Data Privacy
First Capture the Flag, Building Self-Confidence