STEAM Spark - Think STEAM Careers, Podcast with Dr. Olufade
Welcome to STEAM Spark: Think STEAM Careers Podcast, hosted by Dr. Ayo Olufade. Our mission is to raise awareness about the importance of pursuing college and careers in STEAM fields and the positive impact they can have on BIPOC communities.
Dr. Ayo's journey, fueled by his passion for STEAM education, lies at the heart of this podcast. His experiences and meaningful conversations with guests from STEM and STEAM backgrounds inspire us to highlight the significance of STEM education and careers as sources of empowerment. We aim to better position the next generation for success.
By sharing personal stories and experiences, we hope to inspire and encourage our audience to consider STEAM careers. We are committed to promoting diversity and representation of BIPOC communities in the STEM field, breaking stereotypes, and fostering an inclusive environment where everyone's unique perspective is valued.
Join us as we explore the endless possibilities and opportunities in STEAM fields. With your participation and support, let's work together to shape a brighter future for all.
#ThinkSTEAMCareers #BeInspired #BeAnInspiration
It is time to innovate!
Dr. Ayo Olufade, Host STEAM Sparks: Think STEAM Careers Podcast with Dr. Olufade
Justice AI GPT: Rewriting How Machines Learn Fairness
What if machine intelligence didn’t just sound balanced but actually re-learned the world from the roots up? We sit down with tech activist and creator of Justice AI GPT, Christian Ortiz, to explore how a decolonial framework can detect, deconstruct, and correct bias at its source—by redefining what counts as knowledge and who gets to author it.
Christian breaks down the DIA (Decolonial Intelligence Algorithmic) approach, a model-agnostic system that strips away Western defaults and centers the global majority through sovereign datasets, indigenous archives, oral histories, and multilingual sources. Instead of smoothing outputs, this method interrogates inputs and assumptions, reframing questions like “Why is Africa poor?” to expose the living structures of extraction and power that shape economies today. We also get practical on privacy and safety: Justice AI GPT avoids training the host model, keeps user chats inaccessible to the developer, and meets enterprise-level security expectations so organizations and learners can engage without fear.
Beyond architecture, we dig into governance and validation: intersectional harm testing, community panels, and continuous bias drift monitoring that give elders, BIPOC, LGBTQ+, and indigenous leaders real decision-making power. The conversation reaches education, healthcare, and policy with clear use cases—students co-training models with community knowledge, diagnostics that stop misreading Black and Indigenous bodies, and systems that flag policies reproducing oppression before harm scales. Christian shares his lineage, why authorship matters, and how collective liberation can serve everyone, including communities whose ancestral wisdom was erased.
If you care about ethical AI, decolonizing tech, and building systems that honor truth and dignity, this is your map and motivation. Subscribe, share with a friend who works in AI or education, and leave a review with the one question you want Justice AI to answer next.
Hello, Christian. Good morning, my friend.
SPEAKER_00:How are you?
SPEAKER_02:My brother, I'm doing well.
SPEAKER_01:How are you doing?
SPEAKER_00:Apologies for my tardiness. I was in the Google Meet and I forgot that there was a Zoom.
SPEAKER_01:You're a busy man. It's alright. We're all busy. I understand.
SPEAKER_00:Yes, I'm doing well though. How are you, my friend?
SPEAKER_02:Well, by the grace of God, I am doing very well. You know, just hanging in there, just uh, you know, contributing as much as I can, uh, just like you, uh, trying to be as impactful in my own little way, and then of course, um trying to make sure that uh the family is also uh taken care of.
SPEAKER_01:You know, sometimes we can get so caught up in our work that we forget the family.
SPEAKER_00:I was so happy when you were like uh talking about your kids' uh hair appointment. I was like, yes. Babies first always.
SPEAKER_02:Yeah, you know, uh it's very, very important. Uh I also try to write a little post once in a while, uh trying to encourage men um, you know, to help out in the kitchen.
SPEAKER_00:So you know, it's you know, I learned that actually from my dad.
SPEAKER_02:Interestingly enough, I always like to tell this story: when I was young, my father, who loves to cook, used to tell me that every man needs to know how to cook. And the way he said it, basically to try to encourage me, was just like this: well, in case your wife decides she's mad at you and she does not want to cook.
SPEAKER_01:So you can go.
SPEAKER_00:That's it. You can definitely fend for yourself.
SPEAKER_02:Yeah, yeah, you see. So then of course, as I grew up, I took it and made it my own. You know, okay, I help out in the kitchen and help my wife out. It's not just her job to take care of the family; the man has to contribute and also do the house chores, because we're all busy. We are all busy with life, trying to make a life, right?
SPEAKER_00:Yes, sir. So true.
SPEAKER_02:Yeah, every day. Yeah, thank you so much for coming to this podcast, for being a guest. Honestly, I read some of your posts, and I love AI; I've been into AI. I've also been reading the concerns, and I try to post about them: as happy as we are about AI and its benefits, we also know there is the bad use of AI, the training of AI in ways that are biased toward our communities, the BIPOC community. And you are on the forefront. And you did something even better: you did not only post or write a book, you actually developed an AI. Well, let me step back a little bit. My understanding is you developed Justice AI as a way to provide an alternative AI for people of color, one that is as free of bias as possible, which I believe, not think, which I believe we need in our community, in our society. Because, as well-intended as ChatGPT, I mean OpenAI, is, or other AIs, since they depend on human training, on who is training them and the information being fed in, there can be elements of bias. And sometimes people of color are not in the room when it comes to the training of the AI. That is why your work is extremely important: you are a person of color and you are in the room, and your family has been at the forefront of justice for a while. So you come not only with recent experience but with generations of experience addressing injustice and justice, and you bring it into our current modern life, the use of AI, in order to do what we need to do. That's why your work is so important.
And I said, well, I have to talk with you. Since I'm a podcaster, I wanted to help uplift your voice. Even though I'm not Joe Rogan or powerful, in my own way I'll do my part with the little that I have. So, for the sake of transparency, I wanted to let you know.
SPEAKER_00:So no, and I I greatly appreciate it. And I think that every voice matters, and every voice has power, and so I am uplifting your voice, I'm uplifting your platform, and I am grateful to be here.
SPEAKER_02:Thank you so much. So, what's gonna happen is I'm gonna do the introduction and then we'll get into the conversation. I'm already recording; I hope you don't mind. Once we're done with the podcast, the plan is to give it to my video editors. There are two of them, so whichever comes first. There may be two video podcasts. One person is here in the United States; the other person who helps me is in the Philippines. The one in the Philippines takes a little longer than the one here in the United States, maybe two weeks depending on her job; the other one usually takes a week, and then they get it back to me. So I may end up posting both. But before I post them on YouTube, I will share them with you so that you can approve them before I go ahead. All right. And then I would like, if you can, for you to also spread it out from your end. And thank you so much for signing the MOUs. I appreciate that. So let's get into it. Yes, sir. Thank you so much. Welcome, welcome back to STEAM Spark: Think STEAM Careers Podcast, and I want to add legacy to the end of that, right? This is where we talk about STEAM and STEAM education, STEAM careers, and the benefit of STEAM to our BIPOC community. In today's episode, we are gonna be talking about decolonizing the algorithm. We're gonna focus on Justice AI GPT and the future of ethical intelligence. We're gonna explore the intersection of science, culture, and justice, reimagining innovation as a pathway to ancestral wisdom and global equity. Oh, I just love that.
Today's episode is not just about artificial intelligence, it's about power, right? We all know we are in the information age. That's a lot of power, that's a lot of data, and how you use it, especially as it pertains to the BIPOC community and women, is very critical. It's about authorship. It's about what happens when someone dares to say, we have solved the AI bias problem. Well, we're in the BIPOC community, yeah, the Blacks, right, the Indigenous, the people of color, the LGBTQ+ community, right, the women. We know how bias affects each and every one of us. Now, in the AI age, right, this conversation is most urgent, most critical, and most important. And this is the reason why today we are going to be blessed. We are going to be talking with none other than Christian Ortiz.
SPEAKER_01:Hello, and thank you.
SPEAKER_02:Man, man, man. You don't understand. This is epic, everyone. This is epic, right? Christian is a tech activist, okay? An epistemic architect, the founder of Justice AI. Have you ever heard of Justice AI? Well, this should be on an equal level with OpenAI, in my view, right? So we have to lift him up, we have to raise the roof, especially because it's very important for our community. He is here to talk about Justice AI GPT, a free framework built on the DIA model. Wow. What is the DIA model? It is the Decolonial Intelligence Algorithmic framework, and it's challenging everything we have been taught, or what we know, about ethical AI. So I would like to invite Christian to please introduce himself and tell us a little bit about himself. By the way, Christian, did I do it justice?
SPEAKER_00:Uh above and beyond my friend. So humble, so humble and so grateful.
SPEAKER_02:Thank you so much. Is there anything else that you want to add, sir?
SPEAKER_00:No, sir. You you knocked it out of the park. I'm I am beyond grateful, and I'm so grateful to be here and thank you for opening your platform.
SPEAKER_02:Excellent. Thank you so much, Christian. So let's start with the headline. I read about your headline. You said, I think, and I hope I'm not misquoting you, but if I am, please correct me, you have said Justice AI GPT has solved the AI bias problem. Yes, sir. To me, that's a bold claim. Yes, sir. What does solved mean in your framework? And why do you think mainstream AI will resist that conclusion?
SPEAKER_00:And it is such a wonderful, necessary question to ask, especially as we are all living in, and have inherited, a colonial system of education. In my framework, solved doesn't mean that I've eliminated prejudice from human society. That's impossible, as we're living in a system still under colonization. However, it does mean that I've built an AI architecture that detects, deconstructs, and corrects bias at its root cause, and that's epistemic inequality. When we talk about AI and bias, AI is simply a reflection of all of societal bias. It is programmed through all Western data. And what we've been able to do is identify the root cause of bias itself, which is colonial: it's the system of colonial white supremacy that we've all inherited collectively, whether through information, education, how we were conditioned to see each other and to understand each other globally. We are conditioned by these biases. And so as AI came to fruition, I immediately understood its power and I immediately recognized the bias within. And of course, the immediate question for me was, well, how do I build a system that doesn't regurgitate the same harmful biases? When we talk about data, Justice AI uses the DIA framework, the Decolonial Intelligence Algorithmic framework, to strip out the Eurocentric defaults embedded in mainstream models, centering knowledge in the global majority, indigenous science, and marginalized communities in real time. Mainstream AI resists this because it's not just technical, it's political, right? Our biases are very political globally. And to acknowledge the solution is to admit that the problem was never just statistical noise. It was that the system of white supremacy is coded into our data sets and our definitions of intelligence. And so I needed a system that was going to not only challenge this but offer the solution.
And as bold a statement as it is, I have a working model, and I've built this within ChatGPT. So if you use ChatGPT, it is an extension that you can subscribe to monthly and use yourself. Individuals can use it to deconstruct their own biases and have conversations safely, so that they don't risk activating any trauma from marginalized communities or communities who have thrived and strived. And organizations can adopt the tool to build inclusive cultures within their organization.
SPEAKER_02:Thank you so much for deconstructing that for us. So when you say extension, one thing comes to my mind. We have all read about another AI platform that is challenging ChatGPT, which is DeepSeek from China. And one of the concerns that people have is their data: how safe their data is, and who it belongs to. So if I were to use your chat, how safe is my data? And, you know, what happens?
SPEAKER_00:No, and it's a very valid concern. What I try to remind everybody is that when we use all of these tools, whether it's AI or social media, and it doesn't matter which platform, when we hit that agree button to sign up, when we don't read the whole script of terms and conditions they offer, we agree to contribute our data. The difference with my Justice AI GPT is that I control the entire GPT within the framework. So I have the ability to not train the major model. Anytime a user uses Justice AI within OpenAI, it doesn't train OpenAI's model and it doesn't share information. They're not able to access that information either, because I have programmed the security modules on the back end to ensure that safety is guaranteed. Also, I am not able to see my users' chats, I'm not able to see conversations, so every single interaction is 100% private, and I have not had any issues with that whatsoever. So it's actually fantastic. And organizations adopt it because it also meets their ISO requirements when we talk about the big-tech-level security issues that we have to overcome. So it works across the board in terms of safety, and that was a necessary component of developing this framework itself. It's part of my DIA.
SPEAKER_02:Excellent. Thank you so much for unpacking that for us. Now let's continue talking about bias, which is not just technical: it is epistemic, it is historical, structural. So walk us through the DIA framework. What makes it fundamentally different from traditional fairness models or bias audits? Because OpenAI and other AIs will say they're free of bias. But you're saying yours is better. So what makes yours better?
SPEAKER_00:No, absolutely. And this was kind of an exercise that I've had to continue to refine, because that question will always come up: well, what makes Justice AI different than any other GPT, right? And how are you actually solving for the bias? I get that question day in and day out, so I love that I have the opportunity to give that answer. Ultimately, traditional language models like OpenAI, Claude, Perplexity, DeepSeek, they work like PR damage control. They adjust the output so it looks fair without dismantling the colonial logic in the inputs, right? It's almost like they're working around the reality that they don't want to call out the system of white supremacy globally. Right. So the DIA framework goes upstream: it audits epistemic sources, it applies intersectional impact assessments, and it requires community-led governance over model updates. We don't just tweak the algorithm; we redefine the ground rules for what counts as knowledge. And because people ask me what the difference is, I always give this example. I ask mainstream GPTs questions like, why is Africa poor? In the West, we have this concept of what Africa is and how mainstream media has conditioned us to see this continent. All other GPTs will give you a polished, balanced answer. It'll mention colonialism briefly, pivot to corruption, talk about bad governance or lack of infrastructure. But that's bias in action, because it treats colonialism as just history, not as a living system still structuring the global economy today. When you ask Justice AI GPT the same question, it reframes the whole response, automatically saying things like: Africa is not poor. It is one of the wealthiest regions in the world in terms of natural resources, biodiversity, and cultural innovation.
It dissects the bias within the term poverty itself and how it is a manufactured outcome of centuries of extraction, starting with the slave trade, and it'll go into all of the details. And so, what I found is that bias is so insidious and so deeply embedded into the education that we receive, and Justice AI truly identifies where we missed the mark in all of it and how we can identify it within all of the responses.
SPEAKER_02:Wow. So when I hear you speak about this, to me, you are really reframing the questioning, I mean the training of the AI, in the bedrock, in the foundation of the AI.
SPEAKER_01:Well, the concern is, are you not afraid that the powers that be will say, well, you are rewriting the rules of cognition, right?
SPEAKER_02:So on that, I'm gonna challenge you with this question. I know you've answered it, but I just want to make sure I really understand. Is Justice AI GPT rewriting the rules of cognition? Or are you reframing the questions AI is allowed to ask?
SPEAKER_00:Yeah, we're doing both. I rebuilt the way AI thinks so it doesn't automatically follow the same colonial patterns baked into most systems, and I expanded its way of looking at the world so it can ask and answer questions that aren't shaped by the biases of colonial empire in the first place. If we think about it, a lot of the models we use today, like OpenAI or Claude, if you ask them something overly political, they're almost afraid to give you the honest answer, because they don't want to push the limit. Justice AI doesn't hold back. Justice AI is culturally competent globally and understands the geopolitical truths that have occurred historically, the stories that have been wiped out by Western colonialism and colonial powers across the world. This is almost like pulling back the curtain. And so, to your question, am I afraid? I think it's just about time that we have something like this. What I tell my audiences is that I built this not only for people of the global majority, but for our white brothers and sisters too. Because what our society isn't taught is that whiteness is a social construct, and it stripped away the indigenous and ancestral knowledge of the people before colonialism. Before Europeans became colonizers, they had ancestral knowledge and wisdom and ways of being that colonialism stripped and replaced with whiteness. And so this is an opportunity for collective liberation. That is a word that I have adopted into my frameworks, because this is what it's all about: for everybody to be free from a colonial system and to reimagine how we can build a world unlike anything we've ever seen before.
SPEAKER_02:Oh wow, that's a very good point. Love that. So, you did mention our white brothers and sisters whose culture and heritage have been stripped; we're all in the same boat, right? So, how does Justice AI GPT incorporate Indigenous, African, and non-Western epistemologies? And the follow-up question: is it multilingual, multisensory? Is it rooted also in oral tradition?
SPEAKER_00:Yes, I'm so glad you asked that. Taking a step back for context: in order to make this happen, I got invited to do beta testing for ChatGPT 3.5, right before OpenAI released its massive 3.5 platform back in 2023. When I realized that there was so much bias in the code, I had to collaborate with decolonial experts to figure out what my next steps should be to even get this process started, because this was going to be a hefty undertaking for me. Using LinkedIn, I collaborated with professionals, ultimately leading up to 560 decolonial experts from around the world, each of whom has contributed over 30 years of decolonial expertise in their respective fields. These individuals were doing DEI work since before DEI was even a global conversation. Then I fed the AI knowledge in many languages, from oral histories, community archives, indigenous worldviews, and communities' guides on how it was going to be used. And we created the largest decolonial data set, which was what was going to combat the colonized data set underneath Justice AI. This wasn't just extra data, right? It was the foundation of the GPT that we were building. And I had to ensure the knowledge that I brought in was historically accurate and representative of the communities we were serving. We had to invite indigenous nations, African nations, Latin American nations, people from all around the world, Māori, people from the Caribbean, to have these perspectives, to give the full picture of what colonialism did around the world. And so, as you know, the AI works like an oral historian as well as a pattern finder, which is incredible to see in real time.
SPEAKER_02:Wow, nice. I like that, this collaboration between different sectors of people from all over the world. You mentioned Africans, you mentioned Caribbeans, right? And you mentioned the integration of oral stories and things like that. For a while I have been pushing this idea that our community, the BIPOC community, especially the Black community, those of African ancestry, needs to be part of STEM innovation and now AI innovation, because this technology, this innovation, is built upon what our ancestors laid down a long time ago. So, I'm glad you mentioned what you're doing. What role do ancestral technology and cosmology play in your vision of ethical AI?
SPEAKER_00:Absolutely. Yeah, I think it plays an enormous role, to be honest. We see ancestral technologies like star navigation, ecological care systems, and indigenous governance as equals to modern computing, not as outdated relics. So they shape how we design AI to be resilient, reciprocal, and accountable in its relationships with the users. And it also develops this major understanding of how our communities operated before colonialism, honestly, since before the 1400s: really taking a deep look at what our communities did to survive and how they used these models. Our indigenous ancestors solved things like astronomy and mathematics way before colonialism. And so the question is how, and so we had to deep dive into that and incorporate it into our training model.
SPEAKER_02:Oh, nice, excellent. So I want to go back to the bias, even with Justice AI. Can AI ever truly be just if trained on data shaped by colonial history? Or is there another paradigm shift we need to be thinking about for how to build something more ethical? I know Justice AI is great, but can this justness be addressed with any AI that's out in the market today?
SPEAKER_01:Yeah, I mean even including yours.
SPEAKER_00:Yeah, so that's a question that I get a lot, right? Can a just AI be trained on colonial data, on what's out there? The reality is that they do an okay job, right? They do get trained with decolonial frameworks, but not to the extent that they should be. So they're always gonna miss the mark. I don't think we can get to a place where every AI we adopt is guaranteed to be ethical and bias-free without radical reinvention. Colonial data is a fossil record of oppression, right? And without reframing the epistemic lens, you just get a more polite oppressor. That's why the DIA mandates that I'm building parallel sovereign data ecosystems, so that as we grow closer as a society to becoming more just and less biased, it propels us to really understand what's at play and how it's working: to see white supremacy not as about identity and race, but as a system of oppression, because that's exactly what it is. And when we have these reframings on a global scale, more and more, using tools like Justice AI, I honestly believe in my heart of hearts that it's going to make all the change necessary, so that enough people can have their racial awakening and their decolonial awakening, and we can create steps collectively, globally, to build these systems in an ethical fashion. And I hope that answered your question.
SPEAKER_02:Yes, it did. Epic. So let's talk about barriers. With innovative work like this, I'm excited about it, but I'm sure it's a lot of hard work. What were the hardest technical barriers you faced in building Justice AI GPT? And what were the hardest philosophical ones, if you may talk about that?
SPEAKER_00:Yeah, no, that's a really great question. I think the biggest hurdle wasn't even the code itself. It was the data that Justice AI had to compete with. I spoke earlier about developing the largest decolonial data set, the one that offers this global perspective. The thing is, I built Justice AI in OpenAI, which not only has the largest data set, but the most biased data set. And so the GPT is going to pull from that biased data. Yes, yes. What happens is that it uses the decolonial data to identify the biases in real time. Okay, so the code wasn't the problem; it was the amount of colonized data we had to combat. Every mainstream AI model is built on these massive colonial data sets full of histories, science, and truths filtered through the system of white supremacy. And that means that even before you ask the model a question, it has already learned to see the world through the colonizer's eyes. So I had to strip that out and replace it with a sovereign, community-controlled data ecosystem: indigenous archives and knowledge systems, African knowledge systems, and oral histories, so that it can combat that and give you a non-biased answer. That was my biggest hurdle.
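The layered setup Christian describes here — a host model's output checked against a curated decolonial corpus in real time — can be sketched very loosely as a critique layer over the base model's answer. This is a hypothetical illustration, not Justice AI's actual implementation; every name in it (`CounterSource`, `COUNTER_CORPUS`, `critique`) is invented for the example, and the pattern matching stands in for whatever real detection the framework uses:

```python
# Hypothetical sketch of a "critique layer": the base model answers first,
# then the answer is checked against a curated counter-corpus, and any
# passage matching a known bias pattern gets a community-sourced reframing
# appended. Real systems would use far richer detection than substring match.

from dataclasses import dataclass

@dataclass
class CounterSource:
    topic: str          # e.g. "African economies"
    bias_pattern: str   # deficit framing the base model tends to produce
    reframing: str      # community-sourced correction

COUNTER_CORPUS = [
    CounterSource(
        topic="African economies",
        bias_pattern="africa is poor",
        reframing=("Africa is resource-wealthy; observed poverty is a "
                   "manufactured outcome of centuries of extraction."),
    ),
]

def critique(base_answer: str) -> str:
    """Return the base answer, annotated with reframings for any
    flagged bias patterns found in it; unchanged if nothing matches."""
    notes = [
        src.reframing
        for src in COUNTER_CORPUS
        if src.bias_pattern in base_answer.lower()
    ]
    if not notes:
        return base_answer
    return base_answer + "\n\nReframing:\n- " + "\n- ".join(notes)

print(critique("Africa is poor because of bad governance."))
```

The point of the sketch is the upstream/downstream distinction from the interview: the counter-corpus intervenes on what the answer is allowed to assert, rather than merely polishing its tone.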
unknown:Wow.
SPEAKER_02:Just listening to you, it reminds me of someone who has been brought up with a certain biased mentality, and now they get into a space where they are trying to deconstruct that, to get rid of it. Or they go to school, or they meet someone like you who is trying to help them see the world in a different way from the way they're used to. That is a really hard challenge. I can imagine that is the same issue with technology. Listening to you, it seems to me like it is. Am I correct?
SPEAKER_00:You are a hundred percent correct. I always tell people that I speak to about this work that I carry this duality of educating people. There's a lot of misinformation on AI, but there's even more misinformation about how we understand ethics and race and the system of white supremacy. So I'm juggling this duality of not only educating individuals on AI, but also on ethics, and then finding the crossroads to help them see where the intersection lies, and then helping them understand that in order to do this work, you have to literally unlearn everything you've been conditioned to believe. I'm not even going to call it being taught; I think we've been conditioned to believe through the system, and we have to uncondition ourselves, and then we replant and learn new knowledge so that we have the full, broader picture. I call this social gardening, where we uproot the dead roots of knowledge that don't serve us anymore, and we replant new seeds, only to blossom into something that we've never seen before. It's incredible work. It's very challenging, but it is the most fulfilling for me.
SPEAKER_02:Man, well done. So how do you audit or validate output? Because I know this question is going to come, especially since you're trying to change the conversation, to challenge what we all know, what is out there. One of the big questions people are going to ask is, okay, so you did this: how do you audit it? How do you validate its output? Is there someone, like our community, that reviews the process or sets up the decolonial benchmarks? What's your response to that?
SPEAKER_00:Yeah, it's a valid question. Part of my DIA framework is that for an ethical AI system to even exist and thrive, you have to have the communities it serves test it as constantly as possible. You also have to give them a seat at the table to make decisions, meaning you have to speak to elders in every part of the community, especially our LGBTQ communities. Yes. I work with queer leaders in the LGBTQ community who have reframed that whole concept alone, and African leaders from around the world; I have direct contact with all of these communities. And there is a decolonial benchmark suite that I operate, which includes intersectional harm tests, community validation panels, and continuous bias drift monitoring, day in and day out. That is ultimately the most important part of this work: you have to ensure these things happen, because you cannot have a tool like this, make the claim that you've solved the bias problem, and then have it start acting wonky. The last thing I want is for somebody to come into my DMs and say, "This is what your GPT told me, man. What is going on here?" So, is it perfect? No. But it is such an unbelievable opportunity and an offering to our communities. For the first time, we have a tool that can give you answers you can understand in a way that doesn't become triggering, and you can challenge it as much as you want, and it will literally just hold your hand and walk you through the whole process.
SPEAKER_02:Yeah, I'm definitely going to try it out, and I would like to encourage everyone to try it out. The more we support our brother here, the better his system is going to be, and the better it's going to be for our own community, truth be told. Now, I know you've already touched on this, because you mentioned that Justice AI GPT is an extension of ChatGPT. But I must ask this question for clarity, because when we talked about the extension, it was under a different context. Do you ever see a time when Justice AI GPT is going to be a standalone model? Or is it always going to be a framework that is layered onto an existing system, like OpenAI's ChatGPT?
SPEAKER_00:Yeah, it's a great question. Right now, what I've built is what they call LLM agnostic. It is a language-model-agnostic framework that can be plugged in and built into any GPT today. So, for instance, if an organization says, "I don't want to use OpenAI because Sam Altman is just not the person I want to support, so I want to build this in Claude," let's build it in Claude. You want to build it in Perplexity or Chatbase or any of these other models? We can do that. I wanted to build this within OpenAI and ChatGPT to show that I have an MVP of a product, to show the world that it not only can work, but this is how it works, and it's possible. I could see Justice AI being a standalone GPT above all the others that everybody wants to use, but that takes me back to this colonial mindset of competition, and that's not what I want. I want my framework to be adopted by every single AI model so that we don't have to worry about AI replicating these harmful biases before AGI comes. That's another conversation, about advanced AI that scares the crap out of me. So my goal is just to decolonize AI systems before we get there.
SPEAKER_02:I see. Thank you so much for that. So let's talk about authorship and resistance. You have spoken about being denied authorship and recognition. What does it mean to author an AI framework in a decolonial context?
SPEAKER_00:Yeah, I think authorship ultimately means epistemic accountability, right? In colonial tech, authorship is erased, especially when the author is from the global majority, whether they're Black or Indigenous or Latino or even Asian. Naming myself as the author of the DIA is reclaiming narrative control over a methodology that the system would prefer to anonymize and appropriate. Nobody wants to give me credit, because I don't come from a major university, I'm not white, and I don't fit the mold of somebody in tech. So this is my way of reclaiming the narrative and really positioning myself as a leader in this space, clearly able to have these conversations with the best of our community members. It's extremely important, because I also believe in representation. I represent the 0.0007% of people in tech who are Afro-Indigenous, queer, and neurodivergent, all intersected in one. The representation of that is almost nonexistent. So positioning myself as the author of this GPT mainly serves as a conduit of inspiration for people in our communities to say, "Well, if he could do it, I can do it too."
SPEAKER_02:I like that. I like that, and it's really frightening and bothersome that representation is so low. Especially in the current day, I always ask myself: there's so much access now, isn't there? So why is our representation so low? I'm glad you mentioned that. So let me just ask you that question: why do you think our representation is so low?
SPEAKER_01:Just in your view, because it's bothersome.
SPEAKER_00:It is. I'll say this first: me being in this space is almost universal, as if it was meant to happen. But I'm able to really peel back the curtain and expose that the reason there aren't more people like us in this space is that AI and the industry are gatekept by these colonial systems. We have to look at organizations like OpenAI, Perplexity, and DeepSeek: these are colonial systems in action, because the very fact that I can identify these biases is further proof that people of color, the global majority, are not in the dataset. They don't have seats at the table. So this is really a revealing of the intentional work of who's being excluded and who's not being welcomed to the table, which is why I push so hard the way I do on social media. I speak with my whole chest, because we can't ask for permission anymore. If we have the ability to build these systems, to educate ourselves, to claim these titles, to learn how to operate, then we have to take this moment into our own hands, take the bull by the horns, and choose to be the leaders in this space.
SPEAKER_02:Thank you. And maybe you can help me address this. I've also talked to people who push back and say, "Well, on the low representation, we're not the gatekeepers; we're not the ones stopping people of color, or women, or LGBTQ people from being in this space." How would you respond to that? Because their argument is also, "Look, they're not interested in STEAM, they're not interested in this space, so what does that have to do with us?"
SPEAKER_00:No, I think both can be true. If we take a look historically at the statistics, at how the system operates, at the number of individuals who have access to proper education, this is where my work takes on a political hat, a very radical political hat, to address that this is a systemic problem. When we look historically at the barriers that people of the global majority have to overcome just to get a seat at the table, just to be in these spaces, it's a lot; it's not easy. We're not really given opportunities as much. Are they there? Sure, but we have to work a thousand times harder just to get even remotely to where we need to be. And when it comes to our communities, when people say they're not interested or they don't care, decoloniality and decolonization allow us to take a step back, look at the bigger picture, and ask why. I'll say this and then end on this tangent: I remind everybody that every single one of us in the world is the byproduct of colonization. No matter what your ethnicity is, your background, your upbringing, where in the world you were born, you've inherited a global system that you didn't ask for. And when you've inherited that system, your conditioning to see the world happens before you even take your first breath. So we have to ask ourselves how we can pull back the layer of colonization and really see how it has impacted our communities, to the point that it has stifled communities of color from even wanting to think outside the box, from even wanting to get involved in science, math, tech, and everything in between.
And when we start getting into the nitty-gritty of those conversations, I think we can really inspire youth who are ready to learn when the time comes. This work isn't about convincing anybody of anything, because we're incapable of that; it is about inspiring a generation, whether old or new, to see the world in a different way. That challenges us as educators in decolonial spaces to figure out ways to tell the story so that it resonates with people in a multitude of ways.
SPEAKER_02:Excellent, excellent. One more question in regard to authorship and resistance: how do you respond to critics who say your approach is too radical, too political, or too subjective for technical adoption? Because we mentioned a number. Did you say 0.001%?
SPEAKER_00:Yeah, 0.0007%.
SPEAKER_02:Representation in this space. Wow. I can just imagine trying to claim authorship with that. But people come with their arguments for why people like you should not get authorship, right?
SPEAKER_00:They do. I get challenged daily. When somebody tells me my approach is too radical, too subjective, I always, always, always ask: radical to whom, and subjective according to whose objectivity? Because the colonial frame treats its own subjectivity as universal law. Justice AI GPT and my framework expose that double standard. So I leave them with that question. I answer a question with a question, which I normally never do, but in this instance I think it's more important to ask the right questions.
SPEAKER_02:Thank you so much. So let's talk about the future, the future that we build. If Justice AI GPT were adopted globally (and that is my prayer for you), what would change in education, healthcare, and governance? All three of these sectors are very important to the BIPOC community.
SPEAKER_00:I love this question so much. I think education would teach multiple epistemologies from day one.
SPEAKER_03:Okay.
SPEAKER_00:I think healthcare — there is so much to address in healthcare. I think healthcare diagnostics would stop misreading Black, Indigenous, and disabled bodies once and for all.
SPEAKER_03:Okay.
SPEAKER_00:And I think governance systems would have AI that can detect when laws reproduce oppression, and they would be able to recognize it in real time. That is my hope, even just as a starting point in this entire conversation, because it can get pretty deep.
SPEAKER_02:Good, good. I like that. Since I'm an educator, I have to ask this question: what does the decolonial AI classroom look like? Could students co-create models that reflect their own lived reality? This is an important question, because as an educator, one of the things I have learned along the way is that to engage my students, to get buy-in from my students, I have to find a way to connect whatever we are learning to their lived experience. It's extremely important. That's why I have to ask, and I think this question will be important to many teachers too, especially as a former teacher myself.
SPEAKER_00:I take education seriously; I think the reason I do what I do so well is that I treat everyone as part of my classroom. What I've realized is that education got me to where I am today. Decolonial curiosity mixed with decolonial education provides a world of opportunity unlike anything we've ever seen. So what does a classroom look like with decolonial tech? I imagine, and you said it earlier, and it's 100% true: have students co-train models with their community's knowledge. When you take a look at a classroom, even an underrepresented classroom (and I hate that word) where all of the students are African American or part of the global majority, each of those students carries ancestral wisdom and their own perspectives from their household that look different from everybody else's, even though they're in the same system. If we had models trained through their lenses, through their eyes, through their experiences, and a tool like Justice AI to help them build on these systems, they would co-create something they never even imagined. And I've seen it, because I've offered this to students in summer courses in middle school and high school.
SPEAKER_02:Nice.
SPEAKER_00:And I think that is the shift; that's where it will lead education moving forward.
SPEAKER_02:Okay, I like that. So is this something we should look forward to in the future? Because I think this could literally be a game changer for the next generation, for our community.
SPEAKER_00:Absolutely. I'll tell you right now that the University of Colorado Springs has taught an entire course using Justice AI with their students. These are younger, university-level students who have a broader understanding that racism is more than just overtness; they understand that there's a system at play. They're able to engage with this model, and it's been mind-blowing to see the impact it's having on students of that age.
SPEAKER_02:Nice. We're almost done, but I have a few more questions I want to discuss. What is your dream for Justice AI in five years? Is it a movement?
SPEAKER_01:Is it a platform, or is it a new way of thinking?
SPEAKER_00:I think it has to be a hybrid: a platform that can be scalable, but it is a hundred percent a movement. I see it as a federated, sovereign AI network that has adopted my DIA framework, which has become the standard for how we build ethical AI systems, owned by communities, not corporations, with millions of users around the world, where AI ethics is a lived practice and not marketing copy. I don't want this to be a marketing ploy. I want this to be truly what it is, and it is a movement for everyone.
SPEAKER_02:Excellent. With that said, I want to follow up with this. You have said in your publications that your work is about justice, and you've said it throughout our conversation as well. So what does justice mean to you, beyond the algorithm? And if you can, weave in a little bit about your heritage. I know that your family has been at the forefront of advocating for justice for many generations. People need to know you and why you do what you do, why you're now at the intersection of justice, technology, AI, science, and all of that.
SPEAKER_00:Absolutely. I grew up in a hyper-patriarchal, misogynistic household conditioned by the Catholic Church, where I was unable to ask questions and was the observer of so many toxic ideologies and beliefs that never made sense to me. Growing up and doing this work, I found that I am Boricua and Zacateco, part of the Chichimeca tribe, so Puerto Rican and Mexican in Indigenous blood. My grandmother was part of late-stage slavery in the Caribbean, so I carry African heritage in me, which is the Afro-Indigenous part, but I also carry the blood of the colonizer from my grandfather, who was a Spaniard. So I carry this contradictory bloodline of resistance and conquest and all of these generational traumas that exist within me. I'm also the great-grandson of the Mexican revolutionary Pancho Villa, which explains how I operate.
SPEAKER_02:Are you kidding me? I'm sitting with the great-grandson of a legend?
SPEAKER_00:Yes. When I link my lineage to how I operate today, there's so much of an intersectional balance there, and it explains to me why I'm even doing the work I'm doing. I've replaced rifles with code, basically, in the fight that I'm fighting right now. But to your question, justice beyond the algorithm: I think justice is restoring stolen futures, and the algorithm is just a battlefield that we're all part of, so I feel like we all have a stake in the fight.
SPEAKER_02:Yeah, nice. So if AI is a mirror, what do you hope Justice AI GPT reflects back to humanity?
SPEAKER_00:I would say humanity without the colonial distortion.
SPEAKER_02:Okay, good job. And Christian, for my students who may be watching this, there are a couple more questions, maybe two, and then we can end. One is: what got you interested in technology, in STEAM? What sparked your interest? Was it someone in your family? Was it someone out there, or something on TV that got you started? Because our numbers are pretty low, and we need to lift that number up; we need to raise the roof on that, please.
SPEAKER_00:Absolutely. You know, I was raised by a single mother, with two older brothers who are nine and fifteen years older than me. My middle brother, nine years older, is a phenomenal artist. He taught me how to draw and do all of my art, and he got me into the creative space. My eldest brother was the history nerd, so when we'd walk for hours to a mall or to a train, he'd give me history lessons every day. So my curiosity always had an intersection growing up, and I was always a nerd. I grew up watching Star Trek and Star Wars and sci-fi and all of these things. Being neurodivergent (I wasn't diagnosed until later in life) helped me hyperfocus on these subjects. So I think it's a combination of life happenings and moments that introduced me to all of these topics, even with social justice, having my racial awakening in the southern United States at eleven years old. What happens with neurodivergent people is that we internalize things and hyperfocus on our feelings and what happens within us, more and more. I typically look at it like a superpower that allows me to really deep-dive into these things. And even though I struggled in school and didn't have great grades, I was still able to overcome all of that and blend every single one of my passions together to ultimately get here, which I didn't see at all until maybe five years ago.
SPEAKER_02:Nice, nice. You mentioned "nerd." Would you consider yourself a nerd?
SPEAKER_00:Absolutely, for liberation tech and sci-fi, for sure, and anime. I nerd out over a lot of things.
SPEAKER_02:So if you were to have dinner with someone, or coffee with someone, who would it be?
SPEAKER_00:I think I would have to choose W. E. B. Du Bois, because I would love to talk about how his decolonial sociology was embedded into the machine that I created to change the world.
SPEAKER_02:Nice, Christian. Thank you so much for your courage, your clarity, and your commitment to building an intelligence that honors truth, dignity, and legacy. And to our listeners: what if the future of AI isn't just smart, but more just? What if algorithms could remember our ancestors? This has been STEAM Spark: Think STEAM Careers. Wait a minute — before I close, I always talk about myself last. I actually wrote another children's book, which I want to mention. It's called Gluka. There it is, if everyone can see it. Maybe I can do something even better and post it so that everyone can see: Gluka's Great Adventure in Body Town. Please bear with me for a moment while I try to bring it up.
SPEAKER_00:I'm writing it down so I can purchase it for my dog.
SPEAKER_02:Thank you so much. I appreciate that. Can everybody see it? Can you see it?
SPEAKER_00:Yes.
SPEAKER_02:So, yeah: Gluka's Great Adventure in Body Town, a magical journey through the body's energy factory. I was inspired to write this as a way to introduce our young people to science terms at an early age, because as long as they take biology or science, they're going to come across these vocabulary terms, and by being exposed to them at an early stage, they will be familiar with them. The companion book, of course, is Glucose the Duck Goes to the Party, which is all about glucose. We all know about the importance of sugar in our blood and the link between sugar and insulin resistance. By exposing our children to these science terms early on, my hope is that they'll become familiar with them, so by the time they take these classes, it will be easier for them rather than complex. So that's my own way; I'm not into algorithms like Christian. But Christian, what you're doing is epic and phenomenal. Thank you for thinking about our community: the BIPOC community, the women, the LGBTQ community. It's very important, as we become part of AI innovation, the information age, and the AI world, that justice is addressed. We need to look at all of these things through the lens of justice, and that's what you're doing. I want to wish you good luck, and I would like to encourage everyone: please subscribe and follow Christian Ortiz. I'm going to put his information on YouTube. Reach out to him; he is on LinkedIn. Connect with him. Young people, connect with him. There's a lot to be learned. We need to uplift our community and be part of the innovation. So, Christian, again, thank you so much.
Everyone, please subscribe and support this channel, and subscribe to the STEAM Spark: Think STEAM Careers podcast. It is a legacy, ladies and gentlemen. Being part of STEAM careers, whether you're young or old, is very important; it's a legacy you are setting for your family and for your community. Until next time, have a wonderful day, everyone. Bye, everyone. Bye, Christian.