Scratchwerk ^EDU

Big Brother for Equity's Sake: The Advantage of AI Monitoring

Scratchwerk Tech

Ever considered the radical notion that surveillance and AI could be our most powerful allies in the fight for equity? This exploration challenges everything we think we know about technology's relationship with social justice and opens a provocative question: for communities consistently receiving unfair treatment from human decision-makers, could technology offer a more equitable solution? 

Modern AI can process incredibly complex scenarios with countless variables—potentially more fairly than humans with implicit biases. The key is ensuring these systems are created with diverse community involvement.

This isn't about choosing to participate in an AI-driven future—that choice has already been made for us. The question is whether marginalized communities will help shape these systems or merely be subject to them. 


Speaker 1:

What if I said that we need Big Brother to monitor everything, literally everything? We should have surveillance going on all over the place. That would be equity at its core. To be able to monitor everything, to track every move, maybe that's where equity actually lives. We need to be able to check everything. Leave all the decision-making to the robots, and then maybe we can have some equity. What if we allow the robots to determine what's fair?

Speaker 1:

I like to be educated, but I'm so frustrated. Hello to my loneliness. I guess that endurance is bliss. Take me back to before the noon, go away and take it out of queue.

Speaker 1:

So when I was at the University of Florida (I've said this on this podcast before) I will admit I was not the most attentive student. While I was there, I had a lot of fun. I had a lot of fun playing basketball at UF. I pledged, and the crazy thing is I'm not even that social like that. But yes, I had some fun while I was in college. Sue me. Even back then, though, we were active, especially through my fraternity.

Speaker 1:

We took a stand when we needed to. We tried to help and give back when we needed to. We were very active in that sense on campus. One of the issues that came up at that time was this concept of affirmative action, specifically whether or not to include race on applications to the University of Florida. At its most basic level, most of the folks, especially people of color, were arguing that race should stay on the application, period. It should be included in the decision-making process when we were talking about whether somebody should be accepted or not. And obviously, at the time, you had a lot of people opposed to that. So we, along with some other organizations on campus, the college NAACP and the other fraternities and sororities, had a couple of protests. We did our kind of college thing in terms of resisting those decisions to remove race from the application.

Speaker 1:

But one of the arguments that I was trying to make back then, and I'm not sure if I made it successfully at that young age, was actually a little bit opposite of some of my peers at the time. I said it a little tongue in cheek, but I was advocating that we take race off the application. It should not matter if somebody is Black or white or whatever. It should strictly be on the merit of their high school GPA and their SAT or ACT score. That's all that should matter, right? And people, especially folks in my circle, looked at me like I was crazy. But my argument didn't stop there. What I was saying was: it should be on that merit, so take race off. But then we should take everything off that we feel isn't important, right?

Speaker 1:

If we feel like race is not important, then I don't think we should include gender on the application. I don't think we should include whether your parents went to school at the University of Florida. For that matter, let's not even include any recommendation letters. I don't even want to know if you are a top tennis player or quarterback or basketball player. I don't want to know if somebody has a physical disability. I don't want to know any of that information. That's what I was advocating for.

Speaker 1:

Let's just have a list of all the applicants. You have their GPA and their SAT or ACT score. You rank them based off their GPA and test score, and then, for however many freshmen you're trying to accept, you just draw the line, and whatever you get, you get. If you get 80 percent women or 80 percent men, that's what comes in. If you get 95 percent white people or Black people, that's what you let in. If that quarterback doesn't make that cut, he doesn't get in. If the alumni's kids don't make that cut, they don't get in. So I say, let's do it strictly on full merit, full GPA.
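The rank-and-cut admissions process described above is simple enough to write down. Here is a minimal sketch, with entirely hypothetical applicants and an invented scoring formula that just puts GPA and SAT on comparable scales; a real system would need a deliberate, community-vetted choice of how to weight the two:

```python
# Hypothetical merit-only admissions: rank applicants by GPA and test
# score alone, then cut the line at the number of available seats.
# No gender, race, legacy status, sports, or recommendation letters.

def admit(applicants, seats):
    """applicants: list of (name, gpa, sat) tuples; returns admitted names."""
    # Normalize GPA (0-4.0) and SAT (400-1600) to 0-1 ranges so the two
    # scores are comparable, then rank by the combined score, highest first.
    ranked = sorted(
        applicants,
        key=lambda a: a[1] / 4.0 + (a[2] - 400) / 1200,
        reverse=True,
    )
    # Draw the line: whoever is above it gets in, whoever isn't doesn't.
    return [name for name, _, _ in ranked[:seats]]

applicants = [
    ("A", 3.9, 1450),
    ("B", 3.2, 1550),
    ("C", 4.0, 1200),
    ("D", 2.8, 1000),
]
print(admit(applicants, 2))  # -> ['A', 'B']
```

The point of the sketch is that nothing about who the applicant is ever enters the ranking; the only levers are the scores and where the line is drawn.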

Speaker 1:

Now, clearly, when you get to that level, nobody that was advocating for taking race off of the application wants to take it that far. They just wanted to take race out of the application. But they know for a fact you can't have an incoming class that is 80 percent men, right? So they have to keep gender on there. You can't have those alumni or those boosters whose kid doesn't get in; you can't have that. You can't have the number one ranked quarterback in the country not be accepted. That's not reasonable. So of course we're going to leave some of those things on the application. And if you're going to leave those on the application for consideration, then consider race too; that was the argument we were essentially making at the time. So how does this tie into this concept of surveillance?

Speaker 1:

Big Brother, robots making decisions: it's because, as marginalized communities, we are often the ones on the short end of the stick when a human is making a decision about you being accepted into the university, you getting that job, you getting that speeding ticket, you getting pulled over, or you getting a certain decision handed down from the judge. In any of those situations, we as marginalized communities have collectively been on the short end of the stick. And we traditionally think, OK, how do you get around that? Well, you get around it by having maybe the right judge making that decision, the right cop stopping you, the right admissions director at the school, the right CEO of the company. So in some ways we've been thinking that's how we get around it, but there might not be enough of those right someones to make all of those decisions, right? When it comes to AI, and this is where I am mostly scared, we will get closer to allowing these systems to make these decisions for us: to decide whether or not somebody is going to be accepted to the University of Florida, or whether or not somebody is going to receive a speeding ticket, those types of things. But if we aren't careful, even those AI systems, and we all know this, can be biased. What we should actually be advocating for is the creation of those systems in a way that is truly diverse. If you had an admissions system at a university that was developed with community, by community and in collaboration with others, well, now we can leave it up to the robots to make equitable decisions on our behalf.

Speaker 1:

Even on the speeding ticket concept, when you think about somebody constantly monitoring you, constantly watching stuff: first of all, I'll make the argument right now. Some people feel like they are off the grid. If you have a cell phone in your pocket, if you are listening to this podcast, you are officially on the grid. You are not off the grid; you're on it. There is a company that knows exactly where you went today, at all times. They understand and know about all your conversations. They have a very good sense of who you are and what your health is, right on down the line. You are on the grid. If you listen to this podcast, I can confirm that, all right.

Speaker 1:

So if we are going to be on the grid, if we are going to be constantly monitored anyway, we should be leveraging that to our advantage. Think about a speeding ticket. Why is it left up to chance for police officers to randomly pull us over because we were speeding? Let's just think about this for a second. We know that people in Black communities are pulled over far more often than any other community. Why not advocate for a system so that, hey look, forget the cop pulling you over for speeding? We could have something at every corner in America, or in a city, and you know the speed limit on this particular road is 45 miles per hour.

Speaker 1:

If you are driving over that, we have the technology: you just mail them a ticket, period. I don't care if you're white, Black, Indian, green, doesn't matter. If you are in a car on this road and you go over 45 miles per hour, you know you're going to get a ticket in the mail. That'll do a few things. Number one, I bet it will slow a lot of people down, because you're not even trying to risk the game of: is the cop there? Can I speed up here? Can I slow down now? No. If you go too fast, you're going to get a ticket, and it's going to be in your mailbox tomorrow. That's one: it'll slow some stuff down. But then two, when we say, oh, we wouldn't want that: no, there's a lot of other people that wouldn't want that, no different than there were a lot of other folks that didn't want us to say, just take GPA and SAT scores and just cut the line on admissions. For sure, there's a lot more people that would be getting tickets under those conditions. We should be fighting for equity: everybody gets a ticket, including us, including me. If we go too fast, give me a ticket. Matter of fact, give everybody a ticket. We should lean in at this moment in time, with the different ways that we can monitor stuff. We should be leaning into those types of tools in order to bring about equity.
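The automated ticketing rule described here can be stated in a few lines. This is a sketch under the assumption that a roadside sensor reports only a plate number and a measured speed, so the rule literally cannot see who is driving:

```python
# Hypothetical automated speed enforcement: every reading is judged by
# the same rule. The sensor never sees the driver, only the speed.

SPEED_LIMIT_MPH = 45

def process_reading(plate, speed_mph):
    """Return a ticket record if the measured speed exceeds the limit."""
    if speed_mph > SPEED_LIMIT_MPH:
        return {"plate": plate, "speed": speed_mph, "limit": SPEED_LIMIT_MPH}
    return None  # under the limit: no ticket, no stop, no discretion

print(process_reading("ABC123", 52))  # over the limit -> ticket record
print(process_reading("XYZ789", 44))  # under the limit -> None
```

That is the whole equity argument in miniature: the inputs that could carry bias (who you are, what you look like, what neighborhood you're in) are simply not parameters of the decision.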

Speaker 1:

On our podcast a couple of weeks ago, I was talking to a good brother, a good friend of mine. He's a prosecutor, and we had a brief but colorful conversation around the use of AI in the decision to prosecute or convict criminals. Obviously he was making the argument that that could never be the case, that you couldn't do that, because there are so many conditions and things that one would have to consider when it comes to the prosecution of a crime. And I don't doubt that. Matter of fact, we know that to be true. All crimes are not the same. You can't just consider all burglaries to be the same. We understand that.

Speaker 1:

But that is the power of data in this moment. If anybody has ever opened up ChatGPT, if you've ever done anything with AI, you understand that's the powerful nature of it: it can handle so many different scenarios. And will it always be right? No. But I tell you what: if we said, OK, whenever this particular scenario happens or doesn't happen, this is how you're supposed to prosecute, this is what determines whether or not the person gets convicted, period. And we should not be scared of that.
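The "whenever this scenario happens, this is the outcome" idea is essentially a rule table: the same facts always map to the same prescribed decision. Here is a deliberately tiny sketch; the offense categories, factors, and outcomes are all invented for illustration, and a real system would need far more scenarios, all defined with community input as argued above:

```python
# Hypothetical charging-decision rule table: each scenario maps to one
# prescribed outcome, so identical facts always produce the identical
# decision. All categories and outcomes here are invented for illustration.

RULES = {
    ("burglary", "first_offense", "no_weapon"): "diversion_program",
    ("burglary", "first_offense", "weapon"): "prosecute",
    ("burglary", "repeat_offense", "no_weapon"): "prosecute",
    ("burglary", "repeat_offense", "weapon"): "prosecute",
}

def decide(offense, history, weapon_status):
    # Same inputs, same answer: no room for who the defendant happens to be.
    return RULES[(offense, history, weapon_status)]

print(decide("burglary", "first_offense", "no_weapon"))  # diversion_program
```

The prosecutor's objection still stands (no small table captures every real case), but the structure shows what "putting rules around it" would mean: every factor that may influence the outcome has to be written down, where it can be audited.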

Speaker 1:

What are we going to get, prosecuted more, convicted more? We already are prosecuted more, convicted more. That shouldn't be our worry. We already are having trouble getting into the University of Florida. We already are getting pulled over more. We should be the ones to say: no, let's put some rules around that, and advocate for that. And yes, there's going to be some casualties along the way, 100 percent.

Speaker 1:

But in the grand scheme of things, we could use technology to enforce equity, to make sure that things are fair. The key is, we have to make sure that we're the ones creating that technology, that we are developing it. If that can be done, that is how we scale equitable situations. That is how we scale fairness. That is how we can ensure fairness in this new climate that we're in. We can't run from this. This is like the Internet.

Speaker 1:

You can't be in business in 2025 and tell somebody that you don't do Internet things. Not even possible. You're not allowed to be in business in 2025, not allowed to exist, not allowed to have a job, be productive, or participate in this economy, if you're saying you don't do Internet things: you don't do email, you don't do texts, you don't go on websites. No, that's impossible for you in 2025. Very, very shortly, that same concept will apply when it comes to artificial intelligence. You don't have the option to say you don't do artificial intelligence things, that you don't participate in that. Oh no, you do participate in that. If you've ever shopped on Amazon, you've participated in it. If you've ever received a phone call or a text, or you've tried to figure out how in the world it knew to show you that commercial on Netflix, you're participating in the AI economy. So this is not optional.

Speaker 1:

Whether or not we participate in AI is no longer a question; it's a fundamental reality. AI is going to be here, like the Internet is here. So how might we use AI, how might we use data and those types of tools, to drive equity in these environments and in our community? I believe that the time is now for us to absolutely lean in and drive equity using the tools we have at our disposal. This is the Scratchwerk Podcast, where we don't fear the future. We create it. One thought, one idea, one dream at a time. Thank you.