Cream City Calculations

Data & Democracy

Podcast Notes:

Hosts:
Colleen, Frankie, and Sal

The Data Pulse:

Thank you to our sponsor, Continuus Technologies, for providing us use of their space and technology.

Podcast Description:

In this episode of Cream City Calculations, the hosts dive into the evolving relationship between data and democracy. They explore how political campaigns leverage data on over 200 million Americans to target voters with personalized strategies. Discussions touch on how campaigns use voter registration data, combined with information from firms like Experian, to build detailed profiles for precision targeting. The episode also looks at the rise of data-driven political ads and the ethical questions surrounding the use of personal data in elections.

Tune in for insights on how data is shaping the future of politics and how to protect your privacy. Stay informed, and stay data-savvy!

If you’re concerned about data privacy, check out our episode notes for helpful resources and tips on protecting your information.


Colleen Hayes:

Welcome to the Cream City Calculations Podcast. We're three colleagues and friends who love data and love to talk about how data is impacting our lives. I'm Colleen.

Frankie Chalupsky:

I'm Frankie.

Sal Fadel:

And I'm Sal.

Frankie:

Welcome to the Cream City Calculations podcast. Today's episode is Data and Democracy, where we're going to talk about how data is used in politics. But before we dive into that, it's time for our Data Pulse. The first article we're going to talk about is about a new group that's trying to make AI data licensing ethical. Generally, the first wave of major generative AI tools were all trained on publicly available data. They used enormous data sets, basically anything they could find or scrape off of the Internet. And now a lot of people are questioning: is that right? Is it ethical?

Colleen:

Somebody in this article makes a good point that previously, the organizations creating these AI chatbots or other models were using sort of an opt-out approach, where if you found that your data was being included in a model, you could opt out and say, I don't want my data included. But people are celebrating this shift, saying that artists and creators should be on board with this, because you want the data that's being included to be thorough and only what creators want to have included. So they're saying it's a good thing that this group, they call it the DPA,

Frankie:

the Dataset Providers Alliance

Colleen:

thank you, the Dataset Providers Alliance. They're the ones pushing for this opt-in system, and folks in that group are overall seeing it as a good thing.

Sal:

Yeah, it's awesome that they're making this a thing, but the real question is what happens as they start to work with data vendors. Who owns that data? How many points upstream do you have to go when you're collecting some of that sensitive or more restricted data? At the company I work at, we've already hit questions like, hey, you cannot train your models on any of the data that we're providing through our vendors. So the big question is, is it us regulating that, or the vendors, or do they go even more upstream to whoever we're getting it from, or even further to whoever they're getting it from? Where is that line?

Frankie:

Yeah, and one challenge with this approach is that it really limits access for companies that aren't those large, monopoly-like companies, because it makes data really expensive. There are a couple of companies that can afford that, but it limits a lot of the small companies in what they can actually utilize.

Colleen:

Yeah, the other thing the DPA is endorsing is the use of what they're calling synthetic data, data generated by AI, saying that's going to constitute the majority of training data in the near future. I read that and immediately questioned, well, if you're proposing this opt-in system, are you suggesting that this synthetic data is only coming from providers that have opted in, or is this AI-generated data from sort of an open marketplace? Is this a way to skirt around the opt-in requirement, to say, oh, we've already run this through a model, so now it's synthetic data and we can incorporate it? It's not really a question, I guess. Yeah.

Sal:

This AI landscape is going to be an ever-evolving thing. The regulators are going to push back, and the people pushing this technology are going to try to get as much as they can. It's going to be a crazy thing.

Colleen:

Yeah, it's probably going to be, like you said, a fast-evolving thing. I'm guessing that over the next five years things will ebb and flow and get pushed more one way and then another. I think it's great that so many organizations are thinking through what's ethical and what's not, what should and shouldn't be included in these training models, and what's the best way to go about making a product that is consumable but also affordable for end users.

Frankie:

For sure. And I'm really glad you mentioned that, because there's another article I found very interesting: California Governor Newsom vetoed an AI safety bill that was dividing Silicon Valley. The article talked a little bit about the bill, which basically put really strict regulations on the artificial intelligence industry. The governor ended up vetoing it, even though California's legislature overwhelmingly passed the bill.

Colleen:

Yeah, so I just want to read a little snippet from that article. It says the measure would have made tech companies legally liable for harms caused by AI models. In addition, the bill mandated that tech companies enable a kill switch for AI technology in the event that the systems were misused or went rogue. I think that's something we've touched on many times when we've talked about AI: this idea that AI can, quote unquote, develop a mind of its own, go rogue, and come up with answers that the typical human being wouldn't see as a viable response.

Sal:

You know, it's the Hollywood thing. It's the Terminator, right? We've got to put something in place so it doesn't go full Terminator.

Colleen:

Yeah, and again, I only read this one article about the bill, SB 1047, so I don't know it too deeply, but I think Newsom's point was that he felt it went too far. So, Sal, do you have thoughts on where the line might fall that would be more acceptable or more palatable?

Sal:

Yeah, actually, a little bit. Not a ton of thought on it, but I think the biggest things are going to be around what happens as these models become more intelligent and start to build out, quote unquote, thought. It's not really thought, but as we get toward real, true AI, it's about understanding how that gets implemented. So putting restrictions in place like they did in entertainment, where you can't use AI to write screenplays and things like that, those are the kinds of things they'll have to put in these regulations. I don't think it will ever get to the point of needing a kill switch because this thing is taking over everything, because you have the internet and someone will just recreate it, so it doesn't really make a difference. It's really about how we put this into implementation, and how companies and others use it as they implement.

Colleen:

Yeah. And I think it is acceptable to ask companies to conduct a certain level of safety tests, or other testing, before they make their models live for public consumption. But maybe this particular bill went a little too far in what it pushed on creators. I think it did say, too, that the concern was it would limit smaller creators, who could do just as much harm to the public as large ones.

Frankie:

Absolutely. Yeah. And some of the biggest companies lobbying against the bill were Google and Meta, which I thought was interesting, because even though the argument was that it was harmful to smaller companies, it was the bigger companies that were substantially against this bill coming into play. Although there were some people in favor of the bill, like Elon Musk. So I thought it was interesting to see the different sides of the story, how some people were confident this would help their companies even though they were large companies, while others were more wary of it because it limited how they could develop their AI models.

Sal:

Who would get the kill switch?

Frankie:

who is stuck on this kill switch?

Sal:

Yeah, but really, who would get it? Is it the government that gets it and chooses, or is it the people deciding, this is getting out of control, and we do it by voting?

Colleen:

Is it like one engineer sitting in a little window office with a big red button that says kill on it? Yeah. Frankie, there was another interesting article in the news recently about Oracle designing a new data center. Do you want to talk about that one?

Frankie:

Sure, yeah. So they're designing a data center that's going to require more than a gigawatt of power, according to their chairman and co-founder Larry Ellison. It would be powered by three small modular nuclear reactors. This is really interesting because it's a next-generation design for modular reactors, and apparently the deployment is supposed to deliver reliable, carbon-free power. But it's a little controversial, and people aren't completely sold on the idea that this is going to be carbon-free, something substantial and good for the environment.

Colleen:

Yeah, I thought it was interesting how few of these small modular reactors there are in the world: one in China, one in Russia, and another in Japan. Those would probably be the big four.

Sal:

Yeah, nuclear capabilities.

Frankie:

So I'll be curious to see if this is actually going to be allowed, whether they'll commercialize the use of small modular reactors, and whether the U.S. government will have anything to do with how that all plays out. Yeah.

Colleen:

They're also saying he's not disclosing the location of the data center, which was kind of interesting to me. I don't know how far along the plans for something like this would be at this point, but I feel like if it were close to actually happening, they would at least have a location selected. So I wonder if it's a matter of not wanting the public to get worried about it, or if they really aren't far enough along in their planning to have a site selected.

Sal:

With this, you're adding three small nuclear reactors, right? The article didn't talk about this, but are they going to be federally run so they supply energy to the whole community? Or is it just for this data center? And then will every data center, as they pop up, have its own small nuclear reactor? I was just going to bring that up.

Colleen:

Yeah, is that going to become the standard?

Frankie:

Well, and who's paying for it? Is Oracle paying for the power and the nuclear reactors to be added, or is this a government-funded project? Because that would be interesting too: are data centers going to be more responsible now for the amount of power they're consuming?

Sal:

Which in turn falls on the people, right? For these data centers, the cost is going to go upstream to OpenAI and all these other AI platforms, and then we're going to have to pay more money to use them. And then you kind of see the snowball effect on that,

Colleen:

right? And then you have to think, too, this is not pocket change. If it's Oracle that's going to be paying for these, that's going to cost a lot of money. If there are only three in the world, this is an expensive product, and that's going to really limit smaller data or AI organizations from implementing the same type of strategy, because they just won't have the resources that Oracle does.

Sal:

Oh, absolutely. It's going to monopolize that industry, because only certain people or certain companies can afford it, for sure.

Frankie:

Right.

Sal:

But that's what they keep saying: this is going to be the next big industry, building out these data centers all across the United States so they can handle the GPU load needed to build out these additional AI models or LLMs. Yeah, that was a great Data Pulse. Now onto our main topic, data and politics. The first article we're going to talk about is how political campaigns use data. I think this is a really interesting overall topic, because as we head into the presidential election coming up in November, we're seeing every day how we're getting bombarded with information. So understanding how that data is being pulled in and used, and I won't say manipulated, but used to direct our votes, is going to be really important. I think understanding how that system works is a great topic for this

Colleen:

Yeah. And this article came from Reuters, and I think they did a great job of making this an understandable topic for non-data people. There are a lot of infographics in the article that really paint a picture of where your data comes from and how it's used throughout the process of a campaign.

Frankie:

Yeah. And they open with a fact that caught my eye: in the United States, political campaigns use data on more than 200 million voting-age Americans to inform their strategies and tactics. It's insane that they're able to capture that much data on that many people. I was really awoken by that. Yeah.

Colleen:

I mean, they are using this data to help decide who to target with their campaign messages and how to best deliver those messages so that folks actually respond to them. I thought that was a really good point, too: not just who is undecided or who should we target with our campaign messages, but what's the best way to reach Gen Z versus Gen Xers, for example.

Frankie:

Well, apparently it's text messaging. As Colleen and I were talking about this, we probably get, at least ten political text messages per day. Yeah. But I don't answer any of them.

Colleen:

Yeah, no, me neither, except to say stop. These days I feel like I'm getting so many. That was an interesting point. There was a separate article I came across where most of the article was a video by a woman who is the analytics director for the Democratic National Committee. The video is a bit older, it came out in 2019, but she made the point that prior to 2014, the use of text messages in political campaigns was relatively unheard of, and in recent campaigns the budget for text message outreach has grown to 30 percent of the overall campaign budget. Which shows, like Frankie was saying, we were kind of lamenting before we started recording about how many political text messages we've been getting recently. And I wonder if they're really hitting the mark. If we're getting so many messages all along the same lines, are those text messages really that effective, or are they being targeted effectively?

Frankie:

They're definitely not being targeted effectively. I think about all the people I talk to within my network, and in a lot of our conversations, even if you're in a meeting, people bring up how annoyed they are with the amount of text messages they're getting. Nobody's talking about actual politics. We're all just annoyed with the text messages.

Colleen:

Yes, yeah,

Sal:

Yeah, it's actually making me not look at any of the text messages. I don't know what's true anymore, and it's hard because they just keep putting new things in there. They're like, this person said this, or they're saying this and everybody supports it. It's so much information that I'm struggling to weed out what's true or what's important for me.

Frankie:

Yeah, I also get text messages for somebody named Caitlin, and they've given me her address. I'm like, this is probably not okay, right? They're sharing this personal information with me, and I'm not Caitlin. They gave me her address. I could go and find this girl.

Colleen:

Oh, you know what, I've gotten text messages from voters' rights organizations that just want to make sure you're registered to vote. I've gotten messages like that, for me, not for the wrong person, basically saying, we see you live at this address, here's where you would go to vote. Which can be very useful, but not useful if you're getting messages for the wrong person. That's not good.

Frankie:

Yeah, and I get so many messages for this girl, Caitlin, almost all of my political messages are sent to Caitlin. I have gotten one or two sent to my sister Phoebe, so she clearly used my phone number for something.

Colleen:

I was going to say, I don't watch a lot of over-the-air television anymore. The only times I really do are probably for sporting events. But I think the money campaigns would spend on television political ads may be shifting, and I can understand why they're shifting toward ads through social media or these text messages, because folks are on their phones more often and use streaming services more often than regular cable news or cable TV

Sal:

as we did in the past. Yeah. That's a really great

Frankie:

point. So one thing I did want to touch on is how they put together their databases and gather all their information. They're creating their database based on your voter registration. They can purchase data from the state around your registration and your voting history. They create a profile around each individual and their stances on the issues and the candidates, then they purchase more data from firms like Experian, which can include data around your property, your income, your consumer purchasing patterns, or other demographic information. So they're sewing all that data together, and this is something we've talked about a lot on this podcast: the power of data when you put it together is incredible. So that's what they're doing, how they're targeting people and figuring out all that information about you.
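Show notes sketch: for listeners who want a concrete picture of the data-stitching Frankie describes, here's a minimal Python/pandas sketch. Every file, column, and value here is invented for illustration; real voter files and commercial data sets look different and come with licensing restrictions.

```python
import pandas as pd

# Hypothetical slice of a state voter file: registration plus turnout history
voter_file = pd.DataFrame({
    "voter_id":   [101, 102, 103],
    "party":      ["DEM", "REP", "UNAFF"],
    "voted_2020": [1, 1, 0],
    "voted_2022": [1, 0, 0],
})

# Hypothetical consumer data purchased from a broker
consumer = pd.DataFrame({
    "voter_id":        [101, 102, 103],
    "est_income":      [85_000, 120_000, 42_000],
    "home_value":      [310_000, 450_000, 150_000],
    "animal_donation": [1, 0, 0],   # e.g. gave to an animal shelter
})

# Stitch the two sources into one profile per voter
profiles = voter_file.merge(consumer, on="voter_id", how="left")

# Simple derived flags a campaign might compute from the joined profile
profiles["likely_voter"] = (profiles["voted_2020"] + profiles["voted_2022"]) >= 1
profiles["gets_animal_ad"] = profiles["animal_donation"] == 1

print(profiles)
```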

Sal:

I have used Experian data and built out models for marketing materials, so it's the exact same thing, and I'm pretty familiar with that data. A lot of this stuff is called Mosaic, and what it does is give a really good background on what people's spending habits are and what kinds of interests they have. So you can pretty much identify a block group, or a specific block, and say, hey, this block has these habits, they all like to do this thing. You can really understand your voting constituents in certain areas just by looking at that data and saying, all right, they tend to subscribe to hunting magazines, this whole neighborhood is big on hunting, are they more Republican in that case, and then mapping that out and clustering it together. I think it's crazy how much information you can easily pull on people and regions and things like that. It's amazing.
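Show notes sketch: a toy version of the block-group segmentation Sal describes, using k-means clustering on invented neighborhood features. Real Mosaic segments are proprietary and far richer; this only shows the shape of the idea.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Invented features per census block group:
# [hunting_mag_subs_per_100, pct_households_with_kids, median_income_k, donations_per_year]
block_groups = np.array([
    [12.0, 45.0,  62.0, 1.1],
    [ 1.5, 20.0, 110.0, 0.4],
    [10.5, 50.0,  58.0, 1.3],
    [ 2.0, 18.0, 120.0, 0.2],
    [ 9.0, 40.0,  66.0, 0.9],
])

# Standardize so one feature (income) doesn't dominate the distance calculation
X = StandardScaler().fit_transform(block_groups)

# Group neighborhoods with similar habits into two clusters
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # block groups with similar profiles land in the same cluster
```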

Frankie:

Yep. So then, once they have their data sets put together, they can create predictive models around that data. They're trying to determine if a person is going to support a particular candidate, demonstrate a certain behavior, or think a certain way about controversial topics, and the key here is: are they going to change their position? So they're really targeting those people who go back and forth between parties, because that's the group determined to be more easily influenced.
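Show notes sketch: a bare-bones stand-in for the persuadability models Frankie mentions, just a logistic regression on invented voter features. Real campaign models are far more elaborate; this only illustrates scoring a "will they change their position" probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per voter: [times_switched_party, turnout_rate, age, donated_recently]
X_train = np.array([
    [2, 0.75, 34, 0],
    [0, 0.90, 61, 1],
    [3, 0.50, 29, 0],
    [0, 0.95, 70, 1],
    [1, 0.60, 45, 0],
    [0, 0.80, 55, 1],
])
# Label: 1 = switched positions in a past cycle (the "persuadable" group)
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score new voters; outreach dollars would go to the high scorers
new_voters = np.array([[2, 0.65, 38, 0], [0, 0.92, 66, 1]])
print(model.predict_proba(new_voters)[:, 1])  # estimated probability of being persuadable
```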

Sal:

Yeah, how do you guys feel about that? For these political campaigns, do you think it's a good thing that they are, in a way, understanding and targeting their constituents, or the people they can convert?

Colleen:

Yeah.

Sal:

Yeah. Influencing; converting is a bad word. How do you feel about that?

Colleen:

Well, I guess it's smart, right? Why spend your money and your time trying to convince a person who's never going to change their mind?

Frankie:

From a business standpoint, it makes a ton of sense. Politics and government really are about running a business, or running the country, so it makes sense that they would be utilizing data and paying for it to their advantage, right? That totally makes sense, and I understand how they're running their business. But from a personal standpoint, I would be more hesitant to rely on any information that's sent to me through a targeted message. I would personally rather do my own research and find my own sources, because I'm just not confident in the data and information they're sending to me.

Sal:

Yeah, it makes a ton of sense. I think it was around 60 percent of registered voters who actually turn out to vote. Being able to understand and send that material to people who typically don't turn out to vote, or who maybe don't have enough understanding of the information or what's really at stake here, that's where I lean toward it being really good, because maybe it can get more voters out, maybe it makes for a broader American decision. But in other ways, just like our AI discussion, what data is going into this? Is there any regulation on that? What am I exposing, and do I have any say in what I'm exposing to political campaigns? So I definitely go back and forth constantly on this.

Colleen:

Yeah. And I think it's important, too, that campaigns still communicate with the entire electorate, even though you realize you're not going to sway people who aren't aligned with your political beliefs. There's a good section of the population on both sides that won't be swayed from their political parties. But I do agree with Frankie that it'd be great to have, here's a link to the source for this, here's a link to the study that was done, here's a link to the article that was written about this. I think that makes it more palatable, at least to me, to take a message for what they're saying it is.

Frankie:

Yeah. And if they have the data around me, my personality, my beliefs, any petition I've signed or anything like that, how do I know they're being honest with me and not just using the data to their advantage? How do I know that candidate actually believes in what I believe in? I'll give an example: I love animals, so I would agree with a candidate who also believed in protecting animals and keeping them from being harmed. So what if they targeted me and said, oh, this candidate believes in animal rights as well, and they want to put a bill in place to protect animals? But what if it's not true, and they're just using that data to their advantage?

Colleen:

Yeah, I guess that's kind of always the fear, right? They definitely are going to target you, because they can probably tell from this data they purchased somewhere that, let's just say, we're going to pretend Frankie made a big donation to some dog shelter, which is something I could see Frankie doing. She made a donation to this animal shelter, so they're going to target her with messages saying, hey, this particular candidate really loves dogs, here's a picture of the candidate with their dog. And I think that's fair, right? If I care about certain things, I'm going to want to investigate each candidate for how they feel about that particular thing. But again, I think you've got to bring your receipts, and you have to be an informed voter. I don't want to put this all on the campaigns themselves, but I think we should each have a commitment to really looking for the truth and going after the information that's out there to see how a candidate has voted on a particular issue in the past. They may say, I'm pro this or pro that, but if they voted against that in the past, then it's probably just lip service. So I think we also have to be willing to do our own homework, and unfortunately, I think a lot of people don't do that any longer.

Sal:

Yeah, absolutely. It's crazy because like literally as we're sitting here, it's 10 am and I've already got three political text messages on my phone just in the last 40 minutes that we've been on this.

Colleen:

Yeah, it's getting to be a bit much. And honestly, like I said earlier, I've started just unsubscribing en masse from all these political texts, regardless of who they're coming from, because most of them are just asking for money. It's the same message over and over again from slightly different groups, and it no longer really has any value. So I don't know if that's the type of message you get, Sal, but I feel like they're kind of missing the point on some of their messaging.

Sal:

100 percent, that's exactly it. I get them the same way.

Colleen:

Yeah,

Sal:

And I never reply, because I never know if that's just going to flag, oh, this person does respond, and then I get even more. I mean,

Colleen:

I think legally they have to unsubscribe you if you've asked to be unsubscribed. I believe there are legal repercussions if they continue to send you messages after you've asked to opt out; I think they could be fined for continuing to message you. But then I think, too, okay, well now I'm engaging. Should I just not engage at all?

Sal:

Yeah. I just want to send back a fax machine sound, like, this number doesn't even work.

Colleen:

You think people still have fax machines, Sal? That's cute. I don't know. I do want to call out this one article that was talking about how political campaigns use your data. I thought it was interesting that they talked about how some organizations might target you based on your phone's location, saying that might be useful for reaching voters who are not in the campaign's files already. But I was thinking about what that would look like in practice. Does that mean, okay, I've gone to the pharmacy to pick up a prescription, am I now going to get political ads from some candidate who promises to bring down the price of prescription drugs? Or is it more specialized, or more broad than that?

Sal:

So I have some experience in this. What I think they're doing, and again, I don't know this from the article or from any actual fact, this is just what I've seen in real life, is based on the places you go, they cluster and understand everybody who's in that same location or going to those same places, and ask, are there people with very similar traits to the people already in my system? Then they say, oh, based on the habits of these other people doing the exact same kinds of things, let's say you go to the grocery store, then you go to volleyball at the same place, a lot of your kids are around the same ages, they can predict that you're going to have kids around this age and that you're going to be influenced in this way, because the cluster or group that you're in is very similar. So I think that's where they're going to come in. It's almost like they're going to know exactly what you're doing and who you're affiliating with, so they can have a better understanding of who you are.
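Show notes sketch: the lookalike idea Sal is describing, find people whose habits resemble people already in the file, reduced to a nearest-neighbors lookup. All the features and people here are invented.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Known supporters already in the campaign's file:
# [grocery_visits_per_week, youth_sports_visits_per_month, gym_visits_per_month]
known_supporters = np.array([
    [2, 8, 4],
    [3, 6, 5],
    [2, 7, 3],
])

# People seen at similar locations but not yet in the file
unknown_people = np.array([
    [2, 7, 4],   # moves through the world a lot like the known supporters
    [0, 0, 12],  # very different pattern
])

# Distance to the nearest known supporter; small distance = strong lookalike
nn = NearestNeighbors(n_neighbors=1).fit(known_supporters)
distances, _ = nn.kneighbors(unknown_people)
print(distances.ravel())
```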

Frankie:

Yeah, I totally agree with everything you said. Another perspective I could add is: now you know where people are spending their time, so where can you market? That's why I think they utilize text messaging so much, because a phone is pretty much constantly with the individual, we're all attached to our phones. So that's a really easy place to target, because everybody's got their cell phone. But then I think about the generation that's a little bit older. Older people don't necessarily have the cell phone or the text messages and things like that. So campaigns might need to look at where they should be advertising to reach that generation and get their vote. Where are those people going, and what are the common places they're all headed to?

Sal:

So, kind of a side tangent here, but I don't know if you saw that Ford filed a patent for in-car advertisements. I

Colleen:

don't know. I didn't see that one.

Sal:

I think this directly relates. It's not just your phone, it's going to be your car, most likely. It's going to be all

Colleen:

Opt out.

Sal:

Yeah, exactly. But literally, they just filed a patent in September to do in-car ads, and what they want to do is, as you drive toward a grocery store, they'll advertise for that grocery store, coupons and stuff like that. So they're going to know exactly how you interact with this. And especially now, as it goes into politics, they're going to have more information: how are you traveling between places, what are the places that you're going to?

Colleen:

Can I just say that immediately makes me think of that scene in WALL-E. Have you guys seen that Pixar movie, WALL-E? Yep. Yeah. Where they're all riding around on their chairs with this big screen projected in front of their faces, and it's just ads and stuff, constantly, as they move from place to place sitting in a chair. That's what it made me think of when you said that, Sal. We're becoming the people from WALL-E. We are becoming that, for sure. This article we've been talking about comes from a group called the Electronic Frontier Foundation, and I think it's interesting because they've got a section about what you can do to better protect your privacy, which again is something we've talked about on this podcast before. It's a really great list, and we'll have links to all these articles in our show notes, but there are things you can do to protect your privacy. It's a pretty thorough checklist of different things you can do beyond just your phone; you can also opt out of things like tracking on your TV or streaming device, which is not something I had thought of before that article.

Sal:

Yeah. I always look into these and I always want to do them, but maybe I'm skeptical on this. On my phone I don't allow tracking or any of that, but then you can't use some of these things if you don't allow it. So is there any other option? If you say, hey, don't collect my data, or you don't accept the cookies on every website you go to now, you can no longer look at that information. So you're stuck, because you're saying, hey, I don't want you to collect my information, but I like the stuff that you're providing me. It's just give and take, and I'm like, man, there's no way out of it.

Frankie:

Absolutely, yeah, or the location tracking where you have to turn it on to utilize the app. Yeah, for sure.

Colleen:

I'm always surprised by how many sites I go to where that pops up: share your location with this website? No, man, you don't need it, you're a printing company or whatever, why are you asking to track that? It just seems a little overused these days.

Frankie:

For sure. Yeah. And so one of the interesting things I wanted to bring up was Roku. This article talked about how Roku pitched to potential political advertisers that there's an opportunity for campaigns to use its data like never before and reach households in particular districts where they might need to get out the vote. Roku has at least 80 million users. We have a Roku TV, and we see so many political ads on it, so it adds up, at least to me, that they're really pitching and trying to sell this to political campaigns because of their location data and all the data they have on their customers.

Colleen:

Yeah.

Frankie:

Amazon and Netflix are the only big streaming services that don't accept political ads. So that's a fun fact, and I'm not sure if that's up to date.

Colleen:

Yeah, I just want to call out this other part, because I made a point to comment on the same section. It says ads can also integrate data based on what you've watched, using information collected through automated content recognition. I feel like that's a full Big Brother moment, where they're collecting information about what shows you've watched on Roku, and which you haven't, to probably better complete that profile of you as a consumer, whether it's political data or otherwise. Yeah.

Sal:

Those ads, I imagine, go to the highest bidder, whoever can pay for them. So are they fundamentally one-sided? You're only seeing, through the media you get, one-sided ads based on whoever your provider is. So yes, I think it's crazy that they can track every little bit of data that you have.

Colleen:

And I think it's crazy, too, the granularity with which they can do their targeting. They can look at the fact that this particular household in this neighborhood watched this particular show, which may make them more susceptible to certain messages, and they can target an ad at a certain household but not their neighbor, even if they're watching the same show. So an ad that I get when I'm watching, I don't know, The Big Bang Theory or whatever might not go to my neighbor, because they may have watched other shows on Roku that make them less susceptible to that type of messaging.

Frankie:

The last fun fact I wanted to pull from that article is that in 2020, OpenSecrets found political groups paid 37 different data brokers at least 23 million dollars to access or service their data. Yeah. 23 million.

Colleen:

That's insane. I wonder what the statistics will look like for this 2024 election when that information is released. I bet it'll be double. Yeah, probably at least double.

Sal:

I mean, as they say, data is everything, right? No company and no political campaign can survive without it. And I think that gets us back to our overall topic of how data is being used in politics. As a listener and as a constituent, it's about understanding how your data is being used, understanding what things you do want to share and what things you don't want to share, and understanding that it's going to feed into the way you vote, the things you shop for, even the beer that you drink. There's going to be so much of it being used.

Frankie:

So moving on, there's also an entire program now for applying big data for political success at George Washington University, which I thought was really interesting. They're gearing politicians, or potential and future politicians, toward data; they want them to understand that data is one of the keys to success in campaigns. The whole program is dedicated to digital strategies, how to target and contact voters, and maximizing your social media and digital content creation, those kinds of courses. There's not much more to pull from that article, just the interesting fact that it's an actual program now. I think it's going to be very popular for any political science program at any school.

Colleen:

Yeah, it's called the Shankman Initiative, and it's at the George Washington University in Washington, D.C. I had the same takeaways; it's just interesting that this group exists.

Sal:

I definitely think this is going to be at more universities as time goes on, because it's going to have a big impact. And I think a lot of those political science majors will probably move into more of this realm.

Colleen:

Yeah, and it's a comment I made to another colleague the other day: back in the early 2000s, there were people in the workplace who would say, oh, I'm not a techie person. And over the last 20 years we've seen the shift of people in all careers having to become more technical, right? Like we've reiterated, we all have our phones on us; we're basically using computers 24/7. I think we're going to see a shift in the next 20 years where everyone's going to have to become a data person. So this idea of incorporating a group within the poli sci arm of the school to utilize data and study the proper ways to use it is, to Sal's point, going to become more widespread at other universities. There are going to be people who go for healthcare degrees who are data people, and people who go to school for poli sci degrees who are data people. I think that's going to be the new push going forward. Do we want to talk about this Nate Silver article at all? Sure. What we're referring to here, he's calling it the Silver Bulletin now, but it used to be the 538 election forecast. This did come up in the news recently because one of the data points they're using, and I'm scrolling to try and find it right now, has been identified as being pretty right-wing, and they're saying it's no problem because they've already identified that particular group as a little more right-leaning. But I think Nate Silver has come under fire for making comments that way in the past. This is an article that gets continuously updated; at the very top it says last updated 1:15 PM on Thursday, October 3rd. So it may be something that was noted at the top last week and has since fallen off; I'd have to go look it up. But it's something that has come into question, that they're using data from sources that are heavily leaning one way

Sal:

or the other

Frankie:

One thing that was interesting in their data was that they adjust for whether polls are conducted among registered or likely voters, and for house effects. They also weight more reliable polls more heavily, while polls they don't think are as reliable are weighted lightly. So I thought that was kind of interesting, a different way of looking at this data and trying to make it more accurate. But it could be making it less accurate, too, if their judgment of whether a poll is reliable is incorrect.

Sal:

As I look through it, it's amazing how close the race is. Yeah, it is neck and neck. And with every one of these polls, if you go to the actual poll, I'm looking at Wisconsin, the Marquette University one, the error rate, or I forget what they call it, the margin of error, is 2 or 3 percent. They're saying it's half a percent or a percent and a half toward the Democrat, but you have a margin of error, so it's like, is it or is it not?

Colleen:

Yeah, right. If your margin of error is larger than whatever you're considering this particular candidate to be ahead by, one thing may override the other. I do think it's very interesting that the election is this close
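Show notes sketch: a rough illustration of the reliability weighting Frankie described and the margin-of-error point Sal and Colleen just made. The polls, weights, and margins below are invented, and this is not the Silver Bulletin's actual methodology.

```python
# Hypothetical polls: (candidate_A_pct, candidate_B_pct, reliability_weight, margin_of_error)
polls = [
    (48.5, 47.0, 1.0, 3.0),   # higher-rated poll, full weight
    (47.0, 48.0, 0.6, 4.0),   # lower-rated poll, weighted more lightly
    (49.0, 47.5, 0.8, 3.5),
]

total_w = sum(w for _, _, w, _ in polls)
avg_a = sum(a * w for a, _, w, _ in polls) / total_w
avg_b = sum(b * w for _, b, w, _ in polls) / total_w
avg_moe = sum(m * w for _, _, w, m in polls) / total_w

lead = avg_a - avg_b
print(f"Weighted average: A {avg_a:.1f}% vs B {avg_b:.1f}% (lead {lead:+.1f} points)")

# A lead smaller than the margin of error isn't a meaningful lead
if abs(lead) < avg_moe:
    print(f"Lead is inside the ~{avg_moe:.1f}-point margin of error, effectively a toss-up.")
```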

Frankie:

Well, that's a wrap on today's episode on data and democracy. The purpose of the discussion was to open everyone's eyes to how political campaigns are utilizing our data, and why we're getting 35 text messages a day. We'll definitely include the article that has all the suggestions on how to hide and protect some of our data. Thank you to our sponsor, Continuus Technologies, for providing us use of their space and technology. If you loved today's episode, make sure to subscribe and stay up to date on other topics related to data. Thanks for listening to Cream City Calculations, and until next time, keep calculating!