Grandpa Is Him
A general microcast about everything and nothing in everyday life. True short stories, family fun, some true crime, anything that I find interesting. In short, it is about everyday life, as lived by everyday people, presented in a fun and entertaining way.
Episode 5 - Capitol Riot and Your Privacy
In this episode we dig into the investigative tools used to search for suspects in the January 6th riots. We uncover some powerful tools, identify some serious privacy concerns, and help you decide whether this is a real problem or just another conspiracy theory.
Keywords
microcast, policing, January 6th, surveillance technology, privacy, facial recognition, Clearview AI, geolocation, geofencing, law enforcement
Summary
In this episode of 'Grandpa is Him', host Lynn Dimick discusses the implications of surveillance technology in law enforcement, particularly in the context of the January 6th Capitol riot. The conversation explores the differences in policing between the U.S. and the UK, the role of facial recognition technology like Clearview AI, and the ethical concerns surrounding geolocation data collection. Dimick emphasizes the need for a balance between security measures and the protection of individual privacy rights, highlighting ongoing debates about the legality and morality of these technologies.
Takeaways
- The microcast focuses on personal experiences and societal issues.
- Policing shows often dramatize real-life events for viewership.
- January 6th is a significant date in American history.
- Surveillance technology has evolved significantly post-January 6th.
- Clearview AI collects images without user consent, raising privacy concerns.
- Facial recognition technology is not always accurate, especially for marginalized groups.
- Geofencing can track individuals' locations, raising privacy issues.
- Legal implications of surveillance technology are still being debated.
- The balance between security and privacy is a critical issue for society.
- Ongoing dialogue is essential to protect civil liberties.
Sound Bites
- "This is just for our little piece of the world."
- "January 6th has etched itself into American history."
- "Clearview AI has more than 30 billion images."
- "Facial recognition is not as accurate as they say."
- "Geofencing raises big questions about privacy."
Lynn Dimick (00:08.472)
Welcome to episode five of Grandpa is Him, a fun microcast where we talk about everything and nothing as they happen in ordinary life. Why do I call it a microcast? Because this is not for the whole world. This is just for our little piece of the world. Therefore, it's a microcast, and I'm your host, Lynn Dimick. One of my favorite things to do is watch police procedural shows like Cops, Live PD,
On Patrol Live, and others. And I even like to watch shows produced by the BBC about policing in the UK. I understand that these are not truly real-life TV shows the way they claim to be, because they can pick and choose what segments they are showing and what content we receive. For instance, Live PD often has eight or nine TV camera crews out and about, and they get to pick which of those eight or nine they're going to show you. Remember that their primary intent is to garner viewers
and they want to show things that will draw people in. I find it interesting to see the differences between the way they do things in Great Britain and the way the police operate here in the United States. In Europe, the police have less of a regard for people's rights than they do here in the United States. For example, during COVID, in Great Britain, it was not uncommon for the police to use automated license plate readers to identify cars that were outside of their home county during the COVID lockdown.
They would literally use that as a reason to pull them over and investigate why they were not at home. And in Britain, this is legal. That type of thing did not happen here in the United States, and that's just one example of how things are different between countries. And that leads us to today's point of discussion. In the United States, there are certain dates that I can mention, and you'll know exactly what those dates represent. When I say Black Friday or December 7th, 1941,
or 9-11, you know exactly what I'm referring to. It's not just the date, but the things surrounding it. The memories and emotions they stir up. One of those dates that I think is being added to our lexicon is January 6th. That is the day that has etched itself into American history. It's not just because of the chaos at the U.S. Capitol, but also for the unprecedented investigation that followed. This may not seem like it has a great impact on your life,
Lynn Dimick (02:34.872)
But it really does. And that's what I want to talk about. In their efforts to find the truth and to identify the people responsible for the actions of January 6th, law enforcement has brought out a lot of new technology in the name of security. And I want to talk about some of this technology and how it may impact our privacy in this digital age. Briefly to review, on January 6th, 2021, a large crowd gathered to protest and they stormed the Capitol building.
Some estimates say that there were 10,000 to 20,000 people, while the AP, the Associated Press, had a report that upwards of 120,000 people attended. Either way, it was a large gathering. One of the first things you need to do after an incident is to try and find data. Why? To reconstruct what happened, identify those guilty, and try to bring them to justice. One of the primary tools used was the surveillance footage and police-worn body cameras.
By the time this incident was over, law enforcement had recovered more than 4,000 hours of high-resolution video from dozens of fixed security cameras and more than 2,000 hours of video from body-worn cameras operated by the police who responded to the riot. Managing the body camera footage alone required a team of 60 people who laboriously completed a 752-page spreadsheet
detailing the relevant clips where they're trying to identify people and incidents. Body cams have been in use for many years now. Several years ago, I was talking to a law enforcement friend of mine and he told me the following story. He rode a motorcycle for the California Highway Patrol in the downtown Los Angeles area. At that time, he did not have a body cam as they were not in use. Instead, they had pocket tape recorders that they carried in their shirt pocket.
They would turn it on before every citizen interaction. Sometimes the CHP would receive a citizen complaint against my friend. Upon hearing of the complaint, the watch commander would call the citizen, listen to the complaint, get the details of the stop, and then offer to sit down with the citizens and review the audio recording of the stop and then take any necessary action based upon the review. The citizen did not realize they were being recorded at the time.
Lynn Dimick (05:01.614)
Not once in my friend's career did the citizens ever show up or follow up on their complaint. The point is the recordings exist to protect the public as well as the police. Now the next place they turned to in the investigation of January 6th was to the public. The FBI received more than 300,000 tips about people that were involved in the events at the Capitol. Do the math. If there's 20,000 people and 300,000 tips, that's an awful lot of people.
that thought they knew somebody that was involved. Two thirds of the identifications that they ultimately found came as a result of these tips. Most of those tips were generated as a result of the participants being involved using social media. For instance, tips referencing Facebook posts resulted in identifying 388 people. Instagram and Twitter, or X as they now call it, were used to identify and charge another 188 people.
Now there's a program called the FBI Face Services Unit, and what they do is they compare video footage with passport and state issued driver's license photos, but that only accounted for 25 identifications. There were other sources that I call the once removed, and that included two that I want to pay particular attention to. One of those is Clearview AI. The other is something called Geofence Warrants.
By using these two tools, as well as additional sources, in the 11 months following the riots, 884 people were identified. After three years, 1,230 people have been charged or convicted. Now I'm willing to bet that most of us have not heard about Clearview AI. It's a facial recognition search engine that creates, maintains, and updates a massive database. And I'll tell you how massive it is.
It has more than 30 billion, that's billion with a B, images that have been collected and scraped from public websites and social media platforms without user consent. This practice has raised some major concerns. Let's review the role that Clearview AI played in this law enforcement effort to help you determine if this should be a major concern in your life. Clearview AI is a tool that's been used by law enforcement
Lynn Dimick (07:25.76)
and other organizations here in the US and around the world. While discussing the role of Clearview AI, Hoan Ton-That, who's the CEO of Clearview AI, told a magazine called Spectrum that the court filings did not necessarily reflect how often this technology was used. The specific quote he used was, law enforcement don't always have to disclose that they found a certain person's information through facial recognition.
All right, that's the first part of it. Hoan Ton-That noted that Clearview's algorithm is not yet admissible in court. I want you to keep that statement in your mind as I go through this next section. Any identification that Clearview makes from open source imagery requires further vetting and confirmation. In other words, it's not the proof, it's a tool. Without providing specifics,
he suggested that Clearview's system was used by the FBI. He said, as a company, it was gratifying for us to play a small role in helping to apprehend people who caused damage and stormed the Capitol. Now, the Capitol riots were not the first time that such technology has been used. Facial recognition was reportedly used to identify protesters at a Black Lives Matter event in New York City in 2020,
and also at other protests across the United States. Now, let me explain what the problem here is. Because it's not admissible in court, it's treated like a lie detector test, which you can refuse or submit to voluntarily, and the court cannot hold your participation or the results against you. It cannot be used as evidence in court. It can, however, be used by the police as an indicator of possible deception
or possible involvement in their investigation. The problem with the facial recognition is that you don't get the choice to participate or not. If you've got pictures out on social media, you are in their database, whether you want to be or not. That means that they can look at your pictures without your consent and you can't do anything about it. And I'll explain in a minute why that's a problem. Within the United States, the ACLU, who I disagree with on several positions,
Lynn Dimick (09:47.016)
sued Clearview AI in Illinois. They alleged that this data collection was a violation of the Illinois Biometric Information Privacy Act, which requires that companies must obtain explicit consent before collecting biometric data like facial scans. I don't know of any other states that have this, but I'm willing to bet that we're going to start to see more states do this. Until then, your picture's in the wild.
Clearview settled the lawsuit in May of 2022 by agreeing to limit its operations in Illinois and Illinois alone and offering an opt-out option for Illinois residents. But that means you have to go ask for it. Regulators in countries like Canada, Australia, and this one I find interesting, the UK have declared Clearview AI's practices illegal under their respective data protection laws. Clearview AI
has been ordered to delete all data collected on residents of those countries, which is just a good first step. I'm sure that many of you are thinking, well, if I'm not doing anything illegal, why should I care if they have my picture? The problem is that the facial recognition is not as accurate as they say it is, particularly for women, people of color, and other marginalized groups. Their system is simply not good enough.
This lack of accuracy makes it appear that it has biases against certain groups, and it has led to false arrests or misidentifications in law enforcement use. The company, of course, asserts that their technology is unbiased, and the technology and the formulas may be, but the results are not. They are not always 100% accurate and they do create problems. The fact is that the algorithm
or formula that Clearview AI uses to identify faces is not good enough for the US courts to allow it as evidence. Now let me explain how this works. And this happened to at least four different people. For some reason, it pulls your picture up and says, we think this person did it. The police look at the picture and they know that since they can't use it as evidence, they don't have to report it. So what do they do? They take your picture that they found on Clearview AI
Lynn Dimick (12:10.515)
and they compare it to your passport photo or your driver's license photo and they say, yeah, it kind of looks like him. Let's go arrest him because now we've got a video that we can match it to. They arrest you based upon the second source of your picture or video, which they obtained based upon an initial inaccurate identification. The second source is really just a biased confirmation. In other words, they're seeing what they want to see.
You go to jail and you have to prove that it's not you. The problem is that it may take time to prove it. In one case, it took nine days to prove it. Well, many of us have jobs where if you don't show up for work three days in a row, you are considered to be forfeiting your job. Imagine that you've been employed somewhere for about, I don't know, 19 years and you miss three days of work because you're in jail, wrongfully arrested, and your job is gone. Think about the consequences.
Or if you are required to inform your employer that you have been arrested, your job may be terminated without a conviction. This creates all kinds of problems with things such as retirement benefits, medical benefits, life and health insurance, and other things that you need to support your family. Gone. Why? Because some software that has not been vetted for legal purposes thinks you did something wrong.
There are reports that suggest that Clearview has been used by authoritarian regimes, which raises fear about its use in suppressing dissent or targeting activists. Think about the countries in the world today that you would consider to be authoritarian. Can you imagine the suppression of people and rights that will happen in places such as North Korea, Russia, and other places where they don't care about people's rights? Here in the United States, some cities, including San Francisco, have
banned the use of facial recognition technology by law enforcement, partly due to concerns raised because of some companies, particularly Clearview AI. The next area of concern is geolocation or geofencing. One of the other key tools that investigators turned to after January 6th was something called geofencing. So what is it? Geofencing is a digital technique that creates or defines an invisible boundary
Lynn Dimick (14:33.447)
like a virtual fence, around a specific location. You may have heard of the invisible fences people use in their yards. They place or bury a wire under the ground around their yard, and if their dog, who has a collar, tries to pass that line, it sends a mild shock to the dog. It's a way to keep the pet from going past the appropriate border. In this case, law enforcement was able to say, we want to find data around this particular location
at this particular time. Companies like Google and cell phone carriers were compelled via geofence warrants to provide information on devices that were in and around the Capitol during the riot. The results? They identified 5,723 devices near the Capitol that day and out of those 1,535 were flagged as likely being inside the building itself. And there's that word likely. With this data,
the authorities were able to pinpoint individuals' locations during the chaos, which ultimately led to many arrests. It's impressive and very specific, but it's also unsettling, and let me tell you why. Because while geofencing is powerful, it also brings up big questions about privacy and how much of our digital footprints are up for grabs. Where did the data come from? It came from cell tower data. The information on which cell towers
a phone connected to revealed approximate locations. It came from Wi-Fi connections. There are records of when and where your Wi-Fi connected, and in some cases, information about your phone can be captured even if you don't connect to the network. It came from apps and location services that store location data, like Google Maps or social media check-ins. It came from GPS data,
location data collected from GPS systems and devices. Now, here's the part that most of us don't know. They can even collect data from phones that have been put in airplane mode. The GPS data isn't actively transmitting when a phone is in airplane mode, but location data can still be stored locally on the device and accessed later when it connects to a cell carrier. That data includes the latitude and longitude of each device
Lynn Dimick (16:56.297)
within an accuracy of seven decimal places. At seven decimal places, the position is accurate to approximately eight to 11 centimeters, depending upon where you are on the earth. For those of you that are still stuck in the last millennium, eight to 11 centimeters translates to approximately three to four inches. Allow me to share with you a story from a job that I had during my career. It was in a shipping yard.
We had about 1200 trailers that would come off the ships in Long Beach and be taken to our warehouse yard where we would unload them and break them down into smaller containers for stores. Let's say, for instance, there's a big box store that has a red and white logo that you might find at a shooting range. I'll let you figure out who it was. They may have a trailer full of flatware that comes in from across the ocean. They want to break that large shipment down to smaller shipments
and truck it out to the 350 stores in California. That's what we did. We took the big shipment and made smaller shipments out of it. One of the problems in a business like that is that you have hundreds and hundreds of trailers passing through your yard. The way that you make money and the way you keep your customers happy is that you do it as quickly, accurately, and inexpensively as possible. The problem was that with that many trailers going in and out,
keeping track of them was a nightmare. The truckers used to hate to come into the yards because they knew that, because of the delays in finding parking spots, reporting the location, and finding their outbound loads, they were only going to be getting one or two runs a day, and they got paid for time on the road and the miles they covered. Well, we came up with a system where we were able to put tracking devices that connected to Wi-Fi on every one of the trailers when they came into the yard. By using the Wi-Fi connections,
we were able to tell within a tenth of an inch the location and contents of every single one of those 1200 trailers in that yard. The efficiency gains were impressive and profitable. We went from trucks making two turns a day to five or six. The location data was that accurate and that valuable. The information is out there, and now guess what? They're using the same technology and information to track cell phones and the location of people.
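To make the geofence idea concrete, here is a minimal sketch in Python of the kind of query a geofence warrant compels a provider to run: flag every device whose stored location pings fall inside a boundary during a time window. Everything here is made up for illustration; the device IDs and pings are invented, and the bounding box only roughly approximates the Capitol grounds.

```python
from datetime import datetime

# Hypothetical bounding box around the U.S. Capitol (illustrative only)
LAT_MIN, LAT_MAX = 38.8880, 38.8910
LON_MIN, LON_MAX = -77.0120, -77.0070

# The warrant's time window: the afternoon of January 6, 2021
WINDOW_START = datetime(2021, 1, 6, 13, 0)
WINDOW_END = datetime(2021, 1, 6, 18, 0)

def in_geofence(lat: float, lon: float, ts: datetime) -> bool:
    """True if a single location ping falls inside the fence AND the window."""
    return (LAT_MIN <= lat <= LAT_MAX
            and LON_MIN <= lon <= LON_MAX
            and WINDOW_START <= ts <= WINDOW_END)

# Made-up sample pings: (device_id, latitude, longitude, timestamp)
pings = [
    ("device-A", 38.8899, -77.0091, datetime(2021, 1, 6, 14, 30)),  # at the Capitol
    ("device-B", 38.9072, -77.0369, datetime(2021, 1, 6, 14, 30)),  # elsewhere in DC
    ("device-C", 38.8899, -77.0091, datetime(2021, 1, 5, 14, 30)),  # wrong day
]

flagged = sorted({dev for dev, lat, lon, ts in pings if in_geofence(lat, lon, ts)})
print(flagged)  # ['device-A']

# How much ground distance does each decimal place of latitude represent?
# One degree of latitude is roughly 111,320 meters anywhere on Earth
# (a degree of longitude shrinks toward the poles by cos(latitude)).
METERS_PER_DEGREE_LAT = 111_320.0
for places in (5, 6, 7):
    print(f"{places} decimals ~ {METERS_PER_DEGREE_LAT * 10**-places * 100:.2f} cm")
```

One caveat on the arithmetic: by this standard approximation, six decimal places of latitude works out to roughly 11 centimeters of ground distance, and seven decimal places to roughly one centimeter, so the centimeter-scale figures quoted in the episode should be taken as ballpark rather than exact.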
Lynn Dimick (19:18.589)
Let's talk about the geolocation data and those problems. Number one is that there are individual geolocation warrants. What this means is that there's a specific target, a device or person that is believed to be connected to a crime. An example is tracking a suspect's movement in a kidnapping case. Now, no one wants to see the kidnappers get away. There are what they call geofence warrants. These are broader in scope. These warrants demand location data for
all devices within a specific geographic area, in other words, the fence, during a specific time window. These are used to identify unknown individuals who may have been at a crime scene, such as during the January 6th riot. The use of geolocation warrants raises some significant Fourth Amendment concerns in the U.S.; that's the amendment that protects us against unreasonable searches and seizures. The debate goes like this.
Is there probable cause to suspect someone? Do we think that a specific person was in this location at a certain time? Now, we're talking about a specific person, not a group. Do geolocation warrants meet the legal requirement of probable cause, or are they overly broad? In other words, is it reasonable to suspect that you were there and that you're one of the people they're looking for? Without further proof, and all they have is your cell phone, guess what?
you now become a suspect whether they know who you are or not. The other concern is privacy versus security. Critics argue that such warrants can collect data on innocent individuals, not just suspects, potentially violating privacy rights. Let me share with you another one of those stories that I saw on the BBC police procedurals. A woman called up and said that her husband was missing. She wanted to know if they could find him on their license plate readers.
Well, the police were able to find him and they found out where he was. The wife asked if he was all right and wanted to know if he was alone. The problem was that he was going through a separation and divorce with his wife, and she was trying to track him down to use who he was with, and where they were, in her divorce proceedings. That's a clear violation of his privacy and his rights, but yet the technology is there. If the police had not been aware of that situation with that family,
Lynn Dimick (21:44.799)
They would have turned that data over and the husband's privacy would have been violated. I mentioned something called the FBI's Face Services before. It's a unit that compared the images from the riot against state and federal databases. I mentioned this unit a little bit earlier where they were looking at driver's licenses and passport images. This technology enabled the identification of numerous suspects.
even those who did not publicize their involvement on social media. The FBI has done tens of thousands of face recognition searches using software from outside providers in recent years, including Clearview AI. Yet only 5% of the 200 agents who use this technology, think about that, 5% of 200 agents is 10 people with access to the technology,
have taken the Bureau's three-day training course on how to use it. That's according to a report from the Government Accountability Office, or the GAO. You should be concerned that people who have the authority to determine who is going to be arrested are not being trained in the use of the tools they are using. There are 10 people in all of the FBI that have that certification. The FBI did 60,000 searches over approximately two and a half years.
35,000 of those searches were done using vendors like Clearview AI, whose algorithm is not recognized as valid for use in court. Over time, the investigation evolved from relying heavily on public tips and social media evidence to these more automated methods. The FBI increasingly depended on geolocation data and facial recognition technology to identify suspects.
This shift allowed for the identification of individuals who had not publicized their activities or been reported by witnesses, demonstrating the growing capabilities of digital forensic tools. The methods employed in the Capitol riot investigation highlight a broader trend in law enforcement toward the use of digital surveillance and data analysis. While these tools can enhance investigative efficiency and effectiveness,
Lynn Dimick (24:03.081)
They also raise important questions about oversight, accountability, and their potential for misuse. The balance between leveraging technology for security purposes and protecting our individual privacy rights is a critical issue for policymakers and society at large. In conclusion, the investigation into the January 6th Capitol riot underscores the transformative impact of digital forensic technologies in modern law enforcement.
As these tools become more sophisticated and pervasive, ongoing dialogue and careful consideration are essential to ensure that their use aligns with democratic principles and respects our civil liberties. They are our liberties, our rights. They are not merely privileges. As of now, the authorities are still working to identify more than 80 people wanted for acts of violence at the Capitol, and more importantly in my mind,
to find out who placed pipe bombs outside the Republican and Democratic National Committee offices the day before the Capitol attack. They continue to regularly make new arrests, even as some of the January 6th defendants are being released from prison after completing their sentences. One of the tools that has me concerned, and I briefly mentioned it, is the ALPR, or automated license plate reader. Through a Freedom of Information Act request,
filed on behalf of the Electronic Frontier Foundation, I found out that our local city uses ALPRs. There are more than 80 cities in California that make use of these devices. The amount of data they collect and share is concerning. Our little city of about 100,000 people shares that data with more than 600 city, state, county, and federal agencies. I'm going to make that a topic of another discussion and another podcast,
and I do have an appointment request in with our Deputy Chief to discuss their use.
Lynn Dimick (26:12.949)
Thank you for tuning in to this episode of Grandpa is Him. We hope you enjoyed our discussion and stories and maybe even found some inspiration for your own family adventures. Now, we want to hear from you. What questions do you have or what topics would you like us to explore in future episodes? What stories can you share? To share your ideas, simply visit our website at grandpaishim.com and fill out the submission form.
You can also reach out to us via email at grandpaishim@gmail.com. We're always looking for your thoughts and experiences, so don't be shy. Join the conversation and help us create the content that matters to you and your family. Until next time, keep laughing, keep sharing, and keep those ideas coming.