The Jeff-alytics Podcast

Policing in the Age of AI with Ian Adams

AH Datalytics


0:00 | 44:41

Have you heard about the police department in Utah where report-drafting AI interpreted The Princess and the Frog playing in the background of an officer’s body camera footage to mean the officer had morphed into a frog?

AI has come a long way in the last few years, but it still isn’t perfect. It holds the potential to revolutionize traditional processes, but there is also the danger of relying too heavily on a tool that is only right most of the time for work that demands perfection, or close to it.

For this conversation, I turned to Ian Adams. Ian is an assistant professor of criminology and criminal justice at the University of South Carolina. Before taking his PhD in political science at the University of Utah, he was a police officer and police labor executive. His research is focused on policing, broadly construed, with a focus on behavior and technology.

Ian has also researched and written extensively about AI, and today’s conversation is all about the uses of AI in policing, the potential/actual pitfalls, and where this technology might be heading in the world of criminal justice. 

If you’re interested in some extra credit work, two papers related to this topic you should check out are:

  1. Adams, I. T., Barter, M., McLean, K., Boehme, H. M., & Geary, I. A. (2024). No man’s hand: Artificial intelligence does not improve police report writing speed. Journal of Experimental Criminology. https://doi.org/10.1007/s11292-024-09644-7
  2. Adams, I. T., McLean, K., & Alpert, G. P. (2026). Improving police behavior through artificial intelligence: Pre-registered experimental results in two large US agencies. Criminology, 0(0), 1–15. https://doi.org/10.1111/1745-9125.70028

To get in touch or peruse different papers, projects, and dashboards, Ian’s website is ianadamsresearch.com

And while you’re here, be sure to check out these other recent great episodes:

Politics podcaster Galen Druke

Arnold Ventures Executive Vice President Jennifer Doleac

FBI Assistant Director Timothy Ferguson

Orleans Parish District Attorney Jason Williams

Resources:

Follow the Jeff-alytics Podcast:

Website: The Jeff-alytics Podcast

SPEAKER_01

I'm Jeff Asher, and this is the Jeffalytics Podcast. Policing has always been shaped by technology. What's different now is how quickly it's happening. Tools that sounded experimental a couple of years ago, or even a couple of months ago, are already sitting inside real departments, affecting how officers actually write reports, how supervisors review body cam footage, and how agencies think about accountability. In this episode, I'm joined by Ian Adams, a former police officer turned researcher and professor at the University of South Carolina. Ian studies how emerging technologies, and especially AI, are actually being used in policing, and where the promises don't necessarily match reality. We talk about what AI in policing looks like on the ground, from using body-worn camera footage to generate feedback on officer communication to the idea that AI can dramatically speed up report writing. Ian breaks down one project that measurably improved officer behavior and another that failed to deliver the efficiency gains everyone expected, despite officers believing it had. This episode examines how technology is being tested in real departments and what the future of these technologies might look like. My guest today is Ian Adams. Ian, thank you so much for joining the program.

SPEAKER_00

Oh, my pleasure. I'm very happy to be here.

SPEAKER_01

First question to all the guests. Walk me through your background. What brought you here today?

SPEAKER_00

Yeah, a series of misfortunate events that ended well, I suppose. I was a police officer; I was in law enforcement for close to 13 years. Then I did some labor executive and management work while I was doing my PhD at the University of Utah. I come from a political science, policy evaluation background, but all my work in grad school and since has been in policing. So I ended up in a criminology and criminal justice department at the University of South Carolina, and that's where I still hang out now.

SPEAKER_01

And a lot of your work has centered on, and I'm excited to talk about this, I guess I only hear about it when it's in the news, which probably makes it not as useful of a knowledge base. But you study AI and policing. Why look at that?

SPEAKER_00

Yeah, it grew out of a general fascination with technology. I've always been a bit of a hobbyist in technology and what we used to just call machine learning, but now gets called artificial intelligence. Where it started for me was, I don't know, the early 90s. I was one of those kids who liked to build PCs back in the day. I don't know if you remember, but back then the audio card was not embedded into the motherboard. It wasn't built in; it was kind of like video cards are today. You would purchase it separately and slot it in. Anyway, there was an old line of sound cards called Sound Blaster, and when you ordered a Sound Blaster card, you got a little floppy disk with a program called Dr. Sbaitso. It was really just a text puzzle that mimicked being able to have a conversation, but I was pretty fascinated with it. And over the decades that just stayed consistent for me as something I was interested in. So when early versions of GPT started to come out, around 2017, and it was all an open source project back then, I was in grad school at a good point to take advantage of that and keep monitoring it. A lot of my research at the time was around body-worn cameras; that was the big technology being studied. I saw an opportunity and an interest in the artificial intelligence piece, and started seeing agencies and officers begin to use these products on their own, or off-the-shelf commercial products as they became available. So I've been extremely fortunate to have the good luck to be in that space just as it was starting to emerge.

SPEAKER_01

So I'll put you on the spot. Massive question here. If you were to summarize, how are police departments using AI?

SPEAKER_00

Great, poorly, a lot, and not at all. It's a wild world. Police agencies are extremely varied in the US. The number we like to throw around is that there are about 18,000 local or state agencies. The mean, or average, size of an agency is somewhere between 12 and 15 officers, which means half of the agencies in the country have fewer than 20 officers for sure. The agencies we normally think about, the bigger ones, say over 100 officers, are just about 6% of agencies. So you've started to see a concentration of technology in some of the largest agencies. You have large agencies spending hundreds of millions of dollars on Axon packages for body cameras, real-time translation, evidence storage, drones, report writing. And then you have some agencies that don't have any of that technology at all. The degree to which they are able to use it wisely, I think, is still an open question as the technology develops, and certainly an area where we're trying to contribute to the research base.

SPEAKER_01

And does the public really have any awareness of how these tools are being used? Is this just a way for the police to use it, or do you think there's at least a conversation about, and understanding of, the trade-offs that come with using more advanced technologies, especially as you get into civil liberties and surveillance and things like that?

SPEAKER_00

Certainly there's some public awareness of this. In a paper led by Kaylyn Schiff at Purdue about a year ago, one of my teams found that the public is actually pretty trusting of at least local police, their local sheriffs and police departments, as compared to federal agencies. They're pretty trusting of those local agencies to use this technology, and pretty demanding that they do. So the public expects agencies to keep up to date, regardless of the agency's ability to afford and use that technology well. There are some differences, as there always are in the US, around partisanship. More left-leaning respondents really wanted to see agencies using this kind of technology for internal controls, to bring more accountability and transparency to police departments. Among respondents who lean more to the right, we saw a pattern where they really wanted agencies to be using these tools for external crime-fighting reasons. And that gets to one of the difficulties of these conversations: what do we mean when we say police are using AI? Do we mean police are using AI to help write reports, an internal operational piece? Or do we mean we want, or don't want, police using AI in surveillance capabilities outside the agency? Think Flock and other technology companies that are more aimed at collecting information and feeding it into the investigative pipeline. The same person might have very different views about just those two uses, let alone the panoply of uses we might imagine agencies could have.

SPEAKER_01

So let's start in the middle then. Body-worn cameras are something AI has been used for and that you've studied. What have you found about how police departments are using AI to supplement their body-worn cameras? Is it impactful, making their departments more efficient, or is there just nothing happening?

SPEAKER_00

Yeah, let's underline that word efficiency, because it'll come up again in the rest of the conversation. But to answer your question: body cameras come around, and we begin rapidly adopting them in the United States around 2015. They immediately start producing a tsunami of data storage and data review problems. The basic problem is that for every hour of body-worn camera footage, a good audit, a real review, takes at least one human hour. So we just don't have the capacity within law enforcement, or really within the United States, to adequately review all that body-worn camera footage. All the hoped-for benefits of body cameras, maybe better training, maybe better transparency, never really materialize, because we don't have a way of actually looking at the footage to determine any of that. This was my insight in grad school, and what I built my dissertation around was: what if we could use advances in machine learning and artificial intelligence to audit 100% of that footage, extract information out of it, and use it as a database we can then pull information back out of? That technology soon became available in the early 2020s. There's a company called Truleo that was the original company doing that and is still doing it today. So beginning in 2022, my team set up an experiment, an RCT, in two places: Aurora, Colorado, and Richland County, South Carolina. What we wanted to see was whether Truleo's approach was successful in changing officer behavior for the better.
And this is what they do: they take the audio from every one of those body cameras, get a transcript, and then use machine learning and natural language processing to extract features of the conversation, the officer's language, the civilian's language, and then feed that back to the officers in near real time. So think about it this way: you're an officer on patrol, and you had eight hours of work the previous day. When you come in today, you have a dashboard that can tell you basically a grade of how you did the previous day. Did you use a lot of explanatory language? That would be considered highly professional. Did you use threatening language, profanity, or insults? That would be a low grade. Or did you just do standard professional work? What we found at the end of a year of experimenting across these two agencies was that yes, officers actually do use that self-learning mode to improve their professionalism. In Aurora, we found a very large decrease in the amount of unprofessional language: the insults, the threats, the language we don't want to see officers using with members of the public. And in Richland, we saw the opposite. Instead of raising the floor, it raised the ceiling, meaning officers began using up to, I think, about 80% more explanatory language with the public about why they were there and why they were doing what they were doing. So what we saw was basically a consistent picture across these two large agencies of the AI feedback loop improving police professionalism.
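Truleo's actual classifiers are proprietary machine-learning models, so purely as an illustration of the feedback loop Ian describes (transcript in, next-day professionalism grades out), here is a minimal keyword-based sketch. The word lists, function names, and grading rules are all invented for this example:

```python
# Illustrative sketch only: the real system uses trained NLP models,
# not keyword lists. The categories mirror the ones Ian describes:
# explanatory (highly professional) vs. unprofessional language.
EXPLANATORY = {"because", "the reason", "let me explain"}   # invented examples
UNPROFESSIONAL = {"shut up", "idiot", "stupid"}             # invented examples

def score_utterance(text: str) -> str:
    """Grade one officer utterance from a body-cam transcript."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in UNPROFESSIONAL):
        return "unprofessional"
    if any(phrase in lowered for phrase in EXPLANATORY):
        return "explanatory"
    return "neutral"

def daily_dashboard(utterances: list[str]) -> dict[str, int]:
    """Aggregate a shift's utterances into next-day feedback counts."""
    counts = {"explanatory": 0, "neutral": 0, "unprofessional": 0}
    for u in utterances:
        counts[score_utterance(u)] += 1
    return counts
```

The point is only the shape of the pipeline: score each utterance, aggregate per shift, and surface the counts back to the officer the next day.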

SPEAKER_01

Is there any look at the efficiency landscape from a supervisor perspective? Does it make it easier for management to go through all of this footage and find either reasons to commend or reasons to discipline, or is the AI not being used for that?

SPEAKER_00

I think we can think about that as an efficiency issue. The problem is that in business as usual, supervisors are often required, or at least requested, to audit some small number of these videos per month. But put yourself in a really busy sergeant's shoes for a minute. You're supposed to review, let's say, five of your squad's videos per week. Which videos do you think you select? Probably not the ones that are six hours long on a standoff. You probably pick ones that are a little easier to deal with. Now, some agencies have tried to impose a randomization schedule on that, with greater or lesser success, but the ultimate problem is that you don't have enough time to review even one officer's 40-hour week, let alone five or six officers' 40-hour weeks. So I think of it less as efficiency and more as a step change in the gain on actually auditing all the videos. It makes it easier for the sergeant or supervisor to surface that information without doing a second-by-second audit themselves. But I want to point out something important. The channel I talked about for improving professionalism operates differently when that information is provided directly back to the officer versus through a supervisory channel. I didn't really get into all the details, but in this experiment we randomly assigned officers to either business-as-usual control, a self-learning mode where that Truleo-based information comes directly back to the officer, or a supervisory mode, where the information goes to their supervisor and then gets translated to the officer. We saw lesser effects on the supervisory channel when we're talking about increasing highly professional behavior, using that explanatory language. But we saw greater effects on reducing low professionalism when the information was coming from the supervisor.
So it's a complex study and a complex area. But the general lesson coming out of it is that when we want an agency to improve the highly professional behavior of its officers, it looks like we get the best bang for the buck by providing that information directly back to the officers themselves. But when we want to reduce or deter unwanted behavior, having the supervisor in the loop seemed to get the best effects.

SPEAKER_01

So, kind of switching gears. Obviously this is very minutiae-level switching gears, but looking at report writing: there's a very famous story that made the rounds earlier this year of a police department that was using AI report-writing software to basically capture what was happening, and it didn't go so well. Can you walk me through that incident, and also what it says about the other side of the coin: using AI to improve efficiency very much on the back end, with report writing?

SPEAKER_00

Yeah, I think you're talking about the Princess and the Frog incident out of Heber, Utah. That's my old stomping grounds; I come from Utah, so I'm quite familiar with the area. This is a smaller agency, and they were testing out Axon's Draft One product. Axon is probably the largest technology provider in the space. They're the manufacturer of the Taser, for example, but they have their hands in a lot of different advanced technology sectors, including report writing, and they were the first to launch this Draft One product. What happens in Draft One is similar to the Truleo approach: we take a body camera, separate out the audio, and generate a transcript. Then Draft One sends that transcript along with a set of custom instructions that the officer doesn't really see. If you've used ChatGPT before, think of it as giving ChatGPT directions on how to create a police report from this transcript. So that goes out, it hits the OpenAI API, and it returns the first draft, thus the name Draft One, to the officer within the software. The officer has to make some changes to it, then submits it, and is able to copy and paste it over into their agency's report management system and submit the report. But what happened in Heber, unfortunately, was a breakdown in a couple of places. First, the officer was responding to what was apparently a domestic call, and in the background of that call, a Disney movie, The Princess and the Frog, was playing. The dialogue from The Princess and the Frog was captured in the transcript. The transcript got sent out to the API, and it returned a police report in which the officer was apparently turned into a frog, and the officer apparently didn't edit that report at all before submission. And that's a problem. That's a big problem.
But importantly, is it a technology problem or is it a people problem? We run into that quite a bit in this area. That one seems to me to be a people problem. The technology actually operated kind of how we asked it to: it's taking a transcript and it's creating a report. Why it's creating a report where the officer turns into a frog, who knows? But the safeguard in that system is supposed to be the officer. The officer is supposed to be reviewing that report for accuracy and completeness before submitting. And it sounds like maybe that didn't happen. So that did not accomplish its efficiency goals, obviously, and probably a bunch of other goals as well.

SPEAKER_01

There's probably a good movie script coming out of that, though. I feel like you're like 90% of the way there.

SPEAKER_00

I'm not a big fan of the police procedurals, but I'd watch that one, yeah.

SPEAKER_01

So is the solution here better technology, or is this one where, come on, guys, the software tells you what to do and you didn't do it?

SPEAKER_00

Yeah, we're always going to have human failure, right? So we probably shouldn't judge an entire class of products off that story, as amusing as it is. But my team did produce the first and only experimental evidence on this type of technology, about a year ago. We teamed up with an agency that was testing out that same Draft One product. A very simple approach: what we were interested in was testing whether the software was actually able to improve efficiency, because the number one marketing claim that Axon was using, and still uses, is that this will massively speed up officer report-writing time and save a lot of time for officers so they can get back out on the road and do more productive tasks. In fact, Axon claimed it would save a very specific number, 82% of an officer's time, which is a weirdly specific number. So my team set out, and we assigned half of patrol to business as usual; they were just going to write reports as usual. The other half of the officers had access to the Draft One tool and were able to create reports within Draft One. And at the end, we measured how long it took those officers from the time they began a report to the time they submitted it. What we found was not an 82% savings. In fact, we found a null. It didn't save any time at all. Certainly a surprise to me. Admittedly, my priors going into this study were that, sure, of course, I'd used ChatGPT, of course it's going to help on the efficiency front. But not only did we find it didn't work there, it's not a policing-only story. A couple of other studies have come out since, in both medical record-keeping and computer programming, that have found similar, if not negative, results.
In the computer programming study, for example, set up very similarly to ours, what they asked was whether that same sort of approach, AI assistance, could help computer programmers become more efficient. What it found was actually a 19% loss in time. And in the medical records attempt, they found basically a null as well. So there's an emerging consensus in these studies that the hoped-for efficiency gains are running into professional contexts that hinder or even prevent them.
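The design Ian describes is a two-arm comparison of report-writing times. As a sketch of that analysis, on simulated data with no true effect built in (not the study's records), the arms can be compared with a Welch t statistic:

```python
import random
import statistics

# Simulated report-writing times in minutes (NOT the study's data).
# Both arms are drawn from the same distribution, so by construction
# the true effect is zero, mimicking the null result Ian describes.
random.seed(0)
control = [random.gauss(45, 10) for _ in range(200)]    # business as usual
treatment = [random.gauss(45, 10) for _ in range(200)]  # Draft One arm

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for a difference in mean writing time."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

diff = statistics.mean(treatment) - statistics.mean(control)
t = welch_t(treatment, control)
```

With no true effect, the observed difference stays near zero and the t statistic stays small, which is the shape of the null the study reported.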

SPEAKER_01

What are some of the things that would prevent those gains from being realized?

SPEAKER_00

Yeah, policing has a very specific issue, and that is the technology stack. If you've ever hung out in a police department, you'll find that they have incredibly complex, idiosyncratic technology stacks. What I mean is, if you were naive and new to policing, you might imagine that two systems, the report management system (RMS), where officers write reports, store digital evidence, and link everything together, and the computer-aided dispatch (CAD) system, which is how officers get dispatched from the dispatch center, a mapping and incident-tracking type of software, probably work really well together, maybe even come from the same technology manufacturer or vendor. But what you'd find out is, no, these things were adopted at different points in history and developed by very different people. Your RMS may have been developed by somebody's 14-year-old nephew in 1982, in COBOL, and never updated since, while the CAD is super advanced and very up to date. They don't sit in the same part of the technology stack, and in many places they don't even talk to each other. And a second feature is that maybe we're thinking about the, quote unquote, police report wrong. The police report is more than just a narrative. The police report is a very complex data-entry process. In the agency we were studying, for example, there were, I think, more than 200 separate text entries an officer could make, not for every incident, but could make, beyond just typing up the narrative recounting of what they did, what they saw, and what they learned as part of their investigation.
So even if the AI-generated narrative saved you a little time, it's not going to save you any time on some of the main activities an officer has to do to complete the report. We should expect smaller marginal gains when it's just one piece of the whole puzzle. Those are just two of the basic problems. The third is that not every call actually generates a transcript. A lot of calls that officers take that still need a report are handled over the phone and won't have that transcript of a conversation between them and somebody in the public. So we shouldn't expect this to be a silver bullet that can solve the hundred-year-old problem of how to get officers to write better, faster reports.
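Ian's point that the narrative is only one field among hundreds of structured entries can be sketched as a record type. The field names below are invented; real RMS schemas vary widely by vendor and agency:

```python
from dataclasses import dataclass, field

# Illustrative only: the agency Ian studied had 200+ possible text
# entries. The point is that the AI-drafted narrative is one field
# among many the officer still has to fill in by hand.
@dataclass
class IncidentReport:
    case_number: str
    offense_codes: list[str] = field(default_factory=list)
    persons: list[str] = field(default_factory=list)        # parties involved
    property_items: list[str] = field(default_factory=list)
    narrative: str = ""  # the only part an AI draft helps with

    def fields_still_manual(self) -> int:
        """Count entries the officer types regardless of the narrative."""
        return (len(self.offense_codes) + len(self.persons)
                + len(self.property_items))
```

Even a perfect narrative generator leaves `fields_still_manual` untouched, which is one reason the marginal time savings can be small.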

SPEAKER_01

So what do officers think about the effectiveness of these tools? Are they annoyed by it, or do they think this can change everything they do?

SPEAKER_00

Yeah, good question, because we went back. Our second study in that same department set out to find out how those same officers thought about the technology we had just allowed them to use. And we found some fascinating stuff. First, officers are pretty positive about this wave of technology. They're pretty excited about it, somewhere between neutral and positive. That is a bit of a surprise if you think about it in the context of body cameras. When body cameras were first introduced into the policing workspace, they were received quite negatively. They were seen as a tool management would use as a fishing expedition to find small policy violations and jam officers up. But they quickly became very popular within policing; if you ask officers today about body cameras, they're very positive. On the AI report-writing front, what we found was somewhere between neutral and very positive, basically across the entire department, from the officers to their sergeants. But then a really fascinating thing showed up in our analysis. About half of the officers told us this technology definitely helped them write faster reports. They were in fact endorsing that marketing message Axon was using. So we went back and looked, because of course we have those officers' records in black and white; I can actually find out. And lo and behold, there was this huge gap between perception and reality. The officers were telling us it's saving them time, but when we look at their actual records, no, it's not saving them any time at all. That's one of the things I like to warn police executives about especially: if you don't do a careful evaluation here, you might be misled.
If you just do a traditional check-in with the troops, how's it going, did you like the technology, did it work, that might give you one answer that's very different from a more rigorous approach where we're looking at the underlying data.

SPEAKER_01

Do we have any idea why that is? I'm thinking, you know, I can either sit at this traffic stop and it's going to take 10 minutes to get through all of this construction, or I can go the other way around and it's going to take 10 minutes, but at least I'll be driving and I'll feel like I'm doing something. Is that the same thing, but for police officers?

SPEAKER_00

I think you're on to something there. The honest academic answer is that we don't know the pure mechanism causing this discrepancy, to be clear. But it's probably something like that, or something like this: you're an officer doing a difficult job in a challenging work environment. There's a lot of time pressure and a lot of task pressure. Every single year call volume goes up, even as, over the last five years, there have been pretty significant labor pressures within policing, meaning we don't have as many officers as we need to answer all those calls. If you're in that environment and somebody comes in and says, hey, I've got a magic tool that's going to save you all kinds of time, I think it's just human nature to feel like, yeah, that saved me some time. There's a bit of confirmation bias. But again, who knows about the exact mechanisms. Brandon May is a psychologist down in Florida who's very interested in this technology, and that's more his interest: what exactly is occurring at the perception boundary of this problem. What I can tell you is that right now, as far as the most rigorous evidence in the world is concerned, there was no time savings, and yet about half of the officers using the technology perceived that there was. We should be aware of that when we're trying to figure out, on balance, whether this technology is worth the huge dollars agencies are being asked to put toward it. Just as one example, in the agency we studied, the cost of the Draft One product alone was equal to about 11% of the agency's after-labor budget.

SPEAKER_01

Jeez, right?

SPEAKER_00

So this agency has about a $32 million budget a year. But in policing, somewhere around 95%, in this agency I think it was 96%, of the budget goes directly to paying the troops and making sure they can answer 911 calls and investigate crimes. So you have about $2.2 million left over, and the cost for the Draft One piece alone was around 11% of that per year. Chiefs and sheriffs are under a lot of budget pressure all the time; they always need more to do more. So asking them to spend 11% of that budget on a product that isn't having the primary effect it was promised is probably too much to ask. And we don't really know, functionally, how much agencies are actually spending in the US. We can observe quarterly earnings reports from Axon and know that in some of their latest big-dollar contracts they're getting to about $600 per officer per month in annual recurring revenue. That can give you some idea of the dollars at stake, but it doesn't tell us in aggregate how much is being spent, because of course that's just one vendor, albeit the biggest one.
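Ian's back-of-the-envelope numbers can be worked through directly. The figures below are his approximations from the interview, not audited budget data, and the cross-check against the per-officer rate is rough arithmetic, not anything from the study:

```python
# Rough arithmetic on the figures quoted in the interview; these are
# approximations Ian gives from memory, not audited budget data.
total_budget = 32_000_000   # agency annual budget, ~$32M
after_labor = 2_200_000     # Ian's figure for what's left after labor
draft_one_share = 0.11      # Draft One cost as a share of that remainder

draft_one_cost = after_labor * draft_one_share   # roughly $242k per year
arr_per_officer = 600 * 12                       # ~$600/officer/month quoted
officers_covered = draft_one_cost / arr_per_officer
```

At the quoted per-officer rate, that annual cost would correspond to roughly 30 to 35 officers, though as Ian notes, the $600 figure comes from other large bundled contracts, so it is only a ballpark cross-check.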

SPEAKER_01

Is there a happy medium between how AI can best serve policing and how police departments can responsibly and effectively and efficiently integrate AI?

SPEAKER_00

Well, yes, at least in theory. That's part of the process occurring right now across different agencies and jurisdictions in the United States. This is a good time to give a shout-out to the Council on Criminal Justice. I sit on the AI task force at the Council on Criminal Justice, so I quite like that organization; it has a pretty even-handed approach to this sort of thing. And what's coming out of those meetings is, one, it's really hard to answer the question on everybody's mind, which is: is it worth it? It's a balancing question. What benefits can we get? What risks are we trading off? That's ultimately a big public policy question. To the degree that it's useful to have an empirical grounding for that question, I would say we are nowhere near knowing the answer on either the benefits or the costs. I've covered two evaluations of AI in policing here. In one of them we saw really important effects; it's important that officers behave professionally when they're dealing with members of the public. In the other, we got a big null; that same technology didn't save officers time when generating reports. That's not much of an empirical basis to proceed on for a policy question. But I would also argue that we are confronting the same picture, if not worse, when it comes to the risks. We don't even know, really, how to define the risks. They tend to get baked into questions about surveillance, constitutional questions, due process, all of which are super interesting and important. But we are absent an empirical record showing that that stuff actually shows up somewhere in the data. And I'm a data guy; that's where I live. That doesn't mean those more normative questions aren't important.
But when we ultimately want to get to the end question of those normative discussions, I think it's helpful to have good data. We just need more of it. We need more researchers, more research funding, more time, and more agencies willing to do the hard evaluations up front, to get ahead of that bad outcome where agencies are spending 10 or 11% of their budget on a technology that feels good but isn't actually doing anything.

SPEAKER_01

Do you get a sense that this is something... I mean, before we went on, we were talking about Claude Code and just how incredible the advances have been over, what, six months?

SPEAKER_00

You know, I would even say a month, right? I would say in the last month that thing has taken off.

SPEAKER_01

Is this something where the conversation we're having today, in early 2026, will be completely null and void, and the technology will have advanced to the point where it'll be completely new in six months, a year from now?

SPEAKER_00

There's a big part of me that always has that in the back of my mind when I'm talking about these things, right? The academic's worst nightmare of being wrong at some point. But the answer is: of course things are going to change. There is a counterweight to that, though, and that is that policing is an inherently conservative, small-c conservative, institution. It doesn't move quickly. And so we have time. I am a technology optimist. Sometimes people hear about the report writing study and they think I'm a technology hater. No, that's just what the experiment showed. I actually think that today is the worst that any of these technologies will ever be. Next month we're going to be in a much better spot, let alone in ten months. So yes, the world will look a lot different. Sometimes, when I'm talking to police executives, I like to use the pager metaphor. I ask them, who here remembers getting a pager for the first time? And usually there's somebody around who started in maybe the late '90s, and they remember getting the pager and how cool it was. And then they remember the first time they got a mobile phone and a laptop in their car. And I ask them, well, if I had asked you back in '97 to predict, from that pager, what technology looks like today, could you have done it? And the answer is always no. And yet you lived through 30 years of a profession relatively rapidly adopting modern technology into police practices. We will find a way. There will be a new normal at some point. We're living through probably the fastest technology adoption cycle we've ever seen. I would argue that AI is being adopted much faster than body-worn cameras were, let alone laptops and all that technology. But eventually we'll get there. The safer we can do it with good research, the better.

SPEAKER_01

Can I ask you about Force Science?

SPEAKER_00

Yeah. I didn't expect you to, but you certainly can. Yeah.

SPEAKER_01

Good. Well, I had seen some of it, and I watched the full upload of the video for PERF. I want to talk about this just because you're here, and I think this is fascinating and such a good example of ethics and of responsibly approaching research and data. What is Force Science, and what is the controversy, the issue, that you've been looking at?

SPEAKER_00

Yeah, okay. So Force Science is a private company. It's a vendor, mostly a training vendor, and it is very, very widespread in policing. They position themselves in the market as a scientific voice that can give scientific information about what happens in the most critical incidents in policing, right? Use of force, shootings, that sort of thing. That's their bread and butter. They hold dozens and dozens of trainings across the United States every year, training officers and investigators in this purportedly scientific approach. The problem is, of course, when you have a scientific background and you begin to look at their scientific corpus, the collection of studies that they say they've put together, anybody who has taken even a very basic research design course will look at one of their studies and be horrified. Extremely small sample sizes, samples that were not just convenient but super conveniently sampled, bad reporting, incomplete reporting, inferring to generalized populations that weren't even tested in the original study. It's pretty bad. It's really, really bad. So my team published a study last year that reviewed all of their studies. They put them all together in a book and said, these are our 24 most critical studies. And we sent each study through very basic scientific review processes. You may have heard of the Maryland scale, which is widely used in criminology to judge a study, or a set of studies, on its ability to safely inform policy, for example. And we used two others. The details may not matter as much, but if you're interested, the paper's available at Police Quarterly.
And at the end of it, we basically found that across this corpus there was very, very low scientific reliability, meaning they just didn't follow basic, normal scientific processes in either the design, the execution, or the reporting of those studies, and thus they should not form a reliable basis for police training or evaluation. You called it controversial. It certainly is. That study raised eyebrows, because this is a company that is well received within policing, right? They're well loved. They give pseudoscientific answers that officers often want to hear. I was one of those officers once, by the way. I've been through the Force Science training. And there's something quite seductive when you're an officer sitting in a room and they're parading a bunch of people who insist on being called doctor in front of you, and they tell you the answers you want to hear, and they say, not only that, that's what science says. If you don't have a scientific background and don't know how to evaluate that information, that can be a pretty seductive message. All of us like to be told what we were hoping to hear, but science doesn't give us the answers we were hoping to hear, right? That's not how science works. Science doesn't promise some answer; it promises answers to well-constructed questions, where we can know how that scientific answer came to be and evaluate it. The unfortunate reality is that when Force Science and I met for a debate at the PERF Town Hall this last October, they didn't send a scientist to defend their results, and they haven't ever really even tried to rehabilitate those results. They sent a lawyer. That's who they sent to debate.
I see myself as one of the people with the unique perspective of a street cop, somebody who's been through Force Science, somebody who has a rigorous scientific education and, I think, contributes usefully to scientific conversations in policing. And so hopefully we can keep pushing, because I think it's really, really important that agencies have a scientific evidence base, when it's available, to help guide them in their operations. That is the best approach. We know that's the best approach from all kinds of professions, and policing's no different. What I don't want to see is a universal distaste for scientific information coming out of this, right? That's not my goal. My goal is to improve the scientific evidence base, scientific knowledge, and education, and ultimately to make police agencies better consumers of that information.

SPEAKER_01

That's great. And if anybody's interested, the video is available on YouTube through the Police Executive Research Forum. I think it's a great lesson, and even if you have no background in it, it was very interesting. And many of the nerds who have been on this podcast, or will eventually be on it, made an appearance there. So it was fascinating to watch.

SPEAKER_01

Ian, what's next for you?

SPEAKER_00

Man, I am super excited about Claude Code and other agentic coding. I'm super excited about the ability to maybe use some of these tools to improve investigative processes. In policing, we have a really terrible problem with solving specific types of crimes. Just as an example: you and your listeners will be well aware that we don't solve every murder in the United States. But even worse are the ones that are near murders. Oftentimes the difference between a shooting that results in a death and a shooting that results in a life-altering injury is nothing more than stochastic chance. It's just randomness: where exactly that bullet lands in your body, a millimeter this way or that way. But our solve rate on murders nationally hovers around 60%. You'd know that number better than I would. Yeah, that's about right. It gets as low as under 20%, sometimes close to 10%, on these attempted murders by gunfire. And we know that those two types of events are very, very alike. So we shouldn't see that big of a gap in solve rates. Why are we seeing it? Well, there's something about investigative effort going on. I know cops; they care about those cases too. But if you are a murder cop in LA or Albuquerque or Houston, you've got a pretty big caseload, and a lot of those cases are going to be both homicides and near homicides. There are only so many hours in a day for your investigative effort. So, how can we use technology to better improve those processes? That is something I see a lot of companies attacking right now. To be clear, I don't develop. I'm not a developer; I'm an evaluator. But I see these agencies beginning to experiment with that technology, and I'm super excited that maybe we can actually get a handle on that problem sometime in the next year or two.

SPEAKER_01

That's great. And I know that is a fascinating question: we've had all of these advances in the last 25 years, and yet the murder clearance rate is basically slightly lower than it was 25 years ago.

SPEAKER_00

Right. And the other technology that I'm very excited about, and it is being used more often than people know, is drones in policing. There are a couple of different ways to use drones. In the last 10 years, you've seen a real upswell of drones in the police car: drones as operational response, meaning there's a cop in a car who can get the drone out if he wants and maybe use it sporadically. We haven't seen good evaluations of that, to be clear, and adoption has been rather stumbly, let's say. But over the last year or two, we've started to see DFR drones: drones as first responders. I've had the chance to sit in at agencies watching this technology work. In fact, one of my teams has an active dual-site RCT in two large American cities right now using this technology. This is different. These sit in pods on the tops of buildings and are able, in one of our cities, to get on top of a reported incident within 72 seconds of the call, not the dispatched call. Just as a reminder of how this process works: somebody calls 911, you are talking to a call taker, and it usually takes a few minutes to get information before that information is dispatched out to a police officer for actual response. So there's already this two-minute gap, and that's before it takes the officer anywhere from a few minutes to a long time to get to that actual call. Well, these agencies are able to have sergeant and corporal pilots, who are already experienced officers, hearing the 911 call come in in real time and making a quick snap decision about whether to get a drone up and going towards that location right now, and to be over that target, on average, within 72 seconds of the call. That's a huge change. It has tons of opportunity, not only for the public service piece of this; people like fast police response, to be clear. So that's an important primary goal.
But still unknown and unproven is whether we can get better operational outcomes out of this as well. Can we get better suspect IDs in violent crimes? Instead of sending a bunch of officers into a foot pursuit, can we use the drone technology to make for a safer environment, not only for the officers but for the subject themselves? So there are all kinds of questions to come. I think we'll get some of those answers in the next six months.

SPEAKER_01

That's awesome. I'm excited to hear about that. We'll have to have you back on later this year, then. Anytime. Anytime. All right, Ian, thanks so much for coming. Thanks. Thanks for listening to the Jeffalytics Podcast. Be sure to subscribe, and to learn more, head over to ahdatalytics.com for more information and previous episodes. If you like what you heard, please leave a glowing review, which will help others discover the show. Until next time, I'm Jeff Asher.