
Police In-Service Training
This podcast is dedicated to providing research evidence to street-level police officers and command staff alike. The program is intended to provide research in a jargon-free manner that cuts through the noise, misinformation, and misperceptions about the police. The discussions with policing experts will help the law enforcement community create better programs, understand challenging policies, and dispel myths of police officer behavior.
Episode 7: Artificial Intelligence in Policing
Technological advancements have always found their way into policing, and Artificial Intelligence is no exception.
Dr. Ian Adams joins the podcast today to discuss some of the seminal research exploring AI in policing. Ian is an Assistant Professor in the Department of Criminology and Criminal Justice at the University of South Carolina. Ian is also a 2023 National Institute of Justice LEADS (Law Enforcement Advancing Data and Science) Academic, and he is the Managing Editor for Police Practice & Research: An International Journal. In a prior life Ian was a police officer who worked in Utah.
Beyond simple questions of "does AI work to make policing more efficient?," Ian explains that AI can accidentally do a better job in some parts of policing, but this may open the door to legal questions about the development of suspicion.
Don't forget to like, FOLLOW, and share. Sharing this podcast or an episode is one of the best compliments I can receive, and it will help grow the show.
And don't forget to provide a review. Giving five stars is never a bad idea.
Feel free to email me your comments using the "send us a text" option, or at the following email address: policeinservicetrainingpodcast@gmail.com
Bluesky: @policeinservice.bsky.social
Welcome to the Police In-Service Training Podcast.
This podcast is dedicated to providing research evidence to street level police officers and
command staff alike.
The program is intended to help the police and law enforcement community create better
programs, understand challenging policies, and dispel the myths of police officer behavior.
I'm your host, Scott Phillips.
If you are familiar with the movie, The Terminator, you may remember that Skynet is a defense
computer that, through an artificial intelligence system, becomes self-aware.
When humans try to pull the plug, Skynet fights back.
Like any other sentient being, when it is attacked, it defends itself.
Fortunately, we're not too concerned about computers and machines taking over.
Well, not yet anyway.
But in the meantime, while college students use AI to write their term papers, the police
are exploring ways to apply artificial intelligence to improve their own efficiency.
This is no surprise.
Private-use AI and government AI applications are both being used in the name of efficiency.
It's inevitable that new technology is being tapped into as a method for improving police
efficiency.
In the early 20th century, cars allowed officers to patrol more broadly.
In-car computers of the 1980s allowed officers to sidestep the dispatcher to check license
plates or driver's licenses.
More recently, we see the use of video technology in dash cameras, body cameras, and doorbell
cameras.
And now drones are being used as part of search and rescue or for monitoring isolated locations.
But it takes time to know precisely where the technology will fit and how it will become
more useful, for better or worse, for the police.
Fortunately for us, there are policing experts who are now exploring the use of AI in policing.
Joining me today is Ian Adams, an assistant professor in the Department of Criminology
and Criminal Justice at the University of South Carolina.
Ian is a 2023 National Institute of Justice LEADS academic, and he is the managing editor
for Police Practice & Research: An International Journal.
In a prior life, like many of us, Ian was a police officer and he worked in Utah.
Welcome to the podcast, Ian, and I'm sorry I had to abbreviate your resume, which is
actually quite impressive considering you finished your Ph.D. just a few years ago.
Yeah, 2/22 of 2022 makes it easy to remember.
That's quite easy to remember.
Now when I first approached you about being a guest and talking about artificial intelligence,
you had mentioned that your dissertation revolved around AI and body cameras even before
AI was around, or at least being accessible to the police agencies.
And I thought that's a pretty risky approach to a dissertation, studying something that
didn't exist.
So I do have to ask, what were you thinking?
Yeah, I have always sort of used academia to explore my inner child in a way.
I like to ask questions and I don't like to be shut up.
And so I've been an AI hobbyist since the early 90s as a middle schooler, playing around
with early Sound Blaster sound cards, which came with this little program called Dr. Sbaitso,
which was really just like a forked programming language where you could sort of ask questions
and it would have a set number of answers and I was just fascinated and I'd spend hours
trying to drive this thing into a corner that it couldn't escape.
And I've, you know, since then, in the last 30, 40 years, I've continued to be sort of
a technologist.
I like to build computers, play video games, adopt the latest technology, but I'm also
a former police officer and I'm at this point also an academic.
And so I could kind of see body cameras hitting a wall.
And what I meant was, even as an early adopter of body cameras as a police officer starting
in, you know, 2011 ish, one of the first things I noticed is that it put in an immense drain
on officers and agencies, right?
And what I mean by that is as soon as you produce a body worn camera video, you're creating
a public record.
Now you've got public record requests.
You've got to clean it up somehow and send it to prosecutors and defense attorneys and
journalists.
And all of that back then, you know, 15 years ago was falling on me, but eventually it would
create positions within police departments to try and review this body worn camera because
now we have training demands and internal affairs reviews, et cetera.
And so I could see that we were creating the world's largest repository of information
about policing and yet we had no way to humanly review it.
And so I just sort of started thinking about like, how are we going to do that?
There's too much demand for those reviews of body worn cameras, whether internally or
externally.
And we weren't as a nation or as a people just going to leave it there to rot, right?
Somebody is going to want to get to it.
And I kind of saw what I would have called machine learning approaches as the first obvious
step, because these can, once developed, be cheaply strewn across this lake of data to
extract important information.
And so, yeah, I kind of took that risk.
As small a risk as it might seem now to write that kind of dissertation.
And I have to add, like, I'm not that smart because if I'd been really smart, I probably
would have left academia and made real money trying to develop that very technology because
unbeknownst to me at the time, there were people working on exactly that technology
and we're sort of seeing the first commercial applications of that, of those efforts today.
Nice.
OK, so because artificial intelligence is so new in policing, it's likely only accessible
to the larger police agencies that are out there.
So can you speculate?
And this is probably just speculation on why this area of inquiry and again, not necessarily
research.
We'll get into that shortly.
Why this area is even of value and relevance to police now?
Well, I hope you don't mind if I push back on you a little bit there.
I don't think AI is primarily only available at the largest agencies.
In fact, let's think about ChatGPT.
Most people by now hopefully are aware of what it is.
But in late November 2022, when it launched as a public product, no one knew what it was.
And yet I found evidence as early as February 2023, just within three months, that police
officers around the country were already using ChatGPT to try and create police reports.
I love police officers.
And one thing I love about them is that they are intensely good short term problem solvers.
And so we train them that way, right?
We want our patrol officers to get a dispatch call that says, you know, here's an
unknown problem, unknown setting, please go solve it.
And every patrol officer in America gets that call and says, 10-4 on my way, right?
So when we present them, though, in real life with other problems, like here's escalating
workloads, escalating call volumes, fewer officers coming into the profession
and more leaving it than ever before.
That's a problem that they need to solve.
And one way to kind of creatively solve that is to reach out to a product like ChatGPT
and say, hey, can you help me with this part of my work that takes up 25 to 30, 40% of
my day and make it more streamlined, effective and efficient?
So yes, there are tools out there that are going to be quite expensive, especially in
their initial sort of commercial offering.
But I would sort of warn against this instinct that this is something like maybe really expensive
data centers or something like that, like AI as a concept stretches across all kinds
of tools, some of which are free.
And like, we have good evidence that they're being used by officers of every size of agency.
You know, it's a good thing you mentioned that.
I appreciate the pushback because somebody of my age, I was not familiar with this stuff,
didn't use it.
Obviously, you know, years ago it didn't exist.
But the population of officers now are younger and much more tech savvy.
And I can now picture that in my head.
As you say, the officers are problem solvers.
They're innovative.
There's no question about it.
If an officer wants to try something different, if there's some downtime, they're going to
do it.
And so the idea of younger tech savvy officers exploring this option is something that hadn't
crossed my mind.
And you know, hopefully agencies will appreciate that and tap into them as a potential source.
Okay, so we want to talk about your research.
Now this is just a small part of it, and it wasn't your actual research;
it was what you were writing about artificial intelligence and report writing and its use
in reviewing body camera images.
In both of your, or two or three of your articles, anyway, I was reading a few things and one
of them came across to me as a red flag for officers.
And maybe you didn't mean it this way, but can you explain what generative suspicion
is?
Oh, yeah.
So actually that's a coined term from Andrew Ferguson, another, he's a legal scholar at
American University and wrote a book back in, I think, 2017 or 2018 about technology
and its uses within policing to both sort of as an internal and external control, meaning
sometimes technology gets used in a way to examine or hold police accountable, right?
And sometimes it's used in crime fighting external ways as well.
But Ferguson's primary concern with this technology is that it might serve
to generate, say, reasonable suspicion or probable cause, which historically we've relied
on humans to do.
And it maybe is best to illustrate this.
I was reading an article recently about AI report writing and it being adopted in Oklahoma.
And they quoted this sergeant, a canine sergeant, so a man after my own heart, canine is the
finest of the fine.
He was relating an incident about how, why he liked this AI report writing tool.
And this is what he said.
He said, I was on a call and I didn't even notice that another officer told me the suspect
vehicle was a red vehicle and it had just gone by.
I don't recall that at all.
But AI picked up on that in the transcript of my body worn camera and placed it into
my report.
And he was reviewing that.
He was saying that was quite positive.
And I sort of was horrified in a way, because another way to think about what that AI just
did is create a giant Brady issue in his report, meaning the report is supposed to be a collection
of the officer's observation and facts.
What do we do with this problem where the AI might notice something in your video through
text, essentially, through transcribing of audio, places that in your report and you
sign at the bottom that this is a true recollection of the events, a true recollection of your
observations.
That's generative, maybe not suspicion, but it's certainly generative information, meaning
AI generated information.
It's both true.
It occurred in the world and it's also false.
It didn't arise as an observation of the officer themselves.
I think it's a fascinating problem.
I don't actually have a solution to it yet.
I think it's something that we as a profession have to kind of struggle with a little bit.
But you can also imagine more critical voices.
I would sort of characterize Andrew Ferguson as a more critical voice of policing.
You can see where somebody might hypothesize from that example and start worrying about
even more insidious uses, like the AI is going to create the fact pattern that creates probable
cause for an arrest.
Or even, again, I don't think these are necessarily rooted in a true reality of policing, but
some might worry about something like a really bad actor officer making spurious arrests
and then just sort of relying on the semantic proficiency of the AI to create the underlying
justification for that arrest.
I think that starts to get off into the fantastical, but I think it's rooted in this real world
example I just gave you where it is true that officers don't attend to every single fact
in their environment, but an AI can.
And it can sort of create that picture later, post facto, that the officer did attend to
it.
And so we have to kind of, I think, still work through that problem.
Right.
That's something I've been working on, along with some other issues I don't want to talk
about at the moment, but it deals with, okay, what are the defense attorneys, what are the
prosecuting attorneys going to do, where is this going to go in the courts, as far as what you just
said?
This is supposed to be coming from what you see and what you perceive, not something
you forgot that the AI caught.
It's not like I was driving down the street, you were my partner, you look out the window
one way, I'm looking the other way, and you point at something which I didn't see.
Okay, that's fine, it's another human being doing it.
So I, okay, again, fascinating, yes, I think that's something that's going to be interesting
down the line.
Right now?
Maybe not too much.
Who knows?
Right.
Yeah, I think for right now the best advice back to officers is I'm a fan of using these
tools.
I'm a fan of using them for reports, although we can kind of get into the study about some
of the maybe unforeseen consequences, but I'm a fan of technology.
Technology is at its root the way that policing becomes more efficient, and that's not something
special about policing.
That's just something true about human labor, right?
We use technology to become more efficient.
Cops happen to be humans, for better or worse, and so they also do.
And they're often early adopters.
I think the idea that police are sort of a conservative institution that
lags behind society is relatively false.
I mean, officers were early adopters of telephones, early adopters of vehicles, early adopters
of cameras, etc., to enhance both their ability to work, but also to respond to the criminals
out there using those technologies as well, right?
For every technology that comes along, the first application is pornography, and then
the second one is to commit fraud somehow.
And so police are naturally going to sort of be at the forefront of experiencing those
technological shifts, and they often respond by adopting the technology themselves.
OK, now, the second thing you mentioned was, I'll quote this, the first principles of police
report writing, unquote.
Now, when I read this, I had some flashbacks to my own Academy days and obviously years
ago, but like every standard police form, many of the forms have checkboxes, which just
make your life easier.
You know, this was stolen or this was evidence that I found or collected.
But obviously, there's going to be a substantial narrative.
So what did you mean by the first principles of report writing and how does how does this
apply to artificial intelligence?
Yeah, I probably could have been a little bit more clear about what I mean by that,
although it gives me the ability later on to just make it up as I go as well.
So I appreciate the callback.
Yeah, I think the first principle of report writing policing is that if you don't write
it down, it didn't happen.
And if we put that into the perspective of AI report writing, where it's writing
things that you didn't necessarily write, although you may edit it, you may approve
it, etc.
Does that mean those things happened?
It's sort of the reverse of "if you don't write it down, it didn't happen": if you wrote
it down, that means it happened.
The second principle is probably something like: your main job as an
officer, in many ways, is to become a documentarian of what you observe.
Every police action has an equal and opposite paperwork demand, right?
Every time you step out of the headquarters, you're probably writing a report about something
you're about to do.
Every training you attend, etc. is going to be recorded at some level.
And so the basic demand on officers, whether you're in patrol or investigations or management,
is always going to be a lot of writing.
Officers are writers first in many ways.
And so when we're thinking about AI, AI's first kind of public application is in writing.
So we shouldn't be surprised that a lot of these tools, there's a lot of companies out
there aiming to capture some part of this police writing marketplace.
And I suspect that in five to 10 years, there's going to be far less human writing than we're
doing now, which opens up all sorts of interesting possibilities about like sort of efficiencies,
capturing those efficiencies inside policing.
Now one of your studies, you mentioned that used artificial intelligence to review body
camera images.
And it was suggested in the article that it would be used to transcribe what the officer
says and measure the officer's level of professionalism.
In this study, you were looking at professionalism, right?
And so even if artificial intelligence could create a transcription, how is professionalism
going to be defined here?
Now, I bring this up for a larger perspective: a doctor can be a professional and can
still have a good sense of humor with their bedside manner. But they can also be, and
I've experienced this myself in my own family, very clinical, very
professional, not particularly sympathetic: you've got cancer and you've got six months to live.
That was professional.
Some people might call that cold hearted, but it was professional.
Now the same thing can apply to a police officer.
They can use humor to defuse a situation, but they can also be very professional, much
like a doctor and say, I'm sorry, I have to tell you that your son died in a car accident.
So this gets me to the question when I was reading this, your article, can artificial
intelligence be programmed to understand those kinds of differences?
Yes.
Now that's a different answer than is it right now?
And I think you're hitting upon essentially the key question in that study, because when
we say we're measuring professionalism, that is a word and a measure taken directly from
the claims of the manufacturer, right?
So it's not us imposing maybe a procedural justice framework or anything like that across
it.
In fact, it's pretty simple.
Here's how professionalism is defined across three levels in that program.
One is subpar professionalism.
This is language in which the officer uses profanity directed at the public or derogatory
terms. Not yelling or screaming, because it's not
really picking up tone, but derogatory language aimed at the public, right?
Calling somebody an idiot is derogatory,
even if you say it in a nice voice; there's still a derogatory intent.
So any kind of language that fails that test is considered below professional level.
Highly professional language in the other case would be something like an officer who
uses 25 or more words of explanation prior to taking some sort of police action, like
writing a ticket, making an arrest, conducting a Terry frisk.
So explanation becomes sort of the primary driver of professional language in this measure
at least.
And then standard professionalism is just neither of the other two levels are true.
So you didn't use derogatory language, you weren't mean, you weren't swearing at somebody,
but you also didn't necessarily offer 25 words of explanation prior to writing that ticket.
And so that's where that measure comes from.
We don't defend it as a perfect measure of professionalism by any means, but it is grabbable
by AI as a tool.
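The three-level measure Ian describes (derogatory language means subpar, 25-plus words of explanation before a police action means highly professional, otherwise standard) could be sketched as a toy classifier. This is purely illustrative: the vendor's actual system is not described here, and the word list is invented.

```python
# Hypothetical sketch of the three-level professionalism measure described
# above -- NOT the vendor's actual implementation. Assumes the transcript
# segment covers the officer's speech leading up to a police action.

# Illustrative word list; a real system would use a far richer model.
DEROGATORY_TERMS = {"idiot", "stupid", "moron"}

def rate_professionalism(transcript_before_action: str) -> str:
    """Classify a transcript segment as subpar / standard / highly professional."""
    words = transcript_before_action.lower().split()
    # Subpar: derogatory language aimed at the public.
    if any(w.strip(".,!?") in DEROGATORY_TERMS for w in words):
        return "subpar"
    # Highly professional: 25+ words of explanation before the action.
    if len(words) >= 25:
        return "highly professional"
    # Standard: neither of the other two conditions is true.
    return "standard"
```

Note how, under a rule like this, the motor officer's scripted stop in Ian's example would score highly professional simply because the script carries enough explanation.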
Think about this, going out across nearly 200,000 body camera videos, or more than 200,000,
I guess, in that study, because we're in two different sites, runs for a year, full RCT,
a full experiment, field experiment.
How much time would it take us, Scott, if we were to assign a team of people to review
200,000 body worn camera videos?
Now I think we could solve the unemployment levels for the rest of history if we wanted
to do it the human way, right?
For every hour of body camera video, we need two to three hours of human review.
No one would ever go unemployed again.
And just from editing these podcasts, I know it takes time, so it's almost immeasurable.
I would just say it's impossible, or at least improbable.
We're always going to be balancing a lack of perfect human judgment against something
about more effective or efficient AI or computerized judgment.
And I think most officers, when I talk to them, and most chiefs, most line officers,
would all agree, all else equal, not in the middle of a firefight, not in the middle of
some hardcore arrest, just the normal stuff we do every day.
To the degree possible, explaining what you're doing with somebody is usually going to gain
more compliance, better overall outcomes for everybody, right?
There's a reason in our study, for example, when we were running the pilot stages,
that motor officers tended to score highly professional.
It was something that we had to think about how to fix.
It's because every motor cop you know has a script in their head.
Hi, I'm Officer Adams, West Jordan Police Department.
The reason I stopped you today is I clocked you going 55 in a 45.
We value speed, or we value street safety for the nature of our community.
Do you happen to have your driver's license and insurance on you today?
I was not a motor cop, as you can tell.
But as canine cops, we drag our knuckles a little lower than that.
But you can see how that script would offer more explanation than maybe,
hey, I stopped you because you were speeding.
You got your license?
That's just going to have a different take from the public view.
Again, not a perfect measure.
The point of that study was actually not to even judge the efficacy of the measure itself,
but rather to see when we provide AI-generated feedback to the officer,
about their own behavior, does their behavior change?
And that's what we found it did.
That was the fascinating part for me about that study is not as an internal affairs tool,
not as scraping it to find out when officers made mistakes,
but actually seeing the officer as a professional.
And one definition of a professional is they're trying to constantly refine their craft.
If we can provide you, Scott, going back to the 1990s,
I'm not sure exactly when your career was.
But if I could go back and capture a decade of your words
and provide you some feedback about how you were talking to the public,
what would you do with that information?
And the answer we got back from the data in over one year,
two different fairly large agencies was that on the whole,
how officers reacted to that information
is that they increased the amount of explanation that they're giving to people.
They got better at it.
And so that's pretty fascinating little finding, right?
I think.
Yeah, I agree with that.
But the other study's goal was to assess whether the AI tools would
significantly reduce the time spent on report writing.
So moving off the body camera one, and comparing to traditional methods.
First, can you give a real quick explanation of what the study was?
And did the AI write a better report?
I guess that's what people might be asking.
Did the AI do a better job?
Well, first on the study, it's a relatively large agency, right?
Over 100 officers, but not huge.
It's Manchester PD up there.
And they were an early adopter.
They contacted me, actually.
I spend a lot of time, a lot of my time is basically working with agencies
to try and find ways to evaluate what matters to them, not necessarily me.
And so they wanted to know, hey, before we go spend
100, 200, $300,000 on this AI report writing software, does it work?
And the claim from the manufacturer is yes,
it's going to save you 80% of time on report writing.
And so I said, well, that's our outcome.
That's what we should measure, right?
That's a huge claim.
And it was unsupported by independent analysis, at least.
And so we set out for six weeks or so to sort of capture.
We randomly assigned officers.
They would either be using that tool or they had no access to that tool.
And then we just very simply compared, does the officers with access to this tool,
does their report time get lower compared to officers without access?
And so we found, surprisingly, kind of, that it did not.
There was no difference in the time it took to generate these reports.
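The design Ian describes, random assignment of officers to the AI tool or to no access, followed by a straight comparison of report times, can be sketched in a few lines. The numbers below are made up purely for illustration; they are not the study's data.

```python
# Hypothetical sketch of the randomized design described above --
# illustrative numbers only, not the study's actual data.
import random
import statistics

random.seed(42)

officers = [f"officer_{i}" for i in range(100)]
random.shuffle(officers)
treatment, control = officers[:50], officers[50:]  # random assignment

# Simulated report-completion times in minutes (invented for illustration).
times = {o: random.gauss(30, 8) for o in officers}

mean_treatment = statistics.mean(times[o] for o in treatment)
mean_control = statistics.mean(times[o] for o in control)
print(f"AI tool: {mean_treatment:.1f} min, control: {mean_control:.1f} min, "
      f"difference: {mean_treatment - mean_control:+.1f} min")
```

A real evaluation would add a significance test on the difference, but the core logic is just this: randomize who gets the tool, then compare average outcomes.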
But that's only, I think, half the answer to your question,
because what you said was, does this tool work?
And I think there's different ways to think about it working.
And the most obvious one would be sort of quality.
Did the report writing get better?
I don't know.
That's ongoing research.
That's in the field right now.
We're trying to work on that.
Another way to think about it is, there's a lot of downstream consumers of police reports.
You can think about juries and judges and prosecutors and media.
Do they think this creates a better report?
That's another aspect of our study that's ongoing, so I don't have answers yet.
But I agree that there are important questions that need to be answered
before we sort of start dropping huge amounts of public monies on these types of tools.
OK, so we've only got about five minutes left.
And if you could identify two, maybe three implications for police agencies,
whether it's the agency itself, the personnel, police leaders.
Yeah, the number one is don't.
At this stage, the only experimental evidence out there is ours.
And it is not supportive of the notion that
AI report writing is going to save you time, right?
So if time is your main goal,
there's not good evidence right now that that is going to be accomplished.
Now, technology advances.
Maybe next year I have a different answer, a different take on it.
But right now, that's the answer.
Number two, nonetheless, there are huge efficiencies to be captured here.
I think the body worn camera review is one of them.
Right now, a lot of different agencies have sort of semi-random
approaches to getting camera footage reviewed.
It's rare to see anything much more than single digit percentages getting captured.
I think it's worth it for police executives to know about their officer's behavior
out there on calls.
And there are AI solutions that have shown promise in doing just that.
And the third is, before dropping large amounts of money,
think carefully about getting some help in doing a good evaluation.
There's good police researchers out there.
If it's not me, I can name a dozen more
that would be happy to sort of help you design and run that kind of study
in order to get an answer to the question that underlies almost all my work,
which is, does this thing work?
Does it do what it's supposed to do?
Yeah.
Even though, as you said, there are different definitions to the phrase,
does this work?
This is important for police agencies to have these conversations,
which I'm glad you brought that up.
There are a lot of people out there,
a lot of academics with these kinds of skills in this kind of research
to tap into, whether it's funded or pro bono or whatever the case may be,
before agencies start expending substantial amounts of money.
I can see states and maybe even the federal government kind of like
under the Obama administration when they were funding body cameras.
They didn't fund everybody, obviously,
but this is still a lot of money to spend on technology
as you demonstrate right there.
In some cases, it's not so great.
In other cases, it makes a contribution in different ways.
So without dismissing the idea that AI is possible
or impossible for an agency to use,
it's good that these kinds of research studies
are coming out for the police agencies.
I think sometimes we think about these technologies as magic
and then we ascribe magical outcomes to them
and that should be avoided.
Most likely, given the history of technology adoption in policing,
we will see successes,
but they will be small, incremental, and over time,
and that's probably where we should align our expectations.
Great.
Ian, thank you very much.
We've been talking with Ian Adams down in South Carolina.
I really appreciate your time and your information
about artificial intelligence.
Loved it. Thanks, Scott.
Have a great day.
That's it for this episode of the Police In-Service Training Podcast.
I want to thank you, the listener, for spending your valuable time here.
If you like what you have heard,
please tell a friend to subscribe on Apple Podcasts
or wherever they get their podcasts.
And please take a moment to review this podcast.
If you have any questions or comments, positive or negative,
or if you think I should be covering a specific topic,
feel free to send me an email at
policeinservicetrainingpodcast@gmail.com
Police In-Service Training is all one word.
Or you can find me on Blue Sky using the handle
at policeinservice.bsky.social.
Have a great day.