
Researchers use AI to predict crime, biased policing in major U.S. cities like L.A.

An investigation is underway after a woman was shot and killed while driving on the 710 Freeway and then crashed along the Anaheim Street offramp in Long Beach early Tuesday morning. A group of social and data scientists has developed a machine learning tool it hoped would better predict crime.
(Irfan Khan/Los Angeles Times)

For once, algorithms that predict crime might be used to uncover bias in policing, instead of reinforcing it.

A group of social and data scientists developed a machine learning tool it hoped would better predict crime. The scientists say they succeeded, but their work also revealed inferior police protection in poorer neighborhoods in eight major U.S. cities, including Los Angeles.

Instead of justifying more aggressive policing in those areas, however, the hope is the technology will lead to “changes in policy that result in more equitable, need-based resource allocation,” including sending officials other than law enforcement to certain kinds of calls, according to a report published Thursday in the journal Nature Human Behaviour.


The tool, developed by a team led by University of Chicago professor Ishanu Chattopadhyay, forecasts crime by spotting patterns amid vast amounts of public data on property crimes and crimes of violence, learning from the data as it goes.

Chattopadhyay and his colleagues said they wanted to ensure the system would not be abused.

“Rather than simply increasing the power of states by predicting the when and where of anticipated crime, our tools allow us to audit them for enforcement biases, and garner deep insight into the nature of the (intertwined) processes through which policing and crime co-evolve in urban spaces,” their report said.

For decades, law enforcement agencies across the country have used digital technology for surveillance and prediction in the belief that it would make policing more efficient and effective. But in practice, civil liberties advocates and others have argued that such policies are informed by biased data that contribute to increased patrols in Black and Latino neighborhoods or false accusations against people of color.


Chattopadhyay said previous efforts at crime prediction didn’t always account for systemic biases in law enforcement and were often based on flawed assumptions about crime and its causes. Such algorithms gave undue weight to variables such as the presence of graffiti, he said. They focused on specific “hot spots,” while failing to take into account the complex social systems of cities or the effects of police enforcement on crime, he said. The predictions sometimes led to police flooding certain neighborhoods with extra patrols.

His team’s efforts have yielded promising results in some places. The tool predicted future crimes as much as one week in advance with roughly 90% accuracy, according to the report.

Running a separate model led to an equally important discovery, Chattopadhyay said. By comparing arrest data across neighborhoods of different socioeconomic levels, the researchers found that crime in wealthier parts of town led to more arrests in those areas, at the same time as arrests in disadvantaged neighborhoods declined.


But the opposite was not true. Crime in poor neighborhoods didn’t always lead to more arrests, suggesting “biases in enforcement,” the researchers concluded. The model is based on several years of data from Chicago, but researchers found similar results in seven other major cities: Los Angeles; Atlanta; Austin, Texas; Detroit; Philadelphia; Portland, Ore.; and San Francisco.

The danger with any kind of artificial intelligence used by law enforcement, the researchers said, lies in misinterpreting the results and “creating a harmful feedback of sending more police to areas that might already feel over-policed but under-protected.”

To avoid such pitfalls, the researchers decided to make their algorithm available for public audit so anyone can check to see whether it’s being used appropriately, Chattopadhyay said.

“Often, the systems deployed are not very transparent, and so there’s this fear that there’s bias built in and there’s a real kind of risk — because the algorithms themselves or the machines might not be biased, but the input may be,” Chattopadhyay said in a phone interview.

The model his team developed can be used to monitor police performance. “You can turn it around and audit biases,” he said, “and audit whether policies are fair as well.”

Most machine learning models in use by law enforcement today are built on proprietary systems that make it difficult for the public to know how they work or how accurate they are, said Sean Young, executive director of the University of California Institute for Prediction Technology.


Given some of the criticism around the technology, some data scientists have become more mindful of potential bias.

“This is one of a growing number of research papers or models that’s now trying to find some of that nuance and better understand the complexity of crime prediction and try to make it both more accurate but also address the controversy,” Young, a professor of emergency medicine and informatics at UC Irvine, said of the just-published report.

Predictive policing can also be more effective, he said, if it’s used to work with community members to solve problems.

Despite the study’s promising findings, it’s likely to raise some eyebrows in Los Angeles, where police critics and privacy advocates have long railed against the use of predictive algorithms.

In 2020, the Los Angeles Police Department stopped using a predictive-policing program called PredPol that critics argued led to heavier policing in minority neighborhoods.

At the time, Police Chief Michel Moore insisted he ended the program because of budgetary problems brought on by the COVID-19 pandemic. He had previously said he disagreed with the view that PredPol unfairly targeted Latino and Black neighborhoods. Later, Santa Cruz became the first city in the country to ban predictive policing outright.


Chattopadhyay said he sees how machine learning evokes “Minority Report,” the science fiction story and film set in a dystopian future in which people are hauled away by police for crimes they have yet to commit.

But the effect of the technology is only beginning to be felt, he said.

“There’s no way of putting the cat back into the bag,” he said.
