How We Determined Crime Prediction Software Disproportionately Targeted Low-Income, Black, and Latino Neighborhoods

The expansion of digital record keeping by police departments across the U.S. in the 1990s ushered in the era of data-driven policing. Huge metropolises like New York City crunched reams of crime and arrest data to find and target “hot spots” for extra policing. Researchers at the time found that this reduced crime without necessarily displacing it to other parts of the city—although some of the tactics used, such as stop-and-frisk, were ultimately criticized by a federal judge, among others, as civil rights abuses.

The next development in data-informed policing was ripped from the pages of science fiction: software that promised to take a jumble of local crime data and spit out accurate forecasts of where criminals are likely to strike next. One of the first, and reportedly most widely used, of these tools is PredPol, its name an amalgamation of the words “predictive policing.” The software, derived from an algorithm used to predict earthquake aftershocks, was developed by professors at UCLA and released in 2011. The pitch: by sending officers to patrol these algorithmically predicted hot spots, departments could deter illegal behavior before it happened.
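The earthquake-aftershock lineage refers to self-exciting point-process models, in which each recorded crime temporarily raises the predicted rate of crime nearby, on top of a baseline rate, and that boost decays over time. The sketch below is a minimal illustration of that idea only; the grid cells, parameter names, and values are assumptions for demonstration and are not PredPol's actual model, data, or code.

```python
# Illustrative sketch of a self-exciting ("aftershock"-style) crime forecast.
# All parameters and the toy event history are assumptions, not real data.
import math

# Hypothetical historical events: (grid_cell_id, days_ago_it_occurred)
past_events = [
    ("cell_03", 1), ("cell_03", 2), ("cell_03", 9),
    ("cell_07", 4), ("cell_11", 1), ("cell_11", 30),
]

MU = 0.05      # assumed background rate of crime per cell per day
THETA = 0.4    # assumed expected number of follow-on crimes triggered per event
OMEGA = 0.2    # assumed decay rate: how quickly an event's influence fades

def intensity(cell, events):
    """Predicted intensity for one cell today: the background rate plus an
    exponentially decaying boost from each past event recorded in that cell."""
    boost = sum(
        THETA * OMEGA * math.exp(-OMEGA * days_ago)
        for c, days_ago in events
        if c == cell
    )
    return MU + boost

cells = sorted({c for c, _ in past_events})
scores = {c: intensity(c, past_events) for c in cells}

# Flag the highest-intensity cells as the day's predicted "hot spots."
for cell, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:2]:
    print(f"{cell}: predicted intensity {score:.3f}")
```

Because the boost comes entirely from past recorded crimes, cells with more historical reports keep scoring highest, which is exactly the feedback loop critics describe in the next paragraph.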

But critics of law enforcement had their own prediction: that the algorithms would send officers to patrol the same neighborhoods they say police have always targeted, those populated by people of color. Because the software relies on past crime data, they said, it would reproduce police departments’ ingrained patterns and perpetuate racial injustice, covering it with a veneer of objective, data-driven science.

Read the full report and explore the data here.

Read the accompanying news article here.
