
Algorithmic Bias and the Erosion of Procedural Fairness in Predictive Policing

  • Human Rights Research Center

Author: Meesha Falik

April 22, 2026


[Image credit: Kindel Media via Pexels]

While AI has proved useful for drafting dreaded emails, getting started on a school paper, or simply sparking creative ideas, law enforcement agencies around the globe have adopted it for something with far more serious implications: crime prediction and analysis. Predictive policing tools are fed large volumes of historical crime data, which they use to identify patterns and predict crimes before they occur.


Law enforcement agencies have argued that this makes their policing both more efficient and more just. First, they can now target deployment strategies to the areas most likely to see criminal activity. Second, because the analyses are based on data, decisions are supposedly removed from personal biases and judgment calls.


Major police departments, with the Los Angeles Police Department (LAPD) leading the charge and the New York Police Department (NYPD) among the early adopters, embraced predictive policing tools over a decade ago, yet critics remain wary of their “bias-free policing” claims. If the prejudices that have historically guided policing are systemic, then simply removing bias-inducing attributes (such as race, age, or gender) from the data is not enough to create a fair system.


By analyzing official NYPD statistics, this article examines whether the department’s machine learning tools have, in fact, introduced procedural fairness into its activities. In particular, it discusses how biases are retained within the statistics themselves and what systemic injustice means in the age of AI.


Predictive Policing and Algorithmic Decision-Making


“Predictive policing” is the use of machine learning algorithms to “predict” criminal activity, including the type of activity and its geographical location, guiding law enforcement deployment and policing strategy (Singh & Mahajan, 2025). Historical data from crime reports, stops, and arrests are used to identify “hot spots” of likely criminal activity, which then inform deployment decisions and resource allocation.
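To make the idea concrete, the short Python sketch below shows the simplest version of hot-spot identification: counting historical incidents per geographic cell and flagging the top-ranked cells for extra patrol. The data and cell names are invented for illustration, and real systems weigh many more factors, such as time of day, recency, and offense type.

```python
from collections import Counter

# Hypothetical historical incident records: (grid_cell, offense_type)
historical_incidents = [
    ("cell_07", "burglary"), ("cell_07", "robbery"), ("cell_07", "burglary"),
    ("cell_12", "assault"),  ("cell_03", "burglary"), ("cell_07", "assault"),
    ("cell_12", "robbery"),  ("cell_03", "theft"),
]

def top_hot_spots(incidents, k=2):
    """Rank grid cells by historical incident count and return the top k."""
    counts = Counter(cell for cell, _ in incidents)
    return counts.most_common(k)

print(top_hot_spots(historical_incidents))
# [('cell_07', 4), ('cell_12', 2)] -> these cells would receive extra deployment
```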


Data from past police records is fed into the algorithm, and the algorithm’s output then shapes where police are active. This creates a loop: new data is continuously fed back into the system, which, at least in theory, refines the algorithm so that it learns from the evolving data and produces more accurate predictions (Ensign et al., 2017). But whether these predictive algorithms are free of bias is a more complex question than simply removing bias-inducing characteristics from consideration. If the underlying data is prejudiced, so are the predictions.
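The toy simulation below, written in the spirit of Ensign et al.’s “runaway feedback loop,” illustrates how this can go wrong: two districts with identical underlying crime rates start with slightly uneven records, patrols follow the records, and only patrolled districts generate new records. All numbers are invented for illustration, not drawn from any real dataset.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = {"district_A": 0.5, "district_B": 0.5}  # identical underlying rates
recorded = {"district_A": 12, "district_B": 10}           # slightly uneven starting history

for year in range(10):
    # Send patrols to whichever district the records say is the "hot spot".
    patrolled = max(recorded, key=recorded.get)
    # Only the patrolled district generates new records; unobserved incidents go unrecorded.
    new_records = sum(random.random() < TRUE_CRIME_RATE[patrolled] for _ in range(20))
    recorded[patrolled] += new_records

print(recorded)
# district_A keeps accumulating records while district_B barely moves,
# even though the two districts have the same true crime rate.
```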


Predictive Policing in the New York Police Department


As the largest police force in the country, the NYPD has also been one of the first and most prominent adopters of data-driven policing technologies. Among its most notable tools is Patternizr, an AI-based pattern recognition system designed to analyze crime complaints and identify related incidents across the city (Charles, 2019). Using a decade’s worth of data, this machine learning tool was designed to connect the dots and identify patterns across thousands of crimes committed during that period.


Patternizr compares crime data to identify similarities in factors such as modus operandi, geography, and time of occurrence, allowing investigators to determine whether crimes are potentially connected (Griffard, 2019). Theoretically, the system works. However, it does not account for historically biased practices, which may shape the predictions the system produces (Griffard, 2019).
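Because the internals of Patternizr are not described in detail here, the sketch below is only a hypothetical illustration of the general idea: scoring a pair of complaints on modus operandi, distance, and time gap, and surfacing high-scoring pairs for analyst review. The feature names, weights, and cut-offs are assumptions, not the NYPD’s actual model.

```python
import math
from datetime import datetime

def similarity(a, b):
    """Combine simple MO, distance, and time-gap signals into a score between 0 and 3."""
    mo_match = 1.0 if a["mo"] == b["mo"] else 0.0
    dist_km = math.dist(a["location"], b["location"])   # locations given as (x, y) km offsets
    geo_score = max(0.0, 1.0 - dist_km / 5.0)           # fades to zero beyond ~5 km
    days_apart = abs((a["date"] - b["date"]).days)
    time_score = max(0.0, 1.0 - days_apart / 30.0)      # fades to zero beyond ~30 days
    return mo_match + geo_score + time_score

complaint_1 = {"mo": "forced rear entry", "location": (0.0, 0.0), "date": datetime(2024, 3, 1)}
complaint_2 = {"mo": "forced rear entry", "location": (1.2, 0.9), "date": datetime(2024, 3, 8)}

# Higher-scoring pairs would be surfaced to analysts for manual review.
print(round(similarity(complaint_1, complaint_2), 2))
```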


Empirical Analysis: Pre- and Post-AI Policing Outcomes in New York City


To assess whether predictive policing may reinforce biased policing outcomes, publicly available NYPD datasets were examined across two time periods: a pre-AI period (2003–2012) and a post-AI period following predictive policing adoption (2016–2024).


According to the NYPD’s Stop, Question, and Frisk dataset, 2011 saw nearly 25,000 stops per 100,000 Black individuals (the majority of them men), compared with 3,128 stops per 100,000 white people. Since then, stop-and-frisk rates have declined rapidly, falling by 2017 to fewer than 500 stops per 100,000 Black people and fewer than 50 per 100,000 white people (Data Collaborative for Justice, 2017). Concerns regarding the NYPD’s practice of racial profiling are not new; up until 2006, Black and Latino communities were subjected to 80% of stop-and-frisks (New York Civil Liberties Union, 2007).


While numbers initially dipped following the introduction of AI-assisted policing practices in 2016, 2024 saw the highest number of stop-and-frisks since 2014: just over 25,000 stops, a 50% increase over the previous year, with nearly 9 of every 10 people stopped being Black or Latino (Venkat, 2025).


Racial bias within the stop-and-frisk program can be further assessed by the number of arrests that resulted from the stops. Statistics show that, despite a record high of over 685,000 total stops in 2011, just over 80,000 resulted in an arrest or summons - a rate of just 11.72% (New York Civil Liberties Union, 2025). Between 2003 and 2013, the overwhelming majority of stops resulted in no arrest at all, and while the share of stops ending in arrest has since increased, the number of stops has also begun climbing again in recent years (New York Civil Liberties Union, 2025). Only 6% of arrests in 2024 involved white individuals.
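For readers who want to check the arithmetic, the snippet below reproduces the 2011 hit rate from the approximate figures cited above (about 685,000 stops and just over 80,000 arrests or summonses).

```python
# Approximate 2011 figures cited above (New York Civil Liberties Union data).
stops_2011 = 685_000           # total reported stops (approximate)
arrests_or_summonses = 80_000  # stops ending in an arrest or summons (approximate)

hit_rate = arrests_or_summonses / stops_2011
print(f"hit rate: {hit_rate:.1%}")       # roughly 11.7% of stops led to any enforcement action
print(f"no action: {1 - hit_rate:.1%}")  # roughly 88% of people stopped faced no charge
```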

 

The Bias Behind the Numbers


At first glance, the declining stop-and-frisk rates seem to indicate a policy that works. However, a look beyond the basic numbers paints a completely different picture. For one, while overall rates fell, Black and Latino individuals still make up the majority of stops, even though 69% of stops in 2024 resulted in no arrest or summons (New York Civil Liberties Union, 2025).


Second, the reduction in stop-and-frisk rates came from policy changes and legal rulings, in particular Floyd v. City of New York (2013), which found the NYPD’s stop-and-frisk practices unconstitutional and in violation of the Fourth Amendment. A period of policy reform followed, bringing body cameras and an overall decline in stops, but it may not have eradicated the issue at its core.


The disproportionate effect of stop-and-frisk on Black and Latino communities can also be assessed by the location of the precincts where most of these stops occur. According to the NYPD’s stop-and-frisk data, Black people have consistently made up close to 50% of all stops, even when total stops fell. Of the ten precincts with the highest stop rates, nine were located in predominantly Black and brown neighborhoods with over 80% residents of color, and six of the ten were in neighborhoods almost exclusively home to people of color (over 90%) (New York Civil Liberties Union, 2025).


Interpreting the Data 


Patternizr has been in use by the NYPD since 2016, although its existence only became public when two of its creators published their work (McCormick, 2019). Its developers claimed they had built in safeguards to keep racial bias out of the system’s findings. But predictive policing is not as straightforward as an algorithm with safety nets: researchers argue that “fairness” is not simply a property of the data but is highly context-dependent, and should be treated as such (Hung & Yen, 2023).


In fact, a growing body of research suggests that algorithmic biases, which stem from the choice of variables, outcomes, and training methods used in machine learning, are far simpler to control than structural biases, which reflect discriminatory social practices and systemic injustice and become embedded in, and “color,” the data used to train such systems (Hung & Yen, 2023; Soon, 2020).


The NYPD’s stop-and-frisk data over the past 20 years is suggestive of this type of bias. Although only a small fraction of stops have historically led to an arrest, the majority of stops have involved Black and Latino men and have been concentrated in precincts located in neighborhoods predominantly home to people of color. When this data is fed into a machine learning model, the model learns where the most stops have occurred and aligns its predictions accordingly.
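A small illustration of this point: in the sketch below, race has already been dropped from a set of invented stop records, yet a model that simply scores precincts by their share of past stops still concentrates its “risk” on the precinct that was over-policed to begin with. The precinct labels and counts are hypothetical.

```python
from collections import Counter

# Hypothetical stop records with race already removed -- only the precinct remains.
past_stops = (
    ["precinct_40"] * 90    # heavily patrolled precinct in a community of color
    + ["precinct_19"] * 10  # lightly patrolled precinct
)

def predicted_risk(stops):
    """Score each precinct by its share of historical stops."""
    counts = Counter(stops)
    total = sum(counts.values())
    return {precinct: count / total for precinct, count in counts.items()}

print(predicted_risk(past_stops))
# {'precinct_40': 0.9, 'precinct_19': 0.1} -> the "race-blind" model still directs
# 90% of its attention to the precinct that was over-policed in the first place.
```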


The targeting of certain communities through increased police presence and activity shapes the data used to develop predictive models, which then regurgitate these historical policing biases (Bozkır et al., 2025). A feedback loop is created in which predictive tools flag historically over-policed communities as high-risk, thereby turning biased data into structural bias (Bozkır et al., 2025).


AI: To Use or Not to Use


Despite expert claims, predictive policing cannot be entirely free of bias, particularly when that bias stems from systemic injustice and discriminatory practice. People, civilians and law enforcement officials alike, tend to place more trust in such policing tools because they appear to rest on empirical “data” free from overt human influence, but that assumption ignores a basic limitation of AI: it learns from past human behavior.


Civil liberties concerns, combined with issues of transparency, present further challenges. The lack of public oversight over the development and implementation of these machine learning tools has raised serious questions about how such systems operate (Levinson-Waldman & Posey, 2017). Many experts also argue that predictive policing violates the Fourth Amendment’s requirement of “reasonable suspicion” for police stops, a standard that algorithmic predictions of crime do not satisfy (Levinson-Waldman & Posey, 2017).

Further ethical concerns make the issue doubly complex: the development of predictive policing tools by private companies rather than public institutions, which may operate on agendas other than public service; the use of personal data; and over-reliance on a system whose very design is inherently biased. Breaking this loop calls for more than “clean” data. It requires widespread systemic change, corrective policy reform, and an active commitment to unbiased policing practices. Whether that goal is achievable remains to be seen.


Glossary


  • Algorithmic Bias: Systematic errors in algorithms that create unfair outcomes.

  • Artificial Intelligence (AI): Computer systems capable of performing intellectual functions typically characteristic of humans.

  • (Crime) Hot Spots: Areas identified through data analysis as likely locations of criminal activity.

  • Deployment Strategy: (of law enforcement) Determining when, where, and how to deploy patrol and police resources.

  • Feedback Loop: A cycle where algorithm predictions reinforce the data used to train them.

  • Fourth Amendment: A U.S. constitutional protection against unreasonable searches and seizures.

  • Machine Learning: A type of AI that learns patterns from data to make predictions.

  • Modus Operandi (MO): (in crime) The distinct behavioral pattern and strategy used by a person to commit criminal activity.

  • Patternizr: An NYPD machine learning tool that identifies patterns in crime reports.

  • Predictive Policing: The use of algorithms and data analysis to forecast criminal activity.

  • Procedural Fairness: Fairness in administrative decision-making.

  • Racial Profiling: Targeting individuals for police action based on race or ethnicity.

  • Stop-and-Frisk: A policing practice where officers stop and search individuals suspected of wrongdoing.

  • Structural Bias: Inequality embedded in systems or historical practices that affects outcomes.


Sources


  1. Bozkır, E., Chen, Y., Ghulam, S., Ma, Q., Razmi, R., Swaminathan, R., & Thiru, I. (2025). Predictive Policing or Predictive Prejudice? A Study of the Legal, Historical and Ethical Implications of AI in Policing. OxJournal. https://www.oxjournal.org/predictive-policing-or-predictive-prejudice/

  2. Charles, J. B. (2019, March 19). NYPD's Big Artificial-Intelligence Reveal. Governing. https://www.governing.com/archive/gov-new-york-police-nypd-data-artificial-intelligence-patternizr.html

  3. Data Collaborative for Justice. (2017). City Wide Rates of the Stop, Question and Frisk Action by the NYPD. Data Collaborative for Justice. https://datacollaborativeforjustice.org/dashboard-sqf/

  4. Ensign, D., Friedler, S., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2017). Runaway feedback loops in predictive policing. Proceedings of Machine Learning Research. https://arxiv.org/abs/1706.09847

  5. Floyd v. City of New York. (2013). Harvard Law Review, 126(3). https://harvardlawreview.org/print/vol-126/southern-district-of-new-york-certifies-class-action-against-city-police-for-suspicionless-stops-and-frisks-of-blacks-and-latinos-ae-floyd-v-city-of-new-york-82-fed-r-serv-3d-west-833/

  6. Griffard, M. (2019). A Bias-Free Predictive Policing Tool? An Evaluation of the NYPD's Patternizr. Fordham Urban Law Journal, 47(2). https://ir.lawnet.fordham.edu/ulj/vol47/iss1/2

  7. Hung, T.-W., & Yen, C.-P. (2023). Predictive policing and algorithmic fairness. Synthese, 201, 206. https://doi.org/10.1007/s11229-023-04189-0

  8. Levinson-Waldman, R., & Posey, E. (2017). Predictive Policing Goes to Court. Brennan Center for Justice. https://www.brennancenter.org/our-work/analysis-opinion/predictive-policing-goes-court

  9. McCormick, J. (2019, October 31). NYPD Built Bias Safeguards Into Pattern-Spotting AI System. Wall Street Journal. https://www.wsj.com/articles/nypd-built-bias-safeguards-into-pattern-spotting-ai-system-11572514202

  10. New York Civil Liberties Union. (2007, May 1). Long-Awaited "Stop-and-Frisk" Data Raises Questions About Racial Profiling and Overly Aggressive Policing, NYCLU Says. NYCLU. https://www.nyclu.org/press-release/long-awaited-stop-and-frisk-data-raises-questions-about-racial-profiling-and-overly

  11. New York Civil Liberties Union. (2025, May 27). Stop and Frisk Data. New York Civil Liberties Union. https://www.nyclu.org/data/stop-and-frisk-data

  12. Singh, Y., & Mahajan, A. (2025). AI And Predictive Policing: Balancing Technological Innovation And Civil Liberties. International Journal of Legal Studies and Social Sciences, 3(6), 236-246. https://ijlsss.com/ai-and-predictive-policing-balancing-technological-innovation-and-civil-liberties/

  13. Soon, V. (2020). Implicit bias and social schema: A transactive memory approach. Philosophical Studies, 177, 1857-1877. https://doi.org/10.1007/s11098-019-01288-y

  14. Venkat, S. (2025, April 2). NYPD Stop-and-Frisks Soared in 2024. New York Focus. Retrieved March 12, 2026, from https://nysfocus.com/2025/04/02/nypd-stop-and-frisk-eric-adams
