The prevention of criminal activity has changed dramatically over the past two decades, largely due to increased reliance on systems that provide crime data analysis. Created specifically for policing, judicial sentencing, and corrections applications, these systems perform both predictive and retrospective analysis to aid decision-making within the criminal justice system. These software platforms typically combine spatial informatics packages and advanced statistical features behind user-friendly interfaces. Recent studies have demonstrated problems with both the flawed logic within these systems’ algorithms and the inherent biases in the underlying data.
Students will be able to articulate how the goals of predictive policing algorithms can differ from the impact of these algorithms on communities of color.
Students will be able to identify and describe the sources of potential bias in predictive algorithms.
Students will be able to demonstrate an understanding of ways to check for and mitigate algorithmic bias when designing and testing predictive algorithms (a minimal example of one such check follows these objectives).
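To make the third objective concrete: one widely used bias check, popularized by the ProPublica analysis of COMPAS listed below, is to compare error rates such as the false positive rate across demographic groups. The following Python sketch uses invented records and hypothetical group labels; it illustrates the general technique, not any vendor's actual method.

```python
# Hypothetical bias check: compare false positive rates across groups.
# All records below are invented for illustration.
records = [
    # (group, predicted_high_risk, reoffended)
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", True,  True),  ("B", False, False),
    ("B", False, False), ("B", True,  True),  ("B", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were labeled high risk."""
    negatives = [r for r in rows if not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives) if negatives else 0.0

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
# Output: A 0.67, then B 0.0 -- a large gap between groups signals
# disparate impact even when overall accuracy looks acceptable.
```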
Understanding the Problem Through Narratives
Student Pre-Activity 1 Survey:
What do you know and think about Predictive Policing Technologies?
Read/View the Following Narratives:
MIT Technology Review (2019). Can you make AI fairer than a judge? (article and simulation)
Angwin, Larson, Mattu, and Kirchner (2016). Machine Bias. ProPublica.
Hao, K. (2019). Congress wants to protect you from biased algorithms, deepfakes, and other bad AI. MIT Technology Review.
Wired (2018). How cops are using algorithms to predict crimes.
National Institute of Justice (2018). Predictive policing algorithms.
Perry, McInnis, Price, Smith, and Hollywood (2013). Predictive Policing: Forecasting Crime for Law Enforcement. RAND Corporation.
Upturn, Inc. (2014). Predictive Policing: From Neighborhoods to Individuals (Ch. 3).
LexisNexis, Inc. (2019). Getting the most out of predictive policing strategy.
Predictive policing algorithms use reported crime data from local police departments. In the City of Oakland, CA, crime and arrest data are classified by geographic ‘police beat’ zones that align with one or more postal ZIP codes. In this activity, we want you to dig a little deeper into the type of annual crime data that would drive a predictive policing algorithm, like those built by vendors such as LexisNexis and PredPol (or, for individual risk scoring, tools such as COMPAS) and sold to police departments with the goal of efficiently identifying high-risk areas in order to predict and prevent future crimes. A minimal sketch of this shared logic appears below.
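To ground the activity, here is a minimal, hypothetical Python sketch of the place-based logic these systems share: aggregate reported incidents by beat, then flag the highest-count beats for extra attention. The beat labels and incidents are invented, and real vendor systems are proprietary and far more sophisticated; the point is that the model’s input is reports and arrests, not true crime rates.

```python
from collections import Counter

# Hypothetical annual reports: (police beat, offense type).
reports = [
    ("beat_07X", "burglary"), ("beat_07X", "assault"),
    ("beat_07X", "drug"),     ("beat_12Y", "burglary"),
    ("beat_22X", "drug"),     ("beat_22X", "drug"),
    ("beat_22X", "theft"),    ("beat_12Y", "theft"),
]

# Count reported incidents per beat.
counts = Counter(beat for beat, _ in reports)

def high_risk_beats(counts, k=2):
    """Flag the k beats with the most reports as 'high risk'."""
    return [beat for beat, _ in counts.most_common(k)]

print(high_risk_beats(counts))  # ['beat_07X', 'beat_22X']
```

Note the feedback loop students should look for in the readings: beats flagged today receive more patrols, which generate more reports, which raise tomorrow’s counts for the same beats.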
Narratives to Read/Listen to for Activity 2
Two students from the group will work with the following readings and a podcast to characterize why the use of these predictive law enforcement systems is problematic.
Articles Folder Link (pdfs)
Thomas (2016). Why Oakland Police Turned Down Predictive Policing.
Lum and Isaac (2016). To Predict and Serve? (PDF in folder)
Haskins (2019). Academics Confirm Major Predictive Policing Algorithm Is Fundamentally Flawed.
Hao (2019). Police across the US are training crime-predicting AIs on falsified data. MIT Technology Review.
Good Code podcast (2019). Solon Barocas on Teaching Ethics in Data Science.
The last couple of minutes are a good introduction to how we can think about predictive crime algorithms.
While the class is coming into the classroom and getting settled, show selected Minority Report clips (from the Kinolab links) without saying much.
Clip 1: Vote Yes
Clip 2: Predetermination Happens All the Time
Clip 3: Minority Reports
Student Post-Activity 3 Survey:
What do you now know and think about Predictive Policing Technologies?