Predictive Policing: An Unsolved Topic in Smart Cities
Team Members: Raquel Esther Jorge Ricart, Trent Kannegieter, Chenab Navalkha
Mentor: Amar Ashar
Municipalities across the world, from England to Uruguay, have recently adopted predictive policing software. Under the guise of efficiency, these algorithms direct police resources toward “higher-risk” areas and surveil everyday citizens, often without anyone asking how decisions about risk and suspicion are made. Our group studied the different “smart policy” interventions that cities around the world have taken, isolating shortcomings, successes, and existing policy vacuums, while developing a framework of AI principles to guide future policies.
This policy-oriented research necessarily engaged social concerns, raising questions of justice as well as potential clashes between predictive policing and fundamental legal principles, such as the presumption of innocence. This reflection led us to rethink avenues for citizen engagement in any decision-making process concerning predictive policing, and underscored the need to ensure transparency and public information on this issue, given the scarcity of information about predictive policing in the Global South.
We pursued two strategies to raise awareness of this topic:
Advocacy tool: We began with an “Instagram Explainer” about PredPol, one of the most popular predictive policing programs.
Research-styled visualization: We hope to continue producing similar visualizations as well as written products like op-eds in the quest to examine big data’s dark side.
Images Isolated (not posted on Instagram): https://docs.google.com/presentation/d/1JwIP5q-lHDbjAPkNi9ukvkJP4PfYglZ-IueKCVP3GvY/edit#slide=id.p
Ethics and Technology in the Criminal Justice System: Exploring the BKC Risk Assessment Tool Database
Team Members: Dylan Doyle-Burke, Janna Huang, Jenny Lee
Mentor: Lis Sylvan
As conversations around police brutality and policing technologies become increasingly urgent, it is essential that policymakers and organizers alike have clear access to information about the criminal justice system. Our project tackles an important part of that existing system by amplifying the Risk Assessment Tool Database (Risk DB), housed at the Berkman Klein Center for Internet & Society. Risk DB serves as a one-stop shop for data on risk assessment tools implemented throughout the policing process, from pre-trial to sentencing to parole. In conversation with the team behind Risk DB, we produced a podcast episode that walks through the database, its goals and motivations, its intended audiences, and its next steps. The podcast is available for download, along with a transcript of the conversation and a Q&A blog post with the Risk DB team.
Facial Recognition, (Un)Covered
Team Members: Bijal Mehta, Martyna Kalvaityte, Ana Qarri
Mentor: Ryan Budish
In response to the international Black Lives Matter movement against police brutality and racial injustice, several large tech companies released statements expressing solidarity with the Black community and announced actions to ban or pause the sale of facial recognition software to law enforcement because of its discriminatory nature. While these statements were a step forward in the fight against law enforcement’s use of harmful technologies, many activists have questioned their actual impact on companies’ existing contracts and on police contracts not involving facial recognition, and have noted the lack of action from lesser-known facial recognition companies, such as NEC or Ayonix, that are in fact the larger suppliers of law enforcement technologies. Given the large volume of statements and the variety of players involved, this project aims to provide a simple webpage where the public can gain a holistic understanding of who is involved in the tech company–law enforcement relationship: a one-stop shop for links to a specific company’s statement on facial recognition or the Black Lives Matter movement, and to external articles outlining a company’s additional ties to law enforcement agencies. The webpage also provides an initial text analysis of these statements, revealing the different ways companies address racial discrimination and facial recognition/surveillance technologies. While this project is still in its preliminary stages, we hope to provide a more comprehensive aggregation and analysis of tech company actions to create a clearer picture of the issue.
Link to webpage: https://bijalm99.github.io/BKC-project5/
Face-Off! Between Perception and Reality in Facial Recognition Surveillance
Team Members: Karen Kennedy, Ashley Mehra, Luca Righetti, Cierra Robson
Mentor: Momin Malik
In response to the Black Lives Matter movement and recent protests against police brutality after the killing of George Floyd, public awareness of law enforcement’s use of facial recognition for surveillance has skyrocketed. Various sectors of American civil society, from activism to state government to media and industry, are responding as the U.S. is forced to reckon with the implications of facial recognition technology with new urgency and gravity. The facial recognition debate, however, is far from new. Our group endeavors to show how the mainstream conversation misses or glosses over several important features of the debate: its continuity with historical surveillance, with critiques of state and industry dating back to the 1970s; the variegated approach to state legislation, which offers important nuance beyond a ‘ban’ or ‘no ban’ solution; the consonance between ‘left’ and ‘right’ political opinions in public and media discourse; and the implications of recent industry actions and the covert key players outside of ‘Big Tech’ who are fueling the ‘surveillance as a service’ business model.
Our project maps and explores in great depth these crucial misconceptions that we believe are at the root of the dissonance between perception and reality about facial recognition technology.
We studied academic papers; analyzed scores of bills, acts, laws, court injunctions, congressional hearings, and City Council debates; scraped news articles from bipartisan sources and ran text analysis on news headlines; consolidated findings from investigative journalism; and conducted discourse analysis on press releases, interviews, and website materials from industry.
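As a rough illustration of the kind of basic text analysis the headlines step involves, the short Python sketch below counts the most frequent terms across a set of headlines. The headlines, stopword list, and function name are illustrative placeholders, not the team’s actual data or pipeline.

```python
import re
from collections import Counter

# Placeholder headlines standing in for the scraped news corpus.
headlines = [
    "City council votes to ban facial recognition",
    "Facial recognition ban debated in state legislature",
    "Industry pushes back on surveillance ban",
]

# A tiny illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {"to", "in", "on", "the", "a", "of", "back"}

def top_terms(texts, n=3):
    """Tokenize lowercase words, drop stopwords, return the n most common terms."""
    tokens = []
    for text in texts:
        tokens.extend(
            w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS
        )
    return Counter(tokens).most_common(n)

print(top_terms(headlines))  # e.g. [('ban', 3), ('facial', 2), ('recognition', 2)]
```

A fuller pipeline would typically add stemming, bigrams, or sentiment scoring on top of this frequency baseline.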
Our upcoming blog post will provide a snapshot of our findings along with our visualizations. We are thankful for the opportunity to conduct this preliminary analysis at the BKC Summer Institute and look forward to expanding upon this work at the forefront of today’s challenging and urgent AI ethics debates.
Power not Paranoia: Online Safety for Student Activists
Team members: Jessica Blumenthal, Lily Liu, Moe Sunami
Mentor: Lance Eaton
In the midst of the largest political movement in U.S. history and the unfolding global COVID-19 pandemic, student organizers have risen up to support and protect their communities through campaigns such as #40BillionForWhat (Harvard) and a $170,000 mutual aid fund (Pomona). Many of these students have been displaced from their college campuses in the wake of the pandemic and now organize mostly online, which presents both promises and perils: organizing makes students visible online and generates data about their activities. In this project, three undergraduate college students, some of us organizers ourselves, interviewed 10 college students and 3 legal experts to put together an accessible resource that helps college students understand the factors affecting their online safety. Our guide aims to address differential levels of online safety across factors such as race, citizenship, and gender; to clarify existing legal frameworks around online safety; and to introduce community-based strategies for protecting one another. We intend the guide to be a starting point for college community members to acknowledge and reconceptualize online safety as an integral part of mobilizing students and building community-based power, and we envision it as the first in a series of interventions directed toward students, college administrations, and campus organizations.