Tutorial on Fairness, Accountability, Transparency, and Ethics in Computer Vision at CVPR 2021

Important information

Computer vision technologies are being developed and deployed in a variety of socially consequential domains, including policing, employment, healthcare, and more. In this sense, computer vision is no longer a purely academic endeavour but one that impacts people around the globe on a daily basis. Yet, despite the rapid development of new algorithmic methods, advancing state-of-the-art numbers on standard benchmarks, and, more generally, the narrative of progress that surrounds public discourse about computer vision, the reality of how computer vision tools are impacting people paints a far darker picture.


In practice, computer vision systems are being weaponized against already over-surveilled and over-policed communities (Garvie et al., 2016; Joseph and Lipp, 2018; Levy, 2018), perpetuating discriminatory employment practices (Ajunwa, 2020), and more broadly, reinforcing systems of domination through the patterns of inclusion and exclusion operative in standard datasets, research practices, and institutional structures within the field (West, 2019; Miceli et al., 2020; Prabhu and Birhane, 2020). In short, the benefits and risks of computer vision technologies are unevenly distributed across society, with the harms falling disproportionately on marginalized communities.


In response, a critical public discourse surrounding the use of computer-vision-based technologies has also been mounting (Wakabayashi and Shane, 2018; Frenkel, 2018). For example, the use of facial recognition technologies by policing agencies has been heavily critiqued (Stark, 2019; Garvie, 2019) and, in response, companies such as Microsoft, Amazon, and IBM have pulled or paused their facial recognition software services (Allyn, 2020a; Allyn, 2020b). Buolamwini and Gebru (2018) showed that commercial gender classification systems exhibit large disparities in error rates across skin type and gender, Hamidi et al. (2018) discuss the harms caused by the mere existence of automatic gender recognition systems, and Prabhu and Birhane (2020) document shockingly racist and sexist labels in popular computer vision datasets, a finding that led to the removal of datasets such as Tiny Images (Torralba et al., 2008). Policymakers and legislators have cited some of these seminal works in their calls to investigate the unregulated use of computer vision systems (Garvie, 2019).


We believe it is critical that the computer vision community meaningfully engage with the societal impacts of the technologies the field is producing and work towards a more equitable, rigorous, and accountable field. More specifically, we believe CVPR is an appropriate venue for grappling with these complex questions, and that a tutorial format will serve to engage a young generation of computer vision researchers.


Our tutorial will highlight research on uncovering and mitigating issues of unfair bias and historical discrimination that computer vision models learn to mimic and propagate. We will also highlight the lived realities of marginalized communities impacted by computer vision technologies. Finally, we will work through case studies focused on computer vision research and products, examining the underlying assumptions and values embedded in the technologies and their potential societal impacts.

References

Ifeoma Ajunwa (2020). The Paradox of Automation as Anti-Bias Intervention. 41 Cardozo L. Rev. 1671.


Bobby Allyn (2020a). IBM Abandons Facial Recognition Products, Condemns Racially Biased Surveillance. NPR. https://www.npr.org/2020/06/09/873298837/ibm-abandons-facial-recognition-products-condemns-racially-biased-surveillance


Bobby Allyn (2020b). Amazon Halts Police Use Of Its Facial Recognition Technology. NPR. https://www.npr.org/2020/06/10/874418013/amazon-halts-police-use-of-its-facial-recognition-technology


Simone Browne (2015). Dark Matters: On the Surveillance of Blackness. Duke University Press.


Joy Buolamwini and Timnit Gebru (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. ACM Conference on Fairness, Accountability and Transparency.


Sheera Frenkel (2018). Microsoft Employees Question C.E.O. Over Company’s Contract With ICE. New York Times. https://www.nytimes.com/2018/07/26/technology/microsoft-ice-immigration.html


Seeta Peña Gangadharan (2020). Context, Research, Refusal: Perspectives on Abstract Problem Solving. https://www.odbproject.org/2020/04/30/context-research-refusal-perspectives-on-abstract-problem-solving/


Clare Garvie, Alvaro Bedoya, and Jonathan Frankle (2016). The Perpetual Line-Up: Unregulated Police Face Recognition in America. Georgetown Law, Center on Privacy & Technology.


Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M. Branham (2018). Gender recognition or gender reductionism? The social implications of embedded gender recognition systems. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.


George Joseph and Kenneth Lipp (2018). IBM Used NYPD Surveillance Footage to Develop Technology That Lets Police Search by Skin Color. The Intercept.


Steven Levy (2018). Inside Palmer Luckey’s Bid to Build a Border Wall. Wired, June 6, 2018. https://www.wired.com/story/palmer-luckey-anduril-border-wall/


Milagros Miceli, Martin Schuessler, and Tianling Yang (2020). Between Subjectivity and Imposition: Power Dynamics in Data Annotation for Computer Vision. arXiv:2007.14886.



Vinay Uday Prabhu and Abeba Birhane (2020). Large image datasets: A pyrrhic win for computer vision? CoRR abs/2006.16923.


Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi (2019). Fairness and Abstraction in Sociotechnical Systems. ACM Conference on Fairness, Accountability, and Transparency.


Luke Stark (2019). Facial recognition is the plutonium of AI. XRDS 25, 3 (Spring 2019), 50–55. https://doi.org/10.1145/3313129


Antonio Torralba, Rob Fergus, and William T. Freeman (2008). 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.


Daisuke Wakabayashi and Scott Shane (2018). Google Will Not Renew Pentagon Contract That Upset Employees. New York Times. https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html


Sarah Myers West, Meredith Whittaker, and Kate Crawford (2019). Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute.