Computer vision technologies are being developed and deployed in a variety of socially consequential domains, including policing, employment, healthcare, and more. In this sense, computer vision has ceased to be a purely academic endeavor and has become one that impacts people around the globe on a daily basis. Yet, despite the rapid development of new algorithmic methods, advancing state-of-the-art numbers on standard benchmarks, and, more generally, the narrative of progress that surrounds public discourse about computer vision, the reality of how computer vision tools are impacting people paints a far darker picture.
In practice, computer vision systems are being weaponized against already over-surveilled and over-policed communities (Garvie et al. 2016; Joseph and Lipp, 2018; Levy, 2018), perpetuating discriminatory employment practices (Ajunwa, 2020), and more broadly, reinforcing systems of domination through the patterns of inclusion and exclusion operative in standard datasets, research practices, and institutional structures within the field (West, 2019; Miceli et al., 2020; Prabhu and Birhane, 2020). In short, the benefits and risks of computer vision technologies are unevenly distributed across society, with the harms falling disproportionately on marginalized communities.
In response, a critical public discourse surrounding the use of computer vision-based technologies has also been mounting (Wakabayashi and Shane, 2018; Frenkel, 2018). For example, the use of facial recognition technologies by policing agencies has been heavily critiqued (Stark, 2019; Garvie, 2019) and, in response, companies such as Microsoft, Amazon, and IBM have pulled or paused their facial recognition software services (Allyn, 2020a; Allyn, 2020b). Buolamwini and Gebru (2018) showed that commercial gender classification systems exhibit large disparities in error rates by skin type and gender, Hamidi et al. (2018) discuss the harms caused by the mere existence of automatic gender recognition systems, and Prabhu and Birhane (2020) document shockingly racist and sexist labels in popular computer vision datasets, a finding that resulted in the removal of datasets such as Tiny Images (Torralba et al., 2008). Policy makers and other legislators have cited some of these seminal works in their calls to investigate the unregulated usage of computer vision systems (Garvie, 2019).
Outside the computer vision community, we observe the broader field of AI beginning to grapple with the impacts of the technologies that are being developed and deployed in practice: entire interdisciplinary conferences have emerged to study the societal impacts of machine learning tools (e.g., FAccT, AIES); prominent NLP conferences have integrated ethics tracks into their submission processes and regularly host ethics-oriented workshops and tutorials; and interdisciplinary, ethics-oriented, and critical workshops have been introduced into major machine learning conferences (e.g., NeurIPS, ICML, ICLR). Meanwhile, despite growing concerns about the development and use of computer vision technologies, the computer vision research community has been slower to engage with these issues.
The computer vision community has not remained static, however. Indeed, we observe an increased focus on fairness-oriented methods of model and dataset development. Much of this work, though, is constrained by a purely technical understanding of fairness, one that has come to mean parity of model performance across sociodemographic groups, and that offers only a narrow way of understanding how computer vision technologies intersect with the systems of oppression that structure their development and use in the real world (Selbst et al., 2019; Gangadharan, 2020). In contrast to this approach, we believe it is essential to examine computer vision technologies through a sociotechnical lens, asking how marginalized communities are excluded from their development and impacted by their deployment.
Our workshop will also center the perspectives and stories of communities who have been harmed by computer vision technologies and by the dominant logics operative within the field. We believe it is important to host these conversations within the CVPR venue so that researchers and practitioners in computer vision can engage with these perspectives and understand the lived realities of marginalized communities impacted by the outputs of the field. In doing so, we hope to shift the focus away from narrowly technical understandings of fairness and towards justice, equity, and accountability. We believe this is a critical moment for computer vision practitioners, and for the field as a whole, to come together and reimagine what this field might look like. We have great faith in the CVPR community and believe our workshop will foster the difficult conversations and meaningful reflection on the state of the field that are essential to begin constructing a different mode of operating. In doing so, our workshop will be a critical step towards ensuring this field progresses in an equitable, just, and accountable manner.
Ifeoma Ajunwa (2020). The Paradox of Automation as Anti-Bias Intervention. 41 Cardozo L. Rev. 1671.
Bobby Allyn (2020a). IBM Abandons Facial Recognition Products, Condemns Racially Biased Surveillance. NPR. https://www.npr.org/2020/06/09/873298837/ibm-abandons-facial-recognition-products-condemns-racially-biased-surveillance
Bobby Allyn (2020b). Amazon Halts Police Use Of Its Facial Recognition Technology. NPR. https://www.npr.org/2020/06/10/874418013/amazon-halts-police-use-of-its-facial-recognition-technology
Simone Browne (2015). Dark Matters: On the Surveillance of Blackness. Duke University Press.
Joy Buolamwini and Timnit Gebru (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. ACM Conference on Fairness, Accountability and Transparency.
Sheera Frenkel (2018). Microsoft Employees Question C.E.O. Over Company’s Contract With ICE. New York Times. https://www.nytimes.com/2018/07/26/technology/microsoft-ice-immigration.html
Seeta Peña Gangadharan (2020). Context, Research, Refusal: Perspectives on Abstract Problem Solving. https://www.odbproject.org/2020/04/30/context-research-refusal-perspectives-on-abstract-problem-solving/
Clare Garvie, Alvaro Bedoya, and Jonathan Frankle (2016). The Perpetual Line-Up: Unregulated Police Face Recognition in America. Georgetown Law, Center on Privacy & Technology.
Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M. Branham (2018). Gender recognition or gender reductionism? The social implications of embedded gender recognition systems. Proceedings of the 2018 chi conference on human factors in computing systems.
George Joseph and Kenneth Lipp (2018). IBM Used NYPD Surveillance Footage to Develop Technology that Lets Police Search by Skin Color. The Intercept.
Steven Levy (2018). Inside Palmer Luckey’s Bid to Build a Border Wall. Wired, June 6, 2018. https://www.wired.com/story/palmer-luckey-anduril-border-wall/
Milagros Miceli, Martin Schuessler, and Tianling Yang (2020). Between Subjectivity and Imposition: Power Dynamics in Data Annotation for Computer Vision. arXiv:2007.14886.
Vinay Uday Prabhu and Abeba Birhane (2020). Large image datasets: A pyrrhic win for computer vision? arXiv:2006.16923.
Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi (2019). Fairness and Abstraction in Sociotechnical Systems. ACM Conference on Fairness, Accountability, and Transparency.
Luke Stark (2019). Facial recognition is the plutonium of AI. XRDS 25, 3 (Spring 2019), 50–55. https://doi.org/10.1145/3313129
Antonio Torralba, Rob Fergus, and William T. Freeman (2008). 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE transactions on pattern analysis and machine intelligence.
Daisuke Wakabayashi and Scott Shane (2018). Google Will Not Renew Pentagon Contract That Upset Employees. New York Times. https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html
Sarah Myers West, Meredith Whittaker, and Kate Crawford (2019). Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute.