This page is outdated; please check out the latest information about this event at our new website: https://www.queerinai.com/facct
These two 3-hour collaborative CRAFT sessions are hosted in conjunction with ACM FAccT 2022. We will offer one session virtually and one in person (at the conference venue). The in-person session will tentatively take place on the afternoon of the first day (June 21), between 14:30 and 18:00 KST. The virtual session will take place on the second day (June 22), between 19:30 and 23:00 UTC (12:30 to 16:00 PDT). You must be registered for the conference to attend.
This CRAFT aims to gather diverse perspectives to find ways to identify, categorize and measure potential harms of AI systems. The first part of this CRAFT will introduce tools to conduct critiques of existing frameworks and generate new theories. The second part will involve the hands-on design of frameworks to understand the potential harms of AI systems to the queer community.
Queer in AI’s CRAFT aims to create a safe, inclusive, and casual networking and socializing space for LGBTQIA+ individuals and allies involved with AI. We aim to create a community space where attendees can connect with each other, bond over shared experiences, and learn from each individual’s unique insights into AI, queerness, and beyond!
Communities at the margins of gender and sexuality are often overlooked in the process of building AI systems. Thus, these communities are disproportionately impacted by the increasing prevalence of AI systems. The goal of this CRAFT is to explore the following research questions:
Where can frameworks for understanding AI harms be expanded to encompass queer identities?
How can the lived experience of queer people inform the design of harm evaluation frameworks?
Participants will work in teams to identify, categorize, and measure queer harms of AI systems as part of one of the two tracks described below. We will seek to ensure that participants work in interdisciplinary teams with different backgrounds (e.g., life backgrounds, research backgrounds, queer experiences, gender experiences, etc.). Each team will be joined by a facilitator who will support and guide the team.
The goal of this CRAFT is to aggregate findings across different teams into a set of community guidelines around queer AI harms. We additionally hope to build a community of researchers centered around reducing queer harms introduced by AI systems. The results of this CRAFT will be published at an archival venue, and ground future algorithmic bias bounty competitions.
To this end, we will offer two tracks: a top-down track and a bottom-up track.
The top-down track will focus on selecting an existing framework or taxonomy of AI harms and expanding it to fill gaps that pertain to intersectional queer identities. Participants in this track will then produce a revised version of the framework.
The bottom-up track will focus on selecting a specific AI system and enumerating potential harms that this system could introduce. Participants will then find themes in these harms and develop them into a way of classifying potential harms. Participants may radically reimagine current understandings of harms and re-envision the format of bias bounties (e.g., how and when to publicize findings of biases, whether and how to have a relationship with industry, funding for the bounty, payments for findings of bias, fundamental weaknesses of the bounty format, etc.).
Participants will submit files, websites, and/or other media that communicate their results. Participants will submit their work anonymously by default for privacy reasons but can optionally include their names. Moreover, participants will be invited to contribute their findings to be published in the proceedings of an archival venue.
Queer AI Harms
The design, development, and deployment of artificial intelligence (AI)-driven technologies have long excluded queer people; as a result, these technologies encode and reinforce cisnormativity and heteronormativity, harming end-users by misgendering and outing them, to name just a few examples [Dev et al., 2021]. At the same time, the LGBTQIA+ community has leveraged AI-driven technologies to connect and realize more authentic ways of being. For instance, online spaces have been important for queer people, especially youth, to build community, experiment with new identities, and learn about queerness [AVN, Gay et al., 2013, Hammack and Cohler, 2009, Harper et al., 2016, Pinter et al., 2021]. But because online spaces are not designed by, with, or for queer people, they frequently harm, marginalize, or under-serve queer communities [Katyal and Jung, 2021, Samuel, 2021, Van Zyl and McLean, 2021]. Pervasive and insidious data collection and AI-based surveillance threaten to out queer people [Haug, 2021, Olmstead, 2021, Payton, 2021]. For example, Egyptian police officers have utilized Grindr to pinpoint the location of people suspected of being LGBTQIA+ [Payton, 2021]. Furthermore, queer people are more likely to experience online harassment, yet they do not benefit from automated content moderation and are victims of the widespread automated censorship of queer language and identities [Dalek et al., 2021, Elefante, 2021, Simonite, 2021, Smith et al., 2021, Dev et al., 2021, Treebridge, 2021]. For example, the Salty Algorithmic Bias Research Collective found “substantial evidence of censorship on Instagram for […] people who are transgender and/or nonbinary, LGBQIA, BIPOC, disabled, sex workers, and/or sex educators” [Smith et al., 2021].
Beyond online spaces, AI has been used to give a dangerous veneer of legitimacy to physiognomy and phrenology, including using computer vision to identify and out queer people [Agüera y Arcas et al., 2021, Stark and Hutson, 2021] and to infer binary gender from faces [Keyes, 2018, Long, 2021, Scheuerman et al., 2019, 2021c]. All these technologies are premised on medical models of queerness, “rely[ing] on biomedical standards of ‘normal’ bodies, and classif[ying] what falls outside of these standards as pathological” [Whittaker et al., 2019, Siebers et al., 2008]. Language models are trained on datasets that contain hate speech yet have queer words censored, and they overwhelmingly fail to account for non-binary genders and diverse pronouns, contributing to the erasure of these identities [Dodge et al., 2021, Cao and Daumé III, 2020, Dev et al., 2021].
Bowker and Star [2000] and Keyes [2019] argue that hegemonic forms of AI are fundamentally focused on classifying complex people and situations into narrow categories and on scaling at the cost of context. Furthermore, these forms of AI are applied to reinforce surveillance, prediction, and control. Hence, this conceptualization of AI is incompatible with queerness, and theorizing models of AI that are inclusive of and celebrate queerness becomes a critical step towards preventing active harm to, and even benefiting, the LGBTQIA+ community [Ashwin et al., 2021, Drage, 2021, Subramonian, 2020].
Bias Bounties
Historically, bounty programs have taken the form of coordinated vulnerability reporting that allows companies to understand flaws in their systems and improve security. These platforms provide a clear and safe way to identify vulnerabilities and disclose flaws. Reported bugs are highly valuable, as teams use them to mitigate and prevent harms propagated by the software. The severity of a bug is context-specific but may be captured through vulnerability assessment frameworks like the Common Vulnerability Scoring System (CVSS), which standardizes severity measurement across different vulnerabilities (FIRST, 2019; Kenway et al., 2022). Taking inspiration from these programs, we frame this CRAFT as a shared task of developing harm evaluation frameworks, in which participants are encouraged to investigate and create methodologies for assessing the harms of AI systems with respect to the queer community. We are also motivated by Twitter META’s reported challenges in balancing holistic assessment criteria with quantifiable measures (e.g., how many people does the discovered bias harm?) during their algorithmic bias bug bounty challenge (Yee and Peradejordi, 2021).
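As a purely illustrative sketch (not part of CVSS or of any existing harm evaluation framework), the Python snippet below shows one way a CVSS-style rubric could be adapted to score a reported AI harm. The axes, weights, and severity bands here are hypothetical assumptions chosen for illustration; teams may critique, replace, or discard this kind of purely quantitative measure, particularly in light of the challenges Twitter META reported in balancing holistic criteria with quantifiable ones.

```python
# Illustrative sketch only: a hypothetical CVSS-inspired rubric for scoring
# a reported AI harm. The axes, weights, and severity bands below are
# assumptions made for illustration, not an established standard.
from dataclasses import dataclass


@dataclass
class HarmReport:
    """A single reported harm, rated 0.0 (none) to 1.0 (maximal) on each axis."""
    severity: float        # how serious is the harm to an affected person?
    reach: float           # how many people could plausibly be affected?
    exposure_risk: float   # does it risk outing or otherwise exposing someone?
    reversibility: float   # 0.0 = easily remedied, 1.0 = irreversible


# Hypothetical weights; a real framework would derive these with the community.
WEIGHTS = {"severity": 0.40, "reach": 0.20, "exposure_risk": 0.25, "reversibility": 0.15}


def harm_score(report: HarmReport) -> float:
    """Combine the axes into a single 0-10 score, echoing CVSS's 0-10 scale."""
    weighted = (
        WEIGHTS["severity"] * report.severity
        + WEIGHTS["reach"] * report.reach
        + WEIGHTS["exposure_risk"] * report.exposure_risk
        + WEIGHTS["reversibility"] * report.reversibility
    )
    return round(10 * weighted, 1)


def severity_band(score: float) -> str:
    """Map a score to a qualitative band, mirroring CVSS's Low/Medium/High/Critical."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"


# Example: automated censorship of queer language that risks outing users at scale.
report = HarmReport(severity=0.8, reach=0.9, exposure_risk=0.7, reversibility=0.6)
score = harm_score(report)
print(score, severity_band(score))  # 7.7 high
```

A single weighted score like this is easy to compare across reports but flattens context; one question for participants is whether, and where, such a number should sit alongside qualitative or narrative assessments of harm.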
Further Readings:
Elazari Bar On, A. (2018). We Need Bug Bounties for Bad Algorithms. Vice. https://www.vice.com/en/article/8xkyj3/we-need-bug-bounties-for-bad-algorithms
Ellis, R. and Stevens, Y. (2022). Bounty Everything: Hackers and the Making of the Global Bug Marketplace. https://datasociety.net/library/bounty-everything-hackers-and-the-making-of-the-global-bug-marketplace/
Globus-Harris, I., Kearns, M., & Roth, A. (2022). Beyond the Frontier: Fairness Without Accuracy Loss. arXiv e-prints, arXiv-2201.
Kenway, J., François, C., Costanza-Chock, S., Raji, I. D., & Buolamwini, J. (2022). Bug Bounties for Algorithmic Harms? https://www.ajl.org/bugs
Yee, K. and Peradejordi, I. F. (2021). Sharing learnings from the first algorithmic bias bounty challenge. https://blog.twitter.com/engineering/en_us/topics/insights/2021/learnings-from-the-first-algorithmic-bias-bounty-challenge
Other Resources
While not required, we recommend that attendees familiarize themselves with some existing frameworks for evaluating harms, methods for conducting qualitative analysis, and specific AI systems to explore. We provide some examples below.
Frameworks for Harm Evaluation:
Salty’s investigation of biased content policing in Instagram
Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies
Conducting Qualitative Analysis:
Schedule:
10 minutes “Introduction to Queer Algorithmic Biases, Bias Bounties, and Harm Evaluation Frameworks”
15 minutes “Twitter’s Algorithmic Bias Challenge” Rumman Chowdhury (Twitter META, Director) and Kyra Yee (Twitter META, Research Engineer)
10 minutes break
20 minutes "NEITHER BAND-AIDS NOR SILVER BULLETS: HOW BUG BOUNTIES CAN HELP THE DISCOVERY, DISCLOSURE, AND REDRESS OF ALGORITHMIC HARMS" Camille François and Sasha Costanza-Chock (Algorithmic Justice League and Harvard Berkman-Klein Center for Internet and Society)
20 minutes group formation
120 minutes discussion and collaborative work in groups
Email: queerinai [at] gmail.com
Please read the Queer in AI code of conduct, which will be strictly followed at all times. Recording (screen recording or screenshots) is prohibited. All participants are expected to maintain the confidentiality of other participants.
FAccT 2022 adheres to the ACM Anti-harassment policy and Queer in AI adheres to Queer in AI Anti-harassment policy. Any participant who experiences harassment or hostile behavior should report directly to the ACM (instructions in ACM Anti-harassment policy), and contact the Queer in AI Safety Team. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult you on any actions taken.
Arjun Subramonian (they/them); PhD Student; Queer in AI and University of California, Los Angeles; arjunsub@cs.ucla.edu
Anaelia Ovalle (they/them); PhD Student; University of California, Los Angeles; anaelia@cs.ucla.edu
Luca Soldaini (they/he); Applied Research Scientist; Queer in AI and Allen Institute for AI, Los Angeles; luca@soldaini.net
Nathan Dennler (he/they); PhD Student; University of Southern California; dennler@usc.edu
Zeerak Talat; Post-Doctoral Fellow; Digital Democracies Institute, Simon Fraser University, Vancouver; zeerak_talat@sfu.ca
Sunipa Dev (she/her); Research Scientist; UCLA, California; sunipadev@gmail.com
Kyra Yee (she/her); Research Engineer; Twitter, San Francisco; kyray@twitter.com
Irene Font Peradejordi (she/her); Responsible ML Experience Researcher; Twitter; irenef@twitter.com
William Agnew (he/him); PhD Candidate; Queer in AI and University of Washington; wagnew3@cs.washington.edu
Avijit Ghosh (he/him); PhD Candidate; Queer in AI, Northeastern University, and Twitter (intern), Boston; ghosh.a@northeastern.edu