Most current algorithmic fairness techniques require access to data on a “sensitive attribute” or “protected category” in order to compare and standardize performance across groups. In practice, however, data on the demographic categories that carry the greatest risk of mistreatment (e.g., race, sexuality, nation of origin) are often unavailable, due in part to a range of organizational barriers and concerns related to antidiscrimination law, privacy policies, and the unreliability of self-reported, proxy, or inferred demographics.
In recent years, FAccT, HCI, and Critical Studies scholars have surfaced many other issues with how technical work conceptualizes identities and communities, examining categories such as race, gender, and disability. A key contribution of this work is exposing the ways in which these categories do not simply exist in nature; they are co-constructed and reproduced by the sociotechnical infrastructure built around them. Exploring this process of reproduction is thus key to understanding how, if at all, we should infuse demographics into fairness tools. Additionally, work on issues such as data justice, big data abolition, and Indigenous Data Sovereignty has sought to center the ways in which data collection and use are wielded to exploit and further disempower individuals and their communities. These critiques point to the ways in which data centralization and ownership allow just a few individuals to determine which narratives and which economic or political projects the data will be mobilized to support. While this work does not center on demographic data or algorithmic fairness specifically, these perspectives can help identify largely unexamined risks of the data requirements of algorithmic fairness.
The goal of this workshop is to confront the co-construction of demographic categories and sociotechnical infrastructures, as well as the implications of continuing to design fairness interventions that presuppose demographic data availability. Through a series of narrative and speculative exercises, participants will build out a picture of the underlying technical, legal, social, political, and environmental infrastructures necessary to support most proposed demographic-based algorithmic fairness techniques, and will collectively reflect on what types of demographic data infrastructure we would want to construct in the pursuit of fairness and justice.
This workshop is run by the Partnership on AI. To attend as a participant, you must be registered for the ACM FAccT 2021 conference.
AGENDA
(Pacific Time)
Registration for the FAccT conference is required to attend this CRAFT session. Once you have registered for the conference, you will be asked to select the CRAFT sessions you wish to attend.
We hope to see you there!
CALL FOR TALKS
We are seeking individuals interested in contributing talks to help inform the workshop discussions. We will accept up to six contributed talks, each 10 to 15 minutes in length, depending on the number of accepted speakers.
Topics include, but are not limited to:
Critical Race Theory
Data Feminism
Data Colonialism
Indigenous Data Sovereignty
Politics of Categorization
Measurement
Privacy and Consent
Science and Technology Studies
If you are interested in contributing a talk to this workshop, please submit an application by Sunday, February 14, 2021 (AoE). Applications will be assessed on a rolling basis according to the relevance of prior work or experience and the novelty of the proposed contribution. Applicants will be notified of decisions by February 17, 2021.
Accepted speakers will be expected to pre-record and share their talk with the coordinators by February 26, 2021, and will be required to participate in a live Q&A period during the workshop on March 5, 2021.
ORGANIZERS
Program Lead, Partnership on AI
Research Associate, Partnership on AI