Fairness and Discrimination through the Dual Lens of Mechanism Design and Machine Learning

A tutorial at the ACM conference on Economics and Computation (EC) July 19, 2021

Jessie Finocchiaro, Edwin Lock, Faidra Monachou, and Manish Raghavan

Description

This tutorial aims to bridge notions of algorithmic fairness between the machine learning and mechanism design communities, particularly in resource allocation. Mechanism design (MD) is often used in inherently social and human domains, such as matching and admitting students to schools (Abdulkadiroğlu and Sönmez, 2003), delivering online advertisements (Lambrecht and Tucker, 2018), assigning gig workers to jobs (Quillian, Pager, Hexel, and Midtbøen, 2017), allocating healthcare resources such as vaccines, kidney exchanges, and health insurance (Roth, Sönmez, and Ünver, 2004), and matching candidates to public and affordable housing (Arnosti and Shi, 2019). Increasingly, machine learning (ML) algorithms have been leveraged to tackle similar problems, albeit often from a different perspective involving large-scale automation of decisions using big data. For example, machine learning is used in hiring to recommend the "top" candidates to be interviewed for a job (Bogen and Rieke, 2018).

Both machine learning and mechanism design study fairness and discrimination in these settings, albeit through different lenses. Finocchiaro, Maio, Monachou, Raghavan, Stoica, and Tsirtsis (2021) highlight some of the lessons machine learning and mechanism design can teach, and have taught, each other through an extensive analysis of the literature in these fields. The authors assert that while considering perspectives from both fields is not sufficient to develop a just algorithmic decision-making system, it is at the very least necessary for making progress on critical questions that span both fields.

This tutorial aims to present a unified overview of bias, discrimination, and fairness through the dual lens of machine learning and mechanism design. At a high level, we plan to discuss and further evaluate some of these lessons through a thorough engagement with seminal and relevant literature, consider the tensions that arise at the intersection of these fields, and challenge participants to think critically about these issues in a concrete setting.

Schedule

Session 1: Introduction (10:30am-11:30am ET)

While decision-making algorithms have the potential to bring greater efficiency and accuracy to our decisions, it is crucial that we ensure these decisions are fair. Classic works from Science and Technology Studies tell us that fairness is not the default; values are embedded throughout decisions that impact society (Winner, 1980). If we seek to develop algorithms and mechanisms that impact society, it is important to recognize that the tools we build are not automatically "fair" or "neutral." This is not to say that the question of fairness is a purely normative one, separate from the technical considerations of algorithm design. The goal of this tutorial is to demonstrate how technical tools and ideas from both machine learning and mechanism design can shed light on normative questions surrounding fairness, leading to new insights and building towards practical solutions. We will briefly discuss a few motivating settings, including advertising, resource allocation, and hiring, examining how questions of fairness and discrimination manifest in each. Building on these examples, we will review past efforts to define fairness in a variety of fields.

Exercise Session (11:45am-12:45pm ET)

Overview: There has been much debate about challenges and inequalities in COVID-19 vaccine distribution on a local and global scale. The pandemic has accelerated research that attempts to address these issues through, e.g., "artificial markets", priority systems, or schemes for global vaccine allocation under constraints including price capping, production timelines, various statistical parity notions, infection rates, as well as GDP and population demographics. In preparation for the exercise, we briefly illuminate works in the resource allocation literature that increasingly focus on issues of social justice, and in particular on fairness with respect to low-income and racial groups (Akbarpour et al. 2021; National Academy of Sciences, 2020). We will also survey recently established multi-national efforts to achieve equitable access to vaccines for low- and middle-income countries (Usher, Durkin, Bhular, 2020). This exercise will guide participants to apply insights from earlier sessions, and to understand and challenge what it means for the allocation mechanism they design to be fair through these lenses.



Session 2: Economics of discrimination and recent applications (3:00pm-4:00pm ET)

Motivated by applications in labor, education, and other settings, this session will give an overview of classic and recent results in the economics of discrimination, and then discuss applied research questions where mechanism design and machine learning can be applied to mitigate discrimination or detect bias.

Theories of fairness and discrimination in the economics literature began decades ago in an attempt to understand the mechanisms behind discrimination. Out of this work, two dominant theories of discrimination arose: taste-based discrimination (Becker, 1957), where individuals discriminate due to their own discriminatory preferences, and statistical discrimination (Phelps, 1972; Arrow, 1973; Aigner and Cain, 1977), where individuals discriminate due to rational priors and imperfect information. Later work in economics identified additional sources of belief-based discrimination beyond statistical discrimination, including the endogenous rise of coordination failure (Coate and Loury, 1993) and misspecified agent behavior (Bohren, Imas, and Rosenberg, 2019; Fryer, 2007).

After reviewing the various theories of discrimination proposed by the economics literature, the rest of this session will discuss the connections between the aforementioned theories and algorithmic fairness, focusing on two emerging research directions.


Session 3: Lessons and Critique (4:15pm-5:15pm ET)

The fields of mechanism design and machine learning increasingly teach and learn lessons from one another, and this session will enumerate some of these, though a more thorough discussion can be found in Finocchiaro, Maio, Monachou, Raghavan, Stoica, and Tsirtsis (2021, Section 3).

Practical concerns: Theoretical models enable us to study and test the impact of small changes to a system before deploying it in the real world. However, deployed machine learning systems must also contend with practical concerns that such models often abstract away, such as strategic agents and real-world constraints. We will overview specific practical concerns such as strategic incentives (Hardt, Megiddo, Papadimitriou, and Wootters, 2016), resource allocation constraints (Bakker, Noriega-Campero, Tu, Sattigeri, and Varshney, 2019), and the consequential or long-term effects of fairness interventions (Liu, Dean, Rolf, Simchowitz, and Hardt, 2018).

Critique, connections to other fields: While much progress has been made towards understanding and designing fair and just algorithmic decision-making systems, there is still much to be learned. Algorithm design is simply one small part of the algorithmic ecosystem, and there is much that theoretical computer scientists and economists have to learn from sociologists, HCI researchers, AI ethicists, political scientists, as well as the practice-driven work of several non-profit organizations, policy makers and related stakeholders.

Within EconCS, gaps remain in our understanding of just algorithms, such as diagnosing (un)fairness under uncertainty and heterogeneous preferences. It is crucial to engage in dialogue outside of the EconCS community to understand the challenges that come with modeling real-world problems. We will dedicate this time to discussing some of the insufficiencies of modeling "fairness" solely through algorithmic systems, as well as open areas of work that will move us closer to designing just systems (Abebe, Barocas, Kleinberg, Levy, and Raghavan, 2020; Li, 2017).


Organizer Bios

Jessie Finocchiaro (she/her) is a PhD student at the University of Colorado Boulder working with Dr. Rafael Frongillo on understanding when and how we can construct consistent surrogate loss functions given a prediction task via property elicitation. Her work is currently funded by the National Science Foundation (NSF) Graduate Research Fellowship. She is a co-organizer of the MD4SG working group on Discrimination and Equality in Algorithmic Decision-making, and has been a part of MD4SG since 2018.

Edwin Lock (he/him) is a finishing DPhil (PhD) student at the University of Oxford supervised by Paul Goldberg. His work currently focuses on designing algorithms to efficiently solve auctions and elicit demand expressed in compact bidding languages. He is also a co-founder of Test and Contain, a project that aims to allocate limited testing resources optimally so as to minimise the impact of COVID-19 on the health and livelihoods of those hardest hit in low- and middle-income countries (LMICs), and he is a co-organiser of the MD4SG Development Healthcare working subgroup.

Faidra Monachou (she/her) is a Ph.D. candidate in Management Science and Engineering at Stanford University, advised by Professor Itai Ashlagi. She is interested in market design, and her work focuses on the role of discrimination, diversity, and information design in education, the sharing economy, and matching markets. Faidra’s research has been supported by scholarships and fellowships from the Stanford Data Science Institute, Google, and others. She co-chaired the MD4SG’20 workshop and currently co-organizes the Stanford Data Science for Social Good research program.

Manish Raghavan, Ph.D. (he/him) is a recent graduate of the Computer Science Department at Cornell University, advised by Jon Kleinberg. He will be a postdoctoral researcher with Cynthia Dwork at the Harvard Center for Research on Computation and Society starting in the fall of 2021. His research focuses on the societal impacts of algorithmic decision-making, with a particular focus on discrimination, algorithmic transparency, and behavioral models. He has been supported by fellowships from Cornell University, the National Science Foundation, and Microsoft Research.