Talks
Asia Biega - Data minimization and fairness
Data minimization is a legal obligation defined in the European Union’s General Data Protection Regulation (GDPR) as the responsibility to process an adequate, relevant, and limited amount of personal data in relation to a processing purpose. However, the lack of technical interpretations of the principle in the context of machine learning systems has inhibited adoption. In this talk, I'll discuss ways in which data minimization continues to play an important role in personal data processing, how it can be computationally interpreted, and what the limits to its practical implementation are. Crucially, one such limit is the tension between data minimization and fairness. I'll conclude the talk by highlighting some open problems at the intersection of these two algorithmic and legal principles.
Christos Dimitrakakis - Fair Set Selection: Meritocracy and Social Welfare
We formulate the problem of selecting a set of individuals from a candidate population as a utility maximisation problem. Even when utility reflects a notion of social welfare, this raises the question of how to define meritocratic decisions in this framework. We suggest the notion of expected marginal contribution (EMC) of an individual with respect to a selection policy as a measure of deviation from meritocracy. This has links to the well-known Shapley values for collaborative games and to gradient ascent algorithms for certain policy structures, but not for others. We apply this framework to a simulated university admissions setting and discuss implications for trading off individual meritocracy and social welfare in decisions about populations. This discussion is partially based on this preprint: https://arxiv.org/abs/2102.11932
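As a rough formalization of the idea (the notation here is my own, not necessarily the talk's): given a utility U over selected sets and a selection policy \pi, the expected marginal contribution of an individual i could be written as EMC_\pi(i) = E_{S \sim \pi}[U(S \cup \{i\}) - U(S)], mirroring how the Shapley value averages an individual's marginal contributions over coalitions.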
Steffen Grunewalder - Oblivious data for kernel methods
I’ll present an approach to reducing the influence of sensitive features in data in the context of kernel methods. The resulting method uses Hilbert space valued conditional expectations to create new features that are close approximations of the original (non-sensitive) features while having a reduced dependence on the sensitive features. I’ll provide optimality statements about these new features and a bound on the dependence between the sensitive features and the new features. In practice, standard techniques for estimating conditional expectations can be used to generate these features. I’ll discuss a plug-in approach for estimating conditional expectations that uses properties of the empirical process to control estimation errors.
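A minimal sketch of the plug-in idea, under assumptions of my own (this is not the talk's actual construction): estimate the conditional expectation E[X | S] of the non-sensitive features given the sensitive ones with a standard regressor, here kernel ridge regression, and replace X by its residuals, which have a reduced dependence on S.

```python
# Sketch: residualize non-sensitive features against the sensitive ones via an
# estimated conditional expectation (kernel ridge regression as the plug-in).
# Illustrative only -- not the construction from the talk.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
S = rng.normal(size=(500, 1))              # sensitive feature(s)
X = 0.8 * S + rng.normal(size=(500, 3))    # non-sensitive features, correlated with S

cond_exp = KernelRidge(kernel="rbf", alpha=1.0).fit(S, X)   # plug-in estimate of E[X | S]
X_oblivious = X - cond_exp.predict(S) + X.mean(axis=0)      # residuals, recentred at the original mean

print(np.corrcoef(S.ravel(), X_oblivious[:, 0])[0, 1])      # dependence on S is much reduced
```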
Mohamed Hebiri - Fair regression via Wasserstein barycenters
I will consider the problem of learning a real-valued function that satisfies the Demographic Parity constraint, which requires the distribution of the predicted output to be independent of the sensitive attribute. We consider the case where the sensitive attribute is available at prediction time. We establish a connection between fair regression and optimal transport theory, based on which we derive a closed-form expression for the optimal fair predictor. Specifically, we show that the distribution of this optimum is the Wasserstein barycenter of the distributions induced by the standard regression function on the sensitive groups. This result offers an intuitive interpretation of the optimal fair prediction and suggests a simple post-processing algorithm to achieve fairness. We establish risk and distribution-free fairness guarantees for this procedure. Numerical experiments indicate that our method is very effective in learning fair models, with a relative increase in error that is smaller than the relative gain in fairness.
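As an illustration of the post-processing idea, here is an empirical sketch under my own reading of the result (group weights, empirical CDFs, and quantile functions are all estimated from data; the talk's estimator may differ in the details): each group's predictions are pushed through their own empirical CDF and then through the weighted average of the per-group quantile functions, which is the quantile function of the one-dimensional Wasserstein-2 barycenter.

```python
# Sketch of barycenter-style post-processing for Demographic Parity.
# Illustrative only -- details differ from the estimator discussed in the talk.
import numpy as np

def fair_postprocess(preds, groups):
    """Map each group's predictions through its empirical CDF, then through the
    weighted average of the per-group quantile functions."""
    preds, groups = np.asarray(preds, float), np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()
    sorted_preds = {g: np.sort(preds[groups == g]) for g in labels}

    out = np.empty_like(preds)
    for g in labels:
        mask = groups == g
        # empirical CDF value of each prediction within its own group
        t = np.searchsorted(sorted_preds[g], preds[mask], side="right") / mask.sum()
        # barycenter quantile function: weighted average of the group quantile functions
        out[mask] = sum(w * np.quantile(sorted_preds[h], np.clip(t, 0.0, 1.0))
                        for h, w in zip(labels, weights))
    return out
```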
Hoda Heidari - On Modeling Human Perceptions of Allocation Policies with Uncertain Outcomes
Many policies allocate harms or benefits that are uncertain in nature: they produce distributions over the population in which individuals have different probabilities of incurring harm or benefit. Comparing different policies thus involves a comparison of their corresponding probability distributions, and we observe that in many instances the policies selected in practice are hard to explain by preferences based only on the expected value of the total harm or benefit they produce. In cases where the expected value analysis is not a sufficient explanatory framework, what would be a reasonable model for societal preferences over these distributions? We investigate explanations based on the framework of probability weighting from the behavioral sciences, which over several decades has identified systematic biases in how people perceive probabilities. We show that probability weighting can be used to make predictions about preferences over probabilistic distributions of harm and benefit that function quite differently from expected-value analysis, and in a number of cases provide potential explanations for policy preferences that appear hard to motivate by other means. In particular, we identify optimal policies for minimizing perceived total harms and maximizing perceived total benefits that take the distorting effects of probability weighting into account, and we discuss a number of real-world policies that resemble such allocational strategies. Our analysis does not provide specific recommendations for policy choices, but is instead fundamentally interpretive in nature, seeking to describe observed phenomena in policy choices.
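For background (this specific functional form is my own illustration, not necessarily the one used in this work): a standard probability weighting function from the behavioral literature is the Tversky-Kahneman form w(p) = p^\gamma / (p^\gamma + (1 - p)^\gamma)^{1/\gamma}, which for \gamma < 1 overweights small probabilities and underweights large ones, so that a perceived total harm such as \sum_i w(p_i) h_i can rank policies differently from the expected harm \sum_i p_i h_i.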
Aaron Roth - Online Multivalid Learning: Means, Moments, and Prediction Intervals
We present a general, efficient technique for providing contextual predictions that are "multivalid" in various senses, against an online sequence of adversarially chosen examples (x,y). This means that the resulting estimates correctly predict various statistics of the labels y not just marginally --- as averaged over the sequence of examples --- but also conditionally on x \in G for any G belonging to an arbitrary intersecting collection of groups.
We provide three instantiations of this framework. The first is mean prediction, which corresponds to an online algorithm satisfying the notion of multicalibration from Hebert-Johnson et al. The second is variance and higher moment prediction, which corresponds to an online algorithm satisfying the notion of mean-conditioned moment multicalibration from Jung et al. Finally, we define a new notion of prediction interval multivalidity, and give an algorithm for finding prediction intervals which satisfy it. Because our algorithms handle adversarially chosen examples, they can equally well be used to predict statistics of the residuals of arbitrary point prediction methods, giving rise to very general techniques for quantifying the uncertainty of predictions of black box algorithms, even in an online adversarial setting. When instantiated for prediction intervals, this solves a problem similar to the one addressed by conformal prediction, but in an adversarial environment and with multivalidity guarantees stronger than simple marginal coverage guarantees.
This talk is based on a paper that is joint work with Varun Gupta, Christopher Jung, Georgy Noarov, and Mallesh Pai, which is available at: https://arxiv.org/abs/2101.01739
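To make the "conditionally on x \in G" requirement concrete, here is a small offline audit in the spirit of multicalibration (a simplification of my own, not the online algorithm from the paper): within every group and every prediction bucket, the average outcome should be close to the average prediction.

```python
# Illustrative multicalibration-style audit (a simplification, not the paper's
# online algorithm): within each group G and each prediction bucket, the mean
# outcome should match the mean prediction. Predictions are assumed to lie in [0, 1].
import numpy as np

def calibration_violations(preds, y, groups, n_buckets=10, tol=0.05):
    """Flag (group, bucket, gap) triples where mean(y) deviates from mean(pred);
    `groups` maps group names to boolean masks, and the groups may intersect."""
    preds, y = np.asarray(preds, float), np.asarray(y, float)
    buckets = np.minimum((preds * n_buckets).astype(int), n_buckets - 1)
    violations = []
    for name, mask in groups.items():
        for b in range(n_buckets):
            idx = mask & (buckets == b)
            if not idx.any():
                continue
            gap = abs(y[idx].mean() - preds[idx].mean())
            if gap > tol:
                violations.append((name, b, gap))
    return violations
```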
Samira Samadi - Socially Fair k-Means Clustering
We show that the popular k-means clustering algorithm (Lloyd’s heuristic) can result in outcomes that are unfavorable to subgroups of data (e.g., demographic groups). Such biased clusterings can have deleterious implications for human-centric applications such as resource allocation. We present a fair k-means objective and algorithm to choose cluster centers that provide equitable costs for different groups. The algorithm, Fair-Lloyd, is a modification of Lloyd’s heuristic for k-means, inheriting its simplicity, efficiency, and stability. In comparison with standard Lloyd’s, we find that on benchmark datasets, Fair-Lloyd exhibits unbiased performance by ensuring that all groups have equal costs in the output k-clustering, while incurring a negligible increase in running time, thus making it a viable fair option wherever k-means is currently used.
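For reference, the socially fair objective, as I understand it (my paraphrase, with P_j denoting the points in group j): instead of minimizing the average cost over all points, choose the k centers C to minimize the maximum per-group average cost, \min_C \max_j \frac{1}{|P_j|} \sum_{x \in P_j} \min_{c \in C} \|x - c\|^2, so that no group pays a disproportionately high clustering cost.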
Ricardo Silva - On Prediction, Action and Interference
Ultimately, we want the world to be less unfair by changing it. Just making fair passive predictions is not enough, so our decisions will eventually have an effect on how a societal system works. We will discuss ways of modelling hypothetical interventions so that particular measures of counterfactual fairness are respected: that is, how do sensitive attributes interact with our actions to cause an unfair distribution of outcomes, and, that being the case, how do we mitigate such uneven impacts within the space of feasible actions? To make matters even harder, interference is likely: what happens to one individual may affect another. We will discuss how to express assumptions about and consequences of such causative factors for fair policy making, accepting that this is a daunting task but that we owe the public an explanation of our reasoning.
Joint work with Matt Kusner, Chris Russell and Joshua Loftus
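For reference, the counterfactual fairness criterion of Kusner et al. (stated here from memory, so treat the exact form as my paraphrase) asks that P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a) = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a) for every outcome y and every value a' attainable by the sensitive attribute A; the talk asks how to preserve this kind of guarantee when predictions become actions and individuals interfere with one another.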
Steven Wu - Involving Stakeholders in Building Fair ML Systems
Recent work in fair machine learning has proposed dozens of technical definitions of algorithmic fairness and methods for enforcing these definitions. However, we still lack a comprehensive understanding of how to develop machine learning systems with fairness criteria that reflect relevant stakeholders' nuanced viewpoints in real-world contexts. This talk will cover our recent work that aims to address this gap. We will first discuss an algorithmic framework that enforces the individual fairness criterion through interactions with a human auditor, who can identify fairness violations without enunciating a fairness (similarity) measure. We then discuss an empirical study on how to elicit stakeholders' fairness notions in the context of a child maltreatment predictive system.
Mikhail Yurochkin - Practical individual fairness algorithms
Individual Fairness (IF) is a very intuitive and desirable notion of fairness: we want ML models to treat similar individuals similarly, that is, to be fair for every person. For example, two resumes that differ only in the name and gender pronouns of the individual should be treated similarly by the model. Despite the intuition, training ML/AI models that abide by this rule in theory and in practice poses several challenges. In this talk, I will introduce a notion of Distributional Individual Fairness (DIF), highlighting similarities and differences with the original notion of IF introduced by Dwork et al. in 2011. DIF suggests a transport-based regularizer that is easy to incorporate into modern training algorithms while controlling the fairness-accuracy tradeoff by varying the regularization strength. The corresponding algorithm is theoretically guaranteed to train certifiably fair ML models and achieves individual fairness in practice on a variety of tasks. DIF can also be readily extended to other ML problems, such as Learning to Rank.
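A crude sketch of how a transport-style individual-fairness regularizer might look in training code (this is my own simplification, not the DIF regularizer itself; the "comparable" inputs stand in for the resume example above):

```python
# Crude sketch of an individual-fairness regularizer: penalize prediction gaps
# between each input and a 'comparable' counterpart (e.g., the same resume with
# name and pronouns swapped). This is a simplification, not the DIF regularizer.
import torch
import torch.nn.functional as F

def dif_style_loss(model, x, y, x_comparable, lam=1.0):
    """Task loss plus lam times the mean output gap between comparable pairs;
    lam trades accuracy off against individual fairness."""
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    fairness_gap = (logits - model(x_comparable)).abs().mean()
    return task_loss + lam * fairness_gap
```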