Fair AI in a Nutshell

Can Algorithms be Fair?

1/4 Day Tutorial @IJCAI-PRICAI 2020
https://www.ijcai20.org/tutorials.html
January 2021, Japan

In recent years there has been a lot of interest in algorithmic fairness in machine learning. This research area aims to enhance learning algorithms with fairness requirements, namely ensuring that sensitive information (e.g., gender, race, political opinion, and sexual orientation) does not 'unfairly' influence the outcome of a learning algorithm. This tutorial describes the state of the art on algorithmic fairness and discusses currently unexplored areas of research, such as the problems of representation (e.g., learning fair graph embeddings) and temporal unfairness (e.g., detecting shifts in unfair biases over time).

Abstract

AI-based systems and products are reaching society at large in many aspects of everyday life, including financial lending, online advertising, pretrial and immigration detention, child maltreatment screening, health care, social services, and education. This phenomenon has been accompanied by growing concern about the ethical issues that may arise from the adoption of these technologies. In response, a new area of machine learning has emerged that studies how to address disparate treatment caused by algorithmic errors and bias in the data. The central question is how to ensure that the learned model does not treat subgroups in the population unfairly. While the design of solutions to this issue requires an interdisciplinary effort, fundamental progress can only be achieved through a radical change in the machine learning paradigm. In this tutorial, we describe the state of the art on algorithmic fairness and discuss currently unexplored areas of research, such as the problems of representation and temporal unfairness. We first use the framework of graphical models to provide a clear and intuitive formalization and characterization of the subject. We then use statistical learning theory, machine learning, and deep learning tools to learn fair models and data representations.
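To make the measures of (un)fairness covered in the outline below concrete, here is a minimal sketch, assuming binary predictions and a binary sensitive attribute, of two common group-fairness gaps (demographic parity and equal opportunity). It is written in plain NumPy for illustration only; the variable names (y_true, y_pred, group) are our own placeholders, not part of the tutorial material.

    import numpy as np

    def demographic_parity_gap(y_pred, group):
        # |P(Yhat = 1 | A = 0) - P(Yhat = 1 | A = 1)| for a binary sensitive attribute A.
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def equal_opportunity_gap(y_true, y_pred, group):
        # Difference in true positive rates P(Yhat = 1 | Y = 1, A = a) between groups.
        tpr = lambda a: y_pred[(group == a) & (y_true == 1)].mean()
        return abs(tpr(0) - tpr(1))

    # Toy data: a classifier that accepts members of group 1 more often.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=10000)
    y_true = rng.integers(0, 2, size=10000)
    y_pred = (rng.random(10000) < np.where(group == 1, 0.7, 0.4)).astype(int)

    print(demographic_parity_gap(y_pred, group))         # close to 0.3
    print(equal_opportunity_gap(y_true, y_pred, group))  # also close to 0.3 here

Because y_pred is independent of y_true in this toy example, the two gaps coincide; on real data the different measures generally disagree, which is the mathematical incompatibility discussed in the outline below.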

Detailed, point-form outline of the tutorial

For more details about the topics listed below, please refer to [1-4].

  • Humane/Trustworthy AI and Current Initiatives

  • Can Algorithms be Fair?

  • What is Fairness?

  • Measures of (un)fairness

    • Mathematical incompatibility of the measures

    • Accuracy-Fairness tradeoff and the cost of fairness

  • Understanding (un)fairness: Graphical Models

    • A graphical view of (un)fairness and the COMPAS debate

    • Causal models, direct and indirect effects, and path-specific (un)fairness

  • Theory and Practice in Learning Fair models

    • Post-Processing Methods (Hands-On with [5])

    • In-Processing Methods (Hands-On with [6]; see the illustrative sketch after this outline)

    • Pre-Processing Methods (Hands-On with [7])

  • From Fair Model to Fair Representation (Hands-On with [8])

  • Open Challenges and Future Perspectives
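
To give a flavour of the in-processing approach mentioned above, here is a minimal sketch in the spirit of, but not identical to, the constrained empirical risk minimization of [6]: a logistic regression whose loss is augmented with a smooth demographic-parity surrogate, namely the squared gap between the two groups' mean decision scores. All names (train_fair_logreg, lam, and so on) are our own illustrative choices, not code from the hands-on material.

    import numpy as np

    def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
        # Gradient descent on: logistic loss + lam * (mean-score gap)^2.
        w = np.zeros(X.shape[1])
        # The gradient of the mean-score gap is constant in w.
        d_gap = X[group == 0].mean(axis=0) - X[group == 1].mean(axis=0)
        for _ in range(epochs):
            scores = X @ w
            p = 1.0 / (1.0 + np.exp(-scores))
            grad = X.T @ (p - y) / len(y)    # gradient of the logistic loss
            gap = scores[group == 0].mean() - scores[group == 1].mean()
            grad += 2.0 * lam * gap * d_gap  # gradient of lam * gap**2
            w -= lr * grad
        return w

Setting lam = 0 recovers plain logistic regression; increasing lam shrinks the mean-score gap between the groups at some cost in accuracy, which is exactly the accuracy-fairness trade-off listed in the outline.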

[1] Oneto, L. and Chiappa, S., Fairness in Machine Learning, Springer, 2020.

[2] Oneto, L., Learning Fair Models and Representations, Intelligenza Artificiale, 2020.

[3] Chiappa, S. and Isaac, W. S., A Causal Bayesian Networks Viewpoint on Fairness, IFIP International Summer School on Privacy and Identity Management, Springer, 2018.

[4] Barocas, S., Hardt, M., and Narayanan, A., Fairness and Machine Learning, https://fairmlbook.org/

[5] Chzhen, E., Hebiri, M., Denis, C., Oneto, L., and Pontil, M., Fair Regression with Wasserstein Barycenters, Advances in Neural Information Processing Systems (NeurIPS), 2020.

[6] Donini, M., Oneto, L., Ben-David, S., Shawe-Taylor, J., and Pontil, M., Empirical Risk Minimization Under Fairness Constraints, Advances in Neural Information Processing Systems (NeurIPS), 2018.

[7] Calmon, F., Wei, D., Vinzamuri, B., Ramamurthy, K. N., and Varshney, K. R., Optimized Pre-Processing for Discrimination Prevention, Advances in Neural Information Processing Systems (NeurIPS), 2017.

[8] Oneto, L., Donini, M., Luise, G., Ciliberto, C., Maurer, A., and Pontil, M., Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning, Advances in Neural Information Processing Systems (NeurIPS), 2020.

Targeted Audience

We think that this tutorial can be interesting and significant for a wide range of researchers active in the data mining and machine learning communities. The tutorial can be useful for:

  • Practitioners, data scientists and professionals who are interested in better understanding the ideas behind these methods and discovering how they can be useful in practical applications;

  • PhD students and young researchers who are interested in discovering and learning more about the seminal works and the state-of-the-art results of this field of research;

  • Researchers who already work on these topics but want a complete overview of all the approaches available in the literature, or want a different point of view on these subjects;

  • Researchers who are interested in deepening their knowledge of the future perspectives of this field of research.

The main target audience of this tutorial is PhD students, young researchers, and practitioners who want to learn more about or better understand the above-mentioned topics through a complete overview of this field of research. We expect people from heterogeneous areas of research: in particular, researchers focused on the applications of data mining and machine learning techniques to real-world problems, researchers who develop new learning algorithms or theoretical approaches to learning, data science practitioners, and data analytics consultants. We do not require any knowledge beyond that obtained through an MSc in Computer Science, Computer Engineering, or related fields.

Why you should follow this Tutorial

IJCAI (this year together with PRICAI) is the premier conference bringing together the international AI community to communicate the advances and achievements of artificial intelligence research. Given the social and industrial relevance of Fair AI, together with the hands-on components and interactive elements, we think that our tutorial is particularly suited for IJCAI. Moreover, fairness is a fundamental building block in designing and deploying AI systems that enhance human capabilities and empower both individuals and society as a whole, rather than replace human intelligence.

Related Tutorials

Recently, many other tutorials on this topic have been presented at top-tier conferences, for example:

  • Sara Hajian, Francesco Bonchi, and Carlos Castillo, Algorithmic bias: From discrimination discovery to fairness-aware data mining, KDD Tutorial, 2016.

  • Solon Barocas and Moritz Hardt, Fairness in machine learning, NeurIPS Tutorial, 2017.

  • Kate Crawford, The Trouble with Bias, NeurIPS Keynote, 2017.

  • Arvind Narayanan, 21 fairness definitions and their politics, FAT* Tutorial, 2018.

  • Sam Corbett-Davies and Sharad Goel, Defining and Designing Fair Algorithms, Tutorials at EC 2018 and ICML 2018.

  • Ben Hutchinson and Margaret Mitchell, Translation Tutorial: A History of Quantitative Fairness in Testing, FAT* Tutorial, 2019.

  • Henriette Cramer, Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miroslav Dudík, Hanna Wallach, Sravana Reddy, and Jean Garcia-Gathright, Translation Tutorial: Challenges of incorporating algorithmic fairness into industry practice, FAT* Tutorial, 2019.

  • Moustapha Cisse, Sanmi Koyejo, Representation Learning and Fairness, NeurIPS Tutorial, 2019.

  • Sarah Bird, Ben Hutchinson, Krishnaram Kenthapadi, Emre Kıcıman, Margaret Mitchell, Fairness-Aware Machine Learning in Practice, ACM International Conference on Web Search and Data Mining, 2019.

Nevertheless, we believe that our tutorial has several innovative aspects and can be very useful to this field of research. Indeed, our tutorial deals comprehensively with the problem of (un)fair AI: it gives tools to understand the paths to unfairness (with graphical models), presents theoretical (with statistical learning theory) and practical (with hands-on sessions) methods to deal with unfairness, and points out open research directions in this field (fair representations and temporal unfairness).

About the Presenters

Background in the tutorial area, including a list of publications/presentations
Both presenters have multiple publications in top venues on the subject of the tutorial, and they are among the first authors to have developed state-of-the-art results on this subject. Here is a list of the presenters' works on the subject of the tutorial (full CVs are available on the authors' websites).

    • Chzhen, E., Hebiri, M., Denis, C., Oneto, L., and Pontil, M., Fair Regression with Wasserstein Barycenters, Advances in Neural Information Processing Systems (NeurIPS), 2020.

    • Chzhen, E., Denis, C., Hebiri, M., Oneto, L., and Pontil, M., Fair Regression via Plug-In Estimator and Recalibration, Advances in Neural Information Processing Systems (NeurIPS), 2020.

    • Oneto, L., Donini, M., Luise, G., Ciliberto, C., Maurer, A., and Pontil, M., Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning, Advances in Neural Information Processing Systems (NeurIPS), 2020.

    • Chzhen, E., Hebiri, M., Denis, C., Oneto, L., and Pontil, M., Leveraging Labeled and Unlabeled Data for Consistent Fair Binary Classification, Advances in Neural Information Processing Systems (NeurIPS), 2019.

    • Oneto, L., Donini, M., Pontil, M., and Shawe-Taylor, J., Randomized Learning and Generalization of Fair and Private Classifiers: From PAC-Bayes to Stability and Differential Privacy, Neurocomputing, 2020.

    • Oneto, L., Donini, M., Pontil, M., and Maurer, A., Learning Fair and Transferable Representations, Advances in Neural Information Processing Systems (NeurIPS) Workshop on Human-Centric Machine Learning.

    • Donini, M., Oneto, L., Ben-David, S., Shawe-Taylor, J., and Pontil, M., Empirical Risk Minimization Under Fairness Constraints, Advances in Neural Information Processing Systems (NeurIPS), 2018.

    • Oneto, L. and Chiappa, S., Fairness in Machine Learning, Springer, 2020.

    • Oneto, L., Donini, M., Elders, A., and Pontil, M., Taking Advantage of Multitask Learning for Fair Classification, AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2018.

    • Oneto, L., Learning Fair Models and Representations, Intelligenza Artificiale, 2020.

    • Navarin, N., Oneto, L., and Donini, M., Learning Deep Fair Graph Neural Networks, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 2020.

Citation to an available example of work in the area (ideally, a published tutorial-level article or presentation materials on the subject)

    • Oneto, L., & Chiappa, S., Fairness in Machine Learning, Springer, 2020.

    • Oneto, L., Learning Fair Models and Representations, Intelligenza Artificiale, 2020.

Evidence of teaching experience (courses taught or references)

    • INNS BDDL 2019 Tutorial: Silvia Chiappa, Luca Oneto, Fairness in Machine Learning

    • PhD Course: Trustworthy AI, University of Genoa, Luca Oneto

Evidence of scholarship in AI or Computer Science
Prof. Luca Oneto has received the following awards:

    • Amazon AWS Machine Learning Research Award in 2019 & 2020

    • Somalvico Prize as best under-40 Italian AI researcher in 2019

Affiliations: University of Genoa and Amazon Web Services.