Socially Responsible Machine Learning: A Causal Perspective

The ever-growing reliance of humans and society on machine learning methods has raised concerns about their trustworthiness and accountability. In response to these concerns, Socially Responsible Machine Learning (SRML) aims to develop fair, transparent, and robust machine learning algorithms. However, traditional approaches to SRML do not incorporate human perspectives and are therefore insufficient to build long-lasting trust between machines and human beings. Causality, a cornerstone of human intelligence, plays a vital role in building socially responsible machine learning algorithms that are compatible with human reasoning.

Bridging the gap between traditional SRML and causality, this tutorial aims to provide a holistic overview of SRML through the lens of causality. In particular, we focus on state-of-the-art techniques for causal socially responsible ML in terms of fairness, interpretability, and robustness. The objectives of this tutorial are as follows: (1) we provide a taxonomy of the existing literature on causal socially responsible ML from the fairness, interpretability, and robustness perspectives; (2) we review the state-of-the-art techniques for each task; and (3) we elucidate open questions and future research directions. We believe this tutorial will benefit researchers and practitioners in data mining, machine learning, and the social sciences.

Presenters

Raha Moraffah

Arizona State University

Amir-Hossein Karimi

Max Planck Institute for Intelligent Systems

Adrienne Raglin

Army Research Lab

Huan Liu

Arizona State University

Contributors 

Miriam Rateike

Saarland University

Ayan Majumdar

Max Planck Institute for Software Systems

Isabel Valera

Max Planck Institute for Software Systems, Saarland University

Tutorial Outline 

(1) Background and Overview (10 mins) (Slides) 

  (i) Overview of Causal Socially Responsible ML

  (ii) Introduction to Causality
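
To preview the causal machinery the tutorial builds on, here is a minimal sketch of a structural causal model (SCM) and a do-intervention in the sense of Pearl. The graph, coefficients, and sampler below are toy assumptions chosen only to show how intervening differs from conditioning.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    def sample(do_t=None):
        """Toy SCM with confounding: Z -> T, Z -> Y, T -> Y.
        Passing do_t severs the Z -> T edge, i.e., applies do(T = t)."""
        z = rng.normal(size=n)                                   # confounder
        t = z + rng.normal(size=n) if do_t is None else np.full(n, do_t)
        y = 2.0 * t + 3.0 * z + rng.normal(size=n)
        return t, y

    # The observational association is biased by the confounder Z ...
    t_obs, y_obs = sample()
    print("observational slope:", np.polyfit(t_obs, y_obs, 1)[0])    # ~3.5, not 2

    # ... while intervening recovers the true causal effect of T on Y.
    _, y0 = sample(do_t=0.0)
    _, y1 = sample(do_t=1.0)
    print("E[Y | do(T=1)] - E[Y | do(T=0)]:", y1.mean() - y0.mean())  # ~2.0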

(2) Causal Interpretability (50 mins) (Slides) 

  (i) Causal Data-based Interpretability

  (ii) Causal Model-based Interpretability (see the sketch below)

  (iii) Causal Decision-based Interpretability
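
As a taste of causal model-based interpretability, the sketch below estimates the average causal effect of one input feature on a black-box model's output by intervening on that feature while keeping the remaining features at their observed values. The helper average_causal_effect and the toy model are our own illustrative assumptions, not a specific method from the surveyed literature.

    import numpy as np

    def average_causal_effect(model, X, feature, lo, hi):
        """Average effect of do(feature = hi) vs. do(feature = lo) on the
        model output, averaged over the observed values of the other features."""
        X_hi, X_lo = X.copy(), X.copy()
        X_hi[:, feature] = hi
        X_lo[:, feature] = lo
        return float((model(X_hi) - model(X_lo)).mean())

    # Toy black box in which only feature 0 matters causally.
    model = lambda X: 2.0 * X[:, 0] + 0.0 * X[:, 1]
    X = np.random.default_rng(0).normal(size=(1_000, 2))
    print(average_causal_effect(model, X, feature=0, lo=0.0, hi=1.0))  # ~2.0
    print(average_causal_effect(model, X, feature=1, lo=0.0, hi=1.0))  # ~0.0

Note that this simplification treats the model's inputs as directly intervenable; when features cause one another, interventions must instead propagate through the underlying SCM, a distinction the tutorial discusses.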

(3) Causal Fairness (50 mins)  (Slides) 

  (i) Fundamentals of Causal Fairness

  (ii) Causal Fairness Notions (see the sketch below)

  (iii) Causal Fairness Methods
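
To make one such notion concrete, the sketch below checks counterfactual fairness (in the sense of Kusner et al.) in a toy linear SCM: abduct the exogenous noise from the factual data, intervene on the protected attribute A, and compare factual and counterfactual predictions. All coefficients are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Toy linear SCM: protected attribute A -> X, and a predictor using X.
    a = rng.integers(0, 2, size=n).astype(float)
    u = rng.normal(size=n)              # exogenous noise of X
    x = 1.5 * a + u                     # X is causally influenced by A

    predict = lambda x_: 0.8 * x_       # predictor built on X only

    # Counterfactual: abduct U from the factual X, then intervene A <- 1 - a.
    u_abducted = x - 1.5 * a
    x_cf = 1.5 * (1.0 - a) + u_abducted

    gap = np.abs(predict(x) - predict(x_cf)).mean()
    print("mean |factual - counterfactual| gap:", gap)  # ~1.2 > 0: not fair

A predictor built on the abducted noise U alone would close this gap, which is the intuition behind many of the causal fairness methods covered in this section.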

(4) Causal Robustness (50 mins) (Slides) 

  (i) Causal Data Augmentation (see the sketch below)

  (ii) Causal Representation Learning

  (iii) Causal Mechanism Learning

  (iv) Post-training Methods
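
As a flavor of causal data augmentation (the first item above), the sketch below breaks a spurious correlation by resampling a non-causal "style" feature under an intervention, so that a simple probe concentrates its weight on the causal feature. The data-generating process is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_000

    # Training data: the label depends causally on x_c, while x_s is a
    # non-causal "style" feature that happens to correlate with the label.
    y = rng.integers(0, 2, size=n).astype(float)
    x_c = y + 0.1 * rng.normal(size=n)    # causal feature
    x_s = y + 0.1 * rng.normal(size=n)    # spurious feature

    # Causal augmentation: duplicate the data with do(x_s = noise),
    # severing the spurious dependence of x_s on the label.
    x_s_aug = rng.normal(size=n)
    X = np.column_stack([np.r_[x_c, x_c], np.r_[x_s, x_s_aug]])
    Y = np.r_[y, y]

    # A least-squares probe now places almost all weight on the causal feature.
    w, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(Y))]), Y, rcond=None)
    print("weights [causal, spurious]:", w[:2].round(3))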

(5) Conclusion and Future Trends (10 mins) (Slides) 

  (i) Future Directions and Promises

  (ii) Challenges and Limitations

References

[1] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. 2019. Invariant risk minimization. arXiv preprint arXiv:1907.02893 (2019). [2] Haoyue Bai et al. 2021. Decaug: Out-of-distribution generalization via decomposed feature representation and semantic augmentation. In AAAI. 

[3] Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2019. Fairness and Machine Learning: Limitations and Opportunities. http://www.fairmlbook.org. 

[4] Yang Chen, Yu Wang, Yingwei Pan, Ting Yao, Xinmei Tian, and Tao Mei. 2021. A style and semantic memory mechanism for domain generalization. In ICCV. 

[5] Cynthia Dwork, Nicole Immorlica, Adam Tauman Kalai, and Max Leiserson. 2018. Decoupled classifiers for group-fair and efficient machine learning. In Conference on fairness, accountability and transparency. PMLR, 119–133. 

[6] Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts. 2021. Causal abstractions of neural networks. NeurIPS (2021). 

[7] Yash Goyal, Amir Feder, Uri Shalit, and Been Kim. 2019. Explaining classifiers with causal concept effect (cace). arXiv preprint arXiv:1907.07165 (2019). 

[8] Dominik Janzing, Lenon Minorics, and Patrick Blöbaum. 2020. Feature relevance quantification in explainable AI: A causal problem. In AISTATS. PMLR. [9] Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, and Been Kim. 2022. On the Relationship Between Explanation and Prediction: A Causal View. arXiv preprint arXiv:2212.06925 (2022). 

[10] Amir-Hossein Karimi, Bernhard Schölkopf, and Isabel Valera. 2021. Algorithmic recourse: from counterfactual explanations to interventions. In ACM FAccT. 

[11] Hyemi Kim, Seungjae Shin, JoonHo Jang, Kyungwoo Song, Weonyoung Joo, Wanmo Kang, and Il-Chul Moon. 2021. Counterfactual fairness with disentangled causal effect variational autoencoder. In AAAI. 

[12] Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual fairness. NeurIPS (2017). 

[13] Joshua R Loftus, Lucius EJ Bynum, and Sakina Hansen. 2023. Causal Dependence Plots for Interpretable Machine Learning. arXiv preprint arXiv:2303.04209 (2023). 

[14] Divyat Mahajan, Shruti Tople, and Amit Sharma. 2021. Domain generalization using causal matching. In ICML. 

[15] Raha Moraffah, Mansooreh Karami, Ruocheng Guo, Adrienne Raglin, and Huan Liu. 2020. Causal interpretability for machine learning-problems, methods and evaluation. ACM SIGKDD Explorations Newsletter 22, 1 (2020), 18–33. 

[16] Judea Pearl. 2009. Causality. Cambridge university press. 

[17] Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. 2017. Elements of causal inference: foundations and learning algorithms. The MIT Press. 

[18] Numair Sani, Daniel Malinsky, and Ilya Shpitser. 2020. Explaining the behavior of black-box prediction algorithms with causal learning. arXiv:2006.02482 (2020). 

[19] Patrick Schwab and Walter Karlen. 2019. Cxplain: Causal explanations for model interpretation under uncertainty. NeurIPS (2019). 

[20] Paras Sheth, Raha Moraffah, K Selçuk Candan, Adrienne Raglin, and Huan Liu. 2022. Domain Generalization–A Causal Perspective. arXiv:2209.15177 (2022). 

[21] David S Watson, Limor Gultchin, Ankur Taly, and Luciano Floridi. 2021. Local explanations via necessity and sufficiency: Unifying theory and practice. In UAI. 

[22] Guandong Xu, Tri Dung Duong, Qian Li, Shaowu Liu, and Xianzhi Wang. 2020. Causality learning: a new perspective for interpretable machine learning. arXiv preprint arXiv:2006.16789 (2020). 

[23] Lu Zhang, Yongkai Wu, and Xintao Wu. 2016. A causal framework for discovering and removing direct and indirect discrimination. arXiv:1611.07509 (2016).