NeurIPS Tutorial on Algorithmic Fairness at the Intersection

Monday 5th December 2022
2 p.m. EST — 4:30 p.m. EST

Join us via Zoom. Check the NeurIPS website for more info: https://neurips.cc/

This event has passed. Slides can be found here, and a video recording is available on SlidesLive.


The goal of this tutorial is to bring together machine learning researchers from across the globe to discuss algorithmic fairness at the intersection of Privacy, Robustness, and Explainability.

The Speakers

Golnoosh Farnadi

HEC Montreal/University of Montreal/Mila

Q. Vera Liao

Microsoft Research

Elliot Creager

University of Toronto/Vector

Panel Moderator: 

Su Lin Blodgett

Microsoft Research


The Panelists

Elizabeth Anne Watkins

Intel Labs

Ferdinando Fioretto

Syracuse University

Amir-Hossein Karimi

MPI for Intelligent Systems & ETH Zurich

Pratyusha Kalluri 

Stanford

References mentioned in the tutorial

Algorithmic Fairness:

Selbst, A.D., Boyd, D., Friedler, S.A., Venkatasubramanian, S., Vertesi, J., 2019. Fairness and Abstraction in Sociotechnical Systems

Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M., Robinson, D.G., 2020. Roles for Computing in Social Change.

Gebru, T., Denton, E., 2021. NeurIPS Tutorial: Beyond Fairness in Machine Learning.

Ndebele, L., 2022. Social media companies urged to block hate speech linked to Tigray conflict.

Mahoozi, S., 2022. Mahsa Amini death: facial recognition to hunt hijab rebels in Iran

Barocas, S., Biega, A.J., Fish, B., Niklas, J., Stark, L., 2020. When not to design, build, or deploy

Obermeyer, Z., Powers, B., Vogeli, C., Mullainathan, S., 2019. Dissecting racial bias in an algorithm used to manage the health of populations.

Suresh, H., Guttag, J.V., 2021. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle

Barocas, S., Crawford, K., Shapiro, A., Wallach, H., 2017. The Problem with Bias: From Allocative to Representational Harms in Machine Learning. Special Interest Group for Computing, Information and Society (SIGCIS).

Scheuerman, M.K., Brubaker, J.R., 2018. Gender is not a Boolean: Towards Designing Algorithms to Understand Complex Human Identities.

Hu, L., Kohler-Hausmann, I., 2020. What’s Sex Got To Do With Fair Machine Learning?

Lu, C., Kay, J., McKee, K., 2022. Subverting machines, fluctuating identities: Re-learning human categorization.

Dwork, Cynthia, et al. "Fairness through awareness." Proceedings of the 3rd innovations in theoretical computer science conference. ACM, 2012.

Verma, S., & Rubin, J. (2018, May). Fairness definitions explained. In 2018 IEEE/ACM International Workshop on Software Fairness (FairWare) (pp. 1-7). IEEE.

Calders, T., Kamiran, F., & Pechenizkiy, M. (2009, December). Building classifiers with independency constraints. In 2009 IEEE International Conference on Data Mining Workshops (pp. 13-18). IEEE.

Hardt, M., Price, E. and Srebro, N., 2016. Equality of opportunity in supervised learning. In Advances in neural information processing systems (pp. 3315-3323).

Friedler, Sorelle A., Carlos Scheidegger, and Suresh Venkatasubramanian. "On the (im)possibility of fairness." arXiv preprint arXiv:1609.07236 (2016).

Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807.

Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual fairness. Advances in neural information processing systems, 30.

Zhang & Bareinboim. "Fairness in Decision-Making: The Causal Explanation Formula." AAAI, 2018.

Nilforoshan, Hamed, et al. "Causal conceptions of fairness and their consequences." International Conference on Machine Learning. PMLR, 2022.

Alabdulmohsin, I., Schrouff, J., & Koyejo, O. (2022). A Reduction to Binary Approach for Debiasing Multiclass Datasets. arXiv preprint arXiv:2205.15860.

Sattigeri, Prasanna, et al. "Fairness GAN: Generating datasets with fairness properties using a generative adversarial network." IBM Journal of Research and Development 63.4/5 (2019): 3-1.

van Breugel, Boris, et al. "Decaf: Generating fair synthetic data using causally-aware generative networks." Advances in Neural Information Processing Systems 34 (2021): 22221-22233.

Zemel, R., Wu, Y., Swersky, K., Pitassi, T., & Dwork, C. (2013, May). Learning fair representations. In International conference on machine learning (pp. 325-333). PMLR

Louizos, C., Swersky, K., Li, Y., Welling, M., & Zemel, R. (2015). The variational fair autoencoder. arXiv preprint arXiv:1511.00830.

Madras, D., Creager, E., Pitassi, T., & Zemel, R. (2018, July). Learning adversarially fair and transferable representations. In International Conference on Machine Learning (pp. 3384-3393). PMLR.

Dong, Yushun, et al. "Individual fairness for graph neural networks: A ranking based approach." Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021.

Choi, Y., Farnadi, G., Babaki, B., & Van den Broeck, G. (2020, April). Learning fair naive bayes classifiers by discovering and eliminating discrimination patterns. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 06, pp. 10077-10084).

Mohammadi, K., Sivaraman, A., & Farnadi, G. (2022). FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks. arXiv preprint arXiv:2206.00553.

Kamishima, Toshihiro, Shotaro Akaho, and Jun Sakuma. "Fairness-aware learning through regularization approach." 2011 IEEE 11th International Conference on Data Mining Workshops. IEEE, 2011.

Agarwal, A., Beygelzimer, A., Dudík, M., Langford, J., & Wallach, H. (2018). A reductions approach to fair classification. arXiv preprint arXiv:1803.02453.

Kearns, M., Neel, S., Roth, A., & Wu, Z. S. (2018, July). Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In International Conference on Machine Learning (pp. 2564-2572). PMLR.

Kamiran, F., Karim, A., & Zhang, X. (2012, December). Decision theory for discrimination-aware classification. In 2012 IEEE 12th International Conference on Data Mining (pp. 924-929). IEEE.

Model Privacy:

Dwork, Cynthia. "Differential privacy: A survey of results." International conference on theory and applications of models of computation. Springer, Berlin, Heidelberg, 2008.

Shokri, Reza, et al. "Membership inference attacks against machine learning models." 2017 IEEE symposium on security and privacy (SP). IEEE, 2017.

Carlini, N., Liu, C., Erlingsson, Ú., Kos, J., & Song, D. (2019). The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security 19) (pp. 267-284).

Bassily, Raef, Adam Smith, and Abhradeep Thakurta. "Private empirical risk minimization: Efficient algorithms and tight error bounds." 2014 IEEE 55th annual symposium on foundations of computer science. IEEE, 2014.

Abadi, Martin, et al. "Deep learning with differential privacy." Proceedings of the 2016 ACM SIGSAC conference on computer and communications security. 2016.

Goldreich, O. (1998). Secure multi-party computation. Manuscript. Preliminary version, 78, 110.

McMahan, B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017, April). Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics (pp. 1273-1282). PMLR.

Gupta, Otkrist, and Ramesh Raskar. "Distributed learning of deep neural network over multiple agents." Journal of Network and Computer Applications 116 (2018): 1-8.


Algorithmic Fairness & Privacy:

Dwork, Cynthia, et al. "Fairness through awareness." Proceedings of the 3rd innovations in theoretical computer science conference. ACM, 2012.

Bagdasaryan, Eugene, Omid Poursaeed, and Vitaly Shmatikov. "Differential privacy has disparate impact on model accuracy." Advances in neural information processing systems 32 (2019).

Chang, H., & Shokri, R. (2021, September). On the privacy risks of algorithmic fairness. In 2021 IEEE European Symposium on Security and Privacy (EuroS&P) (pp. 292-303). IEEE.

Kulynych, Bogdan, Mohammad Yaghini, Giovanni Cherubin, Michael Veale, and Carmela Troncoso. "Disparate vulnerability to membership inference attacks." arXiv preprint arXiv:1906.00389 (2019).

Song, Congzheng, and Vitaly Shmatikov. "Overlearning reveals sensitive attributes." arXiv preprint arXiv:1905.11742 (2019).

Abadi, Martin, et al. "Deep learning with differential privacy." Proceedings of the 2016 ACM SIGSAC conference on computer and communications security. 2016.

Tran, Cuong, My Dinh, and Ferdinando Fioretto. "Differentially private empirical risk minimization under the fairness lens." Advances in Neural Information Processing Systems 34 (2021): 27555-27565.

Dwork, C., & Ilvento, C. (2018). Group fairness under composition. In Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency (FAT* 2018).

Dwork, Cynthia, and Christina Ilvento. "Individual fairness under composition." Proceedings of Fairness, Accountability, Transparency in Machine Learning (2018).

Pentyala, S., Neophytou, N., Nascimento, A., De Cock, M., & Farnadi, G. (2022). PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning. arXiv preprint arXiv:2205.11584.

Fallah, A., Mokhtari, A., & Ozdaglar, A. (2020). Personalized federated learning: A meta-learning approach. arXiv preprint arXiv:2002.07948.

Li, T., Hu, S., Beirami, A., & Smith, V. (2021, July). Ditto: Fair and robust federated learning through personalization. In International Conference on Machine Learning (pp. 6357-6368). PMLR.

Li, T., Sanjabi, M., Beirami, A., & Smith, V. (2019). Fair resource allocation in federated learning. arXiv preprint arXiv:1905.10497.

Model Robustness:

Shah, H., Tamuly, K., Raghunathan, A., Jain, P., Netrapalli, P., 2020. The Pitfalls of Simplicity Bias in Neural Networks.

Sagawa, S., Raghunathan, A., Koh, P.W., Liang, P., 2020. An Investigation of Why Overparameterization Exacerbates Spurious Correlations

Geirhos, R., Jacobsen, J.-H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., Wichmann, F.A., 2020. Shortcut Learning in Deep Neural Networks

D’Amour, A., Heller, K., et al., 2020. Underspecification Presents Challenges for Credibility in Modern Machine Learning.

Peters, J., Bühlmann, P., Meinshausen, N., 2015. Causal inference using invariant prediction: identification and confidence intervals.

Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A., 2019. Towards Deep Learning Models Resistant to Adversarial Attacks.

Goodfellow, I.J., Shlens, J., Szegedy, C., 2015. Explaining and Harnessing Adversarial Examples.

Fredrikson, M., Jha, S., Ristenpart, T., 2015. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures

Geiping, J., Fowl, L., Huang, W.R., Czaja, W., Taylor, G., Moeller, M., Goldstein, T., 2021. Witches’ Brew: Industrial Scale Data Poisoning via Gradient Matching.

Carlini, N., Tramer, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., Brown, T., Song, D., Erlingsson, U., Oprea, A., Raffel, C., 2021. Extracting Training Data from Large Language Models.

Duchi, J., Glynn, P., Namkoong, H., 2018. Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach.

Sagawa, S., Koh, P.W., Hashimoto, T.B., Liang, P., 2020. Distributionally Robust Neural Networks for Group Shifts

Oren, Y., Sagawa, S., Hashimoto, T.B., Liang, P., 2019. Distributionally Robust Language Modeling

Gretton, A., Smola, A., Huang, J., Schmittfull, M., Borgwardt, K., Schölkopf, B., 2008. Covariate Shift by Kernel Mean Matching

Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., 2015. Domain-Adversarial Neural Networks.

Edwards, H., Storkey, A., 2016. Censoring Representations with an Adversary.

Beery, S., Van Horn, G., Perona, P., 2018. Recognition in Terra Incognita. ECCV 2018.

Arjovsky et al 2019. Invariant Risk Minimization.

Krueger, D., et al 2021. Out-of-Distribution Generalization via Risk Extrapolation (REx).

Gulrajani, I., Lopez-Paz, D., 2020. In Search of Lost Domain Generalization.

Menon, A.K., Jayasumana, S., Rawat, A.S., Jain, H., Veit, A., Kumar, S., 2021. Long-tail Learning via Logit Adjustment

Kirichenko, P., Izmailov, P., Wilson, A.G., 2022. Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations.

Algorithmic Fairness & Robustness:

Creager, E., Jacobsen, J.-H., Zemel, R., 2021. Environment Inference for Invariant Learning

Shankar, S., Halpern, Y., Breck, E., Atwood, J., Wilson, J., Sculley, D., 2017. No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World.

Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., 2015. Domain-Adversarial Neural Networks.

Edwards, H., Storkey, A., 2016. Censoring Representations with an Adversary.

Louizos, C., Swersky, K., Li, Y., Welling, M., Zemel, R., 2017. The Variational Fair Autoencoder.

Madras, D., Creager, E., Pitassi, T., Zemel, R., 2018. Learning Adversarially Fair and Transferable Representations.

Yurochkin, M., Bower, A., Sun, Y., 2020. Training individually fair ML models with Sensitive Subspace Robustness.

Yeom, S., Fredrikson, M., 2020. Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness

Hashimoto, T.B., Srivastava, M., Namkoong, H., Liang, P., 2018. Fairness Without Demographics in Repeated Loss Minimization.

Garg, S., Perot, V., Limtiaco, N., Taly, A., Chi, E.H., Beutel, A., 2019. Counterfactual Fairness in Text Classification through Robustness

Rudinger, R., Naradowsky, J., Leonard, B., Van Durme, B., 2018. Gender Bias in Coreference Resolution.

Martinez, N., Bertran, M., Sapiro, G., 2020. Minimax Pareto Fairness: A Multi Objective Perspective.

Diana, E., Gill, W., Kearns, M., Kenthapadi, K., Roth, A., 2021. Minimax Group Fairness: Algorithms and Experiments.

Hébert-Johnson, Ú., Kim, M.P., Reingold, O., Rothblum, G.N., 2018. Calibration for the (Computationally-Identifiable) Masses.

Kim, M.P., Ghorbani, A., Zou, J., 2018. Multiaccuracy: Black-Box Post-Processing for Fairness in Classification.

Lahoti, P., Beutel, A., Chen, J., Lee, K., Prost, F., Thain, N., Wang, X., Chi, E.H., 2020. Fairness without Demographics through Adversarially Reweighted Learning.

Schoelkopf, B., Janzing, D., Peters, J., Sgouritsa, E., Zhang, K., Mooij, J., 2012. On Causal and Anticausal Learning.

Veitch, V., D’Amour, A., Yadlowsky, S., Eisenstein, J., 2021. Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests.

Makar, M., D’Amour, A., 2022. Fairness and robustness in anti-causal prediction.

Lechner, T., Ben-David, S., Agarwal, S., Ananthakrishnan, N., 2021. Impossibility results for fair representations.

Rezaei, A., Liu, A., Memarrast, O., Ziebart, B., 2021. Robust Fairness under Covariate Shift.

Singh, H., Singh, R., Mhasawade, V., Chunara, R., 2021. Fairness Violations and Mitigation under Covariate Shift

Fogliato, R., Chouldechova, A., G’Sell, M., 2020. Fairness Evaluation in Presence of Biased Noisy Labels

Wang, S., Guo, W., Narasimhan, H., Cotter, A., Gupta, M., Jordan, M., 2020. Robust Optimization for Fairness with Noisy Protected Groups

Schrouff, J., Harris, N., Koyejo, O., Alabdulmohsin, I., Schnider, E., Opsahl-Ong, K., Brown, A., Roy, S., Mincu, D., Chen, C., Dieng, A., Liu, Y., Natarajan, V., Karthikesalingam, A., Heller, K., Chiappa, S., D'Amour, A., 2022. Diagnosing failures of fairness transfer across distribution shift in real-world medical settings.


Model Explainability:

Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence.

Caruana et al. Intelligible Models for Healthcare: Predicting Pneumonia Risk and Hospital 30-day Readmission. KDD 2015

Wei et al. Generalized Linear Rule Models. ICML 2019 

Guidotti et al. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR).

Lakkaraju et al. Faithful and customizable explanations of black box models. AIES 2019.

Ribeiro et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016

Chen et al. This looks like that: deep learning for interpretable image recognition. NeurIPS 2019

Gurumoorthy et al. Efficient Data Representation by Selecting Prototypes with Importance Weights. ICDM 2019 

Dhurandhar et al. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. NeurIPS 2018

Mothilal et al. Explaining machine learning classifiers through diverse counterfactual explanations. FAccT 2020

Liao & Varshney (2021). Human-centered Explainable AI (XAI): From Algorithms to User Experiences.

Liao et al. (2020). Questioning the AI: informing design practices for explainable AI user experiences. CHI 2020

Ehsan et al. (2021). Operationalizing human-centered perspectives in explainable AI. CHI 2021 EA

Algorithmic Fairness & Explainability:

Dodge et al. Explaining models: an empirical study of how explanations impact fairness judgment. IUI 2019

Aïvodji et al. Fairwashing: the risk of rationalization. ICML 2019

Anders et al. Fairwashing explanations with off-manifold detergent. ICML 2020

Schoeffer et al. (2022). On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making. arXiv

Bansal et al. Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance. CHI 2021

Zhang et al. Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. FAccT 2020

Green & Chen. Algorithmic risk assessments can alter human decision-making processes in high-stakes government contexts. CSCW 2021

Ustun et al. Actionable recourse in linear classification. FAccT 2019

Barocas et al. The hidden assumptions behind counterfactual explanations and principal reasons. FAccT 2020

Karimi et al. (2021). A survey of algorithmic recourse: contrastive explanations and consequential recommendations. ACM Computing Surveys (CSUR).

Dai et al. Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations. AIES 2022.

Balagopalan et al. The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations. FAccT 2022.

Szymanski et al. Visual, textual or hybrid: the effect of user expertise on different explanations. IUI 2021

Ghai et al. Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers. CSCW 2021

Liao & Varshney (2021). Human-centered Explainable AI (XAI): From Algorithms to User Experiences.

 Looking forward to seeing you!