FAIRNESS & EXPLAINABILITY IN MACHINE LEARNING (Spring 2023)

Instructor:     Chih-Duo Hong
Lecture Hours:  Wed D56
Lecture Room:   260305
Office Hours:   By appointment
Contact:        chihduo@nccu.edu.tw

This course introduces selected topics in machine learning fairness, explainability, and safety, with a focus on approaches that provide provable correctness and quality guarantees. The course consists of three parts: lectures, group discussions, and paper presentations. In the lectures, selected research topics and results are introduced in a self-contained manner. In the group discussion sessions, students discuss selected scientific articles and case studies in groups. In the paper presentation sessions, students read and present recent technical papers. After taking this course, students will have a general knowledge of Trustworthy AI, as well as a deep understanding of specific techniques for practicing and researching formal AI fairness, explainability, and safety.

Schedule

Lectures

Week 01.  Course overview
Week 02.  Fairness metrics (I) (a worked sketch follows this list)
Week 03.  Fairness metrics (II)
Week 04.  Counterfactual fairness
Week 05.  Abductive and contrastive explanations (I)
Week 06.  Abductive and contrastive explanations (II)
Week 07.  Logical approaches to XAI (I)
Week 09.  Logical approaches to XAI (II)
Week 10.  Property inference
Week 11.  Anchor explanations
Week 12.  Counterfactual explanations (I)
Week 16.  Counterfactual explanations (II)
Week 17.  Adversarial robustness (I)
Week 18.  Adversarial robustness (II)
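
As a taste of Weeks 02 and 03, the sketch below computes two standard group fairness metrics from first principles: the demographic parity difference |P(Y_hat = 1 | A = 0) - P(Y_hat = 1 | A = 1)| and the equalized odds difference (the worst gap in true and false positive rates across the two groups). The toy data and variable names are invented for illustration; libraries such as Fairlearn provide production-grade versions of these metrics.

import numpy as np

def demographic_parity_diff(y_pred, group):
    # Selection-rate gap: |P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1)|.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_diff(y_true, y_pred, group):
    # Worst gap in TPR (computed on y=1) and FPR (y=0) between the groups.
    gaps = []
    for y in (0, 1):
        mask = y_true == y
        r0 = y_pred[mask & (group == 0)].mean()
        r1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

# Invented toy data: labels, predictions, and a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))      # 0.5
print(equalized_odds_diff(y_true, y_pred, group))  # ~0.67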

Presentation Paper List


From Contrastive to Abductive Explanations and Back Again (a brute-force sketch of this idea follows the list)
https://alexeyignatiev.github.io/assets/pdf/inams-aiia20-preprint.pdf

On Relating Explanations and Adversarial Examples
https://alexeyignatiev.github.io/assets/pdf/inms-nips19-preprint.pdf

Which Neural Network Makes More Explainable Decisions?
https://link.springer.com/article/10.1007/s10515-022-00338-w

Towards Formal Approximated Minimal Explanations of Neural Networks
https://arxiv.org/pdf/2210.13915.pdf

Finding Common Ground for Incoherent Horn Expressions
https://arxiv.org/pdf/2209.06455.pdf

DL2: Training and Querying Neural Networks with Logic
http://proceedings.mlr.press/v97/fischer19a/fischer19a.pdf

On the Explanatory Power of Boolean Decision Trees
https://arxiv.org/abs/2108.05266

Counterfactual Explanations without Opening the Black Box (a gradient-descent sketch of this idea follows the list)
https://arxiv.org/abs/1711.00399

Constraint-Driven Explanations for Black Box ML Models
https://ojs.aaai.org/index.php/AAAI/article/view/20805

Formalizing the Robustness of Counterfactual Explanations for Neural Networks
https://arxiv.org/pdf/2208.14878

Learning to Deceive with Attention-Based Explanations
https://arxiv.org/pdf/1909.07913

A Unifying and General Account of Fairness Measurement in Recommender Systems
https://www.sciencedirect.com/science/article/pii/S0306457322002163

DeepRED: Rule Extraction from Deep Neural Networks
https://link.springer.com/chapter/10.1007/978-3-319-46307-0_29

Formal Security Analysis of Neural Networks using Symbolic Intervals
https://arxiv.org/pdf/1804.10829

Using MaxSAT for Efficient Explanations of Tree Ensembles
https://alexeyignatiev.github.io/assets/pdf/iisms-aaai22-preprint.pdf

Explaining and Interpreting LSTMs
https://arxiv.org/pdf/1909.12114

A Moral Framework for Understanding Fair ML
https://arxiv.org/pdf/1809.03400

Inherent Trade-Offs in the Fair Determination of Risk Scores
https://arxiv.org/abs/1609.05807

Strategic Classification is Causal Modeling in Disguise
http://proceedings.mlr.press/v119/miller20b/miller20b.pdf

On Efficiently Explaining Graph-Based Classifiers
https://arxiv.org/pdf/2106.01350

Globally-Robust Neural Networks
https://arxiv.org/pdf/2102.08452

On Computing Probabilistic Explanations for Decision Trees
https://arxiv.org/pdf/2207.12213

The Bouncer Problem: Challenges to Remote Explainability
https://arxiv.org/abs/1910.01432

ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks
https://arxiv.org/abs/1907.10662

Identifying and Correcting Label Bias in Machine Learning
https://arxiv.org/abs/1901.04966
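
The abductive-explanation papers above (e.g. "From Contrastive to Abductive Explanations and Back Again") revolve around minimal sufficient reasons: a smallest set of feature values that, by itself, entails the model's prediction. The sketch below finds one by exhaustive search over a toy boolean classifier; the model is invented for illustration, and real implementations use SAT/SMT or MaxSAT oracles rather than enumeration.

from itertools import combinations, product

def entails(model, x, subset, n):
    # True iff fixing x's values on `subset` forces model(x') == model(x)
    # for every 0/1 completion of the remaining features.
    target = model(x)
    free = [i for i in range(n) if i not in subset]
    for values in product([0, 1], repeat=len(free)):
        x2 = list(x)
        for i, v in zip(free, values):
            x2[i] = v
        if model(x2) != target:
            return False
    return True

def abductive_explanation(model, x):
    # Smallest feature subset that alone is sufficient for the prediction,
    # found by exhaustive search over subsets of increasing size.
    n = len(x)
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if entails(model, x, set(subset), n):
                return subset
    return tuple(range(n))

# Invented toy model: approve iff (high income AND no prior default) OR guarantor.
model = lambda x: int((x[0] and not x[1]) or x[2])
print(abductive_explanation(model, [1, 0, 0]))   # (0, 1): features 0 and 1 suffice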
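
Similarly, the counterfactual explanations of Wachter et al. ("Counterfactual Explanations without Opening the Black Box") seek the closest input that changes the model's decision, roughly: minimize d(x, x') subject to f(x') reaching a target output. Below is a minimal gradient-descent sketch of that objective for a toy logistic model; the weights, hyperparameters, and data are illustrative assumptions, not the paper's experimental setup.

import numpy as np

# Invented toy logistic "credit" model; the weights are illustrative only.
w, b = np.array([1.5, -2.0]), -0.25
f = lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))   # P(approve)

def counterfactual(x, target=0.5, lam=10.0, lr=0.05, steps=2000):
    # Wachter-style objective: minimize lam * (f(x') - target)^2 + ||x' - x||^2,
    # here by plain gradient descent, to approximate the nearest input that
    # reaches the decision boundary.
    xp = x.copy()
    for _ in range(steps):
        p = f(xp)
        grad = lam * 2 * (p - target) * p * (1 - p) * w + 2 * (xp - x)
        xp -= lr * grad
    return xp

x = np.array([0.0, 1.0])   # a rejected applicant: f(x) is about 0.10
xp = counterfactual(x)
print(xp, f(xp))           # a nearby point with f(xp) close to 0.5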

Discussion Articles


Formalizing Fairness
https://cacm.acm.org/magazines/2022/8/262911-formalizing-fairness

Explainable AI: Opening the Black Box or Pandora’s Box?
https://cacm.acm.org/magazines/2022/4/259398-explainable-ai

Want a job? You'll Have to Convince Our AI Bot First
https://www.cbc.ca/news/business/recruitment-ai-tools-risk-bias-hidden-workers

Great Promise but Potential for Peril
https://news.harvard.edu/.../ethical-concerns-mount-as-ai-takes-bigger-decision-making/

Scientists Increasingly Can’t Explain How AI Works
https://www.vice.com/en/.../scientists-increasingly-cant-explain-how-ai-works

How Big Data is Unfair
https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de

Can AI’s Recommendations Be Less Insidious?
https://spectrum.ieee.org/recommendation-engine-insidious

Are You Still Using Real Data to Train Your AI?
https://spectrum.ieee.org/synthetic-data-ai

Racial Bias Found in Algorithms That Determine Health Care for Millions of Patients
https://spectrum.ieee.org/racial-bias-found-in-algorithms-that-determine-health-care

Engineering Bias Out of AI
https://spectrum.ieee.org/engineering-bias-out-of-ai

Moving Beyond "Algorithmic Bias Is a Data Problem"
https://www.cell.com/action/showPdf?pii=S2666-3899%2821%2900061-1

Are We Witnessing the Dawn of Post-theory Science?
https://www.theguardian.com/.../are-we-witnessing-the-dawn-of-post-theory-science

Machine Bias: Risk Assessments in Criminal Sentencing
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

What Happens When an Algorithm Cuts Your Health Care
https://www.theverge.com/.../healthcare-medicaid-algorithm-arkansas-cerebral-palsy

The Parable of Google Flu
https://gking.harvard.edu/files/gking/files/0314policyforumff.pdf

It’s Too Easy to Hide Bias in Deep-Learning Systems
https://spectrum.ieee.org/its-too-easy-to-hide-bias-in-deeplearning-systems

Conservatives Are Panicking About AI Bias, Think ChatGPT Has Gone 'Woke'
https://www.vice.com/.../conservatives-panicking-about-ai-bias

Crime-Prediction Tool May Be Reinforcing Discriminatory Policing
https://www.businessinsider.com/predictive-policing-discriminatory-police-crime

There’s an easy way to make lending fairer for women. Trouble is, it’s illegal.
https://www.technologyreview.com/.../theres-an-easy-way-to-make-lending-fairer-for-women/

The scary truth about AI copyright is nobody knows what will happen next
https://www.theverge.com/.../generative-ai-copyright-infringement-legal-fair-use-training-data

Why It’s So Damn Hard to Make AI Fair and Unbiased
https://www.vox.com/future-perfect/22916602/ai-bias-fairness-tradeoffs-artificial-intelligence

Artificial Disinformation: Can Chatbots Destroy Trust on the Internet?
https://medium.com/.../artificial-disinformation-can-chatbots-destroy-trust-on-the-internet

AI Doesn’t Have to Be This Way
https://spectrum.ieee.org/ai-skeptics

Protecting AI Models from “Data Poisoning”
https://spectrum.ieee.org/ai-cybersecurity-data-poisoning

Preventing AI From Divulging Its Own Secrets
https://spectrum.ieee.org/how-prevent-ai-power-usage-secrets

Further Reading

The following is a list of surveys and articles about AI explainability, safety, and fairness.

Explainable AI


Safe AI


Fair AI


Formal XAI


Opinions