Welcome to this special course on Responsible AI! This is an "exploratory" course about what is missing from the education our students already receive on responsibility and AI. We aim to keep the class small and interactive so that we all learn as much as possible; downstream, we hope to turn this into a permanent course in the curriculum.
Our course will meet on Wednesdays at 13:00 in Building 116, room 049.
31.8: Welcome, and Fairness basics (Aasa)
Material:
Larrazabal et al., Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis, PNAS 2020
7.9: Fairness, paper discussion (Aasa). Building 324, room 161
Material:
Louppe et al., Learning to Pivot with Adversarial Networks, NeurIPS 2017
Kilbertus et al., Avoiding Discrimination through Causal Reasoning, NeurIPS 2017
Dwork et al., Fairness Through Awareness, ITCS 2012
Ananny & Crawford, Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability, New Media & Society 2018
Sara Hooker, Moving beyond “algorithmic bias is a data problem”, Patterns 2021
Sambasivan et al., Re-imagining Algorithmic Fairness in India and Beyond, FAccT 2021
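To make the fairness criteria in these readings concrete before the 14.9 case session, here is a minimal Python sketch (our own illustration, not code from any of the papers) of two standard group metrics, the demographic parity gap and the equalized odds gaps. It assumes binary predictions and a binary sensitive attribute; the function names and toy data are hypothetical.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gaps(y_true, y_pred, group):
    """Gaps in true-positive and false-positive rates between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = {}
    for label, name in [(1, "tpr_gap"), (0, "fpr_gap")]:
        mask = y_true == label
        gaps[name] = abs(y_pred[mask & (group == 0)].mean()
                         - y_pred[mask & (group == 1)].mean())
    return gaps

# Toy example: a classifier that flags group 1 more often than group 0.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 0, 1, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))       # 0.5
print(equalized_odds_gaps(y_true, y_pred, group))  # tpr_gap 0.0, fpr_gap 1.0
```

Checks like these are a natural starting point for the "analyze and mitigate bias" group work.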
14.9: Fairness cases (Group work) - Analyze and mitigate bias. Building 324, room 161
Material:
21.9: Philosophy 1: AI ethics (Siavash). Building 324, room 170
Material:
Part 1: read the article Unintended by Design: On the Political Uses of “Unintended Consequences”
Part 2: read a story or watch a movie (two items). An email will follow on Friday afternoon.
28.9: Philosophy 2 (Siavash). Building 324, room 161
Material:
Part 2: watch two video lectures. An email will follow on Friday afternoon.
5.10: XAI 1: Starting XAI project (Aasa). Building 324, room 261
Material:
For the project, please download the CheXpert dataset (downsampled version)
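As a starting point for the project, here is a minimal sketch of loading the labels. It assumes the downsampled release unpacks to a CheXpert-v1.0-small/ folder containing train.csv with a "Path" column, four metadata columns, and 14 observation columns; the path, column positions, and the uncertainty-handling policy shown are assumptions to adapt to your local copy.

```python
import pandas as pd

# Assumed layout: CheXpert-v1.0-small/train.csv, where observations are
# labelled 1.0 (positive), 0.0 (negative), -1.0 (uncertain), or blank.
df = pd.read_csv("CheXpert-v1.0-small/train.csv")

label_cols = df.columns[5:]  # observation columns (position is an assumption)

# One simple baseline policy ("U-zeros"): treat uncertain (-1) and missing
# labels as negative; the CheXpert paper discusses alternative policies.
labels = df[label_cols].fillna(0.0).replace(-1.0, 0.0)

print(df["Path"].head())
print(labels.sum().sort_values(ascending=False))  # positives per finding
```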
12.10: XAI 2: Saliency methods versus example- and prototype-based methods (Aasa). Building 324, room 161
Material:
Smilkov et al., SmoothGrad: Removing noise by adding noise, ICML Workshop on Visualization for Deep Learning 2017
Sundararajan et al., Axiomatic Attribution for Deep Networks, ICML 2017
Selvaraju et al., Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, ICCV 2017
Gu and Tresp, Saliency Methods for Explaining Adversarial Attacks, arXiv 2019
Ghassemi et al., The false hope of current approaches to explainable artificial intelligence in health care, The Lancet Digital Health 2021
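As preparation for the session, here is a minimal PyTorch sketch (our illustration under stated assumptions, not the authors' code) of vanilla gradient saliency and the SmoothGrad variant from Smilkov et al.; the model, input shape, and hyperparameters are placeholders.

```python
import torch

def gradient_saliency(model, x, target):
    """Vanilla saliency: |d logit_target / d input| for a batch of one image."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.abs().squeeze(0)

def smoothgrad(model, x, target, n_samples=25, noise_std=0.15):
    """SmoothGrad (Smilkov et al. 2017): average saliency over noisy copies."""
    total = torch.zeros_like(x.squeeze(0))
    for _ in range(n_samples):
        total += gradient_saliency(model, x + noise_std * torch.randn_like(x),
                                   target)
    return total / n_samples

# Usage with any image classifier, e.g. a pretrained torchvision ResNet:
# from torchvision.models import resnet18, ResNet18_Weights
# model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
# x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed image
# heatmap = smoothgrad(model, x, target=0)  # shape (3, 224, 224)
```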
19.10: Fall break
26.10: XAI 3: Example- and prototype-based methods (Aasa). Building 322, room 017
Material:
Chen et al., This Looks Like That: Deep Learning for Interpretable Image Recognition, NeurIPS 2019
Barnett et al., IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography, Nature Machine Intelligence 2021
Koh et al., Concept Bottleneck Models, ICML 2020
Margeloiu et al., Do Concept Bottleneck Models Learn as Intended?, ICLR workshop 2021
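To fix ideas before the case session, here is a minimal PyTorch sketch of a concept bottleneck model in the spirit of Koh et al.: the label head sees only the predicted concepts, so predictions can be inspected and intervened on at the concept level. All dimensions, names, and the joint-loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Minimal concept bottleneck (after Koh et al. 2020): x -> concepts -> label."""
    def __init__(self, in_dim=512, n_concepts=10, n_classes=2):
        super().__init__()
        self.concept_net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                         nn.Linear(128, n_concepts))
        self.label_net = nn.Linear(n_concepts, n_classes)  # sees concepts only

    def forward(self, x):
        concept_logits = self.concept_net(x)
        label_logits = self.label_net(torch.sigmoid(concept_logits))
        return concept_logits, label_logits

model = ConceptBottleneck()
x = torch.randn(4, 512)                      # stand-in for image features
c_logits, y_logits = model(x)

# Joint training: supervise both concepts and labels (equal weights, a choice).
c_true = torch.randint(0, 2, (4, 10)).float()
y_true = torch.randint(0, 2, (4,))
loss = (nn.functional.binary_cross_entropy_with_logits(c_logits, c_true)
        + nn.functional.cross_entropy(y_logits, y_true))
loss.backward()
```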
2.11: XAI cases (Group work). Building 324, room 170
9.11: Philosophy 3 (Siavash). Building 324, room 161
Material:
Part 1: read the article The Unreasonable Effectiveness of Mathematics in the Natural Sciences by Eugene Wigner
Part 2: watch two video lectures. Email will follow soon.
16.11: Privacy 1 (Siavash). This lecture will be online.
Material:
23.11: Privacy 2 (Siavash). Building 324, room 170 and Zoom
Material:
Shokri et al., Membership Inference Attacks Against Machine Learning Models, IEEE S&P 2017, and a survey of membership inference attacks
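To give a feel for the attack setting before the session, here is a minimal sketch; note this is a simplified loss-threshold baseline of our own, not the shadow-model attack from Shokri et al. Examples with unusually low loss are guessed to be training members; the model losses and threshold below are made-up placeholders.

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Guess 'member' when the model's loss on an example is below threshold.

    A simplified baseline; Shokri et al. instead train shadow models and an
    attack classifier on the target model's output probabilities.
    """
    return np.asarray(losses) < threshold

# Hypothetical per-example losses from some overfit classifier:
train_losses = np.array([0.05, 0.10, 0.02, 0.20])   # seen during training
test_losses  = np.array([0.90, 1.40, 0.30, 2.10])   # held out
guesses = loss_threshold_attack(np.concatenate([train_losses, test_losses]),
                                threshold=0.25)
truth = np.array([1] * 4 + [0] * 4, dtype=bool)
print("attack accuracy:", (guesses == truth).mean())  # 1.0 on this toy data
```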
30.11: Privacy 3 (Siavash). Building 324, room 170
Course type: Specialized course for Bachelor's, Master's, and PhD students with a technical background in deep learning.
Maximum number of students: 20
ECTS: 5
Schedule: 5B
Exam form: 50% group project assessment during the course, 50% 2-hour individual oral exam
Teachers: Siavash Bigdeli and Aasa Feragen
Sign-up: By email to Siavash Bigdeli (sarbi@dtu.dk) – please note the limited seats
Scope and form: The course will consist of lectures, discussions, and practical exercises. It will be structured around four topics, each starting with an introduction to established knowledge, moving on to reading and discussing state-of-the-art papers, and ending with a practical case implementation building on the methods covered.
General course objectives
In this course, students will be introduced to ethical challenges in AI and to tools for understanding and examining them. The main topics of the course are the paradigms and limitations of machine learning (epistemology), fairness and bias, explainable AI, and privacy. Current state-of-the-art topics and recent publications from relevant ML conferences and journals are selected and discussed in detail. Participants will implement prototypes of the presented algorithms and present their observations and results to the class.
Learning objectives
The aim of the course is to build and enhance know-how on four major topics in responsible AI:
a deep understanding of what AI means, which assumptions we commonly make, and which ones we should avoid when building AI responsibly,
an overview of the challenges and state of the art in fairness and bias when building and deploying AI,
hands-on experience with deep neural network interpretation and explainable AI,
hands-on experience with privacy-preserving models and training strategies.
Content
The course will consist of four parts:
The first part of the course will cover the ethics of AI, the epistemology of machine learning, model fitting vs. artificial intelligence, and Bayesian problems and limitations. Next, the fairness component will review classical algorithmic fairness algorithms, their limitations, and potential solutions. We will then learn about different paradigms of explainable AI, such as saliency- and prototype-based methods, their use cases, and their validation. Finally, we will study differential privacy and federated learning.
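As a taste of the privacy part, here is a worked sketch of the Laplace mechanism, the textbook building block of differential privacy, for a counting query with sensitivity 1; the query, values, and seed are made up for illustration.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """epsilon-DP Laplace mechanism: add Laplace(sensitivity / epsilon) noise."""
    return true_value + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
count = 42                     # e.g. "how many patients have condition X?"
for eps in (0.1, 1.0, 10.0):   # smaller epsilon: stronger privacy, more noise
    noisy = [laplace_mechanism(count, sensitivity=1.0, epsilon=eps, rng=rng)
             for _ in range(5)]
    print(eps, np.round(noisy, 1))
```

The same clip-and-add-noise idea, applied per gradient step, underlies the differentially private training methods we will discuss.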