Responsible AI
Special course spring 2023
Welcome to this special course on Responsible AI! This is the second edition of an "exploratory" course in which we try to cover aspects of responsibility currently missing from AI and machine learning curricula. We aim to keep the class small and interactive so that we all learn as much as possible. In the longer term, we hope to turn this into a permanent course in the curriculum.
Our course will meet on Wednesdays, 9-12; please check back later for the location.
Teachers:
Siavash Bigdeli (sarbi@dtu.dk)
Aasa Feragen (afhar@dtu.dk)
Tentative Schedule (check back for updates)
1.2: Welcome, and Philosophy 1: AI ethics (Siavash) Building 303B, room Matematicum (second floor).
Assignments:
Part 1: read the article: Unintended by Design: On the Political Uses of “Unintended Consequences”
Part 2: read the story: Hans Christian Andersen: The Shadow
8.2: Philosophy 2: AI (Siavash) Building 322, room 105
Assignments:
Part 1: read Digital Normativity: A Challenge for Human Subjectivation
Part 2: watch Noam Chomsky: Language, Cognition, and Deep Learning
Part 3:
Either read the story: Pearls of Wisdom: The Epistle of the Bird
Or watch the movie: Stalker (Andrey Tarkovsky, 1979)
15.2: Philosophy 3: epistemology (Siavash) Building 303, room 134 (Matematicum)
Assignments:
Part 1: read The Unreasonable Effectiveness of Mathematics in the Natural Sciences by Eugene Wigner
Part 2: watch
22.2: Fairness 1 (Aasa) Building 322, room 105
Material:
Larrazabal et al, Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis, PNAS 2020
1.3: Fairness 2 (Aasa) Building 303, room 134 (Matematicum)
Material:
Technical papers
Louppe et al, Learning to Pivot with Adversarial Networks, NeurIPS 2017
Kilbertus et al. Avoiding Discrimination through Causal Reasoning, NeurIPS 2017
Dwork et al, Fairness Through Awareness, ITCS 2012
Opinion papers
Ananny & Crawford, Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability, new media & society 2018
Sara Hooker, Moving beyond “algorithmic bias is a data problem”, Patterns 2021
Sambasivan et al, Re-imagining Algorithmic Fairness in India and Beyond, FAccT 2021
8.3 Fairness 3 (Aasa) Building 324, room 161
You will be working on your projects, and Aasa and/or Siavash will be available in building 324, room 170 in case you need help. You should feel free to sit and work in room 170 -- but also to just stop by when you need assistance.
15.3 XAI 1 (Aasa) Building 303, room 134 (Matematicum)
You will start reading and discussing in groups your first paper on Explainable AI:
Koh et al, Concept Bottleneck Models, ICML 2020
Moreover, as preparation for your XAI project, you will try to reproduce the results from the paper on the CUB dataset using the authors' code.
22.3 XAI 2 (Aasa) Building 322, room 105
Material:
29.3 XAI 3 (Aasa) Building 303, room 134 (Matematicum)
Material:
Papers:
Chen et al, This Looks Like That: Deep Learning for Interpretable Image Recognition, NeurIPS 2019
Singla et al: Explanation by Progressive Exaggeration, ICLR 2020
Optional: Wachter et al, Counterfactual Explanations Without Opening the Black Box, Harvard Journal of Law and Technology, 2017
(NB! This is a long read -- focus on the first part)
Margeloiu et al, Do Concept Bottleneck Models Learn as Intended? ICLR workshop 2021
Havasi et al, Addressing Leakage in Concept Bottleneck Models, NeurIPS 2022
5.4 Project work
12.4 Guest lecture by Marcello Pelillo Building 303, Auditorium 45 (9AM-11AM)
Lecture slides: Induction and Machine Learning
Important: Bring your project poster for demonstration after the lecture.
19.4 Privacy 1: Differential privacy (Siavash) Building 322, room 105
Material to read:
26.4 Privacy 2 (Siavash) Building 303, room 134 (Matematicum)
3.5 Generative AI and conclusion (Siavash and Aasa) Building 322, room 105
Material:
Read section IV. Normal Science as Puzzle-solving (pages 35-42): The Structure of Scientific Revolutions
Read the discussion here: What does AI Fairness look like in the world of GPT-4?
Course information
Course type: Specialized course for Bachelor, Master’s and PhD students with a technical background in deep learning.
Maximum number of students: 20
ECTS: 5
Schedule: 5A (9-12 Wednesdays)
Exam form: 50% group project assessment during the course, 50% 30-minute individual oral exam
Teachers: Siavash Bigdeli and Aasa Feragen
Sign-up: By email to Siavash Bigdeli (sarbi@dtu.dk) – please note the limited seats
Scope and form: The course will consist of lectures, discussions, and practical exercises. It is structured around four topics, each starting with an introduction to established knowledge, moving on to reading and discussing state-of-the-art papers, and finishing with a practical case implementation building on the methods covered.
General course objectives
In this course, students will be introduced to ethical challenges in AI and to tools for understanding and examining them. The main topics of the course are the paradigms and limitations of machine learning (epistemology), fairness and bias, explainable AI, and privacy. Current state-of-the-art topics and recent publications from relevant ML conferences and journals are selected and discussed in detail. Participants will implement prototypes of the presented algorithms and present their observations and results to the class.
Learning objectives
The aim of the course is to build and enhance know-how on four major topics in responsible AI:
a deep understanding of what AI means, which common assumptions we make, and which ones we should avoid when building AI responsibly,
an overview of the challenges and state of the art in fairness and bias when building and deploying AI,
hands-on experience with interpreting deep neural networks and explainable AI,
hands-on experience with privacy-preserving models and training strategies.
Content
The course will consist of four parts:
The first part of the course will cover the ethics of AI, the epistemology of machine learning, model-fitting vs. artificial intelligence, and Bayesian problems and limitations. Next, the fairness component will review classical algorithmic fairness methods, their limitations, and potential solutions. We will then learn about different paradigms of explainable AI, such as saliency and prototype-based methods, their use cases, and their validation. Finally, we will study differential privacy and federated learning.