Reading 3: FAccT AI & ML
Reading Theme
Machine learning (ML)-based AI systems, especially deep learning (DL) systems, have demonstrated remarkable learning capabilities. Unfortunately, it is inherently difficult for current AI systems to gain their users' trust. In response to this issue, many ML/DL researchers have focused on improving the Fairness, Accountability, and Transparency (FAccT) of ML/DL models in addition to their performance.
As a result, FAccT has recently become a fast-growing and important area of ML/DL research.
Objective
To develop research thinking in the field of Fair, Accountable, and Transparent (FAccT) machine learning.
References
Bishop, Christopher M. Pattern Recognition and Machine Learning, 2006.
Barocas, Solon, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning, 2018.
Molnar, Christoph. Interpretable Machine Learning, 2019.
Information
Date and Time: January 9-11, 2024, from 15:30 to 17:00
Place: IS 7階輪講室/7F Seminar Room
All speakers should share their presentations via this shared folder beforehand.
Outline
Day 1: Bias and Fairness; Interpretability and Transparency
Chapter 4 of Barocas (Speaker: Khine Myat Thwe)
Chapter 3 of Molnar (Speaker: Prabhat Parajuli)
Day 2: Interpretability and Transparency; Local Model-Agnostic Methods
Chapter 5 of Molnar (Speaker: Ziheng Zhong)
Section 9.2 of Molnar (Speaker: Suphawit Xu)
Day 3: Local Model-Agnostic Methods; Neural Network Interpretation
Section 9.5 of Molnar (Speaker: Suphawit Xu)
Section 9.6 of Molnar (Speaker: Suphawit Xu)
Chapter 10 of Molnar (Speaker: Chayanee Junplong)