Reading 1: FAccT AI & ML
Reading Theme
Machine learning (ML)-based AI systems, especially deep learning (DL) models, have demonstrated remarkable learning capabilities. Unfortunately, current AI systems are inherently difficult for users to trust. In response to this issue, many ML/DL researchers have been focusing on improving the Fairness, Accountability, and Transparency (FAccT) of ML/DL models in addition to their performance.
As a result, FAccT has recently become a fast-growing and important area of ML/DL research.
Objective
To develop research thinking in the field of Fair, Accountable, and Transparent (FAccT) machine learning.
Information
Date: February 20 (Monday), 21 (Tuesday), 24 (Friday)
Time: 16:00 - 17:30 Japan time
Place: Seminar room on the 7th floor (I-75)
All speakers should share their presentations via this shared folder beforehand.
Outline
Day 1 (Interpretability): February 20
(Wuttichai Vijitkunsawat) Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI), IEEE Access 2018
(Teeradaj Racharak) "Why should I trust you?" Explaining the predictions of any classifier, SIGKDD 2016
(Xiguang Li) Model agnostic supervised local explanations, NeurIPS 2018
Day 2 (Interpreting Neural Networks): February 21
(Jianan Chen) GNNExplainer: Generating explanations for graph neural networks, NeurIPS 2019
(Khadija Meghraoui) Learning how to explain neural networks: PatternNet and PatternAttribution, ICLR 2018
(Haowei Cheng) Towards robust interpretability with self-explaining neural networks, NeurIPS 2018
Day 3 (Fair ML and More): February 24
(Surawat Pothong) Gender Bias in Contextualized Word Embeddings, NAACL 2019
(Khine Myat Thwe) Men also like shopping: Reducing gender bias amplification using corpus-level constraints, EMNLP 2017