Schedule

MORNING SESSION

08:50 AM -- Introduction and opening remarks. Alessandra Tosi, Alfredo Vellido and Mauricio Alvarez

09:10 AM -- Invited talk: Is interpretability and explainability enough for safe and reliable decision making? SUCHI SARIA

09:45 AM -- Invited talk: The Role of Explanation in Holding AIs Accountable. FINALE DOSHI-VELEZ

10:20 AM -- Contributed talk: Beyond Sparsity: Tree-based Regularization of Deep Models for Interpretability. Wu, Hughes, Parbhoo, Doshi-Velez

10:30 AM -- Coffee break

11:00 AM -- Invited talk: Challenges for transparency. ADRIAN WELLER

11:20 AM -- Contributed talk: Safe Policy Search with Gaussian Process Models. Polymenakos, Roberts

11:30 AM -- Poster spotlights:

              1. Network Analysis for Explanation. Kuwajima, Tanaka
              2. Using prototypes to improve convolutional networks interpretability. Drumond, Vieville, Alexandre
              3. Accelerated Primal-Dual Policy Optimization for Safe Reinforcement Learning. Liang, Que, Modiano
              4. Deep Reinforcement Learning for Sepsis Treatment. Raghu, Komorowski, Szolovits, Ghassemi, Celi, Ahmed
              5. Analyzing Feature Relevance for Linear Reject Option SVM using Relevance Intervals. Göpfert
              6. The Neural LASSO: Local Linear Sparsity for Interpretable Explanations. Ross, Lage, Doshi-Velez
              7. Detecting Bias in Black-Box Models Using Transparent Model Distillation. Tan, Caruana, Hooker, Lou
              8. Data masking for privacy-sensitive learning. Pham, Ghosh, Yegneswaran
              9. CLEAR-DR: Interpretable Computer Aided Diagnosis of Diabetic Retinopathy. Kumar, Taylor, Wong
              10. Manipulating and Measuring Model Interpretability. Poursabzi-Sangdeh, Goldstein, Hofman, Wortman Vaughan, Wallach

12:00 PM -- Poster session part I (all posters)


12:30 PM - 2:00 PM -- Lunch break


AFTERNOON SESSION

2:00 PM -- Invited talk: When the classifier doesn't know: optimum reject options for classification. BARBARA HAMMER

2:35 PM -- Contributed talk: Predict Responsibly: Increasing Fairness by Learning To Defer. Madras, Zemel, Pitassi

2:45 PM -- Contributed talk: Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks. Lanchantin, Singh, Wang, Qi

2:55 PM -- Announcement: BEST PAPER PRIZE

3:00 PM -- Coffee break and Poster session part II (all posters)

4:05 PM -- Invited talk: Robot Transparency as Optimal Control. ANCA DRAGAN

4:40 PM -- Panel discussion:

            • SUCHI SARIA - Assistant Professor, Johns Hopkins University
            • FINALE DOSHI-VELEZ - Assistant Professor of Computer Science, Harvard
            • ADRIAN WELLER - Computational and Biological Learning Lab, University of Cambridge, and Alan Turing Institute
            • BARBARA HAMMER - Professor at the CITEC Centre of Excellence, Bielefeld University
            • ANCA DRAGAN - Assistant Professor, UC Berkeley

5:20 PM -- Final remarks

5:30 PM -- End of the day