Detailed information

Title: Impact of Legal Requirements on Explainability in Machine Learning


Abstract: Deep learning and other black-box models are becoming increasingly popular. Despite their high performance, they may not be ethically or legally acceptable because of their lack of explainability. This paper presents the growing number of legal requirements on machine learning model interpretability and explainability in the context of private and public decision making. It then explains how those legal requirements can be implemented in machine learning models and concludes with a call for more interdisciplinary research on explainability.


Bio:

  • Adrien Bibal: Adrien Bibal is a Ph.D. student at the University of Namur (Belgium) under the supervision of Professor Benoît Frénay. He received an M.S. degree in Computer Science and an M.A. degree in Philosophy from the Université catholique de Louvain (Belgium) in 2013 and 2015, respectively. His Ph.D. thesis in machine learning is on the interpretability of dimensionality reduction models.

  • Michael Lognoul: Michael Lognoul is a Ph.D. student in law at the University of Namur (Belgium). He specializes in ICT law, and his Ph.D. thesis is on the explainability of AI.