Workshop on Interpretable Policies in Reinforcement Learning @ RLC-2024

What distinguishes explainability from interpretability? Should the development of explainable and interpretable solutions be confined solely to domains where transparency is imperative, such as healthcare? What advantages do interpretable policies, such as decision trees or programs, offer over neural networks? How can we rigorously define and measure the degree of interpretability of a policy, independently of user studies? Do certain reinforcement learning paradigms, such as imitation learning or evolutionary methods, inherently lend themselves better to interpretability than others? Finally, how can we adapt Markov Decision Processes (MDPs) to facilitate the learning of interpretable policies, particularly through interpretable state space representations?

Key dates


Interpretable RL is growing; let's talk about it!

Organizing team

Contact: interppol.workshop@gmail.com

Hector Kohler - 2nd year PhD Student, Inria

Quentin Delfosse - Final year PhD Student, TU Darmstadt

Paul Festor - Final year PhD Student, Imperial College London

Philippe Preux - Professor, Inria