Workshop on Interpretable Policies in Reinforcement Learning @ RLC-2024
LOCATION: Campus Center at Hotel UMass: https://maps.app.goo.gl/LY9xwmcUQHAcFu9a7 ; orals in Room 162, posters in Room 165.
What distinguishes explainability from interpretability?
Should we confine the development of explainable and interpretable solutions solely to domains where transparency is imperative, such as healthcare?
What advantages do interpretable policies, like decision trees or programs, offer over neural networks?
How can we rigorously define and measure the degree of interpretability of policies, independent of user studies?
Do certain reinforcement learning paradigms, like imitation learning or evolutionary methods, inherently lend themselves better to interpretability than others?
How can we enhance Markov Decision Processes (MDPs) to facilitate the learning of interpretable policies, particularly regarding interpretable state space representations?
Key dates
Submission deadline: May 23, 2024 AoE (see Call for Papers)
Acceptance notification: May 30, 2024
Workshop date: Aug. 9, 2024 (1 Campus Center Way, Amherst, MA 01002)
Interpretable RL is growing; let's talk about it!
Organizing team
Contact: interppol.workshop@gmail.com
Hector Kohler - 2nd year PhD Student, Inria
Quentin Delfosse - Final year PhD Student, TU Darmstadt
Paul Festor - Final year PhD Student, Imperial College London
Philippe Preux - Professor, Inria