This workshop covers current research topics in reinforcement learning and causality, with a particular focus on questions at the interface of these research areas. This year, questions on the role of large pretrained models in this context are also of particular interest.
The ELLIS UnConference workshop is co-located with EurIPS Copenhagen and will take place on December 2, 2025 in Auditorium 11.
Time Session
8:00 - 9:00 ELLIS Unconference Registration
9:00 - 9:30 Yevgeny Seldin "Best-of-both-worlds: Algorithms that thrive in stochastic and adversarial environments"
9:30 - 9:50 Lorenzo Croissant "Bandit Optimal Transport"
9:50 - 10:10 Hamish Flynn "Sub-linear regret bounds for posterior sampling reinforcement learning with Gaussian processes"
10:10 - 10:30 Klaus-Rudolf Kladny "Aligning Generative Models with Reality"
10:30 - 11:00 Coffee break
11:00 - 11:30 Michael Muehlebach "On the hardness of learning in dynamical systems"
11:30 - 12:00 Giorgia Ramponi "Multi-agent reinforcement learning without rewards"
12:00 - 12:30 Christoph Lampert "Differentiable Weightless Controllers"
12:30 - 13:30 Lunch
13:30 - 13:40 Peter Auer "Improved Best-of-Both-Worlds Regret for Bandits with Delayed Feedback"
13:50 - 14:10 Lukas Schäfer "Exploiting State and Action Uncertainty for Imitation Learning using Inverse Dynamics Models"
14:10 - 14:30 Sadegh Talebi "Sample complexity of offline RL in regular decision processes"
14:40 - 15:00 Gergely Neu "Inverse Q-Learning Done Right: Offline Imitation Learning in Qπ-Realizable MDPs"
15:00 - 15:30 Coffee break
15:30 - 16:00 ELLIS Unconference Welcome Remarks in room D1-D2
16:00 - 20:00 ELLIS Unconference Poster Session and Reception
Affiliations represented: University of Leoben, ENSAE, UPF Barcelona, Max Planck Institute for Intelligent Systems, ISTA, University of Zurich, Microsoft Research, University of Copenhagen, University of Milan, ETH Zürich