International Workshop on Safe Reinforcement Learning
Introduction
Reinforcement learning (RL) is the dominant AI paradigm for learning interactively from the environment in which the AI agent is embedded. However, significant advances in model-free and deep reinforcement learning, particularly in game environments such as Atari and Go, rest on strong assumptions about the environment and do not translate to real-world systems where safety is at stake.
In many real-world applications, such as robotic systems or autonomous cars, the RL agent cannot learn directly from the environment in which it is embedded, since trial-and-error learning would come with dangerous side-effects. Beyond this problem of safe exploration, there is also a need for policies that meet certain desirability criteria, where some behaviours must be prevented at all costs. These and related issues are the primary concerns of the field of safe reinforcement learning.
To achieve safe reinforcement learning, researchers have recently taken a variety of angles. From an algorithmic point of view, safe RL researchers have brought together insights from constrained optimisation, robust optimisation, model-based reinforcement learning, formal methods, control theory, statistical hypothesis testing, and more. From a more societal point of view, researchers have investigated how to adjust the environment, how to restrict the autonomy of the agent, and how humans can intervene to keep RL safe.
To bring this diverse community of researchers together, we propose the Safe RL 2022 Workshop. The workshop is proposed for IJCAI 2022 (see https://ijcai-22.org/), since IJCAI brings together a large audience within the AI community with demonstrated interest in the fields of safety, reinforcement learning, robustness, and robotics in the wild. The workshop will be a one-day event combining invited talks and contributed talks covering various approaches to safe RL. There will be ample opportunities for researchers to interact with the speakers, discuss novel and exciting research, and establish new and fruitful collaborations.
Safe RL Workshop Topics
The goal of the workshop is to bring together researchers that are working on safe reinforcement learning systems, where safety is defined widely as avoiding self-harm, harm to the environment, significant financial or societal costs, and violations of social, ethical, or legal norms.
With this definition of safety in mind, we encourage submissions on the following topics:
Definitions of safety
Incorporating safety, social norms, and user preferences into RL policies
Safe exploration
Runtime verification and runtime enforcement
Satisfying safety constraints in non-stationary environments
Predicting safety constraint violations
Interventions to prevent failures when an RL agent is at risk with no safe options left
Simulation platforms and data sets for safe RL
Application use cases, demonstrations, or problem statements; we particularly welcome use cases in robotics and virtual applications.
Governmental policies or other aspects of the wider context for developing safety standards for artificial intelligence systems.
Organisers
David Bossens, University of Southampton d(dot)m(dot)bossens(at)soton(dot)ac(dot)uk
Stephen Giguere, University of Texas at Austin sgiguere9(at)gmail(dot)com
Roderick Bloem, TU Graz roderick(dot)bloem(at)iaik(dot)tugraz(dot)at
Bettina Koenighofer, TU Graz bettina(dot)koenighofer(at)iaik(dot)tugraz(dot)at
Invited Talks
Our programme involves invited talks from the following world-renowned researchers in Safe RL:
Yash Chandak, University of Massachusetts Amherst
Nils Jansen, Radboud University
Chih-Hong Cheng, Fraunhofer IKS
Sanjit Seshia, University of California Berkeley
Felix Berkenkamp, Bosch Center for AI
Ruzica Piskac, Yale University
For more details on the speakers and their talks, please see the programme page.
Important dates
May 13, 2022: Workshop Paper Due Date
June 3, 2022: Notification of Paper Acceptance
June 17, 2022: Camera-ready papers due
July 23, 2022: The Safe RL Workshop (one-day event)