2nd Safe Reinforcement Learning Workshop @ IJCAI 2023



Introduction 

Reinforcement learning (RL) is the dominant AI paradigm for learning interactively from the environment in which the agent is embedded. While model-free and deep reinforcement learning methods have made significant progress in simulated environments such as Atari and the game of Go, these advances rest on strong assumptions about the environment and the learning process. Applying RL to physically embodied systems in the real world is significantly more challenging: the world presents many unexpected events and is not forgiving -- one cannot rerun an experiment after a catastrophe. There is therefore a need for reinforcement learning systems that are robust to unexpected disturbances, avoid the dangerous side effects that come with trial and error, and satisfy given constraints. This need has given rise to the growing field of safe reinforcement learning, or Safe RL.

Safe RL has recently been approached from a variety of angles. From an algorithmic point of view, Safe RL researchers have brought together insights from constrained optimisation, robust optimisation, model-based reinforcement learning, formal methods, control theory and dynamical systems, off-policy evaluation, and related areas. From a more societal and practical perspective, others have investigated how to adjust the environment, how to restrict the autonomy of the agent, and how humans can intervene. In this sense, Safe RL has been applied in a range of domains, including navigation, power systems, game playing, recommendation, and many others.

To bring the diverse community of Safe RL researchers together, and to give them an opportunity to discuss fundamental algorithmic as well as practical insights into safe reinforcement learning, we propose the Safe RL 2023 Workshop. The workshop is proposed for IJCAI 2023, building on the success of the previous Safe RL workshop @ IJCAI 2022 (see https://sites.google.com/view/safe-rl-2022). The proposed format is similar to last year's: a combination of invited and contributed talks, with opportunities for researchers to interact with the speakers, discuss novel and exciting research, and establish new and fruitful collaborations. We had a great experience last time, and continuing the workshop series will help establish a research community around safe reinforcement learning at the IJCAI venue.

This year the Safe RL workshop merges with the AI Safety workshop, an IJCAI tradition since 2019, which provides speakers across a broader set of topics in AI safety. The Safe RL portion of the programme features two invited talks and four contributed talks.

Safe RL 2023 Topics

The goal of the workshop is to bring together researchers that are working on safe reinforcement learning systems, where safety is defined widely as avoiding self-harm, harm to the environment, significant financial or societal costs, and violations of social, ethical, or legal norms.

With this notion of safety in mind, we encourage submissions in extended-abstract style on the following topics:

In terms of application areas, we are interested in aerospace, power systems, robotics, cyber-physical systems, safety-critical systems, and others. The call is open to submissions from a variety of disciplines relevant to safe RL, including but not limited to constrained optimisation, control theory, robust optimisation, human-robot interaction, formal methods, industrial robotics, and societal perspectives.

Organisers:

David Bossens

University of Southampton 

davidmbossens(at)gmail(dot)com




Bettina Koenighofer

TU Graz

bettina(dot)koenighofer(at)iaik(dot)tugraz(dot)at



Sebastian Tschiatschek

University of Vienna

sebastian(dot)tschiatschek(at)univie(dot)ac(dot)at



Anqi Liu

Johns Hopkins University

aliu(at)cs(dot)jhu(dot)edu



Invited Talks

The Safe RL part of the merged workshop will include two invited talks from world-renowned researchers in Safe RL.

Yanan Sui

Associate Professor at Tsinghua University

Thiago Simao


Important dates

May 8, 2023: Workshop Paper Due Date

June 1, 2023: Notification of Paper Acceptance

June 15, 2023: Early registration deadline

July 1, 2023: Camera-ready papers due

August 21, 2023: The AISafety-SafeRL Joint Workshop, held as a one-day event in Macao.