Call for papers

Safe RL 2023 Topics

The goal of the workshop is to bring together researchers working on safe reinforcement learning systems, where safety is broadly defined as avoiding self-harm, harm to the environment, significant financial or societal costs, and violations of social, ethical, or legal norms.

With this notion of safety in mind, we encourage submissions in extended abstract style on the following topics: 

In terms of application areas, we are interested in aerospace, power systems, robotics, cyber-physical systems, safety-critical systems, and others. The call is open to submissions from a variety of disciplines relevant to safe RL, including but not limited to constrained optimisation, control theory, robust optimisation, human-robot interaction, formal methods, industrial robotics, and societal perspectives.

Submission guidelines

Submissions should be anonymous and use the IJCAI author kit (see https://www.ijcai.org/authors_kit). Each submitted paper should be at most 3 pages in the IJCAI double-column format. The 3-page limit includes references; for example, you can add half a page of references only if your document is two and a half pages long.

Paper submission will take place through EasyChair. Please go to https://easychair.org/my/conference?conf=saferl2023 and click on "make a new submission" to start your submission.

Authors are welcome to submit supplementary information with details on their implementation; however, reviewers are not required to consult this additional material when assessing the submission.

The workshop allows the submission of papers similar to papers concurrently submitted elsewhere, as the aim of the workshop is to get an overview of relevant ongoing work in Safe RL. However, authors must ensure that the other venue also permits such concurrent submission. Note that accepted papers will be showcased on the workshop website rather than published in formal proceedings, so this should generally not pose a problem.

Double blind review

Authors are required to submit their paper anonymously. To submit anonymously, all names and affiliations must be removed from the paper. This also involves removing any links to pages with personal identifiers (e.g., GitHub repositories).

Each submission will be reviewed by at least two reviewers, who will assess the submission based on relevance, novelty, impact, and technical soundness. Strong overlap with your own previously published work should be indicated, but does not disqualify a submission. To preserve anonymity, please do not include this information in the paper but instead write an email to one of the organisers with the title "Safe RL 2023 Overlap Note". There will be no rebuttal period.

Oral in-person presentation

For each accepted paper, at least one co-author should present it at the workshop. Like all IJCAI workshops this year, the Safe RL 2023 event is fully in-person, which means presentations must be given in person. If the number of submissions is large, only the highest-scoring accepted papers will be presented as contributed talks, while the remaining accepted papers will be presented during poster sessions.

To present, authors must register for the IJCAI 2023 conference.

Important dates

May 8, 2023: Workshop Paper Due Date

June 1, 2023: Notification of Paper Acceptance

June 15, 2023: Early registration deadline

July 1, 2023: Camera-ready papers due

August 21, 2023: The AISafety-SafeRL Joint Workshop, a one-day event in Macao