Call for Papers

ICLR 2019 Workshop on Safe Machine Learning: Specification, Robustness and Assurance

Important dates

  • Submission deadline (extended from Friday February 22nd): Friday March 1st, midnight Anywhere on Earth (AoE).
  • Acceptance notification: Friday March 22nd, midnight AoE.
  • Camera-ready deadline: Friday April 19th, midnight AoE. The deanonymized camera-ready papers will then be made available on the website.
  • Workshop: Monday May 6th, 09:45–18:30 (New Orleans time), Room R6.

Poster instructions

Ideally, the poster should be in portrait orientation and of size Arch E: 36 x 48 inches, or 92 x 122 cm (width x height). You may also use similar or smaller sizes (for example, A0 in portrait or A1 in landscape).

We recommend printing your poster on lightweight, non-laminated paper. We will not have poster boards; posters will instead be taped to the wall, so it is better if the paper does not curl strongly.

Topics

We encourage all researchers to submit work that falls into one or more of the workshop's areas: specification, robustness, and/or assurance (see the accompanying blog post). Some example research topics within each area are:

  • Specification
    • Reward hacking: Reinforcement learning systems may behave in ways unintended by their designers because of discrepancies between the specified reward and the true intended reward. How can we design systems that do not exploit these misspecifications, or detect where the misspecifications lie? (Over 40 examples of specification gaming by AI systems can be found here: http://tinyurl.com/specification-gaming.)
    • Side effects: How can we give artificial agents an incentive to avoid unnecessary disruptions to their environment while pursuing the given objective? Can we do this in a way that generalizes across environments and tasks and does not introduce bad incentives for the agent in the process?
    • Fairness: ML is increasingly used in core societal domains such as health care, hiring, lending, and criminal risk assessment. How can we make sure that historical prejudices, cultural stereotypes, and existing demographic inequalities contained in the data, as well as sampling bias and collection issues, are not reflected in the systems?
  • Robustness
    • Adaptation: How can ML systems detect and adapt to changes in the environment (e.g. low overlap between train and test distributions, poor initial model assumptions, or shifts in the underlying prediction function)? How should an autonomous agent act when confronting radically new contexts, or identify that the context is new in the first place?
    • Verification: How can we scalably verify meaningful properties of ML systems? What role can and should verification play in ensuring robustness of ML systems?
    • Worst-case robustness: How can we train systems which never perform extremely poorly, even in the worst case? Given a trained system, can we ensure it never fails catastrophically, or bound this probability?
    • Safe exploration: Can we design reinforcement learning algorithms which never fail catastrophically, even at training time?
  • Assurance
    • Interpretability: How can we robustly determine whether a system is working as intended (i.e. is well specified and robust) before large-scale deployment, even when we do not have a formal specification of what it should do?
    • Monitoring: How can we monitor large-scale systems to identify whether they are performing well? What tools can help diagnose and fix any issues that are found?
    • Privacy: How can we ensure that trained systems do not reveal sensitive information about the individuals whose data appears in the training set?
    • Interruptibility: An artificial agent may learn to avoid interruptions by the human supervisor if such interruptions lead to receiving less reward. How can we ensure the system behaves safely even under the possibility of shutdown?

Instructions for submitting

Submission link: https://easychair.org/conferences/?conf=safeml2019.

The recommended paper length is 4 pages, but we can accept papers of up to 8 pages. Submissions may include supplementary material, but reviewers are not required to read beyond the first 8 pages. References do not count towards the 8-page limit and may take as many pages as necessary. Submissions should be in PDF format and use the ICLR style (use the relevant LaTeX style files).

The reviewing process is double-blind, so submissions should be anonymised and should not contain information that could identify the authors. If the authors' work has already been published in a journal, conference, or workshop, their submission should meaningfully extend that previous work. However, parallel submission (to a journal, conference, workshop, or preprint repository) is allowed.

If your paper is accepted, you will be invited to present a poster at the workshop. The authors of some accepted contributions will also be invited to give a talk. Accepted submissions will be posted on the workshop website, but there will be no formal published proceedings.

Visas: If you have submitted a paper (whether or not the deadline has passed) and you anticipate that obtaining a USA travel visa will take a long time, please contact us. We will try to find a solution, for example by fast-tracking the review of your work.