Safety, Risk and Uncertainty in Reinforcement Learning

UAI 2018 Workshop - August 10th

In sequential decision-making tasks, maximizing an objective represented by a cumulative reward function is not always the sole goal of a reinforcement learning agent. For example, it may also be important for the agent to avoid uncertain outcomes in order to protect itself and its environment. Another example arises in human-agent interaction, where behaving in an expected, unsurprising way is often desirable so that people feel comfortable and safe around the agent.
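The risk-awareness theme can be illustrated with a small sketch (the policies and return distributions below are hypothetical, chosen only for illustration): two policies can have similar or even favorable expected returns while differing sharply under a risk measure such as Conditional Value-at-Risk (CVaR), which averages only the worst outcomes.

```python
import random

def cvar(returns, alpha=0.1):
    # Conditional Value-at-Risk: the mean of the worst alpha-fraction of returns.
    k = max(1, int(len(returns) * alpha))
    return sum(sorted(returns)[:k]) / k

def mean(xs):
    return sum(xs) / len(xs)

random.seed(0)

# Hypothetical episodic returns from two policies:
# "safe" has a modest mean and little variance;
# "risky" has a higher mean but a rare catastrophic outcome.
safe = [random.gauss(10.0, 1.0) for _ in range(1000)]
risky = [random.gauss(13.0, 1.0) if random.random() > 0.05 else -20.0
         for _ in range(1000)]

# The mean ranks "risky" higher, but CVaR exposes its heavy left tail.
print("safe : mean=%.2f  cvar=%.2f" % (mean(safe), cvar(safe)))
print("risky: mean=%.2f  cvar=%.2f" % (mean(risky), cvar(risky)))
```

A risk-sensitive agent that optimizes CVaR (or a mean-variance trade-off) instead of the plain expectation would prefer the safe policy here, despite its lower average return.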

The goal of this workshop is to discuss risk and safety perspectives in reinforcement learning. Topics include, but are not limited to, risk-awareness, safety, and robustness in:

      • exploration
      • model uncertainty (e.g. limited data)
      • environment uncertainty (e.g. noisy feedback)
      • hierarchical learning
      • transfer/meta learning
      • adversarial environments
      • human-machine interactions
      • multi-agent systems