In sequential decision-making tasks, maximizing an objective represented by a cumulative reward function is, in some cases, not the only goal of reinforcement learning agents. For example, it may also be important for agents to avoid uncertain outcomes in order to protect themselves and their environment. Another example arises in human-agent interaction, where behavior that is expected, and thus not surprising to a human, is often desirable so that people feel comfortable and safe around the agent.
The goal of this workshop is to discuss risk and safety perspectives in reinforcement learning. Topics include, but are not limited to, risk awareness, safety, and robustness for: