When autonomous systems such as self-driving cars and robotic manipulators are deployed in real-world environments, it is of the utmost importance to consider, and ideally to guarantee, safe runtime operation. Since these systems often operate in highly uncertain and dynamic environments, it is crucial for them to model and quantify environmental uncertainty, understand its impact on system dynamics, predict the motion of other agents, and make safe, risk-aware decisions. Safety and robustness have been studied extensively from a theoretical perspective, and there are several prominent success stories in application, e.g., in aviation. However, techniques with strong theoretical safety properties have yet to penetrate many new and exciting robotic application areas, such as autonomous driving, in which uncertainty in environmental perception and prediction overwhelms traditional safety analysis.
This workshop aims to bring these challenges into focus.
Topics: Modeling uncertainty, safe motion planning, collision avoidance, decision-making in dynamic environments, intent prediction, safe exploration, safety and risk analysis, etc.
Tools: Optimal control, robust control, probability theory, Bayesian inference, POMDPs, etc.
Sunday Jan 23rd, Building 101, Room 00036
Melanie Zeilinger (ETH Zurich): Learning-based control, model-predictive control, optimization
George Pappas (U Penn): Verification of hybrid systems, semantic SLAM, multi-robot systems
Russ Tedrake (MIT): Robotics, optimization, motion planning, control
Dorsa Sadigh (Stanford): Human-robot interaction, robotics, control theory, formal methods
Morteza Lahijanian (CU Boulder): Formal methods, control theory
Christian Pek (TU Munich): Robot motion planning, formal verification of robotic systems