IROS 2022 Tutorial on
Open and Trustworthy Deep Learning for Robotics
Speaker: Prof. Xiaowei Huang
Abstract: TBD
Biography: Professor Xiaowei Huang is the Director of the Trustworthy Autonomous Cyber-Physical Systems lab at the University of Liverpool, UK. His research concerns the development of automated verification techniques that ensure the correctness and reliability of intelligent systems. He leads the research direction on the verification and validation of deep neural networks, and is the author of the book "Machine Learning Safety and Security", to be published by Springer in 2022. He has published 80+ papers, most of which appear in top conferences and journals in areas such as Artificial Intelligence, Formal Methods, Software Engineering, and Robotics. He has co-chaired the AAAI and IJCAI workshop series on Artificial Intelligence Safety since 2019. He is the PI or co-PI of several Dstl (Ministry of Defence, UK), EPSRC, and EU H2020 projects.
Speakers: Prof. Anastasios Tefas, Prof. Alexandros Iosifidis and Dr. Nikolaos Passalis
Abstract: Deep Learning has provided powerful tools for a wide range of robotic perception applications, from object detection and face recognition to activity and emotion recognition. However, using existing DL tools in robotics comes with several challenges: DL models are often too resource-intensive to deploy on embedded hardware; models are designed for a static perception paradigm rather than exploiting the active perception capabilities of most robotic systems; and the fragmentation of DL development, together with the lack of an open and interoperable ecosystem, slows down integration and debugging. This presentation will focus on how DL tools can be used for various robot perception tasks, ranging from environmental perception to human-centric perception, paying particular attention to the aforementioned challenges and discussing potential solutions as well as current research directions. Furthermore, selected tools included in the Open Deep Learning Toolkit for Robotics (OpenDR) will be showcased, including object detection, person recognition, pose estimation, online video classification, and skeleton-based human action recognition. These demonstrate a case study of an open ecosystem for DL development in robotics and show how tools can be optimised and adjusted for the embedded platforms typically used in robotic systems.
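One standard way to make a DL model fit on embedded hardware, as the abstract alludes to, is post-training weight quantization. The sketch below is a minimal, self-contained numpy illustration of symmetric int8 quantization of one layer's weights (not OpenDR's actual optimisation pipeline; all names and sizes are illustrative):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 128)).astype(np.float32)  # one layer's weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; the rounding error is at most scale/2
print(w.nbytes, q.nbytes, float(np.abs(w - w_hat).max()))
```

Real deployments would use a framework's quantization toolchain (often with calibration data and per-channel scales), but the space/accuracy trade-off is exactly the one shown here.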
Speakers: Prof. Abhinav Valada and Daniel Honerkamp
Abstract: Mobile manipulation remains a critical challenge in both service and industrial settings and is a key component of visions such as household assistants. Achieving it requires combining a wide range of capabilities, such as perception and exploration in unknown environments, while controlling large, continuous action spaces for simultaneous navigation and manipulation. In this tutorial we will first provide an overview of the main challenges and current benchmarks. We will then summarize and walk through the pipelines of current state-of-the-art approaches. In particular, we will cover both high-level task learning and low-level motion execution on robotic agents. Lastly, we will discuss potential paths forward and ways to integrate these low- and high-level components.
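The high-level/low-level split mentioned in the abstract can be sketched in a few lines: a high-level component proposes subgoals and a low-level controller tracks them. This is a toy numpy stand-in (a straight-line waypoint generator plus a proportional velocity controller), not any speaker's actual pipeline; all function names and gains are illustrative:

```python
import numpy as np

def high_level_plan(start, goal, step=0.5):
    """Toy high-level component: straight-line subgoals toward the goal.
    (Stands in for a learned task policy proposing subgoals.)"""
    delta = goal - start
    n = int(np.ceil(np.linalg.norm(delta) / step))
    return [start + delta * (i + 1) / n for i in range(n)]

def low_level_step(pos, subgoal, gain=0.3, max_vel=0.2):
    """Toy low-level controller: proportional velocity command, clipped."""
    vel = gain * (subgoal - pos)
    speed = np.linalg.norm(vel)
    if speed > max_vel:
        vel *= max_vel / speed
    return pos + vel

start, goal = np.zeros(2), np.array([2.0, 1.0])
pos = start.copy()
for subgoal in high_level_plan(start, goal):
    for _ in range(100):                       # inner control loop
        pos = low_level_step(pos, subgoal)
        if np.linalg.norm(subgoal - pos) < 1e-3:
            break
print(pos)  # the base ends up at the goal
```

In actual mobile-manipulation systems both layers are far richer (the high level reasons over task structure and exploration, the low level handles whole-body motion), but the interface between them is often exactly this subgoal hand-off.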
Speaker: Prof. Erdal Kayacan
Abstract: Deep learning can enhance the autonomous navigation capabilities of mobile robots, especially small aerial vehicles, given their limited size and computational power. In this tutorial, we will first focus on safe obstacle avoidance in dense environments and present a deep reinforcement learning-based end-to-end planning method for the safe navigation of quadrotors. Second, we will introduce the autonomous drone racing problem and explain how deep learning can improve the robustness of the robot's perception and navigation; in particular, we will present our approach to designing an efficient and robust convolutional neural network for a racing quadrotor platform. Finally, we will discuss upcoming challenges and future work in deep learning-based aerial robot navigation.
Speakers: Prof. Robert Babuska and Prof. Jens Kober
Abstract: Deep learning can be employed for robot control in many different forms, ranging from integration into more traditional control approaches (e.g., replacing slow-to-compute functions with faster learned functions, or using learned models for components that are hard to model analytically) to purely neural-network-based end-to-end control (i.e., going directly from raw perception to motor actions). In this tutorial we will cover some basic principles of deep learning for robot control as well as recent advances, with a focus on deep reinforcement learning. Finally, we will present some tools that enable the integration of these concepts in robotic applications.
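The first form mentioned above, replacing a slow-to-compute function with a faster learned one, amounts to fitting a small network to input/output samples of the expensive computation and querying the network at control time. A minimal, self-contained numpy sketch (the target function, network size, and learning rate are all illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Slow" reference function to be replaced by a fast learned surrogate
# (a stand-in for, e.g., an expensive model-based computation).
def slow_fn(x):
    return np.sin(3 * x) * np.exp(-x ** 2)

x = np.linspace(-2, 2, 256).reshape(-1, 1)   # sampled offline
y = slow_fn(x)

# Tiny one-hidden-layer MLP trained with full-batch gradient descent.
H, lr = 32, 0.05
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

for _ in range(10000):
    h = np.tanh(x @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - y                       # gradient of 0.5 * MSE w.r.t. pred
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# At control time the surrogate is a couple of small matrix products,
# regardless of how slow the original function was.
mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse)
```

The same recipe applies when the expensive computation is an optimisation-based controller or a physics model: collect input/output pairs offline, fit the surrogate, and validate its accuracy over the operating range before deploying it in the loop.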