Demos

Demo Sessions

Demo sessions run in parallel with the poster sessions; for example, Demo Session 1 runs at the same time as Poster Session 1. Each one-hour demo session features three demos in the West Corridor.

Demo 1

TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo

Contributors: Lukas Koestler (Technical University of Munich), Nan Yang (Technical University of Munich & Artisense), Niclas Zeller (Karlsruhe University of Applied Sciences & Artisense), Daniel Cremers (Technical University of Munich & Artisense)

Abstract: Real-time demonstration of TANDEM, a monocular tracking and dense mapping framework. TANDEM performs direct sparse visual odometry (VO), while dense depth maps are predicted by the novel Cascade View-Aggregation MVSNet (CVA-MVSNet), which is able to utilize the entire active keyframe window. The predicted depth maps are fused into a consistent global map represented as a TSDF voxel grid, and this TSDF model is in turn used by the VO front-end to improve tracking accuracy and robustness.
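
As a rough illustration of the fusion step only, the following NumPy sketch (not the authors' implementation; it ignores the VO front-end and CVA-MVSNet entirely) integrates one synthetic depth map into a small TSDF voxel grid with a weighted running average. The intrinsics, grid size, and truncation distance are assumed values.

```python
# Minimal sketch (not the authors' code): fusing one depth map into a TSDF
# voxel grid, the global map representation described in the abstract.
# Camera intrinsics, grid size, and truncation distance below are made up.
import numpy as np

VOXEL_SIZE = 0.05          # metres per voxel (assumed)
TRUNC = 3 * VOXEL_SIZE     # truncation band of the signed distance
GRID = 64                  # 64^3 voxel grid centred in front of the camera

# Hypothetical pinhole intrinsics for a 160x120 depth map.
FX = FY = 100.0
CX, CY = 80.0, 60.0

def integrate(tsdf, weight, depth, cam_T_world):
    """Update the TSDF and weight volumes with one depth map (weighted average)."""
    # World coordinates of every voxel centre.
    idx = np.indices((GRID, GRID, GRID)).reshape(3, -1).T
    pts_w = (idx - GRID / 2) * VOXEL_SIZE + np.array([0.0, 0.0, 2.0])
    # Transform into the camera frame and project with the pinhole model.
    pts_c = (cam_T_world[:3, :3] @ pts_w.T + cam_T_world[:3, 3:4]).T
    z = pts_c[:, 2]
    u = np.round(FX * pts_c[:, 0] / z + CX).astype(int)
    v = np.round(FY * pts_c[:, 1] / z + CY).astype(int)
    ok = (z > 0) & (u >= 0) & (u < depth.shape[1]) & (v >= 0) & (v < depth.shape[0])
    # Signed distance along the ray, truncated to [-1, 1] in units of TRUNC.
    sdf = np.full(len(z), np.nan)
    sdf[ok] = (depth[v[ok], u[ok]] - z[ok]) / TRUNC
    upd = ok & (sdf > -1.0)                 # do not update far behind the surface
    sdf = np.clip(sdf, -1.0, 1.0)
    flat_t, flat_w = tsdf.reshape(-1), weight.reshape(-1)
    flat_t[upd] = (flat_w[upd] * flat_t[upd] + sdf[upd]) / (flat_w[upd] + 1.0)
    flat_w[upd] += 1.0

tsdf = np.ones((GRID, GRID, GRID), np.float32)
weight = np.zeros_like(tsdf)
depth = np.full((120, 160), 2.0, np.float32)     # synthetic flat wall at 2.0 m
integrate(tsdf, weight, depth, np.eye(4))
print("voxels updated:", int((weight > 0).sum()))
```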

Based on paper: “TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo”, Koestler et al., CoRL 2021.

Demo Sessions: I and VIII

Demo 2

RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control

Contributors: Siddhant Gangapurwala, Mathieu Geisert, Romeo Orsolino, Maurice Fallon, Ioannis Havoutis. All contributors are with the Oxford Robotics Institute.

Abstract: We will demonstrate our reinforcement learning-based controller for robust locomotion over mobility challenges on our ANYmal C quadruped robot. The approach combines model-based and data-driven methods for quadrupedal planning and control, using on-board proprioceptive and exteroceptive feedback to map sensory information and desired velocity commands into footstep plans. This is achieved with an RL policy trained in simulation over a wide range of procedurally generated terrains. We will show how the ANYmal C robot is able to walk and robustly overcome steps and inclines while following joystick input from the user.
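
To make the interface concrete, here is a minimal stand-in sketch (not the authors' code) of one control tick: proprioception, a local terrain heightmap, and a joystick velocity command go into a policy that outputs foothold targets for the four feet, which a model-based controller would then track. The observation sizes, the random "policy" weights, and the output scaling are all invented for illustration.

```python
# Rough interface sketch (not the authors' code) of an RLOC-style control tick:
# a learned policy maps proprioception, a heightmap around the robot, and a
# desired base velocity to footstep targets that a model-based tracker executes.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((12, 64)) * 0.1     # stand-in for trained policy weights
b = np.zeros(12)

def footstep_policy(proprio, heightmap, vel_cmd):
    """Return (x, y, z) foothold offsets for 4 feet from the concatenated observation."""
    obs = np.concatenate([proprio, heightmap.ravel(), vel_cmd])
    obs = np.pad(obs, (0, max(0, 64 - obs.size)))[:64]   # fixed-size input (assumed)
    return np.tanh(W @ obs + b).reshape(4, 3) * 0.15     # +/- 15 cm offsets (assumed)

# One tick: joint states + local elevation samples + joystick command in,
# foothold targets out; a whole-body / optimal controller would track these.
proprio = np.zeros(24)                       # e.g. joint positions and velocities
heightmap = np.zeros((5, 5))                 # local elevation samples (assumed 5x5)
vel_cmd = np.array([0.5, 0.0, 0.0])          # desired forward velocity in m/s
print(footstep_policy(proprio, heightmap, vel_cmd))
```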

Based on paper: "RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control," Gangapurwala et al., 2020.

Demo Sessions: I, II, and III

Demo 3

iMAP: Implicit Mapping and Positioning in Real-Time

Contributors: Edgar Sucar (Imperial College London), Shuaifeng Zhi (Imperial College London), Shikun Liu (Imperial College London), Joseph Ortiz (Imperial College London), Andre Mouton (Dyson), Iain Haughton (Dyson), Tristan Laidlow (Imperial College London), Andrew Davison (Imperial College London)

Abstract: We will demonstrate iMAP (from the ICCV 2021 paper of the same name), the first SLAM system to use a neural implicit scene representation. An MLP is trained in live operation, without prior data, building a dense, scene-specific implicit 3D model of occupancy and colour that is also used immediately for tracking. We will also show new developments in leveraging iMAP’s efficient scene representation for interactive semantic scene labelling.
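
The sketch below (not the authors' code) is a toy illustration of the underlying idea: a small MLP maps a 3D point to colour and occupancy, and the same network would both be optimised from live keyframes and queried to render depth for tracking. The encoding, layer sizes, weights, and the simple ray-rendering rule are all assumptions.

```python
# Toy sketch (not the authors' code) of an implicit scene MLP: 3D point in,
# colour and occupancy out, queried along rays to render depth.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 16))             # random Fourier feature frequencies (assumed)
W1 = rng.standard_normal((32, 128)) * 0.1    # untrained stand-in weights
W2 = rng.standard_normal((128, 4)) * 0.1     # outputs: RGB + occupancy logit

def scene_mlp(points):
    """points: (N, 3) world coordinates -> (N, 3) colour in [0, 1], (N,) occupancy."""
    feats = np.concatenate([np.sin(points @ B), np.cos(points @ B)], axis=1)
    hidden = np.maximum(feats @ W1, 0.0)      # single ReLU layer
    out = hidden @ W2
    colour = 1.0 / (1.0 + np.exp(-out[:, :3]))
    occupancy = 1.0 / (1.0 + np.exp(-out[:, 3]))
    return colour, occupancy

def render_depth(origin, direction, t_near=0.1, t_far=4.0, n=32):
    """Expected depth along one ray from occupancy samples (volumetric-style)."""
    t = np.linspace(t_near, t_far, n)
    _, occ = scene_mlp(origin + t[:, None] * direction)
    w = occ * np.cumprod(np.concatenate([[1.0], 1.0 - occ[:-1]]))   # hit probabilities
    return float((w * t).sum())

# In the live system both the MLP weights and the camera pose would be optimised
# against depth and colour from selected keyframes; here we only query the model.
print(render_depth(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```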

Based on paper: “iMAP: Implicit Mapping and Positioning in Real-Time”, Sucar et al., ICCV 2021.

Demo Sessions: I and VIII

Demo 4

Trust Your Robots! Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes

Contributors: Jongseok Lee, Jianxiang Feng, Matthias Humt, Marcus G. Müller and Rudolph Triebel.

All authors are with the German Aerospace Center (DLR). Jongseok Lee is also affiliated with Karlsruhe Institute of Technology (KIT).

Abstract: We demonstrate a real-time probabilistic object detection pipeline, which returns uncertainty estimates for the predictions of deep neural networks. The method is based on our CoRL contribution, “Trust Your Robots! Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes”. The key technology is a sparse Gaussian Process with the so-called Neural Tangent Kernel, which can provide uncertainty estimates for neural network predictions in closed form. In our live demonstration, we show that an object detector can not only “know the known” objects but also “know the unknown” objects, by providing confidence measures for both object classes and their 2D locations in an image.
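
As a toy illustration of the closed-form uncertainty (not the authors' code), the sketch below computes the posterior variance of a GP whose kernel is an empirical NTK, k(x, x') = J(x)·J(x'). For a model that is linear in its parameters the Jacobian J is just the feature vector; a real detector would obtain it via automatic differentiation and use sparse inducing points, both of which are omitted here.

```python
# Illustrative sketch (not the authors' code): closed-form GP predictive variance
# with an empirical NTK kernel for a toy model that is linear in its parameters.
import numpy as np

rng = np.random.default_rng(0)
Wf = rng.standard_normal((1, 32))            # fixed random features: phi(x) = cos(Wf x + bf)
bf = rng.uniform(0, 2 * np.pi, 32)
NOISE = 1e-2                                 # observation noise variance (assumed)

def jacobian(x):
    """Parameter Jacobian of f(x; theta) = theta . phi(x), which is just phi(x)."""
    return np.cos(np.atleast_2d(x).T @ Wf + bf)

def ntk(xa, xb):
    return jacobian(xa) @ jacobian(xb).T

def predictive_variance(x_train, x_query):
    """Closed-form GP posterior variance at x_query given the training inputs."""
    K = ntk(x_train, x_train) + NOISE * np.eye(len(x_train))
    k_star = ntk(x_query, x_train)
    k_ss = ntk(x_query, x_query)
    return np.diag(k_ss - k_star @ np.linalg.solve(K, k_star.T))

x_train = np.linspace(-1.0, 1.0, 20)          # "known" region seen during training
x_query = np.array([0.0, 3.0])                # in-distribution vs. far-away input
print(predictive_variance(x_train, x_query))  # variance is typically larger at x = 3.0
```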

Based on paper: “Trust Your Robots! Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes”, Lee et al., CoRL 2021.

Demo Sessions: II, III, and VIII

Demo 5

Coarse-to-Fine Imitation Learning: Robot Manipulation from a Single Demonstration

Contributors: Edward Johns (Imperial College London)

Abstract: Coarse-to-Fine Imitation Learning is a new method that allows a robot to learn everyday tasks from a single human demonstration, without any prior knowledge of the objects involved. The method is simple: the robot trains an object pose estimator with self-supervised learning, and at test time it reaches the object and then simply replays the demonstration. In this demo, I will show the test-time performance of this method when trained with only a single demonstration.
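
A minimal sketch of that test-time behaviour follows (not the author's code; every function here is a hypothetical stand-in): estimate the object's pose, move the end-effector to the pose it had at the start of the demonstration expressed relative to the object, then replay the demonstrated motion.

```python
# Hypothetical sketch (not the author's code) of the coarse-to-fine test-time loop:
# coarse stage reaches the object, fine stage replays the single demonstration.
import numpy as np

def estimate_object_pose(rgb_image):
    """Stand-in for the self-supervised pose estimator (returns a 4x4 transform)."""
    return np.eye(4)

def demo_start_pose_in_object_frame():
    """End-effector pose at the start of the single demonstration, object frame."""
    T = np.eye(4)
    T[:3, 3] = [0.0, 0.0, 0.10]               # e.g. 10 cm above the object (assumed)
    return T

def replay(recorded_eef_velocities, send_velocity):
    """Replay the demonstrated end-effector velocities step by step."""
    for v in recorded_eef_velocities:
        send_velocity(v)

camera_image = np.zeros((64, 64, 3))
world_T_object = estimate_object_pose(camera_image)
target = world_T_object @ demo_start_pose_in_object_frame()
print("move end-effector to:\n", target)      # a real robot would servo here first
replay([np.zeros(6)] * 5, send_velocity=lambda v: None)
```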

Based on paper: “Coarse-to-Fine Imitation Learning: Robot Manipulation from a Single Demonstration”, Johns, ICRA 2021.

Demo Sessions: II and III

Demo 6

ReSkin: versatile, replaceable, lasting tactile skins

Contributors: Raunaq Bhirangi (CMU), Tess Hellebrekers (FAIR), Carmel Majidi (CMU), Abhinav Gupta (CMU/FAIR)

Abstract: Interactive demo for the paper “ReSkin: versatile, replaceable, lasting tactile skins”. The audience will be able to play with a ReSkin sensor and see a visualization of its contact localization and force prediction capabilities. Information on starter kits for obtaining and setting up your own sensors will also be available.

Based on paper: “ReSkin: versatile, replaceable, lasting tactile skins”, Bhirangi, Hellebrekers et al., CoRL 2021.

Demo Sessions: IV and V

Demo 7

Learning to Walk in Minutes while Visualizing the Training on Hardware

Contributors: Nikita Rudin (Robotic Systems Lab ETH Zurich & NVIDIA), David Hoeller (Robotic Systems Lab ETH Zurich & NVIDIA), Philipp Reist (NVIDIA), Marco Hutter (Robotic Systems Lab ETH Zurich)

Abstract: Following the training setup in our paper, we will show how a quadruped locomotion policy can be trained in ~5min in simulation while visualizing the progress by continuously updating the policy running on the real robot. The robot will start with a randomly initialized policy and will progressively learn to walk. Additionally, we will show how this setup can be used to tune rewards with hardware in the loop.
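
The sketch below (not the authors' code) only illustrates the structure of that hardware-in-the-loop demo: a training loop in simulation that periodically pushes the newest policy to the robot. The function names and the sync interval are made up.

```python
# Hypothetical sketch (not the authors' code) of the demo setup: train in
# simulation and, every few iterations, deploy the latest policy to the robot
# so the behaviour can be watched improving live.
def train_one_iteration(policy):
    """Stand-in for one iteration of massively parallel RL in simulation."""
    return policy + 1                          # pretend the policy improved

def deploy_to_robot(policy):
    """Stand-in for copying the latest policy weights to the robot's controller."""
    print(f"robot now running the policy from iteration {policy}")

policy = 0                                     # start from a randomly initialised policy
SYNC_EVERY = 10                                # push weights every 10 iterations (assumed)
for iteration in range(1, 31):
    policy = train_one_iteration(policy)
    if iteration % SYNC_EVERY == 0:
        deploy_to_robot(policy)
```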

Based on paper: “Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning”, Rudin et al., CoRL 2021.

Demo Sessions: IV and V

Demo 8

Dexterous robot control using deep learning and biomimetic touch

Contributors: Alex Church, Anupam Gupta, Yijiong Lin, Nathan Lepora. All contributors are with the University of Bristol.

Abstract: We will demonstrate the dexterous control of robot arms using a high-resolution tactile sensor on tasks including ball rolling, contour following and non-prehensile (pushing) manipulation. This will also showcase sim-to-real deep reinforcement learning methods in robot manipulation.

Based on papers: “Tactile Sim-to-Real Policy Transfer via Real-to-Sim Tactile Image Translation”, Church et al., CoRL 2021, and “Soft Biomimetic Optical Tactile Sensing with the TacTip: A Review”, Lepora, IEEE Sensors Journal 2021.

Demo Sessions: IV and V

Demo 9

Dexterous robot hands with a biomimetic sense of touch

Contributors: Efi Psomopoulou, Chris Ford, Nathan Lepora. All contributors are with the University of Bristol.

Abstract: We will demonstrate two state-of-the-art dexterous robot hands with biomimetic tactile sensors integrated into their fingertips: the 3-fingered Shadow Modular Grasper and the anthropomorphic (5-fingered) Pisa/IIT SoftHand. Interpreting the high-resolution tactile data with deep learning enables fine control of objects held in-hand and light, stable grasping.

Based on papers: “A robust controller for stable 3D pinching using tactile sensing”, Psomopoulou et al., IEEE RA-L & IROS 2021, and “Towards integrated tactile sensorimotor control in anthropomorphic soft robotic hands”, Lepora et al., ICRA 2021.

Demo Sessions: VI and VII

Demo 10

Fast and Efficient Locomotion via Learned Gait Transitions

Contributors: Yuxiang Yang (University of Washington), Tingnan Zhang (Robotics at Google), Rosario Scalise (University of Washington), Erwin Coumans (Robotics at Google), Jie Tan (Robotics at Google), Byron Boots (University of Washington)

Abstract: We will demonstrate fast and energy-efficient locomotion on the A1 quadruped robot, achieved by a hierarchical learning framework (Paper #108). As the user increases the robot's desired speed, the robot switches between a range of locomotion gaits, including low-speed walking, mid-speed trotting, and high-speed fly-trotting, to minimize its energy consumption. We will also demonstrate the controller's robustness under external perturbations and abrupt changes in speed commands.
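
The hand-written thresholds below are only a stand-in for the learned high-level gait policy (they are not the authors' method), intended to show the interface: a commanded speed goes in, and a gait plus stepping frequency for the low-level controller comes out.

```python
# Illustrative stand-in only (not the authors' code): in the hierarchical framework
# a learned high-level policy chooses gait parameters from the commanded speed,
# and a low-level controller tracks them. Thresholds and frequencies are assumed.
def select_gait(desired_speed_mps):
    """Return a (gait name, stepping frequency in Hz) pair for the low-level controller."""
    if desired_speed_mps < 0.5:
        return "walk", 1.5
    if desired_speed_mps < 1.5:
        return "trot", 2.5
    return "fly-trot", 3.5                     # flight phases at high speed

# As the joystick command ramps up, the selected gait changes to stay efficient.
for v in (0.2, 0.8, 1.2, 1.8, 2.5):
    gait, freq = select_gait(v)
    print(f"{v:.1f} m/s -> {gait} at {freq} Hz")
```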

Based on paper: “Fast and Efficient Locomotion via Learned Gait Transitions”, Yang et al., CoRL 2021.

Demo Sessions: VI and VII

Demo 11

Learning Multi-Stage Tasks with One Demonstration via Self-Replay

Contributors: Norman Di Palo, Eugene Valassakis, and Edward Johns

Abstract: We will demonstrate the test-time performance, as well as the data collection phase, of our CoRL 2021 paper on multi-stage imitation learning, which we call Self-Replay. This method allows a novel task to be learned from a single human demonstration, without any prior knowledge of the objects in the scene.

Based on paper: “Learning Multi-Stage Tasks with One Demonstration via Self-Replay”, Di Palo and Johns, CoRL 2021.

Demo Sessions: VI and VII