09:00 a.m. – 09:25 a.m.
Organizers
09:30 a.m. – 10:00 a.m.
Abstract: Motion planning is a fundamental robotics problem that involves finding a collision-free path for a robot to move from its initial position to a desired goal position. While traditional planning methods exist, recent advancements have led to the development of imitation learning-based motion planners that find solutions much faster. However, these learning-based methods require a large number of expert trajectories for training, which can be computationally expensive to produce. To address this issue, this talk will discuss the newly emerging class of physics-informed neural motion planners. These methods directly learn to solve the Eikonal partial differential equation (PDE) for motion planning and do not require expert demonstration paths from traditional planners for training. The results show that these new approaches outperform state-of-the-art traditional and imitation learning-based motion planning methods in terms of computational planning speed, path quality, and success rates. Furthermore, data generation for these physics-informed methods takes only a few minutes, compared to the hours or days required by imitation learning-based methods.
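For context, the Eikonal PDE mentioned above relates a goal-arrival-time field to a speed model over the workspace; a commonly used form (the notation below is assumed for illustration, not taken from the talk) is

\[
\lVert \nabla_{x}\, T(x_g, x) \rVert \;=\; \frac{1}{S(x)}, \qquad T(x_g, x_g) = 0,
\]

where \(T(x_g, x)\) is the travel time from the goal \(x_g\) to a point \(x\) and \(S(x)\) is a speed field that is typically made small near obstacles. A physics-informed neural planner trains a network to approximate \(T\) so that this residual vanishes, and a path can then be recovered by following the gradient of \(T\) from the start toward the goal.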
10:00 a.m. – 10:30 a.m.
Abstract: A robot's policy for interacting with the environment may not require high-resolution, complete 3D geometric information. Nonetheless, an appropriate geometric context can provide vital clues for achieving a goal. In this talk, we focus on perceiving and manipulating transparent objects. Conventional sensing modalities, such as RGB images or depth sensors, often fail to provide reliable observations and suffer from significant domain gaps caused by reflections of the surrounding environment. We suggest extracting an intermediate geometric representation for grasping transparent objects. First, we demonstrate how to detect instance masks in scenes where transparent and ordinary objects co-exist. Even though no dedicated dataset contains a sufficient number of both object types, we propose an efficient augmentation scheme that enables reliable detection of both classes of materials. Second, we present a method for capturing the complete 3D layouts of transparent objects using a normal field formulation. Inspired by recent work on neural radiance fields, we quickly estimate density fields and surface normals, aggregating multi-view information into a coherent 3D geometric context. Our proposed pipelines perform stably in diverse novel environments and provide sufficient geometric cues for grasping.
10:30 a.m. – 11:00 a.m.
Coffee Break
11:00 a.m. – 11:20 a.m.
Kendal G. Norman (Purdue University): Analysis of Continuous Learning Models for Robot Motion Planning
Yunlong Song (ETH): Reinforcement Learning for Agile Flight: From Perception to Action
11:45 a.m. – 11:50 a.m.
Organizers
Accepted papers
1. Kendal Norman and Ahmed H. Qureshi: Analysis of Continuous Learning Models for Robot Motion Planning
2. Philipp Blättner, Johannes Brand, Gerhard Neumann, and Ngo Anh Vien: DMFC-GraspNet: Differentiable Multi-Fingered Robotic Grasp Generation in Cluttered Scenes
3. Yunlong Song and Davide Scaramuzza: Reinforcement Learning for Agile Flight: From Perception to Action
4. Thanh Nguyen, Tung M. Luu, and Chang D. Yoo: Fast and Memory-Efficient Uncertainty-Aware Framework for Offline Reinforcement Learning with Rank One MIMO Q network