Examining Sensing Modalities for Robust and Dexterous Object Manipulation
A full-day workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Date: October 1, 2018
Room: 1.L.2 (PARÍS)
Object manipulation remains a challenging problem in both academia and industry. In many scenarios, such as logistics, service robotics, and industrial assembly, dexterous and robust manipulation can enormously improve efficiency. In some cases, dexterous manipulation enables a robot to accomplish tasks that would otherwise be impossible, e.g., picking up a workpiece and re-orienting it so that it can be assembled with other workpieces.
Robotic manipulation involves a wide variety of sensing modalities, such as visual perception, tactile feedback, force/torque sensing, and proprioceptive information from fingers or arms. These sensors provide the necessary knowledge about the environment, the object being manipulated, and the robot itself. Due to the complexity of fusing information from different channels, most existing work has focused on only one or two sensing modalities for either manipulation planning or robust manipulation control.
Recent years have seen enormous demand for deploying robotic systems in industry. Flexible, high-precision manufacturing and fast factory deployment, central goals of Industry 4.0, can no longer be met by human labor alone. A large number of robotic applications have demonstrated the need for robots that work with skills comparable to or even exceeding those of humans, as well as the necessity of enabling robots to collaborate with human workers. As such, the problem of object manipulation has evolved from planning and controlling a robot to manipulate objects into a much broader range of challenging problems, such as data-driven manipulation skill learning, multi-armed manipulation, human-robot collaboration, etc. With the rapid development of manipulation algorithms and sensing hardware, both practical industrial applications and research frontiers have shown an urgent need to integrate more sensing modalities to better understand the task space. In order to enable robots to effectively exploit multiple sensing modalities, and thereby better understand tasks, dynamics, and uncertainties, we have to investigate the following questions to move the research forward while keeping the problem tractable:
1. Which sensing modalities are necessary, and which are dispensable, for different object manipulation tasks?
2. How can different sensing modalities be represented in unified forms so that they can jointly model the information required by a task?
3. How can we design hybrid systems that adaptively switch between different combinations of sensing modalities to facilitate task execution across different manipulation stages?
The aim of this workshop is to bring together researchers from both industry and academia to lay the foundations and define the core open problems of multi-modal sensing for object manipulation, including perception, representation, learning, control, and planning. The workshop will also discuss the advantages, limitations, challenges, and progress of different approaches along these lines.
Topics of Interest
The workshop’s topics include, but are not limited to:
- The meaning and function of different sensing modalities in object manipulation
- Modeling and representation of sensing modalities
- Integration of sensing modalities
- Hardware optimization for sensor fusion
- Sensing and planning in object manipulation
- Multi-robot manipulation and coordination
- Control strategies for object manipulation and collaborative assembly
- Learning object manipulation skills from human demonstration
- In-hand object manipulation
Invited Speakers
Jeannette Bohg, Stanford University, USA
Matei Ciocarlie, Columbia University, USA
I-Ming Chen, Nanyang Technological University, Singapore
Robert Platt, Northeastern University, USA
Jianwei Zhang, Universität Hamburg, Germany
Kenji Tahara, Kyushu University, Japan
Maximo A. Roa, Roboception GmbH, Germany
Hao Ding, OrigiTech Co., Ltd., China
Berk Calli, Worcester Polytechnic Institute, USA
Nima Fazeli, Massachusetts Institute of Technology, USA
Organizers
Kaiyu Hang, Yale University, USA (kaiyu(dot)hang(at)yale(dot)edu)
Hao Ding, OrigiTech Co., Ltd., China (hao(dot)ding(at)origi-tech(dot)com)
Miao Li, Wuhan University, China (limiao712(at)gmail(dot)com)
Danica Kragic, KTH Royal Institute of Technology, Sweden (dani(at)kth(dot)se)
Aaron Dollar, Yale University, USA (aaron(dot)dollar(at)yale(dot)edu)