Motion Planning Networks
Motion Planning Networks (MPNet) is a computationally efficient, learning-based neural planner for solving motion planning problems. MPNet uses neural networks to learn general near-optimal heuristics for path planning in both seen and unseen environments. It receives the environment information as a point cloud, along with the robot's initial and desired goal configurations, and recursively calls itself to bidirectionally generate connectable paths. Beyond finding directly connectable, near-optimal paths in a single pass, we show that worst-case theoretical guarantees can be proven by merging this neural strategy with classical sampling-based planners in a hybrid approach, while still retaining significant gains in computation time and path optimality. To learn the MPNet models, we present an active continual learning approach that enables MPNet to learn from streaming data and to actively request expert demonstrations only when needed, drastically reducing the amount of training data. We validate MPNet against gold-standard and state-of-the-art planning methods on a variety of problems, from 2D to 7D robot configuration spaces, in challenging and cluttered environments. The results show significantly and consistently stronger performance, motivating neural planning in general as a modern strategy for solving motion planning problems efficiently.
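The recursive, bidirectional path generation described above can be sketched as follows. This is a minimal illustration, not the released implementation: `neural_step` is a hypothetical stand-in for MPNet's trained planning network (here it simply steps a fixed fraction toward the goal), and the collision checker is supplied by the caller.

```python
import numpy as np

def neural_step(obs_encoding, q_current, q_goal):
    # Hypothetical stand-in for MPNet's planning network, which would map
    # (point-cloud encoding, current config, goal config) -> next config.
    # Here we simply step a fixed fraction toward the goal.
    return q_current + 0.3 * (q_goal - q_current)

def steer_connectable(q_a, q_b, collision_free):
    # Check straight-line connectability by dense interpolation between configs.
    return all(collision_free(q_a + t * (q_b - q_a)) for t in np.linspace(0.0, 1.0, 20))

def bidirectional_neural_plan(obs_encoding, q_start, q_goal, collision_free, max_steps=50):
    # Grow two partial paths toward each other, alternating expansion
    # direction each iteration, as in MPNet's bidirectional heuristic.
    path_a, path_b = [q_start], [q_goal]
    for _ in range(max_steps):
        q_new = neural_step(obs_encoding, path_a[-1], path_b[-1])
        if collision_free(q_new):
            path_a.append(q_new)
        if steer_connectable(path_a[-1], path_b[-1], collision_free):
            # The two partial paths are directly connectable: merge them.
            return path_a + path_b[::-1]
        # Swap roles so the other end is extended next iteration.
        path_a, path_b = path_b, path_a
    # No connectable path found within the budget; in the hybrid scheme
    # described above, one would fall back to a classical planner here.
    return None
```

In free space this returns a connected start-to-goal path in a single pass; in clutter, the fallback branch is where a classical sampling-based planner would take over to preserve worst-case guarantees.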
MPNet Code Release & Supplementary Material
Supplementary Material:
1- Implementation Details
2- A video talk, given virtually at the University of Toronto, explaining MPNet with pointers to future research directions.
Available Code Repositories:
MPNet planning motion for a 7DOF Baxter on a multi-target problem
The task is to pick up the blue object (a duck) by planning a path from the initial position to the graspable object location, and then move it to a new target (the yellow block). Note that the stopwatch indicates the planning + execution time. In this scenario, MPNet computed the entire path plan in less than a second, whereas BIT* took about three minutes to find a solution within 10% of the cost of the MPNet path.
MPNet
Planning time < 1 second
BIT*
Planning time = 3.01 minutes
MPNet planning motion for a rigid body in SE(3) in about 1 second of planning time
MPNet planning motion for a 7DOF Baxter on a reaching task
We evaluated MPNet planning motion for a Baxter robot in ten challenging and cluttered environments, four of which are shown below. In these scenarios, MPNet again took less than a second (sub-second planning time).
Environment 1
Environment 2
Environment 3
Environment 4
MPNet in 2D Environments
The following videos show real-time path generation by MPNet for a point-mass robot and a rigid body in 2D environments, between a randomly selected start and goal pair in the obstacle-free space.
MPNet planning motion for 6DOF Universal Robot
The following video shows MPNet planning for a 6DOF manipulator. The shadowed region indicates the target configuration, while the yellow objects indicate the obstacles to avoid.
MPNet planning motion for 7DOF Baxter Robot Manipulators
The following video shows MPNet planning for the 7DOF manipulators of Baxter.
Bibliography
@article{qureshi2019motion,
  title={Motion Planning Networks: Bridging the Gap Between Learning-based and Classical Motion Planners},
  author={Qureshi, Ahmed H and Miao, Yinglong and Simeonov, Anthony and Yip, Michael C},
  journal={IEEE Transactions on Robotics},
  year={2020},
  pages={1--9}
}
@inproceedings{qureshi2019mpnet,
  title={Motion planning networks},
  author={Qureshi, Ahmed H and Simeonov, Anthony and Bency, Mayur J and Yip, Michael C},
  booktitle={2019 International Conference on Robotics and Automation (ICRA)},
  pages={2118--2124},
  year={2019},
  organization={IEEE}
}
@inproceedings{qureshi2018deeply,
  title={Deeply Informed Neural Sampling for Robot Motion Planning},
  author={Qureshi, Ahmed H and Yip, Michael C},
  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={6582--6588},
  year={2018},
  organization={IEEE}
}
Project Contributors