Accelerating Motion Planning via Optimal Transport

An T. Le, Georgia Chalvatzaki, Armin Biess, and Jan Peters

This work has been published as a conference paper at NeurIPS 2023.

Simplex

Orthoplex

Cube

Rotating hypercube (animation by JasonHise)

We use these polytope families as search-direction sets: they exist in every dimension, and their vertex sets are simple to construct. At each iteration, we apply a random rotation to the chosen polytope to add stochasticity to the search.
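For illustration, the vertex sets of the three polytope families and a uniformly random rotation can be built in a few lines of NumPy (a minimal sketch; the function names and this particular simplex construction are ours, not the paper's code):

```python
import numpy as np
from itertools import product

def simplex_vertices(d):
    """d+1 unit vertices of a regular d-simplex centered at the origin."""
    e = np.eye(d + 1)
    centered = e - e.mean(axis=0)        # rows lie in the sum-zero hyperplane
    _, _, vt = np.linalg.svd(centered)   # top-d rows of vt span that hyperplane
    v = centered @ vt[:d].T              # project to R^d; distances preserved
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def orthoplex_vertices(d):
    """2d unit vertices of the d-orthoplex (cross-polytope): +/- e_i."""
    return np.concatenate([np.eye(d), -np.eye(d)], axis=0)

def cube_vertices(d):
    """2^d vertices of the d-cube, scaled onto the unit sphere."""
    v = np.array(list(product([-1.0, 1.0], repeat=d)))
    return v / np.sqrt(d)

def random_rotation(d, rng):
    """Haar-uniform rotation via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    s = np.sign(np.diag(r))
    s[s == 0] = 1.0
    q = q * s                            # sign fix makes the law uniform
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]               # force det = +1 (proper rotation)
    return q
```

A randomly rotated direction set is then, e.g., `orthoplex_vertices(d) @ random_rotation(d, rng).T`.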

For the following demos, we use MPOT-Cube for the planar case and MPOT-Orthoplex for the Panda and TIAGo++ cases. The annealing option is enabled for all MPOT variants. All plans are first-order (position + velocity) trajectories in configuration space, so the optimization dimension is n x T x d, where n is the number of plans, T is the horizon, and d is the full-state dimension: d = 4 for the planar case (2D position + 2D velocity), d = 14 for the Panda case (7 joints), and d = 36 for the TIAGo++ case (3 base + 1 torso + 14 for the two arms, doubled for velocities).
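As a concrete sanity check of these shapes, a batch of first-order planar trajectories could be initialized as follows (straight-line positions with small noise are our illustrative choice here, not necessarily the initialization used in the paper):

```python
import numpy as np

def init_first_order_batch(start, goal, n, T):
    """Batch of n first-order trajectories from start to goal over T steps.
    Straight-line positions, constant velocity, plus small noise for plan
    diversity (illustrative initialization, not the paper's)."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    ts = np.linspace(0.0, 1.0, T)[None, :, None]      # (1, T, 1)
    pos = start + ts * (goal - start)                 # (1, T, dim)
    vel = np.broadcast_to(goal - start, pos.shape)    # constant velocity
    traj = np.concatenate([pos, vel], axis=-1)        # (1, T, 2*dim)
    traj = np.repeat(traj, n, axis=0)                 # (n, T, 2*dim)
    traj += 0.01 * np.random.default_rng(0).standard_normal(traj.shape)
    return traj

batch = init_first_order_batch([0, 0], [1, 1], n=33, T=64)
print(batch.shape)   # (33, 64, 4): n x T x d with d = 4 for the planar case
```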

I. Planar Demos

In these examples, we parallelize planning across three goals, with 33 plans per goal (64-timestep horizons), on a planar occupancy map containing 15 random circle or square obstacles. To show how parallelization quality improves over Sinkhorn Steps, we plot colliding trajectories in black and collision-free trajectories in red. All example instances converged in around 70 Sinkhorn Steps, taking ~0.2-0.3 seconds on an RTX 3080 Ti GPU.

II. Panda Demos

In these demos, we show the successful trajectories from a batch of 10 (64-timestep horizons) for each environment seed; each environment contains 15 random obstacles. The green line denotes the SE(3) goal. All example instances converged in around 60 Sinkhorn Steps, taking ~0.6-0.8 seconds on an RTX 3080 Ti GPU.

III. TIAGo++ Demos

We show successful fetch and place trajectories (two distinct trajectories, each with a 128-timestep horizon) for each environment seed. The robot is spawned at a random pose in each environment. All example instances converged in around 40 Sinkhorn Steps, taking ~1-2 seconds for either the fetch or the place trajectory (~3-4 seconds in total) on an RTX 3080 Ti GPU.

The left picture shows an exemplary TIAGo++ trajectory optimized by MPOT-Orthoplex, while the right one shows the GPMP2 trajectory. As can be seen, the trajectory smoothness of MPOT is comparable to that of GPMP2, without requiring gradients. The red circles represent the chairs in the Matplotlib plot.

IV. Fun Extras

We also implemented the Sinkhorn Step in JAX (the standalone Sinkhorn Step repository) and tested it on standard synthetic functions to observe empirical convergence to multiple local minima. Note that the model/cost functions are implemented with batch-wise processing. Below are visualizations of 10K points, sampled uniformly at random at initialization, optimized by Sinkhorn Step in 1-2 seconds (after JIT-compiling the step function) on 2D functions. All instances run on an RTX 3080 Ti GPU. Please find more details in the Sinkhorn Step (JAX) repository linked above.
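To make the procedure concrete, here is a simplified NumPy-only sketch of the batched Sinkhorn Step on a toy 2D cost (the real implementation is the JAX repository; `sinkhorn_plan`, the step-size schedule, and the sphere-function demo are our illustrative choices):

```python
import numpy as np

def sinkhorn_plan(C, eps=0.1, iters=100):
    """Entropic-OT plan between uniform marginals via Sinkhorn scaling."""
    n, m = C.shape
    C = C - C.min(axis=1, keepdims=True)   # stabilization; plan is unchanged
    K = np.exp(-C / eps)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    v = np.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def sinkhorn_step(x, f, directions, alpha, eps=0.1):
    """One simplified Sinkhorn Step: probe each point along the polytope
    directions, solve entropic OT, move by the barycentric direction."""
    probes = x[:, None, :] + alpha * directions[None, :, :]     # (N, m, d)
    C = f(probes.reshape(-1, x.shape[1])).reshape(len(x), -1)   # (N, m)
    W = sinkhorn_plan(C, eps)
    bary = (W / W.sum(axis=1, keepdims=True)) @ directions      # (N, d)
    return x + alpha * bary

# demo: batched gradient-free descent of the 2D sphere function
f = lambda p: (p ** 2).sum(axis=-1)
orthoplex = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, size=(500, 2))
alpha = 0.3
for _ in range(150):
    theta = rng.uniform(0.0, 2.0 * np.pi)          # random rotation per step
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    x = sinkhorn_step(x, f, orthoplex @ R.T, alpha)
    alpha *= 0.98                                   # annealing
print(np.linalg.norm(x, axis=1).mean())  # mean distance to the optimum; small
```

Note that the update never queries gradients of `f`, only batched cost evaluations at the probe points, which is what makes the method easy to parallelize on a GPU.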

Reference: https://en.wikipedia.org/wiki/Test_functions_for_optimization

Bukin function

Drop Wave function

Eggholder function

Hölder Table function

Levy function

Rosenbrock function

Rastrigin function

Styblinski–Tang function
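For reference, a few of these benchmark functions in batched NumPy form, following the standard definitions from the page linked above (the batched JAX versions in the repository are analogous):

```python
import numpy as np

def rastrigin(x):
    # f(x) = 10 d + sum(x_i^2 - 10 cos(2 pi x_i)); global minimum 0 at origin
    x = np.asarray(x, float)
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def styblinski_tang(x):
    # f(x) = 0.5 * sum(x_i^4 - 16 x_i^2 + 5 x_i);
    # global minimum ~ -39.166 * d at x_i ~ -2.903534
    x = np.asarray(x, float)
    return 0.5 * np.sum(x**4 - 16 * x**2 + 5 * x, axis=-1)

def drop_wave(x):
    # 2D: f(x, y) = -(1 + cos(12 sqrt(x^2 + y^2))) / (0.5 (x^2 + y^2) + 2);
    # global minimum -1 at the origin
    x = np.asarray(x, float)
    r2 = np.sum(x**2, axis=-1)
    return -(1 + np.cos(12 * np.sqrt(r2))) / (0.5 * r2 + 2)
```

Each accepts an array of shape `(..., d)` and returns costs of shape `(...)`, so a whole batch of points is evaluated in one call.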

V. Acknowledgement

An T. Le was supported by the German Research Foundation project METRIC4IMITATION (PE 2315/11-1). Georgia Chalvatzaki was supported by the German Research Foundation (DFG) Emmy Noether Programme (CH 2676/1-1). We also gratefully acknowledge Alexander Lambert for his implementation of the Gaussian Process prior; Pascal Klink, Joe Watson, João Carvalho, and Julen Urain for the insightful and fruitful discussions; Snehal Jauhri, Puze Liu, and João Carvalho for their help in setting up the TIAGo++ environments.