Karush-Kuhn-Tucker loss

Novel loss for learning the shape of terrain from the robot trajectory.


This work is motivated by our observation that the shape of the terrain supporting the robot during traversal -- the supporting terrain -- is an essential input for many subsequent procedures such as motion control or path planning. Since this shape cannot be measured directly, we train a convolutional network to predict it from lidar and camera measurements. The prediction is not straightforward: the measurements are often incomplete due to terrain reflectivity, or biased because the terrain is occluded by a flexible (non-supporting) layer such as vegetation or water. In addition, it is difficult to obtain a sufficient amount of manual annotations for fully supervised training.

Consequently, we designed a self-supervised method that learns to predict the shape of the supporting terrain from offline-optimized maps and robot trajectories. Since offline optimization has access to privileged information in the form of future measurements, the resulting ground truth is significantly better than what can be measured at inference time. While learning from ground-truth maps leads directly to minimizing a cross-entropy or L2 loss, learning from ground-truth trajectories captured during terrain traversal is non-trivial. To this end, we propose the KKT loss, which allows us to backpropagate from ground-truth trajectories to the predicted terrain shape by measuring the physical consistency between them. The KKT loss leverages a simple first-principles model of the robot-terrain interaction that places the robot trajectory at a local minimum of its potential energy. Physical consistency is then measured as the Euclidean distance from the Karush-Kuhn-Tucker necessary conditions of this first-principles model.
The resulting fully differentiable KKT loss thus provides an additional self-supervision signal, which helps significantly, especially where lidar typically fails, such as on non-rigid or non-Lambertian terrain surfaces.
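To make the idea concrete, the residual of the KKT conditions can be sketched for the simplest possible interaction model: a point-mass robot of weight m*g resting on the predicted terrain, with the contact constraint h(x) - z <= 0 and Lagrangian L = m*g*z + lam*(h(x) - z). This is only a minimal illustration of the principle, not the method's actual robot model; the function name `kkt_loss` and the per-point multipliers `lam` are assumptions for this sketch.

```python
import numpy as np

def kkt_loss(h_pred, z_traj, lam, mass=1.0, g=9.81):
    """Squared residual of the KKT necessary conditions for a point-mass
    robot resting on terrain (illustrative sketch, not the paper's model).

    Contact constraint: h_pred - z_traj <= 0 (robot on or above terrain).
    Lagrangian: L = mass*g*z + lam*(h_pred - z).

    h_pred : predicted terrain height sampled along the trajectory, shape (N,)
    z_traj : measured robot height along the same trajectory, shape (N,)
    lam    : hypothetical contact-force multipliers, shape (N,), lam >= 0
    """
    slack = z_traj - h_pred                         # >= 0 when feasible
    stationarity = mass * g - lam                   # dL/dz = 0 at a KKT point
    complementarity = lam * slack                   # lam * (z - h) = 0
    feasibility = np.maximum(h_pred - z_traj, 0.0)  # terrain penetration
    return (np.sum(stationarity ** 2)
            + np.sum(complementarity ** 2)
            + np.sum(feasibility ** 2))
```

When the trajectory rides exactly on the predicted terrain (z = h) and the multiplier balances the weight (lam = m*g), every residual vanishes and the loss is zero; any mismatch between the predicted shape and the trajectory produces a positive, differentiable penalty that can be backpropagated to the terrain prediction.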