On Offline Evaluation of Vision-based Driving Models
Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation.
In this paper, we investigate the relation between various online and offline metrics for evaluating autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and that two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics.
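The core comparison in the paper, measuring how well an offline metric tracks online driving quality across a set of models, can be sketched with a rank correlation. The sketch below is illustrative only: the numbers are made up (not from the paper), and Spearman correlation is just one reasonable choice of association measure.

```python
import numpy as np

def spearman_corr(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no ties, so argsort-of-argsort yields the ranks directly."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Hypothetical per-model numbers (NOT results from the paper):
offline_mse  = np.array([0.12, 0.15, 0.18, 0.22, 0.30])  # offline prediction error
success_rate = np.array([0.80, 0.85, 0.55, 0.60, 0.40])  # online driving success

# A value near -1 would mean lower prediction error reliably implies
# better driving; a weak magnitude would mean the offline metric is a
# poor proxy for driving quality.
print(spearman_corr(offline_mse, success_rate))  # → -0.8
```

In this toy data the two metrics agree only partially (two model pairs are ranked inconsistently), which is the kind of mismatch the paper quantifies at scale.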
You can find the paper here.
A large-scale training/validation framework for imitation learning using CARLA will be released soon! Follow https://github.com/felipecode for news.