ParkPredict: Motion and Intent Prediction of Vehicles in Parking Lots
Xu Shen*, Ivo Batkovic*, Vijay Govindarajan*, Paolo Falcone, Trevor Darrell, and Francesco Borrelli
* Indicates equal contribution
University of California, Berkeley, CA, USA
Chalmers University of Technology, Gothenburg, Sweden
Zenuity AB, Gothenburg, Sweden
Published at the 2020 IEEE Intelligent Vehicles Symposium (IV 2020)
We investigated the problem of predicting driver behavior in parking lots, an environment that is less structured than typical road networks and features complex, interactive maneuvers in a compact space.
Using the CARLA simulator, we developed a parking lot environment and collected a dataset of human parking maneuvers.
We studied the impact of model complexity and feature information by comparing a multi-modal Long Short-Term Memory (LSTM) prediction model and a Convolutional Neural Network LSTM (CNN-LSTM) against a physics-based Extended Kalman Filter (EKF) baseline.
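For context, the physics-based baseline simply propagates the vehicle's kinematic state forward in time. A minimal EKF sketch over a constant-heading, constant-speed state [x, y, heading, speed] with position-only measurements (an assumed model for illustration; the paper's exact formulation may differ) could look like:

```python
import numpy as np

class EKF:
    """Extended Kalman Filter over a kinematic state [x, y, heading, speed].
    A hypothetical stand-in for the paper's physics-based baseline."""

    def __init__(self, x0, P0, q=0.1, r=0.5):
        self.x = np.asarray(x0, dtype=float)   # state estimate
        self.P = np.asarray(P0, dtype=float)   # state covariance
        self.Q = q * np.eye(4)                 # process noise
        self.R = r * np.eye(2)                 # measurement noise (position only)

    def predict(self, dt):
        px, py, th, v = self.x
        # Nonlinear motion model: constant heading and speed.
        self.x = np.array([px + v * np.cos(th) * dt,
                           py + v * np.sin(th) * dt,
                           th, v])
        # Jacobian of the motion model w.r.t. the state.
        F = np.array([[1, 0, -v * np.sin(th) * dt, np.cos(th) * dt],
                      [0, 1,  v * np.cos(th) * dt, np.sin(th) * dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        # We only observe the (x, y) position.
        H = np.array([[1.0, 0, 0, 0],
                      [0, 1.0, 0, 0]])
        y = np.asarray(z, dtype=float) - H @ self.x   # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P
```

At prediction time, repeated `predict` calls without updates roll the state forward to produce a future trajectory.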
Video
Data Collection
Experiment setting and infrastructure:
We used the CARLA simulator and CARLA ROS bridge.
The subject used a Logitech G27 racing wheel to control brake, throttle, and steering of the ego vehicle and execute parking tasks.
The subject was instructed to park into a free spot of their choosing, following a specified forward or reverse parking maneuver.
Upon selecting a parking spot, the subject was instructed to press a button to signal their determined intent.
In this experiment, only the ego vehicle was moving; all other vehicles in the scene remained static and parked.
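The per-frame log for each demonstration can be pictured as a simple record combining the kinematic state with the intent flag; the field names below are illustrative, not the dataset's actual schema:

```python
from dataclasses import dataclass

@dataclass
class FrameRecord:
    """One logged frame of the ego vehicle's kinematic state, plus the
    intent flag that is set once the subject presses the button.
    Field names are hypothetical, not the dataset's real schema."""
    t: float               # simulation time [s]
    x: float               # position [m]
    y: float               # position [m]
    heading: float         # yaw [rad]
    speed: float           # [m/s]
    intent_signaled: bool  # True from the button press onward
```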
Map configuration:
The custom parking lot map is modified from 'Town04'. The parking lot consists of 4 rows with 16 spots each.
In each trial, static vehicles were spawned into parking spots such that only 8 free spot options, located in the middle two rows, were available.
The specific locations of free spots were varied across trials to gather a diverse range of parking demonstrations from the subject.
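The spot-sampling logic described above can be sketched as follows; the row indexing and the uniform random sampling are assumptions for illustration, not the exact trial generator:

```python
import random

ROWS, SPOTS_PER_ROW, NUM_FREE = 4, 16, 8
MIDDLE_ROWS = (1, 2)  # 0-indexed middle two rows (assumed indexing)

def sample_configuration(rng=random):
    """Return (free, occupied) spot sets for one trial: 8 free spots
    drawn from the middle two rows; every other spot gets a static car."""
    candidates = [(r, s) for r in MIDDLE_ROWS for s in range(SPOTS_PER_ROW)]
    free = set(rng.sample(candidates, NUM_FREE))
    occupied = [(r, s) for r in range(ROWS) for s in range(SPOTS_PER_ROW)
                if (r, s) not in free]
    return free, occupied
```

Resampling this configuration per trial is what varies the free-spot locations across demonstrations.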
Experiment process:
Each of the 10 subjects performed 30 forward parking and 30 reverse parking demonstrations, yielding 600 demonstrations in total.
In each demonstration, the kinematic motion state history of the ego vehicle was recorded, along with the intent signal indicating when a parking spot had been selected.
All demonstrations containing collisions or without intent signaling were filtered out.
The configuration of the parking lot was recorded together with the bounding boxes of the ego vehicle and all other parked vehicles.
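The filtering rule above (drop any trial with a collision or without an intent signal) can be sketched as a simple predicate; the `demo` fields are hypothetical:

```python
def keep_demonstration(demo):
    """Keep a demonstration only if it is collision-free and the subject
    signaled intent. `demo` is a hypothetical dict with a 'collisions'
    list and an 'intent_time' (float, or None if never signaled)."""
    return not demo["collisions"] and demo["intent_time"] is not None
```

Applied as `kept = [d for d in demos if keep_demonstration(d)]`, this yields the cleaned set of demonstrations.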
Data Processing
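A common way to prepare such recorded trajectories for sequence models like the LSTMs above is to slice each one into overlapping history/future windows; the window lengths below are illustrative, not the paper's values:

```python
import numpy as np

def make_windows(traj, hist_len, fut_len):
    """Slice one recorded trajectory of shape (T, d) into overlapping
    (history, future) training pairs for a sequence-prediction model.
    Window lengths here are placeholders, not the paper's settings."""
    X, Y = [], []
    for t in range(hist_len, len(traj) - fut_len + 1):
        X.append(traj[t - hist_len:t])  # past hist_len frames
        Y.append(traj[t:t + fut_len])   # next fut_len frames to predict
    return np.stack(X), np.stack(Y)
```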
Dataset Request
The dataset in this work is an older version; we strongly recommend taking a look at our newer Dragon Lake Parking (DLP) Dataset and the accompanying new model for parking scenarios.
Citation
@inproceedings{shen2020parkpredict,
  title={{ParkPredict}: Motion and Intent Prediction of Vehicles in Parking Lots},
  author={Shen, Xu and Batkovic, Ivo and Govindarajan, Vijay and Falcone, Paolo and Darrell, Trevor and Borrelli, Francesco},
  booktitle={2020 IEEE Intelligent Vehicles Symposium (IV)},
  pages={1170--1175},
  year={2020},
  organization={IEEE}
}