Complex Urban Dataset

  • The target environment of this dataset is the highly complex urban environment. Complex urban areas, such as downtown and dense residential districts, are challenging for many robotics applications: validation and implementation there are not straightforward because of the many moving objects and high-rise buildings.

The figure on the left shows the GPS reception rate in a complex urban environment. The brightness of each region represents the number of received satellites (yellow: more satellites; red: fewer satellites). In complex urban areas it is difficult to estimate the vehicle's location because GPS signals are blocked and degraded by the large number of high-rise buildings.
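
The dataset's GPS log format is not described in this section; purely as an illustration, the sketch below assumes a plain-text log of NMEA GGA sentences (the file name gps.nmea and the parsing details are assumptions) and plots each fix colored by satellite count, mirroring the figure's yellow-to-red scale.

```python
import matplotlib.pyplot as plt

def parse_gga(line):
    """Return (lat, lon, num_satellites) from an NMEA GGA sentence, or None."""
    f = line.strip().split(",")
    if len(f) < 8 or not f[0].endswith("GGA") or f[6] in ("", "0"):
        return None
    if not f[2] or not f[4]:
        return None
    lat = float(f[2][:2]) + float(f[2][2:]) / 60.0   # ddmm.mmmm -> degrees
    lon = float(f[4][:3]) + float(f[4][3:]) / 60.0   # dddmm.mmmm -> degrees
    if f[3] == "S":
        lat = -lat
    if f[5] == "W":
        lon = -lon
    return lat, lon, int(f[7])

with open("gps.nmea") as fh:                          # hypothetical log file
    fixes = [p for p in (parse_gga(l) for l in fh) if p]

lats, lons, sats = zip(*fixes)
# "autumn" maps low values to red and high values to yellow, matching the
# figure's fewer-satellites-red / more-satellites-yellow color scale.
plt.scatter(lons, lats, c=sats, cmap="autumn", s=4)
plt.colorbar(label="received satellites")
plt.xlabel("longitude [deg]")
plt.ylabel("latitude [deg]")
plt.show()
```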

The Sensor System

  • The platform used for acquiring the sensor data was a Toyota Prius. The vehicle was equipped with two 2D and two 3D LiDARs to capture the surrounding environment. The 3D LiDARs were mounted tilted at 45 degrees for maximum coverage. Of the 2D LiDARs, the rear sensor was tilted downwards to capture the road surface, and the front sensor was tilted upwards to capture buildings. A stereo camera was installed facing the front of the vehicle.

  • Various sensors were also installed to estimate the position of the vehicle. A GPS receiver and a VRS GPS receiver were installed to estimate the global position: the GPS is a standard-grade sensor, while the VRS GPS estimates position more accurately using correction data received over a network link. An Inertial Measurement Unit (IMU) and a Fiber Optic Gyro (FOG) were installed to measure the rotation of the vehicle; the IMU is a standard-grade sensor, whereas the FOG provides far more accurate measurements. Our sensor system also included an altimeter to measure altitude and a wheel encoder to measure the distance traveled by the vehicle. All sensor data are provided in a raw file format with timestamps.
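
The exact raw-file layout is documented with the dataset itself; as a minimal sketch, the loader below assumes comma-separated rows whose first column is a timestamp, a common convention for such logs. The file path and column layout are assumptions, not the dataset's documented format.

```python
import numpy as np

def load_stamped_log(path, n_values):
    """Load a raw sensor log assumed to have rows of the form
    timestamp, value_0, ..., value_{n-1}. Returns (timestamps, values)."""
    data = np.loadtxt(path, delimiter=",")
    return data[:, 0], data[:, 1:1 + n_values]

# e.g. angular rates from a hypothetical FOG log with (wx, wy, wz) columns
t, omega = load_stamped_log("sensor_data/fog.csv", n_values=3)
print(f"{len(t)} samples over {(t[-1] - t[0]):.1f} time units")
```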

Extrinsic Calibrations

The figure shows the coordinate system of each sensor. Red, green, and blue arrows represent the x, y, and z axes, respectively. All sensors were calibrated with respect to the vehicle reference coordinate system. Calibration was performed each time data was collected, and the calibration results are provided with each dataset.
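
Concretely, applying an extrinsic calibration means mapping sensor-frame measurements into the vehicle reference frame. The sketch below shows this for a 45-degree-tilted LiDAR; the rotation axis and translation values are illustrative placeholders, not the dataset's actual calibration results.

```python
import numpy as np

def to_vehicle_frame(points, R, t):
    """Map Nx3 sensor-frame points into the vehicle reference frame
    using the sensor's extrinsic rotation R (3x3) and translation t (3,)."""
    return points @ R.T + t

# Illustrative extrinsic: a LiDAR tilted 45 degrees about the vehicle's x axis.
theta = np.deg2rad(45.0)
R = np.array([[1.0, 0.0,            0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
t = np.array([-0.5, 0.5, 1.8])      # placeholder mounting offset in metres

scan = np.random.rand(1000, 3)      # stand-in for one LiDAR scan
scan_vehicle = to_vehicle_frame(scan, R, t)
```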

The figure shows the left (red) and right (green) 3D LiDAR data and the rear (white) 2D LiDAR data in the vehicle coordinate system. The calibration was performed using the overlapping regions among the sensor data.
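
The section does not spell out the algorithm behind this overlap-based calibration; one plausible reading is a registration step such as ICP. As a hedged sketch under that assumption, the snippet below uses Open3D's point-to-point ICP to refine one LiDAR's extrinsic against another using their overlap; the file names and the identity initial extrinsics are placeholders.

```python
import numpy as np
import open3d as o3d

# Placeholder 4x4 sensor-to-vehicle extrinsics (the dataset ships real values).
T_left = np.eye(4)
T_right = np.eye(4)

left = o3d.io.read_point_cloud("left_vlp16.pcd")    # hypothetical file names
right = o3d.io.read_point_cloud("right_vlp16.pcd")
left.transform(T_left)                              # bring both clouds into
right.transform(T_right)                            # the vehicle frame

# Refine the right LiDAR's pose using the overlapping region between the scans.
result = o3d.pipelines.registration.registration_icp(
    right, left, max_correspondence_distance=0.2,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Fold the ICP correction into the right LiDAR's extrinsic.
T_right_refined = result.transformation @ T_right
```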

Extrinsic calibration of the stereo camera was performed using a point cloud generated by a SLAM algorithm.
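
The calibration procedure itself is detailed in the paper referenced below; a common way to sanity-check a camera extrinsic obtained this way is to project the point cloud into the image and inspect the overlay. The sketch below assumes a 4x4 vehicle-to-camera extrinsic T_cam_vehicle and a 3x3 intrinsic matrix K, both hypothetical names introduced here for illustration.

```python
import numpy as np

def project_to_image(points_vehicle, T_cam_vehicle, K):
    """Project Nx3 vehicle-frame points into pixel coordinates to visually
    check the camera extrinsic. Points behind the camera are dropped."""
    homo = np.c_[points_vehicle, np.ones(len(points_vehicle))]
    cam = homo @ T_cam_vehicle.T                     # vehicle frame -> camera frame
    cam = cam[cam[:, 2] > 0]                         # keep points in front of camera
    uv = (cam[:, :3] / cam[:, 2:3]) @ K.T            # pinhole projection
    return uv[:, :2]                                 # (u, v) pixel coordinates
```

Overlaying the returned pixel coordinates on a camera image should show point-cloud structure lining up with image edges if the extrinsic is correct.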

For more information on the calibration process, please refer to our calibration paper.