Multispectral Pedestrian Detection 2015
This work was published in CVPR 2015.
Soonmin Hwang (KAIST), Jaesik Park (Intel), Namil Kim (NAVER LABS), Yukyung Choi (KAIST), In So Kweon (KAIST)
We developed imaging hardware consisting of a color camera, a thermal camera, and a beam splitter to capture aligned multispectral (RGB color + thermal) images. With this hardware, we captured a variety of regular traffic scenes during day and night to account for changes in lighting conditions.
The KAIST Multispectral Pedestrian Dataset consists of 95k color-thermal pairs (640x480, 20Hz) taken from a vehicle. All the pairs are manually annotated (person, people, cyclist), for a total of 103,128 dense annotations and 1,182 unique pedestrians. The annotations include temporal correspondences between bounding boxes, as in the Caltech Pedestrian Dataset. More information can be found in our CVPR 2015 [paper] [Ext. Abstract].
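To illustrate how the aligned pairs and Caltech-style annotations might be consumed, here is a minimal Python sketch. The directory layout, file names, and annotation field order are assumptions about a typical release (hypothetical paths, Caltech-style .txt annotations converted from .vbb files), not the official loader; adapt them to the copy you downloaded.

```python
# Minimal sketch: load one aligned color-thermal pair and its annotations.
# All paths and the annotation layout below are assumptions, not the official API.
import os
import cv2  # OpenCV for image I/O


def load_pair(root, set_id, seq_id, frame_id):
    """Load the aligned RGB and thermal (LWIR) images for one frame."""
    base = os.path.join(root, set_id, seq_id)
    rgb = cv2.imread(os.path.join(base, "visible", f"{frame_id}.jpg"))
    lwir = cv2.imread(os.path.join(base, "lwir", f"{frame_id}.jpg"),
                      cv2.IMREAD_GRAYSCALE)
    return rgb, lwir


def load_annotations(txt_path):
    """Parse a Caltech-style annotation file: 'label x y w h ...' per line."""
    boxes = []
    with open(txt_path) as f:
        for line in f:
            if line.startswith("%"):  # header line in Caltech-style files
                continue
            fields = line.split()
            label = fields[0]                      # person / people / cyclist
            x, y, w, h = map(float, fields[1:5])   # bounding box
            boxes.append((label, x, y, w, h))
    return boxes


# Example usage (paths are illustrative only):
# rgb, lwir = load_pair("kaist-rgbt", "set00", "V000", "I00000")
# boxes = load_annotations("kaist-rgbt/annotations/set00/V000/I00000.txt")
```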
Many researchers have worked to improve pedestrian detection performance on our benchmark. If you are interested, please see the following works.
The horizontal lines separate the entries by the image type used (color, thermal, and color-thermal).
Note that our dataset is the largest color-thermal dataset that provides occlusion labels and temporal correspondences, captured in non-static traffic scenes.
@inproceedings{hwang2015multispectral,
  Author = {Soonmin Hwang and Jaesik Park and Namil Kim and Yukyung Choi and In So Kweon},
  Title = {Multispectral Pedestrian Detection: Benchmark Dataset and Baselines},
  Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  Year = {2015}
}