COCO-FreeView

COCO-FreeView is a laboratory-quality dataset of free-viewing behavior. It contains the same natural images used in COCO-Search18, but annotated with 822,602 eye fixations collected during a free-viewing task.

Ten university students participated in the data collection. The experimental procedure was modified from COCO-Search18: no target was cued, no response was required from the participant, and the viewing time for each image was fixed at 5 seconds.
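For readers who want a sense of how such fixation data might be inspected, below is a minimal Python sketch. It assumes the release follows a JSON layout similar to the companion COCO-Search18 data, with one record per trial holding the image name, subject ID, and per-fixation coordinates and durations; the file name and field names ("name", "subject", "X", "Y", "T") are assumptions, not confirmed by this page.

import json

# Hypothetical file name; substitute the actual file from the download below.
with open("coco_freeview_fixations.json") as f:
    trials = json.load(f)

# Count fixations across all trials (each trial stores one list entry per fixation).
total_fixations = sum(len(t["X"]) for t in trials)
print(f"{len(trials)} trials, {total_fixations} fixations")

# Print the scanpath of the first trial: fixation location and duration (ms).
first = trials[0]
print(f"image: {first['name']}, subject: {first['subject']}")
for x, y, dur in zip(first["X"], first["Y"], first["T"]):
    print(f"fixation at ({x:.0f}, {y:.0f}) for {dur} ms")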

Resources

👏 We are releasing COCO-FreeView to the public! We have taken down the testing data in order to set up an online evaluation service; stay tuned!

Download

The COCO-FreeView dataset contains:

Code on GitHub

Papers

Chen, Y., Yang, Z., Chakraborty, S., Mondal, S., Ahn, S., Samaras, D., Hoai, M., & Zelinsky, G. (2022). Characterizing Target-Absent Human Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 5031-5040).

Yang, Z., Mondal, S., Ahn, S., Zelinsky, G., Hoai, M., & Samaras, D. (2023). Predicting Human Attention using Computational Attention. arXiv preprint arXiv:2303.09383.


@inproceedings{chen2022characterizing,
  title={Characterizing Target-Absent Human Attention},
  author={Chen, Yupei and Yang, Zhibo and Chakraborty, Souradeep and Mondal, Sounak and Ahn, Seoyoung and Samaras, Dimitris and Hoai, Minh and Zelinsky, Gregory},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  pages={5031--5040},
  year={2022}
}

@article{yang2023predicting,
  title={Predicting Human Attention using Computational Attention},
  author={Yang, Zhibo and Mondal, Sounak and Ahn, Seoyoung and Zelinsky, Gregory and Hoai, Minh and Samaras, Dimitris},
  journal={arXiv preprint arXiv:2303.09383},
  year={2023}
}