COCO-FreeView is a laboratory-quality dataset of free viewing behavior. It contains the same natural images used in COCO-Search18, but labeled with 822,602 eye fixations from a free-viewing task. 

Ten university students participated in the data collection. The experimental procedure was adapted from COCO-Search18: no target was cued, no response was required from the participant, and the viewing time for each image was fixed at 5 seconds.


👏 We are releasing COCO-FreeView to the public! We took down the testing data in order to set up an online evaluation service. Stay tuned!


The COCO-FreeView dataset contains:

Code on GitHub
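As an illustration of how per-trial fixation data of this kind is typically consumed, the sketch below parses a small in-memory scanpath record and summarizes it. The field names (`name`, `subject`, `X`, `Y`, `T`) and the example image filename follow the COCO-Search18 release format and are assumptions here, not the confirmed COCO-FreeView schema.

```python
import json

# Hypothetical example record; the schema (name, subject, X, Y, T) is
# borrowed from the COCO-Search18 release format and may differ in the
# actual COCO-FreeView files.
sample = json.loads("""
[
  {"name": "example_image.jpg",
   "subject": 1,
   "X": [512.3, 301.7, 640.0],
   "Y": [384.1, 220.5, 410.9],
   "T": [180, 240, 310]}
]
""")

# Summarize each trial: number of fixations and total fixation duration.
for trial in sample:
    n_fix = len(trial["X"])
    total_ms = sum(trial["T"])
    print(f"{trial['name']}: {n_fix} fixations, {total_ms} ms total")
```

Here `X`/`Y` are fixation coordinates in image pixels and `T` the fixation durations in milliseconds, one entry per fixation in viewing order.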


Chen, Y., Yang, Z., Chakraborty, S., Mondal, S., Ahn, S., Samaras, D., Hoai, M., & Zelinsky, G. (2022). Characterizing Target-Absent Human Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 5031-5040).

Yang, Z., Mondal, S., Ahn, S., Zelinsky, G., Hoai, M., & Samaras, D. (2023). Predicting Human Attention using Computational Attention. arXiv preprint arXiv:2303.09383.


@inproceedings{chen2022characterizing,
  title={Characterizing Target-Absent Human Attention},
  author={Chen, Yupei and Yang, Zhibo and Chakraborty, Souradeep and Mondal, Sounak and Ahn, Seoyoung and Samaras, Dimitris and Hoai, Minh and Zelinsky, Gregory},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  pages={5031--5040},
  year={2022}
}





@article{yang2023predicting,
  title={Predicting Human Attention using Computational Attention},
  author={Yang, Zhibo and Mondal, Sounak and Ahn, Seoyoung and Zelinsky, Gregory and Hoai, Minh and Samaras, Dimitris},
  journal={arXiv preprint arXiv:2303.09383},
  year={2023}
}