Modeling visual attention, especially bottom-up, image-driven saliency, has been the subject of intense research over the past 20 years. Many models are now available, and they have been evaluated on different datasets using various evaluation measures.

Our mission here is to unify research in visual attention modeling by sharing evaluation software and benchmark datasets. To this end, we have already run and evaluated nearly 30 saliency models on synthetic images and on eye-movement datasets for both still images and videos.

We hope that our efforts help establish standard benchmark datasets and evaluation scores for the fair comparison of models, thereby accelerating progress in saliency modeling research.

Clearly, the success of this project depends heavily on contributions from researchers across the field.

Note: We will update this website soon. Please stay tuned.

Ali Borji and Laurent Itti  {borji,itti}@usc.edu                                        
Please also visit our CVPR 2013 saliency tutorial at: http://ilab.usc.edu/borji/cvpr2013/

Figure: The iLab Neuromorphic Vision C++ Toolkit (iNVT), developed at iLab, USC (http://ilab.usc.edu/toolkit/). A saccade is targeted to the location that differs from its surroundings in several feature channels. In this frame of a video, attention is strongly driven by motion saliency.
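The caption above summarizes the core bottom-up mechanism: the next saccade is sent to the location whose feature response differs most from its surround. The sketch below is an illustrative single-channel (intensity) version of that center-surround principle, not the actual iNVT implementation; the function names, filter sizes, and toy input are our own choices for demonstration.

```python
import numpy as np

def box_blur(img, k):
    # Separable box filter of odd width k; reflect padding keeps the
    # output the same size as the input.
    pad = k // 2
    kern = np.ones(k) / k
    out = np.pad(img, pad, mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, out)
    return out

def saliency_map(img, center_k=3, surround_k=15):
    # Center-surround difference: a location is salient when its
    # fine-scale (center) response differs from the coarse-scale
    # (surround) response.
    cs = np.abs(box_blur(img, center_k) - box_blur(img, surround_k))
    return cs / cs.max() if cs.max() > 0 else cs

# A toy "frame": one small bright patch on a dark background.
frame = np.zeros((32, 32))
frame[10:14, 20:24] = 1.0

sal = saliency_map(frame)
# Winner-take-all: the saccade targets the most salient location.
target = np.unravel_index(np.argmax(sal), sal.shape)
```

In a full model like iNVT, maps of this kind are computed per feature channel (intensity, color, orientation, motion) at multiple scales and then combined before the winner-take-all stage selects the saccade target.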