Unsupervised Learning of Eye Gaze Representation

Neeru Dubey, Shreya Ghosh and Abhinav Dhall

Sample images from the dataset**

Abstract

Automatic eye gaze estimation has interested researchers for a long time. In this paper, we propose an unsupervised learning based method for estimating the eye gaze region. To train the proposed network 'Ize-Net' in a self-supervised manner, we collect a large 'in the wild' dataset containing 154,251 images from the web. For each image in the dataset, we divide the gaze into three regions using an automatic technique based on pupil-center localization, and then use a feature-based technique to determine the gaze region. Performance is evaluated on the TabletGaze and CAVE datasets by fine-tuning Ize-Net for the task of eye gaze estimation. The learned feature representation is also used to train traditional machine learning algorithms for eye gaze estimation. The results demonstrate that the proposed method learns a rich data representation, which can be efficiently fine-tuned for any eye gaze estimation dataset.
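The paper labels web images with one of three coarse gaze regions using automatically localized pupil centers. The exact labeling rule and thresholds are not given here, so the following is only a minimal illustrative sketch: it assumes we already have the pupil center and the eye-corner x-coordinates for one eye, and the 35%/65% split points are assumptions, not the paper's values.

```python
def gaze_region(pupil_x, inner_corner_x, outer_corner_x, margin=0.35):
    """Classify a coarse horizontal gaze region from a localized pupil center.

    Illustrative sketch only: the real pipeline localizes pupil centers
    automatically; the `margin` split (35%/65%) is an assumed threshold.
    """
    # Normalized pupil position within the eye: 0 = inner corner, 1 = outer corner.
    rel = (pupil_x - inner_corner_x) / (outer_corner_x - inner_corner_x)
    if rel < margin:
        return "left"
    if rel > 1.0 - margin:
        return "right"
    return "center"

# Pupil roughly midway between the eye corners maps to the center region.
print(gaze_region(50, 30, 70))
```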

  • Unsupervised Learning of Eye Gaze Representation from the Web. Neeru Dubey, Shreya Ghosh, Abhinav Dhall, IJCNN-2019. [paper, code]

If you find this work useful for your research, please consider citing our work:

@inproceedings{dubey2019unsupervised,
  title={Unsupervised Learning of Eye Gaze Representation from the Web},
  author={Dubey, Neeru and Ghosh, Shreya and Dhall, Abhinav},
  booktitle={International Joint Conference on Neural Networks (IJCNN)},
  year={2019}
}


Contact:

  • For code, data, and any other queries, please email the authors below:

Neeru Dubey (neerudubey@iitrpr.ac.in), Shreya Ghosh (shreya.ghosh@monash.edu), Abhinav Dhall (abhinav.dhall@monash.edu).

** Image source: YouTube Creative Commons channels.