In mobile eye tracking, it is important to compare the performance of different eye tracking algorithms that provide the mapping between the eye image and objects in the world (e.g., different pupil detection techniques, semantic segmentation of eye images, 2D vs. 3D tracking of eye features). Furthermore, as the subject moves through the environment, we are interested in techniques for ultra-wide eye tracking solutions, i.e., beyond a 100° field of view. The need for eye tracking re-calibration and validation is known to be inevitable. Thus, we encourage techniques developed to detect and reduce the effect of slippage and loss of data accuracy, either post hoc or at run time.
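To make the slippage point concrete, the following minimal sketch compares reported gaze directions against known validation targets and flags a validation block whose median angular error exceeds a chosen threshold, signalling that re-calibration may be needed. It assumes gaze and target directions are available as unit 3D vectors; the function names and the 2° threshold are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def angular_error_deg(gaze_dirs, target_dirs):
    """Angular error (degrees) between unit gaze vectors and unit target vectors."""
    cos = np.clip(np.sum(gaze_dirs * target_dirs, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def detect_slippage(gaze_dirs, target_dirs, threshold_deg=2.0):
    """Flag a validation block whose median error exceeds the threshold,
    suggesting headset slippage and the need for re-calibration."""
    err = angular_error_deg(np.asarray(gaze_dirs, float),
                            np.asarray(target_dirs, float))
    return np.median(err) > threshold_deg, float(np.median(err))
```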
Most head motion tracking solutions use post-hoc techniques to estimate the orientation of the head. While we invite studies that take into account the synergy between eye and head movements, we especially encourage contributions to real-time, on-board head pose tracking systems. Furthermore, we encourage studies that measure the accuracy of stand-alone tracking devices used in VR systems. As we record the movements of the head along with the eye, conventional eye movement events such as saccades and fixations are no longer sufficient to explain the full synergy between the eye, the head, and the object of interest in the real world. Therefore, we encourage studies that incorporate head motion into their annotation and classification algorithms in order to better understand the underlying oculomotor mechanisms.
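As one way to illustrate why head motion matters for event classification, the sketch below (an illustrative example, not an established algorithm) rotates eye-in-head gaze vectors into world coordinates using head rotation matrices and labels samples where the eye moves within the head while gaze remains stable in the world, a signature of the vestibulo-ocular reflex. The array conventions and velocity thresholds are assumptions.

```python
import numpy as np

def gaze_in_world(head_R, gaze_in_head):
    """Rotate eye-in-head gaze vectors into world coordinates.
    head_R: (N, 3, 3) head rotation matrices (e.g., from an IMU or SLAM pipeline).
    gaze_in_head: (N, 3) unit gaze vectors in the headset frame."""
    return np.einsum('nij,nj->ni', head_R, gaze_in_head)

def angular_speed_deg(dirs, fs):
    """Sample-to-sample angular speed (deg/s) of a sequence of unit vectors."""
    cos = np.clip(np.sum(dirs[:-1] * dirs[1:], axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)) * fs

def label_vor(gaze_head, head_R, fs, eye_thresh=20.0, world_thresh=10.0):
    """Rough labelling: the eye moves in the head while gaze is stable in the
    world -> likely vestibulo-ocular reflex (gaze fixation during head motion)."""
    v_eye = angular_speed_deg(gaze_head, fs)
    v_world = angular_speed_deg(gaze_in_world(head_R, gaze_head), fs)
    return (v_eye > eye_thresh) & (v_world < world_thresh)
```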
Currently, the major issues are the limited field of view and poor optics; a higher-quality camera with better optics is likely to add weight. Beyond field of view and optics, issues such as image quality, dynamic range, color consistency, frame rate, and spatial resolution also need to be addressed scientifically. Furthermore, the compromise between video quality and file size raises the question of which compression techniques and parameters are optimal during data acquisition and for post-hoc storage. Recently, reliable and lightweight depth camera technologies have become available for head-mounted devices, opening up a wide range of possibilities.
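To illustrate the kind of compression trade-off analysis we have in mind, here is a small sketch that re-encodes a scene video at several constant-rate-factor (CRF) settings and reports the resulting file sizes. It assumes ffmpeg with libx264 is available on the system path; the CRF values and file names are hypothetical. Pairing these sizes with a quality metric computed against the original would yield a simple rate-distortion curve for choosing acquisition and storage parameters.

```python
import os
import subprocess

def encode(src, crf, out):
    """Re-encode the scene video with libx264 at the given CRF (quality) setting."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), "-an", out],
        check=True, capture_output=True)

def crf_sweep(src, crfs=(18, 23, 28, 33)):
    """Report the file size obtained at each CRF value."""
    for crf in crfs:
        out = f"scene_crf{crf}.mp4"
        encode(src, crf, out)
        print(f"CRF {crf}: {os.path.getsize(out) / 1e6:.1f} MB")
```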
The day-to-day challenges of mobile eye tracking can be further exacerbated under certain experimental conditions and with specific participant populations. The latter include children and older adults, who may have additional ergonomic and physiological constraints, as well as patient populations (e.g., those with strabismus or macular damage). Experiments conducted in darkness or with artificially dilated pupils also pose additional challenges for the computer vision-based approaches commonly employed in mobile eye tracking.
Considering the growing demands of state-of-the-art deep learning techniques, one key feature of a dataset is accurate and reliable annotation. Therefore, we welcome submissions that include efforts to annotate different aspects of such datasets efficiently. This includes, but is not limited to, different types of eye movements, head motion, different eye regions in real or synthetic images, and scene objects. To clearly define the limits of the topic, we are only looking for submissions related to egocentric eye tracking projects, not broader image and video datasets, which are beyond the scope of this workshop.
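As a concrete example of the kind of annotation we mean, the following sketch defines a minimal label record that places head motion alongside conventional eye movement events and exports it to JSON. The field names and label vocabulary are illustrative assumptions rather than a proposed standard.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class EventLabel:
    """One annotated event in an egocentric recording (fields are illustrative)."""
    t_start: float                 # seconds from recording start
    t_end: float
    eye_event: str                 # e.g., "fixation", "saccade", "smooth_pursuit"
    head_motion: str               # e.g., "stationary", "rotation", "translation"
    target_object: Optional[str]   # scene object the gaze lands on, if annotated

labels = [
    EventLabel(0.00, 0.35, "fixation", "stationary", "mug"),
    EventLabel(0.35, 0.41, "saccade", "rotation", None),
]
with open("annotations.json", "w") as f:
    json.dump([asdict(label) for label in labels], f, indent=2)
```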
We also invite reports on the best practices researchers follow when running large-scale data collection: comparisons between different calibration routines, subjects' comfort reports, the experimenter's procedure, UI designs that can efficiently be used by a larger group of participants, system error tolerance, and error handling and notifications, particularly in instances such as the pandemic when there might be less in-person supervision by the experimenter.
An important obstacle to large-scale and outdoor data collection, especially over extended periods, is subject discomfort due to the ergonomic design of the head-worn device. In the context of data collection from special populations, e.g., children, the final form factor of the device faces additional design constraints. In this workshop, we welcome human factors researchers from academia and industry to share their experience and potential solutions.