Accepted Papers

1. Investigating Visual Features for Cognitive Impairment Detection Using In-the-wild Data

Fatimah A Alzahrani (The University of Sheffield)*; Bahman Mirheidari (University of Sheffield); Daniel Blackburn (University of Sheffield); Steve Maddock (University of Sheffield); Heidi Christensen (University of Sheffield)


Abstract:

Early detection of dementia has attracted much research interest due to its crucial role in helping people get suitable treatment or care. Video analysis may provide an effective approach for detection, with low cost and effort compared to current expensive and intensive clinical assessments. This paper investigates the use of a range of visual features - eye blink rate (EBR), head turn rate (HTR) and head movement statistical features (HMSF) - for identifying neurodegenerative disorder (ND), mild cognitive impairment (MCI) and functional memory disorder (FMD). These features are used in a novel multiple-thresholds approach, which is applied to an in-the-wild video dataset that includes data recorded in a range of challenging environments. A combination of EBR and HTR gives 78% accuracy in a three-way classification task (ND/MCI/FMD) and 83%, 83% and 92%, respectively, for the two-way classifications ND/MCI, ND/FMD and MCI/FMD. These results are comparable to related work that uses more features from different modalities. They also provide evidence to support the possibility of an in-the-home detection process for dementia or cognitive impairment.
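The thresholding idea behind the classification can be illustrated with a small sketch. The feature names (EBR, HTR) come from the abstract; the threshold values and decision rules below are hypothetical placeholders, not the ones learned in the paper:

```python
# Hedged sketch of a multiple-thresholds classifier over eye blink rate (EBR)
# and head turn rate (HTR). Threshold values and the decision order are
# illustrative assumptions, not the paper's actual parameters.

def classify(ebr: float, htr: float,
             ebr_thresh: float = 20.0, htr_thresh: float = 5.0) -> str:
    """Return one of 'ND', 'MCI', 'FMD' from two visual features."""
    if ebr < ebr_thresh and htr < htr_thresh:
        return "ND"    # both features below their thresholds
    if ebr < ebr_thresh or htr < htr_thresh:
        return "MCI"   # exactly one feature below its threshold
    return "FMD"       # both features above their thresholds

print(classify(ebr=12.0, htr=3.0))  # -> ND
```

In practice the thresholds would be tuned on training data, and combining two features (EBR and HTR) is what the abstract reports as yielding the best accuracy.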


2. Automatic Assessment of Infant Face and Upper-Body Symmetry as Early Signs of Torticollis

Michael Wan (The Roux Institute at Northeastern University); Xiaofei Huang (Northeastern University); Bethany Tunik (Independent Physical Therapist); Sarah Ostadabbas (Northeastern University)*


Abstract:

We apply computer vision pose estimation techniques developed expressly for the data-scarce infant domain to the study of torticollis, a common condition in infants for which early identification and treatment is critical. Specifically, we use a combination of facial landmark and body joint estimation techniques designed for infants to estimate a range of geometric measures pertaining to face and upper body symmetry, drawn from an array of sources in the physical therapy and ophthalmology research literature on torticollis. We gauge performance with a range of metrics and show that the estimates of most of these geometric measures are successful, yielding strong to very strong Spearman's rho correlation with ground truth values. Furthermore, we show that these estimates, derived from pose estimation neural networks designed for the infant domain, cleanly outperform estimates derived from more widely known networks designed for the adult domain.
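The evaluation metric named in the abstract, Spearman's rho, is a rank correlation between estimated and ground-truth measure values. A minimal self-contained version (assuming no tied values, which suffices for illustration; the numbers are invented, not the paper's data):

```python
# Spearman's rho between estimated geometric measures and ground-truth
# annotations, via the rank-difference formula (valid when there are no ties).

def rank(values):
    """1-based ranks of the values, smallest value gets rank 1."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r + 1.0
    return ranks

def spearman_rho(x, y):
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(rank(x), rank(y)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

estimated    = [1.2, 0.8, 2.5, 1.9, 0.4]  # hypothetical measure estimates
ground_truth = [1.1, 0.9, 1.8, 2.7, 0.5]  # hypothetical annotations
print(spearman_rho(estimated, ground_truth))  # -> 0.9
```

A rho near 1.0 is what the abstract describes as "strong to very strong" agreement between the infant-domain network estimates and the ground truth.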


3. New Insights on Weight Estimation from Face Images

Nélida Mirabet-Herranz (Eurecom)*; Khawla Mallat (SAP); Jean-Luc Dugelay (EURECOM, Campus SophiaTech)


Abstract:

Weight is a soft biometric trait whose estimation is useful in numerous health-related applications such as remote estimation by a health professional or at-home daily monitoring. In scenarios where a scale is unavailable or the subject is unable to cooperate, e.g. road accidents, estimating a person's weight from face appearance allows for a contactless measurement. In this article, we define an optimal transfer learning protocol for a ResNet50 architecture, obtaining better performance than the state of the art and thus moving one step forward in closing the gap between remote weight estimation and physical devices. We also demonstrate that gender splitting, image cropping and hair occlusion play an important role in weight estimation, which is not necessarily the case in face recognition. We use up-to-date explainability tools to illustrate and validate our assumptions. We conduct extensive simulations on the most popular publicly available face dataset annotated with weight to ensure a fair comparison with other approaches, and we aim to overcome its flaws by presenting our self-collected database composed of 400 new images.
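The gender-splitting analysis the abstract highlights amounts to evaluating the regressor's error separately per subgroup. A minimal sketch of such an evaluation (the prediction triples and error values are invented for illustration; the paper's protocol and metrics may differ):

```python
# Hedged sketch of gender-split evaluation for weight estimation.
# Each entry is (predicted_kg, true_kg, gender); all values are hypothetical.

def mae(pairs):
    """Mean absolute error in kg over (predicted, true) pairs."""
    return sum(abs(p - t) for p, t in pairs) / len(pairs)

results = [
    (70.0, 68.0, "M"), (82.0, 85.0, "M"),
    (58.0, 61.0, "F"), (66.0, 64.0, "F"),
]

for g in ("M", "F"):
    subset = [(p, t) for p, t, gg in results if gg == g]
    print(g, mae(subset))  # per-gender error in kg
```

Reporting error per gender (rather than one pooled number) is what reveals whether a single model serves both subgroups equally well.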


4. Pain Detection in Masked Faces during Procedural Sedation

Yasamin Zarghami (University of Toronto)*; Sebastian Mafeld (University Health Network); Aaron Conway (Cardiovascular Nursing Research); Babak Taati (University Health Network)


Abstract:

Pain monitoring is essential to the quality of care for patients undergoing a medical procedure with sedation. An automated mechanism for detecting pain could improve sedation dose titration. Previous studies on facial pain detection have shown the viability of computer vision methods in detecting pain in unoccluded faces. However, the faces of patients undergoing procedures are often partially occluded by medical devices and face masks. A previous preliminary study on pain detection in artificially occluded faces has shown a feasible approach to detecting pain from a narrow band around the eyes. This study collected video data from the masked faces of 14 patients undergoing procedures in an interventional radiology department and trained a deep learning model on this dataset. The model was able to detect expressions of pain accurately and, after causal temporal smoothing, achieved an average precision (AP) of 0.72 and an area under the receiver operating characteristic curve (AUC) of 0.82. These results outperform baseline models and show the viability of computer vision approaches for pain detection in masked faces during procedural sedation. Cross-dataset performance is also examined by training a model on a publicly available dataset and testing it on the sedation videos. The ways in which pain expressions differ between the two datasets are examined qualitatively.
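"Causal" temporal smoothing means the smoothed score for a frame depends only on that frame and earlier ones, so it could run online during a procedure. One common causal smoother is an exponential moving average; the sketch below is a generic illustration of the idea (the smoothing factor and scores are assumptions, not the paper's configuration):

```python
# Causal exponential moving average over per-frame pain scores: each output
# uses only the current and past frames, never future ones. The alpha value
# and the frame scores are hypothetical.

def causal_ema(scores, alpha=0.3):
    """Smooth a score sequence causally; higher alpha tracks input faster."""
    smoothed, s = [], None
    for x in scores:
        s = x if s is None else alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

frame_scores = [0.1, 0.2, 0.9, 0.8, 0.2]  # hypothetical per-frame model outputs
print(causal_ema(frame_scores))
```

Smoothing suppresses single-frame spikes from the classifier, which is typically what improves threshold-free metrics such as AP and AUC on noisy per-frame predictions.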