Evaluating affect analysis methods is challenging because inconsistent database partitioning and evaluation protocols lead to unfair and biased comparisons. Previous studies claim continuous performance improvements, but our findings challenge such assertions. Building on these insights, we propose a unified protocol for database partitioning that ensures fair and comparable evaluation.
For six widely used affective databases, we provide detailed demographic annotations (race, gender, and age), evaluation metrics, and a common framework for expression recognition, action unit detection, and valence-arousal estimation.
We also provide multiple trained models (baseline and state-of-the-art) under the new protocol and introduce new leaderboards to encourage future research in affect recognition under fairer comparison. Our annotations, code, and pre-trained models are available here.
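To make the fairness angle concrete, the sketch below shows one way the demographic annotations can be used at evaluation time: scoring expression recognition separately per demographic group rather than as a single aggregate number that can hide disparities. This is a minimal illustration, not the framework's actual API; the array layout and group labels are hypothetical.

```python
import numpy as np

def per_group_accuracy(pred, true, groups):
    """Expression-recognition accuracy broken down by a demographic
    attribute (e.g. race, gender, or an age bucket)."""
    return {g: float((pred[groups == g] == true[groups == g]).mean())
            for g in np.unique(groups)}

# Hypothetical per-test-sample arrays, aligned by index.
pred   = np.array([0, 1, 3, 2, 1, 0])          # predicted expression class
true   = np.array([0, 1, 2, 2, 1, 1])          # ground-truth class
groups = np.array(["female", "male", "female",
                   "male", "female", "male"])  # demographic annotation
print(per_group_accuracy(pred, true, groups))
# {'female': 0.667, 'male': 0.667} -- report per group, not just overall
```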
CUE-Net is a novel architecture designed for automated violence detection in video surveillance. CUE-Net combines spatial cropping with an enhanced version of the UniformerV2 architecture, integrating convolutional and self-attention mechanisms alongside a novel Modified Efficient Additive Attention mechanism (which reduces the quadratic time complexity of self-attention) to identify violent activities effectively and efficiently. Code for this work is available here.
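To illustrate where the efficiency comes from: standard self-attention builds an N×N token-token matrix, while efficient additive attention scores each token against a single learned vector and pools the result into one global query, so the cost grows linearly with the number of tokens. The PyTorch sketch below follows the general Efficient Additive Attention recipe; it is an assumed simplification, not the exact Modified variant used in CUE-Net.

```python
import torch
import torch.nn as nn

class EfficientAdditiveAttention(nn.Module):
    """Linear-complexity attention: tokens are summarized into one
    global query vector instead of forming an N x N attention matrix."""
    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.w_g = nn.Parameter(torch.randn(dim, 1))  # learned scoring vector
        self.scale = dim ** -0.5
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        q = self.to_q(x)                                # (B, N, D)
        k = self.to_k(x)                                # (B, N, D)
        # One score per token via a learned vector: O(N), not O(N^2).
        attn = (q @ self.w_g) * self.scale              # (B, N, 1)
        attn = attn.softmax(dim=1)
        global_q = (attn * q).sum(dim=1, keepdim=True)  # (B, 1, D)
        # Broadcast the pooled global context back to every key token.
        out = global_q * k                              # (B, N, D)
        return self.proj(out) + q                       # residual connection

x = torch.randn(2, 196, 64)   # e.g. 196 spatial tokens of width 64
print(EfficientAdditiveAttention(64)(x).shape)  # torch.Size([2, 196, 64])
```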
A deep neural network for valence-arousal estimation, trained on the Aff-Wild database and achieving state-of-the-art results on it. The trained models can be found here.
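For context, valence-arousal models on Aff-Wild are commonly optimized with a loss derived from the Concordance Correlation Coefficient (CCC), the benchmark's evaluation metric. The PyTorch sketch below assumes that common choice; the exact loss used to train these models may differ.

```python
import torch

def ccc_loss(pred: torch.Tensor, true: torch.Tensor) -> torch.Tensor:
    """1 - CCC: penalizes low correlation as well as mean/variance
    mismatch between predictions and labels."""
    pred_mean, true_mean = pred.mean(), true.mean()
    pred_var = ((pred - pred_mean) ** 2).mean()
    true_var = ((true - true_mean) ** 2).mean()
    covar = ((pred - pred_mean) * (true - true_mean)).mean()
    ccc = 2 * covar / (pred_var + true_var + (pred_mean - true_mean) ** 2)
    return 1 - ccc

# Hypothetical batch of valence predictions vs. ground truth in [-1, 1].
pred = torch.tensor([0.2, -0.1, 0.5, 0.7])
true = torch.tensor([0.3, -0.2, 0.4, 0.6])
print(ccc_loss(pred, true))  # near 0 when predictions track the labels
```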
This repository contains our solution to the OMG-Emotion Challenge 2018, which uses only visual data for valence-arousal estimation on the OMG Emotion database and ranked 2nd in vision-only valence estimation and 2nd in overall valence estimation. More details, along with training and inference code, can be found here.