Aff-Wild
Frames from the Aff-Wild database showing subjects of different ethnicities in different emotional states, captured in a variety of head poses, illumination conditions and occlusions.
Abstract
Automatic understanding of human affect using visual signals is of great importance in everyday human–machine interactions. Appraising human emotional states, behaviors and reactions displayed in real-world settings can be accomplished using latent continuous dimensions (e.g., the circumplex model of affect). Valence (i.e., how positive or negative an emotion is) and arousal (i.e., how strongly the emotion is activated) constitute popular and effective representations for affect. Nevertheless, the majority of datasets collected until 2016, although containing naturalistic emotional states, were captured in highly controlled recording conditions. In this work, we introduce the Aff-Wild benchmark for training and evaluating affect recognition algorithms. We also organized the First Affect-in-the-wild Challenge (Aff-Wild Challenge) on the Aff-Wild database, in conjunction with CVPR 2017; it was the first-ever challenge on the estimation of valence and arousal in-the-wild. Finally, we propose an end-to-end deep learning architecture, AffWildNet, which produces state-of-the-art results on the Aff-Wild database.
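As an informal illustration of the valence–arousal representation described above (not part of the Aff-Wild release or of AffWildNet), the small sketch below maps a (valence, arousal) pair, assumed to lie in [-1, 1] as in the circumplex model, to a coarse affect quadrant. The function name and the example emotion labels are hypothetical and chosen only for clarity.

```python
# Illustrative sketch only: a coarse reading of the circumplex model of
# affect. Valence (negative..positive) and arousal (calm..activated) are
# assumed to be continuous values in [-1, 1].

def quadrant(valence: float, arousal: float) -> str:
    """Return a coarse affect quadrant for a (valence, arousal) pair."""
    if not (-1.0 <= valence <= 1.0 and -1.0 <= arousal <= 1.0):
        raise ValueError("valence and arousal are expected in [-1, 1]")
    if valence >= 0:
        # Positive emotions: activation separates excitement from calm.
        return ("high-arousal positive (e.g., excitement)" if arousal >= 0
                else "low-arousal positive (e.g., calm)")
    # Negative emotions: activation separates anger from sadness.
    return ("high-arousal negative (e.g., anger)" if arousal >= 0
            else "low-arousal negative (e.g., sadness)")

print(quadrant(0.8, 0.7))    # a strongly positive, highly activated state
print(quadrant(-0.6, -0.5))  # a negative, low-activation state
```

In the actual Aff-Wild annotations, valence and arousal are continuous per-frame values; the quadrant labels here are only a reading aid for the two dimensions.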
How to acquire the data
If you want access to Aff-Wild, as described in our IJCV and CVPR papers (please check the References section below), send an email to d.kollias@qmul.ac.uk from your official academic email address (data cannot be released to personal emails) with the subject: Aff-Wild request. Include in the email the reason why you require access to the Aff-Wild database, your official academic/industrial website, and your job title (if you are a student, specify whether you are a UG, PG or PhD student).
Source Code
The source code and the trained weights of various models can be found in the following github repository:
https://github.com/dkollias/Aff-Wild-models
Note that the source code and the model weights are made available for academic, non-commercial research purposes only. If you want to use them for any other purpose (e.g., industrial, whether research or commercial), email: d.kollias@qmul.ac.uk
Important Information
All the images of the database are obtained from YouTube. We are not responsible for the content or the meaning of these images.
You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes, any portion of the images and any portion of derived data.
You agree not to further copy, publish or distribute any portion of the annotations of the dataset. As an exception, you may make copies of the dataset for internal use at a single site within the same organization.
We reserve the right to terminate your access to the dataset at any time.
References:
If you use the above data, the source code or the model weights, you must cite the following papers:
D. Kollias, S. Zafeiriou: "Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework", 2021
@article{kollias2021affect, title={Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2103.15792}, year={2021}}
D. Kollias, et al.: "Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond". International Journal of Computer Vision (2019)
@article{kollias2019deep, title={Deep affect prediction in-the-wild: Aff-wild database and challenge, deep architectures, and beyond}, author={Kollias, Dimitrios and Tzirakis, Panagiotis and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Schuller, Bj{\"o}rn and Kotsia, Irene and Zafeiriou, Stefanos}, journal={International Journal of Computer Vision}, pages={1--23}, year={2019}, publisher={Springer} }
S. Zafeiriou, et al.: "Aff-Wild: Valence and Arousal in-the-wild Challenge". CVPR, 2017
@inproceedings{zafeiriou2017aff, title={Aff-wild: Valence and arousal 'in-the-wild' challenge}, author={Zafeiriou, Stefanos and Kollias, Dimitrios and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Kotsia, Irene}, booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on}, pages={1980--1987}, year={2017}, organization={IEEE} }
D. Kollias, et al.: "Recognition of affect in the wild using deep neural networks". CVPR, 2017
@inproceedings{kollias2017recognition, title={Recognition of affect in the wild using deep neural networks}, author={Kollias, Dimitrios and Nicolaou, Mihalis A and Kotsia, Irene and Zhao, Guoying and Zafeiriou, Stefanos}, booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on}, pages={1972--1979}, year={2017}, organization={IEEE} }