2nd Workshop on Learning with Few or Without Annotated Face, Body and Gesture Data

In conjunction with the 18th IEEE Conference on Automatic Face and Gesture Recognition

May 31, 2024, Istanbul, Turkey

Aim and scope

For more than a decade, Deep Learning has been successfully employed for vision-based face, body and gesture analysis, at both static and dynamic granularities. This success is largely due to the development of effective deep architectures and the release of sizeable datasets.

However, one of the main limitations of Deep Learning is that it requires large-scale annotated datasets to train effective models. Gathering such face, body or gesture data and annotating it can be time-consuming and laborious. This is particularly true in areas where domain experts are required, such as the medical domain, where crowdsourcing may not be suitable.

In addition, currently available face and/or gesture datasets cover a limited set of categories, which makes adapting trained models to novel categories far from straightforward. Finally, while most available datasets focus on classification problems with discretized labels, many scenarios require continuous annotations, which significantly complicates the annotation process.

The goal of this 2nd edition of the workshop is to explore approaches to overcoming these limitations by investigating ways to learn from few annotated samples, to transfer knowledge from similar domains or problems, or to engage the community in gathering novel large-scale annotated datasets.

TOPICS

We encourage scientists and industry practitioners to submit contributions under one of the following topics of interest, but also welcome any novel relevant research in the field:

This workshop is organized in conjunction with the 18th IEEE Conference on Automatic Face and Gesture Recognition, Istanbul, May 27-31, 2024.