Eighth Emotion Recognition in the Wild Challenge (EmotiW)

ACM International Conference on Multimodal Interaction 2020

1. Is it necessary to use both audio-video channels in the audio-video challenge?

The challenge data contains: audio, video and meta-data. The meta-data is composed of actor identity, age and gender. The participants are welcome to use any combination of modalities.

2. Can scene information other than face information be used?

Context analysis in facial expression recognition (FER) is an active research topic. Participants can use scene, background, body pose, etc. along with the face information.

3. Which face and fiducial point detectors have you used?

For face detection, we found Zhu and Ramanan's mixture-of-parts based detector useful in our experiments. The authors have made an implementation of their method publicly available at: LINK. The tracking was performed using the Intraface tracker [LINK].

4. Can I use both train and validate data for learning my model?

For evaluating a method on the Test set, data from both the Train and Val partitions can be used for learning the model.

5. Is the use of a commercial face detector such as Google Picasa OK?

Any face detector whether commercial or academic can be used to participate in the challenge. The paper accompanying the challenge result submission should contain clear details of the detectors/libraries used.

6. Can I learn my model on both labelled train and unlabelled test data?

No. The partitions are subject independent, and the Test data is to be used for testing purposes only.

7. Can I use external data for training along with the one provided?

The participants are free to use external data for training along with the AFEW Train and Val partitions. However, this should be clearly discussed in the accompanying paper.

8. Will the review process be anonymous?

The review process is double-blind.

9. Is it necessary to participate in both the challenges?

Participants may take part in either or both of the challenges.

10. Mismatch between the number of samples in a set (Train/Val) and the aligned faces/features?

For some samples in the Train and Val sets, the face detector failed. For these, no aligned faces or features are provided.
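Such samples can simply be excluded from face-based pipelines. A minimal sketch for listing the clips that lack aligned faces, assuming a hypothetical layout where clips live in a `videos` directory and aligned faces in one folder per clip (adjust names and extensions to the actual AFEW release):

```python
from pathlib import Path

def missing_aligned(video_dir: str, faces_dir: str) -> list[str]:
    """Return sample IDs that have a video clip but no aligned-face folder."""
    # Clip IDs are taken from the video filenames (e.g. 000123.avi -> 000123).
    video_ids = {p.stem for p in Path(video_dir).glob("*.avi")}
    # Aligned faces are assumed to be stored one subdirectory per clip ID.
    face_ids = {p.name for p in Path(faces_dir).iterdir() if p.is_dir()}
    return sorted(video_ids - face_ids)
```

The returned IDs can then be filtered out of the training list before feature extraction.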

11. Can the Test data from earlier EmotiW challenges be used for Training a model in 2020?

No.

12. Is there any sanity check on the Test results?

Similar to earlier EmotiW challenges, the top three performing teams will be asked to share their code/executable/library for checking at our end.

13. Are the papers published in EmotiW part of the workshop proceedings?

No, EmotiW is a grand challenge in ICMI and hence, EmotiW papers are part of the ICMI main conference proceedings.

14. What are the criteria for paper acceptance?

A team may choose to submit a paper describing its method. Acceptance depends on the team's relative performance in the challenge and on the anonymous reviews (including novelty) of the submitted paper. Note, however, that the top three performing teams must submit papers in order to be ranked. The leaderboard will be posted right after the Test phase. As with earlier EmotiW challenges, the code of the top three teams will also be run for verification.

For any queries, please email: EmotiW2014@gmail.com