2) Apply for access to the sub-track(s) you wish to participate in using the links below:
Track 1: MAR: https://codalab.lisn.upsaclay.fr/competitions/22399
Note that the test set (1138 samples) of Track 1 is a subset sampled from the original test set (5586 samples) of the MA-52 dataset.
Track 2: MMAD: https://codalab.lisn.upsaclay.fr/competitions/22455
Note: Only accounts that have submitted their registration information will be approved.
3) Follow the instructions on the Datasets page and the HuggingFace page to apply for the dataset used in this competition.
4) Once your dataset application is approved, you will be granted access to the evaluation system within 24 hours.
If you have any questions, please feel free to contact Dr. Kun Li.
Track 1: Micro-Action Recognition
Baseline code: https://github.com/kunli-cs/MAC20205_starter_kit/tree/main/MAR
Development Phase (2025.4.1~2025.6.30):
Participants must submit their predictions in a single .zip file. You can check the submission sample file here.
Testing Phase (2025.7.1~2025.7.8):
Participants must submit their predictions in a single .zip file. You can check the submission.csv sample file here.
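For reference, here is a minimal Python sketch of the packaging step. The file name submission.csv and the flat archive layout are assumptions; the provided sample file is authoritative.

# Minimal sketch for packaging predictions into the single .zip that the
# evaluation system expects; the exact file name and CSV layout should
# follow the provided sample file (assumed here to be submission.csv).
import zipfile

def make_submission(csv_path="submission.csv", zip_path="submission.zip"):
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(csv_path, arcname="submission.csv")

make_submission()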
Note: We encourage all participants in the micro-action recognition task to also evaluate their approaches on Bodily Behaviour Recognition in the related MultiMediate'25 challenge (https://multimediate-challenge.org/Description/). Strong results in both challenges will improve the chances of your papers being accepted. However, please be aware that we do not accept double submissions, i.e., you should submit your paper to only one of the two challenges (either MAC'25 or MultiMediate'25).
Track 2: Multi-label Micro-Action Detection
Baseline code: https://github.com/VUT-HFUT/MAC_2024_baseline/tree/main/MMAD
We adopt a cross-subject evaluation protocol: the video samples in the training, validation, and test sets come from different subjects.
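To illustrate what the cross-subject protocol means in practice, the sketch below checks that the subject IDs of the three splits are disjoint. The annotation layout (one CSV per split with a subject_id column) is an assumption; consult the dataset documentation for the actual format.

# Sanity-check sketch for a cross-subject split; assumes each split is a CSV
# with a "subject_id" column, which may differ from the actual annotation files.
import csv

def subjects(split_csv):
    # Collect the set of subject IDs appearing in one annotation split.
    with open(split_csv, newline="") as f:
        return {row["subject_id"] for row in csv.DictReader(f)}

train, val, test = map(subjects, ("train.csv", "val.csv", "test.csv"))
# Cross-subject protocol: no subject may appear in more than one split.
assert train.isdisjoint(val) and train.isdisjoint(test) and val.isdisjoint(test)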
Development Phase (2025.4.1~2025.6.30):
Participants must submit their predictions in a single .zip file. You can check the submission.csv sample file here.
Testing Phase (2025.7.1~2025.7.8):
Participants must submit their predictions in a single .zip file. You can check the submission.csv sample file here.
Micro-Action Recognition (MAR):
Submissions will be ranked in terms of an overall score based on F1_mean [Ref1], which is defined as follows:

F1_mean = (F1_micro^b + F1_macro^b + F1_micro^a + F1_macro^a) / 4

where "b" and "a" denote the granularity of labeling for body-level and action-level categories in the MA-52 dataset, respectively. The labeling of body parts can be derived from the labeling of micro-action categories. "micro" and "macro" refer to micro- and macro-averaged F1 scores.
Submissions will be ranked on the basis of the F1_mean score on the test set.
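For illustration, here is a minimal Python sketch of the metric using scikit-learn. The action_to_body argument is a hypothetical stand-in for the derivation of body-level labels from micro-action categories; the official evaluation code in the starter kit is authoritative.

# Sketch of F1_mean, assuming ground truth and predictions are action-level
# class IDs, and that action_to_body maps each action-level category to its
# body-level category (hypothetical mapping, derivable from the dataset).
from sklearn.metrics import f1_score

def f1_mean(y_true_action, y_pred_action, action_to_body):
    y_true_body = [action_to_body[a] for a in y_true_action]
    y_pred_body = [action_to_body[a] for a in y_pred_action]
    scores = [
        f1_score(y_true_body, y_pred_body, average="micro"),      # F1_micro^b
        f1_score(y_true_body, y_pred_body, average="macro"),      # F1_macro^b
        f1_score(y_true_action, y_pred_action, average="micro"),  # F1_micro^a
        f1_score(y_true_action, y_pred_action, average="macro"),  # F1_macro^a
    ]
    return sum(scores) / len(scores)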
Multi-label Micro-Action Detection (MMAD):
Similar to typical multi-label action detection tasks, we use the detection-mAP metric [Ref2] as the evaluation metric for the MMAD task. The detection-mAP metric evaluates the completeness of predicted micro-action instances: it is the instance-wise mAP of action predictions under different tIoU thresholds, i.e., [0.1:0.1:0.9] (from 0.1 to 0.9 in steps of 0.1).
Submissions will be ranked on the basis of the average mAP score on the test set.
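A simplified Python sketch of the metric follows. The instance format (start, end, class, score) and the greedy one-to-one matching are assumptions; the official evaluation code in the starter kit is authoritative.

# Simplified sketch of detection-mAP averaged over tIoU thresholds
# 0.1, 0.2, ..., 0.9; assumed instance format: (start, end[, score]).
import numpy as np

def tiou(a, b):
    # Temporal IoU of two (start, end) segments.
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def average_precision(preds, gts, thr):
    # preds: list of (start, end, score); gts: list of (start, end).
    if not gts:
        return 0.0
    preds = sorted(preds, key=lambda p: -p[2])
    matched, tp = set(), []
    for start, end, _ in preds:
        # Greedily match each prediction to the best unmatched ground truth.
        cands = [(tiou((start, end), g), i)
                 for i, g in enumerate(gts) if i not in matched]
        best_iou, best_i = max(cands, default=(0.0, -1))
        if best_iou >= thr:
            matched.add(best_i)
            tp.append(1)
        else:
            tp.append(0)
    tp = np.array(tp, dtype=float)
    precision = np.cumsum(tp) / (np.arange(len(tp)) + 1)
    # Non-interpolated AP: precision at each true positive, averaged over #GT.
    return float(np.sum(precision * tp) / len(gts))

def detection_map(preds_by_class, gts_by_class):
    # Mean AP over classes, then averaged over all tIoU thresholds.
    thresholds = np.arange(0.1, 1.0, 0.1)
    maps = []
    for thr in thresholds:
        aps = [average_precision(preds_by_class.get(c, []), gts, thr)
               for c, gts in gts_by_class.items()]
        maps.append(np.mean(aps))
    return float(np.mean(maps))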
Notice: The organizers will evaluate and rank all teams based on their submitted results. The top 3 teams must submit their code to the organizers (email: kunli.hfut@gmail.com, guodan@hfut.edu.cn) so that the reproducibility of their algorithms can be verified. The final results and ranking will be confirmed and announced by the organizers.