Notice: The organizers will evaluate and rank all teams based on their submitted results. The top 3 teams need to submit their code to us (email: kunli.hfut@gmail.com, guodan@hfut.edu.cn) to ensure the algorithms' reproducibility. The final results and ranking will be confirmed and announced by the organizers.
2) Using the links below, apply for access to the sub-challenge(s) you want to participate in:
3) Follow the instructions on the Data page to apply for the dataset used in this competition.
4) Subsequently, you should be granted access to the evaluation system within 24 hours. Instructions on the submission format can be found there.
Track 1: Micro-Action Recognition
Baseline code: https://github.com/VUT-HFUT/MAC_2024_baseline/tree/main/MAR
Development Phase (2024.5.1~2024.6.30):
Participants must submit their predictions in a single .zip file. You can check the submission.csv sample file here.
Testing Phase (2024.7.1~2024.7.8):
Participants must submit their predictions in a single .zip file. You can check the submission.csv sample file here.
Submission Format:
The "submission.csv" file follows the format "Id, pred_label_1, pred_label_2", where "Id" denotes the video ID, and "pred_label_1" and "pred_label_2" denote the body-level and action-level micro-action categories, respectively.
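As a concrete illustration, a Track 1 submission file in this format could be produced with Python's standard csv module. The video IDs and label values below are placeholders, not real MA-52 annotations:

```python
import csv

# Hypothetical predictions: video ID -> (body-level label, action-level label).
# These values are illustrative only.
predictions = {
    "00001": ("body_3", "action_17"),
    "00002": ("body_1", "action_05"),
}

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Header matches the format described above.
    writer.writerow(["Id", "pred_label_1", "pred_label_2"])
    for video_id, (body_label, action_label) in predictions.items():
        writer.writerow([video_id, body_label, action_label])
```

Check the sample submission.csv linked above for the exact header and label conventions expected by the evaluation system.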
Track 2: Multi-label Micro-Action Detection
Baseline code: https://github.com/VUT-HFUT/MAC_2024_baseline/tree/main/MMAD
We adopt a cross-subject evaluation protocol: the video samples in the training, validation, and test sets come from different subjects.
Development Phase (2024.5.29~2024.6.30):
Participants must submit their predictions in a single .zip file. You can check the submission.csv sample file here.
Testing Phase (2024.7.1~2024.7.8):
Participants must submit their predictions in a single .zip file. You can check the submission.csv sample file here.
Submission Format:
The "submission.csv" file follows the format ",video_id, t_start, t_end, class, score", where "video_id" denotes the video ID, "t_start" and "t_end" denote the start and end times of the detected micro-action, "class" denotes the category of the detected micro-action, and "score" denotes the confidence score.
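As an illustration, a Track 2 file in this format could be written as follows. The detections below are placeholders, and the leading empty header field is written on the assumption that the first column is a row index (as the format string's leading comma suggests):

```python
import csv

# Hypothetical detections: (video_id, t_start, t_end, class, score).
# These values are illustrative only.
detections = [
    ("00001", 1.2, 3.45, 7, 0.91),
    ("00001", 4.0, 5.1, 12, 0.66),
]

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Leading empty field corresponds to the row-index column in the format above.
    writer.writerow(["", "video_id", "t_start", "t_end", "class", "score"])
    for idx, (video_id, t_start, t_end, cls, score) in enumerate(detections):
        writer.writerow([idx, video_id, t_start, t_end, cls, score])
```

Again, check the sample submission.csv linked above for the exact column conventions expected by the evaluation system.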
Micro-Action Recognition (MAR):
Submissions will be ranked by an overall score based on F1_mean [Ref1], defined as:

F1_mean = (F1_micro^b + F1_macro^b + F1_micro^a + F1_macro^a) / 4

where "b" and "a" denote the body-level and action-level label granularities in the MA-52 dataset, respectively. The body-part labels can be derived from the micro-action category labels. "micro" and "macro" denote micro- and macro-averaged F1 scores.
Submissions will be ranked on the basis of the F1_mean score on the test set.
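For reference, here is a minimal pure-Python sketch of the F1_mean computation, assuming F1_mean is the average of the micro- and macro-averaged F1 scores at the body and action granularities (the official evaluation code should be treated as authoritative):

```python
def f1_micro(y_true, y_pred):
    # For single-label multi-class predictions, micro-averaged F1 equals accuracy.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_macro(y_true, y_pred):
    # Unweighted mean of per-class F1 scores.
    scores = []
    for c in set(y_true) | set(y_pred):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

def f1_mean(body_true, body_pred, action_true, action_pred):
    # Assumed form: average of micro/macro F1 at both label granularities.
    return (f1_micro(body_true, body_pred) + f1_macro(body_true, body_pred)
            + f1_micro(action_true, action_pred) + f1_macro(action_true, action_pred)) / 4
```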
Multi-label Micro-Action Detection (MMAD):
Similar to the typical multi-label action detection task, we use the detection-mAP metric [Ref2] as the evaluation metric for the MMAD task. The detection-mAP metric evaluates the completeness of predicted micro-action instances: it is the instance-wise mAP of action predictions averaged over tIoU thresholds from 0.1 to 0.9 in steps of 0.1 (i.e., [0.1:0.1:0.9]).
Submissions will be ranked on the basis of the average mAP score on the test set.
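As a sketch of the quantities involved, the temporal IoU between a predicted and a ground-truth segment, together with the threshold grid over which mAP is averaged, could be computed as follows (a simplified illustration, not the official evaluation code):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (t_start, t_end) segments, in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# tIoU thresholds 0.1, 0.2, ..., 0.9; the final ranking score averages
# the per-threshold mAP over this grid.
thresholds = [round(0.1 * k, 1) for k in range(1, 10)]
```

At each threshold, a predicted instance counts as a true positive only if its tIoU with a ground-truth instance of the same class meets the threshold; the per-threshold mAP values are then averaged into the final score.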