However, students affiliated with the host or organizing laboratories are not eligible to participate.
Private sharing of code or data outside your team is strictly prohibited. Teams must maintain confidentiality to ensure fair competition.
D. Challenge Completion Requirements
To successfully complete the challenge, participants must:
Deliverables
Paper Submission
At least one team member is required to present the paper at the workshop.
By default, accepted papers will be published in IJABC (International Journal of Activity and Behavior Computing) via J-STAGE. For the paper format, please check the ABC main website.
The evaluation involves building models to predict depressive episodes using the dataset. There are two evaluation approaches:
Universal Model:
Train a single model using all participant data.
Use leave-one-participant-out (LOPO) cross-validation.
Evaluate using metrics such as AUROC, accuracy, precision, recall, and F1 score.
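A leave-one-participant-out evaluation of this kind can be sketched with scikit-learn's `LeaveOneGroupOut` splitter. The data, classifier, and array names below are illustrative assumptions, not part of the challenge specification:

```python
# Sketch of leave-one-participant-out (LOPO) cross-validation.
# X, y, and `groups` (per-sample participant IDs) are toy stand-ins
# for the challenge dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))          # toy feature matrix
y = rng.integers(0, 2, size=120)       # toy binary depression labels
groups = np.repeat(np.arange(10), 12)  # 10 participants x 12 days

logo = LeaveOneGroupOut()
aurocs, accs, f1s = [], [], []
for train_idx, test_idx in logo.split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    proba = clf.predict_proba(X[test_idx])[:, 1]
    pred = clf.predict(X[test_idx])
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred, zero_division=0))
    # AUROC is undefined if the held-out participant has only one class
    if np.unique(y[test_idx]).size > 1:
        aurocs.append(roc_auc_score(y[test_idx], proba))

print(f"LOPO Acc={np.mean(accs):.3f}  F1={np.mean(f1s):.3f}  "
      f"AUROC={np.mean(aurocs):.3f}")
```

Each fold holds out one participant entirely, so the reported averages estimate how the universal model generalizes to an unseen person.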
Hybrid Model:
Combine user-specific data with general data for training.
Use nested cross-validation (e.g., leave-one-participant-day-out with time-series awareness).
Focus on personalized depression detection while maintaining generalizability.
Evaluation Overview
The goal of the challenge is to assess your models' performance through the Universal (leave-one-participant-out) and Hybrid (leave-one-participant-day-out, LOPDO) approaches, together with feature engineering.
For leave-one-participant-day-out cross-validation, you can follow the instructions in the paper below.
Kennedy Opoku Asare, Isaac Moshe, Yannik Terhorst, Julio Vega, Simo Hosio, Harald Baumeister, Laura Pulkki-Råback, and Denzil Ferreira. 2022. Mood ratings and digital biomarkers from smartphone and wearable data differentiate and predict depression status: A longitudinal data analysis. (2022).
For the Hybrid model, you can combine a small quantity of user-specific data with a broader general-user dataset. The strategy is the same as that of Asare et al. (Opoku Asare et al., 2022) for the predictive analysis of depression. You need to employ stratified three-fold cross-validation together with time-series-aware leave-one-participant-day-out (LOPDO) cross-validation as the inner and outer loops of the nested cross-validation. In other words, in each iteration one day of one participant is held out as the test set, and the remaining data form the training set. For time-series awareness, any training samples of the test participant captured after the test day are removed.
For example:
## Iteration 1
Train data: [all participants' data excluding the test participant] + [day 1 data of the test participant]
Test data: [day 2 data of the test participant]

## Iteration 2
Train data: [all participants' data excluding the test participant] + [day 1 and 2 data of the test participant]
Test data: [day 3 data of the test participant]

## Iteration 3
Train data: [all participants' data excluding the test participant] + [day 1, 2, and 3 data of the test participant]
Test data: [day 4 data of the test participant]

...

## Iteration N
Train data: [all participants' data excluding the test participant] + [day 1, 2, 3, ... (N-1) data of the test participant]
Test data: [day N data of the test participant]
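The iteration scheme above can be sketched as a split generator. The function and variable names here are illustrative assumptions, not prescribed by the challenge:

```python
# Sketch of time-series-aware leave-one-participant-day-out (LOPDO) splits.
# For each test participant, day d is the test set; the training set is all
# other participants' data plus the test participant's days strictly before d.
import numpy as np

def lopdo_splits(participant_ids, days):
    """Yield (train_idx, test_idx) index pairs for the hybrid scheme."""
    participant_ids = np.asarray(participant_ids)
    days = np.asarray(days)
    for p in np.unique(participant_ids):
        p_days = np.unique(days[participant_ids == p])
        # Skip the first day: at least one earlier day is needed for training.
        for d in p_days[1:]:
            test_idx = np.where((participant_ids == p) & (days == d))[0]
            train_idx = np.where(
                (participant_ids != p)
                | ((participant_ids == p) & (days < d))
            )[0]
            yield train_idx, test_idx

# Toy example: 3 participants with 4 days each
pids = np.repeat([0, 1, 2], 4)
days = np.tile([1, 2, 3, 4], 3)
splits = list(lopdo_splits(pids, days))
print(len(splits))  # 3 participants x 3 eligible test days = 9 splits
```

The generated index pairs can then be fed to any classifier and metric loop, analogous to scikit-learn's splitter interface.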
To determine the winners, we will assess both the computational model's performance and the quality of the submitted paper. The paper will be evaluated based on its completeness in addressing the challenge, the novelty of its contributions, the methodological rigor of the approach, and the clarity of writing.