Kick-start scripts

Two scripts are provided with the data to simplify participation:


kick_start_training.py

The training script contains a very simple implementation of an RNN in PyTorch. The model is trained with a contrastive loss function. 20% of the subjects in the development set (selected randomly) are used to validate the model. At the end of each epoch, the model is saved (overwriting the previous checkpoint) whenever a lower Equal Error Rate (EER) on the validation set is achieved.
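The exact formulation of the contrastive loss is not given above; a common pairwise version (a plain-Python sketch, with the margin value chosen purely for illustration) pulls same-subject embeddings together and pushes different-subject embeddings at least a margin apart:

```python
import math

def contrastive_loss(emb_a, emb_b, same_subject, margin=1.0):
    """Pairwise contrastive loss (illustrative sketch).

    same_subject=True  -> penalize large distances (d^2)
    same_subject=False -> penalize distances smaller than `margin`
    """
    # Euclidean distance between the two embeddings
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))
    if same_subject:
        return d ** 2
    return max(0.0, margin - d) ** 2
```

In practice the loss is averaged over a batch of genuine and impostor pairs; the script's actual implementation may differ in margin and distance metric.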

The raw data are processed to extract, for each key press, its duration and ASCII code; the resulting sequences are then zero-padded or sliced to a fixed length of 100.
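These two steps can be sketched as follows. The raw event layout assumed here (ASCII code plus press and release timestamps per key) is an illustration; the actual field names and units in the data may differ:

```python
def extract_features(events):
    """events: list of (ascii_code, press_time, release_time) tuples
    (hypothetical layout). Returns one [hold_duration, ascii_code]
    feature pair per key press."""
    return [[release - press, float(code)] for code, press, release in events]

def to_fixed_length(features, seq_len=100, feat_dim=2):
    """Slice a sequence longer than seq_len, or zero-pad a shorter one,
    so every sequence has exactly seq_len feature vectors."""
    fixed = features[:seq_len]
    fixed += [[0.0] * feat_dim] * (seq_len - len(fixed))
    return fixed
```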

For each input sequence, the output of the model is a 32-value array (embedding). The goal is to map embeddings belonging to the same subject close to each other, while pushing apart embeddings belonging to different subjects.
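To make "close" concrete: a plain Euclidean distance between two embeddings (one common choice; the script's actual metric may differ) can serve as a dissimilarity score, with smaller values suggesting the same subject:

```python
import math

def embedding_distance(emb_a, emb_b):
    """Euclidean distance between two fixed-size embeddings;
    smaller values indicate more similar typing behaviour."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))
```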

After each epoch, the loss and the EER on the training and validation sets are appended to a log file. At the end of training, a plot of these metrics is saved as an image.

The training script works for both development sets (desktop and mobile). To select the correct dataset, change the scenario variable to 'desktop' or 'mobile'.


kick_start_evaluate.py

The evaluation script loads the trained model, extracts the features described above, computes the embeddings for the sequences in the relevant comparison list (desktop or mobile, according to the scenario variable), and computes a score for each comparison. A text file containing the scores is then saved, ready to be submitted to CodaLab, where participants obtain the global EER on the evaluation set.
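A minimal sketch of the scoring step, assuming a comparison list of ID pairs and a dict of precomputed embeddings (the names and the one-score-per-line file layout are assumptions; check the challenge's required submission format):

```python
import math

def embedding_distance(emb_a, emb_b):
    """Euclidean distance used as the comparison score (illustrative)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))

def write_scores(comparisons, embeddings, path):
    """comparisons: list of (id_a, id_b) pairs; embeddings: dict id -> vector.
    Writes one score per line, in the order of the comparison list."""
    with open(path, "w") as f:
        for id_a, id_b in comparisons:
            score = embedding_distance(embeddings[id_a], embeddings[id_b])
            f.write(f"{score}\n")
```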


Performance of the provided RNN system on the evaluation set

Desktop case: global EER = 25.0303%

Mobile case: global EER = 33.3442%


Submission expected performance

These scripts are provided to participants purely as examples, and we encourage participants to try different systems and training protocols. The performance of the proposed baseline system is, in fact, not competitive.