Data and evaluation

Evaluation

For both tracks, we split the data into training and test partitions. Participants will use the training partition to develop their methods; the test partition will then be used to evaluate those methods and determine the winner of the challenge.

The evaluation will be the same for both tasks. Since the possible system outputs lie in [1, 5], the participants' results will be compared against the ground truth using the MAE metric. The system with the lowest MAE value will be considered the winner of each subtask.
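For reference, MAE (mean absolute error) is the average absolute difference between the predicted and the gold ratings. The following Python sketch computes it; the example numbers are illustrative only, not taken from the challenge data:

def mean_absolute_error(gold, predicted):
    # Average absolute difference between gold and predicted ratings.
    assert len(gold) == len(predicted), "one prediction per gold instance"
    return sum(abs(g - p) for g, p in zip(gold, predicted)) / len(gold)

# Illustrative values: five gold ratings vs. five system predictions.
print(mean_absolute_error([1, 5, 3, 4, 2], [2, 5, 3, 3, 2]))  # prints 0.4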

Data

To access the data, you must register your team. The link to the data collection will then be sent to you.

Evaluation Rules

The performance of your recommendation system solution will be ranked by the MAE measure.

The performance of your sentiment analysis solution will also be ranked by the MAE measure.


Runs for Track 1 will be received from 16th April, 00:01 until 30th April, 23:59 (UTC-06:00).

Runs for Track 2 will be received from 16th April, 00:01 until 30th April, 23:59 (UTC-06:00).

Participants are allowed to submit up to two runs for each track: one primary and one secondary. Each run must be clearly flagged as primary or secondary.

Output Submission

Submissions must be formatted as described below and sent by email to: malvarez@cicese.edu.mx

Your software must output one txt file for each task of the dataset. The file must contain one line per classified instance, formatted as follows:

"TaskName"\t"IdentifierOfAnInstance"\t"Class"\n

It is important to respect the format exactly, including the " characters, \t (tab), and \n (Linux newline). The naming of the output files is up to you; we recommend using the author and a run identifier as the filename, with "txt" as the extension.
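As an illustration of this format, the following Python sketch writes one run file; the task name, identifiers, labels, and filename here are hypothetical placeholders, not official values:

def write_run(path, task_name, predictions):
    # One line per instance: "TaskName"\t"Identifier"\t"Class", ending in \n.
    with open(path, "w", newline="\n") as f:  # force Linux line endings
        for identifier, label in predictions:
            f.write('"{}"\t"{}"\t"{}"\n'.format(task_name, identifier, label))

# Hypothetical run: author and run identifier as filename, "txt" extension.
write_run("author1_run1.txt", "sentiment", [("1", 1), ("2", 2), ("3", 4)])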

For the recommendation system, the fields take the following values:

  • TaskName: recommendation

  • IdentifierOfAnInstance: NumberOfRecommendation

  • Class: an integer in [1, 5]

  • Output example:

" recommendation" "Usuario1" "2"

" recommendation" "Usuario2" "5"

" recommendation" "Usuario3" "3"

" recommendation" "Usuario4" "5"

" recommendation" "Usuario5" "1"

For the sentiment analysis, the fields take the following values:

  • TaskName: sentiment

  • IdentifierOfAnInstance: NumberOfOpinion

    • where NumberOfOpinion is the line number of each opinion in the test file.

  • Class: an integer in [1, 5]

  • Output example:

"sentiment" "1" "1"

"sentiment" "2" "2"

"sentiment" "3" "4"

"sentiment" "4" "3"

"sentiment" "5" "3"


A submission that fails the format check will be considered null.
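Since a malformed file voids the submission, it may be worth verifying the format before sending. A minimal Python sketch follows; the regular expression encodes the format described above, and the strictness of the official checker is an assumption on our part:

import re

# "TaskName"\t"Identifier"\t"Class", where Class is an integer from 1 to 5.
LINE_RE = re.compile(r'^"(recommendation|sentiment)"\t"[^"]+"\t"[1-5]"$')

def check_run(path):
    # Return (line number, line) pairs that violate the expected format.
    errors = []
    with open(path, encoding="utf-8") as f:
        for number, line in enumerate(f, start=1):
            if not LINE_RE.match(line.rstrip("\n")):
                errors.append((number, line))
    return errors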