For the Sentiment Analysis Track, we split the data into training and test partitions. Participants will use the training partition to develop their methods; the test partition will then be used to evaluate the participating methods and to determine the winner of the challenge. For the Thematic group, only a test partition is provided.
Systems are evaluated using standard evaluation metrics, including precision, recall, and F1-score. How each task is evaluated is described below:
For this edition, Equation 1 is applied to evaluate the result of the polarity classification, where k is a participating system, C = {1, 2, 3, 4, 5} is the set of polarity classes, and F_i(k) is the F-measure obtained by system k for class i.
For the Type prediction, there are 3 classes (Attractive, Hotel, and Restaurant). For this reason, we apply the macro F-measure, as Equation 2 indicates, where F_A(k), F_H(k), and F_R(k) represent the F-measures obtained by system k for the Attractive, Hotel, and Restaurant classes, respectively.
For the evaluation of the Magical Town (MT) task, the idea is the same as for the type prediction measure. We suppose that there exists a list of all Magical Towns, named MTL (Magical Towns List). Equation 3 shows this classification measure.
The final measure for this track is a weighted average of the three subtasks. The idea is that polarity classification and Magical Town identification carry more weight than type prediction: they are given two and three times its importance, respectively, as shown in Equation 4.
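The referenced equations are not reproduced in this text; the following LaTeX sketch reconstructs them from the definitions above. The subtask names (Polarity, Type, Town, Sentiment) are illustrative, and the normalization by 6 in Equation 4 is an assumption consistent with the stated weights; |MTL| = 40, per the list of possible towns below.

\mathrm{Polarity}(k) = \frac{1}{|C|} \sum_{i \in C} F_i(k), \qquad C = \{1, 2, 3, 4, 5\} \tag{1}

\mathrm{Type}(k) = \frac{F_A(k) + F_H(k) + F_R(k)}{3} \tag{2}

\mathrm{Town}(k) = \frac{1}{|MTL|} \sum_{m \in MTL} F_m(k) \tag{3}

\mathrm{Sentiment}(k) = \frac{2\,\mathrm{Polarity}(k) + 3\,\mathrm{Town}(k) + \mathrm{Type}(k)}{6} \tag{4}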
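For concreteness, here is a minimal scoring sketch under the same assumptions, using scikit-learn's macro-averaged F1 (the gold/pred label lists are hypothetical placeholders, not the official scorer):

from sklearn.metrics import f1_score

def sentiment_track_score(gold, pred):
    # gold and pred map each subtask name to a list of labels, one per opinion.
    polarity = f1_score(gold["polarity"], pred["polarity"], average="macro")  # Equation 1
    town = f1_score(gold["town"], pred["town"], average="macro")              # Equation 3
    type_f1 = f1_score(gold["type"], pred["type"], average="macro")           # Equation 2
    # Equation 4: weights 2 (polarity) and 3 (town), normalization by 6 assumed.
    return (2 * polarity + 3 * town + type_f1) / 6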
To access the data, you must register your team; you will then receive a link to the data collection.
Runs will be received from April 14 at 00:01 until May 15 at 23:59 (UTC-06:00).
Participants are allowed to submit several runs for each track.
Submissions must be formatted as described below and sent by email to: miguel.alvarez@cimat.mx
For each task of the dataset, your software must output a corresponding txt file. The file must contain one line per classified instance. Each line looks like this:
"TaskName"\t"IdentifierOfAnInstance"\t"Class"\n
It is important to respect the format, including the " character, \t (tab), and \n (Unix newline). The naming of the output files is up to you; we recommend using the author name and a run identifier as the filename, with "txt" as the extension.
For the Sentiment Analysis track, the possible labels are:
TaskName: rest-mex
IdentifierOfAnInstance: NumberOfOpinion
where NumberOfOpinion is the line number of each opinion in the test file.
Classes: [1, 5] '\t' [Magical Towns (40 possibilities)] '\t' [Attractive, Hotel, Restaurant]
Output example:
rest-mex 0 5 Sayulita Restaurant
rest-mex 1 4 Bacalar Restaurant
rest-mex 2 1 TodosSantos Restaurant
rest-mex 3 3 Isla_Mujeres Restaurant
Notice that all instance numbers start at 0.
A submission that fails the format check will be considered null.
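As an illustration, the following Python sketch writes a run file in the unquoted, tab-separated layout of the example above and checks every line before submission (the filename and predictions are hypothetical):

import re

# Hypothetical predictions: one (polarity, Magical Town, type) triple per opinion,
# in the same order as the test file; instance numbering starts at 0.
predictions = [(5, "Sayulita", "Restaurant"), (4, "Bacalar", "Hotel")]

with open("PerezGarcia_run1.txt", "w", encoding="utf-8") as out:
    for number_of_opinion, (polarity, town, place_type) in enumerate(predictions):
        out.write(f"rest-mex\t{number_of_opinion}\t{polarity}\t{town}\t{place_type}\n")

# Format check: task name, instance number, polarity in [1, 5], town, and type.
pattern = re.compile(r"^rest-mex\t\d+\t[1-5]\t\S+\t(Attractive|Hotel|Restaurant)\n$")
with open("PerezGarcia_run1.txt", encoding="utf-8") as run:
    assert all(pattern.match(line) for line in run), "run file fails the format check"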
Participants of the tasks will be given the opportunity to write a paper describing their system, the resources used, results, and analysis, which will be part of the official IberLEF 2025 proceedings.
Here are some important considerations for the article:
System description papers should be formatted according to the Springer Conference Proceedings style: https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines. LaTeX and Word templates can be found there.
The minimum length of a regular paper should be 5 pages. There is no maximum page limit.
Papers must be written in English.
Each paper must include a copyright footnote on the first page of each paper: {\let\thefootnote\relax\footnotetext{Copyright \textcopyright\ 2025 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). IberLEF 2025, September 2025, Spain.}}
Remove page numbering, if there is any, and make sure that there are no headers or footers, except for the mandatory copyright footnote on the first page.
Authors should be described with their name and their full affiliation (university and country). Names must be complete (no initials), e.g. “Soto Pérez” instead of “S. Pérez”.
Titles of papers should use English title case, i.e., "Filling an Author Agreement by Autocompletion" rather than "Filling an author agreement by autocompletion".
At least one author of each paper must sign the CEUR copyright agreement. Instructions and templates can be found at http://ceur-ws.org/HOWTOSUBMIT.html. The signed form must be sent along with the paper to the task organizers.