The paper title must follow this template: <Team_Name> at TAQEEM 2025: <Own_Title>.
Each team may submit one paper, even if it participates in both subtasks.
The page limit is 4 pages according to the conference guidelines.
The paper should cover the following sections:
Abstract: four to five sentences highlighting your approach and key results.
1. Introduction: about ¾ of a page expanding on the abstract and covering key background, such as the task definition, why the task is challenging for current modeling techniques, and why your approach is interesting or novel.
2. Data: a review of the data you used to train your system. Be sure to mention the sizes of the training, validation/dev, and test sets you used, the label distributions, and any tools or external resources you used for preprocessing the data.
3. Background: a brief outline of the task setup, including input/output types with examples, and key dataset details such as language, genre, and size. It should also mention the specific tracks you participated in (if applicable) and cite relevant related work to highlight the novelty of your contribution.
4. System Overview: a detailed description of how the systems were built and trained. If you are using a neural network, did you use pre-trained embeddings? How was the model trained, which hyperparameters were chosen and experimented with, how long did training take, and on what infrastructure? Linking to source code is valuable here as well, but the description should stand alone as a complete account of how to reimplement the system.
5. Experimental Setup: a description of how the data is split into train, development, and test sets, along with any preprocessing steps and hyperparameter configurations needed for replication. It should also specify the external tools or libraries used (including versions and URLs), summarize the task evaluation metrics, and, if space is limited, defer detailed implementation aspects to the Appendix.
6. Results: a description of the key results of the paper: results on the dev set, official results on the test set, analysis of the results, etc. You can also report and analyze the results of other runs that you did not officially submit. If you have done additional error analysis of the types of errors the system makes, this is extremely valuable for the reader.
7. Conclusion: a restatement of the introduction, highlighting what was learned about the task and how to model it.
To make your approach easier to reproduce (and to help others learn from it faster), you are strongly encouraged to release your code and make it publicly available. If you do, please indicate this in your paper and provide a public link.
Please pay careful attention to the "Anonymity" section in the formatting guidelines below.
Put your code files into a .zip file.
Add a README file within the zip that explains your code structure and how to replicate the results reported in the paper.
On the submission page of OpenReview, you can upload the code within the "Supplementary Materials" section.
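As an illustration only (the file and directory names below are hypothetical, not required), the supplementary zip might be organized as follows:

  taqeem2025_code.zip
    README.md        - code structure and steps to replicate the reported results
    requirements.txt - pinned versions of the libraries used
    src/             - training, inference, and evaluation code
    run.sh           - end-to-end script reproducing the official submission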
Please follow the formatting guidelines/instructions detailed on the conference website before submitting your paper.
Please submit your paper on OpenReview.
22 August 2025: Shared Task System Description Paper Deadline
29 August 2025: Notification of acceptance
5 September 2025: Camera-ready papers due