SHARED TASK PAPER SUBMISSION GUIDELINES


WHAT FORMAT SHOULD THE PAPER BE IN?

§ The paper will appear in the WANLP proceedings and must follow the official EMNLP LaTeX or Word templates, available at: https://2022.emnlp.org/calls/style-and-formatting/

§ The paper is expected to be up to 4 pages long, plus any number of pages for references. Note that the review process is not double-blind, so anonymity is not required.

§ Submissions should be made via softconf: https://softconf.com/emnlp2022/WANLP2022/. Make sure you choose "Propaganda Detection Shared Task - System Description Paper" as the submission type.

WHAT IS THE PAPER FOR?

The system description paper should let another researcher:

  • Verify what the system does and how it has been trained.

  • Reimplement the system to reproduce the results.

  • Understand the system’s strengths and weaknesses.

HOW SHOULD THE PAPER BE STRUCTURED?

A common structure for system description papers is:

  • Abstract: four or five sentences highlighting your approach and key results.

  • Introduction: about three-quarters of a page expanding on the abstract and mentioning key background, such as why the task is challenging for current modeling techniques and why your approach is interesting or novel.

  • Data: a review of the data you used to train your system. Be sure to mention the sizes of the training, validation, and test sets you used, the label distributions, and any tools you used to preprocess the data.

  • System: a detailed description of how the systems were built and trained. If you used a neural network, did you use pre-trained embeddings? How was the model trained, and which hyperparameters did you choose and experiment with? How long did the model take to train, and on what infrastructure? Linking to source code is valuable here as well, but the description should be able to stand alone as a full account of how to reimplement the system. While other paper styles include background as a separate section, it is fine to simply cite similar systems that inspired your work as you describe your system.

  • Results: a description of the key results of the paper. If you have done additional analysis of the types of errors the system makes, this is extremely valuable to the reader. Unofficial results obtained after the submission deadline can be very useful as well.

  • Discussion: a general discussion of the task and your system. Describe characteristic errors and their frequency over a sample of the development data. What would you do if you had another three months to work on the task?

  • Conclusion: a restatement of the introduction, highlighting what was learned about the task and how to model it.


ACKNOWLEDGMENT

This page is an adaptation of the guidelines shared by the organizers of the 1st ACL Workshop on Gender Bias in Natural Language Processing.