DocEng'2022 Competition on Extractive Text Summarization

Systems evaluation

The systems will be evaluated using:

1 - The traditional ROUGE-1 and ROUGE-2 measures, considering precision, recall, and F-measure.

2 - The number of sentences matching the gold-standard summary.
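The ROUGE-N scores above reduce to n-gram overlap counts between the candidate summary and the gold standard. A minimal sketch of this computation is below; it is an illustration of the metric's definition, not the official evaluation script, and the tokenization (simple whitespace split) is an assumption.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter (multiset) of the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n):
    """ROUGE-N precision, recall, and F-measure between two token lists.

    Precision = overlapping n-grams / candidate n-grams;
    recall    = overlapping n-grams / reference n-grams.
    """
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Toy example (hypothetical sentences, whitespace tokenization assumed):
cand = "the robot plays guitar".split()
ref = "the robot guitarist plays".split()
p1, r1, f1 = rouge_n(cand, ref, 1)  # 3 of 4 unigrams overlap -> 0.75 each
```

The official evaluation may differ in tokenization, stemming, and stopword handling; competitors wanting scores comparable to the organizers' should use an established ROUGE implementation.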


Submission instructions

1 - The compression rate is 10% of the original document, limited to a minimum of 3 sentences.

2 - The data available for download (2,000 documents) is the training set. The evaluation will use a different set containing 1,000 documents.

3 - Competitors should submit a Java or Python script. The output of the script should be a folder (system_name_summary_xml_cnn) containing one summary per text, with the same filename as the original text.

For example, the output for the file directory_xml_cnn/tech - Meet the robot guitarist with 78 fingers and 22 arms.xml should be system_name_summary_xml_cnn/tech - Meet the robot guitarist with 78 fingers and 22 arms.xml.


4 - Submissions should be sent by email to the workshop organizers: rdl.ufpe@gmail.com, rafael.mello@ufrpe.br, Steve.Simske@colostate.edu.

Authors should submit their code and a system description to the organizers' email addresses before the deadline.
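Two mechanical details of the rules above can be sketched in Python: the summary length implied by the 10% compression rate (minimum 3 sentences), and the output-path convention that mirrors each input filename into the system's summary folder. The rounding direction (ceiling) and the system name "system_name" are assumptions; the rules only state the rate, the minimum, and the folder naming pattern.

```python
import math
import os

def summary_length(num_sentences, rate=0.10, minimum=3):
    """Sentences to keep: 10% of the document, but at least 3.

    Rounding up is an assumption; the rules do not specify a direction.
    """
    return max(minimum, math.ceil(num_sentences * rate))

def output_path(input_path, system_name="system_name"):
    """Mirror the input filename into <system_name>_summary_xml_cnn/."""
    out_dir = f"{system_name}_summary_xml_cnn"
    return os.path.join(out_dir, os.path.basename(input_path))
```

For example, a 40-sentence article yields a 4-sentence summary, while a 12-sentence article still yields the 3-sentence minimum, and `output_path("directory_xml_cnn/foo.xml")` returns `system_name_summary_xml_cnn/foo.xml`.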

Rafael Dueire Lins

rdl.ufpe@gmail.com

Rafael Ferreira Mello

rafael.mello@ufrpe.br

Steve Simske

Steve.Simske@colostate.edu