Shared Task - FinSBD-3

The 3rd Shared Task on Structure Boundary Detection,

an extension of Sentence Boundary Detection

Introduction

Sentences

Sentences are basic units of the written language. Detecting the beginning and end of sentences, or sentence boundary detection (SBD), is the foundational first step in many Natural Language Processing (NLP) applications such as POS tagging; syntactic, semantic, and discourse parsing; information extraction; or machine translation.

Despite its important role in NLP, sentence boundary detection has so far received little attention. Previous research in the area has been confined to formal texts (news, European Parliament proceedings, etc.), where existing rule-based and machine learning approaches are extremely accurate as long as the data is perfectly clean. No sentence boundary detection research to date has addressed the problem in noisy texts extracted automatically from machine-readable files (generally in PDF format) such as financial documents.

One type of financial document is the prospectus. Financial prospectuses are official PDF documents in which investment funds precisely describe their characteristics and investment modalities. The most important step in extracting any information from these files is to parse them to obtain noisy unstructured text, clean the text, format the information (by adding several tags), and finally transform it into semi-structured text in which sentence and list boundaries are clearly marked.

These prospectuses also contain many visual demarcations indicating a hierarchy of sections, including bullets and numbering. They contain many sentence fragments and titles, not just complete sentences, and more often than not they contain punctuation errors. To structure dense information in a more readable format, lists are often used.

Lists

A list can be similar to a sentence that enumerates several items of the same category. For example, the “Simple List” in Figure 1 can easily be read as one normal sentence. However, the list in Figure 2 cannot be read as one sentence, although it is one unit, because it contains multiple sentences and a visible hierarchy of information. It is therefore important to distinguish between sentences and lists and, for lists, to create a hierarchy that organizes the items. Mastering this distinction and item hierarchy can pave the way for more accurate information extraction.

Figure 1. Simple list
Figure 2. Complex list

Document structure elements: Footers, Headers, Tables

This year, we have added the task of extracting document structure elements such as footers, headers, and tables, due to their distinctive structure and frequent occurrence in financial documents.

Footers and headers are used in financial prospectuses, as shown in Figure 3, to include information that the author wants to appear on every page, such as the title of the document or page numbers. Tables are also widely used for presenting textual information and statistical data, as shown in Figure 4, and multi-page tables (see Figure 5) are common in financial documents.

Figure 3. Footer in a prospectus
Figure 4. Single-page table in a prospectus
Figure 5. Multi-page table in a prospectus

Task Description

In the last edition, FinSBD-2, we focused on extracting well-segmented sentences, lists, and list items from financial prospectuses in PDF format by detecting their beginning and end boundaries, in two languages: English and French. This year, we improve the previously proposed tasks and extend them to the detection of document structure boundaries.

The goal of FinSBD-3 is thus to extract the boundaries of sentences, lists, and list items, as well as structure elements such as footers, headers, and tables. Given a set of textual documents extracted from PDF files, participants in this shared task have to extract a set of well-delimited sentences, lists, list items, and structure elements (footers, headers, and tables).

For each given PDF, a JSON will be provided containing:

  • text extracted by us (key "text")

  • sentence boundaries (key "sentence")

  • list boundaries (key "list")

  • list item boundaries (key "item")

  • list item boundaries of level 1 (key "item1")

  • list item boundaries of level 2 (key "item2")

  • list item boundaries of level 3 (key "item3")

  • list item boundaries of level 4 (key "item4")

Item boundaries at different levels overlap: each item level represents the item's depth within the list.

  • table boundaries (key "table")

  • footer boundaries (key "footer")

  • header boundaries (key "header")

Boundaries are represented by the indexes of the starting and ending characters, which the system has to predict.

We also include the PDF coordinates of each boundary as metadata (which can be used for visualization on the PDF if needed).

Example:

{

"text": "Ce document fournit des informations essentielles aux investisseurs ...",

"sentence": [ {"start": 17, "end": 53, "coordinates": ... }, ... ],

"list": [ {"start": 1080, "end": 1267, "coordinates": ... }, ... ],

"item": [ ... ],

"item1": [ ... ],

"item2": [ ... ],

"item3": [ ... ],

"item4": [ ... ]

}
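Boundaries can be mapped back to raw text by simple slicing. The helper below is a sketch, not part of the released tooling; in particular, it assumes "end" is an exclusive character index, which should be verified against the training data:

```python
def spans(doc: dict, key: str) -> list:
    """Return the raw text spans for all boundaries stored under `key`.

    Assumption: "end" is an exclusive character index; if the released
    data treats it as inclusive, slice with b["end"] + 1 instead.
    """
    text = doc["text"]
    return [text[b["start"]:b["end"]] for b in doc.get(key, [])]

# Inline document mirroring the JSON layout above (real data would be
# loaded with json.load); the offsets here are illustrative only.
doc = {
    "text": "Header text. Ce document fournit des informations.",
    "sentence": [{"start": 13, "end": 50}],
}
print(spans(doc, "sentence"))  # → ['Ce document fournit des informations.']
```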

We are providing the original PDFs, character indexes, and boundary coordinates to allow different kinds of character or word tokenization and/or the possible use of spatial and visual cues. We therefore hope to encourage novel approaches based on multimodality, especially since lists are often spatially structured to convey information visually.

This task is open to everyone. The only exceptions are the co-chairs of the organizing team, who cannot submit a system and who will serve as an authority to resolve any disputes concerning ethical issues or the completeness of system descriptions.

Evaluation

The evaluation metrics will be computed based on boundaries which are pairs of character indexes ("start" and "end").

A boundary is considered well detected if and only if both start and end indexes are correct.

The macro F-score will be the official metric, and an evaluation script will be provided to all teams.

To compute the metric, we average the F-scores computed for each document over all boundary classes.
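The scoring described above can be sketched as follows. This is our reconstruction, not the official evaluation script; details such as the handling of duplicate boundaries or classes that are empty in the gold data may differ:

```python
def f_score(gold: list, pred: list) -> float:
    """Exact-match F1: a boundary counts only if both start and end match."""
    gold_set = {(b["start"], b["end"]) for b in gold}
    pred_set = {(b["start"], b["end"]) for b in pred}
    # Convention assumed here: an empty gold or prediction set scores 0.
    if not gold_set or not pred_set:
        return 0.0
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set)
    recall = tp / len(gold_set)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Boundary classes from the JSON format described above.
CLASSES = ["sentence", "list", "item", "item1", "item2", "item3",
           "item4", "table", "footer", "header"]

def macro_f(gold_doc: dict, pred_doc: dict) -> float:
    """Average the per-class F1 over all boundary classes of one document."""
    scores = [f_score(gold_doc.get(c, []), pred_doc.get(c, [])) for c in CLASSES]
    return sum(scores) / len(scores)
```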

Platform

This year, FinSBD-3 will be hosted at https://competitions.codalab.org/competitions/28485.

CodaLab is an open-source platform for organizing research challenges. Upon submission of predictions, the evaluation metric will be computed automatically to rank each candidate.

Registration

Register at https://forms.gle/FnVThgUbUa2x7Rr76 in order to receive the link to the training data.

For the dev and test data, please go to the CodaLab platform at https://competitions.codalab.org/competitions/28485.


Prize

A USD 1,000 prize will be awarded to the best-performing teams.


Important Dates

Submission System: https://easychair.org/conferences/?conf=finweb2021

  • Dec 23, 2020 - First announcement of the shared task and beginning of registration

  • Jan 08, 2021 Extended to Jan 15 - Release of training data and scoring script

  • Feb 02, 2021 Extended to Feb 05 - Test set made available

  • Feb 10, 2021 Extended to Feb 12 - Registration deadline

  • Feb 10, 2021 Extended to Feb 17 - Systems' outputs collected

  • Feb 15, 2021 Extended to Feb 19 - Release of results

  • Feb 19, 2021 Extended to Feb 22 - Shared task title and abstract due

  • Feb 23, 2021 Extended to Feb 25 - Shared task paper submissions due

  • Mar 01, 2021 - Camera-ready version of shared task paper due

  • April 19-23, 2021 - FinWeb 2021 Workshop (Ljubljana, Slovenia)


Contact

For any questions on the shared task, please contact us at fin.sbd.task@gmail.com.


Shared Task Co-organizers - Fortia Financial Solutions