October 19, 2023

Automated Legal Question Answering Competition 

(ALQAC 2023)

associated event of KSE 2023

held at the Vietnam Academy of Cryptography Techniques, Ha Noi, Vietnam


Overview

As an associated event of KSE 2023, we are happy to announce the 3rd Automated Legal Question Answering Competition (ALQAC 2023). ALQAC 2023 includes two tasks: Task 1 (Legal Document Retrieval) and Task 2 (Legal Question Answering).

For the competition, we introduce a new Legal Question Answering dataset – a manually annotated dataset based on well-known statute laws in the Vietnamese language. Through the competition, we aim to develop a research community on legal support systems.

Leaderboard

A leaderboard has been set up for participating teams to submit their system predictions on both the public and private test sets.

The designated website for the leaderboard can be accessed at https://eval.ai/web/challenges/challenge-page/2072/overview.

 

Tasks

Task 1: Legal Document Retrieval

Task 1's goal is to return the article(s) related to a given question. An article is considered "relevant" to a question if and only if the question can be answered using that article. A training sample is formatted as follows:

[

    {

        "question_id": "DS-101",

        "question_type": "Đúng/Sai",

        "text": "Cơ sở điện ảnh phát hành phim phải chịu trách nhiệm trước pháp luật về nội dung phim phát hành, đúng hay sai?",

        "relevant_articles": [

            {

                "law_id": "05/2022/QH15",

                "article_id": "15"

            }

        ]

    }

]

At prediction time, the "relevant_articles" field is withheld; a test sample looks like this:
[

    {

        "question_id": "DS-1",

        "question_type": "Đúng/Sai",

        "text": "Phim đã được Bộ Văn hóa, Thể thao và Du lịch, Ủy ban nhân dân cấp tỉnh cấp giấy phép phân loại phim sẽ có giá trị trên toàn quốc, đúng hay sai?"

    }

]



Note that "relevant_articles" lists all articles relevant to the question.

The evaluation measures are precision, recall, and the F2-measure, defined as follows:

Precision_i = (the number of correctly retrieved articles for question i) / (the number of retrieved articles for question i)

Recall_i = (the number of correctly retrieved articles for question i) / (the number of relevant articles for question i)

F2_i = (5 × Precision_i × Recall_i) / (4 × Precision_i + Recall_i)

F2 = the average of F2_i over all questions

In addition to the above measures, standard information retrieval measures such as Mean Average Precision and R-precision may be used to discuss the characteristics of the submission results. The macro-average F2-measure is the principal evaluation measure for Task 1.
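As an illustration, the macro-average F2 can be sketched in Python, representing the gold and predicted articles of each question as sets of (law_id, article_id) pairs. This is an unofficial sketch of the formulas above, not the organizers' scorer:

```python
def f2_score(gold, predicted):
    """Per-question F2 from sets of (law_id, article_id) pairs."""
    if not gold or not predicted:
        return 0.0
    correct = len(gold & predicted)
    if correct == 0:
        return 0.0
    precision = correct / len(predicted)
    recall = correct / len(gold)
    # F2 weights recall higher than precision.
    return (5 * precision * recall) / (4 * precision + recall)

def macro_f2(gold_by_qid, pred_by_qid):
    """Macro-average F2 over all questions (Task 1's principal measure)."""
    scores = [f2_score(gold_by_qid[qid], pred_by_qid.get(qid, set()))
              for qid in gold_by_qid]
    return sum(scores) / len(scores)

# Example: one gold article, two retrieved articles (one correct):
gold = {"DS-101": {("05/2022/QH15", "15")}}
pred = {"DS-101": {("05/2022/QH15", "15"), ("05/2022/QH15", "16")}}
# precision = 1/2, recall = 1, so F2 = (5 * 0.5 * 1) / (4 * 0.5 + 1) = 2.5/3 ≈ 0.833
```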


Task 2: Legal Question Answering

Given a legal question, the goal of Task 2 is to answer it. In ALQAC 2023, there are three types of questions: True/False ("Đúng/Sai"), multiple-choice ("Trắc nghiệm"), and free-text ("Tự luận"). A True/False sample looks like this:

[

    {

        "question_id": "DS-101",

        "question_type": "Đúng/Sai",

        "text": "Cơ sở điện ảnh phát hành phim phải chịu trách nhiệm trước pháp luật về nội dung phim phát hành, đúng hay sai?",

        "relevant_articles": [

            {

                "law_id": "05/2022/QH15",

                "article_id": "15"

            }

        ],

"answer": "Đúng"

    }

]


For True/False questions, the answer must be "Đúng" (True) or "Sai" (False). A multiple-choice sample looks like this:


[

    {

        "question_id": "TN-102",

        "question_type": "Trắc nghiệm",

        "text": "Nam, nữ kết hôn với nhau phải từ đủ bao nhiêu tuổi trở lên?",

"choices": {

"A": "Nam từ đủ 20 tuổi trở lên, nữ từ đủ 18 tuổi trở lên.",

"B": "Nam từ đủ 18 tuổi trở lên, nữ từ đủ 20 tuổi trở lên.",

"C": "Nam từ đủ 21 tuổi trở lên, nữ từ đủ 19 tuổi trở lên.",

"D": "Nam từ đủ 19 tuổi trở lên, nữ từ đủ 21 tuổi trở lên."

},

        "relevant_articles": [

            {

                "law_id": "52/2014/QH13",

                "article_id": "8"

            }

        ],

"answer": "A"

    }

]


For multiple-choice questions, the answer must be "A", "B", "C", or "D". A free-text sample looks like this:


[

    {

        "question_id": "TL-103",

        "question_type": "Tự luận",

        "text": "Cơ quan nào có trách nhiệm thống nhất quản lý nhà nước về điện ảnh?",

        "relevant_articles": [

            {

                "law_id": "05/2022/QH15",

                "article_id": "45"

            }

        ],

"answer": "Chính phủ"

    }

]


For free-text questions, the answer is free text and will be evaluated by human experts.


The principal evaluation measure is accuracy:

Accuracy = (the number of correctly answered questions) / (the total number of questions)
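A minimal sketch of this computation, assuming answers are compared by exact string match (free-text answers are instead judged by human experts, so this only applies to the automatically gradable question types):

```python
def accuracy(gold_answers, predicted_answers):
    """Fraction of questions answered correctly (exact string match).

    Both arguments map question_id -> answer string, e.g. "Đúng"/"Sai"
    for True/False questions or "A".."D" for multiple-choice questions.
    """
    correct = sum(1 for qid, gold in gold_answers.items()
                  if predicted_answers.get(qid) == gold)
    return correct / len(gold_answers)

gold = {"DS-101": "Đúng", "TN-102": "A"}
pred = {"DS-101": "Đúng", "TN-102": "B"}
# accuracy(gold, pred) == 0.5 (one of two questions correct)
```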



Submission Details

Participants are responsible for ensuring that their result files adhere to the format requirements. The format should be as follows:

Task 1: Legal Document Retrieval


[

    {

        "question_id": "TN-2",

        "relevant_articles": [

            {

                "law_id": "05/2022/QH15",

                "article_id": "95"

            }

        ]

    },

    ...

]


Task 2: Legal Question Answering


[

    {

        "question_id": "TL-3",

        "answer": <the answer>

    },

    ...

]


Participants must submit the files containing their systems' predictions for each task via the leaderboard or by email. For each task, participants may submit a maximum of 3 files, corresponding to 3 different settings or methods for that task.
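Before uploading, a quick sanity check of the result files against the formats above can catch malformed entries. The helper below is a hypothetical sketch based on the sample formats in this document; the leaderboard's own validation remains authoritative:

```python
import json  # used when loading a submission file, see the usage comment below

def check_task1(entries):
    """Sanity-check parsed Task 1 predictions: each entry needs a
    question_id and a non-empty relevant_articles list."""
    assert isinstance(entries, list), "submission must be a JSON list"
    for entry in entries:
        assert "question_id" in entry, "missing question_id"
        articles = entry.get("relevant_articles")
        assert isinstance(articles, list) and articles, \
            "relevant_articles must be a non-empty list"
        for art in articles:
            assert "law_id" in art and "article_id" in art, \
                "each article needs law_id and article_id"

def check_task2(entries):
    """Sanity-check parsed Task 2 predictions: each entry needs a
    question_id and an answer."""
    assert isinstance(entries, list), "submission must be a JSON list"
    for entry in entries:
        assert "question_id" in entry and "answer" in entry, \
            "each entry needs question_id and answer"

# Usage (hypothetical file name): check a file before uploading it.
# with open("task1_run1.json", encoding="utf-8") as f:
#     check_task1(json.load(f))
```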

Participants are required to submit a paper describing their method and experimental results. Papers should conform to the standards set out on the KSE 2023 webpage (Submission section). At least one author of each accepted paper must present it at the ALQAC workshop of KSE 2023.

Papers authored by the task winners will be included in the main KSE 2023 proceedings if the ALQAC organizers confirm the papers' novelty after the review process.