PRELEARN

Prerequisite RElation LEARNing

Shared Task at EVALITA 2020


Overview

The proliferation of e-learning platforms, electronic textbooks and educational applications has highlighted the need for systems able to identify educational relations between learning concepts, so that intelligent agents can support both students and teachers in distance learning. Prerequisite relations are the most relevant among all educational relations, since they establish which sequence of concepts allows students to gain a full understanding of a subject. The need to infer prerequisite relations from educational texts inspired PRELEARN (Prerequisite Relation Learning), the first shared task on Automatic Prerequisite Learning. We invite participants to build models that identify prerequisite relations between pairs of concepts, and we will challenge these models with different experimental settings and scenarios.

News

14/12/2020 - Check the Proceedings

09/12/2020 - Workshop Program and Leaderboard are out!

05/10/2020 - Stay tuned for the results and leaderboard.

02/10/2020 - Evaluation window now closed!

25/09/2020 - Evaluation window starts today!

31/07/2020 - EVALITA goes virtual! See the new deadlines.

29/05/2020 - Download Training Data

16/03/2020 - We are online!

Motivation

Prerequisite relations are the most fundamental among all pedagogical relations, since they establish which sequence of concepts allows students to gain a full understanding of a domain: the order in which concepts are presented to the learner plays a crucial role in avoiding frustration and misunderstandings when approaching a new topic. For this reason, textbook authors and teachers take great care to organize the content of their learning materials accordingly and to highlight relevant connections to learners. Doing this automatically is still challenging from many perspectives.

The image below represents a concept map where concepts are nodes and edges represent prerequisite relations. In the example, "Aritmetica" is a prerequisite of "Potenza" since, if students want to understand what "Potenza" is, they have to know "Aritmetica" first. Hence, we define a prerequisite relation as a relation connecting a target concept and a prerequisite concept such that the latter has to be known in order to understand the former.
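To make the definition concrete, the minimal sketch below encodes such a concept map as a set of directed (prerequisite, target) pairs and checks the relation for the example above. The data structure and the function name are purely illustrative and do not reflect the official task format.

# Toy encoding of the concept map described above: each directed pair
# (prerequisite, target) states that the prerequisite concept has to be
# known before the target concept. Names and structure are illustrative.

concept_map = {
    ("Aritmetica", "Potenza"),  # "Aritmetica" must be understood before "Potenza"
}


def is_prerequisite(prerequisite: str, target: str) -> bool:
    """Return True if the map states that `prerequisite` must precede `target`."""
    return (prerequisite, target) in concept_map


print(is_prerequisite("Aritmetica", "Potenza"))  # True
print(is_prerequisite("Potenza", "Aritmetica"))  # False: the relation is directed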

The NLP community has tackled automatic prerequisite learning in order to integrate prerequisite relations in systems for, e.g., curriculum planning (Agrawal et al. 2016), reading list generation (Gordon et al. 2016; Fabbri et al. 2018), automatic assessment (Wang et al. 2016) and automatic educational content creation (Lu et al. 2019).

Given the limited availability of real educational materials, prerequisite learning systems are often trained and evaluated on manually annotated prerequisite relations between Wikipedia pages (Talukdar and Cohen 2012; Gasparetti et al. 2018; Zhou and Xiao 2019), and can be based on relational metrics or on machine learning approaches. Relational metrics are designed to capture the strength of the relation between co-occurring concepts and identify pairs of concepts obtaining low values as non-prerequisites. The RefD metric (Liang et al. 2015) is possibly the most popular among them: it measures how differently two concepts refer to each other by considering the Wikipedia links of the pages associated with the two concepts. Prerequisite learning from textbooks is addressed in Adorni et al. 2019, which presents a method based on burst analysis combined with temporal reasoning to identify possible propaedeutic relations and compares it with a concept co-occurrence metric. Among machine learning approaches, we distinguish between those that exploit link-based features (e.g. Liang et al. 2015, Gasparetti et al. 2018), those that rely on text-based features only (e.g. Miaschi et al. 2019, Alzetta et al. 2019), and those that combine the two (Liang et al. 2018).
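As a rough illustration of the intuition behind RefD, the sketch below computes the metric with the simple equal-weighting scheme described in Liang et al. 2015, assuming we already have, for each concept, the set of concepts linked from its Wikipedia page. The toy data and the function names are ours and purely illustrative, not part of the task or of the original implementation.

# Minimal sketch of the RefD intuition (Liang et al. 2015) with equal weights.
# `links` maps each concept to the set of concepts linked from its Wikipedia
# page; this toy structure and the function names are illustrative only.

def refd(a: str, b: str, links: dict) -> float:
    """Positive scores suggest that b is a prerequisite of a; negative the reverse."""

    def refers(source: str, target: str) -> float:
        # Fraction of the concepts linked from `source` whose pages link to `target`.
        neighbours = links.get(source, set())
        if not neighbours:
            return 0.0
        return sum(1 for c in neighbours if target in links.get(c, set())) / len(neighbours)

    return refers(a, b) - refers(b, a)


toy_links = {
    "Potenza": {"Aritmetica", "Moltiplicazione"},
    "Moltiplicazione": {"Aritmetica", "Numero"},
    "Aritmetica": {"Numero"},
    "Numero": set(),
}

# A positive score hints that "Aritmetica" should be learned before "Potenza".
print(refd("Potenza", "Aritmetica", toy_links))  # 0.5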

Although the above systems report good results, they agree that prerequisite relations are difficult to identify, even for human annotators. Datasets manually enriched by experts show that human judgments about prerequisite identification can vary considerably (Chaplot et al. 2016, Gordon et al. 2016, Fabbri et al. 2018, Alzetta et al. 2018), depending on several factors, including the subjectivity of annotators and the type and complexity of the document being annotated. Moreover, the difficulty in providing a clear definition of such a relation makes the identification of features for automatic prerequisite learning particularly complex, especially in cross-domain settings (Wang et al. 2016, Miaschi et al. 2019). This task is designed to address this open issue by challenging systems in different settings and scenarios.


Contact the organizers: prelearn.evalita2020@gmail.com


HOW TO CITE

ITA-PREREQ dataset:

Miaschi, Alessio, et al. "Linguistically-driven strategy for concept prerequisites learning on italian." Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications. 2019.


@inproceedings{miaschi2019linguistically,
  title={Linguistically-driven strategy for concept prerequisites learning on italian},
  author={Miaschi, Alessio and Alzetta, Chiara and Cardillo, Franco Alberto and Dell'Orletta, Felice},
  booktitle={Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications},
  pages={285--295},
  year={2019}
}


PRELEARN shared task:

Alzetta, Chiara, et al. "PRELEARN@EVALITA 2020: Overview of the Prerequisite Relation Learning Task for Italian." Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020), Online. CEUR.org, 2020.


@inproceedings{alzetta2020prelearn,
  title = {PRELEARN@EVALITA 2020: Overview of the Prerequisite Relation Learning Task for Italian},
  author = {Alzetta, Chiara and Miaschi, Alessio and Dell'Orletta, Felice and Koceva, Frosina and Torre, Ilaria},
  booktitle = {Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020)},
  editor = {Basile, Valerio and Croce, Danilo and Di Maro, Maria and Passaro, Lucia C.},
  publisher = {CEUR.org},
  year = {2020},
  address = {Online}
}