NATURAL LOGIC MEETS MACHINE LEARNING III

Workshop @ESSLLI 2022, August 8-12 2022


If you would like to join ESSLLI and NALOMA in person, registration is now open! Please keep in mind that early registration closes on June 5th. If you would like to join NALOMA online (ESSLLI itself cannot be attended online), you do not have to register, but you may use this link to join.


*****************

After the successful completion of NALOMA'20 and NALOMA'21, NALOMA'22 seeks to continue the series and attract exciting contributions. Notably, this year NALOMA expands its focus to the whole field of Natural Language Understanding (NLU). The workshop aims to bridge the gap between ML/DL and symbolic/logic-based approaches to NLU, with a particular focus on hybrid approaches. NALOMA'22 will take place August 8-12, 2022, during ESSLLI 2022, organized at the National University of Ireland Galway (a hybrid format is planned).


Recently, there has been a surge of interest in tasks targeting NLU and reasoning. In particular, the task of Natural Language Inference (NLI) has received immense attention. This attention has led to the creation of massive datasets and the training of large, deep models that reach human-level performance (e.g., Liu et al. 2019, Pilault et al. 2020). The world knowledge encapsulated in such models and their robustness enable them to handle large and diverse data efficiently. However, it has been repeatedly shown that such models fail to solve basic inferences and lack generalization power. When presented with differently biased data (Poliak et al. 2018, Gururangan et al. 2018) or with inferences involving hard linguistic phenomena (e.g., Dasgupta et al. 2018, Nie et al. 2018, Naik et al. 2018, Glockner et al. 2018, Richardson et al. 2020, McCoy et al. 2019, Yanaka et al. 2020, to name only a few), they struggle to reach baseline performance. Explicitly detecting and addressing these weaknesses, e.g., through appropriate datasets, is only partly possible, because such models act as black boxes with low explainability.

At the same time, another strand of research has targeted more traditional approaches to reasoning, employing some kind of logic or semantic formalism. Such approaches excel in precision, especially on inferences involving hard linguistic phenomena such as negation, quantifiers, and modals (e.g., Bernardy and Chatzikyriakidis 2017, Yanaka et al. 2018, Chatzikyriakidis and Bernardy 2019, Hu et al. 2019, Abzianidze 2020, to name only a few). However, they suffer from inadequate world knowledge and lower robustness, making it hard for them to compete with state-of-the-art models.

Thus, lately, a third research direction has sought to close the gap between the two approaches by employing hybrid methods (e.g., Liang et al. 2017, Kalouli et al. 2020, Ebrahimi et al. 2021), combining the strengths of each approach and mitigating their weaknesses.


We see such hybrid research efforts as promising not only for overcoming the described challenges and advancing the field, but also for contributing to the symbolic vs. deep learning "debate" that has emerged in the field of NLU. We would like to further promote this research direction and foster fruitful dialog between the two disciplines. This workshop aims to bring together researchers working on hybrid methods in any subfield of NLU, including but not limited to NLI, QA, Sentiment Analysis, Dialog, Machine Translation, and Summarization. The workshop is also suitable for researchers working in one of the two disciplines who are interested in moving in the hybrid direction. The novelty of NALOMA'22 compared to NALOMA'20 and NALOMA'21 is that it is decoupled from NLI and welcomes a broader focus on other NLU tasks, as we believe that the main principles of hybrid methodology are shared across subfields.