NATURAL LOGIC MEETS MACHINE LEARNING IV

Workshop @IWCS2023


The task of Natural Language Inference (NLI) has received immense attention recently. This attention has led to the creation of massive datasets and the training of large, deep models reaching human performance. The world-knowledge encapsulated in such models enables them to handle large and diverse data efficiently. However, it has been repeatedly shown that such models fail at some basic inferences and lack generalization power. When presented with differently biased data or with inferences containing hard linguistic phenomena, their performance drops below the baseline. Explicitly detecting and fixing these weaknesses is only partly possible. At the same time, another strand of research has targeted more traditional approaches to reasoning, employing some kind of logic or semantic formalism. Such approaches excel in precision, especially on inferences involving hard linguistic phenomena, e.g., negation, quantifiers, modals, etc. However, they suffer from inadequate world-knowledge and lower robustness, making it hard for them to compete with state-of-the-art models. Thus, a third research direction seeks to close the gap between the two approaches by employing hybrid methods.


We see such hybrid research efforts as promising not only for overcoming the described challenges and advancing the field, but also for contributing to the symbolic vs. deep learning "debate" that has emerged in the field of NLU. We would like to further promote this research direction and foster fruitful dialog between the two disciplines. This workshop aims to bring together researchers working on hybrid methods in any subfield of NLU, including but not limited to NLI, QA, Sentiment Analysis, Dialog, Machine Translation, Summarization, etc.