NATURAL LOGIC MEETS MACHINE LEARNING IV

Workshop @ESSLLI 2022, August 8–12, 2022

INVITED SPEAKERS

Claire Gardent, Director of Research (first class), CNRS, LORIA, Nancy

Explaining Omissions 

Joint work with Juliette Faille (CNRS/LORIA and University of Lorraine, Nancy, France), Albert Gatt (Utrecht University, The Netherlands), and Quentin Brabant, Gwénolé Lecorvé and Lina Rojas-Barahona (Orange, Lannion)


What kinds of errors do neural generation models make, and why?


In this talk, I will present our work on assessing, analysing and explaining the output of text generation models that are grounded in Knowledge Graphs (KG).


Focusing on KG-to-Text encoder-decoder models, i.e., generation models which aim to verbalise the content of a Knowledge Graph, I will discuss missing information, i.e., information that is present in the input but absent from the output. I will first introduce a novel evaluation metric for assessing the extent to which generation models omit input information. I will show that, while this metric correlates with human scores, the correlation varies with the specifics of the human evaluation setup, which suggests that an automatic metric might be more reliable than human evaluation measures, as it is less subjective and more focused on correct verbalisation of the input. Using both a parametric and a non-parametric probe, I will then demonstrate that omissions are already "visible" in the encoder representations, i.e., can be traced back to the encoder.


In the second part of the talk, I will discuss conversational question generation and show that grounding dialogue in knowledge allows for a detailed analysis of model behaviour in terms of well-formedness, relevance, semantic adequacy and dialogue coherence.


Johan Bos, Professor of Computational Linguistics, University of Groningen

Semantic Parsing with Large Language Models 

I will present how large language models can be integrated into the traditional realm of computational semantics, namely associating a text snippet of natural language with a logical form, a task commonly referred to as semantic parsing. Initially, I will introduce a variable-free logical language for describing meanings that is suitable for machine learning purposes. Subsequently, I will demonstrate how large language models can be used to improve semantic parsing through fine-tuning and prompt engineering. Lastly, I will discuss the profound ethical and methodological implications that arise from the use of large language models in tasks such as semantic parsing.