Reasoning with Natural Language Explanations


 Tutorial at EMNLP 2024, Miami, Florida.

November 15, 2024 at 14:00

Overview

Explanation constitutes an archetypal feature of human rationality, underpinning learning and generalisation, and serving as one of the media through which scientific discovery and communication take place.

Due to the importance of explanations in human reasoning, an increasing amount of research in Natural Language Inference (NLI) has started reconsidering the role that explanations play in learning and inference, attempting to build explanation-based NLI models that can effectively encode and use natural language explanations on downstream tasks.  Research in explanation-based NLI, however, presents specific challenges and opportunities, as explanatory reasoning reflects aspects of both material and formal inference. This type of reasoning demands integrating linguistic, commonsense, and domain-specific knowledge with abstract inferential processes like analogy, deduction, and abduction.

In this tutorial, we provide a comprehensive introduction to the field of explanation-based NLI, grounding the discussion in the epistemological-linguistic foundations of explanations and systematically describing the main architectural trends and evaluation methodologies that can be used to build systems capable of explanatory reasoning.

Tutorial paper: https://aclanthology.org/2024.emnlp-tutorials.4/ 

Slides: https://www.marcovalentino.net/tutorial_emnlp_2024/slides.pdf

Code and Python notebooks: https://github.com/neuro-symbolic-ai/reasoning_with_nle_emnlp_2024

The full recording of the tutorial can be accessed here!

Tutorial Content

Epistemological-Linguistic Foundations.  One of the main objectives of the tutorial is to provide a theoretically grounded foundation for explanation-based NLI, investigating the notion of explanation as an object of scientific interest for language and inference, from both epistemological and linguistic perspectives. To this end, we will present a systematic survey of the contemporary discussion in Philosophy of Science around the notion of a scientific explanation, attempting to shed light on the nature and function of explanatory arguments and their constituent elements. Building on the survey, we will ground these theoretical accounts for explanation-based NLI, identifying the main features of explanatory arguments in corpora of natural language explanations.

Resources & Evaluation Methods. In order to build NLI models that can reason through the generation of natural language explanations, systematic evaluation methodologies must be developed. To this end, the tutorial will review the main resources and benchmarks in the field, together with the main evaluation metrics adopted to assess the quality of natural language explanations. Evaluating the quality of explanations is, in fact, a challenging problem, as it requires accounting for multiple concurrent properties.
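As a concrete illustration of what explanation evaluation can involve, the sketch below computes a simple precision/recall/F1 score for the explanation sentences a model selects against a gold explanation set, in the style of explanation-retrieval evaluation on corpora such as WorldTree. This is a minimal, illustrative example rather than one of the specific metrics covered in the tutorial, and all identifiers are hypothetical.

def explanation_f1(predicted: set[str], gold: set[str]) -> dict[str, float]:
    # Compare the identifiers of predicted vs. gold explanation sentences.
    if not predicted or not gold:
        return {"precision": 0.0, "recall": 0.0, "f1": 0.0}
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: the model selected two of the three gold facts plus one spurious fact.
print(explanation_f1({"f1", "f2", "f9"}, {"f1", "f2", "f3"}))
# precision = recall = f1 ≈ 0.67

Overlap-based scores like this capture only one of the concurrent properties mentioned above, which is why the tutorial treats evaluation as a multi-dimensional problem.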

Explanation-Based Learning & Inference.  We will review the key architectural patterns and modelling strategies for reasoning and learning over natural language explanations. In particular, we will focus on different paradigms, including Multi-Hop Reasoning & Retrieval-Based Models, where, given an external knowledge base, NLI models are required to select, collect and link the relevant knowledge to arrive at a final answer, and Natural Language Explanation Generation, which uses generative models to support explanatory inference. In this context, we will discuss the advent of Large Language Models (LLMs), which have made it possible to elicit explanatory reasoning via specific prompting techniques and in-context learning.
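To make the in-context learning idea concrete, the following sketch builds an explain-then-answer prompt from a worked example and passes it to a generic generation function. The exemplar, the prompt format and generate_fn are illustrative assumptions rather than the specific techniques presented in the tutorial.

# Eliciting explanatory reasoning from an LLM via in-context learning:
# a worked example pairs an answer with a natural language explanation,
# and the model is asked to produce both for a new question.

EXEMPLARS = [
    {
        "question": "Which property of a mineral is tested by scratching it?",
        "explanation": "Scratching measures a mineral's resistance to being "
                       "scratched; hardness is the property describing this resistance.",
        "answer": "hardness",
    },
]

def build_prompt(question: str) -> str:
    parts = []
    for ex in EXEMPLARS:
        parts.append(f"Question: {ex['question']}\n"
                     f"Explanation: {ex['explanation']}\n"
                     f"Answer: {ex['answer']}\n")
    parts.append(f"Question: {question}\nExplanation:")
    return "\n".join(parts)

def explain_and_answer(question: str, generate_fn) -> str:
    # generate_fn is a placeholder for whichever LLM client is used,
    # taking a prompt string and returning the model's continuation.
    return generate_fn(build_prompt(question))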

Semantic Control for Explanatory Reasoning. Controlling the explanation generation process in neural-based models is particularly critical when modelling complex reasoning tasks. In this tutorial, we will review emerging trends that combine neural and symbolic approaches to improve semantic control in the explanatory reasoning process and provide formal guarantees on the quality of the explanations. These methods aim to integrate the content flexibility of language models (instrumental for supporting material inferences) with formal inference properties.  In particular, we will focus on the following trends: (1) leveraging explanatory inference patterns for explanation-based NLI; (2) constraint-based optimisation for explanation-based NLI; (3) formal-geometric inference controls over latent spaces; (4) LLM-symbolic architectures.
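One recurring LLM-symbolic pattern in this line of work is a generate-and-verify loop: a language model proposes candidate explanations, and a symbolic component accepts a candidate only if it satisfies formal constraints. The sketch below is a minimal, hypothetical illustration of this pattern; the concrete architectures discussed in the tutorial differ in how generation and verification interact.

from typing import Callable, Optional

def generate_and_verify(question: str,
                        generate_fn: Callable[[str], list[str]],
                        verify_fn: Callable[[str, str], bool],
                        max_candidates: int = 5) -> Optional[str]:
    # Return the first candidate explanation that passes symbolic verification.
    candidates = generate_fn(question)[:max_candidates]
    for explanation in candidates:
        # verify_fn stands in for a symbolic checker, e.g. a constraint solver
        # or theorem prover validating that the premises support the conclusion.
        if verify_fn(explanation, question):
            return explanation
    return None  # no candidate satisfied the formal constraints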


Organizers

Marco Valentino, Idiap Research Institute

Marco is a postdoc at the Idiap Research Institute. His research activity lies at the intersection of natural language processing, reasoning, and explanation, investigating the development of AI systems that can support explanatory natural language reasoning in complex domains (e.g., mathematics, science, biomedical and clinical applications).

André Freitas, Idiap Research Institute & University of Manchester

André leads the Neuro-symbolic AI Lab at the University of Manchester and the Idiap Research Institute. His main research interests centre on developing AI methods that support abstract, flexible and controlled reasoning in service of AI-augmented scientific discovery.