Our tutorial aims to explore the synergies between causality and large models, also known as “foundation models,” which have demonstrated remarkable capabilities in data mining applications across healthcare, finance, and education. However, there are increasing concerns about the trustworthiness and interpretability of the complex “black-box” LLMs behind this promising performance. A growing community of researchers is turning to causality as a more principled framework to address these concerns, better understand the behavior of large models, and improve their reliability and interpretability. Specifically, this tutorial will focus on three directions: causal agents for decision-making, LLMs for causality, and benefiting LLMs with causality. In addition, we introduce open challenges and potential future directions for this area. We hope this tutorial will stimulate new ideas on this topic and facilitate the development of causality-aware large models.
Tutorial Outline
Introduction (15 Min)
  The background of causality
  Motivation for causal LLMs
  A taxonomy of methods: causal agents, LLMs for causality, and causality for LLMs
Causal agents for decision-making (35 Min)
  A brief introduction to the agent decision-making problem
  Traditional research on agents
  Advanced LLM agents
  Challenges: why introduce causal understanding?
  Causality for better understanding and for better downstream decisions
  Causality for decision-making in LLMs
    Challenges of LLMs
    Understanding causality in LLMs
Q&A (5 Min)
LLMs for causality (35 Min)
  LLMs for causal discovery
    Traditional causal discovery methods
    Discovering causal relationships from text, with or without tabular data
    Providing prior knowledge for traditional causal discovery algorithms
    Complex scenarios such as temporal data
    Challenges and future directions: hidden variables and rich text data
  LLMs for causal inference
    Motivation
    Causal inference in natural language
    Generating counterfactuals
    Challenges and future directions
Break (5 Min)
Benefiting LLMs with causality (35 Min)
  Enhancing the reasoning ability of LLMs
  Mitigating fairness and bias issues in LLMs
  Improving the interpretability and safety of LLMs
Open problems, future directions, and conclusions (15 Min)