Workshop on Causal Neuro-symbolic Artificial Intelligence
About
As artificial intelligence (AI) systems grow increasingly complex and are integrated into critical decision-making processes, there is a growing need to ensure that these systems are interpretable, robust, and capable of understanding causality, not just correlation. The field of Causal Neuro-symbolic AI seeks to bridge the gap between data-driven learning and symbolic reasoning to create systems that can both learn and reason about causes and effects within a structured framework.

Causal Neuro-symbolic AI seeks to 1) enrich neuro-symbolic AI systems with explicit representations of causality, 2) integrate causal knowledge with domain knowledge, and 3) enable the use of neuro-symbolic AI techniques for causal AI tasks. Explicit causal representations yield insights that predictive models may fail to infer from observational data. They can also assist people in decision-making scenarios where discerning the cause of an outcome is necessary to choose among various interventions.

The emerging field of Causal Neuro-symbolic AI represents a convergence of causal reasoning and neuro-symbolic AI. The workshop on Causal Neuro-symbolic AI (CausalNeSy) aims to bring together the growing community of researchers and practitioners from academia and industry who are looking to combine the benefits of causality with neuro-symbolic AI, and to share their experiences and insights on applying these techniques to real-world problems.
Topics of interest
We invite researchers, practitioners, and industry experts to submit original research papers, surveys, and case studies addressing the following themes (including but not limited to):
1. Core Methods and Frameworks
Causal Knowledge Representation: Approaches for representing causal knowledge using neuro-symbolic AI methods.
Causal Reasoning in Neuro-symbolic Systems: Methods for implementing causal reasoning within neuro-symbolic frameworks.
Neuro-symbolic Methods for Causal Structure Learning: Techniques for learning causal structures within neuro-symbolic frameworks.
Causal Representation Learning: Approaches to causal representation learning using neuro-symbolic AI.
2. Integration of Techniques and Paradigms
Causal Knowledge Graph Embeddings: Utilizing embeddings of causal knowledge for graph completion and causal discovery.
Causal Reasoning and Neural Networks: Techniques for harmonizing causal symbolic reasoning with neural networks to enhance AI interpretability and robustness.
Integration of Causality, Logic, and Probability: Approaches that combine causality, logic, and probabilities in neuro-symbolic AI.
Causal Generative Models: Development and application of causal generative models for machine learning.
Causal Neuro-symbolic AI in Large Language Models (LLMs): Integrating causality into LLMs for enhanced reasoning capabilities.
Causal Foundation Models: Development of causal foundation models within neuro-symbolic AI.
3. Explanation, Trust, Fairness, and Accountability
Neuro-symbolic Methods for Causal Explanation: Techniques for explaining causes and their effects using neuro-symbolic methods.
Fairness, Accountability, Transparency, and Explainability: Ensuring fairness, accountability, transparency, and explainability in Causal Neuro-symbolic AI systems.
Trustworthiness, Grounding, Instructability, and Alignment: Addressing issues related to the trustworthiness, grounding, instructability, and alignment of Causal Neuro-symbolic AI systems.
4. Applications
Causal Discovery in Complex Environments: Strategies for discovering causal relationships in complex environments using neuro-symbolic AI.
Causal Neuro-symbolic AI in Use: Real-world applications of causal and neuro-symbolic AI methods in domains such as healthcare, finance, autonomous systems, natural language processing, etc.
Important dates
Workshop paper submissions due: 6 March 2025
Notification to authors: 3 April 2025
Camera-ready version due: 17 April 2025
Early-bird registration to the workshop: TBA
Workshop dates: 1-2 June 2025
All deadlines are at 23:59 Anywhere on Earth (AoE).
Submission guidelines
Submission site: OpenReview
We welcome original contributions in three types of submissions:
1. Full research papers (12-14 pages)
2. Position papers (6-8 pages)
3. Short papers (4-6 pages)
A skilled, multidisciplinary program committee will evaluate all submitted papers, focusing on the originality of the work and its relevance to the workshop's themes.
Submissions must adhere to the CEUR workshop template and will undergo a double-blind review process.
Selected papers will be presented at the workshop and published open access in the workshop proceedings through CEUR, where they will be available as archival content.
Workshop organizers
Utkarshani Jaimini
University of South Carolina
Cory Henson
Bosch Center for AI
Amit Sheth
University of South Carolina
Yuni Susanti
Fujitsu Inc
Program Committee
TBA