2nd Causal Neuro-Symbolic Artificial Intelligence Workshop (CausalNeSy):
Toward Agentic LLMs with Neuro-Symbolic and Graph-Based Reasoning
Date and Time: May 10-11, 2026 (exact time and date TBD)
Place: Dubrovnik, Croatia
As Artificial Intelligence (AI) systems increasingly influence high-stakes decisions in healthcare, manufacturing, robotics, and autonomous systems, there is a growing need for models capable of causal, interpretable, and knowledge-grounded reasoning. In parallel, the rise of agentic AI and large language models (LLMs) has created new opportunities to combine semantic knowledge representations with powerful generative and multimodal capabilities.
Causal Neuro-Symbolic AI has emerged as a unifying paradigm that integrates neural learning, symbolic reasoning,
knowledge graphs, and causal inference to create AI systems that can learn, reason, act, and explain through explicit causal and semantic structures. This includes enriching LLMs and agentic systems with causal representations, aligning causal models with knowledge graphs for interpretable decision-making, and enabling multimodal causal inference across text, image, and video.
This full-day workshop, the 2nd edition of the CausalNeSy Workshop, following its successful debut at the 22nd Extended Semantic Web Conference (ESWC), aims to unite the semantic web, knowledge graph, causality, neuro-symbolic and agentic AI communities. The workshop will include keynotes, paper presentations, panel discussions, demos, and a multimodal causal reasoning agentic challenge.
Topics include, but are not limited to: KG-based reasoning for agentic LLMs, causal grounding of LLM agents, KG-enhanced autonomous agents, counterfactual reasoning, semantic alignment for agentic systems, and benchmarking semantic causal agentic AI.
The goal is to build a cross-disciplinary community advancing agentic, causal, neuro-symbolic, and knowledge-graph-driven AI within the semantic web ecosystem.
We invite researchers, practitioners, and industry experts to submit original research papers, surveys, and case studies addressing the following themes (including but not limited to):
1. Core Methods and Frameworks
Causal reasoning in agent architectures
Integration of causality, logic, and probability for verifiable agent behavior
Methods for implementing causal and counterfactual reasoning within LLM-powered agent frameworks (e.g., ReAct, Reflexion, Chain-of-Thought)
Techniques for discovering causal structures in agent systems that interact with dynamic environments
Approaches to causal representation learning designed for agentic AI, LLMs, and other large foundation models
Causal reasoning pipelines for LLMs and hybrid agentic systems
Integrating causal networks with domain knowledge
Causal reasoning and inference (e.g., counterfactual reasoning, deductive causal reasoning, probabilistic causal reasoning)
2. Integration of Techniques and Paradigms
Video-based causal reasoning for agent perception, planning, and intervention prediction
Image-based causal discovery (e.g., cause-effect relationships from visual inputs) for grounding LLM and VLM agents
Multimodal causal QA and explanation for agentic systems (e.g., CLEVRER, CRAFT, NExT-QA)
Causal retrieval-augmented generation (KG-RAG, GraphRAG) for LLM-driven agents
Vision, language, causality integration for embodied or tool-using LLM agents
Causal KG embeddings for autonomous reasoning, planning, and action selection
Causal Reasoning and Neural Networks: Techniques for harmonizing causal symbolic reasoning with neural networks to enhance AI interpretability and robustness.
Neuro-symbolic reasoning layers inside LLMs and agentic pipelines
Integration of Causality, Logic, and Probability: Approaches that combine causality, logic, and probability in neuro-symbolic AI
Causal generative models for simulation, decision-making, and agent training
Causal foundation models and causally grounded agentic LLM architectures
3. Generative AI and LLMs
Prompt engineering, tool use, and program synthesis for causal tasks in agentic LLMs
Causal grounding and verifiable action generation in foundation models
Causal audits, counterfactual probes, and stress tests for autonomous LLM agents
Symbolic and KG-based alignment for ensuring reliable agentic behavior
Instruction tuning for causal reasoning, explanation, and intervention planning
Causal planning and decision-making modules embedded into LLM-based agents
4. Explanation, Trust, Fairness, and Accountability
Human-in-the-loop causal evaluation for agent actions and LLM-generated decisions
Neuro-symbolic methods for causal explanation in semantic and multimodal agents
Fairness, accountability, and transparency in causally grounded agentic LLM systems
Trustworthiness, grounding, instructability, and semantic alignment of autonomous agents
Verification, safety constraints, and causal guarantees for agentic AI behaviors
5. Applications
Generative and interactive agents in simulation, digital twins, and socio-technical systems
Causal discovery in complex, multimodal, and dynamic environments using LLM-driven pipelines
Real-world deployments of causal neuro-symbolic and agentic AI in healthcare, finance, manufacturing, robotics, and scientific discovery
KG-driven agentic systems for decision support, diagnostics, planning, and automated reasoning
Autonomous LLM agents that integrate KG, causality, and neuro-symbolic reasoning for grounded, domain-specific tasks
Workshop paper submissions due: 3 March 2026
Notification to authors: 31 March 2026
Camera-ready version due: 15 April 2026
Workshop date: May 10-11, 2026 (exact date TBD)
All deadlines are 23:59 Anywhere on Earth (AoE)
Submission site: OpenReview
We welcome original submissions of the following types:
1. Research papers (6-10 pages); position papers are invited as well
2. Short papers / demo papers (4-6 pages)
An experienced, multidisciplinary program committee will evaluate all submitted papers, focusing on the originality of the work and its relevance to the workshop's theme.
Submissions must follow the CEUR workshop template and will undergo a double-blind review process.
Selected papers will be presented at the workshop and published open access in the CEUR workshop proceedings, where they will be available as archival content.
Note: Depending on the number of accepted papers, the workshop proceedings may also be published as part of a Springer proceedings volume.
Utkarshani Jaimini
University of Michigan-Dearborn
Chathurangi Shyalika
University of South Carolina
Cory Henson
Bosch Center for AI
Amit Sheth
University of South Carolina
TBA