This tutorial provides a theoretical and practical introduction to neurosymbolic decision making, with a particular focus on Neurosymbolic Reinforcement Learning (NeSyRL) for autonomous agents. This emerging paradigm combines the strengths of symbolic reasoning (expressive abstraction and generalization) with the adaptability of deep RL under uncertainty. Participants will explore how symbolic task knowledge can be represented in various ways, from reward machines to structured logic programs, enabling declarative representations of actions, constraints, and preferences for decision making. The core of the tutorial presents a critical overview of leading NeSyRL frameworks that integrate these symbolic abstractions into RL algorithms, producing autonomous agents that balance interpretability, safe generalization, and data efficiency. Practical examples from both single- and multi-agent scenarios will complement the theoretical discussion, equipping attendees with methods and tools for neurosymbolic decision making. Finally, the tutorial will highlight current trends and open challenges that are shaping the future of this rapidly evolving research field.
Daniele Meli is an Assistant Professor in Computer Science at the University of Verona, Italy. His research lies at the intersection of reinforcement learning, symbolic reasoning, and autonomous agents, with a focus on neurosymbolic approaches for planning, decision making under uncertainty, and explainability in real-world systems. His work investigates how symbolic knowledge, such as logical constraints, task models, and high-level specifications, can be integrated with reinforcement learning and sequential decision-making frameworks to improve the safety, data efficiency, generalization, and interpretability of learned policies.
Celeste Veronese is a third-year PhD student in Computer Science at the University of Verona, Italy, advised by Daniele Meli and Prof. Alessandro Farinelli. Her research focuses on the use of (inductive) logic programming as a principled framework for acquiring, structuring, and representing symbolic knowledge, which can then be seamlessly integrated into deep reinforcement learning and planning processes. Her work investigates how learned and human-provided logical abstractions can guide decision making, improve data efficiency and generalization, and enhance the interpretability of learning-based planning systems.