10:30 - 12:00
Morning Session
10:30 - 10:40
Opening and Welcome Session (by the Organizers)
10:40 - 11:20
Keynote Session: Marco De Luca (Neo4j)
Marco De Luca is Principal Solutions Architect at Neo4j and an enthusiastic "Graphista". With almost 30 years of IT experience, he has broad and deep knowledge across many areas, including artificial intelligence, databases, cloud solutions, storage technologies, networks, and the optimisation of IT services. Always at the cutting edge of technology, he is highly motivated and enthusiastic about advising customers, partners, and colleagues, as well as designing and teaching IT solutions. He collaborates with major European customers across various industries to help them develop their IT services and environments.
This session highlights how Generative AI and Knowledge Graphs are being applied to real-world challenges across industries. Through selected use cases, we’ll showcase the business value these technologies can deliver—from enhanced decision-making to smarter data access.
We’ll also provide a high-level perspective on common challenges and strategic approaches, drawing from hands-on experience in the field. While not offering complete solutions, this session is designed to spark ideas, share lessons learned, and inspire new applications in both industry and research.
11:20 - 12:00
Presentations from Accepted Papers
The digital transformation of engineering systems demands scalable and precise identifier management. Information about a single asset is often fragmented across numerous systems and organizational boundaries, with each context using its own identifiers. This paper addresses this challenge by introducing a formal framework for semantic reference. We build on prior work by introducing two key concepts: reference contexts, which formalize the boundary conditions for identifier interpretation, and public models, which serve as curated, shared layers for anchoring reference. We define reference equality as the symmetric, transitive closure over typed proxy relations that link identifiers across these contexts. Finally, we demonstrate how this semantic infrastructure provides essential grounding for Large Language Model (LLM) workflows, enabling reliable reference disambiguation and traceable generation. This approach bridges the gap between human-readable descriptors, machine-readable identifiers, and logic-based models, supporting hybrid reasoning in industrial knowledge systems.
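For a concrete picture of the closure idea, the sketch below (our illustration, not the authors' code) shows one way reference equality over typed proxy relations can be computed with a union-find structure over context-scoped identifiers; the class, the contexts, and the example identifiers are all hypothetical.

```python
from typing import Tuple

Node = Tuple[str, str]  # (reference context, identifier)

class ReferenceResolver:
    """Union-find over context-scoped identifiers; two identifiers are
    reference-equal iff they end up in the same equivalence class."""

    def __init__(self):
        self.parent = {}   # Node -> representative Node
        self.links = []    # recorded typed proxy relations, for traceability

    def _find(self, node: Node) -> Node:
        self.parent.setdefault(node, node)
        while self.parent[node] != node:
            self.parent[node] = self.parent[self.parent[node]]  # path halving
            node = self.parent[node]
        return node

    def add_proxy(self, ctx_a, id_a, ctx_b, id_b, relation="proxyOf"):
        """Record a typed proxy relation linking two context-scoped identifiers."""
        self.links.append(((ctx_a, id_a), relation, (ctx_b, id_b)))
        root_a, root_b = self._find((ctx_a, id_a)), self._find((ctx_b, id_b))
        if root_a != root_b:
            self.parent[root_b] = root_a  # union: symmetry and transitivity follow

    def same_reference(self, ctx_a, id_a, ctx_b, id_b) -> bool:
        return self._find((ctx_a, id_a)) == self._find((ctx_b, id_b))

# Example: one physical pump known under three identifiers in three contexts.
r = ReferenceResolver()
r.add_proxy("ERP", "MAT-0042", "CMMS", "PUMP-7")
r.add_proxy("CMMS", "PUMP-7", "PublicModel", "urn:asset:pump:7")
assert r.same_reference("ERP", "MAT-0042", "PublicModel", "urn:asset:pump:7")
```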
The growing threat of disinformation and misinformation across digital platforms has intensified the demand for systems capable of producing verifiable and trustworthy outputs. With the widespread adoption of Large Language Models (LLMs) for a wide variety of tasks, the need for accurate, fact-verifiable answers grows daily. Graph Retrieval-Augmented Generation (GraphRAG) has become a powerful approach for solving complex tasks that require factual context to deliver accurate and explainable answers. However, the Knowledge Bases (KBs) used to provide factual and contextual knowledge comprise thousands or millions of statements, far exceeding the input size an LLM can handle, which is typically bounded by the model's input token limit. This work addresses fact-checking by injecting Knowledge Graph Embedding (KGE) vector representations into LLMs using a Retrieval-Augmented Generation (RAG) approach to obtain more accurate results. Our experiments show notable quality differences between two vector representations and two KB construction methods.
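As a rough illustration of the retrieval step described above (a sketch under our own assumptions, not the paper's implementation), the snippet below retrieves the KB statements whose KGE vectors are closest to a claim and assembles a fact-checking prompt; the random vectors merely stand in for trained embeddings such as TransE.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve_evidence(claim_vec, triple_vecs, triples, k=5):
    """triple_vecs: precomputed KGE vectors, one per KB statement."""
    scored = sorted(zip(triples, triple_vecs),
                    key=lambda pair: cosine(claim_vec, pair[1]), reverse=True)
    return [triple for triple, _ in scored[:k]]

def build_prompt(claim, evidence):
    facts = "\n".join(f"- {s} {p} {o}" for s, p, o in evidence)
    return (f"Claim: {claim}\nKnowledge base facts:\n{facts}\n"
            "Answer SUPPORTED or REFUTED and cite the facts used.")

# Toy usage: random vectors stand in for trained KGE embeddings.
rng = np.random.default_rng(0)
triples = [("Berlin", "capitalOf", "Germany"), ("Paris", "capitalOf", "France")]
triple_vecs = [rng.normal(size=64) for _ in triples]
claim_vec = triple_vecs[0] + rng.normal(scale=0.05, size=64)  # pretend encoder output
print(build_prompt("Berlin is the capital of Germany.",
                   retrieve_evidence(claim_vec, triple_vecs, triples, k=1)))
```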
In the context of Industry 4.0, effective maintenance is critical for minimizing manufacturing downtime and ensuring production reliability. While early Graph Retrieval-Augmented Generation (Graph RAG) frameworks enhance contextual understanding and accuracy in maintenance chatbots, Knowledge Graph (KG) construction in manufacturing remains tedious and error-prone. To address this, we propose a semi-automated KG construction pipeline that integrates rule-based methods, Small Language Models (SLMs), and Large Language Models (LLMs), significantly reducing the manual effort of KG construction. We evaluate the constructed KG in a Graph RAG setting on real-world maintenance scenarios from a production line. Our results highlight the potential to significantly enhance the efficiency and intelligence of manufacturing maintenance workflows. Our work aims to spark discussion on efficient Graph RAG frameworks for maintenance scenarios in manufacturing.
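To make the staged pipeline idea tangible, here is a minimal sketch (not the paper's pipeline): a cheap rule-based pass extracts triples from maintenance logs and falls back to placeholder SLM/LLM steps. The function names, the log format, and the error-code pattern are assumptions for illustration only.

```python
import re

def rule_based_extract(log_line):
    """Deterministic pass: pull (machine, 'hasError', code) triples."""
    m = re.search(r"(?P<machine>[A-Za-z]+-\d+).*?error\s+(?P<code>E\d+)", log_line, re.I)
    return [(m["machine"], "hasError", m["code"])] if m else []

def slm_extract(log_line):
    """Placeholder for a Small Language Model call that extracts triples
    from free text the rules miss; returns nothing in this sketch."""
    return []

def llm_validate(triples):
    """Placeholder for an LLM review step that filters implausible triples."""
    return triples

def build_kg(log_lines):
    triples = []
    for line in log_lines:
        triples.extend(rule_based_extract(line) or slm_extract(line))
    return set(llm_validate(triples))

print(build_kg(["2024-05-02 PRESS-12 stopped with error E404 on hydraulic unit"]))
# {('PRESS-12', 'hasError', 'E404')}
```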
12:00 - 13:00
Conference Lunch Break
13:00 - 14:30
Afternoon Session
13:00 - 13:40
Presentations from Accepted Papers
The emergence of large language models has significantly advanced the feasibility of automated problem-solving using agents. However, despite promising results, these systems often function as black boxes, raising concerns about their ability to comply with requirements due to opaque decision-making processes. To mitigate these issues, we introduce a multi-agent system powered by language models. The system segments the decision-making process into three agent-driven stages: proposing queries, identifying norms, and retrieving facts, while delegating final judgment to a logical reasoner. We evaluated our system in simulated driving scenarios governed by a limited set of traffic regulations. Results indicate that our approach markedly improves compliance and decision-making accuracy, and offers a more interpretable and traceable method than approaches that rely solely on language models.
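The division of labour described above can be sketched as follows; the agent stubs and the toy traffic norm are our own placeholders, and only the final, deterministic step mirrors the idea of delegating judgment to a logical reasoner rather than to the language model itself.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    name: str
    condition: str        # fact that triggers the norm
    required_action: str  # action the norm demands

def propose_queries(scenario):          # stage 1 (an LLM call in the real system)
    return ["What is the signal state?", "Is a pedestrian present?"]

def identify_norms(scenario):           # stage 2 (an LLM call in the real system)
    return [Norm("stop_at_red", "signal=red", "stop")]

def retrieve_facts(scenario, queries):  # stage 3 (an LLM/tool call in the real system)
    return {"signal=red", "pedestrian=present"}

def logical_reasoner(norms, facts, proposed_action):
    """Deterministic final judgment: an action is compliant iff no applicable
    norm demands a different action."""
    violations = [n.name for n in norms
                  if n.condition in facts and n.required_action != proposed_action]
    return {"compliant": not violations, "violated_norms": violations}

scenario = "Vehicle approaches an intersection with a red light."
verdict = logical_reasoner(identify_norms(scenario),
                           retrieve_facts(scenario, propose_queries(scenario)),
                           proposed_action="proceed")
print(verdict)  # {'compliant': False, 'violated_norms': ['stop_at_red']}
```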
A knowledge integration framework is presented for efficient field service engineering in the context of powertrains. The framework enables semantic integration of otherwise siloed data from different operations and maintenance system databases. Starting with an expert-curated, database schema-aligned ontology, and using LLMs, a knowledge graph is created from unstructured data sources. We compare two leading LLM-based approaches for integrating unstructured data into knowledge graphs, namely the LangChain LLM Graph Transformer and Microsoft GraphRAG, and suggest customizations for fine-tuning the generation process. Besides efficient powertrain fault resolution, potential applications include Root Cause Analysis (RCA), Failure Mode and Effects Analysis (FMEA), and prescriptive maintenance.
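As an illustration of schema-constrained extraction with the LangChain LLM Graph Transformer mentioned above, the snippet below follows the LangChain documentation; module paths and parameters may differ across versions, the ontology labels and example text are invented for the sketch, and this is not the paper's exact configuration.

```python
from langchain_openai import ChatOpenAI
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer

# Requires OPENAI_API_KEY in the environment; any chat model supported by
# LangChain could be substituted.
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Constraining node and relationship types to an expert-curated ontology is one
# way to customize generation so the output stays aligned with the database schema.
transformer = LLMGraphTransformer(
    llm=llm,
    allowed_nodes=["Powertrain", "Component", "Fault", "MaintenanceAction"],
    allowed_relationships=["HAS_COMPONENT", "EXHIBITS_FAULT", "RESOLVED_BY"],
)

docs = [Document(page_content="Gearbox overheating on unit PT-3 was resolved "
                              "by replacing the oil cooler.")]
graph_documents = transformer.convert_to_graph_documents(docs)
for gd in graph_documents:
    print(gd.nodes, gd.relationships)
```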
13:40 - 14:20
Keynote Session (Panel)
14:20 - 14:30
Final Remarks and Closing
14:30 - 15:00
Conference Coffee Break