This challenge builds on SmartPilot, an agent-based copilot for intelligent manufacturing. The goal is to design and evaluate knowledge-graph–aware prompt engineering strategies for a documentation / process-understanding layer.
Participants are given:
The SmartPilot environment (Streamlit app + agents),
A Manufacturing Knowledge Graph (RDF),
A Process Ontology (JSON graph),
Domain documentation and manuals,
Participants are asked to implement a single pluggable component: a layer that answers operator queries grounded in the Manufacturing Knowledge Graph.
The challenge turns SmartPilot from a system demo into a benchmark for neurosymbolic, KG-aware copilots.
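Concretely, the pluggable layer can be pictured as a single entry point that takes an operator query and returns both an answer and its KG grounding. The sketch below is illustrative only: the class and field names are hypothetical, not part of the starter kit, and the keyword-matching "retrieval" stands in for whatever KG lookup a real submission would use.

```python
from dataclasses import dataclass, field

@dataclass
class GroundedAnswer:
    """Answer text plus the KG evidence it was derived from."""
    text: str
    entities: list = field(default_factory=list)   # KG entities relied on
    relations: list = field(default_factory=list)  # KG predicates relied on

class KGAnswerLayer:
    """Hypothetical pluggable component: answers operator queries
    grounded in a (here in-memory) Manufacturing Knowledge Graph."""

    def __init__(self, kg_triples):
        # kg_triples: iterable of (subject, predicate, object) strings
        self.triples = list(kg_triples)

    def answer(self, query: str) -> GroundedAnswer:
        # Toy retrieval: keep triples whose subject or object appears
        # verbatim in the query string.
        q = query.lower()
        hits = [t for t in self.triples
                if t[0].lower() in q or t[2].lower() in q]
        text = "; ".join(f"{s} {p} {o}" for s, p, o in hits) or "No KG match."
        return GroundedAnswer(
            text=text,
            entities=sorted({t[0] for t in hits} | {t[2] for t in hits}),
            relations=sorted({t[1] for t in hits}),
        )
```

The point of the sketch is the return type: every answer carries the entities and relations it used, which is exactly what the challenge metrics can later reward or penalize.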
SmartPilot currently comprises four agents:
PredictX Agent: Specializes in anomaly prediction using multimodal sensor data (time series and images).
ForeSight Agent: Employs an LSTM-based forecasting model integrated with domain-specific knowledge, capturing temporal dependencies in manufacturing processes, and forecasts production for the next hour or day.
CausalTrace Agent: Designed to help users discover and understand causal relationships in sensor data. CausalTrace identifies the most probable root causes of anomalies and enhances its explanations with knowledge graphs and process ontologies to deliver clear, context-rich insights.
InfoGuide Agent: Delivers real-time, domain-specific answers to operational, safety, maintenance, and troubleshooting questions by leveraging retrieval-augmented generation (RAG) on carefully curated content from manufacturing manuals. It integrates seamlessly with PredictX, ForeSight, and CausalTrace to respond to questions related to anomaly prediction, production forecasting, and causality.
However, in the current implementation:
The KG and ontology are mostly used as unstructured text (descriptions in the context window),
There is no explicit grounding of answers to KG entities/relations,
There are no metrics that reward or penalize how well the copilot adheres to the KG.
This challenge directly targets that gap.
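One way to close this gap is to stop pasting KG descriptions as free text and instead retrieve a structured neighborhood of the queried entity, so that every fact in the prompt is individually citable. A minimal pure-Python sketch (the real challenge KG is RDF and would typically be queried with SPARQL; the triples and the `kg_context` helper here are hypothetical):

```python
def kg_context(triples, entity, hops=1):
    """Collect the triples within `hops` edges of `entity` and render
    them as explicit, citable facts for the prompt."""
    frontier, seen, facts = {entity}, set(), []
    for _ in range(hops):
        nxt = set()
        for s, p, o in triples:
            if (s, p, o) in seen:
                continue
            if s in frontier or o in frontier:
                seen.add((s, p, o))
                facts.append(f"[{s}] --{p}--> [{o}]")
                nxt.update((s, o))
        frontier = nxt
    return facts

triples = [
    ("Robot_3", "hasSensor", "Gripper_Load"),
    ("Gripper_Load", "hasUnit", "Newton"),
    ("Robot_4", "hasSensor", "Temp_1"),
]
# One hop around Robot_3 yields only facts the answer can cite verbatim.
print(kg_context(triples, "Robot_3"))
```

Because each rendered fact maps back to exactly one KG triple, an evaluation metric can check whether the entities and relations declared by the copilot actually appear in the retrieved context.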
You are given an industrial manufacturing scenario (toy rocket / Analog data) where operators ask questions such as:
“What sensors are connected to Robot 3?”
“What is the safe range for Gripper_Load during cycle startup?”
“Which components are most likely involved if temperature spikes at Robot 4?”
“How should I calibrate the Gripper_Load sensor?”
Your system must answer using the provided KG, process ontology, and documentation, and must explicitly declare which KG entities and relations it relied on.
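One way to satisfy the declaration requirement is to emit a structured payload alongside the natural-language answer. The schema below is a hedged sketch only; the official submission format is defined in the starter kit, and the field names are assumptions:

```python
import json

def grounded_response(answer, entities, relations, sources=()):
    """Package an answer with its KG provenance (illustrative schema,
    not the official starter-kit format)."""
    return json.dumps({
        "answer": answer,
        "grounding": {
            "entities": sorted(entities),    # KG entities relied on
            "relations": sorted(relations),  # KG predicates relied on
            "documents": list(sources),      # manual sections used, if any
        },
    }, indent=2)

payload = grounded_response(
    answer="Robot 3 is connected to the Gripper_Load sensor.",
    entities=["Robot_3", "Gripper_Load"],
    relations=["hasSensor"],
)
print(payload)
```

Keeping the grounding machine-readable, rather than embedded in prose, is what makes automatic scoring of KG adherence possible.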
Participants should be comfortable with:
Python,
Basic graph concepts (nodes, edges, relations),
Prompt engineering and LLM usage,
Reading minimal manufacturing documentation (no deep domain expertise required).
Steps
Register
Download the Starter Kit
Develop Your Agent
Local Testing
Submit
Iterate
Presentation & Visibility
All teams are expected to submit a short/demo paper describing their system (design, KG usage, evaluation).
The papers will be peer-reviewed by the workshop/challenge organizers for quality and clarity.
Accepted papers will appear in the Springer proceedings of the associated workshop, and their authors will be invited to give a short oral/demo presentation of their approach.
Typical content is expected to include:
Problem formulation and design goals,
System architecture and prompt-engineering strategy,
How the Knowledge Graph and process ontology are used,
Evaluation results on the challenge metrics and qualitative analysis.
Recognition
The top three teams on the final leaderboard will be recognized. Each winning team will receive an official certificate (one per team member) acknowledging its ranking and contribution to the challenge.