Physical & Knowledge-Grounded AI
World Models, Edge Intelligence, and Trustworthy AI for the Physical World
While much of traditional AI has focused on patterns in digital data, Physical & Knowledge-Grounded AI is concerned with AI systems that can sense, understand, and act in the physical world. This track explores how AI can go beyond black-box prediction by combining sensor data, domain knowledge, and world models to better represent how environments, objects, and people behave over time.
The goal is to build AI systems that are not only accurate, but also reliable, efficient, explainable, and deployable in real-world settings. We are interested in methods that connect physical sensing (e.g., radar, wireless signals, wearables, vision) with reasoning and decision-making, enabling applications such as intelligent infrastructure, digital twins, smart mobility, health monitoring, and distributed autonomous systems.
This track reflects an important direction in current AI research: moving from purely cloud-based, data-hungry models toward trustworthy AI that can operate in real environments, on resource-constrained devices, and alongside humans. Recent trend reports highlight the growing importance of agentic systems, trustworthy deployment, and AI grounded in physical environments rather than purely digital workflows.
For students, this means learning how to build the full stack of intelligence:
Sensing – extracting useful information from the environment
Modeling – building world models or knowledge-grounded representations
Execution – making safe, efficient, and trustworthy decisions in real time
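As a rough illustration, the three layers above can be sketched as a minimal sense–model–act loop. This is a toy example with hypothetical numbers, not a real deployment: the "world model" is one line of exponential smoothing and the "decision" is a clamped proportional controller.

```python
# Toy sense -> model -> act loop: keep a noisy temperature
# reading near a setpoint. All values are illustrative.

def sense(true_temp, noise):
    """Sensing: return a (noisy) observation of the environment."""
    return true_temp + noise

def model(estimate, observation, gain=0.5):
    """Modeling: fuse the new observation into a running estimate
    (a one-line 'world model': exponential smoothing)."""
    return (1 - gain) * estimate + gain * observation

def act(estimate, setpoint=21.0):
    """Execution: decide a bounded (i.e., safe) control action."""
    error = setpoint - estimate
    return max(-1.0, min(1.0, 0.2 * error))  # clamp the actuator command

# One pass through the loop
obs = sense(true_temp=19.0, noise=0.3)
estimate = model(estimate=20.0, observation=obs)
action = act(estimate)
```

In a real project each stage is where the research lives: a learned state estimator instead of smoothing, or a verified safety envelope instead of a hard clamp.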
The track is designed for both:
TCS students interested in machine learning, model compression, multimodal learning, federated learning, and autonomous systems; and
BIT students interested in digital twins, trustworthy deployment, AI-supported infrastructure, and the governance of intelligent systems.
This track is about the AI intelligence layer: how AI systems reason, model, and act on physical-world data. It is not primarily about building or connecting the systems that collect that data, nor about applying machine learning methods to general datasets.
The central question is: is your research contribution about what the AI understands and decides, or about how the system is built and connected?
Strong signals that your project fits here:
Your core contribution is in the AI reasoning or modeling: a world model, a physics-informed model, a digital twin, or an agent that plans and decides
Your AI must incorporate physical laws, domain knowledge, or structured constraints, not learn from data alone
Your research question is about inference quality, decision-making, or trustworthiness: correctness, explainability, robustness, or safety of the AI output
Your AI closes the loop: it does not just classify or detect, but also acts, controls, or adapts based on what it senses
You are optimising for intelligence under constraints: making AI more accurate, efficient, or reliable when deployed on edge hardware or in real-time settings
Signals that this track may not be the right fit:
Your main contribution is the sensing or networking system itself — device design, communication protocols, localisation, or large-scale data management
Your project's research question is about system reliability, scalability, or privacy of a networked infrastructure, rather than the reasoning quality of the AI
Your contribution is a new ML method or architecture applied to a structured dataset without physical-world grounding or deployment constraints
Your project focuses on NLP, social networks, biometrics, recommender systems, or general computer vision
The same sensor, two different research contributions: what separates them?
Sensor-based projects can look similar on the surface but differ fundamentally in where the research contribution lies. Here are some examples:
Wearables for health monitoring:
Systems contribution → reliable data collection, energy-efficient transmission, privacy-preserving networking → not the primary focus here
AI contribution → a model that infers health states, adapts to individual variation, and runs efficiently on the device → fits this track
Radar or Wi-Fi signals for activity detection:
Systems contribution → signal acquisition, protocol design, multi-device coordination → not the primary focus here
AI contribution → a model that builds a representation of human behaviour from raw signals and reasons about it → fits this track
Smart building or infrastructure:
Systems contribution → integrating heterogeneous devices, managing data streams, ensuring security → not the primary focus here
AI contribution → a digital twin or world model that enables prediction, optimisation, or autonomous control → fits this track
Still unsure? Ask yourself: if you removed the AI reasoning component from your project, would there still be a research contribution? If yes, the contribution is likely in the system or the data, and you may want to explore other tracks. If no, the AI intelligence is the contribution, and you belong here.
If your project does not match any listed assignment but involves AI reasoning on physical data toward a real-world deployment goal, you are still encouraged to reach out. Many projects in this track have started from student-initiated ideas.
World Models & Knowledge-Grounded Learning
AI models that integrate data with physical laws, expert knowledge, or structured reasoning.
Agentic and Distributed AI Systems
Autonomous and collaborative AI systems that plan, coordinate, and adapt in dynamic environments.
Efficient and Sustainable Physical AI
Embedded and edge AI, TinyML, model compression, and methods for reducing computational and energy costs.
Sensing, Digital Twins, and Trustworthy Deployment
Multimodal sensing, infrastructure monitoring, digital twins, explainability, fairness, robustness, and safe deployment.
Embedded & Neuromorphic Sensing
Leveraging energy-efficient sensors (e.g., neuromorphic vision, mmWave radar, Wi-Fi CSI) and specialized hardware to build real-time, edge-deployed AI systems and structured world models.
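To make the first theme above concrete: knowledge-grounded learning often means adding a physics term to the training loss, so that predictions must both fit the data and satisfy known dynamics. The sketch below assumes a toy free-fall problem (y'' = -g, checked via finite differences); it illustrates the idea only and is not tied to any specific framework.

```python
import numpy as np

G = 9.81  # known physical constant (m/s^2)

def physics_informed_loss(y_pred, y_data, t, lam=1.0):
    """Data-fit term plus a penalty on violating y'' = -G."""
    dt = t[1] - t[0]
    data_term = np.mean((y_pred - y_data) ** 2)
    # second finite difference approximates the acceleration y''
    accel = (y_pred[2:] - 2 * y_pred[1:-1] + y_pred[:-2]) / dt ** 2
    physics_term = np.mean((accel + G) ** 2)
    return data_term + lam * physics_term

t = np.linspace(0.0, 1.0, 11)
y_true = 10.0 - 0.5 * G * t ** 2          # exact free fall from 10 m
noisy = y_true + 0.01 * np.sin(37.0 * t)  # hypothetical "measured" data

# The exact trajectory keeps the physics residual near zero, so it
# scores far better than a physics-violating straight-line fit.
loss_good = physics_informed_loss(y_true, noisy, t)
loss_bad = physics_informed_loss(10.0 - 4.0 * t, noisy, t)
```

The same pattern scales up: replace the candidate trajectory with a neural network's output and the finite-difference residual with the governing PDE of the application domain.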
Topics of interest include, but are not limited to:
world models for perception, prediction, and control
physics-informed and knowledge-guided machine learning
multimodal sensing with radar, Wi-Fi, wearables, cameras, and mobile devices
digital twins for infrastructure, mobility, and health applications
edge intelligence, TinyML, and model compression
federated and distributed learning
explainable and trustworthy AI for real-world systems
safe multi-agent systems and autonomous decision-making
AI for health, environment, and intelligent infrastructure
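As one example from the list above, the core idea behind much of TinyML and model compression is post-training quantization: storing float32 weights as int8 plus a scale factor, trading a small reconstruction error for a 4x memory reduction. The sketch below uses hypothetical random weights and a simple symmetric per-tensor scheme, purely for illustration.

```python
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a symmetric per-tensor scale."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000).astype(np.float32)  # toy weights

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; the per-weight error
# is bounded by half the quantization step.
max_err = float(np.max(np.abs(w - w_hat)))
```

Real edge deployments layer further tricks on top (per-channel scales, quantization-aware training, pruning), but the accuracy-vs-footprint trade-off shown here is the starting point.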
Students are invited to explore concrete projects and discuss them with supervisors in the Pervasive Systems group and the Computer Architecture for Embedded Systems group. Many assignments are connected to real-world deployment settings and application domains.
Assignment portal:
Examples include:
Creating a Modular Digital Twin Platform for Steel Infrastructures
Agentic AI: Autonomous Intelligence for the future telecom networks
Interactive Explainable AI Dashboard for Multi-Modal AI Systems
Wild Life [hedgehog] Health Monitoring in a Pet Shelter using mmWave radar
Safe Multi-Agent Reinforcement Learning (MARL) for UAV Swarm Communications
Optimizing Person Detection with Neuromorphic Vision Sensors
Comparative Analysis of Multi‑Sensor Integration in the SENSE‑Rai Framework
Error Resilience Analysis Of Transformers Approximations For Computer Vision Tasks
Exploring Bloom Filters as Fault Detectors for Static Memory Content in Machine Learning Systems
Recommended reading: Xinqing Li, Yun Liu, et al., "A Comprehensive Survey on World Models for Embodied AI," https://arxiv.org/abs/2510.16732
For further information on the content of this track, you may contact the track chair: Le Viet Duc.