Our lab advances the frontier of Physical AI by fusing physics-driven modeling, intelligent sensing, and machine learning to build systems that perceive, reason, and adapt within the physical world. From decoding structural signals and interpreting turbulent flows to enabling autonomous robotics, proactive safety systems, and cross-domain intelligence, our work spans six tightly connected research directions—each aimed at solving real-world challenges through deeply integrated physical and artificial intelligence.
🧠 Structural Analysis and Structural Health Monitoring
What if materials could talk—and machines could listen?
In Physical AI, structural health is no longer just a dataset—it is a stream of signals narrating stress, fatigue, and long-term resilience. Our research reimagines how structures interact with intelligence: enabling systems that sense subtle vibrations, decode mechanical patterns, and evolve their response over time. This work lays the foundation for self-aware structures—across aerospace systems, robotic components, space modules, and advanced mechanical assemblies—that can self-diagnose, self-optimize, and anticipate failure before it occurs.

By combining sensor fusion, temporal deep learning, and real-world signal processing, we aim to build intelligent agents that perceive structural signals as meaningful patterns, interpret the physics behind them, and evolve alongside the systems they monitor. These systems are designed not only to detect anomalies, but to understand the physical conditions that produce them—bridging the gap between material response and autonomous decision-making.

As a core pillar of Physical AI, this research direction advances the vision of machines that interact with the physical world through embedded intelligence. Our goal is to develop smart structures that operate robustly across mission-critical domains—from aerial vehicles and space hardware to robotic limbs and industrial systems—transforming how machines perceive, learn from, and respond to structural behavior in motion.
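The idea of listening to structural signals can be made concrete with a deliberately simplified sketch (synthetic data and assumed parameters, not our lab's actual pipeline): a loss of stiffness typically lowers a structure's natural frequency, so tracking the dominant spectral peak of a vibration channel is one of the simplest anomaly cues.

```python
# Toy structural-health cue: flag a window whose dominant vibration
# frequency drifts away from the healthy baseline. All signals below are
# synthetic; sampling rate, baseline, and tolerance are illustrative.
import numpy as np

FS = 1000  # sampling rate in Hz (assumed)

def dominant_frequency(signal, fs=FS):
    """Frequency (Hz) of the largest spectral peak of a zero-meaned signal."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def is_anomalous(signal, baseline_hz, tol_hz=2.0, fs=FS):
    """Flag the window if its spectral peak drifts past the tolerance."""
    return abs(dominant_frequency(signal, fs) - baseline_hz) > tol_hz

# Healthy structure resonating at 50 Hz; "damaged" one softened to 43 Hz.
t = np.arange(0.0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
healthy = np.sin(2 * np.pi * 50.0 * t) + 0.1 * rng.standard_normal(t.size)
damaged = np.sin(2 * np.pi * 43.0 * t) + 0.1 * rng.standard_normal(t.size)
```

A real monitoring system would of course fuse many channels and learn the baseline over time; the point here is only the signal-to-decision step.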
🌪️ Wind Tunnel Experiments and Aerodynamic Behavior
Turbulence is more than chaos—it is a code, and we aim to read it.
Our lab explores how intelligent systems can interpret aerodynamic behavior not just through force measurements, but through the language of fluid–structure interaction. By merging physical experimentation with deep learning, we are transforming noisy pressure maps and vortical flows into actionable knowledge—work that points toward Physical AI models able to predict and optimize motion through air, adapting to complexity in real time.

Through wind tunnel experimentation and data-driven modeling, we are shaping a new generation of Physical AI frameworks that blend classical aerodynamics with learning-enabled inference—investigating how structures respond to unsteady flows, and how intelligent systems can learn from these responses to detect, predict, and adapt to real-world aerodynamic challenges. Our long-term direction is to develop Physical AI agents that can reason about wake dynamics, vortex shedding, and aeroelastic instabilities, transforming raw flow data into knowledge for control, optimization, and safety.

These capabilities are critical for a range of applications, from biomimetic flight and renewable energy systems to responsive aerodynamic surfaces and resilient mechanical designs. This research area lays the foundation for fluid-aware intelligence, where sensing, learning, and actuation converge to make aerodynamic systems more intelligent, efficient, and adaptable in complex environments.
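A worked micro-example of reading vortex shedding out of wind tunnel data (the numbers are hypothetical, not measured): recover the shedding frequency from a pressure-tap time series and express it as the Strouhal number, St = f·D/U, the nondimensional form used to compare wake dynamics across model scales and wind speeds.

```python
# Estimate a vortex-shedding frequency from a (synthetic) pressure-tap
# trace, then nondimensionalize it. Diameter, speed, and sampling rate
# are assumed values for the sketch.
import numpy as np

def shedding_frequency(pressure, fs):
    """Dominant frequency (Hz) of a zero-meaned pressure trace via FFT."""
    spectrum = np.abs(np.fft.rfft(pressure - np.mean(pressure)))
    freqs = np.fft.rfftfreq(len(pressure), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def strouhal(f_shed, diameter, velocity):
    """St = f * D / U."""
    return f_shed * diameter / velocity

FS = 2000.0        # sampling rate, Hz (assumed)
D, U = 0.05, 10.0  # cylinder diameter (m) and freestream speed (m/s), assumed
t = np.arange(0.0, 4.0, 1.0 / FS)
# Synthetic tap signal: shedding at 40 Hz plus broadband noise.
rng = np.random.default_rng(1)
p = np.sin(2 * np.pi * 40.0 * t) + 0.2 * rng.standard_normal(t.size)

f_shed = shedding_frequency(p, FS)
st = strouhal(f_shed, D, U)
```

With these assumed values the recovered St is about 0.2, the textbook figure for a circular cylinder over a wide Reynolds-number range, which is a quick sanity check on any tap measurement.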
🤖 Robotics and Intelligent Autonomous Systems
A robot that understands its world isn’t just reacting—it is anticipating.
In this research direction, we harness Physical AI to go beyond conventional autonomy. Our systems learn to see, feel, and reason through multi-sensory data, enabling real-time interaction with uncertain environments. From UAV-based inspections to predictive control in dynamic settings, we focus on endowing machines with the embodied intelligence needed to operate with precision and purpose in the physical world.

We aim to create physically intelligent robotic agents that operate robustly in complex, unstructured environments—integrating vision, time-series sensor data, and context awareness into a unified decision-making loop. Whether it is an aerial drone navigating urban turbulence or an embedded agent predicting human presence from environmental cues, our systems are designed to close the gap between data and action through adaptive intelligence.

By fusing deep learning, sensor fusion, and real-time inference, we enable robots to learn from their surroundings, reason over spatio-temporal signals, and act autonomously with high precision and resilience—pushing the boundaries of what autonomous systems can perceive and achieve. Our long-term goal is to pioneer physically grounded AI for robotics: agents that not only move through space, but understand and respond to the physical dynamics that govern it.
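The sensor-fusion loop described above can be illustrated with the classic complementary filter, shown here as a generic sketch (not our flight stack): blend a gyroscope's fast but drifting angle integral with an accelerometer's noisy but drift-free tilt estimate.

```python
# Generic complementary filter for attitude estimation. The blending
# weight alpha and the synthetic sensor streams below are illustrative.
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro rate (deg/s) and accelerometer tilt (deg) into one track."""
    angle = accel_angles[0]  # initialize from the absolute (accel) sensor
    track = []
    for rate, acc in zip(gyro_rates, accel_angles):
        # Trust the integrated gyro short-term, the accelerometer long-term.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        track.append(angle)
    return track

# Hovering at a constant 10 deg tilt with a gyro bias of +1 deg/s:
# pure integration would drift without bound, the fused estimate stays
# bounded near the true angle.
est = complementary_filter([1.0] * 200, [10.0] * 200, dt=0.01)
```

The same trust-fast-sensors-short-term, trust-absolute-sensors-long-term structure recurs in far richer fusion stacks (Kalman filters, learned observers); this is its smallest runnable form.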
🛡️ Safety Engineering and Risk Mitigation
The most intelligent systems are the ones that prevent harm before it happens.
Safety, in the age of Physical AI, means merging perception with foresight. Our research pioneers sensor-integrated, learning-enabled frameworks that monitor human states, predict system failures, and mitigate risks as they emerge. We are designing systems that recognize subtle signals—from thermal signatures to biosensor feedback—and use them to make safety proactive, adaptive, and deeply embedded in real-world operations.

Physical AI brings a paradigm shift to safety engineering: from passive surveillance to predictive prevention. We are building systems that integrate thermal, acoustic, physiological, and spatial signals to interpret risk along multiple dimensions—fatigue, hazardous motion, abnormal sounds, or invisible thermal anomalies. These systems don't just flag danger; they understand the conditions that lead to it.

This research direction focuses on developing robust AI agents that operate in uncertain, noisy, and low-visibility conditions—learning from incomplete or imprecise sensor data to ensure human-aware, context-aware safety. By fusing multimodal sensing with deep learning and adaptive feedback mechanisms, we aim to create autonomous safety guardians that act proactively rather than reactively. Our long-term vision is to lead the evolution of safety-critical AI—from monitoring systems to intelligent agents that reason about risk and respond before harm occurs.
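One way to picture multi-channel risk interpretation is a normalized fusion rule; everything below (channel names, alarm bands, weights) is invented for the sketch and is not a validated safety model. The point is structural: several mildly elevated channels can raise an alert even when no single one crosses its own threshold.

```python
# Illustrative multimodal risk fusion. Bands and weights are assumptions.
def channel_score(value, nominal, alarm):
    """0.0 at the nominal level, 1.0 at the alarm level, clipped to [0, 1]."""
    score = (value - nominal) / (alarm - nominal)
    return max(0.0, min(1.0, score))

def risk_score(readings, limits, weights):
    """Weighted average of per-channel scores over shared channel names."""
    total = sum(weights.values())
    return sum(
        weights[ch] * channel_score(readings[ch], *limits[ch])
        for ch in readings
    ) / total

limits = {                      # (nominal, alarm) per channel -- assumed bands
    "skin_temp_c": (33.0, 39.0),
    "noise_db": (60.0, 95.0),
    "heart_rate_bpm": (70.0, 150.0),
}
weights = {"skin_temp_c": 1.0, "noise_db": 1.0, "heart_rate_bpm": 2.0}

# Every channel at ~60% of its alarm band: none alarms alone, but the
# fused score (0.6) can trip a proactive threshold such as 0.5.
elevated = {"skin_temp_c": 36.6, "noise_db": 81.0, "heart_rate_bpm": 118.0}
fused = risk_score(elevated, limits, weights)
```

A learning-based system would replace the fixed bands and weights with models fitted to data; the fusion-before-thresholding structure is what makes the monitoring proactive rather than reactive.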
🌊 Fluid-Structure Interaction
Where motion meets matter, intelligence must follow.
The intricate dance between fluids and structures is governed by physics—but understanding it requires intelligence. This direction focuses on how Physical AI can decode the forces, instabilities, and oscillations that define unsteady environments. Whether optimizing energy systems or modeling bioinspired flight, we integrate advanced simulation, experimental insight, and machine learning to make sense of dynamic physical interactions.

We envision a future where intelligent systems learn to navigate, adapt, and self-optimize in the presence of unsteady aerodynamics, fluid loading, and nonlinear vibrations. To enable this, we develop physics-informed AI frameworks and hybrid experimental-modeling platforms that extract hidden patterns from turbulent wake interactions, aeroelastic responses, and multiphase flow transitions. Our research direction advances Physical AI by teaching machines to interpret the language of flow—shear, separation, damping, reattachment—not just as numerical signals, but as dynamic physical phenomena that inform control, design, and adaptation.

These capabilities unlock new frontiers in autonomous flight, morphing structures, resilient fluid machinery, and interactive environments that demand co-evolution between structure and surrounding media. Ultimately, we aim to empower AI agents that do more than observe—they interact with fluids, learn from forces, and shape their own response to achieve efficient, stable, and intelligent performance under real-world dynamics.
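As a small worked example of reading physics (here, damping) out of a response signal, the textbook logarithmic-decrement method recovers a damping ratio from two successive displacement peaks of a decaying free vibration; the synthetic peaks below assume a damping ratio of 0.05.

```python
# Logarithmic decrement: delta = ln(x1/x2) for consecutive peaks x1, x2,
# and zeta = delta / sqrt(4*pi^2 + delta^2). Peaks are synthesized from
# an assumed damping ratio so the estimate can be checked round-trip.
import math

def damping_ratio_from_peaks(x1, x2):
    """Estimate the damping ratio zeta from two successive peaks."""
    delta = math.log(x1 / x2)
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)

zeta_true = 0.05  # assumed damping for the synthetic check
delta_true = 2.0 * math.pi * zeta_true / math.sqrt(1.0 - zeta_true ** 2)
peak1, peak2 = 1.0, math.exp(-delta_true)  # consecutive peak amplitudes
zeta_est = damping_ratio_from_peaks(peak1, peak2)
```

The same quantity that a classical analyst reads off a strip chart is what a fluid-aware agent must infer continuously, under noise, from aeroelastic response data.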
🧬 Multidisciplinary Physical AI Research
Not every challenge fits in a single discipline—and neither does our intelligence.
Physical AI thrives at the crossroads of fields. In this domain, we tackle problems that span geotechnical systems, environmental forecasting, human physiology, and beyond. By blending sensing technologies, generative models, and physical modeling, we craft AI systems that are flexible, context-aware, and scalable. This research area is where Physical AI evolves from specialized solutions to broad, transformative applications.

Our direction in Multidisciplinary Physical AI Research builds adaptable intelligence across diverse physical systems. From forecasting wind fields and extreme weather patterns to quantifying environmental resources and monitoring human fatigue in real time, we integrate multimodal sensing, generative modeling, and temporal learning to fuse physical phenomena with intelligent computation. This equips Physical AI agents to understand the world's complexity—not just in specialized applications, but in ecosystems of interacting variables.

Our goal is to develop transferable AI systems capable of learning from diverse data sources (e.g., satellite imagery, wearables, sensor networks), generalizing across domains, and assisting decision-making in both natural and engineered environments. We envision AI that senses drought before it strikes, detects mental fatigue before risk emerges, and models material behavior before structural failure. Through cross-domain learning and scalable architectures, our lab aims to build intelligent systems that not only respond to their physical world, but anticipate, adapt, and transform it.
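Temporal learning across domains always starts from a baseline; a minimal sketch (with a synthetic series standing in for wind-speed data) is a one-step AR(1) model fitted by ordinary least squares, the kind of floor any deeper sequence model must beat.

```python
# AR(1) forecasting baseline: fit x[t+1] ~ a*x[t] + b by least squares.
# The mean-reverting series below is synthetic, generated from a=0.8,
# b=1.6, so the fit can be checked exactly.
def fit_ar1(series):
    """Least-squares slope and intercept for predicting x[t+1] from x[t]."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def forecast_next(series):
    """One-step-ahead forecast from the fitted AR(1) model."""
    a, b = fit_ar1(series)
    return a * series[-1] + b

# Synthetic "wind speed" settling toward 8 m/s: x[t+1] = 0.8*x[t] + 1.6.
wind = [2.0]
for _ in range(15):
    wind.append(0.8 * wind[-1] + 1.6)

a, b = fit_ar1(wind)
next_speed = forecast_next(wind)
```

Real forecasting data is noisy and multivariate, so the fitted coefficients would only approximate the dynamics; the value of a baseline this simple is that it quantifies how much a richer temporal model actually adds.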