Engineering is a constant negotiation with uncertainty. In my research, I develop control-theoretic frameworks that allow autonomous systems, from agile rotorcraft to secure industrial robots, to maintain performance and safety while navigating adversarial or ill-defined environments.
Real-world dynamics are rarely cooperative: they are stubbornly nonlinear and high-dimensional. My work employs Model Predictive Control (MPC) and Koopman operator theory to bridge the gap between theoretical complexity and real-time feasibility. By lifting nonlinear dynamics into a higher-dimensional space of observables in which they evolve (approximately) linearly, we can apply robust optimization techniques to systems with intricate constraint geometries. This ensures that aerospace vehicles and robotic manipulators can respect hard state and input constraints without sacrificing the agility required for high-performance maneuvers.
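The lifting idea can be made concrete with a minimal sketch of Extended Dynamic Mode Decomposition (EDMD), a standard data-driven route to a Koopman model. The toy system, its coefficients, and the choice of observables below are all assumptions for illustration; this particular system is chosen so that the lifted dynamics are exactly linear, which is rarely true in practice.

```python
import numpy as np

# Toy nonlinear system (coefficients assumed for illustration):
#   x1+ = a*x1
#   x2+ = b*x2 + c*x1^2
a, b, c = 0.9, 0.8, 0.5

def step(x):
    return np.array([a * x[0], b * x[1] + c * x[0] ** 2])

def lift(x):
    # Observables psi(x) = [x1, x2, x1^2]; for this system the
    # dynamics are exactly linear in the lifted coordinates.
    return np.array([x[0], x[1], x[0] ** 2])

# Collect snapshot pairs (x_k, x_{k+1}) from random initial states.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
Y = np.array([step(x) for x in X])

Psi_X = np.array([lift(x) for x in X])   # shape (N, 3)
Psi_Y = np.array([lift(y) for y in Y])

# EDMD: least-squares fit of the linear operator K so that
# psi(x_{k+1}) ~= K psi(x_k).
W, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)
K = W.T

# Predict one step through the lifted linear model and compare.
x0 = np.array([0.7, -0.3])
pred = (K @ lift(x0))[:2]
true = step(x0)
print(np.max(np.abs(pred - true)))   # near machine precision here
```

Once the lifted linear model is in hand, linear MPC machinery (quadratic programs with polytopic state and input constraints) applies directly, which is what makes the approach attractive for real-time use.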
A controller is an insurance policy against model mismatch. I design adaptive control architectures that function as real-time identification engines, allowing systems to "tune" themselves as their parameters evolve. In the context of aerospace systems, we deploy these methods to mitigate high-frequency vibrations and structural resonances. By treating these disturbances as exogenous signals to be rejected, we enhance the fatigue life and operational reliability of the platform.
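A scalar sketch shows the "identification engine" flavor of indirect adaptive control. Everything here is a toy assumption: a first-order plant with one unknown parameter, a normalized-gradient estimator driven by the one-step prediction error, and certainty-equivalence pole placement. Note that without persistent excitation the estimate need not converge to the true parameter, even though regulation succeeds.

```python
# Scalar plant x+ = a*x + b*u with unknown a (b known);
# values below are assumed for illustration.
a_true, b = 1.2, 1.0      # open-loop unstable
a_ref = 0.5               # desired closed-loop pole
gamma = 0.5               # adaptation gain

a_hat = 0.0               # initial parameter estimate
x = 1.0
for _ in range(200):
    # Certainty equivalence: place the closed-loop pole at a_ref
    # using the current estimate a_hat.
    u = (a_ref - a_hat) * x / b
    x_next = a_true * x + b * u

    # Real-time identification: normalized gradient step on the
    # one-step prediction error.
    e = x_next - (a_hat * x + b * u)
    a_hat += gamma * e * x / (1.0 + x * x)
    x = x_next

print(a_hat, abs(x))   # state regulated; estimate improved but not exact
```

The same error-driven structure, with the "parameter" replaced by the amplitude and phase of a narrowband disturbance model, underlies adaptive rejection of vibrations and resonances: the disturbance is estimated online and cancelled as an exogenous signal.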
The increasing connectivity of cyber-physical systems has opened new vectors for malicious interference. To counter this, I use set-theoretic methods, specifically zonotopic reachability analysis, to construct formal safety envelopes. By propagating the reachable sets of a system in real time, we can detect state-injection attacks that masquerade as sensor noise. If a system’s trajectory drifts toward the boundary of its viable set, our algorithms trigger resilient control laws that prioritize recovery and stability over mission objectives.
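A minimal sketch of the detection logic, under assumed dynamics and noise bounds: a zonotope (center plus generator matrix) is propagated one step through a linear model, and a measurement is flagged if it falls outside the set. For simplicity the membership test below uses the zonotope's interval hull, an outer approximation, so an alarm is sound but the test is conservative; production detectors solve a small linear program instead.

```python
import numpy as np

# Linear model x+ = A x + w, with noise w bounded by a zonotope
# (2-D toy system; all values assumed for illustration).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
Gw = 0.01 * np.eye(2)            # generators of the noise zonotope

def propagate(c, G):
    """One-step reachable set: the zonotope (A c, [A G | Gw])."""
    return A @ c, np.hstack([A @ G, Gw])

def in_interval_hull(x, c, G):
    """Conservative membership test via the zonotope's interval hull."""
    radius = np.sum(np.abs(G), axis=1)
    return bool(np.all(np.abs(x - c) <= radius))

# Initial set: small box around the current state estimate.
c = np.array([1.0, 0.5])
G = 0.05 * np.eye(2)
c, G = propagate(c, G)

# A plausible measurement stays inside the predicted envelope...
x_nominal = A @ np.array([1.0, 0.5]) + np.array([0.005, -0.005])
print(in_interval_hull(x_nominal, c, G))   # True

# ...while an injected state drifts outside and is flagged.
x_attack = x_nominal + np.array([0.5, 0.0])
print(in_interval_hull(x_attack, c, G))    # False
```

Because zonotope propagation is just a matrix product and a concatenation, the envelope can be updated at control rates, which is what makes the real-time attack check feasible.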
The future of autonomy lies in the synthesis of data-driven adaptation and rigorous safety barriers. We are currently integrating Control Barrier Functions (CBFs) into learning-based control loops. This approach allows us to exploit the flexibility of machine learning while maintaining an ironclad "safety filter" that prevents the system from violating its physical limits. We want machines that can learn from their surroundings but are mathematically incapable of causing harm.
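The "safety filter" pattern can be sketched in one dimension, where the CBF quadratic program collapses to a closed-form clamp. The plant, barrier, and gains below are assumptions for illustration: a single integrator with safe set {x <= x_max}, barrier h(x) = x_max - x, and an aggressive stand-in for a learned policy that always pushes toward the boundary.

```python
# Single integrator x' = u with safe set {x <= x_max}, using the
# barrier h(x) = x_max - x (1-D toy example; gains assumed).
x_max = 1.0
alpha = 2.0    # class-K gain in the CBF condition h' >= -alpha*h

def safety_filter(x, u_des):
    """Minimally modify u_des so that h' >= -alpha * h.
    For x' = u and h = x_max - x the condition reads u <= alpha*h,
    so the CBF quadratic program has this closed-form solution."""
    return min(u_des, alpha * (x_max - x))

# A "learned" policy that always pushes right, filtered at each step.
x, dt = 0.0, 0.01
for _ in range(200):
    u = safety_filter(x, u_des=5.0)
    x += dt * u
print(x)   # approaches x_max from below without crossing it
```

The learned policy is free to do anything inside the safe set; the filter only intervenes near the boundary, which is exactly the division of labor between data-driven flexibility and formal safety described above.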