Dr. Kokolakis’s research develops rigorous control and learning frameworks for trustworthy autonomy: intelligent systems that behave reliably and make interpretable decisions. The themes below outline the main directions of this work.
Inspired by theory of mind, we develop an interpretable Level-k Thinking model to characterize the reasoning depth of bounded-rational agents. Online learning enables autonomous agents to infer their opponents’ thinking levels and adapt their strategies in real time.
Level-k Thinking model for characterizing bounded rationality via an interpretable cognitive structure
Online inference of opponents’ thinking levels enabling real-time strategic adaptation
Rationality-aware multi-agent coordination using a learning-based assignment mechanism
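The level-k recursion and its online inference can be illustrated with a minimal sketch in the classic p-beauty-contest game (the game, the Gaussian observation model, and all parameters are assumptions for exposition, not the published model): level-0 anchors at the midpoint, each higher level best responds to the level below it, and a Bayesian observer infers an opponent's depth from noisy observed guesses.

```python
import numpy as np

# Illustrative level-k sketch in a p-beauty-contest game (assumed setting,
# not the published framework).
levels = np.arange(5)                 # candidate reasoning depths k = 0..4
p = 2.0 / 3.0
# Level-0 anchors at the midpoint 50; level-k best responds to level-(k-1),
# so its guess is 50 * p^k.
predictions = 50.0 * p ** levels

sigma = 3.0                           # assumed observation noise
posterior = np.full(len(levels), 1.0 / len(levels))   # uniform prior

def bayes_update(post, guess):
    # Gaussian likelihood of the observed guess under each depth hypothesis
    lik = np.exp(-0.5 * ((guess - predictions) / sigma) ** 2)
    post = post * lik
    return post / post.sum()

rng = np.random.default_rng(1)
true_level = 2
for guess in predictions[true_level] + rng.normal(0.0, sigma, size=10):
    posterior = bayes_update(posterior, guess)

print(levels[np.argmax(posterior)])   # inferred reasoning depth
```

After a handful of observations the posterior concentrates on the opponent's true depth, which is the kind of online inference that enables real-time strategic adaptation.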
Using Lyapunov design, we develop reinforcement learning algorithms with provable non-asymptotic convergence guarantees, enabling robust, safe, and time-critical decision-making in unknown adversarial environments.
Finite-time safe learning for pursuit–evasion games using Gaussian processes to model unknown obstacles
Fixed-time learning of backward reachable sets for time-critical safety verification
Predefined-time reinforcement learning for optimal feedback control
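The non-asymptotic guarantees above rest on fixed-time stability arguments, which a scalar sketch can make concrete (the error dynamics, gains, and exponents are illustrative assumptions, not the papers' algorithms): for ė = −k₁|e|^α sign(e) − k₂|e|^β sign(e) with 0 < α < 1 < β, a Lyapunov argument bounds the settling time by T ≤ 1/(k₁(1−α)) + 1/(k₂(β−1)) regardless of the initial condition.

```python
import numpy as np

# Illustrative fixed-time-stable error dynamics (assumed scalar example):
# e' = -k1*|e|^a*sign(e) - k2*|e|^b*sign(e),  0 < a < 1 < b.
k1, k2, a, b = 2.0, 2.0, 0.5, 1.5
# Settling-time bound, independent of the initial condition e(0):
T_bound = 1.0 / (k1 * (1 - a)) + 1.0 / (k2 * (b - 1))   # = 2.0 seconds

def settle(e0, dt=1e-4, T=T_bound):
    # Forward-Euler simulation up to the theoretical settling-time bound
    e = e0
    for _ in range(int(T / dt)):
        e += dt * (-k1 * abs(e) ** a * np.sign(e)
                   - k2 * abs(e) ** b * np.sign(e))
    return e

# Initial conditions six orders of magnitude apart both settle within T_bound.
print(abs(settle(1.0)), abs(settle(1e6)))
```

The β-exponent term dominates far from the origin and the α-exponent term near it, which is what decouples the settling-time bound from the initial condition.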
We develop an adversarial physics-informed self-supervised learning framework that synthesizes robust optimal control strategies with guaranteed safety and predefined-time stability under worst-case disturbances.
A game-theoretic control framework for robust optimal safe predefined-time stabilization under adversarial disturbances
An adversarial physics-informed self-supervised learning architecture that embeds safety constraints and predefined-time stability conditions into the training process
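The adversarial (min-max) structure at the core of this framework can be sketched on a one-step toy problem (the scalar dynamics, cost, and attenuation level γ are illustrative assumptions; the actual framework embeds safety and predefined-time conditions into a physics-informed training loss): the controller minimizes an H∞-style cost while a worst-case disturbance maximizes it, and simultaneous gradient descent–ascent converges to the saddle point.

```python
import numpy as np

# Toy one-step robust control problem (illustrative, not the published method):
# next state x' = x + u + w; cost J(u, w) = x'^2 + u^2 - gamma^2 * w^2.
# The controller u minimizes J; the adversarial disturbance w maximizes it.
x, gamma = 1.0, 1.5

def grads(u, w):
    xp = x + u + w
    return 2 * xp + 2 * u, 2 * xp - 2 * gamma**2 * w   # dJ/du, dJ/dw

u = w = 0.0
eta = 0.05
for _ in range(3000):
    gu, gw = grads(u, w)
    u, w = u - eta * gu, w + eta * gw    # descent on u, ascent on w

# Analytic saddle point: w* = x / (2*gamma^2 - 1), u* = -gamma^2 * w*
print(u, w)
```

For γ² > 1 the cost is convex in u and concave in w, so the descent–ascent iteration recovers the analytic saddle; the self-supervised architecture plays this same game at the level of network parameters.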
Leveraging parallel trajectory sampling, we develop generative AI-based, information-theoretic decision-making mechanisms that synthesize safe, efficient, robust, and adaptive predictive control strategies for autonomous systems operating in uncertain physical environments. By incorporating historical experience data, the framework iteratively improves performance while robustly satisfying constraints across repeated tasks.
We design generative AI-enabled information-theoretic control frameworks to endow autonomous systems with safe, robust, efficient, and adaptive decision-making under uncertainty
Parallel trajectory sampling enables real-time optimization, balancing safety and performance
Historical experience data is used to refine expressive predictive models, enabling continual performance improvement while preserving safety
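The parallel-sampling, information-theoretic update can be sketched with a minimal MPPI-style controller (the single-integrator dynamics, horizon, temperature, and noise scale are illustrative assumptions, not the published framework): noise-perturbed control sequences are rolled out in parallel via vectorization, exponentially weighted by cost, and averaged into the nominal control plan.

```python
import numpy as np

# Minimal MPPI-style sketch (assumed toy setting): K perturbed control
# sequences are rolled out in parallel (vectorized over samples), then
# combined by an exponential, cost-weighted (information-theoretic) average.
rng = np.random.default_rng(0)
dt, horizon, K = 0.1, 20, 256
lam, sigma, goal = 1.0, 0.5, 1.0

def rollout_costs(x0, U, eps):
    x = np.full(K, x0)
    cost = np.zeros(K)
    for t in range(horizon):
        x = x + dt * (U[t] + eps[:, t])   # single-integrator dynamics
        cost += (x - goal) ** 2           # tracking cost
    return cost

x, U = 0.0, np.zeros(horizon)
for _ in range(80):                       # receding-horizon loop
    eps = rng.normal(0.0, sigma, size=(K, horizon))
    S = rollout_costs(x, U, eps)
    w = np.exp(-(S - S.min()) / lam)      # exponential cost weighting
    w /= w.sum()
    U = U + w @ eps                       # cost-weighted noise update
    x = x + dt * U[0]                     # apply first control
    U = np.roll(U, -1); U[-1] = 0.0       # warm-start the next plan

print(x)   # state driven toward goal = 1.0
```

Because the K rollouts are independent, the inner loop maps directly onto parallel hardware, which is what makes real-time optimization of the safety–performance trade-off feasible.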