Our Research
Our research centers on three interconnected pillars: machine learning, scientific computation, and mathematical theory. We develop data- and physics-driven algorithms that bridge fundamental theory with practical applications. By integrating advanced computational techniques with rigorous mathematical frameworks, we aim to address complex, multiscale, and multiphysics problems across science and engineering. This approach enables both predictive modeling and insight-driven discovery.
Machine and Deep Learning
Scientific machine learning and hybrid data- and physics-driven techniques
Probabilistic machine learning and advanced neural network architectures (graph neural networks, spiking neural networks, neural ODEs)
Physics-based deep generative models for uncertainty quantification and propagation
Quantum computing and quantum machine learning algorithms
Transformer-based large language models and their applications in autonomous systems
Scientific foundation models
Deep Reinforcement Learning (DRL)
Multi-agent reinforcement learning (MARL)
DRL for high-speed flow control
Safe and explainable reinforcement learning
Offline and meta reinforcement learning
DRL for discovery and optimization in scientific problems
Scientific Computation
Inverse problems and multi-fidelity modeling
Domain decomposition methods and multi-scale/multi-physics simulations
Computational continuum mechanics including high-speed flows, acoustics, and nonlinear elasticity
Advanced numerical methods: spectral/finite element methods, WENO, and discontinuous Galerkin schemes
Fractional and non-local PDEs
Applications
Fluid flows and heat transfer
Material characterization and additive manufacturing
Fluid-structure interactions and acoustics
Complex multiphysics systems
Current Research Highlights
Active Hypersonic Flow Control [arXiv]
New Architecture: Feature-Enriched Kolmogorov-Arnold Networks (FEKAN) [arXiv]
BubbleOKAN Neural Operator for High-Frequency Bubble Dynamics [Journal]
We present a two-step DeepONet framework that maps pressure profiles to bubble radius responses, addressing the challenge of capturing both low- and high-frequency dynamics. To overcome the intrinsic spectral bias of deep learning models, we incorporate the Rowdy adaptive activation function, which enhances the representation of high-frequency features. Building on this, we introduce the two-step DeepOKAN, a Kolmogorov-Arnold network (KAN) based model that improves interpretability while efficiently modeling bubble dynamics without relying on conventional activation functions. By combining spline and radial basis functions (RBFs), the model constructs a universal basis that mitigates the spectral limitations of RBFs in high-frequency learning. Systematic evaluation on the Rayleigh-Plesset and Keller-Miksis equations demonstrates that DeepOKAN outperforms existing neural operators, accurately capturing complex bubble dynamics across a wide frequency spectrum.
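To illustrate the two ingredients named above, here is a minimal NumPy sketch of (a) a Rowdy-style adaptive activation, which augments a base activation with trainable sinusoidal terms to inject high-frequency modes, and (b) a DeepONet forward pass, where a branch network encodes the input function sampled at sensor points and a trunk network encodes the query coordinate, with their dot product giving the operator output G(u)(y). This is not the authors' implementation: the layer sizes, amplitude values, sensor layout, and toy pressure profile are illustrative assumptions, and the weights are random rather than trained.

```python
import numpy as np

def rowdy_activation(x, amps=(0.1, 0.1, 0.1), freq=1.0, base=np.tanh):
    """Rowdy-style adaptive activation (sketch): a base activation plus
    scaled sinusoidal terms. The amplitudes `amps` stand in for the
    trainable parameters of the real method."""
    out = base(x)
    for k, a_k in enumerate(amps, start=1):
        out = out + a_k * np.sin(k * freq * x)
    return out

def mlp(x, weights, activation):
    """Tiny fully connected network; `weights` is a list of (W, b) pairs."""
    for W, b in weights[:-1]:
        x = activation(x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def deeponet_forward(u_sensors, y, branch_w, trunk_w):
    """DeepONet sketch: branch net encodes the sampled input function,
    trunk net encodes the query point; their dot product approximates
    the operator output G(u)(y)."""
    b = mlp(u_sensors, branch_w, rowdy_activation)  # (p,) latent coefficients
    t = mlp(y, trunk_w, rowdy_activation)           # (p,) basis evaluations
    return float(b @ t)

# Toy example with random (untrained) weights -- shapes only.
rng = np.random.default_rng(0)

def init(sizes):
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

branch_w = init([20, 32, 8])        # 20 sensor samples -> 8 latent coeffs
trunk_w = init([1, 32, 8])          # query time t      -> 8 basis values
u = np.sin(np.linspace(0.0, 1.0, 20))   # toy "pressure profile" at sensors
G_u_y = deeponet_forward(u, np.array([0.5]), branch_w, trunk_w)
```

The two-step aspect of the actual framework (pretraining the trunk basis, then fitting the branch to it) and the KAN spline/RBF layers are omitted here; the sketch only shows the branch-trunk factorization and where an adaptive activation enters.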