Neural Operators (NOs) have emerged as a powerful tool for approximating the solution operators of partial differential equations, yet they face critical bottlenecks when applied to high-frequency, high-dimensional, or discontinuous physical systems. This talk explores the theoretical limits of the current paradigm and proposes a physics-based path forward.
First, we address the challenge of generalization. Standard NOs often fail to generalize out-of-distribution (OOD), particularly in wave propagation problems such as the Helmholtz equation. Guided by an analysis of the Rademacher complexity of these operators, we introduce a stochastic-depth architecture that provably reduces OOD risk, ensuring robust performance across varying wave speeds.
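For context, the mechanism at work is the standard Rademacher generalization bound (stated here in its textbook form, not as the talk's OOD-specific result): for a hypothesis class $\mathcal{F}$ with loss values in $[0,1]$ and an i.i.d. sample of size $n$, with probability at least $1-\delta$ every $f \in \mathcal{F}$ satisfies

$$
\mathbb{E}_{(x,y)}\big[\ell(f(x),y)\big] \;\le\; \frac{1}{n}\sum_{i=1}^{n}\ell\big(f(x_i),y_i\big) \;+\; 2\,\mathfrak{R}_n(\ell \circ \mathcal{F}) \;+\; \sqrt{\frac{\log(1/\delta)}{2n}},
$$

so any architectural choice that shrinks the Rademacher complexity $\mathfrak{R}_n(\ell \circ \mathcal{F})$, here stochastic depth, directly tightens the gap between training and deployment error.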
However, even with improved generalization, we encounter a fundamental limit: the curse of dimensionality. We present recent theoretical work on a "Mixture of Neural Operators" (MoNO), which demonstrates that while distributed architectures can "soften" this curse, the parametric complexity required to approximate arbitrary nonlinear operators remains significant. This suggests an inherent ceiling on what pure "black-box" learning can efficiently achieve.
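To make this ceiling concrete in schematic terms (generic constants, not the talk's precise rates): known lower bounds for approximating a generic Lipschitz operator $\mathcal{G}$ to uniform accuracy $\varepsilon$ take a form such as

$$
N(\varepsilon) \;\gtrsim\; \exp\!\big(c\,\varepsilon^{-\gamma}\big), \qquad c,\gamma > 0,
$$

where $N(\varepsilon)$ is the required parameter count. Splitting the operator across many experts, each responsible for a localized region of input space, keeps every individual network small, which is the sense in which the curse is "softened", but the total budget summed over experts still reflects this growth for arbitrary nonlinear operators.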
Finally, we propose an alternative that breaks this barrier: Neural Discrete Equilibrium (NeurDE). Instead of learning the operator directly, we "lift" nonlinear conservation laws into a kinetic theory framework. By restricting the learning problem solely to the local equilibrium state and offloading linear transport to exact numerical schemes, NeurDE bypasses the curse of dimensionality. This hybrid approach enables the stable, long-term prediction of shocks and supersonic flows that remain out of reach for standard neural operators.
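To make the division of labor concrete, below is a minimal sketch in the spirit of this description, assuming a 1D discrete-velocity (D1Q3) lattice with BGK relaxation; it is an illustration under those assumptions, not the NeurDE implementation. The function `learned_equilibrium` is a placeholder for the map a trained network would provide (the classical closed-form equilibrium is used here so the sketch runs), while transport is handled by an exact, shift-based streaming step.

```python
# Minimal sketch of a kinetic "lift" with a learned local equilibrium.
# Assumptions/placeholders: 1D D1Q3 lattice (velocities -1, 0, +1), BGK
# relaxation, and `learned_equilibrium` standing in for a trained network
# that maps macroscopic moments (rho, rho*u) to equilibrium populations.

import numpy as np

C = np.array([-1.0, 0.0, 1.0])   # discrete velocities (D1Q3)
TAU = 0.8                        # BGK relaxation time (placeholder value)


def moments(f):
    """Macroscopic density and momentum from the kinetic populations."""
    rho = f.sum(axis=0)
    mom = (C[:, None] * f).sum(axis=0)
    return rho, mom


def learned_equilibrium(rho, mom):
    """Stand-in for the learned equilibrium map.

    The classical isothermal D1Q3 equilibrium is used so the sketch runs;
    in the hybrid approach described above, this closed form would be
    replaced by a small network acting pointwise on (rho, mom).
    """
    u = mom / rho
    w = np.array([1 / 6, 2 / 3, 1 / 6])      # lattice weights
    cs2 = 1 / 3                               # lattice sound speed squared
    feq = np.empty((3, rho.size))
    for i in range(3):
        cu = C[i] * u
        feq[i] = w[i] * rho * (1 + cu / cs2 + cu**2 / (2 * cs2**2) - u**2 / (2 * cs2))
    return feq


def step(f):
    """One kinetic update: local (learned) collision + exact linear streaming."""
    rho, mom = moments(f)
    feq = learned_equilibrium(rho, mom)          # learned, pointwise, local
    f = f - (f - feq) / TAU                      # BGK relaxation toward f_eq
    for i, c in enumerate(C):                    # streaming is an exact shift
        f[i] = np.roll(f[i], int(c))             # (periodic boundary here)
    return f


if __name__ == "__main__":
    # Smooth density bump relaxed and transported for a few steps.
    x = np.linspace(0, 1, 200, endpoint=False)
    rho0 = 1.0 + 0.1 * np.exp(-200 * (x - 0.5) ** 2)
    f = learned_equilibrium(rho0, np.zeros_like(x))  # start at equilibrium
    for _ in range(100):
        f = step(f)
    print("mass conserved:", np.isclose(f.sum(), rho0.sum()))
```

In this split, only the pointwise equilibrium map carries learned parameters, so the learning problem stays low-dimensional regardless of grid resolution, while the transport step remains exact and stable by construction.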