The well-known generalization problem hinders the application of artificial neural networks in continuous-time prediction tasks with changing latent dynamics. In sharp contrast, biological systems adapt neatly to evolving environments by virtue of real-time feedback mechanisms. Inspired by this feedback philosophy, we present feedback neural networks, showing that a feedback loop can flexibly correct the learned latent dynamics of neural ordinary differential equations (neural ODEs), leading to a prominent generalization improvement. The feedback neural network is a novel two-degree-of-freedom (two-DOF) architecture that is robust in unseen scenarios with no loss of accuracy on the nominal task. We first present a linear feedback form that corrects the learned latent dynamics with a convergence guarantee. Domain randomization is then utilized to learn a nonlinear neural feedback form. Finally, extensive experiments, including trajectory prediction of a real irregular object and model predictive control of a quadrotor under various uncertainties, demonstrate significant improvements over state-of-the-art model-based and learning-based methods.
Neural network architectures. Left: Neural ODE developed in Chen et al. (2018). Right: Proposed feedback neural network.
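The core idea, a feedback loop correcting the learned latent dynamics, can be sketched in a few lines. The example below is a minimal illustration, not the paper's implementation: it assumes a simple linear stand-in for the trained neural ODE vector field and Euler integration, and simulates the corrected dynamics x' = f_learned(x) + K(y - x), where y is the observed trajectory of the (shifted) true system and K is a linear feedback gain.

```python
import numpy as np

# Hypothetical "learned" latent dynamics (a linear stand-in for a trained
# neural ODE vector field; in the paper this would be a neural network).
A_learned = np.array([[-0.1, 2.0], [-2.0, -0.1]])

def f_learned(x):
    return A_learned @ x

# The true dynamics have shifted away from what was learned.
A_true = np.array([[-0.1, 3.0], [-3.0, -0.1]])

def f_true(x):
    return A_true @ x

def simulate(K, x0, dt=0.01, steps=1000):
    """Euler-integrate the corrected model x' = f_learned(x) + K (y - x)
    alongside the true system y' = f_true(y); return the mean tracking error."""
    x, y = x0.copy(), x0.copy()
    errors = []
    for _ in range(steps):
        y = y + dt * f_true(y)                     # observed true trajectory
        x = x + dt * (f_learned(x) + K @ (y - x))  # feedback-corrected prediction
        errors.append(np.linalg.norm(y - x))
    return float(np.mean(errors))

x0 = np.array([1.0, 0.0])
open_loop = simulate(K=np.zeros((2, 2)), x0=x0)   # plain neural ODE, no feedback
closed_loop = simulate(K=5.0 * np.eye(2), x0=x0)  # with linear feedback correction
print(open_loop, closed_loop)
```

With K = 0 the prediction drifts away from the shifted true trajectory, while the linear feedback term keeps the prediction close; the gain K here is an arbitrary illustrative choice, not a tuned value from the paper.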
Supplementary code
For access to all the code, visit here.
For guidance on how to use the code, it is highly recommended that you start with the toy example and its README file.
For access to the code used in the spiral curve example, visit here.
For access to the code used in the object trajectory prediction example, visit here.
For access to the code used in the quadrotor example, visit here.
Update record
September 29, 2024 - First release.
October 11, 2024 - Code description optimized and a toy demo provided.
February 28, 2025 - All code updated.