CO-DKF AND EXTENSIONS

Certifiable optimal and distributed estimation algorithms

The optimal fusion of estimates in a Distributed Kalman Filter (DKF) requires tracking the complete network error covariance, which is problematic in terms of memory and communication. A scalable alternative is to fuse estimates under unknown correlations, performing the update by solving an optimization problem. Unfortunately, this problem is NP-hard, forcing relaxations that lose optimality guarantees. Motivated by this, we present the first Certifiable Optimal DKF (CO-DKF) [1]. Using only information from one-hop neighbours, CO-DKF solves the optimal fusion of estimates under unknown correlations through a particular tight Semidefinite Programming (SDP) relaxation that allows each agent to certify, locally and in real time, whether the relaxed solution is the actual optimum. In that case, we prove optimality in the Mean Square Error (MSE) sense. Additionally, we demonstrate the global asymptotic stability of the estimator. CO-DKF outperforms other state-of-the-art DKF algorithms, especially in sparse, highly noisy setups. Interestingly, the certifiable optimality properties of CO-DKF can be extended to event-triggered settings. By exploiting the outcome of the certification step, we present ECO-DKF [2], an event-triggered distributed Kalman filter that preserves the aforementioned optimality properties while significantly reducing the communication bandwidth burden.
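To give a concrete flavour of what "fusion under unknown correlations" means, the sketch below implements classical two-estimate covariance intersection, a simpler relative of the problem CO-DKF addresses. It is an illustration only, not the CO-DKF algorithm itself (which relies on the tight SDP relaxation and optimality certificate described in [1]); the function name `ci_fuse` and the use of SciPy are choices made for this example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ci_fuse(x_a, P_a, x_b, P_b):
    """Covariance intersection of two estimates with unknown cross-correlation.

    For any weight w in [0, 1], the fused covariance
        P = (w * P_a^{-1} + (1 - w) * P_b^{-1})^{-1}
    is a consistent bound; here w is chosen to minimize trace(P).
    """
    Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)

    def fused_trace(w):
        return np.trace(np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv))

    w = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x
    P = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
    x = P @ (w * Pa_inv @ x_a + (1.0 - w) * Pb_inv @ x_b)
    return x, P
```

The guarantee holds for any admissible weight, which is what makes this family of fusion rules robust to the unknown cross-correlations; the price is conservatism, which is precisely what an optimal, certifiable formulation aims to reduce.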

From this standpoint, we further address the problem of computing an outer approximation of the global intersection of ellipsoids by means of a distributed algorithm, so that the network agrees on a single ellipsoid that approximates the uncertainty in the estimated quantities of interest [3, 4].
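As a simplified illustration of the underlying geometric primitive (not the distributed algorithms of [3, 4]), the sketch below outer-approximates the intersection of origin-centred ellipsoids {x : xᵀAᵢx ≤ 1} by a single ellipsoid built from a convex combination of the shape matrices; the formulation, the CVXPY dependency and the log-det objective are assumptions made for this example.

```python
import cvxpy as cp

def outer_ellipsoid(shape_matrices):
    """Outer ellipsoid {x : x^T A x <= 1} for the intersection of the
    origin-centred ellipsoids {x : x^T A_i x <= 1}.

    If x^T A_i x <= 1 for every i, then for any weights lam_i >= 0 with
    sum(lam_i) = 1 we also have x^T (sum_i lam_i A_i) x <= 1, so the
    combined ellipsoid contains the intersection; maximizing log det(A)
    picks the smallest-volume member of this family.
    """
    lam = cp.Variable(len(shape_matrices), nonneg=True)
    A = sum(lam[i] * Ai for i, Ai in enumerate(shape_matrices))
    prob = cp.Problem(cp.Maximize(cp.log_det(A)), [cp.sum(lam) == 1])
    prob.solve()
    return sum(float(w) * Ai for w, Ai in zip(lam.value, shape_matrices))
```

For instance, with A₁ = diag(1, 4) and A₂ = diag(4, 1) the optimal weights are 0.5 and 0.5, giving A = diag(2.5, 2.5).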

The aforementioned problems are closely related, in topic and purpose, to dynamic consensus problems, where a network of agents aims to reach agreement on quantities of interest through local interactions. Along this line, we have been studying how to accelerate state-of-the-art techniques [5], making them faster and easier to apply in real applications. These acceleration tools can be conveniently exploited to enhance, e.g., distributed optimization algorithms, leading to what is, to date, the fastest distributed first-order optimization protocol [6].
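For reference, a minimal (non-accelerated) first-order dynamic average consensus iteration is sketched below: each agent tracks the network-wide average of time-varying local signals using only neighbour states, which is the kind of protocol that acceleration techniques such as [5] speed up. The weight matrix, signal layout and function name are illustrative assumptions, not the accelerated scheme itself.

```python
import numpy as np

def dynamic_average_consensus(W, signals, x0=None):
    """Standard first-order dynamic average consensus.

    Iteration: x(k+1) = W x(k) + r(k+1) - r(k), with W a doubly
    stochastic weight matrix of the communication graph. Initializing
    x(0) = r(0) preserves the network sum, so each agent's state x_i(k)
    tracks the average of the time-varying signals r_i(k).
    signals has shape (n_agents, n_steps).
    """
    x = signals[:, 0].copy() if x0 is None else x0.copy()
    history = [x.copy()]
    for k in range(signals.shape[1] - 1):
        x = W @ x + signals[:, k + 1] - signals[:, k]
        history.append(x.copy())
    return np.array(history)
```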



[Figure: red is ours (CO-DKF), blue is the centralized Kalman Filter, and the others are state-of-the-art Distributed Kalman Filters.]

[Figure: red is time-triggered, green is a standard event-triggered (ET) rule, and the others are variants of our proposed ECO-DKF.]

References

[1] E. Sebastián, E. Montijano and C. Sagüés, "All-in-one: Certifiable Optimal Distributed Kalman Filter under Unknown Correlations", IEEE Conference on Decision and Control, 2021. Available online at: https://ieeexplore.ieee.org/abstract/document/9683348 and https://arxiv.org/abs/2105.15061

[2] E. Sebastián, E. Montijano and C. Sagüés, "ECO-DKF: Event-triggered and Certifiable Optimal Distributed Kalman Filter under Unknown Correlations", IEEE Transactions on Automatic Control, 2024. Available online at: https://arxiv.org/abs/2311.02406

[3] R. Aldana-López, E. Sebastián, R. Aragüés, E. Montijano and C. Sagüés, "Distributed Outer Approximation of the Intersection of Ellipsoids," IEEE Control Systems Letters, 2023. Available online at: https://arxiv.org/abs/2305.15172

[4] R. Aldana-López, E. Sebastián, R. Aragüés, E. Montijano and C. Sagüés, "Distributed Discrete-time Dynamic Outer Approximation of the Intersection of Ellipsoids," under review at IEEE Transactions on Automatic Control, 2024. Available online at: https://arxiv.org/abs/2403.01478v1

[5] E. Sebastián, E. Montijano, C. Sagüés, M. Franceschelli and A. Gasparri, "Accelerated Multi-stage Discrete Time Dynamic Average Consensus," IEEE Control Systems Letters, 2023. Available online at: https://ieeexplore.ieee.org/document/10163054

[6] E. Sebastián, E. Montijano, C. Sagüés, M. Franceschelli and A. Gasparri, "Accelerated Alternating Direction Method of Multipliers Gradient Tracking for Distributed Optimization," under review at IEEE Control Systems Letters, 2024.