Duality-based stochastic policy optimization for estimation with unknown noise covariances

S. Talebi, A. Taghvaei, M. Mesbahi - 2023 American Control Conference (ACC), 2023 - ieeexplore.ieee.org
Duality of control and estimation allows mapping recent advances in data-guided control to the estimation setup. This paper formalizes and utilizes such a mapping to consider learning the optimal (steady-state) Kalman gain when process and measurement noise statistics are unknown. Specifically, building on the duality between synthesizing optimal control and estimation gains, the filter design problem is formalized as direct policy learning. In this direction, the duality is used to extend existing theoretical guarantees of direct policy updates for the Linear Quadratic Regulator (LQR) to establish global convergence of the Gradient Descent (GD) algorithm for the estimation problem, while addressing subtle differences between the two synthesis problems. Subsequently, a Stochastic Gradient Descent (SGD) approach is adopted to learn the optimal Kalman gain without knowledge of the noise covariances. The results are illustrated via several numerical examples.
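To make the SGD idea in the abstract concrete, the following is a minimal illustrative sketch, not the authors' algorithm: it learns a steady-state observer gain by zeroth-order stochastic gradient descent on an empirical measurement-prediction-error surrogate. The system matrices, step sizes, rollout length, and the surrogate cost are all assumptions made here for illustration; the paper's actual gradient estimator and objective may differ.

```python
import numpy as np

# Illustrative sketch only: learn a steady-state observer gain L by stochastic
# gradient descent on the empirical measurement-prediction error, using a
# zeroth-order (two-point) gradient estimate. The true covariances Q, R are
# used only to simulate data and are never seen by the learning update.
# All matrices and hyperparameters below are hypothetical.

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.2], [0.0, 0.8]])   # state transition (assumed known)
C = np.array([[1.0, 0.0]])               # measurement map (assumed known)
Q = 0.1 * np.eye(2)                      # process noise cov (hidden from learner)
R = 0.5 * np.eye(1)                      # measurement noise cov (hidden from learner)

def rollout(L, T=200):
    """Run the observer xhat_{k+1} = A xhat_k + L (y_k - C xhat_k) on simulated
    data and return the empirical mean-squared measurement prediction error."""
    x = np.zeros(2)
    xhat = np.zeros(2)
    cost = 0.0
    for _ in range(T):
        y = C @ x + rng.multivariate_normal(np.zeros(1), R)
        e = y - C @ xhat                      # innovation (observable quantity)
        cost += float(e @ e)
        xhat = A @ xhat + (L @ e)             # observer update with current gain
        x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    return cost / T

L = np.zeros((2, 1))                          # initial (stabilizing) observer gain
step, smoothing = 5e-3, 0.1
for it in range(2000):
    # two-point zeroth-order estimate of the gradient of the rollout cost w.r.t. L
    U = rng.standard_normal(L.shape)
    g = (rollout(L + smoothing * U) - rollout(L - smoothing * U)) / (2 * smoothing) * U
    L -= step * g

print("learned gain:\n", L)
```

Note that Q and R enter only through the simulator; the update itself uses only measured innovations, mirroring the paper's premise that the noise statistics are unavailable when synthesizing the filter gain.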