Hybrid reinforcement learning for optimal control of non-linear switching system

X Li, L Dong, L Xue, C Sun - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
Based on the reinforcement learning mechanism, a data-based scheme is proposed to address the optimal control problem of discrete-time non-linear switching systems. In contrast to conventional systems, in switching systems the control signal consists of the active mode (discrete) and the control input (continuous). First, the Hamilton–Jacobi–Bellman equation over the hybrid action space is derived, and a two-stage value iteration method is proposed to learn the optimal solution. In addition, a neural network structure is designed by decomposing the Q-function into the value function and the normalized advantage function, which is quadratic with respect to the continuous control of the subsystems. In this way, the Q-function and the continuous policy can be updated simultaneously at each iteration step, so that the training of hybrid policies is simplified to a one-step procedure. Moreover, a convergence analysis of the proposed algorithm that accounts for approximation error is provided. Finally, the algorithm is evaluated on three different simulation examples. Comparisons with related work demonstrate the potential of the proposed method.
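The decomposition described above can be sketched numerically. This is a minimal illustration, not the paper's implementation: the per-mode parameters (`theta_V`, `mu`, `L`) are toy stand-ins for the outputs of the trained network, and all names are assumptions. It shows why the quadratic (normalized) advantage makes the greedy hybrid action cheap to extract: since the advantage peaks at zero, the best continuous control of each mode is its `mu`, and the best mode is the one maximizing the value function.

```python
import numpy as np

# Sketch of a NAF-style decomposition Q(x, m, u) = V(x, m) + A(x, m, u),
# where the advantage A is quadratic in the continuous control u and
# satisfies A <= 0 with A(x, m, mu_m) = 0 (the "normalized" property).
# All parameters below are random toy values, not trained quantities.

rng = np.random.default_rng(0)
n_modes, x_dim, u_dim = 3, 2, 2

theta_V = rng.normal(size=(n_modes, x_dim))   # per-mode linear value weights
mu = rng.normal(size=(n_modes, u_dim))        # per-mode greedy control
L = rng.normal(size=(n_modes, u_dim, u_dim))  # Cholesky-like factors for P

def q_value(x, m, u):
    """Q(x, m, u) = V(x, m) + A(x, m, u)."""
    V = theta_V[m] @ x
    P = L[m] @ L[m].T + 1e-6 * np.eye(u_dim)  # positive definite by construction
    d = u - mu[m]
    A = -0.5 * d @ P @ d                      # normalized advantage, <= 0
    return V + A

def hybrid_greedy(x):
    """Greedy hybrid action: max over u of A is 0 (attained at mu[m]),
    so argmax_{m,u} Q reduces to picking the mode with the largest V."""
    values = np.array([theta_V[m] @ x for m in range(n_modes)])
    m_star = int(np.argmax(values))
    return m_star, mu[m_star]

x = rng.normal(size=x_dim)
m_star, u_star = hybrid_greedy(x)
# The greedy pair dominates any other (mode, control) pair:
for m in range(n_modes):
    assert q_value(x, m_star, u_star) >= q_value(x, m, rng.normal(size=u_dim))
```

The point of the structure is visible in `hybrid_greedy`: maximizing over the hybrid action space never requires an inner optimization over `u`, which is what allows the Q-function and the continuous policy to be updated together in one step.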