Sanusi, I. orcid.org/0000-0002-3198-9048, Mills, A. orcid.org/0000-0002-6798-5284, Dodd, T. et al. (1 more author) (2020) Online optimal and adaptive integral tracking control for varying discrete‐time systems using reinforcement learning. International Journal of Adaptive Control and Signal Processing, 34 (8). pp. 971-991. ISSN 0890-6327
Abstract
A conventional closed-form solution to the optimal control problem is available only under the assumption that the system dynamics are known and described by differential equations. Without such models, reinforcement learning (RL) has been successfully applied as a candidate technique to iteratively solve the optimal control problem for unknown or varying systems. For the optimal tracking control problem, however, existing RL techniques in the literature rely on a predetermined feedforward input for the tracking control, restrictive assumptions on the reference model dynamics, or discounted tracking costs; moreover, with discounted tracking costs, existing RL methods cannot guarantee zero steady-state error. This article therefore presents an online optimal RL tracking control framework for discrete-time (DT) systems that imposes none of these restrictive assumptions and also guarantees zero steady-state tracking error. This is achieved by augmenting the original system dynamics with the integral of the error between the reference inputs and the tracked outputs for use in the online RL framework. It is further shown that the resulting value function for the DT linear quadratic tracker under this augmented integral-control formulation is also quadratic. This enables the development of Bellman equations that use only system measurements to solve the corresponding DT algebraic Riccati equation and obtain the optimal tracking control inputs online. Two RL strategies are then proposed, based on value function approximation and on Q-learning, together with bounds on the excitation required for convergence of the parameter estimates. Simulation case studies show the effectiveness of the proposed approach.
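To make the augmentation concrete, here is a minimal sketch (illustrative only; the plant, weights, and gains below are assumptions, not taken from the paper). The plant state $x_k$ is stacked with the integral $z_k$ of the tracking error $e_k = r_k - y_k$:

$$
X_k = \begin{bmatrix} x_k \\ z_k \end{bmatrix}, \qquad
X_{k+1} = \begin{bmatrix} A & 0 \\ -C & I \end{bmatrix} X_k
        + \begin{bmatrix} B \\ 0 \end{bmatrix} u_k
        + \begin{bmatrix} 0 \\ I \end{bmatrix} r_k .
$$

Per the abstract, the linear quadratic tracker's value function on $X_k$ remains quadratic, so a Bellman (or Q-function) equation can be solved from measurements alone. The Python sketch below applies a Q-learning-style policy iteration with recursive least squares to a hypothetical second-order plant; the reference is held at zero during learning so that the purely quadratic Q-function form is exact, and the learned gain is then deployed on a step reference, where the integral state drives the error to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical second-order DT plant (illustrative, not from the paper):
#   x_{k+1} = A x_k + B u_k,   y_k = C x_k
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

n, m = 3, 1                        # augmented state dim ([x; z]) and input dim
Qc = np.diag([1.0, 0.1, 10.0])     # stage cost on the augmented state
Rc = np.array([[0.1]])             # stage cost on the input

def phi(X, u):
    """Quadratic features so that Q(X, u) = w . phi(X, u) = [X;u]' H [X;u]."""
    v = np.concatenate([X, u])
    i, j = np.triu_indices(len(v))
    scale = np.where(i == j, 1.0, 2.0)   # fold symmetric off-diagonal pairs
    return scale * np.outer(v, v)[i, j]

L = (n + m) * (n + m + 1) // 2
w = np.zeros(L)                    # Q-function kernel parameters (vech of H)
P = 100.0 * np.eye(L)              # RLS covariance
K = np.array([[0.5, 0.7, -0.15]])  # assumed initial stabilizing gain

x = np.array([1.0, -0.5])
z = np.array([0.0])
for k in range(3000):
    X = np.concatenate([x, z])
    u = -K @ X + 0.2 * rng.standard_normal(m)   # probing noise for excitation
    y = C @ x
    cost = X @ Qc @ X + u @ Rc @ u
    x = A @ x + B @ u
    z = z - y                                   # integrator, r = 0 while learning
    Xn = np.concatenate([x, z])
    un = -K @ Xn                                # greedy action of current policy
    # Bellman equation for the Q-function of the current policy:
    #   Q(X, u) = cost + Q(X', u')  =>  regress cost on phi(X, u) - phi(X', u')
    psi = phi(X, u) - phi(Xn, un)
    Pp = P @ psi
    g = Pp / (1.0 + psi @ Pp)
    w += g * (cost - psi @ w)
    P -= np.outer(g, Pp)
    if (k + 1) % 300 == 0:                      # periodic policy improvement
        H = np.zeros((n + m, n + m))
        H[np.triu_indices(n + m)] = w
        H = H + H.T - np.diag(np.diag(H))       # unfold the symmetric kernel
        K = np.linalg.solve(H[n:, n:], H[n:, :n])  # u* = -H_uu^{-1} H_uX X
        P = 100.0 * np.eye(L)                   # re-excite RLS for the new policy

# Deploy the learned gain on a step reference: integral action forces e -> 0.
x = np.zeros(2); z = np.array([0.0]); r = 1.0
for k in range(300):
    X = np.concatenate([x, z])
    u = -K @ X
    y = C @ x
    x = A @ x + B @ u
    z = z + (r - y)
print("steady-state tracking error:", float(r - (C @ x)[0]))
```

The key design point, mirroring the abstract, is that no model of $A$, $B$, or $C$ appears inside the learning update: the recursive least squares regression consumes only measured states, inputs, and stage costs, and the probing noise supplies the excitation needed for the parameter estimates to converge.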
Metadata
| Item Type | Article |
|---|---|
| Authors/Creators | Sanusi, I.; Mills, A.; Dodd, T.; et al. (1 more author) |
| Copyright, Publisher and Additional Information | © 2020 The Authors. International Journal of Adaptive Control and Signal Processing published by John Wiley & Sons, Ltd. This is an open access article under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited. |
| Keywords | adaptive control; adaptive dynamic programming; optimal tracking control; Q-function approximation; reinforcement learning |
| Dates | 2020 (published) |
| Institution | The University of Sheffield |
| Academic Units | The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Automatic Control and Systems Engineering (Sheffield) |
| Depositing User | Symplectic Sheffield |
| Date Deposited | 24 Apr 2020 15:30 |
| Last Modified | 24 Nov 2021 13:18 |
| Status | Published |
| Publisher | Wiley |
| Refereed | Yes |
| Identification Number (DOI) | 10.1002/acs.3115 |
| Open Archives Initiative ID (OAI ID) | oai:eprints.whiterose.ac.uk:159641 |