Jorge Cortés
Professor
Cymer Corporation Endowed Chair
Off-policy reinforcement learning with anytime safety guarantees via robust safe gradient flow
P. Mestres, A. Marzabal, J. Cortés
IEEE Transactions on Automatic Control, submitted
Abstract
This paper considers the problem of solving constrained
reinforcement learning (RL) problems with anytime guarantees,
meaning that the algorithmic solution must yield a
constraint-satisfying policy at every iteration of its evolution.
Our design is based on a discretization of the Robust Safe Gradient
Flow (RSGF), a continuous-time dynamics for anytime constrained
optimization whose forward invariance and stability properties we
formally characterize. The proposed strategy, termed RSGF-RL, is an
off-policy algorithm which uses episodic data to estimate the value
functions and their gradients and updates the policy parameters by
solving a convex quadratically constrained quadratic program. Our
technical analysis combines statistical tools, the theory of
stochastic approximation, and convex analysis to determine the number
of episodes sufficient to ensure, with an arbitrary user-specified
probability, that a safe policy is updated to another safe policy and
that the algorithm recovers from an unsafe one, and to establish
almost-sure asymptotic convergence to the set of KKT points of the RL
problem. Simulations on a navigation example and the
cart-pole system illustrate the superior performance of RSGF-RL with
respect to the state of the art.
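To make the update step concrete, below is a minimal sketch of one
RSGF-RL-style iteration in Python. It assumes the episodic estimates
grad_J (return gradient), c_hat, and grad_c_hat (constraint value and
its gradient) have already been computed; the function name and the
constants alpha, eps, and step are hypothetical, and the robustified
constraint is a generic safe-direction condition rather than
necessarily the paper's exact formulation. It uses cvxpy to solve the
convex quadratically constrained subproblem.

    import numpy as np
    import cvxpy as cp

    def rsgf_rl_update(theta, grad_J, c_hat, grad_c_hat,
                       alpha=1.0, eps=0.1, step=0.05):
        """One illustrative safe policy update (hypothetical API)."""
        d = cp.Variable(theta.size)
        # Stay as close as possible to the estimated ascent direction.
        objective = cp.Minimize(cp.sum_squares(d - grad_J))
        # Robustified safe-direction condition: the eps*||d|| margin
        # hedges against gradient-estimation error and makes this a
        # convex quadratic (second-order cone) constraint, so the
        # subproblem is a convex QCQP.
        constraints = [grad_c_hat @ d + eps * cp.norm(d, 2)
                       <= -alpha * c_hat]
        cp.Problem(objective, constraints).solve()
        return theta + step * d.value

    # Toy usage with made-up estimates (c_hat < 0 means currently safe).
    theta = np.zeros(4)
    grad_J = np.array([1.0, 0.5, -0.2, 0.3])
    c_hat, grad_c_hat = -0.1, np.array([0.2, -0.4, 0.1, 0.0])
    theta = rsgf_rl_update(theta, grad_J, c_hat, grad_c_hat)

When c_hat <= 0 the zero direction is always feasible, so a safe
policy is never forced out of the safe set; when c_hat > 0 the
constraint demands a decrease in the estimated constraint value,
mirroring the recovery behavior described in the abstract.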
pdf
Mechanical and Aerospace Engineering,
University of California, San Diego
9500 Gilman Dr,
La Jolla, California, 92093-0411
Ph: 1-858-822-7930
Fax: 1-858-822-3107
cortes at ucsd.edu
Skype id: jorgilliyo