Author
Listed:
- Vineet Goyal
(Department of Industrial Engineering and Operations Research, Columbia University, New York, New York 10027)
- Julien Grand-Clément
(Department of Information Systems and Operations Management, Ecole des hautes études commerciales (HEC) de Paris, 78350 Jouy-en-Josas, France)
Abstract
Markov decision processes (MDPs) are used to model stochastic systems in many applications. Several efficient algorithms to compute optimal policies have been studied in the literature, including value iteration (VI) and policy iteration. However, these do not scale well, especially when the discount factor for the infinite-horizon discounted reward, λ, gets close to one. In particular, the running time scales as O(1/(1−λ)) for these algorithms. In this paper, our goal is to design new algorithms that scale better than previous approaches when λ approaches 1. Our main contribution is to present a connection between VI and gradient descent and to adapt the ideas of acceleration and momentum from convex optimization to design faster algorithms for MDPs. We prove theoretical guarantees of faster convergence of our algorithms for the computation of the value function of a policy, where the running times of our algorithms scale as O(1/√(1−λ)) for reversible MDP instances. The improvement is quite analogous to Nesterov's acceleration and momentum in convex optimization. We also provide a lower bound on the convergence properties of any first-order algorithm for solving MDPs, presenting a family of MDP instances for which no algorithm can converge faster than VI when the number of iterations is smaller than the number of states. We introduce safe accelerated value iteration (S-AVI), which alternates between accelerated updates and value iteration updates. Our algorithm S-AVI is worst-case optimal and retains the theoretical convergence properties of VI while exhibiting strong empirical performance and providing significant speedups when compared with classical approaches (up to one order of magnitude in many cases) for a large test bed of MDP instances.
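To make the acceleration idea concrete, here is a minimal illustrative Python sketch (not the authors' exact S-AVI algorithm) comparing plain value iteration for evaluating a fixed policy, v_{k+1} = r + λ P v_k, with a Nesterov-style momentum variant that extrapolates between the last two iterates before applying the Bellman evaluation operator. The momentum weight β = (1 − √(1−λ))/(1 + √(1−λ)) is a heuristic assumption modeled on the (√κ − 1)/(√κ + 1) rule from convex optimization with κ = 1/(1−λ); the random transition matrix, reward vector, and function names are likewise invented for illustration only.

```python
import numpy as np

def value_iteration_eval(P, r, lam, tol=1e-8, max_iter=100_000):
    """Plain fixed-point iteration v_{k+1} = r + lam * P @ v_k for a fixed policy."""
    v = np.zeros(len(r))
    for k in range(max_iter):
        v_next = r + lam * P @ v
        if np.max(np.abs(v_next - v)) < tol:
            return v_next, k + 1
        v = v_next
    return v, max_iter

def momentum_value_eval(P, r, lam, tol=1e-8, max_iter=100_000):
    """Nesterov-style momentum applied to the same fixed-point map.

    NOTE: the momentum weight is a heuristic, analogous to the
    (sqrt(kappa)-1)/(sqrt(kappa)+1) rule with kappa = 1/(1-lam);
    it is not the exact schedule used in the paper.
    """
    beta = (1.0 - np.sqrt(1.0 - lam)) / (1.0 + np.sqrt(1.0 - lam))
    v_prev = np.zeros(len(r))
    v = r.copy()
    for k in range(max_iter):
        h = v + beta * (v - v_prev)      # extrapolation (momentum) step
        v_next = r + lam * P @ h         # Bellman evaluation operator at h
        if np.max(np.abs(v_next - v)) < tol:
            return v_next, k + 1
        v_prev, v = v, v_next
    return v, max_iter

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    P = rng.random((n, n))
    P /= P.sum(axis=1, keepdims=True)    # row-stochastic transition matrix of a fixed policy
    r = rng.random(n)
    lam = 0.999
    v_vi, it_vi = value_iteration_eval(P, r, lam)
    v_mo, it_mo = momentum_value_eval(P, r, lam)
    print(f"VI iterations:       {it_vi}")
    print(f"Momentum iterations: {it_mo}")
    print(f"max |difference|:    {np.max(np.abs(v_vi - v_mo)):.2e}")
```

On such an instance the momentum variant typically needs far fewer iterations than plain VI for λ close to 1, but unguarded momentum can lose the contraction guarantees of the Bellman operator on general (non-reversible) instances; this is the motivation, as stated in the abstract, for S-AVI interleaving accelerated updates with safe value iteration updates.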
Suggested Citation
Vineet Goyal & Julien Grand-Clément, 2023.
"A First-Order Approach to Accelerated Value Iteration,"
Operations Research, INFORMS, vol. 71(2), pages 517-535, March.
Handle:
RePEc:inm:oropre:v:71:y:2023:i:2:p:517-535
DOI: 10.1287/opre.2022.2269