Author
Listed:
- Anna Winnicki
(Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, Illinois 61801; and Coordinated Science Laboratory, University of Illinois Urbana-Champaign, Urbana, Illinois 61801)
- Joseph Lubars
(Sandia National Laboratories, Albuquerque, New Mexico 87123)
- Michael Livesay
(Sandia National Laboratories, Albuquerque, New Mexico 87123)
- R. Srikant
(Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, Illinois 61801; and Coordinated Science Laboratory, University of Illinois Urbana-Champaign, Urbana, Illinois 61801; and c3.ai Digital Transformation Institute, University of Illinois Urbana-Champaign, Urbana, Illinois 61801)
Abstract
Function approximation is widely used in reinforcement learning to handle the computational difficulties associated with very large state spaces. However, function approximation introduces errors that may lead to instabilities when using approximate dynamic programming (DP) techniques to obtain the optimal policy. Therefore, techniques such as lookahead for policy improvement and m-step rollout for policy evaluation are used in practice to improve the performance of approximate DP with function approximation. We quantitatively characterize the impact of lookahead and m-step rollout on the performance of approximate DP with function approximation. (i) Without a sufficient combination of lookahead and m-step rollout, approximate DP may not converge. (ii) Both lookahead and m-step rollout improve the convergence rate of approximate DP. (iii) Lookahead helps mitigate the effect of function approximation and the discount factor on the asymptotic performance of the algorithm. Our results are presented for two approximate DP methods: one that uses least-squares regression to perform function approximation and another that performs several steps of gradient descent of the least-squares objective in each iteration.
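As a rough illustration of the procedure the abstract describes, the sketch below (not taken from the paper) runs approximate policy iteration on a small synthetic MDP: H-step lookahead for policy improvement, m-step rollout for policy evaluation (applied exactly through the model here rather than by simulation), and a least-squares fit of a linear value function. The MDP, feature matrix, and parameter values are illustrative assumptions.

```python
# Minimal sketch of approximate policy iteration with H-step lookahead,
# m-step rollout, and least-squares linear value function approximation.
# The random MDP, features, and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
S, A, d = 20, 3, 5      # number of states, actions, feature dimension
gamma = 0.9             # discount factor
H, m = 2, 5             # lookahead depth and rollout length

# Random MDP: transition kernel P[a, s, s'] and rewards r[s, a].
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((S, A))
Phi = rng.standard_normal((S, d))   # feature matrix (rows index states)

def bellman_backup(V, policy=None):
    """One Bellman backup on V: greedy if policy is None, else for that policy."""
    Q = r + gamma * np.einsum('ast,t->sa', P, V)
    if policy is None:
        return Q.max(axis=1), Q.argmax(axis=1)
    return Q[np.arange(S), policy], policy

def lookahead_policy(V, H):
    """Greedy policy after H-1 applications of the Bellman optimality operator."""
    for _ in range(H - 1):
        V, _ = bellman_backup(V)
    _, pi = bellman_backup(V)
    return pi

def rollout_values(V, pi, m):
    """m-step rollout: apply the policy's Bellman operator m times to V."""
    for _ in range(m):
        V, _ = bellman_backup(V, pi)
    return V

theta = np.zeros(d)
for k in range(50):
    V = Phi @ theta                      # current approximate value function
    pi = lookahead_policy(V, H)          # policy improvement via lookahead
    targets = rollout_values(V, pi, m)   # policy evaluation via m-step rollout
    theta, *_ = np.linalg.lstsq(Phi, targets, rcond=None)  # least-squares fit

print("greedy lookahead policy after training:", lookahead_policy(Phi @ theta, H))
```

The second method mentioned in the abstract would replace the exact least-squares solve with a few gradient-descent steps on the same least-squares objective in each iteration.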
Suggested Citation
Anna Winnicki & Joseph Lubars & Michael Livesay & R. Srikant, 2025.
"The Role of Lookahead and Approximate Policy Evaluation in Reinforcement Learning with Linear Value Function Approximation,"
Operations Research, INFORMS, vol. 73(1), pages 139-156, January.
Handle:
RePEc:inm:oropre:v:73:y:2025:i:1:p:139-156
DOI: 10.1287/opre.2022.0357
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:oropre:v:73:y:2025:i:1:p:139-156. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.