Author
Mohammed Shahid Abdulla & Shalabh Bhatnagar
Abstract
Simulation-based, model-free solutions to Markov Decision Processes (MDPs) using the Least Squares Policy Iteration (LSPI) algorithm have been applied in multiple practical settings and in several variants. An optimal policy in an MDP is the policy, i.e. a prescription of which action to take in each state of the MDP, that performs best according to a given metric such as infinite-horizon discounted cost. A simulation-based algorithm obtains the optimal policy for an MDP in a model-free manner, i.e. without requiring a priori knowledge of the transition probabilities of the MDP under any policy. This work proposes LSPI-CAS, a version of LSPI for compact action sets, which avoids discretizing the action set available in a state and thereby improves control over the system. Regular LSPI works by repeatedly picking the current best action in a state x from a finite feasible set of actions A_x, requiring a minimum over |A_x| values. Our variant uses two kinds of parametrization: a feature vector f(x) for the state, used by the actor, and φ(x, a) for the state-action pair, used by the critic. LSPI-CAS employs a stochastic gradient algorithm called Simultaneous Perturbation Stochastic Approximation (SPSA) to update the actor in each iteration. Regular LSPI has a module named Least-Squares Q-Value (LSQ), which we employ as the critic to evaluate perturbed policy iterates and then update the policy iterate in the direction of improving performance. Our algorithm is for infinite-horizon discounted-cost/reward MDPs; the case of finite-horizon compact-action-set MDPs was solved in an earlier work. Numerical results on three settings, (a) control of an inverted pendulum, (b) exercise-policy calculation for an American put option, and (c) an M/M/1/K queue control problem, are provided for the algorithm, alongside a comparison with LSPI. Improvements in both performance and run-time to find an optimal policy are demonstrated.
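As a rough illustration of the actor update the abstract describes, the sketch below shows a generic two-sided SPSA gradient step in Python. The critic q_hat, the perturbation size delta, and the step size are placeholders chosen for illustration; this is a minimal sketch of the SPSA technique, not the authors' exact update rule or feature parametrization.

import numpy as np

def spsa_actor_step(theta, q_hat, delta=0.1, step=0.01, rng=None):
    """One SPSA update of the actor parameter `theta`.

    `q_hat(theta)` is assumed to return the critic's estimate (e.g. via
    LSQ) of the discounted cost of the policy parametrized by `theta`.
    Generic sketch only, not the paper's implementation.
    """
    rng = rng or np.random.default_rng()
    # Rademacher (+/-1) perturbation of every coordinate simultaneously.
    perturb = rng.choice([-1.0, 1.0], size=theta.shape)
    # Two-sided SPSA gradient estimate from just two critic evaluations,
    # regardless of the dimension of theta.
    g = (q_hat(theta + delta * perturb) - q_hat(theta - delta * perturb)) / (2.0 * delta * perturb)
    # Descend the estimated gradient (cost minimization).
    return theta - step * g

Because only two evaluations of the critic are needed per step, the cost of the gradient estimate does not grow with the dimension of theta, which is the usual appeal of SPSA over finite-difference schemes in this setting.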
Suggested Citation
Mohammed Shahid Abdulla & Shalabh Bhatnagar, 2018. "LSPI-CAS: Least-Squares Policy Iteration for Compact Action Set MDPs," Working papers 293, Indian Institute of Management Kozhikode.
Handle: RePEc:iik:wpaper:293