Authors:
- Siliang Zeng
(Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota 55455)
- Mingyi Hong
(Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota 55455)
- Alfredo Garcia
(Department of Industrial and Systems Engineering, Texas A&M University College of Engineering, College Station, Texas 77843)
Abstract
We consider the task of estimating a structural model of dynamic decisions by a human agent based on the observable history of implemented actions and visited states. This problem has an inherent nested structure: In the inner problem, an optimal policy for a given reward function is identified, whereas in the outer problem, a measure of fit is maximized. Several approaches have been proposed to alleviate the computational burden of this nested-loop structure, but these methods still suffer from high complexity when the state space is either discrete with large cardinality or continuous in high dimensions. Other approaches in the inverse reinforcement learning literature emphasize policy estimation at the expense of reduced reward estimation accuracy. In this paper, we propose a single-loop estimation algorithm with finite time guarantees that is equipped to deal with high-dimensional state spaces without compromising reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show the proposed algorithm converges to a stationary solution with a finite-time guarantee. Further, if the reward is parameterized linearly, the algorithm approximates the maximum likelihood estimator sublinearly.
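The abstract describes a single-loop scheme in which each policy improvement step is followed by a stochastic gradient step for likelihood maximization. As an illustration only, the sketch below implements that alternation on a small tabular MDP with a linearly parameterized reward, using a soft (max-entropy) Bellman backup for the policy improvement step and a common surrogate likelihood gradient (conditional feature matching, which ignores the dependence of the continuation value on the reward parameters). All names, dimensions, and the surrogate gradient are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small tabular MDP (dimensions are arbitrary for illustration):
# reward is linear in state-action features, r(s, a) = phi(s, a)^T theta.
nS, nA, d, gamma = 5, 3, 4, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
Phi = rng.standard_normal((nS, nA, d))          # state-action features
theta_true = rng.standard_normal(d)

def soft_policy(Q):
    """Max-entropy policy: pi(a|s) proportional to exp(Q(s, a))."""
    Z = np.exp(Q - Q.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)

def soft_backup(Q, r):
    """One soft Bellman policy-improvement step: Q <- r + gamma * P * softmax-value."""
    m = Q.max(axis=1)
    V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))  # log-sum-exp value
    return r + gamma * (P @ V)

# Generate demonstrations from the (approximately) soft-optimal policy under theta_true.
Q = np.zeros((nS, nA))
for _ in range(200):
    Q = soft_backup(Q, Phi @ theta_true)
pi_star = soft_policy(Q)
states = rng.integers(0, nS, size=2000)
actions = np.array([rng.choice(nA, p=pi_star[s]) for s in states])

# Single-loop estimation: in each iteration, take ONE policy-improvement step,
# then ONE stochastic gradient step on a surrogate of the log-likelihood.
theta = np.zeros(d)
Q = np.zeros((nS, nA))
step = 0.1
for _ in range(500):
    Q = soft_backup(Q, Phi @ theta)              # inner: one policy improvement
    pi = soft_policy(Q)
    batch = rng.integers(0, len(states), 64)     # outer: minibatch gradient step
    s_b, a_b = states[batch], actions[batch]
    # Surrogate likelihood gradient: observed minus policy-averaged features.
    grad = (Phi[s_b, a_b] - np.einsum('ba,bad->bd', pi[s_b], Phi[s_b])).mean(axis=0)
    theta += step * grad                         # ascent on the surrogate likelihood

# Final policy under the estimated reward parameters.
Q = soft_backup(Q, Phi @ theta)
pi = soft_policy(Q)
avg_loglik = np.log(pi[states, actions]).mean()
print(f"average demonstration log-likelihood: {avg_loglik:.3f}")
```

The key design point the sketch mirrors is that the inner problem is never solved to completion: the Q-table is warm-started across iterations and receives a single backup per reward update, so the policy tracks the slowly moving reward estimate.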
Suggested Citation
Siliang Zeng & Mingyi Hong & Alfredo Garcia, 2025.
"Structural Estimation of Markov Decision Processes in High-Dimensional State Space with Finite-Time Guarantees,"
Operations Research, INFORMS, vol. 73(2), pages 720-737, March.
Handle:
RePEc:inm:oropre:v:73:y:2025:i:2:p:720-737
DOI: 10.1287/opre.2022.0511
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:oropre:v:73:y:2025:i:2:p:720-737. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item about which we are uncertain.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.