Author
Listed:
- Mu Du
(School of Economics and Management, Dalian University of Technology, Dalian 116024, China)
- Hongtao Yu
(School of Economics and Management, Dalian University of Technology, Dalian 116024, China)
- Nan Kong
(Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana 47907)
Abstract
We investigate a novel type of online sequential decision problem under uncertainty, namely the mixed observability Markov decision process with time-varying interval-valued parameters (MOMDP-TVIVP). Such data-driven optimization problems with online learning have wide real-world applications (e.g., coordinating surveillance and intervention activities for pandemic control under limited resources). Solving an MOMDP-TVIVP is highly challenging because the unobserved states and time-varying parameters require online system identification and reoptimization based on newly acquired observational data. Moreover, for many practical problems, the action and state spaces are intractably large for online optimization. To address this challenge, we propose a novel transfer reinforcement learning (TRL)-based algorithmic approach that integrates transfer learning (TL) into deep reinforcement learning (DRL) in an offline-online scheme. To accelerate the online reoptimization, we pretrain a collection of promising networks and fine-tune them with newly acquired observational data of the system. The hallmark of our approach is combining the strong approximation ability of neural networks with the high flexibility of TL by efficiently adapting the previously learned policy to changes in system dynamics. A computational study under different uncertainty configurations and problem scales shows that our approach outperforms existing methods in solution optimality, robustness, efficiency, and scalability. We also demonstrate the value of fine-tuning by comparing TRL with DRL: for problem instances with a continuous state-action space of modest dimensionality, TRL with fine-tuning yields at least a 21% solution improvement while spending, in each period, no more than 0.62% of the time spent on pretraining. A retrospective study of a pandemic control use case in Shanghai, China, shows that TRL improves decision making on several public health metrics. Our approach is the first-ever endeavor to employ intensive neural network training in solving Markov decision processes that require online system identification and reoptimization.
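The abstract describes an offline-online scheme: pretrain several candidate networks on sampled realizations of the interval-valued parameters, then fine-tune with newly observed data at decision time. The following is a minimal sketch of that idea, not the authors' implementation; all names (PolicyNet, pretrain, fine_tune) and hyperparameters are hypothetical, and the scenario-selection and belief-update steps of the actual method are omitted.

```python
# Hypothetical sketch of the offline-online transfer scheme described in
# the abstract: pretrain candidate policy networks offline on sampled
# parameter scenarios, then adapt one online with a few gradient steps
# on newly acquired observational data.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Maps a (state, belief) feature vector to action logits."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

def pretrain(nets, scenarios, loss_fn, epochs: int = 100, lr: float = 1e-3):
    """Offline stage: train each candidate network on data generated from
    one sampled realization of the interval-valued parameters."""
    for net, (features, targets) in zip(nets, scenarios):
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss_fn(net(features), targets).backward()
            opt.step()

def fine_tune(net, features, targets, loss_fn, steps: int = 20, lr: float = 1e-4):
    """Online stage: adapt a selected pretrained network to newly observed
    system data with a few low-learning-rate gradient steps, so the online
    cost stays a small fraction of the offline pretraining cost."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(features), targets).backward()
        opt.step()
    return net
```

In the paper's setting, the network to fine-tune would be the one whose pretraining scenario best matches the parameters identified from new observations; that selection step, and the MOMDP belief updates, are beyond this sketch.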