Author
Listed:
- Xudong Guo
(Department of Automation, Tsinghua University, Beijing, China)
- Peiyu Chen
(Department of Automation, Tsinghua University, Beijing, China)
- Shihao Liang
(Yidu Cloud AI Lab, Yidu Cloud (Beijing) Technology Co., Ltd, Beijing, China)
- Zengtao Jiao
(Yidu Cloud AI Lab, Yidu Cloud (Beijing) Technology Co., Ltd, Beijing, China)
- Linfeng Li
(Yidu Cloud AI Lab, Yidu Cloud (Beijing) Technology Co., Ltd, Beijing, China)
- Jun Yan
(Yidu Cloud AI Lab, Yidu Cloud (Beijing) Technology Co., Ltd, Beijing, China)
- Yadong Huang
(Department of Automation, Tsinghua University, Beijing, China)
- Yi Liu
(Department of Automation, Tsinghua University, Beijing, China)
- Wenhui Fan
(Department of Automation, Tsinghua University, Beijing, China)
Abstract
Background
Policy makers face increasingly complicated challenges in balancing lives saved against economic development in the post-vaccination era of a pandemic. Epidemic simulation models and pandemic control methods have been designed to tackle this problem. However, most existing approaches cannot be applied to real-world cases because they lack adaptability to new scenarios and micro-level representational ability (especially system dynamics models), demand huge amounts of computation, and make inefficient use of historical information.
Methods
We propose a novel Pandemic Control decision-making framework via large-scale Agent-based modeling and deep Reinforcement learning (PaCAR) to search for optimal control policies that simultaneously minimize the spread of infection and government restrictions. Within the framework, we develop a new large-scale agent-based simulator with vaccine settings that can be calibrated to serve as a realistic environment for a city or a state. We also design a novel reinforcement learning architecture tailored to the pandemic control problem, with a reward carefully designed under the net monetary benefit framework and a sequence learning network that extracts information from sequential epidemiological observations such as case counts and vaccination numbers.
Results
Our approach outperforms baselines designed by experts or adopted by real-world governments and is flexible in dealing with different variants, such as the Alpha and Delta variants of COVID-19. PaCAR controls the pandemic with the lowest economic costs, a relatively short epidemic duration, and few cases. We further conduct extensive experiments to analyze the reasoning behind the resulting policy sequence and distill it into an informative reference for policy makers in the post-vaccination era of COVID-19 and beyond.
Limitations
The modeling of economic costs, which are estimated directly from the level of government restrictions, is rather simple. This article focuses mainly on several specific control methods and on single-wave pandemic control.
Conclusions
The proposed framework PaCAR can offer adaptive pandemic control recommendations for different variants and population sizes. Intelligent pandemic control empowered by artificial intelligence may help us make it through the current COVID-19 pandemic and other possible pandemics in the future at a lower cost in both lives and economy.
Highlights
- We introduce a new, efficient, large-scale agent-based epidemic simulator in our framework PaCAR, which can be used to train reinforcement learning networks in a real-world scenario with a population of more than 10,000,000.
- We develop a novel learning mechanism in PaCAR, which augments reinforcement learning with sequence learning, to learn policy decisions that trade off saving lives against economic development in the post-vaccination era.
- We demonstrate that the policy learned by PaCAR outperforms different benchmark policies under various realistic conditions during COVID-19.
- We analyze the resulting policy given by PaCAR; the lessons may shed light on better pandemic preparedness plans in the future.
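The Methods paragraph above mentions two key ingredients: a reward designed under the net monetary benefit framework and a sequence learning network over sequential epidemiological observations. The sketch below is an illustrative, hedged interpretation in Python/PyTorch, not the authors' implementation or exact formulation: it uses the standard net monetary benefit definition (willingness-to-pay times health effect minus cost), and all names and parameters (nmb_reward, qaly_gain, wtp_per_qaly, restriction_cost, ObservationEncoder, hidden_size) are hypothetical.

    # Illustrative sketch only: an NMB-style reward and a GRU observation encoder.
    import torch
    import torch.nn as nn

    def nmb_reward(qaly_gain, wtp_per_qaly, restriction_cost):
        """Net monetary benefit: monetized health gain minus economic cost.

        qaly_gain        -- health effect of the chosen restriction level (hypothetical input)
        wtp_per_qaly     -- willingness-to-pay threshold per QALY (hypothetical input)
        restriction_cost -- economic cost attributed to government restrictions (hypothetical input)
        """
        return wtp_per_qaly * qaly_gain - restriction_cost

    class ObservationEncoder(nn.Module):
        """Summarizes a sequence of daily epidemiological observations
        (e.g., new cases, vaccinations) into a fixed-size state feature."""
        def __init__(self, n_features, hidden_size=64):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden_size, batch_first=True)

        def forward(self, obs_seq):
            # obs_seq: tensor of shape (batch, time, n_features)
            _, h = self.gru(obs_seq)
            return h[-1]  # (batch, hidden_size) summary of the observed trajectory

In a setup of this kind, the encoder output would feed a policy network that selects a restriction level at each decision step, and the NMB-style scalar reward would let a single objective trade off health gains against the economic cost of restrictions.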
Suggested Citation
Xudong Guo & Peiyu Chen & Shihao Liang & Zengtao Jiao & Linfeng Li & Jun Yan & Yadong Huang & Yi Liu & Wenhui Fan, 2022.
"PaCAR: COVID-19 Pandemic Control Decision Making via Large-Scale Agent-Based Modeling and Deep Reinforcement Learning,"
Medical Decision Making, vol. 42(8), pages 1064-1077, November.
Handle:
RePEc:sae:medema:v:42:y:2022:i:8:p:1064-1077
DOI: 10.1177/0272989X221107902