Multivariate dynamic mediation analysis under a reinforcement learning framework
Author
Abstract
Suggested Citation
Download full text from publisher
References listed on IDEAS
- Yiping Yuan & Xiaotong Shen & Wei Pan & Zizhuo Wang, 2019. "Constrained likelihood for reconstructing a directed acyclic Gaussian graph," Biometrika, Biometrika Trust, vol. 106(1), pages 109-125.
- Chengchun Shi & Lexin Li, 2022. "Testing Mediation Effects Using Logic of Boolean Matrices," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 117(540), pages 2014-2027, October.
- Guo, Xu & Li, Runze & Liu, Jingyuan & Zeng, Mudong, 2023. "Statistical inference for linear mediation models with high-dimensional mediators and application to studying stock reaction to COVID-19 pandemic," Journal of Econometrics, Elsevier, vol. 235(1), pages 166-179.
- J. Peters & P. Bühlmann, 2014. "Identifiability of Gaussian structural equation models with equal error variances," Biometrika, Biometrika Trust, vol. 101(1), pages 219-228.
- Tyler J. VanderWeele & Eric J. Tchetgen Tchetgen, 2017. "Mediation analysis with time varying exposures and mediators," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 79(3), pages 917-938, June.
- David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan , 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
- Nathan Kallus & Masatoshi Uehara, 2022. "Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning," Operations Research, INFORMS, vol. 70(6), pages 3282-3302, November.
- Zheng Wenjing & van der Laan Mark, 2017. "Longitudinal Mediation Analysis with Time-varying Mediators and Exposures, with Application to Survival Outcomes," Journal of Causal Inference, De Gruyter, vol. 5(2), pages 1-24, September.
- Shi, Chengchun & Li, Lexin, 2022. "Testing mediation effects using logic of Boolean matrices," LSE Research Online Documents on Economics 108881, London School of Economics and Political Science, LSE Library.
- Daniel J. Luckett & Eric B. Laber & Anna R. Kahkoska & David M. Maahs & Elizabeth Mayer-Davis & Michael R. Kosorok, 2020. "Estimating Dynamic Treatment Regimes in Mobile Health Using V-Learning," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 115(530), pages 692-706, April.
- Haoyu Wei & Hengrui Cai & Chengchun Shi & Rui Song, 2024. "On Efficient Inference of Causal Effects with Multiple Mediators," Papers 2401.05517, arXiv.org.
- S. A. Murphy, 2003. "Optimal dynamic treatment regimes," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 65(2), pages 331-355, May.
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Yen-Tsung Huang & Wen-Chi Pan, 2016. "Hypothesis test of mediation effect in causal mediation model with high-dimensional continuous mediators," Biometrics, The International Biometric Society, vol. 72(2), pages 402-413, June.
- Peng Liao & Predrag Klasnja & Susan Murphy, 2021. "Off-Policy Estimation of Long-Term Average Outcomes With Applications to Mobile Health," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 116(533), pages 382-391, March.
- Viviana Celli, 2022. "Causal mediation analysis in economics: Objectives, assumptions, models," Journal of Economic Surveys, Wiley Blackwell, vol. 36(1), pages 214-234, February.
- Shi, Chengchun & Zhang, Shengxing & Lu, Wenbin & Song, Rui, 2022. "Statistical inference of the value function for reinforcement learning in infinite-horizon settings," LSE Research Online Documents on Economics 110882, London School of Economics and Political Science, LSE Library.
- Chengchun Shi & Sheng Zhang & Wenbin Lu & Rui Song, 2022. "Statistical inference of the value function for reinforcement learning in infinite‐horizon settings," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 84(3), pages 765-793, July.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Zhang, Yingying & Shi, Chengchun & Luo, Shikai, 2023. "Conformal off-policy prediction," LSE Research Online Documents on Economics 118250, London School of Economics and Political Science, LSE Library.
- Zhu, Jin & Wan, Runzhe & Qi, Zhengling & Luo, Shikai & Shi, Chengchun, 2024. "Robust offline reinforcement learning with heavy-tailed rewards," LSE Research Online Documents on Economics 122740, London School of Economics and Political Science, LSE Library.
- Gao, Yuhe & Shi, Chengchun & Song, Rui, 2023. "Deep spectral Q-learning with application to mobile health," LSE Research Online Documents on Economics 119445, London School of Economics and Political Science, LSE Library.
- Li, Lexin & Shi, Chengchun & Guo, Tengfei & Jagust, William J., 2022. "Sequential pathway inference for multimodal neuroimaging analysis," LSE Research Online Documents on Economics 111904, London School of Economics and Political Science, LSE Library.
- Haoyu Wei & Hengrui Cai & Chengchun Shi & Rui Song, 2024. "On Efficient Inference of Causal Effects with Multiple Mediators," Papers 2401.05517, arXiv.org.
- Luo, Shikai & Yang, Ying & Shi, Chengchun & Yao, Fang & Ye, Jieping & Zhu, Hongtu, 2024. "Policy evaluation for temporal and/or spatial dependent experiments," LSE Research Online Documents on Economics 122741, London School of Economics and Political Science, LSE Library.
- Shi, Chengchun & Luo, Shikai & Le, Yuan & Zhu, Hongtu & Song, Rui, 2022. "Statistically efficient advantage learning for offline reinforcement learning in infinite horizons," LSE Research Online Documents on Economics 115598, London School of Economics and Political Science, LSE Library.
- Zhishuai Liu & Jesse Clifton & Eric B. Laber & John Drake & Ethan X. Fang, 2023. "Deep Spatial Q-Learning for Infectious Disease Control," Journal of Agricultural, Biological and Environmental Statistics, Springer;The International Biometric Society;American Statistical Association, vol. 28(4), pages 749-773, December.
- Shi, Chengchun & Li, Lexin, 2022. "Testing mediation effects using logic of Boolean matrices," LSE Research Online Documents on Economics 108881, London School of Economics and Political Science, LSE Library.
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
- Shuxi Zeng & Elizabeth C. Lange & Elizabeth A. Archie & Fernando A. Campos & Susan C. Alberts & Fan Li, 2023. "A Causal Mediation Model for Longitudinal Mediators and Survival Outcomes with an Application to Animal Behavior," Journal of Agricultural, Biological and Environmental Statistics, Springer;The International Biometric Society;American Statistical Association, vol. 28(2), pages 197-218, June.
- Kara E. Rudolph & Jonathan Levy & Mark J. van der Laan, 2021. "Transporting stochastic direct and indirect effects to new populations," Biometrics, The International Biometric Society, vol. 77(1), pages 197-211, March.
- Hao, Meiling & Su, Pingfan & Hu, Liyuan & Szabo, Zoltan & Zhao, Qianyu & Shi, Chengchun, 2024. "Forward and backward state abstractions for off-policy evaluation," LSE Research Online Documents on Economics 124074, London School of Economics and Political Science, LSE Library.
- Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
- Yang, Kaiyuan & Huang, Houjing & Vandans, Olafs & Murali, Adithya & Tian, Fujia & Yap, Roland H.C. & Dai, Liang, 2023. "Applying deep reinforcement learning to the HP model for protein structure prediction," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 609(C).
- Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
- Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
- Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
- Wen Wei Loh & Beatrijs Moerkerke & Tom Loeys & Stijn Vansteelandt, 2022. "Nonlinear mediation analysis with high‐dimensional mediators whose causal structure is unknown," Biometrics, The International Biometric Society, vol. 78(1), pages 46-59, March.
More about this item
Keywords
longitudinal data; Markov process; mediation analysis; mobile health; reinforcement learning
JEL classification:
- C1 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:ehl:lserod:127112. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: LSERO Manager (email available below). General contact details of provider: https://edirc.repec.org/data/lsepsuk.html.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.