
Personalized insulin dosing using reinforcement learning for high-fat meals and aerobic exercises in type 1 diabetes: a proof-of-concept trial

Author

Listed:
  • Adnan Jafar

    (McGill University
    The Research Institute of McGill University Health Centre)

  • Alessandra Kobayati

    (The Research Institute of McGill University Health Centre)

  • Michael A. Tsoukas

    (The Research Institute of McGill University Health Centre)

  • Ahmad Haidar

    (McGill University
    The Research Institute of McGill University Health Centre)

Abstract

In type 1 diabetes, high-fat meals require more insulin to prevent hyperglycemia, while meals followed by aerobic exercises require less insulin to prevent hypoglycemia, but the adjustments needed vary between individuals. We propose a decision support system with reinforcement learning to personalize insulin doses for high-fat meals and postprandial aerobic exercises. We test this system in a single-arm, 16-week study in 15 adults on multiple daily injections therapy (NCT05041621). The primary objective of this study is to assess the feasibility of the novel learning algorithm; the study also examines glucose outcomes and patient-reported outcomes. The postprandial incremental area under the glucose curve is improved from the baseline to the evaluation period for high-fat meals (378 ± 222 vs 38 ± 223 mmol/L/min, p = 0.03) and for meals followed by exercises (−395 ± 192 vs 132 ± 181 mmol/L/min, p = 0.007). The postprandial time spent below 3.9 mmol/L is reduced after high-fat meals (5.3 ± 1.6 vs 1.8 ± 1.5%, p = 0.003) and after meals followed by exercises (5.3 ± 1.2 vs 1.4 ± 1.1%, p = 0.003). Our study shows the feasibility of automatically personalizing insulin doses for high-fat meals and postprandial exercises. Randomized controlled trials are warranted.
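
To make the dose-personalization idea concrete, here is a minimal, hypothetical Python sketch of a decision-support loop: a per-patient multiplicative bolus factor for a meal context is nudged after each meal using the postprandial incremental area under the glucose curve and the time spent below 3.9 mmol/L, the two outcomes reported above. All names, thresholds, the update rule, and the toy glucose simulation are assumptions for illustration only; they do not reproduce the trial's actual reinforcement learning algorithm.

```python
import numpy as np

def incremental_auc(glucose, premeal_glucose, dt_min=5.0):
    """Postprandial incremental area under the glucose curve (mmol/L x min),
    computed with the trapezoidal rule relative to the pre-meal value.
    The 5-minute sampling interval is an assumption, not taken from the paper."""
    excursion = np.asarray(glucose, dtype=float) - premeal_glucose
    return float(np.sum((excursion[:-1] + excursion[1:]) * 0.5 * dt_min))

def time_below_range(glucose, threshold=3.9):
    """Percentage of postprandial readings below 3.9 mmol/L (hypoglycemia)."""
    return 100.0 * float(np.mean(np.asarray(glucose, dtype=float) < threshold))

class DoseAdjustmentLearner:
    """Hypothetical per-patient learner: one multiplicative bolus factor per meal
    context ('high_fat' or 'meal_plus_exercise'), nudged after each meal toward
    the factor that balances post-meal hyper- and hypoglycemia."""

    def __init__(self, step=0.05, factor_bounds=(0.5, 2.0)):
        self.factors = {"high_fat": 1.0, "meal_plus_exercise": 1.0}
        self.step = step
        self.lo, self.hi = factor_bounds

    def recommend(self, context, usual_bolus_units):
        # A recommendation the user may accept or override (decision support).
        return usual_bolus_units * self.factors[context]

    def update(self, context, iauc, pct_below_range):
        # Crude illustrative update: meaningful hypoglycemia means the dose was
        # too large; a large positive excursion means it was too small.
        if pct_below_range > 1.0:
            self.factors[context] -= self.step
        elif iauc > 200.0:  # mmol/L x min, arbitrary illustrative threshold
            self.factors[context] += self.step
        self.factors[context] = min(self.hi, max(self.lo, self.factors[context]))

if __name__ == "__main__":
    learner = DoseAdjustmentLearner()
    rng = np.random.default_rng(0)
    for week in range(4):  # a few simulated high-fat meals
        bolus = learner.recommend("high_fat", usual_bolus_units=6.0)
        # Toy postprandial glucose trace: larger boluses blunt the rise.
        trace = 7.0 + rng.normal(0.0, 0.3, 48) + np.linspace(0.0, 4.0, 48) * (6.0 / bolus - 0.4)
        learner.update("high_fat",
                       incremental_auc(trace, premeal_glucose=7.0),
                       time_below_range(trace))
        print(f"week {week}: bolus {bolus:.1f} U, factor {learner.factors['high_fat']:.2f}")
```

A bandit-style update of this kind is only one of many ways to personalize dose adjustment factors; the algorithm actually evaluated in the trial, and its parameters, are described in the published article.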

Suggested Citation

  • Adnan Jafar & Alessandra Kobayati & Michael A. Tsoukas & Ahmad Haidar, 2024. "Personalized insulin dosing using reinforcement learning for high-fat meals and aerobic exercises in type 1 diabetes: a proof-of-concept trial," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
  • Handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-50764-5
    DOI: 10.1038/s41467-024-50764-5

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-024-50764-5
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-024-50764-5?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    References listed on IDEAS

    1. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan , 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
    2. Zhang, Xi & Wang, Qin & Bi, Xiaowen & Li, Donghong & Liu, Dong & Yu, Yuanjin & Tse, Chi Kong, 2024. "Mitigating cascading failure in power grids with deep reinforcement learning-based remedial actions," Reliability Engineering and System Safety, Elsevier, vol. 250(C).
    3. Yang, Zhengzhi & Zheng, Lei & Perc, Matjaž & Li, Yumeng, 2024. "Interaction state Q-learning promotes cooperation in the spatial prisoner's dilemma game," Applied Mathematics and Computation, Elsevier, vol. 463(C).
    4. Artur Kwasek & Maria Kocot & Izabela Gontarek & Igor Protasowicki & Bartosz Blaszczak, 2024. "Negative Faces of Artificial Intelligence in the Conditions of the Knowledge-Based Economy," European Research Studies Journal, European Research Studies Journal, vol. 0(2), pages 465-477.
    5. Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
    6. Keller, Alexander & Dahm, Ken, 2019. "Integral equations and machine learning," Mathematics and Computers in Simulation (MATCOM), Elsevier, vol. 161(C), pages 2-12.
    7. Canhoto, Ana Isabel & Clear, Fintan, 2020. "Artificial intelligence and machine learning as business tools: A framework for diagnosing value destruction potential," Business Horizons, Elsevier, vol. 63(2), pages 183-193.
    8. Zhaobin Mo & Xuan Di & Rongye Shi, 2023. "Robust Data Sampling in Machine Learning: A Game-Theoretic Framework for Training and Validation Data Selection," Games, MDPI, vol. 14(1), pages 1-13, January.
    9. Yang, Kaiyuan & Huang, Houjing & Vandans, Olafs & Murali, Adithya & Tian, Fujia & Yap, Roland H.C. & Dai, Liang, 2023. "Applying deep reinforcement learning to the HP model for protein structure prediction," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 609(C).
    10. Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
    11. Xueqing Yan & Yongming Li, 2023. "A Novel Discrete Differential Evolution with Varying Variables for the Deficiency Number of Mahjong Hand," Mathematics, MDPI, vol. 11(9), pages 1-21, May.
    12. José A. Torres-León & Marco A. Moreno-Armendáriz & Hiram Calvo, 2024. "Representing the Information of Multiplayer Online Battle Arena (MOBA) Video Games Using Convolutional Accordion Auto-Encoder (A²E) Enhanced by Attention Mechanisms," Mathematics, MDPI, vol. 12(17), pages 1-19, September.
    13. Jianjun Chen & Weihao Hu & Di Cao & Bin Zhang & Qi Huang & Zhe Chen & Frede Blaabjerg, 2019. "An Imbalance Fault Detection Algorithm for Variable-Speed Wind Turbines: A Deep Learning Approach," Energies, MDPI, vol. 12(14), pages 1-15, July.
    14. Andrew G. Haldane & Arthur E. Turrell, 2019. "Drawing on different disciplines: macroeconomic agent-based models," Journal of Evolutionary Economics, Springer, vol. 29(1), pages 39-66, March.
    15. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    16. Lu Wang & Wenqing Ai & Tianhu Deng & Zuo‐Jun M. Shen & Changjing Hong, 2020. "Optimal production ramp‐up in the smartphone manufacturing industry," Naval Research Logistics (NRL), John Wiley & Sons, vol. 67(8), pages 685-704, December.
    17. Karwowski, Jan & Mańdziuk, Jacek, 2019. "A Monte Carlo Tree Search approach to finding efficient patrolling schemes on graphs," European Journal of Operational Research, Elsevier, vol. 277(1), pages 255-268.
    18. Young Joon Park & Yoon Sang Cho & Seoung Bum Kim, 2019. "Multi-agent reinforcement learning with approximate model learning for competitive games," PLOS ONE, Public Library of Science, vol. 14(9), pages 1-20, September.
    19. Hai Wang & Shengnan Chen, 2023. "Insights into the Application of Machine Learning in Reservoir Engineering: Current Developments and Future Trends," Energies, MDPI, vol. 16(3), pages 1-11, January.
    20. Jinghai He & Cheng Hua & Chunyang Zhou & Zeyu Zheng, 2025. "Reinforcement-Learning Portfolio Allocation with Dynamic Embedding of Market Information," Papers 2501.17992, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-50764-5. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.