
Well Construction Action Planning and Automation through Finite-Horizon Sequential Decision-Making

Author

Listed:
  • Gurtej Singh Saini

    (Cockrell School of Engineering, The University of Texas at Austin, Austin, TX 78712, USA)

  • Oney Erge

    (Cockrell School of Engineering, The University of Texas at Austin, Austin, TX 78712, USA)

  • Pradeepkumar Ashok

    (Cockrell School of Engineering, The University of Texas at Austin, Austin, TX 78712, USA)

  • Eric van Oort

    (Cockrell School of Engineering, The University of Texas at Austin, Austin, TX 78712, USA)

Abstract

Well construction operations require continuous complex decision-making and multi-step action planning. Action selection at every step demands careful evaluation of a vast action space, guided by long-term objectives and desired outcomes. Current human-centric decision-making introduces a degree of bias, which can result in reactive rather than proactive decisions. This can lead to anything from minor operational inefficiencies to catastrophic health and safety incidents. This paper details the steps in structuring unbiased, purpose-built sequential decision-making systems. Setting up such a system entails representing the operation as a Markov decision process (MDP). This requires explicitly defining the states and action values, defining goal states, building a digital twin to model the process, and appropriately shaping reward functions to measure feedback. The digital twin, in conjunction with the reward function, is used to simulate and quantify the different action sequences. A finite-horizon sequential decision-making system, with discrete state and action spaces, was set up to advise on hole cleaning during well construction. The state was quantified by the cuttings bed height and the equivalent circulation density values, and the action set was defined using combinations of controllable drilling parameters (including mud density and rheology, drillstring rotation speed, etc.). A non-sparse, normalized reward structure was formulated as a function of the state and action values. Hydraulics, cuttings transport, and rig state detection models were integrated to build the hole-cleaning digital twin. This system was then used for performance tracking and scenario simulations (with each scenario defined as a finite-horizon action sequence) on real-world oil wells. The different scenarios were compared by monitoring state–action transitions and the evolution of the reward over the action sequence. This paper presents a novel method for setting up well construction operations as long-term finite-horizon sequential decision-making systems, and defines a way to quantify and compare different scenarios. The proper construction of such systems is a crucial step towards automating intelligent decision-making.
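
To make the abstract's formulation concrete, the sketch below shows how such a finite-horizon hole-cleaning MDP might be laid out in Python: a discrete state (binned cuttings bed height and equivalent circulation density), a discrete action grid over controllable drilling parameters, a non-sparse normalized reward, and a rollout that scores a candidate action sequence against a digital twin. All names, parameter grids, and reward weights here are illustrative assumptions rather than the authors' implementation; the twin's step interface stands in for the integrated hydraulics, cuttings-transport, and rig-state detection models described in the paper.

    # Illustrative sketch only: names, grids, and weights are assumptions,
    # not the implementation from Saini et al. (2022).
    from dataclasses import dataclass
    from itertools import product

    @dataclass(frozen=True)
    class State:
        """Discrete state: binned cuttings bed height and equivalent circulation density (ECD)."""
        bed_height_bin: int
        ecd_bin: int

    @dataclass(frozen=True)
    class Action:
        """One combination of controllable drilling parameters (assumed grid)."""
        mud_density: float  # mud weight, e.g. ppg
        rpm: float          # drillstring rotation speed
        flow_rate: float    # pump output, e.g. gpm

    # Discrete action space: Cartesian product over assumed parameter levels.
    ACTIONS = [Action(d, r, q)
               for d, r, q in product((9.5, 10.0, 10.5), (80, 120, 160), (500, 650, 800))]

    def reward(state: State, action: Action, max_bin: int = 10) -> float:
        """Non-sparse reward normalized to [0, 1]: favor a clean hole (low cuttings
        bed) and low ECD. The paper's reward also depends on the action values;
        those shaping terms are omitted in this placeholder."""
        bed_term = 1.0 - state.bed_height_bin / max_bin
        ecd_term = 1.0 - state.ecd_bin / max_bin
        return 0.5 * bed_term + 0.5 * ecd_term

    def score_scenario(twin, initial: State, plan: list[Action]) -> float:
        """Roll a finite-horizon action sequence through the digital twin and
        accumulate reward. `twin.step(state, action)` is an assumed interface
        wrapping the hydraulics and cuttings-transport models."""
        total, state = 0.0, initial
        for action in plan:
            state = twin.step(state, action)
            total += reward(state, action)
        return total

    # Candidate scenarios (finite-horizon action sequences) can then be ranked:
    #   best_plan = max(candidate_plans, key=lambda p: score_scenario(twin, s0, p))

Ranking candidate plans by accumulated reward mirrors the paper's approach of comparing scenarios through their state–action transitions and the evolution of the reward over the action sequence.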

Suggested Citation

  • Gurtej Singh Saini & Oney Erge & Pradeepkumar Ashok & Eric van Oort, 2022. "Well Construction Action Planning and Automation through Finite-Horizon Sequential Decision-Making," Energies, MDPI, vol. 15(16), pages 1-28, August.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:16:p:5776-:d:883799

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/16/5776/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/16/5776/
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as, and are cited by the same works as, this one.
    1. Yuchen Zhang & Wei Yang, 2022. "Breakthrough invention and problem complexity: Evidence from a quasi‐experiment," Strategic Management Journal, Wiley Blackwell, vol. 43(12), pages 2510-2544, December.
    2. Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
    3. Zhang, Xi & Wang, Qin & Bi, Xiaowen & Li, Donghong & Liu, Dong & Yu, Yuanjin & Tse, Chi Kong, 2024. "Mitigating cascading failure in power grids with deep reinforcement learning-based remedial actions," Reliability Engineering and System Safety, Elsevier, vol. 250(C).
    4. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    5. Ostheimer, Julia & Chowdhury, Soumitra & Iqbal, Sarfraz, 2021. "An alliance of humans and machines for machine learning: Hybrid intelligent systems and their design principles," Technology in Society, Elsevier, vol. 66(C).
    6. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
    7. Zhou, Yuhao & Wang, Yanwei, 2022. "An integrated framework based on deep learning algorithm for optimizing thermochemical production in heavy oil reservoirs," Energy, Elsevier, vol. 253(C).
    8. Mandal, Ankit & Tiwari, Yash & Panigrahi, Prasanta K. & Pal, Mayukha, 2022. "Physics aware analytics for accurate state prediction of dynamical systems," Chaos, Solitons & Fractals, Elsevier, vol. 164(C).
    9. Adnan Jafar & Alessandra Kobayati & Michael A. Tsoukas & Ahmad Haidar, 2024. "Personalized insulin dosing using reinforcement learning for high-fat meals and aerobic exercises in type 1 diabetes: a proof-of-concept trial," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    10. Bossert, Leonie & Hagendorff, Thilo, 2021. "Animals and AI. The role of animals in AI research and application – An overview and ethical evaluation," Technology in Society, Elsevier, vol. 67(C).
    11. Yang, Zhengzhi & Zheng, Lei & Perc, Matjaž & Li, Yumeng, 2024. "Interaction state Q-learning promotes cooperation in the spatial prisoner's dilemma game," Applied Mathematics and Computation, Elsevier, vol. 463(C).
    12. Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
    13. Jun Li & Wei Zhu & Jun Wang & Wenfei Li & Sheng Gong & Jian Zhang & Wei Wang, 2018. "RNA3DCNN: Local and global quality assessments of RNA 3D structures using 3D deep convolutional neural networks," PLOS Computational Biology, Public Library of Science, vol. 14(11), pages 1-18, November.
    14. Keller, Alexander & Dahm, Ken, 2019. "Integral equations and machine learning," Mathematics and Computers in Simulation (MATCOM), Elsevier, vol. 161(C), pages 2-12.
    15. Canhoto, Ana Isabel & Clear, Fintan, 2020. "Artificial intelligence and machine learning as business tools: A framework for diagnosing value destruction potential," Business Horizons, Elsevier, vol. 63(2), pages 183-193.
    16. Zhang, Guangming & Zhang, Chao & Wang, Wei & Cao, Huan & Chen, Zhenyu & Niu, Yuguang, 2023. "Offline reinforcement learning control for electricity and heat coordination in a supercritical CHP unit," Energy, Elsevier, vol. 266(C).
    17. Zhaobin Mo & Xuan Di & Rongye Shi, 2023. "Robust Data Sampling in Machine Learning: A Game-Theoretic Framework for Training and Validation Data Selection," Games, MDPI, vol. 14(1), pages 1-13, January.
    18. Ma, Tao & Yang, Xuzhi & Szabo, Zoltan, 2024. "To switch or not to switch? Balanced policy switching in offline reinforcement learning," LSE Research Online Documents on Economics 124144, London School of Economics and Political Science, LSE Library.
    19. Haoran Wang & Shi Yu, 2021. "Robo-Advising: Enhancing Investment with Inverse Optimization and Deep Reinforcement Learning," Papers 2105.09264, arXiv.org.
    20. Huy Chau & Duy Nguyen & Thai Nguyen, 2024. "Continuous-time optimal investment with portfolio constraints: a reinforcement learning approach," Papers 2412.10692, arXiv.org.
