Authors listed:
- Gurtej Singh Saini
(Cockrell School of Engineering, The University of Texas at Austin, Austin, TX 78712, USA)
- Parham Pournazari
(Cockrell School of Engineering, The University of Texas at Austin, Austin, TX 78712, USA)
- Pradeepkumar Ashok
(Cockrell School of Engineering, The University of Texas at Austin, Austin, TX 78712, USA)
- Eric van Oort
(Cockrell School of Engineering, The University of Texas at Austin, Austin, TX 78712, USA)
Abstract
Reactive and biased human decision-making during well construction operations can result in problems ranging from minor inefficiencies to events that can have far-reaching negative consequences for safety, environmental compliance and cost. A system that can automatically generate an optimal action sequence from any given state to meet an operation’s objectives is therefore highly desirable. Moreover, an intelligent agent capable of self-learning can offset the computation and memory costs associated with evaluating the action space, which is often vast. This paper details the development of such an action planning system using reinforcement learning techniques. The concept of self-play used by game AI engines (such as AlphaGo and AlphaZero from Google’s DeepMind group) is adapted here for well construction tasks, wherein a drilling agent learns and improves from scenario simulations performed using digital twins. The first step in building such a system is to formulate the given well construction task as a Markov Decision Process (MDP). Planning is then accomplished using Monte Carlo tree search (MCTS), a simulation-based search technique. Simulations, based on the MCTS’s tree and rollout policies, are performed in an episodic manner using a digital twin of the underlying task(s). The results of these episodic simulations are then used for policy improvement. Domain-specific heuristics are included for further policy enhancement, considering factors such as trade-offs between safety and performance, the distance to the goal state, and the feasibility of taking specific actions from specific states. We demonstrate our proposed action planning system for hole cleaning, a task which to date has proven difficult to optimize and automate. A comparison of the action sequences generated by our system with real field data shows that the system would have resulted in significantly improved hole cleaning performance compared to the actions taken in the field, as quantified by the final state reached and the long-term reward. Such intelligent sequential decision-making systems, which use heuristics and exploration–exploitation trade-offs for optimum results, are novel applications in well construction and may pave the way for the automation of tasks that until now have been exclusively controlled by humans.
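For readers unfamiliar with the search procedure named in the abstract, the sketch below illustrates the generic MCTS loop (selection with a UCT tree policy, expansion, a random rollout policy run on a simulated environment, and backpropagation of the episode return). This is not the authors' implementation: the DigitalTwin interface (actions, step, reward, is_terminal) and all parameter values are hypothetical placeholders standing in for the hole-cleaning digital twin and domain heuristics described in the paper.

```python
# Minimal, illustrative MCTS sketch (Python). The `twin` object is a
# hypothetical stand-in for a digital twin exposing:
#   actions(state) -> list of actions, step(state, a) -> next state,
#   reward(state) -> float, is_terminal(state) -> bool.
import math
import random


class Node:
    def __init__(self, state, parent=None, action=None):
        self.state = state          # MDP state reached at this node
        self.parent = parent
        self.action = action        # action that led to this state
        self.children = []
        self.untried = None         # actions not yet expanded from here
        self.visits = 0
        self.value = 0.0            # sum of episode returns seen through this node


def uct_select(node, c=1.4):
    # Tree policy: choose the child maximizing the UCT score
    # (exploitation term + exploration bonus).
    return max(
        node.children,
        key=lambda ch: ch.value / ch.visits
        + c * math.sqrt(math.log(node.visits) / ch.visits),
    )


def mcts_plan(twin, root_state, n_episodes=500, horizon=50):
    root = Node(root_state)
    root.untried = list(twin.actions(root_state))
    for _ in range(n_episodes):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one previously untried child.
        if node.untried:
            a = node.untried.pop()
            next_state = twin.step(node.state, a)
            child = Node(next_state, parent=node, action=a)
            child.untried = list(twin.actions(next_state))
            node.children.append(child)
            node = child
        # 3. Rollout: random default policy simulated on the twin.
        state, episode_return = node.state, 0.0
        for _ in range(horizon):
            if twin.is_terminal(state):
                break
            a = random.choice(twin.actions(state))
            state = twin.step(state, a)
            episode_return += twin.reward(state)
        # 4. Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.value += episode_return
            node = node.parent
    # Recommend the most-visited root action; re-planning from each new
    # state yields the action sequence.
    return max(root.children, key=lambda ch: ch.visits).action
```

In the paper, the uniform random rollout shown here would be replaced by domain-specific heuristics (safety/performance trade-offs, distance to goal, action feasibility); this sketch only conveys the structure of the tree and rollout policies.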
Suggested Citation
Gurtej Singh Saini & Parham Pournazari & Pradeepkumar Ashok & Eric van Oort, 2022.
"Intelligent Action Planning for Well Construction Operations Demonstrated for Hole Cleaning Optimization and Automation,"
Energies, MDPI, vol. 15(15), pages 1-33, August.
Handle:
RePEc:gam:jeners:v:15:y:2022:i:15:p:5749-:d:883014
Citations
Cited by:
- Mohammed Al-Rubaii & Mohammed Al-Shargabi & Dhafer Al-Shehri, 2023.
"A Novel Model for the Real-Time Evaluation of Hole-Cleaning Conditions with Case Studies,"
Energies, MDPI, vol. 16(13), pages 1-27, June.
- Moshood, Taofeeq D. & Rotimi, James OB. & Shahzad, Wajiha & Bamgbade, J.A., 2024.
"Infrastructure digital twin technology: A new paradigm for future construction industry,"
Technology in Society, Elsevier, vol. 77(C).