
Pruning Stochastic Game Trees Using Neural Networks for Reduced Action Space Approximation

Authors

Listed:
  • Tasos Papagiannis

    (Zografou Campus, School of Electrical & Computer Engineering, National Technical University of Athens, 15780 Athens, Greece)

  • Georgios Alexandridis

    (Zografou Campus, School of Electrical & Computer Engineering, National Technical University of Athens, 15780 Athens, Greece)

  • Andreas Stafylopatis

    (Zografou Campus, School of Electrical & Computer Engineering, National Technical University of Athens, 15780 Athens, Greece)

Abstract

Monte Carlo Tree Search has proved to be very efficient in the broad domain of game AI, though it suffers from high dimensionality when the branching factor is large. Several pruning techniques have been proposed to tackle this problem, most of which require explicit domain knowledge. In this study, an approach is proposed that uses neural networks to determine how many actions to prune, based on the number of iterations run and the total number of possible actions. Multi-armed bandit simulations with the UCB1 formula are employed to generate suitable datasets for training the networks, and a specifically designed process is followed to select the best combination of the number of iterations and the number of actions to prune. Two pruning Monte Carlo Tree Search variants, based on different distributions of the actions' expected rewards, are investigated and evaluated in the collectible card game Hearthstone. The proposed technique improves the performance of the Monte Carlo Tree Search algorithm under different computational limits on the number of available tree-search iterations, and it is boosted significantly when combined with state-value prediction models trained via supervised learning.
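For context, at each iteration t the UCB1 formula selects the action i maximizing x̄_i + sqrt(2 ln t / n_i), where x̄_i is the action's empirical mean reward and n_i is the number of times it has been tried. The Python sketch below is a minimal illustration of the kind of multi-armed bandit simulation the abstract describes; the Bernoulli reward model, the arm count, and the safe_prune_count labelling rule are illustrative assumptions, not the paper's actual dataset-generation procedure.

import math
import random

def ucb1_simulate(true_means, iterations, seed=None):
    """Simulate a UCB1 multi-armed bandit over arms with Bernoulli rewards.

    Returns each arm's pull count and empirical mean reward -- the raw
    statistics from which a pruning dataset could be derived.
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k    # n_i: number of times arm i has been pulled
    means = [0.0] * k   # empirical mean reward of arm i

    for t in range(1, iterations + 1):
        if t <= k:
            arm = t - 1  # initialisation: pull every arm once
        else:
            # UCB1 score: empirical mean plus exploration bonus sqrt(2 ln t / n_i)
            arm = max(range(k),
                      key=lambda i: means[i] + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < true_means[arm] else 0.0  # Bernoulli draw
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean update

    return counts, means

def safe_prune_count(counts, true_means):
    """Number of least-pulled arms that could be discarded without losing the
    truly best arm -- one plausible training label for a pruning network."""
    best_arm = max(range(len(true_means)), key=true_means.__getitem__)
    order = sorted(range(len(counts)), key=counts.__getitem__)  # least pulled first
    return order.index(best_arm)

# Hypothetical usage: 30 candidate actions, 500 bandit iterations.
random.seed(1)
arm_means = [random.random() for _ in range(30)]
counts, means = ucb1_simulate(arm_means, iterations=500, seed=0)
print(safe_prune_count(counts, arm_means))

Repeating such simulations over a grid of (iterations, number of actions) pairs would yield the kind of dataset on which a small network could learn how many actions are safe to prune for a given computational budget.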

Suggested Citation

  • Tasos Papagiannis & Georgios Alexandridis & Andreas Stafylopatis, 2022. "Pruning Stochastic Game Trees Using Neural Networks for Reduced Action Space Approximation," Mathematics, MDPI, vol. 10(9), pages 1-16, May.
  • Handle: RePEc:gam:jmathe:v:10:y:2022:i:9:p:1509-:d:807146

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/10/9/1509/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/10/9/1509/
    Download Restriction: no

    References listed on IDEAS

    1. Julian Schrittwieser & Ioannis Antonoglou & Thomas Hubert & Karen Simonyan & Laurent Sifre & Simon Schmitt & Arthur Guez & Edward Lockhart & Demis Hassabis & Thore Graepel & Timothy Lillicrap & David Silver, 2020. "Mastering Atari, Go, chess and shogi by planning with a learned model," Nature, Nature, vol. 588(7839), pages 604-609, December.
    2. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan Hui & Laurent Sifre & George van den Driessche & Thore Graepel & Demis Hassabis, 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
    3. Michael N. Katehakis & Arthur F. Veinott, 1987. "The Multi-Armed Bandit Problem: Decomposition and Computation," Mathematics of Operations Research, INFORMS, vol. 12(2), pages 262-268, May.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
    2. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    3. Bálint Kővári & László Szőke & Tamás Bécsi & Szilárd Aradi & Péter Gáspár, 2021. "Traffic Signal Control via Reinforcement Learning for Reducing Global Vehicle Emission," Sustainability, MDPI, vol. 13(20), pages 1-18, October.
    4. De Moor, Bram J. & Gijsbrechts, Joren & Boute, Robert N., 2022. "Reward shaping to improve the performance of deep reinforcement learning in perishable inventory management," European Journal of Operational Research, Elsevier, vol. 301(2), pages 535-545.
    5. Christopher R. Madan, 2020. "Considerations for Comparing Video Game AI Agents with Humans," Challenges, MDPI, vol. 11(2), pages 1-12, August.
    6. Loris Cannelli & Giuseppe Nuti & Marzio Sala & Oleg Szehr, 2020. "Hedging using reinforcement learning: Contextual $k$-Armed Bandit versus $Q$-learning," Papers 2007.01623, arXiv.org, revised Feb 2022.
    7. Yuchen Zhang & Wei Yang, 2022. "Breakthrough invention and problem complexity: Evidence from a quasi‐experiment," Strategic Management Journal, Wiley Blackwell, vol. 43(12), pages 2510-2544, December.
    8. Lodewijk Kallenberg, 2013. "Derman’s book as inspiration: some results on LP for MDPs," Annals of Operations Research, Springer, vol. 208(1), pages 63-94, September.
    9. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    10. Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
    11. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    12. Esther Frostig & Gideon Weiss, 2016. "Four proofs of Gittins’ multiarmed bandit theorem," Annals of Operations Research, Springer, vol. 241(1), pages 127-165, June.
    13. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    14. Sonin, Isaac M., 2008. "A generalized Gittins index for a Markov chain and its recursive calculation," Statistics & Probability Letters, Elsevier, vol. 78(12), pages 1526-1533, September.
    15. Ostheimer, Julia & Chowdhury, Soumitra & Iqbal, Sarfraz, 2021. "An alliance of humans and machines for machine learning: Hybrid intelligent systems and their design principles," Technology in Society, Elsevier, vol. 66(C).
    16. Zhou, Yuhao & Wang, Yanwei, 2022. "An integrated framework based on deep learning algorithm for optimizing thermochemical production in heavy oil reservoirs," Energy, Elsevier, vol. 253(C).
    17. José Niño-Mora, 2020. "A Verification Theorem for Threshold-Indexability of Real-State Discounted Restless Bandits," Mathematics of Operations Research, INFORMS, vol. 45(2), pages 465-496, May.
    18. Mandal, Ankit & Tiwari, Yash & Panigrahi, Prasanta K. & Pal, Mayukha, 2022. "Physics aware analytics for accurate state prediction of dynamical systems," Chaos, Solitons & Fractals, Elsevier, vol. 164(C).
    19. Adnan Jafar & Alessandra Kobayati & Michael A. Tsoukas & Ahmad Haidar, 2024. "Personalized insulin dosing using reinforcement learning for high-fat meals and aerobic exercises in type 1 diabetes: a proof-of-concept trial," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    20. W. Jason Choi & Amin Sayedi, 2019. "Learning in Online Advertising," Marketing Science, INFORMS, vol. 38(4), pages 584-608, July.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:10:y:2022:i:9:p:1509-:d:807146. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.