IDEAS home Printed from https://ideas.repec.org/a/nat/nature/v588y2020i7839d10.1038_s41586-020-03051-4.html

Mastering Atari, Go, chess and shogi by planning with a learned model

Authors

Listed:
  • Julian Schrittwieser (DeepMind)
  • Ioannis Antonoglou (DeepMind; University College London)
  • Thomas Hubert (DeepMind)
  • Karen Simonyan (DeepMind)
  • Laurent Sifre (DeepMind)
  • Simon Schmitt (DeepMind)
  • Arthur Guez (DeepMind)
  • Edward Lockhart (DeepMind)
  • Demis Hassabis (DeepMind)
  • Thore Graepel (DeepMind; University College London)
  • Timothy Lillicrap (DeepMind)
  • David Silver (DeepMind; University College London)

Abstract

Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess[1] and Go[2], where a perfect simulator is available. However, in real-world problems, the dynamics governing the environment are often complex and unknown. Here we present the MuZero algorithm, which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. The MuZero algorithm learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the action-selection policy, the value function and the reward. When evaluated on 57 different Atari games[3]—the canonical video game environment for testing artificial intelligence techniques, in which model-based planning approaches have historically struggled[4]—the MuZero algorithm achieved state-of-the-art performance. When evaluated on Go, chess and shogi—canonical environments for high-performance planning—the MuZero algorithm matched, without any knowledge of the game dynamics, the superhuman performance of the AlphaZero algorithm[5], which was supplied with the rules of the game.
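The abstract describes a model that, unrolled step by step, predicts the policy, value and reward used for planning. The paper casts this as three learned functions: a representation function, a dynamics function and a prediction function. The sketch below is a minimal illustrative toy under that framing, not the paper's neural-network implementation; the `ToyMuZeroModel` class and the linear arithmetic inside each function are placeholder assumptions used only to show how a rollout accumulates predicted rewards plus a terminal value estimate.

```python
class ToyMuZeroModel:
    """Toy stand-in for MuZero's three learned functions (hypothetical)."""

    def representation(self, observation):
        # h: encode a raw observation into an abstract hidden state.
        # (Here: a trivial average; in the paper, a neural network.)
        return sum(observation) / len(observation)

    def dynamics(self, state, action):
        # g: predict the next hidden state and the immediate reward.
        next_state = 0.9 * state + 0.1 * action
        reward = 0.5 * state
        return next_state, reward

    def prediction(self, state):
        # f: predict a policy and a value for a hidden state.
        policy = {0: 0.5, 1: 0.5}
        value = state
        return policy, value


def rollout(model, observation, actions):
    """Unroll the learned model along an action sequence, summing
    predicted rewards and adding the final value estimate — the
    quantity a tree search would use to score this action sequence."""
    state = model.representation(observation)
    total_return = 0.0
    for action in actions:
        state, reward = model.dynamics(state, action)
        total_return += reward
    _, value = model.prediction(state)
    return total_return + value
```

In MuZero proper, such rollouts are performed inside a Monte Carlo tree search, and the three functions are trained jointly so that the hidden state needs no resemblance to the true environment state — only its predictions of policy, value and reward must be accurate.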

Suggested Citation

  • Julian Schrittwieser & Ioannis Antonoglou & Thomas Hubert & Karen Simonyan & Laurent Sifre & Simon Schmitt & Arthur Guez & Edward Lockhart & Demis Hassabis & Thore Graepel & Timothy Lillicrap & David Silver, 2020. "Mastering Atari, Go, chess and shogi by planning with a learned model," Nature, Nature, vol. 588(7839), pages 604-609, December.
  • Handle: RePEc:nat:nature:v:588:y:2020:i:7839:d:10.1038_s41586-020-03051-4
    DOI: 10.1038/s41586-020-03051-4

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41586-020-03051-4
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41586-020-03051-4?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.
    ---><---

    As the access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. De Moor, Bram J. & Gijsbrechts, Joren & Boute, Robert N., 2022. "Reward shaping to improve the performance of deep reinforcement learning in perishable inventory management," European Journal of Operational Research, Elsevier, vol. 301(2), pages 535-545.
    2. Guangyuan Li & Baobao Song & Harinder Singh & V. B. Surya Prasath & H. Leighton Grimes & Nathan Salomonis, 2023. "Decision level integration of unimodal and multimodal single cell data with scTriangulate," Nature Communications, Nature, vol. 14(1), pages 1-16, December.
    3. Jin Li & Ye Luo & Xiaowei Zhang, 2021. "Causal Reinforcement Learning: An Instrumental Variable Approach," Papers 2103.04021, arXiv.org, revised Sep 2022.
    4. Jinke Yao & Jiachen Xu & Ning Zhang & Yajuan Guan, 2023. "Model-Based Reinforcement Learning Method for Microgrid Optimization Scheduling," Sustainability, MDPI, vol. 15(12), pages 1-18, June.
    5. Weiwu Ren & Jialin Zhu & Hui Qi & Ligang Cong & Xiaoqiang Di, 2022. "Dynamic optimization of intersatellite link assignment based on reinforcement learning," International Journal of Distributed Sensor Networks, , vol. 18(2), pages 15501477211, February.
    6. Christopher R. Madan, 2020. "Considerations for Comparing Video Game AI Agents with Humans," Challenges, MDPI, vol. 11(2), pages 1-12, August.
    7. Tasos Papagiannis & Georgios Alexandridis & Andreas Stafylopatis, 2022. "Pruning Stochastic Game Trees Using Neural Networks for Reduced Action Space Approximation," Mathematics, MDPI, vol. 10(9), pages 1-16, May.
    8. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    9. Jorge Ramírez-Ruiz & Dmytro Grytskyy & Chiara Mastrogiuseppe & Yamen Habib & Rubén Moreno-Bote, 2024. "Complex behavior from intrinsic motivation to occupy future action-state path space," Nature Communications, Nature, vol. 15(1), pages 1-15, December.
    10. He, Hongwen & Su, Qicong & Huang, Ruchen & Niu, Zegong, 2024. "Enabling intelligent transferable energy management of series hybrid electric tracked vehicle across motion dimensions via soft actor-critic algorithm," Energy, Elsevier, vol. 294(C).
    11. Spyridon Samothrakis, 2021. "Artificial Intelligence inspired methods for the allocation of common goods and services," PLOS ONE, Public Library of Science, vol. 16(9), pages 1-16, September.
    12. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    13. Syed Ghazi Sarwat & Timoleon Moraitis & C. David Wright & Harish Bhaskaran, 2022. "Chalcogenide optomemristors for multi-factor neuromorphic computation," Nature Communications, Nature, vol. 13(1), pages 1-9, December.
    14. Marcel Rolf Pfeifer, 2021. "Development of a Smart Manufacturing Execution System Architecture for SMEs: A Czech Case Study," Sustainability, MDPI, vol. 13(18), pages 1-23, September.
    15. Alexandros A. Lavdas & Nikos A. Salingaros, 2021. "Can Suboptimal Visual Environments Negatively Affect Children’s Cognitive Development?," Challenges, MDPI, vol. 12(2), pages 1-12, November.
    16. Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klöckl, 2024. "Computational Performance of Deep Reinforcement Learning to Find Nash Equilibria," Computational Economics, Springer;Society for Computational Economics, vol. 63(2), pages 529-576, February.
    17. Rishi Rajalingham & Aída Piccato & Mehrdad Jazayeri, 2022. "Recurrent neural networks with explicit representation of dynamic latent variables can mimic behavioral patterns in a physical inference task," Nature Communications, Nature, vol. 13(1), pages 1-15, December.
    18. Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klockl, 2021. "Computational Performance of Deep Reinforcement Learning to find Nash Equilibria," Papers 2104.12895, arXiv.org.
    19. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
    20. Bálint Kővári & Lászlo Szőke & Tamás Bécsi & Szilárd Aradi & Péter Gáspár, 2021. "Traffic Signal Control via Reinforcement Learning for Reducing Global Vehicle Emission," Sustainability, MDPI, vol. 13(20), pages 1-18, October.
    21. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.