
Magnetic control of tokamak plasmas through deep reinforcement learning

Authors

Listed:
  • Jonas Degrave (DeepMind)
  • Federico Felici (Swiss Plasma Center - EPFL)
  • Jonas Buchli (DeepMind)
  • Michael Neunert (DeepMind)
  • Brendan Tracey (DeepMind)
  • Francesco Carpanese (DeepMind; Swiss Plasma Center - EPFL)
  • Timo Ewalds (DeepMind)
  • Roland Hafner (DeepMind)
  • Abbas Abdolmaleki (DeepMind)
  • Diego de las Casas (DeepMind)
  • Craig Donner (DeepMind)
  • Leslie Fritz (DeepMind)
  • Cristian Galperti (Swiss Plasma Center - EPFL)
  • Andrea Huber (DeepMind)
  • James Keeling (DeepMind)
  • Maria Tsimpoukelli (DeepMind)
  • Jackie Kay (DeepMind)
  • Antoine Merle (Swiss Plasma Center - EPFL)
  • Jean-Marc Moret (Swiss Plasma Center - EPFL)
  • Seb Noury (DeepMind)
  • Federico Pesamosca (Swiss Plasma Center - EPFL)
  • David Pfau (DeepMind)
  • Olivier Sauter (Swiss Plasma Center - EPFL)
  • Cristian Sommariva (Swiss Plasma Center - EPFL)
  • Stefano Coda (Swiss Plasma Center - EPFL)
  • Basil Duval (Swiss Plasma Center - EPFL)
  • Ambrogio Fasoli (Swiss Plasma Center - EPFL)
  • Pushmeet Kohli (DeepMind)
  • Koray Kavukcuoglu (DeepMind)
  • Demis Hassabis (DeepMind)
  • Martin Riedmiller (DeepMind)
Abstract

Nuclear fusion using magnetic confinement, in particular in the tokamak configuration, is a promising path towards sustainable energy. A core challenge is to shape and maintain a high-temperature plasma within the tokamak vessel. This requires high-dimensional, high-frequency, closed-loop control using magnetic actuator coils, further complicated by the diverse requirements across a wide range of plasma configurations. In this work, we introduce a previously undescribed architecture for tokamak magnetic controller design that autonomously learns to command the full set of control coils. This architecture meets control objectives specified at a high level while satisfying physical and operational constraints. The approach offers unprecedented flexibility and generality in problem specification and yields a notable reduction in the design effort required to produce new plasma configurations. We successfully produce and control a diverse set of plasma configurations on the Tokamak à Configuration Variable (TCV) [1,2], including elongated, conventional shapes, as well as advanced configurations, such as negative triangularity and ‘snowflake’ configurations. Our approach achieves accurate tracking of the location, current and shape of these configurations. We also demonstrate sustained ‘droplets’ on TCV, in which two separate plasmas are maintained simultaneously within the vessel. This represents a notable advance for tokamak feedback control, showing the potential of reinforcement learning to accelerate research in the fusion domain, and is one of the most challenging real-world systems to which reinforcement learning has been applied.
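
The abstract describes a controller that learns to map magnetic measurements to voltage commands for all control coils, trained against high-level tracking objectives and constraint penalties. The paper's actual architecture and reward design are not reproduced on this page; the sketch below is only a generic illustration of such an observation-to-action control loop. Every name and number in it (the PlasmaSimulator class, the measurement and coil counts, the policy shape, the reward weights) is a hypothetical placeholder, not taken from the paper.

```python
# Illustrative sketch only: a generic learned-control loop for coil commands.
# PlasmaSimulator, the dimensions and the reward weights are hypothetical.
import numpy as np

class PlasmaSimulator:
    """Stand-in for a plasma/vessel simulator with a hypothetical API."""
    def __init__(self, n_measurements=92, n_coils=19):
        self.n_measurements = n_measurements
        self.n_coils = n_coils

    def reset(self):
        # Placeholder initial magnetic and current measurements.
        return np.zeros(self.n_measurements)

    def step(self, coil_voltages):
        # Placeholder dynamics: a real simulator would evolve the plasma state.
        return np.random.randn(self.n_measurements)

def policy(obs, weights):
    """Tiny feed-forward policy: observations -> bounded coil voltage commands."""
    hidden = np.tanh(weights["w1"] @ obs + weights["b1"])
    return np.tanh(weights["w2"] @ hidden + weights["b2"])

def reward(obs, targets, action):
    """High-level objective: track references, lightly penalize large actuation."""
    tracking_error = np.linalg.norm(obs[: len(targets)] - targets)
    actuation_cost = 0.01 * np.linalg.norm(action)
    return -(tracking_error + actuation_cost)

# One simulated control episode at a fixed control rate.
sim = PlasmaSimulator()
rng = np.random.default_rng(0)
weights = {
    "w1": 0.1 * rng.standard_normal((64, sim.n_measurements)),
    "b1": np.zeros(64),
    "w2": 0.1 * rng.standard_normal((sim.n_coils, 64)),
    "b2": np.zeros(sim.n_coils),
}
targets = np.zeros(8)  # e.g. desired shape/current references
obs = sim.reset()
total_return = 0.0
for t in range(100):
    action = policy(obs, weights)
    obs = sim.step(action)
    total_return += reward(obs, targets, action)
print(f"episode return: {total_return:.2f}")
```

In the published work the controller is learned with deep reinforcement learning and then deployed on TCV hardware; the sketch above only shows the structure of the observation, action and reward loop that such a controller closes, with random placeholder dynamics in place of a plasma simulator.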

Suggested Citation

  • Jonas Degrave & Federico Felici & Jonas Buchli & Michael Neunert & Brendan Tracey & Francesco Carpanese & Timo Ewalds & Roland Hafner & Abbas Abdolmaleki & Diego de las Casas & Craig Donner & Leslie F, 2022. "Magnetic control of tokamak plasmas through deep reinforcement learning," Nature, Nature, vol. 602(7897), pages 414-419, February.
  • Handle: RePEc:nat:nature:v:602:y:2022:i:7897:d:10.1038_s41586-021-04301-9
    DOI: 10.1038/s41586-021-04301-9

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41586-021-04301-9
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41586-021-04301-9?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Kai Zhao & Jia Song & Yunlong Hu & Xiaowei Xu & Yang Liu, 2022. "Deep Deterministic Policy Gradient-Based Active Disturbance Rejection Controller for Quad-Rotor UAVs," Mathematics, MDPI, vol. 10(15), pages 1-15, July.
    2. Caputo, Cesare & Cardin, Michel-Alexandre & Ge, Pudong & Teng, Fei & Korre, Anna & Antonio del Rio Chanona, Ehecatl, 2023. "Design and planning of flexible mobile Micro-Grids using Deep Reinforcement Learning," Applied Energy, Elsevier, vol. 335(C).
    3. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    4. Fuhao Ji & Auralee Edelen & Ryan Roussel & Xiaozhe Shen & Sara Miskovich & Stephen Weathersby & Duan Luo & Mianzhen Mo & Patrick Kramer & Christopher Mayes & Mohamed A. K. Othman & Emilio Nanni & Xiji, 2024. "Multi-objective Bayesian active learning for MeV-ultrafast electron diffraction," Nature Communications, Nature, vol. 15(1), pages 1-7, December.
    5. Paweł Linczuk & Andrzej Wojeński & Tomasz Czarski & Piotr Kolasiński & Wojciech M. Zabołotny & Krzysztof Poźniak & Grzegorz Kasprowicz & Radosław Cieszewski & Maryna Chernyshova & Karol Malinowski & D, 2024. "Heterogeneous Online Computational Platform for GEM-Based Plasma Impurity Monitoring Systems," Energies, MDPI, vol. 17(22), pages 1-22, November.
    6. Andrea Murari & Riccardo Rossi & Teddy Craciunescu & Jesús Vega & Michela Gelfusa, 2024. "A control oriented strategy of disruption prediction to avoid the configuration collapse of tokamak reactors," Nature Communications, Nature, vol. 15(1), pages 1-19, December.
    7. Zhang, Tianhao & Dong, Zhe & Huang, Xiaojin, 2024. "Multi-objective optimization of thermal power and outlet steam temperature for a nuclear steam supply system with deep reinforcement learning," Energy, Elsevier, vol. 286(C).
    8. Hajkowicz, Stefan & Naughtin, Claire & Sanderson, Conrad & Schleiger, Emma & Karimi, Sarvnaz & Bratanova, Alexandra & Bednarz, Tomasz, 2022. "Artificial intelligence for science – adoption trends and future development pathways," MPRA Paper 115464, University Library of Munich, Germany.
    9. Yang, Kaiyuan & Huang, Houjing & Vandans, Olafs & Murali, Adithya & Tian, Fujia & Yap, Roland H.C. & Dai, Liang, 2023. "Applying deep reinforcement learning to the HP model for protein structure prediction," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 609(C).
    10. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    11. Maryam Ghalkhani & Saeid Habibi, 2022. "Review of the Li-Ion Battery, Thermal Management, and AI-Based Battery Management System for EV Application," Energies, MDPI, vol. 16(1), pages 1-16, December.
    12. Jiyu Cui & Fang Wu & Wen Zhang & Lifeng Yang & Jianbo Hu & Yin Fang & Peng Ye & Qiang Zhang & Xian Suo & Yiming Mo & Xili Cui & Huajun Chen & Huabin Xing, 2023. "Direct prediction of gas adsorption via spatial atom interaction learning," Nature Communications, Nature, vol. 14(1), pages 1-9, December.
    13. S. K. Kim & R. Shousha & S. M. Yang & Q. Hu & S. H. Hahn & A. Jalalvand & J.-K. Park & N. C. Logan & A. O. Nelson & Y.-S. Na & R. Nazikian & R. Wilcox & R. Hong & T. Rhodes & C. Paz-Soldan & Y. M. Jeo, 2024. "Highest fusion performance without harmful edge energy bursts in tokamak," Nature Communications, Nature, vol. 15(1), pages 1-11, December.
    14. Malte Reinschmidt & József Fortágh & Andreas Günther & Valentin V. Volchkov, 2024. "Reinforcement learning in cold atom experiments," Nature Communications, Nature, vol. 15(1), pages 1-11, December.
    15. Stefano Bianchini & Moritz Muller & Pierre Pelletier, 2023. "Drivers and Barriers of AI Adoption and Use in Scientific Research," Papers 2312.09843, arXiv.org, revised Feb 2024.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nature:v:602:y:2022:i:7897:d:10.1038_s41586-021-04301-9. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.