Printed from https://ideas.repec.org/a/nat/nature/v558y2018i7708d10.1038_s41586-018-0180-5.html

Equivalent-accuracy accelerated neural-network training using analogue memory

Authors

Listed:
  • Stefano Ambrogio

    (IBM Research–Almaden)

  • Pritish Narayanan

    (IBM Research–Almaden)

  • Hsinyu Tsai

    (IBM Research–Almaden)

  • Robert M. Shelby

    (IBM Research–Almaden)

  • Irem Boybat

    (IBM Research–Zurich
    EPFL)

  • Carmelo Nolfo

    (IBM Research–Almaden
    EPFL)

  • Severin Sidler

    (IBM Research–Almaden
    EPFL)

  • Massimo Giordano

    (IBM Research–Almaden)

  • Martina Bodini

    (IBM Research–Almaden
    EPFL)

  • Nathan C. P. Farinha

    (IBM Research–Almaden)

  • Benjamin Killeen

    (IBM Research–Almaden)

  • Christina Cheng

    (IBM Research–Almaden)

  • Yassine Jaoudi

    (IBM Research–Almaden)

  • Geoffrey W. Burr

    (IBM Research–Almaden)

Abstract

Neural-network training can be slow and energy intensive, owing to the need to transfer the weight data for the network between conventional digital memory chips and processor chips. Analogue non-volatile memory can accelerate the neural-network training algorithm known as backpropagation by performing parallelized multiply–accumulate operations in the analogue domain at the location of the weight data. However, the classification accuracies of such in situ training using non-volatile-memory hardware have generally been less than those of software-based training, owing to insufficient dynamic range and excessive weight-update asymmetry. Here we demonstrate mixed hardware–software neural-network implementations that involve up to 204,900 synapses and that combine long-term storage in phase-change memory, near-linear updates of volatile capacitors and weight-data transfer with ‘polarity inversion’ to cancel out inherent device-to-device variations. We achieve generalization accuracies (on previously unseen data) equivalent to those of software-based training on various commonly used machine-learning test datasets (MNIST, MNIST-backrand, CIFAR-10 and CIFAR-100). The computational energy efficiency of 28,065 billion operations per second per watt and throughput per area of 3.6 trillion operations per second per square millimetre that we calculate for our implementation exceed those of today’s graphical processing units by two orders of magnitude. This work provides a path towards hardware accelerators that are both fast and energy efficient, particularly on fully connected neural-network layers.
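The in-memory multiply–accumulate and the cancellation of device-to-device variation described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy model, not the paper's implementation: the layer sizes, the noise magnitude, and the differential (G+ − G−) conductance encoding used here are assumptions chosen for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for a small fully connected layer.
n_in, n_out = 8, 4

# Target weights to be stored in the analogue array.
W = rng.normal(0.0, 0.1, size=(n_in, n_out))

# Each weight is encoded as the difference of two conductances
# (G_plus - G_minus), so a per-device offset appears in both terms.
offset = rng.normal(0.0, 0.02, size=(n_in, n_out))  # device-to-device variation
G_plus = np.maximum(W, 0) + offset
G_minus = np.maximum(-W, 0) + offset

# Analogue multiply-accumulate: applying the input vector x to the rows
# and summing column currents computes x @ (G_plus - G_minus) in one
# parallel read step, at the location of the weight data.
x = rng.normal(size=n_in)
y = x @ (G_plus - G_minus)

# The common offset cancels in the differential read-out, so the result
# matches the ideal software product despite the device variation.
assert np.allclose(y, x @ W)
```

Because every input row drives all columns simultaneously, the full vector–matrix product is obtained in a single analogue read step; this parallelism is the source of the throughput and energy advantages quoted in the abstract.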

Suggested Citation

  • Stefano Ambrogio & Pritish Narayanan & Hsinyu Tsai & Robert M. Shelby & Irem Boybat & Carmelo Nolfo & Severin Sidler & Massimo Giordano & Martina Bodini & Nathan C. P. Farinha & Benjamin Killeen & Christina Cheng & Yassine Jaoudi & Geoffrey W. Burr, 2018. "Equivalent-accuracy accelerated neural-network training using analogue memory," Nature, Nature, vol. 558(7708), pages 60-67, June.
  • Handle: RePEc:nat:nature:v:558:y:2018:i:7708:d:10.1038_s41586-018-0180-5
    DOI: 10.1038/s41586-018-0180-5

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41586-018-0180-5
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41586-018-0180-5?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Melika Payvand & Filippo Moro & Kumiko Nomura & Thomas Dalgaty & Elisa Vianello & Yoshifumi Nishi & Giacomo Indiveri, 2022. "Self-organization of an inhomogeneous memristive hardware for sequence learning," Nature Communications, Nature, vol. 13(1), pages 1-12, December.
    2. Darshit Mehta & Mustafizur Rahman & Kenji Aono & Shantanu Chakrabartty, 2022. "An adaptive synaptic array using Fowler–Nordheim dynamic analog memory," Nature Communications, Nature, vol. 13(1), pages 1-11, December.
    3. Peng Chen & Fenghao Liu & Peng Lin & Peihong Li & Yu Xiao & Bihua Zhang & Gang Pan, 2023. "Open-loop analog programmable electrochemical memory array," Nature Communications, Nature, vol. 14(1), pages 1-9, December.
    4. Filippo Moro & Emmanuel Hardy & Bruno Fain & Thomas Dalgaty & Paul Clémençon & Alessio Prà & Eduardo Esmanhotto & Niccolò Castellani & François Blard & François Gardien & Thomas Mesquida & François Ru, 2022. "Neuromorphic object localization using resistive memories and ultrasonic transducers," Nature Communications, Nature, vol. 13(1), pages 1-13, December.
    5. Charles Mackin & Malte J. Rasch & An Chen & Jonathan Timcheck & Robert L. Bruce & Ning Li & Pritish Narayanan & Stefano Ambrogio & Manuel Gallo & S. R. Nandakumar & Andrea Fasoli & Jose Luquin & Alexa, 2022. "Optimised weight programming for analogue memory-based deep neural networks," Nature Communications, Nature, vol. 13(1), pages 1-12, December.
    6. Thomas Dalgaty & Filippo Moro & Yiğit Demirağ & Alessio Pra & Giacomo Indiveri & Elisa Vianello & Melika Payvand, 2024. "Mosaic: in-memory computing and routing for small-world spike-based neuromorphic systems," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    7. Seokho Seo & Beomjin Kim & Donghoon Kim & Seungwoo Park & Tae Ryong Kim & Junkyu Park & Hakcheon Jeong & See-On Park & Taehoon Park & Hyeok Shin & Myung-Su Kim & Yang-Kyu Choi & Shinhyun Choi, 2022. "The gate injection-based field-effect synapse transistor with linear conductance update for online training," Nature Communications, Nature, vol. 13(1), pages 1-10, December.
    8. Xiangjin Wu & Asir Intisar Khan & Hengyuan Lee & Chen-Feng Hsu & Huairuo Zhang & Heshan Yu & Neel Roy & Albert V. Davydov & Ichiro Takeuchi & Xinyu Bao & H.-S. Philip Wong & Eric Pop, 2024. "Novel nanocomposite-superlattices for low energy and high stability nanoscale phase-change memory," Nature Communications, Nature, vol. 15(1), pages 1-8, December.
    9. Maldonado, D. & Aguilera-Pedregosa, C. & Vinuesa, G. & García, H. & Dueñas, S. & Castán, H. & Aldana, S. & González, M.B. & Moreno, E. & Jiménez-Molinos, F. & Campabadal, F. & Roldán, J.B., 2022. "An experimental and simulation study of the role of thermal effects on variability in TiN/Ti/HfO2/W resistive switching nonlinear devices," Chaos, Solitons & Fractals, Elsevier, vol. 160(C).
    10. Ruibin Mao & Bo Wen & Arman Kazemi & Yahui Zhao & Ann Franchesca Laguna & Rui Lin & Ngai Wong & Michael Niemier & X. Sharon Hu & Xia Sheng & Catherine E. Graves & John Paul Strachan & Can Li, 2022. "Experimentally validated memristive memory augmented neural network with efficient hashing and similarity search," Nature Communications, Nature, vol. 13(1), pages 1-13, December.
    11. Yijun Li & Jianshi Tang & Bin Gao & Jian Yao & Anjunyi Fan & Bonan Yan & Yuchao Yang & Yue Xi & Yuankun Li & Jiaming Li & Wen Sun & Yiwei Du & Zhengwu Liu & Qingtian Zhang & Song Qiu & Qingwen Li & He, 2023. "Monolithic three-dimensional integration of RRAM-based hybrid memory architecture for one-shot learning," Nature Communications, Nature, vol. 14(1), pages 1-9, December.
    12. Fadi Jebali & Atreya Majumdar & Clément Turck & Kamel-Eddine Harabi & Mathieu-Coumba Faye & Eloi Muhr & Jean-Pierre Walder & Oleksandr Bilousov & Amadéo Michaud & Elisa Vianello & Tifenn Hirtzlin & Fr, 2024. "Powering AI at the edge: A robust, memristor-based binarized neural network with near-memory computing and miniaturized solar cell," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    13. Djohan Bonnet & Tifenn Hirtzlin & Atreya Majumdar & Thomas Dalgaty & Eduardo Esmanhotto & Valentina Meli & Niccolo Castellani & Simon Martin & Jean-François Nodin & Guillaume Bourgeois & Jean-Michel P, 2023. "Bringing uncertainty quantification to the extreme-edge with memristor-based Bayesian neural networks," Nature Communications, Nature, vol. 14(1), pages 1-13, December.
    14. Simone D’Agostino & Filippo Moro & Tristan Torchet & Yiğit Demirağ & Laurent Grenouillet & Niccolò Castellani & Giacomo Indiveri & Elisa Vianello & Melika Payvand, 2024. "DenRAM: neuromorphic dendritic architecture with RRAM for efficient temporal processing with delays," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    15. Sourav Dutta & Georgios Detorakis & Abhishek Khanna & Benjamin Grisafe & Emre Neftci & Suman Datta, 2022. "Neural sampling machine with stochastic synapse allows brain-like learning and inference," Nature Communications, Nature, vol. 13(1), pages 1-10, December.
    16. Bin Gao & Ying Zhou & Qingtian Zhang & Shuanglin Zhang & Peng Yao & Yue Xi & Qi Liu & Meiran Zhao & Wenqiang Zhang & Zhengwu Liu & Xinyi Li & Jianshi Tang & He Qian & Huaqiang Wu, 2022. "Memristor-based analogue computing for brain-inspired sound localization with in situ training," Nature Communications, Nature, vol. 13(1), pages 1-8, December.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nature:v:558:y:2018:i:7708:d:10.1038_s41586-018-0180-5. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. Registering allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.