Authors
Listed:
- Junyi Yang (City University of Hong Kong)
- Ruibin Mao (The University of Hong Kong)
- Mingrui Jiang (The University of Hong Kong)
- Yichuan Cheng (City University of Hong Kong)
- Pao-Sheng Vincent Sun (City University of Hong Kong)
- Shuai Dong (City University of Hong Kong)
- Giacomo Pedretti (Hewlett Packard Enterprise)
- Xia Sheng (Hewlett Packard Enterprise)
- Jim Ignowski (Hewlett Packard Enterprise)
- Haoliang Li (City University of Hong Kong)
- Can Li (The University of Hong Kong)
- Arindam Basu (City University of Hong Kong)
Abstract
Analog in-memory computing (IMC) has demonstrated energy-efficient and low-latency implementations of convolutional and fully-connected layers in deep neural networks (DNN) by using physics to compute in parallel resistive memory arrays. However, recurrent neural networks (RNN), which are widely used for speech recognition and natural language processing, have seen limited success with this approach. This can be attributed to the significant time and energy penalties incurred in implementing the nonlinear activation functions that are abundant in such models. In this work, we experimentally demonstrate a nonlinear activation function integrated with a ramp analog-to-digital converter (ADC) at the periphery of the memory to improve in-memory implementations of RNNs. Our approach uses an extra column of memristors to produce an appropriately pre-distorted ramp voltage such that the comparator output directly approximates the desired nonlinear function. We experimentally demonstrate programming different nonlinear functions using a memristive array and simulate their incorporation in RNNs to solve keyword spotting and language modelling tasks. Compared to other approaches, we demonstrate a manifold increase in area efficiency, energy efficiency, and throughput due to the in-memory, programmable ramp generator, which removes the digital processing overhead.
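The pre-distorted-ramp idea described in the abstract can be illustrated with a short numerical sketch: if the ramp voltage follows the inverse of the target activation, the linear counter value captured when the comparator trips directly encodes the activated output. The Python snippet below is only an idealized simulation of that principle, assuming normalized inputs, an ideal comparator, and a monotone activation; the function names and the sigmoid scaling are hypothetical and are not taken from the paper.

import numpy as np

def predistorted_ramp_adc(v_in, ramp_inverse, n_steps=256):
    """Digitize v_in while folding a nonlinear activation into the conversion.

    A linear counter t in [0, 1] drives a pre-distorted ramp
    v_ramp(t) = f_inv(t); the comparator trips at the first step where
    v_ramp >= v_in, i.e. at t ~= f(v_in), so the counter code itself
    approximates the activated output.
    """
    t = np.linspace(0.0, 1.0, n_steps)                  # linear counter / time base
    v_ramp = ramp_inverse(t)                            # pre-distorted ramp voltage
    trip = np.argmax(v_ramp[:, None] >= v_in, axis=0)   # first crossing, per input
    return t[trip]                                      # counter code ~ f(v_in)

# Hypothetical target activation: a sigmoid rescaled to the [0, 1] input range.
f = lambda x: 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))
f_inv = lambda y: 0.5 + np.log((y + 1e-12) / (1.0 - y + 1e-12)) / 8.0

v_in = np.linspace(0.0, 1.0, 11)
print(np.round(predistorted_ramp_adc(v_in, f_inv), 3))  # counter codes
print(np.round(f(v_in), 3))                             # ideal sigmoid, for comparison

In a memristive realization, the pre-distorted ramp would be generated by the extra crossbar column described in the abstract rather than by an explicit inverse function, which is what removes the digital post-processing step.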
Suggested Citation
Junyi Yang & Ruibin Mao & Mingrui Jiang & Yichuan Cheng & Pao-Sheng Vincent Sun & Shuai Dong & Giacomo Pedretti & Xia Sheng & Jim Ignowski & Haoliang Li & Can Li & Arindam Basu, 2025.
"Efficient nonlinear function approximation in analog resistive crossbars for recurrent neural networks,"
Nature Communications, Nature, vol. 16(1), pages 1-15, December.
Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56254-6
DOI: 10.1038/s41467-025-56254-6
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56254-6. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help adding them by using this form .
If you know of missing items citing this one, you can help us creating those links by adding the relevant references in the same way as above, for each refering item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com .
Please note that corrections may take a couple of weeks to filter through
the various RePEc services.