Author
Listed:
- Samarth Jain
(National University of Singapore)
- Sifan Li
(National University of Singapore)
- Haofei Zheng
(National University of Singapore)
- Lingqi Li
(National University of Singapore)
- Xuanyao Fong
(National University of Singapore)
- Kah-Wee Ang
(National University of Singapore)
Abstract
Memristor crossbar arrays (CBAs) based on two-dimensional (2D) materials have emerged as a potential solution to overcome the energy-consumption and latency limitations of conventional von Neumann architectures. However, current 2D memristor CBAs face specific challenges, such as limited array size, high sneak path current, and lack of integration with peripheral circuits for hardware compute-in-memory (CIM) systems. In this work, we demonstrate a hardware CIM system leveraging heterogeneous integration of scalable 2D hafnium diselenide (HfSe2) memristors and silicon (Si) selectors, as well as their integration with peripheral control-sensing circuits. The 32 × 32 one-selector-one-memristor (1S1R) array mitigates sneak current, achieving 89% yield. The integrated CBA demonstrates improved energy efficiency and response time compared with state-of-the-art 2D material-based memristors. To exploit these low-latency devices for a low-energy system, we pair the CBA with time-domain sensing circuits, which consume 2.5-fold less power than analog-to-digital converters (ADCs). The implemented full-hardware binary convolutional neural network (CNN) achieves high accuracy (97.5%) in a pattern recognition task. Additionally, in-built activation functions enhance the energy efficiency of the system. This silicon-compatible heterogeneous integration approach presents a promising hardware solution for artificial intelligence (AI) applications.
Suggested Citation
Samarth Jain & Sifan Li & Haofei Zheng & Lingqi Li & Xuanyao Fong & Kah-Wee Ang, 2025.
"Heterogeneous integration of 2D memristor arrays and silicon selectors for compute-in-memory hardware in convolutional neural networks,"
Nature Communications, Nature, vol. 16(1), pages 1-13, December.
Handle:
RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-58039-3
DOI: 10.1038/s41467-025-58039-3
Download full text from publisher
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-58039-3. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.