Author
Listed:
- Shaocong Wang
(The University of Hong Kong
Chinese Academy of Sciences
Hong Kong Science Park)
- Yizhao Gao
(The University of Hong Kong)
- Yi Li
(The University of Hong Kong
Chinese Academy of Sciences
University of Chinese Academy of Sciences)
- Woyu Zhang
(Chinese Academy of Sciences
University of Chinese Academy of Sciences)
- Yifei Yu
(The University of Hong Kong
Hong Kong Science Park)
- Bo Wang
(The University of Hong Kong
Hong Kong Science Park)
- Ning Lin
(The University of Hong Kong
Hong Kong Science Park)
- Hegan Chen
(The University of Hong Kong
Hong Kong Science Park)
- Yue Zhang
(The University of Hong Kong
Hong Kong Science Park)
- Yang Jiang
(The University of Hong Kong
Hong Kong Science Park)
- Dingchen Wang
(The University of Hong Kong
Hong Kong Science Park)
- Jia Chen
(The University of Hong Kong
Hong Kong Science Park)
- Peng Dai
(The University of Hong Kong)
- Hao Jiang
(Fudan University)
- Peng Lin
(Zhejiang University)
- Xumeng Zhang
(Fudan University)
- Xiaojuan Qi
(The University of Hong Kong)
- Xiaoxin Xu
(Chinese Academy of Sciences
University of Chinese Academy of Sciences)
- Hayden So
(The University of Hong Kong)
- Zhongrui Wang
(Southern University of Science and Technology)
- Dashan Shang
(Chinese Academy of Sciences
University of Chinese Academy of Sciences)
- Qi Liu
(Chinese Academy of Sciences
Fudan University)
- Kwang-Ting Cheng
(Hong Kong Science Park
The Hong Kong University of Science and Technology)
- Ming Liu
(Chinese Academy of Sciences
Fudan University)
Abstract
Visual sensors, including 3D light detection and ranging, neuromorphic dynamic vision sensors, and conventional frame cameras, are increasingly integrated into edge-side intelligent machines. However, their data are heterogeneous, complicating system development. Moreover, conventional digital hardware is constrained by the von Neumann bottleneck and the physical limits of transistor scaling, and the computational demands of training ever-growing models further exacerbate these challenges. We propose a hardware-software co-designed random resistive memory-based deep extreme point learning machine. Data-wise, the multi-sensory data are unified as point sets and processed universally. Software-wise, most weights are exempted from training. Hardware-wise, nanoscale resistive memory enables the collocation of memory and processing, and its inherent programming stochasticity is leveraged to generate random weights. The co-designed system is validated on 3D segmentation (ShapeNet), event recognition (DVS128 Gesture), and image classification (Fashion-MNIST) tasks, achieving accuracy comparable to conventional systems while delivering 6.78×/21.04×/15.79× improvements in energy efficiency and 70.12%/89.46%/85.61% reductions in training cost, respectively.
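The abstract's "most weights are exempted from training" follows the extreme learning machine idea: hidden-layer weights are drawn at random and frozen (in the paper, sourced from resistive-memory programming stochasticity), and only a linear readout is learned, in closed form. The sketch below is a minimal NumPy illustration of that principle only; the layer sizes, the ReLU feature map, and the ridge-regression readout are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy point-feature data: N samples, D input features, C classes
# (shapes are illustrative, not from the paper).
N, D, H, C = 200, 16, 64, 3
X = rng.normal(size=(N, D))
y = rng.integers(0, C, size=N)

# Random hidden-layer weights: drawn once and never trained,
# mimicking weights generated by stochastic memory programming.
W_rand = rng.normal(size=(D, H))
b_rand = rng.normal(size=H)
hidden = np.maximum(X @ W_rand + b_rand, 0.0)  # fixed ReLU features

# Only the linear readout is learned, via a closed-form
# regularized least-squares (ridge) solution -- no backpropagation.
Y_onehot = np.eye(C)[y]
lam = 1e-2
beta = np.linalg.solve(hidden.T @ hidden + lam * np.eye(H),
                       hidden.T @ Y_onehot)

pred = np.argmax(hidden @ beta, axis=1)
```

Because `beta` is obtained in a single linear solve over fixed random features, the training cost is dominated by one matrix factorization rather than iterative gradient updates, which is the source of the training-cost savings the abstract reports.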
Suggested Citation
Shaocong Wang & Yizhao Gao & Yi Li & Woyu Zhang & Yifei Yu & Bo Wang & Ning Lin & Hegan Chen & Yue Zhang & Yang Jiang & Dingchen Wang & Jia Chen & Peng Dai & Hao Jiang & Peng Lin & Xumeng Zhang & Xiaojuan Qi & Xiaoxin Xu & Hayden So & Zhongrui Wang & Dashan Shang & Qi Liu & Kwang-Ting Cheng & Ming Liu, 2025.
"Random resistive memory-based deep extreme point learning machine for unified visual processing,"
Nature Communications, Nature, vol. 16(1), pages 1-11, December.
Handle:
RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56079-3
DOI: 10.1038/s41467-025-56079-3
Download full text from publisher
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56079-3. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.