Author
Listed:
- Yanqiu Guan (Nanjing University)
- Haochen Li (Nanjing University)
- Yi Zhang (Tsinghua University)
- Yuchen Qiu (Nanjing University)
- Labao Zhang (Nanjing University; Hefei National Laboratory; University of Science and Technology of China)
- Xiangyang Ji (Tsinghua University)
- Hao Wang (Nanjing University; Hefei National Laboratory)
- Qi Chen (Nanjing University)
- Liang Ma (Nanjing University)
- Xiaohan Wang (Nanjing University)
- Zhuolin Yang (Nanjing University)
- Xuecou Tu (Nanjing University; Hefei National Laboratory)
- Qingyuan Zhao (Nanjing University)
- Xiaoqing Jia (Nanjing University; Hefei National Laboratory)
- Jian Chen (Nanjing University)
- Lin Kang (Nanjing University; Hefei National Laboratory; University of Science and Technology of China)
- Peiheng Wu (Nanjing University; Hefei National Laboratory)
Abstract
Image sensors with internal computing capabilities fuse sensing and computing to significantly reduce the power consumption and latency of machine vision tasks. Linear photodetectors such as 2D semiconductors with tunable electrical and optical properties enable in-sensor computing for multiple functions. In-sensor computing at the single-photon level is much more attractive but has not yet been achieved. Here, we demonstrate a photon-efficient camera with in-sensor computing based on a superconducting nanowire array detector with four programmable dimensions: photon count rate, response time, pulse amplitude, and spectral responsivity. At the same time, the sensor features saturated (100%) quantum efficiency in the range of 405–1550 nm. Benefiting from the multidimensional modulation and ultra-high sensitivity, a classification accuracy of 92.22% for three letters is achieved with only 0.12 photons per pixel per pattern. Furthermore, image preprocessing and spectral classification are demonstrated. Photon-efficient in-sensor computing is beneficial for vision tasks in extremely low-light environments such as covert imaging, biological imaging and space exploration. The single-photon image sensor can be scaled up to construct more complex neural networks, enabling more complex real-time vision tasks with high sensitivity.
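As a rough illustration of the scheme the abstract describes (per-pixel programmable response acting as weights on very sparse photon counts), the toy sketch below simulates Poisson photon arrivals at about 0.12 photons per pixel and classifies hypothetical letter patterns by per-class weighted photon sums. The letter patterns, weight maps, and nearest-template readout are assumptions made purely for illustration; they are not the authors' detector model, network, or data.

# Toy sketch only: sparse Poisson photon counts (~0.12 photons per pixel per
# pattern, the figure quoted in the abstract), weighted per pixel as a stand-in
# for the sensor's programmable responsivity, then read out as per-class
# weighted sums. All patterns and weights here are hypothetical; the printed
# accuracy is for this toy alone and is unrelated to the paper's 92.22%.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 binary "letter" patterns (placeholders, not from the paper).
letters = {
    "L": np.array([[1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1]], float),
    "T": np.array([[1, 1, 1, 1], [0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0]], float),
    "X": np.array([[1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]], float),
}

mean_photons = 0.12  # average photons per pixel per pattern

# One weight per pixel per class. Assumption: normalized templates serve as the
# weights; in the real sensor, weights would be encoded in the programmable
# count rate, response time, pulse amplitude, and spectral responsivity.
weights = {k: v / v.sum() for k, v in letters.items()}

def sense(pattern):
    """Simulate single-photon detection: Poisson counts proportional to intensity."""
    return rng.poisson(mean_photons * pattern / pattern.mean())

def classify(counts):
    """In-sensor style readout: pick the class with the largest weighted photon sum."""
    scores = {k: float((w * counts).sum()) for k, w in weights.items()}
    return max(scores, key=scores.get)

trials, correct = 2000, 0
for _ in range(trials):
    true = rng.choice(list(letters))
    if classify(sense(letters[true])) == true:
        correct += 1
print(f"toy accuracy: {correct / trials:.2%}")

The point of the sketch is only that, even with far fewer than one photon per pixel, a fixed set of per-pixel weights applied at readout can separate simple patterns; the reported hardware does this modulation inside the detector itself rather than in software.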
Suggested Citation
Yanqiu Guan & Haochen Li & Yi Zhang & Yuchen Qiu & Labao Zhang & Xiangyang Ji & Hao Wang & Qi Chen & Liang Ma & Xiaohan Wang & Zhuolin Yang & Xuecou Tu & Qingyuan Zhao & Xiaoqing Jia & Jian Chen & Lin Kang & Peiheng Wu, 2025.
"Photon-efficient camera with in-sensor computing,"
Nature Communications, Nature, vol. 16(1), pages 1-8, December.
Handle:
RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-58501-2
DOI: 10.1038/s41467-025-58501-2
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-58501-2. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.