Author
Listed:
- Nianzeng Yuan
(School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China)
- Xingyun Zhao
(School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China)
- Bangyong Sun
(School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China
Key Laboratory of Pulp and Paper Science & Technology of Ministry of Education, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
State Key Laboratory of Transient Optics and Photonics, Chinese Academy of Sciences, Xi’an 710119, China)
- Wenjia Han
(Key Laboratory of Pulp and Paper Science & Technology of Ministry of Education, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China)
- Jiahai Tan
(State Key Laboratory of Transient Optics and Photonics, Chinese Academy of Sciences, Xi’an 710119, China)
- Tao Duan
(State Key Laboratory of Transient Optics and Photonics, Chinese Academy of Sciences, Xi’an 710119, China)
- Xiaomei Gao
(Xi’an Mapping and Printing of China National Administration of Coal Geology, Xi’an 710199, China)
Abstract
In low-light imaging environments, insufficient light reflected from objects often yields unsatisfactory images degraded by low contrast, noise artifacts, or color distortion. Such low-light images usually provide poor visual perception quality for both color-deficient and normal observers. To address these problems, we propose an end-to-end low-light image enhancement network that combines a transformer with a CNN (convolutional neural network) to restore normal-light images. Specifically, the proposed enhancement network is designed as a U-shaped structure with several functional fusion blocks. Each fusion block comprises a transformer stem and a CNN stem, and the two stems collaborate to accurately extract local and global features: the transformer stem efficiently learns global semantic information and captures long-range dependencies, while the CNN stem excels at learning local, detailed features. The proposed network can therefore accurately capture the comprehensive semantic information of low-light images, which contributes significantly to recovering normal-light images. We compare the proposed method with current popular algorithms both quantitatively and qualitatively. Subjectively, our method significantly improves image brightness, suppresses noise, and preserves texture details and color information. On objective metrics, namely peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), DeltaE, and NIQE, our method improves on the best competing values by 1.73 dB, 0.05, 0.043, 0.7939, and 0.6906, respectively. The experimental results show that the proposed method effectively addresses underexposure, noise interference, and color inconsistency in low-light images, and has practical application value.
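As an aside, the first metric listed above, PSNR, is computed directly from the mean squared error between a restored image and its ground-truth reference. The following minimal pure-Python sketch (not the authors' evaluation code; pixel values assumed to be 8-bit, flattened to a sequence) illustrates the formula PSNR = 10 · log10(MAX² / MSE):

```python
import math

def psnr(reference, restored, max_value=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    PSNR = 10 * log10(MAX^2 / MSE); higher means the restored image is
    closer to the reference. Identical inputs give infinity.
    """
    if len(reference) != len(restored):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / len(reference)
    if mse == 0:
        return math.inf
    return 10.0 * math.log10(max_value ** 2 / mse)

# Toy example: a 4-pixel "image" and a slightly noisy restoration.
ref = [52, 120, 200, 33]
out = [50, 123, 198, 35]
print(round(psnr(ref, out), 2))  # → 40.93
```

In practice PSNR is evaluated per image over full HxWxC arrays (e.g. with NumPy), but the arithmetic is exactly the one shown here; the reported 1.73 dB gain corresponds to an improvement in this quantity averaged over the test set.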
Suggested Citation
Nianzeng Yuan & Xingyun Zhao & Bangyong Sun & Wenjia Han & Jiahai Tan & Tao Duan & Xiaomei Gao, 2023.
"Low-Light Image Enhancement by Combining Transformer and Convolutional Neural Network,"
Mathematics, MDPI, vol. 11(7), pages 1-14, March.
Handle:
RePEc:gam:jmathe:v:11:y:2023:i:7:p:1657-:d:1111484