Author
Listed:
- Dong Chao
(Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Zhuhai 519000, China
South China Sea Marine Survey Center, Ministry of Natural Resources of the People’s Republic of China, Guangzhou 510300, China
Key Laboratory of Marine Environmental Survey Technology and Application, Ministry of Natural Resources of the People’s Republic of China, Guangzhou 510300, China)
- Zhenming Li
(College of Mechanical Engineering and Automation, Foshan University, Foshan 528200, China)
- Wenbo Zhu
(College of Mechanical Engineering and Automation, Foshan University, Foshan 528200, China)
- Haibing Li
(College of Mechanical Engineering and Automation, Foshan University, Foshan 528200, China)
- Bing Zheng
(Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Zhuhai 519000, China
South China Sea Marine Survey Center, Ministry of Natural Resources of the People’s Republic of China, Guangzhou 510300, China
Key Laboratory of Marine Environmental Survey Technology and Application, Ministry of Natural Resources of the People’s Republic of China, Guangzhou 510300, China)
- Zhongbo Zhang
(College of Mechanical Engineering and Automation, Foshan University, Foshan 528200, China)
- Weijie Fu
(College of Mechanical Engineering and Automation, Foshan University, Foshan 528200, China)
Abstract
Underwater vision technology is crucial for marine exploration, aquaculture, and environmental monitoring. However, challenging underwater conditions, including light attenuation, color distortion, reduced contrast, and blurring, make it difficult to acquire high-quality underwater images, and both current deep learning models and traditional image enhancement techniques are limited in addressing these degradations. To overcome these limitations, this study proposes adaptive multi-scale multi-color space underwater image enhancement with GAN-physics fusion (AMSMC-UGAN). AMSMC-UGAN extracts features from multiple color spaces (RGB, HSV, and Lab), compensating for the limitations of RGB alone in underwater environments and making fuller use of the image information. By integrating a membership degree function that uses physical models to guide the deep learning, the model's performance is improved across different underwater scenes. In addition, a multi-scale feature extraction module deepens the granularity of image information, learns the degradation distributions of different components of the same image content more comprehensively, and thereby provides more comprehensive guidance for image enhancement. AMSMC-UGAN achieved maximum scores of 26.04 dB, 0.87, and 3.2004 for the PSNR, SSIM, and UIQM metrics, respectively, on real and synthetic underwater image datasets, with gains of at least 6.5%, 6%, and 1% on these metrics. Empirical evaluations on real and artificially distorted underwater image datasets demonstrate that AMSMC-UGAN outperforms existing techniques, with superior quantitative metrics and strong generalization capabilities.
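The multi-color space input described in the abstract can be illustrated with a minimal sketch: convert an RGB image into an additional color space and stack the channels into one multi-channel tensor for the network to consume. The function names below are hypothetical, the Lab branch is omitted for brevity (in practice a library such as OpenCV's `cv2.cvtColor` would supply both HSV and Lab conversions), and this is not the authors' implementation:

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB -> HSV for float images in [0, 1]; H, S, V all in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = rgb.max(axis=-1)
    minc = rgb.min(axis=-1)
    delta = maxc - minc
    v = maxc
    s = np.where(maxc > 0, delta / np.where(maxc > 0, maxc, 1.0), 0.0)
    # Avoid division by zero for achromatic pixels (delta == 0).
    safe = np.where(delta > 0, delta, 1.0)
    rc = (maxc - r) / safe
    gc = (maxc - g) / safe
    bc = (maxc - b) / safe
    # Piecewise hue depending on which channel attains the maximum.
    h = np.select([maxc == r, maxc == g], [bc - gc, 2.0 + rc - bc],
                  default=4.0 + gc - rc)
    h = (h / 6.0) % 1.0
    h = np.where(delta > 0, h, 0.0)
    return np.stack([h, s, v], axis=-1)

def multi_color_space_input(rgb):
    """Concatenate RGB and HSV channels into a 6-channel tensor (H, W, 6)."""
    return np.concatenate([rgb, rgb_to_hsv(rgb)], axis=-1)
```

Feeding the network such a stacked tensor rather than raw RGB is one straightforward way to let it see hue and saturation structure directly, which is the kind of compensation for RGB's underwater limitations the abstract alludes to.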
Suggested Citation
Dong Chao & Zhenming Li & Wenbo Zhu & Haibing Li & Bing Zheng & Zhongbo Zhang & Weijie Fu, 2024.
"AMSMC-UGAN: Adaptive Multi-Scale Multi-Color Space Underwater Image Enhancement with GAN-Physics Fusion,"
Mathematics, MDPI, vol. 12(10), pages 1-19, May.
Handle:
RePEc:gam:jmathe:v:12:y:2024:i:10:p:1551-:d:1395698
Download full text from publisher
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:12:y:2024:i:10:p:1551-:d:1395698. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.