Authors
Listed:
- Xudong Ye
(Faculty of Data Science, City University of Macau, Macau SAR, China)
- Qi Zhang
(Faculty of Data Science, City University of Macau, Macau SAR, China)
- Sanshuai Cui
(Faculty of Data Science, City University of Macau, Macau SAR, China)
- Zuobin Ying
(Faculty of Data Science, City University of Macau, Macau SAR, China)
- Jingzhang Sun
(School of Cyberspace Security, Hainan University, Haikou 570228, China)
- Xia Du
(School of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China)
Abstract
The field of object detection has advanced significantly in recent years, driven by remarkable progress in artificial intelligence and deep learning. These breakthroughs have substantially improved the accuracy and efficiency of detecting and categorizing objects in digital images. Nonetheless, contemporary defenses for object detection have certain limitations, such as an inability to counter white-box attacks, insufficient denoising, suboptimal reconstruction, and gradient confusion. To overcome these hurdles, this study proposes an approach that purifies adversarial examples with conditional diffusion models. The process begins by applying a random chessboard mask to the adversarial example; slight noise is then added to fill the masked area during the forward process. The adversarial image is restored through a reverse generative process that updates only the masked pixels rather than the entire image. The complement of the initial mask is then used as the mask for a second stage, which reconstructs the image once more. This two-stage masking scheme allows global perturbations to be removed completely while aiding image reconstruction. In particular, we employ a conditional diffusion model built on a class-conditional U-Net architecture, with the source image supplied as an additional condition through concatenation. Under non-APT PGD attacks, our method outperforms the recently introduced HARP method by 5% and 6.5% in mAP on the COCO2017 and PASCAL VOC datasets, respectively. Comprehensive experimental results confirm that our method effectively restores adversarial examples, demonstrating its practical utility.
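The two-stage masked purification described above is concrete enough to sketch. Below is a minimal, hypothetical PyTorch rendering: the function names (checkerboard_mask, masked_purify, two_stage_purify), the noise scale, the step count, and the simplified reverse step are illustrative assumptions, not the authors' released implementation; a real DDPM-style reverse process would also follow a learned noise schedule.

```python
# Hypothetical sketch of two-stage chessboard-masked diffusion purification.
# All names and hyperparameters here are assumptions for illustration only.
import torch

def checkerboard_mask(h, w, cell=8, parity=0):
    """Binary mask whose cell x cell blocks alternate like a chessboard.
    parity=0 and parity=1 yield complementary masks."""
    ys = torch.arange(h).unsqueeze(1) // cell
    xs = torch.arange(w).unsqueeze(0) // cell
    return (((ys + xs) % 2) == parity).float()  # shape (h, w), values in {0, 1}

@torch.no_grad()
def masked_purify(x_src, mask, denoiser, num_steps=50, noise_scale=0.5):
    """One stage: noise only the masked pixels in the forward process, then run
    a reverse process that overwrites only the masked region at each step, so
    unmasked pixels stay anchored to the source image."""
    mask = mask.expand_as(x_src)
    # Forward process: add slight Gaussian noise inside the masked area only.
    x = x_src * (1 - mask) + (x_src + noise_scale * torch.randn_like(x_src)) * mask
    for t in reversed(range(num_steps)):
        # Conditional reverse step (schematic): the source image is
        # concatenated along the channel axis as the condition.
        x_denoised = denoiser(torch.cat([x, x_src], dim=1), t)
        # Update only the masked pixels; keep the rest untouched.
        x = x_src * (1 - mask) + x_denoised * mask
    return x

@torch.no_grad()
def two_stage_purify(x_adv, denoiser, cell=8):
    """Stage 1 regenerates one half of the chessboard; stage 2 uses the
    complementary mask, so every pixel is regenerated exactly once."""
    _, _, h, w = x_adv.shape
    m1 = checkerboard_mask(h, w, cell, parity=0).to(x_adv)
    x1 = masked_purify(x_adv, m1, denoiser)
    m2 = 1.0 - m1  # complement of the first-stage mask
    return masked_purify(x1, m2, denoiser)

# Usage with a stand-in denoiser (identity on the first 3 channels); a trained
# class-conditional U-Net would be plugged in here instead.
dummy = lambda x, t: x[:, :3]
img = torch.rand(1, 3, 64, 64)
clean = two_stage_purify(img, dummy)
```

Anchoring the unmasked pixels at every reverse step is what keeps the reconstruction faithful to the input, while the complementary second pass ensures no pixel escapes purification.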
Suggested Citation
Xudong Ye & Qi Zhang & Sanshuai Cui & Zuobin Ying & Jingzhang Sun & Xia Du, 2024.
"Mitigating Adversarial Attacks in Object Detection through Conditional Diffusion Models,"
Mathematics, MDPI, vol. 12(19), pages 1-18, October.
Handle:
RePEc:gam:jmathe:v:12:y:2024:i:19:p:3093-:d:1491395
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:12:y:2024:i:19:p:3093-:d:1491395. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.