Author
Listed:
- Zhijie Li
(College of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China)
- Jiahui Zhang
(College of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China)
- Yingjie Zhang
(College of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China)
- Dawei Yan
(College of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China)
- Xing Zhang
(College of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China)
- Marcin Woźniak
(Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland)
- Wei Dong
(College of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China)
Abstract
The advancement of Transformer models in computer vision has rapidly spurred numerous Transformer-based object detection approaches, such as DEtection TRansformer. Although DETR’s self-attention mechanism effectively captures the global context, it struggles with fine-grained detail detection, limiting its efficacy in small object detection where noise can easily obscure or confuse small targets. To address these issues, we propose Fuzzy System DNN-DETR (FSDN-DETR), involving two key modules: Fuzzy Adapter Transformer Encoder and Fuzzy Denoising Transformer Decoder. The Fuzzy Adapter Transformer Encoder utilizes adaptive fuzzy membership functions and rule-based smoothing to preserve critical details, such as edges and textures, while mitigating the loss of fine details in global feature processing. Meanwhile, the Fuzzy Denoising Transformer Decoder effectively reduces noise interference and enhances fine-grained feature capture, eliminating redundant computations in irrelevant regions. This approach achieves a balance between computational efficiency for medium-resolution images and the accuracy required for small object detection. Our architecture also employs adapter modules to reduce re-training costs, and a two-stage fine-tuning strategy adapts fuzzy modules to specific domains before harmonizing the model with task-specific adjustments. Experiments on the COCO and AI-TOD-V2 datasets show that FSDN-DETR achieves an approximately 20% improvement in average precision for very small objects, surpassing state-of-the-art models and demonstrating robustness and reliability for small object detection in complex environments.
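The abstract's notion of fuzzy membership functions with rule-based smoothing can be illustrated with a minimal sketch. The paper's exact formulation is not given here, so the following is an assumption: Gaussian membership functions over three hypothetical fuzzy sets ("low", "mid", "high"), with weighted-average defuzzification acting as the smoothing rule. The function name `fuzzy_smooth` and the set centers are illustrative, not the authors' implementation.

```python
import numpy as np

def gaussian_membership(x, center, sigma):
    """Degree to which x belongs to a Gaussian fuzzy set."""
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def fuzzy_smooth(features, centers=(0.0, 0.5, 1.0), sigma=0.2):
    """Rule-based fuzzy smoothing (illustrative sketch, not the paper's
    method): each feature value is softly assigned to fuzzy sets whose
    centers are given, then replaced by the membership-weighted average
    of those centers (weighted-average defuzzification)."""
    feats = np.asarray(features, dtype=float)
    # Membership of every feature value in every fuzzy set: (n_sets, ...)
    mu = np.stack([gaussian_membership(feats, c, sigma) for c in centers])
    centers_arr = np.asarray(centers).reshape(-1, *([1] * feats.ndim))
    # Small epsilon guards against division by zero far from all centers
    return (mu * centers_arr).sum(axis=0) / (mu.sum(axis=0) + 1e-8)

# Noisy feature values are pulled toward stable membership centers,
# which is one way a fuzzy rule base can suppress noise around edges.
noisy = np.array([[0.05, 0.48, 0.97],
                  [0.10, 0.52, 0.90]])
smoothed = fuzzy_smooth(noisy)
```

In this sketch, values near a set center are left almost unchanged, while values in between are pulled toward the nearest centers in proportion to their memberships, so small noise perturbations are attenuated without hard thresholding.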
Suggested Citation
Zhijie Li & Jiahui Zhang & Yingjie Zhang & Dawei Yan & Xing Zhang & Marcin Woźniak & Wei Dong, 2025.
"FSDN-DETR: Enhancing Fuzzy Systems Adapter with DeNoising Anchor Boxes for Transfer Learning in Small Object Detection,"
Mathematics, MDPI, vol. 13(2), pages 1-25, January.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:2:p:287-:d:1569313