Author
Listed:
- Yiyong Chen
(Tea Research Institute, Guangdong Academy of Agricultural Sciences & Guangdong Provincial Key Laboratory of Tea Plant Resources Innovation and Utilization, Dafeng Road 6, Tianhe District, Guangzhou 510640, China)
- Yang Guo
(College of Electronic Engineering, South China Agricultural University, Guangzhou 510642, China)
- Jianlong Li
(Tea Research Institute, Guangdong Academy of Agricultural Sciences & Guangdong Provincial Key Laboratory of Tea Plant Resources Innovation and Utilization, Dafeng Road 6, Tianhe District, Guangzhou 510640, China)
- Bo Zhou
(Tea Research Institute, Guangdong Academy of Agricultural Sciences & Guangdong Provincial Key Laboratory of Tea Plant Resources Innovation and Utilization, Dafeng Road 6, Tianhe District, Guangzhou 510640, China)
- Jiaming Chen
(Key Laboratory of South China Agricultural Plant Molecular Analysis and Genetic Improvement & Guangdong Provincial Key Laboratory of Applied Botany, South China Botanical Garden, Chinese Academy of Sciences, Xingke Road 723, Tianhe District, Guangzhou 510650, China)
- Man Zhang
(Tea Research Institute, Guangdong Academy of Agricultural Sciences & Guangdong Provincial Key Laboratory of Tea Plant Resources Innovation and Utilization, Dafeng Road 6, Tianhe District, Guangzhou 510640, China)
- Yingying Cui
(Tea Research Institute, Guangdong Academy of Agricultural Sciences & Guangdong Provincial Key Laboratory of Tea Plant Resources Innovation and Utilization, Dafeng Road 6, Tianhe District, Guangzhou 510640, China)
- Jinchi Tang
(Tea Research Institute, Guangdong Academy of Agricultural Sciences & Guangdong Provincial Key Laboratory of Tea Plant Resources Innovation and Utilization, Dafeng Road 6, Tianhe District, Guangzhou 510640, China)
Abstract
Accurate bud detection is a prerequisite for automatic tea picking and yield estimation; however, existing studies suffer from missed detections caused by single-variety datasets and false detections under complex backgrounds. Traditional object detection models are mainly CNN-based, but CNNs extract only local feature information, which limits accurate target identification in complex environments; Transformers address this limitation well. Therefore, based on a multi-variety tea bud dataset, this study proposes RT-DETR-Tea, an improved object detection model built on the real-time detection Transformer (RT-DETR) framework. The model replaces the multi-head self-attention (MHSA) mechanism in the attention-based intra-scale feature interaction (AIFI) module with cascaded group attention, effectively optimizing deep features and enriching their semantic information. The original cross-scale feature-fusion module (CCFM) is redesigned as the gather-and-distribute-Tea (GD-Tea) mechanism for multi-level feature fusion, which effectively fuses low-level and high-level semantic information as well as large and small tea bud features in natural environments. The DilatedReparamBlock submodule from UniRepLKNet was employed to improve RepC3, achieving efficient fusion of tea bud feature information and ensuring the accuracy of the detection head. Ablation experiments show that the precision and mean average precision of the proposed RT-DETR-Tea model are 96.1% and 79.7%, respectively, improvements of 5.2% and 2.4% over the original model, confirming the model's effectiveness. The model also shows good detection performance on a newly constructed tea bud dataset.
Compared with other detection algorithms, the improved RT-DETR-Tea model demonstrates superior tea bud detection performance, providing effective technical support for smart tea garden management and production.
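The cascaded group attention mechanism named in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (which operates inside RT-DETR's AIFI module on multi-channel feature maps with learned weights); it is an assumed, simplified version showing the core idea: the channels are split into one group per head, and each head attends over its own slice plus the output of the previous head, so later heads refine earlier ones. The projection matrices here are random placeholders standing in for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cascaded_group_attention(x, heads, rng):
    """Illustrative cascaded group attention.

    x: (tokens, channels) feature matrix.
    Channels are split into `heads` groups; head i attends over its
    channel slice plus the previous head's output (the cascade), and
    the head outputs are concatenated back to the full width.
    """
    n, c = x.shape
    d = c // heads
    outs = []
    prev = np.zeros((n, d))
    for i in range(heads):
        # Cascade: feed the previous head's output into this head's input slice.
        xi = x[:, i * d:(i + 1) * d] + prev
        # Random projections stand in for learned Q/K/V weights.
        Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
        q, k, v = xi @ Wq, xi @ Wk, xi @ Wv
        attn = softmax(q @ k.T / np.sqrt(d))   # (n, n) attention over tokens
        prev = attn @ v
        outs.append(prev)
    return np.concatenate(outs, axis=1)        # back to (n, c)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                # 4 tokens, 8 channels
y = cascaded_group_attention(x, heads=2, rng=rng)
print(y.shape)                                 # (4, 8)
```

Compared with standard MHSA, where every head sees the full (projected) input independently, the cascade gives each head a progressively refined input at the same parameter budget, which is the property the abstract credits with enriching deep-feature semantics.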
Suggested Citation
Yiyong Chen & Yang Guo & Jianlong Li & Bo Zhou & Jiaming Chen & Man Zhang & Yingying Cui & Jinchi Tang, 2024.
"RT-DETR-Tea: A Multi-Species Tea Bud Detection Model for Unstructured Environments,"
Agriculture, MDPI, vol. 14(12), pages 1-19, December.
Handle:
RePEc:gam:jagris:v:14:y:2024:i:12:p:2256-:d:1540337