Author
Listed:
- Tianci Chen
(National Key Laboratory of Agricultural Equipment Technology, College of Engineering, South China Agricultural University, Guangzhou 510642, China)
- Haoxin Li
(National Key Laboratory of Agricultural Equipment Technology, College of Engineering, South China Agricultural University, Guangzhou 510642, China)
- Jinhong Lv
(National Key Laboratory of Agricultural Equipment Technology, College of Engineering, South China Agricultural University, Guangzhou 510642, China)
- Jiazheng Chen
(College of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China)
- Weibin Wu
(National Key Laboratory of Agricultural Equipment Technology, College of Engineering, South China Agricultural University, Guangzhou 510642, China; Guangdong Engineering Technology Research Center for Creative Hilly Orchard Machinery, Guangzhou 510642, China)
Abstract
Accurately detecting tea bud leaves is crucial for automating tea-picking robots. However, tea-stem occlusion and the overlapping of buds and leaves present one bud–one leaf targets with varied shapes in the field of view, making precise segmentation difficult. To improve segmentation accuracy for one bud–one leaf targets of different shapes and fine granularity, this study proposes a novel semantic segmentation model for tea bud leaves. The method introduces a hierarchical Transformer block based on a self-attention mechanism into the encoding network, which helps capture long-range dependencies between features and strengthens the representation of common features. A multi-path feature aggregation module is then designed to effectively merge the feature outputs of the encoder blocks with the decoder outputs, alleviating the loss of fine-grained features caused by downsampling. Furthermore, a refined polarized attention mechanism is applied after the aggregation module to perform polarized filtering on features along the channel and spatial dimensions, enhancing the output of fine-grained features. The experimental results demonstrate that the proposed Unet-Enhanced model segments one bud–one leaf targets of different shapes well, achieving a mean intersection over union (mIoU) of 91.18% and a mean pixel accuracy (mPA) of 95.10%. The semantic segmentation network can accurately segment tea bud leaves, providing a decision-making basis for the spatial positioning of tea-picking robots.
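The multi-path aggregation idea described in the abstract (merging encoder skip outputs with decoder outputs to recover detail lost to downsampling) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the shapes, function names, and nearest-neighbour upsampling are illustrative assumptions standing in for the paper's actual module.

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling of a (C, H, W) feature map (illustrative)."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def aggregate_paths(encoder_feats, decoder_feat):
    """Sketch of multi-path aggregation: bring each encoder output up to the
    decoder's spatial resolution, then concatenate all paths along channels.
    The real module would follow this with learned fusion (e.g. convolutions)."""
    _, H, W = decoder_feat.shape
    paths = [decoder_feat]
    for f in encoder_feats:
        factor = H // f.shape[1]  # assumes power-of-two downsampling
        paths.append(upsample_nearest(f, factor))
    return np.concatenate(paths, axis=0)

# Toy feature maps: encoder outputs at 1/2 and 1/4 resolution, decoder at full.
enc = [np.random.rand(8, 16, 16), np.random.rand(16, 8, 8)]
dec = np.random.rand(4, 32, 32)
agg = aggregate_paths(enc, dec)
print(agg.shape)  # (28, 32, 32): 4 + 8 + 16 channels at decoder resolution
```

The point of the sketch is only the data flow: fine-grained encoder features bypass the bottleneck and rejoin the decoder stream, which is what mitigates the downsampling loss the abstract refers to.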
Suggested Citation
Tianci Chen & Haoxin Li & Jinhong Lv & Jiazheng Chen & Weibin Wu, 2024.
"Segmentation Network for Multi-Shape Tea Bud Leaves Based on Attention and Path Feature Aggregation,"
Agriculture, MDPI, vol. 14(8), pages 1-23, August.
Handle:
RePEc:gam:jagris:v:14:y:2024:i:8:p:1388-:d:1458170
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jagris:v:14:y:2024:i:8:p:1388-:d:1458170. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.