Author
Listed:
- Xintong Zhang
(College of Mathematics and Computer Science, Zhejiang A&F University, Hangzhou 311300, China
Key Laboratory of State Forestry and Grassland Administration on Forestry Sensing Technology and Intelligent Equipment, Hangzhou 311300, China
Key Laboratory of Forestry Intelligent Monitoring and Information Technology of Zhejiang, Hangzhou 311300, China)
- Dasheng Wu
(College of Mathematics and Computer Science, Zhejiang A&F University, Hangzhou 311300, China
Key Laboratory of State Forestry and Grassland Administration on Forestry Sensing Technology and Intelligent Equipment, Hangzhou 311300, China
Key Laboratory of Forestry Intelligent Monitoring and Information Technology of Zhejiang, Hangzhou 311300, China)
- Fengya Xu
(College of Mathematics and Computer Science, Zhejiang A&F University, Hangzhou 311300, China
Key Laboratory of State Forestry and Grassland Administration on Forestry Sensing Technology and Intelligent Equipment, Hangzhou 311300, China
Key Laboratory of Forestry Intelligent Monitoring and Information Technology of Zhejiang, Hangzhou 311300, China)
Abstract
It is challenging to achieve accurate tea bud detection in optical images with complex backgrounds, since distinguishing between the foregrounds and backgrounds of these images remains difficult. Although several studies have implicitly distinguished foregrounds from backgrounds via various attention mechanisms, the explicit distinction between the two has seldom been explored. Inspired by recent successful applications of the Segment Anything Model (SAM) in computer vision, this study proposes a SAM-assisted dual-branch YOLOv8 model named SD-YOLOv8 for tea bud detection, addressing the challenge of explicitly distinguishing foregrounds from backgrounds. The SD-YOLOv8 model consists of two key components: (1) a SAM-based foreground segmenter (SFS) that generates foreground masks of tea bud images without any training, and (2) a heterogeneous feature extractor that captures, in parallel, color features from the optical images and edge features from the foreground masks. The experimental results show that the proposed SD-YOLOv8 significantly improves tea bud detection through the explicit distinction between foregrounds and backgrounds. The mean Average Precision (mAP) of the SD-YOLOv8 model reaches 86.0%, surpassing YOLOv8 (mAP 81.6%) by 4.4 percentage points and outperforming recent object detection models, including Faster R-CNN (mAP 60.7%), DETR (mAP 64.6%), YOLOv5 (mAP 72.4%), and YOLOv7 (mAP 80.6%). This demonstrates its superior capability in detecting tea buds against complex backgrounds. Additionally, this study introduces a self-built tea bud dataset spanning three seasons to address data shortages in tea bud detection.
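The abstract describes two components: a training-free SAM-based foreground segmenter and a dual-branch extractor that processes the optical image and its foreground mask in parallel. The Python sketch below illustrates one plausible reading of that pipeline. The segment_anything calls follow the public SAM API, but the vit_b backbone choice, the mask-union rule, and the DualBranchStem module are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two SD-YOLOv8 components named in the abstract:
# (1) a training-free SAM-based foreground segmenter (SFS) and
# (2) a dual-branch stem combining color and mask-edge features.
# Names and the mask-merging rule are illustrative, not the authors' code.
import numpy as np
import torch
import torch.nn as nn
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

def sam_foreground_mask(image_rgb: np.ndarray, checkpoint: str) -> np.ndarray:
    """Merge SAM's automatically proposed masks into one binary foreground mask."""
    sam = sam_model_registry["vit_b"](checkpoint=checkpoint)   # pretrained, no fine-tuning
    masks = SamAutomaticMaskGenerator(sam).generate(image_rgb)  # expects HWC uint8 RGB
    fg = np.zeros(image_rgb.shape[:2], dtype=bool)
    for m in masks:               # union of all proposed segments; the paper's
        fg |= m["segmentation"]   # actual foreground-selection rule is unknown
    return fg.astype(np.float32)

class DualBranchStem(nn.Module):
    """Parallel stems: color features from the image, edge features from the mask."""
    def __init__(self, out_ch: int = 64):
        super().__init__()
        self.color = nn.Sequential(nn.Conv2d(3, out_ch, 3, 2, 1), nn.SiLU())
        self.edge = nn.Sequential(nn.Conv2d(1, out_ch, 3, 2, 1), nn.SiLU())
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)  # 1x1 fusion of both branches

    def forward(self, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # image: (B,3,H,W) float; mask: (B,1,H,W) binary foreground mask
        return self.fuse(torch.cat([self.color(image), self.edge(mask)], dim=1))
```

In a full detector, the fused feature map from a stem like this would feed the YOLOv8 backbone and heads; the fusion point and branch depths used in SD-YOLOv8 are not specified in the abstract.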
Suggested Citation
Xintong Zhang & Dasheng Wu & Fengya Xu, 2025.
"SD-YOLOv8: SAM-Assisted Dual-Branch YOLOv8 Model for Tea Bud Detection on Optical Images,"
Agriculture, MDPI, vol. 15(7), pages 1-19, March.
Handle:
RePEc:gam:jagris:v:15:y:2025:i:7:p:712-:d:1621778
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jagris:v:15:y:2025:i:7:p:712-:d:1621778. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.