Author
Listed:
- Yuanyuan Zhang
(Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China)
- Kunpeng Tian
(Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China)
- Jicheng Huang
(Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China)
- Zhenlong Wang
(Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China)
- Bin Zhang
(Graduate School of Chinese Academy of Agricultural Sciences, Beijing 100083, China)
- Qing Xie
(Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China)
Abstract
When uncrewed agricultural machinery performs autonomous operations in the field, it inevitably encounters obstacles such as people, livestock, poles, and stones. Accurate recognition of obstacles in the field environment is therefore an essential function. To ensure the safety and enhance the operational efficiency of autonomous farming equipment, this study proposes an improved YOLOv8-based field obstacle detection model that leverages depth information from a binocular camera for precise obstacle localization. The improved model incorporates the Large Separable Kernel Attention (LSKA) module to enhance the extraction of field obstacle features. A Poly Kernel Inception (PKI) Block reduces model size while improving obstacle detection across scales, and an auxiliary detection head further improves accuracy. Combining the improved model with a binocular camera allows obstacles to be detected and their three-dimensional coordinates estimated. Experimental results show that the improved model achieves a mean average precision (mAP) of 91.8%, a 3.4% improvement over the original model, while reducing floating-point operations to 7.9 GFLOPs. The improved model also shows clear advantages over other mainstream detection algorithms. In localization accuracy tests over the 2–10 m range, the maximum average error and maximum relative error in the distance between the camera and five types of obstacles were 0.16 m and 2.26%, respectively. These findings confirm that the designed model meets the requirements for obstacle detection and localization in field environments.
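To make the localization step described in the abstract concrete, the sketch below shows how a detection box in the left image of a rectified stereo pair can be back-projected to camera-frame coordinates using a disparity map. This is a minimal illustration, not the authors' implementation: the intrinsics (FX, FY, CX, CY), the baseline, and the SGBM settings are placeholder assumptions, and the bounding boxes are assumed to come from any detector (for example, an improved YOLOv8 model as in the paper).

```python
# Minimal sketch (assumed parameters, not the authors' code): locate a detected
# obstacle in 3D from a rectified binocular image pair.
import cv2
import numpy as np

# Illustrative calibration values for a rectified stereo rig (placeholders).
FX, FY = 700.0, 700.0      # focal lengths in pixels
CX, CY = 640.0, 360.0      # principal point in pixels
BASELINE_M = 0.12          # distance between the two cameras, in metres

def disparity_map(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Dense disparity from a rectified grayscale stereo pair via SGBM."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,
        P2=32 * 5 * 5,
    )
    # OpenCV returns disparity as int16 scaled by 16.
    return sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

def locate_obstacle(box_xyxy, disparity: np.ndarray):
    """Back-project the centre of a detection box (x1, y1, x2, y2 in the left
    image) to camera-frame coordinates (X, Y, Z) in metres."""
    x1, y1, x2, y2 = box_xyxy
    u, v = int((x1 + x2) / 2), int((y1 + y2) / 2)
    # Median disparity over a small window is more robust than a single pixel.
    win = disparity[max(v - 3, 0):v + 4, max(u - 3, 0):u + 4]
    valid = win[win > 0]
    if valid.size == 0:
        return None                    # no valid disparity at this location
    d = float(np.median(valid))
    z = FX * BASELINE_M / d            # depth from the stereo baseline
    x = (u - CX) * z / FX              # lateral offset
    y = (v - CY) * z / FY              # vertical offset
    return x, y, z
```

Feeding each detected bounding box to locate_obstacle yields the obstacle's (X, Y, Z) position relative to the camera, which corresponds to the camera-to-obstacle distances evaluated in the 2–10 m localization tests reported in the abstract.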
Suggested Citation
Yuanyuan Zhang & Kunpeng Tian & Jicheng Huang & Zhenlong Wang & Bin Zhang & Qing Xie, 2024.
"Field Obstacle Detection and Location Method Based on Binocular Vision,"
Agriculture, MDPI, vol. 14(9), pages 1-18, September.
Handle:
RePEc:gam:jagris:v:14:y:2024:i:9:p:1493-:d:1469144