
Efficient annotation reduction with active learning for computer vision-based Retail Product Recognition

Author

Listed:
  • Niels Griffioen

    (Tilburg University)

  • Nevena Rankovic

    (Tilburg University)

  • Federico Zamberlan

    (Universidad de Buenos Aires)

  • Monisha Punith

    (University of Antwerp)

Abstract

The retail industry faces significant obstacles in adopting computer vision (CV) technology due to frequent model retraining as products change and to time-consuming, costly data annotation. Previous research in this field has focused primarily on optimizing model performance rather than minimizing annotation effort. The main idea of this paper is therefore to evaluate active learning as a method to minimize annotation effort in the retail industry. The MVTEC Densely Segmented Supermarket dataset is used to evaluate several active learning methods, namely Least Confident, Entropy, and Cost-Effective Active Learning (CEAL), together with a Mask R-CNN model. The results demonstrate that annotating only 20.83–24.34% of the data achieves 95% of the full dataset’s performance. When training and out-of-sample data share similar characteristics, the Least Confident and CEAL methods reduce annotation requirements by 7.7–15.7% while maintaining 95% and 97% of the full dataset’s performance, respectively. The Entropy method, however, underperforms the random selection baseline, and none of the methods show a clear advantage when the data characteristics differ between training and out-of-sample data. Finally, evaluating the proposed active learning methods on an industry-specific retail dataset markedly advances the development of highly efficient and cost-effective CV solutions tailored to the retail industry.
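The acquisition strategies named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes only that the model exposes per-sample class probabilities (for a Mask R-CNN this would be derived from per-detection confidences), and the CEAL threshold `delta` is a hypothetical parameter name. Least Confident scores a sample by how far its top class probability is from certainty, Entropy scores it by the Shannon entropy of the predicted distribution, and CEAL additionally pseudo-labels the most confident samples:

```python
import numpy as np

def least_confident_scores(probs):
    """Uncertainty = 1 - max predicted class probability per sample."""
    return 1.0 - probs.max(axis=1)

def entropy_scores(probs, eps=1e-12):
    """Shannon entropy of the predicted class distribution per sample."""
    return -(probs * np.log(probs + eps)).sum(axis=1)

def select_batch(probs, k, strategy="least_confident"):
    """Indices of the k most uncertain unlabeled samples, most uncertain first."""
    if strategy == "least_confident":
        scores = least_confident_scores(probs)
    else:
        scores = entropy_scores(probs)
    return np.argsort(scores)[-k:][::-1]

def ceal_pseudo_label(probs, delta):
    """CEAL-style step: samples whose entropy falls below the threshold delta
    are treated as confidently predicted and assigned pseudo-labels."""
    ent = entropy_scores(probs)
    idx = np.where(ent < delta)[0]
    return idx, probs[idx].argmax(axis=1)

# Toy pool: 4 unlabeled samples, 3 classes.
probs = np.array([
    [0.90, 0.05, 0.05],  # confident -> low uncertainty
    [0.40, 0.35, 0.25],  # uncertain
    [0.60, 0.30, 0.10],
    [0.34, 0.33, 0.33],  # near-uniform -> most uncertain
])
print(select_batch(probs, k=2))  # -> [3 1]: send these two for human annotation
```

In an active learning loop, the selected indices are annotated, added to the training set, the model is retrained, and the scoring repeats on the remaining pool; under CEAL, the pseudo-labeled samples also join the training set without human annotation cost.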

Suggested Citation

  • Niels Griffioen & Nevena Rankovic & Federico Zamberlan & Monisha Punith, 2024. "Efficient annotation reduction with active learning for computer vision-based Retail Product Recognition," Journal of Computational Social Science, Springer, vol. 7(1), pages 1039-1070, April.
  • Handle: RePEc:spr:jcsosc:v:7:y:2024:i:1:d:10.1007_s42001-024-00266-7
    DOI: 10.1007/s42001-024-00266-7

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s42001-024-00266-7
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s42001-024-00266-7?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As the access to this document is restricted, you may want to search for a different version of it.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jcsosc:v:7:y:2024:i:1:d:10.1007_s42001-024-00266-7. See general information about how to correct material in RePEc.


    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.