
TQU-SLAM Benchmark Dataset for Comparative Study to Build Visual Odometry Based on Extracted Features from Feature Descriptors and Deep Learning

Authors

Listed:
  • Thi-Hao Nguyen

    (Faculty of Engineering Technology, Hung Vuong University, Viet Tri City 35100, Vietnam)

  • Van-Hung Le

    (Faculty of Basic Science, Tan Trao University, Tuyen Quang City 22000, Vietnam)

  • Huu-Son Do

    (Faculty of Basic Science, Tan Trao University, Tuyen Quang City 22000, Vietnam)

  • Trung-Hieu Te

    (Faculty of Basic Science, Tan Trao University, Tuyen Quang City 22000, Vietnam)

  • Van-Nam Phan

    (Faculty of Basic Science, Tan Trao University, Tuyen Quang City 22000, Vietnam)

Abstract

The problem of enriching data to train visual SLAM and visual odometry (VO) models with deep learning (DL) is a pressing one in computer vision: DL requires a large amount of training data, and data covering many different contexts and conditions yields more accurate visual SLAM and VO models. In this paper, we introduce the TQU-SLAM benchmark dataset, which comprises 160,631 RGB-D frame pairs collected from the corridors of three interconnected buildings with a total length of about 230 m. The ground-truth data of the TQU-SLAM benchmark dataset were prepared manually, including 6-DOF camera poses, 3D point cloud data, intrinsic parameters, and the transformation matrix between the camera coordinate system and the real world. We also tested the TQU-SLAM benchmark dataset using the PySLAM framework with traditional features such as SHI_TOMASI, SIFT, SURF, ORB, ORB2, AKAZE, KAZE, and BRISK, as well as features extracted by DL models such as VGG, DPVO, and TartanVO. We evaluate the camera pose estimation results and show that the ORB2 features give the best accuracy (Err_d = 5.74 mm), while the SHI_TOMASI feature achieves the best ratio of frames with detected keypoints (r_d = 98.97%). We also present and analyze the challenges that the TQU-SLAM benchmark dataset poses for building visual SLAM and VO systems.
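The abstract's two headline metrics can be made concrete with a short sketch. The Python fragment below (PySLAM itself is Python, so the language fits) shows one plausible way to compute a frame detection ratio r_d with OpenCV's ORB detector and a mean translational error Err_d between estimated and ground-truth 6-DOF poses. The directory layout (TQU-SLAM/sequence01/rgb/*.png), the function names, and the use of 4x4 homogeneous pose matrices are assumptions for illustration only, not the paper's actual evaluation code.

    # Hypothetical sketch of the two abstract metrics; file layout and names
    # are assumed for illustration, not the dataset's actual API.
    import glob

    import cv2
    import numpy as np

    def detection_ratio(rgb_paths, min_keypoints=1):
        """r_d: fraction of frames in which the detector finds keypoints."""
        orb = cv2.ORB_create(nfeatures=2000)
        detected = 0
        for path in rgb_paths:
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue  # skip unreadable frames
            if len(orb.detect(img, None)) >= min_keypoints:
                detected += 1
        return detected / len(rgb_paths)

    def mean_translation_error(est_poses, gt_poses):
        """Err_d: mean Euclidean distance between the estimated and
        ground-truth camera positions (translation part of 4x4 poses)."""
        errors = [np.linalg.norm(est[:3, 3] - gt[:3, 3])
                  for est, gt in zip(est_poses, gt_poses)]
        return float(np.mean(errors))

    if __name__ == "__main__":
        frames = sorted(glob.glob("TQU-SLAM/sequence01/rgb/*.png"))  # assumed layout
        print(f"r_d = {100 * detection_ratio(frames):.2f}%")

Swapping cv2.ORB_create for another OpenCV detector (SIFT, AKAZE, BRISK, etc.) would reproduce the same comparison across the traditional features listed in the abstract.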

Suggested Citation

  • Thi-Hao Nguyen & Van-Hung Le & Huu-Son Do & Trung-Hieu Te & Van-Nam Phan, 2024. "TQU-SLAM Benchmark Dataset for Comparative Study to Build Visual Odometry Based on Extracted Features from Feature Descriptors and Deep Learning," Future Internet, MDPI, vol. 16(5), pages 1-21, May.
  • Handle: RePEc:gam:jftint:v:16:y:2024:i:5:p:174-:d:1396772

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/16/5/174/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/16/5/174/
    Download Restriction: no