Printed from https://ideas.repec.org/a/wsi/fracta/v32y2024i09n10ns0218348x25400328.html

Harnessing Deep Transfer Learning With Imaging Technology For Underwater Object Detection And Tracking In Consumer Electronics

Author

Listed:
  • SAAD ALAHMARI

    (Department of Computer Science, Applied College, Northern Border University, Arar, Saudi Arabia)

  • ALANOUD AL MAZROA

    (Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P. O. Box 84428, Riyadh 11671, Saudi Arabia)

  • KHALID MAHMOOD

    (Department of Information Systems, Applied College at Mahayil, King Khalid University, Saudi Arabia)

  • JEHAD SAAD ALQURNI

    (Department of Educational Technologies, College of Education, Imam Abdulrahman Bin Faisal University, P. O. Box 1982, Dammam 31441, Saudi Arabia)

  • AHMED S. SALAMA

    (Department of Electrical Engineering, Faculty of Engineering & Technology, Future University in Egypt, New Cairo 11845, Egypt)

  • YAZEED ALZAHRANI

    (Department of Computer Engineering, College of Engineering in Wadi Addawasir, Prince Sattam Bin Abdulaziz University, Saudi Arabia)

Abstract

Consumer electronics such as action cameras and underwater drones increasingly include object detection capabilities that automatically capture clear underwater images and videos by identifying, tracking, and focusing on objects of interest. Underwater object detection (UOD) in consumer electronics is transforming how users interact with aquatic environments: it improves the quality of underwater videography and photography and contributes to user safety by enabling devices to avoid collisions with underwater obstacles. Classical approaches rely on explicit, hand-crafted feature definitions that suffer from uncertainty caused by differing viewpoints, occlusion, illumination, and seasonal variation. This paper develops a Deep Transfer Learning with Imaging Technology for Underwater Object Detection and Tracking (DTLIT-UOBT) technique for consumer electronics, which uses deep learning and imaging technologies to detect and track underwater objects. In the DTLIT-UOBT technique, a bilateral filtering (BF) approach is first applied to improve the quality of the underwater images. Next, an improved neural architecture search network (NASNet) model derives feature vectors from the preprocessed images. The DTLIT-UOBT technique then uses the jellyfish search fractal optimization algorithm (JSOA) for hyperparameter tuning. Finally, detection and tracking of the objects are performed by an extreme learning machine (ELM). A series of simulations on an underwater object detection dataset was used to validate the performance of the DTLIT-UOBT model; the experimental validation exhibits a superior accuracy of 95.71% over other techniques.
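The final stage of the pipeline described above is an extreme learning machine: a single-hidden-layer network whose input weights are drawn randomly and frozen, so that only the output weights are fitted, in closed form, by least squares. The paper does not publish code; the following is a minimal NumPy sketch of a generic ELM classifier of this kind, where the function names, the tanh activation, and the hidden-layer size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """Fit a basic ELM classifier: random fixed hidden layer, least-squares output."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    Y = np.eye(int(y.max()) + 1)[y]               # one-hot encode class labels
    beta = np.linalg.pinv(H) @ Y                  # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict class labels by taking the argmax over the output-layer scores."""
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

In the DTLIT-UOBT setting, `X` would hold the NASNet feature vectors extracted from the preprocessed underwater frames, and the JSOA step would search over choices such as `n_hidden`; here those inputs are stand-ins.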

Suggested Citation

  • Saad Alahmari & Alanoud Al Mazroa & Khalid Mahmood & Jehad Saad Alqurni & Ahmed S. Salama & Yazeed Alzahrani, 2024. "Harnessing Deep Transfer Learning With Imaging Technology For Underwater Object Detection And Tracking In Consumer Electronics," FRACTALS (fractals), World Scientific Publishing Co. Pte. Ltd., vol. 32(09n10), pages 1-17.
  • Handle: RePEc:wsi:fracta:v:32:y:2024:i:09n10:n:s0218348x25400328
    DOI: 10.1142/S0218348X25400328

    Download full text from publisher

    File URL: http://www.worldscientific.com/doi/abs/10.1142/S0218348X25400328
    Download Restriction: Access to full text is restricted to subscribers

    File URL: https://libkey.io/10.1142/S0218348X25400328?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:wsi:fracta:v:32:y:2024:i:09n10:n:s0218348x25400328. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Tai Tone Lim (email available below). General contact details of provider: https://www.worldscientific.com/worldscinet/fractals .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.