Printed from https://ideas.repec.org/a/gam/jmathe/v11y2023i7p1694-d1113831.html

LCAM: Low-Complexity Attention Module for Lightweight Face Recognition Networks

Author

Listed:
  • Seng Chun Hoo

    (School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Pulau Pinang, Malaysia
    Department of Computer Engineering Technology, Japan-Malaysia Technical Institute, Taman Perindustrian Bukit Minyak, Simpang Ampat 14100, Pulau Pinang, Malaysia)

  • Haidi Ibrahim

    (School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Pulau Pinang, Malaysia)

  • Shahrel Azmin Suandi

    (School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Pulau Pinang, Malaysia)

  • Theam Foo Ng

    (Centre of Global Sustainability Studies, Level 5, Hamzah Sendut Library, Universiti Sains Malaysia, Minden 11800, Pulau Pinang, Malaysia)

Abstract

Inspired by the human visual system's ability to concentrate on the important regions of a scene, attention modules recalibrate the weights of either the channel features alone or of both channel and spatial features, prioritizing informative regions while suppressing unimportant information. However, incorporating these modules into a baseline model considerably increases the floating-point operations (FLOPs) and parameter counts, especially for modules with both channel and spatial attention. Despite the success of attention modules in general ImageNet classification tasks, more emphasis should be given to incorporating these modules into face recognition tasks. Hence, a novel attention mechanism with three parallel branches, known as the Low-Complexity Attention Module (LCAM), is proposed. Each branch requires only one convolution operation, so the LCAM is lightweight yet still achieves better performance. Experiments on face verification tasks indicate that LCAM achieves similar or even better results than previous modules that incorporate both channel and spatial attention. Moreover, compared to the baseline models with no attention modules, LCAM improves the average accuracy over seven image-based face recognition datasets by 0.84% on ConvFaceNeXt, 1.15% on MobileFaceNet, and 0.86% on ProxylessFaceNAS.
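The abstract describes a three-branch attention mechanism with a single convolution per branch. The exact LCAM design is not given on this page, so the following is only a generic sketch of that idea: one branch recalibrates channels from a globally pooled descriptor, and the other two recalibrate the height and width axes from axis-pooled descriptors, each via one small 1-D convolution. The function name `lcam_like_attention`, the pooling choices, and the random placeholder kernels are all assumptions for illustration, not the paper's actual module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv1d_same(x, kernel):
    # 1-D convolution with zero padding so the output length equals the input length.
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(kernel)], kernel) for i in range(len(x))])

def lcam_like_attention(x, k=3, rng=None):
    """Sketch of a three-branch attention with one convolution per branch.

    x : feature map of shape (C, H, W).
    Branch 1 recalibrates channels from a globally pooled descriptor;
    branches 2 and 3 recalibrate the H and W axes from axis-pooled
    descriptors. Kernel weights are random stand-ins (untrained).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    C, H, W = x.shape
    kc, kh, kw = (rng.standard_normal(k) * 0.1 for _ in range(3))

    attn_c = sigmoid(conv1d_same(x.mean(axis=(1, 2)), kc))  # (C,)
    attn_h = sigmoid(conv1d_same(x.mean(axis=(0, 2)), kh))  # (H,)
    attn_w = sigmoid(conv1d_same(x.mean(axis=(0, 1)), kw))  # (W,)

    # Broadcast the three attention vectors back over the feature map.
    return x * attn_c[:, None, None] * attn_h[None, :, None] * attn_w[None, None, :]

feat = np.ones((4, 5, 6))
out = lcam_like_attention(feat)
```

Because each branch costs only one 1-D convolution over a pooled descriptor, the extra FLOPs and parameters scale with C + H + W rather than with the full C x H x W feature volume, which is the kind of saving the abstract's "low-complexity" claim refers to.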

Suggested Citation

  • Seng Chun Hoo & Haidi Ibrahim & Shahrel Azmin Suandi & Theam Foo Ng, 2023. "LCAM: Low-Complexity Attention Module for Lightweight Face Recognition Networks," Mathematics, MDPI, vol. 11(7), pages 1-27, April.
  • Handle: RePEc:gam:jmathe:v:11:y:2023:i:7:p:1694-:d:1113831

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/11/7/1694/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/11/7/1694/
    Download Restriction: no

    References listed on IDEAS

    1. Seng Chun Hoo & Haidi Ibrahim & Shahrel Azmin Suandi, 2022. "ConvFaceNeXt: Lightweight Networks for Face Recognition," Mathematics, MDPI, vol. 10(19), pages 1-28, October.
    2. Sadiq H. Abdulhussain & Basheera M. Mahmmod & Amer AlGhadhban & Jan Flusser, 2022. "Face Recognition Algorithm Based on Fast Computation of Orthogonal Moments," Mathematics, MDPI, vol. 10(15), pages 1-28, August.
    3. Tonglai Liu & Ronghai Luo & Longqin Xu & Dachun Feng & Liang Cao & Shuangyin Liu & Jianjun Guo, 2022. "Spatial Channel Attention for Deep Convolutional Neural Networks," Mathematics, MDPI, vol. 10(10), pages 1-10, May.
    4. Jing Yan & Tingliang Liu & Xinyu Ye & Qianzhen Jing & Yuannan Dai, 2021. "Rotating machinery fault diagnosis based on a novel lightweight convolutional neural network," PLOS ONE, Public Library of Science, vol. 16(8), pages 1-20, August.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Minghua Wan & Yuxi Zhang & Guowei Yang & Hongjian Guo, 2023. "Two-Dimensional Exponential Sparse Discriminant Local Preserving Projections," Mathematics, MDPI, vol. 11(7), pages 1-16, April.
    2. Ye Yuan & Jiahao Li & Qi Yu & Jian Liu & Zongdao Li & Qingdu Li & Na Liu, 2024. "A Two-Stage Facial Kinematic Control Strategy for Humanoid Robots Based on Keyframe Detection and Keypoint Cubic Spline Interpolation," Mathematics, MDPI, vol. 12(20), pages 1-16, October.
    3. Xiaoyong Zhang & Rui Xu & Kaixuan Lu & Zhihang Hao & Zhengchao Chen & Mingyong Cai, 2022. "Resource-Based Port Material Yard Detection with SPPA-Net," Sustainability, MDPI, vol. 14(24), pages 1-12, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:11:y:2023:i:7:p:1694-:d:1113831. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.