Author
Listed:
- G. Balachandran (Jeppiaar Engineering College)
- S. Ranjith (Jeppiaar Engineering College)
- T. R. Chenthil (Jeppiaar Engineering College)
- G. C. Jagan (Jeppiaar Engineering College)
Abstract
Facial expression-based Emotion Recognition (FER) is crucial in human–computer interaction and affective computing, particularly when addressing diverse age groups. This paper introduces the Multi-Scale Vision Transformer with Contrastive Learning (MViT-CnG), an age-adaptive FER approach designed to enhance the accuracy and interpretability of emotion recognition models across different age groups. The MViT-CnG model leverages vision transformers and contrastive learning to capture intricate facial features, ensuring robust performance despite diverse and dynamic facial appearances. Contrastive learning also significantly enhances the model's interpretability, which is vital for building trust in automated systems and facilitating human–machine collaboration. Additionally, this approach enriches the model's capacity to discern shared and distinct features within facial expressions, improving its ability to generalize across age groups. Evaluations on the FER-2013 and CK+ datasets highlight the model's broad generalization capabilities: FER-2013 covers a wide range of emotions across diverse age groups, while CK+ focuses on posed expressions in controlled environments. The MViT-CnG model adapts effectively to both datasets, showcasing its versatility and reliability across distinct data characteristics. Performance results demonstrated that the MViT-CnG model achieved superior accuracy across all emotion recognition labels, with a 99.6% accuracy rate on FER-2013 and 99.5% on CK+, indicating significant improvements in recognizing subtle facial expressions. Comprehensive evaluations revealed that the model's precision, recall, and F1-score are consistently higher than those of existing models, confirming its robustness and reliability in facial emotion recognition tasks.
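To make the abstract's two central ingredients concrete, the sketch below pairs a multi-scale vision-transformer encoder (patch embeddings at two scales feeding a shared transformer) with a supervised contrastive loss added to the usual cross-entropy objective. This is an illustrative PyTorch sketch of the general technique, not the authors' MViT-CnG implementation: the layer sizes, the two patch scales, the projection head, and the loss weighting are all assumptions, since the paper's exact design is not reproduced on this page.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScalePatchEmbed(nn.Module):
    # Embed the image at several patch sizes and concatenate the token
    # sequences, so the transformer sees both fine and coarse facial detail.
    def __init__(self, patch_sizes=(4, 8), dim=128):
        super().__init__()
        self.projs = nn.ModuleList(
            [nn.Conv2d(1, dim, kernel_size=p, stride=p) for p in patch_sizes]
        )

    def forward(self, x):                          # x: (B, 1, H, W)
        tokens = [proj(x).flatten(2).transpose(1, 2) for proj in self.projs]
        return torch.cat(tokens, dim=1)            # (B, N_total, dim)

class MViTSketch(nn.Module):
    def __init__(self, dim=128, depth=4, heads=4, n_classes=7):
        super().__init__()
        self.embed = MultiScalePatchEmbed(dim=dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.cls_head = nn.Linear(dim, n_classes)  # 7 FER-2013 emotion labels
        self.proj_head = nn.Linear(dim, 64)        # projection for the contrastive loss

    def forward(self, x):
        z = self.encoder(self.embed(x)).mean(dim=1)           # mean-pool tokens
        return self.cls_head(z), F.normalize(self.proj_head(z), dim=1)

def supcon_loss(feats, labels, tau=0.1):
    # Supervised contrastive loss: pull same-emotion embeddings together,
    # push different-emotion embeddings apart (Khosla et al., 2020 style).
    sim = feats @ feats.T / tau                    # cosine similarities / temperature
    mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    mask.fill_diagonal_(0)                         # a sample is not its own positive
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()
    self_mask = 1 - torch.eye(len(labels), device=feats.device)
    log_prob = logits - torch.log((torch.exp(logits) * self_mask).sum(dim=1, keepdim=True))
    pos = mask.sum(dim=1).clamp(min=1)             # avoid division by zero
    return -((mask * log_prob).sum(dim=1) / pos).mean()

# Joint objective on a dummy batch of 48x48 grayscale faces (FER-2013 size).
model = MViTSketch()
x, y = torch.randn(8, 1, 48, 48), torch.randint(0, 7, (8,))
logits, feats = model(x)
loss = F.cross_entropy(logits, y) + 0.5 * supcon_loss(feats, y)  # 0.5 weight is an assumption
loss.backward()

The contrastive term encourages embeddings of the same emotion to cluster regardless of which face (and hence which age group) produced them, which is the mechanism the abstract credits for generalization across age groups.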
Suggested Citation
G. Balachandran & S. Ranjith & T. R. Chenthil & G. C. Jagan, 2025.
"Facial expression-based emotion recognition across diverse age groups: a multi-scale vision transformer with contrastive learning approach,"
Journal of Combinatorial Optimization, Springer, vol. 49(1), pages 1-39, January.
Handle:
RePEc:spr:jcomop:v:49:y:2025:i:1:d:10.1007_s10878-024-01241-8
DOI: 10.1007/s10878-024-01241-8
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jcomop:v:49:y:2025:i:1:d:10.1007_s10878-024-01241-8. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.