Author
Listed:
- Xin Liu
(Faculty of Humanities and Arts, Macau University of Science and Technology, Macau 999078, China
These authors contributed equally to this work.)
- Fugang Wang
(Faculty of Humanities and Arts, Macau University of Science and Technology, Macau 999078, China
These authors contributed equally to this work.)
- Hui Zeng
(School of Design, Jiangnan University, Wuxi 214122, China)
- Yile Chen
(Faculty of Humanities and Arts, Macau University of Science and Technology, Macau 999078, China)
- Liang Zheng
(Faculty of Humanities and Arts, Macau University of Science and Technology, Macau 999078, China)
- Junming Chen
(Faculty of Humanities and Arts, Macau University of Science and Technology, Macau 999078, China)
Abstract
Micro-expressions, fleeting and often unnoticed facial cues, hold the key to uncovering concealed emotions, with significant implications for understanding emotion, cognition, and psychological processes. However, capturing micro-expression information is challenging because of its instantaneous and subtle nature. In real scenarios, micro-expressions are further affected by unpredictable degradation factors such as device performance and weather, so model degradation persists, and directly training deep networks or prepending an image restoration network yields unsatisfactory results, hindering the development of micro-expression recognition in real-world applications. This study aims to develop an advanced micro-expression recognition algorithm to promote research on micro-expression applications in psychology. First, Generative Adversarial Networks (GANs) are employed to build high-quality micro-expression generation models, which are then used as prior decoders to model micro-expression features. Subsequently, the GAN priors of the deep neural network are fine-tuned using low-quality facial micro-expression images. The designed micro-expression GAN module generates latent codes and noise inputs suitable for the micro-expression GAN blocks from the deep and shallow features of the deep neural network, controlling the reconstruction of facial structure, local details, and accurate expressions to enhance the stability of the subsequent recognition network. Additionally, a Multi-Scale Dynamic Cross-Domain (MSCD) module is proposed to dynamically adjust the input of reconstructed features to different task representation layers, effectively integrating the reconstructed features and improving micro-expression recognition performance. Experimental results demonstrate that our method consistently achieves superior performance on multiple datasets, with particularly significant gains in micro-expression recognition for severely degraded facial images in real scenarios.
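To picture the GAN-prior idea the abstract describes, the sketch below maps deep (semantic) encoder features to latent codes and shallow (spatial) features to per-resolution noise maps that would drive a pretrained StyleGAN-like generator used as a prior decoder. This is a minimal PyTorch sketch under assumed interfaces; all names and shapes (PriorEmbedder, n_latents, feat_dim, the noise resolutions) are illustrative assumptions, not the authors' published code.

# Deep features -> W+ style codes; shallow features -> noise maps for the
# generator blocks. The generator itself (frozen or fine-tuned) is assumed.
import torch
import torch.nn as nn

class PriorEmbedder(nn.Module):
    def __init__(self, feat_dim=512, latent_dim=512, n_latents=14,
                 noise_resolutions=(8, 16, 32, 64)):
        super().__init__()
        # Deep (pooled) encoder features -> one latent code per generator layer.
        self.to_latent = nn.Linear(feat_dim, n_latents * latent_dim)
        # Shallow feature maps -> single-channel noise, one head per resolution.
        self.to_noise = nn.ModuleList([
            nn.Conv2d(feat_dim // 8, 1, kernel_size=3, padding=1)
            for _ in noise_resolutions
        ])
        self.n_latents, self.latent_dim = n_latents, latent_dim

    def forward(self, deep_feat, shallow_feats):
        # deep_feat: (B, feat_dim) pooled encoder output.
        # shallow_feats: list of (B, feat_dim // 8, H_i, W_i) feature maps.
        latents = self.to_latent(deep_feat)
        latents = latents.view(-1, self.n_latents, self.latent_dim)
        noises = [conv(f) for conv, f in zip(self.to_noise, shallow_feats)]
        return latents, noises  # consumed by the GAN prior decoder

Splitting control this way mirrors the abstract's claim: latent codes steer global facial structure and expression, while spatially resolved noise carries local detail into the reconstruction.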
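The MSCD module's dynamic routing of reconstructed features into different task representation layers can likewise be pictured as a gated, per-scale fusion. The following hedged sketch shows one such fusion step; the channel-wise gating design and all names are assumptions made for illustration, not the published module.

# One fusion step: a learned gate decides, per channel, how much of the
# reconstructed feature to inject into the matching recognition layer.
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Gate computed from both streams, squeezed to a channel descriptor.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, recog_feat, recon_feat):
        # recog_feat, recon_feat: (B, C, H, W) at the same scale.
        g = self.gate(torch.cat([recog_feat, recon_feat], dim=1))
        # Gated residual injection keeps the recognition stream stable
        # when the reconstructed features are unreliable, matching the
        # abstract's emphasis on robustness under severe degradation.
        return recog_feat + g * recon_feat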
Suggested Citation
Xin Liu & Fugang Wang & Hui Zeng & Yile Chen & Liang Zheng & Junming Chen, 2025.
"PRNet: A Priori Embedded Network for Real-World Blind Micro-Expression Recognition,"
Mathematics, MDPI, vol. 13(5), pages 1-16, February.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:5:p:749-:d:1599376