Author
Quan Huang & Xiaofeng Li
Abstract
Multimodal biometrics fusion plays an important role in the field of biometrics, and this paper therefore presents a multimodal biometrics fusion algorithm based on deep reinforcement learning. To reduce the influence of user behavior, users' personal characteristics, and ambient light on image data quality, the data are preprocessed through data transformation and region segmentation of the single-modality biometric images. A two-dimensional Gabor filter is used to analyze the texture of local sub-blocks, qualitatively describe the similarity between the filter and each sub-block, and extract the phase and local amplitude information of the multimodal biometric features. Deep reinforcement learning is used to construct a classifier for each biometric modality, and weighted-sum fusion of the different modalities is performed on the score (fractional) information; on this basis, the multimodal biometrics fusion algorithm is designed. The CASIA-Iris-Interval-V4 and NFBS datasets are used to test the performance of the proposed algorithm. The results show that the fused images are of higher quality, the feature extraction accuracy lies between 84% and 93%, the average feature classification accuracy is 97%, the multimodal biometric classification time is only 110 ms, and the multimodal biometric fusion time is only 550 ms, indicating that the algorithm is effective and practical.
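The abstract names two concrete steps: 2D Gabor filtering of local sub-blocks to obtain phase and local amplitude features, and weighted-sum fusion of per-modality score information. The sketch below is not the authors' implementation; it only illustrates those two steps under assumed kernel parameters and fusion weights, with the function names (gabor_kernel, gabor_phase_amplitude, weighted_score_fusion) introduced here for illustration.

```python
# Hedged sketch of the two steps described in the abstract:
#  (1) 2D Gabor filtering of a grayscale sub-block -> phase and amplitude maps
#  (2) weighted-sum fusion of normalized per-modality match scores
# All parameters and weights below are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.signal import convolve2d


def gabor_kernel(size=21, sigma=4.0, theta=0.0, lam=8.0, gamma=0.5):
    """Complex 2D Gabor kernel; real part = even (cosine) filter, imag = odd (sine)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(1j * 2 * np.pi * xr / lam)


def gabor_phase_amplitude(sub_block, **kwargs):
    """Filter a local sub-block and return (phase, local amplitude) feature maps."""
    k = gabor_kernel(**kwargs)
    even = convolve2d(sub_block, k.real, mode="same", boundary="symm")
    odd = convolve2d(sub_block, k.imag, mode="same", boundary="symm")
    return np.arctan2(odd, even), np.hypot(even, odd)


def weighted_score_fusion(scores, weights):
    """Weighted-sum fusion of normalized match scores from different modalities."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # keep the weights on a common scale
    return float(np.dot(weights, scores))


# Usage example with made-up iris and face match scores and weights.
phase, amplitude = gabor_phase_amplitude(np.random.rand(64, 64))
fused_score = weighted_score_fusion([0.82, 0.67], [0.6, 0.4])
```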
Suggested Citation
Quan Huang & Xiaofeng Li, 2022.
"Multimodal Biometrics Fusion Algorithm Using Deep Reinforcement Learning,"
Mathematical Problems in Engineering, Hindawi, vol. 2022, pages 1-9, March.
Handle: RePEc:hin:jnlmpe:8544591
DOI: 10.1155/2022/8544591