Abstract
A real-time multitarget tracking system for panoramic video with multisensor information fusion is studied and implemented by combining dual neural networks, fused geometric features, and deep learning-based real-time target tracking algorithms. The motion model of multisensor information fusion is analyzed, a dual neural network perturbation model is introduced, and the state variables of the system are determined. Combined with the data-structure model of multisensor information fusion, a least-squares optimization function is constructed: the image sensor contributes an observation-constrained residual term on the pixel plane, the inertial measurement unit contributes an observation-constrained residual term based on the motion model, and the motion relationship between the sensors contributes a motion-constrained residual term, yielding a mathematical model for real-time multitarget tracking in panoramic video. Simulation results show that, compared with the optimal distributed multitarget fusion algorithm without feedback, the proposed dual neural network fusion algorithm improves integrated position estimation accuracy by 10.96% and integrated speed estimation accuracy by 6.32%.
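The three-residual least-squares objective described above can be sketched in code. This is a minimal toy illustration, not the paper's model: the camera matrix `K`, the pixel observation `z_pix`, the IMU prediction `p_pred`, and the camera-IMU offset `t_cam_imu` are all assumed values for a 2-D target position, chosen only to show how the pixel-plane residual, motion-model residual, and inter-sensor motion-constraint residual are stacked and minimized jointly.

```python
# Minimal sketch of the stacked least-squares fusion objective.
# All names and numeric values below are assumed for illustration.
import numpy as np
from scipy.optimize import least_squares

K = np.array([[500.0, 0.0], [0.0, 500.0]])   # assumed camera scaling (pixels per metre)
z_pix = np.array([1010.0, 495.0])            # observed target pixel coordinates
p_pred = np.array([2.0, 1.0])                # IMU motion-model prediction of the position
t_cam_imu = np.array([0.02, 0.0])            # assumed camera-IMU extrinsic offset (m)

def residuals(x):
    """Stack the three residual blocks over the state [p_cam, p_imu]."""
    p_cam, p_imu = x[:2], x[2:]
    r_cam = K @ p_cam - z_pix              # image sensor: pixel-plane observation residual
    r_imu = p_imu - p_pred                 # IMU: motion-model observation residual
    r_ext = p_cam - (p_imu + t_cam_imu)    # inter-sensor motion-constraint residual
    return np.concatenate([r_cam, r_imu, r_ext])

sol = least_squares(residuals, x0=np.zeros(4))
p_cam_hat = sol.x[:2]   # fused position estimate in the camera frame
```

Because all three residuals are linear here, the solver converges to the single global minimum; in the paper's setting the camera projection and motion model would make the residuals nonlinear, but the stacking pattern is the same.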
The proposed dual neural network-based algorithm for real-time multitarget tracking in panoramic video effectively improves tracking accuracy on degraded frames (motion blur, target occlusion, defocus, etc.), and multiframe feature fusion improves the stability of target localization and category detection. This research provides technical support and a theoretical basis for real-time multitarget tracking in panoramic video.
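The multiframe feature fusion idea can be sketched as a running blend of per-frame feature maps, so that a single degraded frame contributes less to the fused representation. This is a simplified stand-in, not the paper's network: the exponential-moving-average weight `alpha`, the 4x4 "feature map" shape, and the noise level are all assumed for illustration.

```python
# Minimal sketch of multiframe feature fusion via an exponential moving
# average over per-frame feature maps. Shapes, alpha, and noise levels
# are assumed for illustration only.
import numpy as np

def fuse_features(frames, alpha=0.6):
    """Blend each new frame's feature map into a running fused map."""
    fused = frames[0].astype(float)
    for f in frames[1:]:
        fused = alpha * fused + (1.0 - alpha) * f
    return fused

rng = np.random.default_rng(0)
clean = np.ones((4, 4))                       # stand-in for the true feature map
# Each frame is the clean map corrupted by noise (blur/occlusion stand-in).
frames = [clean + 0.5 * rng.standard_normal((4, 4)) for _ in range(8)]
fused = fuse_features(frames)
```

Averaging across frames suppresses per-frame corruption, which is why the fused map tracks the underlying signal more stably than any single degraded frame.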
Item handle: RePEc:hin:jnlmpe:8313471. Provider: https://www.hindawi.com .