Author
Listed:
- Haoran Yang
(College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China)
- Juanjuan Wang
(College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China)
- Yi Miao
(College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China)
- Yulu Yang
(College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China)
- Zengshun Zhao
(College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China
School of Control Science & Engineering, Shandong University, Jinan 250061, China
Department of Electrical & Computer Engineering, University of Florida, Gainesville, FL 32611, USA)
- Zhigang Wang
(Key Laboratory of Computer Vision and System, Ministry of Education, Tianjin Key Laboratory of Intelligence Computing and Novel Software Technology, Tianjin University of Technology, Tianjin 300384, China)
- Qian Sun
(College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China)
- Dapeng Oliver Wu
(Department of Electrical & Computer Engineering, University of Florida, Gainesville, FL 32611, USA)
Abstract
As one of the core tasks of intelligent monitoring, target tracking is the basis for video content analysis and processing. In visual tracking, occlusion, illumination changes, and variations in pose and scale cause large appearance changes in the target object and the background over time, and handling these changes remains the main challenge for robust tracking. In this paper, we present a new robust algorithm (STC-KF) based on the spatio-temporal context and Kalman filtering. Our approach introduces a novel formulation of the context information that exploits the entire local region around the target, rather than only sparse key-point information, so that important context related to the target is not lost. The state of the object during tracking is determined from the Euclidean distance between the image intensities of two consecutive frames. The prediction of the Kalman filter is then used as the Kalman observation of the object position and marked on the next frame. The performance of the proposed STC-KF algorithm is evaluated and compared with the original STC algorithm. Experimental results on benchmark sequences show that the proposed method outperforms the original STC algorithm under heavy occlusion and large appearance changes.
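The abstract describes two components: a constant-velocity Kalman filter that predicts and corrects the target position, and an occlusion test based on the Euclidean distance between the image intensities of two consecutive frames. The sketch below is a minimal illustration of that combination, not the authors' implementation; the class and function names, the state model, and the noise parameters are all assumptions chosen for clarity.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2-D constant-velocity Kalman filter for a target centre.
    State vector x = [px, py, vx, vy]^T (position and velocity)."""

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)
        self.P = np.eye(4)                      # state covariance
        self.F = np.eye(4)                      # transition: p += v * dt
        self.F[0, 2] = dt
        self.F[1, 3] = dt
        self.H = np.zeros((2, 4))               # we observe position only
        self.H[0, 0] = 1.0
        self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)                  # process noise
        self.R = r * np.eye(2)                  # measurement noise

    def predict(self):
        """Time update; returns the predicted position."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Measurement update with observed position z = (px, py)."""
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

def occluded(patch_prev, patch_cur, thresh):
    """Hypothetical occlusion test: flag occlusion when the Euclidean
    distance between the intensities of two consecutive frames' target
    patches exceeds a threshold, as described in the abstract."""
    d = np.linalg.norm(patch_cur.astype(float) - patch_prev.astype(float))
    return d > thresh
```

In a tracking loop one would call `predict()` each frame and, when `occluded(...)` is false, correct with the tracker's detected position via `update(...)`; when occlusion is flagged, the prediction itself can serve as the observation for the next frame, mirroring the scheme the abstract outlines.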
Suggested Citation
Haoran Yang & Juanjuan Wang & Yi Miao & Yulu Yang & Zengshun Zhao & Zhigang Wang & Qian Sun & Dapeng Oliver Wu, 2019.
"Combining Spatio-Temporal Context and Kalman Filtering for Visual Tracking,"
Mathematics, MDPI, vol. 7(11), pages 1-14, November.
Handle:
RePEc:gam:jmathe:v:7:y:2019:i:11:p:1059-:d:283899
Download full text from publisher
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:7:y:2019:i:11:p:1059-:d:283899. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.