Author
Listed:
- Liyan Xiong
- Hu Yi
- Xiaohui Huang
- Weichun Huang
- Nouman Ali
Abstract
Accurate counting in dense scenes can effectively prevent the occurrence of abnormal events, which is crucial for flow management, traffic control, and urban safety. In recent years, deep learning has significantly improved the performance of counting models, but the task still faces many challenges, including the uneven distribution of targets across the image and its background, drastic changes in target scale, and severe occlusion. To address these problems, this paper proposes a spatial context feature fusion network, abbreviated as SCFFNet, which understands highly congested scenes, performs accurate counting, and produces high-quality estimated density maps. SCFFNet first computes scale-aware features with convolutions of multiple kernel sizes, adaptively encoding the scale of the contextual information needed to estimate density maps accurately; it then calibrates and refines the fused feature maps through a channel spatial attention-aware module, which improves the model’s ability to suppress the background and focus on salient features. Finally, a dilated convolution module generates the estimated density map. We conduct experiments on five public crowd datasets, UCF_CC_50, WorldExpo’10, ShanghaiTech, Mall, and Beijing BRT, and the results show that our method achieves lower counting errors than existing state-of-the-art methods. In addition, we extend SCFFNet to count other objects, such as vehicles in the HBR_YD vehicle dataset, and the experimental results show that the proposed method significantly improves output quality and achieves higher accuracy than previous methods.
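To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of an SCFFNet-style model: parallel convolutions with several kernel sizes produce scale-aware features, a channel-spatial attention module recalibrates the fused maps, and a dilated-convolution back end regresses the density map whose sum gives the count. The class names, channel widths, kernel sizes, and dilation rates here are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of an SCFFNet-style counting network (PyTorch).
# All module names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Parallel convolutions at several kernel sizes, fused by concatenation,
    approximating the scale-aware contextual feature extraction step."""
    def __init__(self, in_ch, branch_ch=64, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)


class ChannelSpatialAttention(nn.Module):
    """Channel reweighting followed by a spatial attention map, standing in
    for the channel spatial attention-aware module that recalibrates and
    refines the fused features."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid()
        )

    def forward(self, x):
        x = x * self.channel_fc(x)                      # reweight channels
        avg_map = x.mean(dim=1, keepdim=True)           # per-pixel statistics
        max_map = x.max(dim=1, keepdim=True).values
        return x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))


class SCFFNetSketch(nn.Module):
    """Multi-scale front end -> attention recalibration -> dilated back end
    that regresses a single-channel density map."""
    def __init__(self):
        super().__init__()
        self.front = MultiScaleBlock(3)                 # 4 branches * 64 = 256 ch
        self.attention = ChannelSpatialAttention(256)
        self.back = nn.Sequential(                      # dilation widens context
            nn.Conv2d(256, 128, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, x):
        return self.back(self.attention(self.front(x)))


if __name__ == "__main__":
    model = SCFFNetSketch()
    img = torch.randn(1, 3, 384, 384)                   # dummy RGB crowd image
    density = model(img)
    print(density.shape, float(density.sum()))          # count = sum of density map
```

Summing the predicted density map over all pixels yields the estimated count; the dilated back end preserves the spatial resolution of the fused features, so the map stays aligned with the input image.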
Suggested Citation
Liyan Xiong & Hu Yi & Xiaohui Huang & Weichun Huang & Nouman Ali, 2022.
"SCFFNet: Spatial Context Feature Fusion Network for Understanding the Highly Congested Scenes,"
Mathematical Problems in Engineering, Hindawi, vol. 2022, pages 1-18, June.
Handle:
RePEc:hin:jnlmpe:3277995
DOI: 10.1155/2022/3277995