Authors:
- Md. Waliul Hasan
(Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh)
- Shahria Shanto
(Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh)
- Jannatun Nayeema
(Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh)
- Rashik Rahman
(Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
Department of Computer Science, University of Calgary, Calgary, AB T2N 1N4, Canada)
- Tanjina Helaly
(Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh)
- Ziaur Rahman
(School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia)
- Sk. Tanzir Mehedi
(Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia)
Abstract
Early fire detection is key to saving lives and limiting property damage. Advanced technology can detect fires in high-risk zones with minimal human presence before they escalate beyond control. This study presents a more advanced model structure based on the YOLOv8 architecture to enhance early fire recognition. Although YOLOv8 excels at real-time object detection, it can be further tuned to the nuances of fire detection. We achieved this by incorporating an additional context-to-flow layer, enabling the YOLOv8 model to capture both local and global contextual information more effectively. The context-to-flow layer enhances the model’s ability to recognize complex patterns such as smoke and flames, leading to more effective feature extraction. This extra layer helps the model better detect fire and smoke by improving its focus on fine-grained details and minor variations, which is crucial in challenging environments with low visibility, dynamic fire behavior, and complex backgrounds. Our proposed model achieved 2.9% higher precision, 4.7% higher recall, and a 4% higher F1-score compared to the default YOLOv8 model. This study found that the architectural modification increases information flow and improves fire detection across all fire sizes, from tiny sparks to massive flames. We also applied explainable AI techniques to interpret the model’s decision-making, adding transparency and improving trust in its predictions. Ultimately, this enhanced system demonstrates strong efficacy and accuracy, paving the way for further advances in autonomous fire detection systems.
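The abstract describes a context-to-flow layer that fuses local and global contextual information, but this record does not give the layer's exact definition. The following is a minimal, purely illustrative sketch of that general idea: each cell of a 2D feature map is blended from its local 3x3 neighborhood mean (local context) and the map-wide mean (global context). The function name, the blending scheme, and the `alpha` weight are all assumptions for illustration, not the paper's implementation.

```python
def fuse_context(grid, alpha=0.5):
    """Blend local and global context for a 2D feature map (list of lists).

    Hypothetical sketch: alpha weights the local 3x3 neighborhood mean
    against the global mean of the whole map. Both the scheme and the
    parameter are illustrative assumptions, not the paper's layer.
    """
    h, w = len(grid), len(grid[0])
    flat = [v for row in grid for v in row]
    global_mean = sum(flat) / len(flat)
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            # Local 3x3 neighborhood mean, clipped at the borders.
            neigh = [grid[y][x]
                     for y in range(max(0, i - 1), min(h, i + 2))
                     for x in range(max(0, j - 1), min(w, j + 2))]
            local_mean = sum(neigh) / len(neigh)
            row.append(alpha * local_mean + (1 - alpha) * global_mean)
        out.append(row)
    return out
```

In a real detector this kind of fusion would operate on multi-channel convolutional feature maps (e.g. inside a PyTorch module) rather than plain Python lists; the toy version above only conveys the local-plus-global aggregation idea.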
Suggested Citation
Md. Waliul Hasan & Shahria Shanto & Jannatun Nayeema & Rashik Rahman & Tanjina Helaly & Ziaur Rahman & Sk. Tanzir Mehedi, 2024.
"An Explainable AI-Based Modified YOLOv8 Model for Efficient Fire Detection,"
Mathematics, MDPI, vol. 12(19), pages 1-21, September.
Handle:
RePEc:gam:jmathe:v:12:y:2024:i:19:p:3042-:d:1488203