
Deep Reinforcement Learning in the Advanced Cybersecurity Threat Detection and Protection

Author

Listed:
  • Mohit Sewak

    (Security & Compliance Research, Microsoft R&D India Pvt. Ltd.)

  • Sanjay K. Sahay

    (BITS Pilani, Goa Campus)

  • Hemant Rathore

    (BITS Pilani, Goa Campus)

Abstract

The cybersecurity threat landscape has lately grown highly complex. Threat actors exploit weaknesses in network and endpoint security in a coordinated manner to perpetrate sophisticated attacks that can bring down an entire network and many of its critical hosts. To defend against such attacks, cybersecurity solutions are upgrading from traditional to advanced machine learning and deep learning defense mechanisms for threat detection and protection. The application of these techniques has been well reviewed in the scientific literature. Deep reinforcement learning has shown great promise in developing AI solutions for areas that previously required advanced human cognition, and its techniques and algorithms have excelled in applications ranging from games to industrial processes, where they are claimed to augment systems with general AI capabilities. These algorithms have recently also been applied in cybersecurity, especially in threat detection and protection, where they achieve state-of-the-art results. Unlike supervised machine learning and deep learning, deep reinforcement learning is used in more diverse ways and is powering many innovative applications across the threat-defense landscape. However, no comprehensive review of deep reinforcement learning applications in advanced cybersecurity threat detection and protection exists. In this paper, we fill this gap and provide a comprehensive review of the different applications of deep reinforcement learning in this field.
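
For readers new to this framing, the sketch below shows one minimal way threat detection can be cast as a reinforcement learning problem: a defender agent observes a feature vector describing a network event and learns, from reward feedback alone, whether to allow or block it. This is an illustrative toy only, not the method of the paper or of any work it reviews; the synthetic event generator, feature count, network sizes, and reward scheme are all assumptions made for the example.

    # A minimal DQN-style sketch (illustrative assumptions throughout):
    # a defender agent learns an allow/block policy for network events
    # from reward feedback, with each decision a one-step episode.
    import random
    import numpy as np
    import torch
    import torch.nn as nn

    N_FEATURES = 8   # hypothetical number of features per network event
    ACTIONS = 2      # action 0 = allow the event, action 1 = block it

    def synthetic_event():
        """Fake telemetry: malicious events have shifted feature means."""
        label = random.randint(0, 1)                   # 1 = malicious
        mean = 1.0 if label else 0.0
        return np.random.normal(mean, 1.0, N_FEATURES).astype(np.float32), label

    q_net = nn.Sequential(                             # small Q-network
        nn.Linear(N_FEATURES, 32), nn.ReLU(),
        nn.Linear(32, ACTIONS),
    )
    opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    eps = 1.0                                          # exploration rate

    for step in range(3000):
        features, label = synthetic_event()
        q_values = q_net(torch.from_numpy(features))   # Q(s, a) per action
        # Epsilon-greedy action selection.
        if random.random() < eps:
            action = random.randrange(ACTIONS)
        else:
            action = int(q_values.argmax())
        # Reward: +1 for a correct allow/block decision, -1 otherwise.
        reward = 1.0 if action == label else -1.0
        # One-step episodes, so the TD target reduces to the reward itself.
        target = q_values.detach().clone()
        target[action] = reward
        loss = nn.functional.mse_loss(q_values, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        eps = max(0.05, eps * 0.999)                   # decay exploration

Because each decision here is an independent one-step episode, the temporal-difference target collapses to the immediate reward; the fuller DQN machinery (experience replay, target networks, multi-step returns) matters once a defender's actions influence future states, which is the sequential setting that makes deep reinforcement learning attractive for threat defense.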

Suggested Citation

  • Mohit Sewak & Sanjay K. Sahay & Hemant Rathore, 2023. "Deep Reinforcement Learning in the Advanced Cybersecurity Threat Detection and Protection," Information Systems Frontiers, Springer, vol. 25(2), pages 589-611, April.
  • Handle: RePEc:spr:infosf:v:25:y:2023:i:2:d:10.1007_s10796-022-10333-x
    DOI: 10.1007/s10796-022-10333-x

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s10796-022-10333-x
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s10796-022-10333-x?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Hemant Rathore & Sanjay K. Sahay & Piyush Nikam & Mohit Sewak, 2021. "Robust Android Malware Detection System Against Adversarial Attacks Using Q-Learning," Information Systems Frontiers, Springer, vol. 23(4), pages 867-882, August.
    2. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    3. Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan & et al., 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Sagar Samtani & Ziming Zhao & Ram Krishnan, 2023. "Secure Knowledge Management and Cybersecurity in the Era of Artificial Intelligence," Information Systems Frontiers, Springer, vol. 25(2), pages 425-429, April.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wang, Xianjia & Yang, Zhipeng & Liu, Yanli & Chen, Guici, 2023. "A reinforcement learning-based strategy updating model for the cooperative evolution," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 618(C).
    2. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    3. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    4. Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klöckl, 2024. "Computational Performance of Deep Reinforcement Learning to Find Nash Equilibria," Computational Economics, Springer; Society for Computational Economics, vol. 63(2), pages 529-576, February.
    5. Wang, Xin & Liu, Shuo & Yu, Yifan & Yue, Shengzhi & Liu, Ying & Zhang, Fumin & Lin, Yuanshan, 2023. "Modeling collective motion for fish schooling via multi-agent reinforcement learning," Ecological Modelling, Elsevier, vol. 477(C).
    6. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    7. Xuan-Kun Li & Jian-Xu Ma & Xiang-Yu Li & Jun-Jie Hu & Chuan-Yang Ding & Feng-Kai Han & Xiao-Min Guo & Xi Tan & Xian-Min Jin, 2024. "High-efficiency reinforcement learning with hybrid architecture photonic integrated circuit," Nature Communications, Nature, vol. 15(1), pages 1-10, December.
    8. János Kramár & Tom Eccles & Ian Gemp & Andrea Tacchetti & Kevin R. McKee & Mateusz Malinowski & Thore Graepel & Yoram Bachrach, 2022. "Negotiation and honesty in artificial intelligence methods for the board game of Diplomacy," Nature Communications, Nature, vol. 13(1), pages 1-15, December.
    9. Weichao Hu & Hongzhang Mu & Yanyan Chen & Yixin Liu & Xiaosong Li, 2023. "Modeling Interactions of Autonomous/Manual Vehicles and Pedestrians with a Multi-Agent Deep Deterministic Policy Gradient," Sustainability, MDPI, vol. 15(7), pages 1-14, April.
    10. Malte Reinschmidt & József Fortágh & Andreas Günther & Valentin V. Volchkov, 2024. "Reinforcement learning in cold atom experiments," Nature Communications, Nature, vol. 15(1), pages 1-11, December.
    11. Jin, Jiahuan & Cui, Tianxiang & Bai, Ruibin & Qu, Rong, 2024. "Container port truck dispatching optimization using Real2Sim based deep reinforcement learning," European Journal of Operational Research, Elsevier, vol. 315(1), pages 161-175.
    12. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    13. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    14. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    15. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    16. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    17. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer; The Psychometric Society, vol. 83(1), pages 67-88, March.
    18. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    19. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    20. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
