Printed from https://ideas.repec.org/a/gam/jmathe/v11y2023i17p3626-d1222532.html

DADE-DQN: Dual Action and Dual Environment Deep Q-Network for Enhancing Stock Trading Strategy

Author

Listed:
  • Yuling Huang

    (School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macao, China)

  • Xiaoping Lu

    (School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macao, China)

  • Chujin Zhou

    (School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macao, China)

  • Yunlin Song

    (Department of Engineering Science, Faculty of Innovation Engineering, Macau University of Science and Technology, Taipa, Macao, China)

Abstract

Deep reinforcement learning (DRL) has attracted strong interest since AlphaGo defeated human professionals, and its applications to stock trading are now widespread. In this paper, an enhanced stock trading strategy called Dual Action and Dual Environment Deep Q-Network (DADE-DQN), aimed at increasing profit and reducing risk, is proposed. Our approach has several key highlights. First, to achieve a better balance between exploration and exploitation, a dual-action selection mechanism and a dual-environment mechanism are incorporated into our DQN framework. Second, our approach makes better use of stored transitions by maintaining independent replay memories and performing dual mini-batch updates, leading to faster convergence and more efficient learning. Third, a novel deep network structure that combines Long Short-Term Memory (LSTM) and attention mechanisms is introduced, improving the network’s ability to capture essential features and patterns. In addition, a feature selection method is presented that efficiently refines the input data by using mutual information to identify and eliminate irrelevant features. Evaluation on six datasets shows that our DADE-DQN algorithm outperforms several DRL-based strategies (TDQN, DQN-Pattern, DQN-Vanilla) and traditional strategies (B&H, S&H, MR, TF). For example, on the KS11 dataset, the DADE-DQN strategy achieved a cumulative return of 79.43% and a Sharpe ratio of 2.21, outperforming all other methods. These experimental results demonstrate the effectiveness of our approach in enhancing stock trading strategies.
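
The abstract describes a mutual-information-based feature filter and reports cumulative return and Sharpe ratio as its headline evaluation metrics. The short Python sketch below illustrates those two ingredients only; it is not the authors' implementation. The use of scikit-learn's mutual_info_regression estimator, the 0.01 MI threshold, the 252-trading-day annualization factor, and all variable names are assumptions made for this example.

# Illustrative sketch: mutual-information feature screening and the two
# evaluation metrics quoted in the abstract (cumulative return, Sharpe ratio).
# Threshold and annualization factor are assumed values, not taken from the paper.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_features_by_mutual_info(X, y, feature_names, threshold=0.01):
    """Keep only features whose estimated mutual information with the
    target (e.g., next-day return) exceeds a chosen threshold."""
    mi = mutual_info_regression(X, y, random_state=0)
    keep = mi > threshold
    return X[:, keep], [n for n, k in zip(feature_names, keep) if k], mi

def cumulative_return(daily_returns):
    """Total compounded return of a strategy's daily return series."""
    return float(np.prod(1.0 + np.asarray(daily_returns)) - 1.0)

def sharpe_ratio(daily_returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of a daily return series."""
    r = np.asarray(daily_returns) - risk_free_rate / periods_per_year
    return float(np.mean(r) / (np.std(r) + 1e-12) * np.sqrt(periods_per_year))

# Example usage with random data standing in for OHLCV-derived features.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))                         # 6 candidate features
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=500)   # target correlated with feature 0
names = [f"feat_{i}" for i in range(6)]
X_sel, kept, mi = select_features_by_mutual_info(X, y, names)
print("kept features:", kept)

strategy_returns = rng.normal(loc=0.001, scale=0.01, size=250)
print("cumulative return:", cumulative_return(strategy_returns))
print("Sharpe ratio:", sharpe_ratio(strategy_returns))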

Suggested Citation

  • Yuling Huang & Xiaoping Lu & Chujin Zhou & Yunlin Song, 2023. "DADE-DQN: Dual Action and Dual Environment Deep Q-Network for Enhancing Stock Trading Strategy," Mathematics, MDPI, vol. 11(17), pages 1-27, August.
  • Handle: RePEc:gam:jmathe:v:11:y:2023:i:17:p:3626-:d:1222532

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/11/17/3626/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/11/17/3626/
    Download Restriction: no

    References listed on IDEAS

    1. Supriya Bajpai, 2021. "Application of deep reinforcement learning for Indian stock trading automation," Papers 2106.16088, arXiv.org.
    2. Xue Guo & Hu Zhang & Tianhai Tian, 2018. "Development of stock correlation networks using mutual information and financial big data," PLOS ONE, Public Library of Science, vol. 13(4), pages 1-16, April.
    3. Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan & et al., 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
    4. Cai, Jianchao & Xu, Kai & Zhu, Yanhui & Hu, Fang & Li, Liuhuan, 2020. "Prediction and analysis of net ecosystem carbon exchange based on gradient boosting regression and random forest," Applied Energy, Elsevier, vol. 262(C).
    5. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan Hui & et al., 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
    6. Zhishun Wang & Wei Lu & Kaixin Zhang & Tianhao Li & Zixi Zhao, 2021. "A parallel-network continuous quantitative trading model with GARCH and PPO," Papers 2105.03625, arXiv.org, revised May 2021.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Bossert, Leonie & Hagendorff, Thilo, 2021. "Animals and AI. The role of animals in AI research and application – An overview and ethical evaluation," Technology in Society, Elsevier, vol. 67(C).
    2. Yang, Zhengzhi & Zheng, Lei & Perc, Matjaž & Li, Yumeng, 2024. "Interaction state Q-learning promotes cooperation in the spatial prisoner's dilemma game," Applied Mathematics and Computation, Elsevier, vol. 463(C).
    3. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    4. Xuan-Kun Li & Jian-Xu Ma & Xiang-Yu Li & Jun-Jie Hu & Chuan-Yang Ding & Feng-Kai Han & Xiao-Min Guo & Xi Tan & Xian-Min Jin, 2024. "High-efficiency reinforcement learning with hybrid architecture photonic integrated circuit," Nature Communications, Nature, vol. 15(1), pages 1-10, December.
    5. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    6. Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
    7. Michael Curry & Alexander Trott & Soham Phade & Yu Bai & Stephan Zheng, 2022. "Analyzing Micro-Founded General Equilibrium Models with Many Agents using Deep Reinforcement Learning," Papers 2201.01163, arXiv.org, revised Feb 2022.
    8. Dong Liu & Feng Xiao & Jian Luo & Fan Yang, 2023. "Deep Reinforcement Learning-Based Holding Control for Bus Bunching under Stochastic Travel Time and Demand," Sustainability, MDPI, vol. 15(14), pages 1-18, July.
    9. Malte Reinschmidt & József Fortágh & Andreas Günther & Valentin V. Volchkov, 2024. "Reinforcement learning in cold atom experiments," Nature Communications, Nature, vol. 15(1), pages 1-11, December.
    10. Jin, Jiahuan & Cui, Tianxiang & Bai, Ruibin & Qu, Rong, 2024. "Container port truck dispatching optimization using Real2Sim based deep reinforcement learning," European Journal of Operational Research, Elsevier, vol. 315(1), pages 161-175.
    11. Cui, Tianxiang & Du, Nanjiang & Yang, Xiaoying & Ding, Shusheng, 2024. "Multi-period portfolio optimization using a deep reinforcement learning hyper-heuristic approach," Technological Forecasting and Social Change, Elsevier, vol. 198(C).
    12. Yuchen Zhang & Wei Yang, 2022. "Breakthrough invention and problem complexity: Evidence from a quasi‐experiment," Strategic Management Journal, Wiley Blackwell, vol. 43(12), pages 2510-2544, December.
    13. Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
    14. Yi, Zonggen & Luo, Yusheng & Westover, Tyler & Katikaneni, Sravya & Ponkiya, Binaka & Sah, Suba & Mahmud, Sadab & Raker, David & Javaid, Ahmad & Heben, Michael J. & Khanna, Raghav, 2022. "Deep reinforcement learning based optimization for a tightly coupled nuclear renewable integrated energy system," Applied Energy, Elsevier, vol. 328(C).
    15. Christophe Chorro & Emmanuelle Jay & Philippe De Peretti & Thibault Soler, 2021. "Frequency causality measures and Vector AutoRegressive (VAR) models: An improved subset selection method suited to parsimonious systems," Documents de travail du Centre d'Economie de la Sorbonne 21013, Université Panthéon-Sorbonne (Paris 1), Centre d'Economie de la Sorbonne.
    16. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    17. Ostheimer, Julia & Chowdhury, Soumitra & Iqbal, Sarfraz, 2021. "An alliance of humans and machines for machine learning: Hybrid intelligent systems and their design principles," Technology in Society, Elsevier, vol. 66(C).
    18. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
    19. Zhou, Yuhao & Wang, Yanwei, 2022. "An integrated framework based on deep learning algorithm for optimizing thermochemical production in heavy oil reservoirs," Energy, Elsevier, vol. 253(C).
    20. Liying Xu & Jiadi Zhu & Bing Chen & Zhen Yang & Keqin Liu & Bingjie Dang & Teng Zhang & Yuchao Yang & Ru Huang, 2022. "A distributed nanocluster based multi-agent evolutionary network," Nature Communications, Nature, vol. 13(1), pages 1-10, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:11:y:2023:i:17:p:3626-:d:1222532. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.