
Application of Deep Q-Network in Portfolio Management

Author

Listed:
  • Ziming Gao
  • Yuan Gao
  • Yi Hu
  • Zhengyong Jiang
  • Jionglong Su

Abstract

Machine learning algorithms and neural networks are widely applied in many areas, such as stock market prediction, face recognition, and population analysis. This paper introduces a strategy for portfolio management in the stock market based on the classic deep reinforcement learning algorithm Deep Q-Network (DQN), a deep neural network optimized by Q-learning. To adapt DQN to the financial market, we first discretize the action space, defined as the portfolio weights across the different assets, so that portfolio management becomes a problem that DQN can solve. Next, we combine a convolutional neural network with a dueling Q-network to enhance the recognition ability of the algorithm. Experimentally, we chose five American stocks with low mutual correlation to test the model. The results demonstrate that the DQN-based strategy outperforms ten other traditional strategies: the profit of the DQN algorithm is 30% higher than that of the other strategies. Moreover, the Sharpe ratio, together with the maximum drawdown, demonstrates that the risk of the policy produced by DQN is the lowest.
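
The abstract's two technical ingredients, a discretized portfolio-weight simplex as the action set and a dueling Q-network with a convolutional feature extractor, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the look-back window, feature count, layer sizes, and discretization granularity below are all illustrative assumptions.

# A minimal sketch (assumptions, not the paper's code) of:
# (1) discretizing the weight simplex so DQN's finite action space applies,
# (2) a dueling Q-network over a conv feature extractor.
import itertools
import torch
import torch.nn as nn

N_ASSETS = 5      # five stocks, as in the paper
WINDOW = 30       # hypothetical look-back window of price history
N_FEATURES = 3    # e.g. open/high/low relative prices (assumption)

def discretize_simplex(n_assets, steps=4):
    """Enumerate weight vectors w with w_i in {0, 1/steps, ..., 1}
    and sum(w) == 1. Each vector is one discrete DQN action."""
    grid = range(steps + 1)
    combos = [c for c in itertools.product(grid, repeat=n_assets)
              if sum(c) == steps]
    return [tuple(x / steps for x in c) for c in combos]

ACTIONS = discretize_simplex(N_ASSETS)

class DuelingConvQNet(nn.Module):
    """Conv layers read the (features x assets x window) price tensor;
    separate value and advantage streams are then recombined."""
    def __init__(self, n_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_FEATURES, 16, kernel_size=(1, 5)), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=(1, WINDOW - 4)), nn.ReLU(),
            nn.Flatten(),
        )
        hidden = 32 * N_ASSETS
        self.value = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(),
                                   nn.Linear(64, 1))
        self.advantage = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(),
                                       nn.Linear(64, n_actions))

    def forward(self, x):
        h = self.features(x)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
        return v + a - a.mean(dim=1, keepdim=True)

net = DuelingConvQNet(len(ACTIONS))
state = torch.randn(1, N_FEATURES, N_ASSETS, WINDOW)  # dummy price window
q_values = net(state)
best_weights = ACTIONS[q_values.argmax(dim=1).item()]
print(len(ACTIONS), "actions; chosen weights:", best_weights)

Note that the number of discrete actions grows combinatorially with the number of assets and the grid resolution, which is the usual cost of making a continuous allocation problem tractable for a value-based method such as DQN.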

Suggested Citation

  • Ziming Gao & Yuan Gao & Yi Hu & Zhengyong Jiang & Jionglong Su, 2020. "Application of Deep Q-Network in Portfolio Management," Papers 2003.06365, arXiv.org.
  • Handle: RePEc:arx:papers:2003.06365

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2003.06365
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. David P. Helmbold & Robert E. Schapire & Yoram Singer & Manfred K. Warmuth, 1998. "On-Line Portfolio Selection Using Multiplicative Updates," Mathematical Finance, Wiley Blackwell, vol. 8(4), pages 325-347, October.
    2. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
    3. Seyoung Park & Hyunson Song & Sungchul Lee, 2019. "Linear programing models for portfolio optimization using a benchmark," The European Journal of Finance, Taylor & Francis Journals, vol. 25(5), pages 435-457, March.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Zhengyong Jiang & Jeyan Thiayagalingam & Jionglong Su & Jinjun Liang, 2023. "CAD: Clustering And Deep Reinforcement Learning Based Multi-Period Portfolio Management Strategy," Papers 2310.01319, arXiv.org.
    2. Pieter M. van Staden & Peter A. Forsyth & Yuying Li, 2023. "A parsimonious neural network approach to solve portfolio optimization problems without using dynamic programming," Papers 2303.08968, arXiv.org.
    3. Panda, Saunak Kumar & Xiang, Yisha & Liu, Ruiqi, 2024. "Dynamic resource matching in manufacturing using deep reinforcement learning," European Journal of Operational Research, Elsevier, vol. 318(2), pages 408-423.
    4. Huanming Zhang & Zhengyong Jiang & Jionglong Su, 2021. "A Deep Deterministic Policy Gradient-based Strategy for Stocks Portfolio Management," Papers 2103.11455, arXiv.org.
    5. Karush Suri & Xiao Qi Shi & Konstantinos Plataniotis & Yuri Lawryshyn, 2021. "TradeR: Practical Deep Hierarchical Reinforcement Learning for Trade Execution," Papers 2104.00620, arXiv.org.
    6. van Staden, Pieter M. & Forsyth, Peter A. & Li, Yuying, 2024. "Across-time risk-aware strategies for outperforming a benchmark," European Journal of Operational Research, Elsevier, vol. 313(2), pages 776-800.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    2. Huanming Zhang & Zhengyong Jiang & Jionglong Su, 2021. "A Deep Deterministic Policy Gradient-based Strategy for Stocks Portfolio Management," Papers 2103.11455, arXiv.org.
    3. Wonsup Shin & Seok-Jun Bu & Sung-Bae Cho, 2019. "Automatic Financial Trading Agent for Low-risk Portfolio Management using Deep Reinforcement Learning," Papers 1909.03278, arXiv.org.
    4. Jiahua Xu & Daniel Perez & Yebo Feng & Benjamin Livshits, 2023. "Auto.gov: Learning-based On-chain Governance for Decentralized Finance (DeFi)," Papers 2302.09551, arXiv.org, revised May 2023.
    5. Seyoung Park & Eun Ryung Lee & Sungchul Lee & Geonwoo Kim, 2019. "Dantzig Type Optimization Method with Applications to Portfolio Selection," Sustainability, MDPI, vol. 11(11), pages 1-32, June.
    6. Seung-Hyun Moon & Yong-Hyuk Kim & Byung-Ro Moon, 2019. "Empirical investigation of state-of-the-art mean reversion strategies for equity markets," Papers 1909.04327, arXiv.org.
    7. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    8. Alexandre Carbonneau & Fr'ed'eric Godin, 2021. "Deep equal risk pricing of financial derivatives with non-translation invariant risk measures," Papers 2107.11340, arXiv.org.
    9. Man Yiu Tsang & Tony Sit & Hoi Ying Wong, 2022. "Adaptive Robust Online Portfolio Selection," Papers 2206.01064, arXiv.org.
    10. Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    11. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    12. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    13. Martino Banchio & Giacomo Mantegazza, 2022. "Artificial Intelligence and Spontaneous Collusion," Papers 2202.05946, arXiv.org, revised Sep 2023.
    14. R'emi J'ez'equel & Dmitrii M. Ostrovskii & Pierre Gaillard, 2022. "Efficient and Near-Optimal Online Portfolio Selection," Papers 2209.13932, arXiv.org.
    15. Miquel Noguer i Alonso & Sonam Srivastava, 2020. "Deep Reinforcement Learning for Asset Allocation in US Equities," Papers 2010.04404, arXiv.org.
    16. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    17. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    18. James Chok & Geoffrey M. Vasil, 2023. "Convex optimization over a probability simplex," Papers 2305.09046, arXiv.org.
    19. Eyal Even-Dar & Sham. M. Kakade & Yishay Mansour, 2009. "Online Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 34(3), pages 726-736, August.
    20. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.

