Printed from https://ideas.repec.org/p/ven/wpaper/201515.html

Q-Learning and SARSA: a comparison between two intelligent stochastic control approaches for financial trading

Author

Listed:
  • Marco Corazza

    (Department of Economics, Ca' Foscari University of Venice)

  • Andrea Sangalli


Abstract

The purpose of this paper is to solve a stochastic control problem consisting of optimizing the management of a trading system. Two model-free machine learning algorithms based on Reinforcement Learning are compared: Q-Learning and SARSA. Both models optimize their behaviour in real time on the basis of the reactions they get from the environment in which they operate. This idea rests on a newly emerging theory of market efficiency, the Adaptive Market Hypothesis. We apply the algorithms to single stock price time series using simple state variables. The algorithms operate by selecting an action among three possible ones: buy, sell, and stay out of the market. We perform several applications based on different parameter settings, tested on an artificial daily stock price time series and on several real ones from the Italian stock market. Furthermore, performances are reported both gross and net of transaction costs.
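As an aid to the abstract, the sketch below illustrates in Python the two tabular update rules being compared (Q-Learning, off-policy; SARSA, on-policy) in a three-action trading setting (buy, sell, stay out of the market). The state encoding, reward definition, transaction-cost handling, and parameter values are illustrative assumptions introduced here for exposition; they do not reproduce the authors' specification.

# Minimal sketch of the Q-Learning and SARSA update rules for a three-action
# trading problem. State, reward, and parameters are illustrative assumptions.
import numpy as np

ACTIONS = (-1, 0, 1)            # sell/short, stay out of the market, buy/long
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
N_STATES = 3                    # toy state: sign of the last daily return

rng = np.random.default_rng(0)

def state_of(prices, t):
    """Map the sign of the most recent daily price change to {0, 1, 2}."""
    r = prices[t] - prices[t - 1]
    return 0 if r < 0 else (1 if r == 0 else 2)

def epsilon_greedy(Q, s):
    """Explore with probability EPSILON, otherwise act greedily."""
    if rng.random() < EPSILON:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[s]))

def run(prices, method="q_learning", cost=0.0):
    """One pass over the price series; returns cumulative reward."""
    Q = np.zeros((N_STATES, len(ACTIONS)))
    total = 0.0
    s = state_of(prices, 1)
    a = epsilon_greedy(Q, s)
    for t in range(1, len(prices) - 1):
        pos = ACTIONS[a]
        # Reward: next-day price change earned by the current position,
        # net of a proportional transaction cost whenever a position is held.
        r = pos * (prices[t + 1] - prices[t]) - cost * abs(pos)
        s2 = state_of(prices, t + 1)
        if method == "q_learning":
            # Off-policy: bootstrap on the greedy action in the next state.
            Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s2]) - Q[s, a])
            a2 = epsilon_greedy(Q, s2)
        else:
            # SARSA, on-policy: bootstrap on the action actually taken next.
            a2 = epsilon_greedy(Q, s2)
            Q[s, a] += ALPHA * (r + GAMMA * Q[s2, a2] - Q[s, a])
        total += r
        s, a = s2, a2
    return total

prices = np.cumsum(rng.normal(0, 1, 500)) + 100.0   # artificial daily price series
print("Q-Learning:", run(prices, "q_learning", cost=0.05))
print("SARSA     :", run(prices, "sarsa", cost=0.05))

The only difference between the two methods is the bootstrap term of the update: Q-Learning uses the maximum action value in the next state, while SARSA uses the value of the action the policy actually selects, which is why their learned trading behaviour can diverge under exploration and transaction costs.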

Suggested Citation

  • Marco Corazza & Andrea Sangalli, 2015. "Q-Learning and SARSA: a comparison between two intelligent stochastic control approaches for financial trading," Working Papers 2015:15, Department of Economics, University of Venice "Ca' Foscari", revised 2015.
  • Handle: RePEc:ven:wpaper:2015:15

    Download full text from publisher

    File URL: http://www.unive.it/pag/fileadmin/user_upload/dipartimenti/economia/doc/Pubblicazioni_scientifiche/working_papers/2015/WP_DSE_corazza_sangalli_15_15.pdf
    File Function: First version, 2015
    Download Restriction: no

    References listed on IDEAS

    1. Francesco Bertoluzzo & Marco Corazza, 2012. "Reinforcement Learning for automatic financial trading: Introduction and some applications," Working Papers 2012:33, Department of Economics, University of Venice "Ca' Foscari", revised 2012.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Terry Lingze Meng & Matloob Khushi, 2019. "Reinforcement Learning in Financial Markets," Data, MDPI, vol. 4(3), pages 1-17, July.
    2. Yuling Huang & Kai Cui & Yunlin Song & Zongren Chen, 2023. "A Multi-Scaling Reinforcement Learning Trading System Based on Multi-Scaling Convolutional Neural Networks," Mathematics, MDPI, vol. 11(11), pages 1-19, May.
    3. Marco Corazza & Giovanni Fasano & Riccardo Gusso & Raffaele Pesenti, 2019. "A comparison among Reinforcement Learning algorithms in financial trading systems," Working Papers 2019:33, Department of Economics, University of Venice "Ca' Foscari".

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    2. Caiyu Jiang & Jianhua Wang, 2022. "A Portfolio Model with Risk Control Policy Based on Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(1), pages 1-16, December.
    3. Hyungjun Park & Min Kyu Sim & Dong Gu Choi, 2019. "An intelligent financial portfolio trading strategy using deep Q-learning," Papers 1907.03665, arXiv.org, revised Nov 2019.
    4. Haoqian Li & Thomas Lau, 2019. "Reinforcement Learning: Prediction, Control and Value Function Approximation," Papers 1908.10771, arXiv.org.
    5. Petrus Strydom, 2017. "Funding optimization for a bank integrating credit and liquidity risk," Journal of Applied Finance & Banking, SCIENPRESS Ltd, vol. 7(2), pages 1-1.
    6. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
    7. Ariel Neufeld & Julian Sester & Mario Šikić, 2022. "Markov Decision Processes under Model Uncertainty," Papers 2206.06109, arXiv.org, revised Jan 2023.
    8. Ariel Neufeld & Julian Sester & Mario Šikić, 2023. "Markov decision processes under model uncertainty," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 618-665, July.
    9. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2020. "Deep Learning for Portfolio Optimization," Papers 2005.13665, arXiv.org, revised Jan 2021.
    10. Xiao-Yang Liu & Zhuoran Xiong & Shan Zhong & Hongyang Yang & Anwar Walid, 2018. "Practical Deep Reinforcement Learning Approach for Stock Trading," Papers 1811.07522, arXiv.org, revised Jul 2022.

    More about this item

    Keywords

    Financial trading system; Adaptive Market Hypothesis; model-free machine learning; Reinforcement Learning; Q-Learning; SARSA; Italian stock market.

    JEL classification:

    • C61 - Mathematical and Quantitative Methods - - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling - - - Optimization Techniques; Programming Models; Dynamic Analysis
    • C63 - Mathematical and Quantitative Methods - - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling - - - Computational Techniques
    • G11 - Financial Economics - - General Financial Markets - - - Portfolio Choice; Investment Decisions

    NEP fields

    This paper has been announced in the following NEP Reports:


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:ven:wpaper:2015:15. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Geraldine Ludbrook (email available below). General contact details of provider: https://edirc.repec.org/data/dsvenit.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.