
Reinforcement Learning Approaches to Optimal Market Making

Author

Listed:
  • Bruno Gašperov

    (Laboratory for Financial and Risk Analytics, Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia)

  • Stjepan Begušić

    (Laboratory for Financial and Risk Analytics, Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia)

  • Petra Posedel Šimović

    (Department of Informatics and Mathematics, Faculty of Agriculture, University of Zagreb, 10000 Zagreb, Croatia)

  • Zvonko Kostanjčar

    (Laboratory for Financial and Risk Analytics, Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia)

Abstract

Market making is the process whereby a market participant, called a market maker, simultaneously and repeatedly posts limit orders on both sides of the limit order book of a security in order to both provide liquidity and generate profit. Optimal market making entails dynamically adjusting bid and ask prices in response to the market maker’s current inventory level and market conditions, with the goal of maximizing a risk-adjusted return measure. This problem is naturally framed as a Markov decision process, a discrete-time stochastic (inventory) control process. Reinforcement learning, a class of techniques for solving Markov decision processes by learning from observations, therefore lends itself particularly well to it. Recent years have seen a strong uptick in the popularity of such techniques in the field, fueled in part by a series of successes of deep reinforcement learning in other domains. The primary goal of this paper is to provide a comprehensive and up-to-date overview of state-of-the-art applications of (deep) reinforcement learning to optimal market making. The analysis indicates that reinforcement learning techniques deliver superior risk-adjusted returns compared with more standard market making strategies, which are typically derived from analytical models.
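To make the Markov decision process framing concrete, the Python sketch below casts a heavily simplified market making problem as an MDP and solves it with tabular Q-learning. It is an illustrative toy under stated assumptions, not the method of the paper or of any work it surveys: the mid-price follows a random walk, the state is the (clipped) inventory, the action is a symmetric half-spread, fills arrive with a probability that decays in the quoted offset, and the reward is spread revenue plus inventory mark-to-market minus a quadratic inventory penalty. All names, parameter values, and dynamics are assumptions chosen for brevity.

    # Minimal illustrative sketch (assumption-laden toy, not the surveyed methods):
    # market making as an MDP solved with tabular Q-learning.
    import numpy as np

    rng = np.random.default_rng(0)
    TICKS = (1, 2, 3)                 # candidate half-spreads (actions), in ticks
    MAX_INV = 5                       # inventory is kept within [-MAX_INV, MAX_INV]
    GAMMA, ALPHA, EPS = 0.99, 0.1, 0.1
    PHI = 0.01                        # running inventory-risk penalty

    Q = np.zeros((2 * MAX_INV + 1, len(TICKS)))   # state = inventory level

    def step(inv, offset):
        """One period: quote mid -/+ offset, simulate fills, return (reward, new inventory)."""
        fill_prob = np.exp(-0.7 * offset)           # farther quotes fill less often
        buy = rng.random() < fill_prob and inv < MAX_INV
        sell = rng.random() < fill_prob and inv > -MAX_INV
        spread_pnl = offset * (buy + sell)          # earn the half-spread on each fill
        inv_new = inv + buy - sell
        mid_move = rng.normal(0.0, 1.0)             # random-walk mid-price change
        reward = spread_pnl + inv_new * mid_move - PHI * inv_new ** 2
        return reward, inv_new

    for episode in range(2000):
        inv = 0
        for t in range(200):
            s = inv + MAX_INV
            a = rng.integers(len(TICKS)) if rng.random() < EPS else int(Q[s].argmax())
            r, inv = step(inv, TICKS[a])
            s2 = inv + MAX_INV
            Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])   # Q-learning update

    print("Learned half-spread per inventory level:",
          [TICKS[int(a)] for a in Q.argmax(axis=1)])

The surveyed deep reinforcement learning approaches replace the tabular value function with neural networks and enrich the state with order book and market features, but the underlying inventory-control structure is the same.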

Suggested Citation

  • Bruno Gašperov & Stjepan Begušić & Petra Posedel Šimović & Zvonko Kostanjčar, 2021. "Reinforcement Learning Approaches to Optimal Market Making," Mathematics, MDPI, vol. 9(21), pages 1-22, October.
  • Handle: RePEc:gam:jmathe:v:9:y:2021:i:21:p:2689-:d:662748

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/9/21/2689/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/9/21/2689/
    Download Restriction: no

    References listed on IDEAS

    1. Thomas Spooner & Rahul Savani, 2020. "Robust Market Making via Adversarial Reinforcement Learning," Papers 2003.01820, arXiv.org, revised Jul 2020.
    2. Rama Cont & Sasha Stoikov & Rishi Talreja, 2010. "A Stochastic Model for Order Book Dynamics," Operations Research, INFORMS, vol. 58(3), pages 549-563, June.
    3. Bastien Baldacci & Iuliia Manziuk & Thibaut Mastrolia & Mathieu Rosenbaum, 2019. "Market making and incentives design in the presence of a dark pool: a deep reinforcement learning approach," Papers 1912.01129, arXiv.org.
    4. Fabien Guilbaud & Huyên Pham, 2013. "Optimal high-frequency trading with limit and market orders," Quantitative Finance, Taylor & Francis Journals, vol. 13(1), pages 79-94, January.
    5. Marco Avellaneda & Sasha Stoikov, 2008. "High-frequency trading in a limit order book," Quantitative Finance, Taylor & Francis Journals, vol. 8(3), pages 217-224.
    6. Nicholas T. Chan & Christian Shelton, 2001. "An Adaptive Electronic Market-Maker," Computing in Economics and Finance 2001 146, Society for Computational Economics.
    7. Olivier Guéant & Iuliia Manziuk, 2019. "Deep Reinforcement Learning for Market Making in Corporate Bonds: Beating the Curse of Dimensionality," Applied Mathematical Finance, Taylor & Francis Journals, vol. 26(5), pages 387-452, September.
    8. Glosten, Lawrence R. & Milgrom, Paul R., 1985. "Bid, ask and transaction prices in a specialist market with heterogeneously informed traders," Journal of Financial Economics, Elsevier, vol. 14(1), pages 71-100, March.
    9. Ho, Thomas & Stoll, Hans R., 1981. "Optimal dealer pricing under transactions and return uncertainty," Journal of Financial Economics, Elsevier, vol. 9(1), pages 47-73, March.
    10. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    11. Tucker Hybinette Balch & Mahmoud Mahfouz & Joshua Lockhart & Maria Hybinette & David Byrd, 2019. "How to Evaluate Trading Strategies: Single Agent Market Replay or Multiple Agent Interactive Simulation?," Papers 1906.12010, arXiv.org.
    12. Svitlana Vyetrenko & David Byrd & Nick Petosa & Mahmoud Mahfouz & Danial Dervovic & Manuela Veloso & Tucker Hybinette Balch, 2019. "Get Real: Realism Metrics for Robust Limit Order Book Market Simulations," Papers 1912.04941, arXiv.org.
    13. Jonathan Sadighian, 2020. "Extending Deep Reinforcement Learning Frameworks in Cryptocurrency Market Making," Papers 2004.06985, arXiv.org.
    14. Matias Selser & Javier Kreiner & Manuel Maurette, 2021. "Optimal Market Making by Reinforcement Learning," Papers 2104.04036, arXiv.org.
    15. Olivier Guéant & Iuliia Manziuk, 2019. "Deep reinforcement learning for market making in corporate bonds: beating the curse of dimensionality," Papers 1910.13205, arXiv.org.
    16. Yagna Patel, 2018. "Optimizing Market Making using Multi-Agent Reinforcement Learning," Papers 1812.10252, arXiv.org.
    17. Dieter Hendricks & Diane Wilcox, 2014. "A reinforcement learning extension to the Almgren-Chriss model for optimal trade execution," Papers 1403.2229, arXiv.org.
    18. Sumitra Ganesh & Nelson Vadori & Mengda Xu & Hua Zheng & Prashant Reddy & Manuela Veloso, 2019. "Reinforcement Learning for Market Making in a Multi-agent Dealer Market," Papers 1911.05892, arXiv.org.
    19. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    20. Mosavi, Amir & Faghan, Yaser & Ghamisi, Pedram & Duan, Puhong & Ardabili, Sina Faizollahzadeh & Hassan, Salwana & Band, Shahab S., 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," OSF Preprints jrc58, Center for Open Science.
    21. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan , 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
    22. Thomas Spooner & John Fearnley & Rahul Savani & Andreas Koukorinis, 2018. "Market Making via Reinforcement Learning," Papers 1804.04216, arXiv.org.
    23. Jonathan Sadighian, 2019. "Deep Reinforcement Learning in Cryptocurrency Market Making," Papers 1911.08647, arXiv.org.
    24. Olivier Guéant, 2016. "Optimal market making," Papers 1605.01862, arXiv.org, revised May 2017.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Yanyan Fan & Yu Zhang & Baosu Guo & Xiaoyuan Luo & Qingjin Peng & Zhenlin Jin, 2022. "A Hybrid Sparrow Search Algorithm of the Hyperparameter Optimization in Deep Learning," Mathematics, MDPI, vol. 10(16), pages 1-23, August.
    2. Luca Lalor & Anatoliy Swishchuk, 2024. "Reinforcement Learning in Non-Markov Market-Making," Papers 2410.14504, arXiv.org, revised Nov 2024.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    2. Joseph Jerome & Leandro Sanchez-Betancourt & Rahul Savani & Martin Herdegen, 2022. "Model-based gym environments for limit order book trading," Papers 2209.07823, arXiv.org.
    3. Joseph Jerome & Gregory Palmer & Rahul Savani, 2022. "Market Making with Scaled Beta Policies," Papers 2207.03352, arXiv.org, revised Sep 2022.
    4. Thomas Spooner & Rahul Savani, 2020. "Robust Market Making via Adversarial Reinforcement Learning," Papers 2003.01820, arXiv.org, revised Jul 2020.
    5. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    6. Bruno Gašperov & Zvonko Kostanjčar, 2022. "Deep Reinforcement Learning for Market Making Under a Hawkes Process-Based Limit Order Book Model," Papers 2207.09951, arXiv.org.
    7. Nelson Vadori & Leo Ardon & Sumitra Ganesh & Thomas Spooner & Selim Amrouni & Jared Vann & Mengda Xu & Zeyu Zheng & Tucker Balch & Manuela Veloso, 2022. "Towards Multi-Agent Reinforcement Learning driven Over-The-Counter Market Simulations," Papers 2210.07184, arXiv.org, revised Aug 2023.
    8. Jiafa He & Cong Zheng & Can Yang, 2023. "Integrating Tick-level Data and Periodical Signal for High-frequency Market Making," Papers 2306.17179, arXiv.org.
    9. Ben Hambly & Renyuan Xu & Huining Yang, 2023. "Recent advances in reinforcement learning in finance," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 437-503, July.
    10. Bastien Baldacci & Jerome Benveniste & Gordon Ritter, 2020. "Optimal trading without optimal control," Papers 2012.12945, arXiv.org.
    11. Hui Niu & Siyuan Li & Jiahao Zheng & Zhouchi Lin & Jian Li & Jian Guo & Bo An, 2023. "IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making," Papers 2308.08918, arXiv.org.
    12. Luca Lalor & Anatoliy Swishchuk, 2024. "Reinforcement Learning in Non-Markov Market-Making," Papers 2410.14504, arXiv.org, revised Nov 2024.
    13. Pankaj Kumar, 2021. "Deep Hawkes Process for High-Frequency Market Making," Papers 2109.15110, arXiv.org.
    14. Marcello Monga, 2024. "Automated Market Making and Decentralized Finance," Papers 2407.16885, arXiv.org.
    15. Alexander Barzykin & Philippe Bergault & Olivier Guéant, 2021. "Algorithmic market making in dealer markets with hedging and market impact," Papers 2106.06974, arXiv.org, revised Dec 2022.
    16. Philippe Bergault & Olivier Guéant, 2021. "Size matters for OTC market makers: General results and dimensionality reduction techniques," Mathematical Finance, Wiley Blackwell, vol. 31(1), pages 279-322, January.
    17. Hong Guo & Jianwu Lin & Fanlin Huang, 2023. "Market Making with Deep Reinforcement Learning from Limit Order Books," Papers 2305.15821, arXiv.org.
    18. Philippe Bergault & Louis Bertucci & David Bouba & Olivier Guéant & Julien Guilbert, 2024. "Automated Market Making: the case of Pegged Assets," Papers 2411.08145, arXiv.org.
    19. Philippe Bergault & Louis Bertucci & David Bouba & Olivier Guéant, 2022. "Automated Market Makers: Mean-Variance Analysis of LPs Payoffs and Design of Pricing Functions," Papers 2212.00336, arXiv.org, revised Nov 2023.
    20. Philippe Bergault & Louis Bertucci & David Bouba & Olivier Guéant & Julien Guilbert, 2024. "Price-Aware Automated Market Makers: Models Beyond Brownian Prices and Static Liquidity," Papers 2405.03496, arXiv.org, revised May 2024.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:9:y:2021:i:21:p:2689-:d:662748. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.