
Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics

Author

Listed:
  • Amir Mosavi
  • Pedram Ghamisi
  • Yaser Faghan
  • Puhong Duan

Abstract

The popularity of deep reinforcement learning (DRL) methods in economics has increased exponentially. By combining the capabilities of reinforcement learning (RL) and deep learning (DL), DRL offers vast opportunities for handling sophisticated, dynamic business environments. DRL is characterized by scalability, with the potential to be applied to high-dimensional problems involving noisy and nonlinear patterns in economic data. In this work, we first provide a brief review of DL, RL, and deep RL methods across diverse applications in economics, offering an in-depth insight into the state of the art. Furthermore, the architecture of DRL applied to economic applications is investigated in order to highlight complexity, robustness, accuracy, performance, computational tasks, risk constraints, and profitability. The survey results indicate that DRL can provide better performance and higher accuracy than traditional algorithms when facing real economic problems in the presence of risk parameters and ever-increasing uncertainties.
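To make the DRL setting described in the abstract concrete, the following is a minimal Python sketch (NumPy and PyTorch) of the basic loop the abstract refers to: a small Q-network trained with one-step temporal-difference updates on a synthetic single-asset allocation task. This is an illustrative assumption, not the authors' architecture; the toy environment, reward definition, network size, and hyperparameters are all made up for exposition.

    # Minimal illustrative DRL loop (NOT from the paper under review):
    # a small Q-network learning a hold/invest policy on a synthetic
    # return series. Environment and hyperparameters are assumptions.
    import numpy as np
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    rng = np.random.default_rng(0)

    # Toy "economic" environment: a noisy cyclical log-return series.
    T = 500
    returns = 0.01 * np.sin(np.linspace(0, 20, T)) + 0.005 * rng.standard_normal(T)

    def state(t, window=5):
        # State = the last `window` returns (zero-padded at the start).
        s = np.zeros(window)
        hist = returns[max(0, t - window):t]
        s[window - len(hist):] = hist
        return torch.tensor(s, dtype=torch.float32)

    # Small Q-network mapping the return window to Q-values for {hold, invest}.
    qnet = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    gamma, eps = 0.95, 0.1  # discount factor and exploration rate

    for episode in range(20):
        for t in range(5, T - 1):
            s = state(t)
            # Epsilon-greedy action selection.
            a = int(rng.integers(2)) if rng.random() < eps else int(qnet(s).argmax())
            r = float(a * returns[t])          # reward: return earned if invested
            s_next = state(t + 1)
            # One-step temporal-difference (Q-learning) target.
            with torch.no_grad():
                target = r + gamma * qnet(s_next).max()
            loss = (qnet(s)[a] - target) ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    print("Q-values for the last state:", qnet(state(T - 1)).tolist())

The sketch only illustrates the generic agent-environment interaction, value estimation, and gradient-based update that the surveyed economic applications (trading, portfolio management, pricing) build on in more elaborate forms.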

Suggested Citation

  • Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
  • Handle: RePEc:arx:papers:2004.01509

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2004.01509
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Pavlov Gregory, 2011. "Optimal Mechanism for Selling Two Goods," The B.E. Journal of Theoretical Economics, De Gruyter, vol. 11(1), pages 1-35, February.
    2. Hutchinson, James M & Lo, Andrew W & Poggio, Tomaso, 1994. "A Nonparametric Approach to Pricing and Hedging Derivative Securities via Learning Networks," Journal of Finance, American Finance Association, vol. 49(3), pages 851-889, July.
    3. Xiao-Yang Liu & Zhuoran Xiong & Shan Zhong & Hongyang Yang & Anwar Walid, 2018. "Practical Deep Reinforcement Learning Approach for Stock Trading," Papers 1811.07522, arXiv.org, revised Jul 2022.
    4. Olivier Guéant & Pierre Louis Lions & Jean-Michel Lasry, 2011. "Mean Field Games and Applications," Post-Print hal-01393103, HAL.
    5. Estrella, Arturo & Hardouvelis, Gikas A, 1991. "The Term Structure as a Predictor of Real Economic Activity," Journal of Finance, American Finance Association, vol. 46(2), pages 555-576, June.
    6. Manelli, Alejandro M. & Vincent, Daniel R., 2006. "Bundling as an optimal selling mechanism for a multiple-good monopolist," Journal of Economic Theory, Elsevier, vol. 127(1), pages 1-35, March.
    7. Bekiros, Stelios D., 2010. "Heterogeneous trading strategies with adaptive fuzzy Actor-Critic reinforcement learning: A behavioral approach," Journal of Economic Dynamics and Control, Elsevier, vol. 34(6), pages 1153-1170, June.
    8. Thomas R. Cook & Aaron Smalter Hall, 2017. "Macroeconomic Indicator Forecasting with Deep Neural Networks," Research Working Paper RWP 17-11, Federal Reserve Bank of Kansas City.
    9. Roger B. Myerson, 1981. "Optimal Auction Design," Mathematics of Operations Research, INFORMS, vol. 6(1), pages 58-73, February.
    10. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
    11. Nicky J. Welton & Howard H. Z. Thom, 2015. "Value of Information," Medical Decision Making, vol. 35(5), pages 564-566, July.
    12. Zhipeng Liang & Hao Chen & Junhao Zhu & Kangkang Jiang & Yanran Li, 2018. "Adversarial Deep Reinforcement Learning in Portfolio Management," Papers 1808.09940, arXiv.org, revised Nov 2018.
    13. J. B. Heaton & N. G. Polson & J. H. Witte, 2017. "Deep learning for finance: deep portfolios," Applied Stochastic Models in Business and Industry, John Wiley & Sons, vol. 33(1), pages 3-12, January.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Amir Masoud Rahmani & Efat Yousefpoor & Mohammad Sadegh Yousefpoor & Zahid Mehmood & Amir Haider & Mehdi Hosseinzadeh & Rizwan Ali Naqvi, 2021. "Machine Learning (ML) in Medicine: Review, Applications, and Challenges," Mathematics, MDPI, vol. 9(22), pages 1-52, November.
    2. Muhammad Umar Khan & Somia Mehak & Dr. Wajiha Yasir & Shagufta Anwar & Muhammad Usman Majeed & Hafiz Arslan Ramzan, 2023. "Quantitative Studies Of Deep Reinforcement Learning In Gaming, Robotics And Real-World Control Systems," Bulletin of Business and Economics (BBE), Research Foundation for Humanity (RFH), vol. 12(2), pages 389-395.
    3. Chien-Liang Chiu & Paoyu Huang & Min-Yuh Day & Yensen Ni & Yuhsin Chen, 2024. "Mastery of “Monthly Effects”: Big Data Insights into Contrarian Strategies for DJI 30 and NDX 100 Stocks over a Two-Decade Period," Mathematics, MDPI, vol. 12(2), pages 1-21, January.
    4. Jifan Zhang & Salih Tutun & Samira Fazel Anvaryazdi & Mohammadhossein Amini & Durai Sundaramoorthi & Hema Sundaramoorthi, 2024. "Management of resource sharing in emergency response using data-driven analytics," Annals of Operations Research, Springer, vol. 339(1), pages 663-692, August.
    5. Brini, Alessio & Tedeschi, Gabriele & Tantari, Daniele, 2023. "Reinforcement learning policy recommendation for interbank network stability," Journal of Financial Stability, Elsevier, vol. 67(C).
    6. Valentin Kuleto & Milena Ilić & Mihail Dumangiu & Marko Ranković & Oliva M. D. Martins & Dan Păun & Larisa Mihoreanu, 2021. "Exploring Opportunities and Challenges of Artificial Intelligence and Machine Learning in Higher Education Institutions," Sustainability, MDPI, vol. 13(18), pages 1-16, September.
    7. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    8. Rui (Aruhan) Shi, 2021. "Learning from Zero: How to Make Consumption-Saving Decisions in a Stochastic Environment with an AI Algorithm," CESifo Working Paper Series 9255, CESifo.
    9. Reilly Pickard & Yuri Lawryshyn, 2023. "Deep Reinforcement Learning for Dynamic Stock Option Hedging: A Review," Mathematics, MDPI, vol. 11(24), pages 1-19, December.
    10. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    11. Fernando Loor & Veronica Gil-Costa & Mauricio Marin, 2024. "Metric Space Indices for Dynamic Optimization in a Peer to Peer-Based Image Classification Crowdsourcing Platform," Future Internet, MDPI, vol. 16(6), pages 1-29, June.
    12. Shidi Deng & Maximilian Schiffer & Martin Bichler, 2024. "Algorithmic Collusion in Dynamic Pricing with Deep Reinforcement Learning," Papers 2406.02437, arXiv.org.
    13. Petr Suler & Zuzana Rowland & Tomas Krulicky, 2021. "Evaluation of the Accuracy of Machine Learning Predictions of the Czech Republic’s Exports to the China," JRFM, MDPI, vol. 14(2), pages 1-30, February.
    14. Bruno Gašperov & Stjepan Begušić & Petra Posedel Šimović & Zvonko Kostanjčar, 2021. "Reinforcement Learning Approaches to Optimal Market Making," Mathematics, MDPI, vol. 9(21), pages 1-22, October.
    15. Ben Hambly & Renyuan Xu & Huining Yang, 2023. "Recent advances in reinforcement learning in finance," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 437-503, July.
    16. Adrian Millea & Abbas Edalat, 2022. "Using Deep Reinforcement Learning with Hierarchical Risk Parity for Portfolio Optimization," IJFS, MDPI, vol. 11(1), pages 1-16, December.
    17. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    18. Berigel, Muhammet & Boztaş, Gizem Dilan & Rocca, Antonella & Neagu, Gabriela, 2024. "Using machine learning for NEETs and sustainability studies: Determining best machine learning algorithms," Socio-Economic Planning Sciences, Elsevier, vol. 94(C).
    19. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    20. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    21. Tian Zhu & Wei Zhu, 2022. "Quantitative Trading through Random Perturbation Q-Network with Nonlinear Transaction Costs," Stats, MDPI, vol. 5(2), pages 1-15, June.
    22. Rui (Aruhan) Shi, 2021. "Learning from zero: how to make consumption-saving decisions in a stochastic environment with an AI algorithm," Papers 2105.10099, arXiv.org, revised Feb 2022.
    23. Fatemehsadat Mirshafiee & Emad Shahbazi & Mohadeseh Safi & Rituraj Rituraj, 2023. "Predicting Power and Hydrogen Generation of a Renewable Energy Converter Utilizing Data-Driven Methods: A Sustainable Smart Grid Case Study," Energies, MDPI, vol. 16(1), pages 1-20, January.
    24. Jan Niederreiter, 2023. "Broadening Economics in the Era of Artificial Intelligence and Experimental Evidence," Italian Economic Journal: A Continuation of Rivista Italiana degli Economisti and Giornale degli Economisti, Springer;Società Italiana degli Economisti (Italian Economic Association), vol. 9(1), pages 265-294, March.
    25. Callum Rhys Tilbury, 2022. "Reinforcement Learning for Economic Policy: A New Frontier?," Papers 2206.08781, arXiv.org, revised Feb 2023.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    2. Bonatti, Alessandro & Bergemann, Dirk & Haupt, Andreas & Smolin, Alex, 2021. "The Optimality of Upgrade Pricing," CEPR Discussion Papers 16394, C.E.P.R. Discussion Papers.
    3. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    4. Mark Armstrong, 2016. "Nonlinear Pricing," Annual Review of Economics, Annual Reviews, vol. 8(1), pages 583-614, October.
    5. Bikhchandani, Sushil & Mishra, Debasis, 2022. "Selling two identical objects," Journal of Economic Theory, Elsevier, vol. 200(C).
    6. Sergiu Hart & Noam Nisan, 2013. "Selling Multiple Correlated Goods: Revenue Maximization and Menu-Size Complexity (old title: "The Menu-Size Complexity of Auctions")," Papers 1304.6116, arXiv.org, revised Nov 2018.
    7. Hart, Sergiu & Nisan, Noam, 2019. "Selling multiple correlated goods: Revenue maximization and menu-size complexity," Journal of Economic Theory, Elsevier, vol. 183(C), pages 991-1029.
    8. Cai, Yang & Daskalakis, Constantinos, 2015. "Extreme value theorems for optimal multidimensional pricing," Games and Economic Behavior, Elsevier, vol. 92(C), pages 266-305.
    9. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    10. Yeon-Koo Che & Weijie Zhong, 2021. "Robustly Optimal Mechanisms for Selling Multiple Goods," Papers 2105.02828, arXiv.org, revised Aug 2024.
    11. Yunan Ye & Hengzhi Pei & Boxin Wang & Pin-Yu Chen & Yada Zhu & Jun Xiao & Bo Li, 2020. "Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States," Papers 2002.05780, arXiv.org.
    12. Hart, Sergiu & Nisan, Noam, 2017. "Approximate revenue maximization with multiple items," Journal of Economic Theory, Elsevier, vol. 172(C), pages 313-347.
    13. Menicucci, Domenico & Hurkens, Sjaak & Jeon, Doh-Shin, 2015. "On the optimality of pure bundling for a monopolist," Journal of Mathematical Economics, Elsevier, vol. 60(C), pages 33-42.
    14. Rochet, Jean-Charles, 2024. "Multidimensional Screening After 37 years," TSE Working Papers 24-1536, Toulouse School of Economics (TSE).
    15. Michael J. Curry & Zhou Fan & David C. Parkes, 2024. "Optimal Automated Market Makers: Differentiable Economics and Strong Duality," Papers 2402.09129, arXiv.org.
    16. Devanur, Nikhil R. & Haghpanah, Nima & Psomas, Alexandros, 2020. "Optimal multi-unit mechanisms with private demands," Games and Economic Behavior, Elsevier, vol. 121(C), pages 482-505.
    17. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Time your hedge with Deep Reinforcement Learning," Papers 2009.14136, arXiv.org, revised Nov 2020.
    18. Tang, Pingzhong & Wang, Zihe, 2017. "Optimal mechanisms with simple menus," Journal of Mathematical Economics, Elsevier, vol. 69(C), pages 54-70.
    19. Carlos Segura-Rodriguez, 2019. "Selling Data," PIER Working Paper Archive 19-006, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania.
    20. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.

    More about this item

    NEP fields

    This paper has been announced in the following NEP Reports:

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2004.01509. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.