Printed from https://ideas.repec.org/a/rfh/bbejor/v12y2023i2p389-395.html

Quantitative Studies Of Deep Reinforcement Learning In Gaming, Robotics And Real-World Control Systems

Author

Listed:
  • MUHAMMAD UMAR KHAN

    (Assistant Professor, Department of Electrical and Computer Engineering, COMSATS University Islamabad, Pakistan)

  • SOMIA MEHAK

    (Department of Computer Science, NUML Multan Campus, Pakistan)

  • DR. WAJIHA YASIR

    (Assistant Professor, COMSATS University Islamabad, Abbottabad Campus, Pakistan)

  • SHAGUFTA ANWAR

    (Department of Computer Science and Technology, Lahore LEADs University, Lahore; SSE CS GGHS Bhilomahar, Daska, Sialkot, Pakistan)

  • MUHAMMAD USMAN MAJEED

    (Faculty of Computing and Information Technology, University of the Punjab Lahore, Pakistan)

  • HAFIZ ARSLAN RAMZAN

    (Institute of Computer and Software Engineering, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan, Pakistan)

Abstract

Deep Reinforcement Learning (DRL) has emerged as a transformative paradigm with profound implications for gaming, robotics, real-world control systems, and beyond. This quantitative analysis examines the applications of DRL across these domains, assessing its capabilities, challenges, and potential. In gaming, DRL delivers significant score improvements on benchmark games, with DQN and PPO leading the way, while A3C demonstrates strong generalization within the gaming domain. Although specific robotics and real-world control results are not presented here, DRL's promise for improving task completion and precision is evident. Sample-efficiency and safety strategies address critical concerns, demonstrating DRL's capacity to optimize resource utilization and ensure robustness, and generalization and transfer learning underscore its adaptability to new scenarios. Although these findings are illustrative rather than empirical, they emphasize DRL's versatility and highlight the need for continued research to unlock its full potential in addressing complex real-world challenges.

Suggested Citation

  • Muhammad Umar Khan & Somia Mehak & Dr. Wajiha Yasir & Shagufta Anwar & Muhammad Usman Majeed & Hafiz Arslan Ramzan, 2023. "Quantitative Studies Of Deep Reinforcement Learning In Gaming, Robotics And Real-World Control Systems," Bulletin of Business and Economics (BBE), Research Foundation for Humanity (RFH), vol. 12(2), pages 389-395.
  • Handle: RePEc:rfh:bbejor:v:12:y:2023:i:2:p:389-395
    DOI: https://doi.org/10.61506/01.00019

    Download full text from publisher

    File URL: https://www.researchgate.net/profile/Hafiz-Arslan-Ramzan/publication/374997991_Quantitative_Studies_of_Deep_Reinforcement_Learning_in_Gaming_Robotics_and_Real-World_Control_Systems/links/6553d4ad3fa26f66f400655e/Quantitative-Studies-of-Deep-Reinforcement-Learning-in-Gaming-Robotics-and-Real-World-Control-Systems.pdf
    Download Restriction: no

    File URL: https://bbejournal.com/BBE/article/view/505
    Download Restriction: no

    File URL: https://libkey.io/https://doi.org/10.61506/01.00019?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Muhammad Umar Khan & Somia Mehak & Dr. Wajiha Yasir & Shagufta Anwar & Muhammad Usman Majeed & Hafiz Arslan Ramzan, 2023. "Quantitative Studies Of Deep Reinforcement Learning In Gaming, Robotics And Real-World Control Systems," Bulletin of Business and Economics (BBE), Research Foundation for Humanity (RFH), vol. 12(2), pages 389-395.
    2. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    3. Mosavi, Amir & Faghan, Yaser & Ghamisi, Pedram & Duan, Puhong & Ardabili, Sina Faizollahzadeh & Hassan, Salwana & Band, Shahab S., 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," OSF Preprints jrc58, Center for Open Science.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Muhammad Umar Khan & Somia Mehak & Dr. Wajiha Yasir & Shagufta Anwar & Muhammad Usman Majeed & Hafiz Arslan Ramzan, 2023. "Quantitative Studies Of Deep Reinforcement Learning In Gaming, Robotics And Real-World Control Systems," Bulletin of Business and Economics (BBE), Research Foundation for Humanity (RFH), vol. 12(2), pages 389-395.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    2. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    3. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    4. Brini, Alessio & Tedeschi, Gabriele & Tantari, Daniele, 2023. "Reinforcement learning policy recommendation for interbank network stability," Journal of Financial Stability, Elsevier, vol. 67(C).
    5. Valentin Kuleto & Milena Ilić & Mihail Dumangiu & Marko Ranković & Oliva M. D. Martins & Dan Păun & Larisa Mihoreanu, 2021. "Exploring Opportunities and Challenges of Artificial Intelligence and Machine Learning in Higher Education Institutions," Sustainability, MDPI, vol. 13(18), pages 1-16, September.
    6. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    7. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    8. Chien-Liang Chiu & Paoyu Huang & Min-Yuh Day & Yensen Ni & Yuhsin Chen, 2024. "Mastery of “Monthly Effects”: Big Data Insights into Contrarian Strategies for DJI 30 and NDX 100 Stocks over a Two-Decade Period," Mathematics, MDPI, vol. 12(2), pages 1-22, January.
    9. Jan Niederreiter, 2023. "Broadening Economics in the Era of Artificial Intelligence and Experimental Evidence," Italian Economic Journal: A Continuation of Rivista Italiana degli Economisti and Giornale degli Economisti, Springer;Società Italiana degli Economisti (Italian Economic Association), vol. 9(1), pages 265-294, March.
    10. Tian Zhu & Wei Zhu, 2022. "Quantitative Trading through Random Perturbation Q-Network with Nonlinear Transaction Costs," Stats, MDPI, vol. 5(2), pages 1-15, June.
    11. Petr Suler & Zuzana Rowland & Tomas Krulicky, 2021. "Evaluation of the Accuracy of Machine Learning Predictions of the Czech Republic’s Exports to the China," JRFM, MDPI, vol. 14(2), pages 1-30, February.
    12. Shidi Deng & Maximilian Schiffer & Martin Bichler, 2024. "Algorithmic Collusion in Dynamic Pricing with Deep Reinforcement Learning," Papers 2406.02437, arXiv.org.
    13. Ben Hambly & Renyuan Xu & Huining Yang, 2023. "Recent advances in reinforcement learning in finance," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 437-503, July.
    14. Fernando Loor & Veronica Gil-Costa & Mauricio Marin, 2024. "Metric Space Indices for Dynamic Optimization in a Peer to Peer-Based Image Classification Crowdsourcing Platform," Future Internet, MDPI, vol. 16(6), pages 1-29, June.
    15. Rui (Aruhan) Shi, 2021. "Learning from Zero: How to Make Consumption-Saving Decisions in a Stochastic Environment with an AI Algorithm," CESifo Working Paper Series 9255, CESifo.
    16. Rui & Shi, 2021. "Learning from zero: how to make consumption-saving decisions in a stochastic environment with an AI algorithm," Papers 2105.10099, arXiv.org, revised Feb 2022.
    17. Bruno Gašperov & Stjepan Begušić & Petra Posedel Šimović & Zvonko Kostanjčar, 2021. "Reinforcement Learning Approaches to Optimal Market Making," Mathematics, MDPI, vol. 9(21), pages 1-22, October.
    18. Callum Rhys Tilbury, 2022. "Reinforcement Learning for Economic Policy: A New Frontier?," Papers 2206.08781, arXiv.org, revised Feb 2023.
    19. Fatemehsadat Mirshafiee & Emad Shahbazi & Mohadeseh Safi & Rituraj Rituraj, 2023. "Predicting Power and Hydrogen Generation of a Renewable Energy Converter Utilizing Data-Driven Methods: A Sustainable Smart Grid Case Study," Energies, MDPI, vol. 16(1), pages 1-20, January.
    20. Reilly Pickard & Yuri Lawryshyn, 2023. "Deep Reinforcement Learning for Dynamic Stock Option Hedging: A Review," Mathematics, MDPI, vol. 11(24), pages 1-19, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:rfh:bbejor:v:12:y:2023:i:2:p:389-395. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Dr. Muhammad Irfan Chani (email available below). General contact details of provider: https://edirc.repec.org/data/rffhlpk.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.