Multiagent cooperation and competition with deep reinforcement learning
Suggested Citation
DOI: 10.1371/journal.pone.0172395
Citations
Cited by:
- Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
- Se-Heon Lim & Sung-Guk Yoon, 2022. "Dynamic DNR and Solar PV Smart Inverter Control Scheme Using Heterogeneous Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 15(23), pages 1-18, December.
- Emilio Calvano & Giacomo Calzolari & Vincenzo Denicolò & Sergio Pastorello, 2019. "Algorithmic Pricing: What Implications for Competition Policy?," Review of Industrial Organization, Springer; The Industrial Organization Society, vol. 55(1), pages 155-171, August.
- Young Joon Park & Yoon Sang Cho & Seoung Bum Kim, 2019. "Multi-agent reinforcement learning with approximate model learning for competitive games," PLOS ONE, Public Library of Science, vol. 14(9), pages 1-20, September.
- Marilleau, Nicolas & Lang, Christophe & Giraudoux, Patrick, 2018. "Coupling agent-based with equation-based models to study spatially explicit megapopulation dynamics," Ecological Modelling, Elsevier, vol. 384(C), pages 34-42.
- Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
- Li, Xingyu & Epureanu, Bogdan I., 2020. "AI-based competition of autonomous vehicle fleets with application to fleet modularity," European Journal of Operational Research, Elsevier, vol. 287(3), pages 856-874.
- Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
- Wang, Xuekai & D’Ariano, Andrea & Su, Shuai & Tang, Tao, 2023. "Cooperative train control during the power supply shortage in metro system: A multi-agent reinforcement learning approach," Transportation Research Part B: Methodological, Elsevier, vol. 170(C), pages 244-278.
- Tianhao Wang & Shiqian Ma & Na Xu & Tianchun Xiang & Xiaoyun Han & Chaoxu Mu & Yao Jin, 2022. "Secondary Voltage Collaborative Control of Distributed Energy System via Multi-Agent Reinforcement Learning," Energies, MDPI, vol. 15(19), pages 1-12, September.
- Lee, Hyun-Rok & Lee, Taesik, 2021. "Multi-agent reinforcement learning algorithm to solve a partially-observable multi-agent problem in disaster response," European Journal of Operational Research, Elsevier, vol. 291(1), pages 296-308.
- Aymanns, Christoph & Foerster, Jakob & Georg, Co-Pierre & Weber, Matthias, 2022. "Fake News in Social Networks," SocArXiv y4mkd, Center for Open Science.
- Christoph Aymanns & Jakob Foerster & Co-Pierre Georg & Matthias Weber, 2022. "Fake News in Social Networks," Swiss Finance Institute Research Paper Series 22-58, Swiss Finance Institute.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0172395. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.