IDEAS home Printed from https://ideas.repec.org/a/inm/ormoor/v49y2024i4p2295-2322.html

V-Learning—A Simple, Efficient, Decentralized Algorithm for Multiagent Reinforcement Learning

Author

Listed:
  • Chi Jin

    (Department of Electrical and Computer Engineering, Princeton University, Princeton, New Jersey 08544)

  • Qinghua Liu

    (Department of Electrical and Computer Engineering, Princeton University, Princeton, New Jersey 08544)

  • Yuanhao Wang

    (Department of Computer Science, Princeton University, Princeton, New Jersey 08544)

  • Tiancheng Yu

    (Department of Electrical and Computer Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139)

Abstract

A major challenge of multiagent reinforcement learning (MARL) is the curse of multiagents, where the size of the joint action space scales exponentially with the number of agents. This remains a bottleneck for designing efficient MARL algorithms, even in a basic scenario with finitely many states and actions. This paper resolves this challenge for the model of episodic Markov games. We design a new class of fully decentralized algorithms—V-learning—which provably learns Nash equilibria (in the two-player zero-sum setting), correlated equilibria, and coarse correlated equilibria (in the multiplayer general-sum setting) in a number of samples that only scales with max_{i ∈ [m]} A_i, where A_i is the number of actions for the i-th player. This is in sharp contrast to the size of the joint action space, which is ∏_{i=1}^{m} A_i. V-learning (in its basic form) is a new class of single-agent reinforcement learning (RL) algorithms that convert any adversarial bandit algorithm with suitable regret guarantees into an RL algorithm. Similar to the classical Q-learning algorithm, it performs incremental updates to the value functions. Different from Q-learning, it only maintains estimates of V-values instead of Q-values. This key difference allows V-learning to achieve the claimed guarantees in the MARL setting by simply letting all agents run V-learning independently.
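The abstract's recipe—run an adversarial bandit at each (step, state) pair and perform incremental updates on V-values only—can be sketched in a few lines. The following is an illustrative single-agent sketch, not the paper's exact algorithm: the toy MDP, the EXP3-style bandit, the learning rate α_t = (H+1)/(H+t), and the loss normalization are all assumptions chosen for the example.

```python
import math
import random

# Hypothetical toy episodic MDP (illustrative only, not from the paper):
# 2 states, 2 actions, horizon H. Action 0 keeps the state, action 1 flips it;
# reward 1 for landing in state 1, else 0.
H = 3
STATES, ACTIONS = [0, 1], [0, 1]

def step(s, a):
    s2 = s if a == 0 else 1 - s
    return (1.0 if s2 == 1 else 0.0), s2

class Exp3:
    """Adversarial-bandit subroutine (EXP3-style). Per the abstract, any
    bandit algorithm with suitable regret guarantees can be plugged in."""
    def __init__(self, n_actions, lr=0.1):
        self.w = [0.0] * n_actions
        self.lr = lr

    def sample(self):
        m = max(self.w)                              # stabilize the softmax
        p = [math.exp(x - m) for x in self.w]
        z = sum(p)
        p = [x / z for x in p]
        r, c = random.random(), 0.0
        for a, pa in enumerate(p):
            c += pa
            if r <= c:
                return a, p
        return len(p) - 1, p

    def update(self, a, loss, p):
        # importance-weighted loss estimate for the chosen action
        self.w[a] -= self.lr * loss / p[a]

random.seed(0)
# Key difference from Q-learning: only V-value estimates, one per (h, s) —
# no Q-table over the (joint) action space.
V = [[0.0] * len(STATES) for _ in range(H + 1)]   # V[h][s], V[H] = 0
N = [[0] * len(STATES) for _ in range(H)]         # visit counts
bandit = [[Exp3(len(ACTIONS)) for _ in STATES] for _ in range(H)]

for episode in range(500):
    s = 0
    for h in range(H):
        a, p = bandit[h][s].sample()              # act via the bandit
        r, s2 = step(s, a)
        N[h][s] += 1
        t = N[h][s]
        alpha = (H + 1) / (H + t)                 # assumed learning-rate schedule
        # incremental V-value update, as in the abstract
        V[h][s] = (1 - alpha) * V[h][s] + alpha * (r + V[h + 1][s2])
        # feed the bandit a loss normalized to [0, 1]
        bandit[h][s].update(a, (H - (r + V[h + 1][s2])) / H, p)
        s = s2
```

In the multiagent setting described in the abstract, each agent would run this loop independently on its own action set A_i, which is why the sample complexity scales with max_i A_i rather than with the product ∏_i A_i.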

Suggested Citation

  • Chi Jin & Qinghua Liu & Yuanhao Wang & Tiancheng Yu, 2024. "V-Learning—A Simple, Efficient, Decentralized Algorithm for Multiagent Reinforcement Learning," Mathematics of Operations Research, INFORMS, vol. 49(4), pages 2295-2322, November.
  • Handle: RePEc:inm:ormoor:v:49:y:2024:i:4:p:2295-2322
    DOI: 10.1287/moor.2021.0317

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/moor.2021.0317
    Download Restriction: no

    File URL: https://libkey.io/10.1287/moor.2021.0317?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormoor:v:49:y:2024:i:4:p:2295-2322. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.