IDEAS home Printed from https://ideas.repec.org/a/nat/nature/v590y2021i7847d10.1038_s41586-020-03157-9.html

First return, then explore

Authors
  • Adrien Ecoffet (Uber AI Labs; OpenAI)
  • Joost Huizinga (Uber AI Labs; OpenAI)
  • Joel Lehman (Uber AI Labs; OpenAI)
  • Kenneth O. Stanley (Uber AI Labs; OpenAI)
  • Jeff Clune (Uber AI Labs; OpenAI)

Abstract

Reinforcement learning promises to solve complex sequential-decision problems autonomously by specifying a high-level reward function only. However, reinforcement learning algorithms struggle when, as is often the case, simple and intuitive rewards provide sparse[1] and deceptive[2] feedback. Avoiding these pitfalls requires a thorough exploration of the environment, but creating algorithms that can do so remains one of the central challenges of the field. Here we hypothesize that the main impediment to effective exploration originates from algorithms forgetting how to reach previously visited states (detachment) and failing to first return to a state before exploring from it (derailment). We introduce Go-Explore, a family of algorithms that addresses these two challenges directly through the simple principles of explicitly ‘remembering’ promising states and returning to such states before intentionally exploring. Go-Explore solves all previously unsolved Atari games and surpasses the state of the art on all hard-exploration games[1], with orders-of-magnitude improvements on the grand challenges of Montezuma’s Revenge and Pitfall. We also demonstrate the practical potential of Go-Explore on a sparse-reward pick-and-place robotics task. Additionally, we show that adding a goal-conditioned policy can further improve Go-Explore’s exploration efficiency and enable it to handle stochasticity throughout training. The substantial performance gains from Go-Explore suggest that the simple principles of remembering states, returning to them, and exploring from them are a powerful and general approach to exploration—an insight that may prove critical to the creation of truly intelligent learning agents.
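The remember–return–explore loop described in the abstract can be illustrated with a toy sketch. The environment, cell mapping, and selection rule below are all simplified stand-ins of my own invention (the paper uses downscaled-image cells and weighted selection heuristics); the sketch also assumes a resettable, deterministic simulator, so "returning" is done by restoring a saved state rather than by executing a policy.

```python
import random

# Hypothetical toy environment: a 1-D corridor. The agent starts at 0 and
# receives a sparse reward only upon reaching position GOAL.
GOAL = 20

def step(state, action):
    """Move left (-1) or right (+1), clipped to [0, GOAL]."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

def cell(state):
    """Map a state to a discrete archive cell (identity in this toy case)."""
    return state

def go_explore(iterations=2000, explore_steps=5, seed=0):
    rng = random.Random(seed)
    # Archive of 'remembered' promising states: cell -> (state, trajectory
    # of actions that reaches it from the start).
    archive = {cell(0): (0, [])}
    for _ in range(iterations):
        # Select a cell to return to (uniformly here; the paper weights
        # selection toward under-visited, promising cells).
        c = rng.choice(list(archive))
        state, traj = archive[c]
        # "Go": return by restoring the saved simulator state directly.
        # "Explore": take random actions from there.
        for _ in range(explore_steps):
            a = rng.choice([-1, 1])
            state, reward = step(state, a)
            traj = traj + [a]
            nc = cell(state)
            # Remember any newly reached cell, or a shorter route to a
            # known cell.
            if nc not in archive or len(traj) < len(archive[nc][1]):
                archive[nc] = (state, traj)
            if reward > 0:
                return traj  # a trajectory that earns the sparse reward
    return None

trajectory = go_explore()
```

Because the archive guarantees the agent can always get back to its frontier, detachment cannot occur, and because exploration only begins after returning, derailment cannot occur either; that is the core insight the sketch is meant to make concrete.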

Suggested Citation

  • Adrien Ecoffet & Joost Huizinga & Joel Lehman & Kenneth O. Stanley & Jeff Clune, 2021. "First return, then explore," Nature, Nature, vol. 590(7847), pages 580-586, February.
  • Handle: RePEc:nat:nature:v:590:y:2021:i:7847:d:10.1038_s41586-020-03157-9
    DOI: 10.1038/s41586-020-03157-9

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41586-020-03157-9
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41586-020-03157-9?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As the access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for updates on this item.


    Cited by:

    1. Li, Yunjian & Song, Yixiao & Sun, Yanming & Zeng, Mingzhuo, 2024. "When do employees learn from artificial intelligence? The moderating effects of perceived enjoyment and task-related complexity," Technology in Society, Elsevier, vol. 77(C).

    More about this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nature:v:590:y:2021:i:7847:d:10.1038_s41586-020-03157-9. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.