Printed from https://ideas.repec.org/a/nat/natcom/v15y2024i1d10.1038_s41467-024-49711-1.html

Complex behavior from intrinsic motivation to occupy future action-state path space

Authors

Listed:
  • Jorge Ramírez-Ruiz

    (Universitat Pompeu Fabra)

  • Dmytro Grytskyy

    (Universitat Pompeu Fabra)

  • Chiara Mastrogiuseppe

    (Universitat Pompeu Fabra)

  • Yamen Habib

    (Universitat Pompeu Fabra)

  • Rubén Moreno-Bote

    (Universitat Pompeu Fabra)

Abstract

Most theories of behavior posit that agents tend to maximize some form of reward or utility. However, animals very often move with curiosity and seem to be motivated in a reward-free manner. Here we abandon the idea of reward maximization and propose that the goal of behavior is maximizing occupancy of future paths of actions and states. According to this maximum occupancy principle, rewards are the means to occupy path space, not the goal per se; goal-directedness simply emerges as rational ways of searching for resources so that movement, understood amply, never ends. We find that action-state path entropy is the only measure consistent with additivity and other intuitive properties of expected future action-state path occupancy. We provide analytical expressions that relate the optimal policy and state-value function and prove convergence of our value iteration algorithm. Using discrete and continuous state tasks, including a high-dimensional controller, we show that complex behaviors such as “dancing”, hide-and-seek, and a basic form of altruistic behavior naturally result from the intrinsic motivation to occupy path space. All in all, we present a theory of behavior that generates both variability and goal-directedness in the absence of reward maximization.
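The soft value iteration the abstract refers to can be illustrated with a minimal sketch. The snippet below is not the paper's full algorithm (which weighs both action and state entropy along future paths); it is a reduced, action-entropy-only variant on a small tabular MDP, with all names and parameters chosen for illustration. The backup is a log-sum-exp ("soft maximum") over actions, and the optimal policy comes out as a softmax over the soft Q-values, mirroring the analytical policy-value relation the authors describe.

```python
import numpy as np

def occupancy_value_iteration(P, alpha=1.0, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Reward-free soft value iteration for an action-entropy objective.

    P     : transition tensor of shape (S, A, S), P[s, a, s'] = p(s' | s, a)
    alpha : weight on action entropy (illustrative parameter)
    Returns converged state values V and the softmax policy pi (shape (S, A)).
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Q[s, a] = gamma * E[V(s') | s, a]; no extrinsic reward term.
        Q = gamma * P @ V
        # Soft backup: V(s) = alpha * log sum_a exp(Q(s, a) / alpha)
        V_new = alpha * np.logaddexp.reduce(Q / alpha, axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    # Optimal policy is the softmax of Q; exp((Q - V)/alpha) already sums to 1.
    Q = gamma * P @ V
    pi = np.exp((Q - V[:, None]) / alpha)
    pi /= pi.sum(axis=1, keepdims=True)
    return V, pi
```

In a deterministic MDP where every state has A equally open actions, the fixed point is V = alpha * log(A) / (1 - gamma) and the policy is uniform: with no rewards anywhere, the agent still assigns value to states purely by how much future action-state path space they keep reachable.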

Suggested Citation

  • Jorge Ramírez-Ruiz & Dmytro Grytskyy & Chiara Mastrogiuseppe & Yamen Habib & Rubén Moreno-Bote, 2024. "Complex behavior from intrinsic motivation to occupy future action-state path space," Nature Communications, Nature, vol. 15(1), pages 1-15, December.
  • Handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-49711-1
    DOI: 10.1038/s41467-024-49711-1

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-024-49711-1
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-024-49711-1?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Alexander S Klyubin & Daniel Polani & Chrystopher L Nehaniv, 2008. "Keep Your Options Open: An Information-Based Driving Principle for Sensorimotor Systems," PLOS ONE, Public Library of Science, vol. 3(12), pages 1-14, December.
    2. Bruno B Averbeck, 2015. "Theory of Choice in Bandit, Information Sampling and Foraging Tasks," PLOS Computational Biology, Public Library of Science, vol. 11(3), pages 1-28, March.
    3. Julian Schrittwieser & Ioannis Antonoglou & Thomas Hubert & Karen Simonyan & Laurent Sifre & Simon Schmitt & Arthur Guez & Edward Lockhart & Demis Hassabis & Thore Graepel & Timothy Lillicrap & David Silver, 2020. "Mastering Atari, Go, chess and shogi by planning with a learned model," Nature, Nature, vol. 588(7839), pages 604-609, December.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    2. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    3. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
    4. Rishi Rajalingham & Aída Piccato & Mehrdad Jazayeri, 2022. "Recurrent neural networks with explicit representation of dynamic latent variables can mimic behavioral patterns in a physical inference task," Nature Communications, Nature, vol. 13(1), pages 1-15, December.
    5. Gillian Dale & Danielle Sampers & Stephanie Loo & C Shawn Green, 2018. "Individual differences in exploration and persistence: Grit and beliefs about ability and reward," PLOS ONE, Public Library of Science, vol. 13(9), pages 1-17, September.
    6. Jinke Yao & Jiachen Xu & Ning Zhang & Yajuan Guan, 2023. "Model-Based Reinforcement Learning Method for Microgrid Optimization Scheduling," Sustainability, MDPI, vol. 15(12), pages 1-18, June.
    7. Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klockl, 2021. "Computational Performance of Deep Reinforcement Learning to find Nash Equilibria," Papers 2104.12895, arXiv.org.
    8. Daniel Bennett & Stefan Bode & Maja Brydevall & Hayley Warren & Carsten Murawski, 2016. "Intrinsic Valuation of Information in Decision Making under Uncertainty," PLOS Computational Biology, Public Library of Science, vol. 12(7), pages 1-21, July.
    9. Weiwu Ren & Jialin Zhu & Hui Qi & Ligang Cong & Xiaoqiang Di, 2022. "Dynamic optimization of intersatellite link assignment based on reinforcement learning," International Journal of Distributed Sensor Networks, , vol. 18(2), pages 15501477211, February.
    10. Syed Ghazi Sarwat & Timoleon Moraitis & C. David Wright & Harish Bhaskaran, 2022. "Chalcogenide optomemristors for multi-factor neuromorphic computation," Nature Communications, Nature, vol. 13(1), pages 1-9, December.
    11. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    12. Bálint Kővári & Lászlo Szőke & Tamás Bécsi & Szilárd Aradi & Péter Gáspár, 2021. "Traffic Signal Control via Reinforcement Learning for Reducing Global Vehicle Emission," Sustainability, MDPI, vol. 13(20), pages 1-18, October.
    13. Guangyuan Li & Baobao Song & Harinder Singh & V. B. Surya Prasath & H. Leighton Grimes & Nathan Salomonis, 2023. "Decision level integration of unimodal and multimodal single cell data with scTriangulate," Nature Communications, Nature, vol. 14(1), pages 1-16, December.
    14. Spyridon Samothrakis, 2021. "Artificial Intelligence inspired methods for the allocation of common goods and services," PLOS ONE, Public Library of Science, vol. 16(9), pages 1-16, September.
    15. Alexandros A. Lavdas & Nikos A. Salingaros, 2021. "Can Suboptimal Visual Environments Negatively Affect Children’s Cognitive Development?," Challenges, MDPI, vol. 12(2), pages 1-12, November.
    16. R Becket Ebitz & Brianna J Sleezer & Hank P Jedema & Charles W Bradberry & Benjamin Y Hayden, 2019. "Tonic exploration governs both flexibility and lapses," PLOS Computational Biology, Public Library of Science, vol. 15(11), pages 1-37, November.
    17. Marcel Rolf Pfeifer, 2021. "Development of a Smart Manufacturing Execution System Architecture for SMEs: A Czech Case Study," Sustainability, MDPI, vol. 13(18), pages 1-23, September.
    18. Lieke L F van Lieshout & Iris J Traast & Floris P de Lange & Roshan Cools, 2021. "Curiosity or savouring? Information seeking is modulated by both uncertainty and valence," PLOS ONE, Public Library of Science, vol. 16(9), pages 1-19, September.
    19. He, Hongwen & Su, Qicong & Huang, Ruchen & Niu, Zegong, 2024. "Enabling intelligent transferable energy management of series hybrid electric tracked vehicle across motion dimensions via soft actor-critic algorithm," Energy, Elsevier, vol. 294(C).
    20. De Moor, Bram J. & Gijsbrechts, Joren & Boute, Robert N., 2022. "Reward shaping to improve the performance of deep reinforcement learning in perishable inventory management," European Journal of Operational Research, Elsevier, vol. 301(2), pages 535-545.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-49711-1. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.