
Dynamic optimization of intersatellite link assignment based on reinforcement learning

Author

Listed:
  • Weiwu Ren
  • Jialin Zhu
  • Hui Qi
  • Ligang Cong
  • Xiaoqiang Di

Abstract

Intersatellite links can reduce the dependence of satellite communication systems on ground networks, the number of ground gateways, and the complexity of and investment in ground networks; they are an important trend in future satellite development. Intersatellite links change dynamically over time, and different intersatellite topologies have a large impact on satellite network performance. To improve the overall performance of satellite networks, this article proposes a link assignment optimization algorithm based on reinforcement learning. Unlike swarm intelligence methods, the algorithm models the combinatorial optimization of links as an optimal sequential decision problem over a series of link-selection actions. Realistic constraints such as intersatellite visibility, network connectivity, and the number of antenna beams are treated as fully observable environmental factors. The agent selects links according to its decisions, and the utility of each selection action influences the next decision. After a finite number of iterations, the algorithm reaches the link assignment scheme with the minimum link delay. Simulation results show that, in 8- and 12-satellite network systems, the topology computed by this method achieves lower network delay and smaller delay variance than the original topology.
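The abstract frames link assignment as a sequential decision problem under visibility, connectivity, and antenna-beam constraints, with each link selection's utility feeding back into the next decision. As a rough illustration of that framing only, the following minimal tabular Q-learning sketch in Python treats each link selection as an action rewarded by the negative link delay plus a connectivity bonus; it is not the authors' implementation, and the constellation size, beam budget, link delays, and visibility rule are invented for the example.

```python
# Minimal sketch, not the authors' code: intersatellite link assignment cast as a
# sequential decision problem and solved with tabular Q-learning.
import itertools
import random

N_SATS = 8          # assumed constellation size (the paper simulates 8 and 12 satellites)
MAX_BEAMS = 2       # assumed per-satellite antenna-beam budget
random.seed(0)

# Candidate links: real visibility comes from orbital geometry; a simple modular
# rule stands in for it here, and link delays (ms) are drawn at random.
visible_pairs = [(i, j) for i, j in itertools.combinations(range(N_SATS), 2) if (i + j) % 3]
delay = {p: random.uniform(5.0, 40.0) for p in visible_pairs}

def beams_ok(links):
    """Antenna-beam constraint: each satellite participates in at most MAX_BEAMS links."""
    count = [0] * N_SATS
    for i, j in links:
        count[i] += 1
        count[j] += 1
    return all(c <= MAX_BEAMS for c in count)

def connected(links):
    """Connectivity constraint: the chosen topology must span all satellites."""
    if not links:
        return False
    adj = {k: set() for k in range(N_SATS)}
    for i, j in links:
        adj[i].add(j)
        adj[j].add(i)
    seen, stack = {0}, [0]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen) == N_SATS

def feasible_actions(state):
    """Link-selection actions that respect visibility and the beam budget."""
    return [p for p in visible_pairs if p not in state and beams_ok(state | {p})]

ALPHA, GAMMA, EPS, EPISODES = 0.3, 0.95, 0.2, 3000
Q = {}  # tabular action values keyed by (set of chosen links, candidate link)

def q(state, action):
    """Current action-value estimate; unseen state-action pairs default to zero."""
    return Q.get((state, action), 0.0)

for _ in range(EPISODES):
    state = frozenset()
    while True:
        actions = feasible_actions(state)
        if not actions:
            break
        a = random.choice(actions) if random.random() < EPS else max(actions, key=lambda p: q(state, p))
        nxt = frozenset(state | {a})
        # Utility of the selection: pay the link's delay, earn a bonus when the
        # topology first becomes connected; this utility shapes the next decision.
        r = -delay[a] + (50.0 if connected(nxt) and not connected(state) else 0.0)
        nxt_actions = feasible_actions(nxt)
        target = r + GAMMA * (max(q(nxt, b) for b in nxt_actions) if nxt_actions else 0.0)
        Q[(state, a)] = q(state, a) + ALPHA * (target - q(state, a))
        state = nxt

# Greedy rollout of the learned values yields one link-assignment scheme.
state = frozenset()
while feasible_actions(state):
    state = frozenset(state | {max(feasible_actions(state), key=lambda p: q(state, p))})

print("links:", sorted(state))
print("connected:", connected(state), " total delay (ms):", round(sum(delay[p] for p in state), 1))
```

A realistic implementation would derive visibility and delay from orbital geometry, penalize infeasible or disconnected topologies more carefully, and re-run the optimization as the constellation's topology evolves over time.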

Suggested Citation

  • Weiwu Ren & Jialin Zhu & Hui Qi & Ligang Cong & Xiaoqiang Di, 2022. "Dynamic optimization of intersatellite link assignment based on reinforcement learning," International Journal of Distributed Sensor Networks, , vol. 18(2), pages 15501477211, February.
  • Handle: RePEc:sae:intdis:v:18:y:2022:i:2:p:15501477211070202
    DOI: 10.1177/15501477211070202

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/15501477211070202
    Download Restriction: no

    File URL: https://libkey.io/10.1177/15501477211070202?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    2. Rishi Rajalingham & Aída Piccato & Mehrdad Jazayeri, 2022. "Recurrent neural networks with explicit representation of dynamic latent variables can mimic behavioral patterns in a physical inference task," Nature Communications, Nature, vol. 13(1), pages 1-15, December.
    3. Jinke Yao & Jiachen Xu & Ning Zhang & Yajuan Guan, 2023. "Model-Based Reinforcement Learning Method for Microgrid Optimization Scheduling," Sustainability, MDPI, vol. 15(12), pages 1-18, June.
    4. Syed Ghazi Sarwat & Timoleon Moraitis & C. David Wright & Harish Bhaskaran, 2022. "Chalcogenide optomemristors for multi-factor neuromorphic computation," Nature Communications, Nature, vol. 13(1), pages 1-9, December.
    5. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    6. Alexandros A. Lavdas & Nikos A. Salingaros, 2021. "Can Suboptimal Visual Environments Negatively Affect Children’s Cognitive Development?," Challenges, MDPI, vol. 12(2), pages 1-12, November.
    7. De Moor, Bram J. & Gijsbrechts, Joren & Boute, Robert N., 2022. "Reward shaping to improve the performance of deep reinforcement learning in perishable inventory management," European Journal of Operational Research, Elsevier, vol. 301(2), pages 535-545.
    8. Christopher R. Madan, 2020. "Considerations for Comparing Video Game AI Agents with Humans," Challenges, MDPI, vol. 11(2), pages 1-12, August.
    9. Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klöckl, 2024. "Computational Performance of Deep Reinforcement Learning to Find Nash Equilibria," Computational Economics, Springer;Society for Computational Economics, vol. 63(2), pages 529-576, February.
    10. Tasos Papagiannis & Georgios Alexandridis & Andreas Stafylopatis, 2022. "Pruning Stochastic Game Trees Using Neural Networks for Reduced Action Space Approximation," Mathematics, MDPI, vol. 10(9), pages 1-16, May.
    11. Jorge Ramírez-Ruiz & Dmytro Grytskyy & Chiara Mastrogiuseppe & Yamen Habib & Rubén Moreno-Bote, 2024. "Complex behavior from intrinsic motivation to occupy future action-state path space," Nature Communications, Nature, vol. 15(1), pages 1-15, December.
    12. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    13. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
    14. Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klockl, 2021. "Computational Performance of Deep Reinforcement Learning to find Nash Equilibria," Papers 2104.12895, arXiv.org.
    15. Bálint Kővári & Lászlo Szőke & Tamás Bécsi & Szilárd Aradi & Péter Gáspár, 2021. "Traffic Signal Control via Reinforcement Learning for Reducing Global Vehicle Emission," Sustainability, MDPI, vol. 13(20), pages 1-18, October.
    16. Guangyuan Li & Baobao Song & Harinder Singh & V. B. Surya Prasath & H. Leighton Grimes & Nathan Salomonis, 2023. "Decision level integration of unimodal and multimodal single cell data with scTriangulate," Nature Communications, Nature, vol. 14(1), pages 1-16, December.
    17. Spyridon Samothrakis, 2021. "Artificial Intelligence inspired methods for the allocation of common goods and services," PLOS ONE, Public Library of Science, vol. 16(9), pages 1-16, September.
    18. Marcel Rolf Pfeifer, 2021. "Development of a Smart Manufacturing Execution System Architecture for SMEs: A Czech Case Study," Sustainability, MDPI, vol. 13(18), pages 1-23, September.
    19. He, Hongwen & Su, Qicong & Huang, Ruchen & Niu, Zegong, 2024. "Enabling intelligent transferable energy management of series hybrid electric tracked vehicle across motion dimensions via soft actor-critic algorithm," Energy, Elsevier, vol. 294(C).
    20. Jin Li & Ye Luo & Xiaowei Zhang, 2021. "Causal Reinforcement Learning: An Instrumental Variable Approach," Papers 2103.04021, arXiv.org, revised Sep 2022.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:intdis:v:18:y:2022:i:2:p:15501477211070202. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.