
A dynamic spectrum access algorithm based on deep reinforcement learning with novel multi-vehicle reward functions in cognitive vehicular networks

Author

Listed:
  • Lingling Chen

    (Jilin Institute of Chemical Technology
    Jilin University)

  • Ziwei Wang

    (Jilin Institute of Chemical Technology)

  • Xiaohui Zhao

    (Jilin University)

  • Xuan Shen

    (Jilin Institute of Chemical Technology)

  • Wei He

    (Jilin Institute of Chemical Technology)

Abstract

As transportation undergoes a revolution, the communication demand of vehicles is increasing, so improving the success rate of vehicle spectrum access has become a major problem to be solved. Previous research on dynamic spectrum access in cognitive vehicular networks (CVNs) considered only the case of a single vehicle accessing a channel, leaving spectrum resources underutilized. To fully utilize spectrum resources, a model for spectrum sharing among multiple secondary vehicles (SVs) and a primary vehicle (PV) is proposed. The model covers scenarios in which multiple SVs share spectrum to maximize the average quality of service (QoS) of vehicles, under the constraint that the total interference generated by vehicles accessing the same channel remains below an interference threshold. This paper proposes a deep Q-network algorithm with modified reward functions (IDQN) to maximize the average QoS of PVs and SVs and to improve spectrum utilization. The algorithm designs different reward functions according to the QoS of PVs and SVs in different situations. Finally, the proposed algorithm is compared with the deep Q-network (DQN) and Q-learning algorithms on a Python simulation platform. The average access success rate of SVs under the proposed IDQN algorithm reaches 98%, an improvement of 18% over the Q-learning algorithm, and its convergence is 62.5% faster than that of the DQN algorithm. At the same time, the average QoS of PVs and the average QoS of SVs under IDQN both reach 2.4, improvements of 50% and 33% over the DQN algorithm, and of 60% and 140% over the Q-learning algorithm, respectively.

Suggested Citation

  • Lingling Chen & Ziwei Wang & Xiaohui Zhao & Xuan Shen & Wei He, 2024. "A dynamic spectrum access algorithm based on deep reinforcement learning with novel multi-vehicle reward functions in cognitive vehicular networks," Telecommunication Systems: Modelling, Analysis, Design and Management, Springer, vol. 87(2), pages 359-383, October.
  • Handle: RePEc:spr:telsys:v:87:y:2024:i:2:d:10.1007_s11235-024-01188-5
    DOI: 10.1007/s11235-024-01188-5

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s11235-024-01188-5
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s11235-024-01188-5?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Bejinaru Ruxandra & Toma Marian-Vladuț, 2024. "Enhancing Business Operations Through Microlearning, BPM and RPA," Proceedings of the International Conference on Business Excellence, Sciendo, vol. 18(1), pages 1831-1847.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:telsys:v:87:y:2024:i:2:d:10.1007_s11235-024-01188-5. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.