Printed from https://ideas.repec.org/a/gam/jmathe/v13y2025i2p235-d1564968.html

SGD-TripleQNet: An Integrated Deep Reinforcement Learning Model for Vehicle Lane-Change Decision

Authors
  • Yang Liu

    (College of Intelligent Science and Information Engineering, Shenyang University, Shenyang 110044, China)

  • Tianxing Yang

    (College of Intelligent Science and Information Engineering, Shenyang University, Shenyang 110044, China)

  • Liwei Tian

    (College of Intelligent Science and Information Engineering, Shenyang University, Shenyang 110044, China)

  • Jianbiao Pei

    (College of Intelligent Science and Information Engineering, Shenyang University, Shenyang 110044, China)

Abstract

With the advancement of autonomous driving technology, vehicle lane-change decision (LCD) making has become critical to driving safety and efficiency. Traditional deep reinforcement learning (DRL) methods suffer from slow convergence, unstable decisions, and low accuracy in complex traffic environments. To address these issues, this paper proposes a novel integrated DRL model, "SGD-TripleQNet", for autonomous vehicle lane-change decision-making. The method integrates three types of deep Q-learning networks (DQN, DDQN, and Dueling DDQN) and uses the Stochastic Gradient Descent (SGD) optimization algorithm to dynamically adjust the network weights, fine-tuning them from gradient information to minimize the target loss function. Experiments show that SGD-TripleQNet has significant advantages over the single models: in convergence speed, it is approximately 25% faster than DQN, DDQN, and Dueling DDQN, stabilizing within 150 epochs; in decision stability, Q-value fluctuations are reduced by about 40% in the later stages of training; and in final performance, its average reward exceeds that of DQN (by 6.85%), DDQN (by 6.86%), and Dueling DDQN (by 6.57%), confirming the effectiveness of the proposed method. The work also provides a theoretical foundation and practical guidance for the design and optimization of future autonomous driving systems.
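One plausible reading of the weight-adjustment scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the SGD update acts on mixing weights over the three networks' Q-value estimates (the paper's exact formulation may differ), and it replaces the actual DQN, DDQN, and Dueling DDQN outputs with noisy stand-in values.

```python
import numpy as np

# Hedged sketch (hypothetical names and values): three Q-network estimates
# for the same state-action pair are fused by softmax-normalized mixing
# weights, and SGD on the squared TD error adjusts those weights.

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.zeros(3)  # one logit per network: DQN, DDQN, Dueling DDQN
lr = 1.0              # SGD step size (illustrative choice)

for step in range(500):
    # Stand-ins for the three networks' Q estimates and the TD target.
    q_estimates = np.array([1.0, 1.2, 0.9]) + rng.normal(0.0, 0.05, 3)
    td_target = 1.1

    w = softmax(logits)
    q_ensemble = w @ q_estimates
    error = q_ensemble - td_target      # TD error of the fused estimate

    # Gradient of 0.5 * error**2 w.r.t. the logits (softmax Jacobian).
    grad_logits = error * w * (q_estimates - q_ensemble)
    logits -= lr * grad_logits          # SGD step on the mixing weights

# After training, the fused estimate has moved toward the TD target.
w = softmax(logits)
print("mixing weights:", np.round(w, 3))
print("ensemble Q on mean estimates:", round(w @ np.array([1.0, 1.2, 0.9]), 3))
```

The softmax parameterization keeps the mixing weights positive and summing to one, so the ensemble Q-value always stays within the range spanned by the three individual estimates.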

Suggested Citation

  • Yang Liu & Tianxing Yang & Liwei Tian & Jianbiao Pei, 2025. "SGD-TripleQNet: An Integrated Deep Reinforcement Learning Model for Vehicle Lane-Change Decision," Mathematics, MDPI, vol. 13(2), pages 1-17, January.
  • Handle: RePEc:gam:jmathe:v:13:y:2025:i:2:p:235-:d:1564968

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/13/2/235/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/13/2/235/
    Download Restriction: no

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.