
Deep-Reinforcement-Learning-Based Active Disturbance Rejection Control for Lateral Path Following of Parafoil System

Author

Listed:
  • Yuemin Zheng

    (College of Artificial Intelligence, Nankai University, Tianjin 300350, China)

  • Jin Tao

    (College of Artificial Intelligence, Nankai University, Tianjin 300350, China
    Silo AI, 00100 Helsinki, Finland)

  • Qinglin Sun

    (College of Artificial Intelligence, Nankai University, Tianjin 300350, China)

  • Hao Sun

    (College of Artificial Intelligence, Nankai University, Tianjin 300350, China)

  • Zengqiang Chen

    (College of Artificial Intelligence, Nankai University, Tianjin 300350, China
    Key Laboratory of Intelligent Robotics of Tianjin, Nankai University, Tianjin 300350, China)

  • Mingwei Sun

    (College of Artificial Intelligence, Nankai University, Tianjin 300350, China)

  • Feng Duan

    (College of Artificial Intelligence, Nankai University, Tianjin 300350, China)

Abstract

The path-following control of a parafoil system is essential for executing missions such as accurate homing and delivery. In this paper, the lateral path-following control of the parafoil system is studied. First, considering the relative motion between the parafoil canopy and the payload, an eight-degree-of-freedom (DOF) model of the parafoil system is constructed. Then, a guidance law combining the position deviation and the heading-angle deviation is proposed. Moreover, a linear active disturbance rejection controller (LADRC) is designed based on the guidance law so that the parafoil system can track the desired path despite internal unmodeled dynamics and external environmental disturbances. For adaptive tuning of the controller parameters, a deep Q-network (DQN) is applied to the LADRC-based path-following control system, allowing the controller parameters to be adjusted in real time according to the system’s states. Finally, the proposed method is applied to a parafoil system following circular and straight paths in an environment with wind disturbances to verify its effectiveness. The simulation results show that the proposed method is an effective means of realizing the lateral path-following control of the parafoil system, and it can also promote the development of intelligent controllers.
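The abstract describes a two-layer scheme: a guidance law turns the lateral path deviation into a heading-angle command, a linear ADRC tracks that command, and a DQN adjusts the LADRC parameters online. The sketch below is only a minimal illustration of that structure, not the authors' code: it implements a first-order LADRC (linear extended state observer plus proportional law) on a toy heading-dynamics model and exposes the bandwidth pair (wo, wc) to a discrete-action tuner. The toy plant, the state vector, the action set, and select_action() are all assumptions, and the DQN itself (network, replay buffer, target updates) as well as the 8-DOF parafoil model are omitted.

```python
# Hypothetical sketch: first-order LADRC with bandwidths exposed to a discrete-action tuner.
import numpy as np

class LADRC:
    """First-order linear ADRC: linear extended state observer (LESO) + proportional law."""
    def __init__(self, wo, wc, b0, dt):
        self.wo, self.wc, self.b0, self.dt = wo, wc, b0, dt
        self.z1 = 0.0   # estimate of the controlled output (heading angle)
        self.z2 = 0.0   # estimate of the total (internal + external) disturbance
        self.u = 0.0    # previous control input

    def step(self, y, r):
        beta1, beta2 = 2.0 * self.wo, self.wo ** 2   # observer gains set by bandwidth wo
        e = self.z1 - y
        self.z1 += self.dt * (self.z2 + self.b0 * self.u - beta1 * e)
        self.z2 += self.dt * (-beta2 * e)
        # Proportional tracking of the reference plus compensation of the estimated disturbance.
        self.u = (self.wc * (r - self.z1) - self.z2) / self.b0
        return self.u

# Discrete action set: multiplicative adjustments to (wo, wc) a DQN could choose from.
ACTIONS = [(1.0, 1.0), (1.1, 1.0), (0.9, 1.0), (1.0, 1.1), (1.0, 0.9)]

def select_action(state):
    # Placeholder for a trained DQN's greedy policy, argmax_a Q(state, a).
    # Here it simply keeps the current bandwidths.
    return 0

# Toy first-order heading dynamics psi_dot = u + d with a wind-like disturbance d.
dt, T = 0.01, 10.0
ctrl = LADRC(wo=20.0, wc=5.0, b0=1.0, dt=dt)
psi, psi_ref = 0.0, 0.5                          # heading angle and its command (rad)
for k in range(int(T / dt)):
    state = np.array([psi_ref - psi, ctrl.z2])   # e.g., tracking error and disturbance estimate
    kw, kc = ACTIONS[select_action(state)]
    ctrl.wo *= kw
    ctrl.wc *= kc
    u = ctrl.step(psi, psi_ref)
    d = 0.2 * np.sin(0.5 * k * dt)               # unmodeled wind disturbance
    psi += dt * (u + d)
print(f"final heading error: {psi_ref - psi:.4f} rad")
```

In the paper, a DQN trained on the system's states would take the place of select_action and supply the parameter adjustments; the sketch only shows where such a policy plugs into the LADRC loop.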

Suggested Citation

  • Yuemin Zheng & Jin Tao & Qinglin Sun & Hao Sun & Zengqiang Chen & Mingwei Sun & Feng Duan, 2022. "Deep-Reinforcement-Learning-Based Active Disturbance Rejection Control for Lateral Path Following of Parafoil System," Sustainability, MDPI, vol. 15(1), pages 1-18, December.
  • Handle: RePEc:gam:jsusta:v:15:y:2022:i:1:p:435-:d:1016532

    Download full text from publisher

    File URL: https://www.mdpi.com/2071-1050/15/1/435/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2071-1050/15/1/435/
    Download Restriction: no

    References listed on IDEAS

    1. Ronghui Li & Tieshan Li & Renxiang Bu & Qinling Zheng & C. L. Philip Chen, 2013. "Active Disturbance Rejection with Sliding Mode Control Based Course and Path Following for Underactuated Ships," Mathematical Problems in Engineering, Hindawi, vol. 2013, pages 1-9, November.
    2. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Hamza Assia & Houari Merabet Boulouiha & William David Chicaiza & Juan Manuel Escaño & Abderrahmane Kacimi & José Luis Martínez-Ramos & Mouloud Denai, 2023. "Wind Turbine Active Fault Tolerant Control Based on Backstepping Active Disturbance Rejection Control and a Neurofuzzy Detector," Energies, MDPI, vol. 16(14), pages 1-22, July.
    2. Yuemin Zheng & Jin Tao & Qinglin Sun & Hao Sun & Zengqiang Chen & Mingwei Sun, 2023. "Adaptive Active Disturbance Rejection Load Frequency Control for Power System with Renewable Energies Using the Lyapunov Reward-Based Twin Delayed Deep Deterministic Policy Gradient Algorithm," Sustainability, MDPI, vol. 15(19), pages 1-25, October.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    2. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    3. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    4. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
  • Jacob Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    5. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    6. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    7. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    8. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    9. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    10. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    11. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    12. Zichen Lu & Ying Yan, 2024. "Temperature Control of Fuel Cell Based on PEI-DDPG," Energies, MDPI, vol. 17(7), pages 1-19, April.
    13. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    14. Wang, Xuan & Shu, Gequn & Tian, Hua & Wang, Rui & Cai, Jinwen, 2020. "Operation performance comparison of CCHP systems with cascade waste heat recovery systems by simulation and operation optimisation," Energy, Elsevier, vol. 206(C).
    15. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    16. Parvez Farazi, Nahid & Zou, Bo & Tulabandhula, Theja, 2022. "Dynamic On-Demand Crowdshipping Using Constrained and Heuristics-Embedded Double Dueling Deep Q-Network," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 166(C).
    17. Louback, Eduardo & Biswas, Atriya & Machado, Fabricio & Emadi, Ali, 2024. "A review of the design process of energy management systems for dual-motor battery electric vehicles," Renewable and Sustainable Energy Reviews, Elsevier, vol. 193(C).
    18. Brammer, Janis & Lutz, Bernhard & Neumann, Dirk, 2022. "Permutation flow shop scheduling with multiple lines and demand plans using reinforcement learning," European Journal of Operational Research, Elsevier, vol. 299(1), pages 75-86.
    19. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    20. Tri-Hai Nguyen & Laihyuk Park, 2023. "HAP-Assisted RSMA-Enabled Vehicular Edge Computing: A DRL-Based Optimization Framework," Mathematics, MDPI, vol. 11(10), pages 1-23, May.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:15:y:2022:i:1:p:435-:d:1016532. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.