Printed from https://ideas.repec.org/a/gam/jsusta/v14y2022i15p9680-d881595.html

Vision Transformer for Detecting Critical Situations and Extracting Functional Scenario for Automated Vehicle Safety Assessment

Author

Listed:
  • Minhee Kang

    (The Department of Smartcity, Hongik University, Seoul 04066, Korea; these authors contributed equally to this work.)

  • Wooseop Lee

    (Transportation Policy Division, Seoul Metropolitan Government, Seoul 04515, Korea; these authors contributed equally to this work.)

  • Keeyeon Hwang

    (The Department of Urban Planning, Hongik University, Seoul 04066, Korea)

  • Young Yoon

    (The Department of Computer Engineering, Hongik University, Seoul 04066, Korea)

Abstract

Automated Vehicles (AVs) are attracting attention as a safer mobility option thanks to recent advances in sensing technologies that realize a much quicker Perception–Reaction Time than Human-Driven Vehicles (HVs). However, AVs are not entirely free from the risk of accidents, and we currently lack a systematic and reliable method for improving AV safety functions. Manual composition of accident scenarios does not scale, and simulation-based methods do not fully cover the peculiar AV accident patterns that can occur in the real world. Artificial Intelligence (AI) techniques have been employed to identify the moments of accidents from ego-vehicle videos, but most AI-based approaches fall short in accounting for the probable causes of the accidents, and none of these AI-driven methods offers the details needed for authoring the accident scenarios used in AV safety testing. In this paper, we present a customized Vision Transformer (named ViT-TA) that accurately classifies the critical situations around traffic accidents and automatically points out the objects that are probable causes based on an Attention map. Using 24,740 frames from the Dashcam Accident Dataset (DAD) as training data, ViT-TA detected critical moments at Time-To-Collision (TTC) ≤ 1 s with 34.92% higher accuracy than the state-of-the-art approach. ViT-TA’s Attention map, which highlights the critical objects, helped us understand how situations unfold to put hypothetical ego vehicles equipped with AV functions at risk. Based on this ViT-TA-assisted interpretation, we systematized the composition of Functional scenarios, as conceptualized by the PEGASUS project, for describing a high-level plan to improve AVs’ capability to evade critical situations. We also propose a novel framework for automatically deriving Logical and Concrete scenarios specified with the 6-Layer situational variables defined by the PEGASUS project. We believe our work is a vital step toward systematically generating highly reliable and trustworthy safety improvement plans for AVs in a scalable manner.
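To make the classification-plus-attention idea in the abstract concrete, below is a minimal sketch, not the authors' ViT-TA code: a toy Vision-Transformer-style binary classifier over dashcam frames ("critical" vs. "normal") whose [CLS]-token attention weights can be read back as a coarse saliency map over image patches, analogous to the Attention map the paper uses to point at probable-cause objects. All names (TinyViTClassifier), sizes, and the single-attention-block design are illustrative assumptions; the actual ViT-TA is a full multi-layer Vision Transformer trained on DAD frames.

import torch
import torch.nn as nn

class TinyViTClassifier(nn.Module):
    """Toy ViT-style frame classifier that also exposes its [CLS] attention."""
    def __init__(self, img_size=224, patch=16, dim=192, heads=3, num_classes=2):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Non-overlapping patch embedding via a strided convolution.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        # (B, 3, H, W) -> (B, n_patches, dim)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        z = torch.cat([cls, tokens], dim=1) + self.pos_embed
        # One self-attention block; a real ViT stacks many of these.
        attn_out, attn_w = self.attn(z, z, z, need_weights=True,
                                     average_attn_weights=True)
        z = self.norm(z + attn_out)
        logits = self.head(z[:, 0])      # classify from the [CLS] token
        cls_attn = attn_w[:, 0, 1:]      # [CLS] -> patch attention weights
        return logits, cls_attn

model = TinyViTClassifier()
frame = torch.randn(1, 3, 224, 224)      # one dummy dashcam frame
logits, cls_attn = model(frame)
saliency = cls_attn.reshape(1, 14, 14)   # 14x14 patch grid for 224/16 patches
print(logits.shape, saliency.shape)      # torch.Size([1, 2]) torch.Size([1, 14, 14])

Reading the [CLS]-to-patch attention row as a 14x14 saliency grid is what lets an attention map be overlaid on the frame to highlight which regions (e.g., a crossing vehicle) drove the "critical" prediction; this mirrors, in miniature, how the paper interprets ViT-TA's Attention map to identify probable-cause objects.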

Suggested Citation

  • Minhee Kang & Wooseop Lee & Keeyeon Hwang & Young Yoon, 2022. "Vision Transformer for Detecting Critical Situations and Extracting Functional Scenario for Automated Vehicle Safety Assessment," Sustainability, MDPI, vol. 14(15), pages 1-19, August.
  • Handle: RePEc:gam:jsusta:v:14:y:2022:i:15:p:9680-:d:881595

    Download full text from publisher

    File URL: https://www.mdpi.com/2071-1050/14/15/9680/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2071-1050/14/15/9680/
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Zepeng Gao & Jianbo Feng & Chao Wang & Yu Cao & Bonan Qin & Tao Zhang & Senqi Tan & Riya Zeng & Hongbin Ren & Tongxin Ma & Youshan Hou & Jie Xiao, 2022. "Research on Vehicle Active Steering Stability Control Based on Variable Time Domain Input and State Information Prediction," Sustainability, MDPI, vol. 15(1), pages 1-18, December.
    2. Yihang Zhang & Yunsick Sung, 2023. "Hybrid Traffic Accident Classification Models," Mathematics, MDPI, vol. 11(4), pages 1-16, February.
    3. Yihang Zhang & Yunsick Sung, 2023. "Traffic Accident Detection Using Background Subtraction and CNN Encoder–Transformer Decoder in Video Frames," Mathematics, MDPI, vol. 11(13), pages 1-15, June.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:14:y:2022:i:15:p:9680-:d:881595. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.