
Evaluating and Enhancing the Robustness of Sustainable Neural Relationship Classifiers Using Query-Efficient Black-Box Adversarial Attacks

Author

Listed:
  • Ijaz Ul Haq

    (School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100811, China)

  • Zahid Younas Khan

    (School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100811, China
    Department of Computer Science and Information Technology, University of Azad Jammu and Kashmir, Muzaffarabad 13100, Pakistan)

  • Arshad Ahmad

    (Department of IT and Computer Science Pak-Austria Fachhochschule Institute of Applied Sciences and Technology, Haripur 22620, Pakistan)

  • Bashir Hayat

    (Institute of Management Sciences Peshawar, Peshawar 25100, Pakistan)

  • Asif Khan

    (School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100811, China)

  • Ye-Eun Lee

    (Department of Computer Science and Engineering, Chungnam National University, Daejeon 34134, Korea)

  • Ki-Il Kim

    (Department of Computer Science and Engineering, Chungnam National University, Daejeon 34134, Korea)

Abstract

Neural relation extraction (NRE) models are the backbone of various machine learning tasks, including knowledge base enrichment, information extraction, and document summarization. Despite the vast popularity of these models, their vulnerabilities remain largely unexplored; this is of high concern given their growing use in security-sensitive, sustainability-related applications such as question answering and machine translation. In this study, we demonstrate that NRE models are inherently vulnerable to adversarially crafted text that contains imperceptible modifications of the original but can mislead the target NRE model. Specifically, we propose a novel, sustainable term frequency-inverse document frequency (TF-IDF) based black-box adversarial attack to evaluate the robustness of state-of-the-art CNN, GCN, LSTM, and BERT-based models on two benchmark RE datasets. Compared with white-box adversarial attacks, black-box attacks impose further constraints on the query budget, so efficient black-box attacks remain an open problem. By applying TF-IDF to the correctly classified sentences of each class label in the test set, the proposed query-efficient method reduces the number of queries to the target model by up to 70% when identifying important text items. Based on these items, we design both character- and word-level perturbations to generate adversarial examples. The proposed attack reduces the average F1 score of six representative models from 80% to below 20%. Human evaluators judged the generated adversarial examples to be semantically similar to the originals. Moreover, we discuss defense strategies that mitigate such attacks and potential countermeasures that could be deployed to improve the sustainability of the proposed scheme.
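The attack pipeline described in the abstract can be illustrated with a minimal sketch in Python. This is not the authors' released code: the helper names, the synonym table, the top-10 cutoff, and the per-sentence edit budget are illustrative assumptions; only the overall idea (rank candidate words per class label with TF-IDF, then apply character- and word-level perturbations to the highest-ranked items) follows the abstract.

```python
# Sketch of a TF-IDF-guided black-box perturbation, assuming scikit-learn.
# Not the authors' implementation; helper names and parameters are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

def rank_words_by_tfidf(sentences_of_class):
    """Score words by mean TF-IDF over one class label's correctly
    classified sentences; high scores mark items to perturb first."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(sentences_of_class)        # (n_sentences, vocab)
    mean_scores = np.asarray(tfidf.mean(axis=0)).ravel()
    vocab = np.array(vec.get_feature_names_out())
    order = mean_scores.argsort()[::-1]
    return list(zip(vocab[order], mean_scores[order]))

def char_swap(word):
    """Character-level perturbation: swap two middle characters so the
    word stays visually close to the original."""
    if len(word) < 4:
        return word
    chars = list(word)
    i = len(chars) // 2
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def perturb(sentence, ranked_words, synonyms, budget=2):
    """Substitute (word-level) or mangle (char-level) top-ranked items,
    spending at most `budget` edits per sentence."""
    tokens = sentence.split()
    important = [w for w, _ in ranked_words][:10]        # illustrative cutoff
    edits = 0
    for k, tok in enumerate(tokens):
        if edits >= budget:
            break
        if tok.lower() in important:
            tokens[k] = synonyms.get(tok.lower(), char_swap(tok))
            edits += 1
    return " ".join(tokens)

# Toy usage: rank once per class label (this is what saves queries),
# then query the black-box model only on the perturbed sentences.
sents = ["the company acquired the startup", "the firm acquired a rival"]
ranked = rank_words_by_tfidf(sents)
print(perturb(sents[0], ranked, synonyms={"acquired": "obtained"}))
```

Ranking once per class label, rather than probing the target model word by word for every sentence, is what makes the query savings plausible: the expensive black-box queries are spent only on the finished adversarial candidates.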

Suggested Citation

  • Ijaz Ul Haq & Zahid Younas Khan & Arshad Ahmad & Bashir Hayat & Asif Khan & Ye-Eun Lee & Ki-Il Kim, 2021. "Evaluating and Enhancing the Robustness of Sustainable Neural Relationship Classifiers Using Query-Efficient Black-Box Adversarial Attacks," Sustainability, MDPI, vol. 13(11), pages 1-25, May.
  • Handle: RePEc:gam:jsusta:v:13:y:2021:i:11:p:5892-:d:560896

    Download full text from publisher

    File URL: https://www.mdpi.com/2071-1050/13/11/5892/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2071-1050/13/11/5892/
    Download Restriction: no

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Isa Ebtehaj & Keyvan Soltani & Afshin Amiri & Marzban Faramarzi & Chandra A. Madramootoo & Hossein Bonakdari, 2021. "Prognostication of Shortwave Radiation Using an Improved No-Tuned Fast Machine Learning," Sustainability, MDPI, vol. 13(14), pages 1-23, July.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:13:y:2021:i:11:p:5892-:d:560896. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.