
Adversarial attacks and adversarial robustness in computational pathology

Authors

Listed:
  • Narmin Ghaffari Laleh

    (University Hospital RWTH Aachen, RWTH Aachen University)

  • Daniel Truhn

    (University Hospital Aachen)

  • Gregory Patrick Veldhuizen

    (Technical University Dresden)

  • Tianyu Han

    (RWTH Aachen University)

  • Marko van Treeck

    (University Hospital RWTH Aachen, RWTH Aachen University)

  • Roman D. Buelow

    (University Hospital RWTH Aachen)

  • Rupert Langer

    (University of Bern
    Kepler University Hospital, Johannes Kepler University Linz)

  • Bastian Dislich

    (University of Bern)

  • Peter Boor

    (University Hospital RWTH Aachen)

  • Volkmar Schulz

    (RWTH Aachen University
    Fraunhofer Institute for Digital Medicine MEVIS
    Hyperion Hybrid Imaging Systems GmbH)

  • Jakob Nikolas Kather

    (University Hospital RWTH Aachen, RWTH Aachen University
    Technical University Dresden
    University Hospital Heidelberg
    University of Leeds)

Abstract

Artificial Intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks, so it is essential to quantify and mitigate this risk before widespread clinical use. Here, we show that convolutional neural networks (CNNs) are highly susceptible to white- and black-box adversarial attacks in clinically relevant weakly supervised classification tasks. Adversarially robust training and dual batch normalization (DBN) are possible mitigation strategies but require precise knowledge of the type of attack used at inference. We demonstrate that vision transformers (ViTs) perform on par with CNNs at baseline, but are orders of magnitude more robust to white- and black-box attacks. At a mechanistic level, we show that this is associated with a more robust latent representation of clinically relevant categories in ViTs compared to CNNs. Our results are in line with previous theoretical studies and provide empirical evidence that ViTs are robust learners in computational pathology. This implies that a large-scale rollout of AI models in computational pathology should rely on ViTs rather than CNN-based classifiers to provide inherent protection against perturbation of the input data, in particular adversarial attacks.
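To make the threat model concrete, the following is a minimal, hedged sketch of a single-step white-box attack (the fast gradient sign method) against a generic image-tile classifier, written in PyTorch. The ResNet-18 stand-in model, the random "tiles" and labels, and the perturbation budget epsilon are illustrative assumptions only; they do not reproduce the authors' weakly supervised pipeline or their specific attack settings.

    import torch
    import torch.nn.functional as F
    import torchvision

    def fgsm_attack(model, images, labels, epsilon=4 / 255):
        """Return adversarially perturbed copies of `images` (single-step FGSM)."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step in the direction that increases the loss, then clamp to the valid pixel range.
        adversarial = images + epsilon * images.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

    if __name__ == "__main__":
        # Hypothetical stand-in for a tile-level classifier (e.g. tumour vs. non-tumour).
        model = torchvision.models.resnet18(num_classes=2).eval()
        tiles = torch.rand(8, 3, 224, 224)    # dummy batch of image tiles in [0, 1]
        labels = torch.randint(0, 2, (8,))    # dummy binary labels
        adv_tiles = fgsm_attack(model, tiles, labels)
        with torch.no_grad():
            clean_pred = model(tiles).argmax(dim=1)
            adv_pred = model(adv_tiles).argmax(dim=1)
        print("predictions flipped by the attack:", (clean_pred != adv_pred).sum().item())

In the study's terms, the fraction of predictions flipped by such perturbations is what the robustness comparison measures; the abstract reports that ViT-based classifiers change their outputs far less than CNNs under comparable white- and black-box attacks.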

Suggested Citation

  • Narmin Ghaffari Laleh & Daniel Truhn & Gregory Patrick Veldhuizen & Tianyu Han & Marko van Treeck & Roman D. Buelow & Rupert Langer & Bastian Dislich & Peter Boor & Volkmar Schulz & Jakob Nikolas Kather, 2022. "Adversarial attacks and adversarial robustness in computational pathology," Nature Communications, Nature, vol. 13(1), pages 1-10, December.
  • Handle: RePEc:nat:natcom:v:13:y:2022:i:1:d:10.1038_s41467-022-33266-0
    DOI: 10.1038/s41467-022-33266-0

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-022-33266-0
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-022-33266-0?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a location where you can use your library subscription to access this item

    References listed on IDEAS

    1. Tianyu Han & Sven Nebelung & Federico Pedersoli & Markus Zimmermann & Maximilian Schulze-Hagen & Michael Ho & Christoph Haarburger & Fabian Kiessling & Christiane Kuhl & Volkmar Schulz & Daniel Truhn, 2021. "Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization," Nature Communications, Nature, vol. 12(1), pages 1-11, December.
    Full references (including those not matched with items on IDEAS)


