
Adversarial Robustness for Latent Models: Revisiting the Robust-Standard Accuracies Tradeoff

Author

Listed:
  • Adel Javanmard

    (Data Sciences and Operations Department, University of Southern California, Los Angeles, California 90089)

  • Mohammad Mehrabi

    (Data Sciences and Operations Department, University of Southern California, Los Angeles, California 90089)

Abstract

Over the past few years, several adversarial training methods have been proposed to improve the robustness of machine learning models against adversarial perturbations of the input. Despite remarkable progress in this regard, adversarial training is often observed to reduce the standard test accuracy. This phenomenon has led the research community to investigate the potential tradeoff between standard accuracy (a.k.a. generalization) and robust accuracy (a.k.a. robust generalization) as two performance measures. In this paper, we revisit this tradeoff for latent models and argue that it is mitigated when the data enjoy a low-dimensional structure. In particular, we consider binary classification under two data generative models, namely the Gaussian mixture model and the generalized linear model, where the feature data lie on a low-dimensional manifold. We develop a theory showing that the low-dimensional manifold structure allows one to obtain models that are nearly optimal with respect to both the standard and robust accuracy measures. We further corroborate our theory with several numerical experiments, including a Mixture of Factor Analyzers (MFA) model trained on the MNIST data set.
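
The following minimal sketch is not taken from the paper; it only illustrates the kind of setting the abstract describes, under assumptions of our own (the dimensions d and k, the sample size n, the budget eps, and the least-squares fit are all illustrative choices, not the authors' construction). It draws a Gaussian mixture whose class mean lies in a low-dimensional subspace of the ambient space and compares the standard accuracy of a simple linear classifier with its robust accuracy under an l2-bounded worst-case perturbation, which has a closed form for linear classifiers.

    # Illustrative sketch only (not the paper's method): Gaussian mixture data with a
    # mean vector supported on a low-dimensional subspace; standard vs. robust accuracy
    # of a linear classifier under an l2-bounded adversary.
    import numpy as np

    rng = np.random.default_rng(0)
    d, k, n, eps = 200, 5, 2000, 0.5   # ambient dim, latent dim, sample size, l2 attack budget (assumed values)

    # Mean vector lying in a k-dimensional subspace of R^d (the low-dimensional structure).
    U = np.linalg.qr(rng.standard_normal((d, k)))[0]   # orthonormal basis of the subspace
    mu = U @ rng.standard_normal(k)
    mu *= 3.0 / np.linalg.norm(mu)                     # fix the signal strength

    # Gaussian mixture model: y uniform on {-1, +1}, x = y * mu + z with z ~ N(0, I).
    y = rng.choice([-1.0, 1.0], size=n)
    X = y[:, None] * mu + rng.standard_normal((n, d))

    # Fit a linear classifier by least squares (a stand-in for any standard training rule).
    w = np.linalg.lstsq(X, y, rcond=None)[0]

    margins = y * (X @ w)
    std_acc = np.mean(margins > 0)
    # For a linear classifier, the worst-case l2 perturbation of size eps lowers every
    # margin by exactly eps * ||w||_2, so robust accuracy can be computed directly.
    rob_acc = np.mean(margins - eps * np.linalg.norm(w) > 0)
    print(f"standard accuracy: {std_acc:.3f}, robust accuracy (eps={eps}): {rob_acc:.3f}")

With a strong low-dimensional signal, the two accuracies in this toy setup stay close for moderate eps, which is the qualitative behavior the abstract attributes to low-dimensional structure.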

Suggested Citation

  • Adel Javanmard & Mohammad Mehrabi, 2024. "Adversarial Robustness for Latent Models: Revisiting the Robust-Standard Accuracies Tradeoff," Operations Research, INFORMS, vol. 72(3), pages 1016-1030, May.
  • Handle: RePEc:inm:oropre:v:72:y:2024:i:3:p:1016-1030
    DOI: 10.1287/opre.2022.0162

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/opre.2022.0162
    Download Restriction: no

    File URL: https://libkey.io/10.1287/opre.2022.0162?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:oropre:v:72:y:2024:i:3:p:1016-1030. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.