Printed from https://ideas.repec.org/p/crs/wpaper/2017-32.html

Robust machine learning by median-of-means : theory and practice

Author

Listed:
  • Guillaume Lecué

    (CREST; CNRS; Université Paris Saclay)

  • Mathieu Lerasle

    (CNRS, Département de mathématiques d'Orsay)

Abstract

We introduce new estimators for robust machine learning based on median-of-means (MOM) estimators of the mean of real-valued random variables. These estimators achieve optimal rates of convergence under minimal assumptions on the dataset, which may moreover have been corrupted by outliers on which no assumption is made. We also analyze these new estimators with standard tools from robust statistics. In particular, we revisit the concept of breakdown point: we modify the original definition by studying the number of outliers that a dataset can contain without deteriorating the estimation properties of a given estimator. This new notion of breakdown number, which takes into account the statistical performance of the estimators, is non-asymptotic in nature and well adapted to machine learning purposes. We prove that the breakdown number of our estimator is of the order of (number of observations) × (rate of convergence). For instance, the breakdown number of our estimators for the problem of estimating a d-dimensional vector with noise variance σ² is σ²d, and it becomes σ²s log(ed/s) when this vector has only s non-zero components. Beyond this breakdown point, we prove that the rate of convergence achieved by our estimator is the number of outliers divided by the number of observations. Besides these theoretical guarantees, the major improvement brought by these new estimators is that they are easily computable in practice: essentially any algorithm used to approximate the standard Empirical Risk Minimizer (or its regularized versions) has a robust version approximating our estimators. On top of being robust to outliers, these "MOM versions" of the algorithms are even faster than the original ones, less demanding in memory resources in some situations, and well adapted to distributed datasets, which makes them particularly attractive for large-scale data analysis. As a proof of concept, we study many algorithms for the classical LASSO estimator.
It turns out that the original algorithm can be improved substantially in practice by randomizing the blocks on which "local means" are computed at each step of the descent algorithm. A byproduct of this modification is that our algorithms come with a measure of depth of data that can be used to detect outliers, which is another major issue in machine learning.
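The median-of-means estimator that underlies the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the sample is split into disjoint blocks, the empirical mean is computed on each block, and the median of the block means is returned. Because the median is insensitive to a minority of corrupted blocks, the estimate survives as long as fewer than half of the blocks contain outliers.

```python
import numpy as np

def median_of_means(x, n_blocks):
    """Median-of-means estimate of the mean of the sample x.

    The (shuffled) sample is split into n_blocks disjoint blocks; the
    empirical mean is computed on each block and the median of these
    block means is returned.  The estimate is unaffected by outliers
    as long as they contaminate fewer than half of the blocks.
    """
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(0)          # fixed seed for reproducibility
    idx = rng.permutation(len(x))           # shuffle before blocking
    blocks = np.array_split(idx, n_blocks)  # disjoint, near-equal blocks
    block_means = [x[b].mean() for b in blocks]
    return np.median(block_means)
```

For example, on a sample of 100 points equal to 2.0 plus three gross outliers at 10⁶, the empirical mean is pulled far away while the median of 11 block means stays at 2.0, since the three outliers can contaminate at most three blocks.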
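The "MOM version" of a descent algorithm mentioned in the abstract can also be sketched. The following is a simplified illustration under assumed details (plain least squares, a single median block per step), not the paper's exact procedure: at each iteration the data are split into fresh random blocks, the block whose empirical loss is the median of the block losses is selected, and the gradient step uses only that (hopefully outlier-free) block. Randomizing the blocks at every step is the modification the abstract highlights.

```python
import numpy as np

def mom_gradient_step(X, y, w, n_blocks, lr, rng):
    """One step of a MOM-style gradient descent for least squares.

    Data are split into n_blocks random blocks; the block whose
    empirical squared loss is the median of the block losses is
    selected, and a gradient step is taken on that block only.
    Blocks are re-randomized at every call.
    """
    idx = rng.permutation(len(y))
    blocks = np.array_split(idx, n_blocks)
    losses = [np.mean((X[b] @ w - y[b]) ** 2) for b in blocks]
    med = blocks[int(np.argsort(losses)[len(losses) // 2])]  # median block
    grad = 2.0 * X[med].T @ (X[med] @ w - y[med]) / len(med)
    return w - lr * grad
```

Because a handful of gross outliers can contaminate only a few blocks, and contaminated blocks have extreme losses, the median block is typically clean, so the iteration behaves like ordinary gradient descent on uncorrupted data.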

Suggested Citation

  • Guillaume Lecué & Mathieu Lerasle, 2017. "Robust machine learning by median-of-means : theory and practice," Working Papers 2017-32, Center for Research in Economics and Statistics.
  • Handle: RePEc:crs:wpaper:2017-32

    Download full text from publisher

    File URL: http://crest.science/RePEc/wpstorage/2017-32.pdf
    File Function: CREST working paper version
    Download Restriction: no


    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Pengfei Liu & Mengchen Zhang & Ru Zhang & Qin Zhou, 2021. "Robust Estimation and Tests for Parameters of Some Nonlinear Regression Models," Mathematics, MDPI, vol. 9(6), pages 1-16, March.
    2. Adarsh Prasad & Arun Sai Suggala & Sivaraman Balakrishnan & Pradeep Ravikumar, 2020. "Robust estimation via robust gradient estimation," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 82(3), pages 601-627, July.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Han, Dongxiao & Huang, Jian & Lin, Yuanyuan & Shen, Guohao, 2022. "Robust post-selection inference of high-dimensional mean regression with heavy-tailed asymmetric or heteroskedastic errors," Journal of Econometrics, Elsevier, vol. 230(2), pages 416-431.
    2. Man, Rebeka & Tan, Kean Ming & Wang, Zian & Zhou, Wen-Xin, 2024. "Retire: Robust expectile regression in high dimensions," Journal of Econometrics, Elsevier, vol. 239(2).
    3. Luo, Jiyu & Sun, Qiang & Zhou, Wen-Xin, 2022. "Distributed adaptive Huber regression," Computational Statistics & Data Analysis, Elsevier, vol. 169(C).
    4. van de Geer, Sara, 2016. "Worst possible sub-directions in high-dimensional models," Journal of Multivariate Analysis, Elsevier, vol. 146(C), pages 248-260.
    5. Ciuperca, Gabriela, 2021. "Variable selection in high-dimensional linear model with possibly asymmetric errors," Computational Statistics & Data Analysis, Elsevier, vol. 155(C).
    6. Lecué, Guillaume & Lerasle, Matthieu, 2019. "Learning from MOM’s principles: Le Cam’s approach," Stochastic Processes and their Applications, Elsevier, vol. 129(11), pages 4385-4410.
    7. Alexandre Belloni & Victor Chernozhukov & Kengo Kato, 2019. "Valid Post-Selection Inference in High-Dimensional Approximately Sparse Quantile Regression Models," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 114(526), pages 749-758, April.
    8. Chenchuan (Mark) Li & Ulrich K. Müller, 2021. "Linear regression with many controls of limited explanatory power," Quantitative Economics, Econometric Society, vol. 12(2), pages 405-442, May.
    9. Umberto Amato & Anestis Antoniadis & Italia De Feis & Irene Gijbels, 2021. "Penalised robust estimators for sparse and high-dimensional linear models," Statistical Methods & Applications, Springer;Società Italiana di Statistica, vol. 30(1), pages 1-48, March.
    10. Victor Chernozhukov & Whitney K. Newey & Victor Quintas-Martinez & Vasilis Syrgkanis, 2021. "Automatic Debiased Machine Learning via Riesz Regression," Papers 2104.14737, arXiv.org, revised Mar 2024.
    11. Guo, Xu & Li, Runze & Liu, Jingyuan & Zeng, Mudong, 2023. "Statistical inference for linear mediation models with high-dimensional mediators and application to studying stock reaction to COVID-19 pandemic," Journal of Econometrics, Elsevier, vol. 235(1), pages 166-179.
    12. Toshio Honda, 2021. "The de-biased group Lasso estimation for varying coefficient models," Annals of the Institute of Statistical Mathematics, Springer;The Institute of Statistical Mathematics, vol. 73(1), pages 3-29, February.
    13. Hansen, Christian & Liao, Yuan, 2019. "The Factor-Lasso And K-Step Bootstrap Approach For Inference In High-Dimensional Economic Applications," Econometric Theory, Cambridge University Press, vol. 35(3), pages 465-509, June.
    14. Alexandre Belloni & Victor Chernozhukov & Kengo Kato, 2013. "Uniform post selection inference for LAD regression and other z-estimation problems," CeMMAP working papers CWP74/13, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    15. Victor Chernozhukov & Denis Chetverikov & Mert Demirer & Esther Duflo & Christian Hansen & Whitney K. Newey, 2016. "Double machine learning for treatment and causal parameters," CeMMAP working papers 49/16, Institute for Fiscal Studies.
    16. Philipp Bach & Victor Chernozhukov & Malte S. Kurz & Martin Spindler & Sven Klaassen, 2021. "DoubleML -- An Object-Oriented Implementation of Double Machine Learning in R," Papers 2103.09603, arXiv.org, revised Jun 2024.
    17. Yumou Qiu & Jing Tao & Xiao‐Hua Zhou, 2021. "Inference of heterogeneous treatment effects using observational data with high‐dimensional covariates," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 83(5), pages 1016-1043, November.
    18. Semenova, Vira, 2023. "Debiased machine learning of set-identified linear models," Journal of Econometrics, Elsevier, vol. 235(2), pages 1725-1746.
    19. Fan, Jianqing & Guo, Yongyi & Jiang, Bai, 2022. "Adaptive Huber regression on Markov-dependent data," Stochastic Processes and their Applications, Elsevier, vol. 150(C), pages 802-818.
    20. Valérie Mignon & Celso Brunetti & Marc Joëts, 2023. "Reasons Behind Words: OPEC Narratives and the Oil Market," EconomiX Working Papers 2023-24, University of Paris Nanterre, EconomiX.

    More about this item

    NEP fields

    This paper has been announced in the following NEP Reports:

    Statistics


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:crs:wpaper:2017-32. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Secretariat General (email available below). General contact details of provider: https://edirc.repec.org/data/crestfr.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.