
Sparse recovery via nonconvex regularized M-estimators over ℓq-balls

Author

Listed:
  • Li, Xin
  • Wu, Dongya
  • Li, Chong
  • Wang, Jinhua
  • Yao, Jen-Chih

Abstract

The recovery properties of nonconvex regularized M-estimators are analysed under a general sparsity assumption on the true parameter. On the statistical side, a recovery bound for any stationary point of the nonconvex regularized M-estimator is established under certain regularity conditions. On the computational side, the proximal gradient method is used to solve the nonconvex optimization problem and is proved to achieve a linear convergence rate, by virtue of a slight decomposition of the objective function. In particular, for commonly used regularizers such as SCAD and MCP, a simpler decomposition applies thanks to the assumption on the regularizer, which helps to construct estimators with better recovery performance. As an application, theoretical consequences are obtained for the corrupted errors-in-variables linear regression model by verifying the required conditions. Finally, several numerical experiments demonstrate the statistical and computational results as well as the advantages of the assumptions; the simulation results show remarkable consistency with the theory under high-dimensional scaling.
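To make the pipeline the abstract describes concrete, here is a minimal sketch in Python, assuming a Loh–Wainwright-style formulation of the corrupted errors-in-variables linear model and using the MCP regularizer, whose proximal map has a simple closed form (SCAD admits an analogous thresholding rule). This is not the authors' code: the names and tuning choices (lam, gamma, the step-size rule) are illustrative assumptions, and the sketch omits the side constraint typically imposed in the genuinely high-dimensional regime.

```python
# Illustrative sketch, not the authors' code: a nonconvex regularized
# M-estimator for corrupted linear regression, solved by proximal gradient.
import numpy as np

def mcp_prox(z, step, lam, gamma=3.0):
    """Elementwise proximal map of the MCP regularizer with step size
    `step` (requires step < gamma): the 'firm' thresholding rule."""
    soft = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    firm = soft / (1.0 - step / gamma)      # shrinks less than soft-thresholding
    return np.where(np.abs(z) <= gamma * lam, firm, z)  # no shrinkage far from 0

def prox_gradient(grad, prox, theta0, step, n_iter=1000):
    """Generic proximal gradient loop: theta <- prox(theta - step*grad(theta))."""
    theta = theta0.copy()
    for _ in range(n_iter):
        theta = prox(theta - step * grad(theta), step)
    return theta

# Corrupted design: we observe Z = X + W instead of the true covariates X.
rng = np.random.default_rng(0)
n, p, s = 200, 50, 5
theta_star = np.zeros(p)
theta_star[:s] = 1.0                         # sparse true parameter
X = rng.standard_normal((n, p))
y = X @ theta_star + 0.1 * rng.standard_normal(n)
sigma_w = 0.2                                # assumed-known design-noise level
Z = X + sigma_w * rng.standard_normal((n, p))

# Bias-corrected surrogates for the quadratic loss
#   L(theta) = 0.5 * theta' Gamma theta - gamma_vec' theta;
# Gamma can be indefinite, so the loss itself may be nonconvex.
Gamma = Z.T @ Z / n - sigma_w**2 * np.eye(p)
gamma_vec = Z.T @ y / n

lam = 0.1                                    # illustrative, of order sqrt(log p / n)
step = 1.0 / np.linalg.norm(Gamma, 2)        # 1/L for the quadratic part
theta_hat = prox_gradient(
    grad=lambda th: Gamma @ th - gamma_vec,
    prox=lambda v, s_: mcp_prox(v, s_, lam),
    theta0=np.zeros(p),
    step=step,
)
print("estimation error:", np.linalg.norm(theta_hat - theta_star))
```

The nonconvexity here comes from both the MCP penalty and the possibly indefinite bias-corrected matrix Gamma; the statistical guarantee described in the abstract applies to any stationary point of such an iteration, and the iteration itself converges linearly once the regularity conditions hold.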

Suggested Citation

  • Li, Xin & Wu, Dongya & Li, Chong & Wang, Jinhua & Yao, Jen-Chih, 2020. "Sparse recovery via nonconvex regularized M-estimators over ℓq-balls," Computational Statistics & Data Analysis, Elsevier, vol. 152(C).
  • Handle: RePEc:eee:csdana:v:152:y:2020:i:c:s0167947320301389
    DOI: 10.1016/j.csda.2020.107047

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167947320301389
    Download Restriction: Full text for ScienceDirect subscribers only.

    File URL: https://libkey.io/10.1016/j.csda.2020.107047?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Orre, R. & Lansner, A. & Bate, A. & Lindquist, M., 2000. "Bayesian neural networks with confidence estimations applied to data mining," Computational Statistics & Data Analysis, Elsevier, vol. 34(4), pages 473-493, October.
    2. Yang, Yuhong, 2000. "Combining Different Procedures for Adaptive Regression," Journal of Multivariate Analysis, Elsevier, vol. 74(1), pages 135-161, July.
    3. Li, Peili & Xiao, Yunhai, 2018. "An efficient algorithm for sparse inverse covariance matrix estimation based on dual formulation," Computational Statistics & Data Analysis, Elsevier, vol. 128(C), pages 292-307.
    4. Nesterov, Yu., 2007. "Gradient methods for minimizing composite objective function," LIDAM Discussion Papers CORE 2007076, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE).
    5. Hui Zou & Trevor Hastie, 2005. "Addendum: Regularization and variable selection via the elastic net," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 67(5), pages 768-768, November.
    6. Hui Zou & Trevor Hastie, 2005. "Regularization and variable selection via the elastic net," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 67(2), pages 301-320, April.
    7. Fan, Jianqing & Li, Runze, 2001. "Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties," Journal of the American Statistical Association, American Statistical Association, vol. 96, pages 1348-1360, December.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Umberto Amato & Anestis Antoniadis & Italia De Feis & Irene Gijbels, 2021. "Penalised robust estimators for sparse and high-dimensional linear models," Statistical Methods & Applications, Springer;Società Italiana di Statistica, vol. 30(1), pages 1-48, March.
    2. Tutz, Gerhard & Pößnecker, Wolfgang & Uhlmann, Lorenz, 2015. "Variable selection in general multinomial logit models," Computational Statistics & Data Analysis, Elsevier, vol. 82(C), pages 207-222.
    3. Margherita Giuzio, 2017. "Genetic algorithm versus classical methods in sparse index tracking," Decisions in Economics and Finance, Springer;Associazione per la Matematica, vol. 40(1), pages 243-256, November.
    4. Shuichi Kawano, 2014. "Selection of tuning parameters in bridge regression models via Bayesian information criterion," Statistical Papers, Springer, vol. 55(4), pages 1207-1223, November.
    5. Yize Zhao & Matthias Chung & Brent A. Johnson & Carlos S. Moreno & Qi Long, 2016. "Hierarchical Feature Selection Incorporating Known and Novel Biological Information: Identifying Genomic Features Related to Prostate Cancer Recurrence," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 111(516), pages 1427-1439, October.
    6. Qianyun Li & Runmin Shi & Faming Liang, 2019. "Drug sensitivity prediction with high-dimensional mixture regression," PLOS ONE, Public Library of Science, vol. 14(2), pages 1-18, February.
    7. Changrong Yan & Dixin Zhang, 2013. "Sparse dimension reduction for survival data," Computational Statistics, Springer, vol. 28(4), pages 1835-1852, August.
    8. Gareth M. James & Peter Radchenko & Jinchi Lv, 2009. "DASSO: connections between the Dantzig selector and lasso," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 71(1), pages 127-142, January.
    9. Soave, David & Lawless, Jerald F., 2023. "Regularized regression for two phase failure time studies," Computational Statistics & Data Analysis, Elsevier, vol. 182(C).
    10. Zemin Zheng & Jie Zhang & Yang Li, 2022. "L 0 -Regularized Learning for High-Dimensional Additive Hazards Regression," INFORMS Journal on Computing, INFORMS, vol. 34(5), pages 2762-2775, September.
    11. Alexander Chudik & George Kapetanios & M. Hashem Pesaran, 2016. "Big Data Analytics: A New Perspective," CESifo Working Paper Series 5824, CESifo.
    12. Xiao Ni & Daowen Zhang & Hao Helen Zhang, 2010. "Variable Selection for Semiparametric Mixed Models in Longitudinal Studies," Biometrics, The International Biometric Society, vol. 66(1), pages 79-88, March.
    13. Camila Epprecht & Dominique Guegan & Álvaro Veiga & Joel Correa da Rosa, 2017. "Variable selection and forecasting via automated methods for linear models: LASSO/adaLASSO and Autometrics," Post-Print halshs-00917797, HAL.
    14. Zichen Zhang & Ye Eun Bae & Jonathan R. Bradley & Lang Wu & Chong Wu, 2022. "SUMMIT: An integrative approach for better transcriptomic data imputation improves causal gene identification," Nature Communications, Nature, vol. 13(1), pages 1-12, December.
    15. Wang, Christina Dan & Chen, Zhao & Lian, Yimin & Chen, Min, 2022. "Asset selection based on high frequency Sharpe ratio," Journal of Econometrics, Elsevier, vol. 227(1), pages 168-188.
    16. Bartosz Uniejewski, 2024. "Regularization for electricity price forecasting," Papers 2404.03968, arXiv.org.
    17. Peter Bühlmann & Jacopo Mandozzi, 2014. "High-dimensional variable screening and bias in subsequent inference, with an empirical comparison," Computational Statistics, Springer, vol. 29(3), pages 407-430, June.
    18. Peter Martey Addo & Dominique Guegan & Bertrand Hassani, 2018. "Credit Risk Analysis Using Machine and Deep Learning Models," Risks, MDPI, vol. 6(2), pages 1-20, April.
    19. Capanu, Marinela & Giurcanu, Mihai & Begg, Colin B. & Gönen, Mithat, 2023. "Subsampling based variable selection for generalized linear models," Computational Statistics & Data Analysis, Elsevier, vol. 184(C).
    20. Weng, Jiaying, 2022. "Fourier transform sparse inverse regression estimators for sufficient variable selection," Computational Statistics & Data Analysis, Elsevier, vol. 168(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:csdana:v:152:y:2020:i:c:s0167947320301389. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/csda.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.