
Tuning-free sparse clustering via alternating hard-thresholding

Author

Listed:
  • Dong, Wei
  • Xu, Chen
  • Xie, Jinhan
  • Tang, Niansheng

Abstract

Model-based clustering is a commonly used technique for partitioning heterogeneous data into homogeneous groups. When the analysis involves a large number of features, analysts face simultaneous challenges in model interpretability, clustering accuracy, and computational efficiency. Several Bayesian and penalization methods have been proposed to select important features for model-based clustering. However, the performance of these methods relies on careful algorithmic tuning, which can be time-consuming in high-dimensional settings. In this paper, we propose a new sparse clustering method based on alternating hard-thresholding. The new method is conceptually simple and tuning-free. Given a user-specified sparsity level, it efficiently detects a set of key features by eliminating the large number of features that are of little use for clustering. Based on the selected key features, one can readily obtain an effective clustering of the original high-dimensional data under a general sparse covariance structure. Under mild conditions, we show that the new method yields clusters whose misclassification rate is consistent with the optimal rate achievable when the underlying true model is used. The promising performance of the new method is supported by both simulated and real data examples.
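The abstract describes an alternate-and-threshold scheme: given a user-specified sparsity level, cluster on the currently selected features, then hard-threshold to keep only the features most useful for the current clustering. The sketch below is a minimal illustration of that general idea, not the authors' algorithm: the k-means-style updates, the between-cluster sum-of-squares feature score, and the function name are all assumptions made for the sake of a short runnable example.

```python
import numpy as np

def sparse_cluster_aht(X, k, s, n_iter=20, seed=0):
    """Toy sparse clustering via alternating hard-thresholding.

    Alternates (a) k-means-style reassignment using only the currently
    selected features with (b) a hard-thresholding step that keeps the s
    features with the largest between-cluster sum of squares under the
    current labels.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Initialize with the s highest-variance features and k random seed rows.
    active = np.argsort(X.var(axis=0))[::-1][:s]
    centers = X[rng.choice(n, size=k, replace=False)][:, active]
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        Xa = X[:, active]
        # (a) assign each point to its nearest center on the active features
        d = ((Xa[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # (b) hard-threshold: keep the s features with the largest
        # between-cluster sum of squares given the current labels
        overall = X.mean(axis=0)
        bss = np.zeros(p)
        for j in range(k):
            mask = labels == j
            if mask.any():
                bss += mask.sum() * (X[mask].mean(axis=0) - overall) ** 2
        active = np.argsort(bss)[::-1][:s]
        # recompute centers on the (possibly new) active feature set
        Xa = X[:, active]
        centers = np.stack([Xa[labels == j].mean(axis=0) if (labels == j).any()
                            else Xa[rng.integers(n)] for j in range(k)])
    return labels, np.sort(active)
```

Note that, in the spirit of the abstract, the only user input besides the number of clusters is the sparsity level s; there is no penalty parameter to tune. On synthetic data where only the first two of fifty features separate the clusters, the threshold step quickly concentrates on those two features.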

Suggested Citation

  • Dong, Wei & Xu, Chen & Xie, Jinhan & Tang, Niansheng, 2024. "Tuning-free sparse clustering via alternating hard-thresholding," Journal of Multivariate Analysis, Elsevier, vol. 203(C).
  • Handle: RePEc:eee:jmvana:v:203:y:2024:i:c:s0047259x2400037x
    DOI: 10.1016/j.jmva.2024.105330

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0047259X2400037X
    Download Restriction: Full text for ScienceDirect subscribers only



    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Canhong Wen & Zhenduo Li & Ruipeng Dong & Yijin Ni & Wenliang Pan, 2023. "Simultaneous Dimension Reduction and Variable Selection for Multinomial Logistic Regression," INFORMS Journal on Computing, INFORMS, vol. 35(5), pages 1044-1060, September.
    2. Mingyang Ren & Sanguo Zhang & Junhui Wang, 2023. "Consistent estimation of the number of communities via regularized network embedding," Biometrics, The International Biometric Society, vol. 79(3), pages 2404-2416, September.
    3. Yu, Ke & Luo, Shan, 2024. "Rank-based sequential feature selection for high-dimensional accelerated failure time models with main and interaction effects," Computational Statistics & Data Analysis, Elsevier, vol. 197(C).
    4. Jack Jewson & David Rossell, 2022. "General Bayesian loss function selection and the use of improper models," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 84(5), pages 1640-1665, November.
    5. Yuyang Liu & Pengfei Pi & Shan Luo, 2023. "A semi-parametric approach to feature selection in high-dimensional linear regression models," Computational Statistics, Springer, vol. 38(2), pages 979-1000, June.
    6. Wei Dong & Hongzhen Liu, 2024. "Distributed Sparse Precision Matrix Estimation via Alternating Block-Based Gradient Descent," Mathematics, MDPI, vol. 12(5), pages 1-15, February.
    7. Thierry Chekouo & Alejandro Murua, 2018. "High-dimensional variable selection with the plaid mixture model for clustering," Computational Statistics, Springer, vol. 33(3), pages 1475-1496, September.
    8. Brian J. Reich & Howard D. Bondell, 2011. "A Spatial Dirichlet Process Mixture Model for Clustering Population Genetics Data," Biometrics, The International Biometric Society, vol. 67(2), pages 381-390, June.
    9. Dong Liu & Changwei Zhao & Yong He & Lei Liu & Ying Guo & Xinsheng Zhang, 2023. "Simultaneous cluster structure learning and estimation of heterogeneous graphs for matrix‐variate fMRI data," Biometrics, The International Biometric Society, vol. 79(3), pages 2246-2259, September.
    10. Alessandro Casa & Andrea Cappozzo & Michael Fop, 2022. "Group-Wise Shrinkage Estimation in Penalized Model-Based Clustering," Journal of Classification, Springer;The Classification Society, vol. 39(3), pages 648-674, November.
    11. Zeyu Wu & Cheng Wang & Weidong Liu, 2023. "A unified precision matrix estimation framework via sparse column-wise inverse operator under weak sparsity," Annals of the Institute of Statistical Mathematics, Springer;The Institute of Statistical Mathematics, vol. 75(4), pages 619-648, August.
    12. Zhang, Qingzhao & Ma, Shuangge & Huang, Yuan, 2021. "Promote sign consistency in the joint estimation of precision matrices," Computational Statistics & Data Analysis, Elsevier, vol. 159(C).
    13. Rendon Aguirre, Janeth Carolina, 2017. "Clustering Big Data by Extreme Kurtosis Projections," DES - Working Papers. Statistics and Econometrics. WS 24522, Universidad Carlos III de Madrid. Departamento de Estadística.
    14. Pircalabelu, Eugen & Artemiou, Andreas, 2021. "Graph informed sliced inverse regression," Computational Statistics & Data Analysis, Elsevier, vol. 164(C).
    15. Arias-Castro, Ery & Pu, Xiao, 2017. "A simple approach to sparse clustering," Computational Statistics & Data Analysis, Elsevier, vol. 105(C), pages 217-228.
    16. Crook Oliver M. & Gatto Laurent & Kirk Paul D. W., 2019. "Fast approximate inference for variable selection in Dirichlet process mixtures, with an application to pan-cancer proteomics," Statistical Applications in Genetics and Molecular Biology, De Gruyter, vol. 18(6), pages 1-20, December.
    17. Pedro Galeano & Daniel Peña, 2019. "Data science, big data and statistics," TEST: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 28(2), pages 289-329, June.
    18. Vahe Avagyan, 2022. "Precision matrix estimation using penalized Generalized Sylvester matrix equation," TEST: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 31(4), pages 950-967, December.
    19. Zhaoyu Xing & Yang Wan & Juan Wen & Wei Zhong, 2024. "GOLFS: feature selection via combining both global and local information for high dimensional clustering," Computational Statistics, Springer, vol. 39(5), pages 2651-2675, July.
    20. Cathy Maugis & Gilles Celeux & Marie-Laure Martin-Magniette, 2009. "Variable Selection for Clustering with Gaussian Mixture Models," Biometrics, The International Biometric Society, vol. 65(3), pages 701-709, September.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.