
Robust and sparse k-means clustering for high-dimensional data

Author

Listed:
  • Šárka Brodinová (TU Wien)
  • Peter Filzmoser (TU Wien)
  • Thomas Ortner (TU Wien)
  • Christian Breiteneder (TU Wien)
  • Maia Rohm (TU Wien)

Abstract

In real-world applications, the identification of groups poses a significant challenge due to the possible presence of outliers and noise variables. There is therefore a need for a clustering method capable of revealing the group structure in data containing both outliers and noise variables, without any prior knowledge. In this paper, we propose a k-means-based algorithm incorporating a weighting function which leads to an automatic weight assignment for each observation. In order to cope with noise variables, a lasso-type penalty is used in an objective function adjusted by the observation weights. Finally, we introduce a framework for selecting both the number of clusters and the number of informative variables, based on a modified gap statistic. Experiments on simulated and real-world data demonstrate the advantage of the method in identifying groups, outliers, and informative variables simultaneously.
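
The abstract outlines three ingredients: observation weights produced by a weighting function, a lasso-type penalty to suppress noise variables, and a modified gap statistic for choosing the number of clusters and variables. The Python sketch below illustrates how the first two ingredients can interact within a single alternating loop. It is a minimal illustration under simplifying assumptions (an ad hoc distance-based weighting function, a fixed shrinkage threshold instead of a tuned L1 bound, and no gap-statistic selection), not the authors' implementation.

    # Illustrative sketch, not the authors' exact algorithm: a weighted, sparse
    # k-means iteration in the spirit of Brodinova et al. (2019). The
    # observation-weighting function and the shrinkage threshold are
    # simplified placeholders.
    import numpy as np
    from sklearn.cluster import KMeans

    def soft_threshold(x, c):
        # Lasso-type soft-thresholding used to sparsify the feature weights.
        return np.sign(x) * np.maximum(np.abs(x) - c, 0.0)

    def robust_sparse_kmeans(X, k, n_iter=10, seed=0):
        n, p = X.shape
        w = np.full(p, 1.0 / np.sqrt(p))   # feature weights, ||w||_2 = 1
        v = np.ones(n)                     # observation weights (1 = fully trusted)
        for _ in range(n_iter):
            # 1) cluster the feature-weighted data, downweighting suspect observations
            Xw = X * np.sqrt(w)
            km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Xw, sample_weight=v)
            labels = km.labels_
            # 2) recompute observation weights from distances to the assigned centres
            #    (simple placeholder weighting function)
            d = np.linalg.norm(Xw - km.cluster_centers_[labels], axis=1)
            cut = np.quantile(d, 0.9)
            v = np.where(d <= cut, 1.0, (cut / np.maximum(d, 1e-12)) ** 2)
            # 3) per-variable weighted between-cluster sum of squares
            gm = np.average(X, axis=0, weights=v)
            bcss = np.zeros(p)
            for j in range(k):
                m = labels == j
                cm = np.average(X[m], axis=0, weights=v[m])
                bcss += v[m].sum() * (cm - gm) ** 2
            # 4) lasso-type shrinkage of the feature weights (fixed threshold here,
            #    instead of the search that enforces an L1 bound in sparse k-means)
            w_new = soft_threshold(bcss, 0.2 * bcss.max())
            if not w_new.any():
                w_new = bcss
            w = w_new / np.linalg.norm(w_new)
        return labels, w, v

In this sketch, the returned vector w indicates which variables drive the clustering (noise variables are shrunk toward zero), and small entries of v flag potential outliers. In the paper, the number of clusters and the sparsity level are chosen via the modified gap statistic rather than fixed in advance.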

Suggested Citation

  • Šárka Brodinová & Peter Filzmoser & Thomas Ortner & Christian Breiteneder & Maia Rohm, 2019. "Robust and sparse k-means clustering for high-dimensional data," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 13(4), pages 905-932, December.
  • Handle: RePEc:spr:advdac:v:13:y:2019:i:4:d:10.1007_s11634-019-00356-9
    DOI: 10.1007/s11634-019-00356-9

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s11634-019-00356-9
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s11634-019-00356-9?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Robert Tibshirani & Guenther Walther & Trevor Hastie, 2001. "Estimating the number of clusters in a data set via the gap statistic," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 63(2), pages 411-423.
    2. María Gallegos & Gunter Ritter, 2009. "Trimming algorithms for clustering contaminated grouped data and their robustness," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 3(2), pages 135-167, September.
    3. Kondo, Yumi & Salibian-Barrera, Matias & Zamar, Ruben, 2016. "RSKC: An R Package for a Robust and Sparse K-Means Clustering Algorithm," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 72(i05).
    4. Anthony C. Atkinson & Marco Riani & Andrea Cerioli, 2018. "Cluster detection and clustering with random start forward searches," Journal of Applied Statistics, Taylor & Francis Journals, vol. 45(5), pages 777-798, April.
    5. Sugar, Catherine A. & James, Gareth M., 2003. "Finding the Number of Clusters in a Dataset: An Information-Theoretic Approach," Journal of the American Statistical Association, American Statistical Association, vol. 98, pages 750-763, January.
    6. Filzmoser, Peter & Maronna, Ricardo & Werner, Mark, 2008. "Outlier identification in high dimensions," Computational Statistics & Data Analysis, Elsevier, vol. 52(3), pages 1694-1711, January.
    7. Pietro Coretto & Christian Hennig, 2016. "Robust Improper Maximum Likelihood: Tuning, Computation, and a Comparison With Other Methods for Robust Gaussian Clustering," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 111(516), pages 1648-1659, October.
    8. Luis García-Escudero & Alfonso Gordaliza & Carlos Matrán & Agustín Mayo-Iscar, 2010. "A review of robust clustering methods," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 4(2), pages 89-109, September.
    9. Andrea Cerioli & Marco Riani & Anthony C. Atkinson & Aldo Corbellini, 2018. "The power of monitoring: how to make the most of a contaminated multivariate sample," Statistical Methods & Applications, Springer;Società Italiana di Statistica, vol. 27(4), pages 559-587, December.
    10. Raftery, Adrian E. & Dean, Nema, 2006. "Variable Selection for Model-Based Clustering," Journal of the American Statistical Association, American Statistical Association, vol. 101, pages 168-178, March.
    11. Neykov, N. & Filzmoser, P. & Dimova, R. & Neytchev, P., 2007. "Robust fitting of mixtures using the trimmed likelihood estimator," Computational Statistics & Data Analysis, Elsevier, vol. 52(1), pages 299-308, September.
    12. Witten, Daniela M. & Tibshirani, Robert, 2010. "A Framework for Feature Selection in Clustering," Journal of the American Statistical Association, American Statistical Association, vol. 105(490), pages 713-726.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Kumar, Navin & Sood, Sandeep Kumar & Saini, Munish, 2024. "Internet of Vehicles (IoV) Based Framework for electricity Demand Forecasting in V2G," Energy, Elsevier, vol. 297(C).
    2. Mauricio Toledo-Acosta & Talin Barreiro & Asela Reig-Alamillo & Markus Müller & Fuensanta Aroca Bisquert & Maria Luisa Barrigon & Enrique Baca-Garcia & Jorge Hermosillo-Valadez, 2020. "Cognitive Emotional Embedded Representations of Text to Predict Suicidal Ideation and Psychiatric Symptoms," Mathematics, MDPI, vol. 8(11), pages 1-27, November.
    3. Rana Muhammad Adnan & Kulwinder Singh Parmar & Salim Heddam & Shamsuddin Shahid & Ozgur Kisi, 2021. "Suspended Sediment Modeling Using a Heuristic Regression Method Hybridized with Kmeans Clustering," Sustainability, MDPI, vol. 13(9), pages 1-21, April.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Alessio Farcomeni & Antonio Punzo, 2020. "Robust model-based clustering with mild and gross outliers," TEST: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 29(4), pages 989-1007, December.
    2. Gaynor, Sheila & Bair, Eric, 2017. "Identification of relevant subtypes via preweighted sparse clustering," Computational Statistics & Data Analysis, Elsevier, vol. 116(C), pages 139-154.
    3. Yujia Li & Xiangrui Zeng & Chien‐Wei Lin & George C. Tseng, 2022. "Simultaneous estimation of cluster number and feature sparsity in high‐dimensional cluster analysis," Biometrics, The International Biometric Society, vol. 78(2), pages 574-585, June.
    4. J. Fernando Vera & Rodrigo Macías, 2021. "On the Behaviour of K-Means Clustering of a Dissimilarity Matrix by Means of Full Multidimensional Scaling," Psychometrika, Springer;The Psychometric Society, vol. 86(2), pages 489-513, June.
    5. Peter Radchenko & Gourab Mukherjee, 2017. "Convex clustering via l1 fusion penalization," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 79(5), pages 1527-1546, November.
    6. Charles Bouveyron & Camille Brunet-Saumard, 2014. "Discriminative variable selection for clustering with the sparse Fisher-EM algorithm," Computational Statistics, Springer, vol. 29(3), pages 489-513, June.
    7. Floriello, Davide & Vitelli, Valeria, 2017. "Sparse clustering of functional data," Journal of Multivariate Analysis, Elsevier, vol. 154(C), pages 1-18.
    8. Fang, Yixin & Wang, Junhui, 2012. "Selection of the number of clusters via the bootstrap method," Computational Statistics & Data Analysis, Elsevier, vol. 56(3), pages 468-477.
    9. Arias-Castro, Ery & Pu, Xiao, 2017. "A simple approach to sparse clustering," Computational Statistics & Data Analysis, Elsevier, vol. 105(C), pages 217-228.
    10. Andrea Cappozzo & Luis Angel García Escudero & Francesca Greselin & Agustín Mayo-Iscar, 2021. "Parameter Choice, Stability and Validity for Robust Cluster Weighted Modeling," Stats, MDPI, vol. 4(3), pages 1-14, July.
    11. Lingsong Meng & Dorina Avram & George Tseng & Zhiguang Huo, 2022. "Outcome‐guided sparse K‐means for disease subtype discovery via integrating phenotypic data with high‐dimensional transcriptomic data," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 71(2), pages 352-375, March.
    12. Cappozzo, Andrea & Greselin, Francesca & Murphy, Thomas Brendan, 2021. "Robust variable selection for model-based learning in presence of adulteration," Computational Statistics & Data Analysis, Elsevier, vol. 158(C).
    13. Gallegos, María Teresa & Ritter, Gunter, 2010. "Using combinatorial optimization in model-based trimmed clustering with cardinality constraints," Computational Statistics & Data Analysis, Elsevier, vol. 54(3), pages 637-654, March.
    14. Fritz, Heinrich & García-Escudero, Luis A. & Mayo-Iscar, Agustín, 2013. "A fast algorithm for robust constrained clustering," Computational Statistics & Data Analysis, Elsevier, vol. 61(C), pages 124-136.
    15. Li, Pai-Ling & Chiou, Jeng-Min, 2011. "Identifying cluster number for subspace projected functional data clustering," Computational Statistics & Data Analysis, Elsevier, vol. 55(6), pages 2090-2103, June.
    16. Yaeji Lim & Hee-Seok Oh & Ying Kuen Cheung, 2019. "Multiscale Clustering for Functional Data," Journal of Classification, Springer;The Classification Society, vol. 36(2), pages 368-391, July.
    17. Zhaoyu Xing & Yang Wan & Juan Wen & Wei Zhong, 2024. "GOLFS: feature selection via combining both global and local information for high dimensional clustering," Computational Statistics, Springer, vol. 39(5), pages 2651-2675, July.
    18. Jeffrey Andrews & Paul McNicholas, 2014. "Variable Selection for Clustering and Classification," Journal of Classification, Springer;The Classification Society, vol. 31(2), pages 136-153, July.
    19. L. A. García-Escudero & A. Gordaliza & C. Matrán & A. Mayo-Iscar, 2018. "Comments on “The power of monitoring: how to make the most of a contaminated multivariate sample”," Statistical Methods & Applications, Springer;Società Italiana di Statistica, vol. 27(4), pages 605-608, December.
    20. Oliver Schaer & Nikolaos Kourentzes & Robert Fildes, 2022. "Predictive competitive intelligence with prerelease online search traffic," Production and Operations Management, Production and Operations Management Society, vol. 31(10), pages 3823-3839, October.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:advdac:v:13:y:2019:i:4:d:10.1007_s11634-019-00356-9. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.