
Factorial and reduced K-means reconsidered

Author

Listed:
  • Timmerman, Marieke E.
  • Ceulemans, Eva
  • Kiers, Henk A.L.
  • Vichi, Maurizio

Abstract

Factorial K-means analysis (FKM) and Reduced K-means analysis (RKM) are clustering methods that aim at simultaneously achieving a clustering of the objects and a dimension reduction of the variables. Because a comprehensive comparison between FKM and RKM is so far lacking in the literature, a theoretical and simulation-based comparison of the two methods is provided. It is shown theoretically how the performance of FKM versus RKM is affected by the presence of residuals within the clustering subspace and/or within its orthocomplement in the observed data. The simulation study confirmed that, for both FKM and RKM, cluster membership recovery generally deteriorates with an increasing amount of overlap between clusters. Furthermore, the conjectures were confirmed that FKM's subspace recovery deteriorates as the subspace residuals grow relative to the complement residuals, and that the reverse holds for RKM. As such, FKM and RKM complement each other. When the majority of the variables reflect the clustering structure, and/or standardized variables are analyzed, RKM can be expected to perform reasonably well. However, because both RKM and FKM may suffer from subspace and membership recovery problems, it is essential to evaluate their solutions critically in light of the clustering problem at hand.
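
To make the contrast between the two criteria concrete, here is a minimal alternating-least-squares sketch in Python. It assumes numpy and scikit-learn are available and that X is an (n x p) column-centered data matrix; the function name subspace_kmeans, the fixed iteration cap, and the k-means restarts are illustrative choices, not the authors' implementation. RKM minimizes ||X - U C A'||^2, so its loadings A span the subspace with maximal between-cluster variance; FKM minimizes ||X A - U C||^2, so its loadings span the subspace with minimal within-cluster variance. For fixed loadings, both reduce to k-means on the projected scores X A.

    import numpy as np
    from sklearn.cluster import KMeans

    def subspace_kmeans(X, k, q, variant="rkm", n_iter=30, seed=0):
        """Illustrative RKM/FKM sketch: alternate k-means on the projected
        scores X @ A with an eigen-update of the loadings A."""
        n, p = X.shape
        # start from the first q principal components of the (centered) data
        A = np.linalg.svd(X, full_matrices=False)[2][:q].T           # (p x q)
        for _ in range(n_iter):
            km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X @ A)
            U = np.eye(k)[km.labels_]                                 # (n x k) cluster indicators
            P_U = U @ np.linalg.pinv(U)                               # projector onto the cluster space
            if variant == "rkm":
                # RKM: min ||X - U C A'||^2  ->  q largest eigenvalues of X' P_U X
                M = X.T @ P_U @ X
                cols = slice(-q, None)
            else:
                # FKM: min ||X A - U C||^2  ->  q smallest eigenvalues of X' (I - P_U) X
                M = X.T @ (np.eye(n) - P_U) @ X
                cols = slice(None, q)
            _, vecs = np.linalg.eigh(M)                               # eigenvalues in ascending order
            A = vecs[:, cols]                                         # columnwise-orthonormal loadings
        # final assignment and centroids in the reduced space
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X @ A)
        U = np.eye(k)[km.labels_]
        C = np.linalg.pinv(U) @ X @ A                                 # (k x q) centroids
        return km.labels_, A, C

A call such as subspace_kmeans(X, k=3, q=2, variant="fkm") returns cluster labels, loadings, and reduced-space centroids. Each half-step is a least-squares update (the centroid matrix C is profiled out in the eigen-step), but both criteria have local optima, so a fuller implementation would add multiple random starts and a convergence check on the loss rather than a fixed number of iterations.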

Suggested Citation

  • Timmerman, Marieke E. & Ceulemans, Eva & Kiers, Henk A.L. & Vichi, Maurizio, 2010. "Factorial and reduced K-means reconsidered," Computational Statistics & Data Analysis, Elsevier, vol. 54(7), pages 1858-1871, July.
  • Handle: RePEc:eee:csdana:v:54:y:2010:i:7:p:1858-1871

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167-9473(10)00074-5
    Download Restriction: Full text for ScienceDirect subscribers only.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Glenn Milligan & Martha Cooper, 1988. "A study of standardization of variables in cluster analysis," Journal of Classification, Springer;The Classification Society, vol. 5(2), pages 181-204, September.
    2. Lawrence Hubert & Phipps Arabie, 1985. "Comparing partitions," Journal of Classification, Springer;The Classification Society, vol. 2(1), pages 193-218, December.
    3. Henry Kaiser, 1958. "The varimax criterion for analytic rotation in factor analysis," Psychometrika, Springer;The Psychometric Society, vol. 23(3), pages 187-200, September.
    4. Norman Cliff, 1966. "Orthogonal rotation to congruence," Psychometrika, Springer;The Psychometric Society, vol. 31(1), pages 33-42, March.
    5. Douglas Steinley & Robert Henson, 2005. "OCLUS: An Analytic Method for Generating Clusters with Known Overlap," Journal of Classification, Springer;The Classification Society, vol. 22(2), pages 221-250, September.
    6. Douglas Steinley & Michael Brusco, 2008. "Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures," Psychometrika, Springer;The Psychometric Society, vol. 73(1), pages 125-144, March.
    7. Jan Schepers & Eva Ceulemans & Iven Mechelen, 2008. "Selecting Among Multi-Mode Partitioning Models of Different Complexities: A Comparison of Four Model Selection Criteria," Journal of Classification, Springer;The Classification Society, vol. 25(1), pages 67-85, June.
    8. Vichi, Maurizio & Kiers, Henk A. L., 2001. "Factorial k-means analysis for two-way data," Computational Statistics & Data Analysis, Elsevier, vol. 37(1), pages 49-64, July.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Michael C. Thrun & Alfred Ultsch, 2021. "Using Projection-Based Clustering to Find Distance- and Density-Based Clusters in High-Dimensional Data," Journal of Classification, Springer;The Classification Society, vol. 38(2), pages 280-312, July.
    2. Cristina Tortora & Mireille Gettler Summa & Marina Marino & Francesco Palumbo, 2016. "Factor probabilistic distance clustering (FPDC): a new clustering method," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 10(4), pages 441-464, December.
    3. Yoshikazu Terada, 2015. "Strong consistency of factorial K-means clustering," Annals of the Institute of Statistical Mathematics, Springer;The Institute of Statistical Mathematics, vol. 67(2), pages 335-357, April.
    4. Michio Yamamoto, 2012. "Clustering of functional data in a low-dimensional subspace," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 6(3), pages 219-247, October.
    5. Kim De Roover & Eva Ceulemans & Marieke Timmerman & John Nezlek & Patrick Onghena, 2013. "Modeling Differences in the Dimensionality of Multiblock Data by Means of Clusterwise Simultaneous Component Analysis," Psychometrika, Springer;The Psychometric Society, vol. 78(4), pages 648-668, October.
    6. Kensuke Tanioka & Hiroshi Yadohisa, 2019. "Simultaneous Method of Orthogonal Non-metric Non-negative Matrix Factorization and Constrained Non-hierarchical Clustering," Journal of Classification, Springer;The Classification Society, vol. 36(1), pages 73-93, April.
    7. Uno, Kohei & Satomura, Hironori & Adachi, Kohei, 2016. "Fixed factor analysis with clustered factor score constraint," Computational Statistics & Data Analysis, Elsevier, vol. 94(C), pages 265-274.
    8. Masaki Mitsuhiro & Hiroshi Yadohisa, 2015. "Reduced k-means clustering with MCA in a low-dimensional space," Computational Statistics, Springer, vol. 30(2), pages 463-475, June.
    9. Yoshikazu Terada, 2014. "Strong Consistency of Reduced K-means Clustering," Scandinavian Journal of Statistics, Danish Society for Theoretical Statistics;Finnish Statistical Society;Norwegian Statistical Association;Swedish Statistical Association, vol. 41(4), pages 913-931, December.
    10. Nickolay T. Trendafilov & Tsegay Gebrehiwot Gebru, 2016. "Recipes for sparse LDA of horizontal data," METRON, Springer;Sapienza Università di Roma, vol. 74(2), pages 207-221, August.
    11. Roberto Rocci & Stefano Gattone & Maurizio Vichi, 2011. "A New Dimension Reduction Method: Factor Discriminant K-means," Journal of Classification, Springer;The Classification Society, vol. 28(2), pages 210-226, July.
    12. Luca Greco & Antonio Lucadamo & Pietro Amenta, 2020. "An Impartial Trimming Approach for Joint Dimension and Sample Reduction," Journal of Classification, Springer;The Classification Society, vol. 37(3), pages 769-788, October.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Michael C. Thrun & Alfred Ultsch, 2021. "Using Projection-Based Clustering to Find Distance- and Density-Based Clusters in High-Dimensional Data," Journal of Classification, Springer;The Classification Society, vol. 38(2), pages 280-312, July.
    2. Michio Yamamoto, 2012. "Clustering of functional data in a low-dimensional subspace," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 6(3), pages 219-247, October.
    3. Kensuke Tanioka & Hiroshi Yadohisa, 2019. "Simultaneous Method of Orthogonal Non-metric Non-negative Matrix Factorization and Constrained Non-hierarchical Clustering," Journal of Classification, Springer;The Classification Society, vol. 36(1), pages 73-93, April.
    4. Roberto Rocci & Stefano Antonio Gattone & Roberto Di Mari, 2018. "A data driven equivariant approach to constrained Gaussian mixture modeling," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 12(2), pages 235-260, June.
    5. Jerzy Korzeniewski, 2016. "New Method Of Variable Selection For Binary Data Cluster Analysis," Statistics in Transition New Series, Polish Statistical Association, vol. 17(2), pages 295-304, June.
    6. Henk Kiers, 1994. "Simplimax: Oblique rotation to an optimal target with simple structure," Psychometrika, Springer;The Psychometric Society, vol. 59(4), pages 567-579, December.
    7. Douglas L. Steinley & M. J. Brusco, 2019. "Using an Iterative Reallocation Partitioning Algorithm to Verify Test Multidimensionality," Journal of Classification, Springer;The Classification Society, vol. 36(3), pages 397-413, October.
    8. DeSarbo, Wayne S. & Selin Atalay, A. & Blanchard, Simon J., 2009. "A three-way clusterwise multidimensional unfolding procedure for the spatial representation of context dependent preferences," Computational Statistics & Data Analysis, Elsevier, vol. 53(8), pages 3217-3230, June.
    9. Roberto Rocci & Stefano Gattone & Maurizio Vichi, 2011. "A New Dimension Reduction Method: Factor Discriminant K-means," Journal of Classification, Springer;The Classification Society, vol. 28(2), pages 210-226, July.
    10. Aurora Torrente & Juan Romo, 2021. "Initializing k-means Clustering by Bootstrap and Data Depth," Journal of Classification, Springer;The Classification Society, vol. 38(2), pages 232-256, July.
    11. Vichi, Maurizio & Saporta, Gilbert, 2009. "Clustering and disjoint principal component analysis," Computational Statistics & Data Analysis, Elsevier, vol. 53(8), pages 3194-3208, June.
    12. Masaki Mitsuhiro & Hiroshi Yadohisa, 2015. "Reduced k-means clustering with MCA in a low-dimensional space," Computational Statistics, Springer, vol. 30(2), pages 463-475, June.
    13. Naoto Yamashita & Shin-ichi Mayekawa, 2015. "A new biplot procedure with joint classification of objects and variables by fuzzy c-means clustering," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 9(3), pages 243-266, September.
    14. Michael Brusco & Douglas Steinley, 2015. "Affinity Propagation and Uncapacitated Facility Location Problems," Journal of Classification, Springer;The Classification Society, vol. 32(3), pages 443-480, October.
    15. Douglas Steinley & Michael Brusco, 2008. "Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures," Psychometrika, Springer;The Psychometric Society, vol. 73(1), pages 125-144, March.
    16. Dirk Depril & Iven Mechelen & Tom Wilderjans, 2012. "Lowdimensional Additive Overlapping Clustering," Journal of Classification, Springer;The Classification Society, vol. 29(3), pages 297-320, October.
    17. Donatella Vicari & Paolo Giordani, 2023. "CPclus: Candecomp/Parafac Clustering Model for Three-Way Data," Journal of Classification, Springer;The Classification Society, vol. 40(2), pages 432-465, July.
    18. M. Velden & A. Iodice D’Enza & F. Palumbo, 2017. "Cluster Correspondence Analysis," Psychometrika, Springer;The Psychometric Society, vol. 82(1), pages 158-185, March.
    19. Robert Jennrich, 2001. "A simple general procedure for orthogonal rotation," Psychometrika, Springer;The Psychometric Society, vol. 66(2), pages 289-306, June.
    20. Henk Kiers, 1997. "Techniques for rotating two or more loading matrices to optimal agreement and simple structure: A comparison and some technical details," Psychometrika, Springer;The Psychometric Society, vol. 62(4), pages 545-568, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:csdana:v:54:y:2010:i:7:p:1858-1871. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. Registering allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/csda .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.