
Learning mixture models via component-wise parameter smoothing

Author

  • Reddy, Chandan K.
  • Rajaratnam, Bala

Abstract

The task of obtaining an optimal set of parameters to fit a mixture model has many applications in science and engineering domains and is a computationally challenging problem. A novel algorithm using a convolution-based smoothing approach to construct a hierarchy (or family) of smoothed log-likelihood surfaces is proposed. The approach smooths the likelihood function and applies the EM algorithm to obtain a promising solution on the smoothed surface. Using the most promising solutions as initial guesses, the EM algorithm is applied again on the original likelihood. Although the results are demonstrated using only two levels, the method can potentially be applied to any number of levels in the hierarchy. A theoretical insight demonstrates that the smoothing approach indeed reduces the overall gradient of a modified version of the likelihood surface. This optimization procedure effectively eliminates extensive searching in non-promising regions of the parameter space. Results on benchmark datasets demonstrate significant improvements of the proposed algorithm over other approaches. Empirical results on the reduction in the number of local maxima and on improvements in the initialization procedure are also provided.
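
The two-level scheme described in the abstract (fit on a smoothed surface, then refine on the original likelihood) can be sketched in a few lines of Python. The sketch below is illustrative only: it assumes scikit-learn's GaussianMixture as the EM engine, uses Gaussian jittering of the data with bandwidth sigma as a crude stand-in for the paper's convolution-based smoothing of the likelihood surface, and the function name smoothed_em is hypothetical.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def smoothed_em(X, n_components, sigma=0.5, n_restarts=10, random_state=0):
        """Two-level EM: coarse fit on a smoothed surrogate, then refinement."""
        rng = np.random.default_rng(random_state)

        # Level 1: EM on a "smoothed" surrogate surface. Jittering the data
        # broadens the effective likelihood and flattens shallow local maxima
        # (an assumed proxy for convolution-based smoothing of the likelihood).
        X_smooth = X + rng.normal(scale=sigma, size=X.shape)
        best = None
        for seed in range(n_restarts):
            gm = GaussianMixture(n_components=n_components, covariance_type="full",
                                 random_state=seed).fit(X_smooth)
            if best is None or gm.lower_bound_ > best.lower_bound_:
                best = gm

        # Level 2: EM on the original data, initialized at the most promising
        # level-1 solution (the "promising solution" reused as an initial guess).
        return GaussianMixture(n_components=n_components, covariance_type="full",
                               weights_init=best.weights_,
                               means_init=best.means_,
                               precisions_init=np.linalg.inv(best.covariances_),
                               random_state=random_state).fit(X)

A deeper hierarchy would repeat level 1 with a decreasing sequence of smoothing bandwidths before the final refinement; here sigma controls how aggressively the surrogate surface is flattened.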

Suggested Citation

  • Reddy, Chandan K. & Rajaratnam, Bala, 2010. "Learning mixture models via component-wise parameter smoothing," Computational Statistics & Data Analysis, Elsevier, vol. 54(3), pages 732-749, March.
  • Handle: RePEc:eee:csdana:v:54:y:2010:i:3:p:732-749

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167-9473(09)00162-5
    Download Restriction: Full text for ScienceDirect subscribers only.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Hunt, Lynette & Jorgensen, Murray, 2003. "Mixture model clustering for mixed data with missing information," Computational Statistics & Data Analysis, Elsevier, vol. 41(3-4), pages 429-440, January.
    2. Biernacki, Christophe & Celeux, Gilles & Govaert, Gerard, 2003. "Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models," Computational Statistics & Data Analysis, Elsevier, vol. 41(3-4), pages 561-575, January.
    3. Bohning, Dankmar & Seidel, Wilfried & Alfo, Marco & Garel, Bernard & Patilea, Valentin & Walther, Gunther, 2007. "Advances in Mixture Models," Computational Statistics & Data Analysis, Elsevier, vol. 51(11), pages 5205-5210, July.
    4. R. H. Shumway & D. S. Stoffer, 1982. "An Approach to Time Series Smoothing and Forecasting Using the EM Algorithm," Journal of Time Series Analysis, Wiley Blackwell, vol. 3(4), pages 253-264, July.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Franko, Mitja & Nagode, Marko, 2015. "Probability density function of the equivalent stress amplitude using statistical transformation," Reliability Engineering and System Safety, Elsevier, vol. 134(C), pages 118-125.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. O’Hagan, Adrian & Murphy, Thomas Brendan & Gormley, Isobel Claire, 2012. "Computational aspects of fitting mixture models via the expectation–maximization algorithm," Computational Statistics & Data Analysis, Elsevier, vol. 56(12), pages 3843-3864.
    2. Bohning, Dankmar & Seidel, Wilfried, 2003. "Editorial: recent developments in mixture models," Computational Statistics & Data Analysis, Elsevier, vol. 41(3-4), pages 349-357, January.
    3. Mazzocchi, Mario, 2006. "Time patterns in UK demand for alcohol and tobacco: an application of the EM algorithm," Computational Statistics & Data Analysis, Elsevier, vol. 50(9), pages 2191-2205, May.
    4. Adrian O’Hagan & Arthur White, 2019. "Improved model-based clustering performance using Bayesian initialization averaging," Computational Statistics, Springer, vol. 34(1), pages 201-231, March.
    5. Matteo Barigozzi & Matteo Luciani, 2024. "Quasi Maximum Likelihood Estimation and Inference of Large Approximate Dynamic Factor Models via the EM algorithm," Finance and Economics Discussion Series 2024-086, Board of Governors of the Federal Reserve System (U.S.).
    6. Zirogiannis, Nikolaos & Tripodis, Yorghos, 2013. "A Generalized Dynamic Factor Model for Panel Data: Estimation with a Two-Cycle Conditional Expectation-Maximization Algorithm," Working Paper Series 142752, University of Massachusetts, Amherst, Department of Resource Economics.
    7. Tobias Hartl & Roland Jucknewitz, 2022. "Approximate state space modelling of unobserved fractional components," Econometric Reviews, Taylor & Francis Journals, vol. 41(1), pages 75-98, January.
    8. Amanda F. Mejia, 2022. "Discussion on “distributional independent component analysis for diverse neuroimaging modalities” by Ben Wu, Subhadip Pal, Jian Kang, and Ying Guo," Biometrics, The International Biometric Society, vol. 78(3), pages 1109-1112, September.
    9. Zhu, Xuwen & Melnykov, Volodymyr, 2018. "Manly transformation in finite mixture modeling," Computational Statistics & Data Analysis, Elsevier, vol. 121(C), pages 190-208.
    10. Lebret, Rémi & Iovleff, Serge & Langrognet, Florent & Biernacki, Christophe & Celeux, Gilles & Govaert, Gérard, 2015. "Rmixmod: The R Package of the Model-Based Unsupervised, Supervised, and Semi-Supervised Classification Mixmod Library," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 67(i06).
    11. Rufo, M.J. & Pérez, C.J. & Martín, J., 2009. "Local parametric sensitivity for mixture models of lifetime distributions," Reliability Engineering and System Safety, Elsevier, vol. 94(7), pages 1238-1244.
    12. Joseph Ndong & Ted Soubdhan, 2022. "Extracting Statistical Properties of Solar and Photovoltaic Power Production for the Scope of Building a Sophisticated Forecasting Framework," Forecasting, MDPI, vol. 5(1), pages 1-21, December.
    13. David de Antonio Liedo, 2014. "Nowcasting Belgium," Working Paper Research 256, National Bank of Belgium.
    14. Proietti, Tommaso, 2008. "Estimation of Common Factors under Cross-Sectional and Temporal Aggregation Constraints: Nowcasting Monthly GDP and its Main Components," MPRA Paper 6860, University Library of Munich, Germany.
    15. Matteo Barigozzi & Marc Hallin, 2023. "Dynamic Factor Models: a Genealogy," Papers 2310.17278, arXiv.org, revised Jan 2024.
    16. Faicel Chamroukhi, 2016. "Piecewise Regression Mixture for Simultaneous Functional Data Clustering and Optimal Segmentation," Journal of Classification, Springer;The Classification Society, vol. 33(3), pages 374-411, October.
    17. Fabian Dvorak, 2020. "stratEst: Strategy Estimation in R," TWI Research Paper Series 119, Thurgauer Wirtschaftsinstitut, Universität Konstanz.
    18. Alexander Tsyplakov, 2011. "An introduction to state space modeling (in Russian)," Quantile, Quantile, issue 9, pages 1-24, July.
    19. Ma, Tao & Zhou, Zhou & Antoniou, Constantinos, 2018. "Dynamic factor model for network traffic state forecast," Transportation Research Part B: Methodological, Elsevier, vol. 118(C), pages 281-317.
    20. Semhar Michael & Volodymyr Melnykov, 2016. "An effective strategy for initializing the EM algorithm in finite mixture models," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 10(4), pages 563-583, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:csdana:v:54:y:2010:i:3:p:732-749. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/csda.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.