Printed from https://ideas.repec.org/a/eee/intfor/v29y2013i1p13-27.html

Comparing forecast accuracy: A Monte Carlo investigation

Author

Listed:
  • Busetti, Fabio
  • Marcucci, Juri

Abstract

The size and power properties of several tests of equal Mean Square Prediction Errors (MSPE) and of Forecast Encompassing (FE) are evaluated, using Monte Carlo simulations, in the context of nested dynamic regression models. The highest size-adjusted power is achieved by the F-type test of forecast encompassing proposed by Clark and McCracken (2001); however, the test tends to be slightly oversized when the number of out-of-sample observations is ‘small’ and in cases of (partial) misspecification. The relative performance of the various tests remains broadly unaltered for one- and multi-step-ahead predictions and when the predictive models are partially misspecified. Interestingly, the presence of highly persistent regressors leads to a loss of power of the tests, but their size properties remain nearly unaffected. An empirical example compares the performance of models for short-term predictions of Italian GDP.
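For a concrete sense of the statistics being compared in the paper, the following is a minimal Python sketch, not taken from the article, of three of the out-of-sample test statistics it studies: a Diebold-Mariano-type t-statistic for equal MSPE, the Clark-West (2007) MSPE-adjusted statistic, and the Clark-McCracken (2001) ENC-F forecast-encompassing statistic. All function names and the simulated data are illustrative; for nested models the first two statistics require adjusted or non-standard critical values, and ENC-F critical values must be taken from Clark and McCracken's tables or simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

def equal_mspe_t(e1, e2):
    """Diebold-Mariano-type t-statistic for equal MSPE (one-step errors).
    For multi-step forecasts the denominator should use a HAC variance."""
    d = e1**2 - e2**2                        # loss differential
    return d.mean() / np.sqrt(d.var(ddof=1) / d.size)

def clark_west(y, f1, f2):
    """Clark-West (2007) MSPE-adjusted statistic; compare with one-sided
    standard-normal critical values (model 1 nested in model 2)."""
    adj = (y - f1)**2 - ((y - f2)**2 - (f1 - f2)**2)
    return adj.mean() / np.sqrt(adj.var(ddof=1) / adj.size)

def enc_f(e1, e2):
    """Clark-McCracken (2001) ENC-F forecast-encompassing statistic:
    P * mean(e1 * (e1 - e2)) / mean(e2**2), with non-standard critical values."""
    return e1.size * np.mean(e1 * (e1 - e2)) / np.mean(e2**2)

# Toy nested-model illustration (not the paper's Monte Carlo design):
# y_t = 0.3 * x_{t-1} + noise; model 1 forecasts zero, model 2 uses x_{t-1}.
T = 200
x = rng.standard_normal(T)
y = 0.3 * np.r_[0.0, x[:-1]] + rng.standard_normal(T)
f1 = np.zeros(T)                             # restricted model
f2 = 0.3 * np.r_[0.0, x[:-1]]                # larger model (slope assumed known)
e1, e2 = y - f1, y - f2
print(equal_mspe_t(e1, e2), clark_west(y, f1, f2), enc_f(e1, e2))
```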

Suggested Citation

  • Busetti, Fabio & Marcucci, Juri, 2013. "Comparing forecast accuracy: A Monte Carlo investigation," International Journal of Forecasting, Elsevier, vol. 29(1), pages 13-27.
  • Handle: RePEc:eee:intfor:v:29:y:2013:i:1:p:13-27
    DOI: 10.1016/j.ijforecast.2012.04.011

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0169207012000799
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ijforecast.2012.04.011?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    As access to this document is restricted, you may want to look for a different version below or search for one elsewhere.


    References listed on IDEAS

    1. Ivo Welch & Amit Goyal, 2008. "A Comprehensive Look at The Empirical Performance of Equity Premium Prediction," The Review of Financial Studies, Society for Financial Studies, vol. 21(4), pages 1455-1508, July.
    2. Hansen, Lars Peter, 1982. "Large Sample Properties of Generalized Method of Moments Estimators," Econometrica, Econometric Society, vol. 50(4), pages 1029-1054, July.
    3. Raffaella Giacomini & Halbert White, 2006. "Tests of Conditional Predictive Ability," Econometrica, Econometric Society, vol. 74(6), pages 1545-1578, November.
    4. Todd E. Clark & Michael W. McCracken, 2009. "Improving Forecast Accuracy By Combining Recursive And Rolling Forecasts," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 50(2), pages 363-395, May.
    5. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    6. Clark, Todd E. & West, Kenneth D., 2006. "Using out-of-sample mean squared prediction errors to test the martingale difference hypothesis," Journal of Econometrics, Elsevier, vol. 135(1-2), pages 155-186.
    7. West, Kenneth D, 1996. "Asymptotic Inference about Predictive Ability," Econometrica, Econometric Society, vol. 64(5), pages 1067-1084, September.
    8. Hansen, Peter Reinhard, 2005. "A Test for Superior Predictive Ability," Journal of Business & Economic Statistics, American Statistical Association, vol. 23, pages 365-380, October.
    9. Todd Clark & Michael McCracken, 2005. "Evaluating Direct Multistep Forecasts," Econometric Reviews, Taylor & Francis Journals, vol. 24(4), pages 369-404.
    10. West, Kenneth D., 2006. "Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 1, chapter 3, pages 99-134, Elsevier.
    11. Diebold, Francis X & Mariano, Roberto S, 2002. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 134-144, January.
    12. Yock Y. Chong & David F. Hendry, 1986. "Econometric Evaluation of Linear Macro-Economic Models," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 53(4), pages 671-690.
    13. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    14. Kirstin Hubrich & Kenneth D. West, 2010. "Forecast evaluation of small nested model sets," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 25(4), pages 574-594.
    15. Clark, Todd E. & West, Kenneth D., 2007. "Approximately normal tests for equal predictive accuracy in nested models," Journal of Econometrics, Elsevier, vol. 138(1), pages 291-311, May.
    16. Clark, Todd E. & McCracken, Michael W., 2005. "The power of tests of predictive ability in the presence of structural breaks," Journal of Econometrics, Elsevier, vol. 124(1), pages 1-31, January.
    17. Busetti, Fabio & Marcucci, Juri, 2013. "Comparing forecast accuracy: A Monte Carlo investigation," International Journal of Forecasting, Elsevier, vol. 29(1), pages 13-27.
    18. Andrews, Donald W K & Monahan, J Christopher, 1992. "An Improved Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimator," Econometrica, Econometric Society, vol. 60(4), pages 953-966, July.
    19. Stambaugh, Robert F., 1999. "Predictive regressions," Journal of Financial Economics, Elsevier, vol. 54(3), pages 375-421, December.
    20. Ericsson, Neil R., 1992. "Parameter constancy, mean square forecast errors, and measuring forecast performance: An exposition, extensions, and illustration," Journal of Policy Modeling, Elsevier, vol. 14(4), pages 465-495, August.
    21. Chao, John & Corradi, Valentina & Swanson, Norman R., 2001. "Out-Of-Sample Tests For Granger Causality," Macroeconomic Dynamics, Cambridge University Press, vol. 5(4), pages 598-620, September.
    22. Corradi, Valentina & Swanson, Norman R., 2002. "A consistent test for nonlinear out of sample predictive accuracy," Journal of Econometrics, Elsevier, vol. 110(2), pages 353-381, October.
    23. Harvey, David I & Leybourne, Stephen J & Newbold, Paul, 1998. "Tests for Forecast Encompassing," Journal of Business & Economic Statistics, American Statistical Association, vol. 16(2), pages 254-259, April.
    24. Clements, M.P. & Hendry, D., 1992. "On the Limitations of Comparing Mean Square Forecast Errors," Economics Series Working Papers 99138, University of Oxford, Department of Economics.
    25. Lutz Kilian, 1998. "Small-Sample Confidence Intervals For Impulse Response Functions," The Review of Economics and Statistics, MIT Press, vol. 80(2), pages 218-230, May.
    26. Filippo Altissimo & Riccardo Cristadoro & Mario Forni & Marco Lippi & Giovanni Veronese, 2010. "New Eurocoin: Tracking Economic Growth in Real Time," The Review of Economics and Statistics, MIT Press, vol. 92(4), pages 1024-1034, November.
    27. Valentina Corradi & Norman R. Swanson, 2007. "Nonparametric Bootstrap Procedures For Predictive Inference Based On Recursive Estimation Schemes," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 48(1), pages 67-109, February.
    28. Newey, Whitney & West, Kenneth, 2014. "A simple, positive semi-definite, heteroscedasticity and autocorrelation consistent covariance matrix," Applied Econometrics, Russian Presidential Academy of National Economy and Public Administration (RANEPA), vol. 33(1), pages 125-132.
    29. Stock, James H & Watson, Mark W, 2002. "Macroeconomic Forecasting Using Diffusion Indexes," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(2), pages 147-162, April.
    30. Granger, C. W. J. & Newbold, Paul, 1986. "Forecasting Economic Time Series," Elsevier Monographs, Elsevier, edition 2, number 9780122951831 edited by Shell, Karl.
    31. McCracken, Michael W., 2007. "Asymptotics for out of sample tests of Granger causality," Journal of Econometrics, Elsevier, vol. 140(2), pages 719-752, October.
    32. Harvey, David & Leybourne, Stephen & Newbold, Paul, 1997. "Testing the equality of prediction mean squared errors," International Journal of Forecasting, Elsevier, vol. 13(2), pages 281-291, June.
    33. Halbert White, 2000. "A Reality Check for Data Snooping," Econometrica, Econometric Society, vol. 68(5), pages 1097-1126, September.
    34. Nelson, Charles R, 1972. "The Prediction Performance of the FRB-MIT-PENN Model of the U.S. Economy," American Economic Review, American Economic Association, vol. 62(5), pages 902-917, December.
    35. Giuseppe Parigi & Roberto Golinelli, 2007. "The use of monthly indicators to forecast quarterly GDP in the short run: an application to the G7 countries," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 26(2), pages 77-94.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    2. Granziera, Eleonora & Hubrich, Kirstin & Moon, Hyungsik Roger, 2014. "A predictability test for a small number of nested models," Journal of Econometrics, Elsevier, vol. 182(1), pages 174-185.
    3. Barbara Rossi & Atsushi Inoue, 2012. "Out-of-Sample Forecast Tests Robust to the Choice of Window Size," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 30(3), pages 432-453, April.
    4. Clark, Todd E. & West, Kenneth D., 2007. "Approximately normal tests for equal predictive accuracy in nested models," Journal of Econometrics, Elsevier, vol. 138(1), pages 291-311, May.
    5. Rossi, Barbara, 2013. "Advances in Forecasting under Instability," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1203-1324, Elsevier.
    6. Pincheira, Pablo M. & West, Kenneth D., 2016. "A comparison of some out-of-sample tests of predictability in iterated multi-step-ahead forecasts," Research in Economics, Elsevier, vol. 70(2), pages 304-319.
    7. Todd E. Clark & Michael W. McCracken, 2010. "Reality checks and nested forecast model comparisons," Working Papers 2010-032, Federal Reserve Bank of St. Louis.
    8. West, Kenneth D., 2006. "Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 1, chapter 3, pages 99-134, Elsevier.
    9. Raffaella Giacomini & Barbara Rossi, 2013. "Forecasting in macroeconomics," Chapters, in: Nigar Hashimzade & Michael A. Thornton (ed.), Handbook of Research Methods and Applications in Empirical Macroeconomics, chapter 17, pages 381-408, Edward Elgar Publishing.
    10. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    11. Pablo Pincheira & Nicolás Hardy & Felipe Muñoz, 2021. "“Go Wild for a While!”: A New Test for Forecast Evaluation in Nested Models," Mathematics, MDPI, vol. 9(18), pages 1-28, September.
    12. Petropoulos, Fotios & Apiletti, Daniele & Assimakopoulos, Vassilios & Babai, Mohamed Zied & Barrow, Devon K. & Ben Taieb, Souhaib & Bergmeir, Christoph & Bessa, Ricardo J. & Bijak, Jakub & Boylan, Joh, 2022. "Forecasting: theory and practice," International Journal of Forecasting, Elsevier, vol. 38(3), pages 705-871.
      • Fotios Petropoulos & Daniele Apiletti & Vassilios Assimakopoulos & Mohamed Zied Babai & Devon K. Barrow & Souhaib Ben Taieb & Christoph Bergmeir & Ricardo J. Bessa & Jakub Bijak & John E. Boylan & Jet, 2020. "Forecasting: theory and practice," Papers 2012.03854, arXiv.org, revised Jan 2022.
    13. Brooks, Chris & Burke, Simon P. & Stanescu, Silvia, 2016. "Finite sample weighting of recursive forecast errors," International Journal of Forecasting, Elsevier, vol. 32(2), pages 458-474.
    14. Rapach, David & Zhou, Guofu, 2013. "Forecasting Stock Returns," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 328-383, Elsevier.
    15. Mariano, Roberto S. & Preve, Daniel, 2012. "Statistical tests for multiple forecast comparison," Journal of Econometrics, Elsevier, vol. 169(1), pages 123-130.
    16. Ahmed, Shamim & Liu, Xiaoquan & Valente, Giorgio, 2016. "Can currency-based risk factors help forecast exchange rates?," International Journal of Forecasting, Elsevier, vol. 32(1), pages 75-97.
    17. Christopher J. Neely & David E. Rapach & Jun Tu & Guofu Zhou, 2014. "Forecasting the Equity Risk Premium: The Role of Technical Indicators," Management Science, INFORMS, vol. 60(7), pages 1772-1791, July.
    18. Todd E. Clark & Kenneth D. West, 2005. "Using Out-of-Sample Mean Squared Prediction Errors to Test the Martingale Difference," NBER Technical Working Papers 0305, National Bureau of Economic Research, Inc.
    19. Pablo Pincheira Brown & Nicolás Hardy, 2024. "Correlation‐based tests of predictability," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 43(6), pages 1835-1858, September.
    20. Norman Swanson & Nii Ayi Armah, 2006. "Predictive Inference Under Model Misspecification with an Application to Assessing the Marginal Predictive Content of Money for Output," Departmental Working Papers 200619, Rutgers University, Department of Economics.

    More about this item

    Keywords

    Forecast encompassing; Model evaluation; Nested models; Equal predictive ability; Forecast evaluation

    JEL classification:

    • C12 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Hypothesis Testing: General
    • C52 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Model Evaluation, Validation, and Selection
    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:intfor:v:29:y:2013:i:1:p:13-27. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/ijforecast.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.