
Finite sample weighting of recursive forecast errors

Author

Listed:
  • Brooks, Chris
  • Burke, Simon P.
  • Stanescu, Silvia

Abstract

This paper proposes and tests a new framework for weighting recursive out-of-sample prediction errors according to their corresponding levels of in-sample estimation uncertainty. In essence, we show how to use the maximum possible amount of information from the sample in the evaluation of the prediction accuracy, by commencing the forecasts at the earliest opportunity and weighting the prediction errors. Via a Monte Carlo study, we demonstrate that the proposed framework selects the correct model from a set of candidate models considerably more often than the existing standard approach when only a small sample is available. We also show that the proposed weighting approaches result in tests of equal predictive accuracy that have much better sizes than the standard approach. An application to an exchange rate dataset highlights relevant differences in the results of tests of predictive accuracy based on the standard approach versus the framework proposed in this paper.
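
A minimal illustrative sketch (in Python, using only NumPy) of the general idea the abstract describes: start recursive one-step forecasts as early as the sample allows, and down-weight each squared forecast error by its estimated forecast variance, so that early forecasts (which rest on few observations and carry large parameter-estimation uncertainty) count for less. This is not the authors' actual weighting scheme or test statistic; the inverse-variance weights and all names below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=80):
    """Simulate a simple DGP: y_t = 0.5 * x_t + noise (illustrative only)."""
    x = rng.normal(size=T)
    y = 0.5 * x + rng.normal(size=T)
    return y, x

def weighted_recursive_mse(y, x, first_forecast=10):
    """Recursive OLS one-step forecasts starting as early as possible;
    each squared error is weighted by the inverse of its estimated
    forecast variance (an assumed, not the paper's, weighting)."""
    T = len(y)
    weighted_sq_errs, weights = [], []
    for t in range(first_forecast, T):
        X = np.column_stack([np.ones(t), x[:t]])          # estimation sample: obs 1..t
        beta, *_ = np.linalg.lstsq(X, y[:t], rcond=None)  # recursive OLS estimate
        resid = y[:t] - X @ beta
        sigma2 = resid @ resid / (t - X.shape[1])         # residual variance
        x_new = np.array([1.0, x[t]])
        # Forecast variance = sigma^2 * (1 + x_new' (X'X)^{-1} x_new);
        # the second term reflects parameter-estimation uncertainty,
        # which is largest for the earliest forecasts.
        var_f = sigma2 * (1.0 + x_new @ np.linalg.inv(X.T @ X) @ x_new)
        err = y[t] - x_new @ beta                         # out-of-sample forecast error
        weights.append(1.0 / var_f)
        weighted_sq_errs.append(err**2 / var_f)
    return np.sum(weighted_sq_errs) / np.sum(weights)

y, x = simulate()
print("weighted recursive MSE:", weighted_recursive_mse(y, x))
```

Comparing this weighted measure across candidate models mimics, in spirit, the model-selection exercise in the Monte Carlo study: forecasts begin at the earliest feasible origin, and the weighting prevents the noisy early errors from dominating the comparison.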

Suggested Citation

  • Brooks, Chris & Burke, Simon P. & Stanescu, Silvia, 2016. "Finite sample weighting of recursive forecast errors," International Journal of Forecasting, Elsevier, vol. 32(2), pages 458-474.
  • Handle: RePEc:eee:intfor:v:32:y:2016:i:2:p:458-474
    DOI: 10.1016/j.ijforecast.2015.05.003

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0169207015000849
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ijforecast.2015.05.003?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    2. Clark, Todd E. & West, Kenneth D., 2006. "Using out-of-sample mean squared prediction errors to test the martingale difference hypothesis," Journal of Econometrics, Elsevier, vol. 135(1-2), pages 155-186.
    3. West, Kenneth D, 1996. "Asymptotic Inference about Predictive Ability," Econometrica, Econometric Society, vol. 64(5), pages 1067-1084, September.
    4. Peter Reinhard Hansen & Allan Timmermann, 2012. "Choice of Sample Split in Out-of-Sample Forecast Evaluation," CREATES Research Papers 2012-43, Department of Economics and Business Economics, Aarhus University.
    5. Todd Clark & Michael McCracken, 2005. "Evaluating Direct Multistep Forecasts," Econometric Reviews, Taylor & Francis Journals, vol. 24(4), pages 369-404.
    6. West, Kenneth D., 2006. "Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 1, chapter 3, pages 99-134, Elsevier.
    7. Diebold, Francis X & Mariano, Roberto S, 2002. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 134-144, January.
    8. Philippe Bacchetta & Eric van Wincoop & Toni Beutler, 2010. "Can Parameter Instability Explain the Meese-Rogoff Puzzle?," NBER International Seminar on Macroeconomics, University of Chicago Press, vol. 6(1), pages 125-173.
    9. Cheung, Yin-Wong & Chinn, Menzie D. & Pascual, Antonio Garcia, 2005. "Empirical exchange rate models of the nineties: Are any fit to survive?," Journal of International Money and Finance, Elsevier, vol. 24(7), pages 1150-1175, November.
    10. West, Kenneth D & McCracken, Michael W, 1998. "Regression-Based Tests of Predictive Ability," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 39(4), pages 817-840, November.
    11. Clark, Todd E. & West, Kenneth D., 2007. "Approximately normal tests for equal predictive accuracy in nested models," Journal of Econometrics, Elsevier, vol. 138(1), pages 291-311, May.
    12. Atsushi Inoue & Lutz Kilian, 2005. "In-Sample or Out-of-Sample Tests of Predictability: Which One Should We Use?," Econometric Reviews, Taylor & Francis Journals, vol. 23(4), pages 371-402.
    13. Clark, Todd E. & McCracken, Michael W., 2005. "The power of tests of predictive ability in the presence of structural breaks," Journal of Econometrics, Elsevier, vol. 124(1), pages 1-31, January.
    14. Barbara Rossi & Atsushi Inoue, 2012. "Out-of-Sample Forecast Tests Robust to the Choice of Window Size," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 30(3), pages 432-453, April.
    15. Meese, Richard A. & Rogoff, Kenneth, 1983. "Empirical exchange rate models of the seventies: Do they fit out of sample?," Journal of International Economics, Elsevier, vol. 14(1-2), pages 3-24, February.
    16. Busetti, Fabio & Marcucci, Juri, 2013. "Comparing forecast accuracy: A Monte Carlo investigation," International Journal of Forecasting, Elsevier, vol. 29(1), pages 13-27.
    17. Andrew C. Harvey, 1990. "The Econometric Analysis of Time Series, 2nd Edition," MIT Press Books, The MIT Press, edition 2, volume 1, number 026208189x, April.
    18. Inoue, Atsushi & Kilian, Lutz, 2006. "On the selection of forecasting models," Journal of Econometrics, Elsevier, vol. 130(2), pages 273-306, February.
    19. Harvey, David I & Leybourne, Stephen J & Newbold, Paul, 1998. "Tests for Forecast Encompassing," Journal of Business & Economic Statistics, American Statistical Association, vol. 16(2), pages 254-259, April.
    20. Ashley, R & Granger, C W J & Schmalensee, R, 1980. "Advertising and Aggregate Consumption: An Analysis of Causality," Econometrica, Econometric Society, vol. 48(5), pages 1149-1167, July.
    21. Raffaella Giacomini & Barbara Rossi, 2010. "Forecast comparisons in unstable environments," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 25(4), pages 595-620.
    22. McCracken, Michael W., 2000. "Robust out-of-sample inference," Journal of Econometrics, Elsevier, vol. 99(2), pages 195-223, December.
    23. McCracken, Michael W., 2007. "Asymptotics for out of sample tests of Granger causality," Journal of Econometrics, Elsevier, vol. 140(2), pages 719-752, October.
    24. Harvey, David & Leybourne, Stephen & Newbold, Paul, 1997. "Testing the equality of prediction mean squared errors," International Journal of Forecasting, Elsevier, vol. 13(2), pages 281-291, June.
    25. Ashley, Richard, 2003. "Statistically significant forecasting improvements: how much out-of-sample data is likely necessary?," International Journal of Forecasting, Elsevier, vol. 19(2), pages 229-239.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Rossi, Barbara, 2013. "Advances in Forecasting under Instability," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1203-1324, Elsevier.
    2. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    3. Barbara Rossi & Atsushi Inoue, 2012. "Out-of-Sample Forecast Tests Robust to the Choice of Window Size," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 30(3), pages 432-453, April.
    4. West, Kenneth D., 2006. "Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 1, chapter 3, pages 99-134, Elsevier.
    5. Raffaella Giacomini & Barbara Rossi, 2013. "Forecasting in macroeconomics," Chapters, in: Nigar Hashimzade & Michael A. Thornton (ed.), Handbook of Research Methods and Applications in Empirical Macroeconomics, chapter 17, pages 381-408, Edward Elgar Publishing.
    6. Granziera, Eleonora & Hubrich, Kirstin & Moon, Hyungsik Roger, 2014. "A predictability test for a small number of nested models," Journal of Econometrics, Elsevier, vol. 182(1), pages 174-185.
    7. Busetti, Fabio & Marcucci, Juri, 2013. "Comparing forecast accuracy: A Monte Carlo investigation," International Journal of Forecasting, Elsevier, vol. 29(1), pages 13-27.
    8. Rossi, Barbara & Sekhposyan, Tatevik, 2011. "Understanding models' forecasting performance," Journal of Econometrics, Elsevier, vol. 164(1), pages 158-172, September.
    9. Kenneth S. Rogoff & Vania Stavrakeva, 2008. "The Continuing Puzzle of Short Horizon Exchange Rate Forecasting," NBER Working Papers 14071, National Bureau of Economic Research, Inc.
    10. Richard A. Ashley & Kwok Ping Tsang, 2014. "Credible Granger-Causality Inference with Modest Sample Lengths: A Cross-Sample Validation Approach," Econometrics, MDPI, vol. 2(1), pages 1-20, March.
    11. Pincheira, Pablo M. & West, Kenneth D., 2016. "A comparison of some out-of-sample tests of predictability in iterated multi-step-ahead forecasts," Research in Economics, Elsevier, vol. 70(2), pages 304-319.
    12. Rapach, David & Zhou, Guofu, 2013. "Forecasting Stock Returns," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 328-383, Elsevier.
    13. Calhoun, Gray, 2014. "Out-Of-Sample Comparisons of Overfit Models," Staff General Research Papers Archive 32462, Iowa State University, Department of Economics.
    14. Clark, Todd E. & West, Kenneth D., 2007. "Approximately normal tests for equal predictive accuracy in nested models," Journal of Econometrics, Elsevier, vol. 138(1), pages 291-311, May.
    15. Kirstin Hubrich & Kenneth D. West, 2010. "Forecast evaluation of small nested model sets," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 25(4), pages 574-594.
    16. Amat, Christophe & Michalski, Tomasz & Stoltz, Gilles, 2018. "Fundamentals and exchange rate forecastability with simple machine learning methods," Journal of International Money and Finance, Elsevier, vol. 88(C), pages 1-24.
    17. Rudan Wang & Bruce Morley & Javier Ordóñez, 2016. "The Taylor Rule, Wealth Effects and the Exchange Rate," Review of International Economics, Wiley Blackwell, vol. 24(2), pages 282-301, May.
    18. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    19. Molodtsova, Tanya & Papell, David H., 2009. "Out-of-sample exchange rate predictability with Taylor rule fundamentals," Journal of International Economics, Elsevier, vol. 77(2), pages 167-180, April.
    20. Pablo Pincheira & Nicolás Hardy & Felipe Muñoz, 2021. "“Go Wild for a While!”: A New Test for Forecast Evaluation in Nested Models," Mathematics, MDPI, vol. 9(18), pages 1-28, September.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:intfor:v:32:y:2016:i:2:p:458-474. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/ijforecast.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.