
Star Wars at Central Banks

Author

Listed:

  • Joel Bank
  • Hamish Fitchett
  • Adam Gorajek
  • Benjamin Malin
  • Andrew Staib

Abstract

Researchers in economics often write stars (*) next to their results to highlight which ones pass conventional thresholds of certainty, or ‘statistical significance’. Researchers typically like showing starry results because they can improve perceptions about the value of a piece of work, broadening its influence. But are these stars always what they seem? In our paper we investigate whether researchers use improper methods to produce more starry results, something we worry would foster exaggeration. For example, researchers often have to decide whether to delete suspicious-looking data points. Each deletion can change the results, and the right choice is often subjective, so researchers have some freedom to shape their results. If they use that freedom to favour starry results, their work will suffer from exaggeration.

Others have investigated this problem before, focusing mostly on research published in academic journals. Their findings do suggest exaggeration, and there is now growing support for lifting research standards. Still, our work is important because it is unclear whether the findings about journals apply to central banks.

To investigate, we compile two decades of research results from the Federal Reserve Bank of Minneapolis, the Reserve Bank of Australia and the Reserve Bank of New Zealand. We then use two popular methods to detect exaggeration in the dataset. Both build on the observation that researchers start assigning stars at a human-made threshold of significance, whereas nature, which should dictate the true pattern of results, is indifferent to that threshold. So if the observed pattern of results shows anomalies at the starry threshold, we can be confident that the anomalies come from researchers. The most complex step is the final one: figuring out whether the anomalies come from exaggeration or something else.

Our findings are mixed. The first method shows no evidence of exaggeration, but it often misses exaggeration when it occurs. The second method shows some evidence of exaggeration, but it relies on strong assumptions. We test those assumptions and challenge their merit. At this point, all that is clear is that central banks produce results with patterns different from those in journals, with less bunching at the starry threshold (see the figure below). The source of this difference remains a mystery.

[Figure: distribution of results at central banks and in academic journals, by z-score.]
Note: We position results on the horizontal axis using a measure of statistical significance called the z-score. The academic journals are The American Economic Review, Journal of Political Economy and The Quarterly Journal of Economics. Source: Authors' calculations; Brodeur et al (2016); Federal Reserve Bank of Minneapolis; Reserve Bank of Australia; Reserve Bank of New Zealand.
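Aside: the threshold-bunching idea in the abstract can be made concrete with a small sketch. The Python code below implements a simple caliper-style check, a common diagnostic in this literature, though not necessarily either of the two methods the paper uses: count the |z| scores falling just below and just above the conventional 1.96 significance cut-off and test whether the split is consistent with a smooth underlying distribution. The data, window width and function names here are hypothetical, for illustration only.

    # A minimal sketch of a caliper-style bunching check, assuming a flat list
    # of reported z-scores. Illustrative only; not the paper's actual procedure.
    import numpy as np
    from scipy.stats import binomtest

    THRESHOLD = 1.96   # conventional 5% two-sided significance cut-off
    CALIPER = 0.20     # half-width of the window around the threshold (assumed)

    def caliper_test(z_scores, threshold=THRESHOLD, caliper=CALIPER):
        """Compare counts of |z| just below vs just above the threshold.

        Under a smooth (unmanipulated) distribution, the two narrow windows
        should hold roughly equal counts, so the 'above' share should be near
        0.5. A share well above 0.5 suggests bunching of starry results.
        """
        z = np.abs(np.asarray(z_scores))
        below = np.sum((z >= threshold - caliper) & (z < threshold))
        above = np.sum((z >= threshold) & (z < threshold + caliper))
        # Binomial test: is the observed above-share consistent with 0.5?
        result = binomtest(int(above), int(above + below), p=0.5)
        return above, below, result.pvalue

    # Hypothetical example: smooth results plus a few "nudged" ones.
    rng = np.random.default_rng(0)
    smooth = rng.normal(loc=1.0, scale=1.0, size=2000)
    nudged = rng.uniform(1.96, 2.10, size=60)  # results pushed just past the line
    above, below, p = caliper_test(np.concatenate([smooth, nudged]))
    print(f"just above: {above}, just below: {below}, binomial p = {p:.4f}")

On smooth data the binomial test should return a large p-value; the deliberately nudged results tip the balance toward the just-above window, which is the kind of anomaly at the starry threshold that both detection methods look for.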

Suggested Citation

  • Joel Bank & Hamish Fitchett & Adam Gorajek & Benjamin Malin & Andrew Staib, 2021. "Star Wars at Central Banks," Reserve Bank of New Zealand Discussion Paper Series DP2021/01, Reserve Bank of New Zealand.
  • Handle: RePEc:nzb:nzbdps:2021/1

    Download full text from publisher

    File URL: https://www.rbnz.govt.nz/-/media/ReserveBank/Files/Publications/Discussion%20papers/2021/dp2021-01.pdf?revision=d8b423fb-f557-4670-bae1-3fa4c58c9c58
    Download Restriction: no

    References listed on IDEAS

    1. Andrew Gelman & Guido Imbens, 2013. "Why ask Why? Forward Causal Inference and Reverse Causal Questions," NBER Working Papers 19614, National Bureau of Economic Research, Inc.
    2. Leeb, Hannes & Pötscher, Benedikt M., 2005. "Model Selection And Inference: Facts And Fiction," Econometric Theory, Cambridge University Press, vol. 21(1), pages 21-59, February.
    3. Abel Brodeur & Nikolai Cook & Anthony Heyes, 2020. "Methods Matter: p-Hacking and Publication Bias in Causal Analysis in Economics," American Economic Review, American Economic Association, vol. 110(11), pages 3634-3660, November.
    4. Abel Brodeur & Mathias Lé & Marc Sangnier & Yanos Zylberberg, 2016. "Star Wars: The Empirics Strike Back," American Economic Journal: Applied Economics, American Economic Association, vol. 8(1), pages 1-32, January.
    5. Brian Fabo & Martina Jancokova & Elisabeth Kempf & Lubos Pastor, 2020. "Fifty Shades of QE: Conflicts of Interest in Economic Research," Working Papers 2020-128, Becker Friedman Institute for Research In Economics.
    6. Burlig, Fiona, 2018. "Improving transparency in observational social science research: A pre-analysis plan approach," Economics Letters, Elsevier, vol. 168(C), pages 56-60.

    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Jakub Rybacki & Dobromił Serwa, 2021. "What Makes a Successful Scientist in a Central Bank? Evidence From the RePEc Database," Central European Journal of Economic Modelling and Econometrics, Central European Journal of Economic Modelling and Econometrics, vol. 13(3), pages 331-357, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Uwe Hassler & Marc‐Oliver Pohle, 2022. "Unlucky Number 13? Manipulating Evidence Subject to Snooping," International Statistical Review, International Statistical Institute, vol. 90(2), pages 397-410, August.
    2. Adam Gorajek & Benjamin A. Malin, 2021. "Comment on "Star Wars: The Empirics Strike Back"," Staff Report 629, Federal Reserve Bank of Minneapolis.
3. Abel Brodeur & Nikolai M. Cook & Jonathan S. Hartley & Anthony Heyes, 2022. "Do Pre-Registration and Pre-analysis Plans Reduce p-Hacking and Publication Bias?," LCERPA Working Papers am0132, Laurier Centre for Economic Research and Policy Analysis.
    4. Edward Miguel, 2021. "Evidence on Research Transparency in Economics," Journal of Economic Perspectives, American Economic Association, vol. 35(3), pages 193-214, Summer.
    5. Sarah A. Janzen & Jeffrey D. Michler, 2021. "Ulysses' pact or Ulysses' raft: Using pre‐analysis plans in experimental and nonexperimental research," Applied Economic Perspectives and Policy, John Wiley & Sons, vol. 43(4), pages 1286-1304, December.
    6. Stefano DellaVigna & Elizabeth Linos, 2022. "RCTs to Scale: Comprehensive Evidence From Two Nudge Units," Econometrica, Econometric Society, vol. 90(1), pages 81-116, January.
    7. Bruns, Stephan & Herwartz, Helmut & Ioannidis, John P.A. & Islam, Chris-Gabriel & Raters, Fabian H. C., 2023. "Statistical reporting errors in economics," MetaArXiv mbx62, Center for Open Science.
    8. Christoph Huber & Christian König-Kersting & Matteo M. Marini, 2022. "Experimenting with Financial Professionals," Working Papers 2022-07, Faculty of Economics and Statistics, Universität Innsbruck, revised Jun 2024.
    9. Brodeur, Abel & Cook, Nikolai & Heyes, Anthony, 2022. "We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell Us about Publication Bias and p-Hacking in Online Experiments," IZA Discussion Papers 15478, Institute of Labor Economics (IZA).
10. Jasper Brinkerink, 2023. "When Shooting for the Stars Becomes Aiming for Asterisks: P-Hacking in Family Business Research," Entrepreneurship Theory and Practice, vol. 47(2), pages 304-343, March.
    11. Gechert, Sebastian & Mey, Bianka & Opatrny, Matej & Havranek, Tomas & Stanley, T. D. & Bom, Pedro R. D. & Doucouliagos, Hristos & Heimberger, Philipp & Irsova, Zuzana & Rachinger, Heiko J., 2023. "Conventional Wisdom, Meta-Analysis, and Research Revision in Economics," EconStor Preprints 280745, ZBW - Leibniz Information Centre for Economics.
    12. Dreber, Anna & Johannesson, Magnus, 2023. "A framework for evaluating reproducibility and replicability in economics," Ruhr Economic Papers 1055, RWI - Leibniz-Institut für Wirtschaftsforschung, Ruhr-University Bochum, TU Dortmund University, University of Duisburg-Essen.
    13. Graham Elliott & Nikolay Kudrin & Kaspar Wüthrich, 2022. "Detecting p‐Hacking," Econometrica, Econometric Society, vol. 90(2), pages 887-906, March.
    14. Alexander L. Brown & Taisuke Imai & Ferdinand M. Vieider & Colin F. Camerer, 2024. "Meta-analysis of Empirical Estimates of Loss Aversion," Journal of Economic Literature, American Economic Association, vol. 62(2), pages 485-516, June.
    15. Jakub Rybacki & Dobromił Serwa, 2021. "What Makes a Successful Scientist in a Central Bank? Evidence From the RePEc Database," Central European Journal of Economic Modelling and Econometrics, Central European Journal of Economic Modelling and Econometrics, vol. 13(3), pages 331-357, September.
    16. Brodeur, Abel & Cook, Nikolai & Heyes, Anthony, 2022. "We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell us about p-Hacking and Publication Bias in Online Experiments," GLO Discussion Paper Series 1157, Global Labor Organization (GLO).
    17. Abel Brodeur & Scott Carrell & David Figlio & Lester Lusher, 2023. "Unpacking P-hacking and Publication Bias," American Economic Review, American Economic Association, vol. 113(11), pages 2974-3002, November.
    18. Brodeur, Abel & Esterling, Kevin & Ankel-Peters, Jörg & Bueno, Natália S & Desposato, Scott & Dreber, Anna & Genovese, Federica & Green, Donald P & Hepplewhite, Matthew & de la Guardia, Fernando Hoces, 2024. "Promoting Reproducibility and Replicability in Political Science," Department of Economics, Working Paper Series qt23n3n3dg, Department of Economics, Institute for Business and Economic Research, UC Berkeley.
    19. Roggenkamp, Hauke C., 2024. "Revisiting ‘Growth and Inequality in Public Good Provision’—Reproducing and Generalizing Through Inconvenient Online Experimentation," OSF Preprints 6rn97, Center for Open Science.
    20. Carina Neisser, 2021. "The Elasticity of Taxable Income: A Meta-Regression Analysis [The top 1% in international and historical perspective]," The Economic Journal, Royal Economic Society, vol. 131(640), pages 3365-3391.

    More about this item

    JEL classification:

    • A11 - General Economics and Teaching - - General Economics - - - Role of Economics; Role of Economists
    • C13 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Estimation: General
    • E58 - Macroeconomics and Monetary Economics - - Monetary Policy, Central Banking, and the Supply of Money and Credit - - - Central Banks and Their Policies



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.