Retrospective vs. Prospective Analyses of School Inputs: The Case of Flip Charts in Kenya

Authors

Listed:
  • Paul Glewwe
  • Michael Kremer
  • Sylvie Moulin
  • Eric Zitzewitz

Abstract

This paper compares retrospective and prospective analyses of the effect of flip charts on test scores in rural Kenyan schools. Retrospective estimates that focus on subjects for which flip charts are used suggest that flip charts raise test scores by up to 20 percent of a standard deviation. Controlling for other educational inputs does not reduce this estimate. In contrast, prospective estimators based on a study of 178 schools, half of which were randomly selected to receive charts, provide no evidence that flip charts increase test scores. One interpretation is that the retrospective results were subject to omitted variable bias despite the inclusion of control variables. If the direction of omitted variable bias were similar in other retrospective analyses of educational inputs in developing countries, the effects of inputs may be even more modest than retrospective studies suggest. Bias appears to be reduced by a differences-in-differences estimator that examines the impact of flip charts on the relative performance of students in flip chart and other subjects across schools with and without flip charts, but it is not clear that this approach is applicable more generally.
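
To make the differences-in-differences comparison described above concrete, here is a minimal sketch; the notation is ours and illustrates only the design, not the authors' exact specification. Let $y_{isj}$ be the test score of pupil $i$ in school $s$ and subject $j$, let $T_s = 1$ if school $s$ was assigned flip charts, and let $F_j = 1$ if subject $j$ is one in which flip charts are used. Then

$$
y_{isj} = \alpha + \beta\,(T_s \times F_j) + \gamma\,T_s + \delta\,F_j + \varepsilon_{isj},
$$

where $\beta$ is the differences-in-differences estimate: the relative advantage of flip-chart subjects over other subjects in schools with flip charts compared with schools without them, which nets out school-level and subject-level differences that would otherwise bias a simple cross-school comparison.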

Suggested Citation

  • Paul Glewwe & Michael Kremer & Sylvie Moulin & Eric Zitzewitz, 2000. "Retrospective vs. Prospective Analyses of School Inputs: The Case of Flip Charts in Kenya," NBER Working Papers 8018, National Bureau of Economic Research, Inc.
  • Handle: RePEc:nbr:nberwo:8018
    Note: CH

    Download full text from publisher

    File URL: http://www.nber.org/papers/w8018.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Alan B. Krueger & Diane M. Whitmore, 1999. "The Effect of Attending a Small Class in the Early Grades on College-Test Taking and Middle School Test Results: Evidence from Project STAR," Working Papers 806, Princeton University, Department of Economics, Industrial Relations Section.
    2. Krueger, Alan B & Whitmore, Diane M, 2001. "The Effect of Attending a Small Class in the Early Grades on College-Test Taking and Middle School Test Results: Evidence from Project STAR," Economic Journal, Royal Economic Society, vol. 111(468), pages 1-28, January.
    3. Robert J. LaLonde, 1984. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," Working Papers 563, Princeton University, Department of Economics, Industrial Relations Section.
    4. Hanushek, Eric A, 1995. "Interpreting Recent Research on Schooling in Developing Countries," The World Bank Research Observer, World Bank, vol. 10(2), pages 227-246, August.
    5. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    6. Eric A. Hanushek & Steven G. Rivkin, 1996. "Understanding the 20th Century Growth in U.S. School Spending," NBER Working Papers 5547, National Bureau of Economic Research, Inc.
    7. Hanushek, Eric A, 1986. "The Economics of Schooling: Production and Efficiency in Public Schools," Journal of Economic Literature, American Economic Association, vol. 24(3), pages 1141-1177, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Abhijit V. Banerjee & Shawn Cole & Esther Duflo & Leigh Linden, 2007. "Remedying Education: Evidence from Two Randomized Experiments in India," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 122(3), pages 1235-1264.
    2. Larru, Jose Maria, 2007. "La evaluación de impacto: qué es, cómo se mide y qué está aportando en la cooperación al desarrollo [Impact evaluation: what it is, how it is measured, and what it is contributing to development cooperation]," MPRA Paper 6928, University Library of Munich, Germany.
    3. Tarek Azzam & Michael Bates & David Fairris, 2019. "Do Learning Communities Increase First Year College Retention? Testing Sample Selection and External Validity of Randomized Control Trials," Working Papers 202002, University of California at Riverside, Department of Economics.
    4. Wößmann, Ludger, 2001. "New Evidence on the Missing Resource-Performance Link in Education," Kiel Working Papers 1051, Kiel Institute for the World Economy (IfW Kiel).
    5. Thomas D. Cook, 2003. "Why have Educational Evaluators Chosen Not to Do Randomized Experiments?," The ANNALS of the American Academy of Political and Social Science, , vol. 589(1), pages 114-149, September.
    6. Simone Dobbelsteen & Jesse Levin & Hessel Oosterbeek, 2002. "The causal effect of class size on scholastic achievement: distinguishing the pure class size effect from the effect of changes in class composition," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 64(1), pages 17-38, February.
    7. Joshua D. Angrist & Jörn-Steffen Pischke, 2010. "The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics," Journal of Economic Perspectives, American Economic Association, vol. 24(2), pages 3-30, Spring.
    8. Peter Hull & Michal Kolesár & Christopher Walters, 2022. "Labor by design: contributions of David Card, Joshua Angrist, and Guido Imbens," Scandinavian Journal of Economics, Wiley Blackwell, vol. 124(3), pages 603-645, July.
    9. Wößmann, Ludger & West, Martin, 2006. "Class-size effects in school systems around the world: Evidence from between-grade variation in TIMSS," European Economic Review, Elsevier, vol. 50(3), pages 695-736, April.
    10. Margaret Stevens & Kathryn Graddy, 2003. "The Impact of School Inputs on Student Performance: An Empirical Study of Private Schools in the United Kingdom," Economics Series Working Papers 146, University of Oxford, Department of Economics.
    11. Joshua D. Angrist, 2022. "Empirical Strategies in Economics: Illuminating the Path From Cause to Effect," Econometrica, Econometric Society, vol. 90(6), pages 2509-2539, November.
    12. Stevens, Margaret & Graddy, Kathryn, 2003. "The Impact of School Inputs on Student Performance: An Empirical Study of Private Schools in the UK," CEPR Discussion Papers 3776, C.E.P.R. Discussion Papers.
    13. Mingat, Alain & Tan, Jee-Peng, 2003. "On the mechanics of progress in primary education," Economics of Education Review, Elsevier, vol. 22(5), pages 455-467, October.
    14. Maria De Paola & Vincenzo Scoppa, 2011. "The Effects Of Class Size On The Achievement Of College Students," Manchester School, University of Manchester, vol. 79(6), pages 1061-1079, December.
    15. Jackson, Erika & Page, Marianne E., 2013. "Estimating the distributional effects of education reforms: A look at Project STAR," Economics of Education Review, Elsevier, vol. 32(C), pages 92-103.
    16. Holmlund, Helena & McNally, Sandra & Viarengo, Martina, 2010. "Does money matter for schools?," Economics of Education Review, Elsevier, vol. 29(6), pages 1154-1164, December.
    17. Alan B. Krueger & Diane M. Whitmore, 2001. "Would Smaller Classes Help Close the Black-White Achievement Gap?," Working Papers 830, Princeton University, Department of Economics, Industrial Relations Section.
    18. Alan B. Krueger, 2003. "Economic Considerations and Class Size," Economic Journal, Royal Economic Society, vol. 113(485), pages 34-63, February.
    19. Changhui Kang & Yoonsoo Park, 2021. "Private Tutoring and Distribution of Student Academic Outcomes: An Implication of the Presence of Private Tutoring for Educational Inequality," Korean Economic Review, Korean Economic Association, vol. 37, pages 287-326.
    20. Charles T. Clotfelter & Helen F. Ladd & Jacob L. Vigdor, 2006. "Teacher-Student Matching and the Assessment of Teacher Effectiveness," Journal of Human Resources, University of Wisconsin Press, vol. 41(4).

    More about this item

    JEL classification:

    • I21 - Health, Education, and Welfare - - Education - - - Analysis of Education
    • N37 - Economic History - - Labor and Consumers, Demography, Education, Health, Welfare, Income, Wealth, Religion, and Philanthropy - - - Africa; Oceania

