IDEAS home Printed from https://ideas.repec.org/a/taf/ecinnt/v16y2007i5p385-402.html

Building the Capacity to Experiment in Schools: A Case Study of the Institute of Educational Sciences in the US Department of Education

Author

Listed:
  • Thomas D. Cook
  • Dominique Foray

Abstract

This article is about building new research capacities to foster a fundamental shift in research methods. It examines in detail the new R&D policy of the US Department of Education, which is designed to dramatically increase the number of experiments conducted in schools despite limitations in the supply of seasoned experimenters. The article reviews the various policy mechanisms that are being used both to implement this new pro-experimental policy and to increase the supply of experimenters. It also very briefly discusses some of the potential positive and negative effects of pursuing such an R&D policy.

Suggested Citation

  • Thomas D. Cook & Dominique Foray, 2007. "Building the Capacity to Experiment in Schools: A Case Study of the Institute of Educational Sciences in the US Department of Education," Economics of Innovation and New Technology, Taylor & Francis Journals, vol. 16(5), pages 385-402.
  • Handle: RePEc:taf:ecinnt:v:16:y:2007:i:5:p:385-402
    DOI: 10.1080/10438590600982475

    Download full text from publisher

    File URL: http://www.tandfonline.com/doi/abs/10.1080/10438590600982475
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1080/10438590600982475?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Joshua D. Angrist, 2004. "American Education Research Changes Tack," Oxford Review of Economic Policy, Oxford University Press and Oxford Review of Economic Policy Limited, vol. 20(2), pages 198-212, Summer.
    2. Roberto Agodini & Mark Dynarski, "undated". "Are Experiments the Only Option? A Look at Dropout Prevention Programs," Mathematica Policy Research Reports 51241adbf9fa4a26add6d54c5, Mathematica Policy Research.
    3. Roberto Agodini & Mark Dynarski, 2004. "Are Experiments the Only Option? A Look at Dropout Prevention Programs," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 180-194, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Foray, D. & Raffo, J., 2014. "The emergence of an educational tool industry: Opportunities and challenges for innovation in education," Research Policy, Elsevier, vol. 43(10), pages 1707-1715.
    2. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, , vol. 42(2), pages 147-175, April.
    3. Gargani, John, 2013. "What can practitioners learn from theorists’ logic models?," Evaluation and Program Planning, Elsevier, vol. 38(C), pages 81-88.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, , vol. 42(2), pages 147-175, April.
    2. Jean Stockard, 2013. "Merging the accountability and scientific research requirements of the No Child Left Behind Act: using cohort control groups," Quality & Quantity: International Journal of Methodology, Springer, vol. 47(4), pages 2225-2257, June.
    3. William Bosshardt & Neela Manage, 2011. "Does Calculus Help in Principles of Economics Courses? Estimates Using Matching Estimators," The American Economist, Sage Publications, vol. 56(1), pages 29-37, May.
    4. Ferraro, Paul J. & Miranda, Juan José, 2014. "The performance of non-experimental designs in the evaluation of environmental programs: A design-replication study using a large-scale randomized experiment as a benchmark," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 344-365.
    5. Aga, Deribe Assefa, 2016. "Factors affecting the success of development projects : A behavioral perspective," Other publications TiSEM 867ae95e-d53d-4a68-ad46-6, Tilburg University, School of Economics and Management.
    6. Fatih Unlu & Douglas Lee Lauen & Sarah Crittenden Fuller & Tiffany Berglund & Elc Estrera, 2021. "Can Quasi‐Experimental Evaluations That Rely On State Longitudinal Data Systems Replicate Experimental Results?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(2), pages 572-613, March.
    7. Handa, Sudhanshu & Pineda, Heiling & Esquivel, Yannete & Lopez, Blancadilia & Gurdian, Nidia Veronica & Regalia, Ferdinando, 2009. "Non-formal basic education as a development priority: Evidence from Nicaragua," Economics of Education Review, Elsevier, vol. 28(4), pages 512-522, August.
    8. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
    9. Alberto Abadie & Guido W. Imbens, 2008. "On the Failure of the Bootstrap for Matching Estimators," Econometrica, Econometric Society, vol. 76(6), pages 1537-1557, November.
    10. Maureen A. Pirog & Anne L. Buffardi & Colleen K. Chrisinger & Pradeep Singh & John Briney, 2009. "Are the alternatives to randomized assignment nearly as good? Statistical corrections to nonrandomized evaluations," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 28(1), pages 169-172.
    11. Maciej Jakubowski, 2015. "Latent variables and propensity score matching: a simulation study with application to data from the Programme for International Student Assessment in Poland," Empirical Economics, Springer, vol. 48(3), pages 1287-1325, May.
    12. Bridget Terry Long & Michal Kurlaender, 2008. "Do Community Colleges provide a Viable Pathway to a Baccalaureate Degree?," NBER Working Papers 14367, National Bureau of Economic Research, Inc.
    13. Thomas D. Cook & William R. Shadish & Vivian C. Wong, 2008. "Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(4), pages 724-750.
    14. Andrew P. Jaciw, 2016. "Applications of a Within-Study Comparison Approach for Evaluating Bias in Generalized Causal Inferences From Comparison Groups Studies," Evaluation Review, , vol. 40(3), pages 241-276, June.
    15. Hämäläinen, Kari & Uusitalo, Roope & Vuori, Jukka, 2008. "Varying biases in matching estimates: Evidence from two randomised job search training experiments," Labour Economics, Elsevier, vol. 15(4), pages 604-618, August.
    16. Ravallion, Martin, 2008. "Evaluating Anti-Poverty Programs," Handbook of Development Economics, in: T. Paul Schultz & John A. Strauss (ed.), Handbook of Development Economics, edition 1, volume 4, chapter 59, pages 3787-3846, Elsevier.
    17. Justine Burns & Malcolm Kewsell & Rebecca Thornton, 2009. "Evaluating the Impact of Health Programmes," SALDRU Working Papers 40, Southern Africa Labour and Development Research Unit, University of Cape Town.
    18. Rebecca A. Maynard, 2006. "Presidential address: Evidence-based decision making: What will it take for the decision makers to care?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 25(2), pages 249-265.
    19. Sauermann, Jan & Stenberg, Anders, 2020. "Assessing Selection Bias in Non-Experimental Estimates of the Returns to Workplace Training," IZA Discussion Papers 13789, Institute of Labor Economics (IZA).
    20. Peter Z. Schochet & John Burghardt, 2007. "Using Propensity Scoring to Estimate Program-Related Subgroup Impacts in Experimental Program Evaluations," Evaluation Review, , vol. 31(2), pages 95-120, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:taf:ecinnt:v:16:y:2007:i:5:p:385-402. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Longhurst (email available below). General contact details of provider: http://www.tandfonline.com/GEIN20.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.