IDEAS home Printed from https://ideas.repec.org/a/sae/evarev/v26y2002i4p355-381.html

Using Performance Standards to Evaluate Social Programs with Incomplete Outcome Data

Author

Listed:
  • Charles F. Manski

    (Northwestern University)

  • John Newman

    (World Bank)

  • John V. Pepper

    (University of Virginia)

Abstract

The idea of program evaluation is both simple and appealing: program outcomes are measured and compared to some minimum performance standard or threshold. In practice, however, evaluation is difficult. Two fundamental problems of outcome measurement must be addressed. The first, which we call the problem of auxiliary outcomes, is that we do not observe the outcome of interest. The second, which we call the problem of counterfactual outcomes, is that we do not observe the outcomes that would occur under alternative treatments. This article examines how performance standards should be set and applied in the face of these problems in measuring outcomes. The central message is that the proper way to implement standards varies with the prior information an evaluator can credibly bring to bear to compensate for incomplete outcome data. By combining available data with credible assumptions on treatments and outcomes, the performance of a program may be deemed acceptable, unacceptable, or indeterminate.
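The abstract's three-way verdict (acceptable, unacceptable, or indeterminate) can be illustrated with worst-case bounds in the spirit of Manski's nonparametric bounds literature. The sketch below is not from the article; the outcome range, sample numbers, and function names are invented for illustration. It assumes only that each outcome lies in a known interval, so the missing outcomes are filled in at their worst and best possible values to bound the program's mean outcome.

```python
# Hypothetical sketch of worst-case bounds on a program's mean outcome
# when some participants' outcomes are missing. Assumes each outcome
# lies in [y_min, y_max]; all names and numbers are invented.

def outcome_bounds(observed, n_total, y_min=0.0, y_max=1.0):
    """Worst-case bounds on the mean outcome across all participants.

    observed: list of observed outcomes (each in [y_min, y_max])
    n_total:  total number of participants (observed + missing)
    """
    n_obs = len(observed)
    p_obs = n_obs / n_total                          # fraction observed
    mean_obs = sum(observed) / n_obs if n_obs else 0.0
    lower = p_obs * mean_obs + (1 - p_obs) * y_min   # missing all at worst
    upper = p_obs * mean_obs + (1 - p_obs) * y_max   # missing all at best
    return lower, upper

def verdict(lower, upper, threshold):
    """Compare the identified interval with a performance standard."""
    if lower >= threshold:
        return "acceptable"      # even the worst case clears the standard
    if upper < threshold:
        return "unacceptable"    # even the best case falls short
    return "indeterminate"       # data plus assumptions cannot decide

# Example: 80 of 100 participants observed, each with outcome 0.6,
# so the identified interval is [0.48, 0.68].
lo, hi = outcome_bounds([0.6] * 80, 100)
```

Stronger prior assumptions (for example, monotone treatment response or instrumental variables, as in the references below) would narrow the interval and shrink the "indeterminate" region.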

Suggested Citation

  • Charles F. Manski & John Newman & John V. Pepper, 2002. "Using Performance Standards to Evaluate Social Programs with Incomplete Outcome Data," Evaluation Review, vol. 26(4), pages 355-381, August.
  • Handle: RePEc:sae:evarev:v:26:y:2002:i:4:p:355-381
    DOI: 10.1177/0193841X02026004001

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/0193841X02026004001
    Download Restriction: no

    File URL: https://libkey.io/10.1177/0193841X02026004001?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Manski, Charles F, 1990. "Nonparametric Bounds on Treatment Effects," American Economic Review, American Economic Association, vol. 80(2), pages 319-323, May.
    2. Charles F. Manski, 1997. "Monotone Treatment Response," Econometrica, Econometric Society, vol. 65(6), pages 1311-1334, November.
    3. Sims,Christopher A. (ed.), 1994. "Advances in Econometrics," Cambridge Books, Cambridge University Press, number 9780521444606, October.
    4. James J. Heckman & Jeffrey Smith & Nancy Clements, 1997. "Making The Most Out Of Programme Evaluations and Social Experiments: Accounting For Heterogeneity in Programme Impacts," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 64(4), pages 487-535.
    5. Heckman, James J & Honore, Bo E, 1990. "The Empirical Content of the Roy Model," Econometrica, Econometric Society, vol. 58(5), pages 1121-1149, September.
    6. Manski, Charles F., 2000. "Identification problems and decisions under ambiguity: Empirical analysis of treatment response and normative analysis of treatment choice," Journal of Econometrics, Elsevier, vol. 95(2), pages 415-442, April.
    7. Charles F. Manski & John V. Pepper, 2000. "Monotone Instrumental Variables, with an Application to the Returns to Schooling," Econometrica, Econometric Society, vol. 68(4), pages 997-1012, July.
    8. Sims,Christopher A. (ed.), 1994. "Advances in Econometrics," Cambridge Books, Cambridge University Press, number 9780521444590, October.
    9. Heckman, J.J. & Hotz, V.J., 1988. "Choosing Among Alternative Nonexperimental Methods For Estimating The Impact Of Social Programs: The Case Of Manpower Training," University of Chicago - Economics Research Center 88-12, Chicago - Economics Research Center.
    10. John V. Pepper, 2000. "The Intergenerational Transmission Of Welfare Receipt: A Nonparametric Bounds Analysis," The Review of Economics and Statistics, MIT Press, vol. 82(3), pages 472-488, August.
    11. Charles F. Manski, 1989. "Anatomy of the Selection Problem," Journal of Human Resources, University of Wisconsin Press, vol. 24(3), pages 343-360.
    12. Daniel Friedlander & David H. Greenberg & Philip K. Robins, 1997. "Evaluating Government Training Programs for the Economically Disadvantaged," Journal of Economic Literature, American Economic Association, vol. 35(4), pages 1809-1855, December.
    13. V. Joseph Hotz & Charles H. Mullin & Seth G. Sanders, 1997. "Bounding Causal Effects Using Data from a Contaminated Natural Experiment: Analysing the Effects of Teenage Childbearing," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 64(4), pages 575-603.
    14. Charles F. Manski, 1997. "The Mixing Problem in Programme Evaluation," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 64(4), pages 537-553.
    15. Bjorklund, Anders & Moffitt, Robert, 1987. "The Estimation of Wage Gains and Welfare Gains in Self-selection," The Review of Economics and Statistics, MIT Press, vol. 69(1), pages 42-49, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. John V. Pepper, 2002. "To Train or Not To Train: Optimal Treatment Assignment Rules Using Welfare-to-Work Experiments," Virginia Economics Online Papers 356, University of Virginia, Department of Economics.
    2. Manski, Charles F., 2000. "Identification problems and decisions under ambiguity: Empirical analysis of treatment response and normative analysis of treatment choice," Journal of Econometrics, Elsevier, vol. 95(2), pages 415-442, April.
    3. John V. Pepper, 2003. "Using Experiments to Evaluate Performance Standards: What Do Welfare-to-Work Demonstrations Reveal to Welfare Reformers?," Journal of Human Resources, University of Wisconsin Press, vol. 38(4).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Manski, Charles F., 2000. "Identification problems and decisions under ambiguity: Empirical analysis of treatment response and normative analysis of treatment choice," Journal of Econometrics, Elsevier, vol. 95(2), pages 415-442, April.
    2. Charles F. Manski & John Newman & John V. Pepper, "undated". "Using Performance Standards to Evaluate Social Programs with Incomplete Outcome Data: General Issues and Application to a Higher Education Block Grant Program," IPR working papers 00-1, Institute for Policy Research at Northwestern University.
    3. Charles F. Manski & John V. Pepper, 2000. "Monotone Instrumental Variables, with an Application to the Returns to Schooling," Econometrica, Econometric Society, vol. 68(4), pages 997-1012, July.
    4. Charles F. Manski, 2003. "Identification Problems in the Social Sciences and Everyday Life," Southern Economic Journal, John Wiley & Sons, vol. 70(1), pages 11-21, July.
    5. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    6. Michael Lechner & Blaise Melly, 2007. "Earnings Effects of Training Programs," University of St. Gallen Department of Economics working paper series 2007 2007-28, Department of Economics, University of St. Gallen.
    7. Michael Lechner & Blaise Melly, 2010. "Partial Identification of Wage Effects of Training Programs," Working Papers 2010-8, Brown University, Department of Economics.
    8. Dimitris Christelis & Dimitris Georgarakos & Tullio Jappelli & Geoff Kenny, 2020. "The Covid-19 Crisis and Consumption: Survey Evidence from Six EU Countries," Working Papers 2020_31, Business School - Economics, University of Glasgow.
    9. Grafova, Irina B. & Freedman, Vicki A. & Lurie, Nicole & Kumar, Rizie & Rogowski, Jeannette, 2014. "The difference-in-difference method: Assessing the selection bias in the effects of neighborhood environment on health," Economics & Human Biology, Elsevier, vol. 13(C), pages 20-33.
    10. Mingliang Li & Dale J. Poirier & Justin L. Tobias, 2004. "Do dropouts suffer from dropping out? Estimation and prediction of outcome gains in generalized selection models," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 19(2), pages 203-225.
    11. James J. Heckman, 2005. "Micro Data, Heterogeneity and the Evaluation of Public Policy Part 2," The American Economist, Sage Publications, vol. 49(1), pages 16-44, March.
    12. Charles F. Manski, 1999. "Statistical Treatment Rules for Heterogeneous Populations: With Application to Randomized Experiments," NBER Technical Working Papers 0242, National Bureau of Economic Research, Inc.
    13. Stefan Boes, 2013. "Nonparametric analysis of treatment effects in ordered response models," Empirical Economics, Springer, vol. 44(1), pages 81-109, February.
    14. James J. Heckman & Edward J. Vytlacil, 2000. "Instrumental Variables, Selection Models, and Tight Bounds on the Average Treatment Effect," NBER Technical Working Papers 0259, National Bureau of Economic Research, Inc.
    15. Ho, Kate & Rosen, Adam M., 2015. "Partial Identification in Applied Research: Benefits and Challenges," CEPR Discussion Papers 10883, C.E.P.R. Discussion Papers.
    16. Victor Chernozhukov & Sokbae Lee & Adam M. Rosen, 2013. "Intersection Bounds: Estimation and Inference," Econometrica, Econometric Society, vol. 81(2), pages 667-737, March.
    17. Vira Semenova, 2023. "Aggregated Intersection Bounds and Aggregated Minimax Values," Papers 2303.00982, arXiv.org, revised Jun 2024.
    18. Richard Blundell & Amanda Gosling & Hidehiko Ichimura & Costas Meghir, 2007. "Changes in the Distribution of Male and Female Wages Accounting for Employment Composition Using Bounds," Econometrica, Econometric Society, vol. 75(2), pages 323-363, March.
    19. Sungwon Lee, 2021. "Partial Identification and Inference for Conditional Distributions of Treatment Effects," Papers 2108.00723, arXiv.org, revised Nov 2023.
    20. C, Loran & Eckbo, Espen & Lu, Ching-Chih, 2014. "Does Executive Compensation Reflect Default Risk?," UiS Working Papers in Economics and Finance 2014/11, University of Stavanger.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:evarev:v:26:y:2002:i:4:p:355-381. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.