Printed from https://ideas.repec.org/a/sae/jedbes/v38y2013i6p551-576.html

The Gains From Vertical Scaling

Author

Listed:
  • Derek C. Briggs
  • Ben Domingue

Abstract

It is often assumed that a vertical scale is necessary when value-added models depend upon the gain scores of students across two or more points in time. This article examines the conditions under which the scale transformations associated with the vertical scaling process would be expected to have a significant impact on normative interpretations using gain scores. It is shown that this will depend upon the extent to which adopting a particular vertical scaling approach leads to a large degree of scale shrinkage (decreases in score variability over time). Empirical data are used to compare school-level gain scores computed as a function of different vertical scales transformed to represent increasing, decreasing, and constant trends in score variability across grades. A pragmatic approach is also presented to assess the departure of a given vertical scale from a scale with ideal equal-interval properties. Finally, longitudinal data are used to illustrate a case when the availability of a vertical scale will be most important: when questions are being posed about the magnitudes of student-level growth trajectories.
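The abstract's central point, that scale shrinkage (a change in score variability across grades) can reshuffle normative gain-score comparisons, can be illustrated with a small simulation. The sketch below is not taken from the article (which uses real empirical test data); it uses entirely hypothetical scores and an arbitrary linear rescaling to stand in for the transformations a vertical scaling procedure might induce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores for 50 schools x 40 students at two grades; the
# article's analysis uses real test data, so this is only an illustration.
n_schools, n_students = 50, 40
school_effect = rng.normal(0, 5, (n_schools, 1))
grade3 = 200 + school_effect + rng.normal(0, 20, (n_schools, n_students))
grade4 = grade3 + 30 + rng.normal(0, 8, (n_schools, n_students))

def rescale(scores, factor):
    """Linearly transform a scale so its spread changes by `factor`,
    mimicking the change in score variability across grades that a
    given vertical scaling approach can induce."""
    return scores.mean() + factor * (scores - scores.mean())

# School-level mean gains under constant, shrinking, and expanding
# grade-4 score variability relative to grade 3.
gains_by_scale = {
    label: rescale(grade4, f).mean(axis=1) - grade3.mean(axis=1)
    for label, f in [("constant", 1.0), ("shrinking", 0.7), ("expanding", 1.3)]
}

# Normative (rank-based) interpretations shift with the scale choice:
for label, gains in gains_by_scale.items():
    print(label, "top-ranked school:", int(np.argmax(gains)))
```

Because the rescaling reweights grade-4 scores relative to grade-3 scores, the ordering of schools by mean gain generally differs across the three scales, which is the sense in which scale shrinkage matters for normative interpretations.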

Suggested Citation

  • Derek C. Briggs & Ben Domingue, 2013. "The Gains From Vertical Scaling," Journal of Educational and Behavioral Statistics, vol. 38(6), pages 551-576, December.
  • Handle: RePEc:sae:jedbes:v:38:y:2013:i:6:p:551-576
    DOI: 10.3102/1076998613508317

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.3102/1076998613508317
    Download Restriction: no

    File URL: https://libkey.io/10.3102/1076998613508317?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    References listed on IDEAS

    1. Louis T. Mariano & Daniel F. McCaffrey & J. R. Lockwood, 2010. "A Model for Teacher Effects From Longitudinal Data Without Assuming Vertical Scaling," Journal of Educational and Behavioral Statistics, vol. 35(3), pages 253-279, June.
    2. Raj Chetty & John N. Friedman & Jonah E. Rockoff, 2011. "The Long-Term Impacts of Teachers: Teacher Value-Added and Student Outcomes in Adulthood," NBER Working Papers 17699, National Bureau of Economic Research, Inc.
    3. Dale Ballou, 2009. "Test Scaling and Value-Added Measurement," Education Finance and Policy, MIT Press, vol. 4(4), pages 351-383, October.
    4. H. Brogden, 1977. "The rasch model, the law of comparative judgment and additive conjoint measurement," Psychometrika, Springer;The Psychometric Society, vol. 42(4), pages 631-634, December.
    5. Petra E. Todd & Kenneth I. Wolpin, 2003. "On The Specification and Estimation of The Production Function for Cognitive Achievement," Economic Journal, Royal Economic Society, vol. 113(485), pages 3-33, February.
    6. Eric A. Hanushek & Steven G. Rivkin, 2010. "Generalizations about Using Value-Added Measures of Teacher Quality," American Economic Review, American Economic Association, vol. 100(2), pages 267-271, May.
    7. Douglas N. Harris, 2009. "Would Accountability Based on Teacher Value Added Be Smart Policy? An Examination of the Statistical Properties and Policy Alternatives," Education Finance and Policy, MIT Press, vol. 4(4), pages 319-350, October.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Cory Koedel & Rebecca Leatherman & Eric Parsons, 2012. "Test Measurement Error and Inference from Value-Added Models," The B.E. Journal of Economic Analysis & Policy, De Gruyter, vol. 12(1), pages 1-37, November.
    2. Jonah E. Rockoff & Douglas O. Staiger & Thomas J. Kane & Eric S. Taylor, 2012. "Information and Employee Evaluation: Evidence from a Randomized Intervention in Public Schools," American Economic Review, American Economic Association, vol. 102(7), pages 3184-3213, December.
    3. Cory Koedel & Mark Ehlert & Eric Parsons & Michael Podgursky, 2012. "Selecting Growth Measures for School and Teacher Evaluations," Working Papers 1210, Department of Economics, University of Missouri.
    4. Ben Ost, 2014. "How Do Teachers Improve? The Relative Importance of Specific and General Human Capital," American Economic Journal: Applied Economics, American Economic Association, vol. 6(2), pages 127-151, April.
    5. Dan Goldhaber & Michael Hansen, 2013. "Is it Just a Bad Class? Assessing the Long-term Stability of Estimated Teacher Performance," Economica, London School of Economics and Political Science, vol. 80(319), pages 589-612, July.
    6. Marine de Talancé, 2015. "Better Teachers, Better Results? Evidence from Rural Pakistan," Working Papers DT/2015/21, DIAL (Développement, Institutions et Mondialisation).
    7. Vosters, Kelly N. & Guarino, Cassandra M. & Wooldridge, Jeffrey M., 2018. "Understanding and evaluating the SAS® EVAAS® Univariate Response Model (URM) for measuring teacher effectiveness," Economics of Education Review, Elsevier, vol. 66(C), pages 191-205.
    8. Stacy, Brian, 2014. "Ranking Teachers when Teacher Value-Added is Heterogeneous Across Students," EconStor Preprints 104743, ZBW - Leibniz Information Centre for Economics.
    9. Papay, John P. & Kraft, Matthew A., 2015. "Productivity returns to experience in the teacher labor market: Methodological challenges and new evidence on long-term career improvement," Journal of Public Economics, Elsevier, vol. 130(C), pages 105-119.
    10. Seth Gershenson, 2016. "Performance Standards and Employee Effort: Evidence From Teacher Absences," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 35(3), pages 615-638, June.
    11. Condie, Scott & Lefgren, Lars & Sims, David, 2014. "Teacher heterogeneity, value-added and education policy," Economics of Education Review, Elsevier, vol. 40(C), pages 76-92.
    12. Stacy, Brian & Guarino, Cassandra & Wooldridge, Jeffrey, 2018. "Does the precision and stability of value-added estimates of teacher performance depend on the types of students they serve?," Economics of Education Review, Elsevier, vol. 64(C), pages 50-74.
    13. Helen F. Ladd & Lucy C. Sorensen, 2017. "Returns to Teacher Experience: Student Achievement and Motivation in Middle School," Education Finance and Policy, MIT Press, vol. 12(2), pages 241-279, Spring.
    14. Gary Henry & Roderick Rose & Doug Lauen, 2014. "Are value-added models good enough for teacher evaluations? Assessing commonly used models with simulated and actual data," Investigaciones de Economía de la Educación volume 9, in: Adela García Aracil & Isabel Neira Gómez (ed.), Investigaciones de Economía de la Educación 9, edition 1, volume 9, chapter 20, pages 383-405, Asociación de Economía de la Educación.
    15. Aucejo, Esteban M. & Romano, Teresa Foy, 2016. "Assessing the effect of school days and absences on test score performance," Economics of Education Review, Elsevier, vol. 55(C), pages 70-87.
    16. Steven G. Rivkin & Jeffrey C. Schiman, 2015. "Instruction Time, Classroom Quality, and Academic Achievement," Economic Journal, Royal Economic Society, vol. 125(588), pages 425-448, November.
    17. Cassandra M. Guarino & Mark D. Reckase & Jeffrey M. Woolrdige, 2014. "Can Value-Added Measures of Teacher Performance Be Trusted?," Education Finance and Policy, MIT Press, vol. 10(1), pages 117-156, November.
    18. Goldhaber, Dan & Cowan, James & Walch, Joe, 2013. "Is a good elementary teacher always good? Assessing teacher performance estimates across subjects," Economics of Education Review, Elsevier, vol. 36(C), pages 216-228.
    19. Goel, Deepti & Barooah, Bidisha, 2018. "Drivers of Student Performance: Evidence from Higher Secondary Public Schools in Delhi," GLO Discussion Paper Series 231, Global Labor Organization (GLO).
    20. Cory Koedel & Eric Parsons & Michael Podgursky & Mark Ehlert, 2015. "Teacher Preparation Programs and Teacher Quality: Are There Real Differences Across Programs?," Education Finance and Policy, MIT Press, vol. 10(4), pages 508-534, October.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:jedbes:v:38:y:2013:i:6:p:551-576. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.