Printed from https://ideas.repec.org/a/gam/jmathe/v9y2021i13p1465-d580050.html

About the Equivalence of the Latent D-Scoring Model and the Two-Parameter Logistic Item Response Model

Author

Listed:
  • Alexander Robitzsch

    (IPN—Leibniz Institute for Science and Mathematics Education, Olshausenstrasse 62, 24118 Kiel, Germany;
    Centre for International Student Assessment (ZIB), Olshausenstrasse 62, 24118 Kiel, Germany)

Abstract

This article shows that the recently proposed latent D-scoring model of Dimitrov is statistically equivalent to the two-parameter logistic item response model. An analytical derivation and a numerical illustration demonstrate this finding. Hence, estimation techniques for the two-parameter logistic model can also be used to estimate the latent D-scoring model. In an empirical example based on PISA data, differences in country ranks are investigated when different metrics are used for the latent trait; in this example, the choice of the latent trait metric matters for the ranking of countries. Finally, it is argued that an item response model with bounded latent trait values, such as the latent D-scoring model, might have interpretational advantages for reporting results.
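The equivalence described in the abstract rests on a monotone reparameterization of the latent trait: an item response function defined on the unbounded 2PL metric can be re-expressed on a bounded (0, 1) metric without changing any item probabilities, which is why 2PL estimation machinery carries over. A minimal sketch of this idea (the logistic map used here is an illustrative choice of bounding transformation, not necessarily the exact transformation derived in the paper):

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function on the unbounded theta metric."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def to_bounded(theta):
    """Monotone map from the unbounded trait to a bounded (0, 1) metric."""
    return 1.0 / (1.0 + math.exp(-theta))

def from_bounded(d):
    """Inverse map (logit): recover theta from the bounded metric."""
    return math.log(d / (1.0 - d))

def irf_bounded(d, a, b):
    """The same item response function expressed on the bounded metric.

    Composing the 2PL IRF with the inverse map yields identical
    probabilities by construction, i.e. the two parameterizations
    are statistically equivalent.
    """
    return irf_2pl(from_bounded(d), a, b)

# Illustrative item parameters and trait value (hypothetical numbers)
theta = 0.7
a, b = 1.3, -0.4
d = to_bounded(theta)
p_unbounded = irf_2pl(theta, a, b)
p_bounded = irf_bounded(d, a, b)
# p_unbounded and p_bounded agree up to floating-point error
```

Note that while item probabilities are invariant under such a monotone transformation, nonlinear summaries of the trait distribution (for example, country means) are not, which is consistent with the abstract's observation that the choice of metric can change country rankings.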

Suggested Citation

  • Alexander Robitzsch, 2021. "About the Equivalence of the Latent D-Scoring Model and the Two-Parameter Logistic Item Response Model," Mathematics, MDPI, vol. 9(13), pages 1-17, June.
  • Handle: RePEc:gam:jmathe:v:9:y:2021:i:13:p:1465-:d:580050

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/9/13/1465/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/9/13/1465/
    Download Restriction: no

    References listed on IDEAS

    1. Dale Ballou, 2009. "Test Scaling and Value-Added Measurement," Education Finance and Policy, MIT Press, vol. 4(4), pages 351-383, October.
    2. James A Wiley & John Levi Martin & Stephen J Herschkorn & Jason Bond, 2015. "A New Extension of the Binomial Error Model for Responses to Items of Varying Difficulty in Educational Testing and Attitude Surveys," PLOS ONE, Public Library of Science, vol. 10(11), pages 1-16, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Cory Koedel & Rebecca Leatherman & Eric Parsons, 2012. "Test Measurement Error and Inference from Value-Added Models," The B.E. Journal of Economic Analysis & Policy, De Gruyter, vol. 12(1), pages 1-37, November.
    2. Seth Gershenson, 2016. "Performance Standards and Employee Effort: Evidence From Teacher Absences," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 35(3), pages 615-638, June.
    3. Derek C. Briggs & Ben Domingue, 2013. "The Gains From Vertical Scaling," Journal of Educational and Behavioral Statistics, , vol. 38(6), pages 551-576, December.
    4. Gershenson, Seth & Holt, Stephen B. & Papageorge, Nicholas W., 2015. "Who Believes in Me? The Effect of Student-Teacher Demographic Match on Teacher Expectations," IZA Discussion Papers 9202, Institute of Labor Economics (IZA).
    5. Alexander Robitzsch, 2024. "Estimation of Standard Error, Linking Error, and Total Error for Robust and Nonrobust Linking Methods in the Two-Parameter Logistic Model," Stats, MDPI, vol. 7(3), pages 1-21, June.
    6. Gadi Barlevy & Derek Neal, 2012. "Pay for Percentile," American Economic Review, American Economic Association, vol. 102(5), pages 1805-1831, August.
    7. Donald Boyd & Hamilton Lankford & Susanna Loeb & James Wyckoff, 2013. "Measuring Test Measurement Error," Journal of Educational and Behavioral Statistics, , vol. 38(6), pages 629-663, December.
    8. Brendan Houng & Moshe Justman, 2013. "Comparing Least-Squares Value-Added Analysis and Student Growth Percentile Analysis for Evaluating Student Progress and Estimating School Effects," Melbourne Institute Working Paper Series wp2013n07, Melbourne Institute of Applied Economic and Social Research, The University of Melbourne.
    9. David M. Quinn & Andrew D. Ho, 2021. "Ordinal Approaches to Decomposing Between-Group Test Score Disparities," Journal of Educational and Behavioral Statistics, , vol. 46(4), pages 466-500, August.
    10. Moshe Justman & Brendan Houng, 2013. "A Comparison Of Two Methods For Estimating School Effects And Tracking Student Progress From Standardized Test Scores," Working Papers 1316, Ben-Gurion University of the Negev, Department of Economics.
    11. Wiswall, Matthew, 2013. "The dynamics of teacher quality," Journal of Public Economics, Elsevier, vol. 100(C), pages 61-78.
    12. J. R. Lockwood & Daniel F. McCaffrey, 2014. "Correcting for Test Score Measurement Error in ANCOVA Models for Estimating Treatment Effects," Journal of Educational and Behavioral Statistics, , vol. 39(1), pages 22-52, February.
    13. Daniel M. Bolt & Xiangyi Liao, 2022. "Item Complexity: A Neglected Psychometric Feature of Test Items?," Psychometrika, Springer;The Psychometric Society, vol. 87(4), pages 1195-1213, December.
    14. Cory Koedel & Mark Ehlert & Eric Parsons & Michael Podgursky, 2012. "Selecting Growth Measures for School and Teacher Evaluations," Working Papers 1210, Department of Economics, University of Missouri.
    15. Barrett, Nathan & Toma, Eugenia F., 2013. "Reward or punishment? Class size and teacher quality," Economics of Education Review, Elsevier, vol. 35(C), pages 41-52.
    16. Seth Gershenson & Diane Whitmore Schanzenbach, 2016. "Linking Teacher Quality, Student Attendance, and Student Achievement," Education Finance and Policy, MIT Press, vol. 11(2), pages 125-149, Spring.
    17. Benjamin R. Shear & Sean F. Reardon, 2021. "Using Pooled Heteroskedastic Ordered Probit Models to Improve Small-Sample Estimates of Latent Test Score Distributions," Journal of Educational and Behavioral Statistics, , vol. 46(1), pages 3-33, February.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:9:y:2021:i:13:p:1465-:d:580050. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.