Printed from https://ideas.repec.org/a/sae/jedbes/v49y2024i1p32-60.html

Cognitive Diagnosis Testlet Model for Multiple-Choice Items

Author

Listed:
  • Lei Guo (Southwest University)
  • Wenjie Zhou (Southwest University)
  • Xiao Li (University of Illinois at Urbana-Champaign)

Abstract

The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model, for tests built from testlets of MC items. Instead of dichotomously scored data (i.e., correct or incorrect), the MC-CDT model uses examinees’ original responses to MC items, retaining the information carried by different distractors and thus enhancing the MC items’ diagnostic power. A Markov chain Monte Carlo algorithm, implemented in the WinBUGS software, was adopted to calibrate the model. A thorough simulation study was then conducted to evaluate estimation accuracy for both item and examinee parameters in the MC-CDT model under various conditions. The results showed that the proposed MC-CDT model outperformed the traditional MC cognitive diagnostic model: it fit testlet data better than the traditional model while also fitting data without testlets well. An empirical study further showed that the MC-CDT model fits real data better than the traditional model and can also provide testlet information.
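The abstract's key point, that keeping the selected option rather than a right/wrong score preserves distractor information, can be sketched with a toy response rule. The following is a minimal Python illustration in the spirit of MC-DINA-style models, not the paper's actual MC-CDT specification; the item, its option attribute codings, and the slip parameter are invented for the example:

```python
import numpy as np

# Illustrative sketch only: a simplified MC-DINA-style response rule,
# not the authors' exact MC-CDT formulation. The item, its option
# attribute codings, and the slip parameter are all hypothetical.

# One multiple-choice item with 4 options over 2 attributes.
# option_q[j] is the attribute pattern coded by option j;
# the last option is the key.
option_q = np.array([
    [0, 0],  # pure distractor, codes no attribute
    [1, 0],  # distractor coding attribute 1
    [0, 1],  # distractor coding attribute 2
    [1, 1],  # key: requires mastery of both attributes
])

def response_probs(alpha, slip=0.1):
    """Option-selection probabilities for mastery pattern `alpha`.

    The examinee gravitates to the most demanding option whose coded
    attributes they have mastered, selecting it with probability
    1 - slip and spreading `slip` uniformly over the other options.
    """
    matched = [j for j in range(len(option_q))
               if np.all(alpha >= option_q[j])]
    best = max(matched, key=lambda j: option_q[j].sum())
    probs = np.full(len(option_q), slip / (len(option_q) - 1))
    probs[best] = 1.0 - slip
    return probs

# A master of both attributes favors the key (option 3), while an
# examinee mastering only attribute 1 favors option 1; dichotomous
# (right/wrong) scoring would discard that distinction.
print(response_probs(np.array([1, 1])))
print(response_probs(np.array([1, 0])))
```

Under dichotomous scoring, both non-masters above would simply be "incorrect"; keeping the chosen option lets the model infer which attributes each examinee likely has.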

Suggested Citation

  • Lei Guo & Wenjie Zhou & Xiao Li, 2024. "Cognitive Diagnosis Testlet Model for Multiple-Choice Items," Journal of Educational and Behavioral Statistics, vol. 49(1), pages 32-60, February.
  • Handle: RePEc:sae:jedbes:v:49:y:2024:i:1:p:32-60
    DOI: 10.3102/10769986231165622

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.3102/10769986231165622
    Download Restriction: no

    File URL: https://libkey.io/10.3102/10769986231165622?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a source you can access through your library subscription.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yang Liu & Jan Hannig, 2017. "Generalized Fiducial Inference for Logistic Graded Response Models," Psychometrika, Springer;The Psychometric Society, vol. 82(4), pages 1097-1125, December.
    2. Peida Zhan & Hong Jiao & Kaiwen Man & Lijun Wang, 2019. "Using JAGS for Bayesian Cognitive Diagnosis Modeling: A Tutorial," Journal of Educational and Behavioral Statistics, vol. 44(4), pages 473-503, August.
    3. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    4. Xin Xu & Guanhua Fang & Jinxin Guo & Zhiliang Ying & Susu Zhang, 2024. "Diagnostic Classification Models for Testlets: Methods and Theory," Psychometrika, Springer;The Psychometric Society, vol. 89(3), pages 851-876, September.
    5. Peida Zhan & Wen-Chung Wang & Xiaomin Li, 2020. "A Partial Mastery, Higher-Order Latent Structural Model for Polytomous Attributes in Cognitive Diagnostic Assessments," Journal of Classification, Springer;The Classification Society, vol. 37(2), pages 328-351, July.
    6. Peida Zhan & Hong Jiao & Dandan Liao & Feiming Li, 2019. "A Longitudinal Higher-Order Diagnostic Classification Model," Journal of Educational and Behavioral Statistics, vol. 44(3), pages 251-281, June.
    7. Singh, Jagdip, 2004. "Tackling measurement problems with Item Response Theory: Principles, characteristics, and assessment, with an illustrative example," Journal of Business Research, Elsevier, vol. 57(2), pages 184-208, February.
    8. Chun Wang, 2024. "A Diagnostic Facet Status Model (DFSM) for Extracting Instructionally Useful Information from Diagnostic Assessment," Psychometrika, Springer;The Psychometric Society, vol. 89(3), pages 747-773, September.
    9. Javier Revuelta, 2008. "The generalized Logit-Linear Item Response Model for Binary-Designed Items," Psychometrika, Springer;The Psychometric Society, vol. 73(3), pages 385-405, September.
    10. Stefano Noventa & Andrea Spoto & Jürgen Heller & Augustin Kelava, 2019. "On a Generalization of Local Independence in Item Response Theory Based on Knowledge Space Theory," Psychometrika, Springer;The Psychometric Society, vol. 84(2), pages 395-421, June.
    11. Matthew S. Johnson & Sandip Sinharay, 2020. "The Reliability of the Posterior Probability of Skill Attainment in Diagnostic Classification Models," Journal of Educational and Behavioral Statistics, vol. 45(1), pages 5-31, February.
    12. Quinn N. Lathrop & Ying Cheng, 2017. "Item Cloning Variation and the Impact on the Parameters of Response Models," Psychometrika, Springer;The Psychometric Society, vol. 82(1), pages 245-263, March.
    13. Nana Kim & Daniel M. Bolt & James Wollack, 2022. "Noncompensatory MIRT For Passage-Based Tests," Psychometrika, Springer;The Psychometric Society, vol. 87(3), pages 992-1009, September.
    14. Lachaud, Michée A. & Bravo-Ureta, Boris E., 2022. "A Bayesian statistical analysis of return to agricultural R&D investment in Latin America: Implications for food security," Technology in Society, Elsevier, vol. 70(C).
    15. Yu, Jun, 2012. "A semiparametric stochastic volatility model," Journal of Econometrics, Elsevier, vol. 167(2), pages 473-482.
    16. Luo, Nanyu & Ji, Feng & Han, Yuting & He, Jinbo & Zhang, Xiaoya, 2024. "Fitting item response theory models using deep learning computational frameworks," OSF Preprints tjxab, Center for Open Science.
    17. Bas Hemker & Klaas Sijtsma & Ivo Molenaar & Brian Junker, 1996. "Polytomous IRT models and monotone likelihood ratio of the total score," Psychometrika, Springer;The Psychometric Society, vol. 61(4), pages 679-693, December.
    18. Liang, Zhongyao & Qian, Song S. & Wu, Sifeng & Chen, Huili & Liu, Yong & Yu, Yanhong & Yi, Xuan, 2019. "Using Bayesian change point model to enhance understanding of the shifting nutrients-phytoplankton relationship," Ecological Modelling, Elsevier, vol. 393(C), pages 120-126.
    19. Martijn G. de Jong & Jan-Benedict E. M. Steenkamp & Bernard P. Veldkamp, 2009. "A Model for the Construction of Country-Specific Yet Internationally Comparable Short-Form Marketing Scales," Marketing Science, INFORMS, vol. 28(4), pages 674-689, 07-08.
    20. Hans-Friedrich Köhn & Chia-Yi Chiu, 2018. "How to Build a Complete Q-Matrix for a Cognitively Diagnostic Test," Journal of Classification, Springer;The Classification Society, vol. 35(2), pages 273-299, July.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.