Printed from https://ideas.repec.org/a/sae/joudef/v19y2022i2p229-236.html

The importance of identifying the dimensionality of constructs employed in simulation and training for AI

Author

Listed:
  • Michael D Coovert
  • Winston Bennett Jr

Abstract

Advances at the intersection of artificial intelligence (AI) and education and training are occurring at an ever-increasing pace. On the education and training side, psychological and performance constructs play a central role in both theory and application. It is essential, therefore, to accurately determine the dimensionality of a construct, as constructs are employed both in the development and testing of theory and in practical application. Traditionally, both exploratory and confirmatory factor analyses have been employed to establish the dimensionality of data. Due in part to inconsistent findings, methodologists have recently resurrected the bifactor approach for establishing the dimensionality of data. The bifactor model is pitted against traditional data structures, and the one with the best overall fit (according to chi-square, root mean square error of approximation (RMSEA), comparative fit index (CFI), Tucker–Lewis index (TLI), and standardized root mean square residual (SRMR)) is preferred. If the bifactor structure is preferred by that comparison, it can be further examined via a suite of emerging coefficients (e.g., omega, omega hierarchical, omega subscale, H, explained common variance, and percent uncontaminated correlations), each of which is computed from standardized factor loadings. To examine the utility of these new statistical tools in an education and training context, we analyze data where the construct of interest is trust. We chose trust because it is central, among other things, to understanding human reliance upon and utilization of AI systems. Applying the statistical approach above, we determined that the two-factor structure of a widely employed trust scale is better represented by one general factor. Findings like this hold substantial implications for theory development and testing, for prediction in structural equation modeling (SEM), and for the utilization of scales and their role in education, training, and AI systems.
We encourage other researchers to employ the statistical measures described here to critically examine the construct measures used in their work if those measures are thought to be multidimensional. Only through the appropriate utilization of constructs, defined in part by their dimensionality, can we advance the intersection of AI and simulation and training.
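The abstract notes that these bifactor coefficients are computed from standardized factor loadings. As a minimal sketch of that calculation, the Python below computes omega, omega hierarchical, and explained common variance (ECV) from an invented six-item bifactor pattern (one general factor, two group factors) using the standard formulas; the loadings are illustrative assumptions, not the trust-scale estimates analyzed in the article.

```python
# Illustrative standardized bifactor loadings for six items: every item loads
# on the general factor; items 0-2 also load on group factor A, items 3-5 on
# group factor B. These numbers are invented for demonstration.
general = [0.70, 0.65, 0.72, 0.68, 0.60, 0.66]
group_a = [0.30, 0.35, 0.25, 0.00, 0.00, 0.00]
group_b = [0.00, 0.00, 0.00, 0.40, 0.30, 0.35]

# Standardized residual (uniqueness) per item: 1 minus its squared loadings.
theta = [1 - g**2 - a**2 - b**2 for g, a, b in zip(general, group_a, group_b)]

def omega_total(gen, groups, resid):
    """Omega: share of total score variance due to all common factors."""
    common = sum(gen) ** 2 + sum(sum(grp) ** 2 for grp in groups)
    return common / (common + sum(resid))

def omega_hierarchical(gen, groups, resid):
    """Omega-h: share of total score variance due to the general factor alone."""
    total = sum(gen) ** 2 + sum(sum(grp) ** 2 for grp in groups) + sum(resid)
    return sum(gen) ** 2 / total

def ecv(gen, groups):
    """ECV: share of the common variance explained by the general factor."""
    gen_var = sum(g**2 for g in gen)
    grp_var = sum(l**2 for grp in groups for l in grp)
    return gen_var / (gen_var + grp_var)

print(round(omega_total(general, [group_a, group_b], theta), 3))
print(round(omega_hierarchical(general, [group_a, group_b], theta), 3))
print(round(ecv(general, [group_a, group_b]), 3))
```

A high omega hierarchical and ECV relative to the group-factor contribution is the pattern that supports treating a scale as essentially unidimensional, which is the kind of evidence the article reports for the trust scale.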

Suggested Citation

  • Michael D Coovert & Winston Bennett Jr, 2022. "The importance of identifying the dimensionality of constructs employed in simulation and training for AI," The Journal of Defense Modeling and Simulation, vol. 19(2), pages 229-236, April.
  • Handle: RePEc:sae:joudef:v:19:y:2022:i:2:p:229-236
    DOI: 10.1177/15485129211036936

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/15485129211036936
    Download Restriction: no

    File URL: https://libkey.io/10.1177/15485129211036936?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Klaas Sijtsma & Jules L. Ellis & Denny Borsboom, 2024. "Recognize the Value of the Sum Score, Psychometrics’ Greatest Accomplishment," Psychometrika, Springer;The Psychometric Society, vol. 89(1), pages 84-117, March.
    2. Tyler Hunt & Peter Bentler, 2015. "Quantile Lower Bounds to Reliability Based on Locally Optimal Splits," Psychometrika, Springer;The Psychometric Society, vol. 80(1), pages 182-195, March.
    3. Klaas Sijtsma & Julius M. Pfadt, 2021. "Part II: On the Use, the Misuse, and the Very Limited Usefulness of Cronbach’s Alpha: Discussing Lower Bounds and Correlated Errors," Psychometrika, Springer;The Psychometric Society, vol. 86(4), pages 843-860, December.
    4. Markus Pauly & Maria Umlauft & Ali Ünlü, 2018. "Resampling-Based Inference Methods for Comparing Two Coefficients Alpha," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 203-222, March.
    5. Zhengguo Gu & Wilco H. M. Emons & Klaas Sijtsma, 2021. "Estimating Difference-Score Reliability in Pretest–Posttest Settings," Journal of Educational and Behavioral Statistics, vol. 46(5), pages 592-610, October.
    6. Piotr Koc, 2021. "Measuring Non-electoral Political Participation: Bi-factor Model as a Tool to Extract Dimensions," Social Indicators Research: An International and Interdisciplinary Journal for Quality-of-Life Measurement, Springer, vol. 156(1), pages 271-287, July.
    7. David J. Hessen, 2017. "Lower Bounds to the Reliabilities of Factor Score Estimators," Psychometrika, Springer;The Psychometric Society, vol. 82(3), pages 648-659, September.
    8. Maciej Koniewski & Ilona Barańska & Violetta Kijowska & Jenny T. Steen & Anne B. Wichmann & Sheila Payne & Giovanni Gambassi & Nele Den Noortgate & Harriet Finne-Soveri & Tinne Smets & Lieve den Block, 2022. "Measuring relatives’ perceptions of end-of-life communication with physicians in five countries: a psychometric analysis," European Journal of Ageing, Springer, vol. 19(4), pages 1561-1570, December.
    9. Jules L. Ellis & Klaas Sijtsma & Kristel Groot & Patrick J. F. Groenen, 2024. "Reliability Theory for Measurements with Variable Test Length, Illustrated with ERN and Pe Collected in the Flanker Task," Psychometrika, Springer;The Psychometric Society, vol. 89(4), pages 1280-1303, December.
    10. Xiaochuan Song, 2022. "Investigating Employees’ Responses to Abusive Supervision," Merits, MDPI, vol. 2(4), pages 1-20, November.
    11. Carmen León-Mantero & José Carlos Casas-Rosal & Alexander Maz-Machado & Miguel E Villarraga Rico, 2020. "Analysis of attitudinal components towards statistics among students from different academic degrees," PLOS ONE, Public Library of Science, vol. 15(1), pages 1-13, January.
    12. Brian K Miller & Kay M Nicols & Silvia Clark & Alison Daniels & Whitney Grant, 2018. "Meta-analysis of coefficient alpha for scores on the Narcissistic Personality Inventory," PLOS ONE, Public Library of Science, vol. 13(12), pages 1-16, December.
    13. Adam Pawlewicz & Wojciech Gotkiewicz & Katarzyna Brodzińska & Katarzyna Pawlewicz & Bartosz Mickiewicz & Paweł Kluczek, 2022. "Organic Farming as an Alternative Maintenance Strategy in the Opinion of Farmers from Natura 2000 Areas," IJERPH, MDPI, vol. 19(7), pages 1-22, March.
    14. Michael Hennessy & Amy Bleakley & Martin Fishbein, 2012. "Measurement Models for Reasoned Action Theory," The ANNALS of the American Academy of Political and Social Science, vol. 640(1), pages 42-57, March.
    15. Rauter, Romana & Globocnik, Dietfried & Baumgartner, Rupert J., 2023. "The role of organizational controls to advance sustainability innovation performance," Technovation, Elsevier, vol. 128(C).
    16. Mercy Gloria Ashepet & Liesbet Vranken & Caroline Michellier & Olivier Dewitte & Rodgers Mutyebere & Clovis Kabaseke & Ronald Twongyirwe & Violet Kanyiginya & Grace Kagoro-Rugunda & Tine Huyse & Liesb, 2024. "Assessing scale reliability in citizen science motivational research: lessons learned from two case studies in Uganda," Palgrave Communications, Palgrave Macmillan, vol. 11(1), pages 1-18, December.
    17. Kelvin K. F. Law & Lillian F. Mills, 2019. "Financial Gatekeepers and Investor Protection: Evidence from Criminal Background Checks," Journal of Accounting Research, Wiley Blackwell, vol. 57(2), pages 491-543, May.
    18. Bastiaan Bruinsma, 2020. "A comparison of measures to validate scales in voting advice applications," Quality & Quantity: International Journal of Methodology, Springer, vol. 54(4), pages 1299-1316, August.
    19. Wen Jiao & Peter Johannes Schulz & Angela Chang, 2024. "Addressing the role of eHealth literacy in shaping popular attitudes towards post-COVID-19 vaccination among Chinese adults," Palgrave Communications, Palgrave Macmillan, vol. 11(1), pages 1-11, December.
    20. Jan-Benedict E. M. Steenkamp & Alberto Maydeu-Olivares, 2023. "Unrestricted factor analysis: A powerful alternative to confirmatory factor analysis," Journal of the Academy of Marketing Science, Springer, vol. 51(1), pages 86-113, January.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:joudef:v:19:y:2022:i:2:p:229-236. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.