
The importance of identifying the dimensionality of constructs employed in simulation and training for AI

Author

Listed:
  • Michael D Coovert
  • Winston Bennett Jr

Abstract

Advances at the intersection of artificial intelligence (AI) and education and training are occurring at an ever-increasing pace. On the education and training side, psychological and performance constructs play a central role in both theory and application. It is essential, therefore, to accurately determine the dimensionality of a construct, as it is often employed during both the assessment and development of theory and in practical application. Traditionally, both exploratory and confirmatory factor analyses have been employed to establish the dimensionality of data. Due in part to inconsistent findings, methodologists recently resurrected the bifactor approach for establishing the dimensionality of data. The bifactor model is pitted against traditional data structures, and the one with the best overall fit (according to chi-square, root mean square error of approximation (RMSEA), comparative fit index (CFI), Tucker–Lewis index (TLI), and standardized root mean square residual (SRMR)) is preferred. If the bifactor structure is preferred by that test, it can be further examined via a suite of emerging coefficients (e.g., omega, omega hierarchical, omega subscale, H, explained common variance, and percent uncontaminated correlations), each of which is computed from standardized factor loadings. To examine the utility of these new statistical tools in an education and training context, we analyze data where the construct of interest is trust. We chose trust as it is central, among other things, to understanding human reliance upon and utilization of AI systems. We applied the statistical approach described above and determined that the two-factor structure of a widely employed trust scale is better represented by one general factor. Findings like this hold substantial implications for theory development and testing, for prediction in structural equation modeling (SEM), and for the utilization of scales and their role in education, training, and AI systems.
We encourage other researchers to employ the statistical measures described here to critically examine the construct measures used in their work if those measures are thought to be multidimensional. Only through the appropriate utilization of constructs, defined in part by their dimensionality, can we advance the intersection of AI and simulation and training.
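The abstract notes that coefficients such as omega, omega hierarchical, and explained common variance (ECV) are each computed from standardized factor loadings. As an illustration, the sketch below computes these three coefficients for a hypothetical bifactor model with one general factor and two group factors; the loading values are invented for illustration and are not the trust-scale data analyzed in the article.

```python
# Hedged sketch: omega total, omega hierarchical, and ECV from
# standardized bifactor loadings. All loading values are hypothetical.

general = [0.70, 0.65, 0.60, 0.72, 0.68, 0.55]   # loadings on the general factor
group1  = [0.30, 0.25, 0.35, 0.00, 0.00, 0.00]   # loadings on group factor 1
group2  = [0.00, 0.00, 0.00, 0.20, 0.28, 0.33]   # loadings on group factor 2

# Residual (unique) variance per item, assuming standardized loadings
# and orthogonal factors, as in the standard bifactor parameterization.
uniq = [1 - g**2 - s1**2 - s2**2 for g, s1, s2 in zip(general, group1, group2)]

sum_g = sum(general)
common = sum_g**2 + sum(group1)**2 + sum(group2)**2
total = common + sum(uniq)

# Omega total: proportion of total-score variance due to all common factors.
omega_total = common / total

# Omega hierarchical: proportion attributable to the general factor alone.
omega_h = sum_g**2 / total

# ECV: share of the common variance explained by the general factor.
g_var = sum(g**2 for g in general)
s_var = sum(s**2 for s in group1 + group2)
ecv = g_var / (g_var + s_var)

print(f"omega total: {omega_total:.3f}")
print(f"omega hierarchical: {omega_h:.3f}")
print(f"ECV: {ecv:.3f}")
```

A high omega hierarchical relative to omega total, together with a high ECV, is the pattern that leads analyses like the one in this article to prefer a single general factor over a multidimensional interpretation.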

Suggested Citation

  • Michael D Coovert & Winston Bennett Jr, 2022. "The importance of identifying the dimensionality of constructs employed in simulation and training for AI," The Journal of Defense Modeling and Simulation, , vol. 19(2), pages 229-236, April.
  • Handle: RePEc:sae:joudef:v:19:y:2022:i:2:p:229-236
    DOI: 10.1177/15485129211036936

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/15485129211036936
    Download Restriction: no

    File URL: https://libkey.io/10.1177/15485129211036936?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Jos ten Berge & Gregor Sočan, 2004. "The greatest lower bound to the reliability of a test and the hypothesis of unidimensionality," Psychometrika, Springer;The Psychometric Society, vol. 69(4), pages 613-625, December.
    2. Caemmerer, Jacqueline M. & Keith, Timothy Z. & Reynolds, Matthew R., 2020. "Beyond individual intelligence tests: Application of Cattell-Horn-Carroll Theory," Intelligence, Elsevier, vol. 79(C).
    3. Klaas Sijtsma, 2009. "On the Use, the Misuse, and the Very Limited Usefulness of Cronbach’s Alpha," Psychometrika, Springer;The Psychometric Society, vol. 74(1), pages 107-120, March.
    4. Wiernik, Brenton M. & Wilmot, Michael P. & Kostal, Jack W., 2015. "How Data Analysis Can Dominate Interpretations of Dominant General Factors," Industrial and Organizational Psychology, Cambridge University Press, vol. 8(3), pages 438-445, September.
    5. J Terrill Paterson & Kelly Proffitt & Ben Jimenez & Jay Rotella & Robert Garrott, 2019. "Simulation-based validation of spatial capture-recapture models: A case study using mountain lions," PLOS ONE, Public Library of Science, vol. 14(4), pages 1-20, April.
    6. Michael D. Coovert & Evgeniya E. Pavlova Miller & Winston Bennett Jr., 2017. "Assessing Trust and Effectiveness in Virtual Teams: Latent Growth Curve and Latent Change Score Models," Social Sciences, MDPI, vol. 6(3), pages 1-26, August.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Piotr Koc, 2021. "Measuring Non-electoral Political Participation: Bi-factor Model as a Tool to Extract Dimensions," Social Indicators Research: An International and Interdisciplinary Journal for Quality-of-Life Measurement, Springer, vol. 156(1), pages 271-287, July.
    2. David J. Hessen, 2017. "Lower Bounds to the Reliabilities of Factor Score Estimators," Psychometrika, Springer;The Psychometric Society, vol. 82(3), pages 648-659, September.
    3. Maciej Koniewski & Ilona Barańska & Violetta Kijowska & Jenny T. Steen & Anne B. Wichmann & Sheila Payne & Giovanni Gambassi & Nele Den Noortgate & Harriet Finne-Soveri & Tinne Smets & Lieve den Block, 2022. "Measuring relatives’ perceptions of end-of-life communication with physicians in five countries: a psychometric analysis," European Journal of Ageing, Springer, vol. 19(4), pages 1561-1570, December.
    4. Tyler Hunt & Peter Bentler, 2015. "Quantile Lower Bounds to Reliability Based on Locally Optimal Splits," Psychometrika, Springer;The Psychometric Society, vol. 80(1), pages 182-195, March.
    5. Klaas Sijtsma & Julius M. Pfadt, 2021. "Part II: On the Use, the Misuse, and the Very Limited Usefulness of Cronbach’s Alpha: Discussing Lower Bounds and Correlated Errors," Psychometrika, Springer;The Psychometric Society, vol. 86(4), pages 843-860, December.
    6. Markus Pauly & Maria Umlauft & Ali Ünlü, 2018. "Resampling-Based Inference Methods for Comparing Two Coefficients Alpha," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 203-222, March.
    7. Zhengguo Gu & Wilco H. M. Emons & Klaas Sijtsma, 2021. "Estimating Difference-Score Reliability in Pretest–Posttest Settings," Journal of Educational and Behavioral Statistics, , vol. 46(5), pages 592-610, October.
    8. Wilson, Christopher J. & Bowden, Stephen C. & Byrne, Linda K. & Joshua, Nicole R. & Marx, Wolfgang & Weiss, Lawrence G., 2023. "The cross-cultural generalizability of cognitive ability measures: A systematic literature review," Intelligence, Elsevier, vol. 98(C).
    9. Conzo, Pierluigi & Aassve, Arnstein & Fuochi, Giulia & Mencarini, Letizia, 2017. "The cultural foundations of happiness," Journal of Economic Psychology, Elsevier, vol. 62(C), pages 268-283.
    10. Xiaochuan Song, 2022. "Investigating Employees’ Responses to Abusive Supervision," Merits, MDPI, vol. 2(4), pages 1-20, November.
    11. Carmen León-Mantero & José Carlos Casas-Rosal & Alexander Maz-Machado & Miguel E Villarraga Rico, 2020. "Analysis of attitudinal components towards statistics among students from different academic degrees," PLOS ONE, Public Library of Science, vol. 15(1), pages 1-13, January.
    12. Danni Liu & Anouk Dijk & Shanyan Lin & Zhenhong Wang & Maja Deković & Judith Semon Dubas, 2023. "Psychometric Properties of the Chinese Version of the Highly Sensitive Child Scale Across Age Groups, Gender, and Informants," Child Indicators Research, Springer;The International Society of Child Indicators (ISCI), vol. 16(4), pages 1755-1780, August.
    13. Brian K Miller & Kay M Nicols & Silvia Clark & Alison Daniels & Whitney Grant, 2018. "Meta-analysis of coefficient alpha for scores on the Narcissistic Personality Inventory," PLOS ONE, Public Library of Science, vol. 13(12), pages 1-16, December.
    14. Adam Pawlewicz & Wojciech Gotkiewicz & Katarzyna Brodzińska & Katarzyna Pawlewicz & Bartosz Mickiewicz & Paweł Kluczek, 2022. "Organic Farming as an Alternative Maintenance Strategy in the Opinion of Farmers from Natura 2000 Areas," IJERPH, MDPI, vol. 19(7), pages 1-22, March.
    15. Rue, Lisa A. & Estrada, Samantha & Floren, Michael & MacKinnon, Krista, 2016. "Formative evaluation: Developing measures for online family mental health recovery education," Evaluation and Program Planning, Elsevier, vol. 55(C), pages 27-34.
    16. Isabel Gallego‐Álvarez & María Consuelo Pucheta‐Martínez, 2020. "How cultural dimensions, legal systems, and industry affect environmental reporting? Empirical evidence from an international perspective," Business Strategy and the Environment, Wiley Blackwell, vol. 29(5), pages 2037-2057, July.
    17. Sommerland, Nina & Masquillier, Caroline & Rau, Asta & Engelbrecht, Michelle & Kigozi, Gladys & Pliakas, Triantafyllos & Janse van Rensburg, Andre & Wouters, Edwin, 2020. "Reducing HIV- and TB-Stigma among healthcare co-workers in South Africa: Results of a cluster randomised trial," Social Science & Medicine, Elsevier, vol. 266(C).
    18. Michael Hennessy & Amy Bleakley & Martin Fishbein, 2012. "Measurement Models for Reasoned Action Theory," The ANNALS of the American Academy of Political and Social Science, , vol. 640(1), pages 42-57, March.
    19. Roberta Fida & Carlo Tramontano & Marinella Paciello & Valerio Ghezzi & Claudio Barbaranelli, 2018. "Understanding the Interplay Among Regulatory Self-Efficacy, Moral Disengagement, and Academic Cheating Behaviour During Vocational Education: A Three-Wave Study," Journal of Business Ethics, Springer, vol. 153(3), pages 725-740, December.
    20. Ji Hoon Ryoo & Sunhee Park & Hongwook Suh & Jaehwa Choi & Jongkyum Kwon, 2022. "Development of a New Measure of Cognitive Ability Using Automatic Item Generation and Its Psychometric Properties," SAGE Open, , vol. 12(2), pages 21582440221, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:joudef:v:19:y:2022:i:2:p:229-236. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.