Printed from https://ideas.repec.org/a/spr/soinre/v146y2019i1d10.1007_s11205-018-02055-y.html

How Reliable are Students’ Evaluations of Teaching (SETs)? A Study to Test Student’s Reproducibility and Repeatability

Author

Listed:
  • Amalia Vanacore

    (University of Naples “Federico II”)

  • Maria Sole Pellegrino

    (University of Naples “Federico II”)

Abstract

Students’ Evaluations of Teaching (SETs) are widely used as measures of teaching quality in Higher Education. A review of the specialized literature shows that researchers widely debate whether SETs can be considered reliable measures of teaching quality. Although the controversy mainly concerns the role of students as assessors of teaching quality, most research studies on SETs focus on the design and validation of the evaluation procedure; even when the need to measure SET reliability is recognized, it is generally assessed only indirectly, for the whole group of students, by measuring inter-student agreement. In this paper the focus is on the direct assessment of the reliability of each student as a measurement instrument of teaching quality. An agreement-based approach is adopted here to assess each student’s ability to provide consistent and stable evaluations; sampling uncertainty is accounted for by building non-parametric bootstrap confidence intervals for the adopted agreement coefficients.
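The methodology summarized in the abstract (an agreement coefficient per student, with a non-parametric bootstrap confidence interval to account for sampling uncertainty) can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: it assumes Cohen's kappa as the agreement coefficient between a student's two repeated rounds of evaluations and a percentile bootstrap interval; all function names are hypothetical.

```python
import random
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two paired rating vectors."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[c] * c2[c] for c in set(r1) | set(r2)) / (n * n)
    return (observed - expected) / (1 - expected)

def bootstrap_ci(r1, r2, n_boot=2000, alpha=0.05, seed=0):
    """Non-parametric percentile bootstrap CI for the kappa of one student:
    resample the paired evaluations with replacement, recompute kappa each time."""
    rng = random.Random(seed)
    pairs = list(zip(r1, r2))
    n = len(pairs)
    stats = []
    for _ in range(n_boot):
        sample = [pairs[rng.randrange(n)] for _ in range(n)]
        s1, s2 = zip(*sample)
        try:
            stats.append(cohens_kappa(list(s1), list(s2)))
        except ZeroDivisionError:
            continue  # degenerate resample: expected agreement is 1, kappa undefined
    stats.sort()
    lo = stats[int((alpha / 2) * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi

# Illustrative use: one student's repeated ratings of the same items on a 4-point scale.
round1 = [1, 2, 3, 4, 2, 3, 1, 4, 2, 3] * 3
round2 = [1, 2, 3, 3, 2, 3, 1, 4, 2, 4] * 3
k = cohens_kappa(round1, round2)
ci = bootstrap_ci(round1, round2)
```

A student whose interval lies entirely above a chosen agreement benchmark would be judged a repeatable rater; an interval straddling zero signals that the observed agreement is indistinguishable from chance.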

Suggested Citation

  • Amalia Vanacore & Maria Sole Pellegrino, 2019. "How Reliable are Students’ Evaluations of Teaching (SETs)? A Study to Test Student’s Reproducibility and Repeatability," Social Indicators Research: An International and Interdisciplinary Journal for Quality-of-Life Measurement, Springer, vol. 146(1), pages 77-89, November.
  • Handle: RePEc:spr:soinre:v:146:y:2019:i:1:d:10.1007_s11205-018-02055-y
    DOI: 10.1007/s11205-018-02055-y

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s11205-018-02055-y
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s11205-018-02055-y?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Anthony Onwuegbuzie & Larry Daniel & Kathleen Collins, 2009. "A meta-validation model for assessing the score-validity of student teaching evaluations," Quality & Quantity: International Journal of Methodology, Springer, vol. 43(2), pages 197-209, March.
    2. Martin Davies & Joe Hirschberg & Jenny Lye & Carol Johnston & Ian Mcdonald, 2007. "Systematic Influences On Teaching Evaluations: The Case For Caution," Australian Economic Papers, Wiley Blackwell, vol. 46(1), pages 18-38, March.
    3. Duane Alwin, 1989. "Problems in the estimation and interpretation of the reliability of survey data," Quality & Quantity: International Journal of Methodology, Springer, vol. 23(3), pages 277-331, September.
    4. Samer Kherfi, 2011. "Whose Opinion Is It Anyway? Determinants of Participation in Student Evaluation of Teaching," The Journal of Economic Education, Taylor & Francis Journals, vol. 42(1), pages 19-30, January.
    5. Michelle Lalla & Gisella Facchinetti & Giovanni Mastroleo, 2005. "Ordinal scales and fuzzy set systems to measure agreement: An application to the evaluation of teaching activity," Quality & Quantity: International Journal of Methodology, Springer, vol. 38(5), pages 577-601, January.
    6. de Mast, Jeroen, 2007. "Agreement and Kappa-Type Indices," The American Statistician, American Statistical Association, vol. 61, pages 148-153, May.
    7. Maarten Goos & Anna Salomons, 2017. "Measuring teaching quality in higher education: assessing selection bias in course evaluations," Research in Higher Education, Springer;Association for Institutional Research, vol. 58(4), pages 341-364, June.
    8. Mónica Martínez-Gómez & Jose Sierra & José Jabaloyes & Manuel Zarzo, 2011. "A multivariate method for analyzing and improving the use of student evaluation of teaching questionnaires: a case study," Quality & Quantity: International Journal of Methodology, Springer, vol. 45(6), pages 1415-1427, October.
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Solmaz Ghaffarian Asl & Necdet Osam, 2021. "A Study of Teacher Performance in English for Academic Purposes Course: Evaluating Efficiency," SAGE Open, , vol. 11(4), pages 21582440211, October.
    2. Cannon, Edmund & Cipriani, Giam Pietro, 2021. "Gender Differences in Student Evaluations of Teaching: Identification and Consequences," IZA Discussion Papers 14387, Institute of Labor Economics (IZA).
    3. María del Carmen Olmos-Gómez & Mónica Luque-Suárez & Concetta Ferrara & Jesús Manuel Cuevas-Rincón, 2020. "Analysis of Psychometric Properties of the Quality and Satisfaction Questionnaire Focused on Sustainability in Higher Education," Sustainability, MDPI, vol. 12(19), pages 1-16, October.
    4. Luis Matosas-López & Cesar Bernal-Bravo & Alberto Romero-Ania & Irene Palomero-Ilardia, 2019. "Quality Control Systems in Higher Education Supported by the Use of Mobile Messaging Services," Sustainability, MDPI, vol. 11(21), pages 1-14, October.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Angelo Antoci & Irene Brunetti & Pierluigi Sacco & Mauro Sodini, 2021. "Student evaluation of teaching, social influence dynamics, and teachers’ choices: An evolutionary model," Journal of Evolutionary Economics, Springer, vol. 31(1), pages 325-348, January.
    2. Neckermann, Susanne & Turmunkh, Uyanga & van Dolder, Dennie & Wang, Tong V., 2022. "Nudging student participation in online evaluations of teaching: Evidence from a field experiment," European Economic Review, Elsevier, vol. 141(C).
    3. Cannon, Edmund & Cipriani, Giam Pietro, 2021. "Gender Differences in Student Evaluations of Teaching: Identification and Consequences," IZA Discussion Papers 14387, Institute of Labor Economics (IZA).
    4. Matthijs Warrens, 2010. "A Formal Proof of a Paradox Associated with Cohen’s Kappa," Journal of Classification, Springer;The Classification Society, vol. 27(3), pages 322-332, November.
    5. Ale J. Hejase & Hussin J. Hejase & Rana S. Al Kaakour, 2014. "The Impact of Students’ Characteristics on their Perceptions of the Evaluation of Teaching Process," International Journal of Management Sciences, Research Academy of Social Sciences, vol. 4(2), pages 90-105.
    6. José M. Ramírez-Hurtado & Alfredo G. Hernández-Díaz & Ana D. López-Sánchez & Víctor E. Pérez-León, 2021. "Measuring Online Teaching Service Quality in Higher Education in the COVID-19 Environment," IJERPH, MDPI, vol. 18(5), pages 1-14, March.
    7. Bonaccolto-Töpfer, Marina & Castagnetti, Carolina, 2021. "The COVID-19 pandemic: A threat to higher education?," Discussion Papers 117, Friedrich-Alexander University Erlangen-Nuremberg, Chair of Labour and Regional Economics.
    8. Duane F. Alwin, 1997. "Feeling Thermometers Versus 7-Point Scales," Sociological Methods & Research, , vol. 25(3), pages 318-340, February.
9. McKee J. McClendon & Duane F. Alwin, 1993. "No-Opinion Filters and Attitude Measurement Reliability," Sociological Methods & Research, , vol. 21(4), pages 438-464, May.
10. Wagner, Natascha & Rieger, Matthias & Voorvelt, Katherine, 2016. "Gender, ethnicity and teaching evaluations: Evidence from mixed teaching teams," Economics of Education Review, Elsevier, vol. 54(C), pages 79-94.
    11. Duane F. Alwin & Jon A. Krosnick, 1991. "The Reliability of Survey Attitude Measurement," Sociological Methods & Research, , vol. 20(1), pages 139-181, August.
    12. Joris Knoben & Leon A. G. Oerlemans & Annefleur R. Krijkamp & Keith G. Provan, 2018. "What Do They Know? The Antecedents of Information Accuracy Differentials in Interorganizational Networks," Organization Science, INFORMS, vol. 29(3), pages 471-488, June.
    13. Joe Hirschberg & Jenny Lye & Martin Davies & Carol Johnston, 2011. "Measuring Student Experience: Relationships between Teaching Quality Instruments (TQI) and Course Experience Questionnaire (CEQ)," Department of Economics - Working Papers Series 1134, The University of Melbourne.
    14. Benjamin Artz & David M. Welsch, 2013. "The Effect of Student Evaluations on Academic Success," Education Finance and Policy, MIT Press, vol. 8(1), pages 100-119, January.
    15. Berezvai, Zombor & Lukáts, Gergely Dániel & Molontay, Roland, 2019. "A pénzügyi ösztönzők hatása az egyetemi oktatók osztályozási gyakorlatára [How financially rewarding student evaluation may affect grading behaviour. Evidence from a natural experiment]," Közgazdasági Szemle (Economic Review - monthly of the Hungarian Academy of Sciences), Közgazdasági Szemle Alapítvány (Economic Review Foundation), vol. 0(7), pages 733-750.
    16. Qing Li, 2016. "Indirect membership function assignment based on ordinal regression," Journal of Applied Statistics, Taylor & Francis Journals, vol. 43(3), pages 441-460, March.
    17. Bredtmann, Julia & Crede, Carsten J. & Otten, Sebastian, 2013. "Methods for evaluating educational programs: Does Writing Center Participation affect student achievement?," Evaluation and Program Planning, Elsevier, vol. 36(1), pages 115-123.
    18. Donghun Cho & Joonmo Cho, 2017. "Does More Accurate Knowledge of Course Grade Impact Teaching Evaluation?," Education Finance and Policy, MIT Press, vol. 12(2), pages 224-240, Spring.
    19. Joe Hirschberg & Jenny Lye, 2014. "The influence of student experiences on post-graduation surveys," Department of Economics - Working Papers Series 1187, The University of Melbourne.
    20. Tindara Addabbo & Maria Laura Di Tommaso & Gisella Facchinetti, 2004. "To what extent fuzzy set theory and structural equation modelling can measure functionings? An application to child well being," CHILD Working Papers wp30_04, CHILD - Centre for Household, Income, Labour and Demographic economics - ITALY.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:soinre:v:146:y:2019:i:1:d:10.1007_s11205-018-02055-y. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.