
Plagiarism in Student Papers: Prevalence Estimates Using Special Techniques for Sensitive Questions

Author

Listed:
  • Coutts Elisabeth

    (Zurich)

  • Jann Ben

    (University of Bern, Institute of Sociology, Lerchenweg 36, 3012 Bern, Switzerland)

  • Krumpal Ivar

    (University of Leipzig, Institute of Sociology, Beethovenstrasse 15, 04107 Leipzig, Germany)

  • Näher Anatol-Fiete

    (University of Leipzig, Institute of Sociology, Beethovenstrasse 15, 04107 Leipzig, Germany)

Abstract

This article evaluates three different questioning techniques for measuring the prevalence of plagiarism in student papers: the randomized response technique (RRT), the item count technique (ICT), and the crosswise model (CM). In three independent experimental surveys with Swiss and German university students as subjects (two web surveys and a survey using paper-and-pencil questionnaires in a classroom setting), each of the three techniques is compared to direct questioning and evaluated based on the “more-is-better” assumption. According to our results, the RRT and the ICT failed to reduce social desirability bias in self-reports of plagiarism. In contrast, the CM was more successful, eliciting a significantly higher rate of reported sensitive behavior than direct questioning. One reason for the success of the CM, we believe, is that it overcomes the “self-protective no” bias known from the RRT (and which may also be a potential problem in the ICT). We find rates of up to 22 percent of students who declared that they had ever intentionally adopted a passage from someone else’s work without citing it. Severe plagiarism, such as handing in someone else’s paper as one’s own, however, seems to be less frequent, with rates of about 1 to 2 percent.
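
The abstract reports prevalence estimates but does not reproduce the estimators behind them. As background, a minimal sketch of the standard crosswise-model prevalence estimator (and, for comparison, Warner's randomized response estimator) is given below; the function names, the randomization probability p, and the example figures are illustrative assumptions, not values taken from the study.

import math

def crosswise_prevalence(lambda_hat, p, n):
    """Crosswise model (CM): point estimate and standard error of the
    prevalence of the sensitive trait.

    lambda_hat -- observed share of respondents picking the "both statements
                  true or both false" answer option
    p          -- known probability that the innocuous statement is true
                  (e.g. a birthday rule); must not equal 0.5
    n          -- number of respondents
    """
    if math.isclose(p, 0.5):
        raise ValueError("p = 0.5 leaves the sensitive trait unidentified")
    pi_hat = (lambda_hat + p - 1) / (2 * p - 1)
    se = math.sqrt(lambda_hat * (1 - lambda_hat) / n) / abs(2 * p - 1)
    return pi_hat, se

def warner_rrt_prevalence(lambda_hat, p, n):
    """Warner's randomized response design (RRT), shown for comparison.

    lambda_hat -- observed share of "yes" answers
    p          -- probability that the randomizer points the respondent to
                  the sensitive statement rather than its negation
    """
    if math.isclose(p, 0.5):
        raise ValueError("p = 0.5 leaves the sensitive trait unidentified")
    pi_hat = (lambda_hat - (1 - p)) / (2 * p - 1)
    se = math.sqrt(lambda_hat * (1 - lambda_hat) / n) / abs(2 * p - 1)
    return pi_hat, se

# Hypothetical illustration (not data from the paper): with p = 0.25 and
# 70 percent "both/neither" answers among 500 respondents, the CM estimate
# is (0.70 + 0.25 - 1) / (2 * 0.25 - 1) = 0.10, i.e. a 10 percent prevalence.
if __name__ == "__main__":
    pi_hat, se = crosswise_prevalence(lambda_hat=0.70, p=0.25, n=500)
    print(f"CM prevalence estimate: {pi_hat:.3f} (SE {se:.3f})")

The ICT (list experiment) estimate, by contrast, is simply the difference in mean item counts between the treatment and control groups, so it needs no separate sketch here.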

Suggested Citation

  • Coutts Elisabeth & Jann Ben & Krumpal Ivar & Näher Anatol-Fiete, 2011. "Plagiarism in Student Papers: Prevalence Estimates Using Special Techniques for Sensitive Questions," Journal of Economics and Statistics (Jahrbuecher fuer Nationaloekonomie und Statistik), De Gruyter, vol. 231(5-6), pages 749-760, October.
  • Handle: RePEc:jns:jbstat:v:231:y:2011:i:5-6:p:749-760
    DOI: 10.1515/jbnst-2011-5-612

    Download full text from publisher

    File URL: https://doi.org/10.1515/jbnst-2011-5-612
    Download Restriction: no

    File URL: https://libkey.io/10.1515/jbnst-2011-5-612?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    References listed on IDEAS

    1. Elisabeth Coutts & Ben Jann, 2011. "Sensitive Questions in Online Surveys: Experimental Results for the Randomized Response Technique (RRT) and the Unmatched Count Technique (UCT)," Sociological Methods & Research, , vol. 40(1), pages 169-193, February.
    2. Johannes Landsheer & Peter Van Der Heijden & Ger Van Gils, 1999. "Trust and Understanding, Two Psychological Aspects of Randomized Response," Quality & Quantity: International Journal of Methodology, Springer, vol. 33(1), pages 1-12, February.
    3. James Abernathy & Bernard Greenberg & Daniel Horvitz, 1970. "Estimates of induced abortion in urban North Carolina," Demography, Springer;Population Association of America (PAA), vol. 7(1), pages 19-29, February.
    4. Buchman, T. A. & Tracy, J. A., 1982. "Obtaining Responses to Sensitive Questions: Conventional Questionnaire versus Randomized-Response Technique," Journal of Accounting Research, Wiley Blackwell, vol. 20(1), pages 263-271.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Ivar Krumpal & Thomas Voss, 2020. "Sensitive Questions and Trust: Explaining Respondents’ Behavior in Randomized Response Surveys," SAGE Open, , vol. 10(3), pages 21582440209, July.
    2. Kirchner Antje, 2015. "Validating Sensitive Questions: A Comparison of Survey and Register Data," Journal of Official Statistics, Sciendo, vol. 31(1), pages 31-59, March.
    3. Carroll, Eamonn & Timmons, Shane & McGinnity, Frances, 2023. "Experimental tests of public support for disability policy," Research Series, Economic and Social Research Institute (ESRI), number RS159.
    4. Korndörfer, Martin & Krumpal, Ivar & Schmukle, Stefan C., 2014. "Measuring and explaining tax evasion: Improving self-reports using the crosswise model," Journal of Economic Psychology, Elsevier, vol. 45(C), pages 18-32.
    5. McGinnity, Frances & Creighton, Mathew & Fahey, Éamonn, 2020. "Hidden versus revealed attitudes: a list experiment on support for minorities in Ireland," Research Series, Economic and Social Research Institute (ESRI), number BKMNEXT372.
    6. Walzenbach, Sandra & Hinz, Thomas, 2022. "Puzzling Answers to Crosswise Questions - Examining Overall Prevalence Rates, Primacy Effects and Learning Effects," EconStor Preprints 249353, ZBW - Leibniz Information Centre for Economics.
    7. Ivar Krumpal, 2013. "Determinants of social desirability bias in sensitive surveys: a literature review," Quality & Quantity: International Journal of Methodology, Springer, vol. 47(4), pages 2025-2047, June.
    8. Julia Meisters & Adrian Hoffmann & Jochen Musch, 2020. "Controlling social desirability bias: An experimental investigation of the extended crosswise model," PLOS ONE, Public Library of Science, vol. 15(12), pages 1-13, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ivar Krumpal, 2013. "Determinants of social desirability bias in sensitive surveys: a literature review," Quality & Quantity: International Journal of Methodology, Springer, vol. 47(4), pages 2025-2047, June.
    2. John, Leslie K. & Loewenstein, George & Acquisti, Alessandro & Vosgerau, Joachim, 2018. "When and why randomized response techniques (fail to) elicit the truth," Organizational Behavior and Human Decision Processes, Elsevier, vol. 148(C), pages 101-123.
    3. Gerty J. L. M. Lensvelt-Mulders & Joop J. Hox & Peter G. M. van der Heijden & Cora J. M. Maas, 2005. "Meta-Analysis of Randomized Response Research," Sociological Methods & Research, , vol. 33(3), pages 319-348, February.
    4. Gueorguiev, Dimitar & Malesky, Edmund, 2012. "Foreign investment and bribery: A firm-level analysis of corruption in Vietnam," Journal of Asian Economics, Elsevier, vol. 23(2), pages 111-129.
    5. Wu, Tao & Delios, Andrew & Chen, Zhaowei & Wang, Xin, 2023. "Rethinking corruption in international business: An empirical review," Journal of World Business, Elsevier, vol. 58(2).
    6. Monika Frenger & Eike Emrich & Werner Pitsch, 2019. "Corruption in Olympic Sports: Prevalence Estimations of Match Fixing Among German Squad Athletes," SAGE Open, , vol. 9(3), pages 21582440198, July.
    7. James E. Prieger, 2023. "Tax noncompliance: The role of tax morale in smokers' behavior," Contemporary Economic Policy, Western Economic Association International, vol. 41(4), pages 653-673, October.
    8. U. N. Umesh & Robert A. Peterson, 1991. "A Critical Evaluation of the Randomized Response Method," Sociological Methods & Research, , vol. 20(1), pages 104-138, August.
    9. Klaus Friesenbichler & George Clarke & Michael Wong, 2014. "Price competition and market transparency: evidence from a random response technique," Empirica, Springer;Austrian Institute for Economic Research;Austrian Economic Association, vol. 41(1), pages 5-21, February.
    10. Shen‐Ming Lee & Truong‐Nhat Le & Phuoc‐Loc Tran & Chin‐Shang Li, 2022. "Investigating the association of a sensitive attribute with a random variable using the Christofides generalised randomised response design and Bayesian methods," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 71(5), pages 1471-1502, November.
    11. Blume, Andreas & Lai, Ernest K. & Lim, Wooyoung, 2019. "Eliciting private information with noise: The case of randomized response," Games and Economic Behavior, Elsevier, vol. 113(C), pages 356-380.
    12. Jouni Kuha & Jonathan Jackson, 2014. "The item count method for sensitive survey questions: modelling criminal behaviour," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 63(2), pages 321-341, February.
    13. Marc Höglinger & Ben Jann, 2018. "More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model," PLOS ONE, Public Library of Science, vol. 13(8), pages 1-22, August.
    14. Vincenzo Galasso & Vincent Pons & Paola Profeta & Michael Becher & Sylvain Brouard & Martial Foucault, 2020. "Gender Differences in COVID-19 Related Attitudes and Behavior: Evidence from a Panel Survey in Eight OECD Countries," NBER Working Papers 27359, National Bureau of Economic Research, Inc.
    15. Julia Meisters & Adrian Hoffmann & Jochen Musch, 2020. "Can detailed instructions and comprehension checks increase the validity of crosswise model estimates?," PLOS ONE, Public Library of Science, vol. 15(6), pages 1-19, June.
    16. Felix Wolter & Peter Preisendörfer, 2013. "Asking Sensitive Questions," Sociological Methods & Research, , vol. 42(3), pages 321-353, August.
    17. Ulf Böckenholt & Peter van der Heijden, 2007. "Item Randomized-Response Models for Measuring Noncompliance: Risk-Return Perceptions, Social Influences, and Self-Protective Responses," Psychometrika, Springer;The Psychometric Society, vol. 72(2), pages 245-262, June.
    18. Flannery, Timothy, 2018. "A new methodology for surveys and its application to forced response," Mathematical Social Sciences, Elsevier, vol. 91(C), pages 17-24.
    19. Katherine B. Coffman & Lucas C. Coffman & Keith M. Marzilli Ericson, 2017. "The Size of the LGBT Population and the Magnitude of Antigay Sentiment Are Substantially Underestimated," Management Science, INFORMS, vol. 63(10), pages 3168-3186, October.
    20. Roe-Sepowitz, Dominique & Bontrager, Stephanie & Pickett, Justin T. & Kosloski, Anna E., 2019. "Estimating the sex buying behavior of adult males in the United States: List experiment and direct question estimates," Journal of Criminal Justice, Elsevier, vol. 63(C), pages 41-48.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:jns:jbstat:v:231:y:2011:i:5-6:p:749-760. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Peter Golla (email available below). General contact details of provider: https://www.degruyter.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.