IDEAS (RePEc) record for https://ideas.repec.org/a/plo/pone00/0221676.html

Online volunteer laboratories for human subjects research

Authors
  • Austin M Strange
  • Ryan D Enos
  • Mark Hill
  • Amy Lakeman

Abstract

Once a fixture of research in the social and behavioral sciences, volunteer subjects are now only rarely used in human subjects research. Yet volunteers are a potentially valuable resource, especially for research conducted online. We argue that online volunteer laboratories are able to produce high-quality data comparable to that from other online pools. The scalability of volunteer labs means that they can produce large volumes of high-quality data for multiple researchers, while imposing little or no financial burden. Using a range of original tests, we show that volunteer and paid respondents have different motivations for participating in research, but have similar descriptive compositions. Furthermore, volunteer samples are able to replicate classic and contemporary social science findings, and produce high levels of overall response quality comparable to paid subjects. Our results suggest that online volunteer labs represent a potentially significant untapped source of human subjects data.

Suggested Citation

  • Austin M Strange & Ryan D Enos & Mark Hill & Amy Lakeman, 2019. "Online volunteer laboratories for human subjects research," PLOS ONE, Public Library of Science, vol. 14(8), pages 1-13, August.
  • Handle: RePEc:plo:pone00:0221676
    DOI: 10.1371/journal.pone.0221676

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0221676
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0221676&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0221676?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Mullinix, Kevin J. & Leeper, Thomas J. & Druckman, James N. & Freese, Jeremy, 2015. "The Generalizability of Survey Experiments," Journal of Experimental Political Science, Cambridge University Press, vol. 2(2), pages 109-138, January.
    2. Neil Stewart & Christoph Ungemach & Adam J. L. Harris & Daniel M. Bartels & Ben R. Newell & Gabriele Paolacci & Jesse Chandler, undated. "The Average Laboratory Samples a Population of 7,300 Amazon Mechanical Turk Workers," Mathematica Policy Research Reports f97b669c7b3e4c2ab95c9f805, Mathematica Policy Research.
    3. John Horton & David Rand & Richard Zeckhauser, 2011. "The online laboratory: conducting experiments in a real labor market," Experimental Economics, Springer;Economic Science Association, vol. 14(3), pages 399-425, September.
    4. Stephan Meier & Alois Stutzer, 2008. "Is Volunteering Rewarding in Itself?," Economica, London School of Economics and Political Science, vol. 75(297), pages 39-59, February.
    5. Berinsky, Adam J. & Huber, Gregory A. & Lenz, Gabriel S., 2012. "Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk," Political Analysis, Cambridge University Press, vol. 20(3), pages 351-368, July.
    6. repec:cup:judgdm:v:10:y:2015:i:5:p:479-491 is not listed on IDEAS
    7. Santoso, Lie Philip & Stein, Robert & Stevenson, Randy, 2016. "Survey Experiments with Google Consumer Surveys: Promise and Pitfalls for Academic Research in Social Science," Political Analysis, Cambridge University Press, vol. 24(3), pages 356-373, July.
    8. Kathryn Sharpe Wessling & Joel Huber & Oded Netzer, 2017. "MTurk Character Misrepresentation: Assessment and Solutions," Journal of Consumer Research, Journal of Consumer Research Inc., vol. 44(1), pages 211-230.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Antonio A. Arechar & Simon Gächter & Lucas Molleman, 2018. "Conducting interactive experiments online," Experimental Economics, Springer;Economic Science Association, vol. 21(1), pages 99-131, March.
    2. David Ronayne & Daniel Sgroi, 2018. "On the motivations for the dual-use of electronic and traditional cigarettes," Applied Economics Letters, Taylor & Francis Journals, vol. 25(12), pages 830-834, July.
    3. Christ, Margaret H. & Vance, Thomas W., 2018. "Cascading controls: The effects of managers’ incentives on subordinate effort to help or harm," Accounting, Organizations and Society, Elsevier, vol. 65(C), pages 20-32.
    4. David Johnson & John Barry Ryan, 2020. "Amazon Mechanical Turk workers can provide consistent and economically meaningful data," Southern Economic Journal, John Wiley & Sons, vol. 87(1), pages 369-385, July.
    5. Logan S. Casey & Jesse Chandler & Adam Seth Levine & Andrew Proctor & Dara Z. Strolovitch, 2017. "Intertemporal Differences Among MTurk Workers: Time-Based Sample Variations and Implications for Online Data Collection," SAGE Open, , vol. 7(2), pages 21582440177, June.
    6. Nicholas Haas & Rebecca B. Morton, 2018. "Saying versus doing: a new donation method for measuring ideal points," Public Choice, Springer, vol. 176(1), pages 79-106, July.
    7. Blaine G. Robbins, 2017. "Status, identity, and ability in the formation of trust," Rationality and Society, , vol. 29(4), pages 408-448, November.
    8. Florian Teschner & Henner Gimpel, 2018. "Crowd Labor Markets as Platform for Group Decision and Negotiation Research: A Comparison to Laboratory Experiments," Group Decision and Negotiation, Springer, vol. 27(2), pages 197-214, April.
    9. Masha Shunko & Julie Niederhoff & Yaroslav Rosokha, 2018. "Humans Are Not Machines: The Behavioral Impact of Queueing Design on Service Time," Management Science, INFORMS, vol. 64(1), pages 453-473, January.
    10. Abel Brodeur & Nikolai M. Cook & Anthony Heyes, 2022. "We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell Us about Publication Bias and p-Hacking in Online Experiments," LCERPA Working Papers am0133, Laurier Centre for Economic Research and Policy Analysis.
    11. Jeanette A.M.J. Deetlefs & Mathew Chylinski & Andreas Ortmann, 2015. "MTurk ‘Unscrubbed’: Exploring the good, the ‘Super’, and the unreliable on Amazon’s Mechanical Turk," Discussion Papers 2015-20, School of Economics, The University of New South Wales.
    12. Haas, Nicholas & Hassan, Mazen & Mansour, Sarah & Morton, Rebecca B., 2021. "Polarizing information and support for reform," Journal of Economic Behavior & Organization, Elsevier, vol. 185(C), pages 883-901.
    13. Cantarella, Michele & Strozzi, Chiara, 2019. "Workers in the Crowd: The Labour Market Impact of the Online Platform Economy," IZA Discussion Papers 12327, Institute of Labor Economics (IZA).
    14. John Hulland & Jeff Miller, 2018. "“Keep on Turkin’”?," Journal of the Academy of Marketing Science, Springer, vol. 46(5), pages 789-794, September.
    15. Atalay, Kadir & Bakhtiar, Fayzan & Cheung, Stephen & Slonim, Robert, 2014. "Savings and prize-linked savings accounts," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 86-106.
    16. Azzam, Tarek & Harman, Elena, 2016. "Crowdsourcing for quantifying transcripts: An exploratory study," Evaluation and Program Planning, Elsevier, vol. 54(C), pages 63-73.
    17. Dato, Simon & Feess, Eberhard & Nieken, Petra, 2019. "Lying and reciprocity," Games and Economic Behavior, Elsevier, vol. 118(C), pages 193-218.
    18. Wladislaw Mill & Cornelius Schneider, 2023. "The Bright Side of Tax Evasion," CESifo Working Paper Series 10615, CESifo.
    19. Gandullia, Luca & Lezzi, Emanuela & Parciasepe, Paolo, 2020. "Replication with MTurk of the experimental design by Gangadharan, Grossman, Jones & Leister (2018): Charitable giving across donor types," Journal of Economic Psychology, Elsevier, vol. 78(C).
    20. Prissé, Benjamin & Jorrat, Diego, 2022. "Lab vs online experiments: No differences," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 100(C).

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0221676. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of the provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.