
Crowdsourcing for quantifying transcripts: An exploratory study

Author

Listed:
  • Azzam, Tarek
  • Harman, Elena

Abstract

This exploratory study attempts to demonstrate the potential utility of crowdsourcing as a supplemental technique for quantifying transcribed interviews. Crowdsourcing is the harnessing of the abilities of many people to complete a specific task or a set of tasks. In this study multiple samples of crowdsourced individuals were asked to rate and select supporting quotes from two different transcripts. The findings indicate that the different crowdsourced samples produced nearly identical ratings of the transcripts, and were able to consistently select the same supporting text from the transcripts. These findings suggest that crowdsourcing, with further development, can potentially be used as a mixed method tool to offer a supplemental perspective on transcribed interviews.
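The abstract's central claim is that independent crowdsourced samples converge on very similar ratings of the same transcripts. As a rough illustration of what such a consistency check can look like, the Python sketch below averages per-passage ratings within each of two samples and correlates the resulting rating profiles. This is a minimal sketch only: the passages, the 1-5 scale, the data, and the use of a Pearson correlation as the agreement statistic are all hypothetical assumptions, not the article's actual instruments or analyses.

    # Illustrative only: a simple consistency check between two crowdsourced
    # samples that rated the same transcript passages. All data and the choice
    # of agreement statistic are hypothetical.
    from statistics import mean
    from math import sqrt

    def mean_ratings(ratings_by_rater):
        """Average the ratings a sample gave to each transcript passage.

        ratings_by_rater: one inner list per rater, each holding that
        rater's score for every passage, in the same passage order.
        """
        n_items = len(ratings_by_rater[0])
        return [mean(rater[i] for rater in ratings_by_rater) for i in range(n_items)]

    def pearson(x, y):
        """Pearson correlation between two equally long rating vectors."""
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical 1-5 ratings of four transcript passages from two samples.
    sample_a = [[4, 3, 5, 2], [5, 3, 4, 2], [4, 4, 5, 1]]
    sample_b = [[4, 3, 5, 1], [5, 2, 5, 2], [4, 3, 4, 2]]

    means_a, means_b = mean_ratings(sample_a), mean_ratings(sample_b)
    print("Sample A means:", means_a)
    print("Sample B means:", means_b)
    print("Agreement (Pearson r): %.2f" % pearson(means_a, means_b))

High correlation between the two samples' mean rating profiles would correspond to the kind of cross-sample consistency the abstract reports; the article itself should be consulted for the measures actually used.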

Suggested Citation

  • Azzam, Tarek & Harman, Elena, 2016. "Crowdsourcing for quantifying transcripts: An exploratory study," Evaluation and Program Planning, Elsevier, vol. 54(C), pages 63-73.
  • Handle: RePEc:eee:epplan:v:54:y:2016:i:c:p:63-73
    DOI: 10.1016/j.evalprogplan.2015.09.002

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0149718915001044
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.evalprogplan.2015.09.002?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a location where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. John Horton & David Rand & Richard Zeckhauser, 2011. "The online laboratory: conducting experiments in a real labor market," Experimental Economics, Springer;Economic Science Association, vol. 14(3), pages 399-425, September.
    2. Paolacci, Gabriele & Chandler, Jesse & Ipeirotis, Panagiotis G., 2010. "Running experiments on Amazon Mechanical Turk," Judgment and Decision Making, Cambridge University Press, vol. 5(5), pages 411-419, August.
    3. Berinsky, Adam J. & Huber, Gregory A. & Lenz, Gabriel S., 2012. "Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk," Political Analysis, Cambridge University Press, vol. 20(3), pages 351-368, July.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Harman, Elena & Azzam, Tarek, 2018. "Incorporating public values into evaluative criteria: Using crowdsourcing to identify criteria and standards," Evaluation and Program Planning, Elsevier, vol. 71(C), pages 68-82.
    2. Harman, Elena & Azzam, Tarek, 2018. "Towards program theory validation: Crowdsourcing the qualitative analysis of participant experiences," Evaluation and Program Planning, Elsevier, vol. 66(C), pages 183-194.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Cantarella, Michele & Strozzi, Chiara, 2019. "Workers in the Crowd: The Labour Market Impact of the Online Platform Economy," IZA Discussion Papers 12327, Institute of Labor Economics (IZA).
    2. Gandullia, Luca & Lezzi, Emanuela & Parciasepe, Paolo, 2020. "Replication with MTurk of the experimental design by Gangadharan, Grossman, Jones & Leister (2018): Charitable giving across donor types," Journal of Economic Psychology, Elsevier, vol. 78(C).
    3. Prissé, Benjamin & Jorrat, Diego, 2022. "Lab vs online experiments: No differences," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 100(C).
    4. Valerio Capraro & Hélène Barcelo, 2021. "Punishing defectors and rewarding cooperators: Do people discriminate between genders?," Journal of the Economic Science Association, Springer;Economic Science Association, vol. 7(1), pages 19-32, September.
    5. Lefgren, Lars J. & Sims, David P. & Stoddard, Olga B., 2016. "Effort, luck, and voting for redistribution," Journal of Public Economics, Elsevier, vol. 143(C), pages 89-97.
    6. Tim Straub & Henner Gimpel & Florian Teschner & Christof Weinhardt, 2015. "How (not) to Incent Crowd Workers," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 57(3), pages 167-179, June.
    7. Chandler, Dana & Kapelner, Adam, 2013. "Breaking monotony with meaning: Motivation in crowdsourcing markets," Journal of Economic Behavior & Organization, Elsevier, vol. 90(C), pages 123-133.
    8. Harman, Elena & Azzam, Tarek, 2018. "Incorporating public values into evaluative criteria: Using crowdsourcing to identify criteria and standards," Evaluation and Program Planning, Elsevier, vol. 71(C), pages 68-82.
    9. Michele Cantarella & Chiara Strozzi, 2018. "Labour market effects of crowdwork in US and EU: an empirical investigation," Department of Economics 0139, University of Modena and Reggio E., Faculty of Economics "Marco Biagi".
    10. Binder, Carola Conces, 2022. "Time-of-day and day-of-week variations in Amazon Mechanical Turk survey responses," Journal of Macroeconomics, Elsevier, vol. 71(C).
    11. Anthony C. Bucaro & Kevin E. Jackson & Jeremy B. Lill, 2020. "The Influence of Corporate Social Responsibility Measures on Investors' Judgments When Integrated in a Financial Report Versus Presented in a Separate Report," Contemporary Accounting Research, John Wiley & Sons, vol. 37(2), pages 665-695, June.
    12. Brañas-Garza, Pablo & Capraro, Valerio & Rascón-Ramírez, Ericka, 2018. "Gender differences in altruism on Mechanical Turk: Expectations and actual behaviour," Economics Letters, Elsevier, vol. 170(C), pages 19-23.
    13. Fehr, Dietmar & Vollmann, Martin, 2020. "Misperceiving Economic Success: Experimental Evidence on Meritocratic Beliefs and Inequality Acceptance," Working Papers 0695, University of Heidelberg, Department of Economics.
    14. Atalay, Kadir & Bakhtiar, Fayzan & Cheung, Stephen & Slonim, Robert, 2014. "Savings and prize-linked savings accounts," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 86-106.
    15. Joseph A. Johnson & Jochen Theis & Adam Vitalis & Donald Young, 2020. "The Influence of Firms' Emissions Management Strategy Disclosures on Investors' Valuation Judgments†," Contemporary Accounting Research, John Wiley & Sons, vol. 37(2), pages 642-664, June.
    16. David Johnson & John Barry Ryan, 2020. "Amazon Mechanical Turk workers can provide consistent and economically meaningful data," Southern Economic Journal, John Wiley & Sons, vol. 87(1), pages 369-385, July.
    17. Burdea, Valeria & Woon, Jonathan, 2022. "Online belief elicitation methods," Journal of Economic Psychology, Elsevier, vol. 90(C).
    18. Mourelatos, Evangelos, 2021. "Personality and Ethics on Online Labor Markets: How mood influences ethical perceptions," EconStor Preprints 244735, ZBW - Leibniz Information Centre for Economics.
    19. Nicolas Jacquemet & Alexander G James & Stéphane Luchini & James J Murphy & Jason F Shogren, 2021. "Do truth-telling oaths improve honesty in crowd-working?," PLOS ONE, Public Library of Science, vol. 16(1), pages 1-18, January.
    20. Blaine G. Robbins, 2017. "Status, identity, and ability in the formation of trust," Rationality and Society, , vol. 29(4), pages 408-448, November.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:epplan:v:54:y:2016:i:c:p:63-73. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/evalprogplan .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.