
Incorporating public values into evaluative criteria: Using crowdsourcing to identify criteria and standards

Author

Listed:
  • Harman, Elena
  • Azzam, Tarek

Abstract

At its core, evaluation involves the generation of value judgments. These evaluative judgments are based on comparing an evaluand’s performance to what the evaluand is supposed to do (criteria) and how well it is supposed to do it (standards). The aim of this four-phase study was to test whether criteria and standards can be set via crowdsourcing, a potentially cost- and time-effective approach to collecting public opinion data. In the first three phases, participants were presented with a program description, then asked to complete a task to either identify criteria (phase one), weigh criteria (phase two), or set standards (phase three). Phase four found that the crowd-generated criteria were high quality; more specifically, that they were clear and concise, complete, non-overlapping, and realistic. Overall, the study concludes that crowdsourcing has the potential to be used in evaluation for setting stable, high-quality criteria and standards.
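The criteria/weights/standards framing in the abstract can be made concrete with a small illustration. The sketch below is not taken from the paper; the criteria, ratings, standards, and performance figures are hypothetical. It simply assumes that crowd input arrives as per-criterion importance ratings, which are averaged and normalized into weights and then used to score a program against minimum standards.

```python
# Minimal sketch (not the authors' procedure): how crowd-supplied criterion
# weights and standards could be combined into an evaluative judgment.
# All criterion names, ratings, standards, and performance values are hypothetical.

def normalize_weights(raw_ratings):
    """Average each criterion's crowd ratings and rescale so the weights sum to 1."""
    means = {c: sum(vals) / len(vals) for c, vals in raw_ratings.items()}
    total = sum(means.values())
    return {c: m / total for c, m in means.items()}

def judge(performance, standards, weights):
    """Score = weighted share of criteria on which performance meets the standard."""
    return sum(weights[c] for c in standards if performance[c] >= standards[c])

# Hypothetical crowd input: importance ratings (1-10) for three criteria.
raw = {"attendance": [7, 9, 8], "skill_gain": [9, 10, 8], "satisfaction": [6, 5, 7]}
weights = normalize_weights(raw)

standards = {"attendance": 0.80, "skill_gain": 0.60, "satisfaction": 0.70}    # minimum acceptable levels
performance = {"attendance": 0.85, "skill_gain": 0.55, "satisfaction": 0.75}  # observed program results

print(weights)                                 # normalized criterion weights
print(judge(performance, standards, weights))  # share of weight on criteria meeting their standards
```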

Suggested Citation

  • Harman, Elena & Azzam, Tarek, 2018. "Incorporating public values into evaluative criteria: Using crowdsourcing to identify criteria and standards," Evaluation and Program Planning, Elsevier, vol. 71(C), pages 68-82.
  • Handle: RePEc:eee:epplan:v:71:y:2018:i:c:p:68-82
    DOI: 10.1016/j.evalprogplan.2018.08.004

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0149718918300417
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.evalprogplan.2018.08.004?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a source where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. John Horton & David Rand & Richard Zeckhauser, 2011. "The online laboratory: conducting experiments in a real labor market," Experimental Economics, Springer;Economic Science Association, vol. 14(3), pages 399-425, September.
    2. Paolacci, Gabriele & Chandler, Jesse & Ipeirotis, Panagiotis G., 2010. "Running experiments on Amazon Mechanical Turk," Judgment and Decision Making, Cambridge University Press, vol. 5(5), pages 411-419, August.
    3. Berinsky, Adam J. & Huber, Gregory A. & Lenz, Gabriel S., 2012. "Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk," Political Analysis, Cambridge University Press, vol. 20(3), pages 351-368, July.
    4. Azzam, Tarek & Harman, Elena, 2016. "Crowdsourcing for quantifying transcripts: An exploratory study," Evaluation and Program Planning, Elsevier, vol. 54(C), pages 63-73.
    5. Geist, Monica R., 2010. "Using the Delphi method to engage stakeholders: A comparison of two studies," Evaluation and Program Planning, Elsevier, vol. 33(2), pages 147-154, May.
    6. Patton, Michael Quinn & Horton, Douglas, 2008. "Utilization-focused evaluation for agricultural innovation," ILAC Briefs 52533, Institutional Learning and Change (ILAC) Initiative.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Teasdale, Rebecca M., 2022. "Representing the values of program participants: Endogenous evaluative criteria," Evaluation and Program Planning, Elsevier, vol. 94(C).
    2. Kazak, Jan K. & Simeunović, Nataša & Hendricks, Andreas, 2019. "Hidden Public Value Identification of Real Estate Management Decisions," Real Estate Management and Valuation, Sciendo, vol. 27(4), pages 96-104, December.
    3. Haeussler, Carolin & Vieth, Sabrina, 2022. "A question worth a million: The expert, the crowd, or myself? An investigation of problem solving," Research Policy, Elsevier, vol. 51(3).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Haas, Nicholas & Hassan, Mazen & Mansour, Sarah & Morton, Rebecca B., 2021. "Polarizing information and support for reform," Journal of Economic Behavior & Organization, Elsevier, vol. 185(C), pages 883-901.
    2. Cantarella, Michele & Strozzi, Chiara, 2019. "Workers in the Crowd: The Labour Market Impact of the Online Platform Economy," IZA Discussion Papers 12327, Institute of Labor Economics (IZA).
    3. Atalay, Kadir & Bakhtiar, Fayzan & Cheung, Stephen & Slonim, Robert, 2014. "Savings and prize-linked savings accounts," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 86-106.
    4. Azzam, Tarek & Harman, Elena, 2016. "Crowdsourcing for quantifying transcripts: An exploratory study," Evaluation and Program Planning, Elsevier, vol. 54(C), pages 63-73.
    5. Wladislaw Mill & Cornelius Schneider, 2023. "The Bright Side of Tax Evasion," CESifo Working Paper Series 10615, CESifo.
    6. Gandullia, Luca & Lezzi, Emanuela & Parciasepe, Paolo, 2020. "Replication with MTurk of the experimental design by Gangadharan, Grossman, Jones & Leister (2018): Charitable giving across donor types," Journal of Economic Psychology, Elsevier, vol. 78(C).
    7. Prissé, Benjamin & Jorrat, Diego, 2022. "Lab vs online experiments: No differences," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 100(C).
    8. Valerio Capraro & Hélène Barcelo, 2021. "Punishing defectors and rewarding cooperators: Do people discriminate between genders?," Journal of the Economic Science Association, Springer;Economic Science Association, vol. 7(1), pages 19-32, September.
    9. Lefgren, Lars J. & Sims, David P. & Stoddard, Olga B., 2016. "Effort, luck, and voting for redistribution," Journal of Public Economics, Elsevier, vol. 143(C), pages 89-97.
    10. Tim Straub & Henner Gimpel & Florian Teschner & Christof Weinhardt, 2015. "How (not) to Incent Crowd Workers," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 57(3), pages 167-179, June.
    11. Nicolas Jacquemet & Alexander G James & Stéphane Luchini & James J Murphy & Jason F Shogren, 2021. "Do truth-telling oaths improve honesty in crowd-working?," PLOS ONE, Public Library of Science, vol. 16(1), pages 1-18, January.
    12. Chandler, Dana & Kapelner, Adam, 2013. "Breaking monotony with meaning: Motivation in crowdsourcing markets," Journal of Economic Behavior & Organization, Elsevier, vol. 90(C), pages 123-133.
    13. Christ, Margaret H. & Vance, Thomas W., 2018. "Cascading controls: The effects of managers’ incentives on subordinate effort to help or harm," Accounting, Organizations and Society, Elsevier, vol. 65(C), pages 20-32.
    14. Harman, Elena & Azzam, Tarek, 2018. "Towards program theory validation: Crowdsourcing the qualitative analysis of participant experiences," Evaluation and Program Planning, Elsevier, vol. 66(C), pages 183-194.
    15. Brañas-Garza, Pablo & Capraro, Valerio & Rascón-Ramírez, Ericka, 2018. "Gender differences in altruism on Mechanical Turk: Expectations and actual behaviour," Economics Letters, Elsevier, vol. 170(C), pages 19-23.
    16. Michele Cantarella & Chiara Strozzi, 2018. "Labour market effects of crowdwork in US and EU: an empirical investigation," Department of Economics 0139, University of Modena and Reggio E., Faculty of Economics "Marco Biagi".
    17. David Johnson & John Barry Ryan, 2020. "Amazon Mechanical Turk workers can provide consistent and economically meaningful data," Southern Economic Journal, John Wiley & Sons, vol. 87(1), pages 369-385, July.
    18. Binder, Carola Conces, 2022. "Time-of-day and day-of-week variations in Amazon Mechanical Turk survey responses," Journal of Macroeconomics, Elsevier, vol. 71(C).
    19. Antonio A. Arechar & Gordon T. Kraft-Todd & David G. Rand, 2017. "Turking overtime: how participant characteristics and behavior vary over time and day on Amazon Mechanical Turk," Journal of the Economic Science Association, Springer;Economic Science Association, vol. 3(1), pages 1-11, July.
    20. Anthony C. Bucaro & Kevin E. Jackson & Jeremy B. Lill, 2020. "The Influence of Corporate Social Responsibility Measures on Investors' Judgments When Integrated in a Financial Report Versus Presented in a Separate Report," Contemporary Accounting Research, John Wiley & Sons, vol. 37(2), pages 665-695, June.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:epplan:v:71:y:2018:i:c:p:68-82. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/evalprogplan.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.