Comparing water quality valuation across probability and non‐probability samples

Authors

  • Kaitlynn Sandstrom‐Mistry
  • Frank Lupi
  • Hyunjung Kim
  • Joseph A. Herriges

Abstract

We compare water quality valuation results from a probability sample and two opt‐in non‐probability samples, MTurk and Qualtrics. The samples differ in some key demographics, but measured attitudes are strikingly similar. In the valuation models, most parameters differed significantly across samples, yet many of the marginal willingness‐to‐pay estimates were similar. Notably, for non‐marginal changes there were some differences across samples: MTurk values were always significantly greater than those from the probability sample, as were Qualtrics values for changes up to about a 20% improvement. Overall, the evidence is mixed, with some key differences but many similarities across samples.
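
The pattern of differing parameters but similar marginal values has a standard explanation in choice modeling: marginal willingness to pay is a ratio of coefficients, so a scale factor common to one sample's estimates cancels. A minimal sketch, assuming a generic linear‐in‐attributes random utility model rather than the paper's actual specification:

    % Generic linear random utility model (assumed here for illustration);
    % q_j is water quality, c_j is cost, and beta_c < 0.
    \[ U_{ij} = \beta_q q_j + \beta_c c_j + \varepsilon_{ij},
       \qquad \mathrm{MWTP}_q = -\beta_q / \beta_c . \]
    % If one sample's error scale rescales its estimates by mu > 0,
    % every coefficient differs across samples, but the ratio does not:
    \[ -\frac{\mu \beta_q}{\mu \beta_c} = -\frac{\beta_q}{\beta_c}
       = \mathrm{MWTP}_q . \]

Non‐marginal welfare measures depend on more than coefficient ratios (in a standard logit model they involve a log‐sum of utilities divided by the cost coefficient), so they are not scale‐invariant in the same way, which is consistent with the differences the abstract reports for larger quality changes.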

Suggested Citation

  • Kaitlynn Sandstrom‐Mistry & Frank Lupi & Hyunjung Kim & Joseph A. Herriges, 2023. "Comparing water quality valuation across probability and non‐probability samples," Applied Economic Perspectives and Policy, John Wiley & Sons, vol. 45(2), pages 744-761, June.
  • Handle: RePEc:wly:apecpp:v:45:y:2023:i:2:p:744-761
    DOI: 10.1002/aepp.13375

    Download full text from publisher

    File URL: https://doi.org/10.1002/aepp.13375
    Download Restriction: no

    File URL: https://libkey.io/10.1002/aepp.13375?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Menegaki, Angeliki N. & Olsen, Søren Bøye & Tsagarakis, Konstantinos P., 2016. "Towards a common standard – A reporting checklist for web-based stated preference valuation surveys and a critique for mode surveys," Journal of choice modelling, Elsevier, vol. 18(C), pages 18-50.
    2. Erlend Dancke Sandorf & Kristine Grimsrud & Henrik Lindhjem, 2022. "Ponderous, Proficient or Professional? Survey Experience and Smartphone Effects in Stated Preference Research," Environmental & Resource Economics, Springer;European Association of Environmental and Resource Economists, vol. 81(4), pages 807-832, April.
    3. Skeie, Magnus Aa. & Lindhjem, Henrik & Skjeflo, Sofie & Navrud, Ståle, 2019. "Smartphone and tablet effects in contingent valuation web surveys – No reason to worry?," Ecological Economics, Elsevier, vol. 165(C).
    4. Kelvin Balcombe & Michail Bitzios & Iain Fraser & Janet Haddock-Fraser, 2014. "Using Attribute Importance Rankings Within Discrete Choice Experiments: An Application to Valuing Bread Attributes," Journal of Agricultural Economics, Wiley Blackwell, vol. 65(2), pages 446-462, June.
    5. Barton, Jared & Pan, Xiaofei, 2022. "Movin’ on up? A survey experiment on mobility enhancing policies," European Journal of Political Economy, Elsevier, vol. 74(C).
    6. Brodeur, Abel & Cook, Nikolai & Heyes, Anthony, 2022. "We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell us about p-Hacking and Publication Bias in Online Experiments," GLO Discussion Paper Series 1157, Global Labor Organization (GLO).
    7. John Gibson & David Johnson, 2021. "Breaking Bad: When Being Disadvantaged Incentivizes (Seemingly) Risky Behavior," Eastern Economic Journal, Palgrave Macmillan;Eastern Economic Association, vol. 47(1), pages 107-134, January.
    8. Carlsson, Fredrik & Kataria, Mitesh & Lampi, Elina & Martinsson, Peter, 2021. "Past and present outage costs – A follow-up study of households’ willingness to pay to avoid power outages," Resource and Energy Economics, Elsevier, vol. 64(C).
    9. Johannes G. Jaspersen & Marc A. Ragin & Justin R. Sydnor, 2022. "Insurance demand experiments: Comparing crowdworking to the lab," Journal of Risk & Insurance, The American Risk and Insurance Association, vol. 89(4), pages 1077-1107, December.
    10. Luke Fowler & Stephen Utych, 2021. "Are people better employees than machines? Dehumanizing language and employee performance appraisals," Social Science Quarterly, Southwestern Social Science Association, vol. 102(4), pages 2006-2019, July.
    11. Penn, Jerrod & Hu, Wuyang, 2016. "Making the Most of Cheap Talk in an Online Survey," 2016 Annual Meeting, July 31-August 2, Boston, Massachusetts 236171, Agricultural and Applied Economics Association.
    12. Sandorf, Erlend Dancke & Persson, Lars & Broberg, Thomas, 2020. "Using an integrated choice and latent variable model to understand the impact of “professional” respondents in a stated preference survey," Resource and Energy Economics, Elsevier, vol. 61(C).
    13. James W. Mjelde & Tae-Kyun Kim & Choong-Ki Lee, 2016. "Comparison of Internet and interview survey modes when estimating willingness to pay using choice experiments," Applied Economics Letters, Taylor & Francis Journals, vol. 23(1), pages 74-77, January.
    14. David Johnson & John Barry Ryan, 2020. "Amazon Mechanical Turk workers can provide consistent and economically meaningful data," Southern Economic Journal, John Wiley & Sons, vol. 87(1), pages 369-385, July.
    15. Lenz, Lisa & Mittlaender, Sergio, 2022. "The effect of intergroup contact on discrimination," Journal of Economic Psychology, Elsevier, vol. 89(C).
    16. Kolstoe, Sonja & Naald, Brian Vander & Cohan, Alison, 2022. "A tale of two samples: Understanding WTP differences in the age of social media," Ecosystem Services, Elsevier, vol. 55(C).
    17. Liebe, Ulf & Glenk, Klaus & von Meyer-Höfer, Marie & Spiller, Achim, 2019. "A web survey application of real choice experiments," Journal of choice modelling, Elsevier, vol. 33(C).
    18. Danny Campbell & Morten Raun Mørkbak & Søren Bøye Olsen, 2017. "Response time in online stated choice experiments: the non-triviality of identifying fast and slow respondents," Journal of Environmental Economics and Policy, Taylor & Francis Journals, vol. 6(1), pages 17-35, January.
    19. Chen, Xuqi & Shen, Meng & Gao, Zhifeng, 2017. "Impact of Intra-respondent Variations in Attribute Attendance on Consumer Preference in Food Choice," 2017 Annual Meeting, July 30-August 1, Chicago, Illinois 258509, Agricultural and Applied Economics Association.
