
Selecting Workers Wisely for Crowdsourcing When Copiers and Domain Experts Co-exist

Author

Listed:
  • Xiu Fang

    (School of Computer Science and Technology, Donghua University, Shanghai 201600, China
    These authors contributed equally to this work.)

  • Suxin Si

    (School of Computer Science and Technology, Donghua University, Shanghai 201600, China
    These authors contributed equally to this work.)

  • Guohao Sun

    (School of Computer Science and Technology, Donghua University, Shanghai 201600, China)

  • Quan Z. Sheng

    (School of Computing, Macquarie University, Sydney, NSW 2109, Australia)

  • Wenjun Wu

    (School of Computer Science and Technology, Donghua University, Shanghai 201600, China)

  • Kang Wang

    (School of Computer Science and Technology, Donghua University, Shanghai 201600, China)

  • Hang Lv

    (School of Computer Science and Technology, Donghua University, Shanghai 201600, China)

Abstract

Crowdsourcing integrates human wisdom to solve problems, and tremendous research efforts have been made in this area. However, most of them assume that workers have the same credibility across different domains and that workers complete tasks independently. This leads to inaccurate evaluation of worker credibility, degrading crowdsourcing results. To account for the impact of worker domain expertise, we adopted a vector to measure each worker's credibility more accurately. Based on this measurement and prior knowledge of task domains, we calculated fine-grained worker credibility for each given task. To avoid assigning tasks to dependent workers who copy answers from others, we conducted copier detection via Bayesian analysis. We designed a crowdsourcing system called SWWC, composed of a task assignment stage and a truth discovery stage. In the task assignment stage, we assigned tasks wisely to workers based on worker domain expertise calculation and copier removal. In the truth discovery stage, we computed the estimated truth and worker credibility using an iterative method, and then updated workers' domain expertise to facilitate the upcoming task assignment. We also designed initialization algorithms to better initialize the accuracy of new workers. Theoretical analysis and experimental results showed that our method has a clear advantage, especially when copiers are present.
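
The truth discovery stage described in the abstract alternates between two steps: estimating each task's truth from credibility-weighted worker answers, and re-estimating each worker's per-domain credibility vector against those truths. Below is a minimal Python sketch of that general iterative scheme. All function and variable names here are illustrative assumptions, the credibility update is simple Laplace-smoothed accuracy rather than the paper's exact formulation, and the Bayesian copier detection and task assignment stages are omitted; this is a sketch of the idea, not the SWWC implementation.

    import numpy as np

    def truth_discovery(answers, task_domains, n_domains, n_iters=20):
        # answers:      dict mapping (worker, task) -> answer label
        # task_domains: dict mapping task -> domain index in [0, n_domains)
        workers = {w for (w, _) in answers}
        tasks = {t for (_, t) in answers}
        # Per-domain credibility vector for each worker, optimistically
        # initialized; the paper uses dedicated initialization algorithms
        # for new workers instead of this fixed constant.
        cred = {w: np.full(n_domains, 0.8) for w in workers}
        truths = {}
        for _ in range(n_iters):
            # Step 1: infer each task's truth by weighted voting, weighting
            # every worker by their credibility in the task's domain.
            for t in tasks:
                votes = {}
                for (w, t2), a in answers.items():
                    if t2 == t:
                        votes[a] = votes.get(a, 0.0) + cred[w][task_domains[t]]
                truths[t] = max(votes, key=votes.get)
            # Step 2: update each worker's per-domain credibility as their
            # Laplace-smoothed accuracy against the current truth estimates.
            for w in workers:
                hits = np.zeros(n_domains)
                total = np.zeros(n_domains)
                for (w2, t), a in answers.items():
                    if w2 == w:
                        d = task_domains[t]
                        total[d] += 1
                        hits[d] += float(a == truths[t])
                cred[w] = (hits + 1.0) / (total + 2.0)
        return truths, cred

For example, with answers = {('w1', 't1'): 'A', ('w2', 't1'): 'A', ('w3', 't1'): 'B'} and task_domains = {'t1': 0}, calling truth_discovery(answers, task_domains, n_domains=1) settles on 'A' as the truth for t1 and assigns 'w3' a lower domain credibility than the two agreeing workers.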

Suggested Citation

  • Xiu Fang & Suxin Si & Guohao Sun & Quan Z. Sheng & Wenjun Wu & Kang Wang & Hang Lv, 2022. "Selecting Workers Wisely for Crowdsourcing When Copiers and Domain Experts Co-exist," Future Internet, MDPI, vol. 14(2), pages 1-22, January.
  • Handle: RePEc:gam:jftint:v:14:y:2022:i:2:p:37-:d:732274

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/14/2/37/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/14/2/37/
    Download Restriction: no

    References listed on IDEAS

    1. A. P. Dawid & A. M. Skene, 1979. "Maximum Likelihood Estimation of Observer Error‐Rates Using the EM Algorithm," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 28(1), pages 20-28, March.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Xiaoxiao Yang & Jing Zhang & Jun Peng & Lihong Lei, 2021. "Incentive mechanism based on Stackelberg game under reputation constraint for mobile crowdsensing," International Journal of Distributed Sensor Networks, , vol. 17(6), pages 15501477211, June.
    2. Junming Yin & Jerry Luo & Susan A. Brown, 2021. "Learning from Crowdsourced Multi-labeling: A Variational Bayesian Approach," Information Systems Research, INFORMS, vol. 32(3), pages 752-773, September.
    3. Yuqing Kong, 2021. "Information Elicitation Meets Clustering," Papers 2110.00952, arXiv.org.
    4. Tomer Geva & Maytal Saar‐Tsechansky, 2021. "Who Is a Better Decision Maker? Data‐Driven Expert Ranking Under Unobserved Quality," Production and Operations Management, Production and Operations Management Society, vol. 30(1), pages 127-144, January.
    5. Jesus Cerquides & Mehmet Oğuz Mülâyim & Jerónimo Hernández-González & Amudha Ravi Shankar & Jose Luis Fernandez-Marquez, 2021. "A Conceptual Probabilistic Framework for Annotation Aggregation of Citizen Science Data," Mathematics, MDPI, vol. 9(8), pages 1-15, April.
    6. Ahfock, Daniel & McLachlan, Geoffrey J., 2021. "Harmless label noise and informative soft-labels in supervised classification," Computational Statistics & Data Analysis, Elsevier, vol. 161(C).
    7. Alaa Ghanaiem & Evgeny Kagan & Parteek Kumar & Tal Raviv & Peter Glynn & Irad Ben-Gal, 2023. "Unsupervised Classification under Uncertainty: The Distance-Based Algorithm," Mathematics, MDPI, vol. 11(23), pages 1-19, November.
    8. Jing Wang & Panagiotis G. Ipeirotis & Foster Provost, 2017. "Cost-Effective Quality Assurance in Crowd Labeling," Information Systems Research, INFORMS, vol. 28(1), pages 137-158, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:14:y:2022:i:2:p:37-:d:732274. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.