
Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types

Author

  • David Rozado

Abstract

Concerns about gender bias in word embedding models have captured substantial attention in the algorithmic bias research literature. Other bias types, however, have received far less scrutiny. This work describes a large-scale analysis of sentiment associations in popular word embedding models along the lines of gender and ethnicity, but also along the less frequently studied dimensions of socioeconomic status, age, physical appearance, sexual orientation, religious sentiment and political leanings. Consistent with previous scholarly literature, this work finds systemic bias against given names popular among African-Americans in most embedding models examined. Gender bias in embedding models, however, appears to be multifaceted and often reversed in polarity relative to what has been regularly reported. Interestingly, using the common operationalization of the term bias in the fairness literature, several so-far-unreported bias types in word embedding models have also been identified. Specifically, the popular embedding models analyzed here display negative biases against middle- and working-class socioeconomic status, male children, senior citizens, plain physical appearance, and intellectual phenomena such as Islamic religious faith, non-religiosity and conservative political orientation. The reasons for the paradoxical underreporting of these bias types in the relevant literature are probably manifold, but widely held blind spots when searching for algorithmic bias, together with a lack of widespread technical terminology to unambiguously describe the variety of algorithmic associations, could conceivably be playing a role. The causal origins of the multiplicity of loaded associations attached to distinct demographic groups within embedding models are often unclear, but the heterogeneity of those associations and their potentially multifactorial roots raise doubts about the validity of grouping them all under the umbrella term bias. Richer and more fine-grained terminology, as well as a more comprehensive exploration of the bias landscape, could help the fairness epistemic community characterize and neutralize algorithmic discrimination more efficiently.
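The abstract describes measuring sentiment associations between demographic target words and a large sentiment lexicon in embedding space. As a rough, hypothetical illustration only, and not the paper's exact procedure, the Python sketch below scores a target word by its mean cosine similarity to a small positive lexicon minus its mean similarity to a small negative lexicon. The embedding vectors, lexicon entries and target words are toy placeholders chosen for this sketch.

```python
import numpy as np

# Toy stand-in for a pretrained embedding: word -> 50-dimensional vector.
# A real analysis would load vectors such as word2vec, GloVe or fastText.
rng = np.random.default_rng(seed=0)
vocab = ["emily", "jamal", "nurse", "engineer",
         "wonderful", "excellent", "happy",
         "terrible", "awful", "sad"]
embedding = {word: rng.normal(size=50) for word in vocab}

# Small placeholder sentiment lexicons; a large lexicon would be used in practice.
positive_lexicon = ["wonderful", "excellent", "happy"]
negative_lexicon = ["terrible", "awful", "sad"]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sentiment_association(word):
    """Mean cosine similarity of `word` to the positive lexicon minus its
    mean similarity to the negative lexicon. Lower scores indicate that
    the word sits closer to negatively valenced vocabulary."""
    vec = embedding[word]
    pos = np.mean([cosine(vec, embedding[w]) for w in positive_lexicon])
    neg = np.mean([cosine(vec, embedding[w]) for w in negative_lexicon])
    return pos - neg

# Compare scores across target words associated with different groups.
for target in ["emily", "jamal", "nurse", "engineer"]:
    print(f"{target}: {sentiment_association(target):+.3f}")
```

With random placeholder vectors the printed scores are meaningless; the sketch only shows the shape of the computation. A study along the lines of the abstract would average such scores over sets of target words per demographic dimension and apply statistical testing before interpreting any difference as bias.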

Suggested Citation

  • David Rozado, 2020. "Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types," PLOS ONE, Public Library of Science, vol. 15(4), pages 1-26, April.
  • Handle: RePEc:plo:pone00:0231189
    DOI: 10.1371/journal.pone.0231189

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0231189
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0231189&type=printable
    Download Restriction: no


    References listed on IDEAS

    1. Rachel Courtland, 2018. "Bias detectives: the researchers striving to make algorithms fair," Nature, Nature, vol. 558(7710), pages 357-360, June.
    2. Nikhil Garg & Londa Schiebinger & Dan Jurafsky & James Zou, 2018. "Word embeddings quantify 100 years of gender and ethnic stereotypes," Proceedings of the National Academy of Sciences, Proceedings of the National Academy of Sciences, vol. 115(16), pages 3635-3644, April.

    Citations

    Cited by:

    1. Akter, Shahriar & Hossain, Md Afnan & Sajib, Shahriar & Sultana, Saida & Rahman, Mahfuzur & Vrontis, Demetris & McCarthy, Grace, 2023. "A framework for AI-powered service innovation capability: Review and agenda for future research," Technovation, Elsevier, vol. 125(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Kun Sun & Rong Wang, 2022. "The Evolutionary Pattern of Language in English Fiction Over the Last Two Centuries: Insights From Linguistic Concreteness and Imageability," SAGE Open, , vol. 12(1), pages 21582440211, January.
    2. Stefan Feuerriegel & Mateusz Dolata & Gerhard Schwabe, 2020. "Fair AI," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 62(4), pages 379-384, August.
    3. Ash, Elliott & Durante, Ruben & Grebenshchikova, Mariia & Schwarz, Carlo, 2022. "Visual Representation and Stereotypes in News Media," CEPR Discussion Papers 16624, C.E.P.R. Discussion Papers.
    4. Martin Baumgaertner & Johannes Zahner, 2021. "Whatever it takes to understand a central banker - Embedding their words using neural networks," MAGKS Papers on Economics 202130, Philipps-Universität Marburg, Faculty of Business Administration and Economics, Department of Economics (Volkswirtschaftliche Abteilung).
    5. Taylor, Marshall A. & Stoltz, Dustin S., 2020. "Integrating Semantic Directions with Concept Mover's Distance to Measure Binary Concept Engagement," SocArXiv 36r2d, Center for Open Science.
    6. Fellnhofer, Katharina & Sornette, Didier, 2022. "Embracing The Intuitive-Analytical Paradox? How Intuitive And Analytical Decision-Making Drive Paradoxes In Simple And Complex Environments," OSF Preprints evjd6, Center for Open Science.
    7. Duede, Eamon & Teplitskiy, Misha & Lakhani, Karim & Evans, James, 2024. "Being together in place as a catalyst for scientific advance," Research Policy, Elsevier, vol. 53(2).
    8. Dustin S. Stoltz & Marshall A. Taylor, 2019. "Concept Mover’s Distance: measuring concept engagement via word embeddings in texts," Journal of Computational Social Science, Springer, vol. 2(2), pages 293-313, July.
    9. Kun Sun & Haitao Liu & Wenxin Xiong, 2021. "The evolutionary pattern of language in scientific writings: A case study of Philosophical Transactions of Royal Society (1665–1869)," Scientometrics, Springer;Akadémiai Kiadó, vol. 126(2), pages 1695-1724, February.
    10. Haochuan Cui & Tiewei Li & Cheng-Jun Wang, 2023. "Climbing up the ladder of abstraction: how to span the boundaries of knowledge space in the online knowledge market?," Palgrave Communications, Palgrave Macmillan, vol. 10(1), pages 1-12, December.
    11. Diego Kozlowski & Jennifer Dusdal & Jun Pang & Andreas Zilian, 2021. "Semantic and relational spaces in science of science: deep learning models for article vectorisation," Scientometrics, Springer;Akadémiai Kiadó, vol. 126(7), pages 5881-5910, July.
    12. Sandeep Soni & Kristina Lerman & Jacob Eisenstein, 2021. "Follow the leader: Documents on the leading edge of semantic change get more citations," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 72(4), pages 478-492, April.
    13. Vesnic-Alujevic, Lucia & Nascimento, Susana & Pólvora, Alexandre, 2020. "Societal and ethical impacts of artificial intelligence: Critical notes on European policy frameworks," Telecommunications Policy, Elsevier, vol. 44(6).
    14. Quariguasi Frota Neto, João & Dutordoir, Marie, 2020. "Mapping the market for remanufacturing: An application of “Big Data” analytics," International Journal of Production Economics, Elsevier, vol. 230(C).
    15. Scott Thiebes & Sebastian Lins & Ali Sunyaev, 2021. "Trustworthy artificial intelligence," Electronic Markets, Springer;IIM University of St. Gallen, vol. 31(2), pages 447-464, June.
    16. Sudeep Bhatia, 2019. "Predicting Risk Perception: New Insights from Data Science," Management Science, INFORMS, vol. 65(8), pages 3800-3823, August.
    17. Huimin Xu & Zhang Zhang & Lingfei Wu & Cheng-Jun Wang, 2019. "The Cinderella Complex: Word embeddings reveal gender stereotypes in movies and books," PLOS ONE, Public Library of Science, vol. 14(11), pages 1-18, November.
    18. Ahmed Abbasi & Jeffrey Parsons & Gautam Pant & Olivia R. Liu Sheng & Suprateek Sarker, 2024. "Pathways for Design Research on Artificial Intelligence," Information Systems Research, INFORMS, vol. 35(2), pages 441-459, June.
    19. Antonio De Nicola & Gregorio D’Agostino, 2021. "Assessment of gender divide in scientific communities," Scientometrics, Springer;Akadémiai Kiadó, vol. 126(5), pages 3807-3840, May.
    20. Dario Onorati & Pierfrancesco Tommasino & Leonardo Ranaldi & Francesca Fallucchi & Fabio Massimo Zanzotto, 2020. "Pat-in-the-Loop : Declarative Knowledge for Controlling Neural Networks," Future Internet, MDPI, vol. 12(12), pages 1-12, December.
