
Effects of algorithmic control on power asymmetry and inequality within organizations

Authors

  • Mehdi Barati (University at Albany-State University of New York)
  • Bahareh Ansari (University at Albany)

Abstract

Algorithmic control (AC) is expanding across domains with advances in programming algorithms, continuous increases in hardware computing power, larger amounts of available fine-grained data, and a growing number of organizations adopting remote work. Scholars and practitioners in human resource management posit that organizations’ adoption of algorithms, as a substitute for or supplement to traditional rational control mechanisms to direct, discipline, and evaluate workers, might increase the objectivity and transparency of worker-related decision-making processes and therefore reduce power asymmetry and inequality within organizations. This discussion commentary argues that the underlying assumptions of the higher objectivity and transparency of algorithms in organizational control are very strong, and current evidence does not support them. There is also evidence of large variation in organizations’ adoption of algorithmic control, owing to differences in their technical, structural, and human capital resources, which further blurs the predicted outcomes. Evidence also exists of managers over-relying on algorithmic suggestions to circumvent accountability. The adoption of algorithmic control must therefore proceed with serious precautions. This article proposes that overestimating objectivity and transparency, together with the large variation in organizations’ adoption of AC (including the lack of technical and managerial knowledge of the underlying mechanisms of learning algorithms in some organizations, and the complete abandonment of human intuitive judgment and reasoning in others), could worsen power asymmetry and inequality within organizations by increasing the opacity of decisions, systematic biases, discriminatory classification, and violations of worker privacy.

Suggested Citation

  • Mehdi Barati & Bahareh Ansari, 2022. "Effects of algorithmic control on power asymmetry and inequality within organizations," Journal of Management Control: Zeitschrift für Planung und Unternehmenssteuerung, Springer, vol. 33(4), pages 525-544, December.
  • Handle: RePEc:spr:jmgtco:v:33:y:2022:i:4:d:10.1007_s00187-022-00347-6
    DOI: 10.1007/s00187-022-00347-6

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s00187-022-00347-6
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s00187-022-00347-6?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Alina Köchling & Marius Claus Wehner, 2020. "Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development," Business Research, Springer;German Academic Association for Business Research, vol. 13(3), pages 795-848, November.
    2. Edwards, Lilian & Veale, Michael, 2017. "Slave to the Algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for," LawArXiv 97upg, Center for Open Science.
    3. Wiener, Martin & Cram, W. Alec & Benlian, Alexander, 2023. "Algorithmic control and gig workers: A legitimacy perspective of Uber drivers," Publications of Darmstadt Technical University, Institute for Business Studies (BWL) 128415, Darmstadt Technical University, Department of Business Administration, Economics and Law, Institute for Business Studies (BWL).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.
    Cited by:

    1. Niilo Noponen & Polina Feshchenko & Tommi Auvinen & Vilma Luoma-aho & Pekka Abrahamsson, 2024. "Taylorism on steroids or enabling autonomy? A systematic review of algorithmic management," Management Review Quarterly, Springer, vol. 74(3), pages 1695-1721, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. König, Pascal D. & Wenzelburger, Georg, 2021. "The legitimacy gap of algorithmic decision-making in the public sector: Why it arises and how to address it," Technology in Society, Elsevier, vol. 67(C).
    2. Vasiliki Koniakou, 2023. "From the “rush to ethics” to the “race for governance” in Artificial Intelligence," Information Systems Frontiers, Springer, vol. 25(1), pages 71-102, February.
    3. Koefer, Franziska & Lemken, Ivo & Pauls, Jan, 2023. "Fairness in algorithmic decision systems: A microfinance perspective," EIF Working Paper Series 2023/88, European Investment Fund (EIF).
    4. Suen, Hung-Yue & Hung, Kuo-En, 2024. "Revealing the influence of AI and its interfaces on job candidates' honest and deceptive impression management in asynchronous video interviews," Technological Forecasting and Social Change, Elsevier, vol. 198(C).
    5. Hazel Si Min Lim & Araz Taeihagh, 2019. "Algorithmic Decision-Making in AVs: Understanding Ethical and Technical Concerns for Smart Cities," Sustainability, MDPI, vol. 11(20), pages 1-28, October.
    6. Chenfeng Yan & Quan Chen & Xinyue Zhou & Xin Dai & Zhilin Yang, 2024. "When the Automated fire Backfires: The Adoption of Algorithm-based HR Decision-making Could Induce Consumer’s Unfavorable Ethicality Inferences of the Company," Journal of Business Ethics, Springer, vol. 190(4), pages 841-859, April.
    7. Buhmann, Alexander & Fieseler, Christian, 2021. "Towards a deliberative framework for responsible innovation in artificial intelligence," Technology in Society, Elsevier, vol. 64(C).
    8. Jing Wang & Zeyu Xing & Rui Zhang, 2023. "AI technology application and employee responsibility," Palgrave Communications, Palgrave Macmillan, vol. 10(1), pages 1-17, December.
    9. Colak Murat & Saridogan Berkay C., 2023. "Exploring the Remote Work Revolution: A Managerial View of the Tech Sector’s Response to the New Normal," International Journal of Contemporary Management, Sciendo, vol. 59(4), pages 18-33, December.
    10. Veale, Michael & Binns, Reuben & Van Kleek, Max, 2018. "Some HCI Priorities for GDPR-Compliant Machine Learning," LawArXiv wm6yk, Center for Open Science.
    11. Cobbe, Jennifer & Veale, Michael & Singh, Jatinder, 2023. "Understanding Accountability in Algorithmic Supply Chains," SocArXiv p4sey, Center for Open Science.
    12. Zhang, Lixuan & Yencha, Christopher, 2022. "Examining perceptions towards hiring algorithms," Technology in Society, Elsevier, vol. 68(C).
    13. Kirsten Martin & Ari Waldman, 2023. "Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions," Journal of Business Ethics, Springer, vol. 183(3), pages 653-670, March.
    14. Gorwa, Robert, 2019. "What is Platform Governance?," SocArXiv fbu27, Center for Open Science.
    15. Vesnic-Alujevic, Lucia & Nascimento, Susana & Pólvora, Alexandre, 2020. "Societal and ethical impacts of artificial intelligence: Critical notes on European policy frameworks," Telecommunications Policy, Elsevier, vol. 44(6).
    16. Veale, Michael, 2017. "Logics and practices of transparency and opacity in real-world applications of public sector machine learning," SocArXiv 6cdhe, Center for Open Science.
    17. Söderlund, Kasia & Engström, Emma & Haresamudram, Kashyap & Larsson, Stefan & Strimling, Pontus, 2024. "Regulating high-reach AI: On transparency directions in the Digital Services Act," Internet Policy Review: Journal on Internet Regulation, Alexander von Humboldt Institute for Internet and Society (HIIG), Berlin, vol. 13(1), pages 1-31.
    18. Tobias D. Krafft & Katharina A. Zweig & Pascal D. König, 2022. "How to regulate algorithmic decision‐making: A framework of regulatory requirements for different applications," Regulation & Governance, John Wiley & Sons, vol. 16(1), pages 119-136, January.
    19. Emre Bayamlıoğlu, 2022. "The right to contest automated decisions under the General Data Protection Regulation: Beyond the so‐called “right to explanation”," Regulation & Governance, John Wiley & Sons, vol. 16(4), pages 1058-1078, October.
    20. Mazur Joanna, 2019. "Automated Decision-Making and the Precautionary Principle in EU Law," TalTech Journal of European Studies, Sciendo, vol. 9(4), pages 3-18, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jmgtco:v:33:y:2022:i:4:d:10.1007_s00187-022-00347-6. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.