
Regulating high-reach AI: On transparency directions in the Digital Services Act

Author

Listed:
  • Söderlund, Kasia
  • Engström, Emma
  • Haresamudram, Kashyap
  • Larsson, Stefan
  • Strimling, Pontus

Abstract

By introducing the concept of high-reach AI, this paper focuses on AI systems whose widespread use may generate significant risks for both individuals and societies. While some of those risks have been recognised under the AI Act, we analyse the rules laid down by the Digital Services Act (DSA) for recommender systems used by dominant social media platforms as a prominent example of high-reach AI. Specifically, we examine the transparency provisions aimed at addressing the adverse effects of these AI technologies as employed by social media platforms designated as very large online platforms (VLOPs). Drawing on the AI transparency literature, we analyse the DSA's transparency measures through the conceptual lens of horizontal and vertical transparency. Our analysis indicates that while the DSA incorporates transparency provisions in both dimensions, the most progressive amendments emerge within the vertical dimension, for instance through the introduction of the systemic risk assessment mechanism. However, we argue that the true impact of the new transparency provisions extends beyond their mere existence, emphasising the critical role of oversight entities in the implementation and application of the DSA. Overall, this study highlights the paramount importance of vertical transparency in providing a comprehensive understanding of the aggregated risks associated with high-reach AI technologies, exemplified by social media recommender systems.

Suggested Citation

  • Söderlund, Kasia & Engström, Emma & Haresamudram, Kashyap & Larsson, Stefan & Strimling, Pontus, 2024. "Regulating high-reach AI: On transparency directions in the Digital Services Act," Internet Policy Review: Journal on Internet Regulation, Alexander von Humboldt Institute for Internet and Society (HIIG), Berlin, vol. 13(1), pages 1-31.
  • Handle: RePEc:zbw:iprjir:296496
    DOI: 10.14763/2024.1.1746

    Download full text from publisher

    File URL: https://www.econstor.eu/bitstream/10419/296496/1/1890394238.pdf
    Download Restriction: no

    File URL: https://libkey.io/10.14763/2024.1.1746?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    References listed on IDEAS

    1. Edwards, Lilian & Veale, Michael, 2017. "Slave to the Algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for," LawArXiv 97upg, Center for Open Science.
    2. Yesilada, Muhsin & Lewandowsky, Stephan, 2022. "Systematic review: YouTube recommendations and problematic content," Internet Policy Review: Journal on Internet Regulation, Alexander von Humboldt Institute for Internet and Society (HIIG), Berlin, vol. 11(1), pages 1-22.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. König, Pascal D. & Wenzelburger, Georg, 2021. "The legitimacy gap of algorithmic decision-making in the public sector: Why it arises and how to address it," Technology in Society, Elsevier, vol. 67(C).
    2. Hazel Si Min Lim & Araz Taeihagh, 2019. "Algorithmic Decision-Making in AVs: Understanding Ethical and Technical Concerns for Smart Cities," Sustainability, MDPI, vol. 11(20), pages 1-28, October.
    3. Buhmann, Alexander & Fieseler, Christian, 2021. "Towards a deliberative framework for responsible innovation in artificial intelligence," Technology in Society, Elsevier, vol. 64(C).
    4. Cobbe, Jennifer & Veale, Michael & Singh, Jatinder, 2023. "Understanding Accountability in Algorithmic Supply Chains," SocArXiv p4sey, Center for Open Science.
    5. Kirsten Martin & Ari Waldman, 2023. "Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions," Journal of Business Ethics, Springer, vol. 183(3), pages 653-670, March.
    6. Jesselyn M. Baduria & Justin Banquillo & Lerma G. Vergara, 2024. "Balancing of Freedom of Expression and Community Guidelines for YouTube Content Creators," International Journal of Research and Innovation in Social Science, International Journal of Research and Innovation in Social Science (IJRISS), vol. 8(11), pages 3589-3607, November.
    7. Veale, Michael & Binns, Reuben & Van Kleek, Max, 2018. "Some HCI Priorities for GDPR-Compliant Machine Learning," LawArchive wm6yk_v1, Center for Open Science.
    8. Vesnic-Alujevic, Lucia & Nascimento, Susana & Pólvora, Alexandre, 2020. "Societal and ethical impacts of artificial intelligence: Critical notes on European policy frameworks," Telecommunications Policy, Elsevier, vol. 44(6).
    9. Veale, Michael, 2017. "Logics and practices of transparency and opacity in real-world applications of public sector machine learning," SocArXiv 6cdhe, Center for Open Science.
    10. Mazur Joanna, 2019. "Automated Decision-Making and the Precautionary Principle in EU Law," TalTech Journal of European Studies, Sciendo, vol. 9(4), pages 3-18, December.
    11. Daniela Sele & Marina Chugunova, 2023. "Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making," Rationality and Competition Discussion Paper Series 438, CRC TRR 190 Rationality and Competition.
    12. Frederik Zuiderveen Borgesius & Joost Poort, 2017. "Online Price Discrimination and EU Data Privacy Law," Journal of Consumer Policy, Springer, vol. 40(3), pages 347-366, September.
    13. Veale, Michael & Van Kleek, Max & Binns, Reuben, 2018. "Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making," SocArXiv 8kvf4, Center for Open Science.
    14. Kira J.M. Matus & Michael Veale, 2022. "Certification systems for machine learning: Lessons from sustainability," Regulation & Governance, John Wiley & Sons, vol. 16(1), pages 177-196, January.
    15. Larisa Găbudeanu & Iulia Brici & Codruța Mare & Ioan Cosmin Mihai & Mircea Constantin Șcheau, 2021. "Privacy Intrusiveness in Financial-Banking Fraud Detection," Risks, MDPI, vol. 9(6), pages 1-22, June.
    16. Rolf H. Weber, 2021. "Artificial Intelligence ante portas: Reactions of Law," J, MDPI, vol. 4(3), pages 1-14, September.
    17. I. Ooijen & Helena U. Vrabec, 2019. "Does the GDPR Enhance Consumers’ Control over Personal Data? An Analysis from a Behavioural Perspective," Journal of Consumer Policy, Springer, vol. 42(1), pages 91-107, March.
    18. Janssen, Patrick & Sadowski, Bert M., 2021. "Bias in Algorithms: On the trade-off between accuracy and fairness," 23rd ITS Biennial Conference, Online Conference / Gothenburg 2021. Digital societies and industrial transformations: Policies, markets, and technologies in a post-Covid world 238032, International Telecommunications Society (ITS).
    19. Matus, Kira & Veale, Michael, 2021. "Certification Systems for Machine Learning: Lessons from Sustainability," SocArXiv pm3wy, Center for Open Science.
    20. Gorwa, Robert, 2019. "What is Platform Governance?," SocArXiv fbu27_v1, Center for Open Science.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:zbw:iprjir:296496. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ZBW - Leibniz Information Centre for Economics (email available below). General contact details of provider: https://policyreview.info/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.