Printed from https://ideas.repec.org/p/osf/lawarx/wm6yk.html

Some HCI Priorities for GDPR-Compliant Machine Learning

Author

Listed:
  • Veale, Michael
  • Binns, Reuben
  • Van Kleek, Max

Abstract

Cite as: Michael Veale, Reuben Binns and Max Van Kleek (2018) Some HCI Priorities for GDPR-Compliant Machine Learning. The General Data Protection Regulation: An Opportunity for the CHI Community? (CHI-GDPR 2018), Workshop at ACM CHI'18, 22 April 2018, Montreal, Canada.

In this short paper, we consider the roles of HCI in enabling better governance of consequential machine learning systems, using the rights and obligations laid out in the 2016 EU General Data Protection Regulation (GDPR)---a law which involves heavy interaction with people and systems. Focussing on those areas that relate to algorithmic systems in society, we propose roles for HCI in legal contexts in relation to fairness, bias and discrimination; data protection by design; data protection impact assessments; transparency and explanations; the mitigation and understanding of automation bias; and the communication of envisaged consequences of processing.

Suggested Citation

  • Veale, Michael & Binns, Reuben & Van Kleek, Max, 2018. "Some HCI Priorities for GDPR-Compliant Machine Learning," LawArXiv wm6yk, Center for Open Science.
  • Handle: RePEc:osf:lawarx:wm6yk
    DOI: 10.31219/osf.io/wm6yk

    Download full text from publisher

    File URL: https://osf.io/download/5aafb81180f2d3000d5a38ae/
    Download Restriction: no

    File URL: https://libkey.io/10.31219/osf.io/wm6yk?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access via your library subscription.

    References listed on IDEAS

    1. Edwards, Lilian & Veale, Michael, 2017. "Slave to the Algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for," LawArXiv 97upg, Center for Open Science.
    2. Veale, Michael & Binns, Reuben, 2017. "Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data," SocArXiv ustxg, Center for Open Science.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as, and are cited by the same works as, this one.
    1. Veale, Michael & Van Kleek, Max & Binns, Reuben, 2018. "Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making," SocArXiv 8kvf4, Center for Open Science.
    2. Kira J.M. Matus & Michael Veale, 2022. "Certification systems for machine learning: Lessons from sustainability," Regulation & Governance, John Wiley & Sons, vol. 16(1), pages 177-196, January.
    3. Matus, Kira & Veale, Michael, 2021. "Certification Systems for Machine Learning: Lessons from Sustainability," SocArXiv pm3wy, Center for Open Science.
    4. König, Pascal D. & Wenzelburger, Georg, 2021. "The legitimacy gap of algorithmic decision-making in the public sector: Why it arises and how to address it," Technology in Society, Elsevier, vol. 67(C).
    5. Vasiliki Koniakou, 2023. "From the “rush to ethics” to the “race for governance” in Artificial Intelligence," Information Systems Frontiers, Springer, vol. 25(1), pages 71-102, February.
    6. Alina Köchling & Marius Claus Wehner, 2020. "Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development," Business Research, Springer;German Academic Association for Business Research, vol. 13(3), pages 795-848, November.
    7. Koefer, Franziska & Lemken, Ivo & Pauls, Jan, 2023. "Fairness in algorithmic decision systems: A microfinance perspective," EIF Working Paper Series 2023/88, European Investment Fund (EIF).
    8. Hazel Si Min Lim & Araz Taeihagh, 2019. "Algorithmic Decision-Making in AVs: Understanding Ethical and Technical Concerns for Smart Cities," Sustainability, MDPI, vol. 11(20), pages 1-28, October.
    9. Buhmann, Alexander & Fieseler, Christian, 2021. "Towards a deliberative framework for responsible innovation in artificial intelligence," Technology in Society, Elsevier, vol. 64(C).
    10. Cobbe, Jennifer & Veale, Michael & Singh, Jatinder, 2023. "Understanding Accountability in Algorithmic Supply Chains," SocArXiv p4sey, Center for Open Science.
    11. Kirsten Martin & Ari Waldman, 2023. "Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions," Journal of Business Ethics, Springer, vol. 183(3), pages 653-670, March.
    12. Brielle Lillywhite & Gregor Wolbring, 2022. "Emergency and Disaster Management, Preparedness, and Planning (EDMPP) and the ‘Social’: A Scoping Review," Sustainability, MDPI, vol. 14(20), pages 1-50, October.
    13. Gorwa, Robert, 2019. "What is Platform Governance?," SocArXiv fbu27, Center for Open Science.
    14. Vesnic-Alujevic, Lucia & Nascimento, Susana & Pólvora, Alexandre, 2020. "Societal and ethical impacts of artificial intelligence: Critical notes on European policy frameworks," Telecommunications Policy, Elsevier, vol. 44(6).
    15. Veale, Michael, 2017. "Logics and practices of transparency and opacity in real-world applications of public sector machine learning," SocArXiv 6cdhe, Center for Open Science.
    16. Tobias D. Krafft & Katharina A. Zweig & Pascal D. König, 2022. "How to regulate algorithmic decision‐making: A framework of regulatory requirements for different applications," Regulation & Governance, John Wiley & Sons, vol. 16(1), pages 119-136, January.
    17. Emre Bayamlıoğlu, 2022. "The right to contest automated decisions under the General Data Protection Regulation: Beyond the so‐called “right to explanation”," Regulation & Governance, John Wiley & Sons, vol. 16(4), pages 1058-1078, October.
    18. Mazur Joanna, 2019. "Automated Decision-Making and the Precautionary Principle in EU Law," TalTech Journal of European Studies, Sciendo, vol. 9(4), pages 3-18, December.
    19. Daniela Sele & Marina Chugunova, 2023. "Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making," Rationality and Competition Discussion Paper Series 438, CRC TRR 190 Rationality and Competition.
    20. Anna Aseeva, 2023. "Liable and Sustainable by Design: A Toolbox for a Regulatory Compliant and Sustainable Tech," Sustainability, MDPI, vol. 16(1), pages 1-27, December.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:osf:lawarx:wm6yk. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. Registering allows you to link your profile to this item, and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: OSF (email available below). General contact details of provider: https://osf.io/preprints/lawarxiv/discover.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.