Author
Listed:
- Battaglini, Manuela
- Rasmussen, Steen
Abstract
Automated decision-making and profiling techniques provide tremendous opportunities to companies and organizations; however, they can also be harmful to individuals, because current laws and their interpretations give data subjects sufficient control neither over assessments made by automated decision-making processes nor over how the resulting profiles are used. We first briefly discuss how recent technological innovations led to big data analytics, which, through machine learning algorithms, can extract the behaviours, preferences and feelings of individuals. This automatically generated knowledge can both form the basis for effective business decisions and result in discriminatory and biased perceptions of individuals’ lives. We then observe how this situation leads to a lack of transparency in automated decision-making and profiling, and discuss the applicable legal framework. The concept of personal data is crucial here, as the Article 29 Working Party and the European Court of Justice disagree on whether artificial intelligence (AI)-generated profiles and assessments qualify as personal data. Depending on whether or not they do, individuals have the right to be informed about (Articles 13–14 GDPR) or the right of access to (Article 15 GDPR) inferred data. In practice, data protection law does not protect data subjects from the assessments that companies make through big data and machine learning algorithms: users lose control over their personal data and have no mechanism to protect themselves from such profiling, owing to trade secrets and intellectual property rights. Finally, we discuss four possible solutions to the lack of transparency in automated inferences. We explore the impact of approaches ranging from the use of open source algorithms to collecting only anonymous data, and we show how these approaches, to varying degrees, protect individuals and let them control their personal data. On that basis, we conclude by outlining the requirements for a desirable governance model of our critical digital infrastructures.
Suggested Citation
Battaglini, Manuela & Rasmussen, Steen, 2019.
"Transparency, automated decision-making processes and personal profiling,"
Journal of Data Protection & Privacy, Henry Stewart Publications, vol. 2(4), pages 331-349, July.
Handle:
RePEc:aza:jdpp00:y:2019:v:2:i:4:p:331-349
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:aza:jdpp00:y:2019:v:2:i:4:p:331-349. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Henry Stewart Talks (email available below). General contact details of provider: .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.