WORKING PAPER • VOL. 6 • N° 18 • January 2020
by Dr Monique Mann and Professor Tobias Matzner
ABSTRACT
The potential for biases being built into algorithms has been known for some time (e.g., Friedman and Nissenbaum, 1996), yet the literature has only recently demonstrated the ways algorithmic profiling can result in social sorting and harm marginalized groups (e.g., Browne, 2015; Eubanks, 2018; Noble, 2018). We contend that as algorithmic complexity increases, biases will become more sophisticated and more difficult to identify, control for, or contest. Our argument has four steps. First, we show how harnessing algorithms means that data gathered at a particular place and time, relating to specific persons, can be used to build group models that are applied in different contexts to different persons. Privacy and data protection rights, with their focus on individuals (Coll, 2014; Parsons, 2015), therefore do not protect against the discriminatory potential of algorithmic profiling. Second, we explore the idea that anti-discrimination regulation may be more promising, while acknowledging its limitations. Third, we argue that for anti-discrimination regulation to be harnessed effectively, it must confront emergent forms of discrimination or risk creating new invisibilities, including invisibility from existing safeguards. Finally, we outline suggestions for addressing emergent forms of discrimination and exclusionary invisibilities through intersectional and post-colonial analysis.
Keywords: Algorithms, profiling, GDPR, data protection, discrimination, intersectionality
Brussels Privacy Hub
Law Science Technology & Society (LSTS)
Vrije Universiteit Brussel
Pleinlaan 2 • 1050 Brussels
Belgium
Copyright © Brussels Privacy Hub