WORKSHOP • 11 OCTOBER 2018
by Giorgia Bucaria, Brussels Privacy Hub, LSTS, VUB
On 11 October 2018, the Brussels Privacy Hub organised an event titled "Data localisation: Analysing a growing global regulatory phenomenon". The event was chaired by Hub co-director Professor Paul de Hert (LSTS, VUB) and featured Helena U. Vrabec (Palantir, Yale Law School), who discussed the impact of data localisation measures on the data economy, and Professor Andrew Keane Woods (University of Arizona), who spoke about the relationship between data localisation measures and national sovereignty, with particular regard to jurisdictional aspects.
The event opened with a short introduction by de Hert, who explained the links between data localisation and law enforcement. He concluded his opening remarks by asking whether, in the age of information technology, the concepts of national sovereignty and jurisdiction are still valid and usable or whether, on the contrary, they are outdated in such a "connected era".
The first speaker, Helena U. Vrabec, gave a presentation titled "Data localisation measures and their impact on data economy". Her talk summarised the main findings of a paper she recently wrote on the topic ("Data localisation measures and their impact on data science") with three colleagues. For more information on the piece, click here.
She started by explaining why data localisation is so relevant in today's world. First, data localisation measures are an international issue, as they deeply affect the dynamics between states at the international level. Secondly, she pointed out that data can nowadays be considered the "oil" of the digital economy and that, as such, the ability to control and access it is particularly valuable to a number of stakeholders. In Vrabec's view, two factors enable the smooth functioning and growth of the data economy: the cross-border free flow of data, and the ability to access data, taking into account the time, energy and money such access requires and the legal requirements to be complied with. Data localisation measures can accordingly be considered a burden on the development of a data economy. This has not stopped their spread, however. Surprisingly, the more the data economy grows, the more data localisation measures are put in place: there were around 15 in 2000 and more than 50 in 2016.
Having explained the general relevance of the topic, the speaker then analysed in more depth the challenges that the spread of such measures poses. First, she clarified that data localisation measures are not focused solely on personal data; on the contrary, they cover all kinds of data (e.g. financial, scientific or judicial data). She then defined data localisation measures as "the measures limiting the processing of data to a specific geographic area or limiting the company that can access data based upon the nation of incorporation or the principal site of the company", and divided them into two classes: explicit and implicit data localisation measures.
The first cluster comprises all provisions that clearly state the intention of confining data to a specific national territory, such as duties of local storage and/or local processing and restrictions on international data transfers. Implicit data localisation measures, on the other hand, are more "subtle": they do not have the specific purpose of limiting the international free flow of data, but such limitation is their inherent effect. They include public procurement rules; requirements regarding local ownership, establishment or employment; online censorship that prevents foreign actors from providing services that would enable them to store and process data; tax provisions on foreign products that limit their use and thereby reduce the amount of data generated by their functioning; and technical standards. As the speaker pointed out, however, this dichotomy is not an absolute paradigm for assessing the nature of data localisation measures: they can vary considerably, on a spectrum that runs from "hard" to "soft" measures.
She then explained that the push toward data localisation at the national level has, by contrast, triggered an opposite movement on the international scene. International actors are increasingly interested in setting up so-called "anti-localisation measures", such as the Trans-Pacific Partnership (TPP), the Trade in Services Agreement (TiSA) and the North American Free Trade Agreement (NAFTA). These include provisions that prevent national states from adopting data localisation regimes. Such measures also exist in the EU: member states are nudged towards the creation of a free flow of data within the Union, so as to balance the pro-localisation provisions of the GDPR.
After analysing how these measures are implemented in practice, Vrabec focused on the "drivers" behind these policies, singling out an economic, a political and a technical driver. The economic driver is based on protectionism, using limitations on the international free flow of data as a tool to develop local IT service providers and, more generally, to curb international influence on the internal economy. The political driver can be sub-divided into three further categories: the wish to simplify law enforcement (also looking at Article 13 of the ECHR), the pursuit of national security, and the aim of enhancing data protection law and the privacy of data subjects. Finally, the technological driver is usually described in terms of cyber-security.
The analysis concluded with an interesting proposal for a framework to assess the desirability of such measures. In other words, the negative consequences these measures have on the economy (they can decrease GDP by up to 1.7%) can, under certain conditions, be balanced by a legitimate goal (such as privacy or the efficiency of law enforcement). On this basis, and with the aim of creating a tool allowing a case-by-case assessment of the circumstances at stake, Vrabec developed a so-called "proportionality test". The test consists of three steps: the first analyses the nature and scope of the measures (e.g. a restriction on the free flow of sensitive data is more acceptable and justified than a restriction on the free flow of non-personal data); the second discusses the purpose of the measures and the causal link between them and the objective they serve; the third evaluates the implementation mechanism of the chosen measures, in order to check whether they impose excessive costs on stakeholders and, if so, to suggest measures with lower costs.
This intervention was followed by a short reaction from Professor Andrew Keane Woods, who focused on the jurisdictional repercussions of data localisation measures and their impact on the concepts of national sovereignty and law enforcement. According to Woods, our era is characterised by a progressive and unstoppable loss of control by national states over the basic functions that were once fundamental to their existence. Having perceived this phenomenon, states are trying to regain control. Data localisation measures are thus nothing more than another battleground in states' attempts to reaffirm their sovereignty on the international scene. States are not interested in data localisation measures per se: the driver behind their implementation is always connected to the larger interest of national sovereignty.
Woods affirmed that this ongoing mechanism is particularly evident in the field of judicial data. It is increasingly common that, in order to carry out their investigations, law enforcement authorities need the help of foreign companies holding data relevant to the investigation. When these companies are not located in the national territory, the law enforcement authority has no appropriate legal tool to compel the company to disclose the data; only the authorities of the state where the company is based have that power. This may lead to the paradoxical situation where a state must ask another state for help in order to investigate a fully domestic crime. As Woods pointed out, this can be frustrating for national authorities. Data localisation measures are therefore a clear consequence of this problem: if companies are obliged to keep data in the state where they were collected, data-supported law enforcement once again becomes a purely domestic matter. The professor provided some specific examples, such as the "Microsoft Ireland case" in the USA (Microsoft Corporation v United States of America). In that case, Microsoft, in response to a warrant from the Department of Justice for personal user data and email content held on an Irish server, argued that such data were not subject to US jurisdiction. The Second Circuit Court ruled that the warrants did not have extraterritorial effect. The case highlighted the problems surrounding cross-border transfers of data and led, in 2018, to the introduction of the CLOUD Act.
Finally, the professor emphasised that national states have a wide range of options for resolving the difficulties of cross-border transfers of judicially relevant data. They can either erect technical barriers (e.g. a legal requirement to keep data storage on the national territory) or, on the contrary, negotiate internationally with the aim of breaking down jurisdictional barriers. The latter approach allows states to regain control of data not by constraining the technicalities of data storage and processing, but by creating a legal mechanism that leads to the same result. It seems preferable for a number of reasons, one being that it does not restrict the free flow of data and therefore does not impede the growth of the data economy and its benefits.
The presentations were followed by a lively discussion with the audience.
Copyright © Brussels Privacy Hub