Brussels Privacy Symposium


 

The Brussels Privacy Symposium is a joint programme of the Future of Privacy Forum and the Brussels Privacy Hub that aims to address the following challenges regarding the de-identification of personal information, taking into account its central role in current privacy policy, law, and practice.

 

There are deep disagreements about the efficacy of de-identification in mitigating privacy risks. Some critics argue that it is impossible to eliminate privacy harms from publicly released data using de-identification, because other available data sets will allow attackers to identify individuals through linkage attacks. Defenders of de-identification counter that, despite the theoretical and demonstrated ability to mount such attacks, the likelihood of re-identification for most data sets remains minimal; as a practical matter, they argue, most data sets remain securely de-identified using established techniques.
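To make the critics' linkage-attack argument concrete, here is a minimal Python sketch of such an attack; all data sets, field names, and values below are invented for illustration, and real attacks involve far larger auxiliary data sets.

    # Hypothetical linkage attack: re-identify "anonymous" records by joining
    # them with a public auxiliary data set on shared quasi-identifiers.
    deidentified_health_data = [
        {"zip": "1050", "dob": "1975-03-14", "sex": "F", "diagnosis": "asthma"},
        {"zip": "1000", "dob": "1982-11-02", "sex": "M", "diagnosis": "diabetes"},
    ]
    public_voter_roll = [
        {"name": "A. Janssens", "zip": "1050", "dob": "1975-03-14", "sex": "F"},
        {"name": "B. Peeters", "zip": "1040", "dob": "1990-06-30", "sex": "M"},
    ]
    QUASI_IDENTIFIERS = ("zip", "dob", "sex")

    def link(records, auxiliary):
        """Attach a name to each record whose quasi-identifiers match an auxiliary row."""
        matches = []
        for record in records:
            key = tuple(record[q] for q in QUASI_IDENTIFIERS)
            for aux in auxiliary:
                if tuple(aux[q] for q in QUASI_IDENTIFIERS) == key:
                    matches.append({"name": aux["name"], **record})
        return matches

    print(link(deidentified_health_data, public_voter_roll))
    # The first "anonymous" health record is now attributed to "A. Janssens".

Whether such joins succeed in practice depends on how unique the quasi-identifiers are and on which auxiliary data sets an attacker can plausibly obtain, which is precisely where the two camps disagree.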

 

There is no agreement regarding the technical questions underlying the de-identification debate, nor is there consensus over how best to advance the discussion about the benefits and limits of de-identification. The growing use of open data holds great promise for individuals and society, but also brings risk. And the need for sound principles governing data release has never been greater.

 

 

2nd Annual Brussels Privacy Symposium 2017 - AI Ethics: The Privacy Challenge

 

On 6 November 2017, the Brussels Privacy Symposium will focus on privacy issues surrounding artificial intelligence. Enhancing efficiency, increasing safety, improving accuracy, and reducing negative externalities are just some of AI's key benefits. However, AI also presents risks of opaque decision making, biased algorithms, security and safety vulnerabilities, and upended labor markets. In particular, AI and machine learning challenge traditional notions of privacy and data protection, including individual control, transparency, access, and data minimization. On content and social platforms, they can lead to narrowcasting, discrimination, and filter bubbles.

 

A group of industry leaders recently established a partnership to study and formulate best practices on AI technologies. Last year, the White House issued a report titled Preparing for the Future of Artificial Intelligence and announced a National Artificial Intelligence Research and Development Strategic Plan, laying out a strategic vision for federally funded AI research and development. These efforts seek to reconcile the tremendous opportunities that machine learning, human–machine teaming, automation, and algorithmic decision making promise in enhanced safety, efficiency gains, and improvements in quality of life, with the legal and ethical issues that these new capabilities present for democratic institutions, human autonomy, and the very fabric of our society.

 

Papers and Symposium discussion will address the following issues:

 

  • Privacy values in design
  • Algorithmic due process and accountability
  • Fairness and equity in automated decision making
  • Accountable machines
  • Formalizing definitions of privacy, fairness, and equity
  • Societal implications of autonomous experimentation
  • Deploying machine learning and AI to enhance privacy
  • Cybersafety and privacy

For more information, see the call for papers.

 

 

 

Brussels Privacy Symposium 2016 - Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization

 

On 8 November 2016, the Brussels Privacy Symposium will host an academic workshop, Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization, which aims to address these challenges regarding the de-identification of personal information and its central role in current privacy policy, law, and practice.

 


 

Selected authors from multiple disciplines, including law, computer science, statistics, engineering, social science, ethics, and business, will present papers at this full-day programme. The final programme will be available shortly.

 

  • Programme
  • Abstracts
  • Final papers
  • Presentations
  • Photographs

Connect with us

 

Brussels Privacy Hub

Law Science Technology & Society (LSTS)

Vrije Universiteit Brussel

Pleinlaan 2 • 1050 Brussels

Belgium

info@brusselsprivacyhub.org

@privacyhub_bru

Stay informed

 

Keep up to date with our activities and developments. Sign up for our newsletter:


Copyright © Brussels Privacy Hub