Flexible and scalable privacy assessment for very large datasets, with an application to official governmental microdata
Authors
Mário Alvim, Natasha Fernandes, Annabelle McIver, Carroll Morgan, Gabriel Nunes
School of Computer Science and Engineering, UNSW, Sydney 2052, Australia
Abstract
We present a systematic refactoring of the conventional treatment of privacy analyses, basing it on mathematical concepts from the framework of Quantitative Information Flow (QIF). The approach we suggest brings three principal advantages: it is flexible, allowing for precise quantification and comparison of privacy risks for attacks both known and novel; it can be computationally tractable for very large, longitudinal datasets; and its results are explainable both to politicians and to the general public. We apply our approach to a very large case study: the Educational Censuses of Brazil, curated by the governmental agency INEP, which comprise over 90 attributes of approximately 50 million individuals released longitudinally every year since 2007. These datasets have only very recently (2018-2021) attracted legislation to regulate their privacy -- while at the same time continuing to maintain the openness that had been sought in Brazilian society. INEP's reaction to that legislation was the genesis of our project with them. In our conclusions here we share the scientific, technical, and communication lessons we learned in the process.
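To give a flavour of the QIF measures the abstract refers to: QIF models a mechanism as a channel from secrets to observations, and compares the adversary's chance of guessing the secret before and after observing an output. The sketch below is purely illustrative (it is not the paper's implementation); the prior `pi` and channel `C` are made-up example values.

```python
# Illustrative QIF sketch (assumed example, not the paper's code):
# prior/posterior Bayes vulnerability and multiplicative leakage.
# pi[x] is the adversary's prior on secret x; C[x][y] is the
# probability the mechanism outputs y when the secret is x.

def prior_vulnerability(pi):
    # Chance of guessing the secret in one try, before any observation.
    return max(pi)

def posterior_vulnerability(pi, C):
    # Expected chance of guessing correctly after seeing the output:
    # for each output y, the adversary picks the most likely secret.
    n_outputs = len(C[0])
    return sum(max(pi[x] * C[x][y] for x in range(len(pi)))
               for y in range(n_outputs))

pi = [0.5, 0.25, 0.25]               # prior over three secret values
C = [[0.8, 0.2],                     # channel matrix: rows = secrets,
     [0.5, 0.5],                     # columns = observable outputs
     [0.1, 0.9]]

Vp = prior_vulnerability(pi)         # 0.5
Vq = posterior_vulnerability(pi, C)  # 0.625
leakage = Vq / Vp                    # 1.25: the channel multiplies the
                                     # adversary's success chance by 1.25
```

A leakage of 1 would mean the release reveals nothing beyond the prior; larger values quantify how much an attack (known or novel, modelled as a channel) helps the adversary.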
Keywords: privacy, formal methods, quantitative information flow, very large datasets, longitudinal datasets
BibTeX Entry
@inproceedings{Alvim_22,
  author    = {Alvim, M{\'a}rio and Fernandes, Natasha and McIver, Annabelle and Morgan, Carroll and Nunes, Gabriel},
  booktitle = {Proc. Priv. Enhancing Technol. 2022(4)},
  pages     = {378--399},
  paperurl  = {https://trustworthy.systems/publications/papers/Alvim_22.pdf},
  title     = {Flexible and scalable privacy assessment for very large datasets, with an application to official governmental microdata},
  year      = {2022}
}