diffpriv: practical differential privacy in R.


Description

The diffpriv package is a collection of generic tools for privacy-aware data science, under the formal framework of differential privacy. A differentially-private mechanism can release to untrusted third parties responses such as statistics and models fit on privacy-sensitive data. Due to the formal worst-case nature of the framework, however, mechanism development typically requires theoretical analysis. diffpriv offers a turn-key approach to differential privacy.

General-purpose mechanisms

Differential privacy's popularity is owed in part to a number of generic mechanisms for privatizing non-private target functions. The virtual S4 class DPMech-class captures common features of these mechanisms and is superclass to concrete mechanisms including the Laplace mechanism (DPMechLaplace; Dwork et al., 2006) and the exponential mechanism (DPMechExponential; McSherry and Talwar, 2007).

DPMech-class-derived objects are initialized with a problem-specific, non-private target function. Subsequently, the releaseResponse method privatizes responses of target on input datasets. The level of privatization depends on the given privacy parameters: a DPParamsEps object or an object of a derived parameters class.
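
For illustration, a minimal sketch in the style of the package vignette, privatizing a sample mean. The class DPMechLaplace and the exact argument names are assumed from the package API and may differ slightly across versions; see vignette("diffpriv") for authoritative usage.

library(diffpriv)
## non-private target: the mean of a numeric dataset of known length n
f <- function(X) mean(X)
n <- 100
## Laplace mechanism; the mean of n values in [0, 1] has sensitivity 1/n
mech <- DPMechLaplace(target = f, sensitivity = 1 / n, dims = 1)
X <- runif(n)                     ## privacy-sensitive dataset
p <- DPParamsEps(epsilon = 1)     ## privacy budget epsilon = 1
r <- releaseResponse(mech, privacyParams = p, X = X)
r$response                        ## privatized mean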

Privatize anything with sensitivity measurement

diffpriv mechanisms have in common a reliance on the 'sensitivity' of the target function to small changes in input datasets. This sensitivity must be provably bounded for an application's target in order for differential privacy to be proved, and it is used to calibrate the privacy-preserving randomization. Unfortunately, bounding sensitivity is often prohibitively complex, for example when target is an arbitrary computer program. All DPMech-class mechanisms therefore offer a sensitivitySampler method, due to Rubinstein and Aldà (2017), that repeatedly probes target to estimate its sensitivity automatically. Mechanisms with estimated sensitivities achieve a slightly weaker form of random differential privacy, due to Hall et al. (2012), but without any theoretical analysis necessary.
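
A hedged sketch of this workflow, again following the vignette's pattern: the oracle below is a hypothetical sampling distribution standing in for data representative of the application, and the constructor arguments are assumed from the package API.

library(diffpriv)
f <- function(X) mean(X)
mech <- DPMechLaplace(target = f, dims = 1)   ## sensitivity left unspecified
oracle <- function(m) runif(m)                ## samples representative datasets of size m
## probe target to estimate sensitivity; release then satisfies random differential privacy
mech <- sensitivitySampler(mech, oracle = oracle, n = 100, gamma = 0.1)
X <- runif(100)
r <- releaseResponse(mech, privacyParams = DPParamsEps(epsilon = 1), X = X)
r$response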

References

Benjamin I. P. Rubinstein and Francesco Aldà. "Pain-Free Random Differential Privacy with Sensitivity Sampling." In the 34th International Conference on Machine Learning (ICML 2017), May 2017.

Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. "Calibrating noise to sensitivity in private data analysis." In Theory of Cryptography Conference, pp. 265-284. Springer Berlin Heidelberg, 2006.

Frank McSherry and Kunal Talwar. "Mechanism design via differential privacy." In the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS'07), pp. 94-103. IEEE, 2007.

Rob Hall, Alessandro Rinaldo, and Larry Wasserman. "Random Differential Privacy." Journal of Privacy and Confidentiality, 4(2), pp. 43-59, 2012.

Examples

## Not run: 
## for full examples see the diffpriv vignette
vignette("diffpriv")

## End(Not run)
