commit | d51eedc0beea536ea563ade62116f37633eebf4d | [log] [tgz] |
---|---|---|
author | Differential Privacy Team <noreply@google.com> | Tue Jan 11 06:31:48 2022 -0800 |
committer | miracvbasaran <miracvbasaran@gmail.com> | Fri Jan 14 15:11:20 2022 +0100 |
tree | 0f6145e0fd26c6b0bd4a8faa5a94e4580c6e87c0 | |
parent | 4c867aae8ea6a6831d2ac0ab749cd5ae24e047b4 [diff] |
Fix bugs in Go, implement BoundedVariance in Java, Renyi DP in Python

Go:
- Fix a bug in ThresholdedResult for Count that may lead to higher than intended delta
- Add checks for thresholdDelta in noise.Threshold()

Privacy on Beam:
- Fix small bugs with some tests, making them less flaky
- Use RoundedLaplaceTolerance instead of LaplaceTolerance for integer aggregation in tests

Java:
- Implement BoundedVariance
- Fail to build bounded quantiles, mean and variance if bounds are equal

Accounting:
- Introduce a new PrivacyAccountant implementation based on Renyi DP. This API is forked from TF Privacy, which will soon be switched to depend on the version here
- Add accounting for tree aggregation as described in "Practical and Private (Deep) Learning without Sampling or Shuffling"

PiperOrigin-RevId: 417627089
Change-Id: Id1f0ffbe607ae8f90b488053394eb577f2d48c6b
GitOrigin-RevId: b02ccac42d590f72c2f20440721e4d0126a47f78
This repository contains libraries to generate ε- and (ε, δ)-differentially private statistics over datasets. It contains the following tools.
To get started on generating differentially private data, we recommend you follow the Privacy on Beam codelab.
Currently, the DP building block libraries support the following algorithms:
Algorithm | C++ | Go | Java |
---|---|---|---|
Laplace mechanism | Supported | Supported | Supported |
Gaussian mechanism | Supported | Supported | Supported |
Count | Supported | Supported | Supported |
Sum | Supported | Supported | Supported |
Mean | Supported | Supported | Supported |
Variance | Supported | Supported | Supported |
Standard deviation | Supported | Supported | Planned |
Quantiles | Supported | Supported | Supported |
Automatic bounds approximation | Supported | Planned | Planned |
Truncated geometric thresholding | Supported | Supported | Supported |
Laplace thresholding | Supported | Supported | Supported |
Gaussian thresholding | Planned | Supported | Supported |
Implementations of the Laplace mechanism and the Gaussian mechanism use secure noise generation. These mechanisms can be used to perform computations that aren't covered by the algorithms implemented in our libraries.
The DP building block libraries are suitable for research, experimental, or production use cases, while the other tools are currently experimental and subject to change.
To run the differential privacy library, you need Bazel version 4.1.0. If you don't have it already, follow the installation instructions for your platform on the Bazel website.
You also need to install Git, if you don't have it already. Follow the instructions for your platform on the Git website.
Once you've installed Bazel and Git, open a Terminal and clone the differential privacy directory into a local folder:
git clone https://github.com/google/differential-privacy.git
Navigate into the differential-privacy folder you just created, and build the differential privacy library and dependencies using Bazel (note: ... is a part of the command and not a placeholder):
To build the C++ library, run:
cd cc
bazel build ...
To build the Go library, run:
cd go
bazel build ...
To build the Java library, run:
cd java
bazel build ...
You may need to install additional dependencies when building the PostgreSQL extension, for example on Ubuntu you will need these packages:
sudo apt-get install make libreadline-dev bison flex
Differential privacy requires a bound on the maximum number of contributions each user can make to a single aggregation. The DP building block libraries don't perform such bounding: their implementations assume that each user contributes only a fixed number of rows to each partition. That number can be configured by the user. The library neither verifies nor enforces this limit; it is the caller's responsibility to pre-process the data to enforce it.
We chose not to implement this step at the DP building block level because it requires some global operation over the data: group by user, and aggregate or subsample the contributions of each user before passing them on to the DP building block aggregators. Given scalability constraints, this pre-processing must be done by a higher-level part of the infrastructure, typically a distributed processing framework: for example, Privacy on Beam relies on Apache Beam for this operation.
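The pre-processing described above can be sketched as follows. This is a hypothetical in-memory Go version written for illustration (the `row` type, field names, and `boundContributions` helper are our own); a real pipeline would perform the group-by-user step in a distributed framework such as Apache Beam, as Privacy on Beam does.

```go
package main

import (
	"fmt"
	"math/rand"
)

// row is a hypothetical input record: one user's contribution to one partition.
type row struct {
	userID    string
	partition string
	value     float64
}

// boundContributions keeps at most maxPerPartition randomly chosen rows per
// (user, partition) pair, so that downstream DP aggregators see the fixed
// per-user contribution bound they assume.
func boundContributions(rows []row, maxPerPartition int) []row {
	// Group rows by (user, partition).
	grouped := make(map[[2]string][]row)
	for _, r := range rows {
		key := [2]string{r.userID, r.partition}
		grouped[key] = append(grouped[key], r)
	}
	var out []row
	for _, rs := range grouped {
		// Subsample rather than truncate, so the kept rows are not biased
		// toward the order in which they arrived.
		rand.Shuffle(len(rs), func(i, j int) { rs[i], rs[j] = rs[j], rs[i] })
		if len(rs) > maxPerPartition {
			rs = rs[:maxPerPartition]
		}
		out = append(out, rs...)
	}
	return out
}

func main() {
	rows := []row{
		{"alice", "p1", 1}, {"alice", "p1", 2}, {"alice", "p1", 3},
		{"bob", "p1", 4},
	}
	bounded := boundContributions(rows, 2)
	fmt.Println(len(bounded)) // alice is capped at 2 rows, bob keeps 1, so 3 total
}
```

Subsampling (rather than dropping the user entirely or keeping the first rows) preserves an unbiased view of each user's data while still enforcing the sensitivity bound the DP aggregators rely on.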
For more detail about our approach to building scalable end-to-end differential privacy frameworks, we recommend reading:
We will continue to publish updates and improvements to the library. We are happy to accept contributions to this project. Please follow our guidelines when sending pull requests. We will respond to issues filed in this project. If we intend to stop publishing improvements and responding to issues, we will publish a notice here at least 3 months in advance.
This is not an officially supported Google product.
We are always keen to learn how you use this library and which use cases it helps you solve. We have two communication channels:
A public discussion group where we will also share our preliminary roadmap, updates, events, etc.
A private email alias at dp-open-source@google.com where you can reach us directly about your use cases and what more we can do to help.
Please refrain from sending any personally identifiable information. If you wish to delete a message you've previously sent, please contact us.