PipelineDP4j: update readmes
Change-Id: I349b873f1adf9a0e548ebb1e3ed7e0b2a7c30b58
GitOrigin-RevId: b164dd3b302b61013f257e821d14277e25320e57
diff --git a/cc/algorithms/approx-bounds-provider.h b/cc/algorithms/approx-bounds-provider.h
index 4dbe2a6..28eb819 100644
--- a/cc/algorithms/approx-bounds-provider.h
+++ b/cc/algorithms/approx-bounds-provider.h
@@ -169,6 +169,7 @@
RETURN_IF_ERROR(ValidateIsInExclusiveInterval(options.success_probability,
0, 1, "Success probability"));
+
// Using `new` to access non-public constructor.
return absl::WrapUnique(new ApproxBoundsProvider<T>(
options.epsilon, options.num_bins, options.scale, options.base,
diff --git a/examples/pipelinedp4j/README.md b/examples/pipelinedp4j/README.md
index 20d1396..023b407 100644
--- a/examples/pipelinedp4j/README.md
+++ b/examples/pipelinedp4j/README.md
@@ -102,7 +102,7 @@
```
1. Run the program (if you want to run Spark example, change `beam` to `spark`,
- `BeamExample` to `SparkExample` or `SparkDatasetExample` and
+ `BeamExample` to `SparkExample` or `SparkDataFrameExample` and
`--outputFilePath=output.txt` to `--outputFolder=output`):
```shell
@@ -124,12 +124,6 @@
from the Maven repository, eliminating the need for the library source files and
the Bazel files (`WORKSPACE.bazel`, `.bazelversion` and `BUILD.bazel`).
-NOTE: Currently the version of examples at HEAD does not build with Maven
-because the examples were migrated to the new version of public API which has
-not been uploaded to the Maven repository yet. It will be uploaded in a few
-days. For now, use the version of examples from commit that preceded the commit
-when this note was created (use git blame).
-
To proceed, ensure Maven is installed on your system. If you're using Linux or
MacOS, you can install it by running `sudo apt-get install maven` or `brew
install maven`, respectively. While any Maven version should work, refer to
@@ -147,7 +141,7 @@
* `examples/pipelinedp4j/spark` for Spark
Then execute the following command with updated `inputFilePath` (if you want to
-run on Spark, change `BeamExample` to `SparkExample` or `SparkDatasetExample`
+run on Spark, change `BeamExample` to `SparkExample` or `SparkDataFrameExample`
and `--outputFilePath=output.txt` to `--outputFolder=output`):
```shell
@@ -159,7 +153,9 @@
View the results.
-For Beam `cat output.txt` For Spark the output is written to a folder and the
+For Beam, run `cat output.txt`.
+
+For Spark, the output is written to a folder and the
result is stored in a file whose name starts with `part-00000`: `cat
output/part-00000<...>`
diff --git a/java/BUILD b/java/BUILD
index 1ec058d..1382fcb 100644
--- a/java/BUILD
+++ b/java/BUILD
@@ -18,7 +18,7 @@
load("@rules_jvm_external//:defs.bzl", "java_export")
# Update the following version for packaging of a new release.
-_DP_LIB_VERSION = "3.0.0"
+_DP_LIB_VERSION = "4.0.0"
exports_files([
"dp_java_deps.bzl",
diff --git a/pipelinedp4j/BUILD.bazel b/pipelinedp4j/BUILD.bazel
index 41cc420..d5db4c1 100644
--- a/pipelinedp4j/BUILD.bazel
+++ b/pipelinedp4j/BUILD.bazel
@@ -30,7 +30,7 @@
)
# Update the following version for packaging of a new release.
-_RELEASE_VERSION = "0.0.1"
+_RELEASE_VERSION = "4.0.0"
pom_file(
name = "export_pom",
diff --git a/pipelinedp4j/README.md b/pipelinedp4j/README.md
index 1ff55e7..48bc558 100644
--- a/pipelinedp4j/README.md
+++ b/pipelinedp4j/README.md
@@ -18,10 +18,6 @@
## How to Use
-WARNING: Current API version (0.0.1) is experimental and will be changed in 2024
-without backward-compatibility. The experimental API won't be supported and
-maintained after that.
-
### Example
<!-- TODO: create codelab and rewrite this section. -->
@@ -44,6 +40,3 @@
[here](https://mvnrepository.com/artifact/com.google.privacy.differentialprivacy.pipelinedp4j/pipelinedp4j).
After adding this dependency into your project you can write the same code as in
the example above and it will compile.
-
-Please, don't use `0.0.1` version in production code as it is experimental and
-its maintenance will be stopped in 2024 with release of the new version.