commit 87e30acd7436b60112eec7e419f397b61caaffa9
author: Mitch Rudominer <rudominer@chromium.org>
date: Wed May 17 00:58:30 2017 +0100
tree: df672cef7a935133391efc69269bffd9e2403aa4
parent: f0eddc09c6b9dc85097b2154937c13540b898142

Fixes an old TODO in a test: we now use MessageDecrypter.DecryptMessage() to "decrypt" a message that uses the "NONE" encryption scheme, meaning that DecryptMessage() does nothing but deserialize a protobuf.

Change-Id: Ic90fabbc7a255a826183080e8f79a49ca239315e
An extensible, privacy-preserving, user-data analysis pipeline. go/cobalt-for-privacy
Fetch the code, for example via `git clone https://fuchsia.googlesource.com/cobalt`, and then run `./cobaltb.py setup`.
The Python script cobaltb.py in the root directory is used to orchestrate building, testing and deploying to Google Container Engine. (It was already used above for one-time setup.)
* `cobaltb.py -h` for general help
* `cobaltb.py <command> -h` for help on a command
* `cobaltb.py <command> <subcommand> -h` for help on a sub-command

`cobaltb.py clean`

`cobaltb.py build`

`cobaltb.py test`

This runs the whole suite of tests, finally running the end-to-end test. You can run a subset of the tests via the `--tests=` argument; see `cobaltb.py test -h` for details.

`cobaltb.py test --tests=e2e --verbose --verbose --verbose`
This stands up a complete Cobalt system running locally. It then uses the test app to send Observations to the Shuffler, uses the observation querier to wait until the Observations have arrived at the Analyzer, uses the report client library to generate a report and wait for the report to complete, and finally checks the result of the report.
The code for the end-to-end test is written in Go and is in the end_to_end_tests/src directory.
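The "wait until the Observations have arrived" step is essentially a poll loop against the Observation store. A minimal sketch of that pattern in Python (the function names here are hypothetical, not Cobalt's actual API):

```python
import time

def wait_for_observations(query_fn, expected_count,
                          timeout_s=60.0, poll_interval_s=0.1):
    """Polls query_fn() until it reports at least expected_count
    Observations, or raises TimeoutError after timeout_s seconds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        count = query_fn()  # e.g. asks the Analyzer's Observation store
        if count >= expected_count:
            return count
        time.sleep(poll_interval_s)
    raise TimeoutError("Observations did not arrive within %.1fs" % timeout_s)

# Usage with a fake querier that "receives" a batch on the third poll:
counts = iter([0, 0, 120])
print(wait_for_observations(lambda: next(counts), expected_count=100))  # prints 120
```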
Cobalt uses a custom public-key encryption scheme in which the Encoder encrypts Observations to the public key of the Analyzer before sending them to the Shuffler. This is a key part of the design of Cobalt and we refer to it via the slogan “The Shuffler shuffles sealed envelopes”, meaning that the Shuffler does not get to see the data that it is shuffling. In order for this to work there must be public/private key PEM files that can be read by the Encoder and the Analyzer. The end-to-end test uses the PEM files located in the end_to_end_tests directory named analyzer_private_key.pem.e2e_test and analyzer_public_key.pem.e2e_test. But for running Cobalt in any other environment we do not want to check a private key into source control, and so we ask each developer to generate their own key pair.
./cobaltb.py keygen
Then follow the instructions to copy the generated contents into files named analyzer_public.pem and analyzer_private.pem in your source root directory. These will get used by several of the following steps including running the demo manually and deploying to Google Container Engine.
In addition to the encryption to the Analyzer mentioned above, there is a second layer of encryption in which Envelopes are encrypted to the public key of the Shuffler. The reason for this layer is that TLS between the Encoder and the Shuffler may be terminated prior to reaching the Shuffler in some load-balanced environments. We need a second public/private key pair for this encryption. The end-to-end test uses the PEM files located in the end_to_end_tests directory named shuffler_private_key.pem.e2e_test and shuffler_public_key.pem.e2e_test. For running Cobalt in any other environment, follow the instructions above for generating analyzer_public.pem and analyzer_private.pem, but this time create two new files named shuffler_public.pem and shuffler_private.pem.
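The layering can be pictured with a toy sketch. There is no real cryptography here: "sealing" is modeled as tagging the payload with the recipient's key name, purely to illustrate that the Shuffler shuffles sealed envelopes it cannot read.

```python
def seal(payload, public_key):
    """Toy stand-in for public-key encryption: tag payload with recipient."""
    return {"to": public_key, "payload": payload}

def open_sealed(box, private_key):
    """Toy stand-in for decryption: only the matching key opens the box."""
    if box["to"] != private_key:
        raise PermissionError("wrong key")
    return box["payload"]

# Encoder: inner layer to the Analyzer, outer layer to the Shuffler.
observation = {"metric": 1, "value": "www.AAAA"}
inner = seal(observation, "analyzer_key")
envelope = seal(inner, "shuffler_key")

# Shuffler: can open the envelope, but sees only another sealed box.
contents = open_sealed(envelope, "shuffler_key")
assert contents == inner and contents["to"] == "analyzer_key"

# Analyzer: opens the inner layer and recovers the Observation.
assert open_sealed(contents, "analyzer_key") == observation
```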
You can run a complete Cobalt system locally (for example in order to give a demo) as follows. Open seven different command-line console windows and run the following commands in each one respectively:
./cobaltb.py start bigtable_emulator
./cobaltb.py start analyzer_service
./cobaltb.py start shuffler
./cobaltb.py start report_master
./cobaltb.py start test_app
./cobaltb.py start observation_querier
./tools/demo/demo_reporter.py
It is a good idea to label the tabs so you can keep track of them.
Instead of the last command, `./tools/demo/demo_reporter.py`, you could do `./cobaltb.py start report_client`. The script demo_reporter.py invokes the report_client but it has been custom-tailored for a demo: whereas report_client is a generic tool, demo_reporter.py knows specifically which metrics, encodings and reports are being used for the demo and it knows how to generate a visualization of the Basic RAPPOR report.
Note that the `./cobaltb.py start` command automatically sets the flag `-v=3` on all of the started processes. This sets the virtual logging level to 3. The Cobalt log messages have been specifically tuned to give interesting output during a demo at this virtual logging level. For example, the Analyzer service will log each time it receives a batch of Observations.
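Virtual logging of this kind gates each message on its level: a message is emitted only when its level is at or below the process's `-v` setting. A minimal Python sketch of the idea (not Cobalt's actual logging code):

```python
import sys

VLOG_LEVEL = 3  # what passing `-v=3` would set

def vlog(level, message):
    """Emits message to stderr only if its verbosity level is enabled."""
    if level <= VLOG_LEVEL:
        print("VLOG(%d): %s" % (level, message), file=sys.stderr)

vlog(3, "Received a batch of 100 Observations")  # emitted at -v=3
vlog(4, "Per-Observation details")               # suppressed at -v=3
```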
To perform the demo follow these steps.
Use the test_app to send Forculus Observations through the Shuffler to the Analyzer
encode 19 www.AAAA
encode 20 www.BBBB
send
encode 100 www.CCCC
send
Use the observation_querier to inspect the state of the Observation store.
query 50
Use the demo_reporter to generate a report: type `1` to run the Forculus report demo.

Use the test_app to send Basic RAPPOR Observations through the Shuffler to the Analyzer
set metric 2
set encoding 2
encode 500 11
encode 1000 12
encode 500 1
send
Use the observation_querier to inspect the state of the Observation store.
set metric 2
query 50
Use the demo_reporter to generate a report: type `2` to run the Basic RAPPOR report demo.

You can use Cloud Bigtable instead of a local instance of the Bigtable Emulator. In this section we describe a configuration in which the Cobalt processes are running locally but connect to Cloud Bigtable. In a later section we describe how to run the Cobalt processes themselves on Google Container Engine.
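Basic RAPPOR, used in the demo just above, reports each value as a one-hot bit vector whose bits are randomized on the client, and the report estimates true counts from the noisy aggregate. A toy sketch of the idea (simplified symmetric bit-flipping with probability p; this is not Cobalt's exact parameterization):

```python
import random

def encode(value, num_categories, p, rng):
    """One-hot encodes `value`, then flips each bit with probability p."""
    bits = [1 if i == value else 0 for i in range(num_categories)]
    return [b ^ (rng.random() < p) for b in bits]

def estimate_counts(reports, p):
    """Unbiased per-category count estimates from the noisy reports.

    Each reported bit is 1 with probability (1-p) if the true bit is 1
    and p if it is 0, so E[sum] = t*(1-2p) + n*p, giving the estimator
    t = (sum - n*p) / (1 - 2p)."""
    n = len(reports)
    return [(sum(col) - n * p) / (1 - 2 * p) for col in zip(*reports)]

rng = random.Random(42)
p = 0.1
# For example, 500 clients report category 1 and 1000 report category 2
# (out of 3 categories); the estimates come out near [0, 500, 1000].
reports = [encode(1, 3, p, rng) for _ in range(500)]
reports += [encode(2, 3, p, rng) for _ in range(1000)]
print(estimate_counts(reports, p))
```

Cobalt's actual Basic RAPPOR configuration lives in the metric and encoding configs referenced by the demo's `set encoding` command.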
You will need a Google Cloud project in which to create an instance of Cloud Bigtable and also in which to create an instance of Google Container Engine if you wish to do that later. Create a new one or use an existing one. You will need to enable billing. If you are a member of the core Cobalt team you can request access to our shared project.
Navigate to the Bigtable section of the Cloud console for your project. Here is the link for the core Cobalt team's shared project
cbt is a command-line program for interacting with Cloud Bigtable. You do not strictly need cbt in order to follow the other steps in the document but you may choose to install it anyway.
You must install a Service Account Credential on your computer in order for the Cobalt code running on your computer to be able to access Cloud Bigtable.
* Click Create Credentials and select Service Account Key as the type of key.
* Select New Service Account and assign your service account any name.
* Select JSON as the key type and click Create.
* The downloaded file must be named `service_account_credentials.json` and you must put the file in the Cobalt source root directory (next to this README file).
* Set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to the path to that file. This is necessary for the gRPC C++ code linked with Cobalt to find the credential at run-time.

Note: An alternative solution is to use OAuth tokens in order to authenticate your computer to Google Cloud Bigtable. However, at this time there seems to be a bug that is preventing this from working. The symptom is that you will see the following error message: `assertion failed: deadline.clock_type == g_clock_type`. If you see this error message it means that the OAuth flow is being attempted and has hit this bug. This happens if the gRPC code is not able to use the service account credential located at `GOOGLE_APPLICATION_CREDENTIALS`.
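If you prefer to set the variable only for the processes you launch rather than globally, a small sketch of that pattern (the file location is the assumption from the instructions above):

```python
import os
import subprocess
import sys

# Assumed location, per the instructions above: the Cobalt source root.
cred_path = os.path.join(os.getcwd(), "service_account_credentials.json")

# Build an environment for child processes with the variable set.
env = dict(os.environ)
env["GOOGLE_APPLICATION_CREDENTIALS"] = cred_path

# Any process started with this environment (here a stand-in for a
# Cobalt binary) will let gRPC find the credential at run-time.
result = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['GOOGLE_APPLICATION_CREDENTIALS'])"],
    env=env, capture_output=True, text=True)
```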
./cobaltb.py bigtable provision
This creates the Cobalt Bigtable tables in your Cloud Bigtable instance.
./cobaltb.py bigtable delete_observations
WARNING: This will permanently delete all data from the Observation Store in whichever Cloud Bigtable instance you point it at. Be careful.
./cobaltb.py bigtable delete_reports
WARNING: This will permanently delete all data from the Report Store in whichever Cloud Bigtable instance you point it at. Be careful.
These are a set of gunit tests that run locally but use Cloud Bigtable. These tests are not run automatically: they are not run on the continuous integration machine and they are not run if you type `./cobaltb.py test --tests=all`. Instead you must explicitly invoke them.
./cobaltb.py test --tests=cloud_bt --bigtable_project_name=<project_name> --bigtable_instance_name=<instance_name>
WARNING: This will modify the contents of the tables in whichever Cloud Bigtable instance you point it at. Be careful.
Note that if you follow the instructions below and create a personal_cluster.json file then this command may be simplified to ./cobaltb.py test --tests=cloud_bt
Running the end-to-end test against Cloud Bigtable is also not done automatically, but you may do it manually as follows
./cobaltb.py test --tests=e2e -use_cloud_bt --bigtable_project_name=<project_name> --bigtable_instance_name=<instance_name>
WARNING: This will modify the contents of the tables in whichever Cloud Bigtable instance you point it at. Be careful.
Note that if you follow the instructions below and create a personal_cluster.json file then this command may be simplified to ./cobaltb.py test --tests=e2e -use_cloud_bt
Follow the instructions above for running the demo manually with the following changes: start the processes with the additional flags `--bigtable_project_name=<project_name>` and `--bigtable_instance_name=<instance_name>` so that these processes will connect to your instance of Cloud Bigtable rather than attempting to connect to a local instance of the Bigtable Emulator.

You can deploy the Shuffler, Analyzer Service and Report Master on Google Container Engine and then run the demo or the end-to-end test using your cloud instance.
In order to deploy to Container Engine you need to be able to build Docker containers, and that requires having the Docker daemon running on your machine.
Install Docker. If you are a Googler the following instructions should work:
sudo apt-get install docker-engine
sudo usermod -aG docker
We also will be using the tools gcloud and kubectl. You should be able to get away without installing these because they are included in Cobalt's sysroot directory and when invoked via cobaltb.py the versions in sysroot will be used. But you may choose to install these anyway. The following steps are optional.
gcloud init
gcloud components install kubectl
Navigate to the Container Clusters section of the Cloud console for your project. Here is the link for the core Cobalt team's shared project
Note that GCE stands for Google Compute Engine and GKE stands for Google Container Engine. Even though we are deploying Cobalt to GKE we create a persistent disk on GCE.
We create a GCE persistent disk in order to store the Shuffler's LevelDB database. The reason for using a persistent disk is that otherwise the database gets blown away between deployments of the Shuffler. (TODO(rudominer) Make this optional. It may be desirable to have the option of blowing away the database between deployments. The database will still persist between restarts.)
Navigate to the Compute Engine / Disks section of the Cloud console for your project. Here is the link for the core Cobalt team's shared project
Optionally create a new file in your Cobalt source root named exactly `personal_cluster.json`. This will save you having to type many command-line flags referring to your personal cluster. Its contents should be exactly the following
```json
{
  "cloud_project_prefix": "<your-project-prefix>",
  "cloud_project_name": "<your-project-name>",
  "cluster_name": "<your-cluster-name>",
  "gce_pd_name": "<your-persistent-disk-name>",
  "bigtable_project_name": "<your-bigtable-project-name>",
  "bigtable_instance_name": "<your-bigtable-instance-name>"
}
```
For example:
```json
{
  "cloud_project_prefix": "google.com",
  "cloud_project_name": "shuffler-test",
  "cluster_name": "rudominer-test-1",
  "gce_pd_name": "rudominer-shuffler-1",
  "bigtable_project_name": "google.com:shuffler-test",
  "bigtable_instance_name": "rudominer-test-1"
}
```
The script cobaltb.py looks for this file and uses it to set defaults for flags. It is OK for some of the values to be the empty string, but it is not OK for any of the keys to be missing. For example, if you have not yet created a GKE cluster but you have already created a Bigtable instance, you can leave all fields except `bigtable_project_name` and `bigtable_instance_name` empty; then, when performing the steps described above in the section on using Cloud Bigtable, you will not have to type the flags `--bigtable_project_name` and `--bigtable_instance_name`.
Here is an explanation of the entries: the full ID of your Google Cloud project is formed as `<your-project-prefix>:<your-project-name>`, as in the `bigtable_project_name` value of the example above.
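A sketch of how a script like cobaltb.py might load this file and use it for flag defaults, following the rules above (empty values are allowed, missing keys are not; the function names are hypothetical):

```python
import json

REQUIRED_KEYS = [
    "cloud_project_prefix", "cloud_project_name", "cluster_name",
    "gce_pd_name", "bigtable_project_name", "bigtable_instance_name",
]

def load_personal_cluster(path):
    """Loads personal_cluster.json; empty values are OK, missing keys are not."""
    with open(path) as f:
        config = json.load(f)
    missing = [k for k in REQUIRED_KEYS if k not in config]
    if missing:
        raise ValueError("personal_cluster.json is missing keys: %s" % missing)
    return config

def flag_default(config, key):
    """Returns the configured default for a flag, or None if left empty."""
    return config.get(key) or None
```

With the example file above, `flag_default(config, "bigtable_instance_name")` would return `"rudominer-test-1"`, so the corresponding command-line flag could be omitted.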
./cobaltb.py deploy authenticate
Run this one time in order to associate your computer with your GKE cluster and set up authentication.
./cobaltb.py deploy upload_secret_keys
Run this one time in order to upload the PEM files containing the Analyzer's and Shuffler's private keys. These are the files analyzer_private.pem and shuffler_private.pem that were created in the section on generating PEM files above. To upload different private keys, first delete any previously uploaded secret keys by running `./cobaltb.py deploy delete_secret_keys`.
./cobaltb.py deploy build
Run this to build Docker containers for the Shuffler, Analyzer Service and Report Master. Run it any time the Cobalt code changes. The generated containers are stored on your computer.
./cobaltb.py deploy push --job=shuffler
./cobaltb.py deploy push --job=analyzer-service
./cobaltb.py deploy push --job=report-master
Run these to push each of the containers built via the previous step up to the cloud repository.
./cobaltb.py deploy start --job=shuffler
./cobaltb.py deploy start --job=analyzer-service
./cobaltb.py deploy start --job=report-master
Run these to start each of the jobs on GKE. Each of these will start multiple Kubernetes entities on GKE: a Service, a Deployment, a Replica Set, and a Pod.
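For reference, a Deployment-plus-Service pair of the kind such a start command creates looks roughly like the following sketch. The names, image path and port are illustrative assumptions, not Cobalt's actual manifests; the Deployment in turn creates the Replica Set and Pod mentioned above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shuffler
spec:
  replicas: 1
  selector:
    matchLabels: {app: shuffler}
  template:
    metadata:
      labels: {app: shuffler}
    spec:
      containers:
      - name: shuffler
        image: gcr.io/<your-project>/shuffler   # illustrative image path
        ports:
        - containerPort: 5001                   # illustrative port
---
apiVersion: v1
kind: Service
metadata:
  name: shuffler
spec:
  type: LoadBalancer   # gives the job an externally facing IP address
  selector: {app: shuffler}
  ports:
  - port: 5001
```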
./cobaltb.py deploy start --job=shuffler -danger_danger_delete_all_data_at_startup
Run this version of the start command to start the Shuffler while deleting all Observations collected during previous runs. This is useful when running the end-to-end tests or the demo to ensure that you know exactly what is in the Shuffler's datastore.
./cobaltb.py deploy stop --job=shuffler
./cobaltb.py deploy stop --job=analyzer-service
./cobaltb.py deploy stop --job=report-master
Run these to stop each of the jobs on GKE. Each of these will stop the Kubernetes entities that were started by the corresponding start command.
./cobaltb.py deploy show
Run this in order to see the list of running jobs and their externally facing IP addresses and ports.
./cobaltb.py test --tests=e2e -cobalt_on_gke
If your GKE cluster has been set up correctly and your personal_cluster.json file is set up correctly, this will run the end-to-end test using your personal Cobalt GKE cluster.
See the instructions above for running the manual demo. In this configuration you do not need to start the Shuffler, Analyzer Service, Report Master or Bigtable as these are all running in the cloud. You still need to start the test app, the observation querier and the report client.
./cobaltb.py start test_app -cobalt_on_gke
./cobaltb.py start observation_querier -use_cloud_bt
./tools/demo/demo_reporter.py -cobalt_on_gke