Merge remote-tracking branch 'move-voila/master' into move-voila

[voila] move voila to fuchsia.git under //peridot/bin

This patch moves Voila code from //topaz/bin/voila to
//peridot/bin/voila, changing the repo from topaz to fuchsia.git.

This is motivated by development convenience - Voila is a first client
of the carnelian library in garnet, which isn't yet stable. Also, as
Voila is neither a runner nor part of experiences, it's not clear
where it could go other than fuchsia.git.

Change-Id: Ia260f76b8cf515c42cf605d86489c36ea04bd843
diff --git a/.clang-format b/.clang-format
new file mode 100644
index 0000000..4bdf247
--- /dev/null
+++ b/.clang-format
@@ -0,0 +1,10 @@
+# http://clang.llvm.org/docs/ClangFormatStyleOptions.html
+BasedOnStyle: Google
+# This defaults to 'Auto'. Explicitly set it for a while, so that
+# 'vector<vector<int> >' in existing files gets formatted to
+# 'vector<vector<int>>'. ('Auto' means that clang-format will only use
+# 'int>>' if the file already contains at least one such instance.)
+Standard: Cpp11
+SortIncludes: true
+AllowShortIfStatementsOnASingleLine: false
+AllowShortLoopsOnASingleLine: false
diff --git a/.dir-locals.el b/.dir-locals.el
new file mode 100644
index 0000000..7f1ebfd
--- /dev/null
+++ b/.dir-locals.el
@@ -0,0 +1,9 @@
+;; Copyright 2017 The Fuchsia Authors. All rights reserved.
+;; Use of this source code is governed by a BSD-style license that can be
+;; found in the LICENSE file.
+
+(
+ ;; Expand tabs as spaces.
+ (c-mode . ((indent-tabs-mode . nil)))
+ (c++-mode . ((indent-tabs-mode . nil)))
+)
diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..da2e8a5
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,10 @@
+# Auto detect text files and perform LF normalization
+*      text=auto
+
+# Always perform LF normalization on these files
+*.c    text
+*.cc   text
+*.cpp  text
+*.h    text
+*.gn   text
+*.md   text
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..13a2d38
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,62 @@
+### General file patterns
+*~
+.*.sw?
+.build_lock
+.checkstyle
+.classpath
+.config
+.cproject
+.DS_Store
+.gdb_history
+.gdbinit
+.gn
+.jiri_manifest
+.landmines
+.packages
+.project
+.pydevproject
+.ssh
+.vscode
+*.iml
+*.orig
+*.pyc
+*.sublime-project
+*.sublime-workspace
+Cargo.lock
+Cargo.toml
+CMakeLists.txt
+compile_commands.json
+cmake-build-debug/
+cscope.*
+rls*.log
+Session.vim
+tags
+Thumbs.db
+/tools/
+tools/cipd.gni
+topaz/
+vendor/
+tmp/
+
+### Specific files
+/garnet/tools/cipd.gni
+
+### Directories to be ignored across the tree
+.cipd/
+.idea/
+
+### Top-level directories
+/.jiri/
+/.jiri_root/
+/infra/
+/integration/
+# For storing local scripts and data files in a source tree.
+/local/
+/old/
+/out/
+/prebuilt/
+/third_party/
+/tmp/
+/tools/
+/topaz/
+/vendor/
diff --git a/AUTHORS b/AUTHORS
new file mode 100644
index 0000000..61ae302
--- /dev/null
+++ b/AUTHORS
@@ -0,0 +1,10 @@
+# This is the list of Fuchsia Authors.
+
+# Names should be added to this file as one of
+#     Organization's name
+#     Individual's name <submission email address>
+#     Individual's name <submission email address> <email2> <emailN>
+
+Google Inc.
+The Chromium Authors
+The Go Authors
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000..4644fc2
--- /dev/null
+++ b/CODE_OF_CONDUCT.md
@@ -0,0 +1,83 @@
+# Fuchsia Code of Conduct
+
+Google and the Fuchsia team are committed to preserving and fostering a diverse,
+welcoming community. Below is our community code of conduct, which applies to
+our repos and organizations, mailing lists, blog content, IRC channel and any
+other Fuchsia-supported communication group, as well as any private
+communication initiated in the context of these spaces.
+Simply put, community discussions should be
+ * respectful and kind;
+ * about Fuchsia;
+ * about features and code, not the individuals involved.
+
+## Be respectful and constructive.
+
+Treat everyone with respect. Build on each other's ideas. Each of us has the
+right to enjoy our experience and participate without fear of harassment,
+discrimination, or condescension, whether blatant or subtle. Remember that
+Fuchsia is a geographically distributed team and that you may not be
+communicating with someone in their primary language. We all get frustrated
+when working on hard problems, but we cannot allow that frustration to turn
+into personal attacks.
+
+## Speak up if you see or hear something.
+You are empowered to politely engage when you feel that you or others are
+disrespected. The person making you feel uncomfortable may not be aware of what
+they are doing - politely bringing their behavior to their attention is
+encouraged.
+
+If you are uncomfortable speaking up, or feel that your concerns are not being
+duly considered, you can email fuchsia-community-managers@google.com to request
+involvement from a community manager. All concerns shared with community
+managers will be kept confidential, but you may also submit an anonymous report
+[here](https://goo.gl/forms/xgisUdowrEWrYgui2).  Please note that without a way
+to contact you, an anonymous report may be difficult to act on. You may also
+create a throwaway account to report. In cases where a public response is deemed
+necessary, the identities of victims and reporters will remain confidential
+unless those individuals instruct us otherwise.
+
+While all reports will be taken seriously, the Fuchsia community managers may
+not act on complaints that they feel are not violations of this code of
+conduct.
+
+## We will not tolerate harassment of any kind, including but not limited to:
+
+ * Harassing comments
+ * Intimidation
+ * Encouraging a person to engage in self-harm.
+ * Sustained disruption or derailing of threads, channels, lists, etc.
+ * Offensive or violent comments, jokes or otherwise
+ * Inappropriate sexual content
+ * Unwelcome sexual or otherwise aggressive attention
+ * Continued one-on-one communication after requests to cease
+ * Distribution or threat of distribution of people's personally identifying
+   information, AKA “doxing”
+
+## Consequences for failing to comply with this policy
+
+Consequences for failing to comply with this policy may include, at the sole
+discretion of the Fuchsia community managers:
+ * a request for an apology;
+ * a private or public warning or reprimand;
+ * a temporary ban from the mailing list, blog, Fuchsia repository or
+   organization, or other Fuchsia-supported communication group, including
+   loss of committer status;
+ * a permanent ban from any of the above, or from all current and future
+   Fuchsia-supported or Google-supported communities, including loss of
+   committer status.
+
+Participants warned to stop any harassing behavior are expected to comply
+immediately; failure to do so will result in an escalation of consequences.
+The decisions of the Fuchsia community managers may be appealed via
+fuchsia-community-appeals@google.com.
+
+## Acknowledgements
+
+This Code of Conduct is adapted from the Chromium Code of Conduct, based on the
+Geek Feminism Code of Conduct, the Django Code of Conduct and the Geek Feminism
+Wiki "Effective codes of conduct" guide.
+
+## License
+
+This Code of Conduct is available for reuse under the Creative Commons Zero
+(CC0) license.
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..e48744c
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,272 @@
+Contributing Changes
+====================
+
+Fuchsia manages commits through Gerrit at
+https://fuchsia-review.googlesource.com. Not all projects accept patches;
+please see the CONTRIBUTING.md document in individual projects for
+details.
+
+## Submitting changes
+
+To submit a patch to Fuchsia, you may first need to generate a cookie to
+authenticate you to Gerrit. To generate a cookie, log into Gerrit and click
+the "Generate Password" link at the top of https://fuchsia.googlesource.com.
+Then, copy the generated text and execute it in a terminal.
+
+Once authenticated, follow these steps to submit a patch to a repo in Fuchsia:
+
+```
+# create a new branch
+git checkout -b branch_name
+
+# write some awesome stuff, commit to branch_name
+# edit some_file ...
+git add some_file
+# if specified in the repo, follow the commit message format
+git commit ...
+
+# upload the patch to Gerrit
+# `jiri help upload` lists flags for various features, e.g. adding reviewers
+jiri upload # Adds default topic - ${USER}-branch_name
+# or
+jiri upload -topic="custom_topic"
+# or
+git push origin HEAD:refs/for/master
+
+# at any time, if you'd like to make changes to your patch, use --amend
+git commit --amend
+
+# once the change is landed, clean up the branch
+git branch -d branch_name
+```
+
+See the Gerrit documentation for more detail:
+[https://gerrit-documentation.storage.googleapis.com/Documentation/2.12.3/intro-user.html#upload-change](https://gerrit-documentation.storage.googleapis.com/Documentation/2.12.3/intro-user.html#upload-change)
+
+### Commit message tags
+
+If submitting a change to Zircon, Garnet, Peridot or Topaz, include [tags] in
+the commit subject flagging which module, library, app, etc., is affected by the
+change. The style here is somewhat informal. Look at these example changes to
+get a feel for how these are used.
+
+* https://fuchsia-review.googlesource.com/c/zircon/+/112976
+* https://fuchsia-review.googlesource.com/c/garnet/+/110795
+* https://fuchsia-review.googlesource.com/c/peridot/+/113955
+* https://fuchsia-review.googlesource.com/c/topaz/+/114013
+
+Gerrit will flag your change with
+`Needs Label: Commit-Message-has-tags` if these are missing.
+
+Example:
+```
+# Ready to submit
+[parent][component] Update component in Topaz.
+Test: Added test X
+
+# Needs Label: Commit-Message-has-tags
+Update component in Topaz.
+Test: Added test X
+```
+
+### Commit message "Test:" labels
+
+Changes to Zircon, Garnet, Peridot, and Topaz require a "Test:" line in the
+commit message.
+
+We normally expect all changes that modify behavior to include a test that
+demonstrates (some aspect of) the behavior change. The test label should name
+the test that was added or modified by the change:
+
+```
+Test: SandboxMetadata.ParseRapidJson
+```
+
+Some behavior changes are not appropriate to test in an automated fashion. In
+those cases, the test label should describe the manual testing performed by the
+author:
+
+```
+Test: Manually tested that the keyboard still worked after unplugging and
+      replugging the USB connector.
+```
+
+In some cases, we are not able to test certain behavior changes because we lack
+some particular piece of infrastructure. In that case, we should have an issue
+in the tracker about creating that infrastructure and the test label should
+mention the bug number in addition to describing how the change was manually
+tested:
+
+```
+Test: Manually tested that [...]. Automated testing needs US-XXXX
+```
+
+If the change does not change behavior, the test line should indicate that you
+did not intend to change any behavior:
+
+```
+Test: No behavior change
+```
+
+If there's a test suite that validates that your change did not change behavior,
+you can mention that test suite as well:
+
+```
+Test: blobfs-test
+```
+
+Alternatively, if the change involves updating a dependency for which the commit
+queue should provide appropriate acceptance testing, the test label should defer
+to the commit queue:
+
+```
+Test: CQ
+```
+
+Syntactically, commit messages must contain one of {test, tests, tested, testing}
+followed by ':' or '='. Any case (e.g., "TEST" or "Test") works.
+
+All of these are valid:
+
+```
+TEST=msg
+
+Test:msg
+
+Testing : msg
+
+  Tested = msg
+
+Tests:
+- test a
+- test b
+```
+
+(See https://fuchsia.googlesource.com/All-Projects/+/refs/meta/config/rules.pl
+for the exact regex.)
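+
+For a quick local check before uploading, the following shell snippet
+approximates this rule (a sketch only; the canonical regex lives in the
+rules.pl file linked above):
+
+```
+# Check the HEAD commit message for a Test: style label (approximate).
+git log -1 --format=%B \
+  | grep -iqE '^[[:space:]]*test(s|ed|ing)?[[:space:]]*[:=]' \
+  && echo "Test label found" \
+  || echo "Missing Test: label"
+```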
+
+Gerrit will flag your change with `Needs Label: Commit-Message-has-TEST-line` if
+these are missing.
+
+Example:
+
+```
+# Ready to submit
+[parent][component] Update component in Topaz.
+Test: Added test X
+
+# Needs Label: Commit-Message-has-TEST-line
+[parent][component] Update component in Topaz.
+```
+
+## [Non-Googlers only] Sign the Google CLA
+
+In order to land your change, you need to sign the [Google CLA](https://cla.developers.google.com/).
+
+## [Googlers only] Issue actions
+
+Commit messages may reference issue IDs in Fuchsia's
+[issue tracker](https://fuchsia.atlassian.net/); such references will become
+links in the Gerrit UI. Issue actions may also be specified, for example to
+automatically close an issue when a commit is landed:
+
+```
+BUG-123 #done
+```
+
+`done` is the most common issue action, though any workflow action can be
+indicated in this way.
+
+Issue actions take place when the relevant commit becomes visible in a Gerrit
+branch, with the exception that commits under refs/changes/ are ignored.
+Usually, this means the action will happen when the commit is merged to
+master, but note that it will also happen if a change is uploaded to a private
+branch.
+
+*Note*: Fuchsia's issue tracker is not open to external contributors at this
+time.
+
+## Cross-repo changes
+
+Changes in two or more separate repos will be automatically tracked for you by
+Gerrit if you use the same topic.
+
+### Using jiri upload
+Create a branch with the same name in all repos and upload the changes:
+```
+# make and commit the first change
+cd fuchsia/bin/fortune
+git checkout -b add_feature_foo
+* edit foo_related_files ... *
+git add foo_related_files ...
+git commit ...
+
+# make and commit the second change in another repository
+cd fuchsia/build
+git checkout -b add_feature_foo
+* edit more_foo_related_files ... *
+git add more_foo_related_files ...
+git commit ...
+
+# Upload all changes with the same branch name across repos
+jiri upload -multipart # Adds default topic - ${USER}-branch_name
+# or
+jiri upload -multipart -topic="custom_topic"
+
+# after the changes are reviewed, approved and submitted, clean up the local branch
+cd fuchsia/bin/fortune
+git branch -d add_feature_foo
+
+cd fuchsia/build
+git branch -d add_feature_foo
+```
+
+### Using Gerrit commands
+
+```
+# make and commit the first change, upload it with topic 'add_feature_foo'
+cd fuchsia/bin/fortune
+git checkout -b add_feature_foo
+* edit foo_related_files ... *
+git add foo_related_files ...
+git commit ...
+git push origin HEAD:refs/for/master%topic=add_feature_foo
+
+# make and commit the second change in another repository
+cd fuchsia/build
+git checkout -b add_feature_foo
+* edit more_foo_related_files ... *
+git add more_foo_related_files ...
+git commit ...
+git push origin HEAD:refs/for/master%topic=add_feature_foo
+
+# after the changes are reviewed, approved and submitted, clean up the local branch
+cd fuchsia/bin/fortune
+git branch -d add_feature_foo
+
+cd fuchsia/build
+git branch -d add_feature_foo
+```
+
+Multipart changes are tracked in Gerrit via topics, will be tested together,
+and can be landed in Gerrit at the same time with `Submit Whole Topic`. Topics
+can be edited via the web UI.
+
+## Changes that span repositories
+
+See [Changes that span repositories](development/workflows/multilayer_changes.md).
+
+## Resolving merge conflicts
+
+```
+# rebase from origin/master, revealing the merge conflict
+git rebase origin/master
+
+# resolve the conflicts and complete the rebase
+* edit files_with_conflicts ... *
+git add files_with_resolved_conflicts ...
+git rebase --continue
+jiri upload
+
+# continue as usual
+git commit --amend
+jiri upload
+```
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..87f152c
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,27 @@
+Copyright 2019 The Fuchsia Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+   * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+   * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/OWNERS b/OWNERS
new file mode 100644
index 0000000..e1c9828
--- /dev/null
+++ b/OWNERS
@@ -0,0 +1,36 @@
+# Global approvers
+#
+# Someone should be a global approver if they commonly make large-scale changes
+# across the whole codebase. For example, people who maintain the various
+# languages, toolchains, and other build system components often should be
+# global approvers. Additionally, the set of global approvers should have good
+# coverage across the world to support contributors in every timezone.
+#
+# Code reviews should be sent to a global approver if the change manipulates
+# the root directory or has broad (typically mechanical) impact across the
+# codebase. If the change manipulates a specific part of the codebase, the code
+# review should be sent to a more specific approver. Global approvers should
+# use their judgement as to when to redirect review requests to more specific
+# approvers.
+#
+# The list of global approvers will change over time as the set of people
+# making large-scale changes evolves over time. Please do not take it
+# personally if you are added or removed from this list. This list is not an
+# "honor roll" of respected contributors. It is a list of people who often make
+# certain kinds of changes to the codebase.
+
+abarth@google.com
+abdulla@google.com
+cramertj@google.com
+ianloic@google.com
+jamesr@google.com
+jeffbrown@google.com
+jeremymanson@google.com
+kulakowski@google.com
+mcgrathr@google.com
+phosek@google.com
+pylaligand@google.com
+qsr@google.com
+raggi@google.com
+thatguy@google.com
+*
\ No newline at end of file
diff --git a/PATENTS b/PATENTS
new file mode 100644
index 0000000..2746e78
--- /dev/null
+++ b/PATENTS
@@ -0,0 +1,22 @@
+Additional IP Rights Grant (Patents)
+
+"This implementation" means the copyrightable works distributed by
+Google as part of the Fuchsia project.
+
+Google hereby grants to you a perpetual, worldwide, non-exclusive,
+no-charge, royalty-free, irrevocable (except as stated in this
+section) patent license to make, have made, use, offer to sell, sell,
+import, transfer, and otherwise run, modify and propagate the contents
+of this implementation of Fuchsia, where such license applies only to
+those patent claims, both currently owned by Google and acquired in
+the future, licensable by Google that are necessarily infringed by
+this implementation. This grant does not include claims that would be
+infringed only as a consequence of further modification of this
+implementation. If you or your agent or exclusive licensee institute
+or order or agree to the institution of patent litigation or any other
+patent enforcement activity against any entity (including a
+cross-claim or counterclaim in a lawsuit) alleging that this
+implementation of Fuchsia constitutes direct or contributory patent
+infringement, or inducement of patent infringement, then any patent
+rights granted to you under this License for this implementation of
+Fuchsia shall terminate as of the date such litigation is filed.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..8fa31df
--- /dev/null
+++ b/README.md
@@ -0,0 +1,19 @@
+# Fuchsia
+
+Pink + Purple == Fuchsia (a new operating system)
+
+## What is Fuchsia?
+
+Fuchsia is a modular, capability-based operating system. Fuchsia runs on modern
+64-bit Intel and ARM processors.
+
+Fuchsia is an open source project with a [code of conduct](CODE_OF_CONDUCT.md)
+that we expect everyone who interacts with the project to respect.
+
+## How can I build and run Fuchsia?
+
+See [Getting Started](docs/getting_started.md).
+
+## Where can I learn more about Fuchsia?
+
+See the [documentation](docs/).
diff --git a/boards/arm64.gni b/boards/arm64.gni
new file mode 100644
index 0000000..382857d
--- /dev/null
+++ b/boards/arm64.gni
@@ -0,0 +1,12 @@
+# Copyright 2019 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+target_cpu = "arm64"
+
+fuchsia_packages = []
+
+board_packages = [
+  # Include all drivers for now.
+  "garnet/packages/prod/drivers",
+]
diff --git a/boards/frank.gni b/boards/frank.gni
new file mode 100644
index 0000000..6f1a111
--- /dev/null
+++ b/boards/frank.gni
@@ -0,0 +1,7 @@
+# Copyright 2019 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//garnet/boards/arm64.gni")
+
+board_packages += [ "garnet/packages/config/frank_media_config" ]
diff --git a/boards/mt8167s_ref.gni b/boards/mt8167s_ref.gni
new file mode 100644
index 0000000..f93360a
--- /dev/null
+++ b/boards/mt8167s_ref.gni
@@ -0,0 +1,10 @@
+# Copyright 2019 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//garnet/boards/arm64.gni")
+
+custom_signing_script =
+    "//zircon/kernel/target/arm64/board/mt8167s_ref/package-image.sh"
+
+use_vbmeta = true
diff --git a/boards/toulouse.gni b/boards/toulouse.gni
new file mode 100644
index 0000000..436004f
--- /dev/null
+++ b/boards/toulouse.gni
@@ -0,0 +1,24 @@
+# Copyright 2019 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+target_cpu = "x64"
+
+fuchsia_packages = []
+
+board_packages = [
+  # Include all drivers for now.
+  "garnet/packages/prod/drivers",
+]
+
+# Fuchsia does not have a deterministic ordering for bringing up PCI devices, so the
+# /dev/class/ethernet/xxx paths have no well-defined mapping to the ports on the front of the
+# device.
+# In order for netbooting and loglistener to work, we need to let netsvc know which path corresponds
+# to the left-most ethernet port.
+_toulouse_cmdline_args = [
+  "kernel.serial=legacy",
+  "netsvc.interface=/dev/sys/pci/00:1f.6/e1000/ethernet",
+]
+kernel_cmdline_args = _toulouse_cmdline_args
+zedboot_cmdline_args = _toulouse_cmdline_args
diff --git a/boards/x64.gni b/boards/x64.gni
new file mode 100644
index 0000000..f400ec9
--- /dev/null
+++ b/boards/x64.gni
@@ -0,0 +1,12 @@
+# Copyright 2019 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+target_cpu = "x64"
+
+fuchsia_packages = []
+
+board_packages = [
+  # Include all drivers for now.
+  "garnet/packages/prod/drivers",
+]
diff --git a/build/.gitignore b/build/.gitignore
new file mode 100644
index 0000000..de820df
--- /dev/null
+++ b/build/.gitignore
@@ -0,0 +1,22 @@
+/.jiri/
+*.pyc
+
+*~
+*.DS_Store
+
+# Thumbnails
+._*
+
+# swap
+[._]*.s[a-w][a-z]
+[._]s[a-w][a-z]
+# session
+Session.vim
+# temporary
+.netrwhist
+*~
+
+cipd.gni
+.cipd/*
+last-update
+third_party/
diff --git a/build/Fuchsia.cmake b/build/Fuchsia.cmake
new file mode 100644
index 0000000..daabba8
--- /dev/null
+++ b/build/Fuchsia.cmake
@@ -0,0 +1,49 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# Need support for CMAKE_C_COMPILER_TARGET
+cmake_minimum_required(VERSION 3.0)
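+
+# Illustrative use only (paths are assumptions): pass this file as a CMake
+# toolchain file and provide the variables it reads, e.g.
+#   cmake -DCMAKE_TOOLCHAIN_FILE=path/to/build/Fuchsia.cmake \
+#         -DFUCHSIA_SYSROOT=/path/to/fuchsia/sysroot \
+#         -DFUCHSIA_SYSTEM_PROCESSOR=x64 \
+#         path/to/source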
+
+set(CMAKE_SYSTEM_NAME Fuchsia)
+
+set(CMAKE_SYSROOT ${FUCHSIA_SYSROOT})
+
+if(NOT DEFINED FUCHSIA_TOOLCHAIN)
+  string(TOLOWER ${CMAKE_HOST_SYSTEM_PROCESSOR} HOST_SYSTEM_PROCESSOR)
+  if(HOST_SYSTEM_PROCESSOR STREQUAL "x86_64")
+    set(HOST_SYSTEM_PROCESSOR "x64")
+  elseif(HOST_SYSTEM_PROCESSOR STREQUAL "aarch64")
+    set(HOST_SYSTEM_PROCESSOR "arm64")
+  endif()
+  string(TOLOWER ${CMAKE_HOST_SYSTEM_NAME} HOST_SYSTEM_NAME)
+  if(HOST_SYSTEM_NAME STREQUAL "darwin")
+    set(HOST_SYSTEM_NAME "mac")
+  endif()
+  set(FUCHSIA_TOOLCHAIN "${CMAKE_CURRENT_LIST_DIR}/../buildtools/${HOST_SYSTEM_NAME}-${HOST_SYSTEM_PROCESSOR}/clang")
+endif()
+
+if(NOT DEFINED FUCHSIA_COMPILER_TARGET)
+  set(FUCHSIA_COMPILER_TARGET "${FUCHSIA_SYSTEM_PROCESSOR}-fuchsia")
+endif()
+
+set(CMAKE_C_COMPILER "${FUCHSIA_TOOLCHAIN}/bin/clang")
+set(CMAKE_C_COMPILER_TARGET ${FUCHSIA_COMPILER_TARGET} CACHE STRING "")
+set(CMAKE_CXX_COMPILER "${FUCHSIA_TOOLCHAIN}/bin/clang++")
+set(CMAKE_CXX_COMPILER_TARGET ${FUCHSIA_COMPILER_TARGET} CACHE STRING "")
+set(CMAKE_ASM_COMPILER "${FUCHSIA_TOOLCHAIN}/bin/clang")
+set(CMAKE_ASM_COMPILER_TARGET ${FUCHSIA_COMPILER_TARGET} CACHE STRING "")
+
+set(CMAKE_LINKER "${FUCHSIA_TOOLCHAIN}/bin/ld.lld" CACHE PATH "")
+set(CMAKE_AR "${FUCHSIA_TOOLCHAIN}/bin/llvm-ar" CACHE PATH "")
+set(CMAKE_RANLIB "${FUCHSIA_TOOLCHAIN}/bin/llvm-ranlib" CACHE PATH "")
+set(CMAKE_NM "${FUCHSIA_TOOLCHAIN}/bin/llvm-nm" CACHE PATH "")
+set(CMAKE_OBJCOPY "${FUCHSIA_TOOLCHAIN}/bin/llvm-objcopy" CACHE PATH "")
+set(CMAKE_OBJDUMP "${FUCHSIA_TOOLCHAIN}/bin/llvm-objdump" CACHE PATH "")
+set(CMAKE_STRIP "${FUCHSIA_TOOLCHAIN}/bin/llvm-strip" CACHE PATH "")
+
+set(CMAKE_FIND_ROOT_PATH ${FUCHSIA_SYSROOT})
+
+set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
+set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
+set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
diff --git a/build/OWNERS b/build/OWNERS
new file mode 100644
index 0000000..a22675b
--- /dev/null
+++ b/build/OWNERS
@@ -0,0 +1,5 @@
+jamesr@google.com
+mcgrathr@google.com
+phosek@google.com
+pylaligand@google.com
+*
diff --git a/build/README.md b/build/README.md
new file mode 100644
index 0000000..ad26bed
--- /dev/null
+++ b/build/README.md
@@ -0,0 +1,3 @@
+# Build
+
+Shared build configuration for Fuchsia.
diff --git a/build/__init__.py b/build/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/build/__init__.py
diff --git a/build/banjo/BUILD.gn b/build/banjo/BUILD.gn
new file mode 100644
index 0000000..6ae9239
--- /dev/null
+++ b/build/banjo/BUILD.gn
@@ -0,0 +1,27 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/toolchain/clang_toolchain.gni")
+
+# A toolchain dedicated to processing Banjo libraries.
+# The only targets in this toolchain are action() targets, so it
+# has no real tools.  But every toolchain needs stamp and copy.
+toolchain("banjoing") {
+  tool("stamp") {
+    command = stamp_command
+    description = stamp_description
+  }
+  tool("copy") {
+    command = copy_command
+    description = copy_description
+  }
+
+  toolchain_args = {
+    toolchain_variant = {
+    }
+    toolchain_variant = {
+      base = get_label_info(":banjoing", "label_no_toolchain")
+    }
+  }
+}
diff --git a/build/banjo/banjo.gni b/build/banjo/banjo.gni
new file mode 100644
index 0000000..71c82b8
--- /dev/null
+++ b/build/banjo/banjo.gni
@@ -0,0 +1,96 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/banjo/toolchain.gni")
+import("//build/rust/toolchain.gni")
+
+# Declares a Banjo library.
+#
+# Depending on the toolchain in which this target is expanded, it will yield
+# different results:
+#   - in the Banjo toolchain, it will compile its source files into an
+#     intermediate representation consumable by language bindings generators;
+#   - in the target or shared toolchain, this will produce a source_set
+#     containing C/C++ bindings.
+#
+# Parameters
+#
+#   sources (required)
+#     List of paths to library source files.
+#
+#   name (optional)
+#     Name of the library.
+#     Defaults to the target's name.
+#
+#   sdk_category (optional)
+#     Publication level of the library in SDKs.
+#     See //build/sdk/sdk_atom.gni.
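+#
+# Example (hypothetical library name and source file):
+#
+#   banjo("fuchsia.hardware.example") {
+#     sources = [ "example.banjo" ]
+#   }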
+
+template("banjo") {
+  if (defined(invoker.sdk_category)) {
+    not_needed(invoker, [ "sdk_category" ])
+  }
+  if (current_toolchain == banjo_toolchain) {
+    import("//build/banjo/banjo_library.gni")
+
+    banjo_library(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  } else if (current_toolchain == rust_toolchain) {
+    import("//build/rust/banjo_rust.gni")
+
+    banjo_rust(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  } else if (is_fuchsia) {
+    import("//build/c/banjo_c.gni")
+    import("//build/rust/banjo_rust_library.gni")
+
+    banjo_rust_library(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+
+    banjo_c_target(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  } else {
+    assert(false,
+           "Unable to process Banjo target in toolchain $current_toolchain.")
+  }
+}
+
+template("banjo_dummy") {
+  if (defined(invoker.sdk_category)) {
+    not_needed(invoker, [ "sdk_category" ])
+  }
+  if (current_toolchain == banjo_toolchain) {
+    import("//build/banjo/banjo_library.gni")
+
+    banjo_dummy_library(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  } else if (current_toolchain == rust_toolchain) {
+    import("//build/rust/banjo_rust.gni")
+
+    banjo_rust(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  } else if (is_fuchsia) {
+    import("//build/c/banjo_c.gni")
+    import("//build/rust/banjo_rust_library.gni")
+
+    # TODO(cramertj): remove pending TC-81.
+    banjo_rust_library(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+
+    banjo_dummy_c_target(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  } else {
+    assert(false,
+           "Unable to process Banjo target in toolchain $current_toolchain.")
+  }
+}
diff --git a/build/banjo/banjo_library.gni b/build/banjo/banjo_library.gni
new file mode 100644
index 0000000..7c34d84
--- /dev/null
+++ b/build/banjo/banjo_library.gni
@@ -0,0 +1,397 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/banjo/toolchain.gni")
+import("//build/compiled_action.gni")
+import("//build/sdk/sdk_atom.gni")
+
+# Private template to generate an SDK Atom for banjo_library and
+# banjo_dummy_library targets.
+#
+
+template("_banjo_library_sdk") {
+  library_name = target_name
+  if (defined(invoker.name)) {
+    library_name = invoker.name
+  }
+
+  # Process sources.
+  file_base = "banjo/$library_name"
+  all_files = []
+  sdk_sources = []
+  foreach(source, invoker.sources) {
+    relative_source = rebase_path(source, ".")
+    if (string_replace(relative_source, "..", "bogus") != relative_source) {
+      # If the source file is not within the same directory, just use the file
+      # name.
+      relative_source = get_path_info(source, "file")
+    }
+    destination = "$file_base/$relative_source"
+    sdk_sources += [ destination ]
+    all_files += [
+      {
+        source = rebase_path(source)
+        dest = destination
+      },
+    ]
+  }
+
+  # Identify metadata for dependencies.
+  sdk_metas = []
+  sdk_deps = []
+  all_deps = []
+  if (defined(invoker.deps)) {
+    all_deps = invoker.deps
+  }
+  foreach(dep, all_deps) {
+    full_label = get_label_info(dep, "label_no_toolchain")
+    sdk_dep = "${full_label}_sdk"
+    sdk_deps += [ sdk_dep ]
+    gen_dir = get_label_info(sdk_dep, "target_gen_dir")
+    name = get_label_info(sdk_dep, "name")
+    sdk_metas += [ rebase_path("$gen_dir/$name.meta.json") ]
+  }
+
+  # Generate the library metadata.
+  meta_file = "$target_gen_dir/${target_name}.sdk_meta.json"
+  meta_target_name = "${target_name}_meta"
+
+  action(meta_target_name) {
+    script = "//build/banjo/gen_sdk_meta.py"
+
+    inputs = sdk_metas
+
+    outputs = [
+      meta_file,
+    ]
+
+    args = [
+             "--out",
+             rebase_path(meta_file),
+             "--name",
+             library_name,
+             "--root",
+             file_base,
+             "--specs",
+           ] + sdk_metas + [ "--sources" ] + sdk_sources
+
+    deps = sdk_deps
+  }
+
+  sdk_atom("${target_name}_sdk") {
+    id = "sdk://banjo/$library_name"
+
+    category = invoker.sdk_category
+
+    meta = {
+      source = meta_file
+      dest = "$file_base/meta.json"
+      schema = "banjo_library"
+    }
+
+    files = all_files
+
+    non_sdk_deps = [ ":$meta_target_name" ]
+
+    deps = []
+    foreach(dep, all_deps) {
+      label = get_label_info(dep, "label_no_toolchain")
+      deps += [ "${label}_sdk" ]
+    }
+  }
+}
+
+# Generates some representation of a Banjo library that's consumable by
+# language bindings generators.
+#
+# The parameters for this template are defined in //build/banjo/banjo.gni. The
+# relevant parameters in this template are:
+#   - name;
+#   - sources;
+#   - sdk_category.
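+#
+# Example (hypothetical; this template is only expanded in the Banjo
+# toolchain, normally via the banjo() wrapper in //build/banjo/banjo.gni):
+#
+#   banjo_library("fuchsia.hardware.example") {
+#     sources = [ "example.banjo" ]
+#   }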
+
+template("banjo_library") {
+  assert(
+      current_toolchain == banjo_toolchain,
+      "This template can only be used in the Banjo toolchain $banjo_toolchain.")
+
+  assert(defined(invoker.sources), "A Banjo library requires some sources.")
+
+  library_name = target_name
+  if (defined(invoker.name)) {
+    library_name = invoker.name
+  }
+
+  c_response_file = "$target_gen_dir/c/$target_name.args"
+  cpp_response_file = "$target_gen_dir/cpp/$target_name.args"
+  cpp_i_response_file = "$target_gen_dir/cpp_i/$target_name.args"
+  rust_response_file = "$target_gen_dir/rust/$target_name.args"
+  ddk_root = string_replace(string_replace(library_name, ".", "/"), "_", "-")
+  ddktl_root = string_replace(ddk_root, "ddk", "ddktl")
+  ddk_header = "$root_gen_dir/$ddk_root.h"
+  ddktl_header = "$root_gen_dir/$ddktl_root.h"
+  ddktl_internal_header = "$root_gen_dir/$ddktl_root-internal.h"
+  rust_file = "banjo_" + string_replace(library_name, ".", "_") + ".rs"
+  rust_internal_header = "$target_gen_dir/$rust_file"
+
+  main_target_name = target_name
+  c_response_file_target_name = "${target_name}_c_response_file"
+  cpp_response_file_target_name = "${target_name}_cpp_response_file"
+  cpp_i_response_file_target_name = "${target_name}_cpp_i_response_file"
+  rust_response_file_target_name = "${target_name}_rust_response_file"
+  c_compile_target_name = "${target_name}_c_compile"
+  cpp_compile_target_name = "${target_name}_cpp_compile"
+  cpp_i_compile_target_name = "${target_name}_cpp_i_compile"
+  rust_compile_target_name = "${target_name}_rust_compile"
+
+  targets = [
+    [
+      c_response_file_target_name,
+      c_compile_target_name,
+      c_response_file,
+      "c",
+      ddk_header,
+    ],
+    [
+      cpp_response_file_target_name,
+      cpp_compile_target_name,
+      cpp_response_file,
+      "cpp",
+      ddktl_header,
+    ],
+    [
+      cpp_i_response_file_target_name,
+      cpp_i_compile_target_name,
+      cpp_i_response_file,
+      "cpp_i",
+      ddktl_internal_header,
+    ],
+    [
+      rust_response_file_target_name,
+      rust_compile_target_name,
+      rust_response_file,
+      "rust",
+      rust_internal_header,
+    ],
+  ]
+
+  all_deps = []
+  if (defined(invoker.deps)) {
+    all_deps += invoker.deps
+  }
+  if (defined(invoker.public_deps)) {
+    all_deps += invoker.public_deps
+  }
+
+  foreach(target, targets) {
+    target_name = target[0]
+    response_file = target[2]
+    backend = target[3]
+    output = target[4]
+
+    action(target_name) {
+      visibility = [ ":*" ]
+
+      script = "//build/banjo/gen_response_file.py"
+
+      forward_variables_from(invoker,
+                             [
+                               "deps",
+                               "public_deps",
+                               "sources",
+                               "testonly",
+                             ])
+
+      libraries = "$target_gen_dir/$backend/$main_target_name.libraries"
+
+      outputs = [
+        response_file,
+        libraries,
+      ]
+
+      args = [
+               "--out-response-file",
+               rebase_path(response_file, root_build_dir),
+               "--out-libraries",
+               rebase_path(libraries, root_build_dir),
+               "--backend",
+               backend,
+               "--output",
+               rebase_path(output, root_build_dir),
+               "--name",
+               library_name,
+               "--sources",
+             ] + rebase_path(sources, root_build_dir)
+
+      if (all_deps != []) {
+        dep_libraries = []
+
+        foreach(dep, all_deps) {
+          gen_dir = get_label_info(dep, "target_gen_dir")
+          name = get_label_info(dep, "name")
+          dep_libraries += [ "$gen_dir/c/$name.libraries" ]
+        }
+
+        inputs = dep_libraries
+
+        args +=
+            [ "--dep-libraries" ] + rebase_path(dep_libraries, root_build_dir)
+      }
+    }
+  }
+
+  foreach(target, targets) {
+    target_name = target[1]
+    response_target_name = target[0]
+    response_file = target[2]
+    backend = target[3]
+    output = target[4]
+
+    if (backend != "rust") {
+      compiled_action(target_name) {
+        forward_variables_from(invoker, [ "testonly" ])
+
+        visibility = [ ":*" ]
+
+        tool = "//zircon/public/tool/banjoc"
+
+        inputs = [
+          response_file,
+        ]
+
+        outputs = [
+          output,
+        ]
+
+        rebased_response_file = rebase_path(response_file, root_build_dir)
+
+        args = [ "@$rebased_response_file" ]
+
+        deps = [
+          ":$response_target_name",
+        ]
+      }
+    }
+  }
+
+  group(main_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "testonly",
+                             "visibility",
+                           ])
+
+    public_deps = [
+      ":$c_compile_target_name",
+      ":$c_response_file_target_name",
+      ":$cpp_compile_target_name",
+      ":$cpp_i_compile_target_name",
+      ":$cpp_i_response_file_target_name",
+      ":$cpp_response_file_target_name",
+      ":$rust_response_file_target_name",
+    ]
+  }
+
+  if (defined(invoker.sdk_category) && invoker.sdk_category != "excluded") {
+    _banjo_library_sdk("$main_target_name") {
+      forward_variables_from(invoker, "*")
+    }
+  }
+}
+
+template("banjo_dummy_library") {
+  assert(
+      current_toolchain == banjo_toolchain,
+      "This template can only be used in the Banjo toolchain $banjo_toolchain.")
+
+  assert(defined(invoker.sources),
+         "A Banjo dummy library requires some sources.")
+
+  library_name = target_name
+  if (defined(invoker.name)) {
+    library_name = invoker.name
+  }
+
+  main_target_name = target_name
+  c_response_file_target_name = "${target_name}_c_response_file"
+  rust_response_file_target_name = "${target_name}_rust_response_file"
+  ddk_root = string_replace(string_replace(library_name, ".", "/"), "_", "-")
+  rust_file = "banjo_" + string_replace(library_name, ".", "_") + ".rs"
+  ddk_header = "$target_gen_dir/$ddk_root.h"
+  rust_internal_header = "$target_gen_dir/$rust_file"
+
+  targets = [
+    [
+      c_response_file_target_name,
+      "c",
+      ddk_header,
+    ],
+    [
+      rust_response_file_target_name,
+      "rust",
+      rust_internal_header,
+    ],
+  ]
+
+  foreach(target, targets) {
+    response_target_name = target[0]
+    backend = target[1]
+    output = target[2]
+
+    action(response_target_name) {
+      visibility = [ ":*" ]
+
+      script = "//build/banjo/gen_response_file.py"
+
+      forward_variables_from(invoker,
+                             [
+                               "deps",
+                               "public_deps",
+                               "sources",
+                               "testonly",
+                             ])
+
+      response_file = "$target_gen_dir/$backend/$main_target_name.args"
+      libraries = "$target_gen_dir/$backend/$main_target_name.libraries"
+
+      outputs = [
+        response_file,
+        libraries,
+      ]
+
+      args = [
+               "--output",
+               rebase_path(output, root_build_dir),
+               "--backend",
+               backend,
+               "--out-response-file",
+               rebase_path(response_file, root_build_dir),
+               "--out-libraries",
+               rebase_path(libraries, root_build_dir),
+               "--name",
+               library_name,
+               "--sources",
+             ] + rebase_path(sources, root_build_dir)
+    }
+  }
+
+  group(main_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "testonly",
+                             "visibility",
+                           ])
+
+    public_deps = [
+      ":$c_response_file_target_name",
+      ":$rust_response_file_target_name",
+    ]
+  }
+
+  if (defined(invoker.sdk_category) && invoker.sdk_category != "excluded") {
+    _banjo_library_sdk("$target_name") {
+      forward_variables_from(invoker, "*")
+    }
+  }
+}
diff --git a/build/banjo/gen_response_file.py b/build/banjo/gen_response_file.py
new file mode 100755
index 0000000..1d55751
--- /dev/null
+++ b/build/banjo/gen_response_file.py
@@ -0,0 +1,71 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import sys
+
+
+def read_libraries(libraries_path):
+    with open(libraries_path) as f:
+        lines = f.readlines()
+        return [l.rstrip("\n") for l in lines]
+
+
+def write_libraries(libraries_path, libraries):
+    directory = os.path.dirname(libraries_path)
+    if not os.path.exists(directory):
+        os.makedirs(directory)
+    with open(libraries_path, "w+") as f:
+        for library in libraries:
+            f.write(library)
+            f.write("\n")
+
+
+def main():
+    parser = argparse.ArgumentParser(description="Generate response file for Banjo frontend")
+    parser.add_argument("--out-response-file", help="The path for for the response file to generate", required=True)
+    parser.add_argument("--out-libraries", help="The path for for the libraries file to generate", required=True)
+    parser.add_argument("--backend", help="The path for the C simple client file to generate, if any")
+    parser.add_argument("--output", help="The path for the C++ header file to generate, if any")
+    parser.add_argument("--name", help="The name for the generated Banjo library, if any")
+    parser.add_argument("--sources", help="List of Banjo source files", nargs="*")
+    parser.add_argument("--dep-libraries", help="List of dependent libraries", nargs="*")
+    args = parser.parse_args()
+
+    target_libraries = []
+
+    for dep_libraries_path in args.dep_libraries or []:
+        dep_libraries = read_libraries(dep_libraries_path)
+        for library in dep_libraries:
+            if library in target_libraries:
+                continue
+            target_libraries.append(library)
+
+    target_libraries.append(" ".join(sorted(args.sources)))
+
+    write_libraries(args.out_libraries, target_libraries)
+
+    response_file = []
+
+    if args.name:
+        response_file.append("--name %s" % args.name)
+
+    if args.backend:
+        response_file.append("--backend %s" % args.backend)
+
+    if args.output:
+        response_file.append("--output %s" % args.output)
+
+    response_file.extend(["--files %s" % library for library in target_libraries])
+
+    with open(args.out_response_file, "w+") as f:
+        f.write(" ".join(response_file))
+        f.write("\n")
+
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/build/banjo/gen_sdk_meta.py b/build/banjo/gen_sdk_meta.py
new file mode 100755
index 0000000..28b3fa3
--- /dev/null
+++ b/build/banjo/gen_sdk_meta.py
@@ -0,0 +1,57 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import os
+import sys
+
+
+def main():
+    parser = argparse.ArgumentParser(description='Builds a metadata file')
+    parser.add_argument('--out',
+                        help='Path to the output file',
+                        required=True)
+    parser.add_argument('--name',
+                        help='Name of the library',
+                        required=True)
+    parser.add_argument('--root',
+                        help='Root of the library in the SDK',
+                        required=True)
+    parser.add_argument('--specs',
+                        help='Path to spec files of dependencies',
+                        nargs='*')
+    parser.add_argument('--sources',
+                        help='List of library sources',
+                        nargs='+')
+    args = parser.parse_args()
+
+    metadata = {
+        'type': 'banjo_library',
+        'name': args.name,
+        'root': args.root,
+        'sources': args.sources,
+    }
+
+    deps = []
+    for spec in args.specs:
+        with open(spec, 'r') as spec_file:
+            data = json.load(spec_file)
+        type = data['type']
+        name = data['name']
+        if type == 'banjo_library' or type == 'banjo_dummy_library':
+            deps.append(name)
+        else:
+            raise Exception('Unsupported dependency type: %s' % type)
+    metadata['deps'] = deps
+
+    with open(args.out, 'w') as out_file:
+        json.dump(metadata, out_file, indent=2, sort_keys=True)
+
+    return 0
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/banjo/toolchain.gni b/build/banjo/toolchain.gni
new file mode 100644
index 0000000..493dbed
--- /dev/null
+++ b/build/banjo/toolchain.gni
@@ -0,0 +1,5 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+banjo_toolchain = "//build/banjo:banjoing"
diff --git a/build/c/BUILD.gn b/build/c/BUILD.gn
new file mode 100644
index 0000000..56e1322
--- /dev/null
+++ b/build/c/BUILD.gn
@@ -0,0 +1,11 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/banjo/toolchain.gni")
+
+config("banjo_gen_config") {
+  banjo_root_gen_dir =
+      get_label_info("//bogus($banjo_toolchain)", "root_gen_dir")
+  include_dirs = [ banjo_root_gen_dir ]
+}
diff --git a/build/c/banjo_c.gni b/build/c/banjo_c.gni
new file mode 100644
index 0000000..297ce4f
--- /dev/null
+++ b/build/c/banjo_c.gni
@@ -0,0 +1,85 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/banjo/toolchain.gni")
+import("//build/compiled_action.gni")
+
+# C/C++ bindings for a Banjo protocol.
+#
+# The parameters for this template are defined in //build/banjo/banjo.gni. The
+# relevant parameters in this template are:
+#   name: string, name of the Banjo protocol
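+#
+# Example (hypothetical; normally instantiated indirectly via the banjo()
+# template rather than written by hand):
+#
+#   banjo_c_target("fuchsia.hardware.example") {
+#     name = "fuchsia.hardware.example"
+#   }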
+
+template("banjo_c_target") {
+  assert(is_fuchsia, "This template can only be used in $target_toolchain.")
+
+  not_needed(invoker, [ "sources" ])
+
+  main_target_name = target_name
+
+  library_name = invoker.name
+
+  ddk_root = string_replace(string_replace(library_name, ".", "/"), "_", "-")
+  ddktl_root = string_replace(ddk_root, "ddk", "ddktl")
+  banjo_root_gen_dir =
+      get_label_info(":$target_name($banjo_toolchain)", "root_gen_dir")
+  ddk_header = "$banjo_root_gen_dir/$ddk_root.h"
+  ddktl_header = "$banjo_root_gen_dir/$ddktl_root.h"
+  ddktl_internal_header = "$banjo_root_gen_dir/$ddktl_root-internal.h"
+
+  # The C/C++ headers are generated by the frontend, so we just need to
+  # produce a target with the generated file name and configuration information.
+  source_set(main_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "testonly",
+                             "visibility",
+                           ])
+
+    public = [
+      ddk_header,
+      ddktl_header,
+      ddktl_internal_header,
+    ]
+
+    # Let dependencies include the generated Banjo headers.
+    public_configs = [ "//build/c:banjo_gen_config" ]
+
+    deps += [
+      ":${main_target_name}_c_compile($banjo_toolchain)",
+      ":${main_target_name}_cpp_compile($banjo_toolchain)",
+      ":${main_target_name}_cpp_i_compile($banjo_toolchain)",
+    ]
+
+    libs = [ "zircon" ]
+  }
+}
+
+template("banjo_dummy_c_target") {
+  assert(is_fuchsia, "This template can only be used in $target_toolchain.")
+
+  not_needed(invoker,
+             [
+               "sources",
+               "name",
+             ])
+
+  main_target_name = target_name
+
+  # The headers referenced by a dummy target all exist in the sysroot.
+  source_set(main_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "testonly",
+                             "visibility",
+                           ])
+
+    public_deps = [
+      "//zircon/public/sysroot",
+    ]
+    libs = [ "zircon" ]
+  }
+}
diff --git a/build/c/fidl_c.gni b/build/c/fidl_c.gni
new file mode 100644
index 0000000..0fc6c15
--- /dev/null
+++ b/build/c/fidl_c.gni
@@ -0,0 +1,120 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/compiled_action.gni")
+import("//build/fidl/toolchain.gni")
+
+template("fidl_tables") {
+  main_target_name = target_name
+
+  fidl_target_gen_dir =
+      get_label_info(":$target_name($fidl_toolchain)", "target_gen_dir")
+  coding_tables = "$fidl_target_gen_dir/$target_name.fidl.tables.cc"
+
+  # The coding tables are generated by the FIDL frontend, so we just need to
+  # produce a target with the generated file name and configuration information.
+  source_set(main_target_name + "_tables") {
+    forward_variables_from(invoker,
+                           [
+                             "testonly",
+                             "visibility",
+                           ])
+
+    sources = [
+      coding_tables,
+    ]
+
+    deps = [
+      ":${main_target_name}_compile($fidl_toolchain)",
+    ]
+    public_deps = [
+      "//zircon/public/lib/fidl",
+    ]
+  }
+}
+
+# C simple client bindings for a FIDL library.
+#
+# The parameters for this template are defined in //build/fidl/fidl.gni. The
+# relevant parameters in this template are:
+#   name: string, name of the FIDL library
+#   type: string, 'client' or 'server'
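+#
+# Example (hypothetical library name):
+#
+#   fidl_c_target("fuchsia.example_c_client") {
+#     name = "fuchsia.example"
+#     type = "client"
+#   }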
+
+template("fidl_c_target") {
+  assert(is_fuchsia, "This template can only be used in $target_toolchain.")
+
+  type = invoker.type
+  main_target_name = target_name
+
+  library_name = invoker.name
+
+  c_stem = string_replace(library_name, ".", "/") + "/c/fidl"
+  fidl_root_gen_dir =
+      get_label_info(":$target_name($fidl_toolchain)", "root_gen_dir")
+  c_header = "$fidl_root_gen_dir/$c_stem.h"
+  c_file = "$fidl_root_gen_dir/$c_stem.$type.c"
+
+  # The C simple $type code is generated by the frontend, so we just need to
+  # produce a target with the generated file name and configuration information.
+  source_set(main_target_name + "_c_" + type) {
+    forward_variables_from(invoker,
+                           [
+                             "testonly",
+                             "visibility",
+                           ])
+
+    sources = [
+      c_file,
+    ]
+    public = [
+      c_header,
+    ]
+
+    # Let dependencies use `#include "$c_stem.h"`.
+    public_configs = [ "//build/cpp:fidl_gen_config" ]
+
+    deps = [
+      ":${main_target_name}_compile($fidl_toolchain)",
+      ":${main_target_name}_tables",
+    ]
+    public_deps = [
+      "//zircon/public/lib/fidl",
+    ]
+    libs = [ "zircon" ]
+  }
+}
+
+template("fidl_c_client") {
+  library_name = target_name
+  if (defined(invoker.name)) {
+    library_name = invoker.name
+  }
+  forward_variables_from(invoker,
+                         [
+                           "testonly",
+                           "visibility",
+                         ])
+
+  fidl_c_target(target_name) {
+    name = library_name
+    type = "client"
+  }
+}
+
+template("fidl_c_server") {
+  library_name = target_name
+  if (defined(invoker.name)) {
+    library_name = invoker.name
+  }
+
+  forward_variables_from(invoker,
+                         [
+                           "testonly",
+                           "visibility",
+                         ])
+  fidl_c_target(target_name) {
+    name = library_name
+    type = "server"
+  }
+}
diff --git a/build/cat.py b/build/cat.py
new file mode 100755
index 0000000..2e0c1a8
--- /dev/null
+++ b/build/cat.py
@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import sys
+
+def parse_args():
+    parser = argparse.ArgumentParser(description='Concat files.')
+    parser.add_argument('-i', action='append', dest='inputs', default=[],
+                          help='Input files', required=True)
+    parser.add_argument('-o', dest='output', help='Output file', required=True)
+    args = parser.parse_args()
+    return args
+
+def main():
+    args = parse_args()
+    with open(args.output, 'w') as outfile:
+        for fname in args.inputs:
+            with open(fname) as infile:
+                outfile.write(infile.read())
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/cat.sh b/build/cat.sh
new file mode 100755
index 0000000..9c9428b
--- /dev/null
+++ b/build/cat.sh
@@ -0,0 +1,21 @@
+#!/bin/sh
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# cat.sh output_file [file, ...]
+# cat.sh concatenates all file arguments into output_file, separated by lines.
+# cat.sh elides empty lines (in order to avoid breaking the symbolizer)
+
+# Note: `cat "$@" > "${output}"` is not equivalent - that does not produce new
+# lines between inputs.
+
+readonly output="$1"
+shift 1
+for file in "$@"
+do
+  val="$(<"${file}")"
+  if [ -n "${val}" ]; then
+    echo "${val}"
+  fi
+done > "${output}"
diff --git a/build/cipd-update.sh b/build/cipd-update.sh
new file mode 100755
index 0000000..16ea495
--- /dev/null
+++ b/build/cipd-update.sh
@@ -0,0 +1,26 @@
+#!/usr/bin/env bash
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+set -ex
+
+readonly SCRIPT_ROOT="$(cd $(dirname ${BASH_SOURCE[0]} ) && pwd)"
+readonly FUCHSIA_ROOT="${SCRIPT_ROOT}/.."
+readonly BUILD_ROOT="${FUCHSIA_ROOT}/build"
+readonly BUILDTOOLS_DIR="${FUCHSIA_ROOT}/buildtools"
+readonly CIPD="${BUILDTOOLS_DIR}/cipd"
+
+INTERNAL_ACCESS=false
+if [[ "$(${CIPD} ls fuchsia_internal)" != "No matching packages." ]]; then
+  INTERNAL_ACCESS=true
+fi
+echo "internal_access = ${INTERNAL_ACCESS}" > "${SCRIPT_ROOT}/cipd.gni"
+
+declare -a ENSURE_FILES=("${SCRIPT_ROOT}/cipd.ensure")
+if $INTERNAL_ACCESS; then
+  ENSURE_FILES+=("${SCRIPT_ROOT}/cipd_internal.ensure")
+fi
+
+(sed '/^\$/!d' "${ENSURE_FILES[@]}" && sed '/^\$/d' "${ENSURE_FILES[@]}") |
+  ${CIPD} ensure -ensure-file - -root "${BUILD_ROOT}" -log-level warning
diff --git a/build/cipd.ensure b/build/cipd.ensure
new file mode 100644
index 0000000..47b60ae
--- /dev/null
+++ b/build/cipd.ensure
@@ -0,0 +1,22 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# Note: changes to this file will require updating cipd.versions, which
+# can be done by running `./cipd ensure-file-resolve -ensure-file cipd.ensure`
+# from this directory.
+
+$ResolvedVersions cipd.versions
+
+# This tells CIPD to fix up manually deleted files.
+$ParanoidMode CheckPresence
+
+$VerifiedPlatform linux-amd64
+$VerifiedPlatform mac-amd64
+
+# Linux sysroot
+@Subdir third_party/sysroot/linux-x64
+fuchsia/sysroot/linux-amd64 git_revision:a4aaacde9d37ccf91a0f8dc8267cb7ad5d9be283
+
+@Subdir third_party/sysroot/linux-arm64
+fuchsia/sysroot/linux-arm64 git_revision:a4aaacde9d37ccf91a0f8dc8267cb7ad5d9be283
diff --git a/build/cipd.versions b/build/cipd.versions
new file mode 100644
index 0000000..b7b165f
--- /dev/null
+++ b/build/cipd.versions
@@ -0,0 +1,10 @@
+# This file is auto-generated by 'cipd ensure-file-resolve'.
+# Do not modify manually. All changes will be overwritten.
+
+fuchsia/sysroot/linux-amd64
+	git_revision:a4aaacde9d37ccf91a0f8dc8267cb7ad5d9be283
+	qIDZyI1rOQR7INRUL5vxltcQ9sAmdu3-FrI-Wcf_OVwC
+
+fuchsia/sysroot/linux-arm64
+	git_revision:a4aaacde9d37ccf91a0f8dc8267cb7ad5d9be283
+	B1w-KMj3c3107ZQ3i05ID0Keq_kDPXLmwAkwzn2LmTIC
diff --git a/build/cipd_internal.ensure b/build/cipd_internal.ensure
new file mode 100644
index 0000000..8e27530
--- /dev/null
+++ b/build/cipd_internal.ensure
@@ -0,0 +1,5 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+$ParanoidMode CheckPresence
diff --git a/build/cmx/cmx.gni b/build/cmx/cmx.gni
new file mode 100644
index 0000000..f6bd428
--- /dev/null
+++ b/build/cmx/cmx.gni
@@ -0,0 +1,261 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/compiled_action.gni")
+import("//build/json/validate_json.gni")
+
+# Given a .cmx file, validates the module facet of it
+#
+# Parameters
+#   cmx (required)
+#     [file] The path to the .cmx file to validate.
+#
+#   deps (optional)
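+#
+# Example of usage (target and file names are hypothetical):
+#
+#   cmx_module_validate("validate_module_facet") {
+#     cmx = "meta/my_component.cmx"
+#   }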
+template("cmx_module_validate") {
+  module_facet_validation = target_name + "_module_facet"
+
+  # Validate the |fuchsia.module| facet schema.
+  validate_json(module_facet_validation) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "public_deps",
+                             "testonly",
+                             "visibility",
+                           ])
+
+    data = invoker.cmx
+    schema = "//build/cmx/facets/module_facet_schema.json"
+  }
+}
+
+# Validates a cmx file
+#
+# The cmx_validate template will ensure that a given cmx file is conformant to
+# the cmx schema, as defined by //garnet/bin/cmc/schema.json. A stamp file is
+# generated to mark that a given cmx file has passed.
+#
+# Parameters
+#
+#   data (required)
+#     [file] The path to the cmx file that is to be validated
+#
+#   deps (optional)
+#   public_deps (optional)
+#   testonly (optional)
+#   visibility (optional)
+#     Standard GN meaning.
+#
+# Example of usage:
+#
+#   cmx_validate(format) {
+#     data = meta.path
+#   }
+template("cmx_validate") {
+  compiled_action(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "sources",
+                             "public_deps",
+                             "testonly",
+                             "visibility",
+                           ])
+
+    tool = "//garnet/bin/cmc"
+    tool_output_name = "cmc"
+
+    stamp_file = "$target_gen_dir/$target_name.verified"
+
+    inputs = [
+      invoker.data,
+    ]
+
+    outputs = [
+      stamp_file,
+    ]
+
+    args = [
+      "--stamp",
+      rebase_path(stamp_file),
+      "validate",
+      rebase_path(invoker.data),
+    ]
+  }
+  cmx_module_validate("module_" + target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "testonly",
+                             "visibility",
+                           ])
+    cmx = invoker.data
+  }
+}
+
+# Compiles a cml file
+#
+# The cm_compile template will compile a cml file into a cm file. It will
+# pretty-print the given cm file if is_debug is set to true.
+#
+# Parameters
+#
+#   data (required)
+#     [file] The path to the cml file that is to be compiled.
+#
+#   deps (optional)
+#   public_deps (optional)
+#   testonly (optional)
+#   visibility (optional)
+#     Standard GN meaning.
+#
+# Example of usage:
+#
+#   cm_compile(format) {
+#     data = rebase_path(meta.path)
+#   }
+template("cm_compile") {
+  compiled_action(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "public_deps",
+                             "testonly",
+                             "visibility",
+                           ])
+    tool = "//garnet/bin/cmc/"
+    tool_output_name = "cmc"
+
+    compiled_output = "$target_out_dir/$target_name"
+    inputs = [
+      invoker.data,
+    ]
+    outputs = [
+      compiled_output,
+    ]
+
+    args = [
+      "compile",
+      "--output",
+      rebase_path(compiled_output),
+      invoker.data,
+    ]
+
+    if (is_debug) {
+      args += [ "--pretty" ]
+    }
+  }
+}
+
+# Merges together cmx files
+#
+# The cmx_merge template will combine the given cmx files into a single cmx
+# file.
+#
+# Parameters
+#
+#   sources (required)
+#     [list of files] A list of cmx files that are to be merged.
+#
+#   deps (optional)
+#   public_deps (optional)
+#   testonly (optional)
+#   visibility (optional)
+#     Standard GN meaning.
+#
+# Example of usage:
+#
+#   cmx_merge(format) {
+#     sources = [
+#       rebase_path(meta.path),
+#       rebase_path(
+#           "//topaz/runtime/dart_runner/meta/aot${product_suffix}_runtime"),
+#     ]
+#   }
+template("cmx_merge") {
+  compiled_action(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "sources",
+                             "public_deps",
+                             "testonly",
+                             "visibility",
+                           ])
+
+    tool = "//garnet/bin/cmc"
+    tool_output_name = "cmc"
+
+    merged_output = "$target_out_dir/$target_name"
+    inputs = invoker.sources
+    outputs = [
+      merged_output,
+    ]
+
+    args = [
+      "merge",
+      "--output",
+      rebase_path(merged_output),
+    ]
+
+    foreach(source, sources) {
+      args += [ rebase_path(source, root_build_dir) ]
+    }
+  }
+}
+
+# Formats a cmx file
+#
+# The cmx_format template will minify the given cmx file if is_debug is set to
+# false, and will pretty-print the given cmx file if is_debug is set to true.
+#
+# Parameters
+#
+#   data (required)
+#     [file] The path to the cmx file that is to be formatted
+#
+#   deps (optional)
+#   public_deps (optional)
+#   testonly (optional)
+#   visibility (optional)
+#     Standard GN meaning.
+#
+# Example of usage:
+#
+#   cmx_format(format) {
+#     data = rebase_path(meta.path)
+#   }
+template("cmx_format") {
+  compiled_action(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "public_deps",
+                             "testonly",
+                             "visibility",
+                           ])
+
+    tool = "//garnet/bin/cmc"
+    tool_output_name = "cmc"
+
+    formatted_output = "$target_out_dir/$target_name"
+    inputs = [
+      invoker.data,
+    ]
+    outputs = [
+      formatted_output,
+    ]
+
+    args = [
+      "format",
+      "--output",
+      rebase_path(formatted_output),
+      invoker.data,
+    ]
+
+    if (is_debug) {
+      args += [ "--pretty" ]
+    }
+  }
+}
diff --git a/build/cmx/facets/module_facet_schema.json b/build/cmx/facets/module_facet_schema.json
new file mode 100644
index 0000000..16acc51
--- /dev/null
+++ b/build/cmx/facets/module_facet_schema.json
@@ -0,0 +1,121 @@
+{
+  "$schema": "http://json-schema.org/schema#",
+  "title": "Schema for a .cmx's `fuchsia.module` facet",
+  "definitions": {
+    "facets": {
+      "type": "object",
+      "properties": {
+        "fuchsia.module": {
+          "$ref": "#/definitions/moduleFacet"
+        }
+      }
+    },
+    "moduleFacet": {
+      "type": "object",
+      "properties": {
+        "suggestion_headline": {
+          "type": "string"
+        },
+        "intent_filters": {
+          "$ref": "#/definitions/intentFilterArray"
+        },
+        "composition_pattern": {
+          "$ref": "#/definitions/compositionPattern"
+        },
+        "action": {
+          "type": "string"
+        },
+        "parameters": {
+          "$ref": "#/definitions/parameterArray"
+        },
+        "@version": {
+          "type": "integer"
+        },
+        "placeholder_color": {
+          "$ref": "#definitions/hexColor"
+        }
+      },
+      "dependencies": {
+        "intent_filters": {
+          "required": [
+            "@version"
+          ]
+        }
+      },
+      "additionalProperties": false
+    },
+    "intentFilterArray": {
+      "type": "array",
+      "items": {
+        "$ref": "#/definitions/intentFilter"
+      },
+      "additionalItems": false,
+      "uniqueItems": true,
+      "minItems": 1
+    },
+    "intentFilter": {
+      "type": "object",
+      "properties": {
+        "action": {
+          "type": "string"
+        },
+        "parameters": {
+          "$ref": "#/definitions/parameterArray"
+        }
+      },
+      "required": [
+        "action",
+        "parameters"
+      ]
+    },
+    "parameterArray": {
+      "type": "array",
+      "items": {
+        "$ref": "#/definitions/parameter"
+      },
+      "additionalItems": false,
+      "uniqueItems": true
+    },
+    "parameter": {
+      "type": "object",
+      "properties": {
+        "name": {
+          "$ref": "#/definitions/alphaNumString"
+        },
+        "type": {
+          "type": "string"
+        },
+        "required": {
+          "type": "boolean"
+        }
+      },
+      "required": [
+        "name",
+        "type"
+      ],
+      "additionalProperties": false
+    },
+    "alphaNumString": {
+      "type": "string",
+      "pattern": "^[a-zA-Z0-9_]+$"
+    },
+    "compositionPattern": {
+      "type": "string",
+      "enum": [
+        "ticker",
+        "comments-right"
+      ]
+    },
+    "hexColor": {
+      "type": "string",
+      "pattern": "^#([A-Fa-f0-9]{6})$"
+    }
+  },
+  "type": "object",
+  "properties": {
+    "facets": {
+      "$ref": "#/definitions/facets"
+    }
+  },
+  "additionalProperties": true
+}
\ No newline at end of file
diff --git a/build/compiled_action.gni b/build/compiled_action.gni
new file mode 100644
index 0000000..4a5b8c8
--- /dev/null
+++ b/build/compiled_action.gni
@@ -0,0 +1,146 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/clang/clang.gni")
+
+# This file introduces two related templates that act like action and
+# action_foreach but instead of running a script, it will compile a given tool
+# in the host toolchain and run that (either once or over the list of inputs,
+# depending on the variant).
+#
+# Parameters
+#
+#   tool (required)
+#       [label] Label of the tool to run. This should be an executable, and
+#       this label should not include a toolchain (anything in parens). The
+#       host compile of this tool will be used.
+#
+#   tool_output_name (optional)
+#       [string] The `output_name` in the `executable()` for `tool`.
+#       This defaults to the `tool` target's name.
+#
+#   outputs (required)
+#       [list of files] Like the outputs of action (if using "compiled_action",
+#       this would be just the list of outputs), or action_foreach (if using
+#       "compiled_action_foreach", this would contain source expansions mapping
+#       input to output files).
+#
+#   args (required)
+#       [list of strings] Same meaning as action/action_foreach.
+#
+#   inputs (optional)
+#   sources (optional)
+#       Files the binary takes as input. The step will be re-run whenever any
+#       of these change. If inputs is empty, the step will run only when the
+#       binary itself changes.
+#
+#   depfile (optional)
+#   deps (optional)
+#   public_deps (optional)
+#   testonly (optional)
+#   visibility (optional)
+#       Same meaning as action/action_foreach.
+#
+# Example of usage:
+#
+#   compiled_action("run_my_tool") {
+#     tool = "//tools/something:mytool"
+#     outputs = [
+#       "$target_gen_dir/mysource.cc",
+#       "$target_gen_dir/mysource.h",
+#     ]
+#
+#     # The tool takes this input.
+#     sources = [ "my_input_file.idl" ]
+#
+#     # In this case, the tool takes as arguments the input file and the output
+#     # build dir (both relative to the "cd" that the script will be run in)
+#     # and will produce the output files listed above.
+#     args = [
+#       rebase_path("my_input_file.idl", root_build_dir),
+#       "--output-dir", rebase_path(target_gen_dir, root_build_dir),
+#     ]
+#   }
+#
+# You would typically declare your tool like this:
+#   if (host_toolchain == current_toolchain) {
+#     executable("mytool") {
+#       ...
+#     }
+#   }
+#
+# The if statement around the executable is optional. That says "I only care
+# about this target in the host toolchain". Usually this is what you want, and
+# saves unnecessarily compiling your tool for the target platform. But if you
+# need a target build of your tool as well, just leave off the if statement.
+template("_compiled_action_target") {
+  assert(defined(invoker.tool), "tool must be defined for $target_name")
+  assert(defined(invoker.outputs), "outputs must be defined for $target_name")
+  assert(defined(invoker.args), "args must be defined for $target_name")
+
+  target(invoker._target_type, target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "depfile",
+                             "deps",
+                             "inputs",
+                             "outputs",
+                             "public_deps",
+                             "sources",
+                             "testonly",
+                             "tool_output_name",
+                             "visibility",
+                           ])
+    if (!defined(deps)) {
+      deps = []
+    }
+    if (!defined(inputs)) {
+      inputs = []
+    }
+
+    script = "//build/gn_run_binary.sh"
+
+    # Construct the host toolchain version of the tool.
+    host_tool = "${invoker.tool}($host_toolchain)"
+
+    # Get the path to the executable.
+    if (!defined(tool_output_name)) {
+      tool_output_name = get_label_info(host_tool, "name")
+    }
+    tool_out_dir = get_label_info(host_tool, "root_out_dir")
+    host_executable = "$tool_out_dir/$tool_output_name"
+
+    # Add the executable itself as an input.
+    inputs += [ host_executable ]
+
+    deps += [ host_tool ]
+
+    # The script takes as arguments Clang bin directory (for passing
+    # llvm-symbolizer to runtimes), the binary to run, and then the
+    # arguments to pass it.
+    args = [
+             clang_prefix,
+             rebase_path(host_executable, root_build_dir),
+           ] + invoker.args
+  }
+}
+
+# See _compiled_action_target().
+template("compiled_action") {
+  _compiled_action_target(target_name) {
+    _target_type = "action"
+    forward_variables_from(invoker, [ "visibility" ])
+    forward_variables_from(invoker, "*", [ "visibility" ])
+  }
+}
+
+# See _compiled_action_target().
+template("compiled_action_foreach") {
+  _compiled_action_target(target_name) {
+    _target_type = "action_foreach"
+    forward_variables_from(invoker, [ "visibility" ])
+    forward_variables_from(invoker, "*", [ "visibility" ])
+  }
+}
diff --git a/build/config/BUILD.gn b/build/config/BUILD.gn
new file mode 100644
index 0000000..aaee03d
--- /dev/null
+++ b/build/config/BUILD.gn
@@ -0,0 +1,257 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/compiler.gni")
+
+declare_args() {
+  if (is_fuchsia) {
+    # Controls whether the compiler emits full stack frames for function calls.
+    # This reduces performance but increases the ability to generate good
+    # stack traces, especially when we have bugs around unwind table generation.
+    # It applies only for Fuchsia targets (see below where it is unset).
+    #
+    # TODO(ZX-2361): Theoretically unwind tables should be good enough so we can
+    # remove this option when the issues are addressed.
+    enable_frame_pointers = is_debug
+  }
+}
+
+# No frame pointers for host compiles.
+if (!is_fuchsia) {
+  enable_frame_pointers = false
+}
+
+config("compiler") {
+  asmflags = []
+  cflags = [ "-fcolor-diagnostics" ]
+  cflags_c = []
+  cflags_cc = [ "-fvisibility-inlines-hidden" ]
+  cflags_objc = []
+  cflags_objcc = [ "-fvisibility-inlines-hidden" ]
+  ldflags = []
+  defines = []
+  configs = []
+
+  if (current_os == "fuchsia") {
+    configs += [ "//build/config/fuchsia:compiler" ]
+  } else {
+    cflags_c += [ "-std=c11" ]
+    cflags_cc += [
+      "-std=c++17",
+      "-stdlib=libc++",
+    ]
+    if (current_os == "linux") {
+      configs += [ "//build/config/linux:compiler" ]
+    } else if (current_os == "mac") {
+      configs += [ "//build/config/mac:compiler" ]
+    }
+  }
+
+  # The linker on macOS does not support `--color-diagnostics`.
+  if (current_os != "mac") {
+    ldflags += [ "-Wl,--color-diagnostics" ]
+  }
+
+  asmflags += cflags
+  asmflags += cflags_c
+}
+
+config("relative_paths") {
+  # Make builds independent of absolute file path.  The file names
+  # embedded in debugging information will be expressed as relative to
+  # the build directory, e.g. "../.." for an "out/subdir" under //.
+  # This is consistent with the file names in __FILE__ expansions
+  # (e.g. in assertion messages), which the compiler doesn't provide a
+  # way to remap.  That way source file names in logging and
+  # symbolization can all be treated the same way.  This won't go well
+  # if root_build_dir is not a subdirectory of //, but there isn't a better
+  # option to keep all source file name references uniformly relative to
+  # a single root.
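+  #
+  # For example (hypothetical paths): with a checkout at /work/fuchsia and
+  # root_build_dir //out/x64, the source file /work/fuchsia/src/foo.cc is
+  # recorded in the DWARF info as ../../src/foo.cc.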
+  absolute_path = rebase_path("//.")
+  relative_path = rebase_path("//.", root_build_dir)
+  cflags = [
+    # This makes sure that the DW_AT_comp_dir string (the current
+    # directory while running the compiler, which is the basis for all
+    # relative source file names in the DWARF info) is represented as
+    # relative to //.
+    "-fdebug-prefix-map=$absolute_path=$relative_path",
+
+    # This makes sure that include directories in the toolchain are
+    # represented as relative to the build directory (because that's how
+    # we invoke the compiler), rather than absolute.  This can affect
+    # __FILE__ expansions (e.g. assertions in system headers).  We
+    # normally run a compiler that's someplace within the source tree
+    # (//buildtools/...), so its absolute installation path will have a
+    # prefix matching absolute_path and hence be mapped to relative_path
+    # in the debugging information, so this should actually be
+    # superfluous for purposes of the debugging information.
+    "-no-canonical-prefixes",
+  ]
+}
+
+config("debug") {
+  cflags = [ "-O0" ]
+  ldflags = cflags
+}
+
+config("release") {
+  defines = [ "NDEBUG=1" ]
+  cflags = [
+    "-O3",
+    "-fdata-sections",
+    "-ffunction-sections",
+  ]
+  ldflags = cflags
+  if (current_os == "mac") {
+    ldflags += [ "-Wl,-dead_strip" ]
+  } else {
+    ldflags += [ "-Wl,--gc-sections" ]
+  }
+}
+
+config("exceptions") {
+  cflags_cc = [ "-fexceptions" ]
+  cflags_objcc = cflags_cc
+}
+
+config("no_exceptions") {
+  cflags_cc = [ "-fno-exceptions" ]
+  cflags_objcc = cflags_cc
+}
+
+config("rtti") {
+  cflags_cc = [ "-frtti" ]
+  cflags_objcc = cflags_cc
+}
+
+config("no_rtti") {
+  cflags_cc = [ "-fno-rtti" ]
+  cflags_objcc = cflags_cc
+}
+
+config("default_include_dirs") {
+  include_dirs = [
+    "//",
+    root_gen_dir,
+  ]
+}
+
+config("minimal_symbols") {
+  cflags = [ "-gline-tables-only" ]
+  asmflags = cflags
+  ldflags = cflags
+}
+
+config("symbols") {
+  cflags = [ "-g3" ]
+  asmflags = cflags
+  ldflags = cflags
+}
+
+config("no_symbols") {
+  cflags = [ "-g0" ]
+  asmflags = cflags
+  ldflags = cflags
+}
+
+# Default symbols.
+config("default_symbols") {
+  if (symbol_level == 0) {
+    configs = [ ":no_symbols" ]
+  } else if (symbol_level == 1) {
+    configs = [ ":minimal_symbols" ]
+  } else if (symbol_level == 2) {
+    configs = [ ":symbols" ]
+  } else {
+    assert(symbol_level >= 0 && symbol_level <= 2)
+  }
+}
+
+config("default_frame_pointers") {
+  if (enable_frame_pointers) {
+    configs = [ ":frame_pointers" ]
+  } else {
+    configs = [ ":no_frame_pointers" ]
+  }
+}
+
+config("frame_pointers") {
+  cflags = [ "-fno-omit-frame-pointer" ]
+}
+
+config("no_frame_pointers") {
+  cflags = [ "-fomit-frame-pointer" ]
+}
+
+config("default_warnings") {
+  cflags = [
+    "-Wall",
+    "-Wextra",
+    "-Wno-unused-parameter",
+  ]
+  if (current_os == "fuchsia") {
+    cflags += [
+      # TODO(TO-99): Remove once all the cases of unused 'this' lambda capture
+      # have been removed from our codebase.
+      "-Wno-unused-lambda-capture",
+
+      # TODO(TO-100): Remove once comparator types provide const call operator.
+      "-Wno-user-defined-warnings",
+    ]
+  }
+}
+
+config("symbol_visibility_hidden") {
+  # Disable libc++ visibility annotations to make sure that the compiler option
+  # has effect on symbols defined in libc++ headers. Note that we don't want to
+  # disable these annotations altogether to ensure that our toolchain is usable
+  # outside of our build since not every user uses hidden visibility by default.
+  defines = [ "_LIBCPP_DISABLE_VISIBILITY_ANNOTATIONS" ]
+  cflags = [ "-fvisibility=hidden" ]
+}
+
+config("symbol_no_undefined") {
+  if (current_os == "mac") {
+    ldflags = [ "-Wl,-undefined,error" ]
+  } else {
+    ldflags = [ "-Wl,--no-undefined" ]
+  }
+}
+
+config("shared_library_config") {
+  configs = []
+  cflags = []
+
+  if (current_os == "fuchsia") {
+    configs += [ "//build/config/fuchsia:shared_library_config" ]
+  } else if (current_os == "linux") {
+    cflags += [ "-fPIC" ]
+  } else if (current_os == "mac") {
+    configs += [ "//build/config/mac:mac_dynamic_flags" ]
+  }
+}
+
+config("executable_config") {
+  configs = []
+
+  if (current_os == "fuchsia") {
+    configs += [
+      "//build/config/fuchsia:executable_config",
+      "//build/config/fuchsia:fdio_config",
+    ]
+  } else if (current_os == "mac") {
+    configs += [
+      "//build/config/mac:mac_dynamic_flags",
+      "//build/config/mac:mac_executable_flags",
+    ]
+  }
+}
+
+config("default_libs") {
+  configs = []
+
+  if (current_os == "mac") {
+    configs += [ "//build/config/mac:default_libs" ]
+  }
+}
diff --git a/build/config/BUILDCONFIG.gn b/build/config/BUILDCONFIG.gn
new file mode 100644
index 0000000..b837b29
--- /dev/null
+++ b/build/config/BUILDCONFIG.gn
@@ -0,0 +1,983 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# The GN files in //third_party/flutter all use $flutter_root/
+# in place of // to refer to the root of the flutter source tree.
+flutter_root = "//third_party/flutter"
+
+declare_args() {
+  # Debug build.
+  is_debug = true
+}
+
+if (target_os == "") {
+  target_os = "fuchsia"
+}
+if (target_cpu == "") {
+  target_cpu = host_cpu
+}
+target_platform = "${target_os}-${target_cpu}"
+if (current_cpu == "") {
+  current_cpu = target_cpu
+}
+if (current_os == "") {
+  current_os = target_os
+}
+current_platform = "${current_os}-${current_cpu}"
+
+host_platform = "${host_os}-${host_cpu}"
+
+if (target_os == "fuchsia") {
+  target_toolchain = "//build/toolchain/fuchsia:${target_cpu}"
+} else {
+  assert(false, "Target OS not supported")
+}
+
+if (host_os == "linux" || host_os == "mac") {
+  host_toolchain = "//build/toolchain:host_${host_cpu}"
+} else {
+  assert(false, "Host OS not supported")
+}
+
+set_default_toolchain(target_toolchain)
+
+# Some projects expect a default value for sources_assignment_filter.
+sources_assignment_filter = []
+
+declare_args() {
+  # *This should never be set as a build argument.*
+  # It exists only to be set in `toolchain_args`.
+  # See //build/toolchain/clang_toolchain.gni for details.
+  # This variable is a scope giving details about the current toolchain:
+  #     `toolchain_variant.base`
+  #         [label] The "base" toolchain for this variant, *often the
+  #         right thing to use in comparisons, not `current_toolchain`.*
+  #         This is the toolchain actually referenced directly in GN
+  #         source code.  If the current toolchain is not
+  #         `shlib_toolchain` or a variant toolchain, this is the same
+  #         as `current_toolchain`.  In one of those derivative
+  #         toolchains, this is the toolchain the GN code probably
+  #         thought it was in.  This is the right thing to use in a test
+  #         like `toolchain_variant.base == target_toolchain`, rather
+  #         than comparing against `current_toolchain`.
+  #     `toolchain_variant.name`
+  #         [string] The name of this variant, as used in `variant` fields
+  #         in [`select_variant`](#select_variant) clauses.  In the base
+  #         toolchain and its `shlib_toolchain`, this is `""`.
+  #     `toolchain_variant.suffix`
+  #         [string] This is "-${toolchain_variant.name}", or "" if name is empty.
+  #     `toolchain_variant.is_pic_default`
+  #         [bool] This is true in `shlib_toolchain`.
+  # The other fields are the variant's effects as defined in
+  # [`known_variants`](#known_variants).
+  toolchain_variant = {
+    base = target_toolchain  # default toolchain
+  }
+}
+
+if (!defined(toolchain_variant.name)) {
+  # Default values describe the "null variant".
+  # All the optional fields (except `toolchain_args`) are canonicalized
+  # to their default/empty values so the code below doesn't need to have
+  # `defined(toolchain_variant.field)` checks all over.
+  toolchain_variant.name = ""
+  toolchain_variant.suffix = ""
+  toolchain_variant.configs = []
+  toolchain_variant.remove_common_configs = []
+  toolchain_variant.remove_shared_configs = []
+  toolchain_variant.deps = []
+  toolchain_variant.is_pic_default = false
+}
+
+is_android = false
+is_fuchsia = false
+is_fuchsia_host = false
+is_ios = false
+is_linux = false
+is_mac = false
+is_win = false
+is_clang = true
+is_component_build = false
+is_official_build = false
+
+# This is set to allow third party projects to configure their GN build based
+# on the knowledge that they're being built in the Fuchsia tree. In the
+# subproject this can be tested with
+#   `if (defined(is_fuchsia_tree) && is_fuchsia_tree) { ... }`
+# thus allowing configuration without requiring all users of the subproject to
+# set this variable.
+is_fuchsia_tree = true
+
+if (current_os == "fuchsia") {
+  is_fuchsia = true
+} else if (current_os == "linux") {
+  is_linux = true
+} else if (current_os == "mac") {
+  is_mac = true
+}
+
+# Some library targets may be built as a different type depending on the target
+# platform. This variable specifies the default library type for each target.
+if (is_fuchsia) {
+  default_library_type = "shared_library"
+} else {
+  default_library_type = "static_library"
+}
+
+# When we are in a variant of host_toolchain, change the value of
+# host_toolchain so that `if (current_toolchain == host_toolchain)` tests
+# still match, since that is the conventional way to detect being in host
+# context.  This means that any "...($host_toolchain)" label references
+# from inside a variant of host_toolchain will refer to the variant
+# (current_toolchain rather than host_toolchain).  To handle this, the
+# `executable()` template below will define its target in other variant
+# toolchains as a copy of the real executable.
+if (toolchain_variant.base == host_toolchain) {
+  is_fuchsia_host = true
+  host_toolchain += toolchain_variant.suffix
+}
+
+# References should use `"label($shlib_toolchain)"` rather than
+# `"label(${target_toolchain}-shared)"` or anything else.
+shlib_toolchain = "${toolchain_variant.base}${toolchain_variant.suffix}-shared"
+
+# All binary targets will get this list of configs by default.
+default_common_binary_configs = [
+  "//build/config:compiler",
+  "//build/config:relative_paths",
+  "//build/config:default_frame_pointers",
+  "//build/config:default_include_dirs",
+  "//build/config:default_symbols",
+  "//build/config:default_warnings",
+  "//build/config:no_exceptions",
+  "//build/config:no_rtti",
+  "//build/config:symbol_visibility_hidden",
+]
+
+if (is_debug) {
+  default_common_binary_configs += [ "//build/config:debug" ]
+} else {
+  default_common_binary_configs += [ "//build/config:release" ]
+}
+
+if (is_fuchsia) {
+  default_common_binary_configs += [
+    "//build/config/fuchsia:icf",
+    "//build/config/fuchsia:thread_safety_annotations",
+    "//build/config/fuchsia:werror",
+
+    # TODO(mcgrathr): Perhaps restrict this to only affected code.
+    # For now, safest to do it everywhere.
+    "//build/config/fuchsia:zircon_asserts",
+  ]
+}
+
+default_common_binary_configs += [ "//build/config/lto:default" ]
+
+# Add and remove configs specified by the variant.
+default_common_binary_configs += toolchain_variant.configs
+default_common_binary_configs -= toolchain_variant.remove_common_configs
+
+default_shared_library_configs = default_common_binary_configs + [
+                                   "//build/config:shared_library_config",
+                                   "//build/config:symbol_no_undefined",
+                                 ]
+default_shared_library_configs -= toolchain_variant.remove_shared_configs
+
+default_executable_configs = default_common_binary_configs + [
+                               "//build/config:executable_config",
+                               "//build/config:default_libs",
+                             ]
+default_executable_deps = [ "//build/config/scudo:default_for_executable" ]
+
+if (toolchain_variant.is_pic_default) {
+  default_common_binary_configs += [ "//build/config:shared_library_config" ]
+}
+
+# Apply that default list to the binary target types.
+set_defaults("source_set") {
+  configs = default_common_binary_configs
+}
+set_defaults("static_library") {
+  configs = default_common_binary_configs
+}
+set_defaults("shared_library") {
+  configs = default_shared_library_configs
+}
+set_defaults("loadable_module") {
+  configs = default_shared_library_configs
+}
+set_defaults("executable") {
+  configs = default_executable_configs
+}
+
+if (is_fuchsia) {
+  if (!toolchain_variant.is_pic_default) {
+    # In the main toolchain, shared_library just redirects to the same
+    # target in the -shared toolchain.
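+    # For example, a hypothetical //foo:bar built in toolchain T becomes a
+    # group whose public_deps point at //foo:bar(T-shared).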
+    template("shared_library") {
+      group(target_name) {
+        public_deps = [
+          ":$target_name(${current_toolchain}-shared)",
+        ]
+        forward_variables_from(invoker,
+                               [
+                                 "testonly",
+                                 "visibility",
+                               ])
+
+        # Mark all variables as not needed to suppress errors for unused
+        # variables.  The other variables normally passed to shared_library
+        # are actually used by the shared_library instantiation in the
+        # -shared toolchain, so any going truly unused will be caught there.
+        not_needed(invoker, "*")
+      }
+    }
+  } else {
+    # In the -shared toolchain, shared_library is just its normal self,
+    # but if the invoker constrained the visibility, we must make sure
+    # the dependency from the main toolchain is still allowed.
+    template("shared_library") {
+      shared_library(target_name) {
+        # Explicitly forward visibility, implicitly forward everything
+        # else.  Forwarding "*" doesn't recurse into nested scopes (to
+        # avoid copying all globals into each template invocation), so
+        # won't pick up file-scoped variables.  Normally this isn't too
+        # bad, but visibility is commonly defined at the file scope.
+        # Explicitly forwarding visibility and then excluding it from the
+        # "*" set works around this problem.  See http://crbug.com/594610
+        # for rationale on why this GN behavior is not considered a bug.
+        forward_variables_from(invoker, [ "visibility" ])
+        forward_variables_from(invoker, "*", [ "visibility" ])
+        if (defined(visibility)) {
+          visibility += [ ":$target_name" ]
+        }
+      }
+    }
+  }
+}
+
+# This is the basic "asan" variant.  Others start with this and modify.
+# See `known_variants` (below) for the meaning of fields in this scope.
+_asan_variant = {
+  configs = [ "//build/config/sanitizers:asan" ]
+  if (host_os != "fuchsia") {
+    host_only = {
+      # On most systems (not Fuchsia), the sanitizer runtimes are normally
+      # linked statically and so `-shared` links do not include them.
+      # Using `-shared --no-undefined` with sanitized code will get
+      # undefined references for the sanitizer runtime calls generated by
+      # the compiler.  It shouldn't do much harm, since the non-variant
+      # builds will catch the real undefined reference bugs.
+      remove_shared_configs = [ "//build/config:symbol_no_undefined" ]
+    }
+  }
+  toolchain_args = {
+    # -fsanitize=scudo is incompatible with -fsanitize=address.
+    use_scudo = false
+  }
+}
+
+declare_args() {
+  # List of variants that will form the basis for variant toolchains.
+  # To make use of a variant, set [`select_variant`](#select_variant).
+  #
+  # Normally this is not set as a build argument, but it serves to
+  # document the available set of variants.
+  # See also [`universal_variants`](#universal_variants).
+  # Only set this to remove all the default variants here.
+  # To add more, set [`extra_variants`](#extra_variants) instead.
+  #
+  # Each element of the list is one variant, which is a scope defining:
+  #
+  #   `configs` (optional)
+  #       [list of labels] Each label names a config that will be
+  #       automatically used by every target built in this variant.
+  #       For each config `${label}`, there must also be a target
+  #       `${label}_deps`, which each target built in this variant will
+  #       automatically depend on.  The `variant()` template is the
+  #       recommended way to define a config and its `_deps` target at
+  #       the same time.
+  #
+  #   `remove_common_configs` (optional)
+  #   `remove_shared_configs` (optional)
+  #       [list of labels] This list will be removed (with `-=`) from
+  #       the `default_common_binary_configs` list (or the
+  #       `default_shared_library_configs` list, respectively) after
+  #       all other defaults (and this variant's configs) have been
+  #       added.
+  #
+  #   `deps` (optional)
+  #       [list of labels] Added to the deps of every target linked in
+  #       this variant (as well as the automatic `${label}_deps` for
+  #       each label in configs).
+  #
+  #   `name` (required if configs is omitted)
+  #       [string] Name of the variant as used in
+  #       [`select_variant`](#select_variant) elements' `variant` fields.
+  #       It's a good idea to make it something concise and meaningful when
+  #       seen as e.g. part of a directory name under `$root_build_dir`.
+  #       If name is omitted, configs must be nonempty and the simple names
+  #       (not the full label, just the part after all `/`s and `:`s) of these
+  #       configs will be used in toolchain names (each prefixed by a "-"),
+  #       so the list of config names forming each variant must be unique
+  #       among the lists in `known_variants + extra_variants`.
+  #
+  #   `toolchain_args` (optional)
+  #       [scope] Each variable defined in this scope overrides a
+  #       build argument in the toolchain context of this variant.
+  #
+  #   `host_only` (optional)
+  #   `target_only` (optional)
+  #       [scope] This scope can contain any of the fields above.
+  #       These values are used only for host or target, respectively.
+  #       Any fields included here should not also be in the outer scope.
+  #
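+  #   A hypothetical variant scope, for illustration:
+  #
+  #     {
+  #       name = "myvariant"
+  #       configs = [ "//build/config/my:flags" ]
+  #       toolchain_args = {
+  #         my_build_arg = true
+  #       }
+  #     }
+  #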
+  known_variants = [
+    {
+      configs = [ "//build/config/lto" ]
+    },
+    {
+      configs = [ "//build/config/lto:thinlto" ]
+    },
+
+    {
+      configs = [ "//build/config/profile" ]
+    },
+
+    {
+      configs = [ "//build/config/scudo" ]
+    },
+
+    {
+      configs = [ "//build/config/sanitizers:ubsan" ]
+    },
+    {
+      configs = [
+        "//build/config/sanitizers:ubsan",
+        "//build/config/sanitizers:sancov",
+      ]
+    },
+
+    _asan_variant,
+    {
+      forward_variables_from(_asan_variant, "*")
+      configs += [ "//build/config/sanitizers:sancov" ]
+    },
+    {
+      name = "asan_no_detect_leaks"
+      forward_variables_from(_asan_variant, "*", [ "toolchain_args" ])
+      toolchain_args = {
+        forward_variables_from(_asan_variant.toolchain_args, "*")
+        asan_default_options = "detect_leaks=0"
+      }
+    },
+
+    # Fuzzer variants for various sanitizers.  -fsanitize=fuzzer results
+    # in undefined symbols in shared objects that are satisfied in the final,
+    # statically linked fuzzer.
+    {
+      # TODO(aarongreen): TC-264: Clang emits new/new[]/delete/delete[] into
+      # libfuzzer's static libc++.  Remove this when fixed.
+      forward_variables_from(_asan_variant, "*", [ "toolchain_args" ])
+      toolchain_args = {
+        forward_variables_from(_asan_variant.toolchain_args, "*")
+        asan_default_options = "alloc_dealloc_mismatch=0"
+      }
+      configs += [ "//build/config/sanitizers:fuzzer" ]
+      remove_shared_configs = [ "//build/config:symbol_no_undefined" ]
+    },
+    {
+      configs = [
+        "//build/config/sanitizers:ubsan",
+        "//build/config/sanitizers:fuzzer",
+      ]
+      remove_shared_configs = [ "//build/config:symbol_no_undefined" ]
+    },
+  ]
+
+  # Additional variant toolchain configs to support.
+  # This is just added to [`known_variants`](#known_variants).
+  extra_variants = []
+
+  # List of "universal" variants, in addition to
+  # [`known_variants`](#known_variants).  Normally this is not set as a
+  # build argument, but it serves to document the available set of
+  # variants.  These are treated just like
+  # [`known_variants`](#known_variants), but as well as being variants by
+  # themselves, these are also combined with each of
+  # [`known_variants`](#known_variants) to form additional variants,
+  # e.g. "asan-debug" or "ubsan-sancov-release".
+  universal_variants = []
+
+  # Only one of "debug" and "release" is really available as a universal
+  # variant in any given build (depending on the global setting of
+  # `is_debug`).  But this gets evaluated separately in every toolchain, so
+  # e.g. in the "release" toolchain the sense of `if (is_debug)` tests is
+  # inverted and this would list only "debug" as an available variant.  The
+  # selection logic in `variant_target()` can only work if the value of
+  # `universal_variants` it sees includes the current variant.
+  if (is_debug) {
+    universal_variants += [
+      {
+        name = "release"
+        configs = []
+        toolchain_args = {
+          is_debug = false
+        }
+      },
+    ]
+  } else {
+    universal_variants += [
+      {
+        name = "debug"
+        configs = []
+        toolchain_args = {
+          is_debug = true
+        }
+      },
+    ]
+  }
+
+  # List of short names for commonly-used variant selectors.  Normally this
+  # is not set as a build argument, but it serves to document the available
+  # set of short-cut names for variant selectors.  Each element of this list
+  # is a scope where `.name` is the short name and `.select_variant` is a
+  # list that can be spliced into [`select_variant`](#select_variant).
+  select_variant_shortcuts = [
+    {
+      name = "host_asan"
+      select_variant = []
+      select_variant = [
+        {
+          variant = "asan_no_detect_leaks"
+          host = true
+          dir = [
+            # TODO(TO-565): The yasm host tools have leaks.
+            "//third_party/yasm",
+
+            # TODO(TO-666): replace futility & cgpt with 1p tools
+            "//third_party/vboot_reference",
+            "//garnet/tools/vboot_reference",
+          ]
+        },
+        {
+          variant = "asan"
+          host = true
+        },
+      ]
+    },
+
+    # TODO(TC-241): Remove this when TC-241 is fixed.  For now, don't
+    # apply ASan to drivers because all drivers (and nothing else)
+    # use -static-libstdc++ and so hit the TC-241 problem.
+    {
+      name = "asan"
+      select_variant = []
+      select_variant = [
+        {
+          target_type = [ "driver_module" ]
+          variant = false
+        },
+        {
+          variant = "asan"
+          host = false
+        },
+      ]
+    },
+  ]
+}
+
+# Now elaborate the fixed shortcuts with implicit shortcuts for
+# each known variant.  The shortcut is just the name of the variant
+# and selects for `host=false`.
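+# For example, the variant whose configs are [ "//build/config/lto" ] gets
+# the implicit shortcut "lto", and pairing it with the "release" universal
+# variant also yields "lto-release".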
+_select_variant_shortcuts = select_variant_shortcuts
+foreach(variant, known_variants) {
+  if (defined(variant.name)) {
+    variant = variant.name
+  } else {
+    # This is how GN spells "let".
+    foreach(configs, [ variant.configs ]) {
+      variant = ""
+      foreach(config, configs) {
+        config = get_label_info(config, "name")
+        if (variant == "") {
+          variant = config
+        } else {
+          variant += "-$config"
+        }
+      }
+    }
+  }
+  _select_variant_shortcuts += [
+    {
+      name = variant
+      select_variant = []
+      select_variant = [
+        {
+          variant = name
+          host = false
+        },
+      ]
+    },
+  ]
+  foreach(universal_variant, universal_variants) {
+    _select_variant_shortcuts += [
+      {
+        name = "${variant}-${universal_variant.name}"
+        select_variant = []
+        select_variant = [
+          {
+            variant = name
+            host = false
+          },
+        ]
+      },
+    ]
+  }
+}
+foreach(variant, universal_variants) {
+  variant = variant.name
+  _select_variant_shortcuts += [
+    {
+      name = variant
+      select_variant = []
+      select_variant = [
+        {
+          variant = name
+          host = false
+        },
+      ]
+    },
+  ]
+}
+
+declare_args() {
+  # List of "selectors" to request variant builds of certain targets.
+  # Each selector specifies matching criteria and a chosen variant.
+  # The first selector in the list to match a given target determines
+  # which variant is used for that target.
+  #
+  # Each selector is either a string or a scope.  A shortcut selector is
+  # a string; it gets expanded to a full selector.  A full selector is a
+  # scope, described below.
+  #
+  # A string selector can match a name in
+  # [`select_variant_shortcuts`](#select_variant_shortcuts).  If it's not a
+  # specific shortcut listed there, then it can be the name of any variant
+  # described in [`known_variants`](#known_variants) and
+  # [`universal_variants`](#universal_variants) (and combinations thereof).
+  # A `selector` that's a simple variant name selects for every binary
+  # built in the target toolchain: `{ host=false variant=selector }`.
+  #
+  # If a string selector contains a slash, then it's `"shortcut/filename"`
+  # and selects only the binary in the target toolchain whose `output_name`
+  # matches `"filename"`, i.e. it adds `output_name=["filename"]` to each
+  # selector scope that the shortcut's name alone would yield.
+  #
+  # The scope that forms a full selector defines some of these:
+  #
+  #     variant (required)
+  #         [string or `false`] The variant that applies if this selector
+  #         matches.  This can be `false` to choose no variant, or a string
+  #         that names the variant.  See
+  #         [`known_variants`](#known_variants) and
+  #         [`universal_variants`](#universal_variants).
+  #
+  # The rest below are matching criteria.  All are optional.
+  # The selector matches if and only if all of its criteria match.
+  # If none of these is defined, then the selector always matches.
+  #
+  # The first selector in the list to match wins and then the rest of
+  # the list is ignored.  So construct more complex rules by using a
+  # "blacklist" selector with `variant=false` before a catch-all or
+  # "whitelist" selector that names a variant.
+  #
+  # Each "[strings]" criterion is a list of strings, and the criterion
+  # is satisfied if any of the strings matches against the candidate string.
+  #
+  #     host
+  #         [boolean] If true, the selector matches in the host toolchain.
+  #         If false, the selector matches in the target toolchain.
+  #
+  #     testonly
+  #         [boolean] If true, the selector matches targets with testonly=true.
+  #         If false, the selector matches targets without testonly=true.
+  #
+  #     target_type
+  #         [strings]: `"executable"`, `"loadable_module"`, or `"driver_module"`
+  #
+  #     output_name
+  #         [strings]: target's `output_name` (default: its `target_name`)
+  #
+  #     label
+  #         [strings]: target's full label with `:` (without toolchain suffix)
+  #
+  #     name
+  #         [strings]: target's simple name (label after last `/` or `:`)
+  #
+  #     dir
+  #         [strings]: target's label directory (`//dir` for `//dir:name`).
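+  #
+  # For example (the shortcut is real, the output_name is hypothetical):
+  #
+  #   select_variant = [
+  #     "host_asan",
+  #     {
+  #       variant = "asan"
+  #       output_name = [ "my_tool" ]
+  #     },
+  #   ]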
+  select_variant = []
+
+  # *This should never be set as a build argument.*
+  # It exists only to be set in `toolchain_args`.
+  # See //build/toolchain/clang_toolchain.gni for details.
+  select_variant_canonical = []
+}
+
+# Do this only once, in the default toolchain context.  Then
+# clang_toolchain_suite will just pass the results through to every
+# other toolchain via toolchain_args so the work is not repeated.
+if (toolchain_variant.base == target_toolchain && current_cpu == target_cpu &&
+    current_os == target_os && toolchain_variant.name == "" &&
+    !toolchain_variant.is_pic_default) {
+  assert(select_variant_canonical == [],
+         "`select_variant_canonical` cannot be set as a build argument")
+
+  foreach(selector, select_variant) {
+    if (selector != "$selector") {
+      # It's a scope, not a string.  Just use it as is.
+      select_variant_canonical += [ selector ]
+    } else {
+      # It's a string, not a scope.  Expand the shortcut.
+      # If there is a slash, this is "shortcut/output_name".
+      # If not, it's just "shortcut".
+      foreach(file, [ get_path_info(selector, "file") ]) {
+        if (file == selector) {
+          file = ""
+        } else {
+          selector = get_path_info(selector, "dir")
+        }
+        foreach(shortcut, _select_variant_shortcuts) {
+          # file=true stands in for "break".
+          if (file != true && selector == shortcut.name) {
+            # Found the matching shortcut.
+            if (file == "") {
+              # It applies to everything, so just splice it in directly.
+              select_variant_canonical += shortcut.select_variant
+            } else {
+              # Add each of the shortcut's clauses amended with the
+              # output_name constraint.
+              foreach(clause, shortcut.select_variant) {
+                select_variant_canonical += [
+                  {
+                    forward_variables_from(clause, "*")
+                    output_name = [ file ]
+                  },
+                ]
+              }
+            }
+            file = true
+          }
+        }
+        assert(file == true,
+               "unknown shortcut `${selector}` used in `select_variant`")
+      }
+    }
+  }
+}
+
+template("variant_target") {
+  target_type = target_name
+  target_name = invoker.target_name
+  target_invoker = {
+    # Explicitly forward visibility, implicitly forward everything else.
+    # See comment in template("shared_library") above for details.
+    forward_variables_from(invoker, [ "visibility" ])
+    forward_variables_from(invoker,
+                           "*",
+                           [
+                             "_target_type",
+                             "target_name",
+                             "visibility",
+                           ])
+
+    if (!defined(output_name)) {
+      output_name = target_name
+    }
+  }
+
+  # target_type is the real GN target type that builds the thing.
+  # selector_target_type is the name matched against target_type selectors.
+  if (defined(invoker._target_type)) {
+    selector_target_type = invoker._target_type
+  } else {
+    selector_target_type = target_type
+  }
+
+  target_label = get_label_info(":$target_name", "label_no_toolchain")
+
+  # These are not actually used in all possible if branches below,
+  # so defang GN's extremely sensitive "unused variable" errors.
+  not_needed([
+               "selector_target_type",
+               "target_invoker",
+               "target_label",
+               "target_type",
+             ])
+
+  target_variant = false
+  if (select_variant_canonical != []) {
+    # See if there is a selector that matches this target.
+    selected = false
+    foreach(selector, select_variant_canonical) {
+      # The first match wins.
+      # GN's loops don't have "break", so do nothing on later iterations.
+      if (!selected) {
+        # Expand the selector so we don't have to do a lot of defined(...)
+        # tests below.
+        select = {
+        }
+        select = {
+          target_type = []
+          output_name = []
+          label = []
+          name = []
+          dir = []
+          forward_variables_from(selector, "*")
+        }
+
+        selected = true
+        if (selected && defined(selector.host)) {
+          selected = (current_toolchain == host_toolchain) == selector.host
+        }
+
+        if (selected && defined(selector.testonly)) {
+          selected = (defined(target_invoker.testonly) &&
+                      target_invoker.testonly) == selector.testonly
+        }
+
+        if (selected && select.target_type != []) {
+          selected = false
+          candidate = selector_target_type
+          foreach(try, select.target_type) {
+            if (try == candidate) {
+              selected = true
+            }
+          }
+        }
+
+        if (selected && select.output_name != []) {
+          selected = false
+          candidate = target_invoker.output_name
+          foreach(try, select.output_name) {
+            if (try == candidate) {
+              selected = true
+            }
+          }
+        }
+
+        if (selected && select.label != []) {
+          selected = false
+          candidate = target_label
+          foreach(try, select.label) {
+            if (try == candidate) {
+              selected = true
+            }
+          }
+        }
+
+        if (selected && select.name != []) {
+          selected = false
+          candidate = get_label_info(target_label, "name")
+          foreach(try, select.name) {
+            if (try == candidate) {
+              selected = true
+            }
+          }
+        }
+
+        if (selected && select.dir != []) {
+          selected = false
+          candidate = get_label_info(target_label, "dir")
+          foreach(try, select.dir) {
+            if (try == candidate) {
+              selected = true
+            }
+          }
+        }
+
+        if (selected && selector.variant != false) {
+          target_variant = "-${selector.variant}"
+        }
+      }
+    }
+  }
+  if (target_variant == false) {
+    target_variant = ""
+  }
+
+  builder_toolchain = toolchain_variant.base + target_variant
+  if (invoker._variant_shared) {
+    builder_toolchain += "-shared"
+  }
+
+  if (current_toolchain == builder_toolchain) {
+    # This is the toolchain selected to actually build this target.
+    target(target_type, target_name) {
+      deps = []
+      forward_variables_from(target_invoker, "*")
+      deps += toolchain_variant.deps
+      foreach(config, toolchain_variant.configs) {
+        # Expand the label so it always has a `:name` part.
+        config = get_label_info(config, "label_no_toolchain")
+        deps += [ "${config}_deps" ]
+      }
+      if (defined(visibility)) {
+        # Other toolchains will define this target as a group or copy
+        # rule that depends on this toolchain's definition.  If the
+        # invoker constrained the visibility, make sure those
+        # dependencies from other toolchains are still allowed.
+        visibility += [ ":${target_name}" ]
+      }
+    }
+  } else if (current_toolchain == shlib_toolchain) {
+    # Don't copy from a variant into a -shared toolchain, because nobody
+    # looks for an executable or loadable_module there.  Instead, just
+    # forward any deps to the real target.
+    group(target_name) {
+      forward_variables_from(target_invoker,
+                             [
+                               "testonly",
+                               "visibility",
+                             ])
+      if (defined(visibility)) {
+        visibility += [ ":${target_name}" ]
+      }
+      deps = [
+        ":${target_name}(${builder_toolchain})",
+      ]
+    }
+  } else {
+    # When some variant was selected, then this target in all other
+    # toolchains is actually just this copy rule.  The target is built in
+    # the selected variant toolchain, but then copied to its usual name in
+    # $root_out_dir so that things can find it there.
+    copy_vars = {
+      forward_variables_from(target_invoker,
+                             [
+                               "testonly",
+                               "visibility",
+                             ])
+      if (defined(visibility)) {
+        visibility += [ ":${target_name}" ]
+      }
+
+      deps = [
+        ":${target_name}(${builder_toolchain})",
+      ]
+      variant_out_dir = get_label_info(deps[0], "root_out_dir")
+
+      full_output_name = target_invoker.output_name
+      if (defined(target_invoker.output_extension) &&
+          target_invoker.output_extension != "") {
+        full_output_name += ".${target_invoker.output_extension}"
+      }
+
+      sources = [
+        "$variant_out_dir/$full_output_name",
+      ]
+      outputs = [
+        "$root_out_dir/$full_output_name",
+      ]
+    }
+
+    # In the host toolchain, make a symlink rather than a hard link
+    # (which is what "copy" rules really do).  Host tools are built with
+    # an embedded shared library lookup path based on $ORIGIN on Linux
+    # (//build/config/linux:compiler) and the equivalent @loader_path on
+    # macOS (//build/config/mac:mac_dynamic_flags).  The dynamic linker
+    # translates this to "the directory containing the executable".
+    # With hard links, this gets the directory used to invoke the
+    # executable, which is host_toolchain's $root_out_dir.  With
+    # symlinks, it instead gets the directory containing the actual
+    # executable file, which is builder_toolchain's $root_out_dir.
+    # Hence the program uses the variant builds of shared libraries that
+    # go with the variant build of the executable, rather than using the
+    # vanilla host_toolchain builds with the variant executable.
+    if (current_toolchain == host_toolchain) {
+      action(target_name) {
+        forward_variables_from(copy_vars, "*")
+        script = "/bin/ln"
+        args = [
+          "-snf",
+          rebase_path(sources[0], get_path_info(outputs[0], "dir")),
+          rebase_path(outputs[0]),
+        ]
+      }
+    } else {
+      # For Fuchsia, //build/gn/variant.py depends on hard links to
+      # identify the variants.
+      copy(target_name) {
+        forward_variables_from(copy_vars, "*")
+      }
+    }
+  }
+}
+
+template("executable") {
+  _executable_name = target_name
+  _variant_shared = false
+  variant_target("executable") {
+    deps = []
+    target_name = _executable_name
+
+    # TODO(aarongreen): This shouldn't be required, but without it the
+    # fuzzers fail to link.  Investigate and remove when resolved.
+    if (defined(invoker._target_type)) {
+      _target_type = invoker._target_type
+    }
+
+    # Explicitly forward visibility, implicitly forward everything else.
+    # See comment in template("shared_library") above for details.
+    forward_variables_from(invoker, [ "visibility" ])
+    forward_variables_from(invoker, "*", [ "visibility" ])
+
+    deps += default_executable_deps
+  }
+}
+
+template("loadable_module") {
+  _module_name = target_name
+  _variant_shared = true
+  variant_target("loadable_module") {
+    target_name = _module_name
+    if (defined(invoker._target_type)) {
+      _target_type = invoker._target_type
+    }
+
+    # Explicitly forward visibility, implicitly forward everything else.
+    # See comment in template("shared_library") above for details.
+    forward_variables_from(invoker, [ "visibility" ])
+    forward_variables_from(invoker, "*", [ "visibility" ])
+    if (!defined(output_extension)) {
+      output_extension = "so"
+    }
+  }
+}
+
+# Some targets we share with Chromium declare themselves to be components,
+# which means they can build either as shared libraries or as static libraries.
+# We build them as static libraries.
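+#
+# Example (a minimal sketch; the target and source names are illustrative):
+#
+#   component("base") {
+#     # With sources: static_library; without sources: source_set.
+#     sources = [ "base.cc" ]
+#   }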
+template("component") {
+  if (!defined(invoker.sources)) {
+    # When there are no sources defined, use a source set to avoid creating
+    # an empty static library (which generally doesn't work).
+    _component_mode = "source_set"
+  } else {
+    _component_mode = "static_library"
+  }
+
+  target(_component_mode, target_name) {
+    # Explicitly forward visibility, implicitly forward everything else.
+    # See comment in template("shared_library") above for details.
+    forward_variables_from(invoker, [ "visibility" ])
+    forward_variables_from(invoker, "*", [ "visibility" ])
+  }
+}
+
+set_defaults("component") {
+  configs = default_common_binary_configs
+}
diff --git a/build/config/arm.gni b/build/config/arm.gni
new file mode 100644
index 0000000..36dc838
--- /dev/null
+++ b/build/config/arm.gni
@@ -0,0 +1,68 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+if (current_cpu == "arm" || current_cpu == "arm64") {
+  declare_args() {
+    # Version of the ARM processor when compiling on ARM. Ignored on non-ARM
+    # platforms.
+    if (current_cpu == "arm") {
+      arm_version = 7
+    } else if (current_cpu == "arm64") {
+      arm_version = 8
+    } else {
+      assert(false, "Unconfigured arm version")
+    }
+
+    # The ARM floating point mode. This is either the string "hard", "soft", or
+    # "softfp". An empty string means to use the default one for the
+    # arm_version.
+    arm_float_abi = ""
+
+    # The ARM variant-specific tuning mode. This will be a string like "armv6"
+    # or "cortex-a15". An empty string means to use the default for the
+    # arm_version.
+    arm_tune = ""
+
+    # Whether to use the NEON FPU instruction set or not.
+    arm_use_neon = true
+
+    # Whether to enable optional NEON code paths.
+    arm_optionally_use_neon = false
+  }
+
+  assert(arm_float_abi == "" || arm_float_abi == "hard" ||
+         arm_float_abi == "soft" || arm_float_abi == "softfp")
+
+  if (arm_version == 6) {
+    arm_arch = "armv6"
+    if (arm_tune != "") {
+      arm_tune = ""
+    }
+    if (arm_float_abi == "") {
+      arm_float_abi = "softfp"
+    }
+    arm_fpu = "vfp"
+
+    # Thumb is a reduced instruction set available on some ARM processors that
+    # offers increased code density.
+    arm_use_thumb = false
+  } else if (arm_version == 7) {
+    arm_arch = "armv7-a"
+    if (arm_tune == "") {
+      arm_tune = "generic-armv7-a"
+    }
+
+    if (arm_float_abi == "") {
+      arm_float_abi = "softfp"
+    }
+
+    arm_use_thumb = true
+
+    if (arm_use_neon) {
+      arm_fpu = "neon"
+    } else {
+      arm_fpu = "vfpv3-d16"
+    }
+  }
+}
diff --git a/build/config/clang/clang.gni b/build/config/clang/clang.gni
new file mode 100644
index 0000000..4945ee3
--- /dev/null
+++ b/build/config/clang/clang.gni
@@ -0,0 +1,28 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+declare_args() {
+  # The default clang toolchain provided by the buildtools. This variable is
+  # additionally consumed by the Go toolchain.
+  clang_prefix =
+      rebase_path("//buildtools/${host_platform}/clang/bin", root_build_dir)
+}
+
+if (current_cpu == "arm64") {
+  clang_cpu = "aarch64"
+} else if (current_cpu == "x64") {
+  clang_cpu = "x86_64"
+} else {
+  assert(false, "CPU not supported")
+}
+
+if (is_fuchsia) {
+  clang_target = "${clang_cpu}-fuchsia"
+} else if (is_linux) {
+  clang_target = "${clang_cpu}-linux-gnu"
+} else if (is_mac) {
+  clang_target = "${clang_cpu}-apple-darwin"
+} else {
+  assert(false, "OS not supported")
+}
diff --git a/build/config/compiler.gni b/build/config/compiler.gni
new file mode 100644
index 0000000..7ec3b1f
--- /dev/null
+++ b/build/config/compiler.gni
@@ -0,0 +1,14 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+declare_args() {
+  # How many symbols to include in the build. This affects the performance of
+  # the build since the symbols are large and dealing with them is slow.
+  #   2 means regular build with symbols.
+  #   1 means minimal symbols, usually enough for backtraces only. Symbols with
+  # internal linkage (static functions or those in anonymous namespaces) may not
+  # appear when using this level.
+  #   0 means no symbols.
+  symbol_level = 2
+}
diff --git a/build/config/fuchsia/BUILD.gn b/build/config/fuchsia/BUILD.gn
new file mode 100644
index 0000000..4864667
--- /dev/null
+++ b/build/config/fuchsia/BUILD.gn
@@ -0,0 +1,153 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/clang/clang.gni")
+import("//build/config/fuchsia/zircon.gni")
+import("//build/config/sysroot.gni")
+import("//build/toolchain/ccache.gni")
+
+assert(current_os == "fuchsia")
+
+config("werror") {
+  if (!use_ccache) {
+    cflags = [
+      "-Werror",
+
+      # Declarations marked as deprecated should not cause build failures;
+      # rather, they should emit warnings to notify developers about the use
+      # of deprecated interfaces.
+      "-Wno-error=deprecated-declarations",
+
+      # Do not add additional -Wno-error to this config.
+    ]
+  }
+}
+
+config("icf") {
+  # This changes C/C++ semantics and might be incompatible with third-party
+  # code that relies on function pointer comparison.
+  ldflags = [ "-Wl,--icf=all" ]
+}
+
+# ccache, at least in some configurations, caches preprocessed content. This
+# means that by the time the compiler sees it, macros are unrolled. A number
+# of gcc and clang diagnostics are conditioned on whether the source is part
+# of a macro or not. This is because a "reasonable" looking macro invocation
+# may end up doing something silly internally. This can mean self assignment
+# and tautological comparisons, since macros are not typed. Macros also tend
+# to over-parenthesize, and so on. This particular list of options was found
+# via trial and error, and might not be the best way of keeping the build
+# quiet.
+config("ccache") {
+  cflags = [
+    "-Wno-error",
+    "-Qunused-arguments",
+    "-Wno-parentheses-equality",
+    "-Wno-self-assign",
+    "-Wno-tautological-compare",
+    "-Wno-unused-command-line-argument",
+  ]
+  asmflags = cflags
+}
+
+config("compiler") {
+  cflags = []
+  cflags_c = [ "-std=c11" ]
+  cflags_cc = [ "-std=c++17" ]
+  ldflags = [
+    "-Wl,--threads",
+    "-Wl,--pack-dyn-relocs=relr",
+  ]
+  configs = [
+    ":compiler_sysroot",
+    ":compiler_target",
+    ":compiler_cpu",
+    ":toolchain_version_stamp",
+  ]
+  if (use_ccache) {
+    configs += [ ":ccache" ]
+  }
+  asmflags = cflags + cflags_c
+}
+
+config("toolchain_version_stamp") {
+  # We want to force a recompile and relink of the world whenever our
+  # toolchain changes since artifacts from an older version of the toolchain
+  # may or may not be compatible with newer ones.
+  # To achieve this, we insert a synthetic define into the compile line.
+  cipd_version = read_file(
+          "//buildtools/${host_platform}/clang/.versions/clang.cipd_version",
+          "json")
+  defines = [ "TOOLCHAIN_VERSION=${cipd_version.instance_id}" ]
+}
+
+config("compiler_sysroot") {
+  # The sysroot for Fuchsia is part of the Zircon build and is pointed to by
+  # the sysroot variable.
+  cflags = [ "--sysroot=${sysroot}" ]
+  ldflags = cflags
+  asmflags = cflags
+}
+
+config("compiler_target") {
+  cflags = [ "--target=$clang_target" ]
+  asmflags = cflags
+  ldflags = cflags
+}
+
+config("compiler_cpu") {
+  cflags = []
+  if (current_cpu == "x64") {
+    cflags += [
+      "-march=x86-64",
+      "-mcx16",
+    ]
+  }
+  ldflags = cflags
+  asmflags = cflags
+
+  if (current_cpu == "arm64") {
+    ldflags += [ "-Wl,--fix-cortex-a53-843419" ]
+  }
+}
+
+config("shared_library_config") {
+  cflags = [ "-fPIC" ]
+}
+
+config("fdio_config") {
+  libs = [ "fdio" ]
+
+  # TODO(pylaligand): find a better way to let executables link in fdio.
+  # Ideally their dependencies should be set up in such a way that fdio would
+  # be inherited from them.
+  lib_dirs = [ "$zircon_build_dir/system/ulib/fdio" ]
+}
+
+config("executable_config") {
+}
+
+config("thread_safety_annotations") {
+  cflags_cc = [ "-Wthread-safety" ]
+  defines = [ "_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS" ]
+}
+
+config("enable_zircon_asserts") {
+  defines = [ "ZX_DEBUGLEVEL=2" ]
+}
+
+declare_args() {
+  zircon_asserts = is_debug
+}
+
+config("zircon_asserts") {
+  if (zircon_asserts) {
+    configs = [ ":enable_zircon_asserts" ]
+  }
+}
+
+config("no_cpp_standard_library") {
+  ldflags = [ "-nostdlib++" ]
+}
+
+config("static_cpp_standard_library") {
+  ldflags = [ "-static-libstdc++" ]
+}
diff --git a/build/config/fuchsia/config.gni b/build/config/fuchsia/config.gni
new file mode 100644
index 0000000..c3c815e
--- /dev/null
+++ b/build/config/fuchsia/config.gni
@@ -0,0 +1,10 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+assert(current_os == "fuchsia")
+
+declare_args() {
+  # Path to Fuchsia SDK.
+  fuchsia_sdk = "//buildtools"
+}
diff --git a/build/config/fuchsia/rules.gni b/build/config/fuchsia/rules.gni
new file mode 100644
index 0000000..99b431d
--- /dev/null
+++ b/build/config/fuchsia/rules.gni
@@ -0,0 +1,41 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+assert(current_os == "fuchsia")
+
+# Declare a driver module target.
+#
+# This target allows you to create an object file that can be used as a driver
+# that is loaded at runtime.
+#
+# Flags: cflags, cflags_c, cflags_cc, asmflags, defines, include_dirs,
+#        ldflags, lib_dirs, libs,
+# Deps: data_deps, deps, public_deps
+# Dependent configs: all_dependent_configs, public_configs
+# General: check_includes, configs, data, inputs, output_name,
+#          output_extension, public, sources, testonly, visibility
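+#
+# Example (a minimal sketch; the target, source, and dependency names are
+# illustrative):
+#
+#   driver_module("my_driver") {
+#     sources = [ "my_driver.c" ]
+#     deps = [ "//zircon/public/lib/ddk" ]
+#   }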
+template("driver_module") {
+  loadable_module(target_name) {
+    _target_type = "driver_module"
+
+    # Explicitly forward visibility, implicitly forward everything else.
+    # See comment in //build/config/BUILDCONFIG.gn for details on this pattern.
+    forward_variables_from(invoker, [ "visibility" ])
+    forward_variables_from(invoker, "*", [ "visibility" ])
+  }
+}
+
+set_defaults("driver_module") {
+  # Sets the default configs for driver_module, which can be modified later
+  # by the invoker. This overrides the loadable_module default.
+  configs = default_shared_library_configs
+
+  # In general, drivers should not use the C++ standard library, and drivers
+  # cannot dynamically link against it. This config tells the linker to not
+  # link against the C++ standard library.
+  # Drivers that do require standard library functionality should remove this
+  # config line and add "//build/config/fuchsia:static_cpp_standard_library" to
+  # statically link it into the driver.
+  configs += [ "//build/config/fuchsia:no_cpp_standard_library" ]
+}
diff --git a/build/config/fuchsia/sdk.gni b/build/config/fuchsia/sdk.gni
new file mode 100644
index 0000000..a138ee0
--- /dev/null
+++ b/build/config/fuchsia/sdk.gni
@@ -0,0 +1,15 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+declare_args() {
+  # The directories to search for parts of the SDK.
+  #
+  # By default, we search the public directories for the various layers.
+  # In the future, we'll search a pre-built SDK as well.
+  sdk_dirs = [
+    "//garnet/public",
+    "//peridot/public",
+    "//topaz/public",
+  ]
+}
diff --git a/build/config/fuchsia/zbi.gni b/build/config/fuchsia/zbi.gni
new file mode 100644
index 0000000..0facf2a
--- /dev/null
+++ b/build/config/fuchsia/zbi.gni
@@ -0,0 +1,148 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/fuchsia/zircon.gni")
+
+# Template for assembling a Zircon Boot Image file from various inputs.
+#
+# Parameters
+#
+#   output_name (optional, default: target_name)
+#   output_extension (optional, default: "zbi")
+#       [string] These together determine the name of the output file.
+#       If `output_name` is omitted, then the name of the target is
+#       used.  If `output_extension` is "" then `output_name` is the
+#       file name; otherwise, `${output_name}.${output_extension}`;
+#       the output file is always under `root_out_dir`.
+#
+#   complete (optional, default: true)
+#       [Boolean] If true, then the output is intended to be a complete ZBI.
+#       That is, a single file that can be booted.  This will make the tool
+#       verify it has the necessary contents and finalize it appropriately.
+#
+#   compress (optional, default: true)
+#       [Boolean] If true, BOOTFS and RAMDISK payloads will be compressed.
+#
+#   inputs (optional)
+#       [list of files] Input files.  Each can be either a ZBI format
+#       file (e.g. from another `zbi` action, or the kernel ZBI), or a
+#       manifest file or directory to generate a `BOOTFS` filesystem
+#       embedded in the ZBI output.
+#
+#   cmdline (optional)
+#       [list of strings] Kernel command line text.
+#
+#   cmdline_inputs (optional)
+#       [list of files] Input files treated as kernel command line text.
+#
+#   manifest (optional)
+#       [list of string|scope] List of individual manifest entries.
+#       Each entry can be a "TARGET=SOURCE" string, or it can be a scope
+#       with `sources` and `outputs` in the style of a copy() target:
+#       `outputs[0]` is used as `TARGET` (see `gn help source_expansion`).
+#
+#   ramdisk_inputs (optional)
+#       [list of files] Input files treated as raw RAM disk images.
+#
+#   deps (usually required)
+#   visibility (optional)
+#   testonly (optional)
+#       Same as for any GN `action` target.  `deps` must list labels that
+#       produce all the `inputs`, `cmdline_inputs`, and `ramdisk_inputs`
+#       that are generated by the build (none are required for inputs that
+#       are part of the source tree).
+#
+# Each of the various kinds of input is optional, but the action will
+# fail at build time (not at `gn gen` time) if there is no input of
+# any kind.
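+#
+# Example (a minimal sketch; all target names and paths are illustrative):
+#
+#   zbi("my_image") {
+#     deps = [ ":my_bootfs_manifest" ]
+#     inputs = [ "$zircon_build_dir/zircon.zbi" ]
+#     cmdline = [ "kernel.shell=true" ]
+#   }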
+template("zbi") {
+  if (defined(invoker.output_name)) {
+    output_file = invoker.output_name
+  } else {
+    output_file = target_name
+  }
+
+  if (defined(invoker.output_extension)) {
+    if (invoker.output_extension != "") {
+      output_file += ".${invoker.output_extension}"
+    }
+  } else {
+    output_file += ".zbi"
+  }
+
+  output_file = "$root_out_dir/$output_file"
+
+  zircon_tool_action(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "visibility",
+                             "testonly",
+                           ])
+    outputs = [
+      output_file,
+    ]
+    depfile = "${output_file}.d"
+    inputs = []
+
+    tool = "zbi"
+    args = [
+      "--output=" + rebase_path(output_file, root_build_dir),
+      "--depfile=" + rebase_path(depfile, root_build_dir),
+    ]
+
+    if (!defined(invoker.complete) || invoker.complete) {
+      args += [ "--complete=" + current_cpu ]
+    }
+
+    if (defined(invoker.compress) && !invoker.compress) {
+      args += [ "--uncompressed" ]
+    }
+
+    if (defined(invoker.inputs)) {
+      args += rebase_path(invoker.inputs, root_build_dir)
+      inputs += invoker.inputs
+    }
+
+    if (defined(invoker.manifest)) {
+      foreach(entry, invoker.manifest) {
+        if (entry == "$entry") {
+          # It's a literal manifest entry string.
+          args += [ "--entry=$entry" ]
+        } else {
+          # It's a manifest entry in the style of a copy() target.
+          targets = entry.outputs
+          assert(targets == [ targets[0] ],
+                 "manifest entry outputs list must have exactly one element")
+          foreach(source, entry.sources) {
+            inputs += [ source ]
+            source_path = rebase_path(source, root_build_dir)
+            foreach(target, process_file_template([ source ], targets)) {
+              args += [ "--entry=${target}=${source_path}" ]
+            }
+          }
+        }
+      }
+    }
+
+    if (defined(invoker.ramdisk_inputs)) {
+      args += [ "--type=ramdisk" ]
+      args += rebase_path(invoker.ramdisk_inputs, root_build_dir)
+      inputs += invoker.ramdisk_inputs
+    }
+
+    if (defined(invoker.cmdline) || defined(invoker.cmdline_inputs)) {
+      args += [ "--type=cmdline" ]
+      if (defined(invoker.cmdline)) {
+        foreach(cmdline, invoker.cmdline) {
+          args += [ "--entry=$cmdline" ]
+        }
+      }
+      if (defined(invoker.cmdline_inputs)) {
+        args += rebase_path(invoker.cmdline_inputs, root_build_dir)
+        inputs += invoker.cmdline_inputs
+      }
+    }
+  }
+}
diff --git a/build/config/fuchsia/zircon.gni b/build/config/fuchsia/zircon.gni
new file mode 100644
index 0000000..76df754
--- /dev/null
+++ b/build/config/fuchsia/zircon.gni
@@ -0,0 +1,105 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/clang/clang.gni")
+
+declare_args() {
+  # Where to find Zircon's host-side tools that are run as part of the build.
+  zircon_tools_dir = "//out/build-zircon/tools"
+
+  # Zircon build directory for `target_cpu`, containing link-time `.so.abi`
+  # files that GN `deps` on //zircon/public libraries will link against.
+  # This should not be a sanitizer build.
+  zircon_build_abi_dir = "//out/build-zircon/build-$target_cpu"
+
+  # Zircon build directory for `target_cpu`, containing `.manifest` and
+  # `.zbi` files for Zircon's BOOTFS and kernel.  This provides the kernel
+  # and Zircon components used in the boot image.  It also provides the
+  # Zircon shared libraries used at runtime in Fuchsia packages.
+  #
+  # If left `""` (the default), then this is computed from
+  # [`zircon_build_abi_dir`](#zircon_build_abi_dir) and
+  # [`zircon_use_asan`](#zircon_use_asan).
+  zircon_build_dir = ""
+
+  # Zircon `USE_ASAN=true` build directory for `target_cpu` containing
+  # `bootfs.manifest` with libraries and `devhost.asan`.
+  #
+  # If left `""` (the default), then this is computed from
+  # [`zircon_build_dir`](#zircon_build_dir) and
+  # [`zircon_use_asan`](#zircon_use_asan).
+  zircon_asan_build_dir = ""
+
+  # Set this if [`zircon_build_dir`](#zircon_build_dir) was built with
+  # `USE_ASAN=true`, e.g. `//scripts/build-zircon.sh -A`.  This mainly
+  # affects the defaults for [`zircon_build_dir`](#zircon_build_dir) and
+  # [`zircon_build_abi_dir`](#zircon_build_abi_dir).  It also gets noticed
+  # by //scripts/fx commands that rebuild Zircon so that they use `-A`
+  # again next time.
+  zircon_use_asan = false
+
+  # Path to `make` binary. By default, `make` is assumed to be in the
+  # path. Used in the script that generates the Zircon build rules. N.B. this
+  # path is *not* rebased, just used as is.
+  zircon_make_path = "make"
+}
+
+if (zircon_build_dir == "") {
+  zircon_build_dir = zircon_build_abi_dir
+  if (zircon_use_asan) {
+    zircon_build_dir += "-asan"
+  }
+}
+
+# If zircon_use_asan is true, then zircon_build_dir has the ASan bits.
+# Otherwise, they need to be found elsewhere.
+if (zircon_asan_build_dir == "" && !zircon_use_asan) {
+  zircon_asan_build_dir = "${zircon_build_dir}-asan"
+}
+
+# Template for running a Zircon host tool as part of the build.
+# This is a thin wrapper to define an `action()` target.
+#
+# Parameters
+#
+#     tool (required)
+#         [string] The name of the tool, like "mkbootfs".
+#
+#     args (required)
+#         [list of strings] The arguments to pass to the tool.
+#         The tool runs with `root_build_dir` as its current directory,
+#         so any file names should be made either absolute or relative
+#         to `root_build_dir` using `rebase_path()`.
+#
+# All other parameters are exactly as for `action()`, except
+# that `script` is replaced with `tool`.
+#
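+# Example (a minimal sketch; the tool name, output, and arguments are
+# illustrative):
+#
+#   zircon_tool_action("my_bootfs") {
+#     tool = "mkbootfs"
+#     outputs = [ "$root_out_dir/my.bootfs" ]
+#     args = [
+#       "-o",
+#       rebase_path(outputs[0], root_build_dir),
+#     ]
+#   }
+#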
+template("zircon_tool_action") {
+  assert(defined(invoker.tool), "zircon_tool_action() requires `tool`")
+  assert(defined(invoker.args), "zircon_tool_action() requires `args`")
+  _tool = "$zircon_tools_dir/${invoker.tool}"
+  action(target_name) {
+    inputs = []
+    forward_variables_from(invoker,
+                           [
+                             "testonly",
+                             "visibility",
+                           ])
+    forward_variables_from(invoker,
+                           "*",
+                           [
+                             "args",
+                             "script",
+                             "tool",
+                             "testonly",
+                             "visibility",
+                           ])
+    script = "//build/gn_run_binary.sh"
+    inputs += [ _tool ]
+    args = [
+             clang_prefix,
+             rebase_path(_tool, root_build_dir),
+           ] + invoker.args
+  }
+}
diff --git a/build/config/host_byteorder.gni b/build/config/host_byteorder.gni
new file mode 100644
index 0000000..7f711ce
--- /dev/null
+++ b/build/config/host_byteorder.gni
@@ -0,0 +1,15 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# This file defines the "host_byteorder" variable.
+declare_args() {
+  host_byteorder = "undefined"
+}
+
+# Detect host byteorder
+if (host_cpu == "arm64" || host_cpu == "x64") {
+  host_byteorder = "little"
+} else {
+  assert(false, "Unsupported host CPU")
+}
diff --git a/build/config/linux/BUILD.gn b/build/config/linux/BUILD.gn
new file mode 100644
index 0000000..5f8e313
--- /dev/null
+++ b/build/config/linux/BUILD.gn
@@ -0,0 +1,57 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/clang/clang.gni")
+import("//build/config/sysroot.gni")
+
+config("compiler") {
+  cflags = []
+  cflags_c = []
+  cflags_cc = []
+  ldflags = [
+    "-static-libstdc++",
+
+    # Generate build ID for all binaries so we can use the .build-id directory
+    # scheme for debugging information. This flag is enabled by default for our
+    # toolchain and many native host toolchains, but we set it explicitly to
+    # support arbitrary host toolchains.
+    "-Wl,--build-id",
+
+    # Set rpath to find dynamically linked libraries placed next to executables
+    # in the host build directory.
+    "-Wl,-rpath=\$ORIGIN/",
+  ]
+  if (host_os == "mac") {
+    # TODO(TC-325): When building binaries for Linux on macOS, we need to use
+    # lld as a linker, hence this flag. This is not needed on Linux since our
+    # Clang is configured to use lld as a default linker, but we cannot use
+    # the same option on macOS since the default linker is currently a
+    # per-toolchain, not a per-target, option; on macOS, Clang should default
+    # to ld64. We should change Clang to make the default linker a per-target
+    # option.
+    ldflags += [ "-fuse-ld=lld" ]
+  }
+  configs = [
+    ":sysroot",
+    ":target",
+  ]
+
+  # TODO(TC-74) The implicitly linked static libc++.a depends on these.
+  libs = [
+    "dl",
+    "pthread",
+  ]
+  asmflags = cflags + cflags_c
+}
+
+config("sysroot") {
+  cflags = [ "--sysroot=$sysroot" ]
+  ldflags = cflags
+  asmflags = cflags
+}
+
+config("target") {
+  cflags = [ "--target=$clang_target" ]
+  asmflags = cflags
+  ldflags = cflags
+}
diff --git a/build/config/lto/BUILD.gn b/build/config/lto/BUILD.gn
new file mode 100644
index 0000000..a654446
--- /dev/null
+++ b/build/config/lto/BUILD.gn
@@ -0,0 +1,46 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/lto/config.gni")
+import("//build/toolchain/variant.gni")
+
+# This config is added unconditionally by BUILDCONFIG.gn to pick up the
+# global `use_lto` build argument.  For fine-grained control, leave
+# `use_lto=false` and use `select_variant` to choose the `lto` or `thinlto`
+# variant for some components.
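+#
+# For example (an args.gn sketch; both arguments are declared in
+# //build/config/lto/config.gni), a whole-build ThinLTO configuration is:
+#
+#   use_lto = true
+#   use_thinlto = true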
+config("default") {
+  if (use_lto) {
+    if (use_thinlto) {
+      configs = [ ":thinlto" ]
+    } else {
+      configs = [ ":lto" ]
+    }
+  }
+}
+
+variant("lto") {
+  common_flags = [
+    "-flto",
+
+    # Enable whole-program devirtualization and virtual constant propagation.
+    "-fwhole-program-vtables",
+  ]
+}
+
+variant("thinlto") {
+  common_flags = [ "-flto=thin" ]
+  ldflags = [
+    # The ThinLTO driver launches a number of threads in parallel, by
+    # default one per core.  We need
+    # to limit the parallelism to avoid aggressive competition between
+    # different linker jobs.
+    "-Wl,--thinlto-jobs=$thinlto_jobs",
+
+    # Set the ThinLTO cache directory which is used to cache native
+    # object files for ThinLTO incremental builds.  This directory is
+    # not managed by Ninja and has to be cleaned manually, but it is
+    # periodically garbage-collected by the ThinLTO driver.
+    "-Wl,--thinlto-cache-dir=$thinlto_cache_dir",
+  ]
+}
diff --git a/build/config/lto/config.gni b/build/config/lto/config.gni
new file mode 100644
index 0000000..f439ada
--- /dev/null
+++ b/build/config/lto/config.gni
@@ -0,0 +1,17 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+declare_args() {
+  # Use link time optimization (LTO).
+  use_lto = false
+
+  # Use ThinLTO variant of LTO if use_lto = true.
+  use_thinlto = true
+
+  # Number of parallel ThinLTO jobs.
+  thinlto_jobs = 8
+
+  # ThinLTO cache directory path.
+  thinlto_cache_dir = rebase_path("$root_out_dir/thinlto-cache", root_build_dir)
+}
diff --git a/build/config/mac/BUILD.gn b/build/config/mac/BUILD.gn
new file mode 100644
index 0000000..fd0686d
--- /dev/null
+++ b/build/config/mac/BUILD.gn
@@ -0,0 +1,54 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/sysroot.gni")
+
+config("compiler") {
+  cflags_objcc = [
+    "-std=c++14",
+    "-stdlib=libc++",
+  ]
+  configs = [ ":sysroot" ]
+}
+
+config("sysroot") {
+  cflags = [ "--sysroot=$sysroot" ]
+  ldflags = cflags
+  asmflags = cflags
+}
+
+# On Mac, this is used for everything except static libraries.
+config("mac_dynamic_flags") {
+  ldflags = [
+    "-Wl,-search_paths_first",
+    "-L.",
+
+    # Path for loading shared libraries for unbundled binaries.
+    "-Wl,-rpath,@loader_path/.",
+
+    # Path for loading shared libraries for bundled binaries.
+    # Get back from Binary.app/Contents/MacOS.
+    "-Wl,-rpath,@loader_path/../../..",
+  ]
+}
+
+# On Mac, this is used only for executables.
+config("mac_executable_flags") {
+  ldflags = [ "-Wl,-pie" ]  # Position independent.
+}
+
+# Standard libraries.
+config("default_libs") {
+  libs = [
+    "AppKit.framework",
+    "ApplicationServices.framework",
+    "Carbon.framework",
+    "CoreFoundation.framework",
+    "CoreVideo.framework",
+    "Foundation.framework",
+    "OpenGL.framework",
+    "Security.framework",
+    "IOKit.framework",
+  ]
+}
diff --git a/build/config/mac/config.gni b/build/config/mac/config.gni
new file mode 100644
index 0000000..c8d2ffd
--- /dev/null
+++ b/build/config/mac/config.gni
@@ -0,0 +1,25 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+assert(current_os == "mac")
+
+declare_args() {
+  # Minimum supported version of Mac SDK.
+  mac_sdk_min = "10.13"
+
+  # Path to Mac SDK.
+  mac_sdk_path = ""
+}
+
+find_sdk_args = [
+  "--print-sdk-path",
+  mac_sdk_min,
+]
+find_sdk_lines =
+    exec_script("//build/mac/find_sdk.py", find_sdk_args, "list lines")
+mac_sdk_version = find_sdk_lines[1]
+
+if (mac_sdk_path == "") {
+  mac_sdk_path = find_sdk_lines[0]
+}
diff --git a/build/config/profile/BUILD.gn b/build/config/profile/BUILD.gn
new file mode 100644
index 0000000..2f1d14c
--- /dev/null
+++ b/build/config/profile/BUILD.gn
@@ -0,0 +1,15 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/toolchain/variant.gni")
+
+variant("profile") {
+  common_flags = [
+    "-fprofile-instr-generate",
+    "-fcoverage-mapping",
+  ]
+
+  # The statically-linked profiling runtime depends on libzircon.
+  libs = [ "zircon" ]
+}
diff --git a/build/config/sanitizers/BUILD.gn b/build/config/sanitizers/BUILD.gn
new file mode 100644
index 0000000..5749324
--- /dev/null
+++ b/build/config/sanitizers/BUILD.gn
@@ -0,0 +1,68 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/toolchain/variant.gni")
+
+config("frame_pointers") {
+  cflags = [ "-fno-omit-frame-pointer" ]
+  ldflags = cflags
+}
+
+declare_args() {
+  # Default [AddressSanitizer](https://llvm.org/docs/AddressSanitizer.html)
+  # options (before the `ASAN_OPTIONS` environment variable is read at
+  # runtime).  This can be set as a build argument to affect most "asan"
+  # variants in `known_variants` (which see), or overridden in
+  # toolchain_args in one of those variants.  Note that setting this
+  # nonempty may conflict with programs that define their own
+  # `__asan_default_options` C function.
+  asan_default_options = ""
+}
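+
+# For example (an illustrative args.gn sketch; any ASan runtime option string
+# can be used), leak detection could be disabled for most "asan" variants:
+#
+#   asan_default_options = "detect_leaks=0"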
+
+variant("asan") {
+  common_flags = [ "-fsanitize=address" ]
+
+  # ASan wants frame pointers because it captures stack traces
+  # on allocations and such, not just on errors.
+  configs = [ ":frame_pointers" ]
+
+  if (asan_default_options != "") {
+    deps = [
+      ":asan_default_options",
+    ]
+  }
+}
+
+if (asan_default_options != "") {
+  source_set("asan_default_options") {
+    visibility = [ ":*" ]
+    sources = [
+      "asan_default_options.c",
+    ]
+    defines = [ "ASAN_DEFAULT_OPTIONS=\"${asan_default_options}\"" ]
+
+    # On Fuchsia, the ASan runtime is dynamically linked and needs to have
+    # the __asan_default_options symbol exported.  On systems where the
+    # ASan runtime is statically linked, it doesn't matter either way.
+    configs -= [ "//build/config:symbol_visibility_hidden" ]
+  }
+}
+
+variant("ubsan") {
+  common_flags = [ "-fsanitize=undefined" ]
+}
+
+variant("fuzzer") {
+  common_flags = [ "-fsanitize=fuzzer" ]
+
+  # TODO (TC-251): This shouldn't be necessary, but libzircon isn't currently
+  # linked into libFuzzer on Fuchsia.
+  if (is_fuchsia) {
+    libs = [ "zircon" ]
+  }
+}
+
+variant("sancov") {
+  common_flags = [ "-fsanitize-coverage=trace-pc-guard" ]
+}
diff --git a/build/config/sanitizers/asan_default_options.c b/build/config/sanitizers/asan_default_options.c
new file mode 100644
index 0000000..f8d9d3b
--- /dev/null
+++ b/build/config/sanitizers/asan_default_options.c
@@ -0,0 +1,14 @@
+// Copyright 2017 The Fuchsia Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include <sanitizer/asan_interface.h>
+
+// This exists to be built into every executable selected to use the
+// asan_no_detect_leaks variant.  ASan applies the options here before
+// looking at the ASAN_OPTIONS environment variable.
+const char* __asan_default_options(void) {
+  // This macro is defined by BUILD.gn from the `asan_default_options` GN
+  // build argument.
+  return ASAN_DEFAULT_OPTIONS;
+}
diff --git a/build/config/scudo/BUILD.gn b/build/config/scudo/BUILD.gn
new file mode 100644
index 0000000..5e2ddf0
--- /dev/null
+++ b/build/config/scudo/BUILD.gn
@@ -0,0 +1,54 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/scudo/scudo.gni")
+import("//build/toolchain/variant.gni")
+
+# This group is added unconditionally by BUILDCONFIG.gn to pick up the
+# global `use_scudo` build argument.  For fine-grained control, leave
+# `use_scudo=false` and use `select_variant` to choose the `scudo`
+# variant for some components.
+# This configuration will apply to the executable alone, and not any of its
+# deps. (So most code will not be compiled with Scudo, but executables will
+# be *linked* with it, which is the important thing.)
+# Enabling Scudo requires both a configuration change and an additional
+# dependency; we group them together here.
+group("default_for_executable") {
+  if (use_scudo && is_fuchsia) {
+    public_configs = [ ":scudo" ]
+    deps = [
+      ":scudo_default_options",
+    ]
+  }
+}
+
+# This defines the //build/config/scudo config that's used separately
+# when `use_scudo` is set, as well as making that config (along with
+# the deps to propagate `scudo_default_options`) into a variant.
+variant("scudo") {
+  # The variant only works by linking Scudo in.
+  # i.e., we don't support code that relies on `#if __has_feature(scudo)`.
+  ldflags = [ "-fsanitize=scudo" ]
+  deps = [
+    ":scudo_default_options",
+  ]
+}
+
+source_set("scudo_default_options") {
+  visibility = [ ":*" ]
+  if (scudo_default_options != []) {
+    sources = [
+      "scudo_default_options.c",
+    ]
+    options_string = ""
+    foreach(option, scudo_default_options) {
+      options_string += ":$option"
+    }
+    defines = [ "SCUDO_DEFAULT_OPTIONS=\"${options_string}\"" ]
+
+    # The Scudo runtime is dynamically linked and needs to have
+    # the __scudo_default_options symbol exported.
+    configs -= [ "//build/config:symbol_visibility_hidden" ]
+  }
+}
diff --git a/build/config/scudo/scudo.gni b/build/config/scudo/scudo.gni
new file mode 100644
index 0000000..7b3d586
--- /dev/null
+++ b/build/config/scudo/scudo.gni
@@ -0,0 +1,27 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+declare_args() {
+  # Enable the [Scudo](https://llvm.org/docs/ScudoHardenedAllocator.html)
+  # memory allocator.
+  use_scudo = true
+
+  # Default [Scudo](https://llvm.org/docs/ScudoHardenedAllocator.html)
+  # options (before the `SCUDO_OPTIONS` environment variable is read at
+  # runtime).  *NOTE:* This affects only components using the `scudo`
+  # variant (see GN build argument `select_variant`), and does not affect
+  # anything when the `use_scudo` build flag is set instead.
+  scudo_default_options = [
+    "abort_on_error=1",  # get stacktrace on error
+    "QuarantineSizeKb=0",  # disables quarantine
+    "ThreadLocalQuarantineSizeKb=0",  # disables quarantine
+    "DeallocationTypeMismatch=false",  # TODO(flowerhack) re-enable when US-495
+                                       # is resolved
+
+    "DeleteSizeMismatch=false",  # TODO(flowerhack) re-enable when US-495
+                                 # is resolved
+
+    "allocator_may_return_null=true",
+  ]
+}
diff --git a/build/config/scudo/scudo_default_options.c b/build/config/scudo/scudo_default_options.c
new file mode 100644
index 0000000..0fac868
--- /dev/null
+++ b/build/config/scudo/scudo_default_options.c
@@ -0,0 +1,14 @@
+// Copyright 2018 The Fuchsia Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include <sanitizer/scudo_interface.h>
+
+// This exists to be built into every executable selected to use the
+// scudo variant.  Scudo applies the options here before
+// looking at the SCUDO_OPTIONS environment variable.
+const char* __scudo_default_options(void) {
+  // This macro is defined by BUILD.gn from the `scudo_default_options` GN
+  // build argument.
+  return SCUDO_DEFAULT_OPTIONS;
+}
diff --git a/build/config/sysroot.gni b/build/config/sysroot.gni
new file mode 100644
index 0000000..1595e84
--- /dev/null
+++ b/build/config/sysroot.gni
@@ -0,0 +1,24 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+declare_args() {
+  # The absolute path of the sysroot that is used with the target toolchain.
+  target_sysroot = ""
+}
+
+if (current_os == target_os && target_sysroot != "") {
+  sysroot = target_sysroot
+} else if (is_fuchsia) {
+  _out_dir =
+      get_label_info("//zircon/public/sysroot(//build/zircon:zircon_toolchain)",
+                     "target_out_dir")
+  sysroot = rebase_path("$_out_dir/sysroot")
+} else if (is_linux) {
+  sysroot = rebase_path("//build/third_party/sysroot/${current_platform}")
+} else if (is_mac) {
+  import("//build/config/mac/config.gni")
+  sysroot = mac_sdk_path
+} else {
+  sysroot = ""
+}
diff --git a/build/cpp/BUILD.gn b/build/cpp/BUILD.gn
new file mode 100644
index 0000000..5b5fb9b
--- /dev/null
+++ b/build/cpp/BUILD.gn
@@ -0,0 +1,12 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/fidl/toolchain.gni")
+
+# See fidl_cpp.gni:fidl_cpp.  Its generated `source_set`s should be the
+# only users of this config (in their public_configs).
+config("fidl_gen_config") {
+  fidl_root_gen_dir = get_label_info("//bogus($fidl_toolchain)", "root_gen_dir")
+  include_dirs = [ fidl_root_gen_dir ]
+}
diff --git a/build/cpp/binaries.py b/build/cpp/binaries.py
new file mode 100755
index 0000000..aa2e8f0
--- /dev/null
+++ b/build/cpp/binaries.py
@@ -0,0 +1,28 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import os
+import sys
+
+sys.path.append(os.path.join(
+    os.path.dirname(__file__),
+    os.pardir,
+    "images",
+))
+import elfinfo
+
+
+def get_sdk_debug_path(binary):
+    build_id = elfinfo.get_elf_info(binary).build_id
+    return '.build-id/' + build_id[:2] + '/' + build_id[2:] + '.debug'
+
+
+# For testing.
+def main():
+    print(get_sdk_debug_path(sys.argv[1]))
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/cpp/fidl_cpp.gni b/build/cpp/fidl_cpp.gni
new file mode 100644
index 0000000..de5ebf6
--- /dev/null
+++ b/build/cpp/fidl_cpp.gni
@@ -0,0 +1,136 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/compiled_action.gni")
+import("//build/fidl/toolchain.gni")
+import("//build/sdk/sdk_atom_alias.gni")
+
+# Generates some C++ bindings for a FIDL library.
+#
+# The parameters for this template are defined in //build/fidl/fidl.gni. The
+# relevant parameters in this template are:
+#   - name;
+#   - sources;
+#   - cpp_legacy_callbacks.
+
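+# A minimal sketch of a direct invocation (in practice this template is
+# reached through the fidl() template in //build/fidl/fidl.gni; the library
+# name and source are illustrative):
+#
+#   fidl_cpp_codegen("fuchsia.example") {
+#     name = "fuchsia.example"
+#     sources = [ "example.fidl" ]
+#   }
+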
+template("fidl_cpp_codegen") {
+  not_needed(invoker, [ "sources" ])
+
+  main_target_name = target_name
+  generation_target_name = "${target_name}_cpp_generate"
+
+  library_name = target_name
+  if (defined(invoker.name)) {
+    library_name = invoker.name
+  }
+
+  fidl_root_gen_dir =
+      get_label_info(":$target_name($fidl_toolchain)", "root_gen_dir")
+
+  include_stem = string_replace(library_name, ".", "/") + "/cpp/fidl"
+  file_stem = "$fidl_root_gen_dir/$include_stem"
+
+  fidl_target_gen_dir =
+      get_label_info(":$target_name($fidl_toolchain)", "target_gen_dir")
+  json_representation = "$fidl_target_gen_dir/$target_name.fidl.json"
+
+  compiled_action(generation_target_name) {
+    forward_variables_from(invoker, [ "testonly" ])
+
+    visibility = [ ":$main_target_name" ]
+
+    tool = "//garnet/go/src/fidl:fidlgen"
+
+    inputs = [
+      json_representation,
+    ]
+
+    outputs = [
+      "$file_stem.h",
+      "$file_stem.cc",
+    ]
+
+    args = [
+      "--json",
+      rebase_path(json_representation, root_build_dir),
+      "--output-base",
+      rebase_path(file_stem, root_build_dir),
+      "--include-base",
+      rebase_path(fidl_root_gen_dir, root_build_dir),
+      "--generators",
+      "cpp",
+    ]
+
+    if (defined(invoker.cpp_legacy_callbacks) && invoker.cpp_legacy_callbacks) {
+      args += [ "--cpp-legacy-callbacks" ]
+    }
+
+    deps = [
+      ":$main_target_name($fidl_toolchain)",
+    ]
+  }
+}
+
+template("fidl_cpp") {
+  not_needed(invoker, "*")
+
+  main_target_name = target_name
+  generation_target_name = "${target_name}_cpp_generate"
+
+  library_name = target_name
+  if (defined(invoker.name)) {
+    library_name = invoker.name
+  }
+
+  fidl_root_gen_dir =
+      get_label_info(":$target_name($fidl_toolchain)", "root_gen_dir")
+
+  include_stem = string_replace(library_name, ".", "/") + "/cpp/fidl"
+  file_stem = "$fidl_root_gen_dir/$include_stem"
+
+  source_set(main_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "testonly",
+                             "visibility",
+                           ])
+
+    sources = [
+      "$file_stem.cc",
+      "$file_stem.h",
+    ]
+
+    # Let dependencies use `#include "$file_stem.h"`.
+    public_configs = [ "//build/cpp:fidl_gen_config" ]
+
+    public_deps = [
+      ":$generation_target_name($fidl_toolchain)",
+      ":$main_target_name($fidl_toolchain)",
+      ":${main_target_name}_tables",
+    ]
+
+    if (is_fuchsia) {
+      public_deps += [ "//sdk/lib/fidl/cpp" ]
+    } else {
+      public_deps += [ "//sdk/lib/fidl/cpp:cpp_base" ]
+    }
+
+    if (defined(invoker.public_deps)) {
+      public_deps += invoker.public_deps
+    }
+  }
+
+  if (defined(invoker.sdk_category) && invoker.sdk_category != "excluded") {
+    # Instead of depending on the generated bindings, set up a dependency on the
+    # original library.
+    sdk_target_name = "${main_target_name}_sdk"
+    sdk_atom_alias(sdk_target_name) {
+      atom = ":$sdk_target_name($fidl_toolchain)"
+    }
+  }
+}
diff --git a/build/cpp/fidlmerge_cpp.gni b/build/cpp/fidlmerge_cpp.gni
new file mode 100644
index 0000000..5b7a354
--- /dev/null
+++ b/build/cpp/fidlmerge_cpp.gni
@@ -0,0 +1,146 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/compiled_action.gni")
+import("//build/fidl/toolchain.gni")
+
+# Declares a source_set that contains code generated by fidlmerge from a
+# template and a FIDL JSON file.
+#
+# Parameters
+#
+#   fidl_target (required)
+#     Specifies the fidl target from which to read fidl json. For example,
+#     "//zircon/public/fidl/fuchsia-mem" for fuchsia.mem or
+#     "//sdk/fidl/fuchsia.sys" for fuchsia.sys.
+#
+#   template_path (required)
+#     Specifies the template to use to generate the source code for the
+#     source_set. For example, "//garnet/public/build/fostr/fostr.fidlmerge".
+#
+#   generated_source_base (required)
+#     The base file name from which the source_set's 'source' file names are
+#     generated. For example, "formatting".
+#
+#   generated_source_extensions (optional)
+#     The list of extensions of source_set 'source' files that will be generated
+#     and included in the source set. By default, this is [ ".cc", ".h" ]
+#
+#   options (optional)
+#     A single string with comma-separated key=value pairs.
+#
+#   amendments_path (optional)
+#     Specifies a JSON file that contains amendments to be made to the fidl
+#     model before the template is applied. For example,
+#     "//garnet/public/build/fostr/fidl/fuchsia.media/amendments.fidlmerge".
+#     See the fidlmerge README for details.
+#
+#   deps, public_deps, testonly, visibility (optional)
+#     These parameters are forwarded to the source_set.
+#
+
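+# Example (a sketch reusing the paths mentioned above; the target name is
+# illustrative):
+#
+#   fidlmerge_cpp("my_formatting") {
+#     fidl_target = "//sdk/fidl/fuchsia.sys"
+#     template_path = "//garnet/public/build/fostr/fostr.fidlmerge"
+#     generated_source_base = "formatting"
+#   }
+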
+template("fidlmerge_cpp") {
+  assert(defined(invoker.fidl_target),
+         "fidlmerge_cpp requires parameter fidl_target.")
+
+  assert(defined(invoker.template_path),
+         "fidlmerge_cpp requires parameter template_path.")
+
+  assert(defined(invoker.generated_source_base),
+         "fidlmerge_cpp requires parameter generated_source_base.")
+
+  fidl_target = invoker.fidl_target
+  template_path = invoker.template_path
+  source_base = invoker.generated_source_base
+
+  if (defined(invoker.generated_source_extensions)) {
+    generated_source_extensions = invoker.generated_source_extensions
+  } else {
+    generated_source_extensions = [
+      ".cc",
+      ".h",
+    ]
+  }
+
+  main_target_name = target_name
+  generation_target_name = "${target_name}_generate"
+
+  fidl_target_gen_dir =
+      get_label_info("$fidl_target($fidl_toolchain)", "target_gen_dir")
+  fidl_target_name = get_path_info(fidl_target_gen_dir, "file")
+  json_representation = "$fidl_target_gen_dir/$fidl_target_name.fidl.json"
+
+  include_stem = string_replace(target_gen_dir, ".", "/")
+  file_stem = "$include_stem/$source_base"
+
+  compiled_action(generation_target_name) {
+    forward_variables_from(invoker, [ "testonly" ])
+
+    visibility = [ ":$main_target_name" ]
+
+    tool = "//garnet/go/src/fidlmerge"
+
+    inputs = [
+      json_representation,
+      template_path,
+    ]
+
+    outputs = []
+    foreach(ext, generated_source_extensions) {
+      outputs += [ "$file_stem$ext" ]
+    }
+
+    args = [
+      "--template",
+      rebase_path(template_path, root_build_dir),
+      "--json",
+      rebase_path(json_representation, root_build_dir),
+      "--output-base",
+      rebase_path(file_stem, root_build_dir),
+    ]
+
+    if (defined(invoker.options)) {
+      args += [
+        "--options",
+        invoker.options,
+      ]
+    }
+
+    if (defined(invoker.amendments_path)) {
+      args += [
+        "--amend",
+        rebase_path(invoker.amendments_path, root_build_dir),
+      ]
+    }
+
+    deps = [
+      "$fidl_target($fidl_toolchain)",
+    ]
+
+    if (defined(invoker.deps)) {
+      deps += invoker.deps
+    }
+  }
+
+  source_set(main_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "testonly",
+                             "visibility",
+                           ])
+
+    sources = []
+    foreach(ext, generated_source_extensions) {
+      sources += [ "$file_stem$ext" ]
+    }
+
+    public_deps = [
+      ":$generation_target_name",
+    ]
+    if (defined(invoker.public_deps)) {
+      public_deps += invoker.public_deps
+    }
+  }
+}
diff --git a/build/cpp/gen_sdk_prebuilt_meta_file.py b/build/cpp/gen_sdk_prebuilt_meta_file.py
new file mode 100755
index 0000000..cc9f47f
--- /dev/null
+++ b/build/cpp/gen_sdk_prebuilt_meta_file.py
@@ -0,0 +1,98 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import os
+import sys
+
+sys.path.append(os.path.join(
+    os.path.dirname(__file__),
+    os.pardir,
+    "cpp",
+))
+import binaries
+
+
+def main():
+    parser = argparse.ArgumentParser(description='Builds a metadata file')
+    parser.add_argument('--out',
+                        help='Path to the output file',
+                        required=True)
+    parser.add_argument('--name',
+                        help='Name of the library',
+                        required=True)
+    parser.add_argument('--root',
+                        help='Root of the library in the SDK',
+                        required=True)
+    parser.add_argument('--deps',
+                        help='Path to metadata files of dependencies',
+                        nargs='*')
+    parser.add_argument('--headers',
+                        help='List of public headers',
+                        nargs='*')
+    parser.add_argument('--include-dir',
+                        help='Path to the include directory',
+                        required=True)
+    parser.add_argument('--arch',
+                        help='Name of the target architecture',
+                        required=True)
+    parser.add_argument('--lib-link',
+                        help='Path to the link-time library in the SDK',
+                        required=True)
+    parser.add_argument('--lib-dist',
+                        help='Path to the library to add to Fuchsia packages in the SDK',
+                        required=True)
+    parser.add_argument('--lib-debug-file',
+                        help='Path to the source debug version of the library',
+                        required=True)
+    parser.add_argument('--debug-mapping',
+                        help='Path to the file where the file mapping for the debug library is written',
+                        required=True)
+    args = parser.parse_args()
+
+    # The path of the debug file in the SDK depends on its build id.
+    debug_path = binaries.get_sdk_debug_path(args.lib_debug_file)
+    with open(args.debug_mapping, 'w') as mappings_file:
+        mappings_file.write(debug_path + '=' + args.lib_debug_file + '\n')
+
+    metadata = {
+        'type': 'cc_prebuilt_library',
+        'name': args.name,
+        'root': args.root,
+        'format': 'shared',
+        'headers': args.headers,
+        'include_dir': args.include_dir,
+    }
+    metadata['binaries'] = {
+        args.arch: {
+            'link': args.lib_link,
+            'dist': args.lib_dist,
+            'debug': debug_path,
+        },
+    }
+
+    deps = []
+    for spec in args.deps:
+        with open(spec, 'r') as spec_file:
+            data = json.load(spec_file)
+        type = data['type']
+        name = data['name']
+        # TODO(DX-498): verify that source libraries are header-only.
+        if type == 'cc_source_library' or type == 'cc_prebuilt_library':
+            deps.append(name)
+        else:
+            raise Exception('Unsupported dependency type: %s' % type)
+    metadata['deps'] = deps
+
+    with open(args.out, 'w') as out_file:
+        json.dump(metadata, out_file, indent=2, sort_keys=True)
+
+    return 0
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/cpp/gen_sdk_sources_meta_file.py b/build/cpp/gen_sdk_sources_meta_file.py
new file mode 100755
index 0000000..1a1a2a76
--- /dev/null
+++ b/build/cpp/gen_sdk_sources_meta_file.py
@@ -0,0 +1,70 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import os
+import sys
+
+
+def main():
+    parser = argparse.ArgumentParser(description='Builds a metadata file')
+    parser.add_argument('--out',
+                        help='Path to the output file',
+                        required=True)
+    parser.add_argument('--name',
+                        help='Name of the library',
+                        required=True)
+    parser.add_argument('--root',
+                        help='Root of the library in the SDK',
+                        required=True)
+    parser.add_argument('--deps',
+                        help='Path to metadata files of dependencies',
+                        nargs='*')
+    parser.add_argument('--sources',
+                        help='List of library sources',
+                        nargs='+')
+    parser.add_argument('--headers',
+                        help='List of public headers',
+                        nargs='*')
+    parser.add_argument('--include-dir',
+                        help='Path to the include directory',
+                        required=True)
+    args = parser.parse_args()
+
+    metadata = {
+        'type': 'cc_source_library',
+        'name': args.name,
+        'root': args.root,
+        'sources': args.sources,
+        'headers': args.headers,
+        'include_dir': args.include_dir,
+        'banjo_deps': [],
+    }
+
+    deps = []
+    fidl_deps = []
+    for spec in args.deps:
+        with open(spec, 'r') as spec_file:
+            data = json.load(spec_file)
+        type = data['type']
+        name = data['name']
+        if type == 'cc_source_library' or type == 'cc_prebuilt_library':
+            deps.append(name)
+        elif type == 'fidl_library':
+            fidl_deps.append(name)
+        else:
+            raise Exception('Unsupported dependency type: %s' % type)
+    metadata['deps'] = deps
+    metadata['fidl_deps'] = fidl_deps
+
+    with open(args.out, 'w') as out_file:
+        json.dump(metadata, out_file, indent=2, sort_keys=True)
+
+    return 0
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/cpp/sdk_executable.gni b/build/cpp/sdk_executable.gni
new file mode 100644
index 0000000..85b1840
--- /dev/null
+++ b/build/cpp/sdk_executable.gni
@@ -0,0 +1,68 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/sdk/sdk_atom.gni")
+
+# An executable which can be bundled in an SDK.
+#
+# An equivalent to the built-in executable which adds an SDK atom declaration to
+# allow the resulting binary to be included in an SDK.
+#
+#   category (required)
+#     Publication level of the executable in SDKs.
+#     See //build/sdk/sdk_atom.gni.
+#
+#   sdk_deps (optional)
+#     List of labels representing elements that should be added to SDKs
+#     alongside the present binary.
+#     Labels in the list must represent SDK-ready targets.
+
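+# Example (a minimal sketch; the target name, category value, and source are
+# illustrative):
+#
+#   sdk_executable("my_tool") {
+#     category = "partner"
+#     sources = [ "main.cc" ]
+#   }
+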
+template("sdk_executable") {
+  assert(defined(invoker.category), "Must define an SDK category")
+
+  main_target_name = target_name
+
+  executable(main_target_name) {
+    forward_variables_from(invoker, "*", [ "category" ])
+  }
+
+  output_name = target_name
+  if (defined(invoker.output_name)) {
+    output_name = invoker.output_name
+  }
+
+  if (!is_fuchsia) {
+    file_base = "tools/$output_name"
+
+    sdk_atom("${target_name}_sdk") {
+      id = "sdk://tools/$output_name"
+
+      category = invoker.category
+
+      meta = {
+        dest = "$file_base-meta.json"
+        schema = "host_tool"
+        value = {
+          type = "host_tool"
+          name = output_name
+          root = "tools"
+          files = [ file_base ]
+        }
+      }
+
+      files = [
+        {
+          source = "$root_out_dir/$output_name"
+          dest = file_base
+        },
+      ]
+
+      if (defined(invoker.sdk_deps)) {
+        deps = invoker.sdk_deps
+      }
+
+      non_sdk_deps = [ ":$main_target_name" ]
+    }
+  }
+}
diff --git a/build/cpp/sdk_shared_library.gni b/build/cpp/sdk_shared_library.gni
new file mode 100644
index 0000000..32cf233
--- /dev/null
+++ b/build/cpp/sdk_shared_library.gni
@@ -0,0 +1,293 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/sdk/sdk_atom.gni")
+import("//build/sdk/sdk_atom_alias.gni")
+
+# A shared library that can be exported to an SDK in binary form.
+#
+# Parameters
+#
+#   category (required)
+#     Publication level of the library in SDKs.
+#     See //build/sdk/sdk_atom.gni.
+#
+#   no_headers (optional)
+#     Whether to exclude the library's headers from the SDK.
+#     Defaults to false.
+#
+#   sdk_name (optional)
+#     Name of the library in the SDK.
+#     Defaults to the library's output name.
+#
+#   include_base (optional)
+#     Path to the root directory for includes.
+#     Defaults to ".".
+#
+#   runtime_deps (optional)
+#     List of labels representing the library's runtime dependencies. This is
+#     only needed for runtime dependencies inherited from private dependencies.
+#     Note that these labels should represent SDK targets.
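+#
+# Example of usage (hypothetical library; parameter values are illustrative):
+#
+#   sdk_shared_library("mylib") {
+#     category = "partner"
+#     include_base = "include"
+#     sources = [
+#       "include/mylib.h",
+#       "mylib.cc",
+#     ]
+#   }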
+
+# The defaults for an sdk_shared_library should match those of shared_library.
+set_defaults("sdk_shared_library") {
+  configs = default_shared_library_configs
+}
+
+template("sdk_shared_library") {
+  assert(defined(invoker.category), "Must define an SDK category")
+
+  main_target_name = target_name
+  metadata_target_name = "${target_name}_sdk_metadata"
+  manifest_target_name = "${target_name}_sdk_manifest"
+  sdk_target_name = "${target_name}_sdk"
+
+  shared_library(main_target_name) {
+    forward_variables_from(invoker,
+                           "*",
+                           [
+                             "category",
+                             "include_base",
+                             "no_headers",
+                             "runtime_deps",
+                             "sdk_name",
+                           ])
+
+    if (defined(visibility)) {
+      visibility += [ ":$manifest_target_name" ]
+    }
+
+    # Prebuilt shared libraries are eligible for inclusion in the SDK. We do not
+    # want to dynamically link against libc++.so because we let clients bring
+    # their own toolchain, which might have a different C++ Standard Library or
+    # a different C++ ABI entirely.
+    #
+    # Adding this linker flag keeps us honest about not committing to a specific
+    # C++ ABI. If this flag is causing your library not to compile, consider
+    # whether your library really ought to be in the SDK. If so, consider
+    # including your library in the SDK as source rather than precompiled. If
+    # you do require precompilation, you probably need to find a way not to
+    # depend on dynamically linking C++ symbols because C++ does not have a
+    # sufficiently stable ABI for the purposes of our SDK.
+    # TODO(TC-46): statically link against libc++ if necessary.
+    if (!defined(ldflags)) {
+      ldflags = []
+    }
+    ldflags += [ "-nostdlib++" ]
+
+    # Request that the runtime deps be written out to a file. This file will be
+    # used later to verify that all runtime deps are available in the SDK.
+    write_runtime_deps = "$target_out_dir/$target_name.runtime_deps"
+  }
+
+  output_name = target_name
+  if (defined(invoker.output_name)) {
+    output_name = invoker.output_name
+  }
+
+  if (defined(invoker.sdk_name)) {
+    atom_name = invoker.sdk_name
+  } else {
+    atom_name = output_name
+  }
+
+  no_headers = defined(invoker.no_headers) && invoker.no_headers
+
+  # Base path for source files of this library in SDKs.
+  file_base = "pkg/$atom_name"
+
+  # Base path for binaries of this library in SDKs.
+  prebuilt_base = "arch/$target_cpu"
+
+  # Identify dependencies and their metadata files.
+  sdk_deps = []
+  sdk_metas = []
+
+  # If a prebuilt library is only provided for packaging purposes (by not
+  # exposing headers) then its dependencies need not be included in an SDK.
+  if (defined(invoker.public_deps) && !no_headers) {
+    foreach(dep, invoker.public_deps) {
+      full_label = get_label_info(dep, "label_no_toolchain")
+      sdk_dep = "${full_label}_sdk"
+      sdk_deps += [ sdk_dep ]
+    }
+  }
+
+  # Runtime deps are already SDK targets.
+  if (defined(invoker.runtime_deps)) {
+    sdk_deps += invoker.runtime_deps
+  }
+  foreach(sdk_dep, sdk_deps) {
+    gen_dir = get_label_info(sdk_dep, "target_gen_dir")
+    name = get_label_info(sdk_dep, "name")
+    sdk_metas += [ rebase_path("$gen_dir/$name.meta.json") ]
+  }
+
+  # Process headers.
+  all_headers = []
+  if ((defined(invoker.public) || defined(invoker.sources)) && !no_headers) {
+    if (defined(invoker.public)) {
+      all_headers += invoker.public
+    } else {
+      foreach(source_file, invoker.sources) {
+        extension = get_path_info(source_file, "extension")
+        if (extension == "h") {
+          all_headers += [ source_file ]
+        }
+      }
+    }
+  }
+  sdk_headers = []
+  sdk_files = []
+  foreach(header, all_headers) {
+    include_base = "include"
+    if (defined(invoker.include_base)) {
+      include_base = invoker.include_base
+    }
+    destination = rebase_path(header, include_base)
+    header_dest = "$file_base/include/$destination"
+    sdk_headers += [ header_dest ]
+    sdk_files += [
+      {
+        source = header
+        dest = header_dest
+      },
+    ]
+  }
+
+  # Add binaries.
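+  # The ":bogus" label is a placeholder; get_label_info() only uses its
+  # toolchain part here to compute the shared library toolchain's output dir.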
+  shared_out_dir = get_label_info(":bogus($shlib_toolchain)", "root_out_dir")
+  lib_name = "lib$output_name.so"
+  link_lib = "$prebuilt_base/lib/$lib_name"
+  dist_lib = "$prebuilt_base/dist/$lib_name"
+  sdk_files += [
+    {
+      # TODO(TO-791): put ABI stubs under lib/, not the full thing.
+      source = "$shared_out_dir/$lib_name"
+      dest = link_lib
+    },
+    {
+      source = "$shared_out_dir/$lib_name"
+      dest = dist_lib
+    },
+  ]
+
+  metadata_file = "$target_gen_dir/$metadata_target_name.sdk_meta.json"
+  debug_mapping_file = "$target_gen_dir/$metadata_target_name.mappings.txt"
+  debug_lib_file = "$shared_out_dir/lib.unstripped/$lib_name"
+
+  action(metadata_target_name) {
+    script = "//build/cpp/gen_sdk_prebuilt_meta_file.py"
+
+    inputs = sdk_metas + [ debug_lib_file ]
+
+    outputs = [
+      debug_mapping_file,
+      metadata_file,
+    ]
+
+    args = [
+             "--out",
+             rebase_path(metadata_file),
+             "--name",
+             atom_name,
+             "--root",
+             file_base,
+             "--include-dir",
+             "$file_base/include",
+             "--deps",
+           ] + sdk_metas + [ "--headers" ] + sdk_headers +
+           [
+             "--arch",
+             target_cpu,
+             "--lib-link",
+             link_lib,
+             "--lib-dist",
+             dist_lib,
+             "--lib-debug-file",
+             rebase_path(debug_lib_file),
+             "--debug-mapping",
+             rebase_path(debug_mapping_file),
+           ]
+
+    deps = sdk_deps + [ ":$main_target_name" ]
+  }
+
+  sdk_atom(manifest_target_name) {
+    forward_variables_from(invoker, [ "testonly" ])
+
+    id = "sdk://pkg/$atom_name"
+
+    category = invoker.category
+
+    meta = {
+      source = metadata_file
+      dest = "$file_base/meta.json"
+      schema = "cc_prebuilt_library"
+    }
+
+    files = sdk_files
+
+    file_list = debug_mapping_file
+
+    deps = sdk_deps
+
+    non_sdk_deps = [
+      ":$main_target_name",
+      ":$metadata_target_name",
+    ]
+
+    # Explicitly add non-public dependencies, in case some of the source files
+    # are generated.
+    if (defined(invoker.deps)) {
+      non_sdk_deps += invoker.deps
+    }
+  }
+
+  shared_gen_dir = get_label_info(":bogus($shlib_toolchain)", "target_out_dir")
+  runtime_deps_file = "$shared_gen_dir/$target_name.runtime_deps"
+  sdk_manifest_file = "$target_gen_dir/$manifest_target_name.sdk"
+  verify_target_name = "${target_name}_verify"
+
+  # Verify that the SDK manifest for this target includes all of the expected
+  # runtime dependencies.
+  # TODO(DX-498): also check that everything in there is either prebuilt or
+  # headers only.
+  action(verify_target_name) {
+    script = "//build/cpp/verify_runtime_deps.py"
+
+    inputs = [
+      sdk_manifest_file,
+      runtime_deps_file,
+    ]
+
+    stamp_file = "$target_gen_dir/$target_name.stamp"
+
+    outputs = [
+      stamp_file,
+    ]
+
+    args = [
+      "--stamp",
+      rebase_path(stamp_file),
+      "--manifest",
+      rebase_path(sdk_manifest_file),
+      "--runtime-deps-file",
+      rebase_path(runtime_deps_file),
+      "--root-out-dir",
+      rebase_path(root_out_dir),
+    ]
+
+    deps = [
+      ":$main_target_name",
+      ":$manifest_target_name",
+    ]
+  }
+
+  sdk_atom_alias(sdk_target_name) {
+    atom = ":$manifest_target_name"
+
+    non_sdk_deps = [ ":$verify_target_name" ]
+  }
+}
diff --git a/build/cpp/sdk_source_set.gni b/build/cpp/sdk_source_set.gni
new file mode 100644
index 0000000..35ef4c4
--- /dev/null
+++ b/build/cpp/sdk_source_set.gni
@@ -0,0 +1,178 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/sdk/sdk_atom.gni")
+
+# A source set that can be exported to an SDK.
+#
+# An equivalent to the built-in source_set which adds an SDK atom declaration to
+# allow the set to be included in an SDK as sources.
+#
+# Parameters
+#
+#   category (required)
+#     Publication level of the library in SDKs.
+#     See //build/sdk/sdk_atom.gni.
+#
+#   sdk_name (required)
+#     Name of the library in the SDK.
+#
+#   source_dir (optional)
+#     If set, path to the base directory of the sources.
+#     This is useful if the sources are generated and therefore not hosted
+#     directly under the directory where the GN rules are declared.
+#
+#   include_base (optional)
+#     Path to the root directory for includes.
+#     Defaults to "include".
+
+template("sdk_source_set") {
+  assert(defined(invoker.category), "Must define an SDK category")
+  assert(defined(invoker.sdk_name), "Must define an SDK name")
+
+  main_target_name = target_name
+  sdk_target_name = "${target_name}_sdk"
+
+  source_set(main_target_name) {
+    forward_variables_from(invoker,
+                           "*",
+                           [
+                             "category",
+                             "include_base",
+                             "sdk_name",
+                             "source_dir",
+                           ])
+
+    if (defined(visibility)) {
+      visibility += [ ":$sdk_target_name" ]
+    }
+  }
+
+  # Identify dependencies and their metadata files.
+  sdk_metas = []
+  sdk_deps = []
+  if (defined(invoker.public_deps)) {
+    foreach(dep, invoker.public_deps) {
+      full_label = get_label_info(dep, "label_no_toolchain")
+      sdk_dep = "${full_label}_sdk"
+      sdk_deps += [ sdk_dep ]
+
+      gen_dir = get_label_info(sdk_dep, "target_gen_dir")
+      name = get_label_info(sdk_dep, "name")
+      sdk_metas += [ rebase_path("$gen_dir/$name.meta.json") ]
+    }
+  }
+
+  # Sort headers vs. sources.
+  all_headers = []
+  all_sources = []
+  source_headers_are_public = true
+  if (defined(invoker.public)) {
+    source_headers_are_public = false
+    all_headers += invoker.public
+  }
+  if (defined(invoker.sources)) {
+    foreach(source_file, invoker.sources) {
+      extension = get_path_info(source_file, "extension")
+      if (source_headers_are_public && extension == "h") {
+        all_headers += [ source_file ]
+      } else {
+        all_sources += [ source_file ]
+      }
+    }
+  } else {
+    not_needed([ "source_headers_are_public" ])
+  }
+
+  # Determine destinations in the SDK for headers and sources.
+  file_base = "pkg/${invoker.sdk_name}"
+  sdk_headers = []
+  sdk_sources = []
+  sdk_files = []
+  foreach(header, all_headers) {
+    include_base = "include"
+    if (defined(invoker.include_base)) {
+      include_base = invoker.include_base
+    }
+    relative_destination = rebase_path(header, include_base)
+    destination = "$file_base/include/$relative_destination"
+    sdk_headers += [ destination ]
+    sdk_files += [
+      {
+        source = header
+        dest = destination
+      },
+    ]
+  }
+  foreach(source, all_sources) {
+    sdk_sources += [ "$file_base/$source" ]
+    sdk_files += [
+      {
+        source = source
+        dest = "$file_base/$source"
+      },
+    ]
+  }
+
+  metadata_target_name = "${target_name}_sdk_metadata"
+  metadata_file = "$target_gen_dir/$target_name.sdk_meta.json"
+
+  action(metadata_target_name) {
+    script = "//build/cpp/gen_sdk_sources_meta_file.py"
+
+    inputs = sdk_metas
+
+    outputs = [
+      metadata_file,
+    ]
+
+    args = [
+      "--out",
+      rebase_path(metadata_file),
+      "--name",
+      invoker.sdk_name,
+      "--root",
+      file_base,
+      "--include-dir",
+      "$file_base/include",
+    ]
+    args += [ "--deps" ] + sdk_metas
+    args += [ "--sources" ] + sdk_sources
+    args += [ "--headers" ] + sdk_headers
+
+    deps = sdk_deps
+  }
+
+  sdk_atom(sdk_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "source_dir",
+                             "testonly",
+                           ])
+
+    id = "sdk://pkg/${invoker.sdk_name}"
+    category = invoker.category
+
+    meta = {
+      source = metadata_file
+      dest = "$file_base/meta.json"
+      schema = "cc_source_library"
+    }
+
+    files = sdk_files
+
+    deps = sdk_deps
+
+    non_sdk_deps = [
+      ":$main_target_name",
+      ":$metadata_target_name",
+    ]
+
+    # Explicitly add non-public dependencies, in case some of the source files
+    # are generated.
+    if (defined(invoker.deps)) {
+      non_sdk_deps += invoker.deps
+    }
+  }
+}
diff --git a/build/cpp/verify_runtime_deps.py b/build/cpp/verify_runtime_deps.py
new file mode 100755
index 0000000..43d8088
--- /dev/null
+++ b/build/cpp/verify_runtime_deps.py
@@ -0,0 +1,76 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import os
+import sys
+
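+# Illustrative shape of the SDK manifest consumed below; only the fields this
+# script actually reads are shown:
+#
+#   {
+#     "ids": ["sdk://pkg/foo"],
+#     "atoms": [
+#       {
+#         "id": "sdk://pkg/foo",
+#         "deps": [...],
+#         "files": [{"source": "host_x64/libfoo.so", ...}]
+#       }
+#     ]
+#   }
+#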
+
+def has_packaged_file(needed_file, deps):
+    """Returns true if the given file could be found in the given deps."""
+    for dep in deps:
+        for dep_file in dep['files']:
+            if needed_file == os.path.normpath(dep_file['source']):
+                return True
+    return False
+
+
+def has_missing_files(runtime_files, package_deps):
+    """Returns true if a runtime file is missing from the given deps."""
+    missing_files = False
+    for runtime_file in runtime_files:
+        # Some libraries are only known to GN as ABI stubs, whereas the real
+        # runtime dependency is generated in parallel as a ".so.impl" file.
+        if (not has_packaged_file(runtime_file, package_deps) and
+                not has_packaged_file('%s.impl' % runtime_file, package_deps)):
+            print('No package dependency generates %s' % runtime_file)
+            missing_files = True
+    return missing_files
+
+
+def main():
+    parser = argparse.ArgumentParser(
+            description="Verifies a prebuilt library's runtime dependencies")
+    parser.add_argument('--root-out-dir',
+                        help='Path to the root output directory',
+                        required=True)
+    parser.add_argument('--runtime-deps-file',
+                        help='Path to the list of runtime deps',
+                        required=True)
+    parser.add_argument('--manifest',
+                        help='Path to the target\'s SDK manifest file',
+                        required=True)
+    parser.add_argument('--stamp',
+                        help='Path to the stamp file to generate',
+                        required=True)
+    args = parser.parse_args()
+
+    # Read the list of runtime dependencies generated by GN.
+    def normalize_dep(dep):
+        return os.path.normpath(os.path.join(args.root_out_dir, dep.strip()))
+    with open(args.runtime_deps_file, 'r') as runtime_deps_file:
+        runtime_files = [normalize_dep(dep)
+                         for dep in runtime_deps_file.readlines()]
+
+    # Read the list of package dependencies for the library's SDK incarnation.
+    with open(args.manifest, 'r') as manifest_file:
+        manifest = json.load(manifest_file)
+    atom_id = manifest['ids'][0]
+    def find_atom(target_id):
+        return next(a for a in manifest['atoms'] if a['id'] == target_id)
+    atom = find_atom(atom_id)
+    deps = [find_atom(a) for a in atom['deps']]
+    deps += [atom]
+
+    # Check whether all runtime files are available for packaging.
+    if has_missing_files(runtime_files, deps):
+        return 1
+
+    with open(args.stamp, 'w') as stamp:
+        stamp.write('Success!')
+
+    return 0
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/dart/BUILD.gn b/build/dart/BUILD.gn
new file mode 100644
index 0000000..fd21b52
--- /dev/null
+++ b/build/dart/BUILD.gn
@@ -0,0 +1,35 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/dart/toolchain.gni")
+
+if (current_toolchain == dart_toolchain) {
+  pool("dart_pool") {
+    depth = dart_pool_depth
+  }
+} else {
+  import("//build/toolchain/clang_toolchain.gni")
+
+  # A toolchain dedicated to processing and analyzing Dart packages.
+  # The only targets in this toolchain are action() targets, so it
+  # has no real tools.  But every toolchain needs stamp and copy.
+  toolchain("dartlang") {
+    tool("stamp") {
+      command = stamp_command
+      description = stamp_description
+    }
+    tool("copy") {
+      command = copy_command
+      description = copy_description
+    }
+
+    toolchain_args = {
+      toolchain_variant = {
+      }
+      toolchain_variant = {
+        base = get_label_info(":dartlang", "label_no_toolchain")
+      }
+    }
+  }
+}
diff --git a/build/dart/OWNERS b/build/dart/OWNERS
new file mode 100644
index 0000000..259d329
--- /dev/null
+++ b/build/dart/OWNERS
@@ -0,0 +1,3 @@
+pylaligand@google.com
+zra@google.com
+*
diff --git a/build/dart/dart.gni b/build/dart/dart.gni
new file mode 100644
index 0000000..a4d7ff2
--- /dev/null
+++ b/build/dart/dart.gni
@@ -0,0 +1,50 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+declare_args() {
+  # Directory containing prebuilt Dart SDK.
+  # Its `bin/` subdirectory must contain the `gen_snapshot.OS-CPU` binaries.
+  # Set to empty for a local build.
+  prebuilt_dart_sdk = "//topaz/tools/prebuilt-dart-sdk/${host_platform}"
+
+  # Whether to use the prebuilt Dart SDK for everything.
+  # When this is set to false, the prebuilt Dart SDK will not be used in
+  # situations where the version of the SDK matters, but it may still be used
+  # as an optimization where the version does not matter.
+  use_prebuilt_dart_sdk = true
+}
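+
+# These arguments can be overridden in args.gn. Per the comments above, a
+# purely local (non-prebuilt) Dart SDK build would set (illustrative):
+#
+#   prebuilt_dart_sdk = ""
+#   use_prebuilt_dart_sdk = false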
+
+# For using the prebuilts even when use_prebuilt_dart_sdk is false.
+prebuilt_dart = "${prebuilt_dart_sdk}/bin/dart"
+prebuilt_gen_snapshot =
+    "${prebuilt_dart_sdk}/bin/gen_snapshot.${current_os}-${current_cpu}"
+prebuilt_gen_snapshot_product =
+    "${prebuilt_dart_sdk}/bin/gen_snapshot_product.${current_os}-${current_cpu}"
+
+if (use_prebuilt_dart_sdk) {
+  dart_sdk = "${prebuilt_dart_sdk}"
+  dart_sdk_deps = []
+
+  gen_snapshot = prebuilt_gen_snapshot
+  gen_snapshot_product = prebuilt_gen_snapshot_product
+  gen_snapshot_deps = []
+} else {
+  _dart_sdk_label = "//third_party/dart:create_sdk($host_toolchain)"
+  dart_sdk = get_label_info(_dart_sdk_label, "root_out_dir") + "/dart-sdk"
+  dart_sdk_deps = [ _dart_sdk_label ]
+
+  _gen_snapshot_label =
+      "//third_party/dart/runtime/bin:gen_snapshot($host_toolchain)"
+  _gen_snapshot_product_label =
+      "//third_party/dart/runtime/bin:gen_snapshot_product($host_toolchain)"
+  gen_snapshot =
+      get_label_info(_gen_snapshot_label, "root_out_dir") + "/gen_snapshot"
+  gen_snapshot_product =
+      get_label_info(_gen_snapshot_product_label, "root_out_dir") +
+      "/gen_snapshot_product"
+  gen_snapshot_deps = [
+    _gen_snapshot_label,
+    _gen_snapshot_product_label,
+  ]
+}
diff --git a/build/dart/dart_fuchsia_test.gni b/build/dart/dart_fuchsia_test.gni
new file mode 100644
index 0000000..4cca90f9
--- /dev/null
+++ b/build/dart/dart_fuchsia_test.gni
@@ -0,0 +1,145 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/dart/toolchain.gni")
+import("//topaz/runtime/dart_runner/dart_app.gni")
+
+# Defines a device-side test binary
+#
+# Bundles a set of `package:test` tests into a single Fuchsia application
+# with generated helper code to invoke the tests appropriately.
+#
+# Parameters
+#
+#   sources (optional)
+#     The list of test sources. These sources must be within source_dir.
+#
+#   source_dir (optional)
+#     Directory containing the test sources.
+#     Defaults to "test".
+#
+#   deps (optional)
+#     List of labels for Dart packages this package depends on.
+#
+#   disable_analysis (optional)
+#     Prevents analysis from being run on this target.
+#
+#   package_only (optional)
+#     Prevents creation of a sh file, which in turn prevents the test from
+#     being run automatically.
+#     Defaults to false.
+#
+#   environments (optional)
+#      Device environments this test should run in. Passed through to
+#      package.tests.environments in //build/package.gni.
+#
+# Example of usage:
+#
+#   dart_fuchsia_test("some_tests") {
+#     tests = [ "test_foo.dart", "test_bar.dart" ]
+#   }
+#
+# TODO:
+#
+#   - Implement reporting so that tests can integrate into the waterfall / CQ.
+#   - Support AOT and Flutter based tests.
+#   - Get a public API into `package:test` for what we're doing.
+#
+template("dart_fuchsia_test") {
+  if (defined(invoker.source_dir)) {
+    test_source_dir = invoker.source_dir
+  } else {
+    test_source_dir = "test"
+  }
+
+  if (defined(invoker.package_only)) {
+    test_package_only = invoker.package_only
+  } else {
+    test_package_only = false
+  }
+
+  generated_test_main_target = target_name + "__test_main"
+
+  # The generated code needs to be installed under the Dart toolchain directory
+  # so that it can be found by the dart_library target powering the JIT app.
+  dart_gen_dir = get_label_info(":bogus($dart_toolchain)", "target_gen_dir")
+  generated_test_main = "$dart_gen_dir/${target_name}__test_main.dart"
+
+  action(generated_test_main_target) {
+    script = "//build/dart/gen_fuchsia_test_main.py"
+    outputs = [
+      generated_test_main,
+    ]
+    args = [
+      "--out=" + rebase_path(generated_test_main),
+      "--source-dir=" + rebase_path(test_source_dir),
+    ]
+  }
+
+  if (!test_package_only) {
+    generated_run_test_sh_target = "${root_build_dir}/${target_name}"
+
+    action(generated_run_test_sh_target) {
+      script = "//build/dart/gen_run_sh.py"
+      outputs = [
+        generated_run_test_sh_target,
+      ]
+      args = [
+        "--out=" + rebase_path(generated_run_test_sh_target),
+        "--to_be_run=${invoker.target_name}",
+      ]
+    }
+  }
+
+  dart_jit_app(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "disable_analysis",
+                             "sandbox",
+                             "meta",
+                           ])
+    source_dir = test_source_dir
+
+    testonly = true
+
+    if (!test_package_only) {
+      tests = [
+        {
+          name = generated_run_test_sh_target
+          dest = target_name
+          if (defined(invoker.environments)) {
+            environments = invoker.environments
+          }
+        },
+      ]
+    }
+
+    deps = [
+      "//topaz/lib/fuchsia_test_helper",
+    ]
+    if (defined(invoker.deps)) {
+      deps += invoker.deps
+    }
+
+    non_dart_deps = [ ":$generated_test_main_target($target_toolchain)" ]
+    if (!test_package_only) {
+      non_dart_deps += [ ":$generated_run_test_sh_target($target_toolchain)" ]
+    }
+
+    if (defined(invoker.non_dart_deps)) {
+      non_dart_deps += invoker.non_dart_deps
+    }
+
+    main_dart = generated_test_main
+
+    sources = []
+    if (defined(invoker.sources)) {
+      sources = invoker.sources
+    }
+    meta = []
+    if (defined(invoker.meta)) {
+      meta = invoker.meta
+    }
+  }
+}
diff --git a/build/dart/dart_library.gni b/build/dart/dart_library.gni
new file mode 100644
index 0000000..b334109
--- /dev/null
+++ b/build/dart/dart_library.gni
@@ -0,0 +1,400 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/fuchsia/sdk.gni")
+import("//build/dart/dart.gni")
+import("//build/dart/toolchain.gni")
+import("//build/sdk/sdk_atom.gni")
+import("//third_party/flutter/lib/ui/dart_ui.gni")
+import("//topaz/public/dart-pkg/fuchsia/sdk_ext.gni")
+import("//topaz/public/dart-pkg/zircon/sdk_ext.gni")
+import("//topaz/public/lib/ui/flutter/sdk_ext/sdk_ext.gni")
+
+# Defines a Dart library
+#
+# Parameters
+#
+#   sources
+#     The list of all sources in this library.
+#     These sources must be within source_dir.
+#
+#   package_root (optional)
+#     Path to the directory hosting the library.
+#     This is useful for generated content, and can be ignored otherwise.
+#     Defaults to ".".
+#
+#   package_name (optional)
+#     Name of the Dart package. This is used as an identifier in code that
+#     depends on this library.
+#
+#   infer_package_name (optional)
+#     Infer the package name based on the path to the package.
+#
+#     NOTE: Exactly one of package_name or infer_package_name must be set.
+#
+#   source_dir (optional)
+#     Path to the directory containing the package sources, relative to
+#     package_root.
+#     Defaults to "lib".
+#
+#   deps (optional)
+#     List of labels for Dart libraries this library depends on.
+#
+#   non_dart_deps (optional)
+#     List of labels this library depends on that are not Dart libraries. This
+#     includes things like actions that generate Dart code. It typically doesn't
+#     need to be set.
+#     Note that these labels *must* have an explicit toolchain attached.
+#
+#   disable_analysis (optional)
+#     Prevents analysis from being run on this target.
+#
+#   sdk_category (optional)
+#     Publication level of the library in SDKs.
+#     See //build/sdk/sdk_atom.gni.
+#
+#   extra_sources (optional)
+#     Additional sources to consider for analysis.
+#
+# Example of usage:
+#
+#   dart_library("baz") {
+#     package_name = "foo.bar.baz"
+#
+#     sources = [
+#       "blah.dart",
+#     ]
+#
+#     deps = [
+#       "//foo/bar/owl",
+#     ]
+#   }
+
+if (current_toolchain == dart_toolchain) {
+  template("dart_library") {
+    forward_variables_from(invoker, [ "testonly" ])
+
+    if (defined(invoker.package_name)) {
+      package_name = invoker.package_name
+    } else if (defined(invoker.infer_package_name) &&
+               invoker.infer_package_name) {
+      # Compute a package name from the label:
+      #   //foo/bar --> foo.bar
+      #   //foo/bar:blah --> foo.bar._blah
+      #   //garnet/public/foo/bar --> foo.bar
+      # Strip public directories.
+      full_dir = get_label_info(":$target_name", "dir")
+      foreach(sdk_dir, sdk_dirs) {
+        full_dir = string_replace(full_dir, "$sdk_dir/", "")
+      }
+      package_name = full_dir
+      package_name = string_replace(package_name, "//", "")
+      package_name = string_replace(package_name, "/", ".")
+
+      # If the last directory name does not match the target name, add the
+      # target name to the resulting package name.
+      name = get_label_info(":$target_name", "name")
+      last_dir = get_path_info(full_dir, "name")
+      if (last_dir != name) {
+        package_name = "$package_name._$name"
+      }
+    } else {
+      assert(false, "Must specify either a package_name or infer_package_name")
+    }
+
+    dart_deps = []
+    if (defined(invoker.deps)) {
+      foreach(dep, invoker.deps) {
+        dart_deps += [ get_label_info(dep, "label_no_toolchain") ]
+      }
+    }
+
+    package_root = "."
+    if (defined(invoker.package_root)) {
+      package_root = invoker.package_root
+    }
+
+    source_dir = "$package_root/lib"
+    if (defined(invoker.source_dir)) {
+      source_dir = "$package_root/${invoker.source_dir}"
+    }
+
+    assert(defined(invoker.sources), "Sources must be defined")
+    source_file = "$target_gen_dir/$target_name.sources"
+    rebased_sources = []
+    foreach(source, invoker.sources) {
+      rebased_source_dir = rebase_path(source_dir)
+      rebased_sources += [ "$rebased_source_dir/$source" ]
+    }
+    if (defined(invoker.extra_sources)) {
+      foreach(source, invoker.extra_sources) {
+        rebased_sources += [ rebase_path(source) ]
+      }
+    }
+    write_file(source_file, rebased_sources, "list lines")
+
+    # Dependencies of the umbrella group for the targets in this file.
+    group_deps = []
+
+    dot_packages_file = "$target_gen_dir/$target_name.packages"
+    dot_packages_target_name = "${target_name}_dot_packages"
+    group_deps += [ ":$dot_packages_target_name" ]
+
+    # Creates a .packages file listing dependencies of this library.
+    action(dot_packages_target_name) {
+      script = "//build/dart/gen_dot_packages.py"
+
+      deps = []
+      package_files = []
+      foreach(dep, dart_deps) {
+        deps += [ "${dep}_dot_packages" ]
+        dep_gen_dir = get_label_info(dep, "target_gen_dir")
+        dep_name = get_label_info(dep, "name")
+        package_files += [ "$dep_gen_dir/$dep_name.packages" ]
+      }
+      if (defined(invoker.non_dart_deps)) {
+        public_deps = invoker.non_dart_deps
+      }
+
+      sources = package_files + [
+                  # Require a manifest file, allowing the analysis service to identify the
+                  # package.
+                  "$package_root/pubspec.yaml",
+                ]
+
+      outputs = [
+        dot_packages_file,
+      ]
+
+      args = [
+               "--out",
+               rebase_path(dot_packages_file),
+               "--source-dir",
+               rebase_path(source_dir),
+               "--package-name",
+               package_name,
+               "--deps",
+             ] + rebase_path(package_files)
+    }
+
+    # Don't run the analyzer if it is explicitly disabled or if we are using
+    # a custom-built Dart SDK in a cross-build.
+    with_analysis =
+        (!defined(invoker.disable_analysis) || !invoker.disable_analysis) &&
+        (use_prebuilt_dart_sdk || host_cpu == target_cpu)
+    if (with_analysis) {
+      options_file = "$package_root/analysis_options.yaml"
+      invocation_file = "$target_gen_dir/$target_name.analyzer.sh"
+      invocation_target_name = "${target_name}_analysis_runner"
+      group_deps += [ ":$invocation_target_name" ]
+
+      dart_analyzer_binary = "$dart_sdk/bin/dartanalyzer"
+
+      # Creates a script which can be used to manually perform analysis.
+      # TODO(BLD-256): remove this target.
+      action(invocation_target_name) {
+        script = "//build/dart/gen_analyzer_invocation.py"
+
+        deps = dart_sdk_deps + [ ":$dot_packages_target_name" ]
+
+        inputs = [
+          dart_analyzer_binary,
+          dot_packages_file,
+          options_file,
+          source_file,
+        ]
+
+        outputs = [
+          invocation_file,
+        ]
+
+        args = [
+          "--out",
+          rebase_path(invocation_file),
+          "--source-file",
+          rebase_path(source_file),
+          "--dot-packages",
+          rebase_path(dot_packages_file),
+          "--dartanalyzer",
+          rebase_path(dart_analyzer_binary),
+          "--dart-sdk",
+          rebase_path(dart_sdk),
+          "--options",
+          rebase_path(options_file),
+          "--package-name",
+          package_name,
+        ]
+      }
+
+      analysis_target_name = "${target_name}_analysis"
+      group_deps += [ ":$analysis_target_name" ]
+
+      # Runs analysis on the sources.
+      action(analysis_target_name) {
+        script = "//build/dart/run_analysis.py"
+
+        depfile = "$target_gen_dir/$target_name.analysis.d"
+
+        output_file = "$target_gen_dir/$target_name.analyzed"
+
+        pool = "//build/dart:dart_pool($dart_toolchain)"
+
+        inputs = [
+                   dart_analyzer_binary,
+                   dot_packages_file,
+                   options_file,
+                   source_file,
+                 ] + rebased_sources
+
+        outputs = [
+          output_file,
+        ]
+
+        args = [
+          "--source-file",
+          rebase_path(source_file),
+          "--dot-packages",
+          rebase_path(dot_packages_file),
+          "--dartanalyzer",
+          rebase_path(dart_analyzer_binary),
+          "--dart-sdk",
+          rebase_path(dart_sdk),
+          "--options",
+          rebase_path(options_file),
+          "--stamp",
+          rebase_path(output_file),
+          "--depname",
+          rebase_path(output_file, root_build_dir),
+          "--depfile",
+          rebase_path(depfile),
+        ]
+
+        deps = dart_sdk_deps + [ ":$dot_packages_target_name" ]
+      }
+    }
+
+    group(target_name) {
+      # dart_deps are added here to ensure they are fully built.
+      # Up to this point, only the targets generating .packages had been
+      # depended on.
+      deps = dart_deps
+
+      public_deps = group_deps
+    }
+
+    ################################################
+    # SDK support
+    #
+
+    if (defined(invoker.sdk_category) && invoker.sdk_category != "excluded") {
+      assert(
+          defined(invoker.package_name),
+          "Dart libraries published in SDKs must have an explicit package name")
+
+      assert(
+          !defined(invoker.extra_sources),
+          "Extra sources can not be included in SDKs: put them in source_dir")
+
+      # Dependencies that should normally be included in any SDK containing this
+      # target.
+      sdk_deps = []
+
+      # Path to SDK metadata files for first-party dependencies.
+      sdk_metas = []
+
+      # Path to Dart manifest files for third-party dependencies.
+      third_party_pubspecs = []
+      if (defined(invoker.deps)) {
+        sorted_deps =
+            exec_script("//build/dart/sdk/sort_deps.py", invoker.deps, "scope")
+        foreach(dep, sorted_deps.local) {
+          full_label = get_label_info(dep, "label_no_toolchain")
+          sdk_dep = "${full_label}_sdk"
+          sdk_deps += [ sdk_dep ]
+
+          gen_dir = get_label_info(sdk_dep, "target_gen_dir")
+          name = get_label_info(sdk_dep, "name")
+          sdk_metas += [ rebase_path("$gen_dir/$name.meta.json") ]
+        }
+        foreach(dep, sorted_deps.third_party) {
+          path = get_label_info(dep, "dir")
+          third_party_pubspecs += [ rebase_path("$path/pubspec.yaml") ]
+        }
+      }
+
+      file_base = "dart/${invoker.package_name}"
+
+      sdk_sources = []
+      sdk_source_mappings = []
+      foreach(source, rebased_sources) {
+        relative_source = rebase_path(source, source_dir)
+        dest = "$file_base/lib/$relative_source"
+        sdk_sources += [ dest ]
+        sdk_source_mappings += [
+          {
+            source = source
+            dest = dest
+          },
+        ]
+      }
+
+      metadata_target_name = "${target_name}_sdk_metadata"
+      metadata_file = "$target_gen_dir/$target_name.sdk_meta.json"
+      action(metadata_target_name) {
+        script = "//build/dart/sdk/gen_meta_file.py"
+
+        inputs = sdk_metas + third_party_pubspecs
+
+        outputs = [
+          metadata_file,
+        ]
+
+        args = [
+          "--out",
+          rebase_path(metadata_file),
+          "--name",
+          package_name,
+          "--root",
+          file_base,
+        ]
+        args += [ "--specs" ] + sdk_metas
+        args += [ "--sources" ] + sdk_sources
+        args += [ "--third-party-specs" ] + third_party_pubspecs
+
+        deps = sdk_deps
+      }
+
+      sdk_atom("${target_name}_sdk") {
+        id = "sdk://dart/${invoker.package_name}"
+
+        category = invoker.sdk_category
+
+        meta = {
+          source = metadata_file
+          dest = "$file_base/meta.json"
+          schema = "dart_library"
+        }
+
+        deps = sdk_deps
+
+        non_sdk_deps = [ ":$metadata_target_name" ]
+        if (defined(invoker.non_dart_deps)) {
+          non_sdk_deps += invoker.non_dart_deps
+        }
+
+        files = sdk_source_mappings
+      }
+    }
+  }
+} else {  # Not the Dart toolchain.
+  template("dart_library") {
+    group(target_name) {
+      not_needed(invoker, "*")
+
+      public_deps = [
+        ":$target_name($dart_toolchain)",
+      ]
+    }
+  }
+}
diff --git a/build/dart/dart_remote_test.gni b/build/dart/dart_remote_test.gni
new file mode 100644
index 0000000..5d96d1e
--- /dev/null
+++ b/build/dart/dart_remote_test.gni
@@ -0,0 +1,96 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/dart/dart.gni")
+import("//build/dart/dart_library.gni")
+import("//build/dart/toolchain.gni")
+
+# Defines a Dart test suite that requires connecting to a remote target.
+#
+# Parameters
+#
+#   sources (required)
+#     The list of test files, which must be within source_dir.
+#
+#   source_dir (optional)
+#     Directory containing the test sources. Defaults to "test".
+#
+#   deps (optional)
+#     List of labels for Dart libraries this suite depends on.
+#
+#   disable_analysis (optional)
+#     Prevents analysis from being run on this target.
+#
+# Example of usage:
+#
+#   dart_host_to_target_test("baz_test") {
+#     source_dir = "."
+#     sources = [ "foo_bar_baz_test.dart" ]
+#     deps = [
+#       "//foo/baz",
+#       "//third_party/dart-pkg/pub/test",
+#     ]
+#   }
+template("dart_remote_test") {
+  main_target_name = target_name
+  library_target_name = "${target_name}_library"
+
+  sources_dir = "test"
+  if (defined(invoker.source_dir)) {
+    sources_dir = invoker.source_dir
+  }
+
+  dart_library(library_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "disable_analysis",
+                           ])
+
+    infer_package_name = true
+
+    source_dir = sources_dir
+
+    assert(defined(invoker.sources))
+
+    sources = invoker.sources
+  }
+
+  dart_target_gen_dir =
+      get_label_info(":bogus($dart_toolchain)", "target_gen_dir")
+
+  dot_packages_file = "$dart_target_gen_dir/$library_target_name.packages"
+
+  if (current_toolchain == host_toolchain) {
+    invocation_file = "$root_out_dir/$target_name"
+  } else {
+    invocation_file = "$target_gen_dir/$target_name"
+  }
+
+  action(main_target_name) {
+    script = "//build/dart/gen_remote_test_invocation.py"
+
+    testonly = true
+
+    outputs = [
+      invocation_file,
+    ]
+
+    args = [
+             "--out",
+             rebase_path(invocation_file),
+             "--sources",
+           ] + rebase_path(invoker.sources, "", sources_dir) +
+           [
+             "--dot-packages",
+             rebase_path(dot_packages_file),
+             "--dart-binary",
+             rebase_path(prebuilt_dart),
+           ]
+
+    deps = [
+      ":$library_target_name",
+    ]
+  }
+}
diff --git a/build/dart/dart_tool.gni b/build/dart/dart_tool.gni
new file mode 100644
index 0000000..d79aced
--- /dev/null
+++ b/build/dart/dart_tool.gni
@@ -0,0 +1,200 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/dart/dart_library.gni")
+import("//build/dart/toolchain.gni")
+
+# Defines a Dart application that can be run on the host
+#
+# Parameters
+#
+#   sources (optional)
+#     The list of public sources in this library, i.e. Dart files in lib/ but
+#     not in lib/src/. These sources must be within lib/.
+#
+#   package_name (optional)
+#     Name of the dart package.
+#
+#   main_dart (required)
+#     File containing the main function of the application.
+#
+#   deps (optional)
+#     Dependencies of this application
+#
+#   non_dart_deps (optional)
+#     List of labels this package depends on that are not Dart packages. It
+#     typically doesn't need to be set.
+#
+#   output_name (optional)
+#     Name of the output file to generate. Defaults to $target_name.
+#
+#   disable_analysis (optional)
+#     Prevents analysis from being run on this target.
+#
+#   force_prebuilt_dart (optional)
+#     Forces using the prebuilt Dart binary even when use_prebuilt_dart_sdk is
+#     false. Defaults to true in a cross-build, and false otherwise.
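+#
+# Example of usage (hypothetical tool; paths are illustrative):
+#
+#   dart_tool("my_tool") {
+#     package_name = "my_tool"
+#     main_dart = "bin/main.dart"
+#     sources = [ "utils.dart" ]
+#   }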
+template("dart_tool") {
+  assert(defined(invoker.main_dart), "Must specify main_dart file")
+
+  dart_library_target_name = target_name + "_dart_library"
+
+  if (defined(invoker.package_name)) {
+    package_name = invoker.package_name
+  } else if (!defined(invoker.infer_package_name) ||
+             invoker.infer_package_name) {
+    # Compute a package name from the label:
+    #   //foo/bar --> foo.bar
+    #   //foo/bar:blah --> foo.bar._blah
+    #   //garnet/public/foo/bar --> foo.bar
+    # Strip public directories.
+    full_dir = get_label_info(":$dart_library_target_name", "dir")
+    foreach(sdk_dir, sdk_dirs) {
+      full_dir = string_replace(full_dir, "$sdk_dir/", "")
+    }
+    package_name = full_dir
+    package_name = string_replace(package_name, "//", "")
+    package_name = string_replace(package_name, "/", ".")
+
+    # If the last directory name does not match the target name, add the
+    # target name to the resulting package name.
+    name = get_label_info(":$dart_library_target_name", "name")
+    last_dir = get_path_info(full_dir, "name")
+    if (last_dir != name) {
+      package_name = "$package_name._$name"
+    }
+  } else {
+    assert(false, "Must specify either a package_name or infer_package_name")
+  }
+
+  package_root = "."
+  if (defined(invoker.package_root)) {
+    package_root = invoker.package_root
+  }
+
+  source_dir = "$package_root/lib"
+  if (defined(invoker.source_dir)) {
+    source_dir = "$package_root/${invoker.source_dir}"
+  }
+
+  dart_library(dart_library_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "disable_analysis",
+                           ])
+    package_name = package_name
+    package_root = package_root
+    source_dir = source_dir
+
+    sources = []
+    if (defined(invoker.sources)) {
+      sources += invoker.sources
+    }
+    source_base = "lib"
+    if (defined(invoker.source_dir)) {
+      source_base = invoker.source_dir
+    }
+    sources += [ rebase_path(invoker.main_dart, source_base) ]
+  }
+
+  # Avoid using a custom-built Dart VM in a cross-build. The Dart VM would run
+  # the Dart code and front-end in a simulator, and it would be prohibitively
+  # slow.
+  force_prebuilt_dart = host_cpu != target_cpu
+  if (defined(invoker.force_prebuilt_dart)) {
+    force_prebuilt_dart = invoker.force_prebuilt_dart
+  }
+  if (force_prebuilt_dart) {
+    dart_binary = prebuilt_dart
+    sdk_deps = []
+  } else {
+    dart_binary = "$dart_sdk/bin/dart"
+    sdk_deps = dart_sdk_deps
+  }
+
+  snapshot_path = "$target_gen_dir/${target_name}.snapshot"
+  depfile_path = "${snapshot_path}.d"
+  dart_target_gen_dir =
+      get_label_info(":bogus($dart_toolchain)", "target_gen_dir")
+  packages_path = "$dart_target_gen_dir/$dart_library_target_name.packages"
+
+  snapshot_target_name = target_name + "_snapshot"
+
+  # Creates a snapshot file.
+  # The main advantage of snapshotting is that it sets up source dependencies
+  # via a depfile so that a Dart app gets properly rebuilt when one of its
+  # sources is modified.
+  action(snapshot_target_name) {
+    if (defined(invoker.testonly)) {
+      testonly = invoker.testonly
+    }
+
+    depfile = depfile_path
+
+    outputs = [
+      snapshot_path,
+    ]
+
+    script = dart_binary
+
+    # The snapshot path needs to be rebased on top of the root build dir so
+    # that the resulting depfile gets properly formatted.
+    rebased_snapshot_path = rebase_path(snapshot_path, root_build_dir)
+    rebased_depfile_path = rebase_path(depfile_path)
+    rebased_packages_path = rebase_path(packages_path)
+
+    main_uri = rebase_path(invoker.main_dart)
+    package_relative = rebase_path(invoker.main_dart, source_dir)
+
+    # Approximate check for whether source_dir contains main_dart.
+    if (get_path_info(get_path_info(package_relative, "dir"), "file") != "bin") {
+      main_uri = "package:" + package_name + "/" + package_relative
+    }
+
+    args = [
+      "--snapshot=$rebased_snapshot_path",
+      "--snapshot-depfile=$rebased_depfile_path",
+      "--packages=$rebased_packages_path",
+      main_uri,
+    ]
+
+    deps = sdk_deps + [ ":$dart_library_target_name" ]
+  }
+
+  if (defined(invoker.output_name)) {
+    app_name = invoker.output_name
+  } else {
+    app_name = target_name
+  }
+
+  # Builds a convenience script to invoke the app.
+  action(target_name) {
+    script = "//build/dart/gen_app_invocation.py"
+
+    app_path = "$root_out_dir/dart-tools/$app_name"
+
+    inputs = [
+      dart_binary,
+      snapshot_path,
+    ]
+    outputs = [
+      app_path,
+    ]
+
+    args = [
+      "--out",
+      rebase_path(app_path),
+      "--dart",
+      rebase_path(dart_binary),
+      "--snapshot",
+      rebase_path(snapshot_path),
+    ]
+
+    deps = sdk_deps + [ ":$snapshot_target_name" ]
+    if (defined(invoker.non_dart_deps)) {
+      deps += invoker.non_dart_deps
+    }
+  }
+}
diff --git a/build/dart/empty_pubspec.yaml b/build/dart/empty_pubspec.yaml
new file mode 100644
index 0000000..5e04884
--- /dev/null
+++ b/build/dart/empty_pubspec.yaml
@@ -0,0 +1,5 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# An empty manifest file used as a package marker.
diff --git a/build/dart/fidl_dart.gni b/build/dart/fidl_dart.gni
new file mode 100644
index 0000000..e99f772
--- /dev/null
+++ b/build/dart/fidl_dart.gni
@@ -0,0 +1,146 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/compiled_action.gni")
+import("//build/dart/dart_library.gni")
+import("//build/dart/toolchain.gni")
+import("//build/fidl/toolchain.gni")
+import("//build/sdk/sdk_atom_alias.gni")
+
+# Generates Dart bindings for a FIDL library.
+#
+# The parameters for this template are defined in //build/fidl/fidl.gni. The
+# relevant parameters in this template are:
+#   - name;
+#   - sdk_category.
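+#
+# Example of usage (hypothetical; in practice this template is instantiated
+# through the fidl() template from //build/fidl/fidl.gni rather than invoked
+# directly):
+#
+#   fidl_dart("fuchsia.example") {
+#     name = "fuchsia.example"
+#     sdk_category = "partner"
+#   }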
+
+template("fidl_dart") {
+  assert(current_toolchain == dart_toolchain,
+         "This template can only be used in the $dart_toolchain toolchain.")
+
+  not_needed(invoker, [ "sources" ])
+
+  main_target_name = target_name
+  generation_target_name = "${target_name}_dart_generate"
+
+  library_name = target_name
+  if (defined(invoker.name)) {
+    library_name = invoker.name
+  }
+  root_dir = "$target_gen_dir/${library_name}_package"
+  bindings_dir = "$root_dir/lib"
+  bindings_file = "$bindings_dir/fidl.dart"
+  async_bindings_file = "$bindings_dir/fidl_async.dart"
+  test_bindings_file = "$bindings_dir/fidl_test.dart"
+
+  fidl_target_gen_dir =
+      get_label_info(":bogus($fidl_toolchain)", "target_gen_dir")
+  json_representation = "$fidl_target_gen_dir/$target_name.fidl.json"
+
+  compiled_action(generation_target_name) {
+    visibility = [ ":*" ]
+
+    tool = "//topaz/bin/fidlgen_dart"
+
+    inputs = [
+      json_representation,
+    ]
+
+    outputs = [
+      bindings_file,
+      async_bindings_file,
+      test_bindings_file,
+    ]
+
+    args = [
+      "--json",
+      rebase_path(json_representation, root_build_dir),
+      "--output-base",
+      rebase_path(bindings_dir, root_build_dir),
+      "--include-base",
+      rebase_path(root_gen_dir, root_build_dir),
+    ]
+
+    # Don't run the formatter if we are using a custom-built Dart SDK in a
+    # cross-build.
+    deps = [
+      ":$main_target_name($fidl_toolchain)",
+    ]
+    if (use_prebuilt_dart_sdk || host_cpu == target_cpu) {
+      args += [
+        "--dartfmt",
+        rebase_path("$dart_sdk/bin/dartfmt"),
+      ]
+      deps += dart_sdk_deps
+    }
+  }
+
+  copy_pubspec_target_name = "${target_name}_dart_pubspec"
+  copy_options_target_name = "${target_name}_dart_options"
+
+  copy(copy_pubspec_target_name) {
+    sources = [
+      "//build/dart/empty_pubspec.yaml",
+    ]
+
+    outputs = [
+      "$root_dir/pubspec.yaml",
+    ]
+  }
+
+  copy(copy_options_target_name) {
+    sources = [
+      "//topaz/tools/analysis_options.yaml",
+    ]
+
+    outputs = [
+      "$root_dir/analysis_options.yaml",
+    ]
+  }
+
+  dart_library(main_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "testonly",
+                             "visibility",
+                           ])
+
+    package_root = root_dir
+
+    package_name = "fidl_" + string_replace(library_name, ".", "_")
+
+    sources = [
+      rebase_path(bindings_file, bindings_dir),
+      rebase_path(async_bindings_file, bindings_dir),
+      rebase_path(test_bindings_file, bindings_dir),
+    ]
+
+    deps = [
+      "//topaz/public/dart/fidl",
+      "//topaz/public/dart/zircon",
+    ]
+
+    if (defined(invoker.deps)) {
+      deps += invoker.deps
+    }
+    if (defined(invoker.public_deps)) {
+      deps += invoker.public_deps
+    }
+
+    non_dart_deps = [
+      ":$copy_options_target_name",
+      ":$copy_pubspec_target_name",
+      ":$generation_target_name",
+    ]
+  }
+
+  if (defined(invoker.sdk_category) && invoker.sdk_category != "excluded") {
+    # Instead of depending on the generated bindings, set up a dependency on the
+    # original library.
+    sdk_target_name = "${main_target_name}_sdk"
+    sdk_atom_alias(sdk_target_name) {
+      atom = ":$sdk_target_name($fidl_toolchain)"
+    }
+  }
+}
diff --git a/build/dart/fidlmerge_dart.gni b/build/dart/fidlmerge_dart.gni
new file mode 100644
index 0000000..3117706
--- /dev/null
+++ b/build/dart/fidlmerge_dart.gni
@@ -0,0 +1,181 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/compiled_action.gni")
+import("//build/dart/dart_library.gni")
+import("//build/fidl/toolchain.gni")
+
+# Declares a dart_library that contains code generated by fidlmerge from a
+# template and a FIDL JSON file.
+#
+# Parameters
+#
+#   fidl_target (required)
+#     Specifies the FIDL target from which to read the FIDL JSON. For example,
+#     "//zircon/public/fidl/fuchsia-mem" for fuchsia.mem or
+#     "//sdk/fidl/fuchsia.sys" for fuchsia.sys.
+#
+#   template_path (required)
+#     Specifies the template to use to generate the source code for the
+#     library. For example, "//garnet/public/build/fostr/fostr.fidlmerge".
+#
+#   generated_source_base (required)
+#     The base file name from which the library's 'source' file names are
+#     generated. For example, "formatting".
+#
+#   options (optional)
+#     A single string with comma-separated key=value pairs.
+#
+#   amendments_path (optional)
+#     Specifies a JSON file that contains amendments to be made to the FIDL
+#     model before the template is applied. For example,
+#     "//garnet/public/build/fostr/fidl/fuchsia.media/amendments.fidlmerge".
+#     See the fidlmerge README for details.
+#
+#   deps, public_deps, testonly, visibility (optional)
+#     These parameters are forwarded to the dart_library.
+#
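+# Example of usage (values are illustrative, mirroring the parameter docs
+# above):
+#
+#   fidlmerge_dart("formatting") {
+#     fidl_target = "//zircon/public/fidl/fuchsia-mem"
+#     template_path = "//garnet/public/build/fostr/fostr.fidlmerge"
+#     generated_source_base = "formatting"
+#   }
+#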
+
+template("fidlmerge_dart") {
+  assert(defined(invoker.fidl_target),
+         "fidlmerge_dart requires parameter fidl_target.")
+
+  assert(defined(invoker.template_path),
+         "fidlmerge_dart requires parameter template_path.")
+
+  assert(defined(invoker.generated_source_base),
+         "fidlmerge_dart requires parameter generated_source_base.")
+
+  fidl_target = invoker.fidl_target
+  template_path = invoker.template_path
+  source_base = invoker.generated_source_base
+
+  generation_target_name = "${target_name}_generate"
+
+  fidl_target_gen_dir =
+      get_label_info("$fidl_target($fidl_toolchain)", "target_gen_dir")
+  fidl_target_name = get_path_info(fidl_target_gen_dir, "file")
+  json_representation = "$fidl_target_gen_dir/$fidl_target_name.fidl.json"
+
+  include_stem = string_replace(target_gen_dir, ".", "/")
+  file_stem = "$include_stem/$source_base"
+
+  compiled_action(generation_target_name) {
+    forward_variables_from(invoker, [ "testonly" ])
+    visibility = [ ":*" ]
+
+    tool = "//garnet/go/src/fidlmerge"
+
+    inputs = [
+      json_representation,
+      template_path,
+    ]
+
+    outputs = [
+      "$file_stem.dart",
+    ]
+
+    args = [
+      "--template",
+      rebase_path(template_path, root_build_dir),
+      "--json",
+      rebase_path(json_representation, root_build_dir),
+      "--output-base",
+      rebase_path(file_stem, root_build_dir),
+    ]
+
+    if (defined(invoker.options)) {
+      args += [
+        "--options",
+        invoker.options,
+      ]
+    }
+
+    if (defined(invoker.amendments_path)) {
+      args += [
+        "--amend",
+        rebase_path(invoker.amendments_path, root_build_dir),
+      ]
+    }
+
+    deps = [
+      "$fidl_target($fidl_toolchain)",
+    ]
+
+    if (defined(invoker.deps)) {
+      deps += invoker.deps
+    }
+  }
+
+  copy_pubspec_target_name = "${target_name}_dart_pubspec"
+  copy_options_target_name = "${target_name}_dart_options"
+  copy_source_target_name = "${target_name}_dart_sources"
+
+  library_name = target_name
+  root_dir = "$target_gen_dir/${library_name}_package"
+
+  copy(copy_pubspec_target_name) {
+    sources = [
+      "//build/dart/empty_pubspec.yaml",
+    ]
+
+    outputs = [
+      "$root_dir/pubspec.yaml",
+    ]
+  }
+
+  copy(copy_options_target_name) {
+    sources = [
+      "//topaz/tools/analysis_options.yaml",
+    ]
+
+    outputs = [
+      "$root_dir/analysis_options.yaml",
+    ]
+  }
+
+  copy(copy_source_target_name) {
+    sources = [
+      "$file_stem.dart",
+    ]
+    outputs = [
+      "$root_dir/lib/$source_base.dart",
+    ]
+
+    deps = [
+      ":$generation_target_name",
+    ]
+  }
+
+  dart_library(library_name) {
+    forward_variables_from(invoker,
+                           [
+                             "testonly",
+                             "visibility",
+                           ])
+
+    package_root = root_dir
+
+    package_name = "fidl_" + string_replace(library_name, ".", "_")
+
+    sources = [
+      "$source_base.dart",
+    ]
+
+    deps = []
+
+    if (defined(invoker.deps)) {
+      deps += invoker.deps
+    }
+    if (defined(invoker.public_deps)) {
+      deps += invoker.public_deps
+    }
+
+    non_dart_deps = [
+      ":$copy_options_target_name",
+      ":$copy_pubspec_target_name",
+      ":$copy_source_target_name",
+    ]
+  }
+}
diff --git a/build/dart/gen_analyzer_invocation.py b/build/dart/gen_analyzer_invocation.py
new file mode 100755
index 0000000..0dc059b
--- /dev/null
+++ b/build/dart/gen_analyzer_invocation.py
@@ -0,0 +1,65 @@
+#!/usr/bin/env python
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import stat
+import string
+import sys
+
+def main():
+  parser = argparse.ArgumentParser(
+      description='Generate a script that invokes the Dart analyzer')
+  parser.add_argument('--out', help='Path to the invocation file to generate',
+                      required=True)
+  parser.add_argument('--source-file', help='Path to the list of sources',
+                      required=True)
+  parser.add_argument('--dot-packages', help='Path to the .packages file',
+                      required=True)
+  parser.add_argument('--dartanalyzer',
+                      help='Path to the Dart analyzer executable',
+                      required=True)
+  parser.add_argument('--dart-sdk', help='Path to the Dart SDK',
+                      required=True)
+  parser.add_argument('--package-name', help='Name of the analyzed package',
+                      required=True)
+  parser.add_argument('--options', help='Path to analysis options')
+  args = parser.parse_args()
+
+  with open(args.source_file, 'r') as source_file:
+    sources = source_file.read().strip().split('\n')
+
+  analyzer_file = args.out
+  analyzer_path = os.path.dirname(analyzer_file)
+  if not os.path.exists(analyzer_path):
+    os.makedirs(analyzer_path)
+
+  script_template = string.Template('''#!/bin/sh
+
+echo "Package : $package_name"
+$dartanalyzer \\
+  --packages=$dot_packages \\
+  --dart-sdk=$dart_sdk \\
+  --fatal-warnings \\
+  --fatal-hints \\
+  --fatal-lints \\
+  $options_argument \\
+  $sources_argument \\
+  "$$@"
+''')
+  with open(analyzer_file, 'w') as file:
+    file.write(script_template.substitute(
+        args.__dict__,
+        package_name=args.package_name,
+        sources_argument=' '.join(sources),
+        options_argument='--options=' + args.options if args.options else ''))
+  permissions = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
+                 stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP |
+                 stat.S_IROTH)
+  os.chmod(analyzer_file, permissions)
+
+
+if __name__ == '__main__':
+  sys.exit(main())
diff --git a/build/dart/gen_app_invocation.py b/build/dart/gen_app_invocation.py
new file mode 100755
index 0000000..7c4c2d3
--- /dev/null
+++ b/build/dart/gen_app_invocation.py
@@ -0,0 +1,46 @@
+#!/usr/bin/env python
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import stat
+import string
+import sys
+
+def main():
+  parser = argparse.ArgumentParser(
+      description='Generate a script that invokes a Dart application')
+  parser.add_argument('--out',
+                      help='Path to the invocation file to generate',
+                      required=True)
+  parser.add_argument('--dart',
+                      help='Path to the Dart binary',
+                      required=True)
+  parser.add_argument('--snapshot',
+                      help='Path to the app snapshot',
+                      required=True)
+  args = parser.parse_args()
+
+  app_file = args.out
+  app_path = os.path.dirname(app_file)
+  if not os.path.exists(app_path):
+    os.makedirs(app_path)
+
+  script_template = string.Template('''#!/bin/sh
+
+$dart \\
+  $snapshot \\
+  "$$@"
+''')
+  with open(app_file, 'w') as file:
+    file.write(script_template.substitute(args.__dict__))
+  permissions = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
+                 stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP |
+                 stat.S_IROTH)
+  os.chmod(app_file, permissions)
+
+
+if __name__ == '__main__':
+  sys.exit(main())
diff --git a/build/dart/gen_dot_packages.py b/build/dart/gen_dot_packages.py
new file mode 100755
index 0000000..b87eb8f
--- /dev/null
+++ b/build/dart/gen_dot_packages.py
@@ -0,0 +1,75 @@
+#!/usr/bin/env python
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import sys
+
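+# A .packages file maps package names to file:// URIs, one entry per line,
+# e.g. (path illustrative):
+#   example_package:file:///path/to/example_package/lib/
+# Lines beginning with '#' are comments.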
+def parse_dot_packages(dot_packages_path):
+  deps = {}
+  with open(dot_packages_path) as dot_packages:
+    for line in dot_packages:
+      if line.startswith('#'):
+        continue
+      delim = line.find(':file://')
+      if delim == -1:
+        continue
+      name = line[:delim]
+      path = os.path.abspath(line[delim + 8:].strip())
+      if name in deps:
+        raise Exception('%s contains multiple entries for package %s' %
+                        (dot_packages_path, name))
+      deps[name] = path
+  return deps
+
+
+def create_base_directory(file):
+  path = os.path.dirname(file)
+  if not os.path.exists(path):
+    os.makedirs(path)
+
+
+def main():
+  parser = argparse.ArgumentParser(
+      description="Generate .packages file for dart package")
+  parser.add_argument("--out", help="Path to .packages file to generate",
+                      required=True)
+  parser.add_argument("--package-name", help="Name of this package",
+                      required=True)
+  parser.add_argument("--source-dir", help="Path to package source",
+                      required=True)
+  parser.add_argument("--deps", help="List of dependencies' package file",
+                      nargs="*")
+  args = parser.parse_args()
+
+  dot_packages_file = args.out
+  create_base_directory(dot_packages_file)
+
+  package_deps = {
+    args.package_name: args.source_dir,
+  }
+
+  for dep in args.deps:
+    dependent_packages = parse_dot_packages(dep)
+    for name, path in dependent_packages.iteritems():
+      if name in package_deps:
+        if path != package_deps[name]:
+          print "Error, conflicting entries for %s: %s and %s from %s" % (name,
+              path, package_deps[name], dep)
+          return 1
+      else:
+        package_deps[name] = path
+
+  with open(dot_packages_file, "w") as dot_packages:
+    names = package_deps.keys()
+    names.sort()
+    for name in names:
+      dot_packages.write('%s:file://%s/\n' % (name, package_deps[name]))
+
+  return 0
+
+if __name__ == '__main__':
+  sys.exit(main())
diff --git a/build/dart/gen_fuchsia_test_main.py b/build/dart/gen_fuchsia_test_main.py
new file mode 100755
index 0000000..4f7c722
--- /dev/null
+++ b/build/dart/gen_fuchsia_test_main.py
@@ -0,0 +1,65 @@
+#!/usr/bin/env python
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import sys
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        sys.argv[0],
+        description="Generate main file for Fuchsia dart test")
+    parser.add_argument("--out",
+                        help="Path to .dart file to generate",
+                        required=True)
+    parser.add_argument('--source-dir',
+                        help='Path to test sources',
+                        required=True)
+    args = parser.parse_args()
+
+    out_dir = os.path.dirname(args.out)
+    test_files = []
+    for root, dirs, files in os.walk(args.source_dir):
+        for f in files:
+            if not f.endswith('_test.dart'):
+                continue
+            test_files.append(os.path.relpath(os.path.join(root, f), out_dir))
+
+    outfile = open(args.out, 'w')
+    outfile.write('''// Generated by %s
+
+    // ignore_for_file: directives_ordering
+    // ignore_for_file: avoid_relative_lib_imports
+
+    import 'dart:async';
+    import 'package:fuchsia_test_helper/fuchsia_test_helper.dart';
+    ''' % os.path.basename(__file__))
+
+    for i, path in enumerate(test_files):
+        outfile.write("import '%s' as test_%d;\n" % (path, i))
+
+    outfile.write('''
+    Future<int> main(List<String> args) async {
+      await runFuchsiaTests(<MainFunction>[
+    ''')
+
+    for i in range(len(test_files)):
+        outfile.write('test_%d.main,\n' % i)
+
+    outfile.write(''']);
+
+      // Quit.
+      exitFuchsiaTest(0);
+      return 0;
+    }
+    ''')
+
+    outfile.close()
+
+
+if __name__ == '__main__':
+    main()
diff --git a/build/dart/gen_remote_test_invocation.py b/build/dart/gen_remote_test_invocation.py
new file mode 100755
index 0000000..15a2fcf
--- /dev/null
+++ b/build/dart/gen_remote_test_invocation.py
@@ -0,0 +1,63 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import stat
+import string
+import sys
+
+
+def main():
+  parser = argparse.ArgumentParser(
+      description='Generate a script that invokes the Dart tester')
+  parser.add_argument('--out',
+                      help='Path to the invocation file to generate',
+                      required=True)
+  parser.add_argument('--sources',
+                      help='Path to test files',
+                      required=True,
+                      nargs='+')
+  parser.add_argument('--dart-binary',
+                      help='Path to the dart binary.',
+                      required=True)
+  parser.add_argument('--dot-packages',
+                      help='Path to the .packages file',
+                      required=True)
+  args = parser.parse_args()
+
+  test_file = args.out
+  test_path = os.path.dirname(test_file)
+  if not os.path.exists(test_path):
+    os.makedirs(test_path)
+
+  sources_string = ' '.join(args.sources)
+  script_template = string.Template('''#!/bin/sh
+# This artifact was generated by //build/dart/gen_remote_test_invocation.py
+# Expects arg 1 to be the fuchsia remote address, and arg 2 to be the (optional)
+# SSH config path.
+
+set -e
+
+export FUCHSIA_DEVICE_URL="$$1"
+if [ -n "$$2" ]; then
+  export FUCHSIA_SSH_CONFIG="$$2"
+fi
+
+for TEST in $sources_string; do
+  $dart_binary --packages="$dot_packages" "$$TEST"
+done
+''')
+  with open(test_file, 'w') as file:
+    file.write(script_template.substitute(args.__dict__,
+                                          sources_string=sources_string))
+  permissions = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
+                 stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP |
+                 stat.S_IROTH)
+  os.chmod(test_file, permissions)
+
+
+if __name__ == '__main__':
+  sys.exit(main())
diff --git a/build/dart/gen_run_sh.py b/build/dart/gen_run_sh.py
new file mode 100755
index 0000000..d76c744
--- /dev/null
+++ b/build/dart/gen_run_sh.py
@@ -0,0 +1,39 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import stat
+import sys
+
+def main():
+  parser = argparse.ArgumentParser(
+      description='Generate a script that runs something')
+  parser.add_argument('--out',
+                      help='Path to the file to generate',
+                      required=True)
+  parser.add_argument('--to_be_run',
+                      help='The argument to `run`',
+                      required=True)
+  args = parser.parse_args()
+
+  script_file = args.out
+  script_path = os.path.dirname(script_file)
+  if not os.path.exists(script_path):
+    os.makedirs(script_path)
+
+  script = (
+    '#!/boot/bin/sh\n\n'
+    'run %s\n' % args.to_be_run)
+  with open(script_file, 'w') as file:
+    file.write(script)
+  permissions = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
+                 stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP |
+                 stat.S_IROTH)
+  os.chmod(script_file, permissions)
+
+
+if __name__ == '__main__':
+  sys.exit(main())
diff --git a/build/dart/gen_test_invocation.py b/build/dart/gen_test_invocation.py
new file mode 100755
index 0000000..673c775
--- /dev/null
+++ b/build/dart/gen_test_invocation.py
@@ -0,0 +1,55 @@
+#!/usr/bin/env python
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import stat
+import string
+import sys
+
+
+def main():
+  parser = argparse.ArgumentParser(
+      description='Generate a script that invokes the Dart tester')
+  parser.add_argument('--out',
+                      help='Path to the invocation file to generate',
+                      required=True)
+  parser.add_argument('--source-dir',
+                      help='Path to test sources',
+                      required=True)
+  parser.add_argument('--dot-packages',
+                      help='Path to the .packages file',
+                      required=True)
+  parser.add_argument('--test-runner',
+                      help='Path to the test runner',
+                      required=True)
+  parser.add_argument('--flutter-shell',
+                      help='Path to the Flutter shell',
+                      required=True)
+  args = parser.parse_args()
+
+  test_file = args.out
+  test_path = os.path.dirname(test_file)
+  if not os.path.exists(test_path):
+    os.makedirs(test_path)
+
+  script_template = string.Template('''#!/bin/sh
+
+$test_runner \\
+  --packages=$dot_packages \\
+  --shell=$flutter_shell \\
+  --test-directory=$source_dir \\
+  "$$@"
+''')
+  with open(test_file, 'w') as file:
+    file.write(script_template.substitute(args.__dict__))
+  permissions = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
+                 stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP |
+                 stat.S_IROTH)
+  os.chmod(test_file, permissions)
+
+
+if __name__ == '__main__':
+  sys.exit(main())
diff --git a/build/dart/label_to_package_name.py b/build/dart/label_to_package_name.py
new file mode 100755
index 0000000..ffe7734
--- /dev/null
+++ b/build/dart/label_to_package_name.py
@@ -0,0 +1,49 @@
+#!/usr/bin/env python
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import string
+import sys
+
+# TODO(abarth): Base these paths on the sdk_dirs variable in gn.
+_SDK_DIRS = [
+  "garnet/public/",
+  "peridot/public/",
+  "topaz/public/",
+]
+
+
+# Strip the sdk dirs from the given label, if necessary.
+def _remove_sdk_dir(label):
+  for prefix in _SDK_DIRS:
+    if label.startswith(prefix):
+      return label[len(prefix):]
+  return label
+
+
+# For target //foo/bar:blah, the package name will be foo.bar._blah.
+# For default targets //foo/bar:bar, the package name will be foo.bar.
+def convert(label):
+  if not label.startswith("//"):
+    sys.stderr.write("expected label to start with //, got %s\n" % label)
+    return None
+  base = _remove_sdk_dir(label[2:])
+  separator_index = string.rfind(base, ":")
+  if separator_index < 0:
+    sys.stderr.write("could not find target name in label %s\n" % label)
+    return None
+  path = base[:separator_index].split("/")
+  name = base[separator_index + 1:]
+  if path[-1] == name:
+    return ".".join(path)
+  else:
+    return "%s._%s" % (".".join(path), name)
+
+
+def main():
+  package_name = convert(sys.argv[1])
+  if package_name is None:
+    # convert() already reported the error; exit with a failure status
+    # instead of printing the error sentinel.
+    return 1
+  print package_name
+  return 0
+
+
+if __name__ == '__main__':
+  sys.exit(main())
diff --git a/build/dart/run_analysis.py b/build/dart/run_analysis.py
new file mode 100755
index 0000000..0b18281
--- /dev/null
+++ b/build/dart/run_analysis.py
@@ -0,0 +1,87 @@
+#!/usr/bin/env python
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import subprocess
+import sys
+
+FUCHSIA_ROOT = os.path.dirname(  # $root
+    os.path.dirname(             # build
+    os.path.dirname(             # dart
+    os.path.abspath(__file__))))
+
+sys.path += [os.path.join(FUCHSIA_ROOT, 'third_party', 'pyyaml', 'lib')]
+import yaml
+
+def main():
+    parser = argparse.ArgumentParser(
+        description='Runs analysis on a given package')
+    parser.add_argument('--source-file', help='Path to the list of sources',
+                        required=True)
+    parser.add_argument('--dot-packages', help='Path to the .packages file',
+                        required=True)
+    parser.add_argument('--dartanalyzer',
+                        help='Path to the Dart analyzer executable',
+                        required=True)
+    parser.add_argument('--dart-sdk', help='Path to the Dart SDK',
+                        required=True)
+    parser.add_argument('--options', help='Path to analysis options',
+                        required=True)
+    parser.add_argument('--stamp', help='File to touch when analysis succeeds',
+                        required=True)
+    parser.add_argument('--depname', help='Name of the depfile target',
+                        required=True)
+    parser.add_argument('--depfile', help='Path to the depfile to generate',
+                        required=True)
+    args = parser.parse_args()
+
+    with open(args.source_file, 'r') as source_file:
+        sources = source_file.read().strip().split('\n')
+
+    with open(args.depfile, 'w') as depfile:
+        depfile.write('%s: ' % args.depname)
+        def add_dep(path):
+            depfile.write('%s ' % path)
+        options = args.options
+        while True:
+            if not os.path.isabs(options):
+                print('Expected absolute path, got %s' % options)
+                return 1
+            if not os.path.exists(options):
+                print('Could not find options file: %s' % options)
+                return 1
+            add_dep(options)
+            with open(options, 'r') as options_file:
+                content = yaml.safe_load(options_file)
+                if 'include' not in content:
+                    break
+                included = content['include']
+                if not os.path.isabs(included):
+                    included = os.path.join(os.path.dirname(options), included)
+                options = included
+
+    call_args = [
+        args.dartanalyzer,
+        '--packages=%s' % args.dot_packages,
+        '--dart-sdk=%s' % args.dart_sdk,
+        '--options=%s' % args.options,
+        '--fatal-warnings',
+        '--fatal-hints',
+        '--fatal-lints',
+    ] + sources
+
+    call = subprocess.Popen(call_args, stdout=subprocess.PIPE,
+                            stderr=subprocess.PIPE)
+    stdout, stderr = call.communicate()
+    if call.returncode:
+        print(stdout + stderr)
+        return 1
+
+    with open(args.stamp, 'w') as stamp:
+        stamp.write('Success!')
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/dart/sdk/gen_meta_file.py b/build/dart/sdk/gen_meta_file.py
new file mode 100755
index 0000000..0cd01df
--- /dev/null
+++ b/build/dart/sdk/gen_meta_file.py
@@ -0,0 +1,99 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import os
+import sys
+
+FUCHSIA_ROOT = os.path.dirname(  # $root
+    os.path.dirname(             # build
+    os.path.dirname(             # dart
+    os.path.dirname(             # sdk
+    os.path.abspath(__file__)))))
+
+sys.path += [os.path.join(FUCHSIA_ROOT, 'third_party', 'pyyaml', 'lib')]
+import yaml
+
+
+# The list of packages that should be pulled from a Flutter SDK instead of pub.
+FLUTTER_PACKAGES = [
+    'flutter',
+    'flutter_driver',
+    'flutter_test',
+    'flutter_tools',
+]
+
+
+def main():
+    parser = argparse.ArgumentParser(description='Builds a metadata file')
+    parser.add_argument('--out',
+                        help='Path to the output file',
+                        required=True)
+    parser.add_argument('--name',
+                        help='Name of the original package',
+                        required=True)
+    parser.add_argument('--root',
+                        help='Root of the package in the SDK',
+                        required=True)
+    parser.add_argument('--specs',
+                        help='Path to spec files of dependencies',
+                        nargs='*')
+    parser.add_argument('--third-party-specs',
+                        help='Path to pubspec files of 3p dependencies',
+                        nargs='*')
+    parser.add_argument('--sources',
+                        help='List of library sources',
+                        nargs='+')
+    args = parser.parse_args()
+
+    metadata = {
+        'type': 'dart_library',
+        'name': args.name,
+        'root': args.root,
+        'sources': args.sources,
+    }
+
+    third_party_deps = []
+    for spec in args.third_party_specs:
+        with open(spec, 'r') as spec_file:
+            manifest = yaml.safe_load(spec_file)
+            name = manifest['name']
+            dep = {
+                'name': name,
+            }
+            if name in FLUTTER_PACKAGES:
+                dep['version'] = 'flutter_sdk'
+            else:
+                if 'version' not in manifest:
+                    raise Exception('%s does not specify a version.' % spec)
+                dep['version'] = manifest['version']
+            third_party_deps.append(dep)
+    metadata['third_party_deps'] = third_party_deps
+
+    deps = []
+    fidl_deps = []
+    for spec in args.specs:
+        with open(spec, 'r') as spec_file:
+            data = json.load(spec_file)
+        type = data['type']
+        name = data['name']
+        if type == 'dart_library':
+            deps.append(name)
+        elif type == 'fidl_library':
+            fidl_deps.append(name)
+        else:
+            raise Exception('Unsupported dependency type: %s' % type)
+    metadata['deps'] = deps
+    metadata['fidl_deps'] = fidl_deps
+
+    with open(args.out, 'w') as out_file:
+        json.dump(metadata, out_file, indent=2, sort_keys=True)
+
+    return 0
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/dart/sdk/sort_deps.py b/build/dart/sdk/sort_deps.py
new file mode 100755
index 0000000..8699e28
--- /dev/null
+++ b/build/dart/sdk/sort_deps.py
@@ -0,0 +1,23 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import sys
+
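+# Example invocation (labels illustrative):
+#   $ sort_deps.py //foo/bar:baz //third_party/pyyaml
+#   third_party = ["//third_party/pyyaml"]
+#   local = ["//foo/bar:baz"]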
+
+def print_list(name, content):
+    quoted = map(lambda c: '"%s"' % c, content)
+    print('%s = [%s]' % (name, ', '.join(quoted)))
+
+def main():
+    deps = sys.argv[1:]
+    def is_3p(dep):
+        return dep.startswith('//third_party')
+    print_list('third_party', filter(is_3p, deps))
+    print_list('local', filter(lambda d: not is_3p(d), deps))
+    return 0
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/dart/toolchain.gni b/build/dart/toolchain.gni
new file mode 100644
index 0000000..1d015f7
--- /dev/null
+++ b/build/dart/toolchain.gni
@@ -0,0 +1,19 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+declare_args() {
+  # Maximum number of Dart processes to run in parallel.
+  #
+  # Dart analyzer uses a lot of memory which may cause issues when building
+  # with many parallel jobs e.g. when using goma. To avoid out-of-memory
+  # errors we explicitly reduce the number of jobs.
+  dart_pool_depth = 16
+}
+
+dart_toolchain = "//build/dart:dartlang"
+
+dart_root_gen_dir = get_label_info("//bogus($dart_toolchain)", "root_gen_dir")
+# In order to access the target_gen_dir in the Dart toolchain from some location
+# in the source tree, use the following:
+#   dart_target_gen_dir = get_label_info(":bogus($dart_toolchain)", "target_gen_dir")
diff --git a/build/development.key b/build/development.key
new file mode 100644
index 0000000..3aa7e16
--- /dev/null
+++ b/build/development.key
Binary files differ
diff --git a/build/disabled_for_asan.gni b/build/disabled_for_asan.gni
new file mode 100644
index 0000000..9e9349f
--- /dev/null
+++ b/build/disabled_for_asan.gni
@@ -0,0 +1,46 @@
+disabled_for_asan = [
+  # CF-584
+  "/pkgfs/packages/components_binary_test/0/test/components_binary_argv_test",
+
+  # DNO-408 (flaky)
+  "/pkgfs/packages/debugger_utils_tests/0/test/debugger_utils_tests",
+
+  # ES-180
+  "/pkgfs/packages/escher_tests/0/test/escher_unittests",
+
+  # CF-586
+  "/pkgfs/packages/iquery_golden_test/0/test/iquery_golden_test",
+
+  # MTWN-234
+  "/pkgfs/packages/mediaplayer_tests/0/test/mediaplayer_core_tests",
+
+  # MF-193
+  "/pkgfs/packages/modular_tests/0/test/run_modular_tests.sh",
+
+  # CF-585
+  "/pkgfs/packages/run_tests/0/test/run_return_value_shell_test",
+
+  # SCN-1250
+  "/pkgfs/packages/scenic_tests/0/test/input_unittests",
+  "/pkgfs/packages/scenic_tests/0/test/gfx_unittests",
+  "/pkgfs/packages/scenic_tests/0/test/gfx_viewstate_apptests",
+
+  # Causes a crash rather than an ASan failure (might require toolchain help)
+  "/pkgfs/packages/scenic_tests/0/test/gfx_pixeltests",
+
+  # MF-196
+  "/pkgfs/packages/sessionctl_integration_tests/0/test/sessionctl_test",
+
+  # MI4-1803
+  "/pkgfs/packages/suggestion_engine_unittests/0/test/annoyance_ranking_feature_unittest",
+  "/pkgfs/packages/suggestion_engine_unittests/0/test/suggestion_engine_impl_unittest",
+
+  # MTWN-236
+  "/pkgfs/packages/use_aac_decoder_test/0/test/use_aac_decoder_test",
+
+  # MA-553
+  "/pkgfs/packages/vulkan-tests/0/test/vkext",
+
+  # ZX-3384
+  "/pkgfs/packages/zircon_benchmarks/0/test/zircon_benchmarks",
+]
diff --git a/build/dot_gn_symlink.sh b/build/dot_gn_symlink.sh
new file mode 100755
index 0000000..b756a0e
--- /dev/null
+++ b/build/dot_gn_symlink.sh
@@ -0,0 +1,8 @@
+#!/bin/bash
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+FUCHSIA_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
+
+exec ln -snf build/gn/dotfile.gn "${FUCHSIA_DIR}/.gn"
diff --git a/build/fidl/BUILD.gn b/build/fidl/BUILD.gn
new file mode 100644
index 0000000..54f86b4
--- /dev/null
+++ b/build/fidl/BUILD.gn
@@ -0,0 +1,27 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/toolchain/clang_toolchain.gni")
+
+# A toolchain dedicated to processing FIDL libraries.
+# The only targets in this toolchain are action() targets, so it
+# has no real tools.  But every toolchain needs stamp and copy.
+toolchain("fidling") {
+  tool("stamp") {
+    command = stamp_command
+    description = stamp_description
+  }
+  tool("copy") {
+    command = copy_command
+    description = copy_description
+  }
+
+  toolchain_args = {
+    toolchain_variant = {
+    }
+    toolchain_variant = {
+      base = get_label_info(":fidling", "label_no_toolchain")
+    }
+  }
+}
diff --git a/build/fidl/fidl.gni b/build/fidl/fidl.gni
new file mode 100644
index 0000000..195d0b5
--- /dev/null
+++ b/build/fidl/fidl.gni
@@ -0,0 +1,132 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/cpp/fidl_cpp.gni")
+import("//build/dart/toolchain.gni")
+import("//build/fidl/toolchain.gni")
+import("//build/go/toolchain.gni")
+import("//build/rust/toolchain.gni")
+
+# Declares a FIDL library.
+#
+# Depending on the toolchain in which this target is expanded, it will yield
+# different results:
+#   - in the FIDL toolchain, it will compile its source files into an
+#     intermediate representation consumable by language bindings generators;
+#   - in the target or shared toolchain, this will produce a source_set
+#     containing C++ bindings.
+#
+# Parameters
+#
+#   sources (required)
+#     List of paths to library source files.
+#
+#   name (optional)
+#     Name of the library.
+#     Defaults to the target's name.
+#
+#   sdk_category (optional)
+#     Publication level of the library in SDKs.
+#     See //build/sdk/sdk_atom.gni.
+#
+#   cpp_legacy_callbacks (optional)
+#     If true, uses std::function instead of fit::function.
+#     Defaults to true while migration is in progress.
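+#
+# Example (library name and source file illustrative):
+#
+#   fidl("fuchsia.example") {
+#     sources = [ "example.fidl" ]
+#   }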
+
+template("fidl") {
+  if (defined(invoker.sdk_category)) {
+    not_needed(invoker, [ "sdk_category" ])
+  }
+  if (defined(invoker.cpp_legacy_callbacks)) {
+    not_needed(invoker, [ "cpp_legacy_callbacks" ])
+  }
+
+  if (current_toolchain == fidl_toolchain) {
+    import("//build/fidl/fidl_library.gni")
+
+    fidl_library(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+
+    fidl_cpp_codegen(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  } else if (current_toolchain == dart_toolchain) {
+    import("//build/dart/fidl_dart.gni")
+
+    fidl_dart(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  } else if (current_toolchain == rust_toolchain) {
+    import("//build/rust/fidl_rust.gni")
+
+    fidl_rust(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  } else if (current_toolchain == go_toolchain) {
+    import("//build/go/fidl_go.gni")
+
+    fidl_go(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  } else if (is_fuchsia) {
+    import("//build/c/fidl_c.gni")
+    import("//build/rust/fidl_rust_library.gni")
+
+    fidl_tables(target_name) {
+      forward_variables_from(invoker,
+                             [
+                               "testonly",
+                               "visibility",
+                             ])
+    }
+
+    # TODO(cramertj): remove pending TC-81.
+    fidl_rust_library(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+
+    fidl_cpp(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+
+    fidl_c_client(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+
+    fidl_c_server(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+
+    group("${target_name}_c") {
+      forward_variables_from(invoker,
+                             [
+                               "testonly",
+                               "visibility",
+                             ])
+
+      public_deps = [
+        ":${target_name}_client",
+        ":${target_name}_server",
+      ]
+    }
+  } else {
+    # TODO(ctiller): this case is for host-side FIDL, and ultimately
+    # should be identical to the previous case (once C & Rust are usable from
+    # host)
+    import("//build/c/fidl_c.gni")
+
+    fidl_tables(target_name) {
+      forward_variables_from(invoker,
+                             [
+                               "testonly",
+                               "visibility",
+                             ])
+    }
+
+    fidl_cpp(target_name) {
+      forward_variables_from(invoker, "*")
+    }
+  }
+}
diff --git a/build/fidl/fidl_library.gni b/build/fidl/fidl_library.gni
new file mode 100644
index 0000000..1ca02c7
--- /dev/null
+++ b/build/fidl/fidl_library.gni
@@ -0,0 +1,250 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/compiled_action.gni")
+import("//build/fidl/toolchain.gni")
+import("//build/json/validate_json.gni")
+import("//build/sdk/sdk_atom.gni")
+
+# Generates an intermediate representation of a FIDL library that is
+# consumable by language bindings generators.
+#
+# The parameters for this template are defined in //build/fidl/fidl.gni. The
+# relevant parameters in this template are:
+#   - name;
+#   - sdk_category;
+#   - sources.
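+#
+# Example (normally instantiated indirectly, via the fidl() template, in the
+# FIDL toolchain; names illustrative):
+#
+#   fidl_library("fuchsia.example") {
+#     sources = [ "example.fidl" ]
+#   }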
+
+template("fidl_library") {
+  assert(
+      current_toolchain == fidl_toolchain,
+      "This template can only be used in the FIDL toolchain $fidl_toolchain.")
+
+  assert(defined(invoker.sources), "A FIDL library requires some sources.")
+
+  library_name = target_name
+  if (defined(invoker.name)) {
+    library_name = invoker.name
+  }
+
+  response_file = "$target_gen_dir/$target_name.args"
+  fidl_stem = "$target_gen_dir/$target_name.fidl"
+  json_representation = "$fidl_stem.json"
+  c_stem = string_replace(library_name, ".", "/") + "/c/fidl"
+  c_client = "$root_gen_dir/$c_stem.client.c"
+  c_header = "$root_gen_dir/$c_stem.h"
+  c_server = "$root_gen_dir/$c_stem.server.c"
+  coding_tables = "$fidl_stem.tables.cc"
+
+  main_target_name = target_name
+  response_file_target_name = "${target_name}_response_file"
+  compilation_target_name = "${target_name}_compile"
+  verification_target_name = "${target_name}_verify"
+
+  all_deps = []
+  if (defined(invoker.deps)) {
+    all_deps += invoker.deps
+  }
+  if (defined(invoker.public_deps)) {
+    all_deps += invoker.public_deps
+  }
+
+  action(response_file_target_name) {
+    visibility = [ ":*" ]
+
+    script = "//build/fidl/gen_response_file.py"
+
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "public_deps",
+                             "sources",
+                             "testonly",
+                           ])
+
+    libraries = "$target_gen_dir/$main_target_name.libraries"
+
+    outputs = [
+      response_file,
+      libraries,
+    ]
+
+    args = [
+             "--out-response-file",
+             rebase_path(response_file, root_build_dir),
+             "--out-libraries",
+             rebase_path(libraries, root_build_dir),
+             "--json",
+             rebase_path(json_representation, root_build_dir),
+             "--c-client",
+             rebase_path(c_client, root_build_dir),
+             "--c-header",
+             rebase_path(c_header, root_build_dir),
+             "--c-server",
+             rebase_path(c_server, root_build_dir),
+             "--tables",
+             rebase_path(coding_tables, root_build_dir),
+             "--name",
+             library_name,
+             "--sources",
+           ] + rebase_path(sources, root_build_dir)
+
+    if (all_deps != []) {
+      dep_libraries = []
+
+      foreach(dep, all_deps) {
+        gen_dir = get_label_info(dep, "target_gen_dir")
+        name = get_label_info(dep, "name")
+        dep_libraries += [ "$gen_dir/$name.libraries" ]
+      }
+
+      inputs = dep_libraries
+
+      args += [ "--dep-libraries" ] + rebase_path(dep_libraries, root_build_dir)
+    }
+  }
+
+  compiled_action(compilation_target_name) {
+    forward_variables_from(invoker, [ "testonly" ])
+
+    visibility = [ ":*" ]
+
+    tool = "//zircon/public/tool/fidlc"
+
+    inputs = [
+      response_file,
+    ]
+
+    outputs = [
+      c_client,
+      c_header,
+      c_server,
+      coding_tables,
+      json_representation,
+    ]
+
+    rebased_response_file = rebase_path(response_file, root_build_dir)
+
+    args = [ "@$rebased_response_file" ]
+
+    deps = [
+      ":$response_file_target_name",
+    ]
+  }
+
+  validate_json(verification_target_name) {
+    forward_variables_from(invoker, [ "testonly" ])
+    visibility = [ ":*" ]
+    data = json_representation
+    schema = "//zircon/system/host/fidl/schema.json"
+    deps = [
+      ":$compilation_target_name",
+    ]
+  }
+
+  group(main_target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "testonly",
+                             "visibility",
+                           ])
+
+    public_deps = [
+      ":$compilation_target_name",
+      ":$response_file_target_name",
+    ]
+
+    deps = [
+      ":$verification_target_name",
+    ]
+  }
+
+  if (defined(invoker.sdk_category) && invoker.sdk_category != "excluded") {
+    library_name = target_name
+    if (defined(invoker.name)) {
+      library_name = invoker.name
+    }
+
+    # Process sources.
+    file_base = "fidl/$library_name"
+    all_files = []
+    sdk_sources = []
+    foreach(source, invoker.sources) {
+      relative_source = rebase_path(source, ".")
+      if (string_replace(relative_source, "..", "bogus") != relative_source) {
+        # If the source file is not within the same directory, just use the file
+        # name.
+        relative_source = get_path_info(source, "file")
+      }
+      destination = "$file_base/$relative_source"
+      sdk_sources += [ destination ]
+      all_files += [
+        {
+          source = rebase_path(source)
+          dest = destination
+        },
+      ]
+    }
+
+    # Identify metadata for dependencies.
+    sdk_metas = []
+    sdk_deps = []
+    foreach(dep, all_deps) {
+      full_label = get_label_info(dep, "label_no_toolchain")
+      sdk_dep = "${full_label}_sdk"
+      sdk_deps += [ sdk_dep ]
+      gen_dir = get_label_info(sdk_dep, "target_gen_dir")
+      name = get_label_info(sdk_dep, "name")
+      sdk_metas += [ rebase_path("$gen_dir/$name.meta.json") ]
+    }
+
+    # Generate the library metadata.
+    meta_file = "$target_gen_dir/${target_name}.sdk_meta.json"
+    meta_target_name = "${target_name}_meta"
+
+    action(meta_target_name) {
+      script = "//build/fidl/gen_sdk_meta.py"
+
+      inputs = sdk_metas
+
+      outputs = [
+        meta_file,
+      ]
+
+      args = [
+               "--out",
+               rebase_path(meta_file),
+               "--name",
+               library_name,
+               "--root",
+               file_base,
+               "--specs",
+             ] + sdk_metas + [ "--sources" ] + sdk_sources
+
+      deps = sdk_deps
+    }
+
+    sdk_atom("${target_name}_sdk") {
+      id = "sdk://fidl/$library_name"
+
+      category = invoker.sdk_category
+
+      meta = {
+        source = meta_file
+        dest = "$file_base/meta.json"
+        schema = "fidl_library"
+      }
+
+      files = all_files
+
+      non_sdk_deps = [ ":$meta_target_name" ]
+
+      deps = []
+      foreach(dep, all_deps) {
+        label = get_label_info(dep, "label_no_toolchain")
+        deps += [ "${label}_sdk" ]
+      }
+    }
+  }
+}
diff --git a/build/fidl/gen_response_file.py b/build/fidl/gen_response_file.py
new file mode 100755
index 0000000..582a58b
--- /dev/null
+++ b/build/fidl/gen_response_file.py
@@ -0,0 +1,82 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import sys
+
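+# The generated response file is a single line of fidlc arguments, e.g.
+# (paths and names illustrative):
+#   --json gen/example.fidl.json --name fuchsia.example --files example.fidl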
+
+def read_libraries(libraries_path):
+    with open(libraries_path) as f:
+        lines = f.readlines()
+        return [l.rstrip("\n") for l in lines]
+
+
+def write_libraries(libraries_path, libraries):
+    directory = os.path.dirname(libraries_path)
+    if not os.path.exists(directory):
+        os.makedirs(directory)
+    with open(libraries_path, "w+") as f:
+        for library in libraries:
+            f.write(library)
+            f.write("\n")
+
+
+def main():
+    parser = argparse.ArgumentParser(description="Generate response file for FIDL frontend")
+    parser.add_argument("--out-response-file", help="The path for for the response file to generate", required=True)
+    parser.add_argument("--out-libraries", help="The path for for the libraries file to generate", required=True)
+    parser.add_argument("--json", help="The path for the JSON file to generate, if any")
+    parser.add_argument("--tables", help="The path for the tables file to generate, if any")
+    parser.add_argument("--c-client", help="The path for the C simple client file to generate, if any")
+    parser.add_argument("--c-header", help="The path for the C header file to generate, if any")
+    parser.add_argument("--c-server", help="The path for the C simple server file to generate, if any")
+    parser.add_argument("--name", help="The name for the generated FIDL library, if any")
+    parser.add_argument("--sources", help="List of FIDL source files", nargs="*")
+    parser.add_argument("--dep-libraries", help="List of dependent libraries", nargs="*")
+    args = parser.parse_args()
+
+    target_libraries = []
+
+    for dep_libraries_path in args.dep_libraries or []:
+        dep_libraries = read_libraries(dep_libraries_path)
+        for library in dep_libraries:
+            if library in target_libraries:
+                continue
+            target_libraries.append(library)
+
+    target_libraries.append(" ".join(sorted(args.sources)))
+    write_libraries(args.out_libraries, target_libraries)
+
+    response_file = []
+
+    if args.json:
+        response_file.append("--json %s" % args.json)
+
+    if args.tables:
+        response_file.append("--tables %s" % args.tables)
+
+    if args.c_client:
+        response_file.append("--c-client %s" % args.c_client)
+
+    if args.c_header:
+        response_file.append("--c-header %s" % args.c_header)
+
+    if args.c_server:
+        response_file.append("--c-server %s" % args.c_server)
+
+    if args.name:
+        response_file.append("--name %s" % args.name)
+
+    response_file.extend(["--files %s" % library for library in target_libraries])
+
+    with open(args.out_response_file, "w+") as f:
+        f.write(" ".join(response_file))
+        f.write("\n")
+
+
+if __name__ == "__main__":
+  sys.exit(main())
diff --git a/build/fidl/gen_sdk_meta.py b/build/fidl/gen_sdk_meta.py
new file mode 100755
index 0000000..15fc979
--- /dev/null
+++ b/build/fidl/gen_sdk_meta.py
@@ -0,0 +1,57 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import os
+import sys
+
+
+def main():
+    parser = argparse.ArgumentParser(description='Builds a metadata file')
+    parser.add_argument('--out',
+                        help='Path to the output file',
+                        required=True)
+    parser.add_argument('--name',
+                        help='Name of the library',
+                        required=True)
+    parser.add_argument('--root',
+                        help='Root of the library in the SDK',
+                        required=True)
+    parser.add_argument('--specs',
+                        help='Path to spec files of dependencies',
+                        nargs='*')
+    parser.add_argument('--sources',
+                        help='List of library sources',
+                        nargs='+')
+    args = parser.parse_args()
+
+    metadata = {
+        'type': 'fidl_library',
+        'name': args.name,
+        'root': args.root,
+        'sources': args.sources,
+    }
+
+    deps = []
+    for spec in args.specs:
+        with open(spec, 'r') as spec_file:
+            data = json.load(spec_file)
+        type = data['type']
+        name = data['name']
+        if type == 'fidl_library':
+            deps.append(name)
+        else:
+            raise Exception('Unsupported dependency type: %s' % type)
+    metadata['deps'] = deps
+
+    with open(args.out, 'w') as out_file:
+        json.dump(metadata, out_file, indent=2, sort_keys=True)
+
+    return 0
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/fidl/toolchain.gni b/build/fidl/toolchain.gni
new file mode 100644
index 0000000..aedf348
--- /dev/null
+++ b/build/fidl/toolchain.gni
@@ -0,0 +1,5 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+fidl_toolchain = "//build/fidl:fidling"
diff --git a/build/fuzzing/BUILD.gn b/build/fuzzing/BUILD.gn
new file mode 100644
index 0000000..6554929
--- /dev/null
+++ b/build/fuzzing/BUILD.gn
@@ -0,0 +1,7 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+config("fuzzing_build_mode_unsafe_for_production") {
+  defines = [ "FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION" ]
+}
diff --git a/build/fuzzing/fuzzer.gni b/build/fuzzing/fuzzer.gni
new file mode 100644
index 0000000..d37f35b
--- /dev/null
+++ b/build/fuzzing/fuzzer.gni
@@ -0,0 +1,370 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/package.gni")
+
+# Declare a fuzzed executable target.
+#
+# Creates an instrumented executable file for fuzzing with a sanitizer.  Do not
+# use directly; instead, use `fuzz_target` with an associated `fuzz_package`
+# specifying the supported sanitizers.
+#
+# Takes all the same parameters as executable().
+template("_fuzzed_executable") {
+  executable(target_name) {
+    _target_type = "fuzzed_executable"
+
+    # Explicitly forward visibility, implicitly forward everything else.
+    # See comment in //build/config/BUILDCONFIG.gn for details on this pattern.
+    forward_variables_from(invoker, [ "visibility" ])
+    forward_variables_from(invoker,
+                           "*",
+                           [
+                             "visibility",
+                             "_target_type",
+                           ])
+  }
+}
+
+# Declares a resource needed by the fuzzer.
+#
+# Creates a (potentially empty) resource file for use by a `fuzz_target`.  Do
+# not use directly; instead, use `fuzz_target` with an associated
+# `fuzz_package` specifying the supported sanitizers.
+#
+# Parameters:
+#
+#   fuzz_name (required)
+#      [string] Name of the associated `fuzz_target`.
+#
+#   resource (required)
+#      [string] Base file name of the resource to produce.
+#
+#   script (optional)
+#      [file] Script that produces the resources.  Defaults to
+#      //build/fuzzing/gen_fuzzer_resource.py
+#
+#   args (optional)
+#      [list of strings] Additional arguments to append to the script command
+#      line.
+#
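+# Example (internal use only; mirrors how `fuzz_target` invokes it below):
+#
+#   _fuzz_resource("my_fuzzer_options") {
+#     fuzz_name = "my_fuzzer"
+#     resource = "options"
+#     args = [ "max_len=1024" ]
+#   }
+#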
+template("_fuzz_resource") {
+  assert(defined(invoker.fuzz_name),
+         "`fuzz_name` must be defined for $target_name")
+  assert(defined(invoker.resource),
+         "`resource` must be defined for $target_name")
+  action(target_name) {
+    if (defined(invoker.script)) {
+      script = invoker.script
+    } else {
+      script = "//build/fuzzing/gen_fuzzer_resource.py"
+    }
+    output = "${target_gen_dir}/${invoker.fuzz_name}/${invoker.resource}"
+    outputs = [
+      output,
+    ]
+    args = [
+      "--out",
+      rebase_path(output),
+    ]
+    if (defined(invoker.args)) {
+      args += invoker.args
+    }
+  }
+}
+
+# Defines a fuzz target binary
+#
+# The fuzz_target template is used to create binaries which leverage LLVM's
+# libFuzzer to perform fuzz testing.
+#
+# Parameters
+#
+#   options (optional)
+#     [list of strings] Each option is of the form "key=value" and indicates
+#     command line options that the fuzzer should be invoked with. Valid keys
+#     are libFuzzer options (https://llvm.org/docs/LibFuzzer.html#options).
+#
+#   dictionary (optional)
+#     [file] If specified, a file containing inputs, one per line, that the
+#     fuzzer will use to generate new mutations.
+#
+#   corpora (optional)
+#     [list of strings] One or more locations from which to get a fuzzing
+#     corpus, e.g. a project path to a directory of artifacts, a project
+#     path to a CIPD ensure file, or a GCS URL.
+#
+#   sources (optional)
+#     [list of files] The C++ sources required to build the fuzzer binary. With
+#     the exception of the "zircon_fuzzers" targets, this list must be present
+#     and include a file defining `LLVMFuzzerTestOneInput`.  It will typically
+#     be a single file (with deps including the software under test).
+#
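+# Example (target, file, and dependency names illustrative):
+#
+#   fuzz_target("my_fuzzer") {
+#     sources = [ "my_fuzzer.cc" ]
+#     deps = [ ":my_lib" ]
+#     options = [ "max_len=1024" ]
+#   }
+#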
+template("fuzz_target") {
+  assert(defined(invoker.sources), "`sources` must be defined for $target_name")
+  _fuzzed_executable(target_name) {
+    forward_variables_from(invoker,
+                           "*",
+                           [
+                             "corpora",
+                             "dictionary",
+                             "options",
+                             "output_name",
+                           ])
+    testonly = true
+  }
+
+  # The following rules ensure the fuzzer resources are present (even if empty)
+  # and in known locations.  This is needed as the fuzz_package has no way of
+  # knowing the original file names when building the package manifest.  These
+  # files only need to be written once, so only do it in the base toolchain.
+  if (current_toolchain == toolchain_variant.base) {
+    fuzz_name = target_name
+
+    # Copy the corpora available for the fuzzer
+    _fuzz_resource("${fuzz_name}_corpora") {
+      resource = "corpora"
+      if (defined(invoker.corpora)) {
+        args = invoker.corpora
+      }
+    }
+
+    # Copy the fuzzer dictionary
+    _fuzz_resource("${fuzz_name}_dictionary") {
+      resource = "dictionary"
+      if (defined(invoker.dictionary)) {
+        args = read_file(invoker.dictionary, "list lines")
+      }
+    }
+
+    # Copy the options the fuzzer should be invoked with
+    _fuzz_resource("${fuzz_name}_options") {
+      resource = "options"
+      if (defined(invoker.options)) {
+        args = invoker.options
+      }
+    }
+
+    # Create a component manifest
+    _fuzz_resource("${fuzz_name}_cmx") {
+      script = "//build/fuzzing/gen_fuzzer_manifest.py"
+      resource = "${fuzz_name}.cmx"
+      args = [
+        "--bin",
+        "${fuzz_name}",
+      ]
+      if (defined(invoker.cmx)) {
+        args += [
+          "--cmx",
+          rebase_path(invoker.cmx),
+        ]
+      }
+    }
+  } else {
+    not_needed(invoker, "*")
+  }
+}
+
+set_defaults("fuzz_target") {
+  configs = default_executable_configs +
+            [ "//build/fuzzing:fuzzing_build_mode_unsafe_for_production" ]
+}
+
+# Defines a package of fuzz target binaries
+#
+# The fuzz_package template is used to bundle several fuzz_targets and their
+# associated data into a single Fuchsia package.
+#
+# Parameters
+#
+#   targets (required)
+#     [list of labels] The fuzz_target() targets to include in this package.
+#
+#   sanitizers (required)
+#     [list of variants] A set of sanitizer variants.  The resulting package
+#     will contain binaries for each sanitizer/target combination.
+#
+#   binaries (optional)
+#   data_deps (optional)
+#   deps (optional)
+#   public_deps (optional)
+#   extra (optional)
+#   loadable_modules (optional)
+#   meta (optional)
+#   resources (optional)
+#     Passed through to the normal package template.
+#
+#   fuzz_host (optional)
+#     [boolean] Indicates whether to also build fuzzer binaries on host.
+#     Defaults to false.
+#
+#   omit_binaries (optional)
+#     [bool] If true, indicates the fuzz target binaries will be provided by
+#     a non-GN build, e.g. the Zircon build.  The package will contain the
+#     resources needed by the fuzzer, but no binaries or component manifests.
+#     Defaults to false.
+#
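+# Example (names illustrative; the sanitizer variants must exist in the
+# build):
+#
+#   fuzz_package("my_fuzzers") {
+#     targets = [ ":my_fuzzer" ]
+#     sanitizers = [ "asan", "ubsan" ]
+#   }
+#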
+template("fuzz_package") {
+  assert(defined(invoker.targets), "targets must be defined for $target_name")
+  assert(defined(invoker.sanitizers),
+         "sanitizers must be defined for $target_name")
+
+  # Only assemble the package once; handle the specific sanitizers in the loop below
+  if (current_toolchain == toolchain_variant.base) {
+    fuzz = {
+      binaries = []
+      deps = []
+      resources = []
+      meta = []
+      host = false
+      host_binaries = []
+      omit_binaries = false
+      forward_variables_from(invoker,
+                             [
+                               "binaries",
+                               "data_deps",
+                               "deps",
+                               "public_deps",
+                               "extra",
+                               "loadable_modules",
+                               "meta",
+                               "resources",
+                               "targets",
+                               "sanitizers",
+                             ])
+    }
+    if (defined(invoker.fuzz_host)) {
+      fuzz.host = invoker.fuzz_host
+    }
+    if (defined(invoker.omit_binaries)) {
+      fuzz.omit_binaries = invoker.omit_binaries
+    }
+
+    # It's possible (although unusual) that targets could be empty, e.g. a placeholder package
+    if (fuzz.targets == []) {
+      not_needed(fuzz, [ "sanitizers" ])
+    }
+
+    foreach(fuzz_target, fuzz.targets) {
+      fuzz_name = get_label_info(fuzz_target, "name")
+
+      # Find the executable variant for the sanitized fuzzer
+      selected = false
+      sanitized_target = ""
+      host_target = ""
+      foreach(sanitizer, fuzz.sanitizers) {
+        if (!selected) {
+          foreach(selector, select_variant_canonical) {
+            if (selector.variant == "${sanitizer}-fuzzer") {
+              if (defined(selector.target_type)) {
+                selector_target_type = []
+                selector_target_type = selector.target_type
+                if (selector_target_type[0] == "fuzzed_executable") {
+                  selected = true
+                }
+              }
+              if (defined(selector.name)) {
+                selector_name = []
+                selector_name = selector.name
+                if (selector_name[0] == fuzz_name) {
+                  selected = true
+                }
+              }
+              if (defined(selector.output_name)) {
+                selector_output_name = []
+                selector_output_name = selector.output_name
+                if (selector_output_name[0] == fuzz_name) {
+                  selected = true
+                }
+              }
+              if (selected) {
+                sanitized_target = "${fuzz_target}(${toolchain_variant.base}-${sanitizer}-fuzzer)"
+                host_target =
+                    "${fuzz_target}(${host_toolchain}-${sanitizer}-fuzzer)"
+              }
+            }
+          }
+        }
+      }
+
+      # If enabled, add fuzz target to package
+      if (sanitized_target != "") {
+        fuzz_label = get_label_info(sanitized_target, "label_no_toolchain")
+        fuzz_gen_dir =
+            get_label_info(fuzz_target, "target_gen_dir") + "/${fuzz_name}"
+
+        if (!fuzz.omit_binaries) {
+          fuzz.deps += [ sanitized_target ]
+          fuzz_out_dir = get_label_info(sanitized_target, "root_out_dir")
+          fuzz.binaries += [
+            {
+              name = fuzz_name
+              source = "${fuzz_out_dir}/${fuzz_name}"
+              dest = fuzz_name
+            },
+          ]
+
+          fuzz.deps += [ "${fuzz_label}_cmx(${toolchain_variant.base})" ]
+          fuzz.meta += [
+            {
+              path = "${fuzz_gen_dir}/${fuzz_name}.cmx"
+              dest = "${fuzz_name}.cmx"
+            },
+          ]
+
+          fuzz.host_binaries += [ "$host_target" ]
+        }
+
+        fuzz.deps += [
+          "${fuzz_label}_corpora(${toolchain_variant.base})",
+          "${fuzz_label}_dictionary(${toolchain_variant.base})",
+          "${fuzz_label}_options(${toolchain_variant.base})",
+        ]
+        fuzz.resources += [
+          {
+            path = "${fuzz_gen_dir}/corpora"
+            dest = "${fuzz_name}/corpora"
+          },
+          {
+            path = "${fuzz_gen_dir}/dictionary"
+            dest = "${fuzz_name}/dictionary"
+          },
+          {
+            path = "${fuzz_gen_dir}/options"
+            dest = "${fuzz_name}/options"
+          },
+        ]
+      } else {
+        not_needed([
+                     "fuzz_name",
+                     "host_target",
+                   ])
+        not_needed(fuzz, "*")
+      }
+    }
+
+    group("host_${target_name}") {
+      testonly = true
+      if (fuzz.host) {
+        deps = fuzz.host_binaries
+      }
+    }
+
+    # Build the actual package
+    package(target_name) {
+      testonly = true
+      forward_variables_from(fuzz,
+                             "*",
+                             [
+                               "targets",
+                               "sanitizers",
+                               "host",
+                               "host_binaries",
+                               "omit_binaries",
+                             ])
+    }
+  } else {
+    not_needed(invoker, "*")
+  }
+}
diff --git a/build/fuzzing/gen_fuzzer_manifest.py b/build/fuzzing/gen_fuzzer_manifest.py
new file mode 100755
index 0000000..5812273
--- /dev/null
+++ b/build/fuzzing/gen_fuzzer_manifest.py
@@ -0,0 +1,41 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import sys
+from collections import defaultdict
+
+
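+# A minimal sketch of the behavior (hypothetical names):
+#   gen_fuzzer_manifest.py --out my_fuzzer.cmx --bin my_fuzzer
+# produces a manifest along the lines of:
+#   {"program": {"binary": "bin/my_fuzzer"},
+#    "sandbox": {"features": ["persistent-storage"],
+#                "services": ["fuchsia.process.Launcher"]}}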
+def main():
+  parser = argparse.ArgumentParser(
+      description="Creates a component manifest for a fuzzer")
+  parser.add_argument("--out", help="Path to the output file", required=True)
+  parser.add_argument(
+      "--bin", help="Package relative path to the binary", required=True)
+  parser.add_argument("--cmx", help="Optional starting manifest")
+  args = parser.parse_args()
+
+  cmx = defaultdict(dict)
+  if args.cmx:
+    with open(args.cmx, "r") as f:
+      # json.load returns a plain dict; fold it into the defaultdict so that
+      # a starting manifest without "program" or "sandbox" sections still
+      # gets them created on demand below.
+      cmx.update(json.load(f))
+
+  cmx["program"]["binary"] = "bin/" + args.bin
+
+  if "services" not in cmx["sandbox"]:
+    cmx["sandbox"]["services"] = []
+  cmx["sandbox"]["services"].append("fuchsia.process.Launcher")
+
+  if "features" not in cmx["sandbox"]:
+    cmx["sandbox"]["features"] = []
+  if "persistent-storage" not in cmx["sandbox"]["features"]:
+    cmx["sandbox"]["features"].append("persistent-storage")
+
+  with open(args.out, "w") as f:
+    f.write(json.dumps(cmx, sort_keys=True, indent=4))
+
+
+if __name__ == "__main__":
+  sys.exit(main())
diff --git a/build/fuzzing/gen_fuzzer_resource.py b/build/fuzzing/gen_fuzzer_resource.py
new file mode 100755
index 0000000..4820ebc
--- /dev/null
+++ b/build/fuzzing/gen_fuzzer_resource.py
@@ -0,0 +1,20 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import sys
+
+def main():
+  parser = argparse.ArgumentParser(
+      description="Creates a resource file for a fuzzer")
+  parser.add_argument("--out", help="Path to the output file", required=True)
+  args, extra = parser.parse_known_args()
+
+  with open(args.out, "w") as f:
+    for item in extra:
+      f.write(item)
+      f.write("\n")
+
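+# A minimal sketch of the behavior (hypothetical libFuzzer-style flags):
+#   gen_fuzzer_resource.py --out options -max_len=64 -timeout=25
+# writes "-max_len=64" and "-timeout=25", one per line, to `options`.
+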
+if __name__ == "__main__":
+  sys.exit(main())
diff --git a/build/gn/BUILD.gn b/build/gn/BUILD.gn
new file mode 100644
index 0000000..26b64c6
--- /dev/null
+++ b/build/gn/BUILD.gn
@@ -0,0 +1,243 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/compiled_action.gni")
+import("//build/config/fuchsia/zircon.gni")
+import("//build/gn/packages.gni")
+import("//build/package.gni")
+import("//build/testing/platforms.gni")
+import("//build/toolchain/goma.gni")
+
+# Permit dependencies on testonly targets from packages.
+testonly = true
+
+group("packages") {
+  deps = available_packages
+  data_deps = package_data_deps
+}
+
+group("default") {
+  deps = [
+    ":host_tests",
+    ":packages",
+  ]
+  if (preinstall_packages != [] || monolith_packages != []) {
+    deps += [ "//build/images" ]
+  }
+  if (available_packages != []) {
+    deps += [ "//build/images:updates" ]
+  }
+}
+
+# Copy host test binaries to $root_build_dir/host_tests.
+# TODO(IN-819): Delete this copy target once host tests are no longer run out
+# of a single directory.
+if (package_host_tests != []) {
+  copy("host_tests") {
+    deps = []
+    sources = []
+    bindir = get_label_info("//anything($host_toolchain)", "root_out_dir")
+
+    # package_host_tests may contain duplicate entries. Those entries must be
+    # de-duplicated here to avoid output collisions.
+    foreach(label, package_host_tests) {
+      _full_label = "$label($host_toolchain)"
+      deps += [ _full_label ]
+      binary = get_label_info(label, "name")
+      sources += [ "$bindir/$binary" ]
+    }
+
+    outputs = [
+      "$root_build_dir/host_tests/{{source_file_part}}",
+    ]
+  }
+} else {
+  group("host_tests") {
+  }
+}
+
+# TODO(joshuaseaton|mcgrathr): Make this a formal build_api_module.
+#
+# Aggregates metadata about all tests within the build graph to create a
+# top-level manifest.
+generated_file("tests") {
+  outputs = [
+    "$root_build_dir/tests.json",
+  ]
+  data_keys = [ "test_spec" ]
+  output_conversion = "json"
+  deps = [
+    ":host_tests",
+    ":packages",
+  ]
+}
+
+# Write a JSON metadata file about the host tests in the build.
+host_tests = []
+foreach(label, package_host_tests) {
+  host_label = "$label($host_toolchain)"
+  host_tests += [
+    {
+      dir = get_label_info(host_label, "dir")
+      name = get_label_info(host_label, "name")
+      build_dir = rebase_path(get_label_info(host_label, "target_out_dir"),
+                              root_build_dir)
+    },
+  ]
+}
+write_file("$root_build_dir/host_tests.json", host_tests, "json")
+
+# Collect the source files that are dependencies of the create_gn_rules.py
+# script, below.  Unfortunately, exec_script cannot use a depfile produced
+# by the script and only supports a separately computed list of dependencies.
+zircon_files =
+    exec_script("//build/zircon/list_source_files.py", [], "list lines")
+
+supporting_templates = [
+  "//build/zircon/boards.mako",
+  "//build/zircon/header.mako",
+  "//build/zircon/host_tool.mako",
+  "//build/zircon/main.mako",
+  "//build/zircon/shared_library.mako",
+  "//build/zircon/source_library.mako",
+  "//build/zircon/static_library.mako",
+  "//build/zircon/sysroot.mako",
+]
+
+# The following script generates GN build files for Zircon objects.  It is
+# placed before everything else so that //zircon targets are available in
+# due time.  See //build/zircon/README.md for more details.
+exec_script("//build/zircon/create_gn_rules.py",
+            [
+              "--out",
+              rebase_path("//zircon/public"),
+              "--staging",
+              rebase_path("$root_out_dir/zircon-gn"),
+              "--zircon-user-build",
+              rebase_path(zircon_build_abi_dir),
+              "--zircon-tool-build",
+              rebase_path("$zircon_tools_dir/.."),
+              "--make",
+              zircon_make_path,
+            ],
+            "",
+            zircon_files + supporting_templates)
+
+# Write a file that can be sourced by `fx`.  This file is produced
+# by `gn gen` and is not known to Ninja at all, so it has nothing to
+# do with the build itself.  Its sole purpose is to leave bread
+# crumbs about the settings `gn gen` used for `fx` to use later.
+_relative_build_dir = rebase_path(root_build_dir, "//", "//")
+_fx_config_lines = [
+  "# Generated by `gn gen`.",
+  "FUCHSIA_BUILD_DIR='${_relative_build_dir}'",
+  "FUCHSIA_ARCH='${target_cpu}'",
+]
+if (use_goma) {
+  _fx_config_lines += [
+    "# This will affect Zircon's make via //scripts/build-zircon.sh.",
+    "export GOMACC='${goma_dir}/gomacc'",
+  ]
+}
+_fx_build_zircon_args = ""
+if (zircon_use_asan) {
+  _fx_build_zircon_args += " -A"
+}
+foreach(selector, select_variant) {
+  if (selector == "host_asan") {
+    _fx_build_zircon_args += " -H"
+  }
+}
+if (_fx_build_zircon_args != "") {
+  _fx_config_lines += [ "FUCHSIA_BUILD_ZIRCON_ARGS=($_fx_build_zircon_args)" ]
+}
+write_file("$root_build_dir/fx.config", _fx_config_lines)
+
+# Generates breakpad symbol data for unstripped binaries.
+#
+# This symbol data is consumed by infrastructure tools and uploaded to Crash
+# servers to enable crash reporting.  These files are uniquely important for
+# release builds and this step may take a few minutes to complete, so it is
+# not recommended that this be included in the default build.
+action("breakpad_symbols") {
+  testonly = true
+  script = "//buildtools/${host_platform}/dump_breakpad_symbols"
+
+  deps = [
+    "//build/images:ids.txt",
+  ]
+
+  inputs = [
+    "//buildtools/${host_platform}/dump_syms/dump_syms",
+  ]
+  sources = [
+    "$root_out_dir/ids.txt",
+  ]
+
+  # This action generates a single xxx.sym file for each binary in the ids file
+  # and produces an archived output of them all.
+  outputs = [
+    "$root_out_dir/breakpad_symbols/breakpad_symbols.tar.gz",
+  ]
+
+  depfile = "${outputs[0]}.d"
+
+  args = [
+           "-out-dir",
+           rebase_path("$root_out_dir/breakpad_symbols"),
+           "-dump-syms-path",
+           rebase_path("//buildtools/${host_platform}/dump_syms/dump_syms"),
+           "-depfile",
+           rebase_path(depfile, root_build_dir),
+           "-tar-file",
+           rebase_path(outputs[0], root_build_dir),
+         ] + rebase_path(sources, root_build_dir)
+}
+
+# Generates an archive of package metadata.
+amber_files = rebase_path("$root_build_dir/amber-files")
+host_out_dir = get_label_info("//anything($host_toolchain)", "root_out_dir")
+pm_tool = rebase_path("$host_out_dir/pm")
+pkg_archive_contents = [
+  "amber-files/repository=$amber_files/repository",
+
+  # TODO(IN-915): this should never contain the root key. In the future, this
+  # should contain no keys, once infra is managing key material itself.
+  # These keys are consumed by the infra train promote scripts.
+  "amber-files/keys=$amber_files/keys",
+  "pm=$pm_tool",
+]
+pkg_archive_manifest = "$target_gen_dir/package_archive_manifest"
+write_file(pkg_archive_manifest, pkg_archive_contents)
+
+pkg_archive = "$root_build_dir/packages.tar.gz"
+compiled_action("package_archive") {
+  testonly = true
+  tool = "//build/tools/tar_maker"
+  inputs = [
+    pkg_archive_manifest,
+  ]
+  outputs = [
+    pkg_archive,
+  ]
+  args = [
+    "-manifest",
+    rebase_path(pkg_archive_manifest),
+    "-output",
+    rebase_path(pkg_archive),
+  ]
+  deps = [
+    "//build/images:updates",
+  ]
+}
+
+# Generates a JSON manifest of the platforms available for testing, along with
+# their properties.
+target_platforms = []
+foreach(platform, test_platforms) {
+  if (!defined(platform.cpu) || platform.cpu == current_cpu) {
+    target_platforms += [ platform ]
+  }
+}
+write_file("$root_build_dir/platforms.json", target_platforms, "json")
diff --git a/build/gn/check-layer-dependencies.py b/build/gn/check-layer-dependencies.py
new file mode 100755
index 0000000..8d40019
--- /dev/null
+++ b/build/gn/check-layer-dependencies.py
@@ -0,0 +1,74 @@
+#!/usr/bin/env python
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import os
+import subprocess
+import sys
+
+FUCHSIA_ROOT = os.path.dirname(  # $root
+    os.path.dirname(             # build
+    os.path.dirname(             # gn
+    os.path.abspath(__file__))))
+GN = os.path.join(FUCHSIA_ROOT, "buildtools", "gn")
+
+# The layers of the Fuchsia cake
+# Note that these must remain ordered by increasing proximity to the silicon.
+LAYERS = [
+  'topaz',
+  'peridot',
+  'garnet',
+  'zircon',
+]
+
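+# Typical invocation, from the root of the source tree:
+#   build/gn/check-layer-dependencies.py --layer peridot --out out/debug-x64
+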
+def main():
+    parser = argparse.ArgumentParser('check-layer-dependencies',
+            formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+    parser.add_argument('--layer',
+                        help='[required] Name of the layer to inspect',
+                        choices=LAYERS)
+    parser.add_argument('--out',
+                        help='Build output directory',
+                        default='out/debug-x64')
+    args = parser.parse_args()
+    layer = args.layer
+    out = args.out
+    if not layer:
+        parser.print_help()
+        return 1
+
+    layer_index = LAYERS.index(layer)
+    create_labels = lambda layers: list(map(lambda l: '//%s' % l, layers))
+    upper_layers = create_labels(LAYERS[0:layer_index])
+    lower_layers = create_labels(LAYERS[layer_index:])
+    public_labels = subprocess.check_output(
+            [GN, 'ls', out, '//%s/public/*' % layer]).splitlines()
+    is_valid = True
+
+    for label in public_labels:
+        deps = subprocess.check_output(
+            [GN, 'desc', out, label, 'deps']).splitlines()
+        for dep in deps:
+            # We should never depend on upper layers.
+            for upper_layer in upper_layers:
+                if dep.startswith(upper_layer):
+                    is_valid = False
+                    print('Upper layer violation')
+                    print('  Label %s' % label)
+                    print('  Dep   %s' % dep)
+            # If we depend on the same layer or a layer below, that dependency
+            # should be located in its layer's public directory.
+            for lower_layer in lower_layers:
+                if (dep.startswith(lower_layer)
+                        and not dep.startswith('%s/public' % lower_layer)):
+                    is_valid = False
+                    print('Lower layer violation')
+                    print('  Label %s' % label)
+                    print('  Dep   %s' % dep)
+    return 0 if is_valid else 1
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/gn/dotfile.gn b/build/gn/dotfile.gn
new file mode 100644
index 0000000..4eec209
--- /dev/null
+++ b/build/gn/dotfile.gn
@@ -0,0 +1,44 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# This file lives at //build/gn/dotfile.gn for maintenance purposes.
+# It's actually used by GN via a symlink at //.gn, which is installed
+# by a jiri hook.  This file directs GN to all the other key files.
+
+# The location of the build configuration file.
+buildconfig = "//build/config/BUILDCONFIG.gn"
+
+# The secondary source root is a parallel directory tree where
+# GN build files are placed when they can not be placed directly
+# in the source tree, e.g. for third party source trees.
+secondary_source = "//build/secondary/"
+
+# The source root location.
+root = "//build/gn"
+
+# The executable used to execute scripts in action and exec_script.
+script_executable = "/usr/bin/env"
+
+# These arguments override the default values for items in a declare_args
+# block. "gn args" in turn can override these.
+default_args = {
+  # Default Skia settings: enable the Flutter defines and disable features
+  # that are not needed for host builds.
+  skia_enable_flutter_defines = true
+  skia_enable_pdf = false
+  skia_use_dng_sdk = false
+  skia_use_expat = false
+  skia_use_fontconfig = false
+  skia_use_libwebp = false
+  skia_use_sfntly = false
+  skia_use_x11 = false
+}
+
+# Enable checking for the layers.
+check_targets = [
+  "//garnet/*",
+  "//peridot/*",
+  "//topaz/*",
+  "//vendor/*",
+  "//zircon/*",
+]
diff --git a/build/gn/gen_persistent_log_config.py b/build/gn/gen_persistent_log_config.py
new file mode 100755
index 0000000..198c518
--- /dev/null
+++ b/build/gn/gen_persistent_log_config.py
@@ -0,0 +1,43 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import sys
+
+
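+# A minimal sketch of the output (hypothetical tag):
+#   gen_persistent_log_config.py klog /tmp/sysmgr.config --tags klog
+# writes:
+#   {"apps": [["fuchsia-pkg://fuchsia.com/log_listener#meta/log_listener.cmx",
+#              "--file", "/data/logs.klog", "--tag", "klog"]]}
+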
+def main():
+    parser = argparse.ArgumentParser(
+        description='Generate a sysmgr config to persist logs')
+    parser.add_argument('name', help='Name of the config')
+    parser.add_argument('path', help='Path to the package file')
+    parser.add_argument('--tags', help='Tag to filter for', nargs='+')
+    parser.add_argument('--ignore-tags', help='Tag to ignore', nargs='+')
+    parser.add_argument('--file-capacity', help='max allowed disk usage',
+            type=int)
+    args = parser.parse_args()
+
+    tag_args = []
+    if args.tags:
+        for t in args.tags:
+            tag_args += [ '--tag', t ]
+
+    ignore_tag_args = []
+    if args.ignore_tags:
+        for t in args.ignore_tags:
+            ignore_tag_args += [ '--ignore-tag', t ]
+
+    file_cap_args = []
+    if args.file_capacity is not None:
+        file_cap_args = [ '--file_capacity', str(args.file_capacity) ]
+
+    with open(args.path, 'w') as f:
+        json.dump({'apps': [
+            ["fuchsia-pkg://fuchsia.com/log_listener#meta/log_listener.cmx",
+             "--file", "/data/logs." + args.name]
+            + file_cap_args + tag_args + ignore_tag_args]}, f)
+
+    return 0
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/gn/guess_layer.py b/build/gn/guess_layer.py
new file mode 100755
index 0000000..edae386
--- /dev/null
+++ b/build/gn/guess_layer.py
@@ -0,0 +1,75 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# TODO(TO-908): This script should be replaced with a jiri feature:
+# `jiri import -json-output` to yield imports in some JSON schema.
+# That could be parsed directly from GN.
+
+from __future__ import absolute_import
+from __future__ import print_function
+
+import argparse
+import os
+import re
+import sys
+import xml.etree.ElementTree
+
+
+LAYERS_RE = re.compile('^(fuchsia|garnet|peridot|topaz|vendor/.*)$')
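+
+# e.g. an <import> element such as (illustrative):
+#   <import manifest="garnet/garnet" name="garnet" remote="https://..."/>
+# yields the manifest basename "garnet", which LAYERS_RE accepts.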
+
+
+# Prints the name and returns True iff LAYERS_RE matches it.
+def print_if_layer_name(name):
+    if LAYERS_RE.match(name):
+        # TODO: Remove the guesswork from configuring the build. Instead,
+        # developers should configure the product they want to build explicitly.
+        if name == "fuchsia":
+          name = "peridot"
+        print(name)
+        return True
+    return False
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description='Guess the current cake layer from the Jiri manifest file',
+        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+    parser.add_argument(
+        'manifest', type=argparse.FileType('r'), nargs='?',
+        default=os.path.normpath(
+            os.path.join(os.path.dirname(__file__),
+                         os.path.pardir, os.path.pardir, '.jiri_manifest')))
+    args = parser.parse_args()
+
+    tree = xml.etree.ElementTree.parse(args.manifest)
+
+    if tree.find('overrides') is None:
+      sys.stderr.write('found no overrides. guessing project from imports\n')
+      for elt in tree.iter('import'):
+        # manifest can be something like garnet/garnet in which case we want
+        # garnet or internal/vendor/foo/bar in which case we want vendor/foo.
+        head, name = os.path.split(elt.attrib['manifest'])
+        if 'vendor/' in head:
+          head, name = os.path.split(head)
+          name = os.path.join('vendor', name)
+        if print_if_layer_name(name):
+          return 0
+
+    # Guess the layer from the name of the <project> that is overridden in
+    # the current manifest.
+    for elt in tree.iter('overrides'):
+      for project in elt.findall('project'):
+        if print_if_layer_name(project.attrib.get('name', '')):
+          return 0
+
+    sys.stderr.write("ERROR: Could not guess petal from %s. "
+                     "Ensure 'board' and either 'product' or 'packages' are set.\n"
+                     % args.manifest.name)
+
+    return 2
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/gn/packages.gni b/build/gn/packages.gni
new file mode 100644
index 0000000..441d303
--- /dev/null
+++ b/build/gn/packages.gni
@@ -0,0 +1,121 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+declare_args() {
+  # If a package is referenced in monolith and in preinstall, monolith takes
+  # priority, and the package will be added to OTA images as part of the
+  # verified boot set of static packages.
+
+  # These arguments should be set by the product definition gni file.
+
+  # a list of packages included in OTA images, base system images, and the
+  # distribution repository.
+  monolith = []
+
+  # a list of packages pre-installed on the system (also added to the
+  # distribution repository)
+  preinstall = []
+
+  # a list of packages only added to the distribution repository
+  available = []
+
+  # These arguments should be set by the board definition gni file.
+
+  # A list of packages included in the monolith from the board definition.
+  # This list is appended with the list from the product definition and any
+  # additional specified packages
+  board_packages = []
+
+  # List of packages (a GN list of strings).
+  # This list of packages is currently added to the set of "monolith" packages,
+  # see `products` for more information; in the future, these packages will be
+  # added to the "preinstall".
+  # If unset, layer will be guessed using //.jiri_manifest and
+  # //{layer}/products/default.gni will be used.
+  fuchsia_packages = []
+
+  # Legacy product definitions.
+  fuchsia_products = []
+}
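+
+# For example, a product definition .gni might set (illustrative package
+# file paths, relative to the source root):
+#   monolith = [ "garnet/packages/prod/sysmgr" ]
+#   preinstall = [ "garnet/packages/tools/debug_agent" ]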
+
+monolith += board_packages
+monolith += fuchsia_packages
+
+# Print a warning message if the legacy fuchsia_products field is set.
+# Only print in the default toolchain so the warning only shows up once.
+if (fuchsia_products != [] && current_toolchain == default_toolchain) {
+  print("WARNING! Deprecated fuchsia product specification detected")
+  print("Please re-run 'fx set' to update your build configuration")
+  print(
+      "See https://fuchsia.googlesource.com/fuchsia/+/master/docs/development/build/")
+  print("or BLD-240 for more details")
+}
+
+if (monolith == [] && preinstall == [] && available == [] &&
+    fuchsia_packages == []) {
+  _jiri_manifest = "//.jiri_manifest"
+  _layers = exec_script("//build/gn/guess_layer.py",
+                        [ rebase_path(_jiri_manifest) ],
+                        "list lines",
+                        [ _jiri_manifest ])
+  foreach(layer, _layers) {
+    import("//$layer/products/default.gni")
+  }
+}
+
+# Resolve all the `fuchsia_products` JSON files and their dependencies
+# into lists of GN labels:
+# monolith - package labels for base system and verified boot image
+# preinstall - package labels for preinstall, but not OTA
+# available - package labels for the install and update repository
+# host_tests - labels for host tests
+# data_deps - labels for host tools and non-package build targets
+_preprocessed_products = exec_script("preprocess_products.py",
+                                     [
+                                       # A list of strings in GN syntax is
+                                       # valid JSON too.
+                                       "--monolith=$monolith",
+                                       "--preinstall=$preinstall",
+                                       "--available=$available",
+                                       "--legacy-products=$fuchsia_products",
+                                     ],
+                                     "json")
+
+# Tell GN that the files preprocess_products.py ran are inputs to the
+# generation step, by declaring them as file inputs to a (silly) exec_script
+# invocation.
+exec_script("/bin/sh",
+            [
+              "-c",
+              ":",
+            ],
+            "",
+            _preprocessed_products.files_read)
+
+monolith_packages = []
+foreach(pkg, _preprocessed_products.monolith) {
+  monolith_packages += [ get_label_info(pkg, "label_no_toolchain") ]
+}
+preinstall_packages = []
+foreach(pkg, _preprocessed_products.preinstall) {
+  preinstall_packages += [ get_label_info(pkg, "label_no_toolchain") ]
+}
+available_packages = []
+foreach(pkg, _preprocessed_products.available) {
+  available_packages += [ get_label_info(pkg, "label_no_toolchain") ]
+}
+
+# Every extra GN target the package JSON requests be built on the side.
+# This is for things like install_host_tools() targets whose output should
+# be on hand for a developer to use in conjunction with a Fuchsia package.
+package_data_deps = []
+foreach(pkg, _preprocessed_products.data_deps) {
+  package_data_deps += [ get_label_info(pkg, "label_no_toolchain") ]
+}
+
+# Labels of test() targets to be copied into $root_build_dir/host_tests.
+package_host_tests = []
+foreach(label, _preprocessed_products.host_tests) {
+  package_host_tests += [ get_label_info(label, "label_no_toolchain") ]
+}
diff --git a/build/gn/paths.py b/build/gn/paths.py
new file mode 100644
index 0000000..a50d9eb
--- /dev/null
+++ b/build/gn/paths.py
@@ -0,0 +1,37 @@
+#!/usr/bin/env python
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import os
+import platform
+
+SCRIPT_DIR = os.path.abspath(os.path.dirname(__file__))
+FUCHSIA_ROOT = os.path.abspath(os.path.join(SCRIPT_DIR, os.pardir, os.pardir))
+GN_PATH = os.path.join(FUCHSIA_ROOT, "buildtools", "gn")
+BUILDTOOLS_PATH = os.path.join(FUCHSIA_ROOT, "buildtools", '%s-%s' % (
+    platform.system().lower().replace('darwin', 'mac'),
+    {
+        'x86_64': 'x64',
+        'aarch64': 'arm64',
+    }[platform.machine()],
+))
+DEBUG_OUT_DIR = os.path.join(FUCHSIA_ROOT, "out", "debug-x64")
+RELEASE_OUT_DIR = os.path.join(FUCHSIA_ROOT, "out", "release-x64")
+
+_BUILD_TOOLS = {}
+
+def build_tool(package, tool):
+    """Return the full path of TOOL binary in PACKAGE.
+
+    This will raise an assertion failure if the binary doesn't exist.
+    This function memoizes its results, so there's not much need to
+    cache its results in calling code.
+    """
+
+    path = _BUILD_TOOLS.get((package, tool))
+    if path is None:
+        path = os.path.join(BUILDTOOLS_PATH, package, 'bin', tool)
+        assert os.path.exists(path), "No '%s' tool in '%s'" % (tool, package)
+        _BUILD_TOOLS[package, tool] = path
+    return path
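+
+# For example, on a linux-x64 host (hypothetical package layout):
+#   build_tool('go', 'go') -> '<FUCHSIA_ROOT>/buildtools/linux-x64/go/bin/go'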
diff --git a/build/gn/prepreprocess_build_packages.py b/build/gn/prepreprocess_build_packages.py
new file mode 100755
index 0000000..66266a9
--- /dev/null
+++ b/build/gn/prepreprocess_build_packages.py
@@ -0,0 +1,124 @@
+#!/usr/bin/env python
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import os.path
+import paths
+import sys
+
+
+class PackageImportsResolver:
+    """Recursively resolves imports in build packages. See
+       https://fuchsia.googlesource.com/fuchsia/+/master/docs/development/build/packages.md
+       for more information about build packages.
+
+       An observer may be used to perform additional work whenever an
+       import is resolved. This observer needs to implement a method with this
+       signature:
+
+       def import_resolved(self, config, config_path)
+
+       where config is the JSON file representing the build package.
+
+       If there was an error reading any of the input files, resolution
+       stops and `None` is returned.
+       """
+
+    def __init__(self, observer=None):
+        self.observer = observer
+        self._errored = False
+
+    def resolve(self, imports):
+        return self.resolve_imports(imports)
+
+    def errored(self):
+        return self._errored
+
+    def resolve_imports(self, import_queue):
+
+        def detect_duplicate_keys(pairs):
+            keys = set()
+            result = {}
+            for k, v in pairs:
+                if k in keys:
+                    raise Exception("Duplicate key %s" % k)
+                keys.add(k)
+                result[k] = v
+            return result
+
+        imported = set(import_queue)
+        while import_queue:
+            config_name = import_queue.pop()
+            config_path = os.path.join(paths.FUCHSIA_ROOT, config_name)
+            try:
+                with open(config_path) as f:
+                    try:
+                        config = json.load(f,
+                            object_pairs_hook=detect_duplicate_keys)
+                        if self.observer:
+                            self.observer.import_resolved(config, config_path)
+                        for i in config.get("imports", []):
+                            if i not in imported:
+                                import_queue.append(i)
+                                imported.add(i)
+                    except Exception as e:
+                        import traceback
+                        traceback.print_exc()
+                        sys.stderr.write(
+                            "Failed to parse config %s, error %s\n" %
+                            (config_path, str(e)))
+                        self._errored = True
+                        return None
+            except IOError:
+                self._errored = True
+                sys.stderr.write("Failed to read package '%s' from '%s'.\n" %
+                                 (config_name, config_path))
+                if "/" not in config_name:
+                    sys.stderr.write("""
+Package names are relative to the root of the source tree but the requested
+path did not contain a '/'. Did you mean 'build/gn/%s' instead?
+    """ % config_name)
+                return None
+        return imported
+
+
+class PackageLabelObserver:
+    def __init__(self):
+        self.json_result = {
+            'targets': [],
+            'data_deps': [],
+            'host_tests': [],
+            'files_read': [],
+        }
+
+    def import_resolved(self, config, config_path):
+        self.json_result['targets'] += config.get('packages', [])
+        self.json_result['data_deps'] += config.get('labels', [])
+        self.json_result['host_tests'] += config.get('host_tests', [])
+        self.json_result['files_read'].append(config_path)
+
+
+def main():
+    parser = argparse.ArgumentParser(description='''
+Determine labels and Fuchsia packages included in the current build.
+''')
+    parser.add_argument('--packages',
+                        help='JSON list of packages',
+                        required=True)
+    args = parser.parse_args()
+
+    observer = PackageLabelObserver()
+    imports_resolver = PackageImportsResolver(observer)
+    imported = imports_resolver.resolve_imports(json.loads(args.packages))
+
+    if imported is None:
+        return 1
+
+    json.dump(observer.json_result, sys.stdout, sort_keys=True)
+
+    return 0
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/build/gn/preprocess_products.py b/build/gn/preprocess_products.py
new file mode 100755
index 0000000..57297fc
--- /dev/null
+++ b/build/gn/preprocess_products.py
@@ -0,0 +1,132 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import os.path
+import paths
+import sys
+from prepreprocess_build_packages import PackageImportsResolver, PackageLabelObserver
+
+def parse_product(product, build_packages):
+    """
+    product - a path to a JSON product file to parse
+    build_packages - a dict that collects merged sets
+    """
+    product = os.path.join(paths.FUCHSIA_ROOT, product)
+    build_packages["files_read"].add(product)
+
+    with open(product) as f:
+        for k, v in json.load(f).items():
+            if k == "monolith":
+                build_packages[k].update(v)
+                continue
+            if k == "preinstall":
+                build_packages[k].update(v)
+                continue
+            if k == "available":
+                build_packages[k].update(v)
+                continue
+            sys.stderr.write("Invalid product key in %s: %s\n" % (product, k))
+
+
+def preprocess_packages(packages):
+    observer = PackageLabelObserver()
+    imports_resolver = PackageImportsResolver(observer)
+    imported = imports_resolver.resolve_imports(packages)
+
+    if imports_resolver.errored():
+        raise ImportError
+
+    if imported is None:
+        return None
+
+    return observer.json_result
+
+
+def main():
+    parser = argparse.ArgumentParser(description="""
+Merge a list of product definitions to unique lists of GN labels:
+
+monolith   - the list of packages included in the base system images
+preinstall - the list of packages preinstalled, but not part of the OTA
+available  - the list of packages installable and updatable
+host_tests - host tests collected from all above package sets
+data_deps  - additional labels to build, such as host tools
+files_read - a list of files used to compute all of the above
+""")
+    parser.add_argument("--monolith",
+                        help="List of package definitions for the monolith",
+                        required=True)
+    parser.add_argument("--preinstall",
+                        help="List of package definitions for preinstalled packages",
+                        required=True)
+    parser.add_argument("--available",
+                        help="List of package definitions for available packages",
+                        required=True)
+    parser.add_argument("--legacy-products",
+                        help="List of legacy product definitions",
+                        required=False)
+    args = parser.parse_args()
+
+    build_packages = {
+        "monolith": set(),
+        "preinstall": set(),
+        "available": set(),
+        "files_read": set(),
+    }
+
+    # Parse monolith, preinstall, and available sets.
+    build_packages["monolith"].update(json.loads(args.monolith))
+    build_packages["preinstall"].update(json.loads(args.preinstall))
+    build_packages["available"].update(json.loads(args.available))
+
+    # Merge in the legacy product configurations, if set. The flag is
+    # optional, so guard against it being absent.
+    if args.legacy_products:
+        for product in json.loads(args.legacy_products):
+            parse_product(product, build_packages)
+
+    try:
+        monolith_results = preprocess_packages(list(build_packages["monolith"]))
+        preinstall_results = preprocess_packages(list(build_packages["preinstall"]))
+        available_results = preprocess_packages(list(build_packages["available"]))
+    except ImportError:
+        return 1
+
+    host_tests = set()
+    data_deps = set()
+    for res in (monolith_results, preinstall_results, available_results):
+        if res is None:
+            continue
+        if res["host_tests"]:
+            host_tests.update(res["host_tests"])
+        if res["data_deps"]:
+            data_deps.update(res["data_deps"])
+        if res["files_read"]:
+            build_packages["files_read"].update(res["files_read"])
+
+    monolith_targets = set(monolith_results["targets"] if monolith_results else ())
+    preinstall_targets = set(preinstall_results["targets"] if preinstall_results else ())
+    available_targets = set(available_results["targets"] if available_results else ())
+
+    # preinstall_targets must not include monolith targets
+    preinstall_targets -= monolith_targets
+
+    # available_targets must include monolith and preinstall targets
+    available_targets |= monolith_targets | preinstall_targets
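+    # Worked example: monolith={a}, preinstall={a, b}, available={c} yields
+    # preinstall={b} and available={a, b, c}.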
+
+    print(json.dumps({
+        "monolith": list(monolith_targets),
+        "preinstall": list(preinstall_targets),
+        "available": list(available_targets),
+        "host_tests": list(host_tests),
+        "data_deps": list(data_deps),
+        "files_read": list(build_packages["files_read"]),
+    }))
+
+    return 0
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/build/gn/write_package_json.py b/build/gn/write_package_json.py
new file mode 100755
index 0000000..7f90a5d
--- /dev/null
+++ b/build/gn/write_package_json.py
@@ -0,0 +1,25 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import sys
+
+
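+# Example: `write_package_json.py --name foo --version 0 meta/package` writes
+# {"name": "foo", "version": "0"} to meta/package.
+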
+def main():
+    parser = argparse.ArgumentParser(description='Write a package file')
+    parser.add_argument('--name', help='Package name', required=True)
+    parser.add_argument('--version', help='Package version', required=True)
+    parser.add_argument('path', help='Path to the package file')
+    args = parser.parse_args()
+
+    with open(args.path, 'w') as f:
+        json.dump({'name': args.name, 'version': args.version}, f)
+
+    return 0
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/gn_helpers.py b/build/gn_helpers.py
new file mode 100644
index 0000000..1beee6f
--- /dev/null
+++ b/build/gn_helpers.py
@@ -0,0 +1,351 @@
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+"""Helper functions useful when writing scripts that integrate with GN.
+
+The main functions are ToGNString and FromGNString which convert between
+serialized GN variables and Python variables.
+
+To use in a random python file in the build:
+
+  import os
+  import sys
+
+  sys.path.append(os.path.join(os.path.dirname(__file__),
+                               os.pardir, os.pardir, "build"))
+  import gn_helpers
+
+Where the sequence of parameters to join is the relative path from your source
+file to the build directory."""
+
+class GNException(Exception):
+  pass
+
+
+def ToGNString(value, allow_dicts=True):
+  """Returns a stringified GN equivalent of the Python value.
+
+  allow_dicts indicates if this function will allow converting dictionaries
+  to GN scopes. This is only possible at the top level, you can't nest a
+  GN scope in a list, so this should be set to False for recursive calls."""
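+  # A few example mappings:
+  #   ToGNString("foo")      -> '"foo"'
+  #   ToGNString([1, True])  -> '[ 1, true ]'
+  #   ToGNString({'a': 'b'}) -> 'a = "b"\n'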
+  if isinstance(value, basestring):
+    if value.find('\n') >= 0:
+      raise GNException("Trying to print a string with a newline in it.")
+    return '"' + \
+        value.replace('\\', '\\\\').replace('"', '\\"').replace('$', '\\$') + \
+        '"'
+
+  if isinstance(value, unicode):
+    return ToGNString(value.encode('utf-8'))
+
+  if isinstance(value, bool):
+    if value:
+      return "true"
+    return "false"
+
+  if isinstance(value, list):
+    return '[ %s ]' % ', '.join(ToGNString(v) for v in value)
+
+  if isinstance(value, dict):
+    if not allow_dicts:
+      raise GNException("Attempting to recursively print a dictionary.")
+    result = ""
+    for key in sorted(value):
+      if not isinstance(key, basestring):
+        raise GNException("Dictionary key is not a string.")
+      result += "%s = %s\n" % (key, ToGNString(value[key], False))
+    return result
+
+  if isinstance(value, int):
+    return str(value)
+
+  raise GNException("Unsupported type when printing to GN.")
+
+
+def FromGNString(input_string):
+  """Converts the input string from a GN serialized value to Python values.
+
+  For details on supported types see GNValueParser.Parse() below.
+
+  If your GN script did:
+    something = [ "file1", "file2" ]
+    args = [ "--values=$something" ]
+  The command line would look something like:
+    --values="[ \"file1\", \"file2\" ]"
+  Which when interpreted as a command line gives the value:
+    [ "file1", "file2" ]
+
+  You can parse this into a Python list using GN rules with:
+    input_values = FromGNString(options.values)
+  Although the Python 'ast' module will parse many forms of such input, it
+  will not handle GN escaping properly, nor GN booleans. You should use this
+  function instead.
+
+
+  A NOTE ON STRING HANDLING:
+
+  If you just pass a string on the command line to your Python script, or use
+  string interpolation on a string variable, the strings will not be quoted:
+    str = "asdf"
+    args = [ str, "--value=$str" ]
+  Will yield the command line:
+    asdf --value=asdf
+  The unquoted asdf string will not be valid input to this function, which
+  accepts only quoted strings like GN scripts. In such cases, you can just use
+  the Python string literal directly.
+
+  The main use case for this is other types, in particular lists. When
+  using string interpolation on a list (as in the top example) the embedded
+  strings will be quoted and escaped according to GN rules so the list can be
+  re-parsed to get the same result."""
+  parser = GNValueParser(input_string)
+  return parser.Parse()
+
+
+def FromGNArgs(input_string):
+  """Converts a string with a bunch of gn arg assignments into a Python dict.
+
+  Given a whitespace-separated list of
+
+    <ident> = (integer | string | boolean | <list of the former>)
+
+  gn assignments, this returns a Python dict, i.e.:
+
+    FromGNArgs("foo=true\nbar=1\n") -> { 'foo': True, 'bar': 1 }.
+
+  Only simple types and lists are supported; variables, structs, calls,
+  and other, more complicated things are not.
+
+  This routine is meant to handle only the simple sorts of values that
+  arise in parsing --args.
+  """
+  parser = GNValueParser(input_string)
+  return parser.ParseArgs()
+
+
+def UnescapeGNString(value):
+  """Given a string with GN escaping, returns the unescaped string.
+
+  Be careful not to feed with input from a Python parsing function like
+  'ast' because it will do Python unescaping, which will be incorrect when
+  fed into the GN unescaper."""
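+  # e.g. UnescapeGNString(r'a\$b') == 'a$b'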
+  result = ''
+  i = 0
+  while i < len(value):
+    if value[i] == '\\':
+      if i < len(value) - 1:
+        next_char = value[i + 1]
+        if next_char in ('$', '"', '\\'):
+          # These are the escaped characters GN supports.
+          result += next_char
+          i += 1
+        else:
+          # Any other backslash is a literal.
+          result += '\\'
+    else:
+      result += value[i]
+    i += 1
+  return result
+
+
+def _IsDigitOrMinus(char):
+  return char in "-0123456789"
+
+
+class GNValueParser(object):
+  """Duplicates GN parsing of values and converts to Python types.
+
+  Normally you would use the wrapper function FromGNString(), above.
+
+  If you expect input as a specific type, you can also call one of the Parse*
+  functions directly. All functions throw GNException on invalid input."""
+  def __init__(self, string):
+    self.input = string
+    self.cur = 0
+
+  def IsDone(self):
+    return self.cur == len(self.input)
+
+  def ConsumeWhitespace(self):
+    while not self.IsDone() and self.input[self.cur] in ' \t\n':
+      self.cur += 1
+
+  def Parse(self):
+    """Converts a string representing a printed GN value to the Python type.
+
+    See additional usage notes on FromGNString above.
+
+    - GN booleans ('true', 'false') will be converted to Python booleans.
+
+    - GN numbers ('123') will be converted to Python numbers.
+
+    - GN strings (double-quoted as in '"asdf"') will be converted to Python
+      strings with GN escaping rules. GN string interpolation (embedded
+      variables preceded by $) are not supported and will be returned as
+      literals.
+
+    - GN lists ('[1, "asdf", 3]') will be converted to Python lists.
+
+    - GN scopes ('{ ... }') are not supported."""
+    result = self._ParseAllowTrailing()
+    self.ConsumeWhitespace()
+    if not self.IsDone():
+      raise GNException("Trailing input after parsing:\n  " +
+                        self.input[self.cur:])
+    return result
+
+  def ParseArgs(self):
+    """Converts a whitespace-separated list of ident=literals to a dict.
+
+    See additional usage notes on FromGNArgs, above.
+    """
+    d = {}
+
+    self.ConsumeWhitespace()
+    while not self.IsDone():
+      ident = self._ParseIdent()
+      self.ConsumeWhitespace()
+      if self.IsDone() or self.input[self.cur] != '=':
+        raise GNException("Unexpected token: " + self.input[self.cur:])
+      self.cur += 1
+      self.ConsumeWhitespace()
+      val = self._ParseAllowTrailing()
+      self.ConsumeWhitespace()
+      d[ident] = val
+
+    return d
+
+  def _ParseAllowTrailing(self):
+    """Internal version of Parse that doesn't check for trailing stuff."""
+    self.ConsumeWhitespace()
+    if self.IsDone():
+      raise GNException("Expected input to parse.")
+
+    next_char = self.input[self.cur]
+    if next_char == '[':
+      return self.ParseList()
+    elif _IsDigitOrMinus(next_char):
+      return self.ParseNumber()
+    elif next_char == '"':
+      return self.ParseString()
+    elif self._ConstantFollows('true'):
+      return True
+    elif self._ConstantFollows('false'):
+      return False
+    else:
+      raise GNException("Unexpected token: " + self.input[self.cur:])
+
+  def _ParseIdent(self):
+    ident = ''
+
+    next_char = self.input[self.cur]
+    if not next_char.isalpha() and not next_char == '_':
+      raise GNException("Expected an identifier: " + self.input[self.cur:])
+
+    ident += next_char
+    self.cur += 1
+
+    # Consume the rest of the identifier, stopping at end of input.
+    while not self.IsDone():
+      next_char = self.input[self.cur]
+      if not (next_char.isalpha() or next_char.isdigit() or next_char == '_'):
+        break
+      ident += next_char
+      self.cur += 1
+
+    return ident
+
+  def ParseNumber(self):
+    self.ConsumeWhitespace()
+    if self.IsDone():
+      raise GNException('Expected number but got nothing.')
+
+    begin = self.cur
+
+    # The first character can include a negative sign.
+    if not self.IsDone() and _IsDigitOrMinus(self.input[self.cur]):
+      self.cur += 1
+    while not self.IsDone() and self.input[self.cur].isdigit():
+      self.cur += 1
+
+    number_string = self.input[begin:self.cur]
+    if not number_string or number_string == '-':
+      raise GNException("Not a valid number.")
+    return int(number_string)
+
+  def ParseString(self):
+    self.ConsumeWhitespace()
+    if self.IsDone():
+      raise GNException('Expected string but got nothing.')
+
+    if self.input[self.cur] != '"':
+      raise GNException('Expected string beginning in a " but got:\n  ' +
+                        self.input[self.cur:])
+    self.cur += 1  # Skip over quote.
+
+    begin = self.cur
+    while not self.IsDone() and self.input[self.cur] != '"':
+      if self.input[self.cur] == '\\':
+        self.cur += 1  # Skip over the backslash.
+        if self.IsDone():
+          raise GNException("String ends in a backslash in:\n  " +
+                            self.input)
+      self.cur += 1
+
+    if self.IsDone():
+      raise GNException('Unterminated string:\n  ' + self.input[begin:])
+
+    end = self.cur
+    self.cur += 1  # Consume trailing ".
+
+    return UnescapeGNString(self.input[begin:end])
+
+  def ParseList(self):
+    self.ConsumeWhitespace()
+    if self.IsDone():
+      raise GNException('Expected list but got nothing.')
+
+    # Skip over opening '['.
+    if self.input[self.cur] != '[':
+      raise GNException("Expected [ for list but got:\n  " +
+                        self.input[self.cur:])
+    self.cur += 1
+    self.ConsumeWhitespace()
+    if self.IsDone():
+      raise GNException("Unterminated list:\n  " + self.input)
+
+    list_result = []
+    previous_had_trailing_comma = True
+    while not self.IsDone():
+      if self.input[self.cur] == ']':
+        self.cur += 1  # Skip over ']'.
+        return list_result
+
+      if not previous_had_trailing_comma:
+        raise GNException("List items not separated by comma.")
+
+      list_result += [ self._ParseAllowTrailing() ]
+      self.ConsumeWhitespace()
+      if self.IsDone():
+        break
+
+      # Consume comma if there is one.
+      previous_had_trailing_comma = self.input[self.cur] == ','
+      if previous_had_trailing_comma:
+        # Consume comma.
+        self.cur += 1
+        self.ConsumeWhitespace()
+
+    raise GNException("Unterminated list:\n  " + self.input)
+
+  def _ConstantFollows(self, constant):
+    """Returns true if the given constant follows immediately at the current
+    location in the input. If it does, the text is consumed and the function
+    returns true. Otherwise, returns false and the current position is
+    unchanged."""
+    end = self.cur + len(constant)
+    if end > len(self.input):
+      return False  # Not enough room.
+    if self.input[self.cur:end] == constant:
+      self.cur = end
+      return True
+    return False
diff --git a/build/gn_run_binary.sh b/build/gn_run_binary.sh
new file mode 100755
index 0000000..ae932cc
--- /dev/null
+++ b/build/gn_run_binary.sh
@@ -0,0 +1,32 @@
+#!/bin/sh
+
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# Helper script to run an arbitrary binary produced by the current build.
+# The first argument is the bin directory of the toolchain, where
+# llvm-symbolizer can be found.  The second argument is the binary to run,
+# and remaining arguments are passed to that binary.
+
+clang_bindir="$1"
+shift
+
+binary="$1"
+shift
+
+case "$binary" in
+/*) ;;
+*) binary="./$binary" ;;
+esac
+
+# Make sure any sanitizer runtimes that might be included in the binary
+# can find llvm-symbolizer.
+symbolizer="${clang_bindir}/llvm-symbolizer"
+export ASAN_SYMBOLIZER_PATH="$symbolizer"
+export LSAN_SYMBOLIZER_PATH="$symbolizer"
+export MSAN_SYMBOLIZER_PATH="$symbolizer"
+export UBSAN_SYMBOLIZER_PATH="$symbolizer"
+export TSAN_OPTIONS="$TSAN_OPTIONS external_symbolizer_path=$symbolizer"
+
+exec "$binary" ${1+"$@"}
diff --git a/build/go/BUILD.gn b/build/go/BUILD.gn
new file mode 100644
index 0000000..a7102d1
--- /dev/null
+++ b/build/go/BUILD.gn
@@ -0,0 +1,19 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/toolchain/clang_toolchain.gni")
+
+# A toolchain dedicated to processing Go code.
+# The only targets in this toolchain are action() targets, so it
+# has no real tools.  But every toolchain needs stamp and copy.
+toolchain("gopher") {
+  tool("stamp") {
+    command = stamp_command
+    description = stamp_description
+  }
+  tool("copy") {
+    command = copy_command
+    description = copy_description
+  }
+}
diff --git a/build/go/build.py b/build/go/build.py
new file mode 100755
index 0000000..e892d3b
--- /dev/null
+++ b/build/go/build.py
@@ -0,0 +1,187 @@
+#!/usr/bin/env python
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# Build script for a Go app.
+
+import argparse
+import os
+import subprocess
+import sys
+import string
+import shutil
+import errno
+
+from gen_library_metadata import get_sources
+
+
+def main():
+    parser = argparse.ArgumentParser()
+    parser.add_argument('--godepfile', help='Path to godepfile tool', required=True)
+    parser.add_argument('--root-out-dir', help='Path to root of build output',
+                        required=True)
+    parser.add_argument('--sysroot', help='The sysroot to use',
+                        required=False)
+    parser.add_argument('--depfile', help='The path to the depfile',
+                        required=True)
+    parser.add_argument('--current-cpu', help='Target architecture.',
+                        choices=['x64', 'arm64'], required=True)
+    parser.add_argument('--current-os', help='Target operating system.',
+                        choices=['fuchsia', 'linux', 'mac', 'win'], required=True)
+    parser.add_argument('--go-root', help='The go root to use for builds.', required=True)
+    parser.add_argument('--go-cache', help='Cache directory to use for builds.',
+                        required=False)
+    parser.add_argument('--is-test', help='True if the target is a go test',
+                        default=False)
+    parser.add_argument('--go-dep-files',
+                        help='List of files describing library dependencies',
+                        nargs='*',
+                        default=[])
+    parser.add_argument('--binname', help='Output file', required=True)
+    parser.add_argument('--unstripped-binname', help='Unstripped output file')
+    parser.add_argument('--toolchain-prefix', help='Path to toolchain binaries',
+                        required=False)
+    parser.add_argument('--verbose', help='Tell the go tool to be verbose about what it is doing',
+                        action='store_true')
+    parser.add_argument('--package', help='The package name', required=True)
+    parser.add_argument('--shared-libs-root', help='Path to the build shared libraries',
+                        required=False)
+    parser.add_argument('--fdio-include', help='Path to the FDIO include directory',
+                        required=False)
+    parser.add_argument('--vet', help='Run go vet',
+                        action='store_true')
+    args = parser.parse_args()
+
+    try:
+        os.makedirs(args.go_cache)
+    except OSError as e:
+        if e.errno == errno.EEXIST and os.path.isdir(args.go_cache):
+            pass
+        else:
+            raise
+
+    goarch = {
+        'x64': 'amd64',
+        'arm64': 'arm64',
+    }[args.current_cpu]
+    goos = {
+        'fuchsia': 'fuchsia',
+        'linux': 'linux',
+        'mac': 'darwin',
+        'win': 'windows',
+    }[args.current_os]
+
+    output_name = os.path.join(args.root_out_dir, args.binname)
+    build_id_dir = os.path.join(args.root_out_dir, '.build-id')
+    depfile_output = output_name
+    if args.unstripped_binname:
+        stripped_output_name = output_name
+        output_name = os.path.join(args.root_out_dir, 'exe.unstripped',
+                                   args.binname)
+
+    # Project path is a package-specific GOPATH, also known as a "project"
+    # in go parlance.
+    project_path = os.path.join(args.root_out_dir, 'gen', 'gopaths', args.binname)
+
+    # Clean up any old project path to avoid leaking old dependencies
+    shutil.rmtree(os.path.join(project_path, 'src'), ignore_errors=True)
+    os.makedirs(os.path.join(project_path, 'src'))
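+    # The per-binary GOPATH ends up shaped like (illustrative):
+    #   gen/gopaths/<binname>/src/<package/import/path> -> symlink into tree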
+
+    if args.go_dep_files:
+      # Create a gopath for the packages dependency tree
+      for dst, src in get_sources(args.go_dep_files).items():
+        dstdir = os.path.join(project_path, 'src', os.path.dirname(dst))
+        try:
+          os.makedirs(dstdir)
+        except OSError as e:
+          # EEXIST occurs if two gopath entries share the same parent name
+          if e.errno != errno.EEXIST:
+            raise
+        # TODO(BLD-228): the following check might not be necessary anymore.
+        tgt = os.path.join(dstdir, os.path.basename(dst))
+        # The source tree is effectively read-only once the build begins.
+        # Therefore it is an error if tgt is in the source tree. At first
+        # glance this may seem impossible, but it can happen if dst is foo/bar
+        # and foo is a symlink back to the source tree.
+        canon_root_out_dir = os.path.realpath(args.root_out_dir)
+        canon_tgt = os.path.realpath(tgt)
+        if not canon_tgt.startswith(canon_root_out_dir):
+          raise ValueError("Dependency destination not in --root-out-dir: provided=%s, path=%s, realpath=%s" % (dst, tgt, canon_tgt))
+        os.symlink(os.path.relpath(src, os.path.dirname(tgt)), tgt)
+
+    gopath = os.path.abspath(project_path)
+    build_goroot = os.path.abspath(args.go_root)
+
+    env = {}
+    env['GOARCH'] = goarch
+    env['GOOS'] = goos
+    env['GOPATH'] = gopath
+    # Some users have GOROOT set in their parent environment, which can break
+    # things, so it is always set explicitly here.
+    env['GOROOT'] = build_goroot
+    env['GOCACHE'] = args.go_cache
+    env['CGO_CFLAGS'] = "--sysroot=" + args.sysroot
+
+    if goos == 'fuchsia':
+        env['CGO_ENABLED'] = '1'
+        env['CC'] = os.path.join(build_goroot, 'misc', 'fuchsia', 'clangwrap.sh')
+
+        # These are used by clangwrap.sh
+        env['FUCHSIA_SHARED_LIBS'] = args.shared_libs_root
+        env['CLANG_PREFIX'] = args.toolchain_prefix
+        env['FDIO_INCLUDE'] = args.fdio_include
+        env['ZIRCON_SYSROOT'] = args.sysroot
+
+    # /usr/bin:/bin are required for basic things like bash(1) and env(1); the
+    # toolchain path is listed first so it takes precedence. Note that on Mac,
+    # ld is also found in /usr/bin.
+    env['PATH'] = args.toolchain_prefix + ":/usr/bin:/bin"
+
+    go_tool = os.path.join(build_goroot, 'bin/go')
+
+    if args.vet:
+        retcode = subprocess.call([go_tool, 'vet', args.package], env=env)
+        if retcode != 0:
+          return retcode
+
+    cmd = [go_tool]
+    if args.is_test:
+      cmd += ['test', '-c']
+    else:
+      cmd += ['build']
+    if args.verbose:
+      cmd += ['-x']
+    cmd += ['-pkgdir', os.path.join(project_path, 'pkg'), '-o',
+            output_name, args.package]
+    retcode = subprocess.call(cmd, env=env)
+
+    if retcode == 0 and args.unstripped_binname:
+        if args.current_os == 'mac':
+            retcode = subprocess.call(['xcrun', 'strip', '-x', output_name,
+                                       '-o', stripped_output_name],
+                                      env=env)
+        else:
+            retcode = subprocess.call([os.path.join(args.toolchain_prefix,
+                                                    'llvm-objcopy'),
+                                       '--strip-sections',
+                                       '--build-id-link-dir=%s' % build_id_dir,
+                                       '--build-id-link-input=.debug',
+                                       '--build-id-link-output=',
+                                       output_name,
+                                       stripped_output_name],
+                                      env=env)
+
+    if retcode == 0:
+        if args.depfile is not None:
+            with open(args.depfile, "wb") as out:
+                godepfile_args = [args.godepfile, '-o', depfile_output]
+                if args.is_test:
+                    godepfile_args += ['-test']
+                godepfile_args += [args.package]
+                # Wait for godepfile to finish so the depfile is completely
+                # written before this script returns.
+                retcode = subprocess.call(godepfile_args, stdout=out, env=env)
+
+    return retcode
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/go/fidl_go.gni b/build/go/fidl_go.gni
new file mode 100644
index 0000000..4fb36fd
--- /dev/null
+++ b/build/go/fidl_go.gni
@@ -0,0 +1,84 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/compiled_action.gni")
+import("//build/fidl/toolchain.gni")
+import("//build/go/go_library.gni")
+import("//build/go/toolchain.gni")
+
+# Generates Go bindings for a FIDL library.
+#
+# The parameters for this template are defined in //build/fidl/fidl.gni. The
+# only parameter used directly by this template is:
+#   - name: the name of the FIDL library (defaults to the target name).
+
+template("fidl_go") {
+  assert(current_toolchain == go_toolchain,
+         "This template can only be used in $go_toolchain.")
+
+  not_needed(invoker, [ "sources" ])
+
+  main_target_name = target_name
+  generation_target_name = "${target_name}_go_generate"
+
+  library_name = target_name
+  if (defined(invoker.name)) {
+    library_name = invoker.name
+  }
+
+  fidl_target_gen_dir =
+      get_label_info(":$target_name($fidl_toolchain)", "target_gen_dir")
+  file_stem = "$fidl_target_gen_dir/$library_name.fidl"
+  json_representation = "$fidl_target_gen_dir/$target_name.fidl.json"
+
+  compiled_action(generation_target_name) {
+    visibility = [ ":*" ]
+
+    tool = "//garnet/go/src/fidl:fidlgen"
+
+    inputs = [
+      json_representation,
+    ]
+
+    outputs = [
+      "$file_stem/impl.go",
+      "$file_stem/pkg_name",
+    ]
+
+    args = [
+      "--json",
+      rebase_path(json_representation, root_build_dir),
+      "--output-base",
+      rebase_path(file_stem, root_build_dir),
+      "--include-base",
+      rebase_path(root_gen_dir, root_build_dir),
+      "--generators",
+      "go",
+    ]
+
+    deps = [
+      ":$main_target_name($fidl_toolchain)",
+    ]
+  }
+
+  go_library(main_target_name) {
+    name_file = "$file_stem/pkg_name"
+
+    source_dir = file_stem
+
+    sources = [
+      "impl.go",
+    ]
+
+    non_go_deps = [ ":$generation_target_name" ]
+
+    deps = []
+    if (defined(invoker.deps)) {
+      deps += invoker.deps
+    }
+    if (defined(invoker.public_deps)) {
+      deps += invoker.public_deps
+    }
+  }
+}
diff --git a/build/go/gen_library_metadata.py b/build/go/gen_library_metadata.py
new file mode 100755
index 0000000..325c82d
--- /dev/null
+++ b/build/go/gen_library_metadata.py
@@ -0,0 +1,95 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import os
+import sys
+
+
+class Source(object):
+
+    def __init__(self, name, path, file):
+        self.name = name
+        self.path = path
+        self.file = file
+
+    def __str__(self):
+        return '%s[%s]' % (self.name, self.path)
+
+    def __hash__(self):
+        return hash((self.name, self.path))
+
+    def __eq__(self, other):
+        return self.name == other.name and self.path == other.path
+
+
+def get_sources(dep_files, extra_sources=None):
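+    """Aggregates source mappings from dependency metadata files.
+
+    Each dep file is a JSON object mapping Go package names to source paths,
+    e.g. (hypothetical) {"example.com/mylib": "../../garnet/mylib"}.
+    Returns a single name -> path dict, raising if any package name is
+    claimed with two different paths.
+    """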
+    # Aggregate source data from dependencies.
+    sources = set()
+    if extra_sources:
+        sources.update(extra_sources)
+    for dep in dep_files:
+        with open(dep, 'r') as dep_file:
+            for name, path in json.load(dep_file).items():
+                sources.add(Source(name, path, dep))
+
+    # Verify duplicates.
+    sources_by_name = {}
+    for src in sources:
+        sources_by_name.setdefault(src.name, []).append(src)
+    for name, srcs in sources_by_name.items():
+        if len(srcs) <= 1:
+            continue
+        print('Error: source "%s" has multiple paths.' % name)
+        for src in srcs:
+            print(' - %s (%s)' % (src.path, src.file))
+        raise Exception('Could not aggregate sources')
+
+    return dict([(s.name, s.path) for s in sources])
+
+
+def main():
+    parser = argparse.ArgumentParser()
+    name_group = parser.add_mutually_exclusive_group(required=True)
+    name_group.add_argument('--name',
+                            help='Name of the current library')
+    name_group.add_argument('--name-file',
+                            help='Path to a file containing the name of the current library')
+    parser.add_argument('--source-dir',
+                        help='Path to the library\'s source directory',
+                        required=True)
+    parser.add_argument('--sources',
+                        help='List of source files',
+                        nargs='*')
+    parser.add_argument('--output',
+                        help='Path to the file to generate',
+                        required=True)
+    parser.add_argument('--deps',
+                        help='Dependencies of the current library',
+                        nargs='*')
+    args = parser.parse_args()
+    if args.name:
+        name = args.name
+    elif args.name_file:
+        with open(args.name_file, 'r') as name_file:
+            # Strip any trailing newline so the name is usable as a path.
+            name = name_file.read().strip()
+
+    current_sources = []
+    if args.sources:
+        # TODO(BLD-228): verify that the sources are in a single folder.
+        for source in args.sources:
+            current_sources.append(Source(os.path.join(name, source),
+                                          os.path.join(args.source_dir, source),
+                                          args.output))
+    else:
+        current_sources.append(Source(name, args.source_dir, args.output))
+    result = get_sources(args.deps, extra_sources=current_sources)
+    with open(args.output, 'w') as output_file:
+        json.dump(result, output_file, indent=2, sort_keys=True)
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/go/go_binary.gni b/build/go/go_binary.gni
new file mode 100644
index 0000000..5c84c91
--- /dev/null
+++ b/build/go/go_binary.gni
@@ -0,0 +1,24 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/go/go_build.gni")
+
+# A template for an action that creates a Fuchsia Go binary.
+#
+# Parameters
+#
+#   sdk_category (optional)
+#     Publication level of the library in SDKs.
+#     See //build/sdk/sdk_atom.gni.
+#
+#   sdk_deps (optional)
+#     List of labels representing elements that should be added to SDKs
+#     alongside the present binary.
+#     Labels in the list must represent SDK-ready targets.
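+#
+# Example (a minimal sketch; the package path is hypothetical):
+#
+#   go_binary("hello") {
+#     gopackage = "fuchsia.googlesource.com/hello"
+#   }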
+
+template("go_binary") {
+  go_build(target_name) {
+    forward_variables_from(invoker, "*")
+  }
+}
diff --git a/build/go/go_build.gni b/build/go/go_build.gni
new file mode 100644
index 0000000..38ff3f0
--- /dev/null
+++ b/build/go/go_build.gni
@@ -0,0 +1,222 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/clang/clang.gni")
+import("//build/config/sysroot.gni")
+import("//build/host.gni")
+import("//build/sdk/sdk_atom.gni")
+import("//build/testing/test_spec.gni")
+
+declare_args() {
+  #   gocache_dir
+  #     Directory GOCACHE environment variable will be set to. This directory
+  #     will have build and test results cached, and is safe to be written to
+  #     concurrently. If overridden, this directory must be a full path.
+  gocache_dir = rebase_path("$root_out_dir/.gocache")
+
+  #   go_vet_enabled
+  #     [bool] if false, go vet invocations are disabled for all builds.
+  go_vet_enabled = false
+}
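+
+# Both arguments can be overridden in args.gn, e.g. (illustrative values):
+#
+#   gocache_dir = "/full/path/to/gocache"
+#   go_vet_enabled = true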
+
+# A template for an action that builds a Go binary. Users should instead use the
+# go_binary or go_test rules.
+#
+# Parameters
+#
+#   sdk_category (optional)
+#     Publication level of the library in SDKs.
+#     See //build/sdk/sdk_atom.gni.
+#
+#   deps (optional)
+#     List of labels representing go_library targets this target depends on.
+#
+#   non_go_deps (optional)
+#     List of labels this target depends on that are not Go libraries.
+#
+#   skip_vet (optional)
+#     Whether to skip running go vet for this target. This flag should _only_
+#     be used for packages in the Go source tree itself that otherwise match
+#     whitelist entries in go vet all. Go vet is only run if go_vet_enabled is
+#     true.
+
+template("go_build") {
+  assert(defined(invoker.gopackage),
+         "gopackage must be defined for $target_name")
+
+  main_target_name = target_name
+
+  output_name = target_name
+  if (defined(invoker.output_name)) {
+    output_name = invoker.output_name
+  }
+  output_path = "${root_out_dir}/${output_name}"
+
+  # Test specs are used for linux and mac tests to record metadata for the
+  # testing infrastructure; this happens within package.gni for fuchsia tests.
+  test_spec_target_name = "${target_name}_spec"
+  if (defined(invoker.test) && invoker.test && (is_linux || is_mac)) {
+    test_spec(test_spec_target_name) {
+      name = invoker.target_name
+      location = output_path
+      deps = []
+      if (defined(invoker.deps)) {
+        deps += invoker.deps
+      }
+      if (defined(invoker.non_go_deps)) {
+        deps += invoker.non_go_deps
+      }
+    }
+  } else {
+    not_needed([ "test_spec_target_name" ])
+  }
+
+  action(main_target_name) {
+    deps = []
+    if (defined(invoker.non_go_deps)) {
+      deps += invoker.non_go_deps
+    }
+
+    use_strip = is_fuchsia
+
+    outputs = [
+      output_path,
+    ]
+
+    if (use_strip) {
+      unstripped_output_path = "${root_out_dir}/exe.unstripped/${output_name}"
+      outputs += [ unstripped_output_path ]
+    }
+
+    script = "//build/go/build.py"
+    depfile = "${output_path}.d"
+
+    sources = [
+      "//build/go/gen_library_metadata.py",
+    ]
+
+    godepfile = "//buildtools/${host_platform}/godepfile"
+    inputs = [
+      godepfile,
+    ]
+
+    args = [
+      "--godepfile",
+      rebase_path(godepfile, "", root_build_dir),
+      "--root-out-dir",
+      rebase_path(root_out_dir, root_build_dir),
+      "--depfile",
+      rebase_path(depfile),
+      "--current-cpu",
+      current_cpu,
+      "--current-os",
+      current_os,
+      "--binname",
+      output_name,
+      "--toolchain-prefix",
+      rebase_path(clang_prefix, "", root_build_dir),
+      "--shared-libs-root",
+      rebase_path(
+          get_label_info("//default($shlib_toolchain)", "root_out_dir")),
+      "--sysroot",
+      sysroot,
+      "--go-cache",
+      gocache_dir,
+    ]
+
+    # Vet runs by default (when enabled) unless the target opts out.
+    if ((!defined(invoker.skip_vet) || !invoker.skip_vet) && go_vet_enabled) {
+      args += [ "--vet" ]
+    }
+
+    if (is_fuchsia) {
+      deps += [ "//third_party/go:go_runtime" ]
+
+      deps += [
+        "//sdk:zircon_sysroot_export",
+        "//zircon/public/lib/fdio",
+      ]
+
+      # GN provides no way to propagate include paths like this,
+      # so this is brittle:
+      fdio_include = rebase_path("//zircon/system/ulib/fdio/include")
+
+      args += [
+        "--fdio-include",
+        fdio_include,
+        "--go-root",
+        rebase_path("$host_tools_dir/goroot"),
+      ]
+    } else {
+      args += [
+        "--go-root",
+        rebase_path("//buildtools/${host_platform}/go"),
+      ]
+    }
+
+    if (use_strip) {
+      args += [
+        "--unstripped-binname",
+        "exe.unstripped/${output_name}",
+      ]
+    }
+
+    if (defined(invoker.test) && invoker.test) {
+      args += [ "--is-test=true" ]
+
+      if (is_linux || is_mac) {
+        deps += [ ":$test_spec_target_name" ]
+      }
+    }
+
+    if (defined(invoker.deps)) {
+      deps += invoker.deps
+      args += [ "--go-dep-files" ]
+      foreach(dep, invoker.deps) {
+        gen_dir = get_label_info(dep, "target_gen_dir")
+        name = get_label_info(dep, "name")
+        args += [ rebase_path("$gen_dir/$name.go_deps") ]
+      }
+    }
+
+    args += [
+      "--package",
+      invoker.gopackage,
+    ]
+  }
+
+  # Allow host binaries to be published in SDKs.
+  if (defined(invoker.sdk_category) && invoker.sdk_category != "excluded" &&
+      !is_fuchsia && (!defined(invoker.test) || !invoker.test)) {
+    file_base = "tools/$output_name"
+
+    sdk_atom("${target_name}_sdk") {
+      id = "sdk://tools/$output_name"
+
+      category = invoker.sdk_category
+
+      meta = {
+        dest = "$file_base-meta.json"
+        schema = "host_tool"
+        value = {
+          type = "host_tool"
+          name = output_name
+          root = "tools"
+          files = [ file_base ]
+        }
+      }
+
+      files = [
+        {
+          source = output_path
+          dest = file_base
+        },
+      ]
+
+      if (defined(invoker.sdk_deps)) {
+        deps = invoker.sdk_deps
+      }
+
+      non_sdk_deps = [ ":$main_target_name" ]
+    }
+  }
+}
diff --git a/build/go/go_library.gni b/build/go/go_library.gni
new file mode 100644
index 0000000..05f7f06
--- /dev/null
+++ b/build/go/go_library.gni
@@ -0,0 +1,111 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# Defines a set of Go code that can be used by other Go targets
+#
+# Parameters
+#
+#   name (optional)
+#     Name of the Go package.
+#     Defaults to the target name.
+#
+#   name_file (optional)
+#     Path to a file containing the name of the Go package.
+#     This should be used when the package's name requires some computation in
+#     its own build target.
+#
+#     NOTE: At most one of `name` and `name_file` may be set.
+#           If neither is set, the target name is used.
+#
+#   source_dir (optional)
+#     Path to the root of the sources for the package.
+#     Defaults to the current directory.
+#
+#   sources (optional)
+#     List of source files, relative to source_dir.
+#     TODO(BLD-228): make this attribute required.
+#
+#   deps (optional)
+#     List of labels for Go libraries this target depends on.
+#
+#   non_go_deps (optional)
+#     List of labels for non-Go targets this library depends on.
+#
+#   metadata (optional)
+#     Scope giving the metadata of this library.
+#
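+# Example (a minimal sketch; the package name is hypothetical):
+#
+#   go_library("mylib") {
+#     name = "example.com/mylib"
+#     sources = [ "mylib.go" ]
+#   }
+#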
+template("go_library") {
+  assert(!(defined(invoker.name) && defined(invoker.name_file)),
+         "Defining both name and name_file is not allowed")
+  if (defined(invoker.name)) {
+    name_args = [
+      "--name",
+      invoker.name,
+    ]
+  } else if (defined(invoker.name_file)) {
+    # Make name_file a system-absolute path and add it to args.
+    name_args = [
+      "--name-file",
+      rebase_path(invoker.name_file, ""),
+    ]
+  } else {
+    name_args = [
+      "--name",
+      target_name,
+    ]
+  }
+
+  source_dir = "."
+  if (defined(invoker.source_dir)) {
+    source_dir = invoker.source_dir
+  }
+
+  go_sources = []
+  if (defined(invoker.sources)) {
+    go_sources = invoker.sources
+  }
+
+  action(target_name) {
+    script = "//build/go/gen_library_metadata.py"
+
+    library_file = "$target_gen_dir/$target_name.go_deps"
+
+    outputs = [
+      library_file,
+    ]
+
+    deps = []
+    dependent_libraries = []
+
+    if (defined(invoker.deps)) {
+      deps += invoker.deps
+      foreach(dep, invoker.deps) {
+        gen_dir = get_label_info(dep, "target_gen_dir")
+        name = get_label_info(dep, "name")
+        dependent_libraries += [ "$gen_dir/$name.go_deps" ]
+      }
+    }
+
+    if (defined(invoker.non_go_deps)) {
+      deps += invoker.non_go_deps
+    }
+
+    inputs = dependent_libraries
+
+    args = name_args + [
+             "--source-dir",
+             rebase_path(source_dir),
+             "--sources",
+           ] + go_sources +
+           [
+             "--output",
+             rebase_path(library_file),
+             "--deps",
+           ] + rebase_path(dependent_libraries)
+
+    if (defined(invoker.metadata)) {
+      metadata = invoker.metadata
+    }
+  }
+}
diff --git a/build/go/go_test.gni b/build/go/go_test.gni
new file mode 100644
index 0000000..147c692
--- /dev/null
+++ b/build/go/go_test.gni
@@ -0,0 +1,14 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# A template for an action that creates a Fuchsia Go test binary.
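+#
+# Example (a minimal sketch; the package path is hypothetical):
+#
+#   go_test("mylib_test") {
+#     gopackage = "example.com/mylib"
+#   }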
+
+import("//build/go/go_build.gni")
+
+template("go_test") {
+  go_build(target_name) {
+    forward_variables_from(invoker, "*")
+    test = true
+  }
+}
diff --git a/build/go/toolchain.gni b/build/go/toolchain.gni
new file mode 100644
index 0000000..5f33282
--- /dev/null
+++ b/build/go/toolchain.gni
@@ -0,0 +1,5 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+go_toolchain = "//build/go:gopher"
diff --git a/build/gypi_to_gn.py b/build/gypi_to_gn.py
new file mode 100755
index 0000000..bbb05b1
--- /dev/null
+++ b/build/gypi_to_gn.py
@@ -0,0 +1,172 @@
+#!/usr/bin/env python
+# Copyright 2016 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+"""Converts a given gypi file to a python scope and writes the result to stdout.
+
+HOW TO USE
+
+It is assumed that the file contains a toplevel dictionary, and this script
+will return that dictionary as a GN "scope" (see example below). This script
+does not know anything about GYP and it will not expand variables or execute
+conditions; conditions blocks are simply stripped.
+
+A variables block at the top level will be flattened so that the variables
+appear in the root dictionary. This way they can be returned to the GN code.
+
+Say your_file.gypi looked like this:
+  {
+     'sources': [ 'a.cc', 'b.cc' ],
+     'defines': [ 'ENABLE_DOOM_MELON' ],
+  }
+
+You would call it like this:
+  gypi_values = exec_script("//build/gypi_to_gn.py",
+                            [ rebase_path("your_file.gypi") ],
+                            "scope",
+                            [ "your_file.gypi" ])
+
+Notes:
+ - The rebase_path call converts the gypi file from being relative to the
+   current build file to being system absolute for calling the script, which
+   will have a different current directory than this file.
+
+ - The "scope" parameter tells GN to interpret the result as a series of GN
+   variable assignments.
+
+ - The last file argument to exec_script tells GN that the given file is a
+   dependency of the build so Ninja can automatically re-run GN if the file
+   changes.
+
+Read the values into a target like this:
+  component("mycomponent") {
+    sources = gypi_values.sources
+    defines = gypi_values.defines
+  }
+
+Sometimes your .gypi file will include paths relative to a different
+directory than the current .gn file. In this case, you can rebase them to
+be relative to the current directory.
+  sources = rebase_path(gypi_values.sources, ".",
+                        "//path/gypi/input/values/are/relative/to")
+
+This script will tolerate a 'variables' block in the toplevel dictionary, or
+its absence. If the toplevel dictionary contains just one item named
+'variables', it will be collapsed away and the result will be the contents of
+that dictionary. Some .gypi files are written with or without this, depending
+on how they expect to be embedded into a .gyp file.
+
+This script also has the ability to replace certain substrings in the input.
+Generally this is used to emulate GYP variable expansion. If you passed the
+argument "--replace=<(foo)=bar" then all instances of "<(foo)" in strings in
+the input will be replaced with "bar":
+
+  gypi_values = exec_script("//build/gypi_to_gn.py",
+                            [ rebase_path("your_file.gypi"),
+                              "--replace=<(foo)=bar"],
+                            "scope",
+                            [ "your_file.gypi" ])
+
+"""
+
+import gn_helpers
+from optparse import OptionParser
+import sys
+
+def LoadPythonDictionary(path):
+  file_string = open(path).read()
+  try:
+    file_data = eval(file_string, {'__builtins__': None}, None)
+  except SyntaxError as e:
+    e.filename = path
+    raise
+  except Exception as e:
+    raise Exception("Unexpected error while reading %s: %s" % (path, str(e)))
+
+  assert isinstance(file_data, dict), "%s does not eval to a dictionary" % path
+
+  # Flatten any variables to the top level.
+  if 'variables' in file_data:
+    file_data.update(file_data['variables'])
+    del file_data['variables']
+
+  # Strip all elements that this script can't process.
+  elements_to_strip = [
+    'conditions',
+    'target_conditions',
+    'target_defaults',
+    'targets',
+    'includes',
+    'actions',
+  ]
+  for element in elements_to_strip:
+    if element in file_data:
+      del file_data[element]
+
+  return file_data
+
+
+def ReplaceSubstrings(values, search_for, replace_with):
+  """Recursively replaces substrings in a value.
+
+  Replaces all substrings of the "search_for" with "replace_with" for all
+  strings occurring in "values". This is done by recursively iterating into
+  lists as well as the keys and values of dictionaries."""
+  if isinstance(values, str):
+    return values.replace(search_for, replace_with)
+
+  if isinstance(values, list):
+    return [ReplaceSubstrings(v, search_for, replace_with) for v in values]
+
+  if isinstance(values, dict):
+    # For dictionaries, do the search for both the key and values.
+    result = {}
+    for key, value in values.items():
+      new_key = ReplaceSubstrings(key, search_for, replace_with)
+      new_value = ReplaceSubstrings(value, search_for, replace_with)
+      result[new_key] = new_value
+    return result
+
+  # Assume everything else is unchanged.
+  return values
+
+def main():
+  parser = OptionParser()
+  parser.add_option("-r", "--replace", action="append",
+    help="Replaces substrings. If passed a=b, replaces all substrs a with b.")
+  (options, args) = parser.parse_args()
+
+  if len(args) != 1:
+    raise Exception("Need one argument which is the .gypi file to read.")
+
+  data = LoadPythonDictionary(args[0])
+  if options.replace:
+    # Do replacements for all specified patterns.
+    for replace in options.replace:
+      split = replace.split('=')
+      # Allow "foo=" to replace with nothing.
+      if len(split) == 1:
+        split.append('')
+      assert len(split) == 2, "Replacement must be of the form 'key=value'."
+      data = ReplaceSubstrings(data, split[0], split[1])
+
+  # Sometimes .gypi files use the GYP syntax with percents at the end of the
+  # variable name (to indicate not to overwrite a previously-defined value):
+  #   'foo%': 'bar',
+  # Convert these to regular variables.
+  # Iterate over a copy of the keys, since the dict is mutated in the loop.
+  for key in list(data.keys()):
+    if len(key) > 1 and key[-1] == '%':
+      data[key[:-1]] = data[key]
+      del data[key]
+
+  print(gn_helpers.ToGNString(data))
+
+if __name__ == '__main__':
+  try:
+    main()
+  except Exception as e:
+    print(str(e))
+    sys.exit(1)
diff --git a/build/host.gni b/build/host.gni
new file mode 100644
index 0000000..559a23d
--- /dev/null
+++ b/build/host.gni
@@ -0,0 +1,94 @@
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+declare_args() {
+  # This is the directory where host tools intended for manual use by
+  # developers get installed.  It's something a developer might put
+  # into their shell's $PATH.  Host tools that are just needed as part
+  # of the build do not get copied here.  This directory is only for
+  # things that are generally useful for testing or debugging or
+  # whatnot outside of the GN build itself.  These are only installed
+  # by an explicit install_host_tools() rule (see //build/host.gni).
+  host_tools_dir = "$root_build_dir/tools"
+}
+
+# This declares that a host tool (a target built in host_toolchain)
+# should be installed in host_tools_dir.  This target can be used in
+# any toolchain, and it will forward to host_toolchain.
+#
+# Parameters
+#
+#   outputs (required)
+#     [files list] Simple file name of each tool, should be the
+#     same as the output_name in the executable() or similar rule
+#     (which is usually just that target's name).
+#
+#   deps (recommended)
+#     [label list] Should list each target that actually builds each output.
+#     It does not need to use explicit toolchain suffixes; the target that
+#     uses the deps is only ever instantiated in host_toolchain.
+#
+#   testonly (optional)
+#   visibility (optional)
+#     Standard GN meaning.
+#
+# Example of usage:
+#
+#   executable("frob") { ... }
+#   install_host_tools("fiddlers") {
+#     deps = [ ":frob", "//some/other/dir:twiddle" ]
+#     outputs = [ "frob", "twiddle" ]
+#   }
+template("install_host_tools") {
+  assert(defined(invoker.outputs), "install_host_tools() must define outputs")
+
+  # There is more than one host_toolchain when variants are involved,
+  # but all those copy their executables to the base host_toolchain
+  # (which toolchain_variant.base points to in every host_toolchain).
+  if (current_toolchain == host_toolchain &&
+      current_toolchain == toolchain_variant.base) {
+    copy(target_name) {
+      forward_variables_from(invoker,
+                             [
+                               "deps",
+                               "testonly",
+                               "visibility",
+                             ])
+      if (defined(visibility)) {
+        visibility += [ ":$target_name" ]
+      }
+      outputs = [
+        "$host_tools_dir/{{source_file_part}}",
+      ]
+      sources = []
+      foreach(output_name, invoker.outputs) {
+        sources += [ "$root_out_dir/$output_name" ]
+      }
+    }
+  } else {
+    # Redirect to the base host_toolchain, where the copy rule above is.
+    # In a variant host_toolchain context, toolchain_variant.base points
+    # there, while host_toolchain always matches current_toolchain.
+    group(target_name) {
+      if (current_toolchain == host_toolchain) {
+        toolchain = toolchain_variant.base
+      } else {
+        toolchain = host_toolchain
+      }
+      deps = [
+        ":$target_name($toolchain)",
+      ]
+      forward_variables_from(invoker,
+                             [
+                               "testonly",
+                               "visibility",
+                             ])
+      not_needed(invoker,
+                 [
+                   "deps",
+                   "outputs",
+                 ])
+    }
+  }
+}
diff --git a/build/images/BUILD.gn b/build/images/BUILD.gn
new file mode 100644
index 0000000..e0f790f
--- /dev/null
+++ b/build/images/BUILD.gn
@@ -0,0 +1,1677 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/clang/clang.gni")
+import("//build/config/fuchsia/zbi.gni")
+import("//build/config/fuchsia/zircon.gni")
+import("//build/gn/packages.gni")
+import("//build/images/boot.gni")
+import("//build/images/custom_signing.gni")
+import("//build/images/fvm.gni")
+import("//build/images/json.gni")
+import("//build/images/manifest.gni")
+import("//build/images/max_fvm_size.gni")
+import("//build/package.gni")
+import("//build/sdk/sdk_atom.gni")
+import("//build/sdk/sdk_molecule.gni")
+import("//garnet/build/pkgfs.gni")
+import("//garnet/go/src/pm/pm.gni")
+
+declare_args() {
+  # Groups to include from the Zircon /boot manifest into /boot.
+  # This is either "all" or a comma-separated list of one or more of:
+  #   core -- necessary to boot
+  #   misc -- utilities in /bin
+  #   test -- test binaries in /bin and /test
+  zircon_boot_groups = "core"
+
+  # Path to manifest file containing data to place into the initial /data
+  # partition.
+  data_partition_manifest = ""
+}
+
+declare_args() {
+  # Groups to include from the Zircon /boot manifest into /system
+  # (instead of into /boot like Zircon's own bootdata.bin does).
+  # Should not include any groups that are also in zircon_boot_groups
+  # (see above).  If zircon_boot_groups is "all" then this should be "".
+  # **TODO(mcgrathr)**: _Could default to "" for `!is_debug`, or "production
+  # build".  Note including `"test"` here places all of Zircon's tests into
+  # `/system/test`, which means that Fuchsia bots run those tests too._
+  zircon_system_groups = "misc,test"
+  if (zircon_boot_groups == "all") {
+    zircon_system_groups = ""
+  }
+}
+
+if (zircon_boot_groups == "all") {
+  assert(zircon_system_groups == "",
+         "zircon_boot_groups already has everything")
+} else {
+  assert(zircon_system_groups != "all" && zircon_system_groups != "core",
+         "zircon_system_groups cannot include core (or all)")
+}
+
+# This will collect a list of scopes describing each image exported.
+# See json.gni.
+images = [
+  {
+    sources = [
+      "$root_build_dir/args.gn",
+    ]
+    json = {
+      name = "buildargs"
+      type = "gn"
+      archive = true
+    }
+    deps = []
+  },
+  {
+    sources = [
+      "//.jiri_root/update_history/latest",
+    ]
+    json = {
+      name = "jiri_snapshot"
+      type = "xml"
+      archive = true
+    }
+    deps = []
+  },
+  {
+    json = {
+      name = "bootserver"
+      type = "exe.$host_platform"
+      archive = true
+    }
+    sources = [
+      "$zircon_tools_dir/bootserver",
+    ]
+    deps = []
+  },
+]
+
+# Write a JSON metadata file about the packages in the build.
+_packages_json = {
+  _all = []
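+  # Each pkg below is added, removed, and re-added: GN's list `-=` removes
+  # every copy of an element, so this "+=, -=, +=" sequence de-duplicates
+  # _all while keeping exactly one entry per package.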
+  monolith = []
+  foreach(pkg, monolith_packages) {
+    monolith += [ get_label_info(pkg, "name") ]
+    _all += [ pkg ]
+    _all -= [ pkg ]
+    _all += [ pkg ]
+  }
+  preinstall = []
+  foreach(pkg, preinstall_packages) {
+    preinstall += [ get_label_info(pkg, "name") ]
+    _all += [ pkg ]
+    _all -= [ pkg ]
+    _all += [ pkg ]
+  }
+  available = []
+  foreach(pkg, available_packages) {
+    available += [ get_label_info(pkg, "name") ]
+    _all += [ pkg ]
+    _all -= [ pkg ]
+    _all += [ pkg ]
+  }
+  packages = []
+  foreach(pkg, _all) {
+    packages += [
+      {
+        dir = get_label_info(pkg, "dir")
+        name = get_label_info(pkg, "name")
+        build_dir =
+            rebase_path(get_label_info(pkg, "target_out_dir"), root_build_dir)
+      },
+    ]
+  }
+}
+write_file("$root_build_dir/packages.json",
+           {
+             forward_variables_from(_packages_json, "*", [ "_all" ])
+           },
+           "json")
+
+###
+### shell-commands package
+###
+### TODO(CF-223)
+### shell-commands is a Fuchsia package that aggregates all binaries from all
+### "available" Fuchsia packages, producing a "#!resolve URI" trampoline for
+### each. This package enables the shell to resolve command-line programs out
+### of ephemeral packages.
+
+# create-shell-commands performs two actions:
+#   - create trampoline scripts ("#!resolve COMMAND-URI") for each command.
+#   - produce a manifest that contains references to all of the trampolines.
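+#
+# Each trampoline is a one-line file whose contents look something like
+# (hypothetical URI):
+#   #!resolve fuchsia-pkg://fuchsia.com/some-package#bin/some-command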
+action("create-shell-commands") {
+  testonly = true
+  script = "create-shell-commands.py"
+  outputs = [
+    "$target_out_dir/shell-commands-extra.manifest",
+  ]
+  args = [
+    "--trampoline-dir",
+    rebase_path(target_out_dir + "/commands", root_build_dir),
+    "--output-manifest",
+    rebase_path(outputs[0], root_build_dir),
+  ]
+  deps = []
+  sources = []
+
+  foreach(pkg_label, available_packages) {
+    # Find the response file written by package().
+    pkg_target_name = get_label_info(pkg_label, "name")
+    if (pkg_target_name != "shell-commands") {
+      pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+      cmd_rspfile = "$pkg_target_out_dir/${pkg_target_name}.shell_commands.rsp"
+      deps += [ "${pkg_label}.shell_commands.rsp" ]
+      sources += [ cmd_rspfile ]
+      args += [ "@" + rebase_path(cmd_rspfile, root_build_dir) ]
+    }
+  }
+}
+
+package("shell-commands") {
+  testonly = true
+  extra = get_target_outputs(":create-shell-commands")
+  deps = [
+    ":create-shell-commands",
+  ]
+}
+
+###
+### Fuchsia system image.  This aggregates contributions from all the
+### package() targets enabled in the build.
+###
+
+pm_binary_label = "//garnet/go/src/pm:pm_bin($host_toolchain)"
+pm_out_dir = get_label_info(pm_binary_label, "root_out_dir")
+pm_binary = "$pm_out_dir/pm"
+
+# This just runs `pm -k $system_package_key genkey` if the file doesn't exist.
+# Every package() target depends on this.
+action("system_package_key_check") {
+  visibility = [ "*" ]
+  deps = [
+    pm_binary_label,
+  ]
+  outputs = [
+    "$target_out_dir/system_package_key_check_ok.stamp",
+  ]
+  script = "//build/gn_run_binary.sh"
+  inputs = [
+    "system_package_key_check.py",
+    pm_binary,
+  ]
+  args =
+      [ clang_prefix ] + rebase_path(inputs + outputs + [ system_package_key ])
+}
+
+# The pkgsvr index is a manifest mapping `package_name/package_version` to
+# the merkleroot of the package's meta.far file.
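+# Each line of the index has the form `name/version=merkleroot`, e.g.
+# (illustrative):
+#   some-package/0=78ae3e6b0...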
+pkgsvr_index = "$target_out_dir/pkgsvr_index"
+
+action("pkgsvr_index") {
+  visibility = [
+    ":system_image.manifest",
+    ":update_packages.manifest",
+  ]
+  testonly = true
+
+  script = "manifest.py"
+  args = [ "--contents" ]
+  outputs = [
+    "$target_out_dir/$target_name",
+  ]
+  args += [ "--output=" + rebase_path(outputs[0], root_build_dir) ]
+  sources = []
+  deps = []
+  foreach(pkg_label, monolith_packages) {
+    # Find the response file written by package().
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_rspfile = "$pkg_target_out_dir/${pkg_target_name}.pkgsvr_index.rsp"
+    deps += [ "${pkg_label}.pkgsvr_index.rsp" ]
+    sources += [ pkg_rspfile ]
+    args += [ "@" + rebase_path(pkg_rspfile, root_build_dir) ]
+  }
+}
+
+# preinstall.manifest is a manifest of files making up the pkgfs "dynamic
+# index" populated inside /pkgfs/packages: entries named {name}/{version}
+# containing the merkleroot of that package's meta.far.
+action("preinstall.manifest") {
+  visibility = [ ":data.blk" ]
+  testonly = true
+
+  script = "manifest.py"
+  args = [ "--rewrite=*=pkgfs_index/packages/{target}={source}" ]
+  outputs = [
+    # The output is in the root build dir in order to avoid a complex rewrite
+    # of the source path. The minfs tool this is passed to interprets source
+    # locations relative to the manifest.
+    "$root_build_dir/$target_name",
+  ]
+  args += [ "--output=" + rebase_path(outputs[0], root_build_dir) ]
+  sources = []
+  deps = []
+  foreach(pkg_label, preinstall_packages) {
+    # Find the response file written by package().
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_rspfile = "$pkg_target_out_dir/${pkg_target_name}.pkgsvr_index.rsp"
+    deps += [ "${pkg_label}.pkgsvr_index.rsp" ]
+    sources += [ pkg_rspfile ]
+    args += [ "@" + rebase_path(pkg_rspfile, root_build_dir) ]
+  }
+
+  update_meta_dir =
+      get_label_info(":update.meta", "target_out_dir") + "/update.meta"
+  update_meta_far_merkle = update_meta_dir + "/meta.far.merkle"
+  args += [
+    "--entry",
+    "update/0=" + rebase_path(update_meta_far_merkle, root_build_dir),
+  ]
+  deps += [ ":update.meta" ]
+}
+
+# The /boot and /system manifests have to be generated in concert.  Things
+# like drivers going into /system can affect what needs to go into /boot.
+boot_manifest = "$target_out_dir/boot.manifest"
+
+# The system_image "package" manifest is everything that appears in /system.
+generate_manifest("system_image.manifest") {
+  visibility = [ ":*" ]
+  testonly = true
+
+  # Create the /boot manifest that gets packed into BOOTFS in the ZBI.
+  # /system manifest files can assume that the /boot files are visible at
+  # runtime, so dependencies already in /boot won't be copied into /system.
+  bootfs_manifest = boot_manifest
+  bootfs_zircon_groups = zircon_boot_groups
+
+  # Collect whatever we want from Zircon that didn't go into /boot.
+  zircon_groups = zircon_system_groups
+
+  # Now each package() target in the build contributes manifest entries.
+  # For system_image packages, these contain binaries that need their
+  # references resolved from the auxiliary manifests or /boot (above).
+  args = []
+  deps = []
+  sources = []
+  foreach(pkg_label, monolith_packages) {
+    # Find the response file written by package().
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_system_rsp = "$pkg_target_out_dir/${pkg_target_name}.system.rsp"
+    deps += [ pkg_label ]
+    sources += [ pkg_system_rsp ]
+    args += [ "@" + rebase_path(pkg_system_rsp, root_build_dir) ]
+  }
+
+  args += [ "--entry-manifest=" +
+            get_label_info(":$target_name", "label_no_toolchain") ]
+
+  # Add the meta/package JSON file that makes this the "system_image" package.
+  json = "system_meta_package.json"
+  sources += [ json ]
+  args += [ "--entry=meta/package=" + rebase_path(json, root_build_dir) ]
+
+  # Add the static packages (pkgsvr) index.
+  deps += [ ":pkgsvr_index" ]
+  sources += [ pkgsvr_index ]
+  args += [ "--entry=data/static_packages=" +
+            rebase_path(pkgsvr_index, root_build_dir) ]
+}
+
+system_manifest_outputs = get_target_outputs(":system_image.manifest")
+assert(boot_manifest == system_manifest_outputs[2])
+system_build_id_map = system_manifest_outputs[1]
+
+# Generate, sign, and seal the system_image package file.
+pm_build_package("system_image.meta") {
+  visibility = [ ":*" ]
+  testonly = true
+  manifest = ":system_image.manifest"
+}
+
+# Now generate the blob manifest.  This lists all the source files
+# that need to go into the blobfs image.  That is everything from the
+# system_image manifest, everything from each package manifest, and
+# all the synthesized meta.far files.
+blob_manifest = "$root_build_dir/blob.manifest"
+
+action("blob.manifest") {
+  visibility = [ ":*" ]
+  testonly = true
+  outputs = [
+    blob_manifest,
+  ]
+  depfile = blob_manifest + ".d"
+  deps = [
+    ":system_image.meta",
+    ":update.meta",
+  ]
+  inputs = []
+  script = "blob_manifest.py"
+  args = [ "@{{response_file_name}}" ]
+
+  response_file_contents = [
+    "--output=" + rebase_path(blob_manifest, root_build_dir),
+    "--depfile=" + rebase_path(depfile, root_build_dir),
+    "--input=" + rebase_path("$target_out_dir/system_image.meta/blobs.json",
+                             root_build_dir),
+    "--input=" +
+        rebase_path("$target_out_dir/update.meta/blobs.json", root_build_dir),
+  ]
+  foreach(pkg_label, monolith_packages + preinstall_packages) {
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_blobs_rsp = "${pkg_target_out_dir}/${pkg_target_name}.blobs.rsp"
+    deps += [ "${pkg_label}.blobs.rsp" ]
+    inputs += [ pkg_blobs_rsp ]
+    response_file_contents +=
+        [ "@" + rebase_path(pkg_blobs_rsp, root_build_dir) ]
+  }
+}
+
+# Pack up all the blobs!
+zircon_tool_action("blob.blk") {
+  visibility = [ ":*" ]
+  testonly = true
+  deps = [
+    ":blob.manifest",
+  ]
+  blob_image_path = "$target_out_dir/$target_name"
+  blob_size_list = "$root_build_dir/blob.sizes"
+  outputs = [
+    blob_image_path,
+    # This should be an output too, but the generate_fvm template assumes that all
+    # outputs of these actions are inputs to the fvm tool.
+    # blob_size_list
+  ]
+  depfile = blob_image_path + ".d"
+  inputs = [
+    blob_manifest,
+  ]
+  tool = "blobfs"
+  args = [
+    "--depfile",
+    "--sizes",
+    rebase_path(blob_size_list, root_build_dir),
+    "--compress",
+    rebase_path(blob_image_path, root_build_dir),
+    "create",
+    "--manifest",
+    rebase_path(blob_manifest, root_build_dir),
+  ]
+}
+images += [
+  {
+    deps = [
+      ":blob.blk",
+    ]
+    public = [
+      "IMAGE_BLOB_RAW",
+    ]
+    json = {
+      name = "blob"
+      type = "blk"
+    }
+  },
+]
+
+###
+### Zircon Boot Images
+###
+
+declare_args() {
+  # List of arguments to add to /boot/config/devmgr.
+  # These come after synthesized arguments to configure blobfs and pkgfs.
+  devmgr_config = []
+
+  # List of kernel command line arguments to bake into the boot image.
+  # See also //zircon/docs/kernel_cmdline.md and
+  # [`devmgr_config`](#devmgr_config).
+  kernel_cmdline_args = []
+
+  # Files containing additional kernel command line arguments to bake into
+  # the boot image.  The contents of these files (in order) come after any
+  # arguments directly in [`kernel_cmdline_args`](#kernel_cmdline_args).
+  # These can be GN `//` source pathnames or absolute system pathnames.
+  kernel_cmdline_files = []
+
+  # List of extra manifest entries for files to add to the BOOTFS.
+  # Each entry can be a "TARGET=SOURCE" string, or it can be a scope
+  # with `sources` and `outputs` in the style of a copy() target:
+  # `outputs[0]` is used as `TARGET` (see `gn help source_expansion`).
+  bootfs_extra = []
+
+  # List of kernel images to include in the update (OTA) package.
+  # If no list is provided, all built kernels are included. The names in the
+  # list are strings that must match the filename to be included in the update
+  # package.
+  update_kernels = []
+}
+
+images += [
+  {
+    # This is the file to pass to QEMU's `-kernel` switch, alongside a
+    # complete ZBI (some `IMAGE_*_ZBI` file) passed to its `-initrd` switch.
+    public = [
+      "IMAGE_QEMU_KERNEL_RAW",
+    ]
+    json = {
+      name = "qemu-kernel"
+      type = "kernel"
+    }
+    sdk = "qemu-kernel.bin"
+    deps = []
+    if (current_cpu == "arm64") {
+      sources = [
+        "$zircon_build_dir/qemu-boot-shim.bin",
+      ]
+    } else if (current_cpu == "x64") {
+      sources = [
+        "$zircon_build_dir/multiboot.bin",
+      ]
+    }
+  },
+]
+
+# Generate the /boot/config/devmgr file.  This looks like a kernel command
+# line file, but is read by devmgr (in addition to kernel command line
+# arguments), not by the kernel or boot loader.
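+#
+# The generated file is a list of key=value lines, e.g. (illustrative):
+#   devmgr.require-system=true
+#   zircon.system.pkgfs.cmd=bin/<pkgfs binary>+<system_image merkleroot>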
+action("devmgr_config.txt") {
+  visibility = [ ":fuchsia" ]
+  testonly = true
+
+  script = "manifest.py"
+  outputs = [
+    "$target_out_dir/$target_name",
+  ]
+
+  pkgfs = "bin/" + pkgfs_binary_name
+  pkgfs_label = pkgfs_package_label + ".meta"
+  pkgfs_pkg_out_dir = get_label_info(pkgfs_label, "target_out_dir") + "/" +
+                      get_label_info(pkgfs_label, "name")
+  pkgfs_blob_manifest = "$pkgfs_pkg_out_dir/meta/contents"
+  system_image_merkleroot = "$target_out_dir/system_image.meta/meta.far.merkle"
+
+  deps = [
+    ":system_image.meta",
+    pkgfs_label,
+  ]
+  sources = [
+    pkgfs_blob_manifest,
+    system_image_merkleroot,
+  ]
+
+  args = [
+    "--output=" + rebase_path(outputs[0], root_build_dir),
+
+    # Start with the fixed options.
+    "--entry=devmgr.require-system=true",
+  ]
+
+  # Add the pkgfs command line, embedding the merkleroot of the system image.
+  args += [
+    "--contents",
+    "--rewrite=*=zircon.system.pkgfs.cmd={target}+{source}",
+    "--entry=${pkgfs}=" + rebase_path(system_image_merkleroot, root_build_dir),
+    "--no-contents",
+    "--reset-rewrite",
+  ]
+
+  # Embed the pkgfs blob manifest with the "zircon.system.pkgfs.file."
+  # prefix on target file names.
+  args += [
+    "--rewrite=*=zircon.system.pkgfs.file.{target}={source}",
+    "--manifest=" + rebase_path(pkgfs_blob_manifest, root_build_dir),
+    "--reset-rewrite",
+  ]
+
+  foreach(entry, devmgr_config) {
+    args += [ "--entry=$entry" ]
+  }
+
+  # If there were any ASan drivers in the build, bin/devhost.asan
+  # should have been brought into the boot manifest.  devmgr needs to
+  # be told to use it in case there are ASan drivers in /system but
+  # none in /boot.  If there were any non-ASan drivers in the build,
+  # bin/devhost.asan will load them and needs to know to moderate the
+  # checking for interacting with uninstrumented code.
+  deps += [ ":system_image.manifest" ]
+  sources += [ boot_manifest ]
+  args += [
+    "--include=bin/devhost.asan",
+    "--include=bin/devhost",
+    "--rewrite=bin/devhost.asan=devmgr.devhost.asan=true",
+    "--rewrite=bin/devhost=devhost.asan.strict=false",
+    "--manifest=" + rebase_path(boot_manifest, root_build_dir),
+  ]
+}
+
+# The main bootable image, which requires `blob.blk` to appear on some
+# attached storage device at runtime.
+zbi("fuchsia") {
+  testonly = true
+  deps = [
+    ":devmgr_config.txt",
+    ":system_image.manifest",
+  ]
+  inputs = [
+    "${zircon_build_dir}/kernel.zbi",
+    boot_manifest,
+  ]
+  manifest = [
+    {
+      outputs = [
+        "config/devmgr",
+      ]
+      sources = get_target_outputs(":devmgr_config.txt")
+    },
+  ]
+  cmdline = kernel_cmdline_args
+  cmdline_inputs = kernel_cmdline_files
+  manifest += bootfs_extra
+}
+images += [
+  {
+    deps = [
+      ":fuchsia",
+    ]
+    sdk = "fuchsia.zbi"
+    updater = "zbi"
+    installer = "fuchsia.zbi"
+    json = {
+      name = "zircon-a"
+      type = "zbi"
+
+      # TODO(IN-892): Although we wish to minimize the usage of mexec (ZX-2069),
+      # the infrastructure currently requires it for vim2 lifecycle management.
+      # (`fastboot continue` does not continue back to fuchsia after paving and
+      # rebooting in the case we do not mexec a kernel.)
+      bootserver_pave = [ "--boot" ]
+
+      if (custom_signing_script == "") {
+        bootserver_pave += [
+          "--zircona",
+          # TODO(ZX-2625): `dm reboot-recovery` boots from zircon-b instead of
+          # zircon-r, so for now zedboot is being paved to this slot.
+          # "--zirconb",
+        ]
+      }
+    }
+    public = [
+      "IMAGE_ZIRCONA_ZBI",
+
+      # TODO(mcgrathr): The complete ZBI can be used with a separate
+      # kernel too, the kernel image in it will just be ignored.  So
+      # just use the primary ZBI for this until all uses are
+      # converted to using the ZBI alone.  Then remove this as
+      # IMAGE_BOOT_RAM variable should no longer be in use.
+      "IMAGE_BOOT_RAM",
+    ]
+  },
+]
+
+if (custom_signing_script != "") {
+  custom_signed_zbi("signed") {
+    output_name = "fuchsia.zbi"
+    testonly = true
+    deps = [
+      ":fuchsia",
+    ]
+    zbi = get_target_outputs(":fuchsia")
+  }
+  images += [
+    {
+      deps = [
+        ":signed",
+      ]
+      sdk = "fuchsia.zbi.signed"
+      updater = "zbi.signed"
+      json = {
+        name = "zircon-a.signed"
+        type = "zbi.signed"
+        bootserver_pave = [ "--zircona" ]
+      }
+      public = [
+        "IMAGE_ZIRCONA_SIGNEDZBI",
+      ]
+    },
+  ]
+  if (use_vbmeta) {
+    images += [
+      {
+        deps = [
+          ":signed",
+        ]
+        sources = [
+          "$root_out_dir/fuchsia.zbi.vbmeta",
+        ]
+        json = {
+          name = "zircon-a.vbmeta"
+          type = "vbmeta"
+          bootserver_pave = [ "--vbmetaa" ]
+        }
+        public = [
+          "IMAGE_VBMETAA_RAW",
+        ]
+      },
+    ]
+  }
+}
+
+# The updater also wants the zedboot zbi as recovery.
+images += [
+  {
+    deps = [
+      "zedboot:zbi",
+    ]
+    sources = [
+      "$root_out_dir/zedboot.zbi",
+    ]
+    updater = "zedboot"
+    installer = "zedboot.zbi"
+  },
+]
+
+if (custom_signing_script != "") {
+  images += [
+    {
+      deps = [
+        "zedboot:signed",
+      ]
+      sources = [
+        "$root_out_dir/zedboot.zbi.signed",
+      ]
+      updater = "zedboot.signed"
+    },
+  ]
+}
+
+###
+### Complete images for booting and installing the whole system.
+###
+
+declare_args() {
+  # Build boot images that prefer Zedboot over local boot (only for EFI).
+  always_zedboot = false
+}
+
+# data.blk creates a minfs data partition containing the preinstall package
+# index. The partition is included in fvm.blk and fvm.sparse.blk.
+# To increase the size of the data partition, increase the total size of the
+# fvm images using |fvm_image_size|.
+zircon_tool_action("data.blk") {
+  testonly = true
+  tool = "minfs"
+  data_image_path = "$target_out_dir/$target_name"
+  outputs = [
+    data_image_path,
+  ]
+  depfile = data_image_path + ".d"
+  deps = [
+    ":preinstall.manifest",
+  ]
+  preinstall_manifest = get_target_outputs(deps[0])
+  args = [
+    "--depfile",
+    rebase_path(data_image_path, root_build_dir),
+    "create",
+    "--manifest",
+    rebase_path(preinstall_manifest[0], root_build_dir),
+  ]
+  if (data_partition_manifest != "") {
+    args += [
+      "--manifest",
+      rebase_path(data_partition_manifest),
+    ]
+  }
+}
+images += [
+  {
+    public = [
+      "IMAGE_DATA_RAW",
+    ]
+    json = {
+      name = "data"
+      type = "blk"
+    }
+    deps = [
+      ":data.blk",
+    ]
+  },
+]
+
+# Record the maximum allowable FVM size in the build directory for later steps
+# to check against.
+max_fvm_size_file = "$root_build_dir/max_fvm_size.txt"
+write_file(max_fvm_size_file, max_fvm_size)
+
+# fvm.blk creates an FVM partition image containing the blob partition produced
+# by blob.blk and the data partition produced by data.blk. fvm.blk is
+# primarily used when running under QEMU, via `fx run`.
+generate_fvm("fvm.blk") {
+  testonly = true
+  output_name = "$target_out_dir/fvm.blk"
+  args = fvm_create_args
+  if (fvm_image_size != "") {
+    args += [
+      "--length",
+      fvm_image_size,
+    ]
+  }
+  partitions = [
+    {
+      type = "blob"
+      dep = ":blob.blk"
+    },
+    {
+      type = "data"
+      dep = ":data.blk"
+    },
+  ]
+}
+images += [
+  {
+    deps = [
+      ":fvm.blk",
+    ]
+    json = {
+      name = "storage-full"
+      type = "blk"
+    }
+    sdk = "fvm.blk"
+    public = [
+      "IMAGE_FVM_RAW",
+    ]
+  },
+]
+
+# fvm.sparse.blk creates a sparse FVM partition image containing the blob
+# partition produced by blob.blk and the data partition produced by data.blk.
+# fvm.sparse.blk is primarily used when paving a device, via `fx pave`.
+generate_fvm("fvm.sparse.blk") {
+  testonly = true
+  output_name = "$target_out_dir/fvm.sparse.blk"
+  deps = [
+    ":blob.blk",
+    ":data.blk",
+  ]
+  args = fvm_sparse_args
+  partitions = [
+    {
+      type = "blob"
+      dep = ":blob.blk"
+    },
+    {
+      type = "data"
+      dep = ":data.blk"
+    },
+  ]
+}
+images += [
+  {
+    deps = [
+      ":fvm.sparse.blk",
+    ]
+    json = {
+      name = "storage-sparse"
+      type = "blk"
+      bootserver_pave = [ "--fvm" ]
+    }
+    installer = "fvm.sparse.blk"
+    sdk = "fvm.sparse.blk"
+    public = [
+      "IMAGE_FVM_SPARSE",
+    ]
+  },
+]
+
+# This rolls the primary ZBI together with a compressed RAMDISK image of
+# fvm.blk into a fat ZBI that boots the full system without using any real
+# storage.  The system decompresses the fvm.blk image into memory and then
+# sees that RAM disk just as if it were a real disk on the device.
+zbi("netboot") {
+  testonly = true
+  deps = [
+    ":fuchsia",
+    ":fvm.blk",
+  ]
+  inputs = get_target_outputs(":fuchsia")
+  ramdisk_inputs = get_target_outputs(":fvm.blk")
+}
+images += [
+  {
+    default = false
+    json = {
+      bootserver_netboot = [ "--boot" ]
+      name = "netboot"
+      type = "zbi"
+    }
+    public = [
+      "IMAGE_NETBOOT_ZBI",
+
+      # TODO(mcgrathr): The complete ZBI can be used with a separate kernel
+      # too, the kernel image in it will just be ignored.  So just use the
+      # primary ZBI for this until all uses are converted to using the ZBI
+      # alone.  Then remove this as IMAGE_BOOT_RAM variable should no
+      # longer be in use.
+      "IMAGE_NETBOOT_RAM",
+    ]
+    deps = [
+      ":netboot",
+    ]
+  },
+]
+
+if (target_cpu != "arm64") {
+  # ChromeOS vboot images.
+  vboot("vboot") {
+    testonly = true
+    output_name = "fuchsia"
+    deps = [
+      ":fuchsia",
+    ]
+  }
+  images += [
+    {
+      json = {
+        name = "zircon-vboot"
+        type = "vboot"
+        bootserver_pave = [ "--kernc" ]
+      }
+      deps = [
+        ":vboot",
+      ]
+      installer = "zircon.vboot"
+      sdk = "zircon.vboot"
+      updater = "kernc"
+      public = [
+        "IMAGE_ZIRCON_VBOOT",
+      ]
+    },
+  ]
+
+  images += [
+    {
+      deps = [
+        "zedboot:vboot",
+      ]
+      sources = [
+        "$root_out_dir/zedboot.vboot",
+      ]
+    },
+  ]
+
+  # EFI ESP images.
+  esp("esp") {
+    output_name = "fuchsia"
+    testonly = true
+    if (always_zedboot) {
+      cmdline = "zedboot/efi_cmdline.txt"
+    } else {
+      cmdline = "efi_local_cmdline.txt"
+    }
+  }
+  images += [
+    {
+      deps = [
+        ":esp",
+      ]
+      json = {
+        name = "efi"
+        type = "blk"
+        bootserver_pave = [ "--efi" ]
+      }
+      installer = "fuchsia.esp.blk"
+      sdk = "local.esp.blk"
+      updater = "efi"
+      public = [
+        "IMAGE_ESP_RAW",
+      ]
+    },
+    {
+      deps = [
+        "zedboot:esp",
+      ]
+      sources = [
+        "$root_out_dir/zedboot.esp.blk",
+      ]
+    },
+  ]
+}
+
+installer_label = "//garnet/bin/installer:install-fuchsia"
+installer_out_dir = get_label_info(installer_label, "root_out_dir")
+installer_path = "$installer_out_dir/install-fuchsia"
+
+action("installer.manifest") {
+  script = "manifest.py"
+  outputs = [
+    "$target_out_dir/installer.manifest",
+  ]
+  args = [
+    "--output=" + rebase_path(outputs[0], root_build_dir),
+    "--output-cwd=" + rebase_path(target_out_dir, root_build_dir),
+    "--entry=install-fuchsia=" + rebase_path(installer_path, root_build_dir),
+  ]
+  foreach(image, images) {
+    if (defined(image.installer)) {
+      image_sources = []
+      if (defined(image.sources)) {
+        image_sources += image.sources
+      } else {
+        foreach(label, image.deps) {
+          image_sources += get_target_outputs(label)
+        }
+      }
+      assert(image_sources == [ image_sources[0] ])
+      args += [ "--entry=${image.installer}=" +
+                rebase_path(image_sources[0], root_build_dir) ]
+    }
+  }
+}
+
+# installer.blk is a minfs partition image that includes all of the
+# images required to install a Fuchsia build.
+zircon_tool_action("installer") {
+  testonly = true
+  tool = "minfs"
+  deps = [
+    ":installer.manifest",
+    installer_label,
+  ]
+  outputs = [
+    "$target_out_dir/installer.blk",
+  ]
+  sources = []
+  foreach(image, images) {
+    if (defined(image.installer)) {
+      deps += image.deps
+      if (defined(image.sources)) {
+        sources += image.sources
+      } else {
+        foreach(label, image.deps) {
+          sources += get_target_outputs(label)
+        }
+      }
+    }
+  }
+  depfile = "$target_out_dir/installer.blk.d"
+  args = [
+    "--depfile",
+    rebase_path(outputs[0], root_build_dir),
+    "create",
+    "--manifest",
+  ]
+  args += rebase_path(get_target_outputs(deps[0]), root_build_dir)
+}
+images += [
+  {
+    default = false
+    public = [
+      "IMAGE_INSTALLER_RAW",
+    ]
+    deps = [
+      ":installer",
+    ]
+  },
+]
+
+group("images") {
+  testonly = true
+  deps = [
+    ":ids.txt",
+    ":paver-script",
+    "zedboot",
+  ]
+}
+
+group("default-images") {
+  testonly = true
+  deps = []
+  foreach(image, images) {
+    if (!defined(image.default) || image.default) {
+      deps += image.deps
+    }
+  }
+}
+
+###
+### Paver script and archives using those images and zedboot's images.
+###
+
+paver_targets = [
+  {
+    name = "paver-script"
+    outputs = [
+      "$root_build_dir/pave.sh",
+    ]
+    switch = "--pave="
+    json = {
+      name = "pave"
+      type = "sh"
+    }
+  },
+  {
+    name = "netboot-script"
+    outputs = [
+      "$root_build_dir/netboot.sh",
+    ]
+    switch = "--netboot="
+    json = {
+      name = "netboot"
+      type = "sh"
+    }
+  },
+]
+foreach(format,
+        [
+          "tgz",
+          "zip",
+        ]) {
+  paver_targets += [
+    {
+      name = "archive-$format"
+      outputs = [
+        "$root_build_dir/build-archive.$format",
+      ]
+      switch = "--archive="
+      json = {
+        name = "archive"
+        type = "$format"
+      }
+    },
+    {
+      name = "symbol-archive-$format"
+      outputs = [
+        "$root_build_dir/symbol-archive.$format",
+      ]
+      switch = "--symbol-archive="
+      json = {
+        name = "symbol-archive"
+        type = "$format"
+      }
+    },
+  ]
+}
+
+foreach(target, paver_targets) {
+  images += [
+    {
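+      # rebase_path() returns a list here; keep its single element.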
+      path = rebase_path(target.outputs, root_build_dir)
+      path = path[0]
+      json = target.json
+      deps = [
+        ":${target.name}",
+      ]
+    },
+  ]
+  action(target.name) {
+    deps = [
+      ":default-images",
+      ":netboot",
+      "zedboot",
+    ]
+    testonly = true
+    sources = [
+      "$root_build_dir/images.json",
+      "$root_build_dir/zedboot_images.json",
+    ]
+    outputs = target.outputs
+    depfile = "${outputs[0]}.d"
+    script = "pack-images.py"
+    args = [
+      "--depfile=" + rebase_path(depfile, root_build_dir),
+      target.switch + rebase_path(outputs[0], root_build_dir),
+    ]
+    args += rebase_path(sources, root_build_dir)
+  }
+}
+
+###
+### Amber updates.
+###
+
+# update_packages.manifest contains the same entries as the pkgsvr_index but
+# additionally includes the system_image package.
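+# Each entry maps "<package>/<version>" to its meta.far merkleroot (see the
+# --entry args assembled below).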
+action("update_packages.manifest") {
+  visibility = [ ":update.manifest" ]
+  testonly = true
+
+  script = "manifest.py"
+  outputs = [
+    "$target_out_dir/$target_name",
+  ]
+  args = [
+    "--contents",
+    "--output",
+    rebase_path(outputs[0], root_build_dir),
+  ]
+  deps = []
+  sources = []
+
+  deps += [ ":system_image.meta" ]
+  args += [ "--entry=system_image/0=" +
+            rebase_path("$target_out_dir/system_image.meta/meta.far.merkle",
+                        root_build_dir) ]
+
+  foreach(pkg_label, monolith_packages) {
+    # Find the response file written by package().
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_rspfile = "$pkg_target_out_dir/${pkg_target_name}.pkgsvr_index.rsp"
+    deps += [ "${pkg_label}.pkgsvr_index.rsp" ]
+    sources += [ pkg_rspfile ]
+    args += [ "@" + rebase_path(pkg_rspfile, root_build_dir) ]
+  }
+}
+
+# The update package manifest contains the pkgsvr_index and the target
+# system kernel images.
+action("update.manifest") {
+  visibility = [ ":*" ]
+  testonly = true
+
+  update_manifest = [
+    {
+      target = "packages"
+      deps = [
+        ":update_packages.manifest",
+      ]
+    },
+
+    # Add the meta/package JSON file that makes this the "update" package.
+    {
+      target = "meta/package"
+      sources = [
+        "update_package.json",
+      ]
+    },
+  ]
+
+  foreach(image, images) {
+    if (defined(image.updater)) {
+      if (update_kernels == []) {
+        update_manifest += [
+          {
+            target = image.updater
+            forward_variables_from(image,
+                                   [
+                                     "deps",
+                                     "sources",
+                                   ])
+          },
+        ]
+      } else {
+        foreach(kernel, update_kernels) {
+          if (image.updater == kernel) {
+            update_manifest += [
+              {
+                target = image.updater
+                forward_variables_from(image,
+                                       [
+                                         "deps",
+                                         "sources",
+                                       ])
+              },
+            ]
+          }
+        }
+      }
+    }
+  }
+
+  script = "manifest.py"
+
+  outputs = [
+    "$target_out_dir/$target_name",
+  ]
+
+  args = [ "--output=" + rebase_path(outputs[0], root_build_dir) ]
+  sources = []
+  deps = []
+
+  foreach(entry, update_manifest) {
+    entry_source = ""
+    if (defined(entry.deps)) {
+      deps += entry.deps
+    }
+
+    if (defined(entry.sources)) {
+      # TODO(BLD-354): We should only have a single source
+      sources = []
+      sources += entry.sources
+      entry_source = sources[0]
+    } else if (defined(entry.deps)) {
+      foreach(label, entry.deps) {
+        # TODO(BLD-354): We should only have a single output
+        dep_outputs = []
+        dep_outputs += get_target_outputs(label)
+        entry_source = dep_outputs[0]
+      }
+    }
+    entry_source = rebase_path(entry_source, root_build_dir)
+    args += [ "--entry=${entry.target}=${entry_source}" ]
+  }
+}
+
+pm_build_package("update.meta") {
+  visibility = [ ":*" ]
+  testonly = true
+  manifest = ":update.manifest"
+}
+
+# XXX(raggi): The following manifests retain the "meta/" files, resulting in
+# them being added as blobs, which they should not be. A likely better solution
+# here is to teach pm_build_package to produce either a blob manifest or a
+# manifest.py --contents compatible response file that excludes these files.
+
+action("update.sources.manifest") {
+  visibility = [ ":*" ]
+  testonly = true
+  script = "manifest.py"
+  deps = [
+    ":update.manifest",
+  ]
+  outputs = [
+    "$target_out_dir/update.sources.manifest",
+  ]
+  update_manifests = get_target_outputs(deps[0])
+  args = [
+    "--sources",
+    "--output=" + rebase_path(outputs[0], root_build_dir),
+    "--manifest=" + rebase_path(update_manifests[0]),
+  ]
+}
+
+# The amber index is the index of all requested packages, naming each
+# package's meta.far file instead of its merkleroot. Additionally, the
+# amber_index includes the system package itself and the system update
+# package.
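+# Each entry has the form "<package>/<version>=<path to meta.far>", e.g.
+# "update/0=obj/build/images/update.meta/meta.far" (path is illustrative).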
+amber_index = "$target_out_dir/amber_index"
+
+action("amber_index") {
+  visibility = [ ":amber_publish_index" ]
+  testonly = true
+
+  script = "manifest.py"
+  args = [ "--absolute" ]
+  outputs = [
+    "$target_out_dir/$target_name",
+  ]
+  args += [ "--output=" + rebase_path(outputs[0], root_build_dir) ]
+  sources = []
+  deps = []
+  foreach(pkg_label, available_packages) {
+    # Find the response file written by package().
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_rspfile = "$pkg_target_out_dir/${pkg_target_name}.amber_index.rsp"
+    deps += [ "${pkg_label}.amber_index.rsp" ]
+    sources += [ pkg_rspfile ]
+    args += [ "@" + rebase_path(pkg_rspfile, root_build_dir) ]
+  }
+
+  system_image_meta_dir =
+      get_label_info(":system_image.meta", "target_out_dir") +
+      "/system_image.meta"
+  system_image_meta_far = system_image_meta_dir + "/meta.far"
+  args += [
+    "--entry",
+    "system_image/0=" + rebase_path(system_image_meta_far, root_build_dir),
+  ]
+  deps += [ ":system_image.meta" ]
+
+  update_meta_dir =
+      get_label_info(":update.meta", "target_out_dir") + "/update.meta"
+  update_meta_far = update_meta_dir + "/meta.far"
+  args += [
+    "--entry",
+    "update/0=" + rebase_path(update_meta_far, root_build_dir),
+  ]
+  deps += [ ":update.meta" ]
+}
+
+# The system index is the index of all available packages, naming each
+# package's blobs.json file instead of its merkleroot, and tagging each
+# package with the package set it belongs to (monolith/preinstall/available).
+# Additionally, the system_index includes the system package itself and the
+# system update package.
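+# Each entry has the form "<package>/<version>#<package set>=<path>", e.g.
+# "update/0#monolith=obj/build/images/update.meta/blobs.json" (path is
+# illustrative).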
+system_index = "$target_out_dir/system_index"
+
+action("system_index") {
+  visibility = [ ":system_snapshot" ]
+  testonly = true
+
+  script = "manifest.py"
+  args = [ "--absolute" ]
+  outputs = [
+    "$target_out_dir/$target_name",
+  ]
+  args += [ "--output=" + rebase_path(outputs[0], root_build_dir) ]
+  sources = []
+  deps = []
+  foreach(pkg_label, available_packages) {
+    # Find the response file written by package().
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_rspfile = "$pkg_target_out_dir/${pkg_target_name}.system_index.rsp"
+    deps += [ "${pkg_label}.system_index.rsp" ]
+    sources += [ pkg_rspfile ]
+    args += [ "@" + rebase_path(pkg_rspfile, root_build_dir) ]
+  }
+
+  system_image_meta_dir =
+      get_label_info(":system_image.meta", "target_out_dir") +
+      "/system_image.meta"
+  system_image_blobs_json = system_image_meta_dir + "/blobs.json"
+  args += [
+    "--entry",
+    "system_image/0#monolith=" +
+        rebase_path(system_image_blobs_json, root_build_dir),
+  ]
+  deps += [ ":system_image.meta" ]
+
+  update_meta_dir =
+      get_label_info(":update.meta", "target_out_dir") + "/update.meta"
+  update_blobs_json = update_meta_dir + "/blobs.json"
+  args += [
+    "--entry",
+    "update/0#monolith=" + rebase_path(update_blobs_json, root_build_dir),
+  ]
+  deps += [ ":update.meta" ]
+}
+
+compiled_action("system_snapshot") {
+  tool = "//garnet/go/src/pm:pm_bin"
+  tool_output_name = "pm"
+
+  visibility = [ ":updates" ]
+  testonly = true
+
+  deps = [
+    ":system_index",
+  ]
+
+  inputs = [
+    system_index,
+  ]
+
+  outputs = [
+    "$target_out_dir/system.snapshot",
+  ]
+
+  args = [
+    "snapshot",
+    "--manifest",
+    rebase_path(inputs[0], root_build_dir),
+    "--output",
+    rebase_path(outputs[0], root_build_dir),
+  ]
+}
+
+# The available blob manifest lists one merkleroot=source_path entry for
+# every blob in every package produced by the build, including the system
+# image and the update package.
+available_blob_manifest = "$root_build_dir/available_blobs.manifest"
+action("available_blobs.manifest") {
+  visibility = [ ":*" ]
+  testonly = true
+  outputs = [
+    available_blob_manifest,
+  ]
+  depfile = available_blob_manifest + ".d"
+  deps = [
+    ":system_image.meta",
+    ":update.meta",
+  ]
+  inputs = []
+  script = "blob_manifest.py"
+  args = [ "@{{response_file_name}}" ]
+
+  response_file_contents = [
+    "--output=" + rebase_path(available_blob_manifest, root_build_dir),
+    "--depfile=" + rebase_path(depfile, root_build_dir),
+    "--input=" + rebase_path("$target_out_dir/system_image.meta/blobs.json",
+                             root_build_dir),
+    "--input=" +
+        rebase_path("$target_out_dir/update.meta/blobs.json", root_build_dir),
+  ]
+  foreach(pkg_label, available_packages) {
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_blobs_rsp = "${pkg_target_out_dir}/${pkg_target_name}.blobs.rsp"
+    deps += [ "${pkg_label}.blobs.rsp" ]
+    inputs += [ pkg_blobs_rsp ]
+    response_file_contents +=
+        [ "@" + rebase_path(pkg_blobs_rsp, root_build_dir) ]
+  }
+}
+
+# Populate the repository directory with content ID-named copies.
+action("amber_publish_blobs") {
+  testonly = true
+  outputs = [
+    "$amber_repository_dir.stamp",
+  ]
+  deps = [
+    ":available_blobs.manifest",
+  ]
+  inputs = []
+  foreach(dep, deps) {
+    inputs += get_target_outputs(dep)
+  }
+  script = "manifest.py"
+  args = [
+    "--copy-contentaddr",
+    "--output=" + rebase_path(amber_repository_blobs_dir),
+    "--stamp=" + rebase_path("$amber_repository_dir.stamp"),
+  ]
+  foreach(manifest, inputs) {
+    args += [ "--manifest=" + rebase_path(manifest, root_build_dir) ]
+  }
+}
+
+# Sign and publish the package index.
+pm_publish("amber_publish_index") {
+  testonly = true
+  deps = [
+    ":amber_index",
+  ]
+  inputs = [
+    amber_index,
+  ]
+}
+
+group("updates") {
+  testonly = true
+  deps = [
+    ":amber_publish_blobs",
+    ":amber_publish_index",
+    ":ids.txt",
+    ":system_snapshot",
+  ]
+}
+
+###
+### Build ID maps.
+###
+
+# Combine the /boot, /system, and package build ID maps into one.
+# Nothing in the build uses this, but top-level targets always update
+# it so that debugging tools can rely on it.
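+# Each ids.txt line pairs a build ID (lowercase hex) with the path to the
+# matching unstripped binary, e.g.
+# "4fcb712aa6387724a9f465a32cd8c14b exe.unstripped/foo" (illustrative).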
+action("ids.txt") {
+  testonly = true
+
+  deps = [
+    ":kernel-ids.txt",
+    ":system_image.manifest",
+    ":zircon-asan-build-id",
+    ":zircon-build-id",
+  ]
+  sources = [
+    "$target_out_dir/kernel-ids.txt",
+    system_build_id_map,
+  ]
+
+  foreach(pkg_label, available_packages) {
+    # Find the ids.txt file written by package().
+    manifest = get_label_info(pkg_label, "label_no_toolchain") +
+               ".final.manifest.ids.txt"
+    manifest_target_name = get_label_info(manifest, "name")
+    manifest_target_out_dir = get_label_info(manifest, "target_out_dir")
+    deps += [ manifest ]
+    sources += [ "$manifest_target_out_dir/${manifest_target_name}" ]
+  }
+
+  script = "/usr/bin/sort"
+  outputs = [
+    "$root_out_dir/ids.txt",
+  ]
+  args = [
+           "-u",
+           "-o",
+         ] + rebase_path(outputs + sources, root_build_dir)
+}
+
+# The vDSO doesn't appear in any package and so doesn't get into any
+# ids.txt file produced by generate_manifest().  But it appears in memory
+# at runtime and in backtraces, so it should be in the aggregated ids.txt
+# for symbolization.  The vDSO doesn't appear in zircon_boot_manifests, so
+# fetching it out of Zircon's own ids.txt by name is the only thing to do.
+# Likewise for the kernel itself, whose build ID is useful to have in the map.
+action("kernel-ids.txt") {
+  script = "manifest.py"
+  sources = [
+    "$zircon_build_dir/ids.txt",
+  ]
+  outputs = [
+    "$target_out_dir/kernel-ids.txt",
+  ]
+  args = [
+    "--separator= ",
+    "--output=" + rebase_path(outputs[0], root_build_dir),
+    "--include-source=*/libzircon.so",
+    "--include-source=*/zircon.elf",
+    "--manifest=" + rebase_path(sources[0], root_build_dir),
+  ]
+}
+
+# TODO(TC-303): This is a temporary hack to get all of Zircon's debug files
+# into the $root_build_dir/.build-id hierarchy.  The Zircon build produces
+# its own .build-id hierarchy under $zircon_build_dir, but using its ids.txt
+# is the simpler way to populate the one in this build.  When ids.txt is fully
+# obsolete, hopefully Zircon will be in the unified build anyway.
+foreach(target,
+        [
+          {
+            name = "zircon-build-id"
+            sources = [
+              "$zircon_build_dir/ids.txt",
+            ]
+          },
+          {
+            name = "zircon-asan-build-id"
+            sources = [
+              "$zircon_asan_build_dir/ids.txt",
+            ]
+          },
+        ]) {
+  action(target.name) {
+    visibility = [ ":ids.txt" ]
+    script = "//scripts/build_id_conv.py"
+    sources = target.sources
+    outputs = [
+      "$root_build_dir/${target_name}.stamp",
+    ]
+    args =
+        [ "--stamp=" + rebase_path(outputs[0], root_build_dir) ] +
+        rebase_path(sources + [ "$root_build_dir/.build-id" ], root_build_dir)
+  }
+}
+
+images += [
+  {
+    deps = [
+      ":ids.txt",
+    ]
+    json = {
+      name = "build-id"
+      type = "txt"
+    }
+  },
+]
+
+write_images_manifest("images-manifest") {
+  outputs = [
+    "$root_build_dir/images.json",
+    "$root_build_dir/image_paths.sh",
+  ]
+}
+
+###
+### SDK
+###
+
+sdk_images = []
+
+foreach(image, images) {
+  if (defined(image.sdk)) {
+    image_target_name = "${image.sdk}_sdk"
+    sdk_images += [ ":$image_target_name" ]
+    sdk_atom(image_target_name) {
+      id = "sdk://images/${image.sdk}"
+      category = "partner"
+      testonly = true
+
+      file_content = "target/$target_cpu/${image.sdk}"
+      meta = {
+        dest = "images/${image.sdk}-meta.json"
+        schema = "image"
+        value = {
+          type = "image"
+          name = "${image.sdk}"
+          file = {
+            if (target_cpu == "x64") {
+              x64 = file_content
+            } else if (target_cpu == "arm64") {
+              arm64 = file_content
+            } else {
+              assert(false, "Unsupported target architecture: $target_cpu")
+            }
+          }
+        }
+      }
+
+      image_sources = []
+      if (defined(image.sources)) {
+        image_sources += image.sources
+      } else {
+        foreach(label, image.deps) {
+          image_sources += get_target_outputs(label)
+        }
+      }
+
+      files = [
+        {
+          source = image_sources[0]
+          dest = "target/$target_cpu/${image.sdk}"
+        },
+      ]
+
+      non_sdk_deps = image.deps
+    }
+  }
+}
+
+sdk_molecule("images_sdk") {
+  testonly = true
+
+  deps = sdk_images
+}
diff --git a/build/images/blob_manifest.py b/build/images/blob_manifest.py
new file mode 100755
index 0000000..ebcf25b
--- /dev/null
+++ b/build/images/blob_manifest.py
@@ -0,0 +1,45 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import json
+import sys
+import shlex
+
+def main():
+    parser = argparse.ArgumentParser(
+        description='Produce a blobfs manifest from a set of blobs.json files',
+        fromfile_prefix_chars='@')
+    parser.convert_arg_line_to_args = shlex.split
+    parser.add_argument('--output', required=True,
+                        help='Output manifest path')
+    parser.add_argument('--depfile', required=True,
+                        help='Dependency file output path')
+    parser.add_argument('--input', action='append', default=[],
+                        help='Input blobs.json, repeated')
+
+    args = parser.parse_args()
+
+    all_blobs = dict()
+
+    with open(args.depfile, 'w') as depfile:
+        depfile.write(args.output)
+        depfile.write(':')
+        for path in args.input:
+            depfile.write(' ' + path)
+            with open(path) as input:
+                blobs = json.load(input)
+                for blob in blobs:
+                    src = blob['source_path']
+                    all_blobs[blob['merkle']] = src
+                    depfile.write(' ' + src)
+
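+    # Emit one "merkleroot=source_path" line per unique blob, e.g.
+    # "15ec7bf0...=obj/foo/libbar.so" (hypothetical entry).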
+    with open(args.output, 'w') as output:
+        for merkle, src in all_blobs.items():
+            output.write('%s=%s\n' % (merkle, src))
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/images/boot.gni b/build/images/boot.gni
new file mode 100644
index 0000000..f66f0e8
--- /dev/null
+++ b/build/images/boot.gni
@@ -0,0 +1,193 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/compiled_action.gni")
+import("//build/config/clang/clang.gni")
+import("//build/config/fuchsia/zircon.gni")
+
+# Build a "kernel partition" target for ChromeOS targets.
+#
+# Parameters
+#
+#   deps (required)
+#     [list of one label] Must be a `zbi()` target defined earlier in the file.
+#
+#   output_name (optional, default: `target_name`)
+#   output_extension (optional, default: `".vboot"`)
+#     [string] Determines the file name, in `root_out_dir`.
+#
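+# Hypothetical usage (target names are illustrative):
+#   vboot("vboot") {
+#     output_name = "fuchsia"
+#     testonly = true
+#     deps = [ ":zbi" ]
+#   }
+#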
+template("vboot") {
+  if (defined(invoker.output_name)) {
+    output_file = invoker.output_name
+  } else {
+    output_file = target_name
+  }
+  if (defined(invoker.output_extension)) {
+    if (invoker.output_extension != "") {
+      output_file += ".${invoker.output_extension}"
+    }
+  } else {
+    output_file += ".vboot"
+  }
+  output_file = "$root_out_dir/$output_file"
+
+  compiled_action(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "testonly",
+                             "visibility",
+                           ])
+
+    tool = "//garnet/tools/vboot_reference:futility"
+    outputs = [
+      output_file,
+    ]
+
+    vboot_dir = "//third_party/vboot_reference"
+    kernel_keyblock = "$vboot_dir/tests/devkeys/kernel.keyblock"
+    private_keyblock = "$vboot_dir/tests/devkeys/kernel_data_key.vbprivk"
+    inputs = [
+      kernel_keyblock,
+      private_keyblock,
+    ]
+
+    assert(defined(deps), "vboot() requires deps")
+    zbi = []
+    foreach(label, deps) {
+      zbi += get_target_outputs(label)
+    }
+    inputs += zbi
+    assert(zbi == [ zbi[0] ], "vboot() requires exactly one zbi() in deps")
+
+    # The CrOS bootloader supports Multiboot (with `--flags 0x2` below).
+    # The Multiboot trampoline is the "kernel" (`--vmlinuz` switch) and the
+    # ZBI is the RAM disk (`--bootloader` switch).
+    assert(current_cpu == "x64")
+    kernel = "${zircon_build_dir}/multiboot.bin"
+    inputs += [ kernel ]
+
+    args = [
+      "vbutil_kernel",
+      "--pack",
+      rebase_path(output_file),
+      "--keyblock",
+      rebase_path(kernel_keyblock),
+      "--signprivate",
+      rebase_path(private_keyblock),
+      "--bootloader",
+      rebase_path(zbi[0]),
+      "--vmlinuz",
+      rebase_path(kernel),
+      "--version",
+      "1",
+      "--flags",
+      "0x2",
+    ]
+  }
+}
+
+# Build an "EFI System Partition" target for EFI targets.
+#
+# Parameters
+#
+#   deps (optional)
+#     [list of labels] Targets that generate the other inputs.
+#
+#   output_name (optional, default: `target_name`)
+#   output_extension (optional, default: `".esp.blk"`)
+#     [string] Determines the file name, in `root_out_dir`.
+#
+#   bootdata_bin (optional)
+#     [path] Must be a ramdisk that complements zircon_bin.
+#
+#   zircon_bin (optional)
+#     [path] A zircon kernel.
+#
+#   zedboot (optional)
+#     [label] A Zedboot `zbi()` target.
+#
+#   cmdline (optional)
+#     [path] A bootloader (Gigaboot) cmdline file to include in the EFI root.
+#
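+# Example (matching the esp("esp") target defined in BUILD.gn above):
+#   esp("esp") {
+#     output_name = "fuchsia"
+#     testonly = true
+#     cmdline = "efi_local_cmdline.txt"
+#   }
+#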
+template("esp") {
+  if (defined(invoker.output_name)) {
+    output_file = invoker.output_name
+  } else {
+    output_file = target_name
+  }
+  if (defined(invoker.output_extension)) {
+    if (invoker.output_extension != "") {
+      output_file += ".${invoker.output_extension}"
+    }
+  } else {
+    output_file += ".esp.blk"
+  }
+  output_file = "$root_out_dir/$output_file"
+
+  compiled_action(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "testonly",
+                             "visibility",
+                           ])
+
+    tool = "//garnet/go/src/make-efi"
+    mkfs_msdosfs_bin = "$zircon_tools_dir/mkfs-msdosfs"
+
+    outputs = [
+      output_file,
+    ]
+    inputs = [
+      mkfs_msdosfs_bin,
+    ]
+    args = [
+      "--output",
+      rebase_path(output_file),
+      "--mkfs",
+      rebase_path(mkfs_msdosfs_bin),
+    ]
+
+    if (defined(invoker.zircon_bin)) {
+      args += [
+        "--zircon",
+        rebase_path(invoker.zircon_bin),
+      ]
+      inputs += [ invoker.zircon_bin ]
+    }
+
+    if (defined(invoker.bootdata_bin)) {
+      args += [
+        "--bootdata",
+        rebase_path(invoker.bootdata_bin),
+      ]
+      inputs += [ invoker.bootdata_bin ]
+    }
+
+    if (defined(invoker.zedboot)) {
+      args += [
+        "--zedboot",
+        rebase_path(invoker.zedboot),
+      ]
+      inputs += [ invoker.zedboot ]
+    }
+
+    if (defined(invoker.cmdline)) {
+      args += [
+        "--cmdline",
+        rebase_path(invoker.cmdline),
+      ]
+    }
+
+    if (target_cpu == "x64") {
+      gigaboot_bin = "${zircon_build_dir}/bootloader/bootx64.efi"
+      args += [
+        "--efi-bootloader",
+        rebase_path(gigaboot_bin),
+      ]
+      inputs += [ gigaboot_bin ]
+    }
+  }
+}
diff --git a/build/images/create-shell-commands.py b/build/images/create-shell-commands.py
new file mode 100755
index 0000000..0b05d25
--- /dev/null
+++ b/build/images/create-shell-commands.py
@@ -0,0 +1,48 @@
+#!/usr/bin/env python
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import argparse
+import sys
+import os
+import shlex
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description='Create trampolines and a manifest for a set of shell commands',
+        fromfile_prefix_chars='@')
+    parser.convert_arg_line_to_args = shlex.split
+    parser.add_argument('--trampoline-dir', required=True,
+                        help='Directory in which to create trampolines')
+    parser.add_argument('--output-manifest', required=True,
+                        help='Output manifest path')
+    parser.add_argument('--uri', action='append', default=[],
+                        help='A command URI to create an entry for')
+
+    args = parser.parse_args()
+
+    if not os.path.exists(args.trampoline_dir):
+        os.makedirs(args.trampoline_dir)
+
+    commands = dict()
+
+    for uri in args.uri:
+        name = uri.split('#')[-1]
+        name = os.path.split(name)[-1]
+        if name in commands:
+            sys.stderr.write('Duplicate shell command name: %s\n' % name)
+            return 1
+        path = os.path.join(args.trampoline_dir, name)
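+        # Each trampoline is a one-line "#!resolve <uri>" file that gets
+        # resolved to the real binary at runtime, e.g.
+        # "#!resolve fuchsia-pkg://fuchsia.com/ls#bin/ls" (hypothetical URI).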
+        with open(path, 'w') as f:
+            f.write('#!resolve %s\n' % uri)
+        commands[name] = path
+
+    with open(args.output_manifest, 'w') as output:
+        for name, path in commands.items():
+            output.write('bin/%s=%s\n' % (name, path))
+
+
+if __name__ == '__main__':
+    sys.exit(main())
diff --git a/build/images/custom_signing.gni b/build/images/custom_signing.gni
new file mode 100644
index 0000000..6a895a4
--- /dev/null
+++ b/build/images/custom_signing.gni
@@ -0,0 +1,99 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/fuchsia/zircon.gni")
+
+declare_args() {
+  # If non-empty, the given script will be invoked to produce a signed ZBI
+  # image. The script must accept -z for the input zbi path and -o for the
+  # output signed zbi path. The script path must be given in GN-label syntax
+  # (i.e. it starts with //).
+  custom_signing_script = ""
+
+  # If true, then the paving script will pave vbmeta images to the target device.
+  # It is assumed that the vbmeta image will be created by the custom_signing_script.
+  use_vbmeta = false
+}
+
+# Template for producing signed ZBI images given a custom signing script.
+# The signing script is required to accept the following parameters:
+#  -z  the path to the ZBI image to be signed
+#  -o  the path to the image file to be output
+#  -v  the path to the vbmeta file to be output (scripts not using AVB may ignore this)
+#  -B  the path to the zircon build directory
+#
+# TODO(BLD-323): add flags for producing depfiles
+# TODO(raggi): add support for custom flags (e.g. to switch keys)
+#
+# Parameters
+#
+#   output_name (optional, default: target_name)
+#   output_extension (optional, default: signed)
+#       [string] These together determine the name of the output file.
+#       If `output_name` is omitted, then the name of the target is
+#       used.  If `output_extension` is "" then `output_name` is the
+#       file name; otherwise, `${output_name}.${output_extension}`;
+#       the output file is always under `root_out_dir`.
+#
+#   zbi (required)
+#       [list-of-strings] path to a ZBI image to be signed. Must only
+#       contain a single entry.
+#
+#   deps (usually required)
+#   visibility (optional)
+#   testonly (optional)
+#       Same as for any GN `action` target.  `deps` must list the labels
+#       that produce the `zbi` input if it is generated by the build (not
+#       required for inputs that are part of the source tree).
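+#
+# Hypothetical usage (target names are illustrative):
+#   custom_signed_zbi("signed-zbi") {
+#     testonly = true
+#     zbi = [ "$root_out_dir/fuchsia.zbi" ]
+#     deps = [ ":zbi" ]
+#   }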
+template("custom_signed_zbi") {
+  if (defined(invoker.output_name)) {
+    output_file = invoker.output_name
+  } else {
+    output_file = target_name
+  }
+
+  vbmeta_file = output_file + ".vbmeta"
+
+  if (defined(invoker.output_extension)) {
+    if (invoker.output_extension != "") {
+      output_file += ".${invoker.output_extension}"
+    }
+  } else {
+    output_file += ".signed"
+  }
+
+  forward_variables_from(invoker,
+                         [
+                           "testonly",
+                           "deps",
+                           "zbi",
+                         ])
+
+  assert([ zbi[0] ] == zbi, "zbi parameter must contain a single entry")
+
+  output_file = "$root_out_dir/$output_file"
+  vbmeta_file = "$root_out_dir/$vbmeta_file"
+  action(target_name) {
+    script = custom_signing_script
+
+    inputs = zbi
+    outputs = [
+      output_file,
+    ]
+    if (use_vbmeta) {
+      outputs += [ vbmeta_file ]
+    }
+    args = [
+      "-z",
+      rebase_path(inputs[0], root_build_dir),
+      "-o",
+      rebase_path(outputs[0], root_build_dir),
+      "-v",
+      rebase_path(vbmeta_file, root_build_dir),
+      "-B",
+      rebase_path(zircon_build_dir, root_build_dir),
+    ]
+  }
+}
diff --git a/build/images/efi_local_cmdline.txt b/build/images/efi_local_cmdline.txt
new file mode 100644
index 0000000..2482a4b
--- /dev/null
+++ b/build/images/efi_local_cmdline.txt
@@ -0,0 +1,2 @@
+bootloader.default=local
+bootloader.timeout=1
diff --git a/build/images/elfinfo.py b/build/images/elfinfo.py
new file mode 100755
index 0000000..3aa4806
--- /dev/null
+++ b/build/images/elfinfo.py
@@ -0,0 +1,589 @@
+#!/usr/bin/env python
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+from contextlib import contextmanager
+from collections import namedtuple
+import mmap
+import os
+import struct
+
+
+# Standard ELF constants.
+ELFMAG = '\x7fELF'
+EI_CLASS = 4
+ELFCLASS32 = 1
+ELFCLASS64 = 2
+EI_DATA = 5
+ELFDATA2LSB = 1
+ELFDATA2MSB = 2
+EM_386 = 3
+EM_ARM = 40
+EM_X86_64 = 62
+EM_AARCH64 = 183
+PT_LOAD = 1
+PT_DYNAMIC = 2
+PT_INTERP = 3
+PT_NOTE = 4
+DT_NEEDED = 1
+DT_STRTAB = 5
+DT_SONAME = 14
+NT_GNU_BUILD_ID = 3
+SHT_SYMTAB = 2
+
+
+class elf_note(
+    namedtuple('elf_note', [
+        'name',
+        'type',
+        'desc',
+    ])):
+
+    # An ELF note is identified by (name_string, type_integer).
+    def ident(self):
+        return (self.name, self.type)
+
+    def is_build_id(self):
+        return self.ident() == ('GNU\0', NT_GNU_BUILD_ID)
+
+    def build_id_hex(self):
+        if self.is_build_id():
+            return ''.join(('%02x' % ord(byte)) for byte in self.desc)
+        return None
+
+    def __repr__(self):
+        return ('elf_note(%r, %#x, <%d bytes>)' %
+                (self.name, self.type, len(self.desc)))
+
+
+def gen_elf():
+    # { 'Struct1': (ELFCLASS32 fields, ELFCLASS64 fields),
+    #   'Struct2': fields_same_for_both, ... }
+    elf_types = {
+        'Ehdr': ([
+            ('e_ident', '16s'),
+            ('e_type', 'H'),
+            ('e_machine', 'H'),
+            ('e_version', 'I'),
+            ('e_entry', 'I'),
+            ('e_phoff', 'I'),
+            ('e_shoff', 'I'),
+            ('e_flags', 'I'),
+            ('e_ehsize', 'H'),
+            ('e_phentsize', 'H'),
+            ('e_phnum', 'H'),
+            ('e_shentsize', 'H'),
+            ('e_shnum', 'H'),
+            ('e_shstrndx', 'H'),
+        ], [
+            ('e_ident', '16s'),
+            ('e_type', 'H'),
+            ('e_machine', 'H'),
+            ('e_version', 'I'),
+            ('e_entry', 'Q'),
+            ('e_phoff', 'Q'),
+            ('e_shoff', 'Q'),
+            ('e_flags', 'I'),
+            ('e_ehsize', 'H'),
+            ('e_phentsize', 'H'),
+            ('e_phnum', 'H'),
+            ('e_shentsize', 'H'),
+            ('e_shnum', 'H'),
+            ('e_shstrndx', 'H'),
+        ]),
+        'Phdr': ([
+            ('p_type', 'I'),
+            ('p_offset', 'I'),
+            ('p_vaddr', 'I'),
+            ('p_paddr', 'I'),
+            ('p_filesz', 'I'),
+            ('p_memsz', 'I'),
+            ('p_flags', 'I'),
+            ('p_align', 'I'),
+        ], [
+            ('p_type', 'I'),
+            ('p_flags', 'I'),
+            ('p_offset', 'Q'),
+            ('p_vaddr', 'Q'),
+            ('p_paddr', 'Q'),
+            ('p_filesz', 'Q'),
+            ('p_memsz', 'Q'),
+            ('p_align', 'Q'),
+        ]),
+        'Shdr': ([
+            ('sh_name', 'L'),
+            ('sh_type', 'L'),
+            ('sh_flags', 'L'),
+            ('sh_addr', 'L'),
+            ('sh_offset', 'L'),
+            ('sh_size', 'L'),
+            ('sh_link', 'L'),
+            ('sh_info', 'L'),
+            ('sh_addralign', 'L'),
+            ('sh_entsize', 'L'),
+        ], [
+            ('sh_name', 'L'),
+            ('sh_type', 'L'),
+            ('sh_flags', 'Q'),
+            ('sh_addr', 'Q'),
+            ('sh_offset', 'Q'),
+            ('sh_size', 'Q'),
+            ('sh_link', 'L'),
+            ('sh_info', 'L'),
+            ('sh_addralign', 'Q'),
+            ('sh_entsize', 'Q'),
+        ]),
+        'Dyn': ([
+            ('d_tag', 'i'),
+            ('d_val', 'I'),
+        ], [
+            ('d_tag', 'q'),
+            ('d_val', 'Q'),
+        ]),
+        'Nhdr': [
+            ('n_namesz', 'I'),
+            ('n_descsz', 'I'),
+            ('n_type', 'I'),
+        ],
+        'dwarf2_line_header': [
+            ('unit_length', 'L'),
+            ('version', 'H'),
+            ('header_length', 'L'),
+            ('minimum_instruction_length', 'B'),
+            ('default_is_stmt', 'B'),
+            ('line_base', 'b'),
+            ('line_range', 'B'),
+            ('opcode_base', 'B'),
+        ],
+        'dwarf4_line_header': [
+            ('unit_length', 'L'),
+            ('version', 'H'),
+            ('header_length', 'L'),
+            ('minimum_instruction_length', 'B'),
+            ('maximum_operations_per_instruction', 'B'),
+            ('default_is_stmt', 'B'),
+            ('line_base', 'b'),
+            ('line_range', 'b'),
+            ('opcode_base', 'B'),
+        ],
+    }
+
+    # There is an accessor for each struct, e.g. Ehdr.
+    # Ehdr.read is a function like Struct.unpack_from.
+    # Ehdr.size is the size of the struct.
+    elf_accessor = namedtuple('elf_accessor',
+                              ['size', 'read', 'write', 'pack'])
+
+    # All the accessors for a format (class, byte-order) form one elf,
+    # e.g. use elf.Ehdr and elf.Phdr.
+    elf = namedtuple('elf', elf_types.keys())
+
+    def gen_accessors(is64, struct_byte_order):
+        def make_accessor(type, decoder):
+            return elf_accessor(
+                size=decoder.size,
+                read=lambda buffer, offset=0: type._make(
+                    decoder.unpack_from(buffer, offset)),
+                write=lambda buffer, offset, x: decoder.pack_into(
+                    buffer, offset, *x),
+                pack=lambda x: decoder.pack(*x))
+        for name, fields in elf_types.iteritems():
+            if isinstance(fields, tuple):
+                fields = fields[1 if is64 else 0]
+            type = namedtuple(name, [field_name for field_name, fmt in fields])
+            decoder = struct.Struct(struct_byte_order +
+                                    ''.join(fmt for field_name, fmt in fields))
+            yield make_accessor(type, decoder)
+
+    for elfclass, is64 in [(ELFCLASS32, False), (ELFCLASS64, True)]:
+        for elf_bo, struct_bo in [(ELFDATA2LSB, '<'), (ELFDATA2MSB, '>')]:
+            yield ((chr(elfclass), chr(elf_bo)),
+                   elf(*gen_accessors(is64, struct_bo)))
+
+# e.g. ELF[file[EI_CLASS], file[EI_DATA]].Ehdr.read(file).e_phnum
+ELF = dict(gen_elf())
+
+def get_elf_accessor(file):
+    # If it looks like an ELF file, whip out the decoder ring.
+    if file[:len(ELFMAG)] == ELFMAG:
+        return ELF[file[EI_CLASS], file[EI_DATA]]
+    return None
+
+
+def gen_phdrs(file, elf, ehdr):
+    for pos in xrange(0, ehdr.e_phnum * elf.Phdr.size, elf.Phdr.size):
+        yield elf.Phdr.read(file, ehdr.e_phoff + pos)
+
+
+def gen_shdrs(file, elf, ehdr):
+    for pos in xrange(0, ehdr.e_shnum * elf.Shdr.size, elf.Shdr.size):
+        yield elf.Shdr.read(file, ehdr.e_shoff + pos)
+
+
+cpu = namedtuple('cpu', [
+    'e_machine',                # ELF e_machine int
+    'llvm',                     # LLVM triple CPU component
+    'gn',                       # GN target_cpu
+])
+
+ELF_MACHINE_TO_CPU = {elf: cpu(elf, llvm, gn) for elf, llvm, gn in [
+    (EM_386, 'i386', 'x86'),
+    (EM_ARM, 'arm', 'arm'),
+    (EM_X86_64, 'x86_64', 'x64'),
+    (EM_AARCH64, 'aarch64', 'arm64'),
+]}
+
+
+@contextmanager
+def mmapper(filename):
+    """A context manager that yields (fd, file_contents) given a file name.
+This ensures that the mmap and file objects are closed at the end of the
+'with' statement."""
+    fileobj = open(filename, 'rb')
+    fd = fileobj.fileno()
+    if os.fstat(fd).st_size == 0:
+        # mmap can't handle empty files.
+        try:
+            yield fd, ''
+        finally:
+            fileobj.close()
+    else:
+        mmapobj = mmap.mmap(fd, 0, access=mmap.ACCESS_READ)
+        try:
+            yield fd, mmapobj
+        finally:
+            mmapobj.close()
+            fileobj.close()
+
+
+# elf_info objects are only created by `get_elf_info` or the `copy` or
+# `rename` methods.
+class elf_info(
+    namedtuple('elf_info', [
+        'filename',
+        'cpu',                     # cpu tuple
+        'notes',                   # list of (ident, desc): selected notes
+        'build_id',                # string: lowercase hex
+        'stripped',                # bool: Has no symbols or .debug_* sections
+        'interp',                  # string or None: PT_INTERP (without \0)
+        'soname',                  # string or None: DT_SONAME
+        'needed',                  # list of strings: DT_NEEDED
+    ])):
+
+    def rename(self, filename):
+        assert os.path.samefile(self.filename, filename)
+        # Copy the tuple.
+        clone = self.__class__(filename, *self[1:])
+        # Copy the lazy state.
+        clone.elf = self.elf
+        if self.get_sources == clone.get_sources:
+            raise Exception("uninitialized elf_info object!")
+        clone.get_sources = self.get_sources
+        return clone
+
+    def copy(self):
+        return self.rename(self.filename)
+
+    # This is replaced with a closure by the creator in get_elf_info.
+    def get_sources(self):
+        raise Exception("uninitialized elf_info object!")
+
+    def strip(self, stripped_filename):
+        """Write stripped output to the given file unless it already exists
+with identical contents.  Returns True iff the file was changed."""
+        with mmapper(self.filename) as mapped:
+            fd, file = mapped
+            ehdr = self.elf.Ehdr.read(file)
+
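+            # Strip by clearing the section-header fields in the ELF header
+            # and truncating the file just past the last PT_LOAD segment.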
+            stripped_ehdr = ehdr._replace(e_shoff=0, e_shnum=0, e_shstrndx=0)
+            stripped_size = max(phdr.p_offset + phdr.p_filesz
+                                for phdr in gen_phdrs(file, self.elf, ehdr)
+                                if phdr.p_type == PT_LOAD)
+            assert ehdr.e_phoff + (ehdr.e_phnum *
+                                   ehdr.e_phentsize) <= stripped_size
+
+            def gen_stripped_contents():
+                yield self.elf.Ehdr.pack(stripped_ehdr)
+                yield file[self.elf.Ehdr.size:stripped_size]
+
+            def old_file_matches():
+                old_size = os.path.getsize(stripped_filename)
+                new_size = sum(len(x) for x in gen_stripped_contents())
+                if old_size != new_size:
+                    return False
+                with open(stripped_filename, 'rb') as f:
+                    for chunk in gen_stripped_contents():
+                        if f.read(len(chunk)) != chunk:
+                            return False
+                return True
+
+            if os.path.exists(stripped_filename):
+                if old_file_matches():
+                    return False
+                else:
+                    os.remove(stripped_filename)
+
+            # Create the new file with the same mode as the original.
+            with os.fdopen(os.open(stripped_filename,
+                                   os.O_WRONLY | os.O_CREAT | os.O_EXCL,
+                                   os.fstat(fd).st_mode & 0777),
+                           'wb') as stripped_file:
+                stripped_file.write(self.elf.Ehdr.pack(stripped_ehdr))
+                stripped_file.write(file[self.elf.Ehdr.size:stripped_size])
+            return True
+
+def get_elf_info(filename, match_notes=False):
+    file = None
+    elf = None
+    ehdr = None
+    phdrs = None
+
+    # Yields an elf_note for each note in any PT_NOTE segment.
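+    # Note name and desc fields are padded to 4-byte alignment in the file.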
+    def gen_notes():
+        def round_up_to(size):
+            return ((size + 3) / 4) * 4
+        for phdr in phdrs:
+            if phdr.p_type == PT_NOTE:
+                pos = phdr.p_offset
+                while pos < phdr.p_offset + phdr.p_filesz:
+                    nhdr = elf.Nhdr.read(file, pos)
+                    pos += elf.Nhdr.size
+                    name = file[pos:pos + nhdr.n_namesz]
+                    pos += round_up_to(nhdr.n_namesz)
+                    desc = file[pos:pos + nhdr.n_descsz]
+                    pos += round_up_to(nhdr.n_descsz)
+                    yield elf_note(name, nhdr.n_type, desc)
+
+    def gen_sections():
+        shdrs = list(gen_shdrs(file, elf, ehdr))
+        if not shdrs:
+            return
+        strtab_shdr = shdrs[ehdr.e_shstrndx]
+        for shdr, i in zip(shdrs, xrange(len(shdrs))):
+            if i == 0:
+                continue
+            assert shdr.sh_name < strtab_shdr.sh_size, (
+                "%s: invalid sh_name" % filename)
+            yield (shdr,
+                   extract_C_string(strtab_shdr.sh_offset + shdr.sh_name))
+
+    # Generates '\0'-terminated strings starting at the given offset,
+    # until an empty string.
+    def gen_strings(start):
+        while True:
+            end = file.find('\0', start)
+            assert end >= start, (
+                "%s: Unterminated string at %#x" % (filename, start))
+            if start == end:
+                break
+            yield file[start:end]
+            start = end + 1
+
+    def extract_C_string(start):
+        for string in gen_strings(start):
+            return string
+        return ''
+
+    # Returns a string of hex digits (or None).
+    def get_build_id():
+        build_id = None
+        for note in gen_notes():
+            # Note that the last build_id note needs to be used due to TO-442.
+            possible_build_id = note.build_id_hex()
+            if possible_build_id:
+                build_id = possible_build_id
+        return build_id
+
+    # Returns a list of elf_note objects.
+    def get_matching_notes():
+        if isinstance(match_notes, bool):
+            if match_notes:
+                return list(gen_notes())
+            else:
+                return []
+        # If not a bool, it's an iterable of ident pairs.
+        return [note for note in gen_notes() if note.ident() in match_notes]
+
+    # Returns a string (without trailing '\0'), or None.
+    def get_interp():
+        # PT_INTERP points directly to a string in the file.
+        for interp in (phdr for phdr in phdrs if phdr.p_type == PT_INTERP):
+            interp = file[interp.p_offset:interp.p_offset + interp.p_filesz]
+            if interp[-1:] == '\0':
+                interp = interp[:-1]
+            return interp
+        return None
+
+    # Returns a set of strings.
+    def get_soname_and_needed():
+        # Each DT_NEEDED or DT_SONAME points to a string in the .dynstr table.
+        def GenDTStrings(tag):
+            return (extract_C_string(strtab_offset + dt.d_val)
+                    for dt in dyn if dt.d_tag == tag)
+
+        # PT_DYNAMIC points to the list of ElfNN_Dyn tags.
+        for dynamic in (phdr for phdr in phdrs if phdr.p_type == PT_DYNAMIC):
+            dyn = [elf.Dyn.read(file, dynamic.p_offset + dyn_offset)
+                   for dyn_offset in xrange(0, dynamic.p_filesz, elf.Dyn.size)]
+
+            # DT_STRTAB points to the string table's vaddr (.dynstr).
+            [strtab_vaddr] = [dt.d_val for dt in dyn if dt.d_tag == DT_STRTAB]
+
+            # Find the PT_LOAD containing the vaddr to compute the file offset.
+            [strtab_offset] = [
+                strtab_vaddr - phdr.p_vaddr + phdr.p_offset
+                for phdr in phdrs
+                if (phdr.p_type == PT_LOAD and
+                    phdr.p_vaddr <= strtab_vaddr and
+                    strtab_vaddr - phdr.p_vaddr < phdr.p_filesz)
+            ]
+
+            soname = None
+            for soname in GenDTStrings(DT_SONAME):
+                break
+
+            return soname, set(GenDTStrings(DT_NEEDED))
+        return None, set()
+
+    def get_stripped():
+        return all(
+            shdr.sh_type != SHT_SYMTAB and not name.startswith('.debug_')
+            for shdr, name in gen_sections())
+
+    def get_cpu():
+        return ELF_MACHINE_TO_CPU.get(ehdr.e_machine)
+
+    def gen_source_files():
+        # Given the file position of a CU header (starting with the
+        # beginning of the .debug_line section), return the position
+        # of the include_directories portion and the position of the
+        # next CU header.
+        def read_line_header(pos):
+            # Decode DWARF .debug_line per-CU header.
+            hdr_type = elf.dwarf2_line_header
+            hdr = hdr_type.read(file, pos)
+            assert hdr.unit_length < 0xfffffff0, (
+                "%s: 64-bit DWARF" % filename)
+            assert hdr.version in [2, 3, 4], (
+                "%s: DWARF .debug_line version %r" %
+                (filename, hdr.version))
+            if hdr.version == 4:
+                hdr_type = elf.dwarf4_line_header
+                hdr = hdr_type.read(file, pos)
+            return (pos + hdr_type.size + hdr.opcode_base - 1,
+                    pos + 4 + hdr.unit_length)
+
+        # Decode include_directories portion of DWARF .debug_line format.
+        def read_include_dirs(pos):
+            include_dirs = list(gen_strings(pos))
+            pos += sum(len(dir) + 1 for dir in include_dirs) + 1
+            return pos, include_dirs
+
+        # Decode file_paths portion of DWARF .debug_line format.
+        def gen_file_paths(start, limit):
+            while start < limit:
+                end = file.find('\0', start, limit)
+                assert end >= start, (
+                    "%s: Unterminated string at %#x" % (filename, start))
+                if start == end:
+                    break
+                name = file[start:end]
+                start = end + 1
+                # Decode 3 ULEB128s to advance start, but only use the first.
+                for i in range(3):
+                    value = 0
+                    bits = 0
+                    while start < limit:
+                        byte = ord(file[start])
+                        start += 1
+                        value |= (byte & 0x7f) << bits
+                        if (byte & 0x80) == 0:
+                            break
+                        bits += 7
+                    if i == 0:
+                        include_idx = value
+                # Ignore the fake file names the compiler leaks into the DWARF.
+                if name not in ['<stdin>', '<command-line>']:
+                    yield name, include_idx
+
+        for shdr, name in gen_sections():
+            if name == '.debug_line':
+                next = shdr.sh_offset
+                while next < shdr.sh_offset + shdr.sh_size:
+                    pos, next = read_line_header(next)
+
+                    pos, include_dirs = read_include_dirs(pos)
+                    assert pos <= next
+
+                    # 0 means relative to DW_AT_comp_dir, which should be ".".
+                    # Indices into the actual table start at 1.
+                    include_dirs.insert(0, '')
+
+                    # Decode file_paths and apply include directories.
+                    for name, i in gen_file_paths(pos, next):
+                        name = os.path.join(include_dirs[i], name)
+                        yield os.path.normpath(name)
+
+    # This closure becomes the elf_info object's `get_sources` method.
+    def lazy_get_sources():
+        # Run the generator and cache its results as a set.
+        sources_cache = set(gen_source_files())
+        # Replace the method to just return the cached set next time.
+        info.get_sources = lambda: sources_cache
+        return sources_cache
+
+    # Map in the whole file's contents and use it as a string.
+    with mmapper(filename) as mapped:
+        fd, file = mapped
+        elf = get_elf_accessor(file)
+        if elf is not None:
+            # ELF header leads to program headers.
+            ehdr = elf.Ehdr.read(file)
+            assert ehdr.e_phentsize == elf.Phdr.size, (
+                "%s: invalid e_phentsize" % filename)
+            phdrs = list(gen_phdrs(file, elf, ehdr))
+            info = elf_info(filename,
+                            get_cpu(),
+                            get_matching_notes(),
+                            get_build_id(),
+                            get_stripped(),
+                            get_interp(),
+                            *get_soname_and_needed())
+            info.elf = elf
+            info.get_sources = lazy_get_sources
+            return info
+
+    return None
+
+
+# Module public API.
+__all__ = ['cpu', 'elf_info', 'elf_note', 'get_elf_accessor', 'get_elf_info']
+
+
+def test_main_strip(filenames):
+    for filename in filenames:
+        info = get_elf_info(filename)
+        print info
+        stripped_filename = info.filename + '.ei-strip'
+        info.strip(stripped_filename)
+        print '\t%s: %u -> %u' % (stripped_filename,
+                                  os.stat(filename).st_size,
+                                  os.stat(stripped_filename).st_size)
+
+
+def test_main_get_info(filenames):
+    for filename in filenames:
+        info = get_elf_info(filename)
+        print info
+        for source in info.get_sources():
+            print '\t' + source
+
+
+# For manual testing.
+if __name__ == "__main__":
+    import sys
+    if sys.argv[1] == '-strip':
+        test_main_strip(sys.argv[2:])
+    else:
+        test_main_get_info(sys.argv[1:])
diff --git a/build/images/finalize_manifests.py b/build/images/finalize_manifests.py
new file mode 100755
index 0000000..cb05b79
--- /dev/null
+++ b/build/images/finalize_manifests.py
@@ -0,0 +1,440 @@
+#!/usr/bin/env python
+# Copyright 2017 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+"""
+This tool takes in multiple manifest files:
+ * system image and archive manifest files from each package
+ * Zircon's bootfs.manifest, optionally using a subset selected by the
+   "group" syntax (e.g. could specify just "core", or "core,misc" or
+   "core,misc,test").
+ * "auxiliary" manifests
+ ** one from the toolchain for the target libraries (libc++ et al.)
+ ** one from the build-zircon/*-ulib build, which has the Zircon ASan libraries
+ ** the unselected parts of the "main" manifests (i.e. Zircon)
+
+It emits final /boot and /system manifests used to make the actual images,
+final archive manifests used to make each package, and the build ID map.
+
+The "auxiliary" manifests just supply a pool of files that might be used to
+satisfy dependencies; their files are not included in the output a priori.
+
+The tool examines each file in its main input manifests.  If it's not an
+ELF file, it just goes into the appropriate output manifest.  If it's an
+ELF file, then the tool figures out what "variant" it is (if any), such as
+"asan" and what other ELF files it requires via PT_INTERP and DT_NEEDED.
+It then finds those dependencies and includes them in the output manifest,
+and iterates on their dependencies.  Each dependency is found either in the
+*-shared/ toolchain $root_out_dir for the same variant toolchain that built
+the root file, or among the files in auxiliary manifests (i.e. toolchain
+and Zircon libraries).  For things built in the asan variant, it finds the
+asan versions of the toolchain/Zircon libraries.
+"""
+
+from collections import namedtuple
+import argparse
+import fnmatch
+import itertools
+import manifest
+import os
+import sys
+import variant
+
+
+binary_info = variant.binary_info
+
+# An entry for a binary is (manifest.manifest_entry, elfinfo.elf_info).
+binary_entry = namedtuple('binary_entry', ['entry', 'info'])
+
+# In recursions of CollectBinaries.AddBinary, this is the type of the
+# context argument.
+binary_context = namedtuple('binary_context', [
+    'variant',
+    'soname_map',
+    'root_dependent',
+])
+
+# Each --output argument yields an output_manifest tuple.
+output_manifest = namedtuple('output_manifest', ['file', 'manifest'])
+
+# Each --binary argument yields a input_binary tuple.
+input_binary = namedtuple('input_binary', ['target_pattern', 'output_group'])
+
+
+# Collect all the binaries from auxiliary manifests into
+# a dictionary mapping entry.target to binary_entry.
+def collect_auxiliaries(manifest, examined):
+    aux_binaries = {}
+    for entry in manifest:
+        examined.add(entry.source)
+        info = binary_info(entry.source)
+        if info:
+            new_binary = binary_entry(entry, info)
+            binary = aux_binaries.setdefault(entry.target, new_binary)
+            if binary.entry.source != new_binary.entry.source:
+                raise Exception(
+                    "'%s' in both %r and %r" %
+                    (entry.target, binary.entry, entry))
+    return aux_binaries
+
+
+# Return an iterable of binary_entry for all the binaries in `manifest` and
+# `input_binaries` and their dependencies from `aux_binaries`, and an
+# iterable of manifest_entry for all the other files in `manifest`.
+def collect_binaries(manifest, input_binaries, aux_binaries, examined):
+    # As we go, we'll collect the actual binaries for the output
+    # in this dictionary mapping entry.target to binary_entry.
+    binaries = {}
+
+    # We'll collect entries in the manifest that aren't binaries here.
+    nonbinaries = []
+
+    # This maps GN toolchain (from variant.shared_toolchain) to a
+    # dictionary mapping DT_SONAME string to binary_entry.
+    soname_map_by_toolchain = {}
+
+    def rewrite_binary_group(old_binary, group_override):
+        return binary_entry(
+            old_binary.entry._replace(group=group_override),
+            old_binary.info)
+
+    def add_binary(binary, context=None, auxiliary=False):
+        # Add a binary by target name.
+        def add_auxiliary(target, required, group_override=None):
+            if group_override is None:
+                group_override = binary.entry.group
+                aux_context = context
+            else:
+                aux_context = None
+            # Look for the target in auxiliary manifests.
+            aux_binary = aux_binaries.get(target)
+            if required:
+                assert aux_binary, (
+                    "'%s' not in auxiliary manifests, needed by %r via %r" %
+                    (target, binary.entry, context.root_dependent))
+            if aux_binary:
+                add_binary(rewrite_binary_group(aux_binary, group_override),
+                           aux_context, True)
+                return True
+            return False
+
+        existing_binary = binaries.get(binary.entry.target)
+        if existing_binary is not None:
+            if existing_binary.entry.source != binary.entry.source:
+                raise Exception("%r in both %r and %r" %
+                                (binary.entry.target, existing_binary, binary))
+            # If the old record was in a later group, we still need to
+            # process all the dependencies again to promote them to
+            # the new group too.
+            if existing_binary.entry.group <= binary.entry.group:
+                return
+
+        examined.add(binary.entry.source)
+
+        # If we're not part of a recursion, discover the binary's context.
+        if context is None:
+            binary_variant, variant_file = variant.find_variant(binary.info)
+            if variant_file is not None:
+                # This is a variant that was actually built in a different
+                # place than its original name says.  Rewrite everything to
+                # refer to the "real" name.
+                binary = binary_entry(
+                    binary.entry._replace(source=variant_file),
+                    binary.info.rename(variant_file))
+                examined.add(variant_file)
+            context = binary_context(binary_variant,
+                                     soname_map_by_toolchain.setdefault(
+                                         binary_variant.shared_toolchain, {}),
+                                     binary)
+
+        binaries[binary.entry.target] = binary
+        assert binary.entry.group is not None, binary
+
+        if binary.info.soname:
+            # This binary has a SONAME, so record it in the map.
+            soname_binary = context.soname_map.setdefault(binary.info.soname,
+                                                          binary)
+            if soname_binary.entry.source != binary.entry.source:
+                raise Exception(
+                    "SONAME '%s' in both %r and %r" %
+                    (binary.info.soname, soname_binary, binary))
+            if binary.entry.group < soname_binary.entry.group:
+                # Update the record to the earliest group.
+                context.soname_map[binary.info.soname] = binary
+
+        # The PT_INTERP is implicitly required from an auxiliary manifest.
+        if binary.info.interp:
+            add_auxiliary('lib/' + binary.info.interp, True)
+
+        # The variant might require other auxiliary binaries too.
+        for variant_aux, variant_aux_group in context.variant.aux:
+            add_auxiliary(variant_aux, True, variant_aux_group)
+
+        # Handle the DT_NEEDED list.
+        for soname in binary.info.needed:
+            # The vDSO is not actually a file.
+            if soname == 'libzircon.so':
+                continue
+
+            lib = context.soname_map.get(soname)
+            if lib and lib.entry.group <= binary.entry.group:
+                # Already handled this one in the same or earlier group.
+                continue
+
+            # The DT_SONAME is libc.so, but the file is ld.so.1 on disk.
+            if soname == 'libc.so':
+                soname = 'ld.so.1'
+
+            # Translate the SONAME to a target file name.
+            target = ('lib/' +
+                      ('' if soname == context.variant.runtime
+                       else context.variant.libprefix) +
+                      soname)
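+            # For example (hypothetical names): with libprefix 'asan/',
+            # SONAME 'libfbl.so' maps to 'lib/asan/libfbl.so', while the
+            # variant runtime itself (say 'ld.so.1') maps to plain
+            # 'lib/ld.so.1'.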
+            if add_auxiliary(target, auxiliary):
+                # We found it in an existing manifest.
+                continue
+
+            # An auxiliary's dependencies must all be auxiliaries too.
+            assert not auxiliary, (
+                "missing '%s' needed by auxiliary %r via %r" %
+                (target, binary, context.root_dependent))
+
+            # It must be in the shared_toolchain output directory.
+            # Context like group is inherited from the dependent.
+            lib_entry = binary.entry._replace(
+                source=os.path.join(context.variant.shared_toolchain, soname),
+                target=target)
+
+            assert os.path.exists(lib_entry.source), (
+                "missing %r needed by %r via %r" %
+                (lib_entry, binary, context.root_dependent))
+
+            # Read its ELF info and sanity-check.
+            lib = binary_entry(lib_entry, binary_info(lib_entry.source))
+            assert lib.info and lib.info.soname == soname, (
+                "SONAME '%s' expected in %r, needed by %r via %r" %
+                (soname, lib, binary, context.root_dependent))
+
+            # Recurse.
+            add_binary(lib, context)
+
+    for entry in manifest:
+        try:
+            info = None
+            # Don't inspect data resources in the manifest. Regardless of the
+            # bits in these files, we treat them as opaque data.
+            if not entry.target.startswith('data/'):
+                info = binary_info(entry.source)
+        except IOError as e:
+            raise Exception('%s from %s' % (e, entry))
+        if info:
+            add_binary(binary_entry(entry, info))
+        else:
+            nonbinaries.append(entry)
+
+    matched_binaries = set()
+    for input_binary in input_binaries:
+        matches = fnmatch.filter(aux_binaries.iterkeys(),
+                                 input_binary.target_pattern)
+        assert matches, (
+            "--binary='%s' did not match any binaries" %
+            input_binary.target_pattern)
+        for target in matches:
+            assert target not in matched_binaries, (
+                "'%s' matched by multiple --binary patterns" % target)
+            matched_binaries.add(target)
+            add_binary(rewrite_binary_group(aux_binaries[target],
+                                            input_binary.output_group),
+                       auxiliary=True)
+
+    return binaries.itervalues(), nonbinaries
+
+
+# Take an iterable of binary_entry, and return a list of binary_entry (all
+# stripped files), a list of binary_info (all debug files), and a boolean
+# saying whether any new stripped output files were written in the process.
+def strip_binary_manifest(manifest, stripped_dir, examined):
+    # Python 2 has no `nonlocal`, so track this in a one-element list that
+    # the nested make_debug_file below can update.
+    new_output = [False]
+
+    def find_debug_file(filename):
+        # In the Zircon makefile build, the file to be installed is called
+        # foo.strip and the unstripped file is called foo.  In the GN build,
+        # the file to be installed is called foo and the unstripped file has
+        # the same name in the exe.unstripped or lib.unstripped subdirectory.
+        if filename.endswith('.strip'):
+            debugfile = filename[:-6]
+        else:
+            dir, file = os.path.split(filename)
+            if file.endswith('.so') or '.so.' in file:
+                subdir = 'lib.unstripped'
+            else:
+                subdir = 'exe.unstripped'
+            debugfile = os.path.join(dir, subdir, file)
+            while not os.path.exists(debugfile):
+                # For dir/foo/bar, if dir/foo/exe.unstripped/bar
+                # didn't exist, try dir/exe.unstripped/foo/bar.
+                parent, dir = os.path.split(dir)
+                if not parent or not dir:
+                    # Last resort: look for the subdirectory at the top level.
+                    debugfile = os.path.join(subdir, filename)
+                    if not os.path.exists(debugfile):
+                        return None
+                    break
+                dir, file = parent, os.path.join(dir, file)
+                debugfile = os.path.join(dir, subdir, file)
+        debug = binary_info(debugfile)
+        assert debug, ("Debug file '%s' for '%s' is invalid" %
+                       (debugfile, filename))
+        examined.add(debugfile)
+        return debug
+
+    # The toolchain-supplied shared libraries, and Go binaries, are
+    # delivered unstripped.  For these, strip the binary right here and
+    # update the manifest entry to point to the stripped file.
+    def make_debug_file(entry, info):
+        debug = info
+        stripped = os.path.join(stripped_dir, entry.target)
+        dir = os.path.dirname(stripped)
+        if not os.path.isdir(dir):
+            os.makedirs(dir)
+        if info.strip(stripped):
+            new_output[0] = True
+        info = binary_info(stripped)
+        assert info, ("Stripped file '%s' for '%s' is invalid" %
+                      (stripped, debug.filename))
+        examined.add(debug.filename)
+        examined.add(stripped)
+        return entry._replace(source=stripped), info, debug
+
+    stripped_manifest = []
+    debug_list = []
+    for entry, info in manifest:
+        assert entry.source == info.filename
+        if info.stripped:
+            debug = find_debug_file(info.filename)
+        else:
+            entry, info, debug = make_debug_file(entry, info)
+        stripped_manifest.append(binary_entry(entry, info))
+        if debug is None:
+            print 'WARNING: no debug file found for %s' % info.filename
+            continue
+        assert debug.build_id, "'%s' has no build ID" % debug.filename
+        assert not debug.stripped, "'%s' is stripped" % debug.filename
+        assert info == debug._replace(filename=info.filename, stripped=True), (
+            "Debug file mismatch: %r vs %r" % (info, debug))
+        debug_list.append(debug)
+
+    return stripped_manifest, debug_list, new_output[0]
+
+
+def emit_manifests(args, selected, unselected, input_binaries):
+    def update_file(file, contents, force=False):
+        if (not force and
+            os.path.exists(file) and
+            os.path.getsize(file) == len(contents)):
+            with open(file, 'r') as f:
+                if f.read() == contents:
+                    return
+        with open(file, 'w') as f:
+            f.write(contents)
+
+    # The name of every file we examine to make decisions goes into this set.
+    examined = set(args.manifest)
+
+    # Collect all the inputs and reify.
+    aux_binaries = collect_auxiliaries(unselected, examined)
+    binaries, nonbinaries = collect_binaries(selected, input_binaries,
+                                             aux_binaries, examined)
+
+    # Prepare to collate groups.
+    outputs = [output_manifest(file, []) for file in args.output]
+
+    # Finalize the output binaries.  If stripping wrote any new/changed files,
+    # then force an update of the manifest file even if it's identical.  The
+    # manifest file's timestamp is all GN/Ninja can see to tell whether this
+    # script touched its outputs, and GN/Ninja doesn't know that the
+    # stripped files are implicit outputs (there's no such thing as a depfile
+    # for outputs, only for inputs).
+    binaries, debug_files, force_update = strip_binary_manifest(
+        binaries, args.stripped_dir, examined)
+
+    # Collate groups.
+    for entry in itertools.chain((binary.entry for binary in binaries),
+                                 nonbinaries):
+        outputs[entry.group].manifest.append(entry._replace(group=None))
+
+    all_binaries = {binary.info.build_id: binary.entry for binary in binaries}
+    all_debug_files = {info.build_id: info for info in debug_files}
+
+    # Emit each primary manifest.
+    for output in outputs:
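+        # Remember one of the output files to serve as the depfile target.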
+        depfile_output = output.file
+        # Sort so that functionally identical output is textually
+        # identical.
+        output.manifest.sort(key=lambda entry: entry.target)
+        update_file(output.file,
+                    manifest.format_manifest_file(output.manifest),
+                    force_update)
+
+    # Emit the build ID list.
+    # Sort so that functionally identical output is textually identical.
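+    # Each output line is '<build-id> <absolute path to unstripped file>'.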
+    debug_files = sorted(all_debug_files.itervalues(),
+                         key=lambda info: info.build_id)
+    update_file(args.build_id_file, ''.join(
+        info.build_id + ' ' + os.path.abspath(info.filename) + '\n'
+        for info in debug_files))
+
+    # Emit the depfile.
+    if args.depfile:
+        with open(args.depfile, 'w') as f:
+            f.write(depfile_output + ':')
+            for file in sorted(examined):
+                f.write(' ' + file)
+            f.write('\n')
+
+
+class input_binary_action(argparse.Action):
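+    # Associate each --binary pattern with the index of the most recent
+    # --output argument, so matched binaries land in that output group.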
+    def __call__(self, parser, namespace, values, option_string=None):
+        binaries = getattr(namespace, self.dest, None)
+        if binaries is None:
+            binaries = []
+            setattr(namespace, self.dest, binaries)
+        outputs = getattr(namespace, 'output', None)
+        output_group = len(outputs) - 1
+        binaries.append(input_binary(values, output_group))
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(description='''
+Massage manifest files from the build to produce images.
+''',
+        epilog='''
+The --cwd and --group options apply to subsequent --manifest arguments.
+Each input --manifest is assigned to the preceding --output argument file.
+Any input --manifest that precedes all --output arguments
+just supplies auxiliary files implicitly required by other (later) input
+manifests, but does not add all its files to any --output manifest.  This
+is used for shared libraries and the like.
+''')
+    parser.add_argument('--build-id-file', required=True,
+                        metavar='FILE',
+                        help='Output build ID list')
+    parser.add_argument('--depfile',
+                        metavar='DEPFILE',
+                        help='Ninja depfile to write')
+    parser.add_argument('--binary', action=input_binary_action, default=[],
+                        metavar='PATH',
+                        help='Take matching binaries from auxiliary manifests')
+    parser.add_argument('--stripped-dir', required=True,
+                        metavar='STRIPPED_DIR',
+                        help='Directory to hold stripped copies when needed')
+    return manifest.common_parse_args(parser)
+
+
+def main():
+    args = parse_args()
+    emit_manifests(args, args.selected, args.unselected, args.binary)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/build/images/fvm.gni b/build/images/fvm.gni
new file mode 100644
index 0000000..675a6f9
--- /dev/null
+++ b/build/images/fvm.gni
@@ -0,0 +1,83 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/fuchsia/zircon.gni")
+
+declare_args() {
+  # The size in bytes of the FVM partition image to create. Normally this is
+  # computed to be just large enough to fit the blob and data images. The
+  # default value is "", which means the image is sized to fit its inputs.
+  # Specifying a size that is too small will result in a build failure.
+  fvm_image_size = ""
+
+  # The FVM partition image's "slice size". The slice size is the minimum
+  # unit of allocation for a chunk of a partition stored within FVM. A very
+  # small slice size may lead to decreased throughput; a very large slice
+  # size may lead to wasted space. The default of 8388608 bytes (8 MiB)
+  # favors conserving space over performance.
+  fvm_slice_size = "8388608"
+}
+
+# Build an FVM partition
+#
+# Parameters
+#
+#   args (optional)
+#     [list of strings] Additional arguments to pass to the FVM tool.
+#
+#   output_name (required)
+#     [string] The filename to produce.
+#
+#   partitions (required)
+#     [list of scopes] A list of partitions to be included.  Each scope has:
+#       dep (required)
+#         [label] The label must be defined earlier in the same file.
+#       type (required)
+#         [string] A partition type accepted by fvm (e.g. blob, data, data-unsafe)
+#
+#   deps (optional)
+#   testonly (optional)
+#   visibility (optional)
+#     Same as for any GN `action()` target.
+template("generate_fvm") {
+  zircon_tool_action(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "testonly",
+                             "deps",
+                             "visibility",
+                           ])
+    tool = "fvm"
+    outputs = [
+      invoker.output_name,
+    ]
+    args = rebase_path(outputs, root_build_dir)
+    if (defined(invoker.args)) {
+      args += invoker.args
+    }
+    sources = []
+    if (!defined(deps)) {
+      deps = []
+    }
+    foreach(part, invoker.partitions) {
+      args += [ "--${part.type}" ]
+      deps += [ part.dep ]
+      sources += get_target_outputs(part.dep)
+      args += rebase_path(get_target_outputs(part.dep), root_build_dir)
+    }
+  }
+}
+
+fvm_slice_args = [
+  "--slice",
+  fvm_slice_size,
+]
+
+fvm_create_args = [ "create" ] + fvm_slice_args
+
+fvm_sparse_args = [
+                    "sparse",
+                    "--compress",
+                    "lz4",
+                  ] + fvm_slice_args
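+
+# A minimal usage sketch (the ":blob.blk" label and output path are
+# hypothetical), assuming an earlier target that produces a blobfs image:
+#
+#   generate_fvm("fvm.blk") {
+#     output_name = "$root_out_dir/fvm.blk"
+#     args = fvm_create_args
+#     partitions = [
+#       {
+#         dep = ":blob.blk"
+#         type = "blob"
+#       },
+#     ]
+#   }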
diff --git a/build/images/guest/BUILD.gn b/build/images/guest/BUILD.gn
new file mode 100644
index 0000000..6812232
--- /dev/null
+++ b/build/images/guest/BUILD.gn
@@ -0,0 +1,241 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/fuchsia/zbi.gni")
+import("//build/config/fuchsia/zircon.gni")
+import("//build/images/manifest.gni")
+import("//build/package.gni")
+import("//garnet/build/pkgfs.gni")
+
+guest_packages = [
+  get_label_info("//build/images:shell-commands", "label_no_toolchain"),
+  get_label_info("//garnet/bin/appmgr", "label_no_toolchain"),
+  get_label_info("//garnet/bin/guest/integration:guest_integration_tests_utils",
+                 "label_no_toolchain"),
+  get_label_info("//garnet/bin/guest/pkg/zircon_guest:services_config",
+                 "label_no_toolchain"),
+  get_label_info("//garnet/bin/run", "label_no_toolchain"),
+  get_label_info("//garnet/bin/sysmgr", "label_no_toolchain"),
+  get_label_info("//garnet/bin/trace", "label_no_toolchain"),
+  get_label_info("//garnet/bin/vsock_service:vsock_service",
+                 "label_no_toolchain"),
+  get_label_info(pkgfs_package_label, "label_no_toolchain"),
+]
+
+# The pkgsvr index is a manifest mapping `package_name/package_version` to
+# the merkleroot of the package's meta.far file.
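+# Each index line has the form 'package_name/package_version=merkleroot',
+# where the merkleroot is the hex digest of the package's meta.far.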
+action("pkgsvr_index") {
+  visibility = [ ":*" ]
+  testonly = true
+  script = "//build/images/manifest.py"
+  args = [ "--contents" ]
+  outputs = [
+    "$target_out_dir/$target_name",
+  ]
+  args += [ "--output=" + rebase_path(outputs[0], root_build_dir) ]
+  sources = []
+  deps = []
+  foreach(pkg_label, guest_packages) {
+    # Find the response file written by package().
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_rspfile = "$pkg_target_out_dir/${pkg_target_name}.pkgsvr_index.rsp"
+    deps += [ "${pkg_label}.pkgsvr_index.rsp" ]
+    sources += [ pkg_rspfile ]
+    args += [ "@" + rebase_path(pkg_rspfile, root_build_dir) ]
+  }
+}
+
+boot_manifest = "$target_out_dir/boot.manifest"
+generate_manifest("guest.manifest") {
+  visibility = [ ":*" ]
+  testonly = true
+
+  bootfs_manifest = boot_manifest
+  bootfs_zircon_groups = "misc,test"
+
+  args = []
+  deps = []
+  sources = []
+  foreach(pkg_label, guest_packages) {
+    # Find the response file written by package().
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_system_rsp = "$pkg_target_out_dir/${pkg_target_name}.system.rsp"
+    deps += [ pkg_label ]
+    sources += [ pkg_system_rsp ]
+    args += [ "@" + rebase_path(pkg_system_rsp, root_build_dir) ]
+  }
+
+  json = "guest_meta_package.json"
+  sources += [ json ]
+  args += [ "--entry=meta/package=" + rebase_path(json, root_build_dir) ]
+
+  # Add the static packages (pkgsvr) index.
+  deps += [ ":pkgsvr_index" ]
+  pkgsvr_index = "$target_out_dir/pkgsvr_index"
+  sources += [ pkgsvr_index ]
+  args += [ "--entry=data/static_packages=" +
+            rebase_path(pkgsvr_index, root_build_dir) ]
+}
+
+# Generate, sign, and seal the package file.
+pm_build_package("guest.meta") {
+  visibility = [ ":*" ]
+  testonly = true
+  manifest = ":guest.manifest"
+}
+
+guest_blob_manifest = "$root_build_dir/guest_blob.manifest"
+action("guest_blob.manifest") {
+  visibility = [ ":*" ]
+  testonly = true
+  deps = [
+    ":guest.manifest",
+    ":guest.meta",
+  ]
+  outputs = [
+    guest_blob_manifest,
+  ]
+  depfile = guest_blob_manifest + ".d"
+  guest_manifest_outputs = get_target_outputs(":guest.manifest")
+  guest_manifest = guest_manifest_outputs[0]
+  inputs = [
+    guest_manifest,
+  ]
+  script = "//build/images/blob_manifest.py"
+  args = [ "@{{response_file_name}}" ]
+  response_file_contents = [
+    "--depfile=" + rebase_path(depfile, root_build_dir),
+    "--output=" + rebase_path(guest_blob_manifest, root_build_dir),
+    "--input=" +
+        rebase_path("$target_out_dir/guest.meta/blobs.json", root_build_dir),
+  ]
+  foreach(pkg_label, guest_packages) {
+    pkg_target_name = get_label_info(pkg_label, "name")
+    pkg_target_out_dir = get_label_info(pkg_label, "target_out_dir")
+    pkg_blobs_rsp = "$pkg_target_out_dir/${pkg_target_name}.blobs.rsp"
+    deps += [ "${pkg_label}.blobs.rsp" ]
+    inputs += [ pkg_blobs_rsp ]
+    response_file_contents +=
+        [ "@" + rebase_path(pkg_blobs_rsp, root_build_dir) ]
+  }
+}
+
+zircon_tool_action("guest_blob.blk") {
+  visibility = [ ":*" ]
+  testonly = true
+  deps = [
+    ":guest_blob.manifest",
+  ]
+  blob_image_path = "$target_out_dir/$target_name"
+  outputs = [
+    blob_image_path,
+  ]
+  depfile = blob_image_path + ".d"
+  inputs = [
+    guest_blob_manifest,
+  ]
+  tool = "blobfs"
+  args = [
+    "--depfile",
+    rebase_path(blob_image_path, root_build_dir),
+    "create",
+    "--manifest",
+    rebase_path(guest_blob_manifest, root_build_dir),
+  ]
+}
+
+zircon_tool_action("guest_fvm") {
+  testonly = true
+  fvm = "$root_out_dir/guest_fvm.blk"
+  outputs = [
+    fvm,
+  ]
+  tool = "fvm"
+  sources = []
+  deps = [
+    ":guest_blob.blk",
+  ]
+  foreach(label, deps) {
+    sources += get_target_outputs(label)
+  }
+  args = [
+           rebase_path(fvm, root_build_dir),
+           "create",
+           "--blob",
+         ] + rebase_path(sources, root_build_dir)
+}
+
+# Generate the /boot/config/devmgr file.
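+# The generated file contains lines like (hypothetical values):
+#   devmgr.require-system=true
+#   zircon.system.pkgfs.cmd=bin/pkgsvr+<system-image-merkleroot>
+#   zircon.system.pkgfs.file.lib/ld.so.1=<blob-merkleroot>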
+action("devmgr_config.txt") {
+  visibility = [ ":*" ]
+  testonly = true
+  script = "//build/images/manifest.py"
+  outputs = [
+    "$target_out_dir/$target_name",
+  ]
+  pkgfs = "bin/" + pkgfs_binary_name
+  pkgfs_label = pkgfs_package_label + ".meta"
+  pkgfs_pkg_out_dir = get_label_info(pkgfs_label, "target_out_dir") + "/" +
+                      get_label_info(pkgfs_label, "name")
+  pkgfs_blob_manifest = "$pkgfs_pkg_out_dir/meta/contents"
+  system_image_merkleroot = "$target_out_dir/guest.meta/meta.far.merkle"
+
+  deps = [
+    ":guest.manifest",
+    ":guest.meta",
+    pkgfs_label,
+  ]
+  sources = [
+    boot_manifest,
+    pkgfs_blob_manifest,
+    system_image_merkleroot,
+  ]
+
+  args = [
+    "--output=" + rebase_path(outputs[0], root_build_dir),
+
+    # Start with the fixed options.
+    "--entry=devmgr.require-system=true",
+
+    # Add the pkgfs command line, embedding the merkleroot of the system image.
+    "--contents",
+    "--rewrite=*=zircon.system.pkgfs.cmd={target}+{source}",
+    "--entry=${pkgfs}=" + rebase_path(system_image_merkleroot, root_build_dir),
+    "--no-contents",
+    "--reset-rewrite",
+
+    # Embed the pkgfs blob manifest with the "zircon.system.pkgfs.file."
+    # prefix on target file names.
+    "--rewrite=*=zircon.system.pkgfs.file.{target}={source}",
+    "--manifest=" + rebase_path(pkgfs_blob_manifest, root_build_dir),
+    "--reset-rewrite",
+    "--include=bin/devhost",
+    "--rewrite=bin/devhost=devhost.asan.strict=false",
+    "--manifest=" + rebase_path(boot_manifest, root_build_dir),
+  ]
+}
+
+zbi("guest") {
+  testonly = true
+  deps = [
+    ":devmgr_config.txt",
+    ":guest.manifest",
+  ]
+  inputs = [
+    "${zircon_build_dir}/zircon.zbi",
+    boot_manifest,
+  ]
+  manifest = [
+    {
+      outputs = [
+        "config/devmgr",
+      ]
+      sources = get_target_outputs(":devmgr_config.txt")
+    },
+  ]
+  cmdline = []
+  cmdline_inputs = []
+}
diff --git a/build/images/guest/guest_meta_package.json b/build/images/guest/guest_meta_package.json
new file mode 100644
index 0000000..90415a1
--- /dev/null
+++ b/build/images/guest/guest_meta_package.json
@@ -0,0 +1 @@
+{"name": "guest_image", "version": "0"}
diff --git a/build/images/json.gni b/build/images/json.gni
new file mode 100644
index 0000000..4b18920
--- /dev/null
+++ b/build/images/json.gni
@@ -0,0 +1,112 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+# Write an image manifest at `gn gen` time.
+#
+# Parameters
+#
+#   outputs (required)
+#     [list of two files] The first output is JSON, the second sh variables.
+#
+#   images (required)
+#     [list of scopes] See below.
+#
+# Each scope in $images contains:
+#
+#   default (optional)
+#     [bool] Include the image in the default group (default: true).
+#     Very large images that are rarely used should not be in the
+#     default group.
+#
+#   deps (required)
+#     [list of labels] Targets that generate the image file.
+#     If `sources` is not specified, this must be an action in this file,
+#     and that action must produce a valid ZBI as the first declared output.
+#
+#   installer (optional)
+#     [string] Put this image into the installer image under this name.
+#
+#   public (optional)
+#     [list of strings] Each is "IMAGE_{NAME}_{TYPE}" where `TYPE` can be:
+#     `SPARSE` (sparse FVM), `RAW` (block image for any FS), `ZBI`
+#     (bootable zircon image), `RAM` (ramdisk without kernel--obsolete),
+#     `VBOOT` (ZBI in a vboot container).  "IMAGE_{NAME}_{TYPE}={FILE}"
+#     will be written out (with `FILE` relative to `root_build_dir`), to be
+#     consumed by //scripts and various tools to find the relevant images.
+#
+#   json (optional)
+#     [scope] Content for images.json; `path` is added automatically.
+#     Other fields should match TODO(mcgrathr): some JSON schema.
+#     Standard fields not properly documented in a schema yet:
+#       path (required)
+#         [file] Path relative to $root_build_dir where the image is found.
+#         Also serves as the Ninja command-line target argument to build it.
+#       name (required)
+#         [string] A simple identifier for the image.
+#       type (required)
+#         [string] Type of image: "zbi", "blk", "kernel", "vboot"
+#       bootserver_pave (optional)
+#         [string] The command-line switch to `bootserver` that should
+#         precede this file's name.  The presence of this field implies
+#         that this image is needed for paving via Zedboot.  The value ""
+#         means this image is the primary ZBI, which is not preceded by a
+#         switch on the `bootserver` command line.
+#       bootserver_netboot (optional)
+#         [string] The command-line switch to `bootserver` that should
+#         precede this file's name.  The presence of this field implies
+#         that this image is needed for netbooting via Zedboot.  The value ""
+#         means this image is the primary ZBI, which is not preceded by a
+#         switch on the `bootserver` command line.
+#       archive (optional; default: false)
+#         [boolean] This image should be included in a build archive.
+#         Implied by the presence of `bootserver`.
+#
+#   sdk (optional)
+#     [string] Put this image into the SDK under this name.
+#
+#   sources (optional)
+#     [list of files] The image file.
+#
+#   updater (optional)
+#     [string] Put this image into the update manifest under this name.
+#
+template("write_images_manifest") {
+  not_needed([ "target_name" ])  # Seriously.
+  images_json = []
+  image_paths = []
+  foreach(image, invoker.images) {
+    image_sources = []
+    if (defined(image.sources)) {
+      image_sources += image.sources
+    } else {
+      foreach(label, image.deps) {
+        image_sources += get_target_outputs(label)
+      }
+    }
+    image_file = rebase_path(image_sources[0], root_build_dir)
+
+    if (defined(image.json)) {
+      images_json += [
+        {
+          forward_variables_from(image.json, "*")
+          path = image_file
+        },
+      ]
+    }
+
+    if (defined(image.public)) {
+      foreach(name, image.public) {
+        image_paths += [ "${name}=${image_file}" ]
+      }
+    }
+  }
+
+  outputs = invoker.outputs
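+  # GN has no list-length operation; this assert checks that `outputs`
+  # contains exactly two elements.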
+  assert(outputs == [
+           outputs[0],
+           outputs[1],
+         ])
+  write_file(outputs[0], images_json, "json")
+  write_file(outputs[1], image_paths)
+}
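+
+# A minimal usage sketch (target name and image entry are hypothetical):
+#
+#   write_images_manifest("images-manifest") {
+#     outputs = [
+#       "$root_build_dir/images.json",
+#       "$root_build_dir/image_paths.sh",
+#     ]
+#     images = [
+#       {
+#         deps = [ ":netboot" ]
+#         public = [ "IMAGE_NETBOOT_ZBI" ]
+#         json = {
+#           name = "netboot"
+#           type = "zbi"
+#         }
+#       },
+#     ]
+#   }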
diff --git a/build/images/manifest.gni b/build/images/manifest.gni
new file mode 100644
index 0000000..e13f2cf
--- /dev/null
+++ b/build/images/manifest.gni
@@ -0,0 +1,261 @@
+# Copyright 2018 The Fuchsia Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import("//build/config/clang/clang.gni")
+import("//build/config/fuchsia/zircon.gni")
+
+declare_args() {
+  # Manifest files describing target libraries from toolchains.
+  # Can be either // source paths or absolute system paths.
+  toolchain_manifests = [
+    # clang_prefix is relative to root_build_dir.
+    rebase_path("${clang_prefix}/../lib/${clang_target}.manifest",
+                "",
+                root_build_dir),
+  ]
+
+  # Manifest files describing extra libraries from a Zircon build
+  # not included in `zircon_boot_manifests`, such as an ASan build.
+  # Can be either // source paths or absolute system paths.
+  #
+  # Since Zircon manifest files are relative to a Zircon source directory
+  # rather than to the directory containing the manifest, these are assumed
+  # to reside in a build directory that's a direct subdirectory of the
+  # Zircon source directory and thus their contents can be taken as
+  # relative to `get_path_info(entry, "dir") + "/.."`.
+  # TODO(mcgrathr): Make Zircon manifests self-relative too and then
+  # merge this and toolchain_manifests into generic aux_manifests.
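+  # When the main Zircon build is an ASan build, the auxiliary manifests
+  # supply the non-ASan (ABI) libraries; otherwise they supply the ASan ones.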
+  if (zircon_use_asan) {
+    zircon_aux_manifests = [ "$zircon_build_abi_dir/bootfs.manifest" ]
+  } else {
+    zircon_aux_manifests = [ "$zircon_asan_build_dir/bootfs.manifest" ]
+  }
+
+  # Manifest files describing files to go into the `/boot` filesystem.
+  # Can be either // source paths or absolute system paths.
+  # `zircon_boot_groups` controls which files are actually selected.
+  #
+  # Since Zircon manifest files are relative to a Zircon source directory
+  # rather than to the directory containing the manifest, these are assumed
+  # to reside in a build directory that's a direct subdirectory of the
+  # Zircon source directory and thus their contents can be taken as
+  # relative to `get_path_info(entry, "dir") + "/.."`.
+  zircon_boot_manifests = [ "$zircon_build_dir/bootfs.manifest" ]
+
+  # Extra args to globally apply to the manifest generation script.
+  extra_manifest_args = []
+}
+
+# Action target that generates a response file in GN's "shlex" format.
+#
+# Parameters
+#
+#   output_name (optional, default: target_name)
+#     [path] Response file to write (if relative, relative to target_out_dir).
+#
+#   response_file_contents (required)
+#   data_deps (optional)
+#   deps (optional)
+#   public_deps (optional)
+#   testonly (optional)
+#   visibility (optional)
+#     Same as for any GN `action()` target.
+#
+template("generate_response_file") {
+  action(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "data_deps",
+                             "deps",
+                             "public_deps",
+                             "output_name",
+                             "response_file_contents",
+                             "testonly",
+                             "visibility",
+                           ])
+    if (!defined(output_name)) {
+      output_name = target_name
+    }
+    outputs = [
+      "$target_out_dir/$output_name",
+    ]
+    assert(
+        defined(response_file_contents),
+        "generate_response_file(\"${target_name}\") must define response_file_contents")
+
+    if (response_file_contents == []) {
+      # GN doesn't allow an empty response file.
+      script = "/bin/cp"
+      args = [
+        "-f",
+        "/dev/null",
+      ]
+    } else {
+      script = "/bin/ln"
+      args = [
+        "-f",
+        "{{response_file_name}}",
+      ]
+    }
+    args += rebase_path(outputs, root_build_dir)
+  }
+}
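+
+# A minimal usage sketch (target name and contents are hypothetical):
+#
+#   generate_response_file("foo.rsp") {
+#     response_file_contents = [
+#       "--flag",
+#       "value",
+#     ]
+#   }
+#
+# This produces "$target_out_dir/foo.rsp" holding those arguments.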
+
+# Action target that generates a manifest file in the `target=/abs/file`
+# format used by `zbi`, `blobfs`, etc.  ELF files in the manifest have
+# their dynamic linking details examined and other necessary ELF files
+# implicitly added to the manifest.  All such files have their build IDs
+# and unstripped files recorded in a build ID map (`ids.txt` file).
+# Outputs: $target_out_dir/$target_name, $target_out_dir/$target_name.ids.txt
+#
+# Parameters
+#
+#   args (required)
+#     [list of strings] Additional arguments to finalize_manifests.py;
+#     `sources` should list any files directly referenced.
+#
+#   bootfs_manifest (optional)
+#     [string] Output a separate manifest file for the Zircon BOOTFS.  This
+#     manifest will get the `bootfs_zircon_groups` selections, while the
+#     main manifest will get `zircon_groups` and the other entries
+#     indicated by `args`.  The main output manifest will assume that
+#     libraries from the BOOTFS are available and not duplicate them.
+#
+#   bootfs_zircon_groups (required with `bootfs_manifest`)
+#     [string] Comma-separated list of Zircon manifest groups to include
+#     in `bootfs_manifest`.
+#
+#   zircon_groups (optional, default: "")
+#     [string] Comma-separated list of Zircon manifest groups to include.
+#     If this is "", then the Zircon manifest only provides binaries
+#     to satisfy dependencies.
+#
+#   deps (optional)
+#   sources (optional)
+#   testonly (optional)
+#   visibility (optional)
+#     Same as for any GN `action()` target.
+#
+template("generate_manifest") {
+  assert(defined(invoker.args),
+         "generate_manifest(\"${target_name}\") requires args")
+  action(target_name) {
+    forward_variables_from(invoker,
+                           [
+                             "deps",
+                             "public_deps",
+                             "sources",
+                             "testonly",
+                             "visibility",
+                             "zircon_groups",
+                           ])
+    if (!defined(sources)) {
+      sources = []
+    }
+    if (!defined(zircon_groups)) {
+      zircon_groups = ""
+    }
+    manifest_file = "$target_out_dir/$target_name"
+    depfile = "${manifest_file}.d"
+    build_id_file = "${manifest_file}.ids.txt"
+    stripped_dir = "${manifest_file}.stripped"
+
+    script = "//build/images/finalize_manifests.py"
+    inputs = rebase_path([
+                           "elfinfo.py",
+                           "manifest.py",
+                           "variant.py",
+                         ],
+                         "",
+                         "//build/images")
+    outputs = [
+      manifest_file,
+      build_id_file,
+    ]
+    args = extra_manifest_args + [
+             "--depfile=" + rebase_path(depfile, root_build_dir),
+             "--build-id-file=" + rebase_path(build_id_file, root_build_dir),
+             "--stripped-dir=" + rebase_path(stripped_dir, root_build_dir),
+             "@{{response_file_name}}",
+           ]
+    response_file_contents = []
+
+    # First the toolchain and Zircon manifests are pure auxiliaries:
+    # they just supply libraries that might satisfy dependencies.
+    sources += toolchain_manifests
+    foreach(manifest, toolchain_manifests) {
+      manifest_cwd = get_path_info(rebase_path(manifest), "dir")
+      response_file_contents += [
+        "--cwd=$manifest_cwd",
+        "--manifest=" + rebase_path(manifest),
+      ]
+    }
+    sources += zircon_aux_manifests + zircon_boot_manifests
+    foreach(manifest, zircon_aux_manifests + zircon_boot_manifests) {
+      manifest_cwd = get_path_info(rebase_path(manifest), "dir") + "/.."
+      response_file_contents += [
+        "--cwd=$manifest_cwd",
+        "--manifest=" + rebase_path(manifest),
+      ]
+    }
+
+    manifests = []
+
+    if (defined(invoker.bootfs_manifest)) {
+      assert(
+          defined(invoker.bootfs_zircon_groups),
+          "generate_manifest with bootfs_manifest needs bootfs_zircon_groups")
+      outputs += [ invoker.bootfs_manifest ]
+      manifests += [
+        {
+          output = invoker.bootfs_manifest
+          groups = invoker.bootfs_zircon_groups
+        },
+      ]
+
+      # Elide both devhost variants from the Zircon input m