Fuchsia developers seek guidance on what tests are actually necessary to validate the software they write. This includes component authors, driver authors, and anyone who publishes or maintains aspects of the API and ABI surface area of Fuchsia.
We generally write tests to detect things that may go wrong with our code, and different types of tests provide coverage for different potential problems.
This document describes the kinds of tests that provide different types of coverage.
The following sections categorize types of tests in terms of the following criteria:
The table below provides an overview of the test types, categorized by which parts of Fuchsia (source code, components, drivers, or protocols) need each type of test.
| Test type | Source Code | Components | Drivers | Protocols |
| --- | --- | --- | --- | --- |
| Unit | All | - | - | - |
| Hermetic integration | - | All | Some | - |
| Non-hermetic integration | - | Few | Few | - |
| Compatibility (CTF) | - | Some | Some | All (SDK) |
| Spec Conformance | - | Some | Some | Some |
| On-device System Validation | - | Some | Some | Some |
| Host-driven System Automation (Lacewing) | - | Some | Some | Some |
All code should be covered by the smallest possible test that is able to validate functionality. Many functions and classes in the languages we use can have their contracts tested directly by small, focused unit tests.
Unit Testing is a well understood area, and for the most part Fuchsia developers may use any test framework for their language of choice, with a few notable differences:

- Tests are run on Fuchsia devices using `ffx test`.
- Coverage tooling such as `ffx coverage` helps automate the processing of coverage data. Coverage information is surfaced in Gerrit and the Fuchsia Coverage dashboard for in-tree tests, while OOT test executors will need their own scripts to process the coverage output of tests.

Learn how to write driver unit tests in the Driver unit testing quick start.
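As a minimal illustration of a small, focused unit test, here is a hypothetical `reverse_string` function tested with plain Python `unittest`; the function and test names are invented for this sketch, and the same shape applies in any supported language:

```python
import unittest


def reverse_string(s: str) -> str:
    """Hypothetical unit under test: returns the input reversed."""
    return s[::-1]


class ReverseStringTest(unittest.TestCase):
    """A small, focused unit test that checks the function's contract directly."""

    def test_reverse(self):
        self.assertEqual(reverse_string("abc"), "cba")

    def test_round_trip(self):
        # Reversing twice must return the original string.
        self.assertEqual(reverse_string(reverse_string("fuchsia")), "fuchsia")


# Run the suite programmatically so the example also works outside a test runner.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ReverseStringTest)
)
```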
While unit tests are small and focused on specific pieces of business logic, Integration Tests are focused on testing the interactions between software modules as a group.
Integration testing is another well understood area, but Fuchsia provides features for testing that are radically different from what exists on other operating systems. In particular, Fuchsia's Component Framework enforces explicit usage and routing of “capabilities” that builds on top of Zircon's capability-based security principles. The end result is that Fuchsia tests can be provably hermetic and recursively symmetric. The implications of these properties have been referred to as “Fuchsia's testing superpower.” Tests may be perfectly isolated from one another, and components may be nested arbitrarily. This means that one or more entire Fuchsia subsystems may be run in isolated test contexts (for example, DriverTestRealm and Test UI Stack).
All tests on Fuchsia are hermetic by default, which means they automatically benefit from provable hermeticity and the ability to arbitrarily nest dependencies.
Hermetic Integration Tests simply build on top of this foundation to run a component or driver in an isolated test environment and interact with it using FIDL protocols. These tests cover the following scenarios for a component/driver:
The recommended pattern for writing hermetic integration tests is Test Realm Factory (TRF), which prepares the test for reuse in the different types of tests below. TRF tests have three parts:

- a Test Suite, containing the test cases;
- a RealmFactory, which constructs Test Realms on demand; and
- the Test Realm, containing the components under test.
Each test case in the Test Suite calls methods on the RealmFactory to create a Test Realm, interacts with that realm over the capabilities it exposes, and asserts on the responses it receives. The TRF docs provide instructions for using the testgen tool to automatically create skeletons of the above to be filled out with details.
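The flow above can be sketched conceptually in plain Python. This is not the real FIDL-based API; every name here (`RealmFactory`, `create_realm`, the exposed `fuchsia.example.Echo` capability) is illustrative only:

```python
# Conceptual sketch of the Test Realm Factory (TRF) pattern. The real pattern
# uses FIDL protocols and Fuchsia components; all names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class TestRealm:
    """Stands in for an isolated realm of components under test."""
    exposed: dict = field(default_factory=dict)

    def connect(self, capability: str):
        # In a real realm this would open a connection to an exposed capability.
        return self.exposed[capability]


class RealmFactory:
    """Stands in for the RealmFactory: each call builds a fresh, isolated realm."""

    def create_realm(self) -> TestRealm:
        # Hypothetical Echo implementation exposed by the realm under test.
        return TestRealm(exposed={"fuchsia.example.Echo": lambda s: s})


# The Test Suite: each test case asks the factory for its own isolated realm,
# interacts with it over exposed capabilities, and asserts on the responses.
def test_echo_returns_input():
    realm = RealmFactory().create_realm()
    echo = realm.connect("fuchsia.example.Echo")
    assert echo("hello") == "hello"


test_echo_returns_input()
```

Because every test case builds its own realm, no state leaks between cases, which is the property the real TRF pattern provides hermetically.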
Hermetic Integration Tests using TRF form the foundation for many of the types of tests below.
All components and many drivers require an integration test, and those tests should use TRF.
While Hermetic Integration Tests are what we should strive for, certain tests are difficult to write hermetically, often because they depend on components that cannot yet be hermetically packaged. For instance, we do not yet have a high-fidelity mock for Vulkan, so we allow certain tests access to the system-wide Vulkan capabilities.
Tests that access system capabilities are called Non-hermetic Integration Tests. While they are technically not hermetic, they should still try to be as isolated as possible:
Non-hermetic integration tests must be run in a location of the component topology that already has the required capabilities routed to it. A list of existing locations and instructions for adding new ones are here.
Non-hermetic Integration Tests should be used sparingly, in situations where no hermetic solution is practical. Prefer using them as a stop-gap for tests that would otherwise be made hermetic given appropriate mocks or isolation features. They should not be used for tests that legitimately want to assert on the global behavior of a given system (see instead On-device System Validation and Host-driven System Interaction Tests).
In general, every stable FIDL protocol exposed by the SDK should have a compatibility test for each of its supported API levels.
These tests verify that clients of the protocols, targeting a stable API level and built with the SDK, will receive a platform that is compatible with their expectations.
FIDL protocols evolve over time both in terms of their stated interface as well as the behaviors they exhibit. A common error arises when the output of some protocol changes in a way that differs from previous expectations.
These errors are difficult to identify using only integration tests, since the tests are often updated at the same time as the implementation they are testing. Furthermore, we need to maintain a degree of compatibility across the API revisions exposed in the SDK.
Compatibility Tests for Fuchsia (CTF) enable different versions of a component implementation to be tested against different sets of test expectations for compatibility. The TRF pattern is fully integrated with CTF, and TRF tests may be nominated for compatibility testing via a config change (no source change necessary).
The mechanism is as follows:
The end result is that the old expectations for behavior of exposed protocols are maintained across future modifications. Failing to provide this coverage means that subtle changes to the behavior or interface of SDK protocols will cause downstream breakages that are especially difficult to root cause. CTF tests provide early warning that a downstream breakage is possible due to a platform change, and it is especially important to ensure that our platform ABI remains stable.
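The idea can be sketched in plain Python: a test suite's expectations are frozen at a given release and keep running against the current implementation, so a behavior change that would break old clients is caught. All names here are illustrative, not the real CTF machinery:

```python
# Conceptual sketch of compatibility (CTF) testing: expectations frozen at an
# earlier release continue to run against the current implementation, catching
# behavior changes that would break existing clients. Names are illustrative.

def echo_current(s: str) -> str:
    """Current platform implementation of a hypothetical Echo protocol."""
    return s


def frozen_suite_v1(echo) -> list:
    """Expectations captured when API level 1 shipped; never edited afterwards."""
    failures = []
    if echo("hello") != "hello":
        failures.append("echo must return its input unchanged")
    if echo("") != "":
        failures.append("echo of the empty string must be empty")
    return failures


# CI runs the frozen suites against the current build of the platform. Any
# failure means a client built against API level 1 could observe a change.
failures = frozen_suite_v1(echo_current)
assert failures == [], failures
```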
Enabling CTF mode for a TRF test is a simple configuration option, and converting existing integration tests to TRF is straightforward (examples). Authors of components/drivers that implement SDK protocols should prioritize converting their tests to TRF and enabling CTF mode, to help stabilize the Fuchsia platform and save themselves the ongoing overhead of large-scale changes.
All components exposing protocols in the partner or public SDK should have a CTF test.
It is common for the Fuchsia Platform to define a contract that must be fulfilled by one or more implementations, some of which may be defined outside of the fuchsia.git repository. For example, a logging contract may require that `LOG("Hello, world")` produces binary-compatible output no matter which library produces it.

It is important to know that an implementation conforms to the specification, and a Spec Conformance Test is used to validate that this is the case.
Spec Conformance Tests may build on top of TRF tests to have identical structure to Compatibility Tests. In this scenario, the primary difference is in how the different pieces of the TRF test are used.
The recommended pattern for Spec Conformance testing is to define a RealmFactory (containing an implementation under test), a Test Suite (validating the implementation of the specification) wherever the contract is defined (e.g. fuchsia.git for SDK protocols), and the FIDL protocol for driving the test (which is responsible for instantiating and interacting with a set of components under test). The Test Suite and FIDL protocol are distributed to implementers (for example, through the SDK). Developers who implement the contract may use the distributed Test Suite and implement their own RealmFactory that wraps their implementation behind the FIDL protocol. This means that the same exact set of tests that define the contract are applied to each implementation.
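A conceptual sketch of this structure in plain Python: a single shared suite validates every implementation through a small implementer-supplied factory. The suite, factories, and the `LOG` encoding requirement are all illustrative, not a real Fuchsia API:

```python
# Conceptual sketch of spec conformance testing: one shared Test Suite is run
# against every implementation via a factory each implementer provides.
# All names and the encoding requirement are illustrative.

def conformance_suite(make_logger) -> list:
    """Shared suite shipped with the contract; `make_logger` is implementer-supplied."""
    failures = []
    logger = make_logger()
    encoded = logger("Hello, world")
    # Spec requirement: every implementation must produce byte-identical output.
    if encoded != b"LOG:Hello, world":
        failures.append(f"non-conformant encoding: {encoded!r}")
    return failures


# Two independent implementations of the hypothetical logging contract.
def make_logger_impl_a():
    return lambda msg: b"LOG:" + msg.encode("utf-8")


def make_logger_impl_b():
    return lambda msg: ("LOG:" + msg).encode("utf-8")


# The exact same suite validates both implementations.
for factory in (make_logger_impl_a, make_logger_impl_b):
    assert conformance_suite(factory) == []
```

Distributing only the suite and the driving protocol, while each implementer supplies the factory, is what guarantees every implementation is held to the identical set of tests.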
Alternatively, Spec Conformance tests may be written using Lacewing and run as host-driven system interaction tests. This is particularly useful when the implementer of the protocol is a driver or otherwise depends on specific hardware, as it asserts that product images including the driver both conform to the spec and were appropriately assembled to support interacting with that hardware.
More concretely, we can solve the above examples as follows:
Interfaces that are expected to be implemented multiple times should ship a spec conformance test for integrators to build on top of.
Hermetic integration tests ensure that a component performs correctly in isolation, but they do not validate that an assembled system image including that component works properly. System validation tests are a special kind of non-hermetic integration test that ensures the real component behaves as expected, subject to some constraints.
On-device system validation tests are typically based on hermetic TRF tests consisting of a RealmFactory and Test Suite. Instead of using the RealmFactory (which instantiates isolated components under test), system validation tests use a stand-in component that provides access to the real system capabilities.
For example, if you are testing `fuchsia.example.Echo`, your hermetic TRF test will provide a RealmFactory that exposes `fuchsia.example.test.EchoHarness`, over which you can call `CreateRealm()` to obtain an isolated `fuchsia.example.Echo` connection. A system validation test's stand-in component also implements `CreateRealm()`, but provides a real `fuchsia.example.Echo` connection from the system itself.

This pattern allows you to use the exact same test code in hermetic and non-hermetic cases, with incompatibilities handled by the `UNSUPPORTED` return value.
To illustrate how this would work, consider system validation testing with a harness that includes a method `SetUpDependency()` in addition to `CreateRealm()`. If it is not possible to set up the dependency when running in a non-hermetic setting, that method simply returns `UNSUPPORTED` and tests that depend on it are skipped. Consider test cases `read_any_data()` and `read_specific_data()`, which skip calling `SetUpDependency()` and do call it, respectively. The former case ensures that any data can be read in the correct format (both hermetically and non-hermetically), while the latter case ensures that specific data is returned (hermetically only).
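The skipping behavior can be sketched with plain Python `unittest`; the harness, its methods, and the data values are all invented for this illustration and do not reflect the real FIDL API:

```python
# Conceptual sketch of the UNSUPPORTED pattern: one test suite runs against both
# a hermetic harness and a system stand-in; cases whose setup is unavailable in
# the non-hermetic setting are skipped, not failed. All names are illustrative.
import unittest

UNSUPPORTED = "UNSUPPORTED"


class SystemHarness:
    """Stand-in harness: serves the real protocol, cannot stage test data."""

    def create_realm(self):
        return {"data": "real-system-data"}

    def set_up_dependency(self):
        # Staging specific known data is impossible against a real system image.
        return UNSUPPORTED


def make_suite(harness):
    class ReadDataTest(unittest.TestCase):
        def test_read_any_data(self):
            # Runs hermetically and non-hermetically: only checks the format.
            realm = harness.create_realm()
            self.assertIsInstance(realm["data"], str)

        def test_read_specific_data(self):
            # Skipped when the harness cannot stage known data.
            if harness.set_up_dependency() == UNSUPPORTED:
                self.skipTest("dependency setup unsupported in this environment")
            realm = harness.create_realm()
            self.assertEqual(realm["data"], "known-test-data")

    return ReadDataTest


result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(make_suite(SystemHarness()))
)
```

Swapping in a hermetic harness whose `set_up_dependency()` succeeds would run both cases, which is what lets the identical suite serve both kinds of test.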
To aid OOT system integrators, we may ship system validation test suites in the SDK to be run against assembled product images OOT. This is a primary mechanism for validating the behavior of drivers written OOT.
Platform components and drivers should have system validation tests. The Fuchsia SDK should make a validation test suite available for each driver expected to be implemented in a separate repository.
Note: We differentiate between on-device and host-driven system validation tests, recognizing that some system validation tasks cannot happen within the context of a device (e.g. rebooting the device as part of testing requires some process external to the device to coordinate). Certain host-driven system interaction tests (implemented using Lacewing) can provide the same coverage as on-device system validation tests.
Fuchsia makes hermetic testing possible for a wide range of cases that would be infeasible to write on other systems, but in some cases there is just no replacement for physically controlling an entire device. Instead of running these tests on the device under test, the host system instead takes responsibility for controlling one or more devices to exercise end-to-end code paths.
A host-driven system interaction test fully controls a Fuchsia device using SDK tools and direct connections to services on the device. These tests are written using the Lacewing framework (built on Mobly), so we refer to them as Lacewing tests for short.
Lacewing tests can arbitrarily connect to services on a target system. Some tests are written to target specific drivers or subsystems (e.g. does the real-time clock save time across reboots?), some are written to cover user journeys that require device-wide configuration (e.g. can a logged-in user open a web browser and navigate to a web page?), and some are written to control a number of Fuchsia and non-Fuchsia devices in concert (e.g. can a Fuchsia device pair with a bluetooth accessory?).
A Lacewing test's interactions with the device are handled through “affordances,” which provide evolvable interfaces for interacting with specific device subsystems (e.g. the Bluetooth affordance, the WLAN affordance, etc.).
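The shape of an affordance can be sketched in plain Python. This is a conceptual illustration only: the class names and methods here are invented and are not the real Lacewing affordance API:

```python
# Conceptual sketch of Lacewing-style "affordances": a host-side device object
# exposes per-subsystem interfaces that hide transport details from the test.
# All names are illustrative, not the real framework API.


class BluetoothAffordance:
    """Evolvable interface to the Bluetooth subsystem of one target device."""

    def __init__(self, device_name: str):
        self._device_name = device_name
        self._paired = set()

    def pair(self, accessory: str) -> bool:
        # A real affordance would drive connections to the device; here we
        # just record the pairing so the sketch is self-contained.
        self._paired.add(accessory)
        return accessory in self._paired


class FuchsiaDevice:
    """Host-side handle to one device under test, grouping its affordances."""

    def __init__(self, name: str):
        self.name = name
        self.bluetooth = BluetoothAffordance(name)


# A host-driven test controls the device purely through its affordances.
device = FuchsiaDevice("fuchsia-emu")
assert device.bluetooth.pair("headset")
```

Because tests depend only on the affordance interface, the framework can evolve how an affordance talks to the device without rewriting every test.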
As with most end-to-end (E2E) tests, this kind of testing can be expensive for several reasons:
E2E testing should be done sparingly for those reasons, but often it is the last line of defense that covers one of the hardest testing gaps: ensuring that a real user interacting with a system will see the desired outputs.
Some system components and drivers need this kind of test, but the main benefit of Lacewing tests is to cover real-world device interactions that cannot be covered by isolated on-device tests. Choosing between system validation and Lacewing is often a judgment call, but there is space for both kinds of testing in a complete test strategy. Test authors should seek to get the coverage they need for the lowest maintenance cost.