Tests help uncover potential issues in code, and the various types of tests offer different levels of coverage.
This document guides developers (component authors, driver authors, and API/ABI maintainers) on the essential tests for validating Fuchsia software.
The table below provides an overview of the test types, categorized by which kind of software needs each type of test.
Test type | Source Code | Components | Drivers | Protocols |
---|---|---|---|---|
Unit | All | - | - | - |
Integration | - | All | Some | - |
Compatibility (CTF) | - | Some | Some | All (SDK) |
Spec Conformance | - | Some | All | Some |
Platform Expectation | - | Some | Some | Some |
System Interaction (Product) | - | Some | Some | Some |
The following sections describe each of these types of tests in more detail.
All code should be covered by the smallest possible test that is able to validate functionality. Many functions and classes in the languages we use can have their contracts tested directly by small, focused unit tests.
Unit Testing is a well understood area, and for the most part Fuchsia developers may use any test framework for their language of choice, with a few notable differences:
Tests are run on a Fuchsia target using `ffx test`, and `ffx coverage` can help to automate the processing of coverage data. Coverage information is surfaced in Gerrit and the Fuchsia Coverage dashboard for in-tree tests, while OOT test executors will need their own scripts to process the coverage output of tests.

Learn how to write driver unit tests in the Driver unit testing quick start.
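Because any framework for the language of choice is acceptable, a unit test can be as small as the following sketch. This uses Python's built-in `unittest` purely for illustration; `reverse_bytes` is a hypothetical function under test, not a Fuchsia API.

```python
import unittest

def reverse_bytes(data: bytes) -> bytes:
    """Hypothetical function under test: returns the input reversed."""
    return data[::-1]

class ReverseBytesTest(unittest.TestCase):
    """Small, focused unit tests that validate the function's contract."""

    def test_round_trip(self):
        # Reversing twice should return the original input.
        self.assertEqual(reverse_bytes(reverse_bytes(b"fuchsia")), b"fuchsia")

    def test_empty_input(self):
        self.assertEqual(reverse_bytes(b""), b"")

# Run with: python -m unittest <this_module>
```

The same structure applies regardless of framework: each test exercises one piece of the contract directly, with no component or system dependencies.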
The diagram below shows unit tests running in a Fuchsia system.
Integration tests check that the interface and behavior of one component works alongside another component that calls it. They validate that different components work together as a system and interact as expected.
The following scenarios are validated for the component under test:
The recommendation is to run integration tests hermetically (in isolation) using Test Realm Factory, but they can be run non-hermetically if needed.
While unit tests are small and focused on specific pieces of business logic, Integration Tests are focused on testing the interactions between software modules as a group.
Integration testing is another well understood area, but Fuchsia provides features for testing that are radically different from what exists on other operating systems. In particular, Fuchsia's Component Framework enforces explicit usage and routing of "capabilities" that builds on top of Zircon's capability-based security principles. The end result is that Fuchsia tests can be provably hermetic and recursively symmetric. The implications of these properties have been referred to as "Fuchsia's testing superpower." Tests may be perfectly isolated from one another, and components may be nested arbitrarily. This means that one or more entire Fuchsia subsystems may be run in isolated test contexts (for example, DriverTestRealm and Test UI Stack).
The diagram below shows hermetic integration tests using the Test Realm pattern.
All tests on Fuchsia are hermetic by default, which means they automatically benefit from provable hermeticity and the ability to arbitrarily nest dependencies.
Hermetic Integration Tests simply build on top of this foundation to run a component or driver in an isolated test environment and interact with it using FIDL protocols. These tests cover the following scenarios for a component/driver:
The recommended pattern for writing hermetic integration tests is Test Realm Factory (TRF), which prepares the test for reuse in different types of tests below. TRF tests have three parts:
Each test case in the Test Suite calls methods on the RealmFactory to create a Test Realm, interacts with that realm over the capabilities it exposes, and asserts on the responses it receives. The TRF docs provide instructions for using the testgen tool to automatically create skeletons of the above to be filled out with details.
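To illustrate how the pieces fit together, the following stdlib-only Python sketch models the TRF shape. All names here (`EchoRealm`, `RealmFactory`, and so on) are hypothetical stand-ins; in practice the pieces are components communicating over a FIDL protocol, not Python classes.

```python
# Sketch of the Test Realm Factory pattern with hypothetical names.

class EchoRealm:
    """Stands in for an isolated test realm exposing capabilities."""
    def echo(self, msg: str) -> str:
        return msg  # A trivial fake of the component under test.

class RealmFactory:
    """Piece 1: instantiates components under test in isolation."""
    def create_realm(self) -> EchoRealm:
        return EchoRealm()

# Piece 2 would be the FIDL protocol over which the Test Suite drives
# the RealmFactory; here it is simply the create_realm() method call.

class TestSuite:
    """Piece 3: each case creates a realm, interacts with the
    capabilities it exposes, and asserts on the responses."""
    def __init__(self, factory: RealmFactory):
        self.factory = factory

    def test_echo_round_trip(self):
        realm = self.factory.create_realm()
        assert realm.echo("hello") == "hello"

TestSuite(RealmFactory()).test_echo_round_trip()
```

The key design point is the separation: the Test Suite never constructs the component under test directly, so the same suite can later be pointed at a different RealmFactory.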
Hermetic Integration Tests using TRF form the foundation for many of the types of tests below.
All components and many drivers require an integration test, and those tests should use TRF.
While Hermetic Integration Tests are what we should strive for, certain tests are difficult to write hermetically. Often this is because those tests have difficulty with dependencies that are not yet written in a way that can be hermetically packaged. For instance, we do not yet have a high-fidelity mock for Vulkan, so we allow certain tests access to the system-wide Vulkan capabilities.
The diagram below shows non-hermetic integration tests with an outside system, component or driver interaction.
Tests that access system capabilities are called Non-hermetic Integration Tests. While they are technically not hermetic, they should still try to be as isolated as possible:
Non-hermetic integration tests must be run in a location of the component topology that already has the required capabilities routed to it. A list of existing locations and instructions for adding new ones are here.
Non-hermetic Integration Tests should be used sparingly and for situations where no hermetic solution is practical. It is preferred that they are used as a stop-gap solution for a test that otherwise would be made hermetic given appropriate mocks or isolation features. They should not be used for tests that legitimately want to assert on the behavior of a given system globally (see instead On-device System Validation and Host-driven System Interaction Tests).
Compatibility tests provide early warning that a downstream breakage is possible due to a platform change, and they are especially important for ensuring that our platform ABI remains stable. In general, every stable FIDL API exposed by the SDK should have a compatibility test for each of its supported API levels.
These tests verify that clients of the protocols, targeting a stable API level and built with the SDK, will receive a platform that is compatible with their expectations.
FIDL protocols evolve over time both in terms of their stated interface as well as the behaviors they exhibit. A common error arises when the output of some protocol changes in a way that differs from previous expectations.
These errors are difficult to identify using only integration tests, since the tests are often updated at the same time as the implementation they are testing. Furthermore, we need to maintain a degree of compatibility across the API revisions exposed in the SDK.
Compatibility Tests for Fuchsia (CTF) enable different versions of a component implementation to be tested against different sets of test expectations for compatibility. The TRF pattern is fully integrated with CTF, and TRF tests may be nominated for compatibility testing via a config change (no source change necessary).
The mechanism is as follows:
The end result is that the old expectations for behavior of exposed protocols are maintained across future modifications. Failing to provide this coverage means that subtle changes to the behavior or interface of SDK protocols will cause downstream breakages that are especially difficult to root cause.
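The core idea can be sketched as follows: a test suite snapshotted ("frozen") at an earlier API level keeps running, unmodified, against the current implementation. The names and versioning here are hypothetical illustrations, not the actual CTF machinery.

```python
# Sketch of the CTF idea with hypothetical names: expectations frozen
# at API level 1 continue to run against the current implementation.

CURRENT_IMPL_VERSION = 2

def echo(msg: str) -> str:
    """Current (version-2) implementation of a hypothetical SDK protocol."""
    # Any version-2 behavior change must still satisfy the frozen
    # version-1 expectations below, or CTF flags a compatibility break.
    return msg

def frozen_v1_expectations():
    # Snapshotted when API level 1 was frozen; unlike an ordinary
    # integration test, these may NOT be edited alongside the
    # implementation they validate.
    assert echo("hello") == "hello"
    assert echo("") == ""

frozen_v1_expectations()  # Old expectations run against the new impl.
```

This is what distinguishes CTF from a normal integration test: the implementation and the expectations evolve on separate timelines.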
The diagram below shows compatibility tests using the Test Realm Factory (TRF) pattern and a frozen component fake.
Enabling CTF mode for a TRF test is a simple configuration option, and converting existing integration tests to TRF is straightforward (examples). Authors of components/drivers that implement SDK protocols should prioritize converting their tests to TRF and enabling CTF mode to help stabilize the Fuchsia platform and save themselves the overhead of ongoing large-scale changes.
All components exposing protocols in the partner or public SDK should have a CTF test.
It is common for the Fuchsia Platform to define a contract that must be fulfilled by one or more implementations, some of which may be defined outside of the fuchsia.git repository. For example, a logging contract may require that `LOG("Hello, world")` produces binary-compatible output no matter which library produces it.

It is important to know that an implementation conforms to the specification, and a Spec Conformance Test is used to validate that this is the case.
We test driver conformance by exercising the driver and checking for expected behavior. For drivers that control devices, this requires running on hardware (the recommended approach). These tests are run by driver developers using the SDK at their desk (on hardware) and in their CI/CQ.
Spec Conformance Tests may build on top of TRF tests to have identical structure to Compatibility Tests. In this scenario, the primary difference is in how the different pieces of the TRF test are used.
The recommended pattern for Spec Conformance testing is to define a RealmFactory (containing an implementation under test), a Test Suite (validating the implementation of the specification) wherever the contract is defined (e.g. fuchsia.git for SDK protocols), and the FIDL protocol for driving the test (which is responsible for instantiating and interacting with a set of components under test). The Test Suite and FIDL protocol are distributed to implementers (for example, through the SDK). Developers who implement the contract may use the distributed Test Suite and implement their own RealmFactory that wraps their implementation behind the FIDL protocol. This means that the same exact set of tests that define the contract are applied to each implementation.
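The essential property, one shared suite applied to every implementation through an implementation-specific factory, can be sketched as follows. The `checksum` contract and all class names are hypothetical examples, not a real Fuchsia specification.

```python
# Sketch of spec conformance testing with hypothetical names: one
# distributed Test Suite validates every implementation, and each
# implementer supplies only a factory for their own implementation.

def conformance_suite(make_impl):
    """Distributed with the spec (e.g. via the SDK); never forked."""
    impl = make_impl()
    # Hypothetical contract: empty input checksums to 0, and results
    # are deterministic for identical inputs.
    assert impl.checksum(b"") == 0
    assert impl.checksum(b"ab") == impl.checksum(b"ab")

class ReferenceImpl:
    """Implementation maintained where the contract is defined."""
    def checksum(self, data: bytes) -> int:
        return sum(data) % 256

class VendorImpl:
    """Independent implementation from another repository."""
    def checksum(self, data: bytes) -> int:
        total = 0
        for b in data:
            total = (total + b) % 256
        return total

# The exact same tests that define the contract run against each one.
conformance_suite(ReferenceImpl)
conformance_suite(VendorImpl)
```

In the real pattern the factory argument corresponds to each implementer's RealmFactory and the suite drives it over the distributed FIDL protocol.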
The diagram below shows an approach for running spec conformance tests in a hermetic way.
The non-hermetic approach to testing spec conformance is to run the test on a working system using Lacewing (for hardware-dependent implementations). These tests are run by developers in their CI/CQ against the product implementation.
The diagram below shows an approach for running spec conformance tests in a non-hermetic way.
Alternatively, Spec Conformance tests may be written using Lacewing and run as host-driven system interaction tests. This is particularly useful when the implementer of the protocol is a driver or otherwise depends on specific hardware. This is especially useful to assert that product images including the driver both conform to the spec and were appropriately assembled to support interacting with hardware.
The diagram below shows an approach for running spec conformance tests in a host-driven way.
More concretely, we can solve the above examples as follows:
Interfaces that are expected to be implemented multiple times should ship a spec conformance test for integrators to build on top of.
The diagram below shows platform expectation tests where the tests are shipped with the SDK.
Hermetic integration tests ensure that a component performs correctly in isolation, but they do not validate that an assembled system image including that component works properly. System validation tests are a special kind of non-hermetic integration test that ensure the real component behaves as expected, subject to some constraints.
Platform Expectation tests are typically based on hermetic TRF tests consisting of a RealmFactory and Test Suite. Instead of using the RealmFactory (which instantiates isolated components under test), system validation tests use a stand-in component that provides access to the real system capabilities.
For example, if you are testing `fuchsia.example.Echo`, your hermetic TRF test will provide a RealmFactory that exposes `fuchsia.example.test.EchoHarness`, over which you can call `CreateRealm()` to obtain an isolated `fuchsia.example.Echo` connection. A system validation test's stand-in component also implements `CreateRealm()`, but provides a real `fuchsia.example.Echo` connection from the system itself.

This pattern allows you to use the exact same test code in hermetic and non-hermetic cases, with incompatibilities handled by the `UNSUPPORTED` return value.
To illustrate how this would work, consider system validation testing with a harness that includes a method `SetUpDependency()` in addition to `CreateRealm()`. If it is not possible to set up the dependency when running in a non-hermetic setting, that method simply returns `UNSUPPORTED` and tests that depend on it are skipped. Consider two test cases, `read_any_data()` and `read_specific_data()`, which respectively skip and call `SetUpDependency()`. The former ensures that any data can be read in the correct format (both hermetically and non-hermetically), while the latter ensures that specific data is returned (hermetically only).
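The skipping behavior described above can be sketched in stdlib-only Python. The `Harness` class and its methods are hypothetical stand-ins for the FIDL harness protocol, not real Fuchsia APIs.

```python
# Sketch of the UNSUPPORTED pattern with hypothetical names.

UNSUPPORTED = "UNSUPPORTED"
OK = "OK"

class Harness:
    """Stand-in for the test harness protocol. The hermetic flag
    models whether a RealmFactory or a system stand-in backs it."""
    def __init__(self, hermetic: bool):
        self.hermetic = hermetic

    def set_up_dependency(self) -> str:
        # A real system-validation run cannot seed the dependency.
        return OK if self.hermetic else UNSUPPORTED

    def read_data(self) -> bytes:
        return b"seeded" if self.hermetic else b"real-system-data"

def read_any_data(h: Harness) -> str:
    # Never calls set_up_dependency(), so it runs in both settings.
    data = h.read_data()
    assert isinstance(data, bytes) and data  # Any well-formed data.
    return "PASS"

def read_specific_data(h: Harness) -> str:
    if h.set_up_dependency() == UNSUPPORTED:
        return "SKIPPED"  # Cannot seed specific data non-hermetically.
    assert h.read_data() == b"seeded"
    return "PASS"

assert read_any_data(Harness(hermetic=True)) == "PASS"
assert read_any_data(Harness(hermetic=False)) == "PASS"
assert read_specific_data(Harness(hermetic=True)) == "PASS"
assert read_specific_data(Harness(hermetic=False)) == "SKIPPED"
```

The same two test functions run unchanged in both settings; only the harness behind them differs.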
To aid OOT system integrators, we may ship system validation test suites in the SDK to be run against assembled product images OOT. This is a primary mechanism for validating the behavior of drivers written OOT.
Platform components and drivers should have system validation tests. The Fuchsia SDK should make a validation test suite available for each driver expected to be implemented in a separate repository.
Note: We differentiate between on-device and host-driven system validation tests, recognizing that some system validation tasks cannot happen within the context of a device (e.g. rebooting the device as part of testing requires some process external to the device to coordinate). Certain host-driven system interaction tests (implemented as System Interaction Tests) can provide the same coverage as Platform Expectation tests.
Fuchsia makes hermetic testing possible for a wide range of cases that would be infeasible to write on other systems, but in some cases there is just no replacement for physically controlling an entire device. Instead of running these tests on the device under test, the host system instead takes responsibility for controlling one or more devices to exercise end-to-end code paths.
A host-driven system interaction test has the ability to fully control a Fuchsia device using SDK tools and a direct connection to services on the device. These tests are written using the Lacewing framework (built on Mobly).
Tests that use Lacewing can arbitrarily connect to services on a target system. Some tests are written to target specific drivers or subsystems (e.g. does the real-time clock save time across reboots?), some are written to cover user journeys that require device-wide configuration (e.g. can a logged-in user open a web browser and navigate to a web page?), and some are written to control a number of Fuchsia and non-Fuchsia devices in concert (e.g. can a Fuchsia device pair with a bluetooth accessory?).
With the Lacewing Framework, interactions with the device are handled through “affordances,” which provide evolvable interfaces to interact with specific device subsystems (e.g. Bluetooth affordance, WLAN affordance, etc).
The diagram below shows an approach for running system interaction tests using Lacewing.
As with most end-to-end (E2E) tests, this kind of testing can be expensive for several reasons:
E2E testing should be done sparingly for those reasons, but often it is the last line of defense that covers one of the hardest testing gaps: ensuring that a real user interacting with a system will see the desired outputs.
Some system components and drivers need this kind of test, but the main benefit of using Lacewing is to cover real-world device interactions that cannot be covered by isolated on-device tests. Choosing between system validation and Lacewing is often a judgment call, but there is space for both kinds of testing in a complete test strategy. Test authors should seek to get the coverage they need at the lowest maintenance cost.