[kernel][vm] Syscall to query modified state
Add zx_pager_query_vmo_stats() that can be used to query pager-related
stats on a VMO. Currently it only reports whether the VMO has been
modified since the modified state was last reset by this syscall.
Test: core-pager-writeback
Bug: 63989
Run-All-Tests: true
Change-Id: I461be1764690092c271a0c710fee0102110368b1
Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/688663
Reviewed-by: Carlos Pizano <cpu@google.com>
Reviewed-by: Adrian Danis <adanis@google.com>
Commit-Queue: Rasha Eqbal <rashaeqbal@google.com>
API-Review: Carlos Pizano <cpu@google.com>
diff --git a/docs/reference/syscalls/_toc.yaml b/docs/reference/syscalls/_toc.yaml
index 28ac8ca..917962c 100644
--- a/docs/reference/syscalls/_toc.yaml
+++ b/docs/reference/syscalls/_toc.yaml
@@ -226,6 +226,8 @@
path: /docs/reference/syscalls/pager_op_range.md
- title: "zx_pager_query_dirty_ranges"
path: /docs/reference/syscalls/pager_query_dirty_ranges.md
+ - title: "zx_pager_query_vmo_stats"
+ path: /docs/reference/syscalls/pager_query_vmo_stats.md
- title: "Cryptographically secure RNG"
section:
- title: "zx_cprng_add_entropy"
diff --git a/docs/reference/syscalls/pager_query_dirty_ranges.md b/docs/reference/syscalls/pager_query_dirty_ranges.md
index dcd7fc0..2b0e4eaa 100644
--- a/docs/reference/syscalls/pager_query_dirty_ranges.md
+++ b/docs/reference/syscalls/pager_query_dirty_ranges.md
@@ -118,6 +118,7 @@
- [`zx_pager_create_vmo()`]
- [`zx_pager_detach_vmo()`]
- [`zx_pager_op_range()`]
+ - [`zx_pager_query_vmo_stats()`]
- [`zx_pager_supply_pages()`]
<!-- References updated by update-docs-from-fidl, do not edit. -->
@@ -125,4 +126,5 @@
[`zx_pager_create_vmo()`]: pager_create_vmo.md
[`zx_pager_detach_vmo()`]: pager_detach_vmo.md
[`zx_pager_op_range()`]: pager_op_range.md
+[`zx_pager_query_vmo_stats()`]: pager_query_vmo_stats.md
[`zx_pager_supply_pages()`]: pager_supply_pages.md
diff --git a/docs/reference/syscalls/pager_query_vmo_stats.md b/docs/reference/syscalls/pager_query_vmo_stats.md
new file mode 100644
index 0000000..a7bbd97
--- /dev/null
+++ b/docs/reference/syscalls/pager_query_vmo_stats.md
@@ -0,0 +1,86 @@
+# zx_pager_query_vmo_stats
+
+## NAME
+
+<!-- Contents of this heading updated by update-docs-from-fidl, do not edit. -->
+
+Query pager-related statistics on a pager-owned VMO.
+
+## SYNOPSIS
+
+<!-- Contents of this heading updated by update-docs-from-fidl, do not edit. -->
+
+```c
+#include <zircon/syscalls.h>
+
+zx_status_t zx_pager_query_vmo_stats(zx_handle_t pager,
+ zx_handle_t pager_vmo,
+ uint32_t options,
+ void* buffer,
+ size_t buffer_size);
+```
+
+## DESCRIPTION
+
+Queries *pager_vmo* for pager-related statistics, e.g. whether *pager_vmo* has been modified.
+The *pager_vmo* must have previously been created from the *pager* by [`zx_pager_create_vmo()`].
+
+*options* can be **ZX_PAGER_RESET_VMO_STATS** if the caller also wishes to reset the queried stats.
+An *options* value of 0 does not reset any state, and performs a pure query.
+
+*buffer* should be a pointer to a `zx_pager_vmo_stats_t` struct that will hold the result of the
+query, and *buffer_size* should be large enough to accommodate the struct.
+
+```c
+typedef struct zx_pager_vmo_stats {
+ // Will be set to ZX_PAGER_VMO_STATS_MODIFIED if the VMO was modified, or 0 otherwise.
+ // Note that this can be set to 0 if a previous zx_pager_query_vmo_stats() call specified the
+ // ZX_PAGER_RESET_VMO_STATS option, which resets the modified state.
+ uint32_t modified;
+} zx_pager_vmo_stats_t;
+```
+
+Note that this call can have an effect on future `zx_pager_query_vmo_stats()` calls by consuming
+queryable state if the **ZX_PAGER_RESET_VMO_STATS** option is specified. For example, if a
+`zx_vmo_write()` is followed by two consecutive `zx_pager_query_vmo_stats()` with the
+**ZX_PAGER_RESET_VMO_STATS** option, only the first of those will see `modified` set to
+**ZX_PAGER_VMO_STATS_MODIFIED**. Since no further modifications took place after the first
+`zx_pager_query_vmo_stats()`, the second `zx_pager_query_vmo_stats()` will return `modified` as 0.
+
+## RIGHTS
+
+<!-- Contents of this heading updated by update-docs-from-fidl, do not edit. -->
+
+*pager* must be of type **ZX_OBJ_TYPE_PAGER**.
+
+*pager_vmo* must be of type **ZX_OBJ_TYPE_VMO**.
+
+## RETURN VALUE
+
+`zx_pager_query_vmo_stats()` returns **ZX_OK** on success. In the event of failure, a negative error
+value is returned.
+
+## ERRORS
+
+**ZX_ERR_BAD_HANDLE** *pager* or *pager_vmo* is not a valid handle.
+
+**ZX_ERR_WRONG_TYPE** *pager* is not a pager handle, or *pager_vmo* is not a vmo handle.
+
+**ZX_ERR_INVALID_ARGS** *pager_vmo* is not a vmo created from *pager*, or *options* is neither 0 nor
+**ZX_PAGER_RESET_VMO_STATS**.
+
+**ZX_ERR_BUFFER_TOO_SMALL** *buffer_size* is too small to accommodate the `zx_pager_vmo_stats_t`
+struct.
+
+## SEE ALSO
+
+ - [`zx_pager_create_vmo()`]
+ - [`zx_pager_detach_vmo()`]
+ - [`zx_pager_op_range()`]
+ - [`zx_pager_query_dirty_ranges()`]
+ - [`zx_pager_supply_pages()`]
+
+<!-- References updated by update-docs-from-fidl, do not edit. -->
+
+[`zx_pager_create_vmo()`]: pager_create_vmo.md
+[`zx_pager_detach_vmo()`]: pager_detach_vmo.md
+[`zx_pager_op_range()`]: pager_op_range.md
+[`zx_pager_query_dirty_ranges()`]: pager_query_dirty_ranges.md
+[`zx_pager_supply_pages()`]: pager_supply_pages.md
diff --git a/src/zircon/lib/zircon/zircon.ifs b/src/zircon/lib/zircon/zircon.ifs
index 88a2ea3..0853c90 100644
--- a/src/zircon/lib/zircon/zircon.ifs
+++ b/src/zircon/lib/zircon/zircon.ifs
@@ -85,6 +85,7 @@
- { Name: _zx_pager_detach_vmo, Type: Func }
- { Name: _zx_pager_op_range, Type: Func }
- { Name: _zx_pager_query_dirty_ranges, Type: Func }
+ - { Name: _zx_pager_query_vmo_stats, Type: Func }
- { Name: _zx_pager_supply_pages, Type: Func }
- { Name: _zx_pc_firmware_tables, Type: Func }
- { Name: _zx_pci_add_subtract_io_range, Type: Func }
@@ -274,6 +275,7 @@
- { Name: zx_pager_detach_vmo, Type: Func, Weak: true }
- { Name: zx_pager_op_range, Type: Func, Weak: true }
- { Name: zx_pager_query_dirty_ranges, Type: Func, Weak: true }
+ - { Name: zx_pager_query_vmo_stats, Type: Func, Weak: true }
- { Name: zx_pager_supply_pages, Type: Func, Weak: true }
- { Name: zx_pc_firmware_tables, Type: Func, Weak: true }
- { Name: zx_pci_add_subtract_io_range, Type: Func, Weak: true }
diff --git a/zircon/kernel/lib/syscalls/pager.cc b/zircon/kernel/lib/syscalls/pager.cc
index 09e104b..84a41a8 100644
--- a/zircon/kernel/lib/syscalls/pager.cc
+++ b/zircon/kernel/lib/syscalls/pager.cc
@@ -231,3 +231,27 @@
return pager_dispatcher->QueryDirtyRanges(up->aspace().get(), pager_vmo_dispatcher->vmo(), offset,
length, buffer, buffer_size, actual, avail);
}
+
+// zx_status_t zx_pager_query_vmo_stats
+zx_status_t sys_pager_query_vmo_stats(zx_handle_t pager, zx_handle_t pager_vmo, uint32_t options,
+ user_out_ptr<void> buffer, size_t buffer_size) {
+ auto up = ProcessDispatcher::GetCurrent();
+ fbl::RefPtr<PagerDispatcher> pager_dispatcher;
+ zx_status_t status = up->handle_table().GetDispatcher(*up, pager, &pager_dispatcher);
+ if (status != ZX_OK) {
+ return status;
+ }
+
+ fbl::RefPtr<VmObjectDispatcher> pager_vmo_dispatcher;
+ status = up->handle_table().GetDispatcher(*up, pager_vmo, &pager_vmo_dispatcher);
+ if (status != ZX_OK) {
+ return status;
+ }
+
+ if (pager_vmo_dispatcher->pager_koid() != pager_dispatcher->get_koid()) {
+ return ZX_ERR_INVALID_ARGS;
+ }
+
+ return pager_dispatcher->QueryPagerVmoStats(up->aspace().get(), pager_vmo_dispatcher->vmo(),
+ options, buffer, buffer_size);
+}
diff --git a/zircon/kernel/object/include/object/pager_dispatcher.h b/zircon/kernel/object/include/object/pager_dispatcher.h
index 7585c98..2486a27 100644
--- a/zircon/kernel/object/include/object/pager_dispatcher.h
+++ b/zircon/kernel/object/include/object/pager_dispatcher.h
@@ -32,6 +32,9 @@
uint64_t length, user_out_ptr<void> buffer, size_t buffer_size,
user_out_ptr<size_t> actual, user_out_ptr<size_t> avail);
+ zx_status_t QueryPagerVmoStats(VmAspace* current_aspace, fbl::RefPtr<VmObject> vmo,
+ uint32_t options, user_out_ptr<void> buffer, size_t buffer_size);
+
zx_obj_type_t get_type() const final { return ZX_OBJ_TYPE_PAGER; }
void on_zero_handles() final;
diff --git a/zircon/kernel/object/pager_dispatcher.cc b/zircon/kernel/object/pager_dispatcher.cc
index c9c0533..3bca7b1 100644
--- a/zircon/kernel/object/pager_dispatcher.cc
+++ b/zircon/kernel/object/pager_dispatcher.cc
@@ -271,3 +271,39 @@
}
return status;
}
+
+zx_status_t PagerDispatcher::QueryPagerVmoStats(VmAspace* current_aspace, fbl::RefPtr<VmObject> vmo,
+ uint32_t options, user_out_ptr<void> buffer,
+ size_t buffer_size) {
+ if (buffer_size < sizeof(zx_pager_vmo_stats_t)) {
+ return ZX_ERR_BUFFER_TOO_SMALL;
+ }
+
+ bool reset = options & ZX_PAGER_RESET_VMO_STATS;
+ options &= ~ZX_PAGER_RESET_VMO_STATS;
+ if (options) {
+ return ZX_ERR_INVALID_ARGS;
+ }
+
+ zx_pager_vmo_stats_t stats;
+ zx_status_t status = vmo->QueryPagerVmoStats(reset, &stats);
+ if (status != ZX_OK) {
+ return status;
+ }
+
+ do {
+ UserCopyCaptureFaultsResult copy_result =
+ buffer.reinterpret<zx_pager_vmo_stats_t>().copy_to_user_capture_faults(stats);
+ if (copy_result.status == ZX_OK) {
+ break;
+ }
+ DEBUG_ASSERT(copy_result.fault_info.has_value());
+ zx_status_t fault_status =
+ current_aspace->SoftFault(copy_result.fault_info->pf_va, copy_result.fault_info->pf_flags);
+ if (fault_status != ZX_OK) {
+ return fault_status;
+ }
+ } while (true);
+
+ return ZX_OK;
+}
diff --git a/zircon/kernel/vm/include/vm/vm_cow_pages.h b/zircon/kernel/vm/include/vm/vm_cow_pages.h
index dceac884..f94761b 100644
--- a/zircon/kernel/vm/include/vm/vm_cow_pages.h
+++ b/zircon/kernel/vm/include/vm/vm_cow_pages.h
@@ -187,6 +187,15 @@
return result;
}
+ // The modified state is only supported for root pager-backed VMOs, and will get queried (and
+ // possibly reset) on the next QueryPagerVmoStatsLocked() call.
+ void mark_modified_locked() TA_REQ(lock_) {
+ if (!is_source_preserving_page_content_locked()) {
+ return;
+ }
+ pager_stats_modified_ = true;
+ }
+
bool is_source_preserving_page_content_locked() const TA_REQ(lock_) {
bool result = page_source_ && page_source_->properties().is_preserving_page_content;
DEBUG_ASSERT(result == debug_is_user_pager_backed_locked());
@@ -314,6 +323,22 @@
DirtyRangeEnumerateFunction&& dirty_range_fn)
TA_REQ(lock_);
+ // Queries pager VMO |stats|, and also resets them if |reset| is true.
+ zx_status_t QueryPagerVmoStatsLocked(bool reset, zx_pager_vmo_stats_t* stats) TA_REQ(lock_) {
+ DEBUG_ASSERT(stats);
+ // The modified state should only be set for VMOs directly backed by a pager.
+ DEBUG_ASSERT(!pager_stats_modified_ || is_source_preserving_page_content_locked());
+
+ if (!is_source_preserving_page_content_locked()) {
+ return ZX_ERR_NOT_SUPPORTED;
+ }
+ stats->modified = pager_stats_modified_ ? ZX_PAGER_VMO_STATS_MODIFIED : 0;
+ if (reset) {
+ pager_stats_modified_ = false;
+ }
+ return ZX_OK;
+ }
+
// See VmObject::WritebackBegin
zx_status_t WritebackBeginLocked(uint64_t offset, uint64_t len) TA_REQ(lock_);
@@ -1258,6 +1283,10 @@
// this bool is part of mitigating any potential DMA-while-not-pinned (which is not permitted
// but is also difficult to detect or prevent without an IOMMU).
bool ever_pinned_ TA_GUARDED(lock_) = false;
+
+ // Tracks whether this VMO was modified (written / resized) if backed by a pager. This gets reset
+ // to false if QueryPagerVmoStatsLocked() is called with |reset| set to true.
+ bool pager_stats_modified_ TA_GUARDED(lock_) = false;
};
// VmCowPagesContainer exists to essentially split the VmCowPages ref_count_ into two counts, so
diff --git a/zircon/kernel/vm/include/vm/vm_object.h b/zircon/kernel/vm/include/vm/vm_object.h
index 80b1f2f..7a155a1 100644
--- a/zircon/kernel/vm/include/vm/vm_object.h
+++ b/zircon/kernel/vm/include/vm/vm_object.h
@@ -286,6 +286,9 @@
virtual bool is_private_pager_copy_supported() const { return false; }
// Returns true if the VMO's pages require dirty bit tracking.
virtual bool is_dirty_tracked_locked() const TA_REQ(lock_) { return false; }
+ // Marks the VMO as modified if the VMO tracks modified state (only supported for pager-backed
+ // VMOs).
+ virtual void mark_modified_locked() TA_REQ(lock_) {}
// Returns true if the vmo is a hidden paged vmo.
virtual bool is_hidden() const { return false; }
@@ -417,6 +420,13 @@
return ZX_ERR_NOT_SUPPORTED;
}
+ // Queries pager-relevant VMO stats, e.g. whether the VMO has been modified. If |reset| is set to
+ // true, the queried stats are reset as well, potentially affecting the queried state returned by
+ // future calls to this function.
+ virtual zx_status_t QueryPagerVmoStats(bool reset, zx_pager_vmo_stats_t* stats) {
+ return ZX_ERR_NOT_SUPPORTED;
+ }
+
// Indicates start of writeback for the range [offset, offset + len). Any Dirty pages in the range
// are transitioned to AwaitingClean, in preparation for transition to Clean when the writeback is
// done (See VmCowPages::DirtyState for details of these states). |offset| and |len| must be page
diff --git a/zircon/kernel/vm/include/vm/vm_object_paged.h b/zircon/kernel/vm/include/vm/vm_object_paged.h
index 626d23c..1f387f9 100644
--- a/zircon/kernel/vm/include/vm/vm_object_paged.h
+++ b/zircon/kernel/vm/include/vm/vm_object_paged.h
@@ -82,6 +82,9 @@
bool is_dirty_tracked_locked() const override TA_REQ(lock_) {
return cow_pages_locked()->is_dirty_tracked_locked();
}
+ void mark_modified_locked() override TA_REQ(lock_) {
+ return cow_pages_locked()->mark_modified_locked();
+ }
ChildType child_type() const override {
if (is_slice()) {
return ChildType::kSlice;
@@ -165,6 +168,11 @@
return cow_pages_locked()->EnumerateDirtyRangesLocked(offset, len, ktl::move(dirty_range_fn));
}
+ zx_status_t QueryPagerVmoStats(bool reset, zx_pager_vmo_stats_t* stats) override {
+ Guard<Mutex> guard{&lock_};
+ return cow_pages_locked()->QueryPagerVmoStatsLocked(reset, stats);
+ }
+
zx_status_t WritebackBegin(uint64_t offset, uint64_t len) override {
Guard<Mutex> guard{&lock_};
return cow_pages_locked()->WritebackBeginLocked(offset, len);
diff --git a/zircon/kernel/vm/vm_mapping.cc b/zircon/kernel/vm/vm_mapping.cc
index 98bb820..b17d26b 100644
--- a/zircon/kernel/vm/vm_mapping.cc
+++ b/zircon/kernel/vm/vm_mapping.cc
@@ -866,6 +866,11 @@
}
DEBUG_ASSERT(lookup_info.num_pages > 0);
+ // We looked up in order to write. Mark as modified.
+ if (pf_flags & VMM_PF_FLAG_WRITE) {
+ object_->mark_modified_locked();
+ }
+
// if we read faulted, and lookup didn't say that this is always writable, then we map or modify
// the page without any write permissions. This ensures we will fault again if a write is
// attempted so we can potentially replace this page with a copy or a new one, or update the
diff --git a/zircon/kernel/vm/vm_object_paged.cc b/zircon/kernel/vm/vm_object_paged.cc
index e50748c..6c8fc37 100644
--- a/zircon/kernel/vm/vm_object_paged.cc
+++ b/zircon/kernel/vm/vm_object_paged.cc
@@ -958,6 +958,14 @@
uint64_t page_count_before = is_contiguous() ? cow_pages_locked()->DebugGetPageCountLocked() : 0;
#endif
+ auto mark_modified = fit::defer([this, original_start = start, &start]() {
+ if (start > original_start) {
+ // Mark modified since we wrote zeros.
+ AssertHeld(lock_);
+ mark_modified_locked();
+ }
+ });
+
// We might need a page request if the VMO is backed by a page source.
__UNINITIALIZED LazyPageRequest page_request;
while (start < end) {
@@ -1016,6 +1024,8 @@
if (status != ZX_OK) {
return status;
}
+ // We were able to successfully resize. Mark as modified.
+ mark_modified_locked();
return ZX_OK;
}
@@ -1056,6 +1066,15 @@
// Track our two offsets.
uint64_t src_offset = offset;
size_t dest_offset = 0;
+
+ auto mark_modified = fit::defer([this, &dest_offset, write]() {
+ if (write && dest_offset > 0) {
+ // We wrote something, so mark as modified.
+ AssertHeld(lock_);
+ mark_modified_locked();
+ }
+ });
+
__UNINITIALIZED LookupInfo pages;
// Record the current generation count, we can use this to attempt to avoid re-performing checks
// whilst copying.
diff --git a/zircon/public/sysroot/sdk/sysroot.api b/zircon/public/sysroot/sdk/sysroot.api
index f62702f..ae3b38a 100644
--- a/zircon/public/sysroot/sdk/sysroot.api
+++ b/zircon/public/sysroot/sdk/sysroot.api
@@ -218,7 +218,7 @@
"include/zircon/sanitizer.h": "b7c33e026916780e24880cf0dba9867c",
"include/zircon/status.h": "dd933b11e68e1c297bc781074bae511a",
"include/zircon/string_view.h": "a143924409072ee53142d61494e09c1c",
- "include/zircon/syscalls-next.h": "c1d5888183250d1372b6a1fc0b9b6f9a",
+ "include/zircon/syscalls-next.h": "dd091c3163982c77dad56acb01683885",
"include/zircon/syscalls.h": "dc7e8379ef6fa3a2969ff90d08a64143",
"include/zircon/syscalls/clock.h": "5df7b36bb718402d9c5c9fa482d4e1eb",
"include/zircon/syscalls/debug.h": "f57f586b395aaca99a83d549e72143cc",
diff --git a/zircon/system/public/zircon/syscalls-next.h b/zircon/system/public/zircon/syscalls-next.h
index ea31af4b..1d885a3 100644
--- a/zircon/system/public/zircon/syscalls-next.h
+++ b/zircon/system/public/zircon/syscalls-next.h
@@ -65,6 +65,20 @@
// options flags for zx_vmo_dirty_range_t
#define ZX_VMO_DIRTY_RANGE_IS_ZERO ((uint64_t)1u)
+// Struct used by the zx_pager_query_vmo_stats() syscall.
+typedef struct zx_pager_vmo_stats {
+ // Will be set to ZX_PAGER_VMO_STATS_MODIFIED if the VMO was modified, or 0 otherwise.
+ // Note that this can be set to 0 if a previous zx_pager_query_vmo_stats() call specified the
+ // ZX_PAGER_RESET_VMO_STATS option, which resets the modified state.
+ uint32_t modified;
+} zx_pager_vmo_stats_t;
+
+// values for zx_pager_vmo_stats.modified
+#define ZX_PAGER_VMO_STATS_MODIFIED ((uint32_t)1u)
+
+// options for zx_pager_query_vmo_stats()
+#define ZX_PAGER_RESET_VMO_STATS ((uint32_t)1u)
+
// ====== End of pager writeback support ====== //
__END_CDECLS
diff --git a/zircon/system/utest/core/pager-writeback/pager-writeback.cc b/zircon/system/utest/core/pager-writeback/pager-writeback.cc
index a4eb11c..98440fe 100644
--- a/zircon/system/utest/core/pager-writeback/pager-writeback.cc
+++ b/zircon/system/utest/core/pager-writeback/pager-writeback.cc
@@ -3,14 +3,21 @@
// found in the LICENSE file.
#include <lib/fit/defer.h>
+#include <lib/zx/bti.h>
+#include <lib/zx/iommu.h>
#include <zircon/syscalls-next.h>
#include <zircon/syscalls.h>
+#include <zircon/syscalls/iommu.h>
#include <zxtest/zxtest.h>
#include "test_thread.h"
#include "userpager.h"
+__BEGIN_CDECLS
+__WEAK extern zx_handle_t get_root_resource(void);
+__END_CDECLS
+
namespace pager_tests {
// Convenience macro for tests that want to create VMOs both with and without the ZX_VMO_TRAP_DIRTY
@@ -4052,4 +4059,616 @@
ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
}
+// Tests that a VMO is marked modified on a zx_vmo_write.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(ModifiedOnVmoWrite, 0) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(1, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 1));
+
+ std::vector<uint8_t> expected(zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ if (create_option & ZX_VMO_TRAP_DIRTY) {
+ // Dirty the page in preparation for the write, avoiding the need to trap.
+ ASSERT_TRUE(pager.DirtyPages(vmo, 0, 1));
+ }
+
+ // The VMO hasn't been written to yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Write to the VMO.
+ uint8_t data = 0xaa;
+ ASSERT_OK(vmo->vmo().write(&data, 0, sizeof(data)));
+
+ // The VMO should be marked modified.
+ ASSERT_TRUE(pager.VerifyModified(vmo));
+ // Querying the modified state should have reset the modified flag.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ memset(expected.data(), data, sizeof(data));
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ zx_vmo_dirty_range_t range = {.offset = 0, .length = 1, .options = 0};
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, &range, 1));
+}
+
+// Tests that a VMO is marked modified when written through a mapping.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(ModifiedOnMappingWrite, 0) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(1, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 1));
+
+ std::vector<uint8_t> expected(zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ if (create_option & ZX_VMO_TRAP_DIRTY) {
+ // Dirty the page in preparation for the write, avoiding the need to trap.
+ ASSERT_TRUE(pager.DirtyPages(vmo, 0, 1));
+ }
+
+ // Map the VMO.
+ zx_vaddr_t ptr;
+ ASSERT_OK(zx::vmar::root_self()->map(ZX_VM_PERM_READ | ZX_VM_PERM_WRITE, 0, vmo->vmo(), 0,
+ zx_system_get_page_size(), &ptr));
+
+ auto unmap = fit::defer([&]() {
+ // Clean up the mapping we created.
+ zx::vmar::root_self()->unmap(ptr, zx_system_get_page_size());
+ });
+
+ // The VMO hasn't been written to yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Write to the VMO via the mapping.
+ auto buf = reinterpret_cast<uint8_t*>(ptr);
+ uint8_t data = 0xbb;
+ *buf = data;
+
+ // The VMO should be marked modified.
+ ASSERT_TRUE(pager.VerifyModified(vmo));
+ // Querying the modified state should have reset the modified flag.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ memset(expected.data(), data, sizeof(data));
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ zx_vmo_dirty_range_t range = {.offset = 0, .length = 1, .options = 0};
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, &range, 1));
+}
+
+// Tests that a VMO is marked modified on resize.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(ModifiedOnResize, ZX_VMO_RESIZABLE) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(1, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 1));
+
+ std::vector<uint8_t> expected(2 * zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ // The VMO hasn't been resized yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Resize the VMO down.
+ ASSERT_TRUE(vmo->Resize(0));
+
+ // The VMO should be marked modified.
+ ASSERT_TRUE(pager.VerifyModified(vmo));
+ // Querying the modified state should have reset the modified flag.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify dirty ranges.
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ // Resize the VMO up.
+ ASSERT_TRUE(vmo->Resize(2));
+
+ // The VMO should be marked modified.
+ ASSERT_TRUE(pager.VerifyModified(vmo));
+ // Querying the modified state should have reset the modified flag.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ memset(expected.data(), 0, 2 * zx_system_get_page_size());
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 2, expected.data(), true));
+ zx_vmo_dirty_range_t range = {.offset = 0, .length = 2, .options = ZX_VMO_DIRTY_RANGE_IS_ZERO};
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, &range, 1));
+}
+
+// Tests that a VMO is marked modified on a ZX_VMO_OP_ZERO.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(ModifiedOnOpZero, 0) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(1, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 1));
+
+ std::vector<uint8_t> expected(zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ if (create_option & ZX_VMO_TRAP_DIRTY) {
+ // Dirty the page in preparation for the write, avoiding the need to trap.
+ ASSERT_TRUE(pager.DirtyPages(vmo, 0, 1));
+ }
+
+ // The VMO hasn't been written to yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Zero the VMO.
+ ASSERT_OK(vmo->vmo().op_range(ZX_VMO_OP_ZERO, 0, zx_system_get_page_size(), nullptr, 0));
+
+ // The VMO should be marked modified.
+ ASSERT_TRUE(pager.VerifyModified(vmo));
+ // Querying the modified state should have reset the modified flag.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ memset(expected.data(), 0, zx_system_get_page_size());
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ zx_vmo_dirty_range_t range = {.offset = 0, .length = 1, .options = 0};
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, &range, 1));
+}
+
+// Tests that a VMO is not marked modified on a zx_vmo_read.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(NotModifiedOnVmoRead, 0) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(1, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 1));
+
+ std::vector<uint8_t> expected(zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ // The VMO hasn't been written to yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Read from the VMO.
+ uint8_t data;
+ ASSERT_OK(vmo->vmo().read(&data, 0, sizeof(data)));
+
+ // The VMO shouldn't be modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+}
+
+// Tests that a VMO is not marked modified when read through a mapping.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(NotModifiedOnMappingRead, 0) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(1, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 1));
+
+ std::vector<uint8_t> expected(zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ // Map the VMO.
+ zx_vaddr_t ptr;
+ ASSERT_OK(zx::vmar::root_self()->map(ZX_VM_PERM_READ | ZX_VM_PERM_WRITE, 0, vmo->vmo(), 0,
+ zx_system_get_page_size(), &ptr));
+
+ auto unmap = fit::defer([&]() {
+ // Clean up the mapping we created.
+ zx::vmar::root_self()->unmap(ptr, zx_system_get_page_size());
+ });
+
+ // The VMO hasn't been written to yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Read from the VMO via the mapping.
+ auto buf = reinterpret_cast<uint8_t*>(ptr);
+ uint8_t data = *buf;
+
+ // The VMO shouldn't be modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+ ASSERT_EQ(*(uint8_t*)(expected.data()), data);
+}
+
+// Tests that a VMO is not marked modified when a write is failed by failing a DIRTY request.
+TEST(PagerWriteback, NotModifiedOnFailedDirtyRequest) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(1, ZX_VMO_TRAP_DIRTY, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 1));
+
+ std::vector<uint8_t> expected(zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ // The VMO hasn't been written to yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Try to write to the VMO.
+ TestThread t1([vmo]() -> bool {
+ uint8_t data = 0xaa;
+ return vmo->vmo().write(&data, 0, sizeof(data)) == ZX_OK;
+ });
+ ASSERT_TRUE(t1.Start());
+
+ ASSERT_TRUE(t1.WaitForBlocked());
+ ASSERT_TRUE(pager.WaitForPageDirty(vmo, 0, 1, ZX_TIME_INFINITE));
+
+ // Fail the dirty request.
+ ASSERT_TRUE(pager.FailPages(vmo, 0, 1));
+ ASSERT_TRUE(t1.WaitForFailure());
+
+ // The VMO should not be modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ // Map the VMO.
+ zx_vaddr_t ptr;
+ ASSERT_OK(zx::vmar::root_self()->map(ZX_VM_PERM_READ | ZX_VM_PERM_WRITE, 0, vmo->vmo(), 0,
+ zx_system_get_page_size(), &ptr));
+
+ auto unmap = fit::defer([&]() {
+ // Clean up the mapping we created.
+ zx::vmar::root_self()->unmap(ptr, zx_system_get_page_size());
+ });
+
+ // Try to write to the VMO via the mapping.
+ TestThread t2([ptr]() -> bool {
+ auto buf = reinterpret_cast<uint8_t*>(ptr);
+ *buf = 0xbb;
+ return true;
+ });
+ ASSERT_TRUE(t2.Start());
+
+ ASSERT_TRUE(t2.WaitForBlocked());
+ ASSERT_TRUE(pager.WaitForPageDirty(vmo, 0, 1, ZX_TIME_INFINITE));
+
+ // Fail the dirty request.
+ ASSERT_TRUE(pager.FailPages(vmo, 0, 1));
+ ASSERT_TRUE(t2.WaitForCrash(ptr, ZX_ERR_IO));
+
+ // The VMO should not be modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+}
+
+// Tests that a VMO is not marked modified on a failed zx_vmo_write.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(NotModifiedOnFailedVmoWrite, 0) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(1, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 1));
+
+ std::vector<uint8_t> expected(zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ if (create_option & ZX_VMO_TRAP_DIRTY) {
+ // Dirty the page in preparation for the write, avoiding the need to trap.
+ ASSERT_TRUE(pager.DirtyPages(vmo, 0, 1));
+ }
+
+ // The VMO hasn't been written to yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Write to the VMO with the source buffer set up such that the copying fails. Make the source
+ // buffer pager backed too, and fail reads from it.
+ Vmo* src_vmo;
+ ASSERT_TRUE(pager.CreateVmo(1, &src_vmo));
+
+ // Map the source VMO.
+ zx_vaddr_t ptr;
+ ASSERT_OK(zx::vmar::root_self()->map(ZX_VM_PERM_READ | ZX_VM_PERM_WRITE, 0, src_vmo->vmo(), 0,
+ zx_system_get_page_size(), &ptr));
+
+ auto unmap = fit::defer([&]() {
+ // Clean up the mapping we created.
+ zx::vmar::root_self()->unmap(ptr, zx_system_get_page_size());
+ });
+
+ // Attempt the VMO write.
+ TestThread t([vmo, ptr]() -> bool {
+ auto buf = reinterpret_cast<uint8_t*>(ptr);
+ return vmo->vmo().write(buf, 0, sizeof(uint8_t)) == ZX_OK;
+ });
+ ASSERT_TRUE(t.Start());
+
+ // We should see a read request when the VMO write attempts reading from the source VMO.
+ ASSERT_TRUE(t.WaitForBlocked());
+ ASSERT_TRUE(pager.WaitForPageRead(src_vmo, 0, 1, ZX_TIME_INFINITE));
+
+ // Fail the read request so that the write fails.
+ ASSERT_TRUE(pager.FailPages(src_vmo, 0, 1));
+ ASSERT_TRUE(t.WaitForFailure());
+
+ // The VMO should not be modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ // We mark pages dirty when they are looked up, i.e. *before* writing to them, so they will still
+ // be reported as dirty.
+ zx_vmo_dirty_range_t range = {.offset = 0, .length = 1, .options = 0};
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, &range, 1));
+}
+
+// Tests that a VMO is not marked modified on a failed resize.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(NotModifiedOnFailedResize, ZX_VMO_RESIZABLE) {
+ // Please do not use get_root_resource() in new code. See fxbug.dev/31358.
+ if (!&get_root_resource) {
+ printf("Root resource not available, skipping\n");
+ return;
+ }
+
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(2, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 2));
+
+ std::vector<uint8_t> expected(2 * zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 2, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 2, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ // The VMO hasn't been resized yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Pin a page.
+ zx::iommu iommu;
+ zx::bti bti;
+ zx::pmt pmt;
+ zx::unowned_resource root_res(get_root_resource());
+ zx_iommu_desc_dummy_t desc;
+ ASSERT_OK(zx_iommu_create(get_root_resource(), ZX_IOMMU_TYPE_DUMMY, &desc, sizeof(desc),
+ iommu.reset_and_get_address()));
+ ASSERT_OK(zx::bti::create(iommu, 0, 0xdeadbeef, &bti));
+ zx_paddr_t addr;
+ ASSERT_OK(bti.pin(ZX_BTI_PERM_READ, vmo->vmo(), zx_system_get_page_size(),
+ zx_system_get_page_size(), &addr, 1, &pmt));
+
+ // Try to resize down across the pinned page. The resize should fail.
+ ASSERT_EQ(vmo->vmo().set_size(zx_system_get_page_size()), ZX_ERR_BAD_STATE);
+
+ if (pmt) {
+ pmt.unpin();
+ }
+
+ // The VMO should not be modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 2, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+}
+
+// Tests that a VMO is marked modified when a zx_vmo_write partially succeeds.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(ModifiedOnPartialVmoWrite, 0) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(2, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 2));
+
+ std::vector<uint8_t> expected(2 * zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 2, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 2, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ // The VMO hasn't been written to yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ if (create_option & ZX_VMO_TRAP_DIRTY) {
+ // Dirty the pages in preparation for the write, avoiding the need to trap.
+ ASSERT_TRUE(pager.DirtyPages(vmo, 0, 2));
+ }
+
+ // Write to the VMO with the source buffer set up so that the copy partially fails: make the
+ // source buffer pager-backed too, and fail reads from it.
+ Vmo* src_vmo;
+ ASSERT_TRUE(pager.CreateVmo(2, &src_vmo));
+ // Supply a single page in the source, so we can partially read from it.
+ ASSERT_TRUE(pager.SupplyPages(src_vmo, 0, 1));
+
+ // Map the source VMO.
+ zx_vaddr_t ptr;
+ ASSERT_OK(zx::vmar::root_self()->map(ZX_VM_PERM_READ | ZX_VM_PERM_WRITE, 0, src_vmo->vmo(), 0,
+ 2 * zx_system_get_page_size(), &ptr));
+
+ auto unmap = fit::defer([&]() {
+ // Clean up the mapping we created.
+ zx::vmar::root_self()->unmap(ptr, 2 * zx_system_get_page_size());
+ });
+
+ // Attempt the VMO write.
+ TestThread t([vmo, ptr]() -> bool {
+ auto buf = reinterpret_cast<uint8_t*>(ptr);
+ return vmo->vmo().write(buf, 0, 2 * zx_system_get_page_size()) == ZX_OK;
+ });
+ ASSERT_TRUE(t.Start());
+
+ // We should see a read request when the VMO write attempts reading from the source VMO.
+ ASSERT_TRUE(t.WaitForBlocked());
+ ASSERT_TRUE(pager.WaitForPageRead(src_vmo, 1, 1, ZX_TIME_INFINITE));
+
+ // Fail the read request so that the write fails.
+ ASSERT_TRUE(pager.FailPages(src_vmo, 1, 1));
+ ASSERT_TRUE(t.WaitForFailure());
+
+ // The write partially succeeded, so the VMO should be modified.
+ ASSERT_TRUE(pager.VerifyModified(vmo));
+
+ // Verify dirty pages and contents.
+ src_vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 2, expected.data(), true));
+ // We mark pages dirty when they are looked up, i.e. *before* writing to them, so they will still
+ // be reported as dirty.
+ zx_vmo_dirty_range_t range = {.offset = 0, .length = 2, .options = 0};
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, &range, 1));
+
+ // We will now try a partial write by failing dirty requests, which is only relevant for
+ // TRAP_DIRTY.
+ if (!(create_option & ZX_VMO_TRAP_DIRTY)) {
+ return;
+ }
+
+ // Start with clean pages again.
+ ASSERT_TRUE(pager.WritebackBeginPages(vmo, 0, 2));
+ ASSERT_TRUE(pager.WritebackEndPages(vmo, 0, 2));
+
+ // Dirty a single page, so that writing to the other generates a dirty request.
+ ASSERT_TRUE(pager.DirtyPages(vmo, 0, 1));
+
+ // Try to write to the VMO.
+ TestThread t1([vmo]() -> bool {
+ uint8_t data[2 * zx_system_get_page_size()];
+ memset(data, 0xaa, 2 * zx_system_get_page_size());
+ return vmo->vmo().write(&data, 0, sizeof(data)) == ZX_OK;
+ });
+ ASSERT_TRUE(t1.Start());
+
+ // Should see a dirty request for page 1.
+ ASSERT_TRUE(t1.WaitForBlocked());
+ ASSERT_TRUE(pager.WaitForPageDirty(vmo, 1, 1, ZX_TIME_INFINITE));
+
+ // Fail the dirty request.
+ ASSERT_TRUE(pager.FailPages(vmo, 1, 1));
+ ASSERT_TRUE(t1.WaitForFailure());
+
+ // The write succeeded partially, so the VMO should be modified.
+ ASSERT_TRUE(pager.VerifyModified(vmo));
+
+ // Verify contents and dirty ranges.
+ memset(expected.data(), 0xaa, zx_system_get_page_size());
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 2, expected.data(), true));
+ range.length = 1;
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, &range, 1));
+}
+
+// Tests that a clone cannot be marked modified.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(NotModifiedCloneWrite, 0) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(1, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 1));
+
+ std::vector<uint8_t> expected(zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ // The VMO hasn't been written to, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Create a clone.
+ auto clone = vmo->Clone();
+ ASSERT_NOT_NULL(clone);
+
+ // Write to the clone.
+ uint8_t data[zx_system_get_page_size()];
+ memset(data, 0xcc, zx_system_get_page_size());
+ ASSERT_OK(clone->vmo().write(&data, 0, sizeof(data)));
+
+ // The VMO should not be modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ // The clone should not support the modified query.
+ zx_pager_vmo_stats_t stats;
+ ASSERT_EQ(ZX_ERR_INVALID_ARGS, zx_pager_query_vmo_stats(pager.pager().get(), clone->vmo().get(),
+ 0, &stats, sizeof(stats)));
+ ASSERT_FALSE(pager.VerifyModified(clone.get()));
+
+ // Verify clone contents.
+ memcpy(expected.data(), data, sizeof(data));
+ ASSERT_TRUE(check_buffer_data(clone.get(), 0, 1, expected.data(), true));
+ ASSERT_FALSE(pager.VerifyDirtyRanges(clone.get(), nullptr, 0));
+}
+
+// Tests that querying the modified state without the reset option does not reset.
+TEST_WITH_AND_WITHOUT_TRAP_DIRTY(ModifiedNoReset, 0) {
+ UserPager pager;
+ ASSERT_TRUE(pager.Init());
+
+ Vmo* vmo;
+ ASSERT_TRUE(pager.CreateVmoWithOptions(1, create_option, &vmo));
+ ASSERT_TRUE(pager.SupplyPages(vmo, 0, 1));
+
+ std::vector<uint8_t> expected(zx_system_get_page_size(), 0);
+ vmo->GenerateBufferContents(expected.data(), 1, 0);
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, nullptr, 0));
+
+ if (create_option & ZX_VMO_TRAP_DIRTY) {
+ // Dirty the page in preparation for the write, avoiding the need to trap.
+ ASSERT_TRUE(pager.DirtyPages(vmo, 0, 1));
+ }
+
+ // The VMO hasn't been written to yet, so it shouldn't be marked modified.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+
+ // Write to the VMO.
+ uint8_t data = 0xaa;
+ ASSERT_OK(vmo->vmo().write(&data, 0, sizeof(data)));
+
+ // Verify modified state without resetting it.
+ zx_pager_vmo_stats_t stats;
+ ASSERT_OK(
+ zx_pager_query_vmo_stats(pager.pager().get(), vmo->vmo().get(), 0, &stats, sizeof(stats)));
+ ASSERT_EQ(ZX_PAGER_VMO_STATS_MODIFIED, stats.modified);
+
+ // Verify contents and dirty ranges.
+ memset(expected.data(), data, sizeof(data));
+ ASSERT_TRUE(check_buffer_data(vmo, 0, 1, expected.data(), true));
+ zx_vmo_dirty_range_t range = {.offset = 0, .length = 1, .options = 0};
+ ASSERT_TRUE(pager.VerifyDirtyRanges(vmo, &range, 1));
+
+ // The VMO should still be marked modified.
+ ASSERT_TRUE(pager.VerifyModified(vmo));
+ // The preceding query used the reset option, so the modified flag should now be clear.
+ ASSERT_FALSE(pager.VerifyModified(vmo));
+}
+
} // namespace pager_tests
diff --git a/zircon/system/utest/core/pager/userpager.cc b/zircon/system/utest/core/pager/userpager.cc
index b11bb1c..06d5f1c 100644
--- a/zircon/system/utest/core/pager/userpager.cc
+++ b/zircon/system/utest/core/pager/userpager.cc
@@ -471,6 +471,17 @@
return true;
}
+bool UserPager::VerifyModified(Vmo* paged_vmo) {
+ zx_pager_vmo_stats_t stats;
+ zx_status_t status = zx_pager_query_vmo_stats(pager_.get(), paged_vmo->vmo().get(),
+ ZX_PAGER_RESET_VMO_STATS, &stats, sizeof(stats));
+ if (status != ZX_OK) {
+ fprintf(stderr, "failed to query pager vmo stats with %s\n", zx_status_get_string(status));
+ return false;
+ }
+ return stats.modified == ZX_PAGER_VMO_STATS_MODIFIED;
+}
+
bool UserPager::VerifyDirtyRanges(Vmo* paged_vmo, zx_vmo_dirty_range_t* dirty_ranges_to_verify,
size_t num_dirty_ranges_to_verify) {
if (num_dirty_ranges_to_verify > 0 && dirty_ranges_to_verify == nullptr) {
diff --git a/zircon/system/utest/core/pager/userpager.h b/zircon/system/utest/core/pager/userpager.h
index d2f55c3..2b69e1c 100644
--- a/zircon/system/utest/core/pager/userpager.h
+++ b/zircon/system/utest/core/pager/userpager.h
@@ -124,6 +124,9 @@
// passed in with |num_dirty_ranges_to_verify|.
bool VerifyDirtyRanges(Vmo* paged_vmo, zx_vmo_dirty_range_t* dirty_ranges_to_verify,
size_t num_dirty_ranges_to_verify);
+ // Queries pager VMO stats and returns whether |paged_vmo| has been modified since the last
+ // query. Note: the query passes ZX_PAGER_RESET_VMO_STATS, so it also resets the modified state.
+ bool VerifyModified(Vmo* paged_vmo);
// Begins and ends writeback on pages in the specified range.
bool WritebackBeginPages(Vmo* vmo, uint64_t page_offset, uint64_t page_count);
diff --git a/zircon/tools/kazoo/golden.txt b/zircon/tools/kazoo/golden.txt
index 79c0675..0843fc9 100644
--- a/zircon/tools/kazoo/golden.txt
+++ b/zircon/tools/kazoo/golden.txt
@@ -148,6 +148,7 @@
#define HAVE_SYSCALL_CATEGORY_next 1
SYSCALL_CATEGORY_BEGIN(next)
SYSCALL_IN_CATEGORY(pager_query_dirty_ranges)
+ SYSCALL_IN_CATEGORY(pager_query_vmo_stats)
SYSCALL_IN_CATEGORY(syscall_next_1)
SYSCALL_CATEGORY_END(next)
@@ -214,6 +215,14 @@
size_t* actual,
size_t* avail))
+_ZX_SYSCALL_DECL(pager_query_vmo_stats, zx_status_t, /* no attributes */, 5,
+ (pager, pager_vmo, options, buffer, buffer_size), (
+ _ZX_SYSCALL_ANNO(use_handle("Fuchsia")) zx_handle_t pager,
+ _ZX_SYSCALL_ANNO(use_handle("Fuchsia")) zx_handle_t pager_vmo,
+ uint32_t options,
+ void* buffer,
+ size_t buffer_size))
+
_ZX_SYSCALL_DECL(syscall_next_1, zx_status_t, /* no attributes */, 1,
(arg), (
int32_t arg))
@@ -602,6 +611,10 @@
TEXT ·Sys_pager_query_dirty_ranges(SB),NOSPLIT,$0
JMP runtime·vdsoCall_zx_pager_query_dirty_ranges(SB)
+// func Sys_pager_query_vmo_stats(pager Handle, pager_vmo Handle, options uint32, buffer unsafe.Pointer, buffer_size uint) Status
+TEXT ·Sys_pager_query_vmo_stats(SB),NOSPLIT,$0
+ JMP runtime·vdsoCall_zx_pager_query_vmo_stats(SB)
+
// func Sys_pc_firmware_tables(handle Handle, acpi_rsdp *Paddr, smbios *Paddr) Status
TEXT ·Sys_pc_firmware_tables(SB),NOSPLIT,$0
JMP runtime·vdsoCall_zx_pc_firmware_tables(SB)
@@ -1390,6 +1403,10 @@
//go:noescape
//go:nosplit
+func Sys_pager_query_vmo_stats(pager Handle, pager_vmo Handle, options uint32, buffer unsafe.Pointer, buffer_size uint) Status
+
+//go:noescape
+//go:nosplit
func Sys_pc_firmware_tables(handle Handle, acpi_rsdp *Paddr, smbios *Paddr) Status
//go:noescape
@@ -2172,6 +2189,10 @@
TEXT ·Sys_pager_query_dirty_ranges(SB),NOSPLIT,$0
JMP runtime·vdsoCall_zx_pager_query_dirty_ranges(SB)
+// func Sys_pager_query_vmo_stats(pager Handle, pager_vmo Handle, options uint32, buffer unsafe.Pointer, buffer_size uint) Status
+TEXT ·Sys_pager_query_vmo_stats(SB),NOSPLIT,$0
+ JMP runtime·vdsoCall_zx_pager_query_vmo_stats(SB)
+
// func Sys_pc_firmware_tables(handle Handle, acpi_rsdp *Paddr, smbios *Paddr) Status
TEXT ·Sys_pc_firmware_tables(SB),NOSPLIT,$0
JMP runtime·vdsoCall_zx_pc_firmware_tables(SB)
@@ -4328,6 +4349,26 @@
MOVD $0, m_vdsoSP(R21)
RET
+// func vdsoCall_zx_pager_query_vmo_stats(pager uint32, pager_vmo uint32, options uint32, buffer unsafe.Pointer, buffer_size uint) int32
+TEXT runtime·vdsoCall_zx_pager_query_vmo_stats(SB),NOSPLIT,$0-36
+ GO_ARGS
+ NO_LOCAL_POINTERS
+ MOVD g_m(g), R21
+ MOVD LR, m_vdsoPC(R21)
+ DMB $0xe
+ MOVD RSP, R20
+ MOVD R20, m_vdsoSP(R21)
+ MOVW pager+0(FP), R0
+ MOVW pager_vmo+4(FP), R1
+ MOVW options+8(FP), R2
+ MOVD buffer+16(FP), R3
+ MOVD buffer_size+24(FP), R4
+ BL vdso_zx_pager_query_vmo_stats(SB)
+ MOVW R0, ret+32(FP)
+ MOVD g_m(g), R21
+ MOVD $0, m_vdsoSP(R21)
+ RET
+
// func vdsoCall_zx_pc_firmware_tables(handle uint32, acpi_rsdp unsafe.Pointer, smbios unsafe.Pointer) int32
TEXT runtime·vdsoCall_zx_pc_firmware_tables(SB),NOSPLIT,$0-28
GO_ARGS
@@ -6297,6 +6338,7 @@
{"_zx_pager_supply_pages", 0x69d1fc7f, &vdso_zx_pager_supply_pages},
{"_zx_pager_op_range", 0x5e8195ae, &vdso_zx_pager_op_range},
{"_zx_pager_query_dirty_ranges", 0x1e13a323, &vdso_zx_pager_query_dirty_ranges},
+ {"_zx_pager_query_vmo_stats", 0xd3f95338, &vdso_zx_pager_query_vmo_stats},
{"_zx_pc_firmware_tables", 0x1a05d1fe, &vdso_zx_pc_firmware_tables},
{"_zx_pci_get_nth_device", 0x32106f08, &vdso_zx_pci_get_nth_device},
{"_zx_pci_enable_bus_master", 0x76091cab, &vdso_zx_pci_enable_bus_master},
@@ -6492,6 +6534,7 @@
//go:cgo_import_dynamic vdso_zx_pager_supply_pages zx_pager_supply_pages
//go:cgo_import_dynamic vdso_zx_pager_op_range zx_pager_op_range
//go:cgo_import_dynamic vdso_zx_pager_query_dirty_ranges zx_pager_query_dirty_ranges
+//go:cgo_import_dynamic vdso_zx_pager_query_vmo_stats zx_pager_query_vmo_stats
//go:cgo_import_dynamic vdso_zx_pc_firmware_tables zx_pc_firmware_tables
//go:cgo_import_dynamic vdso_zx_pci_get_nth_device zx_pci_get_nth_device
//go:cgo_import_dynamic vdso_zx_pci_enable_bus_master zx_pci_enable_bus_master
@@ -6686,6 +6729,7 @@
//go:linkname vdso_zx_pager_supply_pages vdso_zx_pager_supply_pages
//go:linkname vdso_zx_pager_op_range vdso_zx_pager_op_range
//go:linkname vdso_zx_pager_query_dirty_ranges vdso_zx_pager_query_dirty_ranges
+//go:linkname vdso_zx_pager_query_vmo_stats vdso_zx_pager_query_vmo_stats
//go:linkname vdso_zx_pc_firmware_tables vdso_zx_pc_firmware_tables
//go:linkname vdso_zx_pci_get_nth_device vdso_zx_pci_get_nth_device
//go:linkname vdso_zx_pci_enable_bus_master vdso_zx_pci_enable_bus_master
@@ -7161,6 +7205,10 @@
//go:noescape
//go:nosplit
+func vdsoCall_zx_pager_query_vmo_stats(pager uint32, pager_vmo uint32, options uint32, buffer unsafe.Pointer, buffer_size uint) int32
+
+//go:noescape
+//go:nosplit
func vdsoCall_zx_pc_firmware_tables(handle uint32, acpi_rsdp unsafe.Pointer, smbios unsafe.Pointer) int32
//go:noescape
@@ -7653,6 +7701,7 @@
vdso_zx_pager_supply_pages uintptr
vdso_zx_pager_op_range uintptr
vdso_zx_pager_query_dirty_ranges uintptr
+ vdso_zx_pager_query_vmo_stats uintptr
vdso_zx_pc_firmware_tables uintptr
vdso_zx_pci_get_nth_device uintptr
vdso_zx_pci_enable_bus_master uintptr
@@ -9915,6 +9964,30 @@
MOVQ $0, m_vdsoSP(R14)
RET
+// func vdsoCall_zx_pager_query_vmo_stats(pager uint32, pager_vmo uint32, options uint32, buffer unsafe.Pointer, buffer_size uint) int32
+TEXT runtime·vdsoCall_zx_pager_query_vmo_stats(SB),NOSPLIT,$8-36
+ GO_ARGS
+ NO_LOCAL_POINTERS
+ get_tls(CX)
+ MOVQ g(CX), AX
+ MOVQ g_m(AX), R14
+ PUSHQ R14
+ MOVQ 24(SP), DX
+ MOVQ DX, m_vdsoPC(R14)
+ LEAQ 24(SP), DX
+ MOVQ DX, m_vdsoSP(R14)
+ MOVL pager+0(FP), DI
+ MOVL pager_vmo+4(FP), SI
+ MOVL options+8(FP), DX
+ MOVQ buffer+16(FP), CX
+ MOVQ buffer_size+24(FP), R8
+ MOVQ vdso_zx_pager_query_vmo_stats(SB), AX
+ CALL AX
+ MOVL AX, ret+32(FP)
+ POPQ R14
+ MOVQ $0, m_vdsoSP(R14)
+ RET
+
// func vdsoCall_zx_pc_firmware_tables(handle uint32, acpi_rsdp unsafe.Pointer, smbios unsafe.Pointer) int32
TEXT runtime·vdsoCall_zx_pc_firmware_tables(SB),NOSPLIT,$8-28
GO_ARGS
@@ -16048,6 +16121,58 @@
"return_type": "zx_status_t"
},
{
+ "name": "pager_query_vmo_stats",
+ "attributes": [
+ "*",
+ "next"
+ ],
+ "top_description": [
+ "Query", "pager", "related", "statistics", "on", "a", "pager", "owned", "VMO", "."
+ ],
+ "requirements": [
+ "pager", "must", "be", "of", "type", "ZX_OBJ_TYPE_PAGER", ".",
+ "pager_vmo", "must", "be", "of", "type", "ZX_OBJ_TYPE_VMO", "."
+ ],
+ "arguments": [
+ {
+ "name": "pager",
+ "type": "zx_handle_t",
+ "is_array": false,
+ "attributes": [
+ ]
+ },
+ {
+ "name": "pager_vmo",
+ "type": "zx_handle_t",
+ "is_array": false,
+ "attributes": [
+ ]
+ },
+ {
+ "name": "options",
+ "type": "uint32_t",
+ "is_array": false,
+ "attributes": [
+ ]
+ },
+ {
+ "name": "buffer",
+ "type": "any",
+ "is_array": true,
+ "attributes": [
+ ]
+ },
+ {
+ "name": "buffer_size",
+ "type": "size_t",
+ "is_array": false,
+ "attributes": [
+ ]
+ }
+ ],
+ "return_type": "zx_status_t"
+ },
+ {
"name": "pc_firmware_tables",
"attributes": [
"*"
@@ -20574,6 +20699,14 @@
user_out_ptr<size_t> actual,
user_out_ptr<size_t> avail))
+KERNEL_SYSCALL(pager_query_vmo_stats, zx_status_t, /* no attributes */, 5,
+ (pager, pager_vmo, options, buffer, buffer_size), (
+ _ZX_SYSCALL_ANNO(use_handle("Fuchsia")) zx_handle_t pager,
+ _ZX_SYSCALL_ANNO(use_handle("Fuchsia")) zx_handle_t pager_vmo,
+ uint32_t options,
+ user_out_ptr<void> buffer,
+ size_t buffer_size))
+
KERNEL_SYSCALL(pc_firmware_tables, zx_status_t, /* no attributes */, 3,
(handle, acpi_rsdp, smbios), (
_ZX_SYSCALL_ANNO(use_handle("Fuchsia")) zx_handle_t handle,
@@ -22056,6 +22189,14 @@
});
}
+syscall_result wrapper_pager_query_vmo_stats(SafeSyscallArgument<zx_handle_t>::RawType pager, SafeSyscallArgument<zx_handle_t>::RawType pager_vmo, SafeSyscallArgument<uint32_t>::RawType options, SafeSyscallArgument<void*>::RawType buffer, SafeSyscallArgument<size_t>::RawType buffer_size, uint64_t pc);
+syscall_result wrapper_pager_query_vmo_stats(SafeSyscallArgument<zx_handle_t>::RawType pager, SafeSyscallArgument<zx_handle_t>::RawType pager_vmo, SafeSyscallArgument<uint32_t>::RawType options, SafeSyscallArgument<void*>::RawType buffer, SafeSyscallArgument<size_t>::RawType buffer_size, uint64_t pc) {
+ return do_syscall(ZX_SYS_pager_query_vmo_stats, pc, &VDso::ValidSyscallPC::pager_query_vmo_stats, [&](ProcessDispatcher* current_process) -> uint64_t {
+ auto result = sys_pager_query_vmo_stats(SafeSyscallArgument<zx_handle_t>::Sanitize(pager), SafeSyscallArgument<zx_handle_t>::Sanitize(pager_vmo), SafeSyscallArgument<uint32_t>::Sanitize(options), make_user_out_ptr(SafeSyscallArgument<void*>::Sanitize(buffer)), SafeSyscallArgument<size_t>::Sanitize(buffer_size));
+ return result;
+ });
+}
+
syscall_result wrapper_pc_firmware_tables(SafeSyscallArgument<zx_handle_t>::RawType handle, SafeSyscallArgument<zx_paddr_t*>::RawType acpi_rsdp, SafeSyscallArgument<zx_paddr_t*>::RawType smbios, uint64_t pc);
syscall_result wrapper_pc_firmware_tables(SafeSyscallArgument<zx_handle_t>::RawType handle, SafeSyscallArgument<zx_paddr_t*>::RawType acpi_rsdp, SafeSyscallArgument<zx_paddr_t*>::RawType smbios, uint64_t pc) {
return do_syscall(ZX_SYS_pc_firmware_tables, pc, &VDso::ValidSyscallPC::pc_firmware_tables, [&](ProcessDispatcher* current_process) -> uint64_t {
@@ -23609,6 +23750,14 @@
size_t* actual,
size_t* avail))
+KERNEL_SYSCALL(pager_query_vmo_stats, zx_status_t, /* no attributes */, 5,
+ (pager, pager_vmo, options, buffer, buffer_size), (
+ _ZX_SYSCALL_ANNO(use_handle("Fuchsia")) zx_handle_t pager,
+ _ZX_SYSCALL_ANNO(use_handle("Fuchsia")) zx_handle_t pager_vmo,
+ uint32_t options,
+ void* buffer,
+ size_t buffer_size))
+
KERNEL_SYSCALL(pc_firmware_tables, zx_status_t, /* no attributes */, 3,
(handle, acpi_rsdp, smbios), (
_ZX_SYSCALL_ANNO(use_handle("Fuchsia")) zx_handle_t handle,
@@ -25948,6 +26097,14 @@
avail: *mut usize
) -> zx_status_t;
+ pub fn zx_pager_query_vmo_stats(
+ pager: zx_handle_t,
+ pager_vmo: zx_handle_t,
+ options: u32,
+ buffer: *mut u8,
+ buffer_size: usize
+ ) -> zx_status_t;
+
pub fn zx_pc_firmware_tables(
handle: zx_handle_t,
acpi_rsdp: *mut zx_paddr_t,
@@ -26681,101 +26838,102 @@
#define ZX_SYS_pager_supply_pages 80
#define ZX_SYS_pager_op_range 81
#define ZX_SYS_pager_query_dirty_ranges 82
-#define ZX_SYS_pc_firmware_tables 83
-#define ZX_SYS_pci_get_nth_device 84
-#define ZX_SYS_pci_enable_bus_master 85
-#define ZX_SYS_pci_reset_device 86
-#define ZX_SYS_pci_config_read 87
-#define ZX_SYS_pci_config_write 88
-#define ZX_SYS_pci_cfg_pio_rw 89
-#define ZX_SYS_pci_get_bar 90
-#define ZX_SYS_pci_map_interrupt 91
-#define ZX_SYS_pci_query_irq_mode 92
-#define ZX_SYS_pci_set_irq_mode 93
-#define ZX_SYS_pci_init 94
-#define ZX_SYS_pci_add_subtract_io_range 95
-#define ZX_SYS_pmt_unpin 96
-#define ZX_SYS_port_create 97
-#define ZX_SYS_port_queue 98
-#define ZX_SYS_port_wait 99
-#define ZX_SYS_port_cancel 100
-#define ZX_SYS_process_exit 101
-#define ZX_SYS_process_create 102
-#define ZX_SYS_process_start 103
-#define ZX_SYS_process_read_memory 104
-#define ZX_SYS_process_write_memory 105
-#define ZX_SYS_profile_create 106
-#define ZX_SYS_resource_create 107
-#define ZX_SYS_smc_call 108
-#define ZX_SYS_socket_create 109
-#define ZX_SYS_socket_write 110
-#define ZX_SYS_socket_read 111
-#define ZX_SYS_socket_set_disposition 112
-#define ZX_SYS_stream_create 113
-#define ZX_SYS_stream_writev 114
-#define ZX_SYS_stream_writev_at 115
-#define ZX_SYS_stream_readv 116
-#define ZX_SYS_stream_readv_at 117
-#define ZX_SYS_stream_seek 118
-#define ZX_SYS_syscall_test_0 119
-#define ZX_SYS_syscall_test_1 120
-#define ZX_SYS_syscall_test_2 121
-#define ZX_SYS_syscall_test_3 122
-#define ZX_SYS_syscall_test_4 123
-#define ZX_SYS_syscall_test_5 124
-#define ZX_SYS_syscall_test_6 125
-#define ZX_SYS_syscall_test_7 126
-#define ZX_SYS_syscall_test_8 127
-#define ZX_SYS_syscall_next_1 128
-#define ZX_SYS_syscall_test_wrapper 129
-#define ZX_SYS_syscall_test_handle_create 130
-#define ZX_SYS_syscall_test_widening_unsigned_narrow 131
-#define ZX_SYS_syscall_test_widening_signed_narrow 132
-#define ZX_SYS_syscall_test_widening_unsigned_wide 133
-#define ZX_SYS_syscall_test_widening_signed_wide 134
-#define ZX_SYS_system_get_event 135
-#define ZX_SYS_system_set_performance_info 136
-#define ZX_SYS_system_get_performance_info 137
-#define ZX_SYS_system_mexec 138
-#define ZX_SYS_system_mexec_payload_get 139
-#define ZX_SYS_system_powerctl 140
-#define ZX_SYS_task_suspend 141
-#define ZX_SYS_task_suspend_token 142
-#define ZX_SYS_task_create_exception_channel 143
-#define ZX_SYS_task_kill 144
-#define ZX_SYS_thread_exit 145
-#define ZX_SYS_thread_create 146
-#define ZX_SYS_thread_start 147
-#define ZX_SYS_thread_read_state 148
-#define ZX_SYS_thread_write_state 149
-#define ZX_SYS_thread_legacy_yield 150
-#define ZX_SYS_timer_create 151
-#define ZX_SYS_timer_set 152
-#define ZX_SYS_timer_cancel 153
-#define ZX_SYS_vcpu_create 154
-#define ZX_SYS_vcpu_enter 155
-#define ZX_SYS_vcpu_kick 156
-#define ZX_SYS_vcpu_interrupt 157
-#define ZX_SYS_vcpu_read_state 158
-#define ZX_SYS_vcpu_write_state 159
-#define ZX_SYS_vmar_allocate 160
-#define ZX_SYS_vmar_destroy 161
-#define ZX_SYS_vmar_map 162
-#define ZX_SYS_vmar_unmap 163
-#define ZX_SYS_vmar_protect 164
-#define ZX_SYS_vmar_op_range 165
-#define ZX_SYS_vmo_create 166
-#define ZX_SYS_vmo_read 167
-#define ZX_SYS_vmo_write 168
-#define ZX_SYS_vmo_get_size 169
-#define ZX_SYS_vmo_set_size 170
-#define ZX_SYS_vmo_op_range 171
-#define ZX_SYS_vmo_create_child 172
-#define ZX_SYS_vmo_set_cache_policy 173
-#define ZX_SYS_vmo_replace_as_executable 174
-#define ZX_SYS_vmo_create_contiguous 175
-#define ZX_SYS_vmo_create_physical 176
-#define ZX_SYS_COUNT 177
+#define ZX_SYS_pager_query_vmo_stats 83
+#define ZX_SYS_pc_firmware_tables 84
+#define ZX_SYS_pci_get_nth_device 85
+#define ZX_SYS_pci_enable_bus_master 86
+#define ZX_SYS_pci_reset_device 87
+#define ZX_SYS_pci_config_read 88
+#define ZX_SYS_pci_config_write 89
+#define ZX_SYS_pci_cfg_pio_rw 90
+#define ZX_SYS_pci_get_bar 91
+#define ZX_SYS_pci_map_interrupt 92
+#define ZX_SYS_pci_query_irq_mode 93
+#define ZX_SYS_pci_set_irq_mode 94
+#define ZX_SYS_pci_init 95
+#define ZX_SYS_pci_add_subtract_io_range 96
+#define ZX_SYS_pmt_unpin 97
+#define ZX_SYS_port_create 98
+#define ZX_SYS_port_queue 99
+#define ZX_SYS_port_wait 100
+#define ZX_SYS_port_cancel 101
+#define ZX_SYS_process_exit 102
+#define ZX_SYS_process_create 103
+#define ZX_SYS_process_start 104
+#define ZX_SYS_process_read_memory 105
+#define ZX_SYS_process_write_memory 106
+#define ZX_SYS_profile_create 107
+#define ZX_SYS_resource_create 108
+#define ZX_SYS_smc_call 109
+#define ZX_SYS_socket_create 110
+#define ZX_SYS_socket_write 111
+#define ZX_SYS_socket_read 112
+#define ZX_SYS_socket_set_disposition 113
+#define ZX_SYS_stream_create 114
+#define ZX_SYS_stream_writev 115
+#define ZX_SYS_stream_writev_at 116
+#define ZX_SYS_stream_readv 117
+#define ZX_SYS_stream_readv_at 118
+#define ZX_SYS_stream_seek 119
+#define ZX_SYS_syscall_test_0 120
+#define ZX_SYS_syscall_test_1 121
+#define ZX_SYS_syscall_test_2 122
+#define ZX_SYS_syscall_test_3 123
+#define ZX_SYS_syscall_test_4 124
+#define ZX_SYS_syscall_test_5 125
+#define ZX_SYS_syscall_test_6 126
+#define ZX_SYS_syscall_test_7 127
+#define ZX_SYS_syscall_test_8 128
+#define ZX_SYS_syscall_next_1 129
+#define ZX_SYS_syscall_test_wrapper 130
+#define ZX_SYS_syscall_test_handle_create 131
+#define ZX_SYS_syscall_test_widening_unsigned_narrow 132
+#define ZX_SYS_syscall_test_widening_signed_narrow 133
+#define ZX_SYS_syscall_test_widening_unsigned_wide 134
+#define ZX_SYS_syscall_test_widening_signed_wide 135
+#define ZX_SYS_system_get_event 136
+#define ZX_SYS_system_set_performance_info 137
+#define ZX_SYS_system_get_performance_info 138
+#define ZX_SYS_system_mexec 139
+#define ZX_SYS_system_mexec_payload_get 140
+#define ZX_SYS_system_powerctl 141
+#define ZX_SYS_task_suspend 142
+#define ZX_SYS_task_suspend_token 143
+#define ZX_SYS_task_create_exception_channel 144
+#define ZX_SYS_task_kill 145
+#define ZX_SYS_thread_exit 146
+#define ZX_SYS_thread_create 147
+#define ZX_SYS_thread_start 148
+#define ZX_SYS_thread_read_state 149
+#define ZX_SYS_thread_write_state 150
+#define ZX_SYS_thread_legacy_yield 151
+#define ZX_SYS_timer_create 152
+#define ZX_SYS_timer_set 153
+#define ZX_SYS_timer_cancel 154
+#define ZX_SYS_vcpu_create 155
+#define ZX_SYS_vcpu_enter 156
+#define ZX_SYS_vcpu_kick 157
+#define ZX_SYS_vcpu_interrupt 158
+#define ZX_SYS_vcpu_read_state 159
+#define ZX_SYS_vcpu_write_state 160
+#define ZX_SYS_vmar_allocate 161
+#define ZX_SYS_vmar_destroy 162
+#define ZX_SYS_vmar_map 163
+#define ZX_SYS_vmar_unmap 164
+#define ZX_SYS_vmar_protect 165
+#define ZX_SYS_vmar_op_range 166
+#define ZX_SYS_vmo_create 167
+#define ZX_SYS_vmo_read 168
+#define ZX_SYS_vmo_write 169
+#define ZX_SYS_vmo_get_size 170
+#define ZX_SYS_vmo_set_size 171
+#define ZX_SYS_vmo_op_range 172
+#define ZX_SYS_vmo_create_child 173
+#define ZX_SYS_vmo_set_cache_policy 174
+#define ZX_SYS_vmo_replace_as_executable 175
+#define ZX_SYS_vmo_create_contiguous 176
+#define ZX_SYS_vmo_create_physical 177
+#define ZX_SYS_COUNT 178
----- syscall-numbers.h END -----
diff --git a/zircon/vdso/pager.fidl b/zircon/vdso/pager.fidl
index 33e06786..ff09fd1 100644
--- a/zircon/vdso/pager.fidl
+++ b/zircon/vdso/pager.fidl
@@ -82,4 +82,17 @@
actual optional_usize;
avail optional_usize;
});
+
+ /// Query pager related statistics on a pager owned VMO.
+ /// Rights: pager must be of type ZX_OBJ_TYPE_PAGER.
+ /// Rights: pager_vmo must be of type ZX_OBJ_TYPE_VMO.
+ @next
+ pager_query_vmo_stats(resource struct {
+ pager handle:PAGER;
+ pager_vmo handle:VMO;
+ options uint32;
+ }) -> (struct {
+ status status;
+ buffer vector_void;
+ });
};
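From the caller's side, the FIDL declaration above corresponds to the C binding added elsewhere in this change. An illustrative fragment (not runnable outside Fuchsia; error handling elided, and `pager` and `vmo` assumed to be valid handles of the required types):

```c
zx_pager_vmo_stats_t stats;
zx_status_t status = zx_pager_query_vmo_stats(
    pager, vmo, ZX_PAGER_RESET_VMO_STATS, &stats, sizeof(stats));
if (status == ZX_OK && stats.modified == ZX_PAGER_VMO_STATS_MODIFIED) {
  // The VMO was written to since the previous resetting query;
  // a filesystem could use this, for example, to decide whether to
  // schedule a writeback.
}
```

Passing `0` for `options` queries the modified state without clearing it, as exercised by the ModifiedNoReset test above.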