[fxfs] Initial revision.

Squashed commit with the following:

(Attempt to) add an initial volume

mkfs adds a volume named "default", which mount will open up.

Adds a "root object ID" to object stores.

Volumes are managed in a directory which is the root object for the root
object store. Entries in the directory point to object stores which root
each volume; the root object ID for these volume object stores is the
root directory.
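The indirection described above can be sketched as a toy model; all names
and types here are illustrative, not the real fxfs API:

```rust
// Toy model of the volume layout: the root store's root object is a
// directory mapping volume names to the object stores that root each
// volume, and a volume store's root object ID is its root directory.
use std::collections::HashMap;

struct ObjectStore {
    root_object_id: u64, // for a volume store, this is the root directory
}

struct Filesystem {
    stores: HashMap<u64, ObjectStore>,
    volume_directory: HashMap<String, u64>, // volume name -> store id
}

// What mount does conceptually: look up the volume and return the
// object ID of its root directory.
fn open_volume(fs: &Filesystem, name: &str) -> Option<u64> {
    let store_id = fs.volume_directory.get(name)?;
    Some(fs.stores.get(store_id)?.root_object_id)
}

fn main() {
    let mut fs =
        Filesystem { stores: HashMap::new(), volume_directory: HashMap::new() };
    fs.stores.insert(5, ObjectStore { root_object_id: 42 });
    fs.volume_directory.insert("default".to_string(), 5);
    assert_eq!(open_volume(&fs, "default"), Some(42));
    assert_eq!(open_volume(&fs, "missing"), None);
}
```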

Added a test which unfortunately fails ATM (disabled_test_add_volume).
The volume doesn't appear after remounting the filesystem.

[fxfs] Fixed volume creation.

Object stores need to be registered with store manager so that we can
find them when replaying the log. When we're replaying the log we need a
way to lazily open stores.

Volume information is stored in a new reserved object (0) that exists
for all object stores. This volume information points at the root
directory for the volume.

[fxfs] Change everything to be async.

Figured it's probably better to switch to async sooner rather than
later.

Also took the opportunity to refactor Log a bit and separate out
LogReader.

[fxfs] Remove unwanted file.

[fxfs] Stand up server on mount

Serves /svc and the root directory handle out of the "default" volume.
Currently we bake in the assumption that there is only a single volume;
we should change that later and support multiple volumes.

After this CL, the fs_test suite gets a bit further. It still crashes
and burns, though. :)

[fxfs] open root dir handle on mount

Adds support for DirectoryEntry::open() to FxDirectory.

[fxfs] Support file/dir creation

basic-test.cm in fs_test gets a bit further with this change.

[fxfs] fix runner syntax

[fxfs] Add FxFile

Supports read and write (although read is suspect; see the stubbed-out
test case in file.rs).

[fxfs] Fix File::read

[fxfs] Log when we call unimplemented APIs

[fxfs] Fix lots of bugs

 1. Rewrite the allocator merger so that it uses reference counted
    extents rather than full back references. Full back references are
    hard to handle if/when we ever support cloning files because we
    would potentially have to split an unbounded number of extents, or
    we would need to make the layers support some kind of interval tree
    (which would be complicated).

 2. Change the API to the merger so that it is simpler and MergeResult
    is a bit more flexible.

 3. Reserve the first 100 object IDs for well-known objects. This
    will likely make debugging a little easier.

 4. Make the log file a child of the root parent store rather than
    the root store. This is necessary so that we can defer opening
    the root store until all log records are replayed.

 5. Refactor application of mutation records so that it is handled
    within the objects that they apply to rather than within log.rs.

 6. Add TreeSeal and TreeCompact so that we correctly capture what
    we need to in the log when compacting trees.
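The reference-counting scheme from item 1 can be sketched in miniature;
the record layout and function names below are hypothetical, not the
actual allocator:

```rust
// Hypothetical sketch of reference-counted allocator extents: cloning
// a file would bump the count on each shared extent, and an extent is
// only freed when its count drops to zero.
use std::collections::BTreeMap;

// device offset of extent start -> (length, reference count)
type Allocations = BTreeMap<u64, (u64, u32)>;

fn clone_extent(allocs: &mut Allocations, start: u64) {
    if let Some(entry) = allocs.get_mut(&start) {
        entry.1 += 1;
    }
}

fn deallocate(allocs: &mut Allocations, start: u64) {
    if let Some(&(len, count)) = allocs.get(&start) {
        if count <= 1 {
            allocs.remove(&start); // last reference: the extent becomes free
        } else {
            allocs.insert(start, (len, count - 1));
        }
    }
}

fn main() {
    let mut allocs: Allocations = BTreeMap::new();
    allocs.insert(0, (512, 1));
    clone_extent(&mut allocs, 0); // a clone now shares the extent
    deallocate(&mut allocs, 0);   // original dropped; clone keeps it live
    assert!(allocs.contains_key(&0));
    deallocate(&mut allocs, 0);   // last reference gone
    assert!(!allocs.contains_key(&0));
}
```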

[fxfs] Convert println to syslog

[fxfs] First pass at File::append

This doesn't sort out atomicity yet; left a TODO for that.

Also cleaned up the existing tests with some helpers.

[fxfs] Implement seek

This actually just required implementing get_size.

[fxfs] Rename log -> journal

[fxfs] Implement the extension case for truncate

The shrinking case isn't implemented yet.

[fxfs] Fix truncate bug

Truncate didn't work quite right with file holes due to a logic bug.

Fixed the test to actually span blocks, which is much more interesting
(since it exercises the file-hole logic).

[fxfs] Implement truncate

Truncate (in the shrinking case) does the following transactionally:
- Inserts a deleted extent (represented as an ExtentValue with an
  absent device offset) into the tree,
- Overwrites the tail of the last extent with zeroes, and
- Sets the file's length.

During reads, we simply skip past deleted extents. The existing merge
implementation already handled deleted extents (although we do need to
eventually delete mutations at the base layer, which is a TODO).

Left a TODO in delete_old_extents to deal with deleted extent records; I
think we'll still need to do some work on them.

Also did a bit of refactoring on the write path.
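The transactional steps above can be modeled in miniature. This is a
sketch under simplified, assumed record types (a deleted extent is
represented by an absent device offset, as described; step 2, zeroing the
tail of the last live extent, is elided):

```rust
// Miniature model of shrinking truncate. Records are consulted
// newest-first, as in an LSM tree; device == None marks a deleted extent.
struct ExtentRecord {
    start: u64,          // logical file offset
    end: u64,
    device: Option<u64>, // None => deleted extent (reads as a hole)
}

struct FileModel {
    size: u64,
    records: Vec<ExtentRecord>, // newest records last
}

fn truncate(file: &mut FileModel, new_size: u64) {
    if new_size < file.size {
        // Insert a deleted extent covering the dropped tail.
        file.records.push(ExtentRecord { start: new_size, end: file.size, device: None });
    }
    // Set the file's length.
    file.size = new_size;
}

// Reads walk records newest-first and simply skip deleted extents,
// returning zeroes for them. "1" stands in for real data here.
fn read_byte(file: &FileModel, off: u64) -> u8 {
    if off >= file.size {
        return 0;
    }
    for rec in file.records.iter().rev() {
        if rec.start <= off && off < rec.end {
            return if rec.device.is_some() { 1 } else { 0 };
        }
    }
    0
}

fn main() {
    let mut file = FileModel {
        size: 10,
        records: vec![ExtentRecord { start: 0, end: 10, device: Some(0) }],
    };
    truncate(&mut file, 4);
    assert_eq!(read_byte(&file, 2), 1); // still backed by the live extent
    assert_eq!(read_byte(&file, 6), 0); // past EOF after truncate
}
```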

[fxfs] Fix a bug in truncate

Truncate (or write) could hang because we weren't incrementing the
iterator when skipping past deleted extents.
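The hang above is the classic skip-loop mistake. A self-contained sketch
of the corrected pattern, using a plain `Peekable` iterator in place of
the real extent iterator:

```rust
// Corrected skip loop: when the current record is a deleted extent
// (None), the iterator must be advanced, otherwise the loop spins
// forever on the same record.
use std::iter::Peekable;

fn skip_deleted<I: Iterator<Item = Option<u64>>>(iter: &mut Peekable<I>) -> Option<u64> {
    while let Some(item) = iter.peek() {
        match item {
            Some(device_offset) => return Some(*device_offset),
            // The fix: actually consume the deleted extent before looping.
            None => {
                iter.next();
            }
        }
    }
    None
}

fn main() {
    // Two deleted extents followed by a live one at device offset 7.
    let mut iter = vec![None, None, Some(7)].into_iter().peekable();
    assert_eq!(skip_deleted(&mut iter), Some(7));
    let mut empty = Vec::<Option<u64>>::new().into_iter().peekable();
    assert_eq!(skip_deleted(&mut empty), None);
}
```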

[fxfs] Fixed merge issues

 + Fixed merge issues resulting from an earlier commit.
 + Changed everything to use anyhow::Error.
 + Refactored some tests.

[fxfs] remove syslog lib dependency

Use the log crate instead of syslog (which only needs to be included in
main.rs for initialization).

This is pre-work to include the fxfs library in host code.

[fxfs] Fix merge issues

[fxfs] Make a Filesystem trait

[fxfs] Use Buffer/Device abstractions

Change-Id: Iec86cb571021f585c50eed010e4ead6102104cd7
diff --git a/src/storage/fxfs/BUILD.gn b/src/storage/fxfs/BUILD.gn
index 4066ad4..2899781 100644
--- a/src/storage/fxfs/BUILD.gn
+++ b/src/storage/fxfs/BUILD.gn
@@ -32,7 +32,9 @@
   "src/lsm_tree.rs",
   "src/lsm_tree/merge.rs",
   "src/lsm_tree/simple_persistent_layer.rs",
+  "src/lsm_tree/single_item_layer.rs",
   "src/lsm_tree/skip_list_layer.rs",
+  "src/lsm_tree/tests.rs",
   "src/lsm_tree/types.rs",
   "src/mkfs.rs",
   "src/mount.rs",
@@ -52,6 +54,7 @@
   "src/object_store/testing.rs",
   "src/object_store/testing/fake_allocator.rs",
   "src/object_store/testing/fake_filesystem.rs",
+  "src/object_store/tests.rs",
   "src/object_store/transaction.rs",
   "src/testing.rs",
   "src/testing/fake_device.rs",
diff --git a/src/storage/fxfs/src/lsm_tree/single_item_layer.rs b/src/storage/fxfs/src/lsm_tree/single_item_layer.rs
new file mode 100644
index 0000000..357e372
--- /dev/null
+++ b/src/storage/fxfs/src/lsm_tree/single_item_layer.rs
@@ -0,0 +1,54 @@
+use crate::lsm_tree::{BoxedLayerIterator, ItemRef, Layer, LayerIterator};
+use anyhow::Error;
+use std::ops::Bound;
+
+pub struct SingleItemLayer<'item, K, V> {
+    item: ItemRef<'item, K, V>,
+}
+
+impl<K, V> SingleItemLayer<'_, K, V> {
+    pub fn new(item: ItemRef<K, V>) -> SingleItemLayer<K, V> {
+        SingleItemLayer { item }
+    }
+}
+
+struct SingleItemLayerIterator<'item, K, V> {
+    item: ItemRef<'item, K, V>,
+    // Position: 0 = before the item, 1 = at the item, 2 = past the end.
+    pos: i32,
+}
+
+impl<K: PartialOrd, V> Layer<K, V> for SingleItemLayer<'_, K, V> {
+    fn get_iterator(&self) -> BoxedLayerIterator<K, V> {
+        Box::new(SingleItemLayerIterator { item: self.item, pos: 0 })
+    }
+}
+
+impl<K: PartialOrd, V> LayerIterator<K, V> for SingleItemLayerIterator<'_, K, V> {
+    fn seek(&mut self, bound: Bound<&K>) -> Result<(), Error> {
+        match bound {
+            Bound::Unbounded => self.pos = 1,
+            Bound::Included(key) => {
+                if key <= self.item.key {
+                    self.pos = 1;
+                } else {
+                    self.pos = 2;
+                }
+            }
+            Bound::Excluded(_) => panic!("Excluded bounds are not supported"),
+        }
+        Ok(())
+    }
+
+    fn advance(&mut self) -> Result<(), Error> {
+        self.pos += 1;
+        Ok(())
+    }
+
+    fn get(&self) -> Option<ItemRef<K, V>> {
+        match self.pos {
+            1 => Some(self.item),
+            _ => None,
+        }
+    }
+}
diff --git a/src/storage/fxfs/src/lsm_tree/tests.rs b/src/storage/fxfs/src/lsm_tree/tests.rs
new file mode 100644
index 0000000..3addb18
--- /dev/null
+++ b/src/storage/fxfs/src/lsm_tree/tests.rs
@@ -0,0 +1,102 @@
+use {
+    crate::{
+        lsm_tree::{
+            merge::{MergeIterator, MergeResult},
+            skip_list_layer::SkipListLayer,
+            types::{Item, Layer, MutableLayer, OrdLowerBound},
+            LSMTree,
+        },
+        testing::fake_object::{FakeObject, FakeObjectHandle},
+    },
+    anyhow::Error,
+    fuchsia_async as fasync,
+    std::{
+        ops::Bound,
+        rc::Rc,
+        sync::{Arc, Mutex},
+    },
+};
+
+fn merge<K: std::fmt::Debug, V: std::fmt::Debug>(
+    _left: &MergeIterator<'_, K, V>,
+    _right: &MergeIterator<'_, K, V>,
+) -> MergeResult<K, V> {
+    MergeResult::EmitLeft
+}
+
+#[derive(Eq, PartialEq, PartialOrd, Ord, Debug, serde::Serialize, serde::Deserialize)]
+struct TestKey(i32);
+
+impl OrdLowerBound for TestKey {
+    fn cmp_lower_bound(&self, other: &Self) -> std::cmp::Ordering {
+        std::cmp::Ord::cmp(self, other)
+    }
+}
+
+#[fasync::run_singlethreaded(test)]
+async fn test_lsm_tree_commit() -> Result<(), Error> {
+    let object1 = Arc::new(Mutex::new(FakeObject::new()));
+    let object2 = Arc::new(Mutex::new(FakeObject::new()));
+    let tree = LSMTree::<TestKey, u8>::new(merge);
+    tree.insert(Item::new(TestKey(1), 2)).await;
+    tree.insert(Item::new(TestKey(3), 4)).await;
+    let object_handle = FakeObjectHandle::new(object1.clone());
+    tree.seal();
+    tree.compact(object_handle).await?;
+    tree.insert(Item::new(TestKey(2), 5)).await;
+    let object_handle = FakeObjectHandle::new(object2.clone());
+    tree.seal();
+    tree.compact(object_handle).await?;
+    let mut merger = tree.range_from(std::ops::Bound::Unbounded).await?;
+    assert_eq!(merger.get().unwrap().key, &TestKey(1));
+    merger.advance().await?;
+    assert_eq!(merger.get().unwrap().key, &TestKey(2));
+    merger.advance().await?;
+    assert_eq!(merger.get().unwrap().key, &TestKey(3));
+    merger.advance().await?;
+    assert!(merger.get().is_none());
+    Ok(())
+}
+
+#[fasync::run_singlethreaded(test)]
+async fn test_skip_list() -> Result<(), Error> {
+    let mut skip_list = Rc::new(SkipListLayer::new(100));
+    let sl = Rc::get_mut(&mut skip_list).unwrap();
+    sl.merge_into(Item::new(TestKey(1), 1), &TestKey(1), merge).await;
+    {
+        let mut iter = sl.get_iterator();
+        iter.seek(Bound::Included(&TestKey(1))).await?;
+        assert_eq!(iter.get().unwrap().key, &TestKey(1));
+    }
+    sl.merge_into(Item::new(TestKey(2), 1), &TestKey(2), merge).await;
+    sl.merge_into(Item::new(TestKey(3), 1), &TestKey(3), merge).await;
+    let mut iter = skip_list.get_iterator();
+    iter.seek(Bound::Included(&TestKey(2))).await?;
+    assert_eq!(iter.get().unwrap().key, &TestKey(2));
+    iter.advance().await?;
+    assert_eq!(iter.get().unwrap().key, &TestKey(3));
+    iter.advance().await?;
+    assert!(iter.get().is_none());
+    let mut iter = skip_list.get_iterator();
+    iter.seek(Bound::Included(&TestKey(1))).await?;
+    assert_eq!(iter.get().unwrap().key, &TestKey(1));
+    Ok(())
+}
+
+#[fasync::run_singlethreaded(test)]
+async fn test_skip_list_with_large_number_of_items() -> Result<(), Error> {
+    let mut skip_list = Rc::new(SkipListLayer::new(100));
+    let sl = Rc::get_mut(&mut skip_list).unwrap();
+    let item_count = 10;
+    for i in 1..item_count {
+        sl.merge_into(Item::new(TestKey(i), 1), &TestKey(i), merge).await;
+    }
+    let mut iter = skip_list.get_iterator();
+    iter.seek(Bound::Included(&TestKey(item_count - 2))).await?;
+    for i in item_count - 2..item_count {
+        assert_eq!(iter.get().unwrap().key, &TestKey(i));
+        iter.advance().await?;
+    }
+    assert!(iter.get().is_none());
+    Ok(())
+}
diff --git a/src/storage/fxfs/src/object_store/allocator/tests.rs b/src/storage/fxfs/src/object_store/allocator/tests.rs
new file mode 100644
index 0000000..1d49882
--- /dev/null
+++ b/src/storage/fxfs/src/object_store/allocator/tests.rs
@@ -0,0 +1,23 @@
+/* TODO: bring back when we have FakeFilesystem.
+
+use {
+    super::SimpleAllocator,
+    crate::object_store::{allocator::Allocator, filesystem::ObjectManager, Journal, Transaction},
+    anyhow::Error,
+    fuchsia_async as fasync,
+    std::sync::Arc,
+};
+
+#[fasync::run_singlethreaded(test)]
+async fn test_allocate_reserves() -> Result<(), Error> {
+    let objects = Arc::new(ObjectManager::new());
+    let journal = Arc::new(Journal::new(objects.clone()));
+    let allocator = Arc::new(SimpleAllocator::new(&journal));
+    objects.set_allocator(allocator.clone());
+    let mut transaction = Transaction::new();
+    let allocation1 = allocator.allocate(0, 1, 0, 0..512, &mut transaction).await?;
+    let allocation2 = allocator.allocate(0, 1, 0, 0..512, &mut transaction).await?;
+    assert!(allocation2.start >= allocation1.end || allocation2.end <= allocation1.start);
+    Ok(())
+}
+*/
diff --git a/src/storage/fxfs/src/object_store/tests.rs b/src/storage/fxfs/src/object_store/tests.rs
new file mode 100644
index 0000000..400e8f1
--- /dev/null
+++ b/src/storage/fxfs/src/object_store/tests.rs
@@ -0,0 +1,199 @@
+use {
+    crate::{
+        lsm_tree::LSMTree,
+        object_handle::ObjectHandle,
+        object_store::{
+            filesystem::{Filesystem, FxFilesystem, SyncOptions},
+            merge,
+            record::{ExtentKey, ObjectItem, ObjectKey, ObjectValue},
+            transaction::Transaction,
+            HandleOptions,
+        },
+        testing::fake_device::FakeDevice,
+    },
+    anyhow::Error,
+    fuchsia_async as fasync,
+    std::sync::Arc,
+};
+
+#[fasync::run_singlethreaded(test)]
+async fn test_object_store() -> Result<(), Error> {
+    let device = Arc::new(FakeDevice::new(512));
+    let object_id;
+    {
+        let filesystem = FxFilesystem::new_empty(device.clone()).await?;
+        let root_store = filesystem.root_store();
+        let mut transaction = Transaction::new();
+        let handle = root_store
+            .create_object(&mut transaction, HandleOptions::default())
+            .await
+            .expect("create_object failed");
+        filesystem.commit_transaction(transaction).await;
+        object_id = handle.object_id();
+        handle.write(5, b"hello").await.expect("write failed");
+        {
+            let mut buf = [0; 5];
+            handle.read(5, &mut buf).await.expect("read failed");
+        }
+        handle.write(6, b"hello").await.expect("write failed");
+        {
+            let mut buf = [0; 6];
+            handle.read(5, &mut buf).await.expect("read failed");
+        }
+        filesystem.sync(SyncOptions::default()).await.expect("sync failed");
+    }
+    let object_id2;
+    {
+        let filesystem = FxFilesystem::open(device.clone()).await.expect("open failed");
+        let root_store = filesystem.root_store();
+        let handle = root_store
+            .open_object(object_id, HandleOptions::default())
+            .await
+            .expect("open_object failed");
+        let mut buf = [0; 5];
+        handle.read(6, &mut buf).await.expect("read failed");
+        assert_eq!(&buf, b"hello");
+        let mut transaction = Transaction::new();
+        let handle = root_store
+            .create_object(&mut transaction, HandleOptions::default())
+            .await
+            .expect("create_object failed");
+        filesystem.commit_transaction(transaction).await;
+        object_id2 = handle.object_id();
+        handle.write(5000, b"foo").await.expect("write failed");
+        filesystem
+            .sync(SyncOptions { new_super_block: true, ..Default::default() })
+            .await
+            .expect("sync failed");
+    }
+    {
+        let filesystem = FxFilesystem::open(device.clone()).await.expect("open failed");
+        let root_store = filesystem.root_store();
+        let handle = root_store
+            .open_object(object_id2, HandleOptions::default())
+            .await
+            .expect("open_object failed");
+        let mut buf = [0; 3];
+        handle.read(5000, &mut buf).await.expect("read failed");
+        assert_eq!(&buf, b"foo");
+        root_store.flush(true).await.expect("flush failed");
+        filesystem
+            .sync(SyncOptions { new_super_block: true, ..Default::default() })
+            .await
+            .expect("sync failed");
+    }
+    let mut object_ids = Vec::new();
+    {
+        let filesystem = FxFilesystem::open(device.clone()).await.expect("open failed");
+        let root_store = filesystem.root_store();
+        for _i in 0u16..500u16 {
+            let mut transaction = Transaction::new();
+            let handle = root_store
+                .create_object(&mut transaction, HandleOptions::default())
+                .await
+                .expect("create_object failed");
+            filesystem.commit_transaction(transaction).await;
+            object_ids.push(handle.object_id());
+            handle.write(10000, b"bar").await.expect("write failed");
+        }
+    }
+    {
+        let filesystem = FxFilesystem::open(device.clone()).await.expect("open failed");
+        let root_store = filesystem.root_store();
+        let handle = root_store
+            .open_object(object_ids[0], HandleOptions::default())
+            .await
+            .expect("open_object failed");
+        let mut buf = [0; 3];
+        handle.read(10000, &mut buf).await.expect("read failed");
+        assert_eq!(&buf, b"bar");
+    }
+    Ok(())
+}
+
+#[fasync::run_singlethreaded(test)]
+async fn test_extents_merging() -> Result<(), Error> {
+    let tree = LSMTree::new(merge::merge);
+    let item = ObjectItem {
+        key: ObjectKey::extent(0, ExtentKey::new(0, 0..10)),
+        value: ObjectValue::extent(0),
+    };
+    let lower_bound = item.key.search_key();
+    tree.merge_into(item, &lower_bound).await;
+    let item = ObjectItem {
+        key: ObjectKey::extent(0, ExtentKey::new(0, 3..7)),
+        value: ObjectValue::extent(0),
+    };
+    let lower_bound = item.key.search_key();
+    tree.merge_into(item, &lower_bound).await;
+    {
+        let layer_set = tree.layer_set();
+        let mut iter = layer_set.get_iterator();
+        iter.advance().await?;
+        assert_eq!(iter.get().unwrap().key, &ObjectKey::extent(0, ExtentKey::new(0, 0..3)));
+        iter.advance().await?;
+        assert_eq!(iter.get().unwrap().key, &ObjectKey::extent(0, ExtentKey::new(0, 3..7)));
+        iter.advance().await?;
+        assert_eq!(iter.get().unwrap().key, &ObjectKey::extent(0, ExtentKey::new(0, 7..10)));
+        iter.advance().await?;
+        assert!(iter.get().is_none());
+    }
+    {
+        let layer_set = tree.layer_set();
+        let mut iter = layer_set.get_iterator();
+        iter.advance().await?;
+        assert_eq!(iter.get().unwrap().key, &ObjectKey::extent(0, ExtentKey::new(0, 0..3)));
+        iter.advance().await?;
+        assert_eq!(iter.get().unwrap().key, &ObjectKey::extent(0, ExtentKey::new(0, 3..7)));
+        iter.advance().await?;
+        assert_eq!(iter.get().unwrap().key, &ObjectKey::extent(0, ExtentKey::new(0, 7..10)));
+        iter.advance().await?;
+        assert!(iter.get().is_none());
+    }
+    let item = ObjectItem {
+        key: ObjectKey::extent(0, ExtentKey::new(0, 2..9)),
+        value: ObjectValue::extent(0),
+    };
+    let lower_bound = item.key.search_key();
+    tree.merge_into(item, &lower_bound).await;
+    {
+        let layer_set = tree.layer_set();
+        let mut iter = layer_set.get_iterator();
+        iter.advance().await?;
+        assert_eq!(iter.get().unwrap().key, &ObjectKey::extent(0, ExtentKey::new(0, 0..2)));
+        iter.advance().await?;
+        assert_eq!(iter.get().unwrap().key, &ObjectKey::extent(0, ExtentKey::new(0, 2..9)));
+        iter.advance().await?;
+        assert_eq!(iter.get().unwrap().key, &ObjectKey::extent(0, ExtentKey::new(0, 9..10)));
+        iter.advance().await?;
+        assert!(iter.get().is_none());
+    }
+    Ok(())
+}
+
+#[fasync::run_singlethreaded(test)]
+async fn test_directory() -> Result<(), Error> {
+    let device = Arc::new(FakeDevice::new(512));
+    let directory_id;
+    {
+        let filesystem = FxFilesystem::new_empty(device.clone()).await.expect("new_empty failed");
+        let root_store = filesystem.root_store();
+        let directory = root_store.create_directory().await.expect("create_directory failed");
+        directory.create_child_file("foo").await.expect("create_child_file failed");
+        directory_id = directory.object_id();
+        let directory =
+            root_store.open_directory(directory_id).await.expect("open directory failed");
+        directory.lookup("foo").await.expect("open foo failed");
+        directory.lookup("bar").await.map(|_| ()).expect_err("open bar succeeded");
+        filesystem.sync(SyncOptions::default()).await.expect("sync failed");
+    }
+    {
+        let filesystem = FxFilesystem::open(device.clone()).await.expect("open failed");
+        let root_store = filesystem.root_store();
+        let directory =
+            root_store.open_directory(directory_id).await.expect("open directory failed");
+        directory.lookup("foo").await.expect("open foo failed");
+        directory.lookup("bar").await.map(|_| ()).expect_err("open bar succeeded");
+    }
+    Ok(())
+}