| // Copyright 2018 The Fuchsia Authors. All rights reserved. |
| // Use of this source code is governed by a BSD-style license that can be |
| // found in the LICENSE file. |
| |
| //! Utilities for safely operating on memory shared between untrusting |
| //! processes. |
| //! |
| //! `shared-buffer` provides support for safely operating on memory buffers |
| //! which are shared with another process which is untrusted. The Rust memory |
| //! model assumes that only code running in the current process - and thus |
| //! either trusted or generated by Rust itself - operates on a given region of |
| //! memory. As a result, simply treating a region of memory to which another, |
| //! untrusted process has read or write access as equivalent to normal process |
| //! memory is unsafe. This crate provides the `SharedBuffer` type, which has |
| //! methods that allow safe access to such memory. |
| //! |
| //! Examples of issues that could arise if shared memory were treated as normal |
| //! memory include: |
| //! - Unintentionally leaking sensitive values to another process |
//! - Allowing other processes to cause an invalid sequence of bytes to be
//!   interpreted as a given type
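//!
//! # Examples
//!
//! The following is a minimal sketch, using process-local memory to stand in
//! for a region that would normally be mapped from another process (the
//! `SharedBuffer` API is the same either way):
//!
//! ```
//! use shared_buffer::SharedBuffer;
//!
//! let mut mem = [0u8; 8];
//! // In real code, this pointer would come from mapping a region of shared
//! // memory, and the SharedBuffer would take ownership of that mapping.
//! let buf = unsafe { SharedBuffer::new(mem.as_mut_ptr(), mem.len()) };
//!
//! assert_eq!(buf.write(&[1, 2, 3, 4]), 4);
//! let mut dst = [0u8; 8];
//! assert_eq!(buf.read(&mut dst), 8);
//! assert_eq!(dst, [1, 2, 3, 4, 0, 0, 0, 0]);
//!
//! // SharedBuffers do nothing on drop; retrieve the raw parts (and, in real
//! // code, unmap the memory manually).
//! let (_ptr, _len) = buf.consume();
//! ```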
| |
| // NOTES(joshlf) on implementation: We need to worry about the following issues: |
| // - If another process has write access to a given region of memory, then |
| // arbitrary writes may happen at any time. Thus, it is never safe to access |
| // this memory through any Rust type other than a raw pointer, or else the |
| // compiler might allow operations or make optimizations based on the |
| // assumption that the memory is either owned (in the case of a mutable |
| // reference) or immutable (in the case of an immutable reference). In either |
| // of these cases, any such allowance or optimization would be unsound. For |
| // example, the compiler might decide that, after having written a T to a |
| // particular memory location, it is safe to read that memory location and |
| // treat it as a T. This would cause undefined behavior if the other process |
| // modified that memory location in the meantime. Perhaps more fundamentally, |
| // both mutable and immutable references guarantee that nobody else is |
| // modifying this memory other than me (and not even me, in the case of an |
| // immutable reference). On this basis alone, it is clear that neither |
| // reference is compatible with foreign write access to the referent. |
| // - If another process has read access to a given region of memory, then it |
| // cannot affect the correctness of a Rust program. However, it can do things |
| // that do not technically violate correctness, but are still undesirable. The |
| // canonical example is reading memory which contains sensitive information. |
| // Even if the programmer were to construct a mutable reference to such memory |
| // and write a value to it which the programmer intended to be shared with the |
| // other process, the compiler might use the fact that it had exclusive access |
| // to the memory (so says the mutable reference...) to store any arbitrary |
| // value in the memory temporarily. So long as it's not observable from the |
| // Rust program, it preserves the semantics of the program. Of course, it *is* |
| // observable from the other process, and there are no guarantees on what the |
| // compiler might decide to store there, including any value currently in your |
| // memory space, including particularly sensitive values. As a result, while |
| // read-only access doesn't violate the correctness of a Rust program, it's |
| // still worth handling carefully. |
| // |
| // In order to address both of these issues, our approach is simple: never treat |
| // the memory as anything other than a raw pointer. Do not construct any |
| // references, mutable or immutable, even temporarily, and even if they are |
| // never used. This basically boils down to only accessing the memory using the |
| // various functions from core::ptr which operate directly on raw pointers. |
| |
| // NOTE(joshlf): |
| // - Since you must assume that the other process might be writing to the |
| // memory, there's no technical requirement to have exclusive access. E.g., we |
| // could safely implement Clone, have write and write_at take immutable |
| // references, etc. (see here for a discussion of the soundness of using |
| // copy_nonoverlapping simultaneously in multiple threads: |
| // https://users.rust-lang.org/t/copy-nonoverlapping-concurrently/18353). |
| // However, this would be confusing because it would depart from the Rust |
| // idiom. Instead, we provide SharedBuffer, which has ownership semantics |
| // analogous to Vec, and SharedBufferSlice and SharedBufferSliceMut, which |
| // have reference semantics analogous to immutable and mutable slice |
| // references. Similarly, write, write_at, and release_writes take mutable |
| // references. |
//   (We do still provide slicing methods, though; since concurrent access is
//   already assumed to be safe, there's no point not to.)
| // - Since all access to these buffers must go through the methods of |
| // SharedBuffer, correct code may not construct a reference to this memory. |
| // Thus, the references to dst and src passed to read, read_at, write, and |
| // write_at cannot overlap with the buffer itself, and so it's safe to use |
| // ptr::copy_nonoverlapping. |
| // - Note on volatility and observability: The memory in a SharedBuffer is |
| // either allocated by this process and then sent to another process, or |
| // allocated by another process and sent to this process. However, on Fuchsia, |
| // what's actually shared is a VMO, which is then mapped into the address |
//   space. While LLVM will almost certainly treat that mapping call as opaque,
//   and thus be unable to prove to itself that the returned memory is not
//   shared, it is worth hedging against that assumption being wrong. If
| // LLVM were, for some reason, to decide that mapping a VMO resulted in |
| // uniquely owned memory, it would be able to reason that writes to that |
| // memory could never be observed by other threads, and so if the writes were |
| // not observed by the _current_ thread, they could be elided altogether since |
| // they could have no effect. In order to hedge against this possibility, and |
| // to ensure that LLVM definitely cannot take this line of reasoning, we |
| // volatile write the pointer when we first construct the SharedBuffer. LLVM |
| // must conclude that it doesn't know who else is using the memory once a |
| // pointer to it has been written in a volatile manner, and so must assume |
//   that all future writes must be observable. This single volatile write,
//   which happens at most once per message (and more likely just once, when
//   the connection is first established), has minimal performance overhead.
| |
| // TODO(joshlf): |
| // - Create a variant for read-only memory |
| // - Create a variant for write-only memory? |
| |
| #![no_std] |
| |
| use core::marker::PhantomData; |
| use core::ops::{Bound, Range, RangeBounds}; |
| use core::ptr; |
| use core::sync::atomic::{fence, Ordering}; |
| |
| // A buffer with no ownership or reference semantics. It is the caller's |
| // responsibility to wrap this type in a type which provides ownership or |
// reference semantics, and to only call methods when appropriate.
| #[derive(Debug)] |
| struct SharedBufferInner { |
| // invariant: '(buf as usize) + len' doesn't overflow usize |
| buf: *mut u8, |
| len: usize, |
| } |
| |
| impl SharedBufferInner { |
| fn read_at(&self, offset: usize, dst: &mut [u8]) -> usize { |
| if let Some(to_copy) = overlap(offset, dst.len(), self.len) { |
| // Since overlap returned Some, we're guaranteed that 'offset + |
| // to_copy <= self.len'. That in turn means that, so long as the |
| // invariant holds that '(self.buf as usize) + self.len' doesn't |
| // overflow usize, then this call to offset_from won't overflow, and |
| // neither will the call to copy_nonoverlapping. |
| let base = offset_from(self.buf, offset); |
| unsafe { ptr::copy_nonoverlapping(base, dst.as_mut_ptr(), to_copy) }; |
| to_copy |
| } else { |
| panic!("byte offset {} out of range for SharedBuffer of length {}", offset, self.len); |
| } |
| } |
| |
| fn write_at(&self, offset: usize, src: &[u8]) -> usize { |
| if let Some(to_copy) = overlap(offset, src.len(), self.len) { |
| // Since overlap returned Some, we're guaranteed that 'offset + |
| // to_copy <= self.len'. That in turn means that, so long as the |
| // invariant holds that '(self.buf as usize) + self.len' doesn't |
| // overflow usize, then this call to offset_from won't overflow, and |
| // neither will the call to copy_nonoverlapping. |
| let base = offset_from(self.buf, offset); |
| unsafe { ptr::copy_nonoverlapping(src.as_ptr(), base, to_copy) }; |
| to_copy |
| } else { |
| panic!("byte offset {} out of range for SharedBuffer of length {}", offset, self.len); |
| } |
| } |
| |
| fn slice<R: RangeBounds<usize>>(&self, range: R) -> SharedBufferInner { |
| let range = canonicalize_range_infallible(self.len, range); |
| SharedBufferInner { buf: offset_from(self.buf, range.start), len: range.end - range.start } |
| } |
| |
| fn split_at(&self, idx: usize) -> (SharedBufferInner, SharedBufferInner) { |
| assert!(idx <= self.len, "split index out of bounds"); |
| let a = SharedBufferInner { buf: self.buf, len: idx }; |
| let b = SharedBufferInner { buf: offset_from(self.buf, idx), len: self.len - idx }; |
| (a, b) |
| } |
| } |
| |
// Verifies that 'offset' is in range for a buffer of length 'range_len' (i.e.,
// that 'offset <= range_len'), and returns the amount of overlap between a
// copy of length 'copy_len' starting at 'offset' and a buffer of length
// 'range_len'. The returned value, added to 'offset', is guaranteed not to
// exceed 'range_len'.
| // |
| // overlap is guaranteed to be correct for any three usize values. |
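//
// For example: overlap(0, 4, 8) == Some(4), overlap(6, 4, 8) == Some(2), and
// overlap(10, 4, 8) == None.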
| fn overlap(offset: usize, copy_len: usize, range_len: usize) -> Option<usize> { |
| if offset > range_len { |
| None |
| } else if offset.checked_add(copy_len).map(|sum| sum <= range_len).unwrap_or(false) { |
| // if 'offset + copy_len' overflows usize, then 'offset + copy_len > |
| // range_len', so we unwrap_or(false) |
| Some(copy_len) |
| } else { |
| Some(range_len - offset) |
| } |
| } |
| |
| // Like the offset method on primitive pointers, but for unsigned offsets. Both |
| // the 'offset' and 'add' methods on primitive pointers have the limitation that |
// the offset cannot overflow an isize or else it will cause UB. The
// offset_from function has no such restriction.
| // |
| // The caller must guarantee that '(ptr as usize) + offset' doesn't overflow |
| // usize. |
| fn offset_from(ptr: *mut u8, offset: usize) -> *mut u8 { |
| // just in case our logic is wrong, better to catch it at runtime than |
| // invoke UB |
| (ptr as usize).checked_add(offset).unwrap() as *mut u8 |
| } |
| |
| // Return the inclusive equivalent of the bound. |
| fn canonicalize_lower_bound(bound: Bound<&usize>) -> usize { |
| match bound { |
| Bound::Included(x) => *x, |
| Bound::Excluded(x) => *x + 1, |
| Bound::Unbounded => 0, |
| } |
| } |
| // Return the exclusive equivalent of the bound, verifying that it is in range |
| // of len. |
| fn canonicalize_upper_bound(len: usize, bound: Bound<&usize>) -> Option<usize> { |
| let bound = match bound { |
| Bound::Included(x) => *x + 1, |
| Bound::Excluded(x) => *x, |
| Bound::Unbounded => len, |
| }; |
| if bound > len { |
| return None; |
| } |
| Some(bound) |
| } |
| // Return the inclusive-exclusive equivalent of the bound, verifying that it is |
| // in range of len, and panicking if it is not or if the range is nonsensical. |
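//
// For example, with len = 10: '..' canonicalizes to 0..10 and '2..=5' to 2..6,
// while '0..11' panics because 11 > 10.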
| fn canonicalize_range_infallible<R: RangeBounds<usize>>(len: usize, range: R) -> Range<usize> { |
| let lower = canonicalize_lower_bound(range.start_bound()); |
| let upper = |
| canonicalize_upper_bound(len, range.end_bound()).expect("slice range out of bounds"); |
| assert!(lower <= upper, "invalid range"); |
| lower..upper |
| } |
| |
| /// A shared region of memory. |
| /// |
| /// A `SharedBuffer` is a view into a region of memory to which another process |
| /// has access. It provides methods to access this memory in a way that |
| /// preserves memory safety. From the perspective of the current process, it |
| /// owns its memory (analogous to a `Vec`). |
| /// |
| /// Since the buffer is shared by an untrusted process, it is never valid to |
| /// assume that a given region of the buffer will not change in between method |
| /// calls. Even if no thread in this process wrote anything to the buffer, the |
| /// other process might have. |
| /// |
| /// # Unmapping |
| /// |
| /// `SharedBuffer`s do nothing when dropped. In order to avoid leaking memory, |
| /// use the `consume` method to consume the `SharedBuffer` and get back the |
| /// underlying pointer and length, and unmap the memory manually. |
| #[derive(Debug)] |
| pub struct SharedBuffer { |
| inner: SharedBufferInner, |
| } |
| |
| impl SharedBuffer { |
| /// Create a new `SharedBuffer` from a raw buffer. |
| /// |
    /// `new` creates a new `SharedBuffer` from the provided buffer and length,
| /// taking ownership of the memory. |
| /// |
| /// # Safety |
| /// |
| /// Memory in a shared buffer must never be accessed except through the |
| /// methods of `SharedBuffer`. It must not be treated as normal memory, and |
| /// pointers to it must not be passed to unsafe code which is designed to |
| /// operate on normal memory. It must be guaranteed that, for the lifetime |
| /// of the `SharedBuffer`, the memory region is mapped, readable, and |
| /// writable. |
| /// |
| /// If any of these guarantees are violated, it may cause undefined |
| /// behavior. |
| #[inline] |
| pub unsafe fn new(buf: *mut u8, len: usize) -> SharedBuffer { |
| // Write the pointer and the length using a volatile write so that LLVM |
| // must assume that the memory has escaped, and that all future writes |
| // to it are observable. See the NOTE above for more details. |
| let mut scratch = (ptr::null_mut(), 0); |
| ptr::write_volatile(&mut scratch, (buf, len)); |
| |
| // Acquire any writes to the buffer that happened in a different thread |
| // or process already so they are visible without having to call the |
| // acquire_writes method. |
| fence(Ordering::Acquire); |
| SharedBuffer { inner: SharedBufferInner { buf, len } } |
| } |
| |
| /// Read bytes from the buffer. |
| /// |
| /// Read up to `dst.len()` bytes from the buffer, returning how many bytes |
| /// were read. The only thing that can cause fewer bytes to be read than |
| /// requested is if `dst` is larger than the buffer itself. |
| /// |
| /// A call to `read` is only guaranteed to happen after an operation in |
| /// another thread or process if the mechanism used to signal the other |
| /// process has well-defined memory ordering semantics. Otherwise, the |
| /// `acquire_writes` method must be called before `read` and after receiving |
| /// a signal from the other process in order to provide such ordering |
| /// guarantees. In practice, this means that `acquire_writes` should be the |
| /// first read operation that happens after receiving a signal from another |
| /// process that the memory may be read. See the `acquire_writes` |
| /// documentation for more details. |
| #[inline] |
| pub fn read(&self, dst: &mut [u8]) -> usize { |
| self.inner.read_at(0, dst) |
| } |
| |
| /// Read bytes from the buffer at an offset. |
| /// |
| /// Read up to `dst.len()` bytes starting at `offset` into the buffer, |
| /// returning how many bytes were read. The only thing that can cause fewer |
| /// bytes to be read than requested is if there are fewer than `dst.len()` |
| /// bytes available starting at `offset` within the buffer. |
| /// |
| /// A call to `read_at` is only guaranteed to happen after an operation in |
| /// another thread or process if the mechanism used to signal the other |
| /// process has well-defined memory ordering semantics. Otherwise, the |
| /// `acquire_writes` method must be called before `read_at` and after |
| /// receiving a signal from the other process in order to provide such |
| /// ordering guarantees. In practice, this means that `acquire_writes` |
| /// should be the first read operation that happens after receiving a signal |
| /// from another process that the memory may be read. See the |
| /// `acquire_writes` documentation for more details. |
| /// |
| /// # Panics |
| /// |
| /// `read_at` panics if `offset` is greater than the length of the buffer. |
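    ///
    /// # Examples
    ///
    /// A sketch of the partial-read behavior; the null pointer is a
    /// placeholder, and the example is never run:
    ///
    /// ```no_run
    /// # use shared_buffer::SharedBuffer;
    /// let buf = unsafe { SharedBuffer::new(core::ptr::null_mut(), 8) };
    /// let mut bytes = [0u8; 16];
    /// // Only 4 bytes are available starting at offset 4.
    /// assert_eq!(buf.read_at(4, &mut bytes[..]), 4);
    /// ```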
| #[inline] |
| pub fn read_at(&self, offset: usize, dst: &mut [u8]) -> usize { |
| self.inner.read_at(offset, dst) |
| } |
| |
| /// Write bytes to the buffer. |
| /// |
| /// Write up to `src.len()` bytes into the buffer, returning how many bytes |
| /// were written. The only thing that can cause fewer bytes to be written |
| /// than requested is if `src` is larger than the buffer itself. |
| /// |
| /// A call to `write` is only guaranteed to happen before an operation in |
| /// another thread or process if the mechanism used to signal the other |
| /// process has well-defined memory ordering semantics. Otherwise, the |
| /// `release_writes` method must be called after `write` and before |
| /// signalling the other process in order to provide such ordering |
| /// guarantees. In practice, this means that `release_writes` should be the |
| /// last write operation that happens before signalling another process that |
| /// the memory may be read. See the `release_writes` documentation for more |
| /// details. |
| #[inline] |
| pub fn write(&self, src: &[u8]) -> usize { |
| self.inner.write_at(0, src) |
| } |
| |
| /// Write bytes to the buffer at an offset. |
| /// |
| /// Write up to `src.len()` bytes starting at `offset` into the buffer, |
| /// returning how many bytes were written. The only thing that can cause |
| /// fewer bytes to be written than requested is if there are fewer than |
| /// `src.len()` bytes available starting at `offset` within the buffer. |
| /// |
| /// A call to `write_at` is only guaranteed to happen before an operation in |
| /// another thread or process if the mechanism used to signal the other |
| /// process has well-defined memory ordering semantics. Otherwise, the |
| /// `release_writes` method must be called after `write_at` and before |
| /// signalling the other process in order to provide such ordering |
| /// guarantees. In practice, this means that `release_writes` should be the |
| /// last write operation that happens before signalling another process that |
| /// the memory may be read. See the `release_writes` documentation for more |
| /// details. |
| /// |
| /// # Panics |
| /// |
| /// `write_at` panics if `offset` is greater than the length of the buffer. |
| #[inline] |
| pub fn write_at(&self, offset: usize, src: &[u8]) -> usize { |
| self.inner.write_at(offset, src) |
| } |
| |
| /// Acquire all writes performed by the other process. |
| /// |
| /// On some systems (such as Fuchsia, currently), the communication |
| /// mechanism used for signalling a process that memory is readable does not |
| /// have well-defined synchronization semantics. On those systems, this |
| /// method MUST be called after receiving such a signal, or else writes |
| /// performed before that signal are not guaranteed to be observed by this |
| /// process. |
| /// |
| /// `acquire_writes` acquires any writes performed on this buffer or any |
| /// slice within the buffer. |
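    ///
    /// # Examples
    ///
    /// A sketch of the intended pattern; `wait_for_signal` is a hypothetical
    /// stand-in for whatever signalling mechanism is in use:
    ///
    /// ```no_run
    /// # use shared_buffer::SharedBuffer;
    /// # fn wait_for_signal() {}
    /// # let buf = unsafe { SharedBuffer::new(core::ptr::null_mut(), 8) };
    /// wait_for_signal();    // the other process signals that memory is ready
    /// buf.acquire_writes(); // acquire its writes so this process sees them
    /// let mut bytes = [0u8; 8];
    /// buf.read(&mut bytes);
    /// ```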
| /// |
| /// # Note on Fuchsia |
| /// |
| /// Zircon, the Fuchsia kernel, will likely eventually have well-defined |
| /// semantics around the synchronization behavior of various syscalls. Once |
| /// that happens, calling this method in Fuchsia programs may become |
| /// optional. This work is tracked in [fxbug.dev/32098]. |
| /// |
| /// [fxbug.dev/32098]: # |
| // TODO(joshlf): Replace with link once issues are public. |
| #[inline] |
| pub fn acquire_writes(&self) { |
| fence(Ordering::Acquire); |
| } |
| |
| /// Release all writes performed so far. |
| /// |
| /// On some systems (such as Fuchsia, currently), the communication |
| /// mechanism used for signalling the other process that memory is readable |
| /// does not have well-defined synchronization semantics. On those systems, |
| /// this method MUST be called before such signalling, or else writes |
| /// performed before that signal are not guaranteed to be observed by the |
| /// other process. |
| /// |
| /// `release_writes` releases any writes performed on this buffer or any |
| /// slice within the buffer. |
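    ///
    /// # Examples
    ///
    /// A sketch of the intended pattern; `signal_other_process` is a
    /// hypothetical stand-in for whatever signalling mechanism is in use:
    ///
    /// ```no_run
    /// # use shared_buffer::SharedBuffer;
    /// # fn signal_other_process() {}
    /// # let mut buf = unsafe { SharedBuffer::new(core::ptr::null_mut(), 8) };
    /// buf.write(&[1, 2, 3, 4]);
    /// buf.release_writes();   // release the writes before signalling
    /// signal_other_process(); // now tell the other process memory is ready
    /// ```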
| /// |
| /// # Note on Fuchsia |
| /// |
| /// Zircon, the Fuchsia kernel, will likely eventually have well-defined |
| /// semantics around the synchronization behavior of various syscalls. Once |
| /// that happens, calling this method in Fuchsia programs may become |
| /// optional. This work is tracked in [fxbug.dev/32098]. |
| /// |
| /// [fxbug.dev/32098]: # |
| // TODO(joshlf): Replace with link once issues are public. |
| #[inline] |
| pub fn release_writes(&mut self) { |
| fence(Ordering::Release); |
| } |
| |
| /// The number of bytes in this `SharedBuffer`. |
| #[inline] |
| pub fn len(&self) -> usize { |
| self.inner.len |
| } |
| |
| /// Create a slice of the original `SharedBuffer`. |
| /// |
| /// Just like the slicing operation on array and slice references, `slice` |
| /// constructs a `SharedBufferSlice` which points to the same memory as the |
    /// original `SharedBuffer`, but restricted to the indices covered by
    /// `range` (lower bound inclusive, upper bound exclusive).
| /// |
| /// # Panics |
| /// |
| /// `slice` panics if `range` is out of bounds of `self` or if `range` is |
| /// nonsensical (its lower bound is larger than its upper bound). |
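    ///
    /// # Examples
    ///
    /// A sketch of slicing a 10-byte buffer; the null pointer is a
    /// placeholder, and the example is never run:
    ///
    /// ```no_run
    /// # use shared_buffer::SharedBuffer;
    /// let buf = unsafe { SharedBuffer::new(core::ptr::null_mut(), 10) };
    /// let slice = buf.slice(2..8);
    /// assert_eq!(slice.len(), 6);
    /// ```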
| #[inline] |
| pub fn slice<'a, R: RangeBounds<usize>>(&'a self, range: R) -> SharedBufferSlice<'a> { |
| SharedBufferSlice { inner: self.inner.slice(range), _marker: PhantomData } |
| } |
| |
| /// Create a mutable slice of the original `SharedBuffer`. |
| /// |
| /// Just like the mutable slicing operation on array and slice references, |
| /// `slice_mut` constructs a `SharedBufferSliceMut` which points to the same |
    /// memory as the original `SharedBuffer`, but restricted to the indices
    /// covered by `range` (lower bound inclusive, upper bound exclusive).
| /// |
| /// # Panics |
| /// |
| /// `slice_mut` panics if `range` is out of bounds of `self` or if `range` |
| /// is nonsensical (its lower bound is larger than its upper bound). |
| #[inline] |
| pub fn slice_mut<'a, R: RangeBounds<usize>>( |
| &'a mut self, |
| range: R, |
| ) -> SharedBufferSliceMut<'a> { |
| SharedBufferSliceMut { inner: self.inner.slice(range), _marker: PhantomData } |
| } |
| |
| /// Create two non-overlapping slices of the original `SharedBuffer`. |
| /// |
| /// Just like the `split_at` method on array and slice references, |
| /// `split_at` constructs one `SharedBufferSlice` which represents bytes |
| /// `[0, idx)`, and one which represents bytes `[idx, len)`, where `len` is |
| /// the length of the buffer. |
| /// |
| /// # Panics |
| /// |
| /// `split_at` panics if `idx > self.len()`. |
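    ///
    /// # Examples
    ///
    /// A sketch of splitting a 10-byte buffer; the null pointer is a
    /// placeholder, and the example is never run:
    ///
    /// ```no_run
    /// # use shared_buffer::SharedBuffer;
    /// let buf = unsafe { SharedBuffer::new(core::ptr::null_mut(), 10) };
    /// let (left, right) = buf.split_at(4);
    /// assert_eq!(left.len(), 4);
    /// assert_eq!(right.len(), 6);
    /// ```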
| #[inline] |
| pub fn split_at<'a>(&'a self, idx: usize) -> (SharedBufferSlice<'a>, SharedBufferSlice<'a>) { |
| let (a, b) = self.inner.split_at(idx); |
| let a = SharedBufferSlice { inner: a, _marker: PhantomData }; |
| let b = SharedBufferSlice { inner: b, _marker: PhantomData }; |
| (a, b) |
| } |
| |
| /// Create two non-overlapping mutable slices of the original `SharedBuffer`. |
| /// |
| /// Just like the `split_at_mut` method on array and slice references, |
    /// `split_at_mut` constructs one `SharedBufferSliceMut` which represents
| /// bytes `[0, idx)`, and one which represents bytes `[idx, len)`, where |
| /// `len` is the length of the buffer. |
| /// |
| /// # Panics |
| /// |
| /// `split_at_mut` panics if `idx > self.len()`. |
| #[inline] |
| pub fn split_at_mut<'a>( |
| &'a mut self, |
| idx: usize, |
| ) -> (SharedBufferSliceMut<'a>, SharedBufferSliceMut<'a>) { |
| let (a, b) = self.inner.split_at(idx); |
| let a = SharedBufferSliceMut { inner: a, _marker: PhantomData }; |
| let b = SharedBufferSliceMut { inner: b, _marker: PhantomData }; |
| (a, b) |
| } |
| |
| /// Get the buffer pointer and length so that the memory can be freed. |
| /// |
| /// This method is an alternative to calling `consume` if relinquishing |
| /// ownership of the object is infeasible (for example, when the object is a |
| /// struct field and thus can't be moved out of the struct). Since it allows |
| /// the object to continue existing, it must be used with care (see the |
| /// "Safety" section below). |
| /// |
| /// # Safety |
| /// |
| /// The returned pointer must *only* be used to free the memory. Since the |
| /// memory is shared by another process, using it as a normal raw pointer to |
| /// normal memory owned by this process is unsound. |
| /// |
| /// If the pointer is used for this purpose, then the caller must ensure |
| /// that no methods will be called on the object after the call to |
| /// `as_ptr_len`. The only scenario in which the object may be used again is |
| /// if the caller does nothing at all with the return value of this method |
| /// (although that would be kind of pointless...). |
| pub fn as_ptr_len(&mut self) -> (*mut u8, usize) { |
| (self.inner.buf, self.inner.len) |
| } |
| |
| /// Consume the `SharedBuffer`, returning the underlying buffer pointer and |
| /// length. |
| /// |
| /// Since `SharedBuffer`s do nothing on drop, the only way to ensure that |
| /// resources are not leaked is to `consume` a `SharedBuffer` and then unmap |
| /// the memory manually. |
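    ///
    /// # Examples
    ///
    /// A sketch of the intended teardown; `unmap` is a hypothetical stand-in
    /// for the platform's unmapping call:
    ///
    /// ```no_run
    /// # use shared_buffer::SharedBuffer;
    /// # unsafe fn unmap(_ptr: *mut u8, _len: usize) {}
    /// # let buf = unsafe { SharedBuffer::new(core::ptr::null_mut(), 8) };
    /// let (ptr, len) = buf.consume();
    /// unsafe { unmap(ptr, len) };
    /// ```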
| #[inline] |
| pub fn consume(self) -> (*mut u8, usize) { |
| (self.inner.buf, self.inner.len) |
| } |
| } |
| |
| impl Drop for SharedBuffer { |
| fn drop(&mut self) { |
| // Release any writes performed after the last call to |
| // self.release_writes(). |
| fence(Ordering::Release); |
| } |
| } |
| |
| /// An immutable slice into a `SharedBuffer`. |
| /// |
| /// A `SharedBufferSlice` is created with `SharedBuffer::slice`, |
| /// `SharedBufferSlice::slice`, or `SharedBufferSliceMut::slice`. |
| #[derive(Debug)] |
| pub struct SharedBufferSlice<'a> { |
| inner: SharedBufferInner, |
| _marker: PhantomData<&'a ()>, |
| } |
| |
| impl<'a> SharedBufferSlice<'a> { |
| /// Read bytes from the buffer. |
| /// |
| /// Read up to `dst.len()` bytes from the buffer, returning how many bytes |
| /// were read. The only thing that can cause fewer bytes to be read than |
| /// requested is if `dst` is larger than the buffer itself. |
| /// |
| /// A call to `read` is only guaranteed to happen after an operation in |
| /// another thread or process if the mechanism used to signal the other |
| /// process has well-defined memory ordering semantics. Otherwise, the |
| /// `acquire_writes` method must be called before `read` and after receiving |
| /// a signal from the other process in order to provide such ordering |
| /// guarantees. In practice, this means that `acquire_writes` should be the |
| /// first read operation that happens after receiving a signal from another |
| /// process that the memory may be read. See the `acquire_writes` |
| /// documentation for more details. |
| #[inline] |
| pub fn read(&self, dst: &mut [u8]) -> usize { |
| self.inner.read_at(0, dst) |
| } |
| |
| /// Read bytes from the buffer at an offset. |
| /// |
| /// Read up to `dst.len()` bytes starting at `offset` into the buffer, |
| /// returning how many bytes were read. The only thing that can cause fewer |
| /// bytes to be read than requested is if there are fewer than `dst.len()` |
| /// bytes available starting at `offset` within the buffer. |
| /// |
| /// A call to `read_at` is only guaranteed to happen after an operation in |
| /// another thread or process if the mechanism used to signal the other |
| /// process has well-defined memory ordering semantics. Otherwise, the |
| /// `acquire_writes` method must be called before `read_at` and after |
| /// receiving a signal from the other process in order to provide such |
| /// ordering guarantees. In practice, this means that `acquire_writes` |
| /// should be the first read operation that happens after receiving a signal |
| /// from another process that the memory may be read. See the |
| /// `acquire_writes` documentation for more details. |
| /// |
| /// # Panics |
| /// |
| /// `read_at` panics if `offset` is greater than the length of the buffer. |
| #[inline] |
| pub fn read_at(&self, offset: usize, dst: &mut [u8]) -> usize { |
| self.inner.read_at(offset, dst) |
| } |
| |
| /// Acquire all writes performed by the other process. |
| /// |
| /// On some systems (such as Fuchsia, currently), the communication |
| /// mechanism used for signalling a process that memory is readable does not |
| /// have well-defined synchronization semantics. On those systems, this |
| /// method MUST be called after receiving such a signal, or else writes |
| /// performed before that signal are not guaranteed to be observed by this |
| /// process. |
| /// |
| /// `acquire_writes` acquires any writes performed on this buffer or any |
| /// slice within the buffer. |
| /// |
| /// # Note on Fuchsia |
| /// |
| /// Zircon, the Fuchsia kernel, will likely eventually have well-defined |
| /// semantics around the synchronization behavior of various syscalls. Once |
| /// that happens, calling this method in Fuchsia programs may become |
| /// optional. This work is tracked in [fxbug.dev/32098]. |
| /// |
| /// [fxbug.dev/32098]: # |
| // TODO(joshlf): Replace with link once issues are public. |
| #[inline] |
| pub fn acquire_writes(&self) { |
| fence(Ordering::Acquire); |
| } |
| |
| /// Create a sub-slice of this `SharedBufferSlice`. |
| /// |
| /// Just like the slicing operation on array and slice references, `slice` |
| /// constructs a new `SharedBufferSlice` which points to the same memory as |
    /// the original, but restricted to the indices covered by `range` (lower
    /// bound inclusive, upper bound exclusive).
| /// |
| /// # Panics |
| /// |
| /// `slice` panics if `range` is out of bounds of `self` or if `range` is |
| /// nonsensical (its lower bound is larger than its upper bound). |
| #[inline] |
| pub fn slice<R: RangeBounds<usize>>(&self, range: R) -> SharedBufferSlice<'a> { |
| SharedBufferSlice { inner: self.inner.slice(range), _marker: PhantomData } |
| } |
| |
| /// Split this `SharedBufferSlice` in two. |
| /// |
| /// Just like the `split_at` method on array and slice references, |
| /// `split_at` constructs one `SharedBufferSlice` which represents bytes |
| /// `[0, idx)`, and one which represents bytes `[idx, len)`, where `len` is |
| /// the length of the buffer slice. |
| /// |
| /// # Panics |
| /// |
| /// `split_at` panics if `idx > self.len()`. |
| #[inline] |
| pub fn split_at(&self, idx: usize) -> (SharedBufferSlice<'a>, SharedBufferSlice<'a>) { |
| let (a, b) = self.inner.split_at(idx); |
| let a = SharedBufferSlice { inner: a, _marker: PhantomData }; |
| let b = SharedBufferSlice { inner: b, _marker: PhantomData }; |
| (a, b) |
| } |
| |
| /// The number of bytes in this `SharedBufferSlice`. |
| #[inline] |
| pub fn len(&self) -> usize { |
| self.inner.len |
| } |
| } |
| |
| /// A mutable slice into a `SharedBuffer`. |
| /// |
| /// A `SharedBufferSliceMut` is created with `SharedBuffer::slice_mut` or |
| /// `SharedBufferSliceMut::slice_mut`. |
| #[derive(Debug)] |
| pub struct SharedBufferSliceMut<'a> { |
| inner: SharedBufferInner, |
| _marker: PhantomData<&'a ()>, |
| } |
| |
| impl<'a> SharedBufferSliceMut<'a> { |
| /// Read bytes from the buffer. |
| /// |
| /// Read up to `dst.len()` bytes from the buffer, returning how many bytes |
| /// were read. The only thing that can cause fewer bytes to be read than |
| /// requested is if `dst` is larger than the buffer itself. |
| /// |
| /// A call to `read` is only guaranteed to happen after an operation in |
| /// another thread or process if the mechanism used to signal the other |
| /// process has well-defined memory ordering semantics. Otherwise, the |
| /// `acquire_writes` method must be called before `read` and after receiving |
| /// a signal from the other process in order to provide such ordering |
| /// guarantees. In practice, this means that `acquire_writes` should be the |
| /// first read operation that happens after receiving a signal from another |
| /// process that the memory may be read. See the `acquire_writes` |
| /// documentation for more details. |
| #[inline] |
| pub fn read(&self, dst: &mut [u8]) -> usize { |
| self.inner.read_at(0, dst) |
| } |
| |
| /// Read bytes from the buffer at an offset. |
| /// |
| /// Read up to `dst.len()` bytes starting at `offset` into the buffer, |
| /// returning how many bytes were read. The only thing that can cause fewer |
| /// bytes to be read than requested is if there are fewer than `dst.len()` |
| /// bytes available starting at `offset` within the buffer. |
| /// |
| /// A call to `read_at` is only guaranteed to happen after an operation in |
| /// another thread or process if the mechanism used to signal the other |
| /// process has well-defined memory ordering semantics. Otherwise, the |
| /// `acquire_writes` method must be called before `read_at` and after |
| /// receiving a signal from the other process in order to provide such |
| /// ordering guarantees. In practice, this means that `acquire_writes` |
| /// should be the first read operation that happens after receiving a signal |
| /// from another process that the memory may be read. See the |
| /// `acquire_writes` documentation for more details. |
| /// |
| /// # Panics |
| /// |
| /// `read_at` panics if `offset` is greater than the length of the buffer. |
| #[inline] |
| pub fn read_at(&self, offset: usize, dst: &mut [u8]) -> usize { |
| self.inner.read_at(offset, dst) |
| } |
| |
| /// Write bytes to the buffer. |
| /// |
| /// Write up to `src.len()` bytes into the buffer, returning how many bytes |
| /// were written. The only thing that can cause fewer bytes to be written |
| /// than requested is if `src` is larger than the buffer itself. |
| /// |
| /// A call to `write` is only guaranteed to happen before an operation in |
| /// another thread or process if the mechanism used to signal the other |
| /// process has well-defined memory ordering semantics. Otherwise, the |
| /// `release_writes` method must be called after `write` and before |
| /// signalling the other process in order to provide such ordering |
| /// guarantees. In practice, this means that `release_writes` should be the |
| /// last write operation that happens before signalling another process that |
| /// the memory may be read. See the `release_writes` documentation for more |
| /// details. |
| #[inline] |
| pub fn write(&self, src: &[u8]) -> usize { |
| self.inner.write_at(0, src) |
| } |
| |
| /// Write bytes to the buffer at an offset. |
| /// |
| /// Write up to `src.len()` bytes starting at `offset` into the buffer, |
| /// returning how many bytes were written. The only thing that can cause |
| /// fewer bytes to be written than requested is if there are fewer than |
| /// `src.len()` bytes available starting at `offset` within the buffer. |
| /// |
| /// A call to `write_at` is only guaranteed to happen before an operation in |
| /// another thread or process if the mechanism used to signal the other |
| /// process has well-defined memory ordering semantics. Otherwise, the |
| /// `release_writes` method must be called after `write_at` and before |
| /// signalling the other process in order to provide such ordering |
| /// guarantees. In practice, this means that `release_writes` should be the |
| /// last write operation that happens before signalling another process that |
| /// the memory may be read. See the `release_writes` documentation for more |
| /// details. |
| /// |
| /// # Panics |
| /// |
| /// `write_at` panics if `offset` is greater than the length of the buffer. |
| #[inline] |
| pub fn write_at(&self, offset: usize, src: &[u8]) -> usize { |
| self.inner.write_at(offset, src) |
| } |
| |
| /// Acquire all writes performed by the other process. |
| /// |
| /// On some systems (such as Fuchsia, currently), the communication |
| /// mechanism used for signalling a process that memory is readable does not |
| /// have well-defined synchronization semantics. On those systems, this |
| /// method MUST be called after receiving such a signal, or else writes |
| /// performed before that signal are not guaranteed to be observed by this |
| /// process. |
| /// |
| /// `acquire_writes` acquires any writes performed on this buffer or any |
| /// slice within the buffer. |
| /// |
| /// # Note on Fuchsia |
| /// |
| /// Zircon, the Fuchsia kernel, will likely eventually have well-defined |
| /// semantics around the synchronization behavior of various syscalls. Once |
| /// that happens, calling this method in Fuchsia programs may become |
| /// optional. This work is tracked in [fxbug.dev/32098]. |
| /// |
| /// [fxbug.dev/32098]: # |
| // TODO(joshlf): Replace with link once issues are public. |
| #[inline] |
| pub fn acquire_writes(&self) { |
| fence(Ordering::Acquire); |
| } |
| |
| /// Atomically release all writes performed so far. |
| /// |
| /// On some systems (such as Fuchsia, currently), the communication |
| /// mechanism used for signalling the other process that memory is readable |
| /// does not have well-defined synchronization semantics. On those systems, |
| /// this method MUST be called before such signalling, or else writes |
| /// performed before that signal are not guaranteed to be observed by the |
| /// other process. |
| /// |
| /// `release_writes` releases any writes performed on this slice or any |
| /// sub-slice of this slice. |
| /// |
| /// # Note on Fuchsia |
| /// |
| /// Zircon, the Fuchsia kernel, will likely eventually have well-defined |
| /// semantics around the synchronization behavior of various syscalls. Once |
| /// that happens, calling this method in Fuchsia programs may become |
| /// optional. This work is tracked in [fxbug.dev/32098]. |
| /// |
| /// [fxbug.dev/32098]: # |
| // TODO(joshlf): Replace with link once issues are public. |
| #[inline] |
| pub fn release_writes(&mut self) { |
| fence(Ordering::Release); |
| } |
| |
| /// Create a sub-slice of this `SharedBufferSliceMut`. |
| /// |
| /// Just like the slicing operation on array and slice references, `slice` |
| /// constructs a new `SharedBufferSlice` which points to the same memory as |
    /// the original, but restricted to the indices covered by `range` (lower
    /// bound inclusive, upper bound exclusive).
| /// |
| /// # Panics |
| /// |
| /// `slice` panics if `range` is out of bounds of `self` or if `range` is |
| /// nonsensical (its lower bound is larger than its upper bound). |
| #[inline] |
| pub fn slice<R: RangeBounds<usize>>(&self, range: R) -> SharedBufferSlice<'a> { |
| SharedBufferSlice { inner: self.inner.slice(range), _marker: PhantomData } |
| } |
| |
| /// Create a mutable slice of the original `SharedBufferSliceMut`. |
| /// |
| /// Just like the mutable slicing operation on array and slice references, |
| /// `slice_mut` constructs a new `SharedBufferSliceMut` which points to the |
    /// same memory as the original, but restricted to the indices covered by
    /// `range` (lower bound inclusive, upper bound exclusive).
| /// |
| /// # Panics |
| /// |
| /// `slice_mut` panics if `range` is out of bounds of `self` or if `range` |
| /// is nonsensical (its lower bound is larger than its upper bound). |
| #[inline] |
| pub fn slice_mut<R: RangeBounds<usize>>(&mut self, range: R) -> SharedBufferSliceMut<'a> { |
| SharedBufferSliceMut { inner: self.inner.slice(range), _marker: PhantomData } |
| } |
| |
| /// Split this `SharedBufferSliceMut` into two immutable slices. |
| /// |
| /// Just like the `split_at` method on array and slice references, |
| /// `split_at` constructs one `SharedBufferSlice` which represents bytes |
| /// `[0, idx)`, and one which represents bytes `[idx, len)`, where `len` is |
| /// the length of the buffer slice. |
| /// |
| /// # Panics |
| /// |
| /// `split_at` panics if `idx > self.len()`. |
| #[inline] |
| pub fn split_at(&self, idx: usize) -> (SharedBufferSlice<'a>, SharedBufferSlice<'a>) { |
| let (a, b) = self.inner.split_at(idx); |
| let a = SharedBufferSlice { inner: a, _marker: PhantomData }; |
| let b = SharedBufferSlice { inner: b, _marker: PhantomData }; |
| (a, b) |
| } |
| |
| /// Split this `SharedBufferSliceMut` in two. |
| /// |
| /// Just like the `split_at_mut` method on array and slice references, |
    /// `split_at_mut` constructs one `SharedBufferSliceMut` which represents
| /// `[0, idx)`, and one which represents bytes `[idx, len)`, where `len` is |
| /// the length of the buffer slice. |
| /// |
| /// # Panics |
| /// |
| /// `split_at_mut` panics if `idx > self.len()`. |
| #[inline] |
| pub fn split_at_mut( |
| &mut self, |
| idx: usize, |
| ) -> (SharedBufferSliceMut<'a>, SharedBufferSliceMut<'a>) { |
| let (a, b) = self.inner.split_at(idx); |
| let a = SharedBufferSliceMut { inner: a, _marker: PhantomData }; |
| let b = SharedBufferSliceMut { inner: b, _marker: PhantomData }; |
| (a, b) |
| } |
| |
    /// The number of bytes in this `SharedBufferSliceMut`.
| #[inline] |
| pub fn len(&self) -> usize { |
| self.inner.len |
| } |
| } |
| |
| // Send and Sync implementations. Send and Sync are definitely safe since |
| // SharedBufferXXX are all written under the assumption that a remote process is |
| // concurrently modifying the memory. However, we aim to provide a Rust-like API |
| // with lifetimes and an immutable/mutable distinction, so the real question is |
| // whether Send and Sync make sense by analogy to normal Rust types. Insofar as |
| // SharedBuffer is analogous to [u8], SharedBufferSlice is analogous to &[u8], |
| // and SharedBufferSliceMut is analogous to &mut [u8], the answer is yes - all |
| // of those types implement both Send and Sync. |
| |
| unsafe impl Send for SharedBuffer {} |
| unsafe impl Sync for SharedBuffer {} |
| unsafe impl<'a> Send for SharedBufferSlice<'a> {} |
| unsafe impl<'a> Sync for SharedBufferSlice<'a> {} |
| unsafe impl<'a> Send for SharedBufferSliceMut<'a> {} |
| unsafe impl<'a> Sync for SharedBufferSliceMut<'a> {} |
| |
| #[cfg(test)] |
| mod tests { |
| use core::mem; |
| use core::ptr; |
| |
| use super::{overlap, SharedBuffer}; |
| |
| // use the referent as the backing memory for a SharedBuffer |
| unsafe fn buf_from_ref<T>(x: &mut T) -> SharedBuffer { |
| let size = mem::size_of::<T>(); |
| SharedBuffer::new(x as *mut _ as *mut u8, size) |
| } |
| |
| #[test] |
| fn test_buf() { |
| // initialize some memory and turn it into a SharedBuffer |
| const ONE: [u8; 8] = [0, 1, 2, 3, 4, 5, 6, 7]; |
| let mut buf_memory = ONE; |
| let buf = unsafe { buf_from_ref(&mut buf_memory) }; |
| |
| // we read the same initial contents back |
| let mut bytes = [0u8; 8]; |
| assert_eq!(buf.read(&mut bytes[..]), 8); |
| assert_eq!(bytes, ONE); |
| |
| // when we write new contents, we read those back |
| const TWO: [u8; 8] = [7, 6, 5, 4, 3, 2, 1, 0]; |
| assert_eq!(buf.write(&TWO[..]), 8); |
| assert_eq!(buf.read(&mut bytes[..]), 8); |
| assert_eq!(bytes, TWO); |
| |
| // even with a bigger buffer, we still only read 8 bytes |
| let mut bytes = [0u8; 16]; |
| assert_eq!(buf.read(&mut bytes[..]), 8); |
| // starting at offset 4, there are only 4 bytes left, so we only read 4 |
| // bytes |
| assert_eq!(buf.read_at(4, &mut bytes[..]), 4); |
| } |
| |
| #[test] |
| fn test_slice() { |
| // various slices give us the lengths we expect |
| let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) }; |
| let tmp = buf.slice(..); |
| assert_eq!(tmp.len(), 10); |
| let tmp = buf.slice(..10); |
| assert_eq!(tmp.len(), 10); |
| let tmp = buf.slice(5..10); |
| assert_eq!(tmp.len(), 5); |
| let tmp = buf.slice(0..0); |
| assert_eq!(tmp.len(), 0); |
| let tmp = buf.slice(10..10); |
| assert_eq!(tmp.len(), 0); |
| |
| // initialize some memory and turn it into a SharedBuffer |
| const INIT: [u8; 8] = [0, 1, 2, 3, 4, 5, 6, 7]; |
| let mut buf_memory = INIT; |
| let buf = unsafe { buf_from_ref(&mut buf_memory) }; |
| |
| // we read the same initial contents back |
| let mut bytes = [0u8; 8]; |
| assert_eq!(buf.read_at(0, &mut bytes[..]), 8); |
| assert_eq!(bytes, INIT); |
| |
| // create a slice to the second half of the SharedBuffer |
| let buf2 = buf.slice(4..8); |
| |
| // now we read back only the second half of the original SharedBuffer |
| bytes = [0; 8]; |
| assert_eq!(buf2.read(&mut bytes[..]), 4); |
| assert_eq!(bytes, [4, 5, 6, 7, 0, 0, 0, 0]); |
| } |
| |
| #[test] |
| fn test_split() { |
| // various splits give us the lengths we expect |
| let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) }; |
| let (tmp1, tmp2) = buf.split_at(10); |
| assert_eq!(tmp1.len(), 10); |
| assert_eq!(tmp2.len(), 0); |
| let (tmp1, tmp2) = buf.split_at(5); |
| assert_eq!(tmp1.len(), 5); |
| assert_eq!(tmp2.len(), 5); |
| let (tmp1, tmp2) = buf.split_at(0); |
| assert_eq!(tmp1.len(), 0); |
| assert_eq!(tmp2.len(), 10); |
| |
| // initialize some memory and turn it into a SharedBuffer |
| const INIT: [u8; 8] = [0, 1, 2, 3, 4, 5, 6, 7]; |
| let mut buf_memory = INIT; |
| let mut buf = unsafe { buf_from_ref(&mut buf_memory) }; |
| |
| // we read the same initial contents back |
| let mut bytes = [0u8; 8]; |
| assert_eq!(buf.read_at(0, &mut bytes[..]), 8); |
| assert_eq!(bytes, INIT); |
| |
| // split in two equal-sized halves |
| let (buf1, buf2) = buf.split_at_mut(4); |
| |
| // now we read back the halves separately |
| bytes = [0; 8]; |
| assert_eq!(buf1.read(&mut bytes[..4]), 4); |
| assert_eq!(buf2.read(&mut bytes[4..]), 4); |
| assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]); |
| |
| // use the mutable slices to write to the buffer |
| assert_eq!(buf1.write(&[7, 6, 5, 4]), 4); |
| assert_eq!(buf2.write(&[3, 2, 1, 0]), 4); |
| |
| // split again into equal-sized quarters |
| let ((buf1, buf2), (buf3, buf4)) = (buf1.split_at(2), buf2.split_at(2)); |
| |
| // now we read back the quarters separately |
| bytes = [0; 8]; |
| assert_eq!(buf1.read(&mut bytes[..2]), 2); |
| assert_eq!(buf2.read(&mut bytes[2..4]), 2); |
| assert_eq!(buf3.read(&mut bytes[4..6]), 2); |
| assert_eq!(buf4.read(&mut bytes[6..]), 2); |
| assert_eq!(bytes, [7, 6, 5, 4, 3, 2, 1, 0]); |
| } |
| |
| #[test] |
| fn test_overlap() { |
| // overlap(offset, copy_len, range_len) |
| |
| // first branch: offset > range_len |
| assert_eq!(overlap(10, 4, 8), None); |
| |
| // middle branch: offset + copy_len <= range_len |
| assert_eq!(overlap(0, 4, 8), Some(4)); |
| assert_eq!(overlap(4, 4, 8), Some(4)); |
| |
        // overflow of 'offset + copy_len' in the middle branch's check falls
        // through to the last branch
| assert_eq!(overlap(4, ::core::usize::MAX, 8), Some(4)); |
| |
| // last branch: else |
| assert_eq!(overlap(6, 4, 8), Some(2)); |
| assert_eq!(overlap(8, 4, 8), Some(0)); |
| } |
| |
| #[test] |
| #[should_panic] |
| fn test_panic_read_at() { |
| let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) }; |
| // "byte offset 11 out of range for SharedBuffer of length 10" |
| buf.read_at(11, &mut [][..]); |
| } |
| |
| #[test] |
| #[should_panic] |
| fn test_panic_write_at() { |
| let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) }; |
| // "byte offset 11 out of range for SharedBuffer of length 10" |
| buf.write_at(11, &[][..]); |
| } |
| |
| #[test] |
| #[should_panic] |
| fn test_panic_slice_1() { |
| let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) }; |
| // "byte index 11 out of range for SharedBuffer of length 10" |
| buf.slice(0..11); |
| } |
| |
| #[test] |
| #[should_panic] |
| fn test_panic_slice_2() { |
| let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) }; |
| // "slice starts at byte 6 but ends at byte 5" |
| #[allow(clippy::reversed_empty_ranges)] |
| buf.slice(6..5); |
| } |
| } |