Prevent false sharing by padding and aligning to the length of a cache line.
In concurrent programming, it is sometimes desirable to ensure that commonly accessed shared data is not all placed into the same cache line. Updating an atomic value invalidates the whole cache line it belongs to, which makes the next access to that cache line slower for other CPU cores. Use `CachePadded` to ensure that updating one piece of data doesn't invalidate other cached data.
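The effect can be sketched without the crate. The `Padded` wrapper below is a hypothetical stand-in with a hard-coded 64-byte alignment (an assumption for the sketch; `CachePadded` picks the value per architecture), keeping two atomic counters out of the same cache line:

```rust
use std::sync::atomic::AtomicUsize;

// Hypothetical stand-in for CachePadded with a fixed 64-byte alignment;
// the real type chooses the alignment per target architecture.
#[repr(align(64))]
struct Padded<T>(T);

fn main() {
    // Without padding, two adjacent counters would likely share a cache
    // line, so a write to one would evict the other from every core's cache.
    let counters = [Padded(AtomicUsize::new(0)), Padded(AtomicUsize::new(0))];
    let a = &counters[0].0 as *const AtomicUsize as usize;
    let b = &counters[1].0 as *const AtomicUsize as usize;
    // Each counter starts on its own 64-byte boundary.
    assert_eq!(a % 64, 0);
    assert_eq!(b % 64, 0);
    assert!(b - a >= 64);
}
```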
Cache lines are assumed to be N bytes long, depending on the architecture:

- On x86-64 and aarch64, N = 128.
- On all other architectures, N = 64.

Note that N is just a reasonable guess and is not guaranteed to match the actual cache line length of the machine the program is running on.
- The size of `CachePadded<T>` is the smallest multiple of N bytes large enough to accommodate a value of type `T`.
- The alignment of `CachePadded<T>` is the maximum of N bytes and the alignment of `T`.
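To illustrate the arithmetic behind these rules, assume N = 64 (the actual N is architecture-dependent) and use a hypothetical fixed-alignment stand-in: a 100-byte payload is then padded out to 128 bytes, the next multiple of 64:

```rust
use std::mem;

// Hypothetical stand-in for CachePadded with N fixed at 64 bytes;
// the real type chooses N per target architecture.
#[repr(align(64))]
struct Padded<T>(T);

fn main() {
    // A 100-byte value rounds up to the next multiple of 64, i.e. 128.
    assert_eq!(mem::size_of::<Padded<[u8; 100]>>(), 128);
    // Alignment is max(64, align_of::<[u8; 100]>()), which is 64 here.
    assert_eq!(mem::align_of::<Padded<[u8; 100]>>(), 64);
    println!("size = {}, align = {}",
             mem::size_of::<Padded<[u8; 100]>>(),
             mem::align_of::<Padded<[u8; 100]>>());
}
```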
Alignment and padding:

```rust
use cache_padded::CachePadded;

let array = [CachePadded::new(1i8), CachePadded::new(2i8)];
let addr1 = &*array[0] as *const i8 as usize;
let addr2 = &*array[1] as *const i8 as usize;

assert!(addr2 - addr1 >= 64);
assert_eq!(addr1 % 64, 0);
assert_eq!(addr2 % 64, 0);
```
When building a concurrent queue with a head and a tail index, it is wise to place the indices in different cache lines so that concurrent threads pushing and popping elements don't invalidate each other's cache lines:
```rust
use cache_padded::CachePadded;
use std::sync::atomic::AtomicUsize;

struct Queue<T> {
    head: CachePadded<AtomicUsize>,
    tail: CachePadded<AtomicUsize>,
    buffer: *mut T,
}
```
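As a self-contained sketch of the same layout (again using a hypothetical fixed-64-byte stand-in for `CachePadded`, since the real alignment is architecture-dependent), a pusher thread and a popper thread can each hammer their own index without touching the other's cache line:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

// Hypothetical stand-in for CachePadded with a fixed 64-byte alignment.
#[repr(align(64))]
struct Padded<T>(T);

struct Indices {
    head: Padded<AtomicUsize>,
    tail: Padded<AtomicUsize>,
}

fn main() {
    let idx = Indices {
        head: Padded(AtomicUsize::new(0)),
        tail: Padded(AtomicUsize::new(0)),
    };
    // One thread advances head (pop side), the other advances tail
    // (push side); the padding keeps their writes in separate cache lines.
    thread::scope(|s| {
        s.spawn(|| {
            for _ in 0..1000 {
                idx.head.0.fetch_add(1, Ordering::Relaxed);
            }
        });
        s.spawn(|| {
            for _ in 0..1000 {
                idx.tail.0.fetch_add(1, Ordering::Relaxed);
            }
        });
    });
    assert_eq!(idx.head.0.load(Ordering::Relaxed), 1000);
    assert_eq!(idx.tail.0.load(Ordering::Relaxed), 1000);
}
```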
Licensed under either of

- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)

at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.