commit 6f29a8392403f70bfa1080964a65540b6f3699fe
author:    Jason Evans <jasone@canonware.com>  Thu Jun 02 18:43:10 2016 -0700
committer: Jason Evans <jasone@canonware.com>  Sun Jun 05 20:59:57 2016 -0700
tree:      60dcedc3dc0b77c6e3280979ae5a5a267834de4e
parent:    7be2ebc23f0f145e095e7230d7d8a202b8dcc55e
Add rtree lookup path caching.

rtree-based extent lookups remain more expensive than chunk-based run
lookups, but with this optimization the fast path slowdown is ~3 CPU
cycles per metadata lookup (on Intel Core i7-4980HQ), versus ~11 cycles
prior. The path caching speedup tends to degrade gracefully unless
allocated memory is spread far apart (as is the case when using a
mixture of sbrk() and mmap()).