HotSort throughput on Vulkan (Mesa) with a 704-core AMD V1807B APU:
HotSort throughput on Vulkan with a 192-core Intel HD 630:
Note that these sorting rates translate to sub-millisecond to multi-millisecond execution times on small GPUs:
HotSort provides a build-time tool, `hotsort_gen`, that generates highly optimized sets of compiled compute kernels targeting a specific GPU architecture. These binary code modules are packed with configuration data into what is called a HotSort target file.
HotSort also provides a small runtime library to load a target into your application.
A simple benchmarking example for HotSort can be found here:
Note that HotSort implements a comparison sort and supports in-place sorting.
Not all targeted architectures have been tested.
The following architectures are supported:
| Vendor | Architecture | 32-bit | 64-bit | 32+32-bit | Notes |
| ------ | ------------ | :----: | :----: | :-------: | ----- |
| NVIDIA | sm_35, sm_37, sm_50, sm_52, sm_60, sm_61, sm_70 | ✔ | ✔ | ❌ | Not tested on all architectures |
| NVIDIA | sm_30, sm_32, sm_53, sm_62 | ❌ | ❌ | ❌ | Need to generate properly shaped kernels |
| AMD | GCN | ✔ | ✔ | ❌ | Tested on Linux Mesa 18.2 |
| Intel | GEN8+ | ✔ | ✔ | ❌ | Good, but the assumed best-shaped kernels aren't being used due to a compiler issue |
| Intel | APL/GLK using a 2x9 or 1x12 thread pool | ❌ | ❌ | ❌ | Need to generate properly shaped kernels |
A HotSort target can be generated using the `hotsort_target` GN template. See the `hotsort_vk_bench` `BUILD.gn` for a concrete example.
By default, the HotSort target's binary data is emitted as a C source file defining an array of `uint32_t` literals, along with a corresponding header declaring the array by name.
Include the generated `hs_target.h` file in your source code to access the HotSort target's data as a `uint32_t` array.

Include `hotsort_vk.h` in your source code to access the Vulkan-based HotSort APIs used to load the HotSort target data and run it. See the comments in this header file for more details about the API.
For example, to sort `count` keys on Vulkan:

```c
// Provides the hotsort_vk_xxx() functions.
#include "hotsort_vk.h"

// Defines the hs_intel_gen8_u32 variable pointing to the target's data.
#include "targets/intel/gen8/u32/hs_target.h"

// Create a HotSort instance from a target.
struct hotsort_vk * hs = hotsort_vk_create(...,
                                           <pipeline layout>,
                                           <descriptor set locations>,
                                           &hs_intel_gen8_u32);
...
// Bind pipeline-compatible descriptor sets.
...
// See how much padding may be required.
hotsort_vk_pad(hs, count, &count_padded_in, &count_padded_out);

// Append the sorting compute shaders to the command buffer.
hotsort_vk_sort(cb, hs, <array offsets>, count, count_padded_in, count_padded_out);

// End the command buffer and submit to the queue.
...
// Release the HotSort instance.
hotsort_vk_release(hs, ...);
```
The HotSort sorting algorithm was created in 2012 and generalized in 2015 to support GPUs that benefit from non-power-of-two workgroups.
The primary use case of HotSort is to achieve high throughput as early as possible on small GPUs when sorting arrays of thousands to hundreds of thousands of integer keys.
HotSort uses unique properties of bitonic sequences to create a novel mapping of keys onto data-parallel devices like GPUs.
The overall flow of the HotSort algorithm is below. Kernel launches are in italics.
The algorithm begins with a very dense per-multiprocessor block sorting algorithm that loads a “slab” of keys into a subgroup's registers, sorts the slabs, merges all slabs in the workgroup, and stores the slabs back to global memory.
In the slab sorting phase, each lane of a subgroup executes a bitonic sorting network on its registers and successively merges lanes until the slab of registers is sorted in serpentine order.
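The sorting network each lane applies to its registers can be sketched in scalar C. This is an illustrative model only, with hypothetical helper names: HotSort emits the compare-exchanges as straight-line code over a lane's registers, whereas this sketch loops over an array holding the keys.

```c
#include <stdint.h>

// Compare-exchange: order two keys ascending. (Illustrative helper,
// not a HotSort API.)
static void cx(uint32_t *a, uint32_t *b) {
  if (*a > *b) { uint32_t t = *a; *a = *b; *b = t; }
}

// Sort n keys (n a power of two) with the classic bitonic sorting network.
static void bitonic_sort(uint32_t *keys, uint32_t n) {
  for (uint32_t k = 2; k <= n; k <<= 1) {        // merge stage size
    for (uint32_t j = k >> 1; j > 0; j >>= 1) {  // substage stride
      for (uint32_t i = 0; i < n; i++) {
        uint32_t p = i ^ j;                      // partner index
        if (p > i) {
          if ((i & k) == 0) cx(&keys[i], &keys[p]); // ascending run
          else              cx(&keys[p], &keys[i]); // descending run
        }
      }
    }
  }
}
```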
HotSort has several different merging strategies.
The merging kernels leverage the multiprocessor's register file by loading, merging and storing a large number of strided slab rows without using local memory.
The merging kernels exploit the bitonic sequence property that interleaved subsequences of a bitonic sequence are also bitonic sequences. This property also holds for non-power-of-two sequences.
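The interleaving property can be checked directly: split a bitonic sequence into its even- and odd-indexed subsequences and verify that each is itself bitonic. The sketch below (helper names are illustrative, not part of HotSort) uses the fact that, traversed circularly, a bitonic sequence changes comparison direction at most twice.

```c
#include <stdbool.h>
#include <stdint.h>

// A sequence is bitonic if some rotation first ascends, then descends.
// Equivalently, walking the sequence circularly, the comparison direction
// between neighbors changes at most twice.
static bool is_bitonic(const uint32_t *s, uint32_t n) {
  uint32_t changes = 0;
  int prev = 0;
  for (uint32_t i = 0; i < n; i++) {
    uint32_t a = s[i], b = s[(i + 1) % n];
    if (a == b) continue;               // equal neighbors: keep direction
    int dir = (a < b) ? 1 : -1;
    if (prev != 0 && dir != prev) changes++;
    prev = dir;
  }
  return changes <= 2;
}

// Split s into its even- and odd-indexed interleaved subsequences.
static void deinterleave(const uint32_t *s, uint32_t n,
                         uint32_t *even, uint32_t *odd) {
  for (uint32_t i = 0; i < n / 2; i++) {
    even[i] = s[2 * i];
    odd[i]  = s[2 * i + 1];
  }
}
```

For example, the bitonic sequence {1, 3, 5, 7, 8, 6, 4, 2} deinterleaves into {1, 5, 8, 4} and {3, 7, 6, 2}, both of which are again bitonic.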
As an example, the Streaming Flip Merge kernel is illustrated below:
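As a scalar sketch of the flip-comparison pattern (the function name is hypothetical, and the real kernel streams strided slab rows through registers rather than looping over an array): comparing element `i` against its mirrored partner `n-1-i` across two adjacent ascending runs yields two half-length bitonic sequences, with no key in the low half greater than any key in the high half.

```c
#include <stdint.h>

// One "flip" compare-exchange pass over two adjacent ascending runs of
// length n/2 stored contiguously in keys[0..n-1]. Comparing i against the
// mirror index n-1-i is equivalent to a bitonic merge step with the second
// run logically reversed.
static void flip_compare(uint32_t *keys, uint32_t n) {
  for (uint32_t i = 0; i < n / 2; i++) {
    uint32_t j = n - 1 - i;             // mirrored partner in the other run
    if (keys[i] > keys[j]) {
      uint32_t t = keys[i];
      keys[i] = keys[j];
      keys[j] = t;
    }
  }
}
```

After this pass, each half can be finished independently with shorter merge steps, which is what makes the pattern amenable to register-resident, multiprocessor-local merging.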
HotSort's initial sorting and merging phases are performed on bitonic sequences. Because of this, throughput decreases as the problem size increases.
A hybrid algorithm that combined HotSort's block sorter and several rounds of merging with a state-of-the-art GPU merging algorithm would probably improve the algorithm's performance on larger arrays.
The original version of HotSort ran on pre-Kepler GPUs without intra-warp/inter-lane shuffling; this capability should be re-enabled.