commit | 83fd40d730feb0804fafbc2d8814bcc19a17b2e5 | |
---|---|---|
author | Ruy Contributors <ruy-eng@google.com> | Thu Dec 19 23:03:08 2024 -0800 |
committer | Copybara-Service <copybara-worker@google.com> | Thu Dec 19 23:03:33 2024 -0800 |
tree | fa81b0e42b12852d426aefff2f1f921d526ce263 | |
parent | 8467039b81d036015e4f116a0e9eb783fac51a0f | |
EvalGemmlowp: Construct 0-size gemmlowp::VectorMap if ptr is null

This avoids constructing an invalid VectorMap (with a null base pointer and non-zero size). Assertions will be added to gemmlowp for this case.

PiperOrigin-RevId: 708191712
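A minimal sketch of the guard this commit describes, assuming the bias vector is mapped as a column `VectorMap` of `std::int32_t` values; the helper name, template arguments, and include path below are illustrative, not ruy's actual EvalGemmlowp code:

```cpp
#include <cstdint>

#include "public/map.h"  // gemmlowp::VectorMap, gemmlowp::VectorShape
                         // (include path depends on how gemmlowp is vendored)

// Hypothetical helper showing the pattern from the commit message: when the
// bias pointer is null, build a 0-size VectorMap rather than one with a null
// base pointer and a non-zero size.
using BiasVectorMap =
    gemmlowp::VectorMap<const std::int32_t, gemmlowp::VectorShape::Col>;

BiasVectorMap MakeBiasVectorMap(const std::int32_t* bias_ptr, int rows) {
  const int size = (bias_ptr != nullptr) ? rows : 0;
  return BiasVectorMap(bias_ptr, size);
}
```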
This is not an officially supported Google product.
ruy is a matrix multiplication library. Its focus is to cover the matrix multiplication needs of neural network inference engines. Its initial user has been TensorFlow Lite, where it is used by default on the ARM CPU architecture.
ruy supports both floating-point and 8-bit-integer-quantized matrices.
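As a rough sketch of what a floating-point multiplication looks like (adapted from the pattern in ruy's example code; the exact calls and layouts shown here are assumptions about the current API, not part of this README):

```cpp
#include <iostream>

#include "ruy/ruy.h"

int main() {
  // Row-major 2x2 LHS, column-major 2x2 RHS and destination.
  const float lhs_data[4] = {1, 2, 3, 4};
  const float rhs_data[4] = {1, 2, 3, 4};
  float dst_data[4];

  ruy::Context context;

  ruy::Matrix<float> lhs;
  ruy::MakeSimpleLayout(2, 2, ruy::Order::kRowMajor, lhs.mutable_layout());
  lhs.set_data(lhs_data);

  ruy::Matrix<float> rhs;
  ruy::MakeSimpleLayout(2, 2, ruy::Order::kColMajor, rhs.mutable_layout());
  rhs.set_data(rhs_data);

  ruy::Matrix<float> dst;
  ruy::MakeSimpleLayout(2, 2, ruy::Order::kColMajor, dst.mutable_layout());
  dst.set_data(dst_data);

  // No accumulation tweaks: plain dst = lhs * rhs.
  ruy::MulParams<float, float> mul_params;
  ruy::Mul(lhs, rhs, mul_params, &context, &dst);

  std::cout << "dst[0,0] = " << dst_data[0] << "\n";  // Expect 5.
  return 0;
}
```

The quantized path follows the same shape, with `std::int8_t` operand matrices and `ruy::MulParams<std::int32_t, std::int8_t>` carrying the requantization parameters (again assumed from ruy's example code rather than stated in this README).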
ruy is designed to achieve high performance not just on very large sizes, as is the focus of many established libraries, but on the actual sizes and shapes of matrices that matter most in current TensorFlow Lite applications. These are often quite small, e.g. 100x100 or even 50x50, and come in all sorts of rectangular shapes. ruy is not as fast as code fully specialized for each shape, but it aims to offer a good compromise between speed across all shapes and small binary size.
Some documentation will eventually be available in the doc/ directory; see doc/README.md.