commit 172f2fcfc410048b710fe3769d3623d706b29bcd
author: Rubens Farias <rubensf@google.com> | Wed Nov 11 14:09:08 2020 -0500
committer: GitHub <noreply@github.com> | Wed Nov 11 14:09:08 2020 -0500
tree: 57fcda4dc207737ea6ed2ddba9af82d34fe970ae
parent: d33054059ee327e0b388ff329f8d2f4ccea50776
Make Chunker.Next read "generic" and independent from digest size. (#236)

These are both preparatory changes for adding write compression support to the remote-apis-sdks. Chunker.Next should be "generic" because reading through a plain reader interface allows a drop-in implementation of a reader that compresses on the fly. I still kept the special case of caching data in memory, since it avoids extra memory copies.

Making the chunker's reads independent of the digest size is also useful: it means we do not have to pre-compute the digest of compressed blobs. The current draft of the RE API never requires the digest of the compressed blob at any point, which saves us the trouble of reading the data twice. Note that this implies the chunker no longer matches data against the supplied digest at any moment; the digest is now purely informational storage rather than necessary for chunker logic.

As a caveat, for simplicity, we only cache in memory files that are smaller than the *chunk* size rather than the IO buffer size.

* Make Chunker.Next read "generic" and independent from digest size.
* Break the reader out into its own package.
* Rename Seek -> SeekOffset so govet stops complaining.
* Fix a nit typo.
* Use test_util.CreateFile.
* Converge the io.EOF and io.ErrUnexpectedEOF error cases.
* Re-add t.Parallel.
This repository contains SDKs for the Remote Execution API.
See each language subdirectory's README.md for more specific instructions on
using the SDK for that language.