Hi,

* The code to iterate over reftable blocks has seen some optimization
  to reduce memory allocation and deallocation.

Preceding patch series have optimized memory allocation patterns when
iterating through ref or log records. This has led to a constant number
of allocations when decoding and yielding those records to our callers.

One thing that is still missing though is an optimization for the table
and block iterators, which are responsible for iterating through the
separate blocks in the table. So while the number of allocations no
longer scales (directly) with the number of refs, it still scales with
the number of blocks. This patch series tackles that by refactoring the
table and block iterators such that the former can reuse the latter
without reallocations. With this change, iterating through records now
uses a truly constant number of allocations.

Patrick

Patrick Steinhardt (9):
  reftable/block: rename `block_reader_start()`
  reftable/block: merge `block_iter_seek()` and `block_reader_seek()`
  reftable/block: better grouping of functions
  reftable/block: introduce `block_reader_release()`
  reftable/block: move ownership of block reader into `struct table_iter`
  reftable/reader: iterate to next block in place
  reftable/block: reuse uncompressed blocks
  reftable/block: open-code call to `uncompress2()`
  reftable/block: reuse `zstream` state on inflation

 reftable/block.c      | 152 ++++++++++++++++++++++--------------
 reftable/block.h      |  49 +++++++-----
 reftable/block_test.c |   6 +-
 reftable/iter.c       |   2 +-
 reftable/reader.c     | 176 ++++++++++++++++++++++--------------------
 5 files changed, 222 insertions(+), 163 deletions(-)

--
2.44.GIT
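To illustrate the general pattern behind the last few patches ("reuse
uncompressed blocks", "open-code call to `uncompress2()`", "reuse
`zstream` state on inflation"), here is a minimal, hypothetical sketch
in plain zlib terms. It is not the reftable code itself and the struct
and function names are made up; it only shows the idea of keeping a
single inflate stream and output buffer alive and resetting them
between blocks instead of paying for a fresh setup per block:

```c
/*
 * Sketch only: reuse a single z_stream and output buffer across
 * blocks via inflateReset() instead of uncompress2() or a fresh
 * inflateInit()/inflateEnd() cycle per block. All names here are
 * hypothetical, not the reftable API.
 */
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

struct block_source_state {
	z_stream zstream;      /* reused across all blocks */
	unsigned char *buf;    /* reused decompression buffer */
	size_t buf_len;
	int zstream_init;
};

/* Inflate one compressed block into the reusable buffer. */
static int inflate_block(struct block_source_state *state,
			 const unsigned char *compressed, size_t compressed_len,
			 size_t uncompressed_len)
{
	int ret;

	if (!state->zstream_init) {
		memset(&state->zstream, 0, sizeof(state->zstream));
		if (inflateInit(&state->zstream) != Z_OK)
			return -1;
		state->zstream_init = 1;
	} else if (inflateReset(&state->zstream) != Z_OK) {
		/* Cheap: resets state without freeing internal buffers. */
		return -1;
	}

	/* Grow the output buffer only when a block is bigger than before. */
	if (state->buf_len < uncompressed_len) {
		unsigned char *buf = realloc(state->buf, uncompressed_len);
		if (!buf)
			return -1;
		state->buf = buf;
		state->buf_len = uncompressed_len;
	}

	state->zstream.next_in = (unsigned char *)compressed;
	state->zstream.avail_in = compressed_len;
	state->zstream.next_out = state->buf;
	state->zstream.avail_out = uncompressed_len;

	ret = inflate(&state->zstream, Z_FINISH);
	return ret == Z_STREAM_END ? 0 : -1;
}

static void release_state(struct block_source_state *state)
{
	if (state->zstream_init)
		inflateEnd(&state->zstream);
	free(state->buf);
}
```

Under these assumptions a caller would invoke `inflate_block()` once
per compressed block and `release_state()` once at the end, so the
zlib-internal allocations and the output buffer are paid for on the
first block only.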