Hi,

this is the second version of my patch series that optimizes some I/O
patterns that the reftable library uses.

Changes compared to v1:

  - Added missing signoffs.

  - The validity cache is now cleared when reloading the reftable stack
    fails.

  - Style fixes for `reftable_block_source_from_file()` are now in a
    separate commit.

  - We now use `xmmap()` exclusively, relying on its fallback code on
    NO_MMAP platforms.

  - The file descriptor is closed immediately after mmapping now.

Thanks for your review, Junio!

Patrick

Patrick Steinhardt (5):
  reftable/stack: refactor stack reloading to have common exit path
  reftable/stack: refactor reloading to use file descriptor
  reftable/stack: use stat info to avoid re-reading stack list
  reftable/blocksource: refactor code to match our coding style
  reftable/blocksource: use mmap to read tables

 reftable/blocksource.c |  39 ++++++--------
 reftable/stack.c       | 113 +++++++++++++++++++++++++----------------
 reftable/stack.h       |   1 +
 reftable/system.h      |   1 +
 4 files changed, 85 insertions(+), 69 deletions(-)

Range-diff against v1:
1: 01ece2626d ! 1: 4b7f52c415 reftable/stack: refactor stack reloading to have common exit path
    @@ Commit message
          Refactor the function to have a common exit path. While at it, touch up
          the style of this function a bit to match our usual coding style better.

    + Signed-off-by: Patrick Steinhardt <ps@xxxxxx>
    +
     ## reftable/stack.c ##
    @@ reftable/stack.c: static int tv_cmp(struct timeval *a, struct timeval *b)
      static int reftable_stack_reload_maybe_reuse(struct reftable_stack *st,
2: 726d302d7b ! 2: 36b9f6b624 reftable/stack: refactor reloading to use file descriptor
    @@ Commit message
          Prepare for this by converting the code to use `fd_read_lines()` so that
          we have the file descriptor readily available.

    + Signed-off-by: Patrick Steinhardt <ps@xxxxxx>
    +
     ## reftable/stack.c ##
    @@ reftable/stack.c: static int reftable_stack_reload_maybe_reuse(struct reftable_stack *st,
      struct timeval deadline;
3: 4fabdc3d80 ! 3: c0f7cec95b reftable/stack: use stat info to avoid re-reading stack list
    @@ Commit message
          cached value from the last time we have updated the stack. This should
          always work alright because "tables.list" is updated atomically via a
          rename, so even if the ctime or mtime wasn't granular enough to identify
    - a change, at least the inode number should have changed.
    + a change, at least the inode number or file size should have changed.

          This change significantly speeds up operations where many refs are read,
          like when using git-update-ref(1). The following benchmark creates N refs
          in an otherwise-empty repository via `git update-ref --stdin`:

          Benchmark 1: update-ref: create many refs (refcount = 1, revision = HEAD~)
    - Time (mean ± σ): 5.6 ms ± 0.2 ms [User: 2.5 ms, System: 3.0 ms]
    - Range (min … max): 5.2 ms … 6.0 ms 89 runs
    + Time (mean ± σ): 5.1 ms ± 0.2 ms [User: 2.4 ms, System: 2.6 ms]
    + Range (min … max): 4.8 ms … 7.2 ms 109 runs

          Benchmark 2: update-ref: create many refs (refcount = 100, revision = HEAD~)
    - Time (mean ± σ): 19.2 ms ± 0.3 ms [User: 8.6 ms, System: 10.4 ms]
    - Range (min … max): 18.4 ms … 19.8 ms 63 runs
    + Time (mean ± σ): 19.1 ms ± 0.9 ms [User: 8.9 ms, System: 9.9 ms]
    + Range (min … max): 18.4 ms … 26.7 ms 72 runs

          Benchmark 3: update-ref: create many refs (refcount = 10000, revision = HEAD~)
    - Time (mean ± σ): 1.310 s ± 0.014 s [User: 0.566 s, System: 0.724 s]
    - Range (min … max): 1.291 s … 1.342 s 10 runs
    + Time (mean ± σ): 1.336 s ± 0.018 s [User: 0.590 s, System: 0.724 s]
    + Range (min … max): 1.314 s … 1.373 s 10 runs

          Benchmark 4: update-ref: create many refs (refcount = 1, revision = HEAD)
    - Time (mean ± σ): 5.7 ms ± 0.4 ms [User: 2.6 ms, System: 3.1 ms]
    - Range (min … max): 5.1 ms … 9.5 ms 91 runs
    + Time (mean ± σ): 5.1 ms ± 0.2 ms [User: 2.4 ms, System: 2.6 ms]
    + Range (min … max): 4.8 ms … 7.2 ms 109 runs

          Benchmark 5: update-ref: create many refs (refcount = 100, revision = HEAD)
    - Time (mean ± σ): 15.2 ms ± 0.3 ms [User: 7.0 ms, System: 8.1 ms]
    - Range (min … max): 14.3 ms … 17.1 ms 71 runs
    + Time (mean ± σ): 14.8 ms ± 0.2 ms [User: 7.1 ms, System: 7.5 ms]
    + Range (min … max): 14.2 ms … 15.2 ms 82 runs

          Benchmark 6: update-ref: create many refs (refcount = 10000, revision = HEAD)
    - Time (mean ± σ): 916.1 ms ± 11.0 ms [User: 420.8 ms, System: 495.0 ms]
    - Range (min … max): 906.9 ms … 943.8 ms 10 runs
    + Time (mean ± σ): 927.6 ms ± 5.3 ms [User: 437.8 ms, System: 489.5 ms]
    + Range (min … max): 919.4 ms … 936.4 ms 10 runs

          Summary
    - update-ref: create many refs (refcount = 1, revision = HEAD~) ran
    - 1.01 ± 0.09 times faster than update-ref: create many refs (refcount = 1, revision = HEAD)
    - 2.72 ± 0.11 times faster than update-ref: create many refs (refcount = 100, revision = HEAD)
    - 3.42 ± 0.13 times faster than update-ref: create many refs (refcount = 100, revision = HEAD~)
    - 163.59 ± 5.62 times faster than update-ref: create many refs (refcount = 10000, revision = HEAD)
    - 233.91 ± 7.92 times faster than update-ref: create many refs (refcount = 10000, revision = HEAD~)
    + update-ref: create many refs (refcount = 1, revision = HEAD) ran
    + 1.00 ± 0.07 times faster than update-ref: create many refs (refcount = 1, revision = HEAD~)
    + 2.89 ± 0.14 times faster than update-ref: create many refs (refcount = 100, revision = HEAD)
    + 3.74 ± 0.25 times faster than update-ref: create many refs (refcount = 100, revision = HEAD~)
    + 181.26 ± 8.30 times faster than update-ref: create many refs (refcount = 10000, revision = HEAD)
    + 261.01 ± 12.35 times faster than update-ref: create many refs (refcount = 10000, revision = HEAD~)
    +
    + Signed-off-by: Patrick Steinhardt <ps@xxxxxx>

     ## reftable/stack.c ##
    @@ reftable/stack.c: void reftable_stack_destroy(struct reftable_stack *st)
    @@ reftable/stack.c: static int reftable_stack_reload_maybe_reuse(struct reftable_s
    + stat_validity_update(&st->list_validity, fd);
    + out:
    ++ if (err)
    ++ stat_validity_clear(&st->list_validity);
      if (fd >= 0)
      close(fd);
    + free_names(names);
    @@ reftable/stack.c: static int reftable_stack_reload_maybe_reuse(struct reftable_stack *st,
      static int stack_uptodate(struct reftable_stack *st)
      {
-: ---------- > 4: 96e79dc3ba reftable/blocksource: refactor code to match our coding style
4: a23f38a806 ! 5: e53a20c8e1 reftable/blocksource: use mmap to read tables
    @@ Commit message
          already and ignores any such errors. Also, `reftable_stack_clean()` will
          prune stale tables which are not referenced by "tables.list" anymore so
          that those files can eventually be pruned. And second, we never rewrite
    - already-rewritten stacks, so it does not matter that we cannot rename a
    + already-written stacks, so it does not matter that we cannot rename a
          file over an mmaped file, either.

          Another unfortunate property of mmap is that it is not supported by all
          systems. But given that the size of reftables should typically be rather
          limited (megabytes at most in the vast majority of repositories), we can
    - provide an easy fallback by just reading the complete table into memory
    - on such platforms. This is the same strategy that the "packed" backend
    - uses in case the platform does not provide mmap.
    + use the fallback implementation provided by `git_mmap()` which reads the
    + whole file into memory instead. This is the same strategy that the
    + "packed" backend uses.

          While this change doesn't significantly improve performance in the case
          where we're seeking through stacks once (like e.g. git-for-each-ref(1)

    @@ Commit message
          empty repository:

          Benchmark 1: update-ref: create many refs (refcount = 1, revision = HEAD~)
    - Time (mean ± σ): 5.6 ms ± 0.2 ms [User: 2.7 ms, System: 2.8 ms]
    - Range (min … max): 5.1 ms … 6.0 ms 90 runs
    + Time (mean ± σ): 5.1 ms ± 0.2 ms [User: 2.5 ms, System: 2.5 ms]
    + Range (min … max): 4.8 ms … 7.1 ms 111 runs

          Benchmark 2: update-ref: create many refs (refcount = 100, revision = HEAD~)
    - Time (mean ± σ): 15.1 ms ± 0.4 ms [User: 7.1 ms, System: 8.0 ms]
    - Range (min … max): 14.2 ms … 16.5 ms 71 runs
    + Time (mean ± σ): 14.8 ms ± 0.5 ms [User: 7.1 ms, System: 7.5 ms]
    + Range (min … max): 14.1 ms … 18.7 ms 84 runs

          Benchmark 3: update-ref: create many refs (refcount = 10000, revision = HEAD~)
    - Time (mean ± σ): 919.4 ms ± 11.2 ms [User: 427.5 ms, System: 491.7 ms]
    - Range (min … max): 908.1 ms … 936.6 ms 10 runs
    + Time (mean ± σ): 926.4 ms ± 5.6 ms [User: 448.5 ms, System: 477.7 ms]
    + Range (min … max): 920.0 ms … 936.1 ms 10 runs

          Benchmark 4: update-ref: create many refs (refcount = 1, revision = HEAD)
    - Time (mean ± σ): 5.5 ms ± 0.3 ms [User: 2.4 ms, System: 3.1 ms]
    - Range (min … max): 5.0 ms … 7.3 ms 89 runs
    + Time (mean ± σ): 5.0 ms ± 0.2 ms [User: 2.4 ms, System: 2.5 ms]
    + Range (min … max): 4.7 ms … 5.4 ms 111 runs

          Benchmark 5: update-ref: create many refs (refcount = 100, revision = HEAD)
    - Time (mean ± σ): 10.4 ms ± 0.3 ms [User: 5.1 ms, System: 5.3 ms]
    - Range (min … max): 9.7 ms … 11.0 ms 78 runs
    + Time (mean ± σ): 10.5 ms ± 0.2 ms [User: 5.0 ms, System: 5.3 ms]
    + Range (min … max): 10.0 ms … 10.9 ms 93 runs

          Benchmark 6: update-ref: create many refs (refcount = 10000, revision = HEAD)
    - Time (mean ± σ): 483.5 ms ± 6.3 ms [User: 227.8 ms, System: 255.6 ms]
    - Range (min … max): 477.5 ms … 498.8 ms 10 runs
    + Time (mean ± σ): 529.6 ms ± 9.1 ms [User: 268.0 ms, System: 261.4 ms]
    + Range (min … max): 522.4 ms … 547.1 ms 10 runs

          Summary
            update-ref: create many refs (refcount = 1, revision = HEAD) ran
              1.01 ± 0.06 times faster than update-ref: create many refs (refcount = 1, revision = HEAD~)
    - 1.89 ± 0.11 times faster than update-ref: create many refs (refcount = 100, revision = HEAD)
    - 2.73 ± 0.16 times faster than update-ref: create many refs (refcount = 100, revision = HEAD~)
    - 87.55 ± 4.65 times faster than update-ref: create many refs (refcount = 10000, revision = HEAD)
    - 166.48 ± 8.80 times faster than update-ref: create many refs (refcount = 10000, revision = HEAD~)
    + 2.08 ± 0.07 times faster than update-ref: create many refs (refcount = 100, revision = HEAD)
    + 2.95 ± 0.14 times faster than update-ref: create many refs (refcount = 100, revision = HEAD~)
    + 105.33 ± 3.76 times faster than update-ref: create many refs (refcount = 10000, revision = HEAD)
    + 184.24 ± 5.89 times faster than update-ref: create many refs (refcount = 10000, revision = HEAD~)

          Theoretically, we could also replicate the strategy of the "packed"
          backend where small tables are read into memory instead of using mmap.
          Benchmarks did not confirm that this has a performance benefit though.

    + Signed-off-by: Patrick Steinhardt <ps@xxxxxx>
    +
     ## reftable/blocksource.c ##
    -@@ reftable/blocksource.c: license that can be found in the LICENSE file or at
    - #include "reftable-blocksource.h"
    - #include "reftable-error.h"
    -
    -+#if defined(NO_MMAP)
    -+static int use_mmap = 0;
    -+#else
    -+static int use_mmap = 1;
    -+#endif
    -+
    - static void strbuf_return_block(void *b, struct reftable_block *dest)
    - {
    - if (dest->len)
    @@ reftable/blocksource.c: struct reftable_block_source malloc_block_source(void)
    + }
    +
      struct file_block_source {
    - int fd;
    +- int fd;
      uint64_t size;
    + unsigned char *data;
      };
    @@ reftable/blocksource.c: static uint64_t file_size(void *b)
    - if (fd > 0) {
    - close(fd);
    - ((struct file_block_source *)b)->fd = 0;
    +- }
    +-
    + struct file_block_source *b = v;
    -+
    -+ if (b->fd >= 0) {
    -+ close(b->fd);
    -+ b->fd = -1;
    - }
    -
    -+ if (use_mmap)
    -+ munmap(b->data, b->size);
    -+ else
    -+ reftable_free(b->data);
    -+ b->data = NULL;
    -+
    ++ munmap(b->data, b->size);
      reftable_free(b);
      }
    @@ reftable/blocksource.c: static int file_read_block(void *v, struct reftable_bloc
      return size;
      }
    @@ reftable/blocksource.c: int reftable_block_source_from_file(struct reftable_block_source *bs,
    - {
    - struct stat st = { 0 };
    - int err = 0;
    -- int fd = open(name, O_RDONLY);
    -+ int fd;
    - struct file_block_source *p = NULL;
    -+
    -+ fd = open(name, O_RDONLY);
    - if (fd < 0) {
    - if (errno == ENOENT) {
    - return REFTABLE_NOT_EXIST_ERROR;
    -@@ reftable/blocksource.c: int reftable_block_source_from_file(struct reftable_block_source *bs,
    - p = reftable_calloc(sizeof(struct file_block_source));
    + p = reftable_calloc(sizeof(*p));
      p->size = st.st_size;
    - p->fd = fd;
    -+ if (use_mmap) {
    -+ p->data = xmmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    -+ p->fd = fd;
    -+ } else {
    -+ p->data = xmalloc(st.st_size);
    -+ if (read_in_full(fd, p->data, st.st_size) != st.st_size) {
    -+ close(fd);
    -+ return -1;
    -+ }
    -+ close(fd);
    -+ p->fd = -1;
    -+ }
    ++ p->data = xmmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    ++ close(fd);
      assert(!bs->ops);
      bs->ops = &file_vtable;

base-commit: a54a84b333adbecf7bc4483c0e36ed5878cac17b
-- 
2.43.GIT
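
As a reading aid for patch 3: the sketch below shows, in heavily simplified
form, how stat-based caching of "tables.list" can short-circuit a reload. It
only relies on Git's existing `stat_validity_check()`, `stat_validity_update()`
and `stat_validity_clear()` helpers from "statinfo.h"; the `example_*` names
and the surrounding structure are made up for illustration and are not the
code from this series.

/*
 * Illustrative sketch only -- not the code from this patch series.
 */
#include "git-compat-util.h"
#include "statinfo.h"

struct example_stack {
	const char *list_file;               /* path of "tables.list" */
	struct stat_validity list_validity;  /* cached stat(2) data */
};

static int example_reload(struct example_stack *st)
{
	int fd = -1, err = 0;

	/*
	 * "tables.list" is only ever replaced atomically via rename(2), so if
	 * the cached stat data still matches the file on disk the stack
	 * cannot have changed and the expensive re-read can be skipped.
	 */
	if (stat_validity_check(&st->list_validity, st->list_file))
		return 0;

	fd = open(st->list_file, O_RDONLY);
	if (fd < 0) {
		/* A missing file means an empty stack; treated as success here. */
		err = (errno == ENOENT) ? 0 : -1;
		goto out;
	}

	/* ... re-read the list of tables from fd and swap in new readers ... */

	/* Remember the stat(2) data of the file that was just read. */
	stat_validity_update(&st->list_validity, fd);

out:
	/* On failure, drop the cache so that the next call re-reads the file. */
	if (err)
		stat_validity_clear(&st->list_validity);
	if (fd >= 0)
		close(fd);
	return err;
}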
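
And for patch 5: a similarly simplified sketch of the mmap-based read path,
assuming Git's `xmmap()` wrapper (which dies on failure and, on NO_MMAP
platforms, falls back to reading the whole file into malloc'd memory, with
`munmap()` mapped to the matching fallback). The `example_*` functions are
illustrative only; the real implementation lives in `reftable/blocksource.c`.

/*
 * Illustrative sketch only -- not the exact code from this patch series.
 * Map the table read-only and close the descriptor immediately, since the
 * mapping stays valid without it.
 */
#include "git-compat-util.h"

static int example_map_table(const char *name, unsigned char **data,
			     uint64_t *size)
{
	struct stat st;
	int fd;

	fd = open(name, O_RDONLY);
	if (fd < 0)
		return -1;

	if (fstat(fd, &st) < 0) {
		close(fd);
		return -1;
	}

	*size = st.st_size;
	*data = xmmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

	/* The mapping does not need the descriptor anymore. */
	close(fd);
	return 0;
}

static void example_unmap_table(unsigned char *data, uint64_t size)
{
	munmap(data, size);
}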