"Derrick Stolee via GitGitGadget" <gitgitgadget@xxxxxxxxx> writes: > From: Derrick Stolee <dstolee@xxxxxxxxxxxxx> > > When repacking during the 'incremental-repack' task, we use the > --batch-size option in 'git multi-pack-index repack'. The initial setting > used --batch-size=0 to repack everything into a single pack-file. This is > not sustaintable for a large repository. The amount of work required is sustainable. > also likely to use too many system resources for a background job. > > Update the 'incremental-repack' task by dynamically computing a > --batch-size option based on the current pack-file structure. OK. > The dynamic default size is computed with this idea in mind for a client > repository that was cloned from a very large remote: there is likely one > "big" pack-file that was created at clone time. Thus, do not try > repacking it as it is likely packed efficiently by the server. > > Instead, we select the second-largest pack-file, and create a batch size > that is one larger than that pack-file. If there are three or more > pack-files, then this guarantees that at least two will be combined into > a new pack-file. > > Of course, this means that the second-largest pack-file size is likely > to grow over time and may eventually surpass the initially-cloned > pack-file. Recall that the pack-file batch is selected in a greedy > manner: the packs are considered from oldest to newest and are selected > if they have size smaller than the batch size until the total selected > size is larger than the batch size. Thus, that oldest "clone" pack will > be first to repack after the new data creates a pack larger than that. > > We also want to place some limits on how large these pack-files become, > in order to bound the amount of time spent repacking. A maximum > batch-size of two gigabytes means that large repositories will never be > packed into a single pack-file using this job, but also that repack is > rather expensive. This is a trade-off that is valuable to have if the > maintenance is being run automatically or in the background. Users who > truly want to optimize for space and performance (and are willing to pay > the upfront cost of a full repack) can use the 'gc' task to do so. It might be too late to ask this now, but how does the quality of the resulting combined pack ensured, wrt locality and deltification? > +#define TWO_GIGABYTES (2147483647) > +#define UNSET_BATCH_SIZE ((unsigned long)-1) > + > +static off_t get_auto_pack_size(void) > +{ > + /* > + * The "auto" value is special: we optimize for > + * one large pack-file (i.e. from a clone) and > + * expect the rest to be small and they can be > + * repacked quickly. > + * > + * The strategy we select here is to select a > + * size that is one more than the second largest > + * pack-file. This ensures that we will repack > + * at least two packs if there are three or more > + * packs. > + */ > + off_t max_size = 0; > + off_t second_largest_size = 0; > + off_t result_size; > + struct packed_git *p; > + struct repository *r = the_repository; > + > + reprepare_packed_git(r); > + for (p = get_all_packs(r); p; p = p->next) { > + if (p->pack_size > max_size) { > + second_largest_size = max_size; > + max_size = p->pack_size; > + } else if (p->pack_size > second_largest_size) > + second_largest_size = p->pack_size; > + } > + > + result_size = second_largest_size + 1; We won't worry about this addition wrapping around; I guess we cannot do anything intelligent when it happens. 
> +	/* But limit ourselves to a batch size of 2g */
> +	if (result_size > TWO_GIGABYTES)
> +		result_size = TWO_GIGABYTES;

Well, when it happens, we'd cap to 2G, which must be a reasonable
fallback value, so it would be OK.

> +	return result_size;
> +}
> +
>  static int multi_pack_index_repack(void)
>  {
>  	int result;
>  	struct argv_array cmd = ARGV_ARRAY_INIT;
> +	struct strbuf batch_arg = STRBUF_INIT;
> +
>  	argv_array_pushl(&cmd, "multi-pack-index", "repack", NULL);
>
>  	if (opts.quiet)
>  		argv_array_push(&cmd, "--no-progress");
>
> -	argv_array_push(&cmd, "--batch-size=0");
> +	strbuf_addf(&batch_arg, "--batch-size=%"PRIuMAX,
> +		    (uintmax_t)get_auto_pack_size());
> +	argv_array_push(&cmd, batch_arg.buf);
>
>  	close_object_store(the_repository->objects);
>  	result = run_command_v_opt(cmd.argv, RUN_GIT_CMD);
> +	strbuf_release(&batch_arg);

I think I saw a suggestion to use xstrfmt() with free() instead of
the sequence of strbuf_init(), strbuf_addf(), and strbuf_release()
in a similar but different context.  Perhaps we should follow suit
here, too?
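I.e. something along these lines, perhaps (totally untested):

	char *batch_arg = xstrfmt("--batch-size=%"PRIuMAX,
				  (uintmax_t)get_auto_pack_size());

	argv_array_push(&cmd, batch_arg);

	close_object_store(the_repository->objects);
	result = run_command_v_opt(cmd.argv, RUN_GIT_CMD);
	free(batch_arg);

argv_array_push() duplicates its argument, so it is safe to free
our copy once the command has run (or even right after the push).

Thanks.  That's it for today from me.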