Hi Derrick,

> On Aug 6, 2020, at 18:30, Derrick Stolee via GitGitGadget <gitgitgadget@xxxxxxxxx> wrote:
> 
> From: Derrick Stolee <dstolee@xxxxxxxxxxxxx>
> 
> When repacking during the 'incremental-repack' task, we use the
> --batch-size option in 'git multi-pack-index repack'. The initial setting
> used --batch-size=0 to repack everything into a single pack-file. This is
> not sustainable for a large repository. The amount of work required is
> also likely to use too many system resources for a background job.
> 
> Update the 'incremental-repack' task by dynamically computing a
> --batch-size option based on the current pack-file structure.
> 
> The dynamic default size is computed with this idea in mind for a client
> repository that was cloned from a very large remote: there is likely one
> "big" pack-file that was created at clone time. Thus, do not try
> repacking it as it is likely packed efficiently by the server.
> 
> Instead, we select the second-largest pack-file, and create a batch size
> that is one larger than that pack-file. If there are three or more
> pack-files, then this guarantees that at least two will be combined into
> a new pack-file.

I have been using this strategy with git-care.sh [1] with great success.

However, it is worth noting that there are still edge cases where I have
observed the pack count keep increasing, because using
'--batch-size=<second-biggest-pack>+1' did not result in any repacking.
In one case, I watched a local copy grow to 160+ packs without ever being
able to repack.

I have been considering a fallback strategy: when the midx repack call
results in a no-op, retry with '(3rd biggest pack size) + 1', then the
4th, the 5th, and so on, as that was how I fixed my repo when the edge
case happened (see the sketch below). Such a strategy would require a way
to detect when a midx repack was a no-op, so something like
'git multi-pack-index repack --batch-size=123456 --exit-code' would be
much desirable.

> 
> Of course, this means that the second-largest pack-file size is likely
> to grow over time and may eventually surpass the initially-cloned
> pack-file. Recall that the pack-file batch is selected in a greedy
> manner: the packs are considered from oldest to newest and are selected
> if they have size smaller than the batch size until the total selected
> size is larger than the batch size. Thus, that oldest "clone" pack will
> be first to repack after the new data creates a pack larger than that.
> 
> We also want to place some limits on how large these pack-files become,
> in order to bound the amount of time spent repacking. A maximum
> batch-size of two gigabytes means that large repositories will never be
> packed into a single pack-file using this job, but also that repack is
> rather expensive. This is a trade-off that is valuable to have if the
> maintenance is being run automatically or in the background. Users who
> truly want to optimize for space and performance (and are willing to pay
> the upfront cost of a full repack) can use the 'gc' task to do so.
> 
> Create a test for this two gigabyte limit by creating an EXPENSIVE test
> that generates two pack-files of roughly 2.5 gigabytes in size, then
> performs an incremental repack. Check that the --batch-size argument in
> the subcommand uses the hard-coded maximum.
> 
> Helped-by: Chris Torek <chris.torek@xxxxxxxxx>
> Reported-by: Son Luong Ngoc <sluongng@xxxxxxxxx>
> Signed-off-by: Derrick Stolee <dstolee@xxxxxxxxxxxxx>

Generally, I have found working with '--batch-size' to be a bit
unpredictable.
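For illustration, here is a rough sketch of the fallback loop I have in
mind. The '--exit-code' option is purely hypothetical (it does not exist
today); I am assuming it would make 'git multi-pack-index repack' exit
non-zero when no packs were actually combined:

    # Sketch only: '--exit-code' is a proposed, not an existing, option.
    # Walk the packs from second-largest downward, retrying with a
    # smaller batch size until a repack actually combines something.
    for pack in $(ls -S .git/objects/pack/*.pack | tail -n +2)
    do
        batch=$(( $(wc -c <"$pack") + 1 ))
        if git multi-pack-index repack --batch-size=$batch --exit-code
        then
            break
        fi
    done

With an exit code like that, maintenance scripts such as git-care.sh could
retry with progressively smaller batch sizes instead of silently no-oping.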
I wonder if we could tweak the behavior somewhat so that it is more
consistent to use and test?

Thanks a lot for making this happen. I hope this patch makes it into a
stable release soon.

Cheers,
Son Luong.

[1]: https://github.com/sluongng/git-care