Re: [PATCH v3 5/7] gc: handle a corner case in gc.bigPackThreshold

On Fri, Mar 16 2018, Nguyễn Thái Ngọc Duy jotted:

> This config allows us to keep <N> packs back if their size is larger
> than a limit. But if this N >= gc.autoPackLimit, we may have a
> problem. We are supposed to reduce the number of packs after a
> threshold because it affects performance.
>
> We could tell the user that they have incompatible gc.bigPackThreshold
> and gc.autoPackLimit, but that's kinda hard when 'git gc --auto' runs
> in the background. Instead let's fall back to the next best strategy:
> try to reduce the number of packs anyway, but keep the base pack out.
> This reduces the number of packs to two and hopefully won't take up
> too many resources to repack (the assumption still is that the base
> pack takes the most resources to handle).

I think this strategy makes perfect sense.
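If I'm reading it right, the fallback amounts to roughly this (assuming
the --keep-pack repack option from this series; the pack name here is
made up):

    git repack -a -d --keep-pack=pack-<base>.pack

I.e. one big repack of everything except the base pack, leaving exactly
the two packs the commit message describes.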

Those with, say, a 1GB "base" pack might set this to 500MB or some
similarly large value. Then it's realistically never going to happen
that gc.bigPackThreshold collides with gc.autoPackLimit: even if your
checkout is many years old, you've *maybe* accumulated 5-10 of those
500MB packs for any sane repo.
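I.e. (made-up numbers):

    git config gc.bigPackThreshold 500m
    git config gc.autoPackLimit 50    # the default

You'd need 50 packs of over 500MB each before the fallback in this
patch even comes into play.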

But this also allows for setting the value really low, e.g. 50MB, to
place a very low upper bound on how much memory GC takes on a regular
basis, although of course you'll eventually need to repack that
accumulated set of 50MB packs.
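Something like (again, numbers just for illustration):

    git config gc.bigPackThreshold 50m

With the default gc.autoPackLimit of 50 the fallback in this patch
would then kick in once ~50 such packs pile up, and everything except
the single biggest pack gets repacked in one go, which is that
eventual repack cost.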

Great!


