Re: [PATCH] docs: update 64-bit core.packedGitLimit default

On Wed, Jun 21, 2017 at 11:38:54AM -0700, Junio C Hamano wrote:

> Jeff King <peff@xxxxxxxx> writes:
> 
> > So the other direction, instead of avoiding the memory limit in (4), is
> > to stop closing "small" packs in (2). But I don't think that's a good
> > idea. Even with the code after David's patch, you can still trigger the
> > problem by running out of file descriptors. And if we stop closing
> > small packs, that makes it even more likely for that to happen.
> 
> I recall that when we notice that we cannot access a loose object
> we earlier thought existed, we fall back to rescanning the packs?
> Would a similar approach work to deal with the "closed small
> pack goes away" scenario?

Not very well. See the first paragraph of my explanation. Basically,
pack-objects is special because it makes decisions based on (and records
pointers to) the particular packed representation. If that goes away, it
just bails.

Which isn't to say that falling back is impossible. I think in the worst
case it could say "oops, I can't access the pack that has object X
anymore", fall back to finding _any_ copy of it and including it as a
pure base object (it's too late at that point to make a delta, and
trying to be clever about reusing on-disk deltas is likely just going to
end up with a broken corner case).

So then you have a sub-optimal pack, but at least it didn't die(). If
that happens for one object, I don't think it's that big a deal. But the
resulting pack could end up pretty sub-optimal if you lose access to a
whole pack. And remember, "small" here is just smaller than the window
size, which is a gigabyte on 64-bit systems. So imagine that you lose
access to a 500 MB pack, but we recover by sending base objects. Then
everything that was in that pack gets converted to its full non-delta
representation, which could mean it expands to several gigabytes. The
current behavior, to die() and retry the fetch, is not that bad an
alternative.
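To put rough numbers on that expansion (the 10:1 delta ratio below is a
purely hypothetical figure for illustration, not a measured one):

```shell
# Back-of-envelope sketch: if a 500 MB pack stored its objects as
# deltas at a hypothetical 10:1 compression ratio, resending them
# all as full base objects could cost roughly ten times as much.
pack_mb=500
delta_ratio=10
echo "$((pack_mb * delta_ratio)) MB of full objects"   # prints: 5000 MB of full objects
```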

Of course, the best alternative is retaining access to the packs, which
is what typically happens now on 64-bit systems (it's just that the
packedGitLimit was set pointlessly low). I'm not sure if you're asking
in general, or as a last-ditch effort for 32-bit systems.
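For anyone who does need to tune this by hand (e.g., on a 32-bit build),
the relevant knobs are ordinary config keys. The values below are only
examples, not recommendations; see git-config(1) for the actual defaults
on your platform:

```shell
# Self-contained demo in a scratch repository; values are illustrative.
git init -q /tmp/packed-git-demo
cd /tmp/packed-git-demo
git config core.packedGitLimit 8g       # total mmap budget for packfiles
git config core.packedGitWindowSize 1g  # size of each mmap window
git config core.packedGitLimit          # prints: 8g
```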

-Peff


