Re: Git 2.26 fetches many times more objects than it should, wasting gigabytes

On Wed, Apr 22, 2020 at 08:33:48AM -0700, Junio C Hamano wrote:

> > I don't quite think that's the solution, though. Both old and new are
> > supposed to be respecting MAX_IN_VAIN. So it's not at all clear to me
> > why it restricts the number of haves we'll send in v2, but not in v0.
> 
> Thanks for digging.  I tend to agree with your assessment that the
> setting should not make a difference, if v0 finds the common out of
> the exchange within the same number of "have"s.

I think v0 sends many more haves. Again, it's hard to compare the
protocol traces because of the framing, but if I simplify each one like:

  perl -lne '/fetch-pack([<>] .*)/ and print "fetch$1"' <packet-v0.trace  >small-v0.trace
  perl -lne '/fetch([<>] .*)/ and print "fetch$1"' <packet-v2.trace  >small-v2.trace

I think we can get an apples-to-apples-ish comparison. And the results
are quite different:

  $ grep -c have small-v0.trace
  11342
  $ grep -c have small-v2.trace
  496

So I think the two protocols are treating MAX_IN_VAIN quite differently.

It looks like v0 only respects it after seeing a "continue" (or maybe
any non-common ACK; they all seem to trigger got_continue), but v2 will
use it to limit the haves we send when the other side is just NAKing.
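
For anyone who wants to reproduce the numbers above, the raw traces
come from GIT_TRACE_PACKET, with protocol.version pinning each run to
one protocol (the remote name is illustrative; GIT_TRACE_PACKET wants
an absolute path, hence the $PWD):

  # capture one packet trace per protocol version for the same fetch
  GIT_TRACE_PACKET=$PWD/packet-v0.trace git -c protocol.version=0 fetch origin
  GIT_TRACE_PACKET=$PWD/packet-v2.trace git -c protocol.version=2 fetch origin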

> I am guilty of introducing the hardcoded "give up after this many
> naks", which I admit I was never fond of, back in the days when there
> was only one protocol.  In retrospect, I probably should have done
> "after this many naks, stop sending each and every commit but start
> skipping exponentially (or Fibonacci)" instead.  After all,
> this was meant to prevent walking all the way down to a different
> root commit when you have more of them than the repository you are
> fetching from---but (1) skipping exponentially down to root is way
> less expensive, even if it is a bit more expensive than not walking
> at all, and (2) if we find a common tree, even if it is distant, it
> is way better than not having any common tree at all.

I think fetch.negotiationAlgorithm=skipping is that thing. And it _does_
paper over the problem (the most horrific case goes away, but you end up
with twice as many objects as v2 finds).
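
To try it without touching any config, it can be enabled for a single
fetch (remote name illustrative):

  git -c fetch.negotiationAlgorithm=skipping fetch origin

or persisted per-repo with:

  git config fetch.negotiationAlgorithm skipping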

Limiting the amount of work we're willing to spend digging in history
does make sense, but it seems like we'd always want to at least dig a
little on each ref. For example, imagine a pathological case like this:

  - the client has 10,001 refs; the first 10,000 (sorted alphabetically)
    point to commit graph X. The last one points to some disjoint commit
    graph Y.

  - the server only cares about Y, and it has some Y' that adds one
    commit on top

We _should_ be able to serve that fetch with a single commit (Y->Y').
And we could find it trivially by feeding all of the ref tips as "have"
lines. But I suspect we wouldn't with v2, as we'd feed the first couple
hundred haves and then give up.
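
That shape is easy enough to build locally if anybody wants to poke at
it; here's a rough sketch (all names and counts are illustrative):

  # client: 10,000 refs on history X, plus one ref on a disjoint Y
  git init client && cd client
  git commit --allow-empty -m X
  for i in $(seq -w 1 10000); do git branch "x$i"; done
  git checkout --orphan y-side
  git commit --allow-empty -m Y

  # server: only the Y history, with one commit Y' on top
  cd .. && git clone --single-branch --branch y-side client server
  git -C server commit --allow-empty -m "Y'"

Then fetching y-side into the client with GIT_TRACE_PACKET set shows
how many haves we burn before (or whether) we ever mention Y:

  GIT_TRACE_PACKET=/tmp/trace.out git -C client fetch ../server y-side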

-Peff


