Re: [PATCH 1/2] fetch-pack: Finish negotiation if remote replies "ACK %s ready"

On Thu, Mar 17, 2011 at 00:15, Jeff King <peff@xxxxxxxx> wrote:
>> Over smart HTTP, the client must do an additional 10 HTTP POST
>> requests, each of which incurs round-trip latency, and must upload
>> the entire state vector of all known common objects.  On the final
>> POST request, this is 16 KiB worth of data.
>
> This optimization aside, I wonder if it is worth bumping up the number
> of haves we send in a chunk from 32 to something higher.

I have been considering that myself. 32 isn't a good number here. It's
1604 bytes per round (32 "have" lines, plus a flush-pkt). Ethernet's
default MTU is 1500 bytes, so stopping at 32 forces us to use more
than one packet, and the last one isn't full. Over native git:// or
ssh://, where the connection is bi-directional, the client does "race
ahead" and send another 1604 bytes, but now we're at 3208 bytes, which
still doesn't fit well within the MTU. :-\
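
To spell out the arithmetic (a throwaway illustration, not patch code;
the 50 bytes per "have" line come from the pkt-line framing):

#include <stdio.h>

/* Wire cost of one negotiation round: each "have" line is a pkt-line
 * (4-byte length header + "have " + 40 hex digits + '\n' = 50 bytes),
 * and the round ends with a 4-byte flush-pkt ("0000").
 */
static unsigned round_bytes(unsigned haves)
{
	return haves * 50 + 4;
}

int main(void)
{
	printf("32 haves: %u bytes\n", round_bytes(32)); /* 1604 */
	printf("16 haves: %u bytes\n", round_bytes(16)); /* 804, fits one frame */
	return 0;
}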

I've thought about increasing this to 64. On git:// or ssh:// that can
be wasteful, as the remote should be able to stop us earlier,
especially if we find a common commit very early (e.g. an infrequent
contributor who is only 1 commit ahead). The same is true for smart
HTTP: boosting it to 64 would help the maintainer pull from a
lieutenant in fewer rounds, but it hurts the infrequent contributor,
whose only round is larger for no good reason.

The better approach might be to automatically double the round size on
each successive round, until we reach an upper limit of, say, 1024. For
the infrequent contributor we might even consider cutting the initial
round to 16, as that would allow the entire initial round to fit into a
single Ethernet MTU over git://, and with the no-done capability, the
entire exchange is over in that single packet. :-)

For the maintainer, 16 is way too small, but they will then try 32,
64, 128, 256, 512... and should pick up the common point quickly.
Assuming the maintainer is already 600 commits ahead when he pulls,
we'll find the common point in 6 rounds (16+32+64+128+256+512 = 1008
haves sent), vs. the current approach that requires 19 rounds of 32.
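
Roughly, I'm picturing a schedule like this (just a sketch with
made-up names, nothing that exists in fetch-pack.c today):

#include <stdio.h>

#define INITIAL_FLUSH 16
#define MAX_FLUSH 1024

/* Hypothetical doubling schedule: start small so an infrequent
 * contributor's first round fits one Ethernet frame, then grow each
 * round so a maintainer who is far ahead converges quickly.
 */
static int next_round_size(int count)
{
	count *= 2;
	return count < MAX_FLUSH ? count : MAX_FLUSH;
}

int main(void)
{
	int sent = 0, size = INITIAL_FLUSH, rounds = 0;

	while (sent < 600) { /* maintainer 600 commits ahead */
		sent += size;
		rounds++;
		size = next_round_size(size);
	}
	printf("%d rounds, %d haves sent\n", rounds, sent); /* 6 rounds, 1008 */
	return 0;
}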


But we should really cap the round size at something sane like 1024 to
prevent HTTP POST payloads from exceeding 64 KiB. Practically the
payload is <32 KiB, because we gzip the POST body, and its only
content is hex digits and the word "have"; it should deflate to
smaller than 50% of the original size. For various selfish reasons I
wish I could keep the entire HTTP headers and POST body under 8192
bytes, but that is under 64 "have" lines, and that's useless.
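
The deflate claim is easy to sanity check with a standalone toy
(compile with -lz; random hex stands in for the SHA-1s, and the exact
ratio will vary, but it should come out below 50%):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
	static unsigned char src[1024 * 50];
	unsigned long src_len = 0, dst_len;
	unsigned char *dst;
	int i, j;

	/* Fake a worst-case round: 1024 pkt-lines of "0032have <40 hex>\n". */
	for (i = 0; i < 1024; i++) {
		char hex[41], line[51];

		for (j = 0; j < 40; j++)
			hex[j] = "0123456789abcdef"[rand() & 15];
		hex[40] = '\0';
		snprintf(line, sizeof(line), "0032have %s\n", hex);
		memcpy(src + src_len, line, 50);
		src_len += 50;
	}

	dst_len = compressBound(src_len);
	dst = malloc(dst_len);
	if (!dst || compress2(dst, &dst_len, src, src_len,
			      Z_DEFAULT_COMPRESSION) != Z_OK)
		return 1;
	printf("%lu -> %lu bytes (%.0f%%)\n",
	       src_len, dst_len, 100.0 * dst_len / src_len);
	free(dst);
	return 0;
}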

I'll try putting together this exponential round size patch today,
though I have a few other git things to do first.

-- 
Shawn.

