Re: [PATCH 3/3] fetch-pack: use smaller handshake window for initial request

On Fri, Mar 18, 2011 at 15:27, Junio C Hamano <junio@xxxxxxxxx> wrote:
> Start the initial request small by halving the INITIAL_FLUSH (we will try
> to stay one window ahead of the server, so we would end up giving twice as
> many "have" in flight at the very beginning).  We may want to tweak these
> values even more, taking MTU into account.

Thanks Junio, this patch series looks good to me.

I keep thinking about trying to maximize to the MTU, but this is
difficult. If we only consider the anonymous git:// over TCP case,
clients start the conversation with their "want" list, which includes
the capability list after the first want line. Because the want list
isn't regular in size (clients request different branches based on
what the server has offered, and what it is behind on, and what the
user may have asked for on the command line) and the capability list
isn't either (clients request a lot of capabilities these days), sizing
the initial "have" list to round out to fill an MTU is difficult to do
with a static constant. The best way to size the initial transfer to
an MTU boundary is to keep a running counter of bytes written thus
far, write *at least* 16 (or whatever our INITIAL_FLUSH is), and then
write additional "have" lines until we cannot fit another one in the
current MTU.
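A rough sketch of that running-counter idea in C. Everything here is hypothetical, not git's actual code: the constants and function name are made up, and the 50-byte figure assumes a pkt-line "have" carrying a 40-hex SHA-1 ("0032have <sha1>\n").

```c
#include <assert.h>

#define MTU_PAYLOAD 1400  /* assumed usable payload bytes per packet */
#define HAVE_LINE_LEN 50  /* pkt-line: 4-byte length + "have " + 40 hex + "\n" */

/* Send at least min_haves "have" lines, then keep adding lines while
 * another one still fits in the current MTU-sized packet, given how
 * many bytes (wants, capabilities) were already written this request. */
static int haves_to_send(int bytes_already_written, int min_haves)
{
	int used = bytes_already_written + min_haves * HAVE_LINE_LEN;
	int rem = used % MTU_PAYLOAD;
	int room = rem ? MTU_PAYLOAD - rem : 0;

	return min_haves + room / HAVE_LINE_LEN;
}
```

With nothing written yet, 16 haves occupy 800 bytes, so 12 more round the first packet out to exactly 1400; a 300-byte want/capability preamble leaves room for only 6 extras instead.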

For subsequent rounds, yes, we can statically size to an MTU, but
there isn't much benefit to doing a static size here if we have the
code to dynamically size the first batch based on the capability list
and the wants.

For smart HTTP however, sizing to an MTU is much more difficult. The
HTTP headers sent by libcurl are difficult to predict, and go in front
of the want list. And subsequent rounds include not just the HTTP
headers, but also the prior want list and any prior have lines that
received back "ACK %s common" from the remote peer. So we probably have
to use a dynamic size for the subsequent rounds too. To make things
more difficult, smart HTTP usually gzips the POST body before
transmission, which compresses the want/have list somewhat and makes
it even harder to predict where an MTU would be full.

Long story short, I'm not sure it's worth trying to optimize to fill an
MTU. But if I'm right, doubling the size of the window on each round
will reduce the number of round-trips involved. Over git:// this might
not be very noticeable since the server is already sending you TCP ACK
messages, and the Git-level ACK/NAKs can be piggy-backed into the TCP
ACKs. But over http:// I think it's a big win because the Git-level
ACK/NAKs cannot be used until the entire HTTP request has been
processed. It might not seem like a lot, but if your HTTP client is
behind 2 HTTP proxies (e.g. your local LAN proxy, and then the remote
server is actually a reverse proxy), the HTTP processing can really
start to dominate the round-trip time.
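To put a number on the round-trip savings, here is an illustrative comparison (a sketch, not git's implementation) of negotiation rounds for a doubling window versus a fixed one:

```c
#include <assert.h>

/* Rounds needed to send total_haves "have" lines when the window
 * doubles each round (16, 32, 64, ...) -- grows logarithmically. */
static int rounds_doubling(int total_haves, int window)
{
	int rounds = 0;
	while (total_haves > 0) {
		total_haves -= window;
		window *= 2;
		rounds++;
	}
	return rounds;
}

/* Same, with a fixed window each round -- grows linearly. */
static int rounds_fixed(int total_haves, int window)
{
	return (total_haves + window - 1) / window;
}
```

For 1000 haves starting at a window of 16, the doubling schedule finishes in 6 rounds where a fixed window of 32 needs 32; over HTTP each round is a full request/response cycle, so the difference adds up quickly behind slow proxies.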

-- 
Shawn.
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

