Re: [JGIT PATCH 2/2] Decrease the fetch pack client buffer to the lower minimum

Junio C Hamano <gitster@xxxxxxxxx> wrote:
> "Shawn O. Pearce" <spearce@xxxxxxxxxxx> writes:
> 
> > This is the lowest buffer size we actually require to keep the
> > client and server sides from deadlocking against each other.
> 
> Is this about the fetch-pack protocol where

Yes.
 
>  (1) upload-pack shows what it has; fetch-pack keeps reading until it sees
>      a flush; then
> 
>  (2) fetch-pack shows what it wants; upload-pack keeps reading; then
> 
>  (3) fetch-pack sends a bunch of have's, followed by a flush; upload-pack
>      keeps reading and then responds with an ACK-continue or NAK, which
>      fetch-pack reads; this step continues zero or more times; and then
>      finally
> 
>  (4) fetch-pack sends a bunch of have's, followed by a flush; upload-pack
>      keeps reading and then responds with an ACK, fetch-pack says done.
> 
> Where do you need "enough buffer"?  The conversation looks very much "it's
> my turn to talk", "now it's your turn to talk and I'll wait until I hear
> from you", to me.  I am puzzled.

In step 3, during the first round, the client can send up to 2 blocks'
worth of data, with 32 haves per block.  This means the client
writes 2952 bytes of data before it reads.
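The 2952 figure falls out of the pkt-line sizes.  A quick sketch of
the arithmetic (the constant names are illustrative, not JGit's
actual code; the 46 byte pkt-line size and two 4 byte end packets
come from the patch and the correction later in this mail):

```java
// Byte math behind MIN_CLIENT_BUFFER, as described in this thread.
public class BufferMath {
	static final int PKT_HAVE = 46;       // one "have <sha-1>" pkt-line
	static final int HAVES_PER_BLOCK = 32; // haves sent per block
	static final int BLOCKS = 2;           // blocks sent in the first round
	static final int END_PKTS = 2 * 4;     // two end() packets, 4 bytes each

	static final int MIN_CLIENT_BUFFER =
			BLOCKS * HAVES_PER_BLOCK * PKT_HAVE + END_PKTS;

	public static void main(String[] args) {
		System.out.println(MIN_CLIENT_BUFFER); // prints 2952
	}
}
```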

C Git doesn't run into this sort of problem because a normal pipe
would get 1 page (~4096 bytes) in the kernel for the FIFO buffer.

In SSH transport, we still have a 4096 byte pipe buffer between us
and the ssh client process, and ssh has its own buffering on top
of that.

In TCP transport, we have the kernel TX buffer on this side, and the
kernel RX buffer on the remote side, plus network switch buffers in
the middle.  2952 bytes nicely fits into just over 2 IP packets,
and the TCP window is large enough to allow these to be sent
without blocking the writer.

We need to be able to shove 2952 bytes down at the other guy before
we start listening to him.  The upload-pack side of the system can
(at worst) send us 64 "ACK %s continue" lines.  We must be able
to enter into the receive mode before the upload-pack side fills
their outgoing buffer.

In the Sun JVMs a pure in-memory pipe only has room for 1024 bytes
in the FIFO before it blocks.  Though the technique I am using to
boost the FIFO from 1024 to 2952 bytes isn't necessarily going to
be portable to other JVMs.  If both sides only have 1024 bytes of
buffer available and both sides can possibly write more than that,
we deadlock.
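One way to get a bigger in-JVM FIFO, sketched below under the
assumption of a Java 6 or later runtime (where PipedInputStream
grew a constructor that takes a pipe size); this is illustrative,
not necessarily the technique the patch actually uses, and older
JVMs without that constructor are stuck at the 1024 byte default:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// Sketch: size the pipe to hold the whole 2952 byte burst so the
// writer never blocks before the reader side starts draining it.
public class BigPipe {
	public static void main(String[] args) throws IOException {
		final int MIN_CLIENT_BUFFER = 2 * 32 * 46 + 8; // 2952
		PipedOutputStream out = new PipedOutputStream();
		PipedInputStream in = new PipedInputStream(out, MIN_CLIENT_BUFFER);

		// The full burst fits without blocking the writer; the
		// default 1024 byte pipe would have stalled it after 1 KiB.
		out.write(new byte[MIN_CLIENT_BUFFER]);
		System.out.println(in.available()); // prints 2952
	}
}
```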

> > +	/**
> > +	 * Amount of data the client sends before starting to read.
> > +	 * <p>
> > +	 * Any output stream given to the client must be able to buffer this many
> > +	 * bytes before the client will stop writing and start reading from the
> > +	 * input stream. If the output stream blocks before this many bytes are in
> > +	 * the send queue, the system will deadlock.
> > +	 */
> > +	protected static final int MIN_CLIENT_BUFFER = 2 * 32 * 46 + 4;

And this should be + 8 here.  F@!*!

Robin, can you amend?  It should be + 8 because we send two end()
packets in that initial burst, and each packet is 4 bytes in size.
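For anyone following along, an end (flush) packet in pkt-line
framing is just the 4 byte length header "0000" with no payload,
which is where the + 8 comes from.  A minimal sketch (illustrative,
not JGit's actual PacketLineOut code):

```java
// An end/flush pkt-line carries no payload, only the header "0000".
public class PktLine {
	static byte[] flushPkt() {
		return "0000".getBytes(); // 4 bytes, no payload
	}

	public static void main(String[] args) {
		// Two end() packets in the initial burst account for the + 8.
		System.out.println(2 * flushPkt().length); // prints 8
	}
}
```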

-- 
Shawn.
