Junio C Hamano <gitster@xxxxxxxxx> wrote:
> "Shawn O. Pearce" <spearce@xxxxxxxxxxx> writes:
>
> > In step 3 during the first round the client can send up to 2 blocks
> > worth of data, with 32 haves per block.  This means the client
> > writes 2952 bytes of data before it reads.
>
> Sorry, perhaps I am being extremely slow, but even if the client writes
> millions of bytes before it starts reading, I do not see how it would be
> a problem as long as the other side reads these millions of bytes before
> saying "Ok, I've heard about them and my response so far is Ack-continue
> (or NAK)", which the client needs to read.

Ok, maybe my understanding of the fetch-pack/upload-pack protocol is
incorrect.

If multi_ack is enabled, isn't it possible for the remote to return
"ACK %s continue" for each of the first 63 "have %s" lines the client
sent?

E.g. take the case where the client has only one ref and only 1 commit
the other side doesn't have, and the other side has only one ref and
only 1 commit the client doesn't have (so the client will fetch exactly
1 commit).

In such a case, the client will blast 64 have lines before pausing to
listen to the server.  But the server will already have 63 of the
commits named by those lines, and will try in vain to send
"ACK %s continue" back to the client, hoping it will stop enumerating
along that branch.

If there is insufficient buffering along one of those writers, the
entire thing deadlocks.

-- 
Shawn.
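P.S. Here is a minimal standalone toy (my own sketch, not the actual
fetch-pack/upload-pack code) that shows the two-writer deadlock.  The
have/ACK line formats only roughly follow the protocol, and I crank the
count up to 10000 lines so the jam reproduces even with the ~64KB
default pipe buffers on Linux; in the real exchange, 64 haves are
enough once the transport (say, a remote shell channel) buffers much
less than that.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int c2s[2], s2c[2];	/* client->server, server->client */

	if (pipe(c2s) || pipe(s2c))
		return 1;

	if (fork() == 0) {
		/* "server": ACK every have as it arrives, the way
		 * upload-pack's multi_ack path would for commits it
		 * already has */
		char have[46];
		close(c2s[1]);
		close(s2c[0]);
		while (read(c2s[0], have, sizeof(have)) == sizeof(have)) {
			char ack[64];
			int n = snprintf(ack, sizeof(ack),
					 "ACK %.40s continue\n", have + 5);
			/* blocks for good once s2c fills up, because
			 * the client is not reading yet */
			if (write(s2c[1], ack, n) != n)
				_exit(1);
		}
		_exit(0);
	}

	/* "client": blast all the have lines before reading anything */
	close(c2s[0]);
	close(s2c[1]);
	for (int i = 0; i < 10000; i++) {
		char have[47];
		int n = snprintf(have, sizeof(have), "have %040d\n", i);
		/* blocks for good once c2s fills up too: the server
		 * is stuck in its own write(), so nobody drains
		 * either pipe -- that is the deadlock */
		if (write(c2s[1], have, n) != n)
			return 1;
		fprintf(stderr, "sent have %d\n", i);
	}
	return 0;
}

Run it and the "sent have" counter stalls somewhere in the low
thousands, with both processes asleep in write().  That is exactly the
situation I am worried fetch-pack can get into whenever the transport's
buffering is smaller than one round of haves plus the ACKs coming back.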