Re: [PATCH] Fix potential local deadlock during fetch-pack

On Tue, Mar 29, 2011 at 10:22, Shawn Pearce <spearce@xxxxxxxxxxx> wrote:
> On Tue, Mar 29, 2011 at 10:06, Junio C Hamano <gitster@xxxxxxxxx> wrote:
>> The fetch-pack/upload-pack protocol relies on the underlying transport
>> (local pipe or TCP socket) to have enough slack to allow one window worth
>> of data in flight without blocking the writer.  Traditionally we always
>> relied on being able to have a batch of 32 "have"s in flight (roughly 1.5k
>
>> +               count += flush_limit;
>
> Nak. You still deadlock because when count reaches PIPESAFE_FLUSH you
> still double it to 2*PIPESAFE_FLUSH here. Instead I think you mean:

I take this comment back. Re-reading fetch-pack.c, the next_flush()
function takes as input a running counter of how many "have" lines
have already been sent to the remote peer, and that counter is never
reset to 0. Therefore it is correct to add the next round's size to
count and return the new cumulative total.
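
For what it's worth, here is a minimal sketch of the shape of that
logic. It is simplified from the quoted patch: the real next_flush()
derives flush_limit from the fetch-pack arguments for the transport in
use rather than taking it as a parameter, so the signature, constants
and the driver loop below are illustrative only.

    #include <stdio.h>

    #define INITIAL_FLUSH 16
    #define PIPESAFE_FLUSH 32

    /*
     * "count" is the running total of "have" lines already sent; the
     * return value is the running total at which the next flush
     * should go out.  Doubling only happens during the initial
     * ramp-up; after that, each flush point sits one more batch of
     * flush_limit "have"s beyond the previous one.
     */
    static int next_flush(int count, int flush_limit)
    {
            if (count < flush_limit)
                    count <<= 1;
            else
                    count += flush_limit;
            return count;
    }

    int main(void)
    {
            int count = INITIAL_FLUSH;
            int i;

            /* first few cumulative flush points: 16 32 64 96 128 160 */
            for (i = 0; i < 6; i++) {
                    printf("%d ", count);
                    count = next_flush(count, PIPESAFE_FLUSH);
            }
            printf("\n");
            return 0;
    }

Starting from INITIAL_FLUSH, the returned values form a growing
cumulative sequence (16, 32, 64, 96, 128, ...), which is why adding
flush_limit to count, rather than doubling forever, is what a
never-reset counter calls for once the ramp-up is done.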

-- 
Shawn.

