Re: [PATCH 2/2] lock_packed_refs(): allow retries when acquiring the packed-refs lock

On Fri, May 1, 2015 at 10:19 PM, Michael Haggerty <mhagger@xxxxxxxxxxxx> wrote:
> On 05/01/2015 08:22 PM, Jeff King wrote:
>> On Fri, May 01, 2015 at 10:51:47AM -0700, Stefan Beller wrote:
>>
>>>> diff --git a/refs.c b/refs.c
>>>> index 47e4e53..3f8ac63 100644
>>>> --- a/refs.c
>>>> +++ b/refs.c
>>>> @@ -2413,9 +2413,19 @@ static int write_packed_entry_fn(struct ref_entry *entry, void *cb_data)
>>>>  /* This should return a meaningful errno on failure */
>>>>  int lock_packed_refs(int flags)
>>>>  {
>>>> +       static int timeout_configured = 0;
>>>> +       static int timeout_value = 1000;
>>>
>>> I'd personally be happier with a default value of 100 ms or less.
>>> The reason is found in human nature: humans tend to perceive
>>> anything faster than 100 ms as "instant"[1], while 100 ms is a long
>>> time for computers.
>>>
>>> Now a small default time may lead to too few retries, so maybe it's
>>> worth checking once more at the very end of the timeout (ignoring the
>>> computed backoff times). As pushes to $server usually take a while
>>> (connecting, packing packs, writing objects etc.), this may be
>>> overcautious bikeshedding on my side.
>>
>> Keep in mind that this 1s is the maximum time to wait. The
>> lock_file_timeout() code from patch 1 starts off at 1ms, grows
>> quadratically, and quits as soon as it succeeds. So in most cases, the
>> user will wait a much smaller amount of time.

Yes, I forgot about that when forming my opinion. I agree a one-second
timeout is reasonable.
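
For anyone following along, below is a minimal sketch of the retry loop
described above (not the actual lockfile code from patch 1; the names
lock_with_timeout(), try_lock() and sleep_ms() are purely illustrative,
and the ~25% jitter is my own guess). Attempt n sleeps roughly n*n
milliseconds, capped at one second per sleep, until either the lock is
acquired or the configured total timeout expires:

#include <errno.h>
#include <stdlib.h>
#include <time.h>

static void sleep_ms(long ms)
{
	struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
	nanosleep(&ts, NULL);
}

/*
 * Placeholder for the real "create the .lock file with O_CREAT|O_EXCL"
 * call; returns 0 on success, -1 with errno set on failure.
 */
extern int try_lock(const char *path);

static int lock_with_timeout(const char *path, long timeout_ms)
{
	long waited_ms = 0;
	long n = 1;

	while (1) {
		long backoff_ms;

		if (!try_lock(path))
			return 0;          /* got the lock */
		if (errno != EEXIST)
			return -1;         /* real error, not contention */
		if (waited_ms >= timeout_ms)
			return -1;         /* still contended; give up */

		backoff_ms = n * n;        /* quadratic growth: 1, 4, 9, ... */
		if (backoff_ms > 1000)
			backoff_ms = 1000; /* cap each individual sleep at 1 s */
		/* add some jitter so competing processes don't stay in lockstep */
		backoff_ms += rand() % (backoff_ms / 4 + 1);

		sleep_ms(backoff_ms);
		waited_ms += backoff_ms;
		n++;
	}
}

The exact jitter and cap in the real patch may differ; the sketch is only
meant to show the shape of the schedule in the table below.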

> The current code would poll at the following times (in ms), ignoring the
> time taken for the actual polling attempt and ignoring the random component:
>
>     time  backoff  percent
>     ----  -------  -------
>        0        1     N/A
>        1        4     400%
>        5        9     180%
>       14       16     114%
>       30       25      83%
>       55       36      65%
>       91       49      54%
>      140       64      46%
>      204       81      40%
>      285      100      35%
>      385      121      31%
>      506      144      28%
>      650      169      26%
>      819      196      24%
>     1015      225      22%  <- Stop here with the default 1 s timeout

So in the worst case a one-second configuration makes only about twice
as many attempts as a 100 ms configuration, which is easy on machine
load and, as I said above, not too long for the user.

Thanks for laying out the numbers here; it's much easier to follow now.

>     1240      256      21%
>     1496      289      19%
>     1785      324      18%
>     2109      361      17%
>     2470      400      16%
>     2870      441      15%
>     3311      484      15%
>     3795      529      14%
>     4324      576      13%
>     4900      625      13%
>     5525      676      12%
>     6201      729      12%
>     6930      784      11%
>     7714      841      11%
>     8555      900      11%
>     9455      961      10%
>    10416     1000      10%
>    11416     1000       9%
>    12416     1000       8%
>
> From the table, the first backoff that is longer than 100 ms doesn't
> start until 385 ms, and in the worst case, that 121 ms delay would
> increase the *total* wait by only 31%. And the longest backoff within
> the first second is only 196 ms. The backoff doesn't reach its maximum
> value, 1 s, until the process has already been blocked for 10.4 seconds.
>
> Remember, these backoffs are the *maximum* time that the user might wait
> between the time that one process is finished and the time that the
> second process resumes. The *average* wait time will be half of that.
>
> And finally, remember that lock contention should be very rare in the
> first place, and will mostly occur on servers (because normal users are
> unlikely to have lots of parallel processes accessing a single git
> repository). What are the most likely scenarios for lock contention in
> the real world?
>
> * Occasionally, by chance, under normal operations. In most cases, the
> blocking process will finish up within a few milliseconds, the blocked
> process will resume almost immediately, and nobody will notice a thing.
>
> * In a pathological repository that has, say, a million packed
> references, and writing the packed-refs file takes unusually long. Here,
> typical lock contention delays will be long anyway, and adding (worst
> case) 121 ms == 31% to the delay is not unreasonable.
>
> * When the server is struggling with enormous load, or a
> denial-of-service attack, or some other system-wide problem that is
> making processes super slow. In this case it would be counterproductive
> to have *additional* processes waking up every 100 ms.
>
> * When a packed-refs.lock file fails to get cleaned up for some reason.
> In this case the "contention" will never go away on its own, so the
> polling is wasted effort. (Happily, I've almost never seen this happen
> on our servers.)
>
> It would be trivial to increase or decrease the maximum backoff. But I
> think the current value is a reasonable compromise.
>
> Michael
>
> --
> Michael Haggerty
> mhagger@xxxxxxxxxxxx
>
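
And just to double-check my reading of the table, here is a tiny
standalone sketch that reproduces the schedule quoted above, assuming
the n-th backoff is simply min(n*n, 1000) ms with the jitter ignored,
and that "percent" is each backoff relative to the time already waited:

#include <stdio.h>

int main(void)
{
	long elapsed = 0;

	for (int n = 1; elapsed <= 12500; n++) {
		long backoff = n * n;

		if (backoff > 1000)
			backoff = 1000;	/* cap each backoff at 1 s */
		if (elapsed)
			printf("%8ld %8ld %6ld%%\n", elapsed, backoff,
			       (100 * backoff + elapsed / 2) / elapsed);
		else
			printf("%8ld %8ld    N/A\n", elapsed, backoff);
		elapsed += backoff;
	}
	return 0;
}

Rounded to whole percent, its output matches the numbers quoted above.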