Re: [PATCH] Retry acquiring reference locks for 100ms

Michael Haggerty <mhagger@xxxxxxxxxxxx> writes:

> The philosophy of reference locking has been, "if another process is
> changing a reference, then whatever I'm trying to do to it will
> probably fail anyway because my old-SHA-1 value is probably no longer
> current". But this argument falls down if the other process has locked
> the reference to do something that doesn't actually change the value
> of the reference, such as `pack-refs` or `reflog expire`. There
> actually *is* a decent chance that a planned reference update will
> still be able to go through after the other process has released the
> lock.

The reason these 'read-only' operations take locks is that they
want to ensure that other people will not mess with the anchoring
points of the history they base their operation on while they do
their work, right?

> So when trying to lock an individual reference (e.g., when creating
> "refs/heads/master.lock"), if it is already locked, then retry the
> lock acquisition for approximately 100 ms before giving up. This
> should eliminate some unnecessary lock conflicts without wasting a lot
> of time.
>
> Add a configuration setting, `core.filesRefLockTimeout`, to allow this
> setting to be tweaked.
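[Editorial note: the retry described in the patch can be sketched roughly as
below. This is a simplified illustration, not git's actual code; the
function name `acquire_ref_lock` and the backoff constants are invented for
the example. The real mechanism lives in git's lockfile API.]

```c
#include <errno.h>
#include <fcntl.h>
#include <time.h>
#include <unistd.h>

/*
 * Hypothetical sketch: try to create "<ref>.lock" atomically with
 * O_CREAT|O_EXCL, and if another process already holds the lock,
 * retry with short sleeps until roughly timeout_ms have elapsed.
 * Returns an open fd on success, -1 on failure.
 */
static int acquire_ref_lock(const char *lock_path, long timeout_ms)
{
	long waited_ms = 0;
	long backoff_ms = 1;

	for (;;) {
		int fd = open(lock_path, O_CREAT | O_EXCL | O_WRONLY, 0666);
		if (fd >= 0)
			return fd;	/* lock acquired */
		if (errno != EEXIST)
			return -1;	/* real error; give up immediately */
		if (waited_ms >= timeout_ms)
			return -1;	/* still locked after the deadline */

		/* brief pause before retrying, with simple exponential backoff */
		struct timespec ts = { 0, backoff_ms * 1000000L };
		nanosleep(&ts, NULL);
		waited_ms += backoff_ms;
		if (backoff_ms < 16)
			backoff_ms *= 2;
	}
}
```

As the patch notes, the actual deadline is configurable through
`core.filesRefLockTimeout` rather than hard-coded to 100 ms.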

I suspect that this came from real-life needs of a server operator.
What numbers should I be asking to justify this change? ;-) 

"Without this change, 0.4% of pushes used to fail due to losing the
race against periodic GC, but with this, the rate went down to 0.2%,
which is a 50% improvement!" or something like that?





