On Sat, May 02, 2015 at 07:19:28AM +0200, Michael Haggerty wrote:

> 100 ms seems to be considered an acceptable delay between the time that
> a user, say, clicks a button and the time that the button reacts. What
> we are talking about is the time between the release of a lock by one
> process and the resumption of another process that was blocked waiting
> for the lock. The former is probably not under the control of the user
> anyway, and perhaps not even observable by the user. Thus I don't think
> that a perceivable delay between that event and the resumption of the
> blocked process would be annoying. The more salient delay is between the
> time that the user started the blocked command and when that command
> completed. Let's look in more detail.

Yeah, you can't impact when the other process will drop the lock, but if
we assume that it takes on the order of 100ms for the other process to
do its whole operation, then on average we experience half that. And
then tack on to that whatever time we waste in sleep() after the other
guy drops the lock. And that's on average half of our backoff time.

So something like 100ms max backoff makes sense to me, in that it keeps
us in the same order of magnitude as the expected time that the lock is
held. Of course these numbers are all grossly hand-wavy, and as you
point out, the current formula never even hits 100ms with the current 1s
timeout, anyway.

So for the record, I'm fine leaving your patch as-is. I think for our
disgusting 1GB packed-refs files at GitHub, we will end up bumping the
maximum timeout, but by my own argument above, it will be fine for the
backoff to increase at the same time (i.e., they will remain in the same
rough order of magnitude).

-Peff
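For illustration, here is a minimal sketch of the kind of scheme being discussed: polling for a lock file with a growing, jittered backoff that is capped (here at 100ms) so the average wasted sleep stays in the same order of magnitude as the expected lock-hold time. This is not git's actual lockfile code; the function name, the quadratic growth curve, and the default timeout are all illustrative assumptions.

```python
import errno
import os
import random
import time

def acquire_lock(path, timeout_ms=1000, max_backoff_ms=100):
    """Try to create `path` as an exclusive lock file, retrying with a
    capped, jittered backoff until `timeout_ms` has elapsed.

    Returns an open file descriptor on success, or None on timeout.
    (Illustrative sketch only -- not git's real implementation.)
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    attempt = 1
    while True:
        try:
            # O_CREAT|O_EXCL makes the create atomic: exactly one
            # process can win the race to create the lock file.
            return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise  # a real error, not just "lock is held"
        if time.monotonic() >= deadline:
            return None  # gave up waiting
        # Backoff grows with the attempt number but never exceeds the
        # cap; the random jitter avoids lock-step retries when several
        # waiters are blocked on the same lock.
        backoff_ms = min(attempt * attempt, max_backoff_ms)
        time.sleep(random.uniform(0, backoff_ms) / 1000.0)
        attempt += 1
```

With a 100ms cap, a waiter that just misses the lock release sleeps on average about half the cap (~50ms) before noticing, which is the "half of our backoff time" figure from the discussion above.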