Re: [PATCH 2/2] lock_packed_refs(): allow retries when acquiring the packed-refs lock

On Fri, May 01, 2015 at 10:51:47AM -0700, Stefan Beller wrote:

> > diff --git a/refs.c b/refs.c
> > index 47e4e53..3f8ac63 100644
> > --- a/refs.c
> > +++ b/refs.c
> > @@ -2413,9 +2413,19 @@ static int write_packed_entry_fn(struct ref_entry *entry, void *cb_data)
> >  /* This should return a meaningful errno on failure */
> >  int lock_packed_refs(int flags)
> >  {
> > +       static int timeout_configured = 0;
> > +       static int timeout_value = 1000;
> 
> I'd personally be happier with a default value of 100 ms or less.
> The reason is found in human nature: humans tend to perceive
> anything faster than 100ms as "instant"[1], while 100ms is a long
> time for computers.
> 
> Now a small default time may lead to too few retries, so maybe it's
> worth checking once more at the very end of the timeout (ignoring the
> computed backoff times). As pushes to $server usually take a while
> (connecting, packing packs, writing objects, etc.), this may be
> overcautious bikeshedding on my side.

Keep in mind that this 1s is the maximum time to wait. The
lock_file_timeout() code from patch 1 starts off at 1ms, grows
quadratically, and quits as soon as it succeeds. So in most cases, the
user will wait a much smaller amount of time.
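
To make that concrete, here is a rough sketch of the shape of that
loop (try_lock(), sleep_millisec(), MAX_BACKOFF_MS, and the EEXIST
convention are hypothetical stand-ins for illustration, not the actual
code from patch 1):

    #include <errno.h>

    /* Hypothetical helpers standing in for the real lockfile code: */
    extern int try_lock(void);       /* 0 on success; -1 with errno set */
    extern void sleep_millisec(long ms);

    #define MAX_BACKOFF_MS 1000      /* cap on the gap between retries */

    /*
     * Retry with quadratic backoff: sleep 1ms, 4ms, 9ms, 16ms, ...
     * between attempts, giving up once timeout_ms has elapsed.
     */
    static int lock_with_timeout(long timeout_ms)
    {
            long n = 1;
            long remaining_ms = timeout_ms;

            while (1) {
                    long backoff_ms;

                    if (!try_lock())
                            return 0;    /* got the lock */
                    if (errno != EEXIST)
                            return -1;   /* hard error; retrying won't help */
                    if (remaining_ms <= 0)
                            return -1;   /* stale or busy lock: give up */

                    backoff_ms = n * n;  /* quadratic growth, 1ms base */
                    if (backoff_ms > MAX_BACKOFF_MS)
                            backoff_ms = MAX_BACKOFF_MS;
                    if (backoff_ms > remaining_ms)
                            backoff_ms = remaining_ms;

                    sleep_millisec(backoff_ms);
                    remaining_ms -= backoff_ms;
                    n++;
            }
    }

The point is that the sleeps start tiny and the loop returns the
instant the lock is free, so the full timeout is only ever paid when
the lock never becomes available.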

The factors that go into this timeout length are really:

  1. If there's a stale lockfile, the user will have to wait the whole
     period. How long do we keep retrying before giving up?

  2. How long do we typically hold the lock for? Aside from absurd
     cases, writing out the packed-refs file isn't that expensive. But
     while holding the packed-refs lock, we may actually be iterating
     the loose refs, which can be rather slow on a cold cache.

If we want to improve _responsiveness_ in the normal case, I think it's
not the max timeout we want to tweak but the resolution of retries.
That's set in patch 1 by the maximum backoff multiplier, which can put
us up to 1s between retries. It might make sense to drop that to 500ms
or even less.
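
For illustration, assuming a plain 1ms * n^2 progression, the sleeps
go 1ms, 4ms, 9ms, 16ms, ..., which sums to about 1s after fourteen or
so attempts; the gap between retries only approaches the 1s cap when
the configured timeout is much larger than the default. Dropping the
cap to 500ms would halve the worst-case lag between the lock being
released and us noticing it, at the cost of a handful of extra
wakeups. (The exact constants here are illustrative, not the ones
patch 1 actually uses.)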

-Peff