On Mon, May 11, 2015 at 10:49 AM, Stefan Beller <sbeller@xxxxxxxxxx> wrote:
> On Mon, May 11, 2015 at 9:49 AM, Jeff King <peff@xxxxxxxx> wrote:
>> On Mon, May 11, 2015 at 12:26:23PM +0200, Michael Haggerty wrote:
>>
>>> > So something like 100ms max backoff makes sense to me, in that it keeps
>>> > us in the same order of magnitude as the expected time that the lock is
>>> > held. [...]
>>>
>>> I don't understand your argument. If another process blocks us for on
>>> the order of 100 ms, the backoff time (reading from my table) is less
>>> than half of that.
>>
>> I think it is just that I was agreeing with you, but communicated it
>> badly. I think your series is fine as-is.
>
> By now I also think your series is fine as is.
> I am currently implementing something similar for Gerrit, and testing
> time-based things is a royal pain in the butt. The tests you propose
> just take wall-clock time and all should be good, though they slow down
> the test suite by another second or three in the worst case.
>
> So for the tests in Gerrit I use a dedicated Java class that is
> specialized to pretend different times, so that you can write:
>
>     doThings();
>     pretendTimePassing(1 second);
>     checkResultsFromThreads();
>
> but running the tests takes less than one second, as no real wall-clock
> time passes.

On my machine there is

    /bin/usleep - sleep some number of microseconds

As we're using perl anyway, we may even want to just do

    perl -e "select(undef,undef,undef,0.1);"

as found at
http://serverfault.com/questions/469247/how-do-i-sleep-for-a-millisecond-in-bash-or-ksh