On Thu, 2010-12-23 at 17:31 +1100, Dave Chinner wrote:
> On Wed, Dec 22, 2010 at 09:56:42PM -0600, Alex Elder wrote:
> > In __percpu_counter_add_unless_lt() we don't need to disable
> > preemption unless we're manipulating a per-cpu variable.  That
> > only happens in a limited case, so narrow the scope of that
> > preemption to surround that case.  This makes the "out" label
> > rather unnecessary, so replace a couple of "goto out" calls with
> > plain returns.

. . .

> Regardless of the other changes, this is not valid. That is:

You're right.  I was thinking about updates to fbc->count being
protected by the spinlock, but that doesn't address the cached value
getting stale if this CPU gets preempted and another thread passes
through this code before the first one gets resumed.

I'm also looking at the other patches and your responses and will be
done with them today.  I don't want to hold up your pull request any
longer.  If you found anything of value in the little series I
posted, feel free to incorporate it into your own changes.

					-Alex

> 	amount = -1;
> 	count = fbc->count;
> .....
>
> <get preempted>
>
> <other operations may significantly change fbc->count (i.e. lots
> more than the error bound will catch), so the current value of
> count in this context is wrong and cannot be trusted>
>
> <start running again>
>
> 	if (count - error + amount > threshold) {
> 		<not valid to run this lockless optimisation based
> 		on a stale count value>
>
> 		....
> 	}
>
> Effectively, if we want to be able to use lockless optimisations, we
> need to ensure that the value of the global counter that we read
> remains within the given error bounds until we have finished making
> the lockless modification.  That is done by disabling preemption
> across the entire function...
>
> Cheers,
>
> Dave.
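
To make the race concrete, here is a minimal sketch of the pattern
Dave describes: preemption disabled across the entire
read-check-modify sequence, so the cached value of fbc->count stays
within the per-cpu error bound until the lockless update is done.
This is illustrative only, not the patch under review; the return
convention, the "threshold" parameter, and the error computation
(percpu_counter_batch * num_online_cpus()) are assumptions made for
the example.

	#include <linux/cpumask.h>
	#include <linux/percpu_counter.h>
	#include <linux/preempt.h>
	#include <linux/spinlock.h>

	/*
	 * Sketch only -- not the proposed __percpu_counter_add_unless_lt().
	 * Add "amount" to the counter unless the result could fall below
	 * "threshold".  Returns 0 if applied, -1 if refused (return
	 * convention assumed for illustration).
	 */
	static int sketch_add_unless_lt(struct percpu_counter *fbc,
					s64 amount, s64 threshold)
	{
		s64 count, error;
		int cpu, ret = -1;

		/* Each CPU may hold up to "batch" worth of unfolded deltas. */
		error = (s64)percpu_counter_batch * num_online_cpus();

		/*
		 * Preemption stays disabled across the whole sequence: the
		 * cached fbc->count is only within +/- error of the true
		 * value as long as nothing else can run on this CPU between
		 * the read and the per-cpu update.
		 */
		preempt_disable();
		count = fbc->count;
		if (count + amount - error >= threshold) {
			/*
			 * Even if the true value is a full "error" below
			 * what we read, the add cannot take it under the
			 * threshold, so the lockless per-cpu path is safe.
			 */
			__percpu_counter_add(fbc, amount, percpu_counter_batch);
			ret = 0;
		}
		preempt_enable();
		if (!ret)
			return 0;

		/* Slow path: serialise and test against the exact sum. */
		spin_lock(&fbc->lock);	/* raw_spin_lock() on newer kernels */
		count = fbc->count;
		for_each_online_cpu(cpu)
			count += *per_cpu_ptr(fbc->counters, cpu);
		if (count + amount >= threshold) {
			fbc->count += amount;
			ret = 0;
		}
		spin_unlock(&fbc->lock);
		return ret;
	}

The key point is that preempt_disable()/preempt_enable() bracket both
the read of fbc->count and the per-cpu modification; narrowing that
window to just the per-cpu add, as the original patch did, reopens
the staleness race Dave outlines above.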