On Mon, Jun 21, 2010 at 11:14:16PM -0700, Andrew Morton wrote:
> On Tue, 22 Jun 2010 15:44:09 +1000 Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>
> > > > And so on. This isn't necessarily bad - we'll throttle for longer
> > > > than we strictly need to - but the cumulative counter resolution
> > > > error gets worse as the number of CPUs doing IO completion grows.
> > > > Worst case ends up at (num cpus * 31) + 1 pages of writeback for
> > > > just the first waiter. For an arbitrary FIFO queue of depth d, the
> > > > worst case is more like d * (num cpus * 31 + 1).
> > >
> > > Hmm, I don't see how the error would depend on the FIFO depth.
> >
> > It's the cumulative error that depends on the FIFO depth, not the
> > error seen by a single waiter.
>
> Could use the below to basically eliminate the inaccuracies.
>
> Obviously things might get a bit expensive in certain threshold cases
> but with some hysteresis that should be manageable.

That seems a lot more... unpredictable than modifying the accounting
to avoid cumulative errors.

> +	/* Check to see if rough count will be sufficient for comparison */
> +	if (abs(count - rhs) > (percpu_counter_batch*num_online_cpus())) {

Also, that's a big margin when we are doing equality matches for every
page IO completion. If we have a large CPU count machine where per-cpu
counters actually improve performance (say 16p), then we're going to be
hitting the slow path for the last 512 pages of every waiter. Hence I
think the counter sum is compared too often to scale with this method
of comparison.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
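
P.S. To make the tradeoff concrete, here's a stand-alone userspace model
of the compare scheme in the quoted patch. This is a sketch, not the
kernel code: the struct layout and the pcpu_* names are stand-ins of
mine, and NCPUS/BATCH model num_online_cpus() and percpu_counter_batch
(32 by default, hence up to 31 unfolded pages per CPU, and the 512-page
margin at 16p):

#include <stdio.h>
#include <stdlib.h>

#define NCPUS 16          /* stand-in for num_online_cpus() */
#define BATCH 32          /* stand-in for percpu_counter_batch */

struct pcpu_counter {
	long long global;         /* folded-in total */
	long long percpu[NCPUS];  /* per-cpu deltas, each < BATCH */
};

/* Fast, approximate read: global count only, per-cpu deltas ignored. */
static long long pcpu_read(struct pcpu_counter *c)
{
	return c->global;
}

/* Slow, precise read: fold in every per-cpu delta. */
static long long pcpu_sum(struct pcpu_counter *c)
{
	long long sum = c->global;
	for (int i = 0; i < NCPUS; i++)
		sum += c->percpu[i];
	return sum;
}

/* Compare counter against rhs, returning -1, 0 or 1. */
static int pcpu_compare(struct pcpu_counter *c, long long rhs)
{
	long long count = pcpu_read(c);

	/* Check to see if rough count will be sufficient for comparison. */
	if (llabs(count - rhs) > (long long)BATCH * NCPUS)
		return count > rhs ? 1 : -1;

	/* Within the error margin: need the precise (expensive) sum. */
	count = pcpu_sum(c);
	if (count > rhs)
		return 1;
	return count < rhs ? -1 : 0;
}

int main(void)
{
	struct pcpu_counter c = { .global = 10000 };

	/* Worst case: every CPU holds BATCH-1 pages not yet folded in. */
	for (int i = 0; i < NCPUS; i++)
		c.percpu[i] = BATCH - 1;

	/* rhs within BATCH*NCPUS (512 here) of the rough count forces
	 * the slow path - the "last 512 pages of every waiter" above. */
	printf("compare vs 10200: %d\n", pcpu_compare(&c, 10200));

	/* rhs far from the rough count takes the cheap fast path. */
	printf("compare vs 20000: %d\n", pcpu_compare(&c, 20000));
	return 0;
}

The point being: any rhs within BATCH * NCPUS of the rough count falls
through to the full per-cpu walk, so the closer a waiter gets to its
threshold the more often it pays for the precise sum.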