On Tue, 26 Apr 2011, Tejun Heo wrote:
>
> However, after the change, especially with high @batch count, the
> result may deviate significantly even with low frequency concurrent
> updates.  @batch deviations won't happen often but will happen once
> in a while, which is just nasty and makes the API much less useful
> and those occasional deviations can cause sporadic erratic behaviors -
> e.g. filesystems use it for free block accounting.  It's actually
> used for somewhat critical decision making.

This worried me a little when the percpu block counting went into
tmpfs, though it's not really critical there.

Would it be feasible, with these counters that are used against
limits, to have an adaptive batching scheme such that the batches get
smaller and smaller, down to 1 and to 0, as the total approaches the
limit?  (Of course a single global percpu_counter_batch won't do for
this.)

Perhaps it's a demonstrable logical impossibility, perhaps it would
slow down the fast (far-from-limit) path more than we can afford,
perhaps I haven't read enough of this thread and I'm taking it
off-topic.  Forgive me if so.
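Concretely, something along these lines (a standalone C toy model, not
the kernel's percpu_counter code; the names adaptive_counter,
counter_add and MAX_BATCH are invented purely for illustration): the
per-cpu/per-thread flush threshold shrinks with the remaining headroom
to the limit, down to zero, so the counter becomes exact just when
accuracy matters.

#include <stdatomic.h>
#include <stdio.h>

#define MAX_BATCH 32

struct adaptive_counter {
	_Atomic long total;	/* shared total, approximate until flushed */
	long limit;		/* limit the counter is checked against */
};

static _Thread_local long local_delta;	/* per-thread pending updates */

/* Batch shrinks with remaining headroom, reaching 0 at the limit. */
static long adaptive_batch(struct adaptive_counter *c)
{
	long headroom = c->limit - atomic_load(&c->total);

	if (headroom <= 0)
		return 0;		/* at/over limit: flush every update */
	if (headroom >= MAX_BATCH * MAX_BATCH)
		return MAX_BATCH;	/* far from limit: full batching */
	return headroom / MAX_BATCH;	/* near limit: smaller and smaller */
}

/* Add @amount; fold into the shared total once the batch is exceeded. */
static long counter_add(struct adaptive_counter *c, long amount)
{
	long batch = adaptive_batch(c);

	local_delta += amount;
	if (local_delta >= batch || local_delta <= -batch) {
		atomic_fetch_add(&c->total, local_delta);
		local_delta = 0;
	}
	return atomic_load(&c->total);
}

int main(void)
{
	struct adaptive_counter c = { .total = 0, .limit = 1000 };

	for (int i = 0; i < 1200; i++) {
		long seen = counter_add(&c, 1);
		if (seen >= c.limit) {
			printf("limit hit at update %d, total %ld\n",
			       i + 1, seen);
			break;
		}
	}
	return 0;
}

The obvious cost is the extra atomic read of the shared total on every
update just to size the batch; whether that is acceptable on the
far-from-limit fast path is exactly the question above.

Hugh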