On Thu, Aug 18, 2011 at 10:26:58AM -0400, Valdis.Kletnieks@xxxxxx wrote:
> On Thu, 18 Aug 2011 11:38:00 +0200, Johannes Weiner said:
>
> > Note that on non-x86, these operations themselves actually disable
> > and reenable preemption each time, so you trade a pair of add and
> > sub on x86
> >
> > -	preempt_disable()
> > 	__this_cpu_xxx()
> > 	__this_cpu_yyy()
> > -	preempt_enable()
> >
> > with
> >
> > 	preempt_disable()
> > 	__this_cpu_xxx()
> > +	preempt_enable()
> > +	preempt_disable()
> > 	__this_cpu_yyy()
> > 	preempt_enable()
> >
> > everywhere else.
>
> That would be an unexpected race condition on non-x86, if you
> expected _xxx and _yyy to be done together without a preempt between
> them.  Would take mere mortals forever to figure that one out. :)

That should be fine: we don't require the two counters to be perfectly
coherent with respect to each other, which is the justification for
this optimization in the first place.

But on non-x86, the read-modify-write that updates a single per-cpu
counter is itself made atomic by disabling preemption around it.
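
To illustrate, here is roughly what the generic fallback does on
architectures that lack a single-instruction per-cpu add.  This is a
simplified sketch, not the literal kernel code; the real macros live
in include/linux/percpu.h, and the "_sketch" name below is made up:

	/* Simplified sketch of the generic this_cpu_add() fallback. */
	#define this_cpu_add_sketch(pcp, val)				\
	do {								\
		preempt_disable();					\
		/* read-modify-write; safe while preemption is off */	\
		*__this_cpu_ptr(&(pcp)) += (val);			\
		preempt_enable();					\
	} while (0)

So each individual counter update stays atomic with respect to
preemption, but two consecutive updates can be separated by a
preemption point, which is exactly the window shown in the quoted
diff above.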