On Tue, 18 Mar 2014, Andrew Morton wrote:

> Christoph caught one. How does this look?

The fundamental decision to be made here is whether we want to take the
counter overhead on platforms that do not have lockless percpu atomics
and therefore need an irq on/off sequence around each increment to make
it safe. So far we have said that we allow the counters to be racy for
performance's sake. Your patch would remove the races. If we want to
keep the races and the performance, then we need to change
__count_vm_events to use raw_cpu_add instead of __this_cpu_add.

Subject: vmstat: Use raw_cpu_ops to avoid false positives on preemption checks

vm counters are allowed to be racy. Use raw_cpu_ops to avoid the
preemption checks.

Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>

Index: linux/include/linux/vmstat.h
===================================================================
--- linux.orig/include/linux/vmstat.h	2014-02-10 08:54:02.318697828 -0600
+++ linux/include/linux/vmstat.h	2014-03-20 09:02:05.132852038 -0500
@@ -29,7 +29,7 @@ DECLARE_PER_CPU(struct vm_event_state, v
 
 static inline void __count_vm_event(enum vm_event_item item)
 {
-	__this_cpu_inc(vm_event_states.event[item]);
+	raw_cpu_inc(vm_event_states.event[item]);
 }
 
 static inline void count_vm_event(enum vm_event_item item)
@@ -39,7 +39,7 @@ static inline void count_vm_event(enum v
 
 static inline void __count_vm_events(enum vm_event_item item, long delta)
 {
-	__this_cpu_add(vm_event_states.event[item], delta);
+	raw_cpu_add(vm_event_states.event[item], delta);
 }
 
 static inline void count_vm_events(enum vm_event_item item, long delta)
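
To make the tradeoff concrete, here is a rough sketch of what the three
flavors do on an architecture without its own lockless percpu
primitives. This is paraphrased, not the verbatim fallback macros from
the kernel's percpu headers; the _sketch suffix marks it as
illustration only. All three perform the same unprotected
read-modify-write and differ only in how much protection they wrap
around it:

/* raw_cpu_add(): no protection, no checks -- a rare increment may be
 * lost if the task migrates mid-update. */
#define raw_cpu_add_sketch(pcp, val)				\
do {								\
	*raw_cpu_ptr(&(pcp)) += (val);				\
} while (0)

/* __this_cpu_add(): same unprotected update, but with
 * CONFIG_DEBUG_PREEMPT it warns when the caller is preemptible. */
#define __this_cpu_add_sketch(pcp, val)				\
do {								\
	__this_cpu_preempt_check("add");			\
	raw_cpu_add_sketch(pcp, val);				\
} while (0)

/* this_cpu_add(): fully safe -- brackets the update with irq
 * disable/enable, which is the overhead discussed above. */
#define this_cpu_add_sketch(pcp, val)				\
do {								\
	unsigned long __flags;					\
	local_irq_save(__flags);				\
	raw_cpu_add_sketch(pcp, val);				\
	local_irq_restore(__flags);				\
} while (0)

The preemption-check warning is exactly the false positive the patch
title refers to: __count_vm_event() is documented to be racy, so its
callers may legitimately run preemptible, and neither the warning nor
an irq on/off sequence buys anything for statistics that tolerate an
occasional lost increment.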