On Thu, 22 Jan 2015, Andrey Skvortsov wrote:

> diff --git a/mm/slub.c b/mm/slub.c
> index ceee1d7..6bcd031 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2404,7 +2404,7 @@ redo:
>  	 */
>  	do {
>  		tid = this_cpu_read(s->cpu_slab->tid);
> -		c = this_cpu_ptr(s->cpu_slab);
> +		c = raw_cpu_ptr(s->cpu_slab);
>  	} while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
>
>  	/*
> @@ -2670,7 +2670,7 @@ redo:
>  	 */
>  	do {
>  		tid = this_cpu_read(s->cpu_slab->tid);
> -		c = this_cpu_ptr(s->cpu_slab);
> +		c = raw_cpu_ptr(s->cpu_slab);
>  	} while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
>
>  	/* Same with comment on barrier() in slab_alloc_node() */

This should already be fixed with
http://ozlabs.org/~akpm/mmotm/broken-out/mm-slub-optimize-alloc-free-fastpath-by-removing-preemption-on-off-v3.patch

You can find the latest mmotm, which was just released, at
http://ozlabs.org/~akpm/mmotm, and it should be in linux-next tomorrow.
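
For anyone wondering why it is safe to resolve the per-cpu pointer with
preemption enabled at all: the loop rereads the tid through the pointer it
just resolved, and because each cpu's tid is seeded with its cpu number and
advanced in steps large enough that two cpus never share a tid value, a
mismatch means the task migrated (or another transaction completed) between
the two reads, so it simply retries.  Below is a rough userspace sketch of
that idiom, purely as an illustration and not the kernel code; the names
(fake_cpu_slab, disturber, NR_FAKE_CPUS) are made up for the sketch, and the
real fastpath additionally validates the snapshot with a cmpxchg on the
freelist/tid pair before touching anything.

/* build with: cc -O2 -pthread tid_retry_sketch.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_FAKE_CPUS 2

struct fake_cpu_slab {
	_Atomic unsigned long tid;	/* seeded with the cpu id, stepped by NR_FAKE_CPUS */
};

/* tids of different "cpus" can never collide: 0,2,4,... vs 1,3,5,... */
static struct fake_cpu_slab cpu_slab[NR_FAKE_CPUS] = { { 0 }, { 1 } };
static _Atomic int current_cpu;		/* the "cpu" our task currently runs on */
static _Atomic bool stop;

/*
 * Plays the role of the scheduler plus concurrent transactions:
 * completes a transaction on the current cpu (bumping its tid) and
 * then migrates the task to the other cpu.
 */
static void *disturber(void *arg)
{
	(void)arg;
	while (!atomic_load(&stop)) {
		int cpu = atomic_load(&current_cpu);

		atomic_fetch_add(&cpu_slab[cpu].tid, NR_FAKE_CPUS);
		atomic_store(&current_cpu, (cpu + 1) % NR_FAKE_CPUS);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	unsigned long retries = 0;

	pthread_create(&t, NULL, disturber, NULL);

	for (int i = 0; i < 1000000; i++) {
		unsigned long tid;
		struct fake_cpu_slab *c;

		/* the do/while pattern from the quoted hunks */
		for (;;) {
			tid = atomic_load(&cpu_slab[atomic_load(&current_cpu)].tid);
			c = &cpu_slab[atomic_load(&current_cpu)];
			if (tid == atomic_load(&c->tid))
				break;		/* tid and c agree: consistent snapshot */
			retries++;		/* migrated (or raced) in between: retry */
		}
	}

	atomic_store(&stop, true);
	pthread_join(&t, NULL);
	printf("took 1000000 snapshots, %lu retries\n", retries);
	return 0;
}

The point of the sketch is only that the retry on tid mismatch is what makes
disabling preemption (and the this_cpu_ptr() preemption check) unnecessary
around those two reads.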