Avoid using the page struct address on free by just doing an address
comparison. That is easily doable now that the page address is
available in the page struct and we already have the page struct
address of the object to be freed calculated.

Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2014-12-09 12:25:45.770405462 -0600
+++ linux/mm/slub.c	2014-12-09 12:25:45.766405582 -0600
@@ -2625,6 +2625,13 @@ slab_empty:
 	discard_slab(s, page);
 }
 
+static bool same_slab_page(struct kmem_cache *s, struct page *page, void *p)
+{
+	long d = p - page->address;
+
+	return d > 0 && d < (1 << MAX_ORDER) && d < (compound_order(page) << PAGE_SHIFT);
+}
+
 /*
  * Fastpath with forced inlining to produce a kfree and kmem_cache_free that
  * can perform fastpath freeing without additional function calls.
@@ -2658,7 +2665,7 @@ redo:
 	tid = c->tid;
 	preempt_enable();
 
-	if (likely(page == c->page)) {
+	if (likely(same_slab_page(s, page, c->freelist))) {
 		set_freepointer(s, object, c->freelist);
 
 		if (unlikely(!this_cpu_cmpxchg_double(
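For readers less familiar with the slub fastpath, the idea can be boiled
down to the stand-alone sketch below: an object belongs to a given slab
iff its offset from the slab's starting address falls within the span of
the compound page, so the free fastpath can decide "is this the per-cpu
slab?" with plain pointer arithmetic instead of comparing page struct
pointers. This is only an illustrative user-space demo under assumed 4K
pages; object_in_slab(), DEMO_PAGE_SHIFT and the values in main() are
made up for illustration and are not kernel API.

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define DEMO_PAGE_SHIFT	12			/* assume 4K pages */
	#define DEMO_PAGE_SIZE	(1UL << DEMO_PAGE_SHIFT)

	/*
	 * An object pointer p lies inside a slab that starts at slab_base
	 * and spans 2^order pages iff its offset from slab_base is within
	 * that span.  The unsigned subtraction also rejects p < slab_base,
	 * since the offset then wraps to a huge value.
	 */
	static bool object_in_slab(const void *slab_base, unsigned int order,
				   const void *p)
	{
		uintptr_t d = (uintptr_t)p - (uintptr_t)slab_base;

		return d < (DEMO_PAGE_SIZE << order);
	}

	int main(void)
	{
		/* pretend this is an order-1 (two page) slab */
		static char slab[2 * DEMO_PAGE_SIZE];

		printf("%d\n", object_in_slab(slab, 1, slab + 100));	/* 1: inside	*/
		printf("%d\n", object_in_slab(slab, 1,
					      slab + 3 * DEMO_PAGE_SIZE)); /* 0: past the end */
		return 0;
	}

The single unsigned comparison covers both bounds, which is why a range
check like this stays cheap enough for the free fastpath.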