On Mon, Dec 15, 2014 at 08:16:00AM -0600, Christoph Lameter wrote:
> On Mon, 15 Dec 2014, Joonsoo Kim wrote:
>
> > > +static bool same_slab_page(struct kmem_cache *s, struct page *page, void *p)
> > > +{
> > > +	long d = p - page->address;
> > > +
> > > +	return d > 0 && d < (1 << MAX_ORDER) && d < (compound_order(page) << PAGE_SHIFT);
> > > +}
> > > +
> >
> > Sometimes, compound_order() induces one more cacheline access, because
> > compound_order() accesses the second struct page in order to get the order.
> > Is there any way to remove this?
>
> I already have code there to avoid the access if it's within a MAX_ORDER
> page. We could probably go for a smaller setting there. PAGE_COSTLY_ORDER?

That avoids the compound_order() call when the object's slab doesn't match
the per-cpu slab. What I'm asking is whether there is a way to avoid the
compound_order() call when the object's slab does match the per-cpu slab.

Thanks.
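
For illustration only, here is a minimal, self-contained userspace sketch
(not the real SLUB code; the struct layout, sizes, and all toy_* names are
invented for the example) of the two points in this thread: compound_order()
costs an extra cacheline because the order of a compound page lives in the
first tail page, so reading it dereferences page[1]; and one conceivable way
to avoid that on the per-cpu fast path is to cache the slab's object range
next to the per-cpu slab pointer, so a membership check needs no tail-page
access at all.

/*
 * Toy model: struct page padded to a typical 64-byte cacheline, with the
 * compound order kept in the first tail page, as in the kernel.  Field
 * names and the per-cpu range cache are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

struct toy_page {
	void *address;		/* start of the slab's memory          */
	unsigned int order;	/* only meaningful in the tail page    */
	char pad[64 - sizeof(void *) - sizeof(unsigned int)];
};

/* Models compound_order(): reads page[1], i.e. a second cacheline. */
static unsigned int toy_compound_order(struct toy_page *page)
{
	return page[1].order;
}

/* Membership test that needs the order, like the quoted same_slab_page(). */
static bool toy_same_slab_page(struct toy_page *page, void *p)
{
	long d = (char *)p - (char *)page->address;

	return d >= 0 && d < (long)(PAGE_SIZE << toy_compound_order(page));
}

/*
 * Hypothetical per-cpu slab with a cached object range, so the hot path
 * can test membership without touching the tail page at all.
 */
struct toy_cpu_slab {
	struct toy_page *page;
	void *start;		/* cached page->address                   */
	void *end;		/* cached start + (PAGE_SIZE << order)    */
};

static bool toy_on_cpu_slab(struct toy_cpu_slab *c, void *p)
{
	return (char *)p >= (char *)c->start && (char *)p < (char *)c->end;
}

int main(void)
{
	static char slab_mem[2 * PAGE_SIZE];
	struct toy_page pages[2] = {
		{ .address = slab_mem },
		{ .order = 1 },		/* order stored in the tail page */
	};
	struct toy_cpu_slab c = {
		.page  = pages,
		.start = slab_mem,
		.end   = slab_mem + (PAGE_SIZE << 1),
	};
	void *obj = slab_mem + 100;

	printf("same_slab_page: %d\n", toy_same_slab_page(pages, obj));
	printf("on_cpu_slab:    %d\n", toy_on_cpu_slab(&c, obj));
	return 0;
}

The cached-range idea is just one way to phrase the question above; whether
it is worth an extra two pointers in kmem_cache_cpu is exactly the kind of
trade-off being discussed here.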