On Tue, 3 Dec 2013, Andrew Morton wrote:

> > 	page = alloc_slab_page(alloc_gfp, node, oo);
> > 	if (unlikely(!page)) {
> > 		oo = s->min;
>
> What is the value of s->min?  Please tell me it's zero.

It usually is.

> > @@ -1349,7 +1350,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> > 		&& !(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
> > 		int pages = 1 << oo_order(oo);
> >
> > -		kmemcheck_alloc_shadow(page, oo_order(oo), flags, node);
> > +		kmemcheck_alloc_shadow(page, oo_order(oo), alloc_gfp, node);
>
> That seems reasonable, assuming kmemcheck can handle the allocation
> failure.
>
> Still I dislike this practice of using unnecessarily large allocations.
> What does it gain us?  Slightly improved object packing density.
> Anything else?

The fastpath for slub works only within the bounds of a single slab page.
Therefore a larger frame increases the number of allocations possible
from the fastpath without having to use the slowpath, and it also reduces
the management overhead in the partial lists.

There is a kernel parameter that can be used to control the maximum order:

	slub_max_order

The default is PAGE_ALLOC_COSTLY_ORDER. See also Documentation/vm/slub.txt.
Booting with slub_max_order=1 will force order 0/1 pages.
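To make the packing-density and fastpath argument concrete, here is a small
standalone C sketch (not kernel code; the helper names below are made up for
illustration, the real calculation lives in calculate_order()/oo_make() in
mm/slub.c) showing how the number of objects per slab and the wasted tail
space change with the page order for an awkwardly sized object:

	/*
	 * Standalone sketch, not kernel code.  Models why a higher-order
	 * slab can pack more objects per slab (more fastpath allocations
	 * before the slowpath is needed) and waste fewer bytes per slab.
	 */
	#include <stdio.h>

	#define PAGE_SIZE 4096UL

	/* Objects that fit in a slab of 2^order pages for a given size. */
	static unsigned long objects_per_slab(unsigned int order,
					      unsigned long size)
	{
		return (PAGE_SIZE << order) / size;
	}

	/* Bytes left over at the end of that slab. */
	static unsigned long slab_waste(unsigned int order,
					unsigned long size)
	{
		return (PAGE_SIZE << order) % size;
	}

	int main(void)
	{
		unsigned long size = 1104;	/* example object size */
		unsigned int order;

		for (order = 0; order <= 3; order++)
			printf("order %u: %lu objects, %lu bytes wasted\n",
			       order, objects_per_slab(order, size),
			       slab_waste(order, size));
		return 0;
	}

For the example size, order 0 fits 3 objects with 784 bytes wasted, while
order 3 fits 29 objects with 752 bytes wasted, so each slab serves far more
fastpath allocations and the per-slab waste stays roughly constant.
slub_max_order simply caps the order this trade-off is allowed to reach.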