On Mon, 26 Jan 2015, Vladimir Davydov wrote:

> SLUB's kmem_cache_shrink not only removes empty slabs from the cache,
> but also sorts slabs by the number of objects in-use to cope with
> fragmentation. To achieve that, it tries to allocate a temporary array.
> If it fails, it will abort the whole procedure.

I do not think it's worth optimizing this. If we cannot allocate even a
small object then the system is in an extremely bad state anyway.

> @@ -3400,7 +3407,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
> 		 * list_lock. page->inuse here is the upper limit.
> 		 */
> 		list_for_each_entry_safe(page, t, &n->partial, lru) {
> -			list_move(&page->lru, slabs_by_inuse + page->inuse);
> +			if (page->inuse < objects)
> +				list_move(&page->lru,
> +					slabs_by_inuse + page->inuse);
> 			if (!page->inuse)
> 				n->nr_partial--;
> 		}

The condition is always true: a page with page->inuse == objects is a
full slab and would not be on the partial list, so page->inuse is always
a valid index into slabs_by_inuse.
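For readers unfamiliar with what __kmem_cache_shrink() does here, the
following is a minimal user-space sketch of the same idea: distribute
partial slabs into buckets indexed by their in-use count, free the empty
ones (bucket 0), and rebuild the list with mostly-full slabs first. The
struct and function names are simplified stand-ins, not the kernel's
real types, and a singly linked list replaces the kernel's list_head.

```c
#include <stdlib.h>

/* Simplified stand-in for the slab page descriptor. */
struct page {
	int inuse;		/* objects in use on this slab */
	struct page *next;	/* singly linked, unlike the kernel's list_head */
};

/*
 * Bucket-sort the partial list by inuse. Every page on the partial
 * list has inuse < objects (a full slab is taken off the list), so
 * the index is always in range -- the point made in the reply above.
 * Returns the rebuilt list: mostly-full slabs first, nearly-empty
 * slabs last; empty slabs (inuse == 0) are freed.
 */
static struct page *shrink_sort(struct page *head, int objects)
{
	struct page **bucket = calloc(objects, sizeof(*bucket));
	struct page *result = NULL;
	int i;

	/* Distribute pages into buckets keyed by inuse. */
	while (head) {
		struct page *p = head;

		head = head->next;
		p->next = bucket[p->inuse];
		bucket[p->inuse] = p;
	}

	/* Bucket 0 holds empty slabs; the kernel discards these. */
	while (bucket[0]) {
		struct page *p = bucket[0];

		bucket[0] = p->next;
		free(p);
	}

	/*
	 * Splice buckets back, lowest inuse pushed first, so the
	 * highest-inuse slabs end up at the front of the result.
	 */
	for (i = 1; i < objects; i++) {
		while (bucket[i]) {
			struct page *p = bucket[i];

			bucket[i] = p->next;
			p->next = result;
			result = p;
		}
	}
	free(bucket);
	return result;
}
```

With a partial list holding slabs of inuse {3, 1, 0, 2} and objects = 4,
the empty slab is freed and the rebuilt list reads 3, 2, 1.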