The patch titled
     SLUB: Fixes to kmem_cache_shrink()
has been removed from the -mm tree.  Its filename was
     slub-free-slabs-and-sort-partial-slab-lists-in-kmem_cache_shrink-fixes-to-kmem_cache_shrink.patch

This patch was dropped because it was folded into
slub-free-slabs-and-sort-partial-slab-lists-in-kmem_cache_shrink.patch

------------------------------------------------------
Subject: SLUB: Fixes to kmem_cache_shrink()
From: Christoph Lameter <clameter@xxxxxxx>

1. Reclaim all empty slabs even if we are below MIN_PARTIAL partial
   slabs.  The point here is to recover all possible memory.

2. Fix a race condition vs. slab_free.  If we want to free a slab then
   we need to acquire the slab lock, since slab_free may have freed an
   object and be waiting to acquire the lock to remove the slab.  We do
   a trylock.  If it is unsuccessful then we are racing with slab_free;
   simply keep the empty slab on the partial list.  slab_free will
   remove the slab as soon as we drop the list_lock.

3. Fix #2 may leave empty slabs in the slabs_by_inuse array, so make
   sure that we also splice in the zeroeth element.

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |   19 +++++++++++++------
 1 files changed, 13 insertions(+), 6 deletions(-)

diff -puN mm/slub.c~slub-free-slabs-and-sort-partial-slab-lists-in-kmem_cache_shrink-fixes-to-kmem_cache_shrink mm/slub.c
--- a/mm/slub.c~slub-free-slabs-and-sort-partial-slab-lists-in-kmem_cache_shrink-fixes-to-kmem_cache_shrink
+++ a/mm/slub.c
@@ -2237,7 +2237,7 @@ int kmem_cache_shrink(struct kmem_cache
 	for_each_online_node(node) {
 		n = get_node(s, node);

-		if (n->nr_partial <= MIN_PARTIAL)
+		if (!n->nr_partial)
 			continue;

 		for (i = 0; i < s->objects; i++)
@@ -2254,14 +2254,21 @@ int kmem_cache_shrink(struct kmem_cache
 		 * the upper limit.
 		 */
 		list_for_each_entry_safe(page, t, &n->partial, lru) {
-			if (!page->inuse) {
+			if (!page->inuse && slab_trylock(page)) {
+				/*
+				 * Must hold slab lock here because slab_free
+				 * may have freed the last object and be
+				 * waiting to release the slab.
+				 */
 				list_del(&page->lru);
 				n->nr_partial--;
+				slab_unlock(page);
 				discard_slab(s, page);
-			} else
-				if (n->nr_partial > MAX_PARTIAL)
-					list_move(&page->lru,
+			} else {
+				if (n->nr_partial > MAX_PARTIAL)
+					list_move(&page->lru,
 					slabs_by_inuse + page->inuse);
+			}
 		}

 		if (n->nr_partial <= MAX_PARTIAL)
@@ -2271,7 +2278,7 @@ int kmem_cache_shrink(struct kmem_cache
 		 * Rebuild the partial list with the slabs filled up
 		 * most first and the least used slabs at the end.
 		 */
-		for (i = s->objects - 1; i > 0; i--)
+		for (i = s->objects - 1; i >= 0; i--)
 			list_splice(slabs_by_inuse + i, n->partial.prev);

 out:
_
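Fixes #2 and #3 interact: an empty slab whose slab_trylock() fails stays
on the partial list and is binned into slabs_by_inuse[0], which is why
the rebuild loop must run down to index 0.  Below is a minimal userspace
sketch of that bucket-rebuild logic, not the kernel code: fake_page,
trylock_ok, push() and MAX_OBJECTS are stand-ins invented for the demo
(MAX_OBJECTS plays the role of s->objects), and the MAX_PARTIAL
threshold is left out for brevity.

/*
 * Userspace sketch of the slabs_by_inuse rebuild in
 * kmem_cache_shrink(); all names here are demo stand-ins.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_OBJECTS 4			/* stands in for s->objects */

struct fake_page {
	int inuse;			/* objects allocated from this slab */
	int trylock_ok;			/* 1 if slab_trylock() would succeed */
	struct fake_page *next;
};

static struct fake_page *push(struct fake_page *head, int inuse, int ok)
{
	struct fake_page *p = malloc(sizeof(*p));

	p->inuse = inuse;
	p->trylock_ok = ok;
	p->next = head;
	return p;
}

int main(void)
{
	struct fake_page *bucket[MAX_OBJECTS] = { NULL };
	struct fake_page *partial = NULL, *p, *t;
	int i;

	/* two empty slabs (one racing with a free) and two in use */
	partial = push(partial, 0, 1);	/* empty, lock free: discarded */
	partial = push(partial, 0, 0);	/* empty, racing: must be kept */
	partial = push(partial, 2, 1);
	partial = push(partial, 1, 1);

	for (p = partial; p; p = t) {
		t = p->next;
		if (!p->inuse && p->trylock_ok) {
			free(p);	/* discard_slab() analogue */
			continue;
		}
		/* kept slabs are binned by inuse; racing empties hit 0 */
		p->next = bucket[p->inuse];
		bucket[p->inuse] = p;
	}

	/* rebuild most-used first; i >= 0 so bucket 0 is not lost */
	for (i = MAX_OBJECTS - 1; i >= 0; i--)
		for (p = bucket[i]; p; p = p->next)
			printf("slab inuse=%d back on partial list\n",
			       p->inuse);
	return 0;
}

With the original "i > 0" bound the loop would skip bucket 0, and the
empty slab that lost the trylock race would drop off the partial list;
with "i >= 0" it is spliced back, so slab_free can dispose of it once
the list_lock is released.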
Patches currently in -mm which might be from clameter@xxxxxxx are

extend-print_symbol-capability.patch
slab-introduce-krealloc.patch
ia64-sn-xpc-convert-to-use-kthread-api-fix.patch
ia64-sn-xpc-convert-to-use-kthread-api-fix-2.patch
add-apply_to_page_range-which-applies-a-function-to-a-pte-range.patch
add-apply_to_page_range-which-applies-a-function-to-a-pte-range-fix.patch
safer-nr_node_ids-and-nr_node_ids-determination-and-initial.patch
use-zvc-counters-to-establish-exact-size-of-dirtyable-pages.patch
slab-ensure-cache_alloc_refill-terminates.patch
smaps-extract-pmd-walker-from-smaps-code.patch
smaps-add-pages-referenced-count-to-smaps.patch
smaps-add-clear_refs-file-to-clear-reference.patch
slab-use-num_possible_cpus-in-enable_cpucache.patch
i386-use-page-allocator-to-allocate-thread_info-structure.patch
slub-core.patch
make-page-private-usable-in-compound-pages-v1.patch
optimize-compound_head-by-avoiding-a-shared-page.patch
add-virt_to_head_page-and-consolidate-code-in-slab-and-slub.patch
slub-fix-object-tracking.patch
slub-enable-tracking-of-full-slabs.patch
slub-validation-of-slabs-metadata-and-guard-zones.patch
slub-add-min_partial.patch
slub-add-ability-to-list-alloc--free-callers-per-slab.patch
slub-free-slabs-and-sort-partial-slab-lists-in-kmem_cache_shrink.patch
slub-free-slabs-and-sort-partial-slab-lists-in-kmem_cache_shrink-fixes-to-kmem_cache_shrink.patch
slub-remove-object-activities-out-of-checking-functions.patch
slub-remove-object-activities-out-of-checking-functions-printk-cleanup-diagnostic-functions.patch
slub-user-documentation.patch
slub-user-documentation-fix.patch
slub-add-slabinfo-tool.patch
slub-add-slabinfo-tool-update-slabinfoc.patch
slub-major-slabinfo-update.patch
slub-slabinfo-remove-hackname.patch
slub-slabinfo-more-statistic-fixes-and-handling-fixes.patch
slub-exploit-page-mobility-to-increase-allocation-order.patch
slub-mm-only-make-slub-the-default-slab-allocator.patch
quicklists-for-page-table-pages.patch
quicklists-for-page-table-pages-avoid-useless-virt_to_page-conversion.patch
quicklists-for-page-table-pages-avoid-useless-virt_to_page-conversion-fix.patch
quicklist-support-for-ia64.patch
quicklist-support-for-x86_64.patch
quicklist-support-for-sparc64.patch
slab-allocators-remove-obsolete-slab_must_hwcache_align.patch
kmem_cache-simplify-slab-cache-creation.patch
slab-allocators-remove-slab_debug_initial-flag.patch
slab-allocators-remove-slab_debug_initial-flag-locks-fix.patch
slab-allocators-remove-multiple-alignment-specifications.patch
slab-allocators-remove-slab_ctor_atomic.patch
fault-injection-fix-failslab-with-config_numa.patch
mm-fix-handling-of-panic_on_oom-when-cpusets-are-in-use.patch
slub-i386-support.patch
slab-shutdown-cache_reaper-when-cpu-goes-down.patch
mm-implement-swap-prefetching.patch
revoke-core-code-slab-allocators-remove-slab_debug_initial-flag-revoke.patch
vmstat-use-our-own-timer-events.patch
readahead-state-based-method-aging-accounting.patch