- slub-fix-cpu-slab-flushing-behavior-so-that-counters-match.patch removed from -mm tree

The patch titled
     SLUB: fix cpu slab flushing behavior so that counters match
has been removed from the -mm tree.  Its filename was
     slub-fix-cpu-slab-flushing-behavior-so-that-counters-match.patch

This patch was dropped because it was folded into slub-core.patch

------------------------------------------------------
Subject: SLUB: fix cpu slab flushing behavior so that counters match
From: Christoph Lameter <clameter@xxxxxxx>

Currently we have a check for keventd in slab_alloc() and we only schedule
the event that checks for inactive slabs if keventd is up.  This was done
in the belief that later allocations would start up the checking for all
slabs.  However, that is not true for slab caches that only see
allocations during boot.  Those end up with per cpu slabs while
s->cpu_slabs stays zero.  As a result flush_all() will not flush the cpu
slabs of these slab caches, and slab validation will report counter
inconsistencies.

Fix that by removing the check for keventd from slab_alloc().  Instead we
set cpu_slabs to 1 during boot so that slab_alloc() believes that a check
is already scheduled and therefore will not try to schedule one via
keventd during early boot.

Later, when sysfs is brought up, we have to scan through the list of boot
caches anyway.  At that point we simply flush all active slabs, which
sets cpu_slabs back to zero.  Any new cpu slab created after sysfs init
will then trigger the scheduling of the inactive cpu slab check.
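
The protocol above can be summarized in a small self-contained userspace
model.  This is a sketch for illustration only; cache_model,
new_cpu_slab(), cache_open() and flush_all_model() are names invented for
this example, not kernel interfaces:

#include <stdio.h>

struct cache_model {
	int cpu_slabs;	/* nonzero: an inactive-slab check is pending */
	int sysfs_up;	/* stands in for slab_state >= SYSFS */
};

static void schedule_flusher(struct cache_model *s)
{
	/* Stands in for schedule_delayed_work(&s->flush, 30 * HZ). */
	(void)s;
	printf("flusher scheduled\n");
}

/* Models the have_slab: path in slab_alloc() after the patch:
 * schedule a check only if none is believed pending. */
static void new_cpu_slab(struct cache_model *s)
{
	if (!s->cpu_slabs) {
		s->cpu_slabs++;
		schedule_flusher(s);
	}
}

/* Models kmem_cache_open(): during boot, pretend a check is already
 * pending so keventd is never touched before it exists. */
static void cache_open(struct cache_model *s)
{
	s->cpu_slabs = s->sysfs_up ? 0 : 1;
}

/* Models flush_all() as called from slab_sysfs_init(): clearing the
 * counter re-arms the scheduling in new_cpu_slab(). */
static void flush_all_model(struct cache_model *s)
{
	s->cpu_slabs = 0;
}

int main(void)
{
	struct cache_model boot_cache = { 0, 0 };

	cache_open(&boot_cache);	/* cpu_slabs == 1, keventd "not up" */
	new_cpu_slab(&boot_cache);	/* boot allocation: nothing scheduled */

	flush_all_model(&boot_cache);	/* sysfs init flushes, cpu_slabs == 0 */
	new_cpu_slab(&boot_cache);	/* first post-sysfs cpu slab: scheduled */
	return 0;
}

The point of the trick is that a single counter doubles as a "check
pending" flag: initializing it to 1 makes the early-boot path a no-op
without needing any explicit keventd_up() test.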

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |   26 +++++++++++++++++++++++---
 1 files changed, 23 insertions(+), 3 deletions(-)

diff -puN mm/slub.c~slub-fix-cpu-slab-flushing-behavior-so-that-counters-match mm/slub.c
--- a/mm/slub.c~slub-fix-cpu-slab-flushing-behavior-so-that-counters-match
+++ a/mm/slub.c
@@ -65,7 +65,7 @@
  * SLUB assigns one slab for allocation to each processor.
  * Allocations only occur from these slabs called cpu slabs.
  *
- * If a cpu slab exists then a workqueue thread checks every 10
+ * If a cpu slab exists then a workqueue thread checks every 30
  * seconds if the cpu slab is still in use. The cpu slab is pushed back
  * to the list if inactive [only needed for SMP].
  *
@@ -1171,7 +1171,7 @@ have_slab:
 		SetPageActive(page);
 
 #ifdef CONFIG_SMP
-		if (!atomic_read(&s->cpu_slabs) && keventd_up()) {
+		if (!atomic_read(&s->cpu_slabs)) {
 			atomic_inc(&s->cpu_slabs);
 			schedule_delayed_work(&s->flush, 30 * HZ);
 		}
@@ -1683,7 +1683,20 @@ static int kmem_cache_open(struct kmem_c
 
 #ifdef CONFIG_SMP
 	mutex_init(&s->flushing);
-	atomic_set(&s->cpu_slabs, 0);
+	if (slab_state >= SYSFS)
+		atomic_set(&s->cpu_slabs, 0);
+	else
+		/*
+		 * Keventd may not be up yet. Pretend that we have active
+		 * per_cpu slabs so that there will be no attempt to
+		 * schedule a flusher in slab_alloc.
+		 *
+		 * We fix the situation up later when sysfs is brought up
+		 * by flushing all slabs (which puts the slab caches that
+		 * are mostly/only used during boot into a nice quiet state).
+		 */
+		atomic_set(&s->cpu_slabs, 1);
+
 	INIT_DELAYED_WORK(&s->flush, flusher);
 #endif
 	if (init_kmem_cache_nodes(s, gfpflags & ~SLUB_DMA))
@@ -2945,6 +2958,13 @@ int __init slab_sysfs_init(void)
 
 		err = sysfs_slab_add(s);
 		BUG_ON(err);
+		/*
+		 * Start the periodic checks for inactive cpu slabs.
+		 * flush_all() will zero s->cpu_slabs which will cause
+		 * any allocation of a new cpu slab to schedule an event
+		 * via keventd to watch for inactive cpu slabs.
+		 */
+		flush_all(s);
 	}
 
 	while (alias_list) {
_

Patches currently in -mm which might be from clameter@xxxxxxx are

slab-introduce-krealloc.patch
ia64-sn-xpc-convert-to-use-kthread-api-fix.patch
add-apply_to_page_range-which-applies-a-function-to-a-pte-range.patch
safer-nr_node_ids-and-nr_node_ids-determination-and-initial.patch
use-zvc-counters-to-establish-exact-size-of-dirtyable-pages.patch
slab-ensure-cache_alloc_refill-terminates.patch
smaps-extract-pmd-walker-from-smaps-code.patch
smaps-add-pages-referenced-count-to-smaps.patch
smaps-add-clear_refs-file-to-clear-reference.patch
slab-use-num_possible_cpus-in-enable_cpucache.patch
extend-print_symbol-capability.patch
i386-use-page-allocator-to-allocate-thread_info-structure.patch
slub-core.patch
slub-fix-cpu-slab-flushing-behavior-so-that-counters-match.patch
slub-extract-finish_bootstrap-function-for-clean-sysfs-boot.patch
slub-core-fix-kmem_cache_destroy.patch
slub-core-fix-validation.patch
slub-core-add-after-object-padding.patch
slub-core-resiliency-fixups.patch
slub-core-resiliency-fixups-fix.patch
slub-core-resiliency-test.patch
slub-core-update-cpu-after-new_slab.patch
slub-core-fix-sysfs-directory-handling.patch
slub-core-conform-more-to-slabs-slab_hwcache_align-behavior.patch
slub-core-reduce-the-order-of-allocations-to-avoid-fragmentation.patch
make-page-private-usable-in-compound-pages-v1.patch
make-page-private-usable-in-compound-pages-v1-hugetlb-fix.patch
optimize-compound_head-by-avoiding-a-shared-page.patch
add-virt_to_head_page-and-consolidate-code-in-slab-and-slub.patch
slub-fix-object-tracking.patch
slub-enable-tracking-of-full-slabs.patch
slub-enable-tracking-of-full-slabs-fix.patch
slub-enable-tracking-of-full-slabs-add-checks-for-interrupts-disabled.patch
slub-validation-of-slabs-metadata-and-guard-zones.patch
slub-validation-of-slabs-metadata-and-guard-zones-fix-pageerror-checks-during-validation.patch
slub-validation-of-slabs-metadata-and-guard-zones-remove-duplicate-vm_bug_on.patch
slub-add-min_partial.patch
slub-add-ability-to-list-alloc--free-callers-per-slab.patch
slub-add-ability-to-list-alloc--free-callers-per-slab-tidy.patch
slub-free-slabs-and-sort-partial-slab-lists-in-kmem_cache_shrink.patch
slub-remove-object-activities-out-of-checking-functions.patch
slub-user-documentation.patch
slub-user-documentation-fix.patch
slub-add-slabinfo-tool.patch
slub-add-slabinfo-tool-update-slabinfoc.patch
slub-major-slabinfo-update.patch
slub-exploit-page-mobility-to-increase-allocation-order.patch
slub-mm-only-make-slub-the-default-slab-allocator.patch
quicklists-for-page-table-pages.patch
quicklists-for-page-table-pages-avoid-useless-virt_to_page-conversion.patch
quicklists-for-page-table-pages-avoid-useless-virt_to_page-conversion-fix.patch
quicklist-support-for-ia64.patch
quicklist-support-for-x86_64.patch
quicklist-support-for-sparc64.patch
slab-allocators-remove-obsolete-slab_must_hwcache_align.patch
kmem_cache-simplify-slab-cache-creation.patch
slab-allocators-remove-slab_debug_initial-flag.patch
slab-allocators-remove-slab_debug_initial-flag-locks-fix.patch
slab-allocators-remove-multiple-alignment-specifications.patch
slab-allocators-remove-slab_ctor_atomic.patch
fault-injection-fix-failslab-with-config_numa.patch
mm-fix-handling-of-panic_on_oom-when-cpusets-are-in-use.patch
slub-i386-support.patch
slab-shutdown-cache_reaper-when-cpu-goes-down.patch
mm-implement-swap-prefetching.patch
revoke-core-code-slab-allocators-remove-slab_debug_initial-flag-revoke.patch
readahead-state-based-method-aging-accounting.patch

