More fixes for kmem on slabs

More testing revealed a machine in our stable that either failed to
initialize the kmem slab cache subsystem:

please wait... (gathering kmem slab cache data)
crash-6.0.3: page excluded: kernel virtual address: ffff8801263d6000  type: "kmem_cache buffer"

crash-6.0.3: unable to initialize kmem slab cache subsystem

Or initialized successfully and then failed on a kmem -s command:

crash-6.0.3> kmem -s
CACHE            NAME                 OBJSIZE  ALLOCATED     TOTAL  SLABS  SSIZE
Segmentation fault


The problem is that the array[] of array_cache pointers at the end of
struct kmem_cache is still declared with 32 elements, but every
dynamically allocated copy is actually trimmed down to nr_cpu_ids
elements.

crash-6.0.3.best> struct kmem_cache
struct kmem_cache {
    unsigned int batchcount;
...

    struct list_head next;
    struct kmem_list3 **nodelists;
    struct array_cache *array[32];
}
SIZE: 368


On my normal play machine, nr_cpu_ids = 32 and actual cpus = 16.

On the failing machine, nr_cpu_ids and actual cpus are both 2.
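
As a back-of-the-envelope check (my arithmetic, assuming 8-byte pointers
and no tail padding after array[]): with the reported SIZE of 368 and
array[32] as the last member, array[] starts at offset 368 - 32*8 = 112,
so the real, trimmed copy on the 2-cpu machine is only 112 + 2*8 = 128
bytes.  A throwaway sketch of that arithmetic:

/* Throwaway sketch; assumes 8-byte pointers and no tail padding
 * after array[] in the declared 368-byte structure. */
#include <stdio.h>

int main(void)
{
        unsigned long declared_size = 368;  /* SIZE reported by crash     */
        unsigned long declared_cpus = 32;   /* array[NR_CPUS] as compiled */
        unsigned long ptr_size      = 8;    /* x86_64 pointer             */

        unsigned long array_offset = declared_size - declared_cpus * ptr_size;

        printf("array[] offset:             %lu\n", array_offset);               /* 112 */
        printf("real size for nr_cpu_ids=2: %lu\n", array_offset + 2 * ptr_size); /* 128 */
        return 0;
}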

Two problems occur:

1)  max_cpudata_limit traverses the array until it finds a NULL pointer
or reaches the declared array length.  On the 2-cpu system, the "third"
element in the array belonged to something else, was non-zero, and
pointed to data that made the apparent limit 0xffffffffffff8801, which
didn't work well as a length in a memory copy.

2) kmem_cache structs can be allocated close enough to the end of a page
that the old, too-large length crosses the page boundary, even though the
real, smaller structure fits within the page (see the sketch below).  That
caused a readmem of the structure to cross into a coincidentally missing
page in the dump.
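
To make 2) concrete, here is a toy illustration with hypothetical numbers
(4KB pages, the 128-byte trimmed size worked out above, and a made-up
page offset of 0xf00): the trimmed structure fits in its page, but a read
of the declared 368 bytes runs into the next page.

/* Toy illustration of problem 2 -- the page offset is hypothetical;
 * only the 368-byte declared size and the 128-byte trimmed size come
 * from the real case above. */
#include <stdio.h>

int main(void)
{
        unsigned long page_size     = 4096;
        unsigned long start_offset  = 0xf00;  /* struct begins 0x100 bytes before page end */
        unsigned long trimmed_size  = 128;    /* offset of array[] + 2 pointers            */
        unsigned long declared_size = 368;    /* sizeof(struct kmem_cache) per debuginfo   */

        printf("trimmed read ends at  0x%lx %s\n", start_offset + trimmed_size,
               start_offset + trimmed_size <= page_size ? "(same page)" : "(crosses the page)");
        printf("declared read ends at 0x%lx %s\n", start_offset + declared_size,
               start_offset + declared_size <= page_size ? "(same page)" : "(crosses the page)");
        return 0;
}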

This patch fixes both of those (after wrestling ARRAY_LENGTH to the
ground), but *does not* fix the similar page crossing problem when I try
to use a "struct kmem_cache" command on the particular structure at the
end of the page.

Reference this unfortunate comment in include/linux/slab_def.h:

/* 6) per-cpu/per-node data, touched during every alloc/free */
        /*
         * We put array[] at the end of kmem_cache, because we want to size
         * this array to nr_cpu_ids slots instead of NR_CPUS
         * (see kmem_cache_init())
         * We still use [NR_CPUS] and not [1] or [0] because cache_cache
         * is statically defined, so we reserve the max number of cpus.
         */
        struct kmem_list3 **nodelists;
        struct array_cache *array[NR_CPUS];
        /*
         * Do not add fields after array[]
         */
};
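
In other words, for these kernels the only size that can be trusted is
the offset of array[] plus one pointer per possible cpu, which is what
the patch below feeds into ASSIGN_SIZE().  A hedged sketch of that
computation as a standalone helper (the names are mine, not crash's):

/* Sketch only -- mirrors the ASSIGN_SIZE() arithmetic in the patch;
 * the function and parameter names are hypothetical. */
#include <stddef.h>

static size_t kmem_cache_effective_size(size_t array_member_offset,
                                        int nr_cpu_ids)
{
        /* one array_cache pointer per possible cpu, nothing after array[] */
        return array_member_offset + (size_t)nr_cpu_ids * sizeof(void *);
}

For the failing machine this works out to 112 + 2*8 = 128 bytes, matching
the arithmetic above.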

Bob Montgomery

--- memory.c.orig	2012-02-08 11:38:26.000000000 -0700
+++ memory.c	2012-02-08 16:08:18.000000000 -0700
@@ -7806,6 +7806,7 @@ vaddr_to_slab(ulong vaddr)
 char slab_hdr[100] = { 0 };
 char kmem_cache_hdr[100] = { 0 };
 char free_inuse_hdr[100] = { 0 };
+static int kmem_cache_nr_cpu = 0;
 
 static void
 kmem_cache_init(void)
@@ -7979,12 +7980,14 @@ kmem_cache_downsize(void)
 	int nr_node_ids;
 
 	if ((THIS_KERNEL_VERSION < LINUX(2,6,22)) ||
-	    (vt->flags & NODELISTS_IS_PTR) ||
 	    !(vt->flags & PERCPU_KMALLOC_V2_NODES) ||
 	    !kernel_symbol_exists("cache_cache") ||
 	    !MEMBER_EXISTS("kmem_cache", "buffer_size"))
 		return;
 
+	if (vt->flags & NODELISTS_IS_PTR) 
+		goto kmem_cache_s_nodelists_is_ptr;
+
 	cache_buf = GETBUF(SIZE(kmem_cache_s));
 
 	if (!readmem(symbol_value("cache_cache"), KVADDR, cache_buf, 
@@ -8026,6 +8029,21 @@ kmem_cache_downsize(void)
 	}
 
 	FREEBUF(cache_buf);
+	return;
+
+kmem_cache_s_nodelists_is_ptr:
+	/* struct array_cache array is actually sized by number of cpus */
+	/* real value is nr_cpu_ids, but fallback is kt->cpus */
+
+	if (symbol_exists("nr_cpu_ids"))
+		get_symbol_data("nr_cpu_ids", sizeof(int), &kmem_cache_nr_cpu);
+	else 
+		kmem_cache_nr_cpu = kt->cpus;
+	
+	ARRAY_LENGTH(kmem_cache_s_array) = kmem_cache_nr_cpu;
+	ASSIGN_SIZE(kmem_cache_s) = OFFSET(kmem_cache_s_array) +
+			sizeof(ulong) * kmem_cache_nr_cpu;
+
 }
 
 
@@ -8117,8 +8135,9 @@ kmem_cache_s_array_nodes:
             "array cache array", RETURN_ON_ERROR))
 		goto bail_out;
 
-	for (i = max_limit = 0; (i < ARRAY_LENGTH(kmem_cache_s_array)) && 
-	     cpudata[i]; i++) {
+	for (i = max_limit = 0; (i < kmem_cache_nr_cpu) 
+			&& (i < ARRAY_LENGTH(kmem_cache_s_array)) 
+			&& cpudata[i]; i++) {
                 if (!readmem(cpudata[i]+OFFSET(array_cache_limit),
                     KVADDR, &limit, sizeof(int),
                     "array cache limit", RETURN_ON_ERROR))