----- Original Message -----
> In include/linux/slab_def.h circa linux 3.0, this def for field
> nodelists:
>
> struct kmem_cache {
> /* 1) per-cpu data, touched during every alloc/free */
>     struct array_cache *array[NR_CPUS];
>
>     ...
>
>     struct kmem_list3 *nodelists[MAX_NUMNODES];
>     /*
>      * Do not add fields after nodelists[]
>      */
> };
>
> Became this in 3.1:
>
> struct kmem_cache {
>     ...
>
> /* 6) per-cpu/per-node data, touched during every alloc/free */
>     /*
>      * We put array[] at the end of kmem_cache, because we want to size
>      * this array to nr_cpu_ids slots instead of NR_CPUS
>      * (see kmem_cache_init())
>      * We still use [NR_CPUS] and not [1] or [0] because cache_cache
>      * is statically defined, so we reserve the max number of cpus.
>      */
>     struct kmem_list3 **nodelists;
>     struct array_cache *array[NR_CPUS];
>     /*
>      * Do not add fields after array[]
>      */
> };
>
> Which causes this in crash/memory.c:vm_init()
>
>     ARRAY_LENGTH_INIT(vt->kmem_cache_len_nodes, NULL,
>         "kmem_cache.nodelists", NULL, 0);
>
> to set vt->kmem_cache_len_nodes to 0, and leads to the initialization
> failure when max_cpudata_limit calls getbuf with a size of 0.
>
> Got a fix in the works yet?
>
> Thanks,
> Bob Montgomery

No, afraid not.  Fedora uses slub instead of slab, so I haven't
noticed it.  I wonder why kmem_cache_downsize() doesn't recalculate
vt->kmem_cache_len_nodes based upon "nr_node_ids"?:

    if (buffer_size < SIZE(kmem_cache_s)) {
        if (kernel_symbol_exists("nr_node_ids")) {
            get_symbol_data("nr_node_ids", sizeof(int),
                &nr_node_ids);
            vt->kmem_cache_len_nodes = nr_node_ids;
        } else
            vt->kmem_cache_len_nodes = 1;

Dave

--
Crash-utility mailing list
Crash-utility@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/crash-utility
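
For what it's worth, here is a minimal sketch (untested, not a committed
fix) of how the same "nr_node_ids" fallback from kmem_cache_downsize()
could be applied where the zero length actually bites, i.e. right after
the ARRAY_LENGTH_INIT() probe in vm_init(); it only reuses the
kernel_symbol_exists() and get_symbol_data() calls quoted above:

    ARRAY_LENGTH_INIT(vt->kmem_cache_len_nodes, NULL,
        "kmem_cache.nodelists", NULL, 0);

    /*
     * Sketch only: when "kmem_cache.nodelists" is a pointer rather
     * than an array (3.1 and later), the probe above leaves the
     * length at 0, so fall back to the kernel's nr_node_ids count,
     * the same way kmem_cache_downsize() does.
     */
    if (vt->kmem_cache_len_nodes == 0) {
        int nr_node_ids;

        if (kernel_symbol_exists("nr_node_ids")) {
            get_symbol_data("nr_node_ids", sizeof(int), &nr_node_ids);
            vt->kmem_cache_len_nodes = nr_node_ids;
        } else
            vt->kmem_cache_len_nodes = 1;
    }

Whether vm_init() or kmem_cache_downsize() is the right place for that
recalculation is exactly the open question in the exchange above.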