Two patches against slub to allow deconfiguring cpu_partial support. The first one is a bug fix (Pekka, please pick this one up, or use Joonsoo's earlier one).

Subject: slub: Fix object counts in acquire_slab

It seems that we were over-allocating objects from the slab queues, since get_partial_node() assumed that page->inuse was left undisturbed by acquire_slab(). Save the number of in-use objects in page->lru.next in acquire_slab() and pass it to get_partial_node() that way. I have a vague memory that Joonsoo also ran into this issue a while back.

Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2013-03-28 12:14:26.958358688 -0500
+++ linux/mm/slub.c	2013-03-28 12:16:57.240785613 -0500
@@ -1498,6 +1498,7 @@ static inline void *acquire_slab(struct
 	void *freelist;
 	unsigned long counters;
 	struct page new;
+	unsigned long objects;
 
 	/*
 	 * Zap the freelist and set the frozen bit.
@@ -1507,6 +1508,7 @@ static inline void *acquire_slab(struct
 	freelist = page->freelist;
 	counters = page->counters;
 	new.counters = counters;
+	objects = page->inuse;
 	if (mode) {
 		new.inuse = page->objects;
 		new.freelist = NULL;
@@ -1524,6 +1526,7 @@ static inline void *acquire_slab(struct
 		return NULL;
 
 	remove_partial(n, page);
+	page->lru.next = (void *)objects;
 	WARN_ON(!freelist);
 	return freelist;
 }
@@ -1565,7 +1568,7 @@ static void *get_partial_node(struct kme
 			c->page = page;
 			stat(s, ALLOC_FROM_PARTIAL);
 			object = t;
-			available = page->objects - page->inuse;
+			available = page->objects - (unsigned long)page->lru.next;
 		} else {
 			available = put_cpu_partial(s, page, 0);
 			stat(s, CPU_PARTIAL_NODE);