On Tue, 2013-04-02 at 09:42 +0900, Joonsoo Kim wrote:
> >
> > Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>
> >
> > Index: linux/mm/slub.c
> > ===================================================================
> > --- linux.orig/mm/slub.c	2013-03-28 12:14:26.958358688 -0500
> > +++ linux/mm/slub.c	2013-04-01 10:23:24.677584499 -0500
> > @@ -1498,6 +1498,7 @@ static inline void *acquire_slab(struct
> >  	void *freelist;
> >  	unsigned long counters;
> >  	struct page new;
> > +	unsigned long objects;
> >
> >  	/*
> >  	 * Zap the freelist and set the frozen bit.
> > @@ -1507,6 +1508,7 @@ static inline void *acquire_slab(struct
> >  	freelist = page->freelist;
> >  	counters = page->counters;
> >  	new.counters = counters;
> > +	objects = page->inuse;
> >  	if (mode) {
> >  		new.inuse = page->objects;
> >  		new.freelist = NULL;
> > @@ -1524,6 +1526,7 @@ static inline void *acquire_slab(struct
> >  		return NULL;
> >
> >  	remove_partial(n, page);
> > +	page->lru.next = (void *)objects;
> >  	WARN_ON(!freelist);
> >  	return freelist;
> > }
>
> Good. I like your method, which uses lru.next in order to hand over
> the number of objects.

I hate it ;-)

It just seems to be something that's not very robust and can cause
hours of debugging in the future. I mean, there's not even a comment
explaining what is happening. The lru is in a union with other slub
partial structs, which is not very obvious. If something is out of
order, it can easily break, and there's nothing here that points to
why.

Just pass the damn objects pointer by reference and use that. It's
easy to understand and read, and it is robust.

-- Steve

>
> > @@ -1565,7 +1568,7 @@ static void *get_partial_node(struct kme
> >  		c->page = page;
> >  		stat(s, ALLOC_FROM_PARTIAL);
> >  		object = t;
> > -		available = page->objects - page->inuse;
> > +		available = page->objects - (unsigned long)page->lru.next;
> >  	} else {
> >  		available = put_cpu_partial(s, page, 0);
> >  		stat(s, CPU_PARTIAL_NODE);
>
--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
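
[Editor's sketch] For readers following the thread, below is a minimal,
stand-alone C sketch of the pass-by-reference alternative Steve is
arguing for above: the slab-acquiring function reports the in-use count
through an explicit out-parameter instead of stashing it in
page->lru.next. The struct fake_page, acquire_slab_sketch() and the
literal values are illustrative assumptions for demonstration only;
this is not the actual SLUB code, nor the patch under review.

    /*
     * Stand-alone model (NOT kernel code) of handing a count back to
     * the caller through an out-parameter rather than smuggling it
     * through an unrelated pointer field.
     */
    #include <stdio.h>

    struct fake_page {
    	unsigned long objects;	/* total objects in the slab    */
    	unsigned long inuse;	/* objects currently allocated  */
    	void *freelist;		/* first free object, or NULL   */
    };

    /*
     * Return the freelist and, via *objects, the number of objects
     * that were in use when the slab was acquired.
     */
    static void *acquire_slab_sketch(struct fake_page *page,
    				 unsigned long *objects)
    {
    	*objects = page->inuse;		/* explicit and easy to grep   */
    	return page->freelist;		/* caller owns the freelist now */
    }

    int main(void)
    {
    	int dummy;
    	struct fake_page page = { .objects = 32, .inuse = 5,
    				  .freelist = &dummy };
    	unsigned long inuse;

    	void *freelist = acquire_slab_sketch(&page, &inuse);

    	/* Caller computes availability without touching lru.next. */
    	unsigned long available = page.objects - inuse;

    	printf("freelist=%p available=%lu\n", freelist, available);
    	return 0;
    }

The point, in Steve's terms, is that the handoff is visible in the
function signature and at the call site, rather than depending on
which member of a union inside struct page happens to be live at that
moment.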