On Tue, Jun 24, 2014 at 04:44:01PM +0900, Joonsoo Kim wrote:
> min_partial means the minimum number of slabs cached on the node
> partial list. So, if nr_partial is less than it, we keep a newly empty
> slab on the node partial list rather than freeing it. But if
> nr_partial is equal to or greater than it, we have enough partial
> slabs, so the newly empty slab should be freed. The current
> implementation misses the equal case, so if min_partial is set to 0,
> at least one slab can stay cached. This is a critical problem for the
> kmemcg destruction logic, because it doesn't work properly if any
> slabs are cached. This patch fixes the problem.

Oops, my fault :-( Thank you for catching this!

> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>

Acked-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>

>
> diff --git a/mm/slub.c b/mm/slub.c
> index c567927..67da14d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1851,7 +1851,7 @@ redo:
>
>  	new.frozen = 0;
>
> -	if (!new.inuse && n->nr_partial > s->min_partial)
> +	if (!new.inuse && n->nr_partial >= s->min_partial)
>  		m = M_FREE;
>  	else if (new.freelist) {
>  		m = M_PARTIAL;
> @@ -1962,7 +1962,7 @@ static void unfreeze_partials(struct kmem_cache *s,
>  					new.freelist, new.counters,
>  					"unfreezing slab"));
>
> -		if (unlikely(!new.inuse && n->nr_partial > s->min_partial)) {
> +		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
>  			page->next = discard_page;
>  			discard_page = page;
>  		} else {
> @@ -2595,7 +2595,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>  		return;
>  	}
>
> -	if (unlikely(!new.inuse && n->nr_partial > s->min_partial))
> +	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
>  		goto slab_empty;
>
>  	/*
> --
> 1.7.9.5