On Tue, Jun 24, 2014 at 04:50:11PM +0900, Joonsoo Kim wrote:
> On Fri, Jun 13, 2014 at 12:38:21AM +0400, Vladimir Davydov wrote:
> > @@ -3409,6 +3417,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
> >  		kmalloc(sizeof(struct list_head) * objects, GFP_KERNEL);
> >  	unsigned long flags;
> >
> > +	if (memcg_cache_dead(s))
> > +		s->min_partial = 0;
> > +
> >  	if (!slabs_by_inuse) {
> >  		/*
> >  		 * Do not fail shrinking empty slabs if allocation of the
>
> I think that you should move down n->nr_partial test after holding the
> lock in __kmem_cache_shrink(). Access to n->nr_partial without node lock
> is racy and you can see wrong value. It results in skipping to free empty
> slab so your destroying logic could fail.

You're right! Will fix this.

And there seems to be the same problem in SLAB, where we check the
node->slabs_free list for emptiness without holding node->list_lock (see
drain_freelist) while it can be modified concurrently by free_block. This
will be fixed automatically once we make __kmem_cache_shrink reset
node->free_limit (which must be done under the lock), though.

Thank you!
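
P.S. For illustration only, here is a rough sketch (not the actual patch)
of what the per-node loop in __kmem_cache_shrink() could look like with
the n->nr_partial test moved under n->list_lock. The bucketing of partial
slabs is abbreviated, and the loop/helper names (for_each_node_state,
get_node, discard_slab, slabs_by_inuse) are assumed from the SLUB code of
that time:

	for_each_node_state(node, N_NORMAL_MEMORY) {
		struct kmem_cache_node *n = get_node(s, node);

		for (i = 0; i < objects; i++)
			INIT_LIST_HEAD(slabs_by_inuse + i);

		spin_lock_irqsave(&n->list_lock, flags);

		/*
		 * Test nr_partial only while holding list_lock: without
		 * the lock a concurrent free could add a partial slab
		 * right after we saw the count as zero, and a dead memcg
		 * cache would then never release that slab.
		 */
		if (!n->nr_partial) {
			spin_unlock_irqrestore(&n->list_lock, flags);
			continue;
		}

		/*
		 * ... bucket partial slabs by page->inuse and rebuild
		 * n->partial with the fullest slabs first ...
		 */

		spin_unlock_irqrestore(&n->list_lock, flags);

		/* Release the slabs that turned out to be empty. */
		list_for_each_entry_safe(page, t, slabs_by_inuse, lru)
			discard_slab(s, page);
	}

Taking the lock even on nodes that currently look empty costs a little on
the shrink path, but shrinking is rare, and it closes the window against
concurrent frees repopulating the partial list unnoticed.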