On Tue, 13 May 2014, Vladimir Davydov wrote:

> Currently full slabs are only kept on per-node lists for debugging, but
> we need this feature to reparent per memcg caches, so let's enable it
> for them too.

That will significantly impact the fastpaths for alloc and free. It is
also a pretty significant change to the logic of the fastpaths, since
they were not designed to handle the full lists. In debug mode all
operations were performed only by the slow paths, and so far only the
slow paths supported tracking full slabs.

> @@ -2587,6 +2610,9 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>
>  	} else { /* Needs to be taken off a list */
>
> +		if (kmem_cache_has_cpu_partial(s) && !prior)
> +			new.frozen = 1;
> +
>  		n = get_node(s, page_to_nid(page));

Make this code conditional?

>  		/*
>  		 * Speculatively acquire the list_lock.
> @@ -2606,6 +2632,12 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>  		object, new.counters,
>  		"__slab_free"));
>
> +	if (unlikely(n) && new.frozen && !was_frozen) {
> +		remove_full(s, n, page);
> +		spin_unlock_irqrestore(&n->list_lock, flags);
> +		n = NULL;
> +	}
> +
>  	if (likely(!n)) {

Here too.