On 04/18/2014 05:23 PM, Johannes Weiner wrote:
>> First, it removes async per memcg cache destruction (see patches 1, 2).
>> Now caches are only destroyed on memcg offline. That means the caches
>> that are not empty on memcg offline will be leaked. However, they are
>> already leaked, because memcg_cache_params::nr_pages normally never
>> drops to 0, so the destruction work is never scheduled unless
>> kmem_cache_shrink is called explicitly. In the future I'm planning to
>> reap such dead caches on vmpressure or periodically.
>
> I like the synchronous handling on css destruction, but the periodic
> reaping part still bothers me. If there is absolutely 0 use for these
> caches remaining, they shouldn't hang around until we encounter memory
> pressure or a random time interval.

Agreed.

> Would it be feasible to implement cache merging in both slub and slab,
> so that upon css destruction the child's cache's remaining slabs could
> be moved to the parent's cache? If the parent doesn't have one, just
> reparent the whole cache.

Interesting idea. That would definitely look neater than periodic
reaping. But I guess it won't be easy to do, because synchronization in
sl[au]b is a subtle thing. I'll take a closer look at slab's internals
to understand whether it's feasible.

>
>> Second, it replaces the per memcg slab_caches_mutex's with the global
>> memcg_slab_mutex, which should be taken during the whole per memcg cache
>> creation/destruction path before the slab_mutex (see patch 3). This
>> greatly simplifies synchronization among the various per memcg cache
>> creation/destruction paths.
>
> This sounds reasonable. I'll go look at the code.

Thank you!