On Wed, Sep 28, 2016 at 11:09:53AM +0300, Vladimir Davydov wrote:
> On Tue, Sep 27, 2016 at 10:03:47PM -0400, Johannes Weiner wrote:
> > [CC Vladimir]
> >
> > These are the delayed memcg cache allocations, where in a fresh memcg
> > that doesn't have per-memcg caches yet, every accounted allocation
> > schedules a kmalloc work item in __memcg_schedule_kmem_cache_create()
> > until the cache is finally available. It looks like those can be many
> > more than the number of slab caches in existence, if there is a storm
> > of slab allocations before the workers get a chance to run.
> >
> > Vladimir, what do you think of embedding the work item into the
> > memcg_cache_array? That way we make sure we have exactly one work per
> > cache and not an unbounded number of them. The downside of course is
> > that we'd have to keep these things around as long as the memcg is in
> > existence, but that's the only place I can think of that allows us to
> > serialize this.
>
> We could set the entry of the root_cache->memcg_params.memcg_caches
> array corresponding to the cache being created to a special value, say
> (void*)1, and skip scheduling cache creation work on kmalloc if the
> caller sees it. I'm not sure it's really worth it though, because
> work_struct isn't that big (at least, in comparison with the cache
> itself) to avoid embedding it at all costs.

Hello, Johannes and Vladimir.

I'm not familiar with memcg, so I have a question about this solution.

This solution will solve the current issue, but if a burst of memcg
creation happens, a similar issue would happen again. Is my
understanding correct?

I also think that the other cause of the problem is that we call
synchronize_sched(), which is rather slow, while holding slab_mutex,
and that blocks further kmem_cache creation. Should we fix that, too?

Thanks.
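
P.S. To check my understanding of the (void *)1 approach, here is a
rough, untested sketch of what I think is being proposed. The
CACHE_CREATE_PENDING name, the cmpxchg, and the exact lookup path are
just my guesses for illustration, not the actual mm code:

/*
 * Untested sketch of the "(void *)1" idea as I read it; names and
 * the lookup path are simplified, not real mm/memcontrol.c code.
 */
#define CACHE_CREATE_PENDING	((struct kmem_cache *)1)

static void memcg_maybe_schedule_cache_create(struct mem_cgroup *memcg,
					      struct kmem_cache *root_cache,
					      int idx)
{
	struct kmem_cache **slot =
		&root_cache->memcg_params.memcg_caches->entries[idx];

	/*
	 * Claim the slot atomically; only the first caller that sees
	 * it empty queues the creation work, everyone else falls back
	 * to the root cache as before.
	 */
	if (cmpxchg(slot, NULL, CACHE_CREATE_PENDING) != NULL)
		return;

	__memcg_schedule_kmem_cache_create(memcg, root_cache);
}

If that is roughly the idea, a storm of accounted allocations would
queue at most one creation work per cache, though the pending entry
would have to be cleared again if the creation work fails, and burst
memcg creation would still queue one work per memcg per cache, which
is what my question above is about.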