On Thu, 2023-01-26 at 15:19 -0300, Marcelo Tosatti wrote:
> On Wed, Jan 25, 2023 at 03:14:48PM -0800, Roman Gushchin wrote:
> > On Wed, Jan 25, 2023 at 03:22:00PM -0300, Marcelo Tosatti wrote:
> > > On Wed, Jan 25, 2023 at 08:06:46AM -0300, Leonardo Brás wrote:
> > > > On Wed, 2023-01-25 at 09:33 +0100, Michal Hocko wrote:
> > > > > On Wed 25-01-23 04:34:57, Leonardo Bras wrote:
> > > > > > Disclaimer:
> > > > > > a - The cover letter got bigger than expected, so I had to
> > > > > >     split it in sections to better organize myself. I am not
> > > > > >     very comfortable with it.
> > > > > > b - Performance numbers below did not include patch 5/5
> > > > > >     (Remove flags from memcg_stock_pcp), which could further
> > > > > >     improve performance for drain_all_stock(), but I could
> > > > > >     only notice the optimization at the last minute.
> > > > > >
> > > > > > 0 - Motivation:
> > > > > > On the current codebase, when drain_all_stock() is run, it
> > > > > > schedules a drain_local_stock() for each cpu that has a
> > > > > > percpu stock associated with a descendant of a given
> > > > > > root_memcg.
> >
> > Do you know what caused those drain_all_stock() calls? I wonder if we
> > should look into why we have so many of them and whether we really
> > need them.
> >
> > It's either some user action (e.g. reducing memory.max) or some memcg
> > entering pre-oom conditions. In the latter case a lot of drain calls
> > can be scheduled without a good reason (assuming the cgroup contains
> > multiple tasks running on multiple cpus). Essentially each cpu will
> > try to grab the remains of the memory quota and move it locally. I
> > wonder in such circumstances if we need to disable the pcp-caching on
> > a per-cgroup basis.
> >
> > Generally speaking, draining of pcpu stocks is useful only if an idle
> > cpu is holding some charges/memcg references (it might be not
> > completely idle, but running some very special workload which is not
> > doing any kernel allocations, or running a process belonging to the
> > root memcg). In all other cases the pcpu stock will either be drained
> > naturally by an allocation from another memcg, or an allocation from
> > the same memcg will "restore" it, making draining useless.
> >
> > We also can call into drain_all_pages() opportunistically, without
> > waiting for the result. On a busy system it's most likely useless: we
> > might oom before the scheduled works are executed.
> >
> > I admit I planned to do some work around this and even started, but
> > then never had enough time to finish it.
> >
> > Overall I'm somewhat resistant to the idea of making the generic
> > allocation & free paths slower for an improvement of stock draining.
> > It's not a strong objection, but IMO we should avoid doing this
> > without a really strong reason.
>
> The expectation would be that cache locking should not cause a slowdown
> of the allocation and free paths:
>
> https://manualsbrain.com/en/manuals/1246877/?page=313
>
>   For the P6 and more recent processor families, if the area of memory
>   being locked during a LOCK operation is cached in the processor that
>   is performing the LOCK operation as write-back memory and is
>   completely contained in a cache line, the processor may not assert
>   the LOCK# signal on the bus. Instead, it will modify the memory
>   location internally and allow its cache coherency mechanism to ensure
>   that the operation is carried out atomically. This operation is
>   called "cache locking." The cache coherency mechanism automatically
>   prevents two or more processors that ...
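To make the cache-locking argument concrete, here is a minimal sketch of
the kind of critical section being discussed, loosely modeled on
consume_stock() in mm/memcontrol.c (the exact code in the series
differs; memcg_stock is the existing percpu variable there, and struct
memcg_stock_pcp is laid out as shown below). As long as stock_lock sits
in the same write-back cache line as the data it protects, the atomic
operation behind spin_lock() on the uncontended fast path should be a
core-local "cache lock" rather than a bus lock:

/*
 * Sketch only: try to satisfy a charge from the local percpu stock.
 * Using a spinlock (rather than disabling irqs around unlocked percpu
 * accesses) is what would allow another cpu to drain this stock
 * remotely by taking the same lock.
 */
static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	struct memcg_stock_pcp *stock = this_cpu_ptr(&memcg_stock);
	unsigned long flags;
	bool ret = false;

	/*
	 * Uncontended case: the lock word and the fields it protects
	 * share one cache line, so the locked cmpxchg should stay in
	 * the local cache and no LOCK# bus cycle is needed.
	 */
	spin_lock_irqsave(&stock->stock_lock, flags);
	if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
		stock->nr_pages -= nr_pages;
		ret = true;
	}
	spin_unlock_irqrestore(&stock->stock_lock, flags);

	return ret;
}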
Just to keep the info easily available: the protected structure (struct
memcg_stock_pcp) fits in 48 bytes, which is less than the usual 64B
cacheline.

struct memcg_stock_pcp {
	spinlock_t                 stock_lock;              /*     0     4 */
	unsigned int               nr_pages;                /*     4     4 */
	struct mem_cgroup *        cached;                  /*     8     8 */
	struct obj_cgroup *        cached_objcg;            /*    16     8 */
	struct pglist_data *       cached_pgdat;            /*    24     8 */
	unsigned int               nr_bytes;                /*    32     4 */
	int                        nr_slab_reclaimable_b;   /*    36     4 */
	int                        nr_slab_unreclaimable_b; /*    40     4 */

	/* size: 48, cachelines: 1, members: 8 */
	/* padding: 4 */
	/* last cacheline: 48 bytes */
};

(It got smaller after patches 3/5, 4/5 and 5/5, which remove holes, the
work_struct and the flags, respectively.)

On top of that, patch 1/5 makes sure the percpu allocation is aligned to
the cacheline size.
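In code, that alignment guarantee could look something like the sketch
below (illustrative, not the actual patch 1/5; DEFINE_PER_CPU_ALIGNED
and SMP_CACHE_BYTES are standard kernel helpers, and the static_assert
is only there to document the "fits in one cache line" assumption):

/*
 * Place the percpu stock in the cacheline-aligned percpu section so
 * the 48-byte structure can never straddle two cache lines, which is
 * what the cache-locking argument above relies on.
 */
static DEFINE_PER_CPU_ALIGNED(struct memcg_stock_pcp, memcg_stock);

/* 48 bytes today; fail the build if the stock ever outgrows one line. */
static_assert(sizeof(struct memcg_stock_pcp) <= SMP_CACHE_BYTES);

With that, a spin_lock() on stock_lock taken from either the local or a
remote cpu operates on a single, fully cached line, matching the
conditions described in the manual excerpt above.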