On Mon 21-02-22 16:19:52, Sebastian Andrzej Siewior wrote:
> On 2022-02-21 15:46:30 [+0100], Michal Hocko wrote:
> > On Thu 17-02-22 10:48:02, Sebastian Andrzej Siewior wrote:
> > [...]
> > > @@ -2266,7 +2273,6 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> > >  	 * as well as workers from this path always operate on the local
> > >  	 * per-cpu data. CPU up doesn't touch memcg_stock at all.
> > >  	 */
> > > -	curcpu = get_cpu();
> > 
> > Could you make this a separate patch?
> 
> Sure.
> 
> > >  	for_each_online_cpu(cpu) {
> > >  		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
> > >  		struct mem_cgroup *memcg;
> > > @@ -2282,14 +2288,9 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> > >  		rcu_read_unlock();
> > > 
> > >  		if (flush &&
> > > -		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
> > > -			if (cpu == curcpu)
> > > -				drain_local_stock(&stock->work);
> > > -			else
> > > -				schedule_work_on(cpu, &stock->work);
> > > -		}
> > > +		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
> > > +			schedule_work_on(cpu, &stock->work);
> > 
> > Maybe I am missing something, but on !PREEMPT kernels there is nothing
> > really guaranteeing that the worker runs, so there should be a
> > cond_resched() after the mutex is unlocked. I do not think we want to
> > rely on callers to be aware of this subtlety.
> 
> There is no guarantee on PREEMPT kernels either. The worker will be
> made runnable and will be put on the CPU when the scheduler sees fit,
> and there could be other workers which take precedence (queued
> earlier). But I was not aware that the worker _needs_ to run before we
> return.

A lack of draining will not be a correctness problem (sorry, I should
have made that clear). It is more about subtlety than anything. E.g.
the charging path could be forced into memory reclaim because of the
cached charges which are still waiting to be drained. Not really
something to lose sleep over from the runtime perspective. I was just
wondering whether this makes things more complex than necessary.

> We might get migrated after put_cpu() so I wasn't aware that this is
> important. Should we attempt best effort and wait for the worker on
> the current CPU?
> 
> > An alternative would be to split out __drain_local_stock which
> > doesn't do local_lock.
> 
> But isn't the section in drain_local_stock() unprotected then?

local_lock instead of {get,put}_cpu would handle that, right?
-- 
Michal Hocko
SUSE Labs
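
To illustrate the idea in the last exchange, here is a minimal sketch
(not the posted patch, and not what was eventually merged) of splitting
out a lock-free __drain_local_stock() and using a local_lock in place of
get_cpu()/put_cpu() in drain_all_stock(). The stock_lock member of
struct memcg_stock_pcp and the stock_needs_flush() helper are
assumptions made for brevity; the latter stands in for the cached
memcg/objcg checks in the real code.

/* Drain the local CPU's stock; the caller holds memcg_stock.stock_lock. */
static void __drain_local_stock(void)
{
	struct memcg_stock_pcp *stock = this_cpu_ptr(&memcg_stock);

	drain_stock(stock);
	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
}

/* Work callback for remote CPUs: take the lock, then reuse the helper. */
static void drain_local_stock(struct work_struct *dummy)
{
	unsigned long flags;

	local_lock_irqsave(&memcg_stock.stock_lock, flags);
	__drain_local_stock();
	local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
}

static void drain_all_stock(struct mem_cgroup *root_memcg)
{
	unsigned long flags;
	int cpu, curcpu;

	if (!mutex_trylock(&percpu_charge_mutex))
		return;

	/*
	 * The local_lock replaces get_cpu()/put_cpu(): it keeps the task
	 * on this CPU for the duration of the loop (preemption disabled on
	 * !PREEMPT_RT, migration disabled on PREEMPT_RT) and it protects
	 * the local per-cpu stock, so the local CPU can be drained directly
	 * via __drain_local_stock() instead of bouncing through a worker.
	 */
	local_lock_irqsave(&memcg_stock.stock_lock, flags);
	curcpu = smp_processor_id();

	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

		/* stock_needs_flush() is a placeholder, see above. */
		if (stock_needs_flush(stock, root_memcg) &&
		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
			if (cpu == curcpu)
				__drain_local_stock();
			else
				schedule_work_on(cpu, &stock->work);
		}
	}
	local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
	mutex_unlock(&percpu_charge_mutex);
}

For simplicity the sketch holds the local lock across the whole loop,
which is heavier than strictly necessary; it only needs to cover the
curcpu lookup and the direct local drain.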