On Thu 17-02-22 10:48:02, Sebastian Andrzej Siewior wrote:
[...]
> @@ -2266,7 +2273,6 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
>  	 * as well as workers from this path always operate on the local
>  	 * per-cpu data. CPU up doesn't touch memcg_stock at all.
>  	 */
> -	curcpu = get_cpu();

Could you make this a separate patch?

>  	for_each_online_cpu(cpu) {
>  		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
>  		struct mem_cgroup *memcg;
> @@ -2282,14 +2288,9 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
>  		rcu_read_unlock();
>
>  		if (flush &&
> -		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
> -			if (cpu == curcpu)
> -				drain_local_stock(&stock->work);
> -			else
> -				schedule_work_on(cpu, &stock->work);
> -		}
> +		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
> +			schedule_work_on(cpu, &stock->work);

Maybe I am missing something, but on !PREEMPT kernels there is nothing
really guaranteeing that the worker ever runs, so there should be a
cond_resched() after the mutex is unlocked. I do not think we want to rely
on callers being aware of this subtlety.

An alternative would be to split out a __drain_local_stock which doesn't
take the local_lock, so drain_all_stock can keep draining the local CPU
directly.
-- 
Michal Hocko
SUSE Labs
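
For illustration, a rough sketch of the suggested __drain_local_stock split
(completely untested; drain_obj_stock, drain_stock, FLUSHING_CACHED_CHARGE
and the stock_lock member are assumed to match the patched mm/memcontrol.c
and may differ in detail):

	/* Drain the local stock; caller holds memcg_stock.stock_lock. */
	static void __drain_local_stock(struct memcg_stock_pcp *stock)
	{
		drain_obj_stock(stock);
		drain_stock(stock);
		clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
	}

	/* Work callback: take the local_lock and drain this CPU's stock. */
	static void drain_local_stock(struct work_struct *dummy)
	{
		unsigned long flags;

		local_lock_irqsave(&memcg_stock.stock_lock, flags);
		__drain_local_stock(this_cpu_ptr(&memcg_stock));
		local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
	}

With such a split, drain_all_stock could call __drain_local_stock for the
current CPU (with the lock held or interrupts disabled as appropriate) and
only schedule_work_on() for remote CPUs, avoiding the reliance on the
workqueue ever getting to run on !PREEMPT kernels.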