On 7/6/2023 11:34 AM, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@xxxxxxxxxx>
>
> The next patch will introduce cross-cpu llist access, and the existing
> irq_work_sync() + drain_mem_cache() + rcu_barrier_tasks_trace() mechanism will
> not be enough, since irq_work_sync() + drain_mem_cache() on cpu A won't
> guarantee that the llists on cpu A are empty. The free_bulk() on cpu B might add
> objects back to the llist of cpu A. Add a 'bool draining' flag.
> The modified sequence looks like:
> for_each_cpu:
>   WRITE_ONCE(c->draining, true); // do_call_rcu_ttrace() won't be doing call_rcu() any more
>   irq_work_sync();               // wait for irq_work callback (free_bulk) to finish
>   drain_mem_cache();             // free all objects
>   rcu_barrier_tasks_trace();     // wait for RCU callbacks to execute
>
> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>

Acked-by: Hou Tao <houtao1@xxxxxxxxxx>