On 8/26/2023 1:16 AM, Alexei Starovoitov wrote:
> On Thu, Aug 24, 2023 at 11:04 PM Hou Tao <houtao@xxxxxxxxxxxxxxx> wrote:
>>> Could you try the following:
>>> diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
>>> index 9c49ae53deaf..ee8262f58c5a 100644
>>> --- a/kernel/bpf/memalloc.c
>>> +++ b/kernel/bpf/memalloc.c
>>> @@ -442,7 +442,10 @@ static void bpf_mem_refill(struct irq_work *work)
>>>
>>>  static void notrace irq_work_raise(struct bpf_mem_cache *c)
>>>  {
>>> -	irq_work_queue(&c->refill_work);
>>> +	if (!irq_work_queue(&c->refill_work)) {
>>> +		preempt_disable_notrace();
>>> +		preempt_enable_notrace();
>>> +	}
>>>  }
>>>
>>> The idea is that it will ask for resched if preemptible.
>>> Will it address the issue you're seeing?
>>>
>> No. It didn't work.
> why?

I don't know the exact reason yet. It seems that preempt_enable_notrace() invoked in the preempting task doesn't give the CPU back to the preempted task. I will add some debug info to check that.

>
>> If you are concerned about the overhead of
>> preempt_enable_notrace(), we could use local_irq_save() and
>> local_irq_restore() instead.
> That's much better.
> Moving local_irq_restore() after irq_work_raise() in process ctx
> would mean that the irq_work will execute immediately after local_irq_restore(),
> which would make bpf_ma behave like sync allocation.
> Which is the ideal situation. The preempt disable/enable game is more fragile.

OK. So you are OK with wrapping the whole implementation of unit_alloc() and unit_free() with local_irq_save() and local_irq_restore(), right?
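
For reference, a minimal sketch of the restructuring being discussed, assuming the rough shape of unit_alloc() in kernel/bpf/memalloc.c at the time (the active-counter, free-list, and watermark details below are simplified and may not match upstream exactly; unit_free() would be wrapped the same way):

static void notrace *unit_alloc(struct bpf_mem_cache *c)
{
	struct llist_node *llnode = NULL;
	unsigned long flags;
	int cnt = 0;

	/* Disable IRQs around the whole fast path, not just the
	 * per-CPU free-list manipulation.
	 */
	local_irq_save(flags);
	if (local_inc_return(&c->active) == 1) {
		llnode = __llist_del_first(&c->free_llist);
		if (llnode)
			cnt = --c->free_cnt;
	}
	local_dec(&c->active);

	/* Raising the irq_work while IRQs are still disabled means it
	 * cannot preempt us here; in process context it fires as soon
	 * as local_irq_restore() re-enables interrupts on this CPU,
	 * which makes the refill behave like a synchronous allocation.
	 */
	if (cnt < c->low_watermark)
		irq_work_raise(c);
	local_irq_restore(flags);

	return llnode;
}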