Re: [PATCH bpf-next] bpf: Add a retry after refilling the free list when unit_alloc() fails

On Wed, Feb 12, 2025 at 12:49 AM Changwoo Min <changwoo@xxxxxxxxxx> wrote:
>
> (e.g., bpf_cpumask_create), allocate the additional free entry in an atomic
> manner (atomic = true in alloc_bulk).

...
> +       if (unlikely(!llnode && !retry)) {
> +               int cpu = smp_processor_id();
> +               alloc_bulk(c, 1, cpu_to_node(cpu), true);

This is broken.
Passing atomic doesn't help.
unit_alloc() can be called from any context
including NMI/IRQ/kprobe deeply nested in slab internals.
kmalloc() is not safe from there.
The whole point of bpf_mem_alloc() is to be safe from
unknown context. If we could do kmalloc(GFP_NOWAIT)
everywhere, bpf_mem_alloc() would not be needed.

But we may do something.
Draining free_by_rcu_ttrace and waiting_for_gp_ttrace can be done,
but will it address your case?
The commit log is too terse to understand what exactly is going on.
Pls share the call stack. What is the allocation size?
How many allocations do you do in a sequence?
Why are irqs disabled? Isn't this for scx?

pw-bot: cr




