Re: [PATCH bpf-next] bpf: Add a retry after refilling the free list when unit_alloc() fails

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



Hello,

>  > What is sizeof(struct bpf_cpumask) in your system?
>
> In my system, sizeof(struct bpf_cpumask) is 1032.
It was a wrong number. sizeof(struct bpf_cpumask) is actually 16.

On 25. 2. 16. 00:16, Changwoo Min wrote:
Hello,

On 25. 2. 15. 12:51, Alexei Starovoitov wrote:
> On Fri, Feb 14, 2025 at 1:24 AM Changwoo Min <changwoo@xxxxxxxxxx> wrote:
 >>
 >> Hello Alexei,
 >>
 >> Thank you for the comments! I reordered your comments for ease of
 >> explanation.
 >>
 >> On 25. 2. 14. 02:45, Alexei Starovoitov wrote:
>>> On Wed, Feb 12, 2025 at 12:49 AM Changwoo Min <changwoo@xxxxxxxxxx> wrote:
 >>
 >>> The commit log is too terse to understand what exactly is going on.
 >>> Pls share the call stack. What is the allocation size?
 >>> How many do you do in a sequence?
 >>
 >> The symptom is that an scx scheduler (scx_lavd) fails to load on
 >> an ARM64 platform on its first try. The second try succeeds. In
 >> the failure case, the kernel spits the following messages:
 >>
 >> [   27.431380] sched_ext: BPF scheduler "lavd" disabled (runtime error)
 >> [   27.431396] sched_ext: lavd: ops.init() failed (-12)
 >> [   27.431401]    scx_ops_enable.isra.0+0x838/0xe48
 >> [   27.431413]    bpf_scx_reg+0x18/0x30
 >> [   27.431418]    bpf_struct_ops_link_create+0x144/0x1a0
 >> [   27.431427]    __sys_bpf+0x1560/0x1f98
 >> [   27.431433]    __arm64_sys_bpf+0x2c/0x80
 >> [   27.431439]    do_el0_svc+0x74/0x120
 >> [   27.431446]    el0_svc+0x80/0xb0
 >> [   27.431454]    el0t_64_sync_handler+0x120/0x138
 >> [   27.431460]    el0t_64_sync+0x174/0x178
 >>
 >> The ops.init() failed because the 5th bpf_cpumask_create() calls
 >> failed during the initialization of the BPF scheduler. The exact
 >> point where bpf_cpumask_create() failed is here [1]. That scx
 >> scheduler allocates 5 CPU masks to aid its scheduling decision.
 >
 > ...
 >
 >> In this particular scenario, the IRQ is not disabled. I just
 >
 > since irq-s are not disabled the unit_alloc() should have done:
 >          if (cnt < c->low_watermark)
 >                  irq_work_raise(c);
 >
 > and alloc_bulk() should have started executing after the first
 > calloc_cpumask(&active_cpumask);
 > to refill it from 3 to 64

Is there any possibility that irq_work is not scheduled right away on aarch64?

 >
 > What is sizeof(struct bpf_cpumask) in your system?

In my system, sizeof(struct bpf_cpumask) is 1032.

 >
 > Something doesn't add up. irq_work_queue() should be
 > instant when irq-s are not disabled.
 > This is not IRQ_WORK_LAZY.> Are you running PREEMPT_RT ?

No, CONFIG_PREEMPT_RT is not set.

Regards,
Changwoo Min







[Index of Archives]     [Linux Samsung SoC]     [Linux Rockchip SoC]     [Linux Actions SoC]     [Linux for Synopsys ARC Processors]     [Linux NFS]     [Linux NILFS]     [Linux USB Devel]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]


  Powered by Linux