Re: [PATCH v2 bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 




> On Oct 4, 2023, at 9:18 AM, Song Liu <songliubraving@xxxxxxxx> wrote:
> 
> 
> 
>> On Oct 3, 2023, at 8:33 PM, Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx> wrote:
>> 
>> On Tue, Oct 3, 2023 at 8:08 PM Andrii Nakryiko
>> <andrii.nakryiko@xxxxxxxxx> wrote:
>>> 
>>> On Tue, Oct 3, 2023 at 5:45 PM Song Liu <song@xxxxxxxxxx> wrote:
>>>> 
>>>> htab_lock_bucket uses the following logic to avoid recursion:
>>>> 
>>>> 1. preempt_disable();
>>>> 2. check percpu counter htab->map_locked[hash] for recursion;
>>>>  2.1. if map_lock[hash] is already taken, return -BUSY;
>>>> 3. raw_spin_lock_irqsave();
>>>> 
>>>> However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
>>>> logic will not able to access the same hash of the hashtab and get -EBUSY.
>>>> This -EBUSY is not really necessary. Fix it by disabling IRQ before
>>>> checking map_locked:
>>>> 
>>>> 1. preempt_disable();
>>>> 2. local_irq_save();
>>>> 3. check percpu counter htab->map_locked[hash] for recursion;
>>>>  3.1. if map_lock[hash] is already taken, return -BUSY;
>>>> 4. raw_spin_lock().
>>>> 
>>>> Similarly, use raw_spin_unlock() and local_irq_restore() in
>>>> htab_unlock_bucket().
>>>> 
>>>> Suggested-by: Tejun Heo <tj@xxxxxxxxxx>
>>>> Signed-off-by: Song Liu <song@xxxxxxxxxx>
>>>> 
>>>> ---
>>>> Changes in v2:
>>>> 1. Use raw_spin_unlock() and local_irq_restore() in htab_unlock_bucket().
>>>>  (Andrii)
>>>> ---
>>>> kernel/bpf/hashtab.c | 7 +++++--
>>>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>>> 
>>> 
>>> Now it's more symmetrical and seems correct to me, thanks!
>>> 
>>> Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
>>> 
>>>> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
>>>> index a8c7e1c5abfa..fd8d4b0addfc 100644
>>>> --- a/kernel/bpf/hashtab.c
>>>> +++ b/kernel/bpf/hashtab.c
>>>> @@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
>>>>       hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
>>>> 
>>>>       preempt_disable();
>>>> +       local_irq_save(flags);
>>>>       if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
>>>>               __this_cpu_dec(*(htab->map_locked[hash]));
>>>> +               local_irq_restore(flags);
>>>>               preempt_enable();
>>>>               return -EBUSY;
>>>>       }
>>>> 
>>>> -       raw_spin_lock_irqsave(&b->raw_lock, flags);
>>>> +       raw_spin_lock(&b->raw_lock);
>> 
>> Song,
>> 
>> take a look at s390 crash in BPF CI.
>> I suspect this patch is causing it.
> 
> It indeed looks like triggered by this patch. But I haven't figured
> out why it happens. v1 seems ok for the same tests. 

I guess I finally figured out this (should be simple) bug. If I got it 
correctly, we need:

diff --git c/kernel/bpf/hashtab.c w/kernel/bpf/hashtab.c
index fd8d4b0addfc..1cfa2329a53a 100644
--- c/kernel/bpf/hashtab.c
+++ w/kernel/bpf/hashtab.c
@@ -160,6 +160,7 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
                __this_cpu_dec(*(htab->map_locked[hash]));
                local_irq_restore(flags);
                preempt_enable();
+               *pflags = flags;
                return -EBUSY;
        }


Running CI tests here:

https://github.com/kernel-patches/bpf/pull/5769

If it works, I will send v3. 

Thanks,
Song

PS: s390x CI is running slow. I got some jobs stayed in the queue 
for more than a hour. 

> 
> Song
> 
>> 
>> Ilya,
>> 
>> do you have an idea what is going on?
> 





[Index of Archives]     [Linux Samsung SoC]     [Linux Rockchip SoC]     [Linux Actions SoC]     [Linux for Synopsys ARC Processors]     [Linux NFS]     [Linux NILFS]     [Linux USB Devel]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]


  Powered by Linux