On Tue, Oct 3, 2023 at 8:08 PM Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
>
> On Tue, Oct 3, 2023 at 5:45 PM Song Liu <song@xxxxxxxxxx> wrote:
> >
> > htab_lock_bucket uses the following logic to avoid recursion:
> >
> > 1. preempt_disable();
> > 2. check percpu counter htab->map_locked[hash] for recursion;
> >    2.1. if map_locked[hash] is already taken, return -EBUSY;
> > 3. raw_spin_lock_irqsave();
> >
> > However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
> > logic will not be able to access the same hash of the hashtab and will get
> > -EBUSY. This -EBUSY is not really necessary. Fix it by disabling IRQ before
> > checking map_locked:
> >
> > 1. preempt_disable();
> > 2. local_irq_save();
> > 3. check percpu counter htab->map_locked[hash] for recursion;
> >    3.1. if map_locked[hash] is already taken, return -EBUSY;
> > 4. raw_spin_lock().
> >
> > Similarly, use raw_spin_unlock() and local_irq_restore() in
> > htab_unlock_bucket().
> >
> > Suggested-by: Tejun Heo <tj@xxxxxxxxxx>
> > Signed-off-by: Song Liu <song@xxxxxxxxxx>
> >
> > ---
> > Changes in v2:
> > 1. Use raw_spin_unlock() and local_irq_restore() in htab_unlock_bucket().
> >    (Andrii)
> > ---
> >  kernel/bpf/hashtab.c | 7 +++++--
> >  1 file changed, 5 insertions(+), 2 deletions(-)
> >
>
> Now it's more symmetrical and seems correct to me, thanks!
>
> Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
>
> > diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> > index a8c7e1c5abfa..fd8d4b0addfc 100644
> > --- a/kernel/bpf/hashtab.c
> > +++ b/kernel/bpf/hashtab.c
> > @@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
> >         hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
> >
> >         preempt_disable();
> > +       local_irq_save(flags);
> >         if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
> >                 __this_cpu_dec(*(htab->map_locked[hash]));
> > +               local_irq_restore(flags);
> >                 preempt_enable();
> >                 return -EBUSY;
> >         }
> >
> > -       raw_spin_lock_irqsave(&b->raw_lock, flags);
> > +       raw_spin_lock(&b->raw_lock);

Song, take a look at the s390 crash in BPF CI. I suspect this patch is
causing it.

Ilya, do you have an idea what is going on?
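
For reference, a minimal sketch of how the unlock side described above could
look after this change; the unlock hunk is not quoted in this mail, so this is
inferred from the commit message and may not match the upstream code exactly:

    /*
     * Sketch only: release in the reverse order of htab_lock_bucket() --
     * drop the raw spinlock, decrement the per-CPU recursion counter,
     * then restore IRQs and re-enable preemption.
     */
    static inline void htab_unlock_bucket(const struct bpf_htab *htab,
                                          struct bucket *b, u32 hash,
                                          unsigned long flags)
    {
            hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
            raw_spin_unlock(&b->raw_lock);
            __this_cpu_dec(*(htab->map_locked[hash]));
            local_irq_restore(flags);
            preempt_enable();
    }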