Patch "bpf: Fix unnecessary -EBUSY from htab_lock_bucket" has been added to the 6.5-stable tree

This is a note to let you know that I've just added the patch titled

    bpf: Fix unnecessary -EBUSY from htab_lock_bucket

to the 6.5-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     bpf-fix-unnecessary-ebusy-from-htab_lock_bucket.patch
and it can be found in the queue-6.5 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 9cd85760c4639a8a25f6ef891bf1d9735c900172
Author: Song Liu <song@xxxxxxxxxx>
Date:   Wed Oct 11 22:57:41 2023 -0700

    bpf: Fix unnecessary -EBUSY from htab_lock_bucket
    
    [ Upstream commit d35381aa73f7e1e8b25f3ed5283287a64d9ddff5 ]
    
    htab_lock_bucket uses the following logic to avoid recursion:
    
    1. preempt_disable();
    2. check percpu counter htab->map_locked[hash] for recursion;
       2.1. if map_locked[hash] is already taken, return -EBUSY;
    3. raw_spin_lock_irqsave();
    
    However, if an IRQ hits between steps 2 and 3, BPF programs attached to the
    IRQ logic will not be able to access the same hash of the hashtab and will
    get -EBUSY.
    
    This -EBUSY is not really necessary. Fix it by disabling IRQ before
    checking map_locked:
    
    1. preempt_disable();
    2. local_irq_save();
    3. check percpu counter htab->map_locked[hash] for recursion;
       3.1. if map_locked[hash] is already taken, return -EBUSY;
    4. raw_spin_lock().
    
    Similarly, use raw_spin_unlock() and local_irq_restore() in
    htab_unlock_bucket().
    
    Fixes: 20b6cc34ea74 ("bpf: Avoid hashtab deadlock with map_locked")
    Suggested-by: Tejun Heo <tj@xxxxxxxxxx>
    Signed-off-by: Song Liu <song@xxxxxxxxxx>
    Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
    Signed-off-by: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
    Link: https://lore.kernel.org/bpf/7a9576222aa40b1c84ad3a9ba3e64011d1a04d41.camel@xxxxxxxxxxxxx
    Link: https://lore.kernel.org/bpf/20231012055741.3375999-1-song@xxxxxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 56d3da7d0bc66..e209e748a8e05 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
 	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
 
 	preempt_disable();
+	local_irq_save(flags);
 	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
 		__this_cpu_dec(*(htab->map_locked[hash]));
+		local_irq_restore(flags);
 		preempt_enable();
 		return -EBUSY;
 	}
 
-	raw_spin_lock_irqsave(&b->raw_lock, flags);
+	raw_spin_lock(&b->raw_lock);
 	*pflags = flags;
 
 	return 0;
@@ -172,8 +174,9 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
 				      unsigned long flags)
 {
 	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
-	raw_spin_unlock_irqrestore(&b->raw_lock, flags);
+	raw_spin_unlock(&b->raw_lock);
 	__this_cpu_dec(*(htab->map_locked[hash]));
+	local_irq_restore(flags);
 	preempt_enable();
 }
 

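For convenience, here is a rough sketch of how the two helpers read once this
patch is applied. It is pieced together from the context lines of the hunks
above, with comments added; anything outside the diff context (e.g. the exact
parameter lists) is an approximation of the kernel source, not a verbatim copy:

/* Sketch of htab_lock_bucket() after this patch. IRQs are now disabled
 * before the recursion check, so an IRQ can no longer fire between the
 * map_locked check and taking the bucket lock.
 */
static inline int htab_lock_bucket(const struct bpf_htab *htab,
				   struct bucket *b, u32 hash,
				   unsigned long *pflags)
{
	unsigned long flags;

	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);

	preempt_disable();
	local_irq_save(flags);		/* step 2: disable IRQs first */
	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
		/* step 3.1: recursion on this hash; back out and return -EBUSY */
		__this_cpu_dec(*(htab->map_locked[hash]));
		local_irq_restore(flags);
		preempt_enable();
		return -EBUSY;
	}

	raw_spin_lock(&b->raw_lock);	/* step 4: plain lock, IRQs already off */
	*pflags = flags;

	return 0;
}

/* Matching unlock path: release the bucket lock, drop the per-CPU recursion
 * counter, then restore IRQs and preemption in reverse order of the lock path.
 */
static inline void htab_unlock_bucket(const struct bpf_htab *htab,
				      struct bucket *b, u32 hash,
				      unsigned long flags)
{
	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
	raw_spin_unlock(&b->raw_lock);
	__this_cpu_dec(*(htab->map_locked[hash]));
	local_irq_restore(flags);
	preempt_enable();
}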

