Re: [PATCH v2 bpf-next 6/7] bpf: Allow bpf_spin_{lock,unlock} in sleepable progs

On 8/22/23 12:46 PM, Alexei Starovoitov wrote:
On Mon, Aug 21, 2023 at 07:53:22PM -0700, Yonghong Song wrote:


On 8/21/23 12:33 PM, Dave Marchevsky wrote:
Commit 9e7a4d9831e8 ("bpf: Allow LSM programs to use bpf spin locks")
disabled bpf_spin_lock usage in sleepable progs, stating:

   Sleepable LSM programs can be preempted which means that allowing spin
   locks will need more work (disabling preemption and the verifier
   ensuring that no sleepable helpers are called when a spin lock is
   held).

This patch disables preemption before grabbing bpf_spin_lock. The second
requirement above, "no sleepable helpers are called when a spin lock is
held", is already implicitly enforced by the verifier, which disallows
helper calls inside a spin_lock CS except for a few exceptions, none of
which sleep.

Due to the above preemption change, a bpf_spin_lock CS can also be
considered an RCU CS, so the verifier's in_rcu_cs() check is modified to
account for this.

Signed-off-by: Dave Marchevsky <davemarchevsky@xxxxxx>
---
   kernel/bpf/helpers.c  | 2 ++
   kernel/bpf/verifier.c | 9 +++------
   2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 945a85e25ac5..8bd3812fb8df 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -286,6 +286,7 @@ static inline void __bpf_spin_lock(struct bpf_spin_lock *lock)
   	compiletime_assert(u.val == 0, "__ARCH_SPIN_LOCK_UNLOCKED not 0");
   	BUILD_BUG_ON(sizeof(*l) != sizeof(__u32));
   	BUILD_BUG_ON(sizeof(*lock) != sizeof(__u32));
+	preempt_disable();
   	arch_spin_lock(l);
   }
@@ -294,6 +295,7 @@ static inline void __bpf_spin_unlock(struct bpf_spin_lock *lock)
   	arch_spinlock_t *l = (void *)lock;
   	arch_spin_unlock(l);
+	preempt_enable();
   }

preempt_disable()/preempt_enable() is not needed. Is it possible we can

preempt_disable is needed in all cases. This mistake slipped in when
we converted bpf progs from running with preemption disabled to running
with migration disabled. For example, see how raw_spin_lock is doing it.
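For reference, the raw_spin_lock comparison above points at the generic
spinlock wrappers, which disable preemption before taking the arch lock
and re-enable it only after releasing it. A simplified paraphrase of that
pattern (not verbatim kernel source; lockdep and contention annotations
omitted):

static inline void __raw_spin_lock(raw_spinlock_t *lock)
{
	preempt_disable();		/* preemption goes off before the lock is taken */
	do_raw_spin_lock(lock);		/* acquire the arch spinlock */
}

static inline void __raw_spin_unlock(raw_spinlock_t *lock)
{
	do_raw_spin_unlock(lock);	/* release the arch spinlock */
	preempt_enable();		/* preemption comes back on only after release */
}

This is the same ordering the patch above adds to
__bpf_spin_lock()/__bpf_spin_unlock().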

Okay, so a bug slipped in. That explains the difference between our bpf_spin_lock and raw_spin_lock. The change makes sense then.
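To illustrate what the series enables, a hypothetical sketch of a sleepable
LSM program taking a bpf_spin_lock around a short, non-sleeping map-value
update follows; the map name, value layout, and hook choice are illustrative
only and not taken from the patch set:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct stats_val {
	struct bpf_spin_lock lock;
	__u64 opens;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct stats_val);
} stats SEC(".maps");

/* Sleepable LSM hook; taking bpf_spin_lock here is what this series permits. */
SEC("lsm.s/file_open")
int BPF_PROG(count_file_opens, struct file *file)
{
	__u32 key = 0;
	struct stats_val *v;

	v = bpf_map_lookup_elem(&stats, &key);
	if (!v)
		return 0;

	bpf_spin_lock(&v->lock);	/* short critical section, no sleepable calls */
	v->opens++;
	bpf_spin_unlock(&v->lock);

	return 0;
}

char LICENSE[] SEC("license") = "GPL";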



