Thanks all for the replies.
On 2024/11/12 23:08, Thomas Gleixner wrote:
> On Fri, Nov 08 2024 at 14:32, Kunwu Chan wrote:
>> When PREEMPT_RT is enabled, 'spinlock_t' becomes preemptible, and the
>> BPF program holds a raw_spinlock under an interrupt handler, which
>> results in an invalid lock acquire context.
>
> This explanation is just wrong.
>
> The problem has nothing to do with an interrupt handler. Interrupt
> handlers on RT kernels are force threaded.
>>   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
>>   _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
>>   trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:462
>>   bpf_prog_2c29ac5cdc6b1842+0x43/0x47
>>   bpf_dispatcher_nop_func include/linux/bpf.h:1290 [inline]
>>   __bpf_prog_run include/linux/filter.h:701 [inline]
>>   bpf_prog_run include/linux/filter.h:708 [inline]
>>   __bpf_trace_run kernel/trace/bpf_trace.c:2340 [inline]
>>   bpf_trace_run1+0x2ca/0x520 kernel/trace/bpf_trace.c:2380
>>   trace_workqueue_activate_work+0x186/0x1f0 include/trace/events/workqueue.h:59
>>   __queue_work+0xc7b/0xf50 kernel/workqueue.c:2338
> The problematic lock nesting is the work queue pool lock, which is a
> raw spinlock.
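
So if I read the trace right, the nesting is roughly this (my own
annotated summary of the call chain, not actual code):

	__queue_work()                          /* kernel/workqueue.c */
	  raw_spin_lock(&pool->lock);           /* raw spinlock taken */
	  trace_workqueue_activate_work()       /* tracepoint fires */
	    bpf_prog_run()                      /* runs the attached prog */
	      trie_delete_elem()                /* kernel/bpf/lpm_trie.c */
	        spin_lock_irqsave(&trie->lock, flags);
	        /* spinlock_t sleeps on RT, invalid under pool->lock */

i.e. the sleeping trie->lock is taken while the raw pool->lock is
already held, no interrupt handler needed.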
>> @@ -330,7 +330,7 @@ static long trie_update_elem(struct bpf_map *map,
>>  	if (key->prefixlen > trie->max_prefixlen)
>>  		return -EINVAL;
>>  
>> -	spin_lock_irqsave(&trie->lock, irq_flags);
>> +	raw_spin_lock_irqsave(&trie->lock, irq_flags);
>>  
>>  	/* Allocate and fill a new node */
> Making this a raw spinlock moves the problem from the BPF trie code
> into the memory allocator. On RT the memory allocator cannot be
> invoked under a raw spinlock.
I'm a newbie in this field, but when I changed it to a raw spinlock,
the problem syzbot reported disappeared.
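
If I understand your point about the allocator, the trouble with my
patch is what trie_update_elem() does while holding the lock; roughly
(abridged from kernel/bpf/lpm_trie.c, as I read it):

	raw_spin_lock_irqsave(&trie->lock, irq_flags);
	...
	/* lpm_trie_node_alloc() allocates via bpf_map_kmalloc_node();
	 * on RT the allocator takes sleeping locks, which is not
	 * allowed while holding a raw spinlock. */
	new_node = lpm_trie_node_alloc(trie, value);

So the raw lock would just move the invalid nesting from trie->lock to
the allocator's locks.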
If we shouldn't change it like this, what should we do to deal with
this problem? If you have any good ideas, please let me know.
> Thanks,
>
>         tglx
--
Thanks,
Kunwu.Chan