Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx> writes:

> On 14-Feb-2020 02:39:31 PM, Thomas Gleixner wrote:
>> Replace the preempt_disable/enable() pairs with migrate_disable/enable()
>> pairs to prepare BPF to work on PREEMPT_RT enabled kernels. On a non-RT
>> kernel this maps to preempt_disable/enable(), i.e. no functional change.

...

> Having all those events randomly and silently discarded might be quite
> unexpected from a user standpoint. This turns the deadlock prevention
> mechanism into a random tracepoint-dropping facility, which is
> unsettling.

Well, it randomly drops events which might be unrelated to the syscall
target today already; this will just drop some more. Shrug.

> One alternative approach we could consider to solve this is to make
> this deadlock prevention nesting counter per-thread rather than
> per-cpu.

That should work both on !RT and RT.

> Also, I don't think using __this_cpu_inc() without preempt-disable or
> irq off is safe. You'll probably want to move to this_cpu_inc/dec
> instead, which can be heavier on some architectures.

Good catch.
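For reference, a minimal sketch of the affected pattern (the counter is
the per-CPU bpf_prog_active recursion guard in kernel/trace/bpf_trace.c;
the guard_enter/guard_exit helpers below are illustrative names, not the
actual kernel functions):

#include <linux/percpu.h>
#include <linux/errno.h>

DEFINE_PER_CPU(int, bpf_prog_active);

/*
 * Under preempt_disable() the non-atomic __this_cpu_inc_return() was
 * fine: the task could neither migrate nor be preempted between the
 * load and the store. With only migrate_disable(), preemption stays
 * enabled on RT, so that read-modify-write can be torn by another
 * task preempting on the same CPU. this_cpu_inc_return() and
 * this_cpu_dec() do the update as a single preemption-safe per-CPU
 * operation, which can be heavier on architectures lacking such
 * instructions.
 */
static int bpf_prog_guard_enter(void)
{
	if (this_cpu_inc_return(bpf_prog_active) != 1) {
		this_cpu_dec(bpf_prog_active);
		return -EBUSY;	/* recursion detected: event is dropped */
	}
	return 0;
}

static void bpf_prog_guard_exit(void)
{
	this_cpu_dec(bpf_prog_active);
}

Thanks,

	tglx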