On 3/3/20 5:08 PM, Alexei Starovoitov wrote:
On Tue, Mar 03, 2020 at 03:15:54PM -0800, Yonghong Song wrote:
When experimenting with bpf_send_signal() helper in our production environment,
we experienced a deadlock in NMI mode:
#0 [fffffe000046be58] crash_nmi_callback at ffffffff8103f48b
#1 [fffffe000046be60] nmi_handle at ffffffff8101feed
#2 [fffffe000046beb8] default_do_nmi at ffffffff8102027e
#3 [fffffe000046bed8] do_nmi at ffffffff81020434
#4 [fffffe000046bef0] end_repeat_nmi at ffffffff81c01093
[exception RIP: queued_spin_lock_slowpath+68]
RIP: ffffffff8110be24 RSP: ffffc9002219f770 RFLAGS: 00000002
RAX: 0000000000000101 RBX: 0000000000000046 RCX: 000000000000002a
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff88871c96c044
RBP: 0000000000000000 R8: ffff88870f11f040 R9: 0000000000000000
R10: 0000000000000008 R11: 00000000acd93e4d R12: ffff88871c96c044
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000001
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
--- <NMI exception stack> ---
#5 [ffffc9002219f770] queued_spin_lock_slowpath at ffffffff8110be24
#6 [ffffc9002219f770] _raw_spin_lock_irqsave at ffffffff81a43012
#7 [ffffc9002219f780] try_to_wake_up at ffffffff810e7ecd
#8 [ffffc9002219f7e0] signal_wake_up_state at ffffffff810c7b55
#9 [ffffc9002219f7f0] __send_signal at ffffffff810c8602
#10 [ffffc9002219f830] do_send_sig_info at ffffffff810ca31a
#11 [ffffc9002219f868] bpf_send_signal at ffffffff8119d227
#12 [ffffc9002219f988] bpf_overflow_handler at ffffffff811d4140
#13 [ffffc9002219f9e0] __perf_event_overflow at ffffffff811d68cf
#14 [ffffc9002219fa10] perf_swevent_overflow at ffffffff811d6a09
#15 [ffffc9002219fa38] ___perf_sw_event at ffffffff811e0f47
#16 [ffffc9002219fc30] __schedule at ffffffff81a3e04d
#17 [ffffc9002219fc90] schedule at ffffffff81a3e219
#18 [ffffc9002219fca0] futex_wait_queue_me at ffffffff8113d1b9
#19 [ffffc9002219fcd8] futex_wait at ffffffff8113e529
#20 [ffffc9002219fdf0] do_futex at ffffffff8113ffbc
#21 [ffffc9002219fec0] __x64_sys_futex at ffffffff81140d1c
#22 [ffffc9002219ff38] do_syscall_64 at ffffffff81002602
#23 [ffffc9002219ff50] entry_SYSCALL_64_after_hwframe at ffffffff81c00068
Basically, while task->pi_lock is held, an NMI happens and a bpf program
executes. The bpf program calls the bpf_send_signal() helper, which calls
group_send_sig_info() from an irq_work; that in turn tries to grab
task->pi_lock again and deadlocks.
To break the deadlock, the group_send_sig_info() call should be delayed
until it is safe to make.
This patch registers a task_work callback inside the irq_work so that
group_send_sig_info() can later be called safely from the task_work.
This patch also fixes a potential issue where the "current" task observed
in NMI context may be gone by the time the irq_work actually runs:
hold a reference to the task in NMI context and drop it inside the
irq_work, so the task cannot go away in between.
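The mechanism described above can be sketched as follows (a minimal sketch,
not the actual patch: the struct and function names are hypothetical, and
task_work_add() is shown with its v5.x-era bool-notify signature):

```c
struct send_signal_irq_work {
	struct irq_work irq_work;
	struct callback_head twork;
	struct task_struct *task;
	u32 sig;
};

/* Runs in process context on return to user space, where it is safe
 * for the signal delivery path to take task->pi_lock.
 */
static void do_bpf_send_signal_task_work(struct callback_head *head)
{
	struct send_signal_irq_work *work;

	work = container_of(head, struct send_signal_irq_work, twork);
	group_send_sig_info(work->sig, SEND_SIG_PRIV, work->task, PIDTYPE_TGID);
	put_task_struct(work->task);	/* drop the ref taken in NMI context */
}

/* irq_work handler: it may still be unsafe to take task->pi_lock here
 * (the irq_work can fire while the interrupted context holds it), so
 * only register the task_work and let it do the real sending.
 */
static void do_bpf_send_signal_irq_work(struct irq_work *entry)
{
	struct send_signal_irq_work *work;

	work = container_of(entry, struct send_signal_irq_work, irq_work);
	init_task_work(&work->twork, do_bpf_send_signal_task_work);
	if (task_work_add(work->task, &work->twork, true))
		put_task_struct(work->task);	/* task is exiting, drop the ref */
}
```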
Fixes: 8482941f0906 ("bpf: Add bpf_send_signal_thread() helper")
Fixes: 8b401f9ed244 ("bpf: implement bpf_send_signal() helper")
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Suggested-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: Yonghong Song <yhs@xxxxxx>
I don't think that fixes it.
The stack trace is not running in NMI context.
It's a sw event, so 'if (in_nmi())' is false.
try_to_wake_up() is safe to call from irq_work for both current and other tasks.
I don't think task_work() is necessary here.
I thought the nmi is there but gone by the time the irq_work takes over ...
But clearly I am wrong; it looks like a perf_sw_event ...
It's a very similar issue that was addressed by
commit eac9153f2b58 ("bpf/stackmap: Fix deadlock with rq_lock in bpf_get_stack()")
Imo the same approach will work here.
Please craft a reproducer first though.
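For reference, the approach in eac9153f2b58 detects the unsafe context and
defers the lock-taking operation to a pre-allocated per-cpu irq_work,
refusing with -EBUSY while that slot is still in flight. Applied here, one
possible shape is below (a sketch only, not a tested fix: the helper
already defers for in_nmi(), so the change is to treat any irqs-disabled
context, e.g. a perf sw event fired under the scheduler's rq_lock, the same
way; the irq_work flags layout varies across kernel versions):

```c
static DEFINE_PER_CPU(struct send_signal_irq_work, send_signal_work);

BPF_CALL_1(bpf_send_signal, u32, sig)
{
	struct send_signal_irq_work *work;

	/* ... existing permission and validity checks ... */

	if (irqs_disabled()) {	/* also true for in_nmi() paths */
		work = this_cpu_ptr(&send_signal_work);
		if (atomic_read(&work->irq_work.flags) & IRQ_WORK_BUSY)
			return -EBUSY;	/* per-cpu slot still pending */

		work->task = current;
		work->sig = sig;
		irq_work_queue(&work->irq_work);
		return 0;
	}

	return group_send_sig_info(sig, SEND_SIG_PRIV, current, PIDTYPE_TGID);
}
```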
Thanks for the tip. I was not aware of this commit. I will try to reproduce
the issue and then fix it properly.
I think the approach Song used for the above commit can be adopted for this
case as well.