Hi,

On Tue, 2023-04-18 at 15:31 +0000, Aditi Ghag wrote:
> Previously, the BPF TCP iterator was acquiring the fast version of the
> sock lock, which disables BH. This introduced a circular dependency with
> code paths that later acquire the sockets hash table bucket lock.
> Replace the fast version of the sock lock with the slow one, which
> facilitates BPF programs executed from the iterator to destroy TCP
> listening sockets using the bpf_sock_destroy kfunc (implemented in
> follow-up commits).
>
> Here is a stack trace that motivated this change:
>
> ```
> 1) sock_lock with BH disabled + bucket lock
>
> lock_acquire+0xcd/0x330
> _raw_spin_lock_bh+0x38/0x50
> inet_unhash+0x96/0xd0
> tcp_set_state+0x6a/0x210
> tcp_abort+0x12b/0x230
> bpf_prog_f4110fb1100e26b5_iter_tcp6_server+0xa3/0xaa
> bpf_iter_run_prog+0x1ff/0x340
> bpf_iter_tcp_seq_show+0xca/0x190
> bpf_seq_read+0x177/0x450
> vfs_read+0xc6/0x300
> ksys_read+0x69/0xf0
> do_syscall_64+0x3c/0x90
> entry_SYSCALL_64_after_hwframe+0x72/0xdc
>
> 2) sock lock with BH enabled
>
> [ 1.499968] lock_acquire+0xcd/0x330
> [ 1.500316] _raw_spin_lock+0x33/0x40

The above is quite confusing to me: here BH are disabled as well
(otherwise the whole softirq processing would be really broken).

Thanks,

Paolo
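
For context, the swap the commit message describes would look roughly like the fragment below in the iterator's ->show() path. This is only a sketch of the described change, not the actual patch; the tcp_prog_seq_show() call and the sk_fullsock() guard are assumed from the upstream bpf_iter_tcp_seq_show() in net/ipv4/tcp_ipv4.c, and the surrounding code is elided.

```
/* Before: the fast sock lock keeps BH disabled while the BPF program
 * runs, so a prog calling tcp_abort() -> inet_unhash() takes the hash
 * bucket spinlock with BH already disabled (trace 1 above).
 */
if (sk_fullsock(sk))
	slow = lock_sock_fast(sk);
ret = tcp_prog_seq_show(prog, &meta, v, uid);
if (sk_fullsock(sk))
	unlock_sock_fast(sk, slow);

/* After: the plain (slow) sock lock lets the BPF program run in
 * process context with BH enabled, which is what destroying TCP
 * listening sockets via the bpf_sock_destroy kfunc needs.
 */
if (sk_fullsock(sk))
	lock_sock(sk);
ret = tcp_prog_seq_show(prog, &meta, v, uid);
if (sk_fullsock(sk))
	release_sock(sk);
```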