On 1/27/25 2:15 PM, Alexei Starovoitov wrote:
On Sun, Jan 26, 2025 at 1:31 AM Abel Wu <wuyun.abel@xxxxxxxxxxxxx> wrote:
On 1/25/25 4:20 AM, Martin KaFai Lau wrote:
On 12/20/24 10:10 PM, Abel Wu wrote:
The following commit
bc235cdb423a ("bpf: Prevent deadlock from recursive bpf_task_storage_[get|delete]")
first introduced deadlock prevention for fentry/fexit programs attaching
on the bpf_task_storage helpers. Starting with its v6 revision, that
commit also applied the same logic in the map free path.
Later, bpf_cgrp_storage was introduced in
c4bcfb38a95e ("bpf: Implement cgroup storage available to non-cgroup-attached bpf progs")
and it faces the same issue as bpf_task_storage. However, instead of its
busy counter, NULL was passed to bpf_local_storage_map_free(), which
left a window open for the following deadlock:
<TASK>
(acquiring local_storage->lock)
_raw_spin_lock_irqsave+0x3d/0x50
bpf_local_storage_update+0xd1/0x460
bpf_cgrp_storage_get+0x109/0x130
bpf_prog_a4d4a370ba857314_cgrp_ptr+0x139/0x170
? __bpf_prog_enter_recur+0x16/0x80
bpf_trampoline_6442485186+0x43/0xa4
cgroup_storage_ptr+0x9/0x20
(holding local_storage->lock)
bpf_selem_unlink_storage_nolock.constprop.0+0x135/0x160
bpf_selem_unlink_storage+0x6f/0x110
bpf_local_storage_map_free+0xa2/0x110
bpf_map_free_deferred+0x5b/0x90
process_one_work+0x17c/0x390
worker_thread+0x251/0x360
kthread+0xd2/0x100
ret_from_fork+0x34/0x50
ret_from_fork_asm+0x1a/0x30
</TASK>
Progs:
- A: SEC("fentry/cgroup_storage_ptr")
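For context, here is a minimal sketch of the busy-counter pattern the fix
follows. It mirrors the bpf_task_storage approach and assumes the upstream
bpf_local_storage_map_free() signature that takes an optional per-cpu busy
counter; names and exact signatures may differ from the tree this patch
targets.

/* Sketch only, not the exact patch. */
static DEFINE_PER_CPU(int, bpf_cgrp_storage_busy);

static bool bpf_cgrp_storage_trylock(void)
{
	migrate_disable();
	if (unlikely(this_cpu_inc_return(bpf_cgrp_storage_busy) != 1)) {
		/* The counter is already elevated on this CPU (e.g. by the
		 * map free path), so bail out instead of re-taking
		 * local_storage->lock. */
		this_cpu_dec(bpf_cgrp_storage_busy);
		migrate_enable();
		return false;
	}
	return true;
}

static void cgroup_storage_map_free(struct bpf_map *map)
{
	/* Previously NULL was passed here, so a fentry prog attached to
	 * cgroup_storage_ptr() could call bpf_cgrp_storage_get() and
	 * deadlock on local_storage->lock. With the busy counter passed
	 * in, the helper's trylock above fails and it bails out instead. */
	bpf_local_storage_map_free(map, &cgroup_cache, &bpf_cgrp_storage_busy);
}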
The v1 thread suggested using notrace in a few functions. I didn't see a
counterargument on why that wouldn't be sufficient.
imo, that would be a better option than adding more unnecessary failures
to all the other normal use cases that are not interested in tracing
cgroup_storage_ptr().
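For reference, the notrace alternative would look roughly like the sketch
below. This is only an illustration of the suggestion, not a tested change,
and the function body is copied from the upstream layout, which may differ
here.

/* notrace keeps the function out of the ftrace attach set, so
 * fentry/fexit programs can no longer nest inside the storage lock
 * via this path. */
static notrace struct bpf_local_storage __rcu **cgroup_storage_ptr(void *owner)
{
	struct cgroup *cg = owner;

	return &cg->bpf_cgrp_storage;
}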
Martin,
task_storage_map_free() is doing this busy inc/dec already,
in that sense doing the same in cgroup_storage_map_free() fits.
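As a rough illustration of that inc/dec, the generic free path bumps the
supplied per-cpu counter around each unlink (sketch based on the upstream
bpf_local_storage code; details may have shifted):

	/* Inside bpf_local_storage_map_free(), roughly: */
	if (busy_counter) {
		migrate_disable();
		this_cpu_inc(*busy_counter);
	}
	bpf_selem_unlink(selem, true);	/* takes local_storage->lock */
	if (busy_counter) {
		this_cpu_dec(*busy_counter);
		migrate_enable();
	}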
sgtm. Agreed on staying consistent with task_storage_map_free().
It would be nice if the busy inc/dec usage could be revisited after the
rqspinlock work.
Acked-by: Martin KaFai Lau <martin.lau@xxxxxxxxxx>