On Thu, Feb 11, 2021 at 05:42:29PM -0800, Song Liu wrote:
> +static int __task_vma_seq_show(struct seq_file *seq, bool in_stop)
> +{
> +	struct bpf_iter_seq_task_vma_info *info = seq->private;
> +	struct bpf_iter__task_vma ctx;
> +	struct bpf_iter_meta meta;
> +	struct bpf_prog *prog;
> +
> +	meta.seq = seq;
> +	prog = bpf_iter_get_info(&meta, in_stop);
> +	if (!prog)
> +		return 0;
> +
> +	ctx.meta = &meta;
> +	ctx.task = info->task;
> +	ctx.vma = info->vma;
> +	return bpf_iter_run_prog(prog, &ctx);
> +}

That part doesn't match patch 2.
If it needs to call a sleepable prog it should call it differently,
since bpf_iter_run_prog() is doing rcu_read_lock().

Try adding the following:

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 845b2168e006..06f65b18bc54 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1159,6 +1159,8 @@ BPF_CALL_3(bpf_d_path, struct path *, path, char *, buf, u32, sz)
 	long len;
 	char *p;
 
+	might_sleep();

and you will see a splat from patch 4's task_vma test.

But the might_sleep() above is not correct. d_path is not about sleepability.
It does its locking under rcu_read_lock.

bpf_d_path_allowed() is confusing, since it checks for lsm sleepable hooks,
but it's not about sleeping. Those hooks are safe because they don't hold
any vfs locks.

With bpf iterators the situation is the same. They are all called from
bpf_seq_read(), which runs in user context, and there are no vfs locks to
worry about. So it's safe to enable all iterators to access bpf_d_path.
No need to introduce sleepable iterators.

The upcoming bpf_for_each_map_elem is kind of an iterator too, where the
map elem callback shouldn't be able to call bpf_d_path, but it won't have
expected_attach_type == BPF_TRACE_ITER, so bpf_for_each_map_elem will be
fine the way the last patches were done.

So just drop 'prog->aux->sleepable' from patch 2 and delete patch 3.
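
For reference on the rcu_read_lock() point above, bpf_iter_run_prog() is
roughly the below (paraphrased from memory; see kernel/bpf/bpf_iter.c for
the exact code):

int bpf_iter_run_prog(struct bpf_prog *prog, void *ctx)
{
	int ret;

	/* the iter prog always runs under rcu_read_lock(),
	 * so a sleepable prog cannot be called through this path as-is
	 */
	rcu_read_lock();
	migrate_disable();
	ret = BPF_PROG_RUN(prog, ctx);
	migrate_enable();
	rcu_read_unlock();

	/* iter progs may only return 0 (ok) or 1 (retry the same object) */
	return ret == 0 ? 0 : -EAGAIN;
}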
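
And to spell out what dropping 'prog->aux->sleepable' means, patch 2's
bpf_d_path_allowed() could look something like the below (untested sketch;
I'm assuming the existing lsm and allowlist checks stay as they are):

static bool bpf_d_path_allowed(const struct bpf_prog *prog)
{
	/* any iter prog is called from bpf_seq_read() in user context
	 * with no vfs locks held, so no sleepable requirement here
	 */
	if (prog->type == BPF_PROG_TYPE_TRACING &&
	    prog->expected_attach_type == BPF_TRACE_ITER)
		return true;

	/* existing checks unchanged */
	if (prog->type == BPF_PROG_TYPE_LSM)
		return bpf_lsm_is_sleepable_hook(prog->aux->attach_btf_id);

	return btf_id_set_contains(&btf_allowlist_d_path,
				   prog->aux->attach_btf_id);
}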