On Fri, Aug 23, 2024 at 3:22 PM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
>
> On Wed, 2024-08-14 at 11:54 -0700, Andrii Nakryiko wrote:
> > Add sleepable implementations of the bpf_get_stack() and
> > bpf_get_task_stack() helpers and allow them to be used from sleepable
> > BPF programs (e.g., sleepable uprobes).
> >
> > Note, capturing the stack trace IPs itself is not sleepable (that
> > would need to be a separate project); only build ID fetching is
> > sleepable and thus more reliable, as it will wait for data to be
> > paged in, if necessary. For that we make use of the sleepable
> > build_id_parse() implementation.
> >
> > Now that the build ID related internals in kernel/bpf/stackmap.c can
> > be used in both sleepable and non-sleepable contexts, we need to add
> > additional rcu_read_lock()/rcu_read_unlock() protection around
> > fetching perf_callchain_entry, but with the refactoring in the
> > previous commit it's now pretty straightforward. We make sure to do
> > rcu_read_unlock() (in sleepable mode only) right before the
> > stack_map_get_build_id_offset() call, which can sleep. By that time
> > we don't have any more use for perf_callchain_entry.
> >
> > Note, bpf_get_task_stack() will fail for user mode if task != current,
> > and for kernel mode build IDs are irrelevant. So in that sense adding
> > a sleepable bpf_get_task_stack() implementation is a no-op. It feels
> > right to wire this up for symmetry and completeness, but I'm open to
> > just dropping it until we support the `user && crosstask` condition.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> > ---
>
> All seems logical.
> You skip wiring up support for sleepable bpf_get_task_stack() in
> tp_prog_func_proto(), pe_prog_func_proto() and
> raw_tp_prog_func_proto(); this is because these are used for programs
> that never run in sleepable context, right?

Correct. I contemplated for a bit wiring it up for tracepoints, as we
might get sleepable tracepoints eventually, but we can always do it as
a follow-up.

>
> Reviewed-by: Eduard Zingerman <eddyz87@xxxxxxxxx>
>
> [...]
>
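
[Editor's note] For readers following the thread, below is a minimal
sketch of the lock ordering the quoted commit message describes: hold
RCU only while the perf_callchain_entry is in use, copy the IPs out,
drop RCU (in the sleepable case), and only then do the build ID
resolution that may sleep. fetch_callchain() is a hypothetical
stand-in for the perf callchain lookup, and the parameter list of
stack_map_get_build_id_offset() is assumed here, not copied from the
applied patch.

/*
 * Sketch only: fetch_callchain() is a made-up helper, and the
 * stack_map_get_build_id_offset() signature is an assumption about
 * the refactored internals, not the actual patch.
 */
static long stack_capture_sketch(struct pt_regs *regs, u64 *ips, u32 max_depth,
				 struct bpf_stack_build_id *id_offs,
				 bool user, bool may_fault)
{
	struct perf_callchain_entry *trace;
	u32 trace_nr;

	if (may_fault)
		rcu_read_lock();	/* keep perf_callchain_entry stable */

	trace = fetch_callchain(regs, user, max_depth);
	if (!trace) {
		if (may_fault)
			rcu_read_unlock();
		return -EFAULT;
	}

	/* copy IPs out while still under RCU; trace is unused afterwards */
	trace_nr = min_t(u32, trace->nr, max_depth);
	memcpy(ips, trace->ip, trace_nr * sizeof(u64));

	if (may_fault)
		rcu_read_unlock();	/* drop RCU before anything that can sleep */

	/* sleepable build ID parsing may fault in pages backing the VMAs */
	stack_map_get_build_id_offset(id_offs, ips, trace_nr, user, may_fault);

	return trace_nr;
}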
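
[Editor's note] The func_proto wiring the review question refers to
would look roughly like the fragment below: the sleepable proto is
returned only for program types that can actually run sleepable (such
as uprobes), which is why tp_prog_func_proto(), pe_prog_func_proto()
and raw_tp_prog_func_proto() are left untouched. The *_sleepable_proto
identifiers, the enclosing callback, and whether the flag lives in
prog->sleepable or prog->aux->sleepable are assumptions about this
series, not quoted from it.

	/* e.g., inside kprobe_prog_func_proto() or a similar callback;
	 * the *_sleepable_proto names here are assumed, not verified */
	switch (func_id) {
	case BPF_FUNC_get_stack:
		return prog->sleepable ? &bpf_get_stack_sleepable_proto
				       : &bpf_get_stack_proto;
	case BPF_FUNC_get_task_stack:
		return prog->sleepable ? &bpf_get_task_stack_sleepable_proto
				       : &bpf_get_task_stack_proto;
	default:
		break;
	}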