I'm afraid filtering in user-space tools is not enough, because this is a
kernel BUG. Running a command like

    retsnoop -e 'pick_next_task_fair' -a ':kernel/sched/*.c' -vvv

is guaranteed to crash the kernel: BPF_LINK_TYPE_KPROBE_MULTI accidentally
attaches BPF programs to preempt_count_{add,sub}, which in turn overflows
the stack because the handler itself calls those functions.

Checking in libbpf is not enough either, since some tools use the bpf
syscall directly; we cannot cover all such cases in user space. So a check
in the kernel is a must -- we cannot rely on users not to crash the kernel.
(A rough sketch of such a direct-syscall attach is appended below the
quoted patch.)

Thanks,
Ze

On Wed, May 10, 2023 at 10:14 PM Yonghong Song <yhs@xxxxxxxx> wrote:
>
>
>
> On 5/10/23 5:20 AM, Ze Gao wrote:
> > BPF_LINK_TYPE_KPROBE_MULTI attaches kprobe programs through fprobe,
> > however it does not take the kprobe blacklist into consideration,
> > which likely introduces recursive traps and blows up the stack.
> >
> > This patch adds a simple check and removes those addresses that are in
> > kprobe_blacklist from one fprobe during bpf_kprobe_multi_link_attach.
> > Also, check_kprobe_address_safe is open for more future checks.
> >
> > Note that ftrace provides a recursion detection mechanism, but for
> > kprobes only; we can directly reject those cases early without turning
> > to ftrace.
> >
> > Signed-off-by: Ze Gao <zegao@xxxxxxxxxxx>
> > ---
> >  kernel/trace/bpf_trace.c | 37 +++++++++++++++++++++++++++++++++++++
> >  1 file changed, 37 insertions(+)
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 9a050e36dc6c..44c68bc06bbd 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -2764,6 +2764,37 @@ static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u3
> >          return arr.mods_cnt;
> >  }
> >
> > +static inline int check_kprobe_address_safe(unsigned long addr)
> > +{
> > +        if (within_kprobe_blacklist(addr))
> > +                return -EINVAL;
> > +        else
> > +                return 0;
> > +}
> > +
> > +static int check_bpf_kprobe_addrs_safe(unsigned long *addrs, int num)
> > +{
> > +        int i, cnt;
> > +        char symname[KSYM_NAME_LEN];
> > +
> > +        for (i = 0; i < num; ++i) {
> > +                if (check_kprobe_address_safe((unsigned long)addrs[i])) {
> > +                        lookup_symbol_name(addrs[i], symname);
> > +                        pr_warn("bpf_kprobe: %s at %lx is blacklisted\n", symname, addrs[i]);
>
> So the user request cannot be fulfilled: a warning is issued, some of the
> requested symbols are discarded, and the rest proceed. That does not
> sound like a good idea.
>
> Maybe we should do filtering in user space, e.g., in libbpf, check
> /sys/kernel/debug/kprobes/blacklist and return an error
> earlier? bpftrace/libbpf-tools/bcc-tools all do filtering before
> requesting kprobes from the kernel.
>
> > +                        /* mark blacklisted symbol for remove */
> > +                        addrs[i] = 0;
> > +                }
> > +        }
> > +
> > +        /* remove blacklisted symbol from addrs */
> > +        for (i = 0, cnt = 0; i < num; ++i) {
> > +                if (addrs[i])
> > +                        addrs[cnt++] = addrs[i];
> > +        }
> > +
> > +        return cnt;
> > +}
> > +
> >  int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
> >  {
> >          struct bpf_kprobe_multi_link *link = NULL;
> > @@ -2859,6 +2890,12 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
> >          else
> >                  link->fp.entry_handler = kprobe_multi_link_handler;
> >
> > +        cnt = check_bpf_kprobe_addrs_safe(addrs, cnt);
> > +        if (!cnt) {
> > +                err = -EINVAL;
> > +                goto error;
> > +        }
> > +
> >          link->addrs = addrs;
> >          link->cookies = cookies;
> >          link->cnt = cnt;
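
To make the "uses the bpf syscall directly" point concrete, here is a
minimal user-space sketch (illustrative only, not part of the patch). It
assumes a suitable BPF_PROG_TYPE_KPROBE program has already been loaded
elsewhere and its fd is passed in as prog_fd; the helper name
kprobe_multi_attach_raw is made up for the example. Nothing in this path
ever goes through libbpf, so a blacklist check done only there simply
never runs:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Attach an already-loaded kprobe.multi BPF program via a raw bpf(2)
 * call, bypassing libbpf entirely. prog_fd is assumed to come from a
 * prior BPF_PROG_LOAD (not shown here).
 */
static int kprobe_multi_attach_raw(int prog_fd)
{
        /* Nothing stops a caller from naming blacklisted symbols here,
         * e.g. the very functions the handler itself relies on.
         */
        const char *syms[] = { "preempt_count_add", "preempt_count_sub" };
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.link_create.prog_fd = prog_fd;
        attr.link_create.attach_type = BPF_TRACE_KPROBE_MULTI;
        attr.link_create.kprobe_multi.cnt = 2;
        attr.link_create.kprobe_multi.syms = (unsigned long)syms;

        /* Returns a link fd on success; without an in-kernel check the
         * attach succeeds and the next preempt_count_add() recurses.
         */
        return syscall(__NR_bpf, BPF_LINK_CREATE, &attr, sizeof(attr));
}

Filtering in bpftrace/libbpf-tools/bcc-tools helps the well-behaved cases,
but the kernel is the only place that sees every request like this one,
which is why I still think the check belongs in
bpf_kprobe_multi_link_attach.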