On Mon, Jun 05, 2023 at 04:00:16PM -0700, Alexei Starovoitov wrote:
> On Sun, Jun 4, 2023 at 7:01 AM Jiri Olsa <jolsa@xxxxxxxxxx> wrote:
> >
> > Anastasios reported a crash on the stable 5.15 kernel with the following
> > bpf program attached to an lsm hook:
> >
> >   SEC("lsm.s/bprm_creds_for_exec")
> >   int BPF_PROG(bprm_creds_for_exec, struct linux_binprm *bprm)
> >   {
> >           struct path *path = &bprm->executable->f_path;
> >           char p[128] = { 0 };
> >
> >           bpf_d_path(path, p, 128);
> >           return 0;
> >   }
> >
> > but bprm->executable can be NULL, so the bpf_d_path call will crash:
> >
> >   BUG: kernel NULL pointer dereference, address: 0000000000000018
> >   #PF: supervisor read access in kernel mode
> >   #PF: error_code(0x0000) - not-present page
> >   PGD 0 P4D 0
> >   Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC NOPTI
> >   ...
> >   RIP: 0010:d_path+0x22/0x280
> >   ...
> >   Call Trace:
> >    <TASK>
> >    bpf_d_path+0x21/0x60
> >    bpf_prog_db9cf176e84498d9_bprm_creds_for_exec+0x94/0x99
> >    bpf_trampoline_6442506293_0+0x55/0x1000
> >    bpf_lsm_bprm_creds_for_exec+0x5/0x10
> >    security_bprm_creds_for_exec+0x29/0x40
> >    bprm_execve+0x1c1/0x900
> >    do_execveat_common.isra.0+0x1af/0x260
> >    __x64_sys_execve+0x32/0x40
> >
> > It's a problem for all stable trees with the bpf_d_path helper, which
> > was added in 5.9.
> >
> > This issue is fixed in the current bpf code, where we identify and mark
> > trusted pointers, so the above code would fail to load.
> >
> > For the sake of the stable trees and to work around a potentially broken
> > verifier in the future, add code that reads the path object from the
> > passed pointer and verifies that it's valid in kernel space.
> >
> > Cc: stable@xxxxxxxxxxxxxxx # v5.9+
> > Fixes: 6e22ab9da793 ("bpf: Add d_path helper")
> > Suggested-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> > Reported-by: Anastasios Papagiannis <tasos.papagiannnis@xxxxxxxxx>
> > Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
> > ---
> >  kernel/trace/bpf_trace.c | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 9a050e36dc6c..aecd98ee73dc 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -900,12 +900,22 @@ static const struct bpf_func_proto bpf_send_signal_thread_proto = {
> >
> >  BPF_CALL_3(bpf_d_path, struct path *, path, char *, buf, u32, sz)
> >  {
> > +	struct path copy;
> >  	long len;
> >  	char *p;
> >
> >  	if (!sz)
> >  		return 0;
> >
> > +	/*
> > +	 * The path pointer is verified as trusted and safe to use,
> > +	 * but let's double check it's valid anyway to workaround
> > +	 * potentially broken verifier.
> > +	 */
> > +	len = copy_from_kernel_nofault(&copy, path, sizeof(*path));
> > +	if (len < 0)
> > +		return len;
> > +
> >  	p = d_path(path, buf, sz);
>
> Since we copied it anyway, let's use a stable copy here?
> Otherwise somebody might send a patch to remove 'dead code'.

ok, will change that

jirka
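
[For reference, a minimal sketch of what the agreed follow-up change could
look like, reusing the local 'copy' introduced by the patch above; the
actual v2 hunk is not part of this thread, so treat this as an
illustration only:

	len = copy_from_kernel_nofault(&copy, path, sizeof(*path));
	if (len < 0)
		return len;

	/* Pass the validated copy so the check above is not "dead code". */
	p = d_path(&copy, buf, sz);

Using the stable copy also guards against the pointed-to path changing
underneath the helper between the validation and the d_path call.]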