On Thu, Apr 20, 2023 at 8:46 PM Joanne Koong <joannelkoong@xxxxxxxxx> wrote:
>
> On Thu, Apr 20, 2023 at 11:38 AM Alexei Starovoitov
> <alexei.starovoitov@xxxxxxxxx> wrote:
> >
> > On Thu, Apr 20, 2023 at 12:14:10AM -0700, Joanne Koong wrote:
> > >  	return obj;
> > > @@ -2369,6 +2394,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
> > >  BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
> > >  BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
> > >  BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
> > > +BTF_ID_FLAGS(func, bpf_dynptr_adjust)
> >
> > I've missed this earlier.
> > Shouldn't we change all the existing dynptr kfuncs to be KF_TRUSTED_ARGS?
> > Otherwise when people start passing bpf_dynptr-s from kernel code
> > (like fuse-bpf is planning to do)
> > the bpf prog might get vanilla ptr_to_btf_id to bpf_dynptr_kern.
> > It's probably not possible right now, so not a high-pri issue, but still.
> > Or something in the verifier makes sure that dynptr-s are all trusted?
>
> In my understanding, the checks the verifier enforces for
> KF_TRUSTED_ARGS are that the reg->offset is 0 and the reg may not be
> null. The verifier logic does this for dynptrs currently, it enforces
> that reg->offset is 0 (in stack_slot_obj_get_spi()) and that the
> reg->type is PTR_TO_STACK or CONST_PTR_TO_DYNPTR (in
> check_kfunc_args() for KF_ARG_PTR_TO_DYNPTR case). But maybe it's a
> good idea to add the KF_TRUSTED_ARGS flag anyways in case more safety
> checks are added to KF_TRUSTED_ARGS in the future?

Yeah. You're right.
The verifier is doing the same checks for dynptr and for trusted ptrs.
So adding KF_TRUSTED_ARGS to bpf_dynptr_adjust is not mandatory.
Maybe an opportunity to generalize the checks between
KF_ARG_PTR_TO_BTF_ID and KF_ARG_PTR_TO_DYNPTR.

But KF_TRUSTED_ARGS is necessary for bpf_dynptr_from_skb, otherwise
an old style ptr_to_btf_id skb can be passed in.
For example, the following passes test_progs:

diff --git a/net/core/filter.c b/net/core/filter.c
index d9ce04ca22ce..abb14036b455 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -11718,6 +11718,7 @@ static int __init bpf_kfunc_init(void)
 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_XMIT, &bpf_kfunc_set_skb);
 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_SEG6LOCAL, &bpf_kfunc_set_skb);
 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_NETFILTER, &bpf_kfunc_set_skb);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_kfunc_set_skb);
 	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
 }
 late_initcall(bpf_kfunc_init);
diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c
index b2fa6c47ecc0..bd8fbc3e04ea 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_success.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
@@ -4,6 +4,7 @@
 #include <string.h>
 #include <linux/bpf.h>
 #include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
 #include "bpf_misc.h"
 #include "bpf_kfuncs.h"
 #include "errno.h"
@@ -187,6 +188,15 @@ int test_skb_readonly(struct __sk_buff *skb)
 	return 1;
 }
 
+SEC("fentry/__kfree_skb")
+int BPF_PROG(test_skb, struct __sk_buff *skb)
+{
+	struct bpf_dynptr ptr;
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+	return 0;
+}

but it shouldn't: the skb in fentry is not trusted.
It's not an issue right now, because bpf_dynptr_from_skb() is enabled
for networking prog types only, but BPF_PROG_TYPE_NETFILTER is already
blurring the boundary.
It's more networking than tracing, and normal tracing should be able
to examine skbs; dynptr allows doing that nicely.
Not a blocker for this set. Just something to follow up on.
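
If we do add KF_TRUSTED_ARGS there, the registration-side change should be
tiny. A rough, untested sketch (the BTF set name below is from memory and
may not match the tree exactly):

/* untested sketch: require a trusted __sk_buff pointer when creating the dynptr */
BTF_SET8_START(bpf_kfunc_check_set_skb)
BTF_ID_FLAGS(func, bpf_dynptr_from_skb, KF_TRUSTED_ARGS)
BTF_SET8_END(bpf_kfunc_check_set_skb)

With that flag the verifier should reject the fentry prog above, since the
skb argument there is a plain ptr_to_btf_id rather than a trusted pointer.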