On Sun, Sep 18, 2022 at 5:08 PM Kumar Kartikeya Dwivedi
<memxor@xxxxxxxxx> wrote:
>
> On Thu, 8 Sept 2022 at 02:16, Joanne Koong <joannelkoong@xxxxxxxxx> wrote:
> >
> > Add a new helper function, bpf_dynptr_iterator:
> >
> >   long bpf_dynptr_iterator(struct bpf_dynptr *ptr, void *callback_fn,
> >                            void *callback_ctx, u64 flags)
> >
> > where callback_fn is defined as:
> >
> >   long (*callback_fn)(struct bpf_dynptr *ptr, void *ctx)
> >
> > and callback_fn returns the number of bytes to advance the
> > dynptr by (or an error code in the case of error). The iteration
> > will stop if the callback_fn returns 0 or an error or tries to
> > advance by more bytes than available.
> >
> > Signed-off-by: Joanne Koong <joannelkoong@xxxxxxxxx>
> > ---
>
> This is buggy as is.
>
> A user can just reinitialize the dynptr from inside the callback, and
> then you will never stop advancing it inside your helper, so an
> infinite loop can be constructed. The stack frame of the caller is
> accessible using callback_ctx.
>
> For example (modifying your selftest):
>
> diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c
> b/tools/testing/selftests/bpf/progs/dynptr_success.c
> index 22164ad6df9d..a9e78316c508 100644
> --- a/tools/testing/selftests/bpf/progs/dynptr_success.c
> +++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
> @@ -540,6 +540,19 @@ struct iter_ctx {
>
>  static int iter_callback(struct bpf_dynptr *ptr, struct iter_ctx *ctx)
>  {
> +	struct map_value *map_val;
> +	int key = 0;
> +
> +	map_val = bpf_map_lookup_elem(&array_map2, &key);
> +	if (!map_val) {
> +		return 0;
> +	}
> +
> +	*(void **)ptr = 0;
> +	*((void **)ptr + 1) = 0;
> +	bpf_dynptr_from_mem(map_val, 2, 0, ptr);
> +	return 1;
> +
>  	if (ctx->trigger_err_erange)
>  		return bpf_dynptr_get_size(ptr) + 1;
>
> ... leads to a lockup.
>
> It doesn't have to be bpf_ringbuf_reserve_dynptr; it can just be
> bpf_dynptr_from_mem, which also gets around the reference state
> restrictions inside callbacks.
>
> You cannot prevent overwriting dynptr stack slots in general. Some of
> them don't have to be released, and forbidding reuse of those slots
> would be prohibitive for stack reuse.
>
> So it seems like you need to temporarily mark both of the slots as
> immutable in the caller's stack state during exploration of the
> callback. Setting some flag in r1 for the callback is not enough (as
> the callback can reload a PTR_TO_STACK of the caller's stack frame
> pointing at the dynptr from callback_ctx); it needs to be set in
> spilled_ptr.

This sounds overcomplicated. We need a local copy of the dynptr for
the duration of the iteration and should work with that copy.
Basically an internal bpf_dynptr_clone(). See my other reply in this
thread.
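
To sketch what I mean (rough illustration only, not the actual
implementation: bpf_dynptr_advance() is assumed here, and error
handling is hand-waved; imagine this living in kernel/bpf/helpers.c):

	/* The callback is only ever handed 'clone', so reinitializing
	 * the dynptr slots on the caller's stack cannot affect loop
	 * termination.
	 */
	BPF_CALL_4(bpf_dynptr_iterator, struct bpf_dynptr_kern *, ptr,
		   void *, callback_fn, void *, callback_ctx, u64, flags)
	{
		bpf_callback_t callback = (bpf_callback_t)callback_fn;
		struct bpf_dynptr_kern clone;
		long nr_bytes;
		int err;

		if (flags)
			return -EINVAL;

		clone = *ptr;	/* private copy, invisible to the program */
		for (;;) {
			nr_bytes = (long)callback((u64)(long)&clone,
						  (u64)(long)callback_ctx,
						  0, 0, 0);
			if (nr_bytes <= 0)
				return nr_bytes;
			/* advance only the clone's view; assumed helper,
			 * errors out if the callback tries to advance
			 * past the end instead of looping forever
			 */
			err = bpf_dynptr_advance(&clone, nr_bytes);
			if (err)
				return err;
		}
	}

With that, whatever the callback does to the original stack slots is
harmless: the loop only consults its private copy, and it terminates
once the clone's view is exhausted or the callback returns <= 0.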
> Then certain operations that modify the view of the dynptr would not
> accept a dynptr with that type flag set (e.g. trim, advance, init
> functions), while for the others, which only operate on the
> underlying view, you fold the flag (e.g. read/write/dynptr_data).
>
> It is the difference between struct bpf_dynptr * vs const struct
> bpf_dynptr *; we need to give the callback access to the latter.
> I.e. it should still allow accessing the dynptr's view, but not
> modifying it.
>
> And at the risk of sounding like a broken record (this is the same
> suggestion Martin made for the skb/xdp v6 set), the view's mutability
> should ideally also be part of the verifier's state. That doesn't
> preclude runtime tracking later, but there seems to be no strong
> motivation for that right now.

The unexpected NULL for bpf_dynptr_data() vs bpf_dynptr_data_rdonly()
argument from Martin is pretty convincing, I agree. So I guess I
don't mind tracking it statically at this point.

> >  include/uapi/linux/bpf.h       | 20 ++++++++++++++
> >  kernel/bpf/helpers.c           | 48 +++++++++++++++++++++++++++++---
> >  kernel/bpf/verifier.c          | 27 +++++++++++++++++++
> >  tools/include/uapi/linux/bpf.h | 20 ++++++++++++++
> >  4 files changed, 111 insertions(+), 4 deletions(-)

please trim irrelevant parts

[...]