Re: [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes

On Mon, Jan 9, 2023 at 3:52 AM Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx> wrote:
>
> On Thu, Jan 05, 2023 at 08:36:07AM IST, Alexei Starovoitov wrote:
> > On Sun, Jan 01, 2023 at 02:03:57PM +0530, Kumar Kartikeya Dwivedi wrote:
> > > Currently, while reads are disallowed for dynptr stack slots, writes are
> > > not. Reads don't work from both direct access and helpers, while writes
> > > do work in both cases, but have the effect of overwriting the slot_type.
> >
> > Unrelated to this patch set, but disallowing reads from dynptr slots
> > seems like unnecessary restriction.
> > We allow reads from spilled slots and conceptually dynptr slots should
> > fall in is_spilled_reg() category in check_stack_read_*().
> >
> > We already can do:
> > d = bpf_rdonly_cast(dynptr, bpf_core_type_id_kernel(struct bpf_dynptr_kern))
> > d->size;
>
> Not sure this cast is required; these can just be reads from the stack, and clang
> will generate CO-RE relocatable accesses when cast to the right struct with the
> preserve_access_index attribute set? Or did I miss something?

rdonly_cast is required today, because the verifier rejects raw reads.
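Concretely, the workaround available today looks roughly like this (a sketch only, not from the patch set; it assumes struct bpf_dynptr_kern is visible via vmlinux.h and that the dynptr has already been initialized by a helper such as bpf_ringbuf_reserve_dynptr):

	/* Sketch: launder the stack pointer through bpf_rdonly_cast(),
	 * because the verifier currently rejects direct reads of
	 * dynptr stack slots.
	 */
	struct bpf_dynptr dynptr;
	struct bpf_dynptr_kern *d;

	bpf_ringbuf_reserve_dynptr(&rb, 64, 0, &dynptr);

	d = bpf_rdonly_cast(&dynptr,
			    bpf_core_type_id_kernel(struct bpf_dynptr_kern));
	if (d)
		bpf_printk("dynptr size: %u", d->size);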

> >
> > With verifier allowing reads into dynptr we can also enable bpf_cast_to_kern_ctx()
> > to convert struct bpf_dynptr to struct bpf_dynptr_kern and enable
> > even faster reads.
>
> I think rdonly_cast is unnecessary, just enabling reads into stack will be
> enough to enable this.

Right. I was thinking that bpf_cast_to_kern_ctx will make things
more robust, but plain vanilla cast to struct bpf_dynptr_kern *
and using CO-RE will work when the verifier enables reads.
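Once the verifier permits those reads, the kfunc call drops out entirely and a plain cast should be enough (again a sketch, assuming struct bpf_dynptr_kern is pulled in from vmlinux.h with preserve_access_index set, so clang emits CO-RE relocations for the field offsets):

	/* Sketch: plain cast plus CO-RE, no rdonly_cast needed,
	 * once raw reads of dynptr stack slots are allowed.
	 */
	struct bpf_dynptr_kern *d = (struct bpf_dynptr_kern *)&dynptr;

	/* CO-RE relocates the offset of `size` against the
	 * running kernel's layout of bpf_dynptr_kern. */
	__u32 size = d->size;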

> relocatable enum values which check for set bits etc.). But in the end how do
> you define such an interface, will it be UAPI like xdp_md or __sk_buff, or
> unstable like kfunc, or just best-effort stable as long as user is making use of
> CO-RE relocs?

We can start best-effort with CO-RE.
All our fixed uapi things like __sk_buff and xdp_md have ongoing
usability issues. People always need something new and we keep
adding to those structs every now and then.
struct __sk_buff is quite scary. Its vlan_* fields were a bit of a pain
to adjust when the corresponding vlan changes were made in sk_buff
and the networking core.
Eventually we still had to do:
        /* Simulate the following kernel macro:
         *   #define skb_shinfo(SKB) ((struct skb_shared_info *)(skb_end_pointer(SKB)))
         */
        shared_info = bpf_rdonly_cast(kskb->head + kskb->end,
                bpf_core_type_id_kernel(struct skb_shared_info));
I'm arguing that everything we expose to bpf progs should
be unstable in the beginning.


