Re: [PATCH v2 bpf-next 08/20] bpf: Add x86-64 JIT support for bpf_cast_user instruction.

On Sat, Feb 10, 2024 at 12:40 AM Kumar Kartikeya Dwivedi
<memxor@xxxxxxxxx> wrote:
>
> On Fri, 9 Feb 2024 at 05:06, Alexei Starovoitov
> <alexei.starovoitov@xxxxxxxxx> wrote:
> >
> > From: Alexei Starovoitov <ast@xxxxxxxxxx>
> >
> > LLVM generates bpf_cast_kern and bpf_cast_user instructions while translating
> > pointers with __attribute__((address_space(1))).
> >
> > rX = cast_kern(rY) is processed by the verifier and converted to
> > normal 32-bit move: wX = wY
> >
> > bpf_cast_user has to be converted by JIT.
> >
> > rX = cast_user(rY) is
> >
> > aux_reg = upper_32_bits of arena->user_vm_start
> > aux_reg <<= 32
> > wX = wY // clear upper 32 bits of dst register
> > if (wX) // if not zero add upper bits of user_vm_start
> >   wX |= aux_reg
> >
>
> Would this be ok if rY is somehow aligned at a 4GB boundary and
> the lower 32 bits end up being zero?
> Then this transformation would confuse it with the NULL case, right?

Yes, it will. I tried to fix it by reserving a zero page,
but the end result was bad. See the discussion with Barret.
So we decided to drop this idea.
We might come back to it eventually.
Also, I was thinking of doing
if (rX) instead of if (wX) to mitigate it a bit,
but that is probably wrong too.
The best mitigation is inside the bpf program itself: never return a pointer
whose lower 32 bits are zero from the bpf_alloc() function.
In general, with the latest llvm we see close to zero cast_user instructions
when a bpf prog is not mixing (void *) with (void __arena *) casts,
so it shouldn't be an issue in practice with the patches as-is.
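For reference, a minimal C sketch of the rX = cast_user(rY) lowering quoted
above, i.e. what the JIT effectively emits. The function name,
the user_vm_start parameter, and the use of C values instead of registers
are illustrative only; it just models the semantics and shows why a pointer
whose lower 32 bits happen to be zero collapses to NULL:

#include <linux/types.h>

/*
 * Illustrative sketch only: models the rX = cast_user(rY) lowering
 * from the quoted pseudocode. The real transformation is emitted as
 * x86-64 instructions by the JIT; the names here are made up.
 */
static inline u64 cast_user_sketch(u64 kern_ptr, u64 user_vm_start)
{
	u64 aux_reg = user_vm_start >> 32;	/* upper_32_bits of user_vm_start */
	u64 dst = (u32)kern_ptr;		/* wX = wY: clear upper 32 bits */

	aux_reg <<= 32;
	if (dst)				/* if (wX) != 0 ... */
		dst |= aux_reg;			/* ... OR in upper bits of user_vm_start */

	/*
	 * Corner case from the discussion: if the lower 32 bits of
	 * kern_ptr are zero, dst stays 0 and is indistinguishable
	 * from NULL.
	 */
	return dst;
}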
