Re: [PATCH bpf v4] bpf: verifier: prevent userspace memory access

On Fri, Mar 22, 2024 at 9:28 AM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> On 3/22/24 4:05 PM, Puranjay Mohan wrote:
> [...]
> >>> +           /* Make it impossible to de-reference a userspace address */
> >>> +           if (BPF_CLASS(insn->code) == BPF_LDX &&
> >>> +               (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
> >>> +                BPF_MODE(insn->code) == BPF_PROBE_MEMSX)) {
> >>> +                   struct bpf_insn *patch = &insn_buf[0];
> >>> +                   u64 uaddress_limit = bpf_arch_uaddress_limit();
> >>> +
> >>> +                   if (!uaddress_limit)
> >>> +                           goto next_insn;
> >>> +
> >>> +                   *patch++ = BPF_MOV64_REG(BPF_REG_AX, insn->src_reg);
> >>> +                   if (insn->off)
> >>> +                           *patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_AX, insn->off);
> >>> +                   *patch++ = BPF_ALU64_IMM(BPF_RSH, BPF_REG_AX, 32);
> >>> +                   *patch++ = BPF_JMP_IMM(BPF_JLE, BPF_REG_AX, uaddress_limit >> 32, 2);
> >>> +                   *patch++ = *insn;
> >>> +                   *patch++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
> >>> +                   *patch++ = BPF_MOV64_IMM(insn->dst_reg, 0);
> >>
> >> But how does this address other cases where we could fault e.g. non-canonical,
> >> vsyscall page, etc? Technically, we would have to call copy_from_kernel_nofault_allowed()
> >> to really address all the cases aside from the overflow (good catch btw!) where a kernel
> >> address turns into a user address.
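
(For reference, the x86 version of that helper looks roughly like the
following, trimmed down, so treat it as a sketch rather than the exact
source:

   bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
   {
           unsigned long vaddr = (unsigned long)unsafe_src;

           /* Disallow normal userspace plus the guard page. */
           if (vaddr < TASK_SIZE_MAX + PAGE_SIZE)
                   return false;

           /* vsyscall page: above TASK_SIZE_MAX, but reads can fault. */
           if (is_vsyscall_vaddr(vaddr))
                   return false;

           /* ... plus a canonical-address check on the rest. */
           return __is_canonical_address(vaddr, boot_cpu_data.x86_virt_bits);
   }

so "all the cases" means: user addresses, the vsyscall page, and
non-canonical addresses.)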
> >
> > So, we are trying to ~simulate a call to
> > copy_from_kernel_nofault_allowed() here. If the address under
> > consideration is below TASK_SIZE (TASK_SIZE + 4GB to be precise) then we
> > skip that load because that address could be mapped by the user.
> >
> > If the address is above TASK_SIZE + 4GB, we allow the load and it could
> > cause a fault if the address is invalid, non-canonical etc. Taking the
> > fault is fine because the JIT will add an exception table entry
> > for that load with BPF_PROBE_MEM.
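
In other words, the rewritten sequence behaves roughly like this C
(a sketch for a 64-bit load, not the actual JIT output):

   u64 addr = src + off;             /* MOV64_REG + ALU64_IMM(ADD)        */

   if ((addr >> 32) <= (uaddress_limit >> 32))
           dst = 0;                  /* possibly-user address: skip load  */
   else
           dst = *(u64 *)addr;       /* kernel address: probed load with
                                      * an exception table entry          */

Comparing only the upper 32 bits is what makes the cutoff
"TASK_SIZE + 4GB" rather than TASK_SIZE exactly.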
>
> Are you sure? I don't think the kernel handles non-canonical fixup.

I believe it handles it fine; otherwise our selftest bpf_testmod_return_ptr:
   case 4: return (void *)(1ull << 60);    /* non-canonical and invalid */
would have been crashing for the 3 years we've been running it.

> > The vsyscall page is special; this approach skips all loads from this
> > page. I am not sure if that is acceptable.
>
> The bpf_probe_read_kernel() does handle it fine via copy_from_kernel_nofault().
>
> So there is a tail risk that BPF_PROBE_* could trigger a crash.

For this patch let's do:
   return max(TASK_SIZE_MAX + PAGE_SIZE, VSYSCALL_ADDR)
to cover both with one check?
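
i.e. on x86 something like this (untested sketch):

   u64 bpf_arch_uaddress_limit(void)
   {
           /* Treat everything below this limit as potentially user;
            * taking the max also puts the vsyscall page below it.
            */
           return max(TASK_SIZE_MAX + PAGE_SIZE, VSYSCALL_ADDR);
   }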

> Other archs might
> have other quirks, e.g. in case of loongarch it says highest bit set means kernel
> space.

Let's tackle loongarch, with whatever quirks it has, separately.




