Re: [bpf PATCH v3] bpf: verifier, do_refine_retval_range may clamp umin to 0 incorrectly

On Thu, Jan 30, 2020 at 09:38:07AM -0800, John Fastabend wrote:
> Alexei Starovoitov wrote:
> > On Wed, Jan 29, 2020 at 02:52:10PM -0800, John Fastabend wrote:
> > > Daniel Borkmann wrote:
> > > > On 1/29/20 8:28 PM, Alexei Starovoitov wrote:
> > > > > On Wed, Jan 29, 2020 at 8:25 AM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
> > > > >>>
> > > > >>> Fixes: 849fa50662fbc ("bpf/verifier: refine retval R0 state for bpf_get_stack helper")
> > > > >>> Signed-off-by: John Fastabend <john.fastabend@xxxxxxxxx>
> > > > >>
> > > > >> Applied, thanks!
> > > > > 
> > > > > Daniel,
> > > > > did you run the selftests before applying?
> > > > > This patch breaks two.
> > > > > We have to find a different fix.
> > > > > 
> > > > > ./test_progs -t get_stack
> > > > > 68: (85) call bpf_get_stack#67
> > > > >   R0=inv(id=0,smax_value=800) R1_w=ctx(id=0,off=0,imm=0)
> > > > > R2_w=map_value(id=0,off=0,ks=4,vs=1600,umax_value=4294967295,var_off=(0x0;
> > > > > 0xffffffff)) R3_w=inv(id=0,umax_value=4294967295,var_off=(0x0;
> > > > > 0xffffffff)) R4_w=inv0 R6=ctx(id=0,off=0,im?
> > > > > R2 unbounded memory access, make sure to bounds check any array access
> > > > > into a map
> > > > 
> > > > Sigh, I had it in my wip pre-rebase tree when running tests. I've reverted it
> > > > from the tree since this needs to be addressed. Sorry for the trouble.
> > > 
> > > Thanks, I'm looking into it now. Not sure how I missed it in the
> > > selftests; either I was on an older branch or I missed the test somehow.
> > > I've updated my toolchain and kernel now, so it shouldn't happen again.
> > 
> > Looks like smax_value was nuked by <<32 >>32 shifts.
> > 53: (bf) r8 = r0   // R0=inv(id=0,smax_value=800)
> > 54: (67) r8 <<= 32  // R8->smax_value = S64_MAX; in adjust_scalar_min_max_vals()
> > 55: (c7) r8 s>>= 32
> > ; if (usize < 0)
> > 56: (c5) if r8 s< 0x0 goto pc+28
> > // and here "less than zero check" doesn't help anymore.
> > 
> > Not sure how to fix it yet, but the code pattern used in
> > progs/test_get_stack_rawtp.c
> > is real. Plenty of bpf progs rely on this.
> 
> OK, I see what happened: I have some patches on my llvm tree and forgot to
> pop them off before running selftests :/ This <<=32 s>>=32 pattern pops up
> in a few places for us and causes verifier trouble whenever it is hit.
> 
> I think I have a fix for this in llvm, if that is OK. And we can make
> the BPF_RSH and BPF_LSH verifier bounds tighter if we also define the
> architecture expectation on the JIT side. For example, in the x86 JIT code here,
> 
> 146:   shl    $0x20,%rdi
> 14a:   shr    $0x20,%rdi
> 
> the shr clears the most significant bit, so we can say something about
> the minimum signed value. I'll generate a couple of patches today and send
> them out to discuss. It's probably easier to explain with code and examples.

How about we detect this pattern on the verifier side and replace it with
a pseudo insn that does a 32-bit sign extend. Most archs have a dedicated
cpu instruction that does this much more efficiently than two shifts.
If a JIT doesn't implement the pseudo insn yet, the verifier can convert
it back into the two shifts.
During verification the pseudo_sign_extend op can then be processed easily.
So the idea:
1. pattern match the sequence of two shifts in a pass similar to
   replace_map_fd_with_map_ptr() before the main do_check()
2. pseudo_sign_extend gets processed in do_check(), doing the right thing
   with bpf_reg_state
3. JIT this pseudo insn, or convert it back into two shifts
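Step 1 could be sketched roughly as below. This is an illustration in plain C,
not verifier code: the struct, the PSEUDO_SEXT32 opcode value, and the helper
name are all hypothetical stand-ins, though the opcode constants match the real
BPF encoding (0x67 = ALU64|LSH|K and 0xc7 = ALU64|ARSH|K, the two insns visible
in the log above).

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for struct bpf_insn, for illustration only. */
struct insn {
	uint8_t code;     /* opcode */
	uint8_t dst_reg;
	int32_t imm;
};

#define BPF_ALU64 0x07
#define BPF_LSH   0x60
#define BPF_ARSH  0xc0
#define BPF_K     0x00
/* Hypothetical opcode for the 32-bit sign-extend pseudo insn. */
#define PSEUDO_SEXT32 0xff

/* Return true if insn[i], insn[i+1] are "rX <<= 32; rX s>>= 32" on the
 * same register, and rewrite them into the single pseudo insn. */
static bool match_sext32(struct insn *insn, int i, int len)
{
	if (i + 1 >= len)
		return false;
	if (insn[i].code == (BPF_ALU64 | BPF_LSH | BPF_K) &&
	    insn[i + 1].code == (BPF_ALU64 | BPF_ARSH | BPF_K) &&
	    insn[i].dst_reg == insn[i + 1].dst_reg &&
	    insn[i].imm == 32 && insn[i + 1].imm == 32) {
		insn[i].code = PSEUDO_SEXT32;
		insn[i + 1].code = 0; /* would be patched out / turned into a nop */
		return true;
	}
	return false;
}
```

The real pass would of course also have to verify that no other insn jumps
into the middle of the pair before rewriting it.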

Long term, we can promote this pseudo insn into uapi and let llvm emit it directly.
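For what it's worth, the scalar semantics of such a pseudo insn are just a
32-bit sign extension, which is the identity on values already in 32-bit range.
That is exactly why it can preserve a known smax_value like 800 where the
two-shift modeling widens it to S64_MAX. A quick userspace check (plain C,
not kernel code):

```c
#include <stdint.h>

/* The scalar semantics the pseudo insn would have: take the low 32 bits
 * of the register and sign-extend them to 64 bits (what x86 movsxd does). */
static int64_t sext32(int64_t v)
{
	return (int64_t)(int32_t)v;
}
```

Since sext32(x) == x for any x in [S32_MIN, S32_MAX], a tracked bound such as
smax_value = 800 can pass through unchanged, and the subsequent "s< 0" branch
still meaningfully splits the range.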


