On Tue, Jun 11, 2024 at 1:09 PM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
>
> On Mon, 2024-06-10 at 16:08 -0700, Alexei Starovoitov wrote:
>
> [...]
>
> > diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> > index 50aa87f8d77f..2b54e25d2364 100644
> > --- a/include/linux/bpf_verifier.h
> > +++ b/include/linux/bpf_verifier.h
> > @@ -73,7 +73,10 @@ enum bpf_iter_state {
> >  struct bpf_reg_state {
> >  	/* Ordering of fields matters.  See states_equal() */
> >  	enum bpf_reg_type type;
> > -	/* Fixed part of pointer offset, pointer types only */
> > +	/*
> > +	 * Fixed part of pointer offset, pointer types only.
> > +	 * Or constant delta between "linked" scalars with the same ID.
> > +	 */
> >  	s32 off;
>
> After thinking about this some more I came to the conclusion that ->off
> has to be checked for scalar registers in regsafe().
> Otherwise the following test is marked as safe:
>
> char buf[10] SEC(".data.buf");
>
> SEC("socket")
> __failure
> __flag(BPF_F_TEST_STATE_FREQ)
> __naked void check_add_const_regsafe_off(void)
> {
> 	asm volatile (
> 	"r8 = %[buf];"
> 	"call %[bpf_ktime_get_ns];"
> 	"r6 = r0;"
> 	"call %[bpf_ktime_get_ns];"
> 	"r7 = r0;"
> 	"call %[bpf_ktime_get_ns];"
> 	"r1 = r0;"              /* same ids for r1 and r0 */
> 	"if r6 > r7 goto 1f;"   /* this jump can't be predicted */
> 	"r1 += 1;"              /* r1.off == +1 */
> 	"goto 2f;"
> 	"1: r1 += 100;"         /* r1.off == +100 */
> 	"goto +0;"              /* force checkpoint, must verify r1.off in regsafe() here */

The goto +0 is unnecessary. It will force a checkpoint at the target,
which is the next insn, but the next insn is the 'if' that will already
be marked as a checkpoint.
But I'll keep it in the next version, since it makes the verifier log
easier to read. Without goto +0 the verifier doesn't have a chance to
print the value of R1 after the addition.

I'll only adjust the comment to say
/* verify r1.off in regsafe() after this insn */

> 	"2: if r0 > 8 goto 3f;" /* r0 range [0,8], r1 range either [1,9] or [100,108] */
> 	"r8 += r1;"
> 	"*(u8 *)(r8 +0) = r0;"  /* potentially unsafe, buf size is 10 */
> 	"3: exit;"
> 	:
> 	: __imm(bpf_ktime_get_ns),
> 	  __imm_ptr(buf)
> 	: __clobber_common);
> }
>
> Sorry for missing this yesterday.
> Something like below is necessary.
> (To trigger ((rold->id & BPF_ADD_CONST) != (rcur->id & BPF_ADD_CONST)),
>  a variation of the test where r1 += 1 is not done is necessary.)

Yes. Will copy paste into another test.

>
> ---
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index ad11e5441860..70e44fa4f765 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -16797,6 +16797,10 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
> 	}
> 	if (!rold->precise && exact == NOT_EXACT)
> 		return true;
> +	if ((rold->id & BPF_ADD_CONST) != (rcur->id & BPF_ADD_CONST))
> +		return false;
> +	if ((rold->id & BPF_ADD_CONST) && (rold->off != rcur->off))
> +		return false;

Thanks for the diff.
I hadn't considered the case where the explored state has
R1=Pscalar(id=3) and its boundaries were not adjusted by a later
find_equal_scalars(). Only precision was propagated.
And it would incorrectly match in regsafe() against the
current R1=scalar(id=3+any).
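
Roughly what I have in mind for that second test (untested sketch, the
function name and comments are mine): same shape as the test above, but
the fall-through path does not add a constant to r1, so at the join only
one of the two states has BPF_ADD_CONST set on r1.id and the new
((rold->id & BPF_ADD_CONST) != (rcur->id & BPF_ADD_CONST)) check is what
has to reject the state match:

SEC("socket")
__failure
__flag(BPF_F_TEST_STATE_FREQ)
__naked void check_add_const_regsafe_no_add(void)
{
	asm volatile (
	"r8 = %[buf];"
	"call %[bpf_ktime_get_ns];"
	"r6 = r0;"
	"call %[bpf_ktime_get_ns];"
	"r7 = r0;"
	"call %[bpf_ktime_get_ns];"
	"r1 = r0;"              /* same ids for r1 and r0 */
	"if r6 > r7 goto 1f;"   /* this jump can't be predicted */
	"goto 2f;"              /* r1 keeps the id without BPF_ADD_CONST */
	"1: r1 += 100;"         /* r1.off == +100, BPF_ADD_CONST set */
	"goto +0;"              /* verify BPF_ADD_CONST mismatch in regsafe() after this insn */
	"2: if r0 > 8 goto 3f;" /* r0 range [0,8], r1 is either r0 or r0+100 */
	"r8 += r1;"
	"*(u8 *)(r8 +0) = r0;"  /* potentially unsafe, buf size is 10 */
	"3: exit;"
	:
	: __imm(bpf_ktime_get_ns),
	  __imm_ptr(buf)
	: __clobber_common);
}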