On Wed, 4 Dec 2024 at 21:12, Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
>
> On Tue, 2024-12-03 at 18:41 -0800, Kumar Kartikeya Dwivedi wrote:
>
> [...]
>
> > +/* r2 with offset is checked, which marks r1 with off=0 as non-NULL */
> > +SEC("tp_btf/bpf_testmod_test_raw_tp_null")
> > +__failure
> > +__msg("3: (07) r2 += 8 ; R2_w=trusted_ptr_or_null_sk_buff(id=1,off=8)")
> > +__msg("4: (15) if r2 == 0x0 goto pc+2 ; R2_w=trusted_ptr_or_null_sk_buff(id=2,off=8)")
> > +__msg("5: (bf) r1 = r1 ; R1_w=trusted_ptr_sk_buff()")
>
> This looks like a bug.
> 'r1 != 0' does not follow from 'r2 == r1 + 8 and r2 != 0'.
>

Hmm, yes, it's broken. I'm realizing that where we do it now, we walk
r1 first, so we won't see r2's off != 0 until after we've already
marked it. I guess we need to do the check sooner, outside this
function, in mark_ptr_or_null_regs. There we have the register being
operated on, so if its off != 0 we don't walk all regs in the state.
Do you think that would fix this?

> > +int BPF_PROG(test_raw_tp_null_copy_check_with_off, struct sk_buff *skb)
> > +{
> > +	asm volatile (
> > +	"r1 = *(u64 *)(r1 +0);		\
> > +	 r2 = r1;			\
> > +	 r3 = 0;			\
> > +	 r2 += 8;			\
> > +	 if r2 == 0 goto jmp2;		\
> > +	 r1 = r1;			\
> > +	 *(u64 *)(r3 +0) = r3;		\
> > +	 jmp2:				"
> > +	::
> > +	: __clobber_all
> > +	);
> > +	return 0;
> > +}
>
> [...]