On Tue, Nov 21, 2023 at 8:20 AM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
>
> On Mon, 2023-11-20 at 16:22 -0800, Andrii Nakryiko wrote:
> [...]
>
> > +SEC("raw_tp")
> > +__log_level(2)
> > +__success
> > +__naked void partial_stack_load_preserves_zeros(void)
> > +{
> > +	asm volatile (
> > +		/* fp-8 is all STACK_ZERO */
> > +		"*(u64 *)(r10 -8) = 0;"
>
> This fails when compiled with llvm-16; BPF ST assembly support is only
> present in llvm-18. If we want to preserve support for llvm-16, this
> test would require ifdefs or the following patch:

Ok, I'll use that, thanks! I need BPF_ST here to avoid the register
spill code path.

>
> @@ -3,6 +3,7 @@
>
>  #include <linux/bpf.h>
>  #include <bpf/bpf_helpers.h>
> +#include "../../../include/linux/filter.h"
>  #include "bpf_misc.h"
>
>  struct {
> @@ -510,7 +511,7 @@ __naked void partial_stack_load_preserves_zeros(void)
>  {
>  	asm volatile (
>  		/* fp-8 is all STACK_ZERO */
> -		"*(u64 *)(r10 -8) = 0;"
> +		".8byte %[fp8_st_zero];"
>
>  		/* fp-16 is const zero register */
>  		"r0 = 0;"
> @@ -559,7 +560,8 @@ __naked void partial_stack_load_preserves_zeros(void)
>  	"r0 = 0;"
>  	"exit;"
>  	:
> -	: __imm_ptr(single_byte_buf)
> +	: __imm_ptr(single_byte_buf),
> +	  __imm_insn(fp8_st_zero, BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0))
>  	: __clobber_common);
>  }
>
> > +		/* fp-16 is const zero register */
> > +		"r0 = 0;"
> > +		"*(u64 *)(r10 -16) = r0;"
> > +
> > +		/* load single U8 from non-aligned STACK_ZERO slot */
> > +		"r1 = %[single_byte_buf];"
> > +		"r2 = *(u8 *)(r10 -1);"
> > +		"r1 += r2;" /* this should be fine */
>
> Question: the comment suggests that adding something other than zero
> would be an error; however, the error would only be reported if a *r1
> access is attempted, so maybe add such an access?
> E.g. "*(u8 *)(r1 + 0) = r2;"?

You are right! I assumed we check bounds during pointer increment, but
I was wrong. I added a dereference instruction everywhere, thanks!

>
> [...]
>
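
For reference, here is a rough sketch of how the test could look once
both changes are folded in. This is only an illustration assembled from
the pieces quoted above, not the actual respin; the real test has more
load/store variants, and single_byte_buf is assumed to be the one-byte
buffer already defined elsewhere in the selftest file being patched:

SEC("raw_tp")
__log_level(2)
__success
__naked void partial_stack_load_preserves_zeros(void)
{
	asm volatile (
		/* fp-8 is all STACK_ZERO; emitted as a raw BPF_ST insn via
		 * __imm_insn() so that llvm-16 (no ST mnemonic support) can
		 * still assemble the program
		 */
		".8byte %[fp8_st_zero];"

		/* load single U8 from non-aligned STACK_ZERO slot */
		"r1 = %[single_byte_buf];"
		"r2 = *(u8 *)(r10 -1);"
		"r1 += r2;"
		/* the dereference is what actually forces the verifier to
		 * prove r2 was recognized as zero; r1 += r2 alone would not
		 * trigger a bounds error
		 */
		"*(u8 *)(r1 + 0) = r2;"

		"r0 = 0;"
		"exit;"
	:
	: __imm_ptr(single_byte_buf),
	  __imm_insn(fp8_st_zero, BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0))
	: __clobber_common);
}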