On 1/9/24 2:47 PM, Andrii Nakryiko wrote:
On Mon, Jan 8, 2024 at 8:05 PM Yonghong Song <yonghong.song@xxxxxxxxx> wrote:
Add a selftest with a 4-byte BPF_ST of 0 where the store is not
8-byte aligned. The goal is to ensure that STACK_ZERO is properly
marked for the stored slots and that the STACK_ZERO value propagates
properly during a later load.
Signed-off-by: Yonghong Song <yonghong.song@xxxxxxxxx>
---
.../selftests/bpf/progs/verifier_spill_fill.c | 44 +++++++++++++++++++
1 file changed, 44 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index d4b3188afe07..6017b26d957d 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -583,6 +583,50 @@ __naked void partial_stack_load_preserves_zeros(void)
: __clobber_common);
}
+SEC("raw_tp")
+__log_level(2)
+__success
+/* fp-4 is STACK_ZERO */
+__msg("2: (62) *(u32 *)(r10 -4) = 0 ; R10=fp0 fp-8=0000????")
+/* validate that assigning R2 from STACK_ZERO with zero value doesn't mark register
+ * precise immediately; if necessary, it will be marked precise later
+ */
This comment is not accurate for this test: the unaligned write doesn't
preserve a register value, it just writes STACK_ZERO, so there is no
precision tracking going on here, right?
I checked the source code again in check_stack_write_fixed_off() and
backtrack_insn(). Indeed, check_stack_write_fixed_off() assigns
STACK_ZERO to the stack slot, and with imm = 0 and value_regno = -1,
mark_chain_precision() will be a nop. Also, in backtrack_insn() a load
from a STACK_ZERO slot will stop the backtracking.
Will remove the above comments.
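
To make that concrete, here is a rough user-space model of the marking
step (my paraphrase, not the verifier source; the helper name
model_stack_write() and the byte markers are made up for illustration,
and the -off - 1 indexing is only meant to mirror the verifier's
slot_type layout):

	#include <stdio.h>
	#include <stdbool.h>

	/* Model one 8-byte stack slot tracked byte by byte, similar to the
	 * verifier's per-byte slot_type[] array:
	 * '?' = untouched, '0' = known zero, 'm' = misc.
	 */
	static void model_stack_write(char *slot, int off, int size, bool zero)
	{
		/* A BPF_ST of immediate 0 has no source register
		 * (value_regno == -1), so nothing needs precision marking;
		 * the written bytes simply become STACK_ZERO ('0' here).
		 */
		for (int i = 0; i < size; i++)
			slot[-off - 1 - i] = zero ? '0' : 'm';
	}

	int main(void)
	{
		char slot[9] = "????????";

		model_stack_write(slot, -4, 4, true); /* *(u32 *)(r10 -4) = 0 */
		printf("fp-8 slot: %s\n", slot);      /* expect "0000????" */
		return 0;
	}

Only the four stored bytes become known zero and the other four stay
unknown, which is what the "fp-8=0000????" message in the patch checks.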
Other than that LGTM
Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
+__msg("4: (71) r2 = *(u8 *)(r10 -1) ; R2_w=0 R10=fp0 fp-8=0000????")
+__msg("5: (0f) r1 += r2")
+__msg("mark_precise: frame0: last_idx 5 first_idx 0 subseq_idx -1")
+__msg("mark_precise: frame0: regs=r2 stack= before 4: (71) r2 = *(u8 *)(r10 -1)")
+__naked void partial_stack_load_preserves_partial_zeros(void)
+{
+ asm volatile (
+ /* fp-4 is value zero */
+ ".8byte %[fp4_st_zero];" /* LLVM-18+: *(u32 *)(r10 -4) = 0; */
+
+ /* load single U8 from non-aligned stack zero slot */
+ "r1 = %[single_byte_buf];"
+ "r2 = *(u8 *)(r10 -1);"
+ "r1 += r2;"
+ "*(u8 *)(r1 + 0) = r2;" /* this should be fine */
+
+ /* load single U16 from non-aligned stack zero slot */
+ "r1 = %[single_byte_buf];"
+ "r2 = *(u16 *)(r10 -2);"
+ "r1 += r2;"
+ "*(u8 *)(r1 + 0) = r2;" /* this should be fine */
+
+ /* load single U32 from non-aligned stack zero slot */
+ "r1 = %[single_byte_buf];"
+ "r2 = *(u32 *)(r10 -4);"
+ "r1 += r2;"
+ "*(u8 *)(r1 + 0) = r2;" /* this should be fine */
+
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm_ptr(single_byte_buf),
+ __imm_insn(fp4_st_zero, BPF_ST_MEM(BPF_W, BPF_REG_FP, -4, 0))
+ : __clobber_common);
+}
+
char two_byte_buf[2] SEC(".data.two_byte_buf");
SEC("raw_tp")
--
2.34.1