From: Alexei Starovoitov <ast@xxxxxxxxxx>

v2->v3:
- Fixed test_verifier due to output change
- Fixed regsafe()
- Added another test

v1->v2:
- Fixed find_equal_scalars() logic. The off, id, and add_const flags must be preserved.
- Fixed mark_precise_scalar_ids(), which should ignore the add_const bit in the ID.
- Fixed the asm test and added two more tests.
- Gated the feature by cap_bpf.

v1:
Compilers can generate code such as:

  r1 = r2
  r1 += 0x1
  if r2 < 1000 goto ...

and then use knowledge of the r2 range in subsequent r1 operations.

The "undo" pass was introduced in LLVM
https://reviews.llvm.org/D121937
to prevent this optimization, but it cannot cover all cases.
Instead of fighting the middle-end optimizer in the BPF backend,
teach the verifier about this pattern.

The veristat difference:

File                               Program             Insns (A)  Insns (B)  Insns     (DIFF)
---------------------------------- ------------------  ---------  ---------  ----------------
arena_htab.bpf.o                   arena_htab_llvm         18656        747  -17909 (-96.00%)
arena_htab_asm.bpf.o               arena_htab_asm          18523        618  -17905 (-96.66%)
iters.bpf.o                        iter_subprog_iters       1109        981    -128 (-11.54%)
verifier_iterating_callbacks.bpf.o cond_break2               113        128     +15 (+13.27%)

Alexei Starovoitov (4):
  bpf: Relax tuple len requirement for sk helpers.
  bpf: Track delta between "linked" registers.
  bpf: Support can_loop/cond_break on big endian
  selftests/bpf: Add tests for add_const

 include/linux/bpf_verifier.h                   |  12 +-
 kernel/bpf/log.c                               |   4 +-
 kernel/bpf/verifier.c                          |  95 ++++++-
 net/core/filter.c                              |  24 +-
 .../testing/selftests/bpf/bpf_experimental.h   |  28 +++
 .../testing/selftests/bpf/progs/arena_htab.c   |  16 +-
 .../bpf/progs/verifier_iterating_callbacks.c   | 236 ++++++++++++++++++
 .../testing/selftests/bpf/verifier/precise.c   |  22 +-
 8 files changed, 398 insertions(+), 39 deletions(-)

-- 
2.43.0
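
P.S. For illustration only, a minimal C sketch of the kind of source that LLVM may
lower to the pattern above. The names (arr, read_next) are made up and not taken
from the series; the point is that the range check stays on the original index
while a copy with a constant offset is used for the access, so without delta
tracking the verifier never learns the bounds of the offset copy.

  /* sketch.c: illustrative only; compile with `cc -c sketch.c` */
  long arr[1000];

  long read_next(long i)
  {
          /* LLVM may emit something like:
           *   r1 = r2                ; copy of i
           *   r1 += 0x1              ; constant offset for arr[i + 1]
           *   if r2 > 998 goto out   ; range check kept on the original i
           * so r1's bounds are only known if the verifier tracks the
           * constant delta between r1 and r2.
           */
          if (i >= 0 && i < 999)
                  return arr[i + 1];
          return 0;
  }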