From: Martin KaFai Lau <martin.lau@xxxxxxxxxx>

The next patch will add a ctx ptr saving instruction
"r1 = *(u64 *)(r10 -8)" at the beginning of the main prog when there is
an epilogue patch (by the .gen_epilogue() verifier ops added in the next
patch).

There is one corner case if the bpf prog has a BPF_JMP that jumps to the
1st instruction. It needs an adjustment such that those BPF_JMP
instructions won't jump to the newly added ctx saving instruction.
Commit 5337ac4c9b80 ("bpf: Fix the corner case with may_goto and jump to
the 1st insn.") has the details on this case.

Note that the jump back to the 1st instruction is not limited to the ctx
ptr saving instruction. The same also applies to the prologue. A later
test, pro_epilogue_goto_start.c, covers the prologue-only case.

Thus, this patch does one adjustment after both gen_prologue and the
future ctx ptr saving have been applied. It is done by
adjust_jmp_off(env->prog, 0, delta), where delta is the total number of
instructions in the prologue plus the future ctx ptr saving instruction.

adjust_jmp_off(env->prog, 0, delta) assumes that the prologue itself
does not have a goto to the 1st instruction. To accommodate a prologue
that does have a goto to the 1st insn, adjust_jmp_off() needs to skip
the prologue instructions. This patch adds a skip_cnt argument to
adjust_jmp_off(). skip_cnt is the number of instructions at the
beginning that do not need adjustment. adjust_jmp_off(prog, 0, delta,
delta) is used in this patch.

Signed-off-by: Martin KaFai Lau <martin.lau@xxxxxxxxxx>
---
 kernel/bpf/verifier.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b408692a12d7..8714b83c5fb8 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -19277,14 +19277,14 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
  * For all jmp insns in a given 'prog' that point to 'tgt_idx' insn adjust the
  * jump offset by 'delta'.
  */
-static int adjust_jmp_off(struct bpf_prog *prog, u32 tgt_idx, u32 delta)
+static int adjust_jmp_off(struct bpf_prog *prog, u32 tgt_idx, u32 delta, u32 skip_cnt)
 {
-	struct bpf_insn *insn = prog->insnsi;
+	struct bpf_insn *insn = prog->insnsi + skip_cnt;
 	u32 insn_cnt = prog->len, i;
 	s32 imm;
 	s16 off;
 
-	for (i = 0; i < insn_cnt; i++, insn++) {
+	for (i = skip_cnt; i < insn_cnt; i++, insn++) {
 		u8 code = insn->code;
 
 		if ((BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32) ||
@@ -19705,6 +19705,9 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 		}
 	}
 
+	if (delta)
+		WARN_ON(adjust_jmp_off(env->prog, 0, delta, delta));
+
 	if (bpf_prog_is_offloaded(env->prog->aux))
 		return 0;
 
@@ -21136,7 +21139,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		 * to insn after BPF_ST that inits may_goto count.
 		 * Adjustment will succeed because bpf_patch_insn_data() didn't fail.
 		 */
-		WARN_ON(adjust_jmp_off(env->prog, subprog_start, 1));
+		WARN_ON(adjust_jmp_off(env->prog, subprog_start, 1, 0));
 	}
 
 	/* Since poke tab is now finalized, publish aux to tracker. */
-- 
2.43.5
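
Below is a minimal userspace sketch of the jump-offset arithmetic that the
new skip_cnt argument protects. It is illustration only: struct toy_insn
and toy_adjust_jmp_off() are hypothetical stand-ins, not the kernel's
bpf_insn or the real adjust_jmp_off(); they only model why jumps inside a
freshly inserted prologue must be skipped while jumps in the original prog
that still target index 0 are retargeted past the inserted instructions.

/* Minimal userspace sketch, not kernel code: struct toy_insn and
 * toy_adjust_jmp_off() are made-up stand-ins that model only the
 * relative-jump arithmetic, not the real bpf_insn encoding.
 */
#include <stdio.h>

struct toy_insn {
	int is_jmp;	/* 1 if this insn is a relative jump */
	int off;	/* insn i jumps to index i + 1 + off */
};

/* For all jmp insns that point to 'tgt_idx', add 'delta' to the offset.
 * The first 'skip_cnt' insns (the freshly inserted prologue) are left
 * untouched, so a prologue jump that targets index 0 on purpose is not
 * re-adjusted.
 */
static void toy_adjust_jmp_off(struct toy_insn *insns, int insn_cnt,
			       int tgt_idx, int delta, int skip_cnt)
{
	for (int i = skip_cnt; i < insn_cnt; i++) {
		if (!insns[i].is_jmp || i + 1 + insns[i].off != tgt_idx)
			continue;
		insns[i].off += delta;	/* retarget past the inserted insns */
	}
}

int main(void)
{
	/* Index 0..1: a 2-insn prologue whose jmp targets its own start.
	 * Index 2..4: the original prog; insn 4 used to jump to the old
	 * 1st insn, which now lives at index 2 after the insertion.
	 */
	struct toy_insn prog[] = {
		{ 0, 0 },	/* prologue insn                            */
		{ 1, -2 },	/* prologue jmp back to index 0, on purpose */
		{ 0, 0 },	/* old 1st insn of the original prog        */
		{ 0, 0 },
		{ 1, -5 },	/* still targets index 0, must be fixed     */
	};
	int delta = 2;		/* number of inserted prologue insns */

	/* Mirrors adjust_jmp_off(prog, 0, delta, delta) in the patch. */
	toy_adjust_jmp_off(prog, 5, 0, delta, delta);

	for (int i = 0; i < 5; i++)
		if (prog[i].is_jmp)
			printf("insn %d: off=%d target=%d\n",
			       i, prog[i].off, i + 1 + prog[i].off);
	/* Prints: insn 1 keeps off=-2 (target 0), insn 4 becomes off=-3
	 * (target 2, the relocated old 1st insn).
	 */
	return 0;
}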