Patch "bpf: remove unnecessary prune and jump points" has been added to the 6.1-stable tree

This is a note to let you know that I've just added the patch titled

    bpf: remove unnecessary prune and jump points

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     bpf-remove-unnecessary-prune-and-jump-points.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 90b6441df9cf455cd5ad99ec2231d29e605a5a47
Author: Andrii Nakryiko <andrii@xxxxxxxxxx>
Date:   Tue Dec 6 15:33:45 2022 -0800

    bpf: remove unnecessary prune and jump points
    
    [ Upstream commit 618945fbed501b6e5865042068a51edfb2dda948 ]
    
    Don't mark some instructions as jump points when there are actually no
    jumps and instructions are just processed sequentially. Such a case is
    handled naturally by precision backtracking logic without the need to
    update jump history. See get_prev_insn_idx(). It goes back linearly by
    one instruction, unless the current top of jmp_history points to the
    current instruction. In that case we use `st->jmp_history[cnt - 1].prev_idx`
    to find the instruction from which we jumped to the current instruction
    non-linearly.
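    
    (A simplified sketch, not part of this patch, of the backtracking step
    described above; it is modeled on get_prev_insn_idx() in
    kernel/bpf/verifier.c, but the exact signature and field names here are
    assumptions for illustration.)
    
        static int get_prev_insn_idx(struct bpf_verifier_state *st, int i,
                                     u32 *history)
        {
                u32 cnt = *history;
        
                /* If the top jmp_history entry records a non-linear jump
                 * into insn i, follow it back to the jump's source...
                 */
                if (cnt && st->jmp_history[cnt - 1].idx == i) {
                        i = st->jmp_history[cnt - 1].prev_idx;
                        (*history)--;
                } else {
                        /* ...otherwise just step back linearly by one insn. */
                        i--;
                }
                return i;
        }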
    
    Also remove both jump and prune point marking for the instruction right
    after unconditional jumps, as program flow can reach the instruction
    right after an unconditional jump only if there is a jump to that
    instruction from somewhere else in the program. In that case the
    instruction will already be marked as a prune/jump point because it is
    the destination of a jump.
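    
    (A minimal sketch, not part of this patch, of why the slot at t + 1
    after an unconditional jump needs no explicit marking: every jump
    already marks its destination, so if some other insn jumps to that
    slot it gets marked there. mark_jump_destination() is a hypothetical
    helper used only for illustration; the real mark_prune_point() and
    mark_jmp_point() calls appear inline in visit_insn(), see the hunk
    below.)
    
        static void mark_jump_destination(struct bpf_verifier_env *env,
                                          struct bpf_insn *insns, int t)
        {
                /* BPF jump offsets are relative to the insn after the jump */
                int dst = t + insns[t].off + 1;
        
                mark_prune_point(env, dst); /* pruning candidate */
                mark_jmp_point(env, dst);   /* records non-linear jmp_history */
        }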
    
    This change causes no difference in the number of instructions or
    states processed across Cilium and selftests programs.
    
    Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
    Acked-by: John Fastabend <john.fastabend@xxxxxxxxx>
    Link: https://lore.kernel.org/r/20221206233345.438540-4-andrii@xxxxxxxxxx
    Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
    Stable-dep-of: 3feb263bb516 ("bpf: handle ldimm64 properly in check_cfg()")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ec688665aaa25..09631797d9e0c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -11093,13 +11093,12 @@ static int visit_func_call_insn(int t, int insn_cnt,
 	if (ret)
 		return ret;
 
-	if (t + 1 < insn_cnt) {
-		mark_prune_point(env, t + 1);
-		mark_jmp_point(env, t + 1);
-	}
+	mark_prune_point(env, t + 1);
+	/* when we exit from subprog, we need to record non-linear history */
+	mark_jmp_point(env, t + 1);
+
 	if (visit_callee) {
 		mark_prune_point(env, t);
-		mark_jmp_point(env, t);
 		ret = push_insn(t, t + insns[t].imm + 1, BRANCH, env,
 				/* It's ok to allow recursion from CFG point of
 				 * view. __check_func_call() will do the actual
@@ -11133,15 +11132,13 @@ static int visit_insn(int t, int insn_cnt, struct bpf_verifier_env *env)
 		return DONE_EXPLORING;
 
 	case BPF_CALL:
-		if (insns[t].imm == BPF_FUNC_timer_set_callback) {
-			/* Mark this call insn to trigger is_state_visited() check
-			 * before call itself is processed by __check_func_call().
-			 * Otherwise new async state will be pushed for further
-			 * exploration.
+		if (insns[t].imm == BPF_FUNC_timer_set_callback)
+			/* Mark this call insn as a prune point to trigger
+			 * is_state_visited() check before call itself is
+			 * processed by __check_func_call(). Otherwise new
+			 * async state will be pushed for further exploration.
 			 */
 			mark_prune_point(env, t);
-			mark_jmp_point(env, t);
-		}
 		return visit_func_call_insn(t, insn_cnt, insns, env,
 					    insns[t].src_reg == BPF_PSEUDO_CALL);
 
@@ -11155,26 +11152,15 @@ static int visit_insn(int t, int insn_cnt, struct bpf_verifier_env *env)
 		if (ret)
 			return ret;
 
-		/* unconditional jmp is not a good pruning point,
-		 * but it's marked, since backtracking needs
-		 * to record jmp history in is_state_visited().
-		 */
 		mark_prune_point(env, t + insns[t].off + 1);
 		mark_jmp_point(env, t + insns[t].off + 1);
-		/* tell verifier to check for equivalent states
-		 * after every call and jump
-		 */
-		if (t + 1 < insn_cnt) {
-			mark_prune_point(env, t + 1);
-			mark_jmp_point(env, t + 1);
-		}
 
 		return ret;
 
 	default:
 		/* conditional jump with two edges */
 		mark_prune_point(env, t);
-		mark_jmp_point(env, t);
+
 		ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
 		if (ret)
 			return ret;



