Re: [RFC bpf-next 01/11] bpf: use branch predictions in opt_hard_wire_dead_code_branches()

On Thu, Nov 7, 2024 at 9:51 AM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
>
> Consider dead code elimination problem for program like below:
>
>     main:
>       1: r1 = 42
>       2: call <subprogram>;
>       3: exit
>
>     subprogram:
>       4: r0 = 1
>       5: if r1 != 42 goto +1
>       6: r0 = 2
>       7: exit;
>
> Here the verifier visits every instruction, so the
> bpf_insn_aux_data->seen flag is set for both the true (7)
> and fallthrough (6) branches of the conditional (5).
> Hence opt_hard_wire_dead_code_branches() will not replace
> conditional (5) with an unconditional jump.
>
> To cover such cases:
> - add two fields in struct bpf_insn_aux_data:
>   - true_branch_taken;
>   - false_branch_taken;
> - adjust check_cond_jmp_op() to set the fields according to jump
>   predictions;
> - handle these flags in opt_hard_wire_dead_code_branches():
>   - true_branch_taken && !false_branch_taken
>     always jumps, replace instruction with 'goto off';
>   - !true_branch_taken && false_branch_taken
>     always falls through, replace with 'goto +0';
>   - true_branch_taken && false_branch_taken
>     both jump and fallthrough are possible, don't change the
>     instruction;
>   - !true_branch_taken && !false_branch_taken
>     neither jump nor fallthrough is possible; the conditional itself
>     must be dead code (should be removed by opt_remove_dead_code).
>
> Signed-off-by: Eduard Zingerman <eddyz87@xxxxxxxxx>
> ---
>  include/linux/bpf_verifier.h |  4 +++-
>  kernel/bpf/verifier.c        | 16 ++++++++++++----
>  2 files changed, 15 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index 4513372c5bc8..ed4eacfd4db7 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -570,7 +570,9 @@ struct bpf_insn_aux_data {
>         struct btf_struct_meta *kptr_struct_meta;
>         u64 map_key_state; /* constant (32 bit) key tracking for maps */
>         int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
> -       u32 seen; /* this insn was processed by the verifier at env->pass_cnt */
> +       bool seen; /* this insn was processed by the verifier at env->pass_cnt */
> +       bool true_branch_taken; /* for cond jumps, set if verifier ever took true branch */
> +       bool false_branch_taken; /* for cond jumps, set if verifier ever took false branch */

we'll now have 12 bool fields in bpf_insn_aux_data, probably it's time
to switch to bitfields for those (even though you are trading 4 for 3
bytes here). 72+ bytes per instruction is not negligible, especially
for big BPF programs.

>         bool sanitize_stack_spill; /* subject to Spectre v4 sanitation */
>         bool zext_dst; /* this insn zero extends dst reg */
>         bool needs_zext; /* alu op needs to clear upper bits */

[...]
