Re: [PATCH bpf-next] bpf: x86: Fix BPF_FETCH atomic and/or/xor with r0 as src

On Mon, Feb 15, 2021 at 5:00 PM Brendan Jackman <jackmanb@xxxxxxxxxx> wrote:
>
> This code generates a CMPXCHG loop in order to implement atomic_fetch
> bitwise operations. Because CMPXCHG is hard-coded to use rax (which
> holds the BPF r0 value), it saves the _real_ r0 value into the
> internal "ax" temporary register and restores it once the loop is
> complete.
>
> In the middle of the loop, the actual bitwise operation is performed
> using src_reg. The bug occurs when src_reg is r0: as described above,
> r0 has been clobbered and the real r0 value is in the ax register.
>
> Therefore, perform this operation on the ax register instead, when
> src_reg is r0.
>
> Fixes: 981f94c3e921 ("bpf: Add bitwise atomic instructions")
> Signed-off-by: Brendan Jackman <jackmanb@xxxxxxxxxx>
> ---
>  arch/x86/net/bpf_jit_comp.c                   |  7 +++---
>  .../selftests/bpf/verifier/atomic_and.c       | 23 +++++++++++++++++++
>  2 files changed, 27 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 79e7a0ec1da5..0c9edfe42340 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -1349,6 +1349,7 @@ st:                       if (is_imm8(insn->off))
>                             insn->imm == (BPF_XOR | BPF_FETCH)) {
>                                 u8 *branch_target;
>                                 bool is64 = BPF_SIZE(insn->code) == BPF_DW;
> +                               u32 real_src_reg = src_reg == BPF_REG_0 ? BPF_REG_AX : src_reg;

I think it would be more readable as:

u32 real_src_reg = src_reg;

/* Add a comment here why this is needed */
if (src_reg == BPF_REG_0)
  real_src_reg = BPF_REG_AX;

>
>                                 /*
>                                  * Can't be implemented with a single x86 insn.
> @@ -1366,9 +1367,9 @@ st:                       if (is_imm8(insn->off))
>                                  * put the result in the AUX_REG.
>                                  */
>                                 emit_mov_reg(&prog, is64, AUX_REG, BPF_REG_0);
> -                               maybe_emit_mod(&prog, AUX_REG, src_reg, is64);
> +                               maybe_emit_mod(&prog, AUX_REG, real_src_reg, is64);
>                                 EMIT2(simple_alu_opcodes[BPF_OP(insn->imm)],
> -                                     add_2reg(0xC0, AUX_REG, src_reg));
> +                                     add_2reg(0xC0, AUX_REG, real_src_reg));
>                                 /* Attempt to swap in new value */
>                                 err = emit_atomic(&prog, BPF_CMPXCHG,
>                                                   dst_reg, AUX_REG, insn->off,
> @@ -1381,7 +1382,7 @@ st:                       if (is_imm8(insn->off))
>                                  */
>                                 EMIT2(X86_JNE, -(prog - branch_target) - 2);
>                                 /* Return the pre-modification value */
> -                               emit_mov_reg(&prog, is64, src_reg, BPF_REG_0);
> +                               emit_mov_reg(&prog, is64, real_src_reg, BPF_REG_0);
>                                 /* Restore R0 after clobbering RAX */
>                                 emit_mov_reg(&prog, true, BPF_REG_0, BPF_REG_AX);
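
(To make the failure concrete, my rough reading of the emitted sequence for a
64-bit fetch-AND, taking dst_reg == r1 (%rdi) and src_reg == r0 (%rax) as an
example, in AT&T syntax:

	mov    %rax, %r10              # save the real r0 into the internal "ax" reg
loop:
	mov    off(%rdi), %rax         # load the old value
	mov    %rax, %r11              # copy it into AUX_REG (%r11)
	and    %rax, %r11              # bug: %rax is the old value, not r0;
	                               # with this patch: and %r10, %r11
	lock cmpxchg %r11, off(%rdi)   # try to swap in the new value
	jne    loop
	mov    %rax, %rax              # "return" the old value to src_reg: a no-op
	                               # here; with the patch this targets %r10
	mov    %r10, %rax              # restore r0 from the "ax" reg

so with the fix the fetched value flows through %r10 and lands back in r0 via
that final restore.)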

[...]

>
> base-commit: 5e1d40b75ed85ecd76347273da17e5da195c3e96
> --
> 2.30.0.478.g8a0d178c01-goog
>


