Re: [PATCH v4 bpf-next] bpf: Explicitly zero-extend R0 after 32-bit cmpxchg

On Tue, 2021-02-23 at 15:08 +0000, Brendan Jackman wrote:
> As pointed out by Ilya and explained in the new comment, there's a
> discrepancy between x86 and BPF CMPXCHG semantics: BPF always loads
> the value from memory into r0, while x86 only does so when r0 and the
> value in memory are different. The same issue affects s390.
> 
> At first this might sound like pure semantics, but it makes a real
> difference when the comparison is 32-bit, since the load will
> zero-extend r0/rax.
> 
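For anyone following along, the difference boils down to roughly this
(pseudo-C sketch only, not kernel code; "old" stands for the 32-bit
value read from memory, and dst/src/off are the instruction's
operands):

        /* BPF semantics for BPF_CMPXCHG | BPF_W: */
        old = *(u32 *)(dst + off);
        if (old == (u32)r0)
                *(u32 *)(dst + off) = (u32)src;
        r0 = (u64)old;          /* always written => always zero-extended */

        /* x86 lock cmpxchg with a 32-bit operand: */
        old = *(u32 *)mem;
        if (old == (u32)rax)
                *(u32 *)mem = (u32)src; /* eax not written: upper half of rax keeps stale bits */
        else
                rax = (u64)old;         /* 32-bit register write zero-extends rax */
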
> The fix is to explicitly zero-extend rax after doing such a
> CMPXCHG. Since this problem affects multiple archs, this is done in
> the verifier by patching in a BPF_ZEXT_REG instruction after every
> 32-bit cmpxchg. Any archs that don't need such manual zero-extension
> can do a look-ahead with insn_is_zext to skip the unnecessary mov.
> 
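The look-ahead could look roughly like this in an arch's JIT loop
(sketch only; insn_is_zext() is the existing helper from
include/linux/filter.h, while the surrounding loop and the
skip_next_insn convention are hypothetical):

        /* This arch's 32-bit cmpxchg already zero-extends the result
         * register, so the verifier-inserted zext is redundant here.
         */
        if (BPF_SIZE(insn->code) == BPF_W && insn->imm == BPF_CMPXCHG &&
            insn_is_zext(&insn[1]) && insn[1].dst_reg == BPF_REG_0)
                skip_next_insn = true;  /* consume the zext as well */
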
> There was actually already logic to patch in zero-extension insns
> after 32-bit cmpxchgs, in opt_subreg_zext_lo32_rnd_hi32. To avoid
> bloating the prog with unnecessary movs, we now explicitly check and
> skip that logic for this case.
> 
> Reported-by: Ilya Leoshkevich <iii@xxxxxxxxxxxxx>
> Fixes: 5ffa25502b5a ("bpf: Add instructions for atomic_[cmp]xchg")
> Signed-off-by: Brendan Jackman <jackmanb@xxxxxxxxxx>
> ---
> 
> Differences v3->v4[1]:
>  - Moved the optimization against pointless zext into the correct place:
>    opt_subreg_zext_lo32_rnd_hi32 is called _after_ fixup_bpf_calls.
> 
> Differences v2->v3[1]:
>  - Moved patching into fixup_bpf_calls (patch incoming to rename this function)
>  - Added extra commentary on bpf_jit_needs_zext
>  - Added check to avoid adding a pointless zext(r0) if there's already one there.
> 
> Difference v1->v2[1]: Now solved centrally in the verifier instead of
>   specifically for the x86 JIT. Thanks to Ilya and Daniel for the suggestions!
> 
> [1] v3: https://lore.kernel.org/bpf/08669818-c99d-0d30-e1db-53160c063611@xxxxxxxxxxxxx/T/#t
>     v2: https://lore.kernel.org/bpf/08669818-c99d-0d30-e1db-53160c063611@xxxxxxxxxxxxx/T/#t
>     v1: https://lore.kernel.org/bpf/d7ebaefb-bfd6-a441-3ff2-2fdfe699b1d2@xxxxxxxxxxxxx/T/#t
> 
>  kernel/bpf/core.c                             |  4 +++
>  kernel/bpf/verifier.c                         | 33 +++++++++++++++++--
>  .../selftests/bpf/verifier/atomic_cmpxchg.c   | 25 ++++++++++++++
>  .../selftests/bpf/verifier/atomic_or.c        | 26 +++++++++++++++
>  4 files changed, 86 insertions(+), 2 deletions(-)

I think I managed to figure out what is wrong with
adjust_insn_aux_data(): insn_has_def32() does not know about BPF_FETCH.
I'll post a fix shortly; in the meantime, based on my debugging
experience and on looking at the code for a while, I have a few
comments regarding the patch.

[...]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 3d34ba492d46..ec1cbd565140 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -11061,8 +11061,16 @@ static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
>                          */
>                         if (WARN_ON(!(insn.imm & BPF_FETCH)))
>                                 return -EINVAL;
> -                       load_reg = insn.imm == BPF_CMPXCHG ? BPF_REG_0
> -                                                          : insn.src_reg;
> +                       /* There should already be a zero-extension inserted after BPF_CMPXCHG. */
> +                       if (insn.imm == BPF_CMPXCHG) {
> +                               struct bpf_insn *next = &insns[adj_idx + 1];

Would it make sense to check bounds here? I'm not sure whether
verification can get this far with the last instruction being a
cmpxchg rather than an exit, but still...
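Something like this before dereferencing the next insn, perhaps
(sketch; assuming env->prog->len reflects the current, already-patched
program length at this point):

        if (adj_idx + 1 >= env->prog->len)
                return -EINVAL;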

> +
> +                               if (WARN_ON(!insn_is_zext(next) || next->dst_reg != insn.src_reg))

We generate BPF_ZEXT_REG(BPF_REG_0), so we should probably use
BPF_REG_0 instead of insn.src_reg here.
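I.e. (sketch):

        if (WARN_ON(!insn_is_zext(next) || next->dst_reg != BPF_REG_0))
                return -EINVAL;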

> +                                       return -EINVAL;
> +                               continue;

I think we need i++ before the continue; otherwise we would stumble
upon the BPF_ZEXT_REG itself on the next iteration, since it is also
marked with zext_dst.
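I.e., right before the continue (sketch):

        i++;    /* skip the zext we just checked */
        continue;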

> +                       }
> +
> +                       load_reg = insn.src_reg;
>                 } else {
>                         load_reg = insn.dst_reg;
>                 }
> @@ -11666,6 +11674,27 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
>                         continue;
>                 }
> 
> +               /* BPF_CMPXCHG always loads a value into R0, therefore always
> +                * zero-extends. However some archs' equivalent instruction only
> +                * does this load when the comparison is successful. So here we
> +                * add a BPF_ZEXT_REG after every 32-bit CMPXCHG, so that such
> +                * archs' JITs don't need to deal with the issue. Archs that
> +                * don't face this issue may use insn_is_zext to detect and skip
> +                * the added instruction.
> +                */
> +               if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) && insn->imm == BPF_CMPXCHG) {

Since we want this only for JITs and not the interpreter, would it make
sense to check prog->jit_requested, like some other fragments of this
function do?
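I.e. perhaps (sketch; fixup_bpf_calls already has a "prog" local that
it uses for similar jit_requested checks elsewhere):

        if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) &&
            insn->imm == BPF_CMPXCHG && prog->jit_requested) {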

[...]



