Re: [PATCH v4 bpf-next] bpf: Explicitly zero-extend R0 after 32-bit cmpxchg

On Tue, Feb 23, 2021 at 03:08:45PM +0000, Brendan Jackman wrote:
[ ... ]

> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 0ae015ad1e05..dcf18612841b 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -2342,6 +2342,10 @@ bool __weak bpf_helper_changes_pkt_data(void *func)
>  /* Return TRUE if the JIT backend wants verifier to enable sub-register usage
>   * analysis code and wants explicit zero extension inserted by verifier.
>   * Otherwise, return FALSE.
> + *
> + * The verifier inserts an explicit zero extension after BPF_CMPXCHGs even if
> + * you don't override this. JITs that don't want these extra insns can detect
> + * them using insn_is_zext.
>   */
>  bool __weak bpf_jit_needs_zext(void)
>  {
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 3d34ba492d46..ec1cbd565140 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -11061,8 +11061,16 @@ static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
>  			 */
>  			if (WARN_ON(!(insn.imm & BPF_FETCH)))
>  				return -EINVAL;
> -			load_reg = insn.imm == BPF_CMPXCHG ? BPF_REG_0
> -							   : insn.src_reg;
> +			/* There should already be a zero-extension inserted after BPF_CMPXCHG. */
> +			if (insn.imm == BPF_CMPXCHG) {
> +				struct bpf_insn *next = &insns[adj_idx + 1];
> +
> +				if (WARN_ON(!insn_is_zext(next) || next->dst_reg != insn.src_reg))
> +					return -EINVAL;
> +				continue;
This is to avoid applying the zext patch a second time for JITs with
bpf_jit_needs_zext() == true.

IIUC, at this point aux[adj_idx].zext_dst == true, which means
check_atomic() has already marked reg0->subreg_def properly.

> +			}
> +
> +			load_reg = insn.src_reg;
>  		} else {
>  			load_reg = insn.dst_reg;
>  		}
> @@ -11666,6 +11674,27 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
>  			continue;
>  		}
> 
> +		/* BPF_CMPXCHG always loads a value into R0, therefore always
> +		 * zero-extends. However some archs' equivalent instruction only
> +		 * does this load when the comparison is successful. So here we
> +		 * add a BPF_ZEXT_REG after every 32-bit CMPXCHG, so that such
> +		 * archs' JITs don't need to deal with the issue. Archs that
> +		 * don't face this issue may use insn_is_zext to detect and skip
> +		 * the added instruction.
> +		 */
> +		if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) && insn->imm == BPF_CMPXCHG) {
> +			struct bpf_insn zext_patch[2] = { *insn, BPF_ZEXT_REG(BPF_REG_0) };
Then should this zext_patch only be done when "!bpf_jit_needs_zext()",
such that the above change in opt_subreg_zext_lo32_rnd_hi32()
becomes unnecessary?
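To restate the semantics the patch is enforcing: after a 32-bit BPF_CMPXCHG,
R0 must hold the old memory value zero-extended to 64 bits whether or not the
compare succeeded. A userspace C model of that contract (model_cmpxchg32 is a
hypothetical name for illustration, not a kernel function; it uses the GCC
__sync builtin in place of the JIT'd instruction):

```c
#include <stdint.h>

/* Hypothetical model of 32-bit BPF_CMPXCHG: compare the low 32 bits of
 * R0 with *addr; on success store src. Either way R0 must end up holding
 * the old 32-bit value with bits 32-63 cleared. */
static uint64_t model_cmpxchg32(uint64_t *r0, uint32_t *addr, uint32_t src)
{
	uint32_t old = __sync_val_compare_and_swap(addr, (uint32_t)*r0, src);

	/* The explicit zero extension the verifier now inserts: assigning
	 * a u32 to a u64 clears the upper half, matching the effect of
	 * BPF_ZEXT_REG(BPF_REG_0). Without this store, an arch whose
	 * cmpxchg skips the load on failure would leave stale upper bits
	 * in R0. */
	*r0 = old;
	return *r0;
}
```

The failure path is the interesting one: R0 starts with garbage in its upper
32 bits, and only the explicit write-back guarantees they are cleared.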

> +
> +			new_prog = bpf_patch_insn_data(env, i + delta, zext_patch, 2);
> +			if (!new_prog)
> +				return -ENOMEM;
> +
> +			delta    += 1;
> +			env->prog = prog = new_prog;
> +			insn      = new_prog->insnsi + i + delta;
> +			continue;
> +		}
> +
>  		if (insn->code != (BPF_JMP | BPF_CALL))
>  			continue;
>  		if (insn->src_reg == BPF_PSEUDO_CALL)


