Re: [PATCH v2 bpf-next 09/20] bpf: Recognize cast_kern/user instructions in the verifier.

On Thu, 2024-02-08 at 20:05 -0800, Alexei Starovoitov wrote:
[...]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 3c77a3ab1192..5eeb9bf7e324 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c

[...]

> @@ -13837,6 +13844,21 @@ static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,
>  
>  	dst_reg = &regs[insn->dst_reg];
>  	src_reg = NULL;
> +
> +	if (dst_reg->type == PTR_TO_ARENA) {
> +		struct bpf_insn_aux_data *aux = cur_aux(env);
> +
> +		if (BPF_CLASS(insn->code) == BPF_ALU64)
> +			/*
> +			 * 32-bit operations zero upper bits automatically.
> +			 * 64-bit operations need to be converted to 32.
> +			 */
> +			aux->needs_zext = true;

It should be possible to construct an example where the same insn is
visited with both PTR_TO_ARENA and some other PTR type.
Such examples should be rejected, as is currently done in do_check()
for BPF_{ST,STX} using save_aux_ptr_type().
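
Roughly what I have in mind (an untested sketch, just to illustrate;
whether aux->ptr_type can be reused for ALU insns like this needs
checking):

	if (dst_reg->type == PTR_TO_ARENA) {
		int err;

		/* Two branches converging on the same 'r1 += r2', one with
		 * r1 = arena pointer and one with e.g. r1 = stack pointer:
		 * remember the first type seen and reject the mismatch,
		 * like do_check() already does for BPF_{ST,STX}.
		 */
		err = save_aux_ptr_type(env, PTR_TO_ARENA, false);
		if (err)
			return err;
		...
	}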

[...]

> @@ -13954,16 +13976,17 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
>  	} else if (opcode == BPF_MOV) {
>  
>  		if (BPF_SRC(insn->code) == BPF_X) {
> -			if (insn->imm != 0) {
> -				verbose(env, "BPF_MOV uses reserved fields\n");
> -				return -EINVAL;
> -			}
> -
>  			if (BPF_CLASS(insn->code) == BPF_ALU) {
> -				if (insn->off != 0 && insn->off != 8 && insn->off != 16) {
> +				if ((insn->off != 0 && insn->off != 8 && insn->off != 16) ||
> +				    insn->imm) {
>  					verbose(env, "BPF_MOV uses reserved fields\n");
>  					return -EINVAL;
>  				}
> +			} else if (insn->off == BPF_ARENA_CAST_KERN || insn->off == BPF_ARENA_CAST_USER) {
> +				if (!insn->imm) {
> +					verbose(env, "cast_kern/user insn must have non zero imm32\n");
> +					return -EINVAL;
> +				}
>  			} else {
>  				if (insn->off != 0 && insn->off != 8 && insn->off != 16 &&
>  				    insn->off != 32) {

I think it is now necessary to check insn->imm here as well;
as it stands, the else branch allows an ALU64 move with a non-zero imm.
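
I.e. something along these lines (untested), mirroring the check added
to the BPF_ALU branch above:

			} else {
				if ((insn->off != 0 && insn->off != 8 && insn->off != 16 &&
				     insn->off != 32) || insn->imm) {
					verbose(env, "BPF_MOV uses reserved fields\n");
					return -EINVAL;
				}
			}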

> @@ -13993,7 +14016,12 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
>  			struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
>  
>  			if (BPF_CLASS(insn->code) == BPF_ALU64) {
> -				if (insn->off == 0) {
> +				if (insn->imm) {
> +					/* off == BPF_ARENA_CAST_KERN || off == BPF_ARENA_CAST_USER */
> +					mark_reg_unknown(env, regs, insn->dst_reg);
> +					if (insn->off == BPF_ARENA_CAST_KERN)
> +						dst_reg->type = PTR_TO_ARENA;

This effectively allows casting anything to PTR_TO_ARENA.
Do we want to check that src_reg somehow originates from the arena?
Might be tricky; a new type modifier bit or something like that.
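
Purely to illustrate the idea (flag name and mechanics invented, not a
worked-out proposal): suppose a new type modifier bit, say ARENA_DERIVED,
is set on scalars obtained from arena pointers; cast_kern could then
require it:

	if (insn->off == BPF_ARENA_CAST_KERN) {
		if (!(src_reg->type & ARENA_DERIVED)) {
			verbose(env, "R%d does not originate from arena\n",
				insn->src_reg);
			return -EACCES;
		}
		mark_reg_unknown(env, regs, insn->dst_reg);
		dst_reg->type = PTR_TO_ARENA;
	}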

> +				} else if (insn->off == 0) {
>  					/* case: R1 = R2
>  					 * copy register state to dest reg
>  					 */
> @@ -14059,6 +14087,9 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
>  						dst_reg->subreg_def = env->insn_idx + 1;
>  						coerce_subreg_to_size_sx(dst_reg, insn->off >> 3);
>  					}
> +				} else if (src_reg->type == PTR_TO_ARENA) {
> +					mark_reg_unknown(env, regs, insn->dst_reg);
> +					dst_reg->type = PTR_TO_ARENA;

This describes the case wX = wY, where rY is PTR_TO_ARENA;
should rX be marked as SCALAR instead of PTR_TO_ARENA?
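
E.g. (untested):

				} else if (src_reg->type == PTR_TO_ARENA) {
					/* a 32-bit move truncates the pointer,
					 * so leave the destination as an
					 * unknown scalar rather than
					 * PTR_TO_ARENA (the 32-bit bounds
					 * could be refined, but that's a
					 * detail)
					 */
					mark_reg_unknown(env, regs, insn->dst_reg);
				}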

[...]

> @@ -18235,6 +18272,31 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
>  				fdput(f);
>  				return -EBUSY;
>  			}
> +			if (map->map_type == BPF_MAP_TYPE_ARENA) {
> +				if (env->prog->aux->arena) {

Does this have to be (env->prog->aux->arena && env->prog->aux->arena != map) ?

> +					verbose(env, "Only one arena per program\n");
> +					fdput(f);
> +					return -EBUSY;
> +				}

[...]

> @@ -18799,6 +18861,18 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
>  			   insn->code == (BPF_ST | BPF_MEM | BPF_W) ||
>  			   insn->code == (BPF_ST | BPF_MEM | BPF_DW)) {
>  			type = BPF_WRITE;
> +		} else if (insn->code == (BPF_ALU64 | BPF_MOV | BPF_X) && insn->imm) {
> +			if (insn->off == BPF_ARENA_CAST_KERN ||
> +			    (((struct bpf_map *)env->prog->aux->arena)->map_flags & BPF_F_NO_USER_CONV)) {
> +				/* convert to 32-bit mov that clears upper 32-bit */
> +				insn->code = BPF_ALU | BPF_MOV | BPF_X;
> +				/* clear off, so it's a normal 'wX = wY' from JIT pov */
> +				insn->off = 0;
> +			} /* else insn->off == BPF_ARENA_CAST_USER should be handled by JIT */
> +			continue;
> +		} else if (env->insn_aux_data[i + delta].needs_zext) {
> +			/* Convert BPF_CLASS(insn->code) == BPF_ALU64 to 32-bit ALU */
> +			insn->code = BPF_ALU | BPF_OP(insn->code) | BPF_SRC(insn->code);

Tbh, I think this should be done in do_misc_fixups();
mixing it with context access handling in convert_ctx_accesses()
seems a bit confusing.
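
E.g. something like this in the main loop of do_misc_fixups() (untested):

		if (env->insn_aux_data[i + delta].needs_zext &&
		    BPF_CLASS(insn->code) == BPF_ALU64) {
			/* demote arena pointer arithmetic to 32-bit ALU so
			 * the upper 32 bits are cleared automatically
			 */
			insn->code = BPF_ALU | BPF_OP(insn->code) | BPF_SRC(insn->code);
		}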




