Re: [PATCH bpf-next v3 1/2] bpf: verifier: Support eliding map lookup nullness

On Tue, 2024-09-24 at 04:40 -0600, Daniel Xu wrote:
> This commit allows progs to elide a null check on statically known map
> lookup keys. In other words, if the verifier can statically prove that
> the lookup will be in-bounds, allow the prog to drop the null check.
> 
> This is useful for two reasons:
> 
> 1. Large numbers of nullness checks (especially when they cannot fail)
>    unnecessarily push the prog towards BPF_COMPLEXITY_LIMIT_JMP_SEQ.
> 2. It forms a tighter contract between programmer and verifier.
> 
> For (1), bpftrace is starting to make heavier use of percpu scratch
> maps. As a result, for user scripts with a large number of unrolled
> loops, we are starting to hit jump complexity verification errors.
> These percpu lookups cannot fail anyway, as we only use static key
> values. Eliding nullness probably results in less work for the
> verifier as well.
> 
> For (2), percpu scratch maps are often used as a larger stack, as the
> current stack is limited to 512 bytes. In these situations, it is
> desirable for the programmer to express: "this lookup should never fail,
> and if it does, it means I messed up the code". By omitting the null
> check, the programmer can "ask" the verifier to double check the logic.
> 
> Tests also have to be updated in sync with these changes, as the
> verifier is more efficient with this change. Notably, iters.c tests had
> to be changed to use a map type that still requires null checks, as they
> exercise verifier tracking logic w.r.t. iterators.
> 
> Signed-off-by: Daniel Xu <dxu@xxxxxxxxx>
> ---

Acked-by: Eduard Zingerman <eddyz87@xxxxxxxxx>

[...]

> +/* Returns constant key value if possible, else -1 */
> +static long get_constant_map_key(struct bpf_verifier_env *env,
> +				 struct bpf_reg_state *key)
> +{
> +	struct bpf_func_state *state = func(env, key);
> +	struct bpf_reg_state *reg;
> +	int stack_off;
> +	int slot;
> +	int spi;
> +
> +	if (key->type != PTR_TO_STACK)
> +		return -1;
> +	if (!tnum_is_const(key->var_off))
> +		return -1;
> +
> +	stack_off = key->off + key->var_off.value;
> +	slot = -stack_off - 1;
> +	if (slot < 0)
> +		/* Stack grew upwards */
> +		return -1;
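
To make the effect concrete, here is a minimal sketch of the pattern this
change enables (illustrative only; the map and program names are not from
the patch):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} scratch SEC(".maps");

SEC("socket")
int elide_null_check(void *ctx)
{
	__u32 key = 0;	/* constant key, provably < max_entries */
	__u64 *val;

	val = bpf_map_lookup_elem(&scratch, &key);
	/* The verifier can now prove this lookup is in-bounds and
	 * mark val non-NULL, so the usual `if (!val) return 0;`
	 * check can be dropped. Previously the write below would
	 * be rejected as a possible NULL dereference.
	 */
	*val = 42;
	return 0;
}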

Nitpick: I'd also add a test like the one below:

SEC("socket")
__failure __msg("invalid indirect access to stack R2 off=4096 size=4")
__naked void key_lookup_at_invalid_fp(void)
{
	asm volatile ("					\
	r1 = %[map_array] ll;				\
	r2 = r10;					\
	r2 += 4096;					\
	call %[bpf_map_lookup_elem];			\
	r0 = *(u64*)(r0 + 0);				\
	exit;						\
"	:
	: __imm(bpf_map_lookup_elem),
	  __imm_addr(map_array)
	: __clobber_all);
}

(double-checked with v2: this test does cause a page fault)
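
For anyone copying the test into a selftest file: it assumes a map_array
definition roughly like the sketch below (reuse whatever definition the
target file already declares):

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} map_array SEC(".maps");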

[...]
