Re: [PATCH v6 bpf-next 07/17] bpf: improve deduction of 64-bit bounds from 32-bit bounds

On Wed, Nov 01, 2023 at 08:37:49PM -0700, Andrii Nakryiko wrote:
> Add a few interesting cases in which we can tighten 64-bit bounds based
> on newly learnt information about 32-bit bounds. E.g., when full u64/s64
> registers are used in a BPF program and are eventually compared as
> u32/s32. The latter comparison doesn't change the value of the full
> register, but it does impose new restrictions on the possible lower 32
> bits of such full registers. We can use that to derive additional
> full-register bounds information.
> 
> Acked-by: Eduard Zingerman <eddyz87@xxxxxxxxx>
> Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>

Acked-by: Shung-Hsi Yu <shung-hsi.yu@xxxxxxxx>

One question below

> ---
>  kernel/bpf/verifier.c | 44 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 08888784cbc8..d0d0a1a1b662 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2536,10 +2536,54 @@ static void __reg64_deduce_bounds(struct bpf_reg_state *reg)
>  	}
>  }
>  
> +static void __reg_deduce_mixed_bounds(struct bpf_reg_state *reg)
> +{
> +	/* Try to tighten 64-bit bounds from 32-bit knowledge, using 32-bit
> +	 * values on both sides of the 64-bit range in hope of a tighter range.
> +	 * E.g., if r1 is [0x1'00000000, 0x3'80000000], and we learn from
> +	 * 32-bit signed > 0 operation that s32 bounds are now [1; 0x7fffffff].
> +	 * With this, we can substitute 1 as low 32-bits of _low_ 64-bit bound
> +	 * (0x100000000 -> 0x100000001) and 0x7fffffff as low 32-bits of
> +	 * _high_ 64-bit bound (0x380000000 -> 0x37fffffff) and arrive at
> +	 * better overall bounds for r1 of [0x1'00000001; 0x3'7fffffff].
> +	 * We just need to make sure that derived bounds we are intersecting
> +	 * with are well-formed ranges in the respective s64 or u64 domain, just
> +	 * like we do with similar kinds of 32-to-64 or 64-to-32 adjustments.
> +	 */
> +	__u64 new_umin, new_umax;
> +	__s64 new_smin, new_smax;
> +
> +	/* u32 -> u64 tightening, it's always well-formed */
> +	new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
> +	new_umax = (reg->umax_value & ~0xffffffffULL) | reg->u32_max_value;
> +	reg->umin_value = max_t(u64, reg->umin_value, new_umin);
> +	reg->umax_value = min_t(u64, reg->umax_value, new_umax);
> +	/* u32 -> s64 tightening, u32 range embedded into s64 preserves range validity */
> +	new_smin = (reg->smin_value & ~0xffffffffULL) | reg->u32_min_value;
> +	new_smax = (reg->smax_value & ~0xffffffffULL) | reg->u32_max_value;
> +	reg->smin_value = max_t(s64, reg->smin_value, new_smin);
> +	reg->smax_value = min_t(s64, reg->smax_value, new_smax);
> +
> +	/* if s32 can be treated as valid u32 range, we can use it as well */
> +	if ((u32)reg->s32_min_value <= (u32)reg->s32_max_value) {
> +		/* s32 -> u64 tightening */
> +		new_umin = (reg->umin_value & ~0xffffffffULL) | (u32)reg->s32_min_value;
> +		new_umax = (reg->umax_value & ~0xffffffffULL) | (u32)reg->s32_max_value;
> +		reg->umin_value = max_t(u64, reg->umin_value, new_umin);
> +		reg->umax_value = min_t(u64, reg->umax_value, new_umax);
> +		/* s32 -> s64 tightening */
> +		new_smin = (reg->smin_value & ~0xffffffffULL) | (u32)reg->s32_min_value;
> +		new_smax = (reg->smax_value & ~0xffffffffULL) | (u32)reg->s32_max_value;
> +		reg->smin_value = max_t(s64, reg->smin_value, new_smin);
> +		reg->smax_value = min_t(s64, reg->smax_value, new_smax);
> +	}
> +}
> +
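
As an aside, the worked example in the comment checks out; here is a quick
userspace snippet I used to double-check the low-bits splice (plain C, just
for illustration, not verifier code):

	#include <stdio.h>
	#include <stdint.h>
	#include <inttypes.h>

	int main(void)
	{
		/* 64-bit bounds known before the 32-bit comparison:
		 * [0x1'00000000, 0x3'80000000]
		 */
		uint64_t umin = 0x100000000ULL, umax = 0x380000000ULL;
		/* bounds learnt from the 32-bit "signed > 0" comparison */
		uint32_t u32_min = 1, u32_max = 0x7fffffff;

		/* splice the 32-bit bounds into the low 32 bits of the
		 * 64-bit bounds, exactly as the patch does
		 */
		uint64_t new_umin = (umin & ~0xffffffffULL) | u32_min;
		uint64_t new_umax = (umax & ~0xffffffffULL) | u32_max;

		/* prints [0x100000001, 0x37fffffff], matching the comment */
		printf("[0x%" PRIx64 ", 0x%" PRIx64 "]\n", new_umin, new_umax);
		return 0;
	}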

Guess this might be something you've considered already, but I think it
won't hurt to ask:

All the verifier.c patches up to this point use a lot of

	reg->min_value = max_t(typeof(reg->min_value), reg->min_value, new_min);
	reg->max_value = min_t(typeof(reg->max_value), reg->max_value, new_max);

where min_value/max_value is one of the umin, smin, u32, or s32 pairs.
Could we refactor those out into some form of

	reg_bounds_intersect(reg, new_min, new_max)

The point of this is not really to reduce the number of lines of code, but
to reduce the cognitive load of juggling all the min_t and max_t calls.
With something like reg_bounds_intersect() we only need to check that the
new_min/new_max pair is valid and trust the macro/function itself to
handle the rest correctly.
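
Something along these lines is what I have in mind (completely untested,
and the name/shape is purely illustrative -- I had to add a field-prefix
argument to cover all four min/max pairs):

	/* Intersect the current <pfx>min_value/<pfx>max_value pair with a
	 * caller-validated [new_min, new_max] range, picking the right
	 * type for min_t()/max_t() automatically.
	 */
	#define reg_bounds_intersect(reg, pfx, new_min, new_max)		\
		do {								\
			(reg)->pfx##min_value =					\
				max_t(typeof((reg)->pfx##min_value),		\
				      (reg)->pfx##min_value, new_min);		\
			(reg)->pfx##max_value =					\
				min_t(typeof((reg)->pfx##max_value),		\
				      (reg)->pfx##max_value, new_max);		\
		} while (0)

so that, e.g., the u32 -> u64 tightening above becomes

	new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
	new_umax = (reg->umax_value & ~0xffffffffULL) | reg->u32_max_value;
	reg_bounds_intersect(reg, u, new_umin, new_umax);

and the s64/u32/s32 variants only differ in the prefix token (s, u32_,
s32_) passed in.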

>  static void __reg_deduce_bounds(struct bpf_reg_state *reg)
>  {
>  	__reg32_deduce_bounds(reg);
>  	__reg64_deduce_bounds(reg);
> +	__reg_deduce_mixed_bounds(reg);
>  }
>  
>  /* Attempts to improve var_off based on unsigned min/max information */
> -- 
> 2.34.1
> 



