Re: [PATCH v4 bpf-next 3/7] bpf: enhance subregister bounds deduction logic

On Sun, Oct 22, 2023 at 01:57:39PM -0700, Andrii Nakryiko wrote:
> Add handling of a bunch of possible cases which allows deducing extra
> information about subregister bounds, both u32 and s32, from full register
> u64/s64 bounds.
> 
> Also add smin32/smax32 bounds derivation from corresponding umin32/umax32
> bounds, similar to what we did with smin/smax from umin/umax derivation in
> the previous patch.
> 
> Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> 
> ---
>  kernel/bpf/verifier.c | 52 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 52 insertions(+)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 885dd4a2ff3a..3fc9bd5e72b8 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2130,6 +2130,58 @@ static void __update_reg_bounds(struct bpf_reg_state *reg)
>  /* Uses signed min/max values to inform unsigned, and vice-versa */
>  static void __reg32_deduce_bounds(struct bpf_reg_state *reg)
>  {
> +	/* if upper 32 bits of u64/s64 range don't change,
> +	 * we can use lower 32 bits to improve our u32/s32 boundaries
> +	 */
> +	if ((reg->umin_value >> 32) == (reg->umax_value >> 32)) {
> +		/* u64 to u32 casting preserves validity of low 32 bits as
> +		 * a range, if upper 32 bits are the same
> +		 */
> +		reg->u32_min_value = max_t(u32, reg->u32_min_value, (u32)reg->umin_value);
> +		reg->u32_max_value = min_t(u32, reg->u32_max_value, (u32)reg->umax_value);
> +
> +		if ((s32)reg->umin_value <= (s32)reg->umax_value) {
> +			reg->s32_min_value = max_t(s32, reg->s32_min_value, (s32)reg->umin_value);
> +			reg->s32_max_value = min_t(s32, reg->s32_max_value, (s32)reg->umax_value);
> +		}
> +	}
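
(A concrete example with numbers I made up, not taken from the patch: if
umin/umax are 0x300000005 and 0x3000000ff, the upper 32 bits are both
0x3, so the low 32 bits give a valid u32 range [0x5, 0xff]; both ends
are also non-negative as s32, so [5, 255] is learned as an s32 range
too. If the low bits were instead [0x7fffffff, 0x80000001], the s32
check fails, since (s32)0x7fffffff > (s32)0x80000001, and only the u32
bounds get tightened.)
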
> +	if ((reg->smin_value >> 32) == (reg->smax_value >> 32)) {
> +		/* low 32 bits should form a proper u32 range */
> +		if ((u32)reg->smin_value <= (u32)reg->smax_value) {
> +			reg->u32_min_value = max_t(u32, reg->u32_min_value, (u32)reg->smin_value);
> +			reg->u32_max_value = min_t(u32, reg->u32_max_value, (u32)reg->smax_value);
> +		}
> +		/* low 32 bits should form a proper s32 range */
> +		if ((s32)reg->smin_value <= (s32)reg->smax_value) {
> +			reg->s32_min_value = max_t(s32, reg->s32_min_value, (s32)reg->smin_value);
> +			reg->s32_max_value = min_t(s32, reg->s32_max_value, (s32)reg->smax_value);
> +		}
> +	}
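
(Again with made-up numbers: for an s64 range [-16, -1], i.e.
[0xfffffffffffffff0, 0xffffffffffffffff], the upper 32 bits are both
0xffffffff, the low 32 bits [0xfffffff0, 0xffffffff] form a valid u32
range, and as s32 they give [-16, -1], so both subregister ranges get
tightened here.)
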
> +	/* Special case where upper bits form a small sequence of two
> +	 * sequential numbers (in 32-bit unsigned space, so 0xffffffff to
> +	 * 0x00000000 is also valid), while lower bits form a proper s32 range
> +	 * going from negative numbers to positive numbers.
> +	 * E.g.: [0xfffffff0ffffff00; 0xfffffff100000010]. Iterating
> +	 * over the full 64-bit range will form a proper [-256, 16]
> +	 * ([0xffffff00; 0x00000010]) range in its lower 32 bits.
> +	 */

Not sure if we want ASCII art here, but I thought it'd be useful to share.
It took a while to wrap my head around this concept until I looked at it
as number lines.

Say we've got umin, umax tracked like so (the asterisks * mark the sequence
of numbers we believe can occur).

              u64            
  |--------***--------------|
   {  32-bits }{  32-bits  }

And s32_min, s32_max tracked like so.

                           s32
                     |***---------|

The above u64 range can map into two possible s32 ranges once we've removed
the upper 32 bits.

              u64               same u64 wrapped
  |--------***--------------|-----...
           |||
        |--***-------|------------|
              s32          s32
 
Since both s32 ranges are possible, we take the union between them and the
s32 range we're already tracking,

        |------------|
        |--***-------|
        |***---------|

And we arrive at the final s32 range.

        |*****-------|

Taking this (wrapped) number line view and operating on the ranges with set
operations (the latter is similar to what tnum does) is quite useful, and I
think it hints that we may be able to unify signed and unsigned range
tracking. I'll look into this a bit more and send a follow-up.
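
If it helps, here's a tiny self-contained userspace sketch (mine, not
part of the patch) that feeds the example from the patch comment through
the same special-case check as the hunk quoted below, just to show the
wrapped number line producing an s32 range of [-256, 16]:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* example u64 range from the patch comment */
		uint64_t umin = 0xfffffff0ffffff00ULL;
		uint64_t umax = 0xfffffff100000010ULL;
		/* nothing tracked for the subregister yet */
		int32_t s32_min = INT32_MIN, s32_max = INT32_MAX;

		/* upper 32 bits are two consecutive values (0xfffffff0 and
		 * 0xfffffff1) and the low 32 bits go from a negative s32 to
		 * a non-negative s32, so the low 32 bits still form one
		 * contiguous s32 range across the 32-bit wrap
		 */
		if ((uint32_t)(umin >> 32) + 1 == (uint32_t)(umax >> 32) &&
		    (int32_t)umin < 0 && (int32_t)umax >= 0) {
			if ((int32_t)umin > s32_min)
				s32_min = (int32_t)umin;
			if ((int32_t)umax < s32_max)
				s32_max = (int32_t)umax;
		}

		/* prints: s32 range: [-256, 16] */
		printf("s32 range: [%d, %d]\n", s32_min, s32_max);
		return 0;
	}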

> +	if ((u32)(reg->umin_value >> 32) + 1 == (u32)(reg->umax_value >> 32) &&
> +	    (s32)reg->umin_value < 0 && (s32)reg->umax_value >= 0) {
> +		reg->s32_min_value = max_t(s32, reg->s32_min_value, (s32)reg->umin_value);
> +		reg->s32_max_value = min_t(s32, reg->s32_max_value, (s32)reg->umax_value);
> +	}
> +	if ((u32)(reg->smin_value >> 32) + 1 == (u32)(reg->smax_value >> 32) &&
> +	    (s32)reg->smin_value < 0 && (s32)reg->smax_value >= 0) {
> +		reg->s32_min_value = max_t(s32, reg->s32_min_value, (s32)reg->smin_value);
> +		reg->s32_max_value = min_t(s32, reg->s32_max_value, (s32)reg->smax_value);
> +	}
> +	/* if u32 range forms a valid s32 range (due to matching sign bit),
> +	 * try to learn from that
> +	 */
> +	if ((s32)reg->u32_min_value <= (s32)reg->u32_max_value) {
> +		reg->s32_min_value = max_t(s32, reg->s32_min_value, reg->u32_min_value);
> +		reg->s32_max_value = min_t(s32, reg->s32_max_value, reg->u32_max_value);
> +	}
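
(And a last made-up example for the u32 -> s32 case: a u32 range
[0x80000000, 0xffffffff] has the sign bit set on both ends, so as s32 it
is the valid range [-2147483648, -1] and the s32 bounds can be tightened;
a u32 range like [0x7fffffff, 0x80000001] crosses the sign boundary, the
s32 check fails, and nothing is learned from it.)
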
>  	/* Learn sign from signed bounds.
>  	 * If we cannot cross the sign boundary, then signed and unsigned bounds
>  	 * are the same, so combine.  This works even in the negative case, e.g.
> -- 
> 2.34.1
> 



