Re: [PATCH bpf-next v3 2/2] selftests/bpf: Add ldsx selftests for ldsx and subreg compare

On 7/16/24 5:12 PM, Eduard Zingerman wrote:
On Tue, 2024-07-16 at 15:38 -0700, Yonghong Song wrote:

[...]

diff --git a/tools/testing/selftests/bpf/prog_tests/reg_bounds.c b/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
index eb74363f9f70..c88602908cfe 100644
--- a/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
+++ b/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
@@ -441,6 +441,22 @@ static struct range range_refine(enum num_t x_t, struct range x, enum num_t y_t,
          if (t_is_32(y_t) && !t_is_32(x_t)) {
                  struct range x_swap;
+ /* If we know that
+                *   - *x* is in the range of signed 32-bit values, and
+                *   - the *y_cast* range is 32-bit sign non-negative,
+                * then the *x* range can be narrowed to the intersection of
+                * *x* and *y_cast*. Otherwise, if the new range for *x*
+                * allowed upper 32 bits of 0xffffffff, the eventual new
+                * range for *x* would be outside the signed 32-bit range,
+                * violating the original *x* range.
+                */
+               if (x_t == S64 && y_t == S32 &&
+                   !(y_cast.a & 0xffffffff80000000ULL) && !(y_cast.b & 0xffffffff80000000ULL) &&
+                   (long long)x.a >= S32_MIN && (long long)x.b <= S32_MAX) {
+                               return range(S64, max_t(S64, y_cast.a, x.a),
+                                            min_t(S64, y_cast.b, x.b));
+               }
+
                  /* some combinations of upper 32 bits and sign bit can lead to
                   * invalid ranges, in such cases it's easier to detect them
                   * after cast/swap than try to enumerate all the conditions
@@ -2108,6 +2124,9 @@ static struct subtest_case crafted_cases[] = {
          {S32, U32, {(u32)S32_MIN, 0}, {0, 0}},
          {S32, U32, {(u32)S32_MIN, 0}, {(u32)S32_MIN, (u32)S32_MIN}},
          {S32, U32, {(u32)S32_MIN, S32_MAX}, {S32_MAX, S32_MAX}},
+       {S64, U32, {0x0, 0x1f}, {0xffffffff80000000ULL, 0x000000007fffffffULL}},
+       {S64, U32, {0x0, 0x1f}, {0xffffffffffff8000ULL, 0x0000000000007fffULL}},
+       {S64, U32, {0x0, 0x1f}, {0xffffffffffffff80ULL, 0x000000000000007fULL}},
   };
/* Go over crafted hard-coded cases. This is fast, so we do it as part of

The logic is very similar to the kernel implementation, but differs in how the final range
is generated. In the reg_bounds implementation, the range is narrowed by intersecting the
y_cast and x ranges, which is necessary.

In the kernel implementation there is no intersection, since we only have one register and
the two registers have already been compared.

Eduard, could you take a look at the above code?
I think this change is correct.
The return clause could be simplified a bit:

	return range_improve(x_t, x, y_cast);

Indeed. This is much simpler. I will use reg_bounds testing instead of verifier_ldsx
testing in the next revision.


[...]
