On 12/22/21 5:10 AM, Jackie Liu wrote:
From: Jackie Liu <liuyun01@xxxxxxxxxx>
For an s32 operand, the comparison 'a <= S32_MAX' is always true
regardless of its value, so __reg32_bound_s64() reduces to a plain
sign check. Let's clean it up.
Fixes: e572ff80f05c ("bpf: Make 32->64 bounds propagation slightly more robust")
Reported-by: k2ci <kernel-bot@xxxxxxxxxx>
Signed-off-by: Jackie Liu <liuyun01@xxxxxxxxxx>
---
kernel/bpf/verifier.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b532f1058d35..43812ee58304 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1366,11 +1366,6 @@ static void __reg_bound_offset(struct bpf_reg_state *reg)
reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
}
-static bool __reg32_bound_s64(s32 a)
-{
- return a >= 0 && a <= S32_MAX;
-}
The following bpf tree commit triggered the above change:
commit e572ff80f05c33cd0cb4860f864f5c9c044280b6
Author: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
Date: Wed Dec 15 22:28:48 2021 +0000
bpf: Make 32->64 bounds propagation slightly more robust
There is no need to fix the bpf tree since this patch is just a
cleanup with no functional change.
Maybe wait until the above commit is available in bpf-next and then
submit this patch again? Daniel, do you have any suggestions for this patch?
-
static void __reg_assign_32_into_64(struct bpf_reg_state *reg)
{
reg->umin_value = reg->u32_min_value;
@@ -1380,8 +1375,7 @@ static void __reg_assign_32_into_64(struct bpf_reg_state *reg)
* be positive otherwise set to worse case bounds and refine later
* from tnum.
*/
- if (__reg32_bound_s64(reg->s32_min_value) &&
- __reg32_bound_s64(reg->s32_max_value)) {
+ if (reg->s32_min_value >= 0 && reg->s32_max_value >= 0) {
reg->smin_value = reg->s32_min_value;
reg->smax_value = reg->s32_max_value;
} else {