In an effort to separate intentional arithmetic wrap-around from
unexpected wrap-around, we need to refactor places that depend on this
kind of math. One of the most common such code patterns is:

	VAR + value < VAR

Notably, this is considered "undefined behavior" for signed and pointer
types, which the kernel works around by using the -fno-strict-overflow
option in the build[1] (which used to just be -fwrapv). Regardless, we
want to get the kernel source to a position where we can meaningfully
instrument arithmetic wrap-around conditions and catch them when they
are unexpected, whether they involve signed[2], unsigned[3], or
pointer[4] types.

Refactor the open-coded wrap-around addition tests to use
add_would_overflow(). This paves the way to enabling the wrap-around
sanitizers in the future.

Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
Link: https://github.com/KSPP/linux/issues/26 [2]
Link: https://github.com/KSPP/linux/issues/27 [3]
Link: https://github.com/KSPP/linux/issues/344 [4]
Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
Cc: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
Cc: John Fastabend <john.fastabend@xxxxxxxxx>
Cc: Andrii Nakryiko <andrii@xxxxxxxxxx>
Cc: Martin KaFai Lau <martin.lau@xxxxxxxxx>
Cc: Song Liu <song@xxxxxxxxxx>
Cc: Yonghong Song <yonghong.song@xxxxxxxxx>
Cc: KP Singh <kpsingh@xxxxxxxxxx>
Cc: Stanislav Fomichev <sdf@xxxxxxxxxx>
Cc: Hao Luo <haoluo@xxxxxxxxxx>
Cc: Jiri Olsa <jolsa@xxxxxxxxxx>
Cc: bpf@xxxxxxxxxxxxxxx
Signed-off-by: Kees Cook <keescook@xxxxxxxxxxxx>
---
 kernel/bpf/verifier.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 65f598694d55..21e3f30c8757 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12901,8 +12901,8 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 			dst_reg->smin_value = smin_ptr + smin_val;
 			dst_reg->smax_value = smax_ptr + smax_val;
 		}
-		if (umin_ptr + umin_val < umin_ptr ||
-		    umax_ptr + umax_val < umax_ptr) {
+		if (add_would_overflow(umin_ptr, umin_val) ||
+		    add_would_overflow(umax_ptr, umax_val)) {
 			dst_reg->umin_value = 0;
 			dst_reg->umax_value = U64_MAX;
 		} else {
@@ -13023,8 +13023,8 @@ static void scalar32_min_max_add(struct bpf_reg_state *dst_reg,
 		dst_reg->s32_min_value += smin_val;
 		dst_reg->s32_max_value += smax_val;
 	}
-	if (dst_reg->u32_min_value + umin_val < umin_val ||
-	    dst_reg->u32_max_value + umax_val < umax_val) {
+	if (add_would_overflow(umin_val, dst_reg->u32_min_value) ||
+	    add_would_overflow(umax_val, dst_reg->u32_max_value)) {
 		dst_reg->u32_min_value = 0;
 		dst_reg->u32_max_value = U32_MAX;
 	} else {
@@ -13049,8 +13049,8 @@ static void scalar_min_max_add(struct bpf_reg_state *dst_reg,
 		dst_reg->smin_value += smin_val;
 		dst_reg->smax_value += smax_val;
 	}
-	if (dst_reg->umin_value + umin_val < umin_val ||
-	    dst_reg->umax_value + umax_val < umax_val) {
+	if (add_would_overflow(umin_val, dst_reg->umin_value) ||
+	    add_would_overflow(umax_val, dst_reg->umax_value)) {
 		dst_reg->umin_value = 0;
 		dst_reg->umax_value = U64_MAX;
 	} else {
-- 
2.34.1
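
A note for anyone unfamiliar with the helper: the intent is that
add_would_overflow(a, b) reports whether a + b would wrap, without
relying on the wrapped result. Below is a minimal userspace sketch of
those semantics, for illustration only. The helper's actual kernel
definition may differ; the only assumption made here is that it behaves
like the compiler's __builtin_add_overflow() with the sum discarded.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch: report whether a + b would wrap for u64 operands. */
static bool add_would_overflow_u64(uint64_t a, uint64_t b)
{
	uint64_t sum;

	/* __builtin_add_overflow() returns true when the add wraps. */
	return __builtin_add_overflow(a, b, &sum);
}

int main(void)
{
	/* Same result as the old open-coded test: (a + b < a). */
	printf("%d\n", add_would_overflow_u64(UINT64_MAX, 1));	/* 1 */
	printf("%d\n", add_would_overflow_u64(1, 2));		/* 0 */
	return 0;
}

Unlike the open-coded (a + b < a) pattern, this form is also
well-defined for signed operands, which is what lets the wrap-around
sanitizers treat any remaining wrap-around as unexpected.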