DIV_ROUND_DOWN_ULL promotes its division arguments to 64-bit, but
DIV_ROUND_UP_ULL did so only for the division, not for the addition
used to round up. This would lead to a wrong result when the 32-bit
addition wraps around. Linux has an explicit cast to fix this, so do
likewise in barebox.

Signed-off-by: Ahmad Fatoum <a.fatoum@xxxxxxxxxxxxxx>
---
 include/linux/math.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/math.h b/include/linux/math.h
index acbffa96e6eb..f5d5cc714c4e 100644
--- a/include/linux/math.h
+++ b/include/linux/math.h
@@ -23,7 +23,8 @@
 #define DIV_ROUND_DOWN_ULL(ll, d) \
 	({ unsigned long long _tmp = (ll); do_div(_tmp, d); _tmp; })
 
-#define DIV_ROUND_UP_ULL(ll, d) DIV_ROUND_DOWN_ULL((ll) + (d) - 1, (d))
+#define DIV_ROUND_UP_ULL(ll, d) \
+	DIV_ROUND_DOWN_ULL((unsigned long long)(ll) + (d) - 1, (d))
 
 #define DIV_ROUND_CLOSEST(x, divisor)(		\
 {						\
-- 
2.39.2
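
For illustration only, not part of the patch: a minimal host-side sketch of
the wraparound described in the commit message. The do_div() stand-in and the
_OLD/_NEW macro names are assumptions made for this sketch, not barebox code;
build with GCC or Clang, as the statement expression is a GNU extension.

/*
 * Sketch of the 32-bit wraparound fixed by this patch.
 * do_div() is modelled here as a plain 64-bit division so the
 * example compiles outside of barebox.
 */
#include <stdint.h>
#include <stdio.h>

/* Host-side stand-in for the barebox do_div() helper (assumption). */
#define do_div(n, base) ((n) /= (base))

#define DIV_ROUND_DOWN_ULL(ll, d) \
	({ unsigned long long _tmp = (ll); do_div(_tmp, d); _tmp; })

/* Old definition: the addition happens in the (32-bit) type of (ll). */
#define DIV_ROUND_UP_ULL_OLD(ll, d) \
	DIV_ROUND_DOWN_ULL((ll) + (d) - 1, (d))

/* New definition: promote to 64-bit before adding, as in the patch. */
#define DIV_ROUND_UP_ULL_NEW(ll, d) \
	DIV_ROUND_DOWN_ULL((unsigned long long)(ll) + (d) - 1, (d))

int main(void)
{
	uint32_t ll = UINT32_MAX;	/* 0xffffffff */
	uint32_t d  = 16;

	/* Old macro: 0xffffffff + 16 - 1 wraps to 0xe in 32-bit
	 * arithmetic, so the division yields 0. */
	printf("old: %llu\n", DIV_ROUND_UP_ULL_OLD(ll, d));

	/* New macro: the addition is done in 64 bits, giving the
	 * expected 0x10000000 (268435456). */
	printf("new: %llu\n", DIV_ROUND_UP_ULL_NEW(ll, d));

	return 0;
}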