From: Colin Ian King <colin.i.king@xxxxxxxxx>

commit 08245672cdc6505550d1a5020603b0a8d4a6dcc7 upstream.

The left shift of the 32-bit integer constant 1 is evaluated using 32-bit
arithmetic and then passed as a 64-bit function argument. In the case
where i is 32 or more this can lead to an overflow. Avoid this by
shifting using the BIT_ULL macro instead.

Fixes: 471af006a747 ("perf/x86/amd: Constrain Large Increment per Cycle events")
Signed-off-by: Colin Ian King <colin.i.king@xxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Acked-by: Ian Rogers <irogers@xxxxxxxxxx>
Acked-by: Kim Phillips <kim.phillips@xxxxxxx>
Link: https://lore.kernel.org/r/20221202135149.1797974-1-colin.i.king@xxxxxxxxx
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/events/amd/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -1387,7 +1387,7 @@ static int __init amd_core_pmu_init(void
 	 * numbered counter following it.
 	 */
 	for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
-		even_ctr_mask |= 1 << i;
+		even_ctr_mask |= BIT_ULL(i);
 
 	pair_constraint = (struct event_constraint)
 		__EVENT_CONSTRAINT(0, even_ctr_mask, 0,
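
[Editorial note, not part of the patch: the sketch below illustrates the
failure mode the commit message describes. MY_BIT_ULL is a hypothetical
stand-in for the kernel's BIT_ULL(), which expands to roughly 1ULL << (n).]

#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for the kernel's BIT_ULL() helper. */
#define MY_BIT_ULL(n)	(1ULL << (n))

int main(void)
{
	uint64_t mask32 = 0, mask64 = 0;
	volatile int i = 32;	/* a counter index of 32 or more triggers the bug */

	/*
	 * "1 << i" is evaluated in 32-bit int arithmetic; shifting a 32-bit
	 * int by 32 or more bits is undefined behaviour, so the value that
	 * gets widened into the 64-bit mask is unreliable.
	 */
	mask32 |= 1 << i;

	/* The 64-bit shift stays well defined for bit positions up to 63. */
	mask64 |= MY_BIT_ULL(i);

	printf("32-bit shift result: %#llx\n", (unsigned long long)mask32);
	printf("64-bit shift result: %#llx\n", (unsigned long long)mask64);
	return 0;
}

On a typical x86-64 build the first mask comes out as 0x1 (the hardware masks
the shift count to 5 bits), while the second is 0x100000000 as intended;
either way the 32-bit shift cannot set bits 32..63 of the even-counter mask,
which is why the fix switches to BIT_ULL().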