From: Steven Noonan <steven@xxxxxxxxxxxxxx>

There is some overhead in writing and reading MSR_IA32_TSC, which we
try to account for, but the overhead is sometimes under- or
over-estimated. When the synchronization is retried, the clock then
appears to "go backwards". Hence, ignore random warps when direct
sync is in use.

Signed-off-by: Steven Noonan <steven@xxxxxxxxxxxxxx>
Signed-off-by: Muhammad Usama Anjum <usama.anjum@xxxxxxxxxxxxx>
---
 arch/x86/kernel/tsc_sync.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
index 2a855991f982..1fc751212a0e 100644
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -405,7 +405,7 @@ void check_tsc_sync_source(int cpu)
 		pr_debug("TSC synchronization [CPU#%d -> CPU#%d]: passed\n",
 			 smp_processor_id(), cpu);
-	} else if (atomic_dec_and_test(&test_runs) || random_warps) {
+	} else if (atomic_dec_and_test(&test_runs) || (random_warps && !tsc_allow_direct_sync)) {
 		/* Force it to 0 if random warps brought us here */
 		atomic_set(&test_runs, 0);
-- 
2.30.2