Sean Christopherson <seanjc@xxxxxxxxxx> writes:

> On Wed, Jun 01, 2022, Vitaly Kuznetsov wrote:
>> hyperv_clock doesn't always give a stable test result, especially with
>> AMD CPUs. The test compares the Hyper-V MSR clocksource (acquired either
>> with rdmsr() from within the guest or with KVM_GET_MSRS from the host)
>> against rdtsc(). To increase the accuracy, increase the measured delay
>> (done with a nop loop) by two orders of magnitude and take the mean of
>> the rdtsc() values measured before and after rdmsr()/KVM_GET_MSRS.
>
> Rather than "fixing" the test by reducing the impact of noise, can we first try
> to reduce the noise itself? E.g. pin the test to a single CPU, redo the measurement
> if the test is interrupted (/proc/interrupts?), etc... Bonus points if that can
> be implemented as a helper or pair of helpers so that other tests that want to
> measure latency/time don't need to reinvent the wheel.

While I'm not certain that migration to another CPU was always the culprit
here (the measured interval may simply be too short), I agree these are
good ideas and will look into them, thanks! A completely untested sketch of
the kind of helpers I have in mind is below.

-- 
Vitaly
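P.S. The rough idea: pin the task to one CPU with sched_setaffinity(), then
bracket each measurement with a sum of that CPU's interrupt counts from
/proc/interrupts and retry if the sum changed. The helper names
(pin_self_to_cpu()/irqs_on_cpu()) are made up, nothing like this exists in
selftests today, and the parsing assumes online CPUs are numbered
contiguously from 0:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Pin the calling thread to @cpu so the measurement can't migrate. */
static void pin_self_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		exit(1);
	}
}

/*
 * Sum all interrupt counts for @cpu from /proc/interrupts.  Assumes the
 * column for CPU<N> in the header row is column N, i.e. online CPUs are
 * contiguous from 0.
 */
static unsigned long irqs_on_cpu(int cpu)
{
	unsigned long total = 0;
	char line[4096];
	FILE *f;

	f = fopen("/proc/interrupts", "r");
	if (!f) {
		perror("fopen");
		exit(1);
	}

	/* Skip the "CPU0 CPU1 ..." header row. */
	if (!fgets(line, sizeof(line), f))
		goto out;

	while (fgets(line, sizeof(line), f)) {
		char *p = strchr(line, ':');
		int col;

		if (!p)
			continue;
		p++;

		for (col = 0; col <= cpu; col++) {
			char *end;
			unsigned long val = strtoul(p, &end, 10);

			/* Rows like ERR:/MIS: have fewer columns. */
			if (end == p)
				break;
			if (col == cpu)
				total += val;
			p = end;
		}
	}
out:
	fclose(f);
	return total;
}

A test would then retry until a sample completes without the pinned CPU
taking any interrupts, e.g.:

	unsigned long before, after;
	int cpu = 0;

	pin_self_to_cpu(cpu);
	do {
		before = irqs_on_cpu(cpu);
		/* ... the timed rdtsc()/rdmsr() measurement ... */
		after = irqs_on_cpu(cpu);
	} while (before != after);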