On Wed, 2022-06-01 at 16:06 +0000, Sean Christopherson wrote:
> On Wed, Jun 01, 2022, Vitaly Kuznetsov wrote:
> > hyperv_clock doesn't always give a stable test result, especially with
> > AMD CPUs. The test compares the Hyper-V MSR clocksource (acquired either
> > with rdmsr() from within the guest or KVM_GET_MSRS from the host)
> > against rdtsc(). To increase the accuracy, increase the measured delay
> > (done with a nop loop) by two orders of magnitude and take the mean
> > rdtsc() value before and after rdmsr()/KVM_GET_MSRS.
>
> Rather than "fixing" the test by reducing the impact of noise, can we first try
> to reduce the noise itself? E.g. pin the test to a single CPU, redo the measurement

Pinning is a good idea overall, however IMHO it should not be done in all
KVM selftests, as vCPU migration itself can be a source of bugs.

> if the test is interrupted (/proc/interrupts?), etc...

This is not feasible IMHO - the timer interrupt alone can fire at a rate of
1000 interrupts/s. Just while reading /proc/interrupts you will probably
take a few interrupts.

> Bonus points if that can
> be implemented as a helper or pair of helpers so that other tests that want to
> measure latency/time don't need to reinvent the wheel.

Best regards,
	Maxim Levitsky
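
PS: for illustration only, a pinning helper along the lines Sean describes
could be as simple as the sketch below, invoked only by tests that opt in.
pin_self_to_cpu() is a made-up name, not an existing selftest helper, and
it assumes glibc's sched_setaffinity() with _GNU_SOURCE:

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>
	#include <stdlib.h>

	/*
	 * Pin the calling task to a single CPU so that timing
	 * measurements aren't perturbed by migrations.  Name and
	 * error handling are illustrative only.
	 */
	static void pin_self_to_cpu(int cpu)
	{
		cpu_set_t mask;

		CPU_ZERO(&mask);
		CPU_SET(cpu, &mask);

		if (sched_setaffinity(0, sizeof(mask), &mask)) {
			perror("sched_setaffinity");
			exit(1);
		}
	}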