Re: [PATCH kvm-unit-tests] KVM: x86: add hyperv clock test case

On Mon, Apr 25, 2016 at 11:47:23AM +0300, Roman Kagan wrote:
> On Fri, Apr 22, 2016 at 08:08:47PM +0200, Paolo Bonzini wrote:
> > On 22/04/2016 15:32, Roman Kagan wrote:
> > > The first value is derived from the kvm_clock's tsc_to_system_mul and
> > > tsc_shift, and matches the host's vcpu->hw_tsc_khz.  The second is
> > > calibrated using emulated HPET.  The difference is those +14 ppm.
> > > 
> > > This is on i7-2600, invariant TSC present, TSC scaling not present.
> > > 
> > > I'll dig further but I'd appreciate any comment on whether it was within
> > > tolerance or not.
> > 
> > The solution to the bug is to change the Hyper-V reference time MSR to
> > use the same formula as the Hyper-V TSC-based clock.  Likewise,
> > KVM_GET_CLOCK and KVM_SET_CLOCK should not use ktime_get_ns().
> 
> Umm, I'm not sure it's a good idea...
> 
> E.g. virtualized HPET sits in userspace and thus uses
> clock_gettime(CLOCK_MONOTONIC), so the drift will remain.
> 
> AFAICT the root cause is the following: KVM master clock uses the same
> multiplier/shift as the vsyscall time in host userspace.  However, the
> offsets in vsyscall_gtod_data get updated all the time with corrections
> from NTP and so on.  Therefore even if the TSC rate is somewhat
> miscalibrated, the error is kept small in vsyscall time functions.  OTOH
> the offsets in KVM clock are basically never updated, so the error keeps
> linearly growing over time.

This seems to be due to a typo:

--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5819,7 +5819,7 @@ static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
        /* disable master clock if host does not trust, or does not
         * use, TSC clocksource
         */
-       if (gtod->clock.vclock_mode != VCLOCK_TSC &&
+       if (gtod->clock.vclock_mode == VCLOCK_TSC &&
            atomic_read(&kvm_guest_has_master_clock) != 0)
                queue_work(system_long_wq, &pvclock_gtod_work);


As a result, the global pvclock_gtod_data was kept up to date, but the
requests to update the per-VM copies were never issued.

With the patch I'm now seeing different test failures which I'm looking
into.

Meanwhile I'm wondering whether this scheme isn't too costly: on my
machine pvclock_gtod_notify() is called at a kHz rate, and the work it
schedules does

static void pvclock_gtod_update_fn(struct work_struct *work)
{
[...]
        spin_lock(&kvm_lock);
        list_for_each_entry(kvm, &vm_list, vm_list)
                kvm_for_each_vcpu(i, vcpu, kvm)
                        kvm_make_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu);
        atomic_set(&kvm_guest_has_master_clock, 0);
        spin_unlock(&kvm_lock);
}

KVM_REQ_MASTERCLOCK_UPDATE makes all VCPUs synchronize:

static void kvm_gen_update_masterclock(struct kvm *kvm)
{
[...]
        spin_lock(&ka->pvclock_gtod_sync_lock);
        kvm_make_mclock_inprogress_request(kvm);
        /* no guest entries from this point */
        pvclock_update_vm_gtod_copy(kvm);

        kvm_for_each_vcpu(i, vcpu, kvm)
                kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);

        /* guest entries allowed */
        kvm_for_each_vcpu(i, vcpu, kvm)
                clear_bit(KVM_REQ_MCLOCK_INPROGRESS, &vcpu->requests);

        spin_unlock(&ka->pvclock_gtod_sync_lock);
[...]
}

so on a host with many VMs it may become an issue.

Roman.


