Re: [PATCH 1/2] KVM: x86: reduce pvclock_gtod_sync_lock critical sections

On 08/04/21 14:00, Marcelo Tosatti wrote:

>> KVM_REQ_MCLOCK_INPROGRESS is only needed to kick running vCPUs out of the
>> execution loop;
>
> We do not want vcpus with different system_timestamp/tsc_timestamp
> pair:
>
>   * To avoid that problem, do not allow visibility of distinct
>   * system_timestamp/tsc_timestamp values simultaneously: use a master
>   * copy of host monotonic time values. Update that master copy
>   * in lockstep.
>
> So KVM_REQ_MCLOCK_INPROGRESS also ensures that no vcpu enters
> guest mode (via vcpu->requests check before VM-entry) with a
> different system_timestamp/tsc_timestamp pair.

Yes, this is what KVM_REQ_MCLOCK_INPROGRESS does, but it does not have to be done that way. All you really need is the IPI with KVM_REQUEST_WAIT, which ensures that updates happen after the vCPUs have exited guest mode. You don't need to loop on vcpu->requests, for example, because kvm_guest_time_update can just spin on pvclock_gtod_sync_lock until pvclock_update_vm_gtod_copy is done.
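
Something along these lines (just a sketch to show the idea, not the actual patch; it assumes the clock update request used here is defined with KVM_REQUEST_WAIT via KVM_ARCH_REQ_FLAGS):

/*
 * Sketch only, not the actual patch.  Assumes KVM_REQ_CLOCK_UPDATE is
 * defined with KVM_REQUEST_WAIT, so kvm_make_all_cpus_request() returns
 * only after every running vCPU has been kicked out of guest mode.
 */
static void kvm_update_masterclock(struct kvm *kvm)
{
	unsigned long flags;

	/* Kick vCPUs and wait until they have left guest mode. */
	kvm_make_all_cpus_request(kvm, KVM_REQ_CLOCK_UPDATE);

	spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
	pvclock_update_vm_gtod_copy(kvm);
	spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);

	/*
	 * A vCPU that re-enters the guest processes KVM_REQ_CLOCK_UPDATE
	 * first; kvm_guest_time_update() takes the same lock before reading
	 * the master copy, so it simply spins until
	 * pvclock_update_vm_gtod_copy() is done.
	 */
}

kvm_guest_time_update() already snapshots use_master_clock, master_kernel_ns and master_cycle_now under pvclock_gtod_sync_lock, so no extra round trip through vcpu->requests is needed.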

So this morning I tried protecting the kvm->arch fields for kvmclock with a seqcount, which is also nice because get_kvmclock_ns() no longer has to bounce the cacheline of pvclock_gtod_sync_lock. I'll post it tomorrow or next week.
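
Roughly like this (again just a sketch; the seqcount field and the helper names below are made up):

/*
 * Sketch only.  Assumes a new seqcount (here called pvclock_sc) added to
 * struct kvm_arch next to pvclock_gtod_sync_lock.  Writers keep taking
 * the spinlock and additionally bump the seqcount; readers such as
 * get_kvmclock_ns() only retry on the seqcount and never write the
 * spinlock's cacheline.
 */
static void kvm_update_masterclock_locked(struct kvm *kvm)
{
	struct kvm_arch *ka = &kvm->arch;

	lockdep_assert_held(&ka->pvclock_gtod_sync_lock);

	write_seqcount_begin(&ka->pvclock_sc);
	pvclock_update_vm_gtod_copy(kvm);
	write_seqcount_end(&ka->pvclock_sc);
}

static void kvm_get_masterclock(struct kvm *kvm, bool *use_master_clock,
				s64 *kernel_ns, u64 *host_tsc)
{
	struct kvm_arch *ka = &kvm->arch;
	unsigned int seq;

	/* Lockless read side: retry if a writer raced with us. */
	do {
		seq = read_seqcount_begin(&ka->pvclock_sc);
		*use_master_clock = ka->use_master_clock;
		*kernel_ns = ka->master_kernel_ns;
		*host_tsc = ka->master_cycle_now;
	} while (read_seqcount_retry(&ka->pvclock_sc, seq));
}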

Paolo



