Hi Paolo,

On Thu, Apr 08, 2021 at 10:15:16AM +0200, Paolo Bonzini wrote:
> On 07/04/21 19:40, Marcelo Tosatti wrote:
> > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > index fe806e894212..0a83eff40b43 100644
> > > --- a/arch/x86/kvm/x86.c
> > > +++ b/arch/x86/kvm/x86.c
> > > @@ -2562,10 +2562,12 @@ static void kvm_gen_update_masterclock(struct kvm *kvm)
> > >  	kvm_hv_invalidate_tsc_page(kvm);
> > > -	spin_lock(&ka->pvclock_gtod_sync_lock);
> > >  	kvm_make_mclock_inprogress_request(kvm);
> > > +
> > Might be good to serialize against two kvm_gen_update_masterclock
> > callers? Otherwise one caller could clear KVM_REQ_MCLOCK_INPROGRESS,
> > while the other is still at pvclock_update_vm_gtod_copy().
> 
> Makes sense, but this stuff has always seemed unnecessarily complicated
> to me.
> 
> KVM_REQ_MCLOCK_INPROGRESS is only needed to kick running vCPUs out of
> the execution loop;

We do not want vcpus with different system_timestamp/tsc_timestamp pairs:

 * To avoid that problem, do not allow visibility of distinct
 * system_timestamp/tsc_timestamp values simultaneously: use a master
 * copy of host monotonic time values. Update that master copy
 * in lockstep.

So KVM_REQ_MCLOCK_INPROGRESS also ensures that no vcpu enters guest mode
(via the vcpu->requests check before VM-entry) with a
system_timestamp/tsc_timestamp pair that differs from the other vcpus'.

> clearing it in kvm_gen_update_masterclock is unnecessary, because
> KVM_REQ_CLOCK_UPDATE takes pvclock_gtod_sync_lock too and thus will
> already wait for pvclock_update_vm_gtod_copy to end.
> 
> I think it's possible to use a seqcount in KVM_REQ_CLOCK_UPDATE instead
> of KVM_REQ_MCLOCK_INPROGRESS.  Both cause the vCPUs to spin.  I'll take
> a look.
> 
> Paolo
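
For reference, a condensed sketch of the request/check handshake being
discussed. The function and request names follow the real ones in
arch/x86/kvm/x86.c, but the bodies are abbreviated and this is not the
literal kernel code:

	/* Updater side (sketch of kvm_gen_update_masterclock()): */
	static void kvm_gen_update_masterclock(struct kvm *kvm)
	{
		struct kvm_vcpu *vcpu;
		int i;

		/* Set KVM_REQ_MCLOCK_INPROGRESS on every vCPU and kick
		 * it out of guest mode. */
		kvm_make_mclock_inprogress_request(kvm);

		/* Update the master copy of host monotonic time and TSC
		 * in lockstep, while no vCPU can re-enter guest mode. */
		pvclock_update_vm_gtod_copy(kvm);

		/* Ask every vCPU to refresh its pvclock area... */
		kvm_for_each_vcpu(i, vcpu, kvm)
			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);

		/* ...and only then allow guest entry again. */
		kvm_for_each_vcpu(i, vcpu, kvm)
			kvm_clear_request(KVM_REQ_MCLOCK_INPROGRESS, vcpu);
	}

	/* vCPU side (sketch of the final check in vcpu_enter_guest()):
	 * any still-pending request bit aborts the VM-entry, so a vCPU
	 * cannot enter guest mode while MCLOCK_INPROGRESS is set. */
	if (vcpu->mode == EXITING_GUEST_MODE || kvm_request_pending(vcpu)) {
		r = 1;	/* bail out and retry the run loop */
		goto cancel_injection;
	}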
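
And a rough sketch of the seqcount variant Paolo suggests. This is
hypothetical: masterclock_seq, update_masterclock() and
snapshot_masterclock() are assumptions made for illustration, not
existing code; only seqcount_t, the read/write seqcount helpers and the
kvm_arch fields are real:

	/* Hypothetical seqcount protecting the master copy. */
	static seqcount_t masterclock_seq;

	/* Writer: bump the seqcount around the master-copy update, so
	 * readers can detect an update in flight. */
	static void update_masterclock(struct kvm *kvm)
	{
		write_seqcount_begin(&masterclock_seq);
		pvclock_update_vm_gtod_copy(kvm);
		write_seqcount_end(&masterclock_seq);
	}

	/* Reader (e.g. the KVM_REQ_CLOCK_UPDATE handler): retry until a
	 * consistent snapshot of the master copy is observed. */
	static void snapshot_masterclock(struct kvm_arch *ka,
					 u64 *kernel_ns, u64 *cycle_now)
	{
		unsigned int seq;

		do {
			seq = read_seqcount_begin(&masterclock_seq);
			*kernel_ns = ka->master_kernel_ns;
			*cycle_now = ka->master_cycle_now;
		} while (read_seqcount_retry(&masterclock_seq, seq));
	}

The vCPUs still spin, in the retry loop, while an update is in flight,
which matches the observation above that both schemes cause the vCPUs
to spin.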