Re: [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations

On Mon, Jun 15, 2020 at 4:07 PM Jim Mattson <jmattson@xxxxxxxxxx> wrote:
>
> Start a new TSC synchronization generation whenever the
> IA32_TIME_STAMP_COUNTER MSR is written on a vCPU that has already
> participated in the current TSC synchronization generation.
>
> Previously, it was not possible to restore the IA32_TIME_STAMP_COUNTER
> MSR to a value less than the TSC frequency. Since vCPU initialization
> sets the IA32_TIME_STAMP_COUNTER MSR to zero, a subsequent
> KVM_SET_MSRS ioctl that attempted to write a small value to the
> IA32_TIME_STAMP_COUNTER MSR was viewed as an attempt at TSC
> synchronization. Notably, this was the case even for single vCPU VMs,
> which were always synchronized.
>
> Signed-off-by: Jim Mattson <jmattson@xxxxxxxxxx>
> Reviewed-by: Peter Shier <pshier@xxxxxxxxxx>
> Reviewed-by: Oliver Upton <oupton@xxxxxxxxxx>
> ---
>  arch/x86/kvm/x86.c | 13 +++++--------
>  1 file changed, 5 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9e41b5135340..2555ea2cd91e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2015,7 +2015,6 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
>         u64 offset, ns, elapsed;
>         unsigned long flags;
>         bool matched;
> -       bool already_matched;
>         u64 data = msr->data;
>         bool synchronizing = false;
>
> @@ -2032,7 +2031,8 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
>                          * kvm_clock stable after CPU hotplug
>                          */
>                         synchronizing = true;
> -               } else {
> +               } else if (vcpu->arch.this_tsc_generation !=
> +                          kvm->arch.cur_tsc_generation) {
>                         u64 tsc_exp = kvm->arch.last_tsc_write +
>                                                 nsec_to_cycles(vcpu, elapsed);
>                         u64 tsc_hz = vcpu->arch.virtual_tsc_khz * 1000LL;
> @@ -2062,7 +2062,6 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
>                         offset = kvm_compute_tsc_offset(vcpu, data);
>                 }
>                 matched = true;
> -               already_matched = (vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
>         } else {
>                 /*
>                  * We split periods of matched TSC writes into generations.
> @@ -2102,12 +2101,10 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
>         raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
>
>         spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
> -       if (!matched) {
> -               kvm->arch.nr_vcpus_matched_tsc = 0;
> -       } else if (!already_matched) {
> +       if (matched)
>                 kvm->arch.nr_vcpus_matched_tsc++;
> -       }
> -
> +       else
> +               kvm->arch.nr_vcpus_matched_tsc = 0;
>         kvm_track_tsc_matching(vcpu);
>         spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);
>  }
> --
> 2.27.0.290.gba653c62da-goog
>
Ping.
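
For context, a rough userspace sketch (not part of the patch; restore_guest_tsc() and
the vcpu_fd setup are hypothetical, while KVM_SET_MSRS, struct kvm_msrs and
struct kvm_msr_entry are the standard KVM UAPI) of the kind of TSC restore that,
before this change, was treated as a synchronization attempt even on a single-vCPU VM:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* IA32_TIME_STAMP_COUNTER; not exported by the KVM UAPI headers. */
#define MSR_IA32_TSC 0x00000010

/*
 * Hypothetical helper: restore a saved TSC value on an existing vCPU fd
 * (obtained earlier via KVM_CREATE_VCPU).  Since vCPU initialization leaves
 * IA32_TIME_STAMP_COUNTER at zero, a small tsc_value written here lands
 * within one second's worth of cycles of the last TSC write and was
 * previously interpreted as a synchronization attempt rather than starting
 * a new generation.
 */
static int restore_guest_tsc(int vcpu_fd, uint64_t tsc_value)
{
	/* Common trick to allocate kvm_msrs plus one entry in one object. */
	struct {
		struct kvm_msrs hdr;
		struct kvm_msr_entry entry;
	} msrs;

	memset(&msrs, 0, sizeof(msrs));
	msrs.hdr.nmsrs = 1;
	msrs.entry.index = MSR_IA32_TSC;
	msrs.entry.data = tsc_value;

	/* Returns the number of MSRs successfully written (1 on success). */
	return ioctl(vcpu_fd, KVM_SET_MSRS, &msrs.hdr);
}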


