Re: [PATCH 1/2] KVM: x86: remaster kvm_write_tsc code

2017-04-06 11:08+0300, Denis Plotnikov:
> Reuse existing code instead of using inline asm.
> Make the code more concise and clear in the TSC
> synchronization part.
> 
> Signed-off-by: Denis Plotnikov <dplotnikov@xxxxxxxxxxxxx>
> Reviewed-by: Roman Kagan <rkagan@xxxxxxxxxxxxx>
> ---
>  arch/x86/kvm/x86.c | 51 ++++++++++++---------------------------------------
>  1 file changed, 12 insertions(+), 39 deletions(-)

I like this patch a lot.

> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> @@ -1455,51 +1455,24 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	elapsed = ns - kvm->arch.last_tsc_nsec;
>  
>  	if (vcpu->arch.virtual_tsc_khz) {
> -		int faulted = 0;
> -
> -		/* n.b - signed multiplication and division required */
> -		usdiff = data - kvm->arch.last_tsc_write;
> -#ifdef CONFIG_X86_64
> -		usdiff = (usdiff * 1000) / vcpu->arch.virtual_tsc_khz;
> -#else
> -		/* do_div() only does unsigned */
> -		asm("1: idivl %[divisor]\n"
> -		    "2: xor %%edx, %%edx\n"
> -		    "   movl $0, %[faulted]\n"
> -		    "3:\n"
> -		    ".section .fixup,\"ax\"\n"
> -		    "4: movl $1, %[faulted]\n"
> -		    "   jmp  3b\n"
> -		    ".previous\n"
> -
> -		_ASM_EXTABLE(1b, 4b)
> -
> -		: "=A"(usdiff), [faulted] "=r" (faulted)
> -		: "A"(usdiff * 1000), [divisor] "rm"(vcpu->arch.virtual_tsc_khz));
> -
> -#endif
> -		do_div(elapsed, 1000);

Oh, this is actually fixing a bug, because we later consider elapsed in
nanoseconds, but this one converts it to microseconds ...

> -		usdiff -= elapsed;
> -		if (usdiff < 0)
> -			usdiff = -usdiff;
> -
> -		/* idivl overflow => difference is larger than USEC_PER_SEC */
> -		if (faulted)
> -			usdiff = USEC_PER_SEC;
> -	} else
> -		usdiff = USEC_PER_SEC; /* disable TSC match window below */
> +		u64 tsc_exp = kvm->arch.last_tsc_write +
> +					nsec_to_cycles(vcpu, elapsed);
> +		u64 tsc_hz = vcpu->arch.virtual_tsc_khz * 1000LL;
> +		/*
> +		 * Special case: TSC write with a small delta (1 second) of virtual
> +		 * cycle time against real time is interpreted as an attempt to
> +		 * synchronize the CPU.
> +		 */
> +		synchronizing = data < tsc_exp + tsc_hz && data > tsc_exp - tsc_hz;

This condition is wrong -- tsc_exp and tsc_hz are unsigned, so if
tsc_exp < tsc_hz, then tsc_exp - tsc_hz wraps around to a huge value
and the comparison resolves as false.  It should read:

   synchronizing = data < tsc_exp + tsc_hz && data + tsc_hz > tsc_exp;

> +	}
>  
>  	/*
> -	 * Special case: TSC write with a small delta (1 second) of virtual
> -	 * cycle time against real time is interpreted as an attempt to
> -	 * synchronize the CPU.
> -         *
>  	 * For a reliable TSC, we can match TSC offsets, and for an unstable
>  	 * TSC, we add elapsed time in this computation.  We could let the
>  	 * compensation code attempt to catch up if we fall behind, but
>  	 * it's better to try to match offsets from the beginning.
>           */
> -	if (usdiff < USEC_PER_SEC &&
> +	if (synchronizing &&
>  	    vcpu->arch.virtual_tsc_khz == kvm->arch.last_tsc_khz) {
>  		if (!check_tsc_unstable()) {
>  			offset = kvm->arch.cur_tsc_offset;
> -- 
> 1.8.3.1
> 
