On Wed, Jun 16, 2010 at 04:10:10PM +0800, Jason Wang wrote:
> Zachary Amsden wrote:
> >
> >  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> >  {
> > +	kvm_x86_ops->vcpu_load(vcpu, cpu);
> >  	if (unlikely(vcpu->cpu != cpu)) {
> > +		/* Make sure TSC doesn't go backwards */
> > +		s64 tsc_delta = !vcpu->arch.last_host_tsc ? 0 :
> > +				native_read_tsc() -
> > +				vcpu->arch.last_host_tsc;
> > +		if (tsc_delta < 0 || check_tsc_unstable())
>
> It's better to do the adjustment also when tsc_delta > 0

And why do you think so? Adjusting when tsc_delta > 0 would force us to adjust on every entry but the first, and we want to adjust as few times as we can. For example, we would adjust on every cpu bounce even on machines whose TSCs are perfectly synchronized, which could introduce an error that was not present before.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html