On 06/28/2010 09:42 AM, Sheng Yang wrote:
+static void wbinvd_ipi(void *garbage)
+{
+	wbinvd();
+}
Like Jan mentioned, this is quite heavy. What about a clflush() loop
instead? That may take more time, but at least it's preemptible. Of
course, it isn't preemptible in an IPI.
I think this kind of behavior happens rarely, and most recent processors
have a WBINVD exit, which means it's an IPI... So I think it's acceptable here.
Several milliseconds of non-responsiveness may not be acceptable for
some applications. So I think queue_work_on() and a clflush loop are
better than an IPI and wbinvd.
+
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
+	/* WBINVD may be executed by the guest */
+	if (vcpu->kvm->arch.iommu_domain) {
+		if (kvm_x86_ops->has_wbinvd_exit())
+			cpu_set(cpu, vcpu->arch.wbinvd_dirty_mask);
+		else if (vcpu->cpu != -1)
+			smp_call_function_single(vcpu->cpu,
+					wbinvd_ipi, NULL, 1);
Is there any point to doing this if !has_wbinvd_exit()? The vcpu might
not have migrated in time, so the cache is flushed too late.
For the !has_wbinvd_exit() case, the instruction is executed by the guest and
flushes the current processor immediately. And the IPI above ensures the cache
is clean on the last CPU, so we're fine.
Ah, yes.
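To make the bookkeeping concrete, here is a hypothetical user-space model of the wbinvd_dirty_mask scheme under discussion: every physical CPU a vCPU runs on gets recorded at load time, and an emulated WBINVD flushes exactly those CPUs and clears the mask. The names vcpu_load() and emulate_wbinvd() are illustrative stand-ins, not the kernel API, and the flush itself is simulated by a counter rather than an IPI.

```c
#include <stdint.h>

/* Hypothetical model of wbinvd_dirty_mask bookkeeping, for up to
 * 64 physical CPUs (one bit each). */
struct vcpu_model {
	uint64_t wbinvd_dirty_mask;
};

/* The cache of this physical CPU may become dirty while the vCPU
 * runs on it, so mark it for a later flush. */
static void vcpu_load(struct vcpu_model *v, int cpu)
{
	v->wbinvd_dirty_mask |= 1ULL << cpu;
}

/* Emulated WBINVD: "flush" every marked CPU (stand-in for an IPI
 * running wbinvd() there), then clear the mask. Returns how many
 * CPUs were flushed. */
static int emulate_wbinvd(struct vcpu_model *v)
{
	int flushed = 0;

	for (int cpu = 0; cpu < 64; cpu++)
		if (v->wbinvd_dirty_mask & (1ULL << cpu))
			flushed++;
	v->wbinvd_dirty_mask = 0;
	return flushed;
}
```

The point of the mask is exactly the property agreed on above: with a WBINVD exit, flushing can be deferred and batched over the set of CPUs the vCPU touched; without one, the guest's WBINVD already ran on the current CPU, so only the previous CPU needs the IPI at migration time.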
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.