Re: [PATCH 4/4] x86: kvm: mmu: use ept a/d in vmcs02 iff used in vmcs12

On Fri, Jun 30, 2017 at 10:29 PM, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>
>> @@ -8358,7 +8349,7 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu)
>>        * mode as if vcpus is in root mode, the PML buffer must has been
>>        * flushed already.
>>        */
>> -     if (enable_pml)
>> +     if (enable_pml && !is_guest_mode(vcpu))
>>               vmx_flush_pml_buffer(vcpu);
>>
>>       /* If guest state is invalid, start emulating */
>
> I don't understand this.  You need to flush the PML buffer if
> L2 is running with EPT A/D bits enabled, don't you? Apart from
> this it seems sane, I only have to look at patch 3 more carefully.

You're right: this is busted. I wrote these patches before you
implemented EPT A/D nesting (i.e., PML was moot for guest mode).

Actually, I think this hunk can go away entirely. As long as PML is
enabled, it's always safe to flush the buffer. The interesting case is
when the vCPU is in guest mode with EPT A/D disabled: L0's PML buffer
isn't filled while L2 runs, because EPT A/D is disabled in the vmcs02
(thanks to this patch), so there's nothing in the buffer!
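
In other words, v2 would just leave vmx_handle_exit() alone, perhaps
with an updated comment (sketch only, untested):

	/*
	 * It's always safe to flush here: if L2 was running with EPT
	 * A/D disabled, PML was also disabled in the vmcs02, so the
	 * buffer is guaranteed to be empty and the flush is a no-op.
	 */
	if (enable_pml)
		vmx_flush_pml_buffer(vcpu);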

What's troubling is that there's no test case covering L0's use of PML
with nesting. Stress-testing live migration of L1 hypervisors (and
implicitly their L2 guests) is one way of doing it, but it's pretty
clumsy. A tightly coupled L0 userspace, L1, and L2 guest would be the
way to go, since you could coordinate ioctls with guest memory
accesses, as in the sketch below.
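
Roughly, on the L0 side it might look like this. To be clear, this is
a made-up sketch: setup_nested_vm() and run_l1_until_l2_dirties() are
hypothetical harness plumbing standing in for the nested setup; only
the KVM_GET_DIRTY_LOG ioctl and struct kvm_dirty_log are real:

	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	#define TEST_SLOT   1          /* memslot covering GPA 0 */
	#define SLOT_NPAGES 512
	#define TEST_GPA    0x10000ULL /* page L2 will write to  */

	/* Hypothetical harness plumbing, not a real API. */
	int  setup_nested_vm(void);    /* returns the L1 VM fd   */
	void run_l1_until_l2_dirties(int vm_fd, uint64_t gpa);

	int main(void)
	{
		unsigned long bitmap[SLOT_NPAGES / (8 * sizeof(unsigned long))];
		struct kvm_dirty_log log = {
			.slot = TEST_SLOT,
			.dirty_bitmap = bitmap,
		};
		uint64_t page = TEST_GPA >> 12;
		int vm_fd = setup_nested_vm();

		memset(bitmap, 0, sizeof(bitmap));

		/* L1 launches L2; L2 writes TEST_GPA, then exits to us. */
		run_l1_until_l2_dirties(vm_fd, TEST_GPA);

		/* Harvesting the dirty log forces a PML buffer flush. */
		ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

		/* Fail unless L2's write shows up in L0's dirty bitmap. */
		return !(bitmap[page / (8 * sizeof(unsigned long))] &
			 (1UL << (page % (8 * sizeof(unsigned long)))));
	}

The point is that L0 userspace knows exactly which GPA L2 touched, so
it can assert on the dirty bitmap directly instead of inferring
correctness from a live migration surviving.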


