On Tue, Apr 4, 2017 at 2:44 PM, David Hildenbrand <david@xxxxxxxxxx> wrote:
> On 04.04.2017 14:18, Ladi Prosek wrote:
>> L2 was running with uninitialized PML fields which led to incomplete
>> dirty bitmap logging. This manifested as all kinds of subtle erratic
>> behavior of the nested guest.
>>
>> Fixes: 843e4330573c ("KVM: VMX: Add PML support in VMX")
>> Signed-off-by: Ladi Prosek <lprosek@xxxxxxxxxx>
>> ---
>>  arch/x86/kvm/vmx.c | 12 ++++++++++++
>>  1 file changed, 12 insertions(+)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 2ee00db..f47d701 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -10267,6 +10267,18 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
>>
>>  	}
>>
>> +	if (enable_pml) {
>> +		/*
>> +		 * Conceptually we want to copy the PML address and index from
>> +		 * vmcs01 here, and then back to vmcs01 on nested vmexit. But,
>> +		 * since we always flush the log on each vmexit, this happens
>
> we == KVM running in g2?
>
> If so, other hypervisors might handle this differently.

No, "we" as in KVM running in L0. Hypervisors running in L1 do not see
PML at all; this is L0-only code. I hope the comment is not confusing.

The desired behavior is that PML maintains the same state regardless of
whether we are in guest mode or not, but the implementation allows for
this shortcut where we just reset the fields to their initial values on
each nested entry.

>> +		 * to be equivalent to simply resetting the fields in vmcs02.
>> +		 */
>> +		ASSERT(vmx->pml_pg);
>> +		vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
>> +		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
>> +	}
>> +
>>  	if (nested_cpu_has_ept(vmcs12)) {
>>  		kvm_mmu_unload(vcpu);
>>  		nested_ept_init_mmu_context(vcpu);
>>
>
>
> --
>
> Thanks,
>
> David
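
To make the "conceptually" part above a bit more concrete, a rough
sketch of what copying the PML state between vmcs01 and vmcs02 could
look like. The vmcs01_pml_addr / vmcs01_pml_index fields and the two
helpers below are hypothetical and not part of the patch; they only
illustrate the alternative the comment refers to.

/*
 * Hypothetical sketch only, not part of the patch: preserve the PML
 * state across the vmcs01 <-> vmcs02 switch instead of resetting it.
 */

/* Would be called from prepare_vmcs02(), after switching to vmcs02. */
static void nested_vmx_copy_pml_to_vmcs02(struct vcpu_vmx *vmx)
{
	/* vmcs01_pml_* are hypothetical caches of the vmcs01 values. */
	vmcs_write64(PML_ADDRESS, vmx->vmcs01_pml_addr);
	vmcs_write16(GUEST_PML_INDEX, vmx->vmcs01_pml_index);
}

/* Would be called on nested vmexit, while vmcs02 is still loaded;
 * the cached values would then be written back to vmcs01. */
static void nested_vmx_save_pml_from_vmcs02(struct vcpu_vmx *vmx)
{
	vmx->vmcs01_pml_addr = vmcs_read64(PML_ADDRESS);
	vmx->vmcs01_pml_index = vmcs_read16(GUEST_PML_INDEX);
}

Since L0 flushes the log and rewinds GUEST_PML_INDEX to
PML_ENTITY_NUM - 1 on every vmexit, the copied index would always be
PML_ENTITY_NUM - 1 and the address always page_to_phys(vmx->pml_pg),
which is why the patch simply resets both fields in vmcs02 instead.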