2017-04-04 14:18+0200, Ladi Prosek:
> L2 was running with uninitialized PML fields which led to incomplete
> dirty bitmap logging. This manifested as all kinds of subtle erratic
> behavior of the nested guest.
>
> Fixes: 843e4330573c ("KVM: VMX: Add PML support in VMX")
> Signed-off-by: Ladi Prosek <lprosek@xxxxxxxxxx>
> ---

Applied to kvm/master, thanks.

(I should get a newer test machine ...)

>  arch/x86/kvm/vmx.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 2ee00db..f47d701 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -10267,6 +10267,18 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
>
>  	}
>
> +	if (enable_pml) {
> +		/*
> +		 * Conceptually we want to copy the PML address and index from
> +		 * vmcs01 here, and then back to vmcs01 on nested vmexit. But,
> +		 * since we always flush the log on each vmexit, this happens
> +		 * to be equivalent to simply resetting the fields in vmcs02.
> +		 */
> +		ASSERT(vmx->pml_pg);
> +		vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
> +		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
> +	}
> +
>  	if (nested_cpu_has_ept(vmcs12)) {
>  		kvm_mmu_unload(vcpu);
>  		nested_ept_init_mmu_context(vcpu);
> --
> 2.9.3
>