>>> +	if (enable_pml) {
>>> +		/*
>>> +		 * Conceptually we want to copy the PML address and index from
>>> +		 * vmcs01 here, and then back to vmcs01 on nested vmexit. But,
>>> +		 * since we always flush the log on each vmexit, this happens
>>
>> we == KVM running in g2?
>>
>> If so, other hypervisors might handle this differently.
>
> No, "we" as in KVM in L0. Hypervisors running in L1 do not see PML at all;
> this is L0-only code.

Okay, I was just confused about why we enable PML for our nested guest (L2)
although it is not supported/enabled for guest hypervisors (L1). I would have
guessed that it is to be kept completely disabled for nested guests
(!SECONDARY_EXEC_ENABLE_PML). But I assume this is a detail of the MMU code
that I still have to look into.

>
> I hope the comment is not confusing. The desired behavior is that PML
> maintains the same state, regardless of whether we are in guest mode
> or not. But the implementation allows for this shortcut where we just
> reset the fields to their initial values on each nested entry.

If we really treat PML here just like ordinary L1 runs, then it makes
perfect sense and the comment is not confusing. vmcs01 says it all.

Just me being curious :)

>
>>> +		 * to be equivalent to simply resetting the fields in vmcs02.
>>> +		 */
>>> +		ASSERT(vmx->pml_pg);

Looking at the code (especially the check in vmx_vcpu_setup()), I think
this ASSERT can be removed.

>>> +		vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
>>> +		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);

So this really just mimics the vmx_vcpu_setup() PML handling here.

>>> +	}
>>> +
>>> 	if (nested_cpu_has_ept(vmcs12)) {
>>> 		kvm_mmu_unload(vcpu);
>>> 		nested_ept_init_mmu_context(vcpu);
>>>
>>

-- 

Thanks,

David
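
The shortcut discussed in the quoted comment rests on KVM draining the PML
buffer on every vmexit. Below is a minimal sketch of that flush path, modeled
on vmx_flush_pml_buffer() in arch/x86/kvm/vmx.c; the _sketch suffix marks it
as hypothetical, and the error handling is abridged, so treat it as
illustrative rather than verbatim kernel code.

/*
 * Minimal sketch of the per-vmexit PML flush (modeled on
 * vmx_flush_pml_buffer(); abridged, error handling omitted).
 */
static void flush_pml_buffer_sketch(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);
	u64 *pml_buf;
	u16 pml_idx = vmcs_read16(GUEST_PML_INDEX);

	/* Index still at its reset value: nothing was logged. */
	if (pml_idx == PML_ENTITY_NUM - 1)
		return;

	/* Hardware fills the buffer from the top; the index points at the next free slot. */
	if (pml_idx >= PML_ENTITY_NUM)
		pml_idx = 0;
	else
		pml_idx++;

	/* Mark every logged guest page dirty. */
	pml_buf = page_address(vmx->pml_pg);
	for (; pml_idx < PML_ENTITY_NUM; pml_idx++)
		kvm_vcpu_mark_page_dirty(vcpu, pml_buf[pml_idx] >> PAGE_SHIFT);

	/* Back to the same reset value the quoted hunk writes on nested entry. */
	vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
}

Because the flush always leaves GUEST_PML_INDEX at PML_ENTITY_NUM - 1, the
value that would be copied from vmcs01 on nested entry is the reset value
anyway, which is exactly what the quoted hunk writes into vmcs02 directly.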