On Wed, 2021-11-10 at 15:48 +0100, Vitaly Kuznetsov wrote:
> Maxim Levitsky <mlevitsk@xxxxxxxxxx> writes:
> 
> > When running mix of 32 and 64 bit guests, it is possible to have mmu
> > reset with same mmu role but different root level (32 bit vs 64 bit paging)
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 14 ++++++++++----
> >  1 file changed, 10 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 354d2ca92df4d..763867475860f 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4745,7 +4745,10 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
> >  	union kvm_mmu_role new_role =
> >  		kvm_calc_tdp_mmu_root_page_role(vcpu, &regs, false);
> >  
> > -	if (new_role.as_u64 == context->mmu_role.as_u64)
> > +	u8 new_root_level = role_regs_to_root_level(&regs);
> > +
> > +	if (new_role.as_u64 == context->mmu_role.as_u64 &&
> > +	    context->root_level == new_root_level)
> >  		return;
> 
> role_regs_to_root_level() uses 3 things: CR0.PG, EFER.LMA and CR4.PAE
> and two of these three are already encoded into the extended mmu role
> (kvm_calc_mmu_role_ext()). Could we achieve the same result by adding
> EFER.LMA there?

Absolutely. I just wanted your feedback first, to see if there is any
reason not to do this.

Also, it seems that only the basic role is compared here. I don't fully
understand why we have both basic and extended roles - there is a comment
saying the basic/extended mmu role split exists to minimize the size of
arch.gfn_track, but I haven't yet studied this in depth.
Best regards,
	Maxim Levitsky

> > 
> >  	context->mmu_role.as_u64 = new_role.as_u64;
> > @@ -4757,7 +4760,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
> >  	context->get_guest_pgd = get_cr3;
> >  	context->get_pdptr = kvm_pdptr_read;
> >  	context->inject_page_fault = kvm_inject_page_fault;
> > -	context->root_level = role_regs_to_root_level(&regs);
> > +	context->root_level = new_root_level;
> > 
> >  	if (!is_cr0_pg(context))
> >  		context->gva_to_gpa = nonpaging_gva_to_gpa;
> > @@ -4806,7 +4809,10 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
> >  				    struct kvm_mmu_role_regs *regs,
> >  				    union kvm_mmu_role new_role)
> >  {
> > -	if (new_role.as_u64 == context->mmu_role.as_u64)
> > +	u8 new_root_level = role_regs_to_root_level(regs);
> > +
> > +	if (new_role.as_u64 == context->mmu_role.as_u64 &&
> > +	    context->root_level == new_root_level)
> >  		return;
> > 
> >  	context->mmu_role.as_u64 = new_role.as_u64;
> > @@ -4817,8 +4823,8 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
> >  		paging64_init_context(context);
> >  	else
> >  		paging32_init_context(context);
> > -	context->root_level = role_regs_to_root_level(regs);
> > 
> > +	context->root_level = new_root_level;
> >  	reset_guest_paging_metadata(vcpu, context);
> >  	context->shadow_root_level = new_role.base.level;