On Thu, Jul 29, 2021 at 01:17:43PM +0800, Yu Zhang wrote:
> On Thu, Jul 29, 2021 at 10:58:15AM +0800, Yan Zhao wrote:
> > On Thu, Jul 29, 2021 at 11:00:56AM +0800, Yu Zhang wrote:
> > > >
> > > > Ooof that's a lot of resets, though if there are only a handful of
> > > > pages mapped, it might not be a noticeable performance impact.  I think
> > > > it'd be worth collecting some performance data to quantify the impact.
> > >
> > > Yes. Too many resets will definitely hurt performance, though I did not
> > > see an obvious delay.
> > >
> >
> > If I add the limits below before unloading the MMU, then with
> > enable_unrestricted_guest=0 the boot time can be reduced to 31 secs
> > from more than 5 minutes.
>
> Sorry? Do you mean your VM needs 5 minutes to boot? What is your
> configuration?
>
Yes. The VM needs 5 minutes to boot when I force enable_unrestricted_guest=0
in KVM.

> VMX unrestricted guest has been supported on all Intel platforms for years.
> I do not see any reason to disable it.
>
Yes, it is just for test purposes, to study the impact of
enable_unrestricted_guest=0, since in this mode writes to CR0 and CR4 cause
lots of VM exits.

>
> >
> >  void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
> >  {
> > -       kvm_mmu_unload(vcpu);
> > -       kvm_init_mmu(vcpu, true);
> > +       union kvm_mmu_role new_role =
> > +               kvm_calc_tdp_mmu_root_page_role(vcpu, false);
> > +       struct kvm_mmu *context = &vcpu->arch.root_mmu;
> > +       bool reset = false;
> > +
> > +       if (new_role.as_u64 != context->mmu_role.as_u64) {
> > +               kvm_mmu_unload(vcpu);
> > +               reset = true;
> > +       }
> > +       kvm_init_mmu(vcpu, reset);
> >
> > But with enable_unrestricted_guest=0, if I further relax the check to
> > "if (new_role.base.word != context->mmu_role.base.word)", the VM fails
> > to boot.
> > So, when the MMU extended role changes, unloading the MMU is still
> > necessary in some situations, or at least we need to zap the related
> > SPTEs.
> >

BTW, here is some updated performance data with enable_unrestricted_guest=0.

1. Without the restriction above, i.e. always calling kvm_mmu_unload:
   VM boot time: around 5 minutes.
   kvm_mmu_unload calls during VM boot: 3696

2. With the restriction above, i.e. only calling kvm_mmu_unload when the
   kvm_mmu_role changes:
   VM boot time: around 30 secs.
   kvm_mmu_unload calls during VM boot: 18

3. With the restriction above plus Sean's suggestion in another mail:

@@ -4567,6 +4567,11 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only)
        role.base.direct = true;
        role.base.gpte_is_8_bytes = true;

+       role.base.nxe = 0;
+       role.base.cr0_wp = 0;
+       role.base.smep_andnot_wp = 0;
+       role.base.smap_andnot_wp = 0;
+
        return role;
 }

   VM boot time: around 30 secs.
   kvm_mmu_unload calls during VM boot: 15.

Sorry, I'm not testing on the latest code base (I'm testing on 5.10.0), but
I guess the general idea is the same.

Thanks
Yan
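
[Editor's note, appended for context: the difference between the "as_u64"
check and the "base.word" check discussed above comes from the layout of
union kvm_mmu_role. The sketch below is a simplified paraphrase of the v5.10
definitions in arch/x86/include/asm/kvm_host.h: the bit-field lists are
abbreviated, members are reordered, and userspace types are substituted so
it compiles standalone. It is only meant to illustrate why comparing
base.word alone misses extended-role changes such as CR4 toggles, not to
reproduce the kernel's exact structures.]

/*
 * Simplified, abbreviated sketch of the KVM MMU role unions (paraphrased
 * from arch/x86/include/asm/kvm_host.h, v5.10).  Not the authoritative
 * definitions; only members relevant to this discussion are shown.
 */
#include <stdint.h>
#include <stdio.h>

union kvm_mmu_page_role {
        uint32_t word;                  /* what the "base.word" check compares */
        struct {
                unsigned level:4;
                unsigned gpte_is_8_bytes:1;
                unsigned direct:1;
                unsigned cr0_wp:1;
                unsigned smep_andnot_wp:1;
                unsigned smap_andnot_wp:1;
                unsigned nxe:1;
                /* ... further base-role bits omitted ... */
        };
};

union kvm_mmu_extended_role {
        uint32_t word;
        struct {
                unsigned valid:1;
                unsigned cr0_pg:1;
                unsigned cr4_pae:1;
                unsigned cr4_smep:1;
                unsigned cr4_smap:1;
                /* ... further CR0/CR4-derived bits omitted ... */
        };
};

union kvm_mmu_role {
        uint64_t as_u64;                /* what the "as_u64" check compares */
        struct {
                union kvm_mmu_page_role base;
                union kvm_mmu_extended_role ext;
        };
};

int main(void)
{
        union kvm_mmu_role old = { .as_u64 = 0 };
        union kvm_mmu_role new = { .as_u64 = 0 };

        /* A CR4-only change flips an extended-role bit but no base-role bit. */
        new.ext.cr4_smep = 1;

        printf("base.word differs: %d\n", old.base.word != new.base.word); /* 0 */
        printf("as_u64 differs:    %d\n", old.as_u64 != new.as_u64);       /* 1 */
        return 0;
}

Running the sketch shows that a CR4-driven extended-role change leaves
base.word unchanged while as_u64 differs, which lines up with the
observation above that comparing only base.word skips unloads the guest
still needs (or at least requires zapping the related SPTEs).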