On Fri, Feb 04, 2022 at 06:56:58AM -0500, Paolo Bonzini wrote:
> The level field of the MMU role can act as a marker for validity
> instead: it is guaranteed to be nonzero so a zero value means the role
> is invalid and the MMU properties will be computed again.
>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> ---
>  arch/x86/include/asm/kvm_host.h | 4 +---
>  arch/x86/kvm/mmu/mmu.c          | 9 +++------
>  2 files changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index e7e5bd9a984d..4ec7d1e3aa36 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -342,8 +342,7 @@ union kvm_mmu_page_role {
>   * kvm_mmu_extended_role complements kvm_mmu_page_role, tracking properties
>   * relevant to the current MMU configuration. When loading CR0, CR4, or EFER,
>   * including on nested transitions, if nothing in the full role changes then
> - * MMU re-configuration can be skipped. @valid bit is set on first usage so we
> - * don't treat all-zero structure as valid data.
> + * MMU re-configuration can be skipped.
>   *
>   * The properties that are tracked in the extended role but not the page role
>   * are for things that either (a) do not affect the validity of the shadow page
> @@ -360,7 +359,6 @@ union kvm_mmu_page_role {
>  union kvm_mmu_extended_role {
>  	u32 word;
>  	struct {
> -		unsigned int valid:1;
>  		unsigned int execonly:1;
>  		unsigned int cr0_pg:1;
>  		unsigned int cr4_pae:1;
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index b0065ae3cea8..0039b2f21286 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4683,8 +4683,6 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu,
>  		ext.efer_lma = ____is_efer_lma(regs);
>  	}
>
> -	ext.valid = 1;
> -
>  	return ext;
>  }
>
> @@ -4891,7 +4889,6 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
>  	/* EPT, and thus nested EPT, does not consume CR0, CR4, nor EFER. */
>  	role.ext.word = 0;
>  	role.ext.execonly = execonly;
> -	role.ext.valid = 1;
>
>  	return role;
>  }
>
> @@ -5039,9 +5036,9 @@ void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
>  	 * problem is swept under the rug; KVM's CPUID API is horrific and
>  	 * it's all but impossible to solve it without introducing a new API.
>  	 */
> -	vcpu->arch.root_mmu.mmu_role.ext.valid = 0;
> -	vcpu->arch.guest_mmu.mmu_role.ext.valid = 0;
> -	vcpu->arch.nested_mmu.mmu_role.ext.valid = 0;
> +	vcpu->arch.root_mmu.mmu_role.base.level = 0;
> +	vcpu->arch.guest_mmu.mmu_role.base.level = 0;
> +	vcpu->arch.nested_mmu.mmu_role.base.level = 0;

I agree this will work, but I think it makes the code more difficult to
follow (and I start worrying that some code that relies on level being
accurate will creep in in the future). At a minimum we should extend the
comment here to describe why level is being cleared.

I made a half-assed attempt at passing something like "bool
force_role_reset" down to the MMU initialization functions as an
alternative, but it very quickly got out of hand.

What about just changing `valid` to `cpuid_stale` and flipping the
meaning? kvm_mmu_after_set_cpuid() would set the cpuid_stale bit and
then reset the MMUs.

>  	kvm_mmu_reset_context(vcpu);
>
>  	/*
> --
> 2.31.1
>
>