On 05/05/20 00:35, Krish Sadhukhan wrote:
> +	if (!(vmcb->save.efer & EFER_LMA)) {
> +		if (vmcb->save.cr4 & X86_CR4_PAE) {
> +			if (vmcb->save.cr3 & MSR_CR3_LEGACY_PAE_RESERVED_MASK)
> +				return false;
> +		} else {
> +			if (vmcb->save.cr3 & MSR_CR3_LEGACY_RESERVED_MASK)
> +				return false;
> +		}
> +		if (vmcb->save.cr4 & MSR_CR4_LEGACY_RESERVED_MASK)
> +			return false;
> +	} else {
> +		if ((vmcb->save.cr4 & X86_CR4_PAE) &&
> +		    (vmcb->save.cr3 & MSR_CR3_LONG_RESERVED_MASK))
> +			return false;
> +		if (vmcb->save.cr4 & MSR_CR4_RESERVED_MASK)
> +			return false;
> +	}
> +
>  	if ((vmcb->control.intercept & (1ULL << INTERCEPT_VMRUN)) == 0)
>  		return false;
> 

I think checking LMA from the guest state is incorrect: the number of bits in CR3 and CR4 remains 64 as long as the host processor is 64-bit.  This is of course unless you have reproduced on bare metal that a hypervisor running in 32-bit mode ignores the top 32 bits.

Also, the checks for CR4 must use the guest's reserved bits, using kvm_valid_cr4.  However, this can be a bit slow, so it is probably a good idea to cache the bits in kvm_update_cpuid.

Thanks,

Paolo

> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index df3474f..796c083 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -354,7 +354,12 @@ static inline bool gif_set(struct vcpu_svm *svm)
>  }
>  
>  /* svm.c */
> -#define MSR_INVALID				0xffffffffU
> +#define MSR_CR3_LEGACY_RESERVED_MASK		0xfe7U
> +#define MSR_CR3_LEGACY_PAE_RESERVED_MASK	0x7U
> +#define MSR_CR3_LONG_RESERVED_MASK		0xfff0000000000fe7U
> +#define MSR_CR4_LEGACY_RESERVED_MASK		0xffbaf000U
> +#define MSR_CR4_RESERVED_MASK			0xffffffffffbaf000U
> +#define MSR_INVALID				0xffffffffU
> 
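For concreteness, here is a rough sketch of what the caching could look like.  This is not the actual patch: the cr4_rsvd_bits field and the helper name are made up for illustration, while CR4_RESERVED_BITS, guest_cpuid_has() and the X86_CR4_* flags are existing kernel definitions.

	/*
	 * Rough sketch only -- not the real implementation.  Intended for
	 * arch/x86/kvm/svm/svm.c; cpuid.h and svm.h are the existing KVM
	 * headers.  The cr4_rsvd_bits cache field is hypothetical.
	 */
	#include "cpuid.h"	/* guest_cpuid_has() */
	#include "svm.h"	/* struct vcpu_svm, to_svm() */

	static void svm_calc_cr4_rsvd_bits(struct kvm_vcpu *vcpu)
	{
		struct vcpu_svm *svm = to_svm(vcpu);
		u64 rsvd = CR4_RESERVED_BITS;

		/* Bits the guest's CPUID does not expose are reserved for it, too. */
		if (!guest_cpuid_has(vcpu, X86_FEATURE_SMEP))
			rsvd |= X86_CR4_SMEP;
		if (!guest_cpuid_has(vcpu, X86_FEATURE_SMAP))
			rsvd |= X86_CR4_SMAP;
		if (!guest_cpuid_has(vcpu, X86_FEATURE_PCID))
			rsvd |= X86_CR4_PCIDE;

		/* Hypothetical cache field, recomputed whenever CPUID changes. */
		svm->nested.cr4_rsvd_bits = rsvd;
	}

With something like that in place, nested_vmcb_checks() could simply test vmcb->save.cr4 & svm->nested.cr4_rsvd_bits, independent of EFER.LMA, instead of hardcoding legacy and long-mode masks.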