On 09/07/20 19:12, Jim Mattson wrote:
>> +
>> +	/* The processor ignores EFER.LMA, but svm_set_efer needs it. */
>> +	efer &= ~EFER_LMA;
>> +	if ((nested_vmcb->save.cr0 & X86_CR0_PG)
>> +	    && (nested_vmcb->save.cr4 & X86_CR4_PAE)
>> +	    && (efer & EFER_LME))
>> +		efer |= EFER_LMA;
> The CR4.PAE check is unnecessary, isn't it? The combination CR0.PG=1,
> EFER.LMA=1, and CR4.PAE=0 is not a legal processor state.

Yeah, I was being a bit cautious because this is the nested VMCB and it
can be filled with invalid state, but indeed that condition was added
just yesterday by myself in nested_vmcb_checks (while reviewing Krish's
CR0/CR3/CR4 reserved bit check series).

That said, the VMCB here is guest memory and it can change under our
feet between nested_vmcb_checks and nested_prepare_vmcb_save.  Copying
the whole save area would be overkill, but we should probably copy at
least EFER/CR0/CR3/CR4 into a struct at the beginning of
nested_svm_vmrun; that way there would be no TOC/TOU issues between
nested_vmcb_checks and nested_svm_vmrun.  It would also make it easier
to reuse the checks in svm_set_nested_state.  (A rough sketch of what I
mean is at the bottom of this mail.)

Maybe Maxim can look at it while I'm on vacation, as he's eager to do
more nSVM stuff. :D

I'll drop this patch for now.  Thanks for the speedy review!

Paolo

> According to the SDM,
>
> * IA32_EFER.LME cannot be modified while paging is enabled (CR0.PG =
> 1).  Attempts to do so using WRMSR cause a general-protection
> exception (#GP(0)).
> * Paging cannot be enabled (by setting CR0.PG to 1) while CR4.PAE = 0
> and IA32_EFER.LME = 1.  Attempts to do so using MOV to CR0 cause a
> general-protection exception (#GP(0)).
> * CR4.PAE and CR4.LA57 cannot be modified while either 4-level paging
> or 5-level paging is in use (when CR0.PG = 1 and IA32_EFER.LME = 1).
> Attempts to do so using MOV to CR4 cause a general-protection
> exception (#GP(0)).
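
P.S.: here is a rough, completely untested sketch of the struct-copy
idea; the struct and helper names are made up on the spot:

/*
 * Snapshot the save-area fields we validate once, before running the
 * checks, so that neither nested_vmcb_checks nor
 * nested_prepare_vmcb_save re-reads them from guest memory.
 */
struct nested_save_snapshot {
	u64 efer;
	u64 cr0;
	u64 cr3;
	u64 cr4;
};

static void nested_snapshot_save(struct nested_save_snapshot *snap,
				 struct vmcb *nested_vmcb)
{
	snap->efer = nested_vmcb->save.efer;
	snap->cr0  = nested_vmcb->save.cr0;
	snap->cr3  = nested_vmcb->save.cr3;
	snap->cr4  = nested_vmcb->save.cr4;
}

/*
 * nested_svm_vmrun would take the snapshot right after mapping the
 * VMCB and pass it to both nested_vmcb_checks and
 * nested_prepare_vmcb_save, so the values that were checked are
 * exactly the values that get consumed.  svm_set_nested_state could
 * build the same snapshot from the userspace-provided state and
 * reuse the checks.
 */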