On Tue, Feb 14, 2023 at 01:28:33PM +0800, Robert Hoo wrote:
>> > +
>> >  /*
>> >   * Do not condition the GPA check on long mode, this helper is used to
>> >   * stuff CR3, e.g. for RSM emulation, and there is no guarantee that
>> > @@ -1268,8 +1272,20 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>> >  	if (is_pae_paging(vcpu) && !load_pdptrs(vcpu, cr3))
>> >  		return 1;
>> >
>> > -	if (cr3 != kvm_read_cr3(vcpu))
>> > -		kvm_mmu_new_pgd(vcpu, cr3);
>> > +	old_cr3 = kvm_read_cr3(vcpu);
>> > +	if (cr3 != old_cr3) {
>> > +		if ((cr3 ^ old_cr3) & CR3_ADDR_MASK) {
>
> This means the effective address bits changed, so no matter whether the
> LAM bits toggled or not, it needs a new pgd.
>
>> Does this check against CR3_ADDR_MASK necessarily mean LAM bits are
>> toggled, i.e., CR3_ADDR_MASK == ~(X86_CR3_LAM_U48 | X86_CR3_LAM_U57)?
>>
>> Why not check if the LAM bits are changed? This way the patch only
>> changes cases related to LAM, keeping other cases intact.
>
> Yes, better to add a check in the "else" branch that the LAM bits
> changed.
> But in fact kvm_is_valid_cr3() above has guaranteed that no other
> high-order bits changed.
> Emm, now you might ask to fold the LAM bits into
> vcpu->arch.reserved_gpa_bits? ;)

No, I am not asking for that.

My point is: suppose some bit X isn't in CR3_ADDR_MASK. Then toggling
bit X falls into the else{} branch, which is meant specifically for the
LAM bits. So, the change is correct iff CR3_ADDR_MASK ==
~(X86_CR3_LAM_U48 | X86_CR3_LAM_U57). I didn't check whether that holds
on your code base. If it doesn't, replace CR3_ADDR_MASK with
~(X86_CR3_LAM_U48 | X86_CR3_LAM_U57); see the sketch at the end of this
mail.

>>
>> > +			kvm_mmu_new_pgd(vcpu, cr3 & ~(X86_CR3_LAM_U48 |
>> > +						      X86_CR3_LAM_U57));
>>
>> Do you need to touch kvm_mmu_new_pgd() in nested_vmx_load_cr3()?
>
> Didn't scope the nested LAM case in this patch set.

Is there any justification for not considering nested virtualization?
Won't nested virtualization be broken by this series?
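
For clarity, here is an untested sketch of the check I have in mind. It
assumes X86_CR3_LAM_U48 and X86_CR3_LAM_U57 are the only non-address CR3
bits that kvm_is_valid_cr3() lets through, and the X86_CR3_LAM_BITS
macro is made up here for illustration:

	/* Hypothetical helper macro, not in the current code base. */
	#define X86_CR3_LAM_BITS	(X86_CR3_LAM_U48 | X86_CR3_LAM_U57)

	old_cr3 = kvm_read_cr3(vcpu);
	if (cr3 != old_cr3) {
		if ((cr3 ^ old_cr3) & ~X86_CR3_LAM_BITS) {
			/*
			 * Some non-LAM bit changed, so a new PGD is needed;
			 * strip the LAM bits from the new root.
			 */
			kvm_mmu_new_pgd(vcpu, cr3 & ~X86_CR3_LAM_BITS);
		}
		/* else: only LAM bits toggled, no new PGD required. */
	}

This keys the decision directly off the LAM bits instead of relying on
CR3_ADDR_MASK happening to cover every non-LAM bit.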