On Thu, 2022-11-03 at 00:05 +0300, Kirill A. Shutemov wrote:
> On Wed, Nov 02, 2022 at 03:29:10PM +0800, Robert Hoo wrote:
> > On Tue, 2022-11-01 at 05:04 +0300, Kirill A. Shutemov wrote:
> > ...
> > > > > > -	if (cr3 != kvm_read_cr3(vcpu))
> > > > > > -		kvm_mmu_new_pgd(vcpu, cr3);
> > > > > > +	old_cr3 = kvm_read_cr3(vcpu);
> > > > > > +	if (cr3 != old_cr3) {
> > > > > > +		if ((cr3 ^ old_cr3) & CR3_ADDR_MASK) {
> > > > > > +			kvm_mmu_new_pgd(vcpu, cr3 & ~(X86_CR3_LAM_U48 |
> > > > > > +					X86_CR3_LAM_U57));
> > > > > > +		} else {
> > > > > > +			/* Only LAM conf changes, no tlb flush needed */
> > > > > > +			skip_tlb_flush = true;
> > > > >
> > > > > I'm not sure about this.
> > > > >
> > > > > Consider the case when LAM_U48 gets enabled on a 5-level paging
> > > > > machine. We may have valid TLB entries for addresses above bit
> > > > > 47. It's kind of a broken case, but it seems valid from an
> > > > > architectural PoV, no?
> > > >
> > > > You're right, thanks Kirill.
> > > >
> > > > I noticed that in your kernel enabling, because of this overlap
> > > > between LAM_U48 and LA57, you enabled only LAM_U57 for simplicity
> > > > at this moment. I thought at the time that this trickiness would
> > > > be contained in the kernel layer, but now it turns out that at
> > > > least the non-EPT KVM MMU is not spared.
> > > >
> > > > > I guess after enabling LAM, these entries will never match. But
> > > > > if LAM gets disabled again they will become active. Hm?
> > > > >
> > > > > Maybe just flush?
> > > >
> > > > Now we have 2 options:
> > > > 1. As you suggested, just flush.
> > > > 2. More precisely identify the case Guest.LA57 && (CR3 bits
> > > > [62:61] switching 00 --> 10), and flush only then. (The LAM_U57
> > > > bit takes precedence over LAM_U48, per the spec.)
> > > >
> > > > Considering that a CR3 change is a relatively hot path, and a tlb
> > > > flush is heavy, I lean towards option 2. Your opinion?
> > >
> > > 11 in bits [62:61] is also considered LAM_U57. So your option 2 is
> > > broken.
> >
> > Hi Kirill,
> >
> > When I came to cook v2 per your suggestion, i.e. leave it at just
> > flush, I pondered the necessity of flushing for all the possible
> > flips of the 2 bits (LAM_U48, LAM_U57).
> > Hold this: LAM_U57 (bit 61) takes precedence over LAM_U48 (bit 62).
> >
> > (0,0) --> {(0,1), (1,0), (1,1)}
> > (0,1) --> {(0,0), (1,0), (1,1)}
> > (1,0) --> {(0,0), (0,1), (1,1)}
> > (1,1) --> {(0,0), (0,1), (1,0)}
> >
> > Among all 12 cases, only (0,0) --> (1,0), with 5-level paging on,
> > has to flush the tlb. Am I right? If so, would you still prefer to
> > flush unconditionally, just for the 1/12 necessity? (If the
> > 5-level/4-level variations are included, 1/24.)
>
> I would keep it simple. We can always add the optimization later if
> there's a workload that actually benefits from it. But I cannot
> imagine a situation where enabling LAM is a hot path.
>
OK, I'm open to this.

I also notice that skip_tlb_flush is set when pcid_enabled && (CR3 &
X86_CR3_PCID_NOFLUSH). Under this condition, do you think the (0,0) -->
(1,0) case needs to flip it back to false?

int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
{
	bool skip_tlb_flush = false;
	unsigned long pcid = 0, old_cr3;
#ifdef CONFIG_X86_64
	bool pcid_enabled = !!kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE);

	if (pcid_enabled) {
		skip_tlb_flush = cr3 & X86_CR3_PCID_NOFLUSH;
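
To make that question concrete, here is a small user-space model I put
together for this mail (illustrative only, not the actual patch;
need_tlb_flush() and the CR3_LAM_* macros are made up, though the bit
positions 61/62/63 follow the spec). It encodes the rule I have in
mind: honor X86_CR3_PCID_NOFLUSH as today, but flip skip_tlb_flush back
to false whenever the LAM bits change:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical user-space model, not kernel code. CR3 bit positions
 * per the spec: LAM_U57 = bit 61 (takes precedence), LAM_U48 = bit 62,
 * PCID_NOFLUSH = bit 63. */
#define CR3_LAM_U57		(1ULL << 61)
#define CR3_LAM_U48		(1ULL << 62)
#define CR3_LAM_BITS		(CR3_LAM_U57 | CR3_LAM_U48)
#define CR3_PCID_NOFLUSH	(1ULL << 63)

static bool need_tlb_flush(uint64_t old_cr3, uint64_t new_cr3,
			   bool pcid_enabled)
{
	/* Today's behavior: PCID_NOFLUSH lets the guest skip the flush. */
	bool skip_tlb_flush = pcid_enabled &&
			      (new_cr3 & CR3_PCID_NOFLUSH);

	/* "Keep it simple": any LAM configuration change forces a flush. */
	if ((old_cr3 ^ new_cr3) & CR3_LAM_BITS)
		skip_tlb_flush = false;

	return !skip_tlb_flush;
}

int main(void)
{
	/* (0,0) --> (1,0), i.e. enabling LAM_U48, with PCID_NOFLUSH set:
	 * the model still flushes. */
	uint64_t old_cr3 = 0x1000;
	uint64_t new_cr3 = 0x1000 | CR3_LAM_U48 | CR3_PCID_NOFLUSH;

	printf("flush: %d\n", need_tlb_flush(old_cr3, new_cr3, true));
	return 0;
}

In other words, PCID_NOFLUSH would only be honored when bits [62:61]
are unchanged between the old and new CR3.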