On Wed, Nov 02, 2022 at 03:29:10PM +0800, Robert Hoo wrote:
> On Tue, 2022-11-01 at 05:04 +0300, Kirill A. Shutemov wrote:
> ...
> > > > > -	if (cr3 != kvm_read_cr3(vcpu))
> > > > > -		kvm_mmu_new_pgd(vcpu, cr3);
> > > > > +	old_cr3 = kvm_read_cr3(vcpu);
> > > > > +	if (cr3 != old_cr3) {
> > > > > +		if ((cr3 ^ old_cr3) & CR3_ADDR_MASK) {
> > > > > +			kvm_mmu_new_pgd(vcpu, cr3 & ~(X86_CR3_LAM_U48 |
> > > > > +						      X86_CR3_LAM_U57));
> > > > > +		} else {
> > > > > +			/* Only LAM conf changes, no tlb flush needed */
> > > > > +			skip_tlb_flush = true;
> > > >
> > > > I'm not sure about this.
> > > >
> > > > Consider the case when LAM_U48 gets enabled on 5-level paging
> > > > machines. We may have valid TLB entries for addresses above bit 47.
> > > > It's a kinda broken case, but it seems valid from an architectural
> > > > PoV, no?
> > >
> > > You're right, thanks Kirill.
> > >
> > > I noticed that in your kernel enabling, because of this overlap
> > > between LAM_U48 and LA57, you enabled only LAM_U57 for simplicity at
> > > this point. I thought at the time that this trickiness would be
> > > contained in the kernel layer, but now it turns out that at least
> > > the non-EPT KVM MMU is not spared.
> > >
> > > > I guess after enabling LAM, these entries will never match. But if
> > > > LAM gets disabled again, they will become active. Hm?
> > > >
> > > > Maybe just flush?
> > >
> > > Now we have 2 options:
> > > 1. As you suggested, just flush.
> > > 2. More precisely identify the case Guest.LA57 && (CR3 bits [62:61]
> > > switching 00 --> 10), and flush only then. (The LAM_U57 bit takes
> > > precedence over LAM_U48, per the spec.)
> > >
> > > Considering that a CR3 change is a relatively hot path and a TLB
> > > flush is heavy, I lean towards option 2. Your opinion?
> >
> > 11 in bits [62:61] is also considered LAM_U57, so your option 2 is
> > broken.
>
> Hi Kirill,
>
> When I came to cook v2 per your suggestion, i.e. just flush, I
> pondered the necessity of flushing in all the cases where the 2 bits
> (LAM_U48, LAM_U57) flip.
> Bear in mind: LAM_U57 (bit 61) takes precedence over LAM_U48 (bit 62).
>
> (0,0) --> {(0,1), (1,0), (1,1)}
> (0,1) --> {(0,0), (1,0), (1,1)}
> (1,0) --> {(0,0), (0,1), (1,1)}
> (1,1) --> {(0,0), (0,1), (1,0)}
>
> Among all 12 cases, only (0,0) --> (1,0) with 5-level paging on has to
> flush the TLB. Am I right? If so, would you still prefer to flush
> unconditionally, just for the 1/12 necessity? (If the 5-level/4-level
> variations are included, 1/24.)

I would keep it simple. We can always add the optimization later if
there's a workload that actually benefits from it. But I cannot imagine
a situation where enabling LAM is a hot path.

-- 
 Kiryl Shutsemau / Kirill A. Shutemov
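
To make the "just flush" option concrete, here is a minimal sketch of
how the CR3-write path from the quoted patch might look with an
unconditional flush on a LAM-bit change. It reuses the names from the
patch (old_cr3, kvm_read_cr3(), kvm_mmu_new_pgd(), CR3_ADDR_MASK,
X86_CR3_LAM_U48/X86_CR3_LAM_U57) and assumes KVM's
kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, ...) as the flush
mechanism; the surrounding kvm_set_cr3() context is elided, so treat
this as an illustration rather than the actual v2 patch:

	old_cr3 = kvm_read_cr3(vcpu);
	if (cr3 != old_cr3) {
		if ((cr3 ^ old_cr3) & CR3_ADDR_MASK) {
			/* The address bits changed: switch to a new root. */
			kvm_mmu_new_pgd(vcpu, cr3 & ~(X86_CR3_LAM_U48 |
						      X86_CR3_LAM_U57));
		} else {
			/*
			 * Only the LAM bits changed. Flush unconditionally
			 * instead of setting skip_tlb_flush: as discussed
			 * above, toggling LAM_U48 under 5-level paging can
			 * leave TLB entries for addresses above bit 47 that
			 * become reachable again once LAM is disabled.
			 */
			kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
		}
	}

This keeps the common case (a plain PGD switch with unchanged LAM bits)
untouched and pays for the flush only on the rare LAM reconfiguration,
matching the "enabling LAM is not a hot path" reasoning above.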