On Mon, Apr 4, 2022 at 11:20 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Mon, Apr 04, 2022, Ben Gardon wrote:
> > On Thu, Mar 31, 2022 at 11:36 PM Mingwei Zhang <mizhang@xxxxxxxxxx> wrote:
> > > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > > index 1bff453f7cbe..4a0087efa1e3 100644
> > > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > > @@ -168,7 +168,7 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
> > >
> > >  void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> > >
> > > -void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> > > +void __account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> >
> > I believe we need to modify the usage of this function in
> > paging_tmpl.h as well, at which point there should be no users of
> > account_huge_nx_page, so we can just modify the function directly
> > instead of adding a __helper.
> > (Disregard if the source I was looking at was out of date. Lots of
> > churn in this code recently.)
>
> paging_tmpl.h is shadow paging only, i.e. it will always handle page faults with
> mmu_lock held for write, and it also needs the check for sp->lpage_disallowed
> already being set.  Only the TDP MMU code is special in that (a) it holds mmu_lock
> for read and (b) never reuses shadow pages when inserting into the page tables.
>
> Or did I completely misunderstand what you meant by "need to modify the usage"?

Ah, right, duh. For some reason I thought we were modifying __direct_map in this
commit too. Never mind, no change needed.
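
For reference, a rough sketch of the split being discussed, assuming the
pre-patch helper in arch/x86/kvm/mmu/mmu.c looks like the v5.17-era version;
the wrapper/inner-helper arrangement and the comments about who serializes the
list are illustrative only, not the actual patch:

/*
 * Inner helper: does the accounting unconditionally.  The caller is
 * responsible for serializing the list update, e.g. by holding mmu_lock
 * for write; a TDP MMU caller, which holds mmu_lock only for read, would
 * need its own serialization.
 */
void __account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
{
	++kvm->stat.nx_lpage_splits;
	list_add_tail(&sp->lpage_disallowed_link,
		      &kvm->arch.lpage_disallowed_mmu_pages);
	sp->lpage_disallowed = true;
}

/*
 * Wrapper kept for shadow-MMU callers (e.g. paging_tmpl.h), which hold
 * mmu_lock for write and can reuse an already-accounted shadow page,
 * hence the sp->lpage_disallowed check stays here.
 */
void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
{
	if (sp->lpage_disallowed)
		return;

	__account_huge_nx_page(kvm, sp);
}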