On Mon, 6 Mar 2023 14:41:14 -0800
Vipin Sharma <vipinsh@xxxxxxxxxx> wrote:

> Add pages in split_shadow_page_cache to the global counter
> kvm_total_unused_cached_pages. These pages will be freed by the MMU
> shrinker in a future commit.
>
> Signed-off-by: Vipin Sharma <vipinsh@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index df8dcb7e5de7..0ebb8a2eaf47 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6149,7 +6149,9 @@ static void mmu_free_vm_memory_caches(struct kvm *kvm)
>  {
>  	kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
>  	kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
> -	kvm_mmu_free_memory_cache(&kvm->arch.split_shadow_page_cache);
> +	mutex_lock(&kvm->slots_lock);
> +	mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache);
> +	mutex_unlock(&kvm->slots_lock);

Taking a lock that belongs to the calling path inside the cache
topup/free layer seems off. My vote goes to giving each cache its own
lock and taking that cache's lock when topping up or freeing it. That
is more self-contained and architecturally cleaner. A rough sketch of
what I mean is at the end of this mail.

>  }
>
>  void kvm_mmu_uninit_vm(struct kvm *kvm)
> @@ -6303,7 +6305,7 @@ static int topup_split_caches(struct kvm *kvm)
>  	if (r)
>  		return r;
>
> -	return kvm_mmu_topup_memory_cache(&kvm->arch.split_shadow_page_cache, 1);
> +	return mmu_topup_sp_memory_cache(&kvm->arch.split_shadow_page_cache, 1);
>  }
>
>  static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *huge_sptep)
> @@ -6328,6 +6330,7 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *huge_sptep)
>  	/* Direct SPs do not require a shadowed_info_cache. */
>  	caches.page_header_cache = &kvm->arch.split_page_header_cache;
>  	caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> +	caches.count_shadow_page_allocation = true;
>
>  	/* Safe to pass NULL for vCPU since requesting a direct SP. */
>  	return __kvm_mmu_get_shadow_page(kvm, NULL, &caches, gfn, role);
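
Something like the below is what I have in mind. It is only an
untested sketch: the "lock" member in struct kvm_mmu_memory_cache is
made up for illustration and does not exist in this series; the helper
names reuse the ones your patch introduces, and the topup/free calls
are the existing kvm_mmu_*_memory_cache() APIs.

struct kvm_mmu_memory_cache {
	...
	/*
	 * Hypothetical per-cache lock, protecting topup and free of
	 * this cache so callers need not hold an unrelated outer lock.
	 */
	struct mutex lock;
};

static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
				     int min)
{
	int r;

	/* Serialize against a concurrent free/shrink of this cache. */
	mutex_lock(&cache->lock);
	r = kvm_mmu_topup_memory_cache(cache, min);
	mutex_unlock(&cache->lock);

	return r;
}

static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache)
{
	mutex_lock(&cache->lock);
	kvm_mmu_free_memory_cache(cache);
	mutex_unlock(&cache->lock);
}

With that, mmu_free_vm_memory_caches() would not need to take
kvm->slots_lock at all, and a future shrinker would only contend on
the lock of the particular cache it is reclaiming from.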