On Thu, Mar 09, 2023 at 11:59:00AM -0800, Vipin Sharma wrote:
> On Thu, Mar 9, 2023 at 7:58 AM Zhi Wang <zhi.wang.linux@xxxxxxxxx> wrote:
> >
> > On Mon, 6 Mar 2023 14:41:14 -0800
> > Vipin Sharma <vipinsh@xxxxxxxxxx> wrote:
> >
> > > Add pages in split_shadow_page_cache to the global counter
> > > kvm_total_unused_cached_pages. These pages will be freed by MMU shrinker
> > > in future commit.
> > >
> > > Signed-off-by: Vipin Sharma <vipinsh@xxxxxxxxxx>
> > > ---
> > >  arch/x86/kvm/mmu/mmu.c | 7 +++++--
> > >  1 file changed, 5 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index df8dcb7e5de7..0ebb8a2eaf47 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -6149,7 +6149,9 @@ static void mmu_free_vm_memory_caches(struct kvm *kvm)
> > >  {
> > >  	kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
> > >  	kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
> > > -	kvm_mmu_free_memory_cache(&kvm->arch.split_shadow_page_cache);
> > > +	mutex_lock(&kvm->slots_lock);
> > > +	mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache);
> > > +	mutex_unlock(&kvm->slots_lock);
> >
> > Taking the lock of the calling path inside the cache topup/free layer
> > seems off.
> >
> > My vote goes to having a lock for each cache and taking that lock when
> > topping up/freeing the cache. It is more self-contained and
> > architecturally nice.
> >
>
> Yeah, this can be one way. However, in future patches when I am adding a
> per NUMA node cache, it will add a lot of locks on the same code path
> before a topup. In the split huge page case we know which NUMA node we
> need to allocate from, so we can fine-tune which lock to take, but in
> the fault path we don't know which NUMA node the page will be coming
> from, so we need to top up all of the NUMA caches. Having a single lock
> simplifies the code a little bit.
>
> I agree with you on being more self-contained. I will wait for others
> to also chime in on this and go from there.

As a general rule, please only add locking when it's needed.  Adding the
lock in this commit is just confusing.

But that aside, I don't think acquiring the slots lock is even needed in
this commit.  mmu_free_vm_memory_caches() is never called while the VM is
on vm_list, i.e. this can never race with the shrinker.

If you want to be paranoid, you can add a WARN to ensure that stays true
going forward:

	/* ... comment ... */
	WARN_ON_ONCE(!list_empty(&kvm->vm_list));
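
For illustration, the function might then end up looking something like
the below (a rough, completely untested sketch: the comment wording is
just a placeholder, mmu_free_sp_memory_cache() is the helper added
earlier in this series, and it assumes the slots_lock is dropped as
suggested above):

	static void mmu_free_vm_memory_caches(struct kvm *kvm)
	{
		/*
		 * Freeing the caches can't race with the shrinker: this runs
		 * only during VM destruction, after the VM has already been
		 * removed from vm_list.
		 */
		WARN_ON_ONCE(!list_empty(&kvm->vm_list));

		kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
		kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
		mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache);
	}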