Re: [Patch v4 03/18] KVM: x86/mmu: Track count of pages in KVM MMU page caches globally

On Thu, Mar 9, 2023 at 3:53 PM David Matlack <dmatlack@xxxxxxxxxx> wrote:
>
> On Mon, Mar 06, 2023 at 02:41:12PM -0800, Vipin Sharma wrote:
> > Create a global counter for total number of pages available
> > in MMU page caches across all VMs. Add mmu_shadow_page_cache
> > pages to this counter.
>
> I think I prefer counting the objects on-demand in mmu_shrink_count(),
> instead of keeping track of the count. Keeping track of the count adds
> complexity to the topup/alloc paths for the sole benefit of the
> shrinker. I'd rather contain that complexity to the shrinker code unless
> there is a compelling reason to optimize mmu_shrink_count().
>
> IIRC we discussed this at one point. Was there a reason to take this
> approach that I'm just forgetting?

To count on demand, we would first need to take kvm_lock, then iterate
over every VM, and for each vCPU take a lock and sum the object counts
in its caches. Once NUMA support is introduced later in this series,
there will be even more caches to iterate. We also can't/shouldn't use
mutex_trylock(), since skipping contended caches would not give an
accurate picture, and by the time mmu_shrink_scan() runs the count can
be totally different anyway.
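For concreteness, counting on demand would look roughly like this (a
sketch, not code from the patch; mmu_shadow_page_cache_lock is the
per-vCPU mutex this series adds):

#include <linux/kvm_host.h>

static unsigned long mmu_shrink_count(struct shrinker *shrink,
				      struct shrink_control *sc)
{
	unsigned long count = 0, i;
	struct kvm_vcpu *vcpu;
	struct kvm *kvm;

	mutex_lock(&kvm_lock);
	list_for_each_entry(kvm, &vm_list, vm_list) {
		kvm_for_each_vcpu(i, vcpu, kvm) {
			/* Serialize against the topup/alloc paths. */
			mutex_lock(&vcpu->arch.mmu_shadow_page_cache_lock);
			count += vcpu->arch.mmu_shadow_page_cache.nobjs;
			mutex_unlock(&vcpu->arch.mmu_shadow_page_cache_lock);
		}
	}
	mutex_unlock(&kvm_lock);

	return count;
}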

The count_objects() API comment says not to do any deadlock checks in
the count callback (I don't know exactly what that implies), and
percpu_counter is very fast when adding/subtracting pages, so the
overhead of using it to keep a global count is minimal. Since there is
not much cost compared to the previous approach, we ended our
discussion by keeping this per-CPU counter.
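A minimal sketch of the percpu_counter approach (the counter name
follows the patch subject, and the two helpers are hypothetical
placeholders for the actual init and topup/free hooks):

#include <linux/percpu_counter.h>

/* Global count of unused pages sitting in MMU page caches. */
static struct percpu_counter kvm_total_unused_cached_pages;

static int kvm_mmu_init_unused_page_counter(void)
{
	/* Called once at module init. */
	return percpu_counter_init(&kvm_total_unused_cached_pages, 0,
				   GFP_KERNEL);
}

static void kvm_mmu_account_unused_pages(long nr_pages)
{
	/* Called from the cache topup/free paths. */
	percpu_counter_add(&kvm_total_unused_cached_pages, nr_pages);
}

static unsigned long mmu_shrink_count(struct shrinker *shrink,
				      struct shrink_control *sc)
{
	/* A racy but fast read is fine for a shrinker count callback. */
	return percpu_counter_read_positive(&kvm_total_unused_cached_pages);
}

With this, the count callback never takes kvm_lock or any per-vCPU
lock at all, which is the whole point.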



