On Mon, Oct 28, 2024, David Matlack wrote:
> On Fri, Oct 25, 2024 at 10:37 AM Vipin Sharma <vipinsh@xxxxxxxxxx> wrote:
> >
> > On Thu, Oct 24, 2024 at 4:25 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Oct 04, 2024, Vipin Sharma wrote:
> > > > +out_mmu_memory_cache_unlock:
> > > > +	mutex_unlock(&vcpu->arch.mmu_memory_cache_lock);
> > >
> > > I've been thinking about this patch on and off for the past few weeks, and
> > > every time I come back to it I can't shake the feeling that we came up with
> > > a clever solution for a problem that doesn't exist.  I can't recall a single
> > > complaint about KVM consuming an unreasonable amount of memory for page
> > > tables.  In fact, the only time I can think of where the code in question
> > > caused problems was when I unintentionally inverted the iterator and zapped
> > > the newest SPs instead of the oldest SPs.
> > >
> > > So, I'm leaning more and more toward simply removing the shrinker
> > > integration.
> >
> > One thing we can agree on is that we don't need the MMU shrinker in its
> > current form, because it removes pages that are actively being used by the
> > VM instead of shrinking its caches.
> >
> > Regarding the current series: the biggest VM we can have in GCE has 416
> > vCPUs, and considering each vCPU thread can hold 40 pages in its cache, the
> > total cost comes to around 65 MiB. That doesn't seem like much to me,
> > considering these VMs have memory in the TiB range. Since the caches are
> > bounded, I think it is fine not to have an MMU shrinker, as its impact on
> > KVM is minuscule.
>
> I have no objection to removing the shrinker entirely.

Let's do that.  In the unlikely scenario someone comes along with a strong use
case for purging the vCPU caches, we can always resurrect this approach.

Vipin, can you send a v3?
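For readers without the full patch in front of them, the quoted hunk above is
the tail of a goto-based unwind guarding the per-vCPU cache fills under the
new mutex.  A minimal sketch of that pattern, where only the
mmu_memory_cache_lock field and the out_mmu_memory_cache_unlock label come
from the quoted fragment; the function body and the topup_caches() helper are
illustrative assumptions, not the actual patch:

static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
{
	int r;

	/*
	 * Take the per-vCPU cache lock so a shrinker purging the caches
	 * can't race with the vCPU refilling them (the synchronization
	 * the patch under review was adding).
	 */
	mutex_lock(&vcpu->arch.mmu_memory_cache_lock);

	r = topup_caches(vcpu);	/* hypothetical helper that fills the caches */
	if (r)
		goto out_mmu_memory_cache_unlock;

	/* ... further fills that can also fail and jump to the label ... */

out_mmu_memory_cache_unlock:
	mutex_unlock(&vcpu->arch.mmu_memory_cache_lock);
	return r;
}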
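As a sanity check on Vipin's 65 MiB figure, assuming 4 KiB per cached page:

	416 vCPUs * 40 pages/vCPU * 4 KiB/page = 66,560 KiB = 65 MiB

i.e. the worst-case footprint of the caches is a fixed, small bound relative
to multi-TiB guests, which is the basis for dropping the shrinker.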