Re: [PATCH 4/4] KVM: MMU: Make mmu_shrink() scan nr_to_scan shadow pages

On 12/16/2011 04:58 PM, Takuya Yoshikawa wrote:
> > 
> > I'm not sure this is a meaningful test to verify this change is
> > worthwhile, because while the shrinker tries to free a shadow page from
> > one vm, the vm's position in the kvm list is changed, so over time
> > the shrinker will cycle over all VMs.
>
> The test was for checking whether mmu_shrink() would work as intended.  Maybe
> it was not a good changelog, sorry.
>
>
> I admit that I could not find any strong reason except for protecting the
> host from intentionally induced shadowing.
>
> But for that, don't you think that freeing the same number of shadow pages
> from good and bad VMs equally is a bad thing?
>
> My method tries to free many shadow pages from VMs with many shadow
> pages;  e.g. if there is a pathological increase in shadow pages for one
> VM, that one will be targeted intensively.
>
> If you agree with this reasoning, I will update the description and resend.

Well, if one guest is twice as large as other guests, then it will want
twice as many shadow pages.  So our goal should be to zap pages from the
guest with the highest (shadow pages / memory) ratio.
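As a rough user-space sketch of that policy (the structure and numbers below are
made up for illustration; this is not the real kvm accounting), picking the
victim would look something like:

#include <stddef.h>
#include <stdio.h>

/* Hypothetical per-VM accounting, for illustration only. */
struct vm_stats {
	const char *name;
	unsigned long shadow_pages;	/* shadow pages currently in use */
	unsigned long memory_pages;	/* guest memory size, in pages */
};

/* Pick the VM with the highest shadow_pages / memory_pages ratio. */
static struct vm_stats *pick_victim(struct vm_stats *vms, size_t n)
{
	struct vm_stats *victim = NULL;
	size_t i;

	for (i = 0; i < n; i++) {
		if (!vms[i].memory_pages)
			continue;
		/* compare a/b > c/d as a*d > c*b to avoid floating point */
		if (!victim ||
		    vms[i].shadow_pages * victim->memory_pages >
		    victim->shadow_pages * vms[i].memory_pages)
			victim = &vms[i];
	}
	return victim;
}

int main(void)
{
	struct vm_stats vms[] = {
		{ "vm0", 8000, 1UL << 20 },	/* big guest, most shadow pages in absolute terms */
		{ "vm1", 4000, 1UL << 18 },	/* small guest, heaviest shadowing per page of memory */
		{ "vm2", 1000, 1UL << 19 },
	};
	struct vm_stats *v = pick_victim(vms, 3);

	printf("zap from %s\n", v ? v->name : "(none)");
	return 0;
}

A small guest with a disproportionate shadow footprint (vm1 here) gets picked
even though a larger guest holds more shadow pages in absolute terms.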

> > 
> > Can you measure whether there is a significant difference in a synthetic
> > workload, and what that change is? Perhaps apply {moderate, high} memory
> > pressure load with {2, 4, 8, 16} VMs or something like that.
> > 
>
> I was running 4 VMs on my machine under sufficiently high memory pressure.  The problem
> was that mmu_shrink() was not tuned to be called under usual memory pressure:  what
> I did was change the seeks and batch parameters and set ept=0.
>
> At least, I have checked that if I make one VM perform many meaningless copies
> while the others stay idle, the shrinker frees shadow pages intensively from that one.
>
>
> Anyway, I don't think that having the shrinker call mmu_shrink() with the default batch
> size, nr_to_scan=128, only to free a single shadow page, is good behaviour.

Yes, it's very conservative.  But on the other hand, the shrinker is
tuned for dcache and icache, where there are usually tons of useless
objects.  If we have to free something, I'd rather free those than
mmu pages, which tend to get recreated soon.
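For comparison, here is a toy user-space sketch of the two behaviours under
discussion (the structures and helpers are invented for this example, not
kvm's): the conservative variant frees one page per shrink call no matter what
nr_to_scan says, while the batched variant keeps zapping from the chosen VM
until nr_to_scan pages have been scanned or the VM runs out.

#include <stdio.h>

/* Toy stand-in for a VM's shadow-page count; not the real kvm structures. */
struct toy_vm {
	const char *name;
	unsigned long shadow_pages;
};

/* Zap one shadow page from @vm; returns how many were freed (0 or 1). */
static unsigned long zap_one(struct toy_vm *vm)
{
	if (!vm->shadow_pages)
		return 0;
	vm->shadow_pages--;
	return 1;
}

/* Conservative behaviour: ignore nr_to_scan and free a single page. */
static unsigned long shrink_conservative(struct toy_vm *vm, unsigned long nr_to_scan)
{
	(void)nr_to_scan;
	return zap_one(vm);
}

/* Batched behaviour: scan up to nr_to_scan pages from the chosen VM. */
static unsigned long shrink_batched(struct toy_vm *vm, unsigned long nr_to_scan)
{
	unsigned long freed = 0;

	while (nr_to_scan-- && vm->shadow_pages)
		freed += zap_one(vm);
	return freed;
}

int main(void)
{
	struct toy_vm a = { "vm-a", 500 }, b = { "vm-b", 500 };

	printf("conservative freed %lu pages, batched freed %lu pages\n",
	       shrink_conservative(&a, 128), shrink_batched(&b, 128));
	return 0;
}

Whether freeing up to 128 mmu pages per call is worthwhile, given that they
tend to get recreated, is exactly the trade-off above.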

-- 
error compiling committee.c: too many arguments to function
