Re: [PATCH 4/4] KVM: MMU: Make mmu_shrink() scan nr_to_scan shadow pages

On Fri, 16 Dec 2011 09:06:11 -0200
Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:

> On Mon, Dec 12, 2011 at 07:26:47AM +0900, Takuya Yoshikawa wrote:
> > From: Takuya Yoshikawa <yoshikawa.takuya@xxxxxxxxxxxxx>
> > 
> > Currently, mmu_shrink() tries to free a shadow page from one kvm and
> > does not use nr_to_scan correctly.
> > 
> > This patch fixes this by making it try to free some shadow pages from
> > each kvm.  The number of shadow pages each kvm frees becomes
> > proportional to the number of shadow pages it is using.
> > 
> > Note: an easy way to see how this code works is to do
> >   echo 3 > /proc/sys/vm/drop_caches
> > while some virtual machines are running.  Shadow pages will be zapped
> > as expected by this.
> 
> I'm not sure this is a meaningful test to verify this change is
> worthwhile, because while the shrinker tries to free a shadow page from
> one vm, the vm's position in the kvm list is changed, so over time
> the shrinker will cycle over all VMs.

The test was just to check that mmu_shrink() works as intended.  Maybe it
was not a good thing to put in the changelog, sorry.


I admit that I could not find any strong reason except for protecting the
host from intentionally induced shadowing.

But for that, don't you think that freeing the same number of shadow pages
from good and bad VMs equally is a bad thing?

My method tries to free more shadow pages from VMs that are using more of
them; e.g. if there is a pathological increase in shadow pages for one VM,
that one will be treated intensively.
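
To make that concrete, here is a minimal sketch of the idea (not the actual
patch; kvm_zap_some_pages() is a hypothetical helper standing in for the
real zapping loop, and the field/lock names are the ones mmu.c uses around
this kernel version):

static int mmu_shrink_proportional(int nr_to_scan)
{
        struct kvm *kvm;
        int total = percpu_counter_read_positive(&kvm_total_used_mmu_pages);

        if (!total)
                return 0;

        raw_spin_lock(&kvm_lock);
        list_for_each_entry(kvm, &vm_list, vm_list) {
                /*
                 * This VM's share of the scan target, proportional to how
                 * many shadow pages it is using; rounded up so that small
                 * VMs are not skipped forever.
                 */
                int share = DIV_ROUND_UP(nr_to_scan *
                                          kvm->arch.n_used_mmu_pages, total);

                kvm_zap_some_pages(kvm, share);         /* hypothetical */
        }
        raw_spin_unlock(&kvm_lock);

        return total;
}

So a VM holding most of the shadow pages receives most of the scan, and an
idle VM with only a handful of pages is barely touched.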

If you can agree on this reasoning, I will update the description and resend.

> 
> Can you measure whether there is a significant difference in a synthetic
> workload, and what that change is? Perhaps apply {moderate, high} memory
> pressure load with {2, 4, 8, 16} VMs or something like that.
> 

I was running 4 VMs on my machine under sufficiently high memory pressure.  The
problem was that mmu_shrink() is not tuned to be called under usual memory
pressure, so what I did was change the shrinker's seeks and batch parameters
and set ept=0.
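
For reference, the registration in arch/x86/kvm/mmu.c looks roughly like this
around that kernel version; the comments describe the knobs I tweaked for the
test, so take the sketch as illustrative rather than as a submitted change:

static struct shrinker mmu_shrinker = {
        .shrink = mmu_shrink,
        .seeks  = DEFAULT_SEEKS * 10,   /* high seeks: scanned less aggressively */
        /*
         * .batch is left at 0, so shrink_slab() falls back to SHRINK_BATCH
         * (128) as the nr_to_scan passed to each call.  Lowering .seeks and
         * setting a smaller .batch is how I made the callback fire under
         * ordinary memory pressure for the test above.
         */
};

seeks roughly expresses how costly it is to recreate an object, so the large
value above makes the MMU cache look expensive and keeps the shrinker away
from it under normal load.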

At least, I have checked that if I make one VM do many meaningless copies while
the others stay idle, the shrinker frees shadow pages intensively from that one.


Anyway, I don't think it is good behaviour for the shrinker to call mmu_shrink()
with the default batch size, nr_to_scan=128, and then have it free just one
shadow page.
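
Concretely, the callback in this kernel receives the target via struct
shrink_control, so what the patch aims for is roughly the following
(simplified, reusing the hypothetical mmu_shrink_proportional() sketched
above; this is not the patch itself):

static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
        int nr_to_scan = sc->nr_to_scan;

        if (nr_to_scan == 0)
                goto out;       /* query only: just report the current count */

        /* Spread the scan over all VMs instead of zapping a single page. */
        mmu_shrink_proportional(nr_to_scan);    /* hypothetical helper */
out:
        return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
}

That is the behaviour the patch is after: nr_to_scan actually translating into
that many shadow pages being scanned, rather than one page freed per call.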


	Takuya