(2011/12/19 17:43), Avi Kivity wrote:
Well, if one guest is twice as large as other guests, then it will want
twice as many shadow pages. So our goal should be to zap pages from the
guest with the highest (shadow pages / memory) ratio.
Can you measure whether there is a significant difference in a synthetic
workload, and what that change is? Perhaps apply {moderate, high} memory
pressure with {2, 4, 8, 16} VMs, or something like that.
I was running 4 VMs on my machine under high enough memory pressure. The problem
was that mmu_shrink() is not tuned to be called under ordinary memory pressure, so
what I did was change the seeks and batch parameters and set ept=0.
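For reference, the tuning was along these lines (experiment only, not a real patch,
and the exact numbers are not the point; the stock mmu_shrinker in
arch/x86/kvm/mmu.c uses .seeks = DEFAULT_SEEKS * 10 and leaves .batch unset, if I
remember correctly):

/* arch/x86/kvm/mmu.c -- experiment only.  Lowering .seeks makes shadow
 * pages look cheaper to reclaim, so the core shrinker asks mmu_shrink()
 * to scan under milder memory pressure; .batch just makes the per-call
 * nr_to_scan explicit (0 means the SHRINK_BATCH default of 128).
 */
static struct shrinker mmu_shrinker = {
	.shrink	= mmu_shrink,
	.seeks	= DEFAULT_SEEKS,	/* stock value is DEFAULT_SEEKS * 10 */
	.batch	= 128,			/* explicit; 0 gives the same default */
};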
At least, I have checked that if I make one VM do many meaningless copies while the
others stay idle, the shrinker frees shadow pages intensively from that one.
Anyway, I don't think it is good behaviour for the shrinker to call mmu_shrink() with
the default batch size, nr_to_scan=128, only to have it free just one shadow page.
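For context, the current mmu_shrink() loop does roughly the following (paraphrased
from memory, with the locking and SRCU details omitted, so not a literal copy of
arch/x86/kvm/mmu.c): whatever nr_to_scan the core passes in, only the first guest
that still has shadow pages gets anything zapped, and only the single page that
kvm_mmu_remove_some_alloc_mmu_pages() picks.

	list_for_each_entry(kvm, &vm_list, vm_list) {
		LIST_HEAD(invalid_list);

		/* zap from the first guest that still has shadow pages;
		 * for every other guest we only decrement nr_to_scan */
		if (!kvm_freed && nr_to_scan > 0 &&
		    kvm->arch.n_used_mmu_pages > 0) {
			kvm_mmu_remove_some_alloc_mmu_pages(kvm, &invalid_list);
			kvm_freed = kvm;
		}
		nr_to_scan--;

		kvm_mmu_commit_zap_page(kvm, &invalid_list);
	}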
Yes, it's very conservative. But on the other hand the shrinker is
tuned for dcache and icache, where there are usually tons of useless
objects. If we have to free something, I'd rather free those instead of
mmu pages, which tend to get recreated soon.
OK, to satisfy the requirements, I will do:
1. find the guest with the highest (shadow pages / memory) ratio
2. just zap one page from that guest, keeping the current conservative rate
I will update the patch.
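Something like the following is what I have in mind for step 1 (just an untested
sketch; kvm_guest_memory_pages() is a placeholder for however the per-guest memory
size ends up being computed, and the caller is assumed to hold kvm_lock the way
mmu_shrink() already does):

/* Untested sketch: pick the guest with the highest
 * (shadow pages / memory) ratio.  kvm_guest_memory_pages() is a
 * placeholder helper, not existing code; the ratios are compared by
 * cross-multiplication so no division is needed.  Caller holds kvm_lock.
 */
static struct kvm *mmu_shrink_pick_victim(void)
{
	struct kvm *kvm, *victim = NULL;
	u64 victim_used = 0, victim_mem = 0;

	list_for_each_entry(kvm, &vm_list, vm_list) {
		u64 mem = kvm_guest_memory_pages(kvm);	/* placeholder */
		u64 used = kvm->arch.n_used_mmu_pages;

		if (!mem)
			continue;
		/* used/mem > victim_used/victim_mem, without dividing */
		if (!victim || used * victim_mem > victim_used * mem) {
			victim = kvm;
			victim_used = used;
			victim_mem = mem;
		}
	}
	return victim;
}

For step 2, mmu_shrink() would then zap a single page from the returned guest,
which keeps the same conservative rate as now.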
Takuya