On 06/28/2010 04:25 PM, Alexander Graf wrote:
>>>> Less and simpler code, better reporting through slabtop, less wastage
>>>> of partially allocated slab pages.
>>>
>>> But it also means that one VM can spill the global slab cache and kill
>>> another VM's mm performance, no?
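
For concreteness, the shared-cache idea would look something like the
sketch below; struct hpte_cache, its fields and the function names are
illustrative stand-ins, not the actual patch:

#include <linux/slab.h>
#include <linux/list.h>

/* One slab cache shared by every VM: it shows up in slabtop under its
 * own name, and partially filled slab pages are shared between VMs
 * instead of being wasted per-VM. */
static struct kmem_cache *hpte_cache_slab;

struct hpte_cache {
        struct hlist_node list;         /* hash chain linkage */
        u64 host_va;
        u64 pfn;
};

int kvmppc_mmu_hpte_sysinit(void)
{
        hpte_cache_slab = kmem_cache_create("kvm_hpte_cache",
                                            sizeof(struct hpte_cache),
                                            0, 0, NULL);
        return hpte_cache_slab ? 0 : -ENOMEM;
}

static struct hpte_cache *alloc_hpte(void)
{
        return kmem_cache_zalloc(hpte_cache_slab, GFP_KERNEL);
}

static void free_hpte(struct hpte_cache *pte)
{
        kmem_cache_free(hpte_cache_slab, pte);
}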
>> What do you mean by spill?

Well?
>> btw, in the midst of the nit-picking frenzy I forgot to ask how the
>> individual hash chain lengths as well as the per-vm allocation were
>> limited.
>>
>> On x86 we have a per-vm limit and we allow the mm shrinker to reduce
>> shadow mmu data structures dynamically.
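
As a reference point, the x86 side looks roughly like the sketch below
(shrinker callback signature as of the circa-2.6.33 API;
kvm_zap_one_shadow_page() and shadow_pages_allocated() are stand-ins
for the real mmu.c bookkeeping):

#include <linux/mm.h>
#include <linux/kvm_host.h>

/* Each VM has a shadow-page budget; on top of that a shrinker is
 * registered so the mm can trim shadow pages under memory pressure. */
static int mmu_shrink(int nr_to_scan, gfp_t gfp_mask)
{
        struct kvm *kvm;

        spin_lock(&kvm_lock);
        list_for_each_entry(kvm, &vm_list, vm_list) {
                if (nr_to_scan-- <= 0)
                        break;
                kvm_zap_one_shadow_page(kvm);   /* stand-in helper */
        }
        spin_unlock(&kvm_lock);

        /* Tell the mm roughly how many reclaimable objects remain. */
        return shadow_pages_allocated();        /* stand-in counter */
}

static struct shrinker mmu_shrinker = {
        .shrink = mmu_shrink,
        .seeks = DEFAULT_SEEKS * 10,
};

int kvm_mmu_module_init(void)
{
        register_shrinker(&mmu_shrinker);
        return 0;
}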
> Very simple. I keep an int with the number of allocated entries around
> and if that hits a define'd threshold, I flush all shadow pages.
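
In sketch form (the threshold value, the counter field and the flush
helper are invented here; the real ones are in the patch):

#define HPTE_CACHE_MAX  5000    /* made-up threshold for the sketch */

/* Called when inserting a new shadow pte into the vcpu's hash table;
 * assumes hypothetical hpte_cache_count/hpte_hash fields in the arch
 * vcpu struct. */
static void hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{
        /* Budget exhausted: drop everything and start over.  Crude,
         * but it bounds the per-vcpu allocation with one counter. */
        if (vcpu->arch.hpte_cache_count >= HPTE_CACHE_MAX) {
                hpte_cache_flush_all(vcpu);     /* stand-in flush helper */
                vcpu->arch.hpte_cache_count = 0;
        }

        hlist_add_head(&pte->list,
                       &vcpu->arch.hpte_hash[hpte_hash_fn(pte->host_va)]);
        vcpu->arch.hpte_cache_count++;
}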
A truly nefarious guest will make all ptes hash to the same chain,
making some operations very long (O(n^2) in the x86 mmu, don't know
about ppc) under a spinlock. So we had to limit hash chains, not just
the number of entries.
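
What bounding the chains can look like, again with stand-in names (the
four-argument hlist_for_each_entry() of that era is assumed):

#define MAX_CHAIN_LEN   8       /* illustrative per-chain bound */

static void hpte_chain_insert(struct hlist_head *chain,
                              struct hpte_cache *npte)
{
        struct hpte_cache *pte;
        struct hlist_node *node;
        int len = 0;

        hlist_for_each_entry(pte, node, chain, list)
                len++;

        /* A guest that aims every pte at one chain otherwise turns
         * each lookup under the lock into a long list walk. */
        if (len >= MAX_CHAIN_LEN)
                hpte_evict_oldest(chain);       /* stand-in eviction */

        hlist_add_head(&npte->list, chain);
}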
But your mmu is per-cpu, no? In that case, no spinlock, and any damage
the guest does is limited to itself.
--
error compiling committee.c: too many arguments to function