Andrew Theurer wrote:
Avi Kivity wrote:
KVM currently flushes the tlbs on all cpus when emulating invlpg. This
is because at the time of invlpg we lose track of the page, and leaving
stale tlb entries could cause the guest to access the page when it is
later freed (say after being swapped out).
However, we have a second chance to flush the tlbs: when an mmu notifier
is called to let us know the host pte has been invalidated. We can safely
defer the flush to this point, which occurs much less frequently. Of
course, we still do a local tlb flush when emulating invlpg.
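In code terms, the change amounts to something like the sketch below.
This is illustrative only, not the actual patch; kvm_mmu_invlpg(),
kvm_flush_remote_tlbs(), kvm_x86_ops->tlb_flush() and kvm_unmap_hva()
are real kvm symbols, but the emulate_invlpg_*() wrappers are invented
here to show the before/after difference:

#include <linux/kvm_host.h>	/* struct kvm, struct kvm_vcpu, gva_t */

/* Before: emulating invlpg flushed the tlb on every vcpu. */
static void emulate_invlpg_before(struct kvm_vcpu *vcpu, gva_t gva)
{
	kvm_mmu_invlpg(vcpu, gva);		/* zap the shadow pte */
	kvm_flush_remote_tlbs(vcpu->kvm);	/* IPIs all vcpus, every invlpg */
}

/* After: invlpg emulation only flushes the local tlb. */
static void emulate_invlpg_after(struct kvm_vcpu *vcpu, gva_t gva)
{
	kvm_mmu_invlpg(vcpu, gva);
	kvm_x86_ops->tlb_flush(vcpu);		/* local flush only */
}

/*
 * The remote flush is deferred to the mmu notifier, which runs when the
 * host pte is actually invalidated (e.g. before swap-out), so any stale
 * tlb entries are gone before the page can be freed.
 */
static void sketch_invalidate_page(struct mmu_notifier *mn,
				   struct mm_struct *mm,
				   unsigned long address)
{
	struct kvm *kvm = mmu_notifier_to_kvm(mn);

	kvm_unmap_hva(kvm, address);		/* drop shadow ptes for the hva */
	kvm_flush_remote_tlbs(kvm);		/* one flush, far less frequent */
}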
I should be able to run some performance comparisons with this in the
next day or two.
Excellent. Note that this does not improve performance relative to
released versions of kvm; rather, it undoes a performance regression
caused by 967f61 ("KVM: Fix missing smp tlb flush in invlpg"), which
fixes a memory corruption problem.
The workloads which will exercise this are mmu-intensive smp workloads
with CONFIG_HIGHMEM (or CONFIG_HIGHMEM64G) guests; 32-bit RHEL 3 is a
pretty bad offender.
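For context on why highmem guests are the worst case: a 32-bit guest
kernel with highmem enabled maps highmem pages through temporary kernel
ptes via kmap_atomic(), and each kunmap_atomic() tears the mapping down
with a single-page flush, i.e. an invlpg, which kvm must emulate under
shadow paging. A rough sketch of the guest-side pattern (kmap_atomic()
and kunmap_atomic() are real kernel APIs of this era; the function shown
is just an example):

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Example: zeroing a highmem page from a 32-bit guest kernel.
 * kunmap_atomic() clears the temporary pte and flushes that single
 * tlb entry with invlpg, trapping to kvm for emulation each time. */
static void zero_highmem_page(struct page *page)
{
	void *vaddr = kmap_atomic(page, KM_USER0);	/* map into fixmap slot */

	memset(vaddr, 0, PAGE_SIZE);
	kunmap_atomic(vaddr, KM_USER0);			/* pte clear + invlpg */
}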
--
error compiling committee.c: too many arguments to function