On 06/17/2010 10:49 AM, Xiao Guangrong wrote:
Avi Kivity wrote:
On 06/15/2010 05:46 AM, Xiao Guangrong wrote:
Hi Avi, Marcelo,
This patchset supports pte prefetch when a guest #PF is intercepted;
the aim is to reduce the number of guest #PFs that must be intercepted by the VMM.
If we meet any failure in the prefetch path, we exit it
and do not try the other ptes, to avoid becoming a heavy path.
During my performance tests with EPT enabled, unixbench
shows the performance improved by ~1.2%,
Once the guest has faulted in all memory, we shouldn't see much
improvement, yes?
I think you are right; this path only prefetches valid mappings with pte.A=1.
I mean for tdp. Faulting is rare once the guest has touched all of memory.
under the EPT-disabled case,
unixbench shows the performance improved by ~3.6%
I'm a little worried about this. In some workloads, prefetch can often
fail due to gpte.a=0 so we spend effort doing nothing.
Yes, prefetch does not always succeed, but the prefetch path is fast and does not
cost much time; in the worst case we only need to read 128 bytes from the guest pte
table. Once it succeeds, much overhead is reduced.
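A minimal sketch of the batched read Xiao describes, assuming 64-bit gptes so that 128 bytes covers 16 entries. The helper names (`read_guest_ptes`, `prefetch_candidates`) and the in-memory "guest" table are illustrative, not KVM's actual API (the real code would use something like an atomic guest-memory copy):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PTE_PREFETCH_NUM 16            /* 16 gptes x 8 bytes = 128 bytes */
#define PT64_PRESENT_MASK  (1ULL << 0) /* gpte.P */
#define PT64_ACCESSED_MASK (1ULL << 5) /* gpte.A */

/* Hypothetical stand-in for copying gptes from guest memory in one go;
 * here the "guest" page table is just a host array. Returns 0 on success. */
static int read_guest_ptes(const uint64_t *gpte_table, uint64_t *dst)
{
    memcpy(dst, gpte_table, PTE_PREFETCH_NUM * sizeof(uint64_t));
    return 0;
}

/* Count how many of the batched gptes qualify for prefetch: present
 * and already accessed (gpte.A = 1), as discussed in the thread. */
static int prefetch_candidates(const uint64_t *gpte_table)
{
    uint64_t gptes[PTE_PREFETCH_NUM];
    uint64_t want = PT64_PRESENT_MASK | PT64_ACCESSED_MASK;
    int i, n = 0;

    if (read_guest_ptes(gpte_table, gptes))
        return 0; /* any failure: bail out, don't retry other ptes */

    for (i = 0; i < PTE_PREFETCH_NUM; i++)
        if ((gptes[i] & want) == want)
            n++;
    return n;
}
```

The single 128-byte copy is the whole worst-case cost: one guest-memory read, then a purely local walk over the 16 entries.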
Ok.
We should map those pages with pte.a=pte.d=0 so we don't confuse host
memory management. On EPT (which lacks a/d bits) we can't enable it
(but we can on NPT).
You are right, this is the speculative path.
For the pte.A bit:
we call mmu_set_spte() with speculative = true, so we set pte.a = 0 in this
path.
For the pte.D bit:
We should also set pte.d = 0 in the speculative path; the same problem exists in
the invlpg/pte-write path. I will fix it in the next version.
It's not enough to set spte.d=0, we also need to sample it later.
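A sketch of the two halves of that point, using the x86-64 pte bit positions; the helper names are illustrative, not KVM's actual functions. A speculatively mapped spte starts with accessed and dirty clear so host memory management is not misled, and the dirty bit must be sampled back later (e.g. at teardown), since hardware may have set it in the meantime:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SPTE_ACCESSED (1ULL << 5) /* x86 PTE accessed bit */
#define SPTE_DIRTY    (1ULL << 6) /* x86 PTE dirty bit */

/* Illustrative: install a speculative spte with A=D=0. On shadow/NPT
 * paging the CPU sets these bits itself on a real access, so nothing
 * is lost; the page just doesn't look "touched" prematurely. */
static uint64_t make_speculative_spte(uint64_t spte)
{
    return spte & ~(SPTE_ACCESSED | SPTE_DIRTY);
}

/* Avi's point: clearing spte.d at map time is not enough. Before the
 * spte is dropped, sample the bit and propagate it (mark the host
 * page dirty / update the gpte) if hardware set it meanwhile. */
static bool sample_and_clear_dirty(uint64_t *spte)
{
    bool was_dirty = (*spte & SPTE_DIRTY) != 0;

    *spte &= ~SPTE_DIRTY;
    return was_dirty;
}
```

Without the later sampling step, a write the guest made through the speculative mapping would be silently forgotten when the spte is torn down.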
--
error compiling committee.c: too many arguments to function