On 06/07/2010 11:43 AM, Lai Jiangshan wrote:
> Avi Kivity wrote:
>>>> The KVM MMU synchronizes shadow ptes using the mmu lock; however, the
>>>> CPU will happily ignore the lock when setting the accessed bit. This
>>>> can cause the accessed bit to be lost. Luckily this only results in
>>>> incorrect page selection for swap.
>>> Atomic operations are heavy and slow; they hurt performance.
>> Incorrect page selection for swap also hurts performance.
>>
>> We can avoid the exchange in most cases, for example if the new spte has
>> the accessed bit set (already in the patch set), or if the page is
>> already marked as accessed, or if we see the old spte has the accessed
>> bit set (so no race can occur). I'll update the patches to avoid
>> atomics when possible.
>>
>> I don't think atomics are that expensive, though, ~20 cycles on modern
>> processors?
> I think races that lose the accessed bit happen only very rarely, and
> losing it produces no incorrect results. Is this problem worth the
> concern?
The real concern is when we start using the dirty bit. I'd like to
fault read accesses with writeable sptes, but with the dirty bit clear.
This way we can allow a guest to write to a page without a fault, but
not cause it to swap too soon.
--
error compiling committee.c: too many arguments to function