Avi Kivity wrote:
> On 06/08/2010 05:35 AM, Xiao Guangrong wrote:
>>
>>> We can avoid the exchange in most cases, for example if the new spte
>>> has the accessed bit set (already in the patch set), or if the page
>>> is already marked as accessed, or if we see the old spte has the
>>> accessed bit set (so no race can occur). I'll update the patches to
>>> avoid atomics when possible.
>>>
>> Umm, the reason we need atomics here is to prevent a vcpu from
>> updating the spte while we read the A bit from it, so perhaps we can
>> use the sequence below to avoid atomics completely:
>>
>> - set a reserved bit in the spte
>> - get the A bit from the spte
>> - set the new spte
>>
>> The worst case is that we cause a vcpu #PF here, but that doesn't
>> matter since the old mapping is already invalid, and a remote tlb
>> flush is needed later anyway.
>>
>
> To set the reserved bit in the spte, you need an atomic operation
> (well, unless you use a sub-word access to set a reserved bit in the
> high 32 bits).

I think we don't need an atomic here; for example, we can do it like
this:

	*spte |= RSVD_BIT

[ maybe we need a write barrier here? ]

Once this store completes, we can ensure that a vcpu can no longer
update the A bit in the spte, so we can read the A bit safely.

>
>>> I don't think atomics are that expensive, though, ~20 cycles on
>>> modern processors?
>>>
>> Yes, but atomics are "LOCK" instructions, and they can stop multiple
>> cpus from running in parallel.
>>
>
> Only if those cpus are accessing the same word you're accessing.
>

Oh, you are right, LOCK only locks the memory defined by the
destination operand. But I also recall that page table accesses can
pass a locked instruction; the description below is from Intel's spec,
Vol. 3, 7-5:

  Locked operations are atomic with respect to all other memory
  operations and all externally visible events. Only instruction fetch
  and page table accesses can pass locked instructions. Locked
  instructions can be used to synchronize data written by one processor
  and read by another processor.
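
To make the proposed sequence concrete, here is a minimal sketch in
plain C. The names RSVD_BIT, PT_ACCESSED_MASK, the bit positions, and
the helper are illustrative placeholders, not the real KVM definitions,
and __sync_synchronize() just stands in for whatever write barrier we
decide is needed:

	#include <stdint.h>

	#define RSVD_BIT         (1ULL << 51)  /* a reserved bit in the spte */
	#define PT_ACCESSED_MASK (1ULL << 5)   /* the A bit */

	/* Read the old A bit and install new_spte, with no LOCK'd ops. */
	static int update_spte_get_accessed(volatile uint64_t *sptep,
					    uint64_t new_spte)
	{
		int accessed;

		/*
		 * 1. Set a reserved bit.  A vcpu touching this mapping now
		 *    takes a reserved-bit #PF instead of setting the A bit;
		 *    at worst this is a spurious fault, and the old mapping
		 *    is being torn down anyway.
		 */
		*sptep |= RSVD_BIT;
		__sync_synchronize();	/* the write barrier questioned above */

		/* 2. The A bit can no longer change under us; read it. */
		accessed = !!(*sptep & PT_ACCESSED_MASK);

		/* 3. Install the new spte (this also clears the reserved
		 *    bit); a remote tlb flush still follows later. */
		*sptep = new_spte;

		return accessed;
	}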