Peter,

On Sat, Apr 11, 2009 at 12:45:21AM -0400, Peter Teoh wrote:
> In this function, the TLB flushing comes before the spin unlock:
>
> void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
> {
>         struct kvm_mmu_page *sp;
>
>         spin_lock(&kvm->mmu_lock);
>
>         kvm_flush_remote_tlbs(kvm);
>         spin_unlock(&kvm->mmu_lock);
> }

kvm_vm_ioctl_get_dirty_log does:

down_write(slots_lock)

- collect data from dirty bitmap (kvm_get_dirty_log)
if (something was dirty)
        - remove write access for all translations
        - flush remote tlb's
        - clear the dirty bitmap

up_write(slots_lock)

The vmexit path (take a look at vcpu_run) takes slots_lock in read mode.
This means that no other vcpu will be able to dirty a shadow translation
(spte) in the meantime, so it's safe.

> but in kvm_vm_ioctl_set_memory_alias():
>
>         spin_unlock(&kvm->mmu_lock);
>         kvm_mmu_zap_all(kvm);
>
> the flush comes after the unlock, inside kvm_mmu_zap_all(). Does that
> sound logical?

Note that this path also takes slots_lock in write mode, which blocks
all other vcpus.
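
To make the ordering concrete, here is a rough C sketch of both sides of
that synchronization. It only illustrates the flow described above, not
the exact kernel source: signatures are simplified, bodies are trimmed,
and the clear_dirty_bitmap() helper is invented for brevity.

/* ioctl path: takes slots_lock as a writer, so all vcpus are blocked */
int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
{
        int is_dirty = 0;
        int r;

        down_write(&kvm->slots_lock);

        /* collect data from the dirty bitmap */
        r = kvm_get_dirty_log(kvm, log, &is_dirty);
        if (r)
                goto out;

        if (is_dirty) {
                /* write-protect the sptes, flush, then clear the bitmap */
                kvm_mmu_slot_remove_write_access(kvm, log->slot);
                kvm_flush_remote_tlbs(kvm);
                clear_dirty_bitmap(kvm, log->slot);     /* invented helper */
        }
        r = 0;
out:
        up_write(&kvm->slots_lock);
        return r;
}

/* vcpu_run path: takes slots_lock as a reader around guest entry */
static int vcpu_run(struct kvm_vcpu *vcpu)
{
        down_read(&vcpu->kvm->slots_lock);
        /*
         * sptes can only be dirtied while this read lock is held, so a
         * writer such as the ioctl above (or kvm_vm_ioctl_set_memory_alias,
         * which also holds slots_lock for write around kvm_mmu_zap_all)
         * cannot race with guest faults, regardless of where mmu_lock is
         * dropped relative to the TLB flush.
         */
        /* ... enter guest, handle exits ... */
        up_read(&vcpu->kvm->slots_lock);
        return 0;
}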