On 05/03/2013 09:05 AM, Marcelo Tosatti wrote:
>> +
>> +/*
>> + * Fast invalidate all shadow pages belonging to @slot.
>> + *
>> + * @slot != NULL means the invalidation is caused by the memslot
>> + * specified by @slot being deleted; in this case, we should ensure
>> + * that the rmap and lpage-info of the @slot cannot be used after
>> + * calling the function.
>> + *
>> + * @slot == NULL means the invalidation is due to other reasons; we
>> + * need not care about rmap and lpage-info since they are still valid
>> + * after calling the function.
>> + */
>> +void kvm_mmu_invalid_memslot_pages(struct kvm *kvm,
>> +				   struct kvm_memory_slot *slot)
>> +{
>> +	spin_lock(&kvm->mmu_lock);
>> +	kvm->arch.mmu_valid_gen++;
>> +
>> +	/*
>> +	 * All shadow pages are invalid, so reset the large page info;
>> +	 * then we can safely destroy the memslot. It is also good for
>> +	 * large page usage.
>> +	 */
>> +	kvm_clear_all_lpage_info(kvm);
>
> Xiao,
>
> I understood it was agreed that a simple mmu_lock lock-break, while
> avoiding zapping of newly instantiated pages, upon a
>
> 	if (spin_needbreak)
> 		cond_resched_lock()
>
> cycle was enough as a first step? And then later introduce root
> zapping along with measurements.
>
> https://lkml.org/lkml/2013/4/22/544

Yes, it is. See the changelog in 0/0:

"we use the lock-break technique to zap all sptes linked on the
invalid rmap; it is not very effective, but good for the first step."

Thanks!