On 12/14/2012 07:02 AM, Marcelo Tosatti wrote:
>>> Same comment as before: the only case where it should not attempt to
>>> emulate is when there is a condition which makes it impossible to fix
>>> (the information is available to detect that condition).
>>>
>>> The earlier suggestion
>>>
>>> "How about recording the gfn number for shadow pages that have been
>>> shadowed in the current pagefault run?"
>>>
>>> Was about that.
>>
>> I think we can have a try. Is this change good to you, Marcelo?
>>
>> [eric@localhost kvm]$ git diff
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 01d7c2a..e3d0001 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -4359,24 +4359,34 @@ unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
>>  	return nr_mmu_pages;
>>  }
>>
>> -int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4])
>> +void kvm_mmu_get_sp_hierarchy(struct kvm_vcpu *vcpu, u64 addr,
>> +			      struct kvm_mmu_sp_hierarchy *hierarchy)
>>  {
>>  	struct kvm_shadow_walk_iterator iterator;
>>  	u64 spte;
>> -	int nr_sptes = 0;
>> +
>> +	hierarchy->max_level = hierarchy->nr_levels = 0;
>>
>>  	walk_shadow_page_lockless_begin(vcpu);
>>  	for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) {
>> -		sptes[iterator.level-1] = spte;
>> -		nr_sptes++;
>> +		struct kvm_mmu_page *sp = page_header(__pa(iterator.sptep));
>> +
>> +		if (hierarchy->indirect_only && sp->role.direct)
>> +			break;
>> +
>> +		if (!hierarchy->max_level)
>> +			hierarchy->max_level = iterator.level;
>> +
>> +		hierarchy->shadow_gfns[iterator.level-1] = sp->gfn;
>> +		hierarchy->sptes[iterator.level-1] = spte;
>> +		hierarchy->nr_levels++;
>> +
>>  		if (!is_shadow_present_pte(spte))
>>  			break;
>>  	}
>>  	walk_shadow_page_lockless_end(vcpu);
>> -
>> -	return nr_sptes;
>>  }
>
> Record gfns while shadowing in the vcpu struct, in a struct, along with cr2.
> Then validate
> That way its guaranteed its not some other vcpu.

Okay, I will try this way.
:)
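For readers following the thread: Marcelo's suggestion is to record, per vCPU, the gfns of the shadow pages touched while handling a fault, together with the faulting cr2, and later validate against that record so a concurrent fault on another vCPU cannot be mistaken for this one. A minimal userspace sketch of that record-and-validate idea (not the actual KVM patch; the struct and function names `fault_shadow_record`, `record_shadow_gfn`, and `fault_shadowed_gfn` are all hypothetical) might look like:

```c
/* Hypothetical illustration of "record gfns along with cr2, then
 * validate" -- all names here are made up, not KVM code. */
#include <stdint.h>

#define SP_HIERARCHY_MAX 4	/* one entry per paging level */

typedef uint64_t gfn_t;

/* Would live in the per-vcpu struct, so the record is inherently
 * tied to the vcpu that took the fault. */
struct fault_shadow_record {
	uint64_t cr2;		/* faulting address this walk served */
	int nr_levels;
	gfn_t shadow_gfns[SP_HIERARCHY_MAX];
};

/* Called for each shadow page shadowed during a page-fault run. */
static void record_shadow_gfn(struct fault_shadow_record *rec,
			      uint64_t cr2, gfn_t gfn)
{
	if (rec->cr2 != cr2) {	/* new fault: reset the record */
		rec->cr2 = cr2;
		rec->nr_levels = 0;
	}
	if (rec->nr_levels < SP_HIERARCHY_MAX)
		rec->shadow_gfns[rec->nr_levels++] = gfn;
}

/* Validate that gfn was shadowed by *this* vcpu for *this* cr2,
 * so activity from some other vcpu cannot satisfy the check. */
static int fault_shadowed_gfn(const struct fault_shadow_record *rec,
			      uint64_t cr2, gfn_t gfn)
{
	int i;

	if (rec->cr2 != cr2)
		return 0;
	for (i = 0; i < rec->nr_levels; i++)
		if (rec->shadow_gfns[i] == gfn)
			return 1;
	return 0;
}
```

Because the record sits in the vCPU's own state and is keyed by cr2, a match guarantees both "same vCPU" and "same fault", which is the property the thread is after.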