On 08/02/2012 09:14 PM, Marcelo Tosatti wrote:
> On Sun, Jul 29, 2012 at 04:18:58PM +0800, Xiao Guangrong wrote:
>> After commit a2766325cf9f9, the error pfn is replaced by the
>> error code, so it no longer needs to be released.
>>
>> [ The patch is compile-tested for powerpc ]
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx>
>> ---
>>  arch/powerpc/kvm/e500_tlb.c |    1 -
>>  arch/x86/kvm/mmu.c          |    6 +++---
>>  arch/x86/kvm/mmu_audit.c    |    4 +---
>>  arch/x86/kvm/paging_tmpl.h  |    8 ++------
>>  virt/kvm/iommu.c            |    1 -
>>  virt/kvm/kvm_main.c         |   14 ++++++++------
>>  6 files changed, 14 insertions(+), 20 deletions(-)
>>
>> diff --git a/arch/powerpc/kvm/e500_tlb.c b/arch/powerpc/kvm/e500_tlb.c
>> index c8f6c58..09ce5ac 100644
>> --- a/arch/powerpc/kvm/e500_tlb.c
>> +++ b/arch/powerpc/kvm/e500_tlb.c
>> @@ -524,7 +524,6 @@ static inline void kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
>>  	if (is_error_pfn(pfn)) {
>>  		printk(KERN_ERR "Couldn't get real page for gfn %lx!\n",
>>  				(long)gfn);
>> -		kvm_release_pfn_clean(pfn);
>>  		return;
>>  	}
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 320a781..949a5b8 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -2498,7 +2498,9 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>>  			rmap_recycle(vcpu, sptep, gfn);
>>  		}
>>  	}
>> -	kvm_release_pfn_clean(pfn);
>> +
>> +	if (!is_error_pfn(pfn))
>> +		kvm_release_pfn_clean(pfn);
>>  }
>
> Can it ever be an error pfn? That seems like a problem if so.
>

Yes, it can be the no-slot pfn; in that case we cache the MMIO access into the spte.
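
To illustrate the release pattern the patch introduces, here is a small standalone sketch (not the actual KVM code: pfn_t, KVM_PFN_ERR_FAULT and the two helpers below are simplified stand-ins, and mmu_set_spte_sketch is a hypothetical caller). The point is that after commit a2766325cf9f9 an error pfn is just an encoded error value with no page reference behind it, so it must not be handed to the release helper.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long pfn_t;

/* Illustrative error encoding only; KVM's real encoding differs. */
#define KVM_PFN_ERR_FAULT	((pfn_t)-1)

static bool is_error_pfn(pfn_t pfn)
{
	/* Stand-in: the real check recognizes the encoded error values. */
	return pfn == KVM_PFN_ERR_FAULT;
}

static void kvm_release_pfn_clean(pfn_t pfn)
{
	/* Stand-in: the real helper drops the reference on the backing page. */
	printf("releasing pfn %#lx\n", pfn);
}

static void mmu_set_spte_sketch(pfn_t pfn)
{
	/* ... install the spte; a no-slot/error pfn becomes an MMIO spte ... */

	/* Only real pfns carry a page reference that must be dropped. */
	if (!is_error_pfn(pfn))
		kvm_release_pfn_clean(pfn);
}

int main(void)
{
	mmu_set_spte_sketch(0x1234);		/* real pfn: released */
	mmu_set_spte_sketch(KVM_PFN_ERR_FAULT);	/* error pfn: release skipped */
	return 0;
}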