On Thu, Jan 28, 2010 at 08:24:55PM -0200, Marcelo Tosatti wrote:
> On Thu, Jan 28, 2010 at 12:37:57PM +0100, Joerg Roedel wrote:
> > +static pfn_t kvm_pin_pages(struct kvm *kvm, struct kvm_memory_slot *slot,
> > +			    gfn_t gfn, unsigned long size)
> > +{
> > +	gfn_t end_gfn;
> > +	pfn_t pfn;
> > +
> > +	pfn = gfn_to_pfn_memslot(kvm, slot, gfn);
>
> If gfn_to_pfn_memslot returns pfn of bad_page, you might create a
> large iommu translation for it?

Right. But that was broken even before this patch. Anyway, I will fix
it.

> > +		/* Map into IO address space */
> > +		r = iommu_map(domain, gfn_to_gpa(gfn), pfn_to_hpa(pfn),
> > +			      get_order(page_size), flags);
> > +
> > +		gfn += page_size >> PAGE_SHIFT;
>
> Should increase gfn after checking for failure, otherwise wrong
> npages is passed to kvm_iommu_put_pages.

True. Will fix that too.

Thanks,

	Joerg
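
(For reference, a rough sketch of how both fixes could fit into the
mapping loop. This is not the actual follow-up patch: the hugepage
size calculation and the iommu domain lookup from the original patch
are left out, kvm_iommu_put_pages is the existing unmap helper
mentioned above, and skipping error pfns with 'continue' is only one
possible way to handle the bad_page case.)

/*
 * Sketch only, not the final patch: a simplified version of the
 * mapping loop with both fixes applied.  Hugepage handling is left
 * out (page_size is fixed to PAGE_SIZE) and the iommu domain is
 * passed in instead of being looked up.
 */
static int kvm_iommu_map_pages_sketch(struct kvm *kvm,
				      struct kvm_memory_slot *slot,
				      struct iommu_domain *domain)
{
	gfn_t gfn     = slot->base_gfn;
	gfn_t end_gfn = gfn + slot->npages;
	int flags     = IOMMU_READ | IOMMU_WRITE;
	int r;

	while (gfn < end_gfn) {
		unsigned long page_size = PAGE_SIZE;
		pfn_t pfn;

		pfn = gfn_to_pfn_memslot(kvm, slot, gfn);

		/* Fix 1: never build an IOMMU translation for bad_page */
		if (is_error_pfn(pfn)) {
			gfn += 1;
			continue;
		}

		/* Map into IO address space */
		r = iommu_map(domain, gfn_to_gpa(gfn), pfn_to_hpa(pfn),
			      get_order(page_size), flags);
		if (r) {
			/*
			 * Fix 2: gfn has not been advanced yet, so the
			 * npages passed to kvm_iommu_put_pages matches
			 * what was actually mapped so far.
			 */
			kvm_iommu_put_pages(kvm, slot->base_gfn,
					    gfn - slot->base_gfn);
			return r;
		}

		/* Advance only after the mapping succeeded */
		gfn += page_size >> PAGE_SHIFT;
	}

	return 0;
}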