On Mon, 2011-11-14 at 13:54 -0700, Alex Williamson wrote:
> On Fri, 2011-11-11 at 18:14 -0600, Scott Wood wrote:
> > On 11/03/2011 03:12 PM, Alex Williamson wrote:
> > > +	for (i = 0; i < npage; i++, iova += PAGE_SIZE, vaddr += PAGE_SIZE) {
> > > +		unsigned long pfn = 0;
> > > +
> > > +		ret = vaddr_get_pfn(vaddr, rdwr, &pfn);
> > > +		if (ret) {
> > > +			__vfio_dma_unmap(iommu, start, i, rdwr);
> > > +			return ret;
> > > +		}
> > > +
> > > +		/* Only add actual locked pages to accounting */
> > > +		if (!is_invalid_reserved_pfn(pfn))
> > > +			locked++;
> > > +
> > > +		ret = iommu_map(iommu->domain, iova,
> > > +				(phys_addr_t)pfn << PAGE_SHIFT, 0, prot);
> > > +		if (ret) {
> > > +			/* Back out mappings on error */
> > > +			put_pfn(pfn, rdwr);
> > > +			__vfio_dma_unmap(iommu, start, i, rdwr);
> > > +			return ret;
> > > +		}
> > > +	}
> >
> > There's no way to hand this stuff to the IOMMU driver in chunks larger
> > than a page?  That's going to be a problem for our IOMMU, which wants
> > to deal with large windows.
>
> There is, this is just a simple implementation that maps individual
> pages.  We "just" need to determine physically contiguous chunks and
> mlock them instead of using get_user_pages.  The current implementation
> is much like how KVM maps iommu pages, but there shouldn't be a user API
> change to try to use larger chunks.  We want this for IOMMU large page
> support too.

Also, at one point intel-iommu didn't allow sub-ranges to be unmapped;
an unmap of a single page would unmap the entire original mapping that
contained that page.  That made it easier to map each page individually
for the flexibility it provided on unmap.  I need to see if we still
have that restriction.

Thanks,

Alex

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html