Hi Alex,

While trying to get VFIO-PCI working on AArch64 (with 64k page size), I
stumbled over the following piece of code:

> static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> {
>         struct vfio_domain *domain;
>         unsigned long bitmap = PAGE_MASK;
>
>         mutex_lock(&iommu->lock);
>         list_for_each_entry(domain, &iommu->domain_list, next)
>                 bitmap &= domain->domain->ops->pgsize_bitmap;
>         mutex_unlock(&iommu->lock);
>
>         return bitmap;
> }

The SMMU page mask is

  [    3.054302] arm-smmu e0a00000.smmu: Supported page sizes: 0x40201000

but after this function, we end up supporting only 2MB pages and above.

The reason for that is simple: you restrict the bitmap to PAGE_MASK and
above, and with 64k pages PAGE_MASK clears the low 16 bits, so the SMMU's
4K bit gets masked away. Now the big question is why you're doing that. I
don't see why it would be a problem if the IOMMU maps a page in smaller
chunks.

So I tried to patch the code above with s/PAGE_MASK/1UL/ and everything
seems to run fine. But maybe we're now lacking some sanity checks?


Alex
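
P.S. To make the masking effect concrete, here is a tiny standalone
snippet I put together (just an illustration, not kernel code) that
reproduces the bitmap &= step with the SMMU value from the log above and
a 64k PAGE_SIZE:

/*
 * Standalone illustration of why the PAGE_MASK restriction drops the
 * 4K page size when the CPU uses 64k pages. The SMMU value is the one
 * printed in the boot log above.
 */
#include <stdio.h>

int main(void)
{
	unsigned long smmu_pgsizes = 0x40201000UL;       /* 4K | 2M | 1G */
	unsigned long page_size_64k = 0x10000UL;         /* 64k CPU pages */
	unsigned long page_mask_64k = ~(page_size_64k - 1); /* what PAGE_MASK expands to */

	/* The bitmap &= ... step from vfio_pgsize_bitmap() */
	unsigned long bitmap = page_mask_64k & smmu_pgsizes;

	printf("result: 0x%lx\n", bitmap);  /* 0x40200000 -> 2M is the smallest size left */
	return 0;
}

With 4K pages the same AND keeps all three sizes (0x40201000), which is
why this only shows up on 64k kernels.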