On Mon, 2014-10-13 at 13:50 -0700, Jan Sacha wrote:
> Hi,
>
> I have a question about IOTLB. We are running KVM/Qemu VMs with huge
> page memory backing on Intel Xeon Ivy Bridge machines. Our VMs use 10G
> Ether NICs in Intel VT-d mode. We actually see that IOTLB becomes a
> performance bottleneck when IOMMU uses 4k pages. We get much better
> packet throughput with 2M IOTLB pages.
>
> We have tried two different Linux distributions (CentOS and Fedora). An

This doesn't really help narrow down your kernel version.

> older CentOS kernel maps everything using 4k IOTLB pages. Our newer
> Fedora kernel initially maps guest memory using 2M IOTLB pages, but we
> see that a couple of seconds later it remaps the first 0xE000 pages
> (3.75GB) of memory using 4k IOTLB pages. We still have 2M IOTLB page
> mappings for memory above 4GB.

0xe0000 pages? 0xe000 pages is only 224MB, not 3.75G.

> Why would the kernel change the IOTLB page size from 2M to 4k? How can
> we make sure that all memory (except for some non-aligned bits) gets
> mapped using 2M IOTLB pages? As I mentioned, we are using huge-page
> memory backing for all our VMs. Any advice, also for debugging and
> further diagnosis, would be appreciated.

Legacy KVM device assignment maps IOMMU pages using the host kernel
page size for the region, while VFIO will pass the largest contiguous
range of pages available to the IOMMU, regardless of kernel page size
(rough sketch of that idea below). If VFIO doesn't have the same
problem, then perhaps the kernel's idea of the page size for that
region has changed between mappings. Thanks,

Alex
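
P.S. Purely to illustrate the coalescing idea above, here's a simplified
userspace sketch. This is not the actual vfio_iommu_type1 code; the pfn[]
contents and the fake_iommu_map() helper are invented for the example.

/*
 * Sketch: instead of mapping one 4k page at a time, find the largest
 * run of physically contiguous pages and hand the whole run to the
 * IOMMU in a single map call, so the IOMMU driver is free to use 2M
 * superpages where the run is large enough and properly aligned.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT   12
#define PAGE_SIZE    (1UL << PAGE_SHIFT)
#define PMD_SIZE     (1UL << 21)        /* 2M superpage */

/* Stand-in for the real map call: just report what would be mapped. */
static void fake_iommu_map(uint64_t iova, uint64_t paddr, size_t size)
{
        const char *pgsz = (size >= PMD_SIZE &&
                            !(iova & (PMD_SIZE - 1)) &&
                            !(paddr & (PMD_SIZE - 1))) ? "2M-capable"
                                                       : "4k only";
        printf("map iova 0x%llx -> pa 0x%llx, size 0x%zx (%s)\n",
               (unsigned long long)iova, (unsigned long long)paddr,
               size, pgsz);
}

int main(void)
{
        enum { NPAGES = 1032 };
        uint64_t pfn[NPAGES];
        uint64_t iova = 0;
        size_t i;

        /* First 1024 pages: one physically contiguous, 2M-aligned
         * chunk, as you'd expect with hugepage backing. */
        for (i = 0; i < 1024; i++)
                pfn[i] = 0x200000 + i;

        /* Last 8 pages: scattered 4k pages, no two contiguous. */
        for (i = 1024; i < NPAGES; i++)
                pfn[i] = 0x800000 + 2 * (i - 1024);

        i = 0;
        while (i < NPAGES) {
                size_t run = 1;

                /* Extend the run while the next page is contiguous. */
                while (i + run < NPAGES && pfn[i + run] == pfn[i] + run)
                        run++;

                /* One map call covers the whole contiguous run; legacy
                 * KVM device assignment would instead map page by page
                 * at the host kernel page size. */
                fake_iommu_map(iova, pfn[i] << PAGE_SHIFT,
                               run * PAGE_SIZE);

                iova += run * PAGE_SIZE;
                i += run;
        }

        return 0;
}

Roughly speaking, in the real code the map call is iommu_map() and the
IOMMU driver decides whether a given run can be backed by superpages
based on its size and alignment.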