IOTLB page size question

Hi,

I have a question about the IOTLB. We are running KVM/QEMU VMs with huge-page memory backing on Intel Xeon Ivy Bridge machines. Our VMs use 10G Ethernet NICs passed through with Intel VT-d. We see that the IOTLB becomes a performance bottleneck when the IOMMU uses 4k pages; we get much better packet throughput with 2M IOTLB pages.
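(For reference: whether the IOMMU hardware can use 2M/1G second-level pages at all is advertised in the SLLPS field, bits 37:34, of the VT-d capability register. Below is a minimal sketch to decode it; the sysfs path in the comment may vary by kernel version, the `decode_sllps` name is mine, and the register value in the example is made up for illustration.)

```shell
#!/bin/sh
# Decode the SLLPS (Second Level Large Page Support) field, bits 37:34,
# of the Intel VT-d capability register (CAP_REG).
#   bit 34 set => 2MB second-level pages supported
#   bit 35 set => 1GB second-level pages supported
# On newer kernels the register can be read from sysfs, e.g.
#   /sys/class/iommu/dmar0/intel-iommu/cap   (path may differ on your kernel)

decode_sllps() {
    cap=$(( $1 ))                    # numeric value of the capability register
    sllps=$(( (cap >> 34) & 0xf ))   # extract bits 37:34
    out=""
    [ $(( sllps & 1 )) -ne 0 ] && out="$out 2M"
    [ $(( sllps & 2 )) -ne 0 ] && out="$out 1G"
    [ -z "$out" ] && out=" none"
    echo "supported large IOTLB pages:$out"
}

# Example with a made-up capability value that has bits 34 and 35 set:
decode_sllps 0xC00000000    # -> supported large IOTLB pages: 2M 1G
```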

We have tried two different Linux distributions (CentOS and Fedora). An older CentOS kernel maps everything using 4k IOTLB pages. Our newer Fedora kernel initially maps guest memory using 2M IOTLB pages, but we see that a couple of seconds later it remaps the first 0xE000 pages (3.75GB) of memory using 4k IOTLB pages. We still have 2M IOTLB page mappings for memory above 4GB.

Why would the kernel change the IOTLB page size from 2M to 4k? How can we make sure that all memory (except for some non-aligned bits) gets mapped using 2M IOTLB pages? As I mentioned, we are using huge-page memory backing for all our VMs. Any advice, including pointers for debugging and further diagnosis, would be appreciated.
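(As a first debugging step, it may help to confirm that the guest RAM really is hugetlbfs-backed on the host side: such mappings report "KernelPageSize: 2048 kB" in /proc/&lt;pid&gt;/smaps. A sketch, where the `count_2m_vmas` helper is mine and the QEMU binary name in the usage comment is an assumption that varies by distribution:)

```shell
#!/bin/sh
# Count how many VMAs in an smaps file are backed by 2M kernel pages.
# hugetlbfs-backed guest RAM shows "KernelPageSize: 2048 kB" there,
# while ordinary anonymous memory shows "KernelPageSize: 4 kB".

count_2m_vmas() {    # $1 = path to an smaps file
    awk '$1 == "KernelPageSize:" && $2 == 2048 { n++ } END { print n + 0 }' "$1"
}

# Typical use (adjust the binary name to your distro, e.g. qemu-kvm):
#   count_2m_vmas "/proc/$(pidof qemu-system-x86_64)/smaps"
```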

Jan
--
