Re: IOTLB page size question


On 10/13/2014 04:16 PM, Jan Sacha wrote:
> Legacy KVM device assignment maps IOMMU pages using the host kernel page
> size for the region, while VFIO will pass the largest contiguous range of
> pages available to the IOMMU, regardless of kernel page size. If VFIO
> doesn't have the same problem, then perhaps the kernel's idea of the page
> size for that region has changed between mappings.  Thanks,
When I have a VM running, I can see on the host OS:

# cat /proc/1360/numa_maps
2aaaaac00000 prefer:0 file=/dev/hugepages/libvirt/qemu/qemu_back_mem.DO2GNu\040(deleted) huge dirty=3070 N0=3070
...

# cat /proc/1360/maps
2aaaaac00000-2aac2a800000 rw-s 00000000 00:1e 19928 /dev/hugepages/libvirt/qemu/qemu_back_mem.DO2GNu (deleted)
...

So it looks like roughly 6GB should be mapped using huge (2M) pages. PID 1360 is the qemu process, and the VM is configured with 6GB of memory. However, the IOTLB seems to be using 2k pages for all memory below 4GB in guest physical space, and 2M pages only for memory above 4GB. Does this make sense?
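For what it's worth, the two /proc views above agree with each other; a quick check (plain Python, with the start/end addresses copied from the maps output):

```python
# Size of the hugetlbfs VMA from /proc/1360/maps: 2aaaaac00000-2aac2a800000
start, end = 0x2AAAAAC00000, 0x2AAC2A800000
size = end - start
print(size / 2**30)         # ≈ 6 GiB total
print(size // (2 * 2**20))  # → 3070 two-MiB pages, matching dirty=3070 in numa_maps
```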

We found a solution, so I can answer my own question: this behavior was caused by a kernel bug, fixed in 3.13.

https://github.com/torvalds/linux/commit/e0230e1327fb862c9b6cde24ae62d55f9db62c9b

Jan


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



