On 10/13/2014 02:41 PM, Alex Williamson wrote:
> On Mon, 2014-10-13 at 13:50 -0700, Jan Sacha wrote:
>> We have tried two different Linux distributions (CentOS and Fedora)...
> This doesn't really help narrow your kernel version.
The Fedora kernel was 3.11.10-100.fc18 and the CentOS kernel was an older
2.6.32-431.el6.
> 0xe0000 pages? 0xe000 pages isn't 3.75G.
Okay, never mind the 0xe000. I meant 3.75GB of memory.
> Legacy KVM device assignment maps IOMMU pages using the host kernel page
> size for the region while VFIO will pass the largest contiguous range of
> pages available to the IOMMU, regardless of kernel page size. If VFIO
> doesn't have the same problem then perhaps the kernel idea of the page
> size for that region has changed between mappings. Thanks,
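The behavior Alex describes can be illustrated with a small sketch (purely
illustrative Python, with hypothetical names; the real logic lives in the
kernel's vfio_iommu_type1 driver): instead of mapping one IOMMU page per
kernel page, VFIO coalesces runs of physically contiguous page frames and
hands each whole run to the IOMMU, which can then back it with superpages.

```python
# Illustrative sketch only: coalescing contiguous pfns into runs, the way
# VFIO hands large contiguous ranges to the IOMMU. Not a kernel API.

def coalesce(pfns):
    """Group a sorted list of page-frame numbers into (start, npages)
    runs of physically contiguous pages."""
    runs = []
    start = prev = pfns[0]
    for pfn in pfns[1:]:
        if pfn == prev + 1:
            prev = pfn           # extend the current contiguous run
        else:
            runs.append((start, prev - start + 1))
            start = prev = pfn   # begin a new run
    runs.append((start, prev - start + 1))
    return runs

# 512 contiguous 4K pages cover exactly one 2M superpage (512 * 4K = 2M),
# so the first run below could be mapped with a single 2M IOMMU page.
pfns = list(range(0x1000, 0x1000 + 512)) + [0x9999]
print(coalesce(pfns))  # [(4096, 512), (39321, 1)]
```

A per-page mapper would issue 513 separate 4K IOMMU mappings here; the
coalesced version issues two, one of which is superpage-sized.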
When I have a VM running, I see the following on the host:
# cat /proc/1360/numa_maps
2aaaaac00000 prefer:0
file=/dev/hugepages/libvirt/qemu/qemu_back_mem.DO2GNu\040(deleted) huge
dirty=3070 N0=3070
...
# cat /proc/1360/maps
2aaaaac00000-2aac2a800000 rw-s 00000000 00:1e 19928
/dev/hugepages/libvirt/qemu/qemu_back_mem.DO2GNu (deleted)
...
So it looks to me that there should be roughly 6GB mapped using huge
(2M) pages. PID 1360 is the qemu process. The VM is configured to use
6GB. However, the IOTLB seems to be using 4k pages for all memory below
4GB in the guest physical space. It does use 2M pages for memory above
4GB. Does this make sense?
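As a quick cross-check of the /proc output quoted above (a sketch; the
constants are simply the values from this thread), the maps address range
and the numa_maps dirty count agree on roughly 6 GiB of hugepage-backed
memory:

```python
# Cross-check the /proc output quoted above (constants copied from the thread).
HUGE = 2 * 1024 * 1024                       # 2 MiB huge page size

dirty = 3070                                  # numa_maps: dirty=3070 huge pages
start, end = 0x2aaaaac00000, 0x2aac2a800000   # /proc/1360/maps address range

size = end - start
print(size // HUGE, size / 2**30)   # 3070 huge pages, ~5.996 GiB
print(dirty * HUGE == size)         # True: the whole range is dirty
```

Both views give 3070 huge pages, i.e. just under 6 GiB, consistent with the
VM's configured memory size.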
Thanks,
Jan
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html