What will happen to hugetlbfs backed guest memory when nx_huge_pages is enabled?

Recently I noticed a significant performance issue with some of our KVM guests.
It looked like the ITLB_MULTIHIT mitigation patch had been backported to the
Ubuntu kernel on which they were running, and KVM was doing the mitigation
work on the guest memory backed by hugetlbfs 2MB pages.
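
For anyone who wants to reproduce the check, the mitigation state can be read
from sysfs; below is a minimal sketch in Python, assuming the standard paths on
kernels that carry the mitigation (they may not exist on older kernels).

    # Minimal sketch: report the ITLB_MULTIHIT mitigation state and the kvm
    # nx_huge_pages module parameter. Paths are the usual sysfs locations on
    # kernels that carry the mitigation; they may be absent on older kernels.

    def read_first_line(path):
        try:
            with open(path) as f:
                return f.readline().strip()
        except OSError:
            return "<not available>"

    print("itlb_multihit:    ",
          read_first_line("/sys/devices/system/cpu/vulnerabilities/itlb_multihit"))
    print("kvm.nx_huge_pages:",
          read_first_line("/sys/module/kvm/parameters/nx_huge_pages"))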

    perf showed me that KVM was busy in tdp_page_fault(), __direct_map(), and
    kvm_mmu_get_page(). Normally, these are only called for a short period
    right after the VM starts, because the guest soon touches the whole memory.

The patch explains that when a guest attempts to execute from an NX-marked huge
page, KVM breaks it down into 4KB pages. I understand how this works for
THP-backed guest memory, but what will happen to hugetlbfs-backed guest memory?
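
If I read the patch correctly, it also adds an nx_lpage_splits counter, so one
way to see whether splitting actually happens would be to sample it from
debugfs. A sketch, assuming the counter is exported at the path below (debugfs
mounted, run as root):

    # Sketch: sample the global nx_lpage_splits counter a few times to see
    # whether KVM is splitting NX-marked huge pages. Assumes the stat is
    # exported at the path below, which is what the upstream mitigation
    # patch appears to add; debugfs must be mounted and root is needed.
    import time

    STAT = "/sys/kernel/debug/kvm/nx_lpage_splits"

    def read_stat():
        with open(STAT) as f:
            return int(f.read().strip())

    prev = read_stat()
    for _ in range(12):            # sample for about one minute
        time.sleep(5)
        cur = read_stat()
        print(f"nx_lpage_splits: {cur} (+{cur - prev} in the last 5s)")
        prev = cur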

When a large amount of system memory is reserved as the hugetlbfs pool and QEMU
is told to use pages from it via the -mem-path option, is it safe to enable the
nx_huge_pages mitigation?
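
For completeness, the setup in question looks roughly like the sketch below;
the mount point, pool size, and guest memory size are made-up example values,
and the QEMU command is only printed, not run.

    # Illustrative sketch of the hugetlbfs-backed setup in question: check
    # the huge page pool and show the QEMU options (-mem-path/-mem-prealloc)
    # used to back guest memory with it. Mount point and sizes are examples.

    HUGETLBFS_MOUNT = "/dev/hugepages"   # example mount point of the pool

    def hugepage_pool():
        """Return (total, free) huge pages from /proc/meminfo."""
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, _, rest = line.partition(":")
                info[key.strip()] = rest.split()[0]
        return int(info["HugePages_Total"]), int(info["HugePages_Free"])

    total, free = hugepage_pool()
    print(f"hugetlbfs pool: {total} pages total, {free} free")

    # Example invocation: guest memory comes from the hugetlbfs pool.
    qemu_cmd = [
        "qemu-system-x86_64", "-enable-kvm",
        "-m", "8192",
        "-mem-path", HUGETLBFS_MOUNT,
        "-mem-prealloc",
    ]
    print("example qemu command:", " ".join(qemu_cmd))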

We can turn it off for now because the KVM guests do not come from outside, and
we only run our own applications on them.


  UBUNTU: SAUCE: kvm: mmu: ITLB_MULTIHIT mitigation
  https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/xenial/commit/arch/x86/kvm?id=c6c9a37b564b8b4f7aad099388c55978ef456bb5

  kvm: mmu: ITLB_MULTIHIT mitigation
  https://github.com/torvalds/linux/commit/b8e8c8303ff28c61046a4d0f6ea99aea609a7dc0

  Takuya


