dirty page tracking in kvm/qemu -- page faults inevitable?

Hi,

I've got an issue where we're hitting major performance penalties during live migration. It looks like page faults are triggering hypervisor exits, and the exit path then gets stuck waiting for the iothread lock, which is held by the qemu dirty page scanning code.

Accordingly, I'm trying to figure out the actual mechanism whereby dirty pages are tracked in qemu/kvm. I've got an Ivy Bridge CPU, a 3.4 kernel on the host, and qemu 1.4.

Looking at the qemu code, it seems to be calling down into kvm to get the dirty page information.
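
For reference, the userspace side of that interface looks roughly like the sketch below -- this is not QEMU's actual code, just a minimal illustration of enabling dirty logging on a memory slot with KVM_MEM_LOG_DIRTY_PAGES and then pulling the accumulated bitmap with the KVM_GET_DIRTY_LOG ioctl. The slot number, sizes, and the assumption that error handling can be skipped are mine:

/* Minimal sketch of the KVM dirty-log userspace API (not QEMU's code).
 * Assumes vm_fd is an open VM fd and slot 0 covers mem_size bytes of
 * guest RAM backed by host_addr; error handling is omitted. */
#include <linux/kvm.h>
#include <stdlib.h>
#include <sys/ioctl.h>

static void enable_dirty_logging(int vm_fd, void *host_addr, size_t mem_size)
{
    struct kvm_userspace_memory_region region = {
        .slot = 0,                              /* hypothetical slot number */
        .flags = KVM_MEM_LOG_DIRTY_PAGES,       /* turn on dirty tracking */
        .guest_phys_addr = 0,
        .memory_size = mem_size,
        .userspace_addr = (unsigned long)host_addr,
    };
    ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

static void *fetch_dirty_bitmap(int vm_fd, size_t mem_size)
{
    /* One bit per 4 KiB page; the kernel copies out a bitmap padded to
     * a multiple of 64 bits, so allocate accordingly. */
    size_t nr_pages = mem_size / 4096;
    void *bitmap = calloc((nr_pages + 63) / 64, 8);

    struct kvm_dirty_log log = {
        .slot = 0,
        .dirty_bitmap = bitmap,
    };
    /* Returns the pages dirtied since the last call and clears the log. */
    ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
    return bitmap;
}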

Looking at kvm, most of the code I've read seems to use the usual "write-protect the page, then mark it dirty when the resulting page fault is taken" trick.
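
Just to make sure I understand the trick correctly: it's essentially the same idea as the userspace analogue below, except KVM applies it to the shadow/EPT page tables instead of ordinary process mappings. This is purely an illustration I put together with mprotect()/SIGSEGV, not KVM code:

/* Userspace analogue of write-protect-based dirty tracking (illustration
 * only). Pages start read-only; the first write faults, the handler marks
 * the page dirty and re-enables write access so the write can be retried. */
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 4
static uint8_t *region;
static long page_size;
static int dirty[NPAGES];              /* one flag per tracked page */

static void fault_handler(int sig, siginfo_t *si, void *ctx)
{
    /* Assumes the fault lies inside `region`; a real tracker would check. */
    size_t page = ((uintptr_t)si->si_addr - (uintptr_t)region) / page_size;

    dirty[page] = 1;                   /* record the dirty page ... */
    mprotect(region + page * page_size, page_size,
             PROT_READ | PROT_WRITE);  /* ... and let the write retry */
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);
    region = mmap(NULL, NPAGES * page_size, PROT_READ,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa = { .sa_sigaction = fault_handler,
                            .sa_flags = SA_SIGINFO };
    sigaction(SIGSEGV, &sa, NULL);

    region[0] = 1;                     /* faults once, then succeeds */
    region[2 * page_size] = 1;

    for (int i = 0; i < NPAGES; i++)
        printf("page %d dirty: %d\n", i, dirty[i]);
    return 0;
}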

However, I read that Intel EPT has (or is gaining) hardware support for tracking dirty pages via accessed/dirty bits. It seems like this might avoid the need for a page fault, but might only be available on Haswell or later CPUs--is that correct? Is it supported in kvm? If so, when was support added?

Thanks,
Chris

P.S.  Please CC me on reply, I'm not subscribed to the list.
