Re: dirty page tracking in kvm/qemu -- page faults inevitable?

On 07/30/2014 06:12 AM, Chris Friesen wrote:
> Hi,
> 
> I've got an issue where we're hitting major performance penalties while doing live migration, and it seems like it might
> be due to page faults triggering hypervisor exits, and then we get stuck waiting for the iothread lock which is held by
> the qemu dirty page scanning code.

I am afraid that using the dirty bit instead of write protection may make the
iothread-lock situation even worse, because we would then need to walk all of
the sptes to find the dirty pages, whereas currently we only need to walk the
pages set in the bitmap.
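
To make the current scheme concrete, here is a minimal userspace-side sketch
(not QEMU's actual migration code; the wrapper name and error handling are
mine) of fetching the per-slot dirty bitmap with KVM_GET_DIRTY_LOG. With the
write-protection approach, KVM only has to revisit the sptes of pages that
actually faulted and were recorded in this bitmap:

/*
 * Sketch: retrieve the dirty bitmap for one memory slot.
 * The caller supplies a bitmap large enough for the slot
 * (one bit per guest page).
 */
#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int get_dirty_bitmap(int vm_fd, unsigned int slot,
                            unsigned long *bitmap)
{
    struct kvm_dirty_log log;

    memset(&log, 0, sizeof(log));
    log.slot = slot;
    log.dirty_bitmap = bitmap;

    /*
     * KVM returns the pages dirtied since the previous call and
     * write-protects the slot again for the next round.
     */
    return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}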

> 
> Accordingly, I'm trying to figure out the actual mechanism whereby dirty pages are tracked in qemu/kvm.  I've got an Ivy Bridge CPU, a 3.4 kernel on the host, and qemu 1.4.
> 
> Looking at the qemu code, it seems to be calling down into kvm to get the dirty page information.
> 
> Looking at kvm, most of what I read seems to be doing the usual "mark it read-only and then when we take a page fault mark it as dirty" trick.
> 
> However, I read something about Intel EPT having hardware support for tracking dirty pages.  It seems like this might avoid the need for a page fault, but might only be available on Haswell or later CPUs--is that correct?  Is it supported in kvm?  If so, when was support added?

Actually, I implemented a prototype a long time ago; maybe it is time to
benchmark it and post it.
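
The sketch below is not that prototype, just a rough self-contained
illustration (flat spte array, hypothetical function name) of why a
dirty-bit based harvest has to visit every spte of the slot and
test-and-clear the hardware dirty flag (bit 9 of an EPT PTE when EPT
A/D bits are enabled, per the Intel SDM), rather than only the pages
already recorded in the write-protection bitmap:

#include <stdint.h>
#include <stddef.h>

/* Hardware dirty flag in an EPT PTE when A/D bits are enabled. */
#define EPT_DIRTY_BIT   (UINT64_C(1) << 9)

/*
 * Hypothetical harvest over a flat array of leaf sptes belonging to
 * one memslot: every entry is visited and test-and-cleared.
 */
static void harvest_dirty_bits(uint64_t *sptes, size_t nr_sptes,
                               unsigned long *dirty_bitmap)
{
    for (size_t i = 0; i < nr_sptes; i++) {
        if (sptes[i] & EPT_DIRTY_BIT) {
            sptes[i] &= ~EPT_DIRTY_BIT;   /* rearm for the next round */
            dirty_bitmap[i / (8 * sizeof(unsigned long))] |=
                1UL << (i % (8 * sizeof(unsigned long)));
        }
    }
    /*
     * In real code the EPT TLBs would also have to be flushed so that
     * cached translations cannot keep setting bits behind our back.
     */
}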

