Re: [RFC] KVM MMU: improve large munmap efficiency

Hi,

(2012/01/27 8:24), Eric Northup wrote:
> Flush the shadow MMU instead of iterating over each host VA when doing
> a large invalidate-range callback.
>
> The previous code is O(N) in the number of virtual pages being
> invalidated, while holding both the MMU spinlock and the mmap_sem.
> Large unmaps can cause significant delay, during which the process is
> unkillable.  Worse, all page allocation could be delayed if there is
> enough memory pressure that mmu_shrink gets called.

> Signed-off-by: Eric Northup <digitaleric@xxxxxxxxxx>

> ---

> We have seen delays of over 30 seconds doing a large (128GB) unmap.

> It'd be nicer to check whether the amount of work to be done by the
> entire flush is less than the work to be done iterating over each HVA
> page, but that information isn't currently available to the
> arch-independent part of KVM.

Using the number of (active) shadow pages may be one way.

See kvm->arch.n_used_mmu_pages.



> Better ideas would be most welcome ;-)


I will soon, this weekend if possible, send a patch series that may
speed up the kvm_unmap_hva() loop.

Although my work was aimed at optimizing a different thing, dirty
logging, I think this loop will also benefit.

	I have confirmed that dirty logging improved significantly,
	so I hope that your case will as well.

So, in addition to your patch, please check, if possible, to what
extent my patch series helps your case.

Thanks,
	Takuya
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

