Re: [RFC] KVM MMU: improve large munmap efficiency

On 01/27/2012 01:24 AM, Eric Northup wrote:
> Flush the shadow MMU instead of iterating over each host VA when doing
> a large invalidate range callback.
>
> The previous code is O(N) in the number of virtual pages being
> invalidated, while holding both the MMU spinlock and the mmap_sem.
> Large unmaps can cause significant delay, during which the process is
> unkillable.  Worse, all page allocation could be delayed if there's
> enough memory pressure that mmu_shrink gets called.
>
> Signed-off-by: Eric Northup <digitaleric@xxxxxxxxxx>
>
> ---
>
> We have seen delays of over 30 seconds doing a large (128GB) unmap.
>
> It'd be nicer to check if the amount of work to be done by the entire
> flush is less than the work to be done iterating over each HVA page,
> but that information isn't currently available to the arch-
> independent part of KVM.
>
> Better ideas would be most welcome ;-)
>
>
> Tested by attaching a debugger to a running qemu w/kvm and running
> "call munmap(0, 1UL << 46)".
>

How about computing the intersection of (start, end) with the hva ranges
in kvm->memslots?

If there is no intersection, you exit immediately.

It's still possible that dropping just the intersection is more work
than dropping the entire shadow, but that's unlikely.
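A minimal userspace sketch of the intersection test suggested above. The struct layouts and the helper name are hypothetical, simplified stand-ins for the kernel's kvm_memslots / kvm_memory_slot (only the fields the check needs); the real code would walk kvm->memslots under the proper locking:

```c
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Hypothetical, simplified stand-in for struct kvm_memory_slot:
 * the slot covers HVAs [userspace_addr, userspace_addr + npages * PAGE_SIZE). */
struct kvm_memory_slot {
	unsigned long userspace_addr;	/* start HVA */
	unsigned long npages;		/* size in pages */
};

/* Hypothetical, simplified stand-in for struct kvm_memslots. */
struct kvm_memslots {
	size_t nmemslots;
	struct kvm_memory_slot memslots[8];
};

/* Return true if the invalidated HVA range [start, end) overlaps any
 * memslot; if not, the invalidate callback can return immediately
 * instead of flushing or iterating. */
static bool hva_range_intersects_memslots(const struct kvm_memslots *slots,
					  unsigned long start,
					  unsigned long end)
{
	size_t i;

	for (i = 0; i < slots->nmemslots; i++) {
		const struct kvm_memory_slot *s = &slots->memslots[i];
		unsigned long s_start = s->userspace_addr;
		unsigned long s_end = s_start + s->npages * PAGE_SIZE;

		/* half-open interval overlap test */
		if (start < s_end && s_start < end)
			return true;
	}
	return false;	/* no slot touched: exit without flushing */
}
```

With this check, a huge munmap() that never touched guest memory (like the 1UL << 46 test above) costs one pass over the handful of memslots rather than a per-page walk or a full shadow flush.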

-- 
error compiling committee.c: too many arguments to function

