Re: [RFC PATCH 12/15] KVM: x86/mmu: Split large pages when dirty logging is enabled

On Thu, Dec 2, 2021 at 5:07 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Thu, Dec 02, 2021, David Matlack wrote:
> > Is there really no risk of long tail latency in kmem_cache_alloc() or
> > __get_free_page()? Even if it's rare, they will be common at scale.
>
> If there is a potentially long latency in __get_free_page(), then we're hosed no
> matter what because per alloc_pages(), it's allowed in any context, including NMI,
> IRQ, and Soft-IRQ.  I've no idea how often those contexts allocate, but I assume
> it's not _that_ rare given the amount of stuff that networking does in Soft-IRQ
> context, e.g. see the stack trace from commit 2620fe268e80, the use of PF_MEMALLOC,
> the use of GFP_ATOMIC in napi_alloc_skb, etc...  And it's not just direct
> allocations, e.g. anything that uses a radix tree or XArray will potentially
> trigger allocation on insertion.
>
> But I would be very, very surprised if alloc_pages() without GFP_DIRECT_RECLAIM
> has a long tail latency, otherwise allocating from any atomic context would be
> doomed.

In that case I agree that, in practice, your approach shouldn't introduce
any more MMU lock contention than the split_caches approach, and it will
require a lot less new code. I'll do some testing to confirm, and assuming
that goes fine I'll go with your approach in v1.
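
For reference, here is roughly the pattern I plan to test, as I understand
your suggestion: attempt a non-reclaiming allocation while holding
mmu_lock, and only drop the lock for a sleepable allocation if that fails.
This is just a sketch; the helper name and the exact GFP flags are my
guesses, not code from this series.

  #include <linux/gfp.h>
  #include <linux/kvm_host.h>

  /*
   * Sketch only: try to allocate without triggering direct reclaim so
   * the allocation can be done under mmu_lock, and fall back to
   * dropping the lock and doing a sleepable allocation on failure.
   */
  static struct page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
                                                 bool *dropped_lock)
  {
          struct page *page;

          *dropped_lock = false;

          /* GFP_NOWAIT excludes __GFP_DIRECT_RECLAIM, so this can't sleep. */
          page = alloc_page(GFP_NOWAIT | __GFP_ACCOUNT | __GFP_ZERO);
          if (page)
                  return page;

          /* Slow path: drop mmu_lock and retry with reclaim allowed. */
          read_unlock(&kvm->mmu_lock);
          *dropped_lock = true;

          page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);

          read_lock(&kvm->mmu_lock);
          return page;
  }

If the allocation has to drop the lock, the caller would need to restart
its walk since the paging structures may have changed in the interim,
similar to how the TDP MMU iterator handles yielding.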

Thanks!


