Re: [PATCH v3 00/22] Improve scalability of KVM + userfaultfd live migration via annotated memory faults.

On Wed, May 03, 2023 at 07:45:28PM -0400, Peter Xu wrote:
> On Wed, May 03, 2023 at 02:42:35PM -0700, Sean Christopherson wrote:
> > On Wed, May 03, 2023, Peter Xu wrote:
> > > Oops, bounced back from the list..
> > > 
> > > Forwarding with no attachment this time - I assume the paragraphs still
> > > carry enough information even without the flamegraphs.
> > 
> > The flamegraphs are definitely useful beyond what is captured here.  Not sure
> > how to get them accepted on the list though.
> 
> Trying again with google drive:
> 
> single uffd:
> https://drive.google.com/file/d/1bYVYefIRRkW8oViRbYv_HyX5Zf81p3Jl/view
> 
> 32 uffds:
> https://drive.google.com/file/d/1T19yTEKKhbjU9G2FpANIvArSC61mqqtp/view
> 
> > 
> > > > From what I got there, vmx_vcpu_load() is highlighted more than the
> > > > spinlocks.  I think that's the TLB flush broadcast.
> > 
> > No, it's KVM dealing with the vCPU being migrated to a different pCPU.  The
> > smp_call_function_single() that shows up is from loaded_vmcs_clear() and is
> > triggered when KVM needs to VMCLEAR the VMCS on the _previous_ pCPU (yay for the
> > VMCS caches not being coherent).
> > 
> > Task migration can also trigger IBPB (if mitigations are enabled), and also does
> > an "all contexts" INVEPT, i.e. flushes all TLB entries for KVM's MMU.
> > 
> > Can you try 1:1 pinning of vCPUs to pCPUs?  That _should_ eliminate the
> > vmx_vcpu_load_vmcs() hotspot, and for large VMs is likely representative of a
> > real-world configuration.
> 
> Yes, it does go away:
> 
> https://drive.google.com/file/d/1ZFhWnWjoU33Lxy43jTYnKFuluo4zZArm/view
> 
> With pinning vcpu threads only (again, spread over 40 hardware cores/threads):
> 
> ./demand_paging_test -b 512M -u MINOR -s shmem -v 32 -c 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32
> 
> It seems to me that for some reason the scheduler ate more than I expected.
> Maybe tomorrow I can try two more things:
> 
>   - Do cpu isolations, and
>   - pin reader threads too (or just leave the readers on housekeeping cores)

I gave it a shot by isolating 32 cores and splitting them into two groups:
16 for the uffd reader threads and 16 for the vcpu threads.  I got similar
results and don't see much change.
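
For reference, pinning a reader thread by hand boils down to something like
the sketch below (illustrative only; pin_self_to_cpu() is a made-up helper,
and the selftest's -c option already takes care of the vcpu threads):

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>

  /*
   * Illustrative sketch: pin the calling thread (e.g. one uffd reader) to a
   * single pCPU so the scheduler cannot migrate it off that core.
   */
  static int pin_self_to_cpu(int cpu)
  {
          cpu_set_t set;

          CPU_ZERO(&set);
          CPU_SET(cpu, &set);

          return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
  }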

I think it's possible we're just reaching the limit of my host, since it
only has 40 cores anyway.  Throughput never exceeds 350K faults/sec overall.

I assume this might not be the case for Anish if he has a much larger host,
so a similar test could be carried out there to see how it goes.  The idea
is to make sure the vcpu load overhead during sched-in is ruled out, then
see whether throughput keeps scaling with more cores.

-- 
Peter Xu



