Re: [PATCH v3 00/22] Improve scalability of KVM + userfaultfd live migration via annotated memory faults.

> On May 3, 2023, at 4:45 PM, Peter Xu <peterx@xxxxxxxxxx> wrote:
> 
> On Wed, May 03, 2023 at 02:42:35PM -0700, Sean Christopherson wrote:
>> On Wed, May 03, 2023, Peter Xu wrote:
>>> Oops, bounced back from the list..
>>> 
>>> Forward with no attachment this time - I assume the information is still
>>> enough in the paragraphs even without the flamegraphs.
>> 
>> The flamegraphs are definitely useful beyond what is captured here.  Not sure
>> how to get them accepted on the list though.
> 
> Trying again with google drive:
> 
> single uffd:
> https://drive.google.com/file/d/1bYVYefIRRkW8oViRbYv_HyX5Zf81p3Jl/view
> 
> 32 uffds:
> https://drive.google.com/file/d/1T19yTEKKhbjU9G2FpANIvArSC61mqqtp/view
> 
>> 
>>>> From what I got there, vmx_vcpu_load() gets more highlights than the
>>>> spinlocks. I think that's the tlb flush broadcast.
>> 
>> No, it's KVM dealing with the vCPU being migrated to a different pCPU.  The
>> smp_call_function_single() that shows up is from loaded_vmcs_clear() and is
>> triggered when KVM needs to VMCLEAR the VMCS on the _previous_ pCPU (yay for the
>> VMCS caches not being coherent).
>> 
>> Task migration can also trigger IBPB (if mitigations are enabled), and also does
>> an "all contexts" INVEPT, i.e. flushes all TLB entries for KVM's MMU.
>> 
>> Can you try 1:1 pinning of vCPUs to pCPUs?  That _should_ eliminate the
>> vmx_vcpu_load_vmcs() hotspot, and for large VMs is likely representative of a real
>> world configuration.
> 
> Yes, it does go away:
> 
> https://drive.google.com/file/d/1ZFhWnWjoU33Lxy43jTYnKFuluo4zZArm/view
> 
> With pinning vcpu threads only (again, over 40 hard cores/threads):
> 
> ./demand_paging_test -b 512M -u MINOR -s shmem -v 32 -c 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32
> 
> It seems to me that, for some reason, the scheduler ate more than I expected.
> Maybe tomorrow I can try two more things:
> 
>  - Do cpu isolations, and
>  - pin reader threads too (or just leave the readers on housekeeping cores)

For the record (and I hope I do not repeat myself): these scheduler overheads
are something that I have encountered before.

The two main solutions I tried were:

1. Optional polling on the faulting thread, to avoid a context switch there.

(something like https://lore.kernel.org/linux-mm/20201129004548.1619714-6-namit@xxxxxxxxxx/ )

and 

2. io_uring, to avoid a context switch on the handler thread.

In addition, as I mentioned before, the queue locks are something that can be
simplified.




