Re: [PATCH RFC 0/4] KVM: x86: Drastically raise KVM_USER_MEM_SLOTS limit

On Fri, Jan 15, 2021, Vitaly Kuznetsov wrote:
> Sean Christopherson <seanjc@xxxxxxxxxx> writes:
> 
> > On Fri, Jan 15, 2021, Vitaly Kuznetsov wrote:
> >> Longer version:
> >> 
> >> The current KVM_USER_MEM_SLOTS limit (509) can be a limiting factor for
> >> some configurations. In particular, when QEMU tries to start a Windows
> >> guest with Hyper-V SynIC enabled and e.g. 256 vCPUs, the limit is hit:
> >> SynIC requires two pages per vCPU (512 pages for 256 vCPUs), and the
> >> guest is free to pick any GFN for each of them. This fragments memslots,
> >> as QEMU wants to have a separate memslot for each of these pages (which
> >> are supposed to act as 'overlay' pages).
> >
> > What exactly does QEMU do on the backend?  I poked around the code a bit, but
> > didn't see anything relevant.
> >
> 
> In QEMU's terms, it registers memory sub-regions for these two pages (see
> synic_update() in hw/hyperv/hyperv.c). Memory for these page-sized
> sub-regions is allocated separately, so in KVM terms they become
> page-sized slots, and the previously contiguous 'system memory' slot
> breaks up into several slots.

Doh, I had a super stale version checked out (2.9.50); no wonder I couldn't find
anything.
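
For reference, the pattern being described is roughly the sketch below.  This
is a simplified illustration using QEMU's generic MemoryRegion API; the
SynICState fields and the helper name are made up for illustration, only
get_system_memory(), memory_region_del_subregion() and
memory_region_add_subregion_overlap() are real QEMU calls, and the actual
synic_update() may well differ in detail.

/*
 * Sketch: map a per-vCPU SynIC overlay page at a guest-chosen GPA.  Because
 * the page's backing memory is allocated separately from the main RAM block,
 * KVM ends up with a new 4K memslot and the formerly contiguous "system
 * memory" slot gets split around it.
 *
 * msg_page_mr is assumed to have been initialized at device-init time with
 * memory_region_init_ram(..., TARGET_PAGE_SIZE, ...).
 */
#include "qemu/osdep.h"
#include "exec/address-spaces.h"
#include "exec/memory.h"

typedef struct SynICState {
    MemoryRegion msg_page_mr;   /* page-sized RAM region (message page) */
    hwaddr       msg_page_addr; /* GPA the guest picked via HV_X64_MSR_SIMP */
} SynICState;

static void synic_map_msg_page(SynICState *synic, hwaddr new_addr)
{
    MemoryRegion *sysmem = get_system_memory();

    /* Unmap the old overlay page, if any. */
    if (synic->msg_page_addr) {
        memory_region_del_subregion(sysmem, &synic->msg_page_mr);
    }

    /*
     * Map the page-sized region on top of guest RAM at the guest-chosen GPA;
     * a non-zero priority makes it win over the underlying RAM region.
     */
    if (new_addr) {
        memory_region_add_subregion_overlap(sysmem, new_addr,
                                            &synic->msg_page_mr, 1);
    }
    synic->msg_page_addr = new_addr;
}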

Isn't the memslot approach inherently flawed in that the SynIC is per-vCPU, but
memslots are per-VM?  E.g. if vCPU1 accesses vCPU0's SynIC GPA, I would expect
that to access real memory, not the overlay.  Or is there more QEMU magic going
on that I'm missing?


