Re: [PATCH RFC 0/4] KVM: x86: Drastically raise KVM_USER_MEM_SLOTS limit

Sean Christopherson <seanjc@xxxxxxxxxx> writes:

> On Fri, Jan 15, 2021, Vitaly Kuznetsov wrote:
>> TL;DR: any particular reason why KVM_USER_MEM_SLOTS is so low?
>
> Because memslots were allocated statically up until fairly recently (v5.7), and
> IIRC consumed ~92kb.  Doubling that for every VM would be quite painful. 
>

I should've added 'now' to the question :-) So the main reason is gone,
thanks for the confirmation!

>> Longer version:
>> 
>> The current KVM_USER_MEM_SLOTS limit (509) can be a limiting factor for some
>> configurations. In particular, the limit is hit when QEMU tries to start a
>> Windows guest with Hyper-V SynIC enabled and e.g. 256 vCPUs: SynIC requires
>> two pages per vCPU, and the guest is free to pick any GFN for each of them.
>> This fragments memslots, as QEMU wants a separate memslot for each of these
>> pages (which are supposed to act as 'overlay' pages).
>
> What exactly does QEMU do on the backend?  I poked around the code a bit, but
> didn't see anything relevant.
>

In QEMU's terms, it registers memory sub-regions for these two pages (see
synic_update() in hw/hyperv/hyperv.c). Memory for these page-sized
sub-regions is allocated separately, so in KVM terms they become page-sized
slots, and the previously contiguous 'system memory' slot breaks up into
several slots.
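
If it helps to picture it at the KVM API level, here is a minimal,
hypothetical sketch (not the actual QEMU code; the helper name, slot numbers
and addresses are made up) of what userspace ends up asking for once the
guest picks a GFN for, say, its SynIC message page:

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>

#define PAGE_SIZE 0x1000ULL

/* Register one memslot; assumes 'vm_fd' is an already-created VM fd. */
static int set_slot(int vm_fd, uint32_t slot, uint64_t gpa, uint64_t size,
		    uint64_t hva)
{
	struct kvm_userspace_memory_region region = {
		.slot            = slot,
		.flags           = 0,
		.guest_phys_addr = gpa,
		.memory_size     = size,
		.userspace_addr  = hva,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

/*
 * Before: a single slot covering [0, ram_size) backed by 'ram_hva'.
 * After the guest picks 'msg_gfn' for its SynIC message page, the overlay
 * page (separately allocated at 'overlay_hva') punches a hole in the RAM
 * mapping, so the same guest range now needs three slots.
 */
static void map_with_overlay(int vm_fd, uint64_t ram_size, uint64_t ram_hva,
			     uint64_t msg_gfn, uint64_t overlay_hva)
{
	uint64_t hole = msg_gfn * PAGE_SIZE;

	/* RAM below the overlay page. */
	set_slot(vm_fd, 0, 0, hole, ram_hva);
	/* The page-sized overlay slot itself, with its own backing memory. */
	set_slot(vm_fd, 1, hole, PAGE_SIZE, overlay_hva);
	/* RAM above the overlay page. */
	set_slot(vm_fd, 2, hole + PAGE_SIZE, ram_size - hole - PAGE_SIZE,
		 ram_hva + hole + PAGE_SIZE);
}

With two such pages per vCPU, a 256-vCPU guest can need on the order of 512
extra page-sized slots plus the RAM fragments they leave behind, which is how
the current 509 limit gets exhausted.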

-- 
Vitaly



