Re: [PATCH RFC 0/4] KVM: x86: Drastically raise KVM_USER_MEM_SLOTS limit

On 15.01.2021 14:18, Vitaly Kuznetsov wrote:
TL;DR: any particular reason why KVM_USER_MEM_SLOTS is so low?

Longer version:

The current KVM_USER_MEM_SLOTS limit (509) can be a limiting factor for some
configurations. In particular, the limit is hit when QEMU tries to start a
Windows guest with Hyper-V SynIC enabled and e.g. 256 vCPUs: SynIC requires
two pages per vCPU and the guest is free to pick any GFN for each of them, so
QEMU wants a separate memslot for each of these pages (which are supposed to
act as 'overlay' pages), and this fragments the memslot space.
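
For illustration, the arithmetic implied above: 256 vCPUs * 2 SynIC pages =
512 single-page overlay memslots, which already exceeds the 509 limit before
any regular RAM, ROM or device memslots are counted.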

Memory slots are allocated dynamically in KVM when added, so the only real
limitation is the 'id_to_index' array, which is 'short'. We don't have any
other KVM_MEM_SLOTS_NUM/KVM_USER_MEM_SLOTS-sized statically defined arrays.
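
Roughly the relevant structure (a simplified sketch of struct kvm_memslots
from include/linux/kvm_host.h around this time; field order may not be exact):

	struct kvm_memslots {
		u64 generation;
		/* Maps a slot id to its index in memslots[]; 'short' is what caps the count. */
		short id_to_index[KVM_MEM_SLOTS_NUM];
		atomic_t lru_slot;
		int used_slots;
		/* Dynamically sized: only 'used_slots' entries are actually allocated. */
		struct kvm_memory_slot memslots[];
	};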

We could've just raised the limit to e.g. '1021' (we have 3 private memslots
on x86), and that should be enough for now as KVM_MAX_VCPUS is '288', but
AFAIK there are plans to raise KVM_MAX_VCPUS as well.
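
A minimal sketch of what that bump could look like in
arch/x86/include/asm/kvm_host.h (assuming the current define layout; this is
just the alternative mentioned above, not what the series does):

	#define KVM_USER_MEM_SLOTS	1021
	#define KVM_PRIVATE_MEM_SLOTS	3
	/* 1021 user + 3 private = 1024 slot ids total, still far below SHRT_MAX */
	#define KVM_MEM_SLOTS_NUM	(KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)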


I have a patch series that reworks the whole memslot thing, bringing
performance improvements across the board.
Will post it in a few days, together with a new mini benchmark set.

Maciej


