Hi Vitaly,
On 27.01.2021 18:55, Vitaly Kuznetsov wrote:
"Maciej S. Szmigiero" <maciej.szmigiero@xxxxxxxxxx> writes:
>> On 15.01.2021 14:18, Vitaly Kuznetsov wrote:
>>> TL;DR: any particular reason why KVM_USER_MEM_SLOTS is so low?
>>>
>>> Longer version:
>>>
>>> The current KVM_USER_MEM_SLOTS limit (509) can be a limiting factor
>>> for some configurations. In particular, when QEMU tries to start a
>>> Windows guest with Hyper-V SynIC enabled and e.g. 256 vCPUs, the
>>> limit is hit: SynIC requires two pages per vCPU and the guest is
>>> free to pick any GFN for each of them. This fragments memslots, as
>>> QEMU wants to have a separate memslot for each of these pages
>>> (which are supposed to act as 'overlay' pages).
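
(To spell out the arithmetic: 256 vCPUs x 2 SynIC pages = 512
single-page overlay memslots, which already exceeds 509 before a
single regular RAM memslot is counted.)
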
>>> Memory slots are allocated dynamically in KVM when added, so the
>>> only real limitation is the 'id_to_index' array, whose entries are
>>> 'short'. We don't have any KVM_MEM_SLOTS_NUM/KVM_USER_MEM_SLOTS-sized
>>> statically defined arrays.
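
(For context, the structure in question looks roughly like this in
current kernels -- abbreviated from memory, so treat the exact field
list as approximate:

	struct kvm_memslots {
		u64 generation;
		/* Maps a slot id to its index in memslots[]; the
		 * 'short' element type is what bounds the id space. */
		short id_to_index[KVM_MEM_SLOTS_NUM];
		atomic_t lru_slot;
		int used_slots;
		/* Flexible array, sized by the number of used slots. */
		struct kvm_memory_slot memslots[];
	};

so raising the limit mostly grows the per-VM 'id_to_index' allocation
rather than any global static array.)
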
>>> We could've just raised the limit to e.g. '1021' (we have 3 private
>>> memslots on x86, so the total would be a round 1024) and this should
>>> be enough for now, as KVM_MAX_VCPUS is '288', but AFAIK there are
>>> plans to raise that limit as well.
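
(On x86 that bump would look something like the following -- purely
illustrative:

	/* arch/x86/include/asm/kvm_host.h */
	-#define KVM_USER_MEM_SLOTS 509
	+#define KVM_USER_MEM_SLOTS 1021
	 /* memory slots that are not exposed to userspace */
	 #define KVM_PRIVATE_MEM_SLOTS 3
	 #define KVM_MEM_SLOTS_NUM (KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)

keeping KVM_MEM_SLOTS_NUM at a power of two.)
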
>> I have a patch series that reworks the whole memslot thing, bringing
>> performance improvements across the board.
>>
>> Will post it in a few days, together with a new mini benchmark set.
> I'm about to send a successor of this series. It will be implementing
> Sean's idea to make the maximum number of memslots a per-VM thing (and
> also raise the default). Hope it won't interfere with your work!
Thanks for your series and for CC'ing me on it.

It looks like there should be no design conflicts; I will merely need
to rebase on top of it.
By the way, I had to change the KVM selftest framework's memslot
handling a bit for my stuff, too, since otherwise just adding 32k
memslots for a test would take almost forever.
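
(The slot-creation loop itself is trivial -- roughly the following,
with 'vm' and 'base_gpa' standing in for whatever the test sets up:

	/* Create 32k single-page memslots back to back. */
	for (uint32_t slot = 1; slot <= 32768; slot++)
		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
					    base_gpa + (uint64_t)slot * 4096,
					    slot, 1 /* npages */,
					    0 /* flags */);

it is the framework's per-slot bookkeeping behind each of those calls
that needed speeding up.)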
Thanks,
Maciej