Re: [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing

On 28.01.2021 09:45, Vitaly Kuznetsov wrote:
"Maciej S. Szmigiero" <maciej.szmigiero@xxxxxxxxxx> writes:

>> On 27.01.2021 18:57, Vitaly Kuznetsov wrote:
>>> Limiting the maximum number of user memslots globally can be undesirable as
>>> different VMs may have different needs. Generally, a relatively small
>>> number should suffice and a VMM may want to enforce the limitation so a VM
>>> won't accidentally eat too much memory. On the other hand, the number of
>>> required memslots can depend on the number of assigned vCPUs, e.g. each
>>> Hyper-V SynIC may require up to two additional slots per vCPU.
>>>
>>> Prepare to limit the maximum number of user memslots per-VM. No real
>>> functional change in this patch as the limit is still hard-coded to
>>> KVM_USER_MEM_SLOTS.
>>>
>>> Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
>>> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
>>> ---

>> Perhaps I didn't understand the idea clearly, but I thought it was
>> to protect the kernel from a rogue userspace VMM allocating many
>> memslots and so consuming a lot of memory in the kernel?
>>
>> But then what's the difference between allocating 32k memslots for
>> one VM and allocating 509 slots for each of 64 VMs?


> It was Sean's idea :-) Initially, I had the exact same thoughts but now
> I agree with:
>
> "I see it as an easy way to mitigate the damage.  E.g. if a containers use case
> is spinning up hundreds of VMs and something goes awry in the config, it would
> be the difference between consuming tens of MBs and hundreds of MBs.  Cgroup
> limits should also be in play, but defense in depth and all that."
>
> https://lore.kernel.org/kvm/YAcU6swvNkpPffE7@xxxxxxxxxx/
>
> That said, it is not really a security feature; the VMM still stays in
> control.
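
Right, and the numbers do seem to work out. As a rough back-of-envelope
(assuming each preallocated slot costs on the order of a hundred bytes,
which I haven't verified against struct kvm_memory_slot): 32k slots is a
few MBs per VM, so hundreds of misconfigured VMs do land in the
hundreds-of-MBs range, while ~500 slots per VM keeps the total in the
tens of MBs.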

>> A guest can't add a memslot on its own, only the host software
>> (like QEMU) can, right?


> VMMs (especially big ones like QEMU) are complex, and e.g. each driver
> can cause memory regions (-> memslots in KVM) to change. With this
> feature it becomes possible to set a limit upfront (based on the VM
> configuration) so it'll be more obvious when it's hit.


I see: it's a kind of "big switch", so that not every VMM has to be
modified or audited.
Thanks for the explanation.
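
To illustrate what such a VMM-side policy could look like, here is a
minimal userspace sketch that derives a per-VM slot budget from the vCPU
count (using the "up to two extra slots per vCPU for SynIC" rule from
the commit message) and compares it against what the kernel advertises
via KVM_CHECK_EXTENSION(KVM_CAP_NR_MEMSLOTS). The base-slot count is a
made-up placeholder, and this patch itself adds no uAPI to change the
limit, so the enforcement here is purely on the VMM side:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* Limit advertised by the kernel (currently KVM_USER_MEM_SLOTS). */
	int max = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_NR_MEMSLOTS);

	int nr_vcpus = 64;    /* taken from the VM configuration */
	int base_slots = 16;  /* placeholder: RAM, ROM, device regions */
	int budget = base_slots + 2 * nr_vcpus; /* up to 2 SynIC slots/vCPU */

	printf("kernel max: %d, per-VM budget: %d\n", max, budget);
	/* The VMM would then refuse to register more than 'budget' slots. */
	return 0;
}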

Maciej


