kvm scaling question

I am wondering if anyone has investigated how well KVM scales when supporting many guests, many vcpus per guest, or both.

I'll do some investigation into the per-VM memory overhead and try bumping the max vcpu limit well beyond 16, but hopefully someone can comment on issues such as known locking problems that need to be addressed to increase parallelism, general overhead percentages that would help set consolidation expectations, etc.
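For reference, here is a minimal sketch of how I'd query the current per-guest vcpu limits through the KVM API (KVM_CHECK_EXTENSION with KVM_CAP_NR_VCPUS / KVM_CAP_MAX_VCPUS); I'm assuming KVM_CAP_MAX_VCPUS may be absent on older kernels, in which case the recommended value is the effective limit:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDONLY);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    /* Recommended number of vcpus per guest. */
    int nr = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);
    /* Hard maximum; may report 0 on kernels that predate this capability. */
    int max = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);

    printf("recommended vcpus per guest: %d\n", nr);
    printf("maximum vcpus per guest:     %d\n", max ? max : nr);
    return 0;
}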

Also, when I did a simple experiment with vcpu overcommitment, I was surprised how quickly performance suffered (just bringing a Linux VM up), since I would have assumed the additional vcpus would be halted the vast majority of the time. On a 2-processor box, overcommitting a guest to 8 vcpus (I know this isn't a realistic usage scenario, but it does provide some insight) increased boot time dramatically, and at 16 vcpus it took hours just to reach the GUI login prompt.

Any perspective you can offer would be appreciated.

Bruce

