Re: Cores, Hyperthreads, and KVM

On Sat, Mar 17, 2018 at 11:17:53AM +1300, Richard Hector wrote:
> On 17/03/18 04:05, Stefan Hajnoczi wrote:
> > On Thu, Mar 15, 2018 at 09:47:35PM +1300, Richard Hector wrote:
> >> Apologies to those who saw this earlier on debian-user.
> >>
> >> I generally (currently) use KVM with libvirt on Debian Stretch.
> >>
> >> When I configure a KVM guest to have 2 vcpus, will that be 2 full cores?
> >> Or will it give the guest both threads on the same real core? Or might
> >> it use half of each of 2 different cores?
> >>
> >> I guess the same applies to physical CPUs, too - there's presumably an
> >> advantage in giving a VM a set of cores all on the same CPU, to take
> >> advantage of shared caching - is that dealt with automatically?
> >>
> >> I've always assumed that I should allocate even numbers of vcpus on an
> >> HT capable machine, so that it keeps the threads together.
> >>
> >> Does any of this matter?
> > 
> > Yes, performance is affected by vcpu placement and topology.
> > 
> > There are two things going on:
> > 
> > 1. vcpu topology.  This is virtual.  You decide how many sockets,
> >    cores, and hardware threads the guest sees.  On the host side they
> >    are just a bunch of threads in the QEMU process and the host Linux
> >    scheduler decides when and where they execute.
> > 
> > 2. vcpu placement (affinity).  This lets you control which host CPUs the
> >    vcpu threads run on.  You can force a vcpu to run on a specific host
> >    CPU or you can give it a set of host CPUs where the host Linux
> >    scheduler will run it.  The default is that vcpu threads are not
> >    bound to any specific host CPU and could run anywhere!
> > 
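> > For example (a minimal sketch; the counts are placeholders, not a
> > recommendation), the virtual topology is declared in the libvirt
> > domain XML along these lines:
> > 
> >     <vcpu placement='static'>4</vcpu>
> >     <cpu>
> >       <!-- guest sees 1 socket with 2 cores, 2 threads per core -->
> >       <topology sockets='1' cores='2' threads='2'/>
> >     </cpu>
> > 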
> > Nothing is dealt with automatically if you are directly using libvirt
> > (virsh, virt-manager, etc).  A popular configuration is to mirror the
> > host CPU topology in the guest and then pin vcpu threads 1:1 onto their
> > host CPUs.  This way the guest kernel can make proper scheduling
> > decisions.
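> > 
> > For instance (the host CPU numbers below are invented; they depend on
> > the machine), pinning 4 vcpus 1:1 onto host CPUs 0-3 looks like:
> > 
> >     <cputune>
> >       <vcpupin vcpu='0' cpuset='0'/>
> >       <vcpupin vcpu='1' cpuset='1'/>
> >       <vcpupin vcpu='2' cpuset='2'/>
> >       <vcpupin vcpu='3' cpuset='3'/>
> >     </cputune>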
> > 
> > At what point does pinning matter for performance?  If you have a small
> > VM with few vcpus it might not be important.  But if you want to get
> > consistent performance, especially on larger hosts, it is probably a
> > good idea to configure the vcpu topology and placement.
> 
> Thanks Stefan - I'll have to think about this a bit more :-)
> 
> What's the best place to read more about it?
> 
> BTW, doesn't the kernel try to maintain a process/thread's affinity to a
> CPU anyway? So the VM should benefit from that regardless?

The host kernel will try to keep placement efficient.  The guest kernel
scheduler only knows about the virtual topology the guest was configured
with, so unless that topology matches the real placement, it may not
make smart scheduling choices.

Stefan


