Re: KVM and NUMA

Hi Daniel,

thanks for your response.

On Thursday, 15.07.2010 at 20:31 +0100, Daniel P. Berrange wrote:

> If numactl --hardware works, then libvirt should work,
> since libvirt uses the numactl library to query topology

Ok. I did not know that, and in my case libvirt does not seem to report
the NUMA topology. See below.

> The NUMA topology does not get put inside the <cpu> element. It 
> is one level up in a <topology> element. eg
> 
In my case (Ubuntu 10.04 LTS) there is no separate <topology> element;
topology information only appears inside the <cpu> element.
Full host listing:

<capabilities>

  <host>
    <cpu>
      <arch>x86_64</arch>
      <model>core2duo</model>
      <topology sockets='2' cores='4' threads='1'/>
      <feature name='lahf_lm'/>
      <feature name='rdtscp'/>
      <feature name='popcnt'/>
      <feature name='dca'/>
      <feature name='xtpr'/>
      <feature name='cx16'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
    </cpu>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <secmodel>
      <model>apparmor</model>
      <doi>0</doi>
    </secmodel>
  </host>
</capabilities>
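
So there is no NUMA <topology> element at the host level. For reference,
I assume the element you mean would look roughly like this; the two-cell
layout is just my guess for a box like mine:

  <topology>
    <cells num='2'>
      <cell id='0'>
        <cpus num='4'>
          <cpu id='0'/>
          <cpu id='1'/>
          <cpu id='2'/>
          <cpu id='3'/>
        </cpus>
      </cell>
      <cell id='1'>
        <cpus num='4'>
          <cpu id='4'/>
          <cpu id='5'/>
          <cpu id='6'/>
          <cpu id='7'/>
        </cpus>
      </cell>
    </cells>
  </topology>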

> > I guess this is because QEMU does not recognize the
> > NUMA architecture (QEMU monitor):
> > (qemu) info numa
> > 0 nodes
Thanks for the clarification.
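
(For the record, if I ever want the guest itself to see a NUMA topology,
I gather that is done with QEMU's -numa option, something along these
lines; the memory sizes and CPU lists are purely illustrative:

  qemu-system-x86_64 ... -numa node,mem=1024,cpus=0-1 -numa node,mem=1024,cpus=2-3

But placement is what I am after here, see below.)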

> There are two aspects to NUMA. 1. Placing QEMU on appropriate NUMA
> nodes. 2. Defining guest NUMA topology.
Right. I am interested in placing QEMU on the appropriate node.
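
(Outside of libvirt, I assume the same effect could be had by starting
QEMU under numactl, binding both the CPUs and the memory to one node;
node 0 here is just an example:

  numactl --cpunodebind=0 --membind=0 qemu-system-x86_64 ...
)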

> 
> By default QEMU will float freely across any CPUs and all the guest
> RAM will appear in one node. This can be bad for performance,
> especially if you are benchmarking.

> So for performance testing you definitely want to bind QEMU to the
> CPUs within a single NUMA node at startup; this will mean that all
> memory accesses are local to the node, unless you give the guest
> more virtual RAM than there is free RAM on the local NUMA node.
> Since you suggest you're using libvirt, the low level way to do
> this is in the guest XML at the <vcpu> element.
Ok. But will my QEMU instance actually use the node-local RAM, given
that it does not recognize the NUMA architecture?

> For further performance you also really want to enable hugepages on
> your host (eg mount hugetlbfs at /dev/hugepages), then restart the
> libvirtd daemon, and then add the following to your guest XML just
> after the <memory> element:
> 
>   <memoryBacking>
>     <hugepages/>
>   </memoryBacking>
I have played with that, too. I could mount the hugetlbfs filesystem and
define the mountpoint in libvirt. The guest started ok, but I could not
verify that the huge pages were actually used: /proc/meminfo always showed
100% free huge pages, whether the guest was running or not. Shouldn't
these pages be used while the guest is running?

As I said: Ubuntu, not RHEL.

Kind regards,

Ralf



